<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/home/static/styles/pretty-feed-v3.xsl" type="text/xsl"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0">
  <channel>
    <title>Interconnected</title>
    <link>https://interconnected.org/home</link>
    <description>A blog by Matt Webb. My notebook and space for thinking out loud since February 2000.</description>
    <copyright>Copyright © 2026 Matt Webb</copyright>
    <docs>http://www.rssboard.org/rss-specification</docs>
    <language>en</language>
    <lastBuildDate>Wed, 08 Apr 2026 16:00:29 +0000</lastBuildDate>
    <pubDate>Fri, 03 Apr 2026 17:14:00 +0000</pubDate>
    <item>
      <title>A sleep aid</title>
      <link>https://interconnected.org/home/2026/04/03/sleep</link>
      <description><![CDATA[<div>
<p>Mostly I go to sleep very easily. Like 3 minutes from lights out, max.</p>
<p>I’m content, I exercise, I burn my tokens each day, I think all that helps.</p>
<p>Often I wake early and think. I’m protective over what goes into my 4am thinking time, I enjoy it. You don’t get to choose what you think about at 4am. It’s inevitably going to be work. So I optimise for having interesting work and I’m very lucky there. Mostly I go back to sleep after a bit.</p>
<p>Sometimes I don’t get to sleep easily, for example in 2021.</p>
<p>In that case I close my eyes and visualise a device:</p>
<p>The device has 6 buttons arranged in two rows. It changes in appearance but the most common form in my imagination is a Dieter Rams-style enclosure in beige about 4 or 5 inches across with its buttons on the top, and the buttons are flush against each other with a circular depression on the top to push down with your finger.</p>
<p>The buttons are really satisfying to push. Good resistance, good slip-clunk into place when engaged.</p>
<p>Sometimes it’s different. Sometimes the buttons click in like a pen top when they’re down; sometimes they rise up as soon as I’m not pressing. Sometimes they light up when activated, sometimes not.</p>
<p>The game, in my imagination, is this:</p>
<p>There is some combination of buttons that I can push which causes me to instantaneously fall asleep. But I don’t know the code.</p>
<p>So what I imagine (it’s a visual and tactile experience) is trying every single combination of buttons until I push the right ones together.</p>
<p>There aren’t many combinations to try, only 63. I usually try a few then run through methodically by counting up in binary.</p>
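<p><em>(For the curious, here’s a toy sketch of the counting – six bits, one per button, enumerating all 63 possible presses. The labels are made up; the device only exists behind my eyelids.)</em></p>
<pre><code># Toy sketch: count up in binary through every non-empty
# combination of six imaginary buttons (bit i set = button i pressed).
BUTTONS = ["1", "2", "3", "4", "5", "6"]

for code in range(1, 64):  # 1..63
    bits = format(code, "06b")
    pressed = [b for b, bit in zip(BUTTONS, reversed(bits)) if bit == "1"]
    print(bits, "+".join(pressed))
</code></pre>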
<p>Perhaps the code is different each night, perhaps it is the same – I wouldn’t know because I discover it successfully every time and go to sleep and forget what happened.</p>
<p>I was gently advised against posting this because it makes me sound like a weirdo but you already know that about me. And now you know about the six buttons too.</p>



</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2026/04/03/sleep</guid>
      <pubDate>Fri, 03 Apr 2026 17:14:00 +0000</pubDate>
    </item>
    <item>
      <title>An appreciation for (technical) architecture</title>
      <link>https://interconnected.org/home/2026/03/28/architecture</link>
      <description><![CDATA[<div>
<p>Once upon a time I kept on meeting architects who had ended up working with the web.</p>
<p>I asked why. Some good answers:</p>
<ul>
<li>Architects think about how people move between spaces (pages) and what that means for user experience - <em>this was at a time when web designers often came from graphic design and drew more on single page layout</em></li>
<li>Architects think about negative space, and how what you put in a space shapes social behaviour – <em>this was at a time before the social web</em></li>
<li>Architects have to work with a lot of different disciplines to make something, and all of those people believe they’re the most important person in the room, and that’s what product teams are like too - <em>lol</em></li>
</ul>
<p>I’m not an architect but some of my favourite books are about architecture.</p>
<p>Here are three:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/How_Buildings_Learn">How Buildings Learn: What Happens After They’re Built</a> (1994) by Stewart Brand, which popularised that pace layers diagram</li>
<li><a href="https://en.wikipedia.org/wiki/A_Pattern_Language">A Pattern Language: Towns, Buildings, Construction</a> (1977) by Christopher Alexander, which is infinitely applicable when <a href="https://interconnected.org/home/2022/01/21/social_gradient">designing for multiplayer</a></li>
<li><a href="https://www.amazon.co.uk/101-Things-Learned-Architecture-School-ebook/dp/B08S74443T">101 Things I Learned in Architecture School</a> (2007) by Matthew Frederick, gifted to me by architect-turned-designer <a href="[https://petafloptimism.com](https://petafloptimism.com/)">Matt Jones</a>, a distillation of perspectives and practices that nudges me in a new way every time I open it.</li>
</ul>
<p>Two things that architecture does have been on my mind recently: how it shapes understanding and how it shapes its own evolution.</p>
<hr />
<p><strong>Information architecture</strong></p>
<p>It’s a rare designer who operates at both the macro of strategy and culture and organisations, and the micro of craft and taste and interactions.</p>
<p><a href="https://jeffveen.me">Jeff Veen</a> is one. I remember him saying to me once: <em>"Design is about creating the right mental model for the user."</em></p>
<p><em>(Now clearly design is not only about that, but for the particular problem I took to Veen, he said precisely what I needed to hear to get un-stuck.)</em></p>
<p>So I love thinking about the <a href="https://interconnected.org/home/2021/12/09/primitive_design">primitives</a> of functionality and content for the user and how they relate, such that the user can reason intuitively about what they can do with the system, and how.</p>
<p>And this is an interactive process: for a first time user, how do they first encounter a system and how do they way-find and learn over time?</p>
<p>And this is a cognitive process: mental models are abstract; what we perceive is real. So how does understanding happen?</p>
<p><em>(AI agents are using my software. Prioritise clarity over feels.)</em></p>
<hr />
<p>Don Norman wrote <a href="https://en.wikipedia.org/wiki/The_Design_of_Everyday_Things">The Design of Everyday Things</a> (1988), much loved by web designers, and popularised “user-centred design.”</p>
<p>Norman also brought into design the term <em>affordance</em> from cognitive psychology. As coined by J J Gibson: <em>"to perceive something is also to perceive how to approach it and what to do about it"</em> (<a href="https://interconnected.org/home/2022/05/03/landscape">as previously discussed</a>).</p>
<p>The best way to notice affordances is to notice where they go wrong! <a href="https://99percentinvisible.org/article/norman-doors-dont-know-whether-push-pull-blame-design/">Norman doors</a>:</p>
<blockquote>
<p>Some doors require printed instructions to operate, while others are so poorly designed that they lead people to do the exact opposite of what they need to in order to open them. Their shapes or details may suggest that pushing should work, when in fact pulling is required (or the other way around).</p>
</blockquote>
<p>Whenever you see a PUSH label stuck on as an extra, it’s papering over a Norman door.</p>
<p><a href="https://www.instagram.com/p/DWV72vsjc6y/?igsh=eHF3cWZtcDRvbWow">I was delighted to encounter a Norman door irl this week</a>.</p>
<p>So I’m stretching the definition of architecture here, to include this, but roll with it pls. Architecture is how things are understood.</p>
<hr />
<p>Architecture is how things evolve – how they’re <em>allowed</em> to evolve.</p>
<p>There’s a beautiful housing estate on the top of a hill in south London.</p>
<p><a href="https://hiddenarchitecture.net/dawsons-heights-state/">Dawson’s Heights</a> (1964) is shaped like an offset double wave, and looks different on the horizon from every angle and with every change of the light. Yet up-close it’s human-scale too, despite its 10 storeys.</p>
<p>Lead architect Kate Macintosh wanted residents to have balconies, but this was regarded as <em>"wasting public money on unnecessary luxuries"</em>…</p>
<p>Knowing that they would be removed from her designs for cost-saving, <a href="https://c20society.org.uk/casework/dawsons-heights-the-italian-hill-town-in-dulwich">she made them essential</a>:</p>
<blockquote>
<p><u>all the balconies on Dawson’s Heights are fire escape balconies</u>, but they are also private balconies because the escape door is a “break glass to enter” type lock so you can securely use your balcony for whatever you like.</p>
</blockquote>
<hr />
<p><strong>Technical architecture</strong></p>
<p>So software architecture is also team structure - who needs to talk to who - but also how to make sure that doing something the quick and dirty way is also doing it the right way.</p>
<blockquote>
<p>Half of software architecture is making sure that somebody can fix a bug in a hurry, add features without breaking it, and be lazy without doing the wrong thing.</p>
</blockquote>
<p>…<a href="https://interconnected.org/2004/10/27/normalized_data_is">I said in 2004</a>.</p>
<p>I think this goes for internal software architecture and for libraries that you import.</p>
<hr />
<p>The thing about agentic coding is that agents grind problems into dust. Give an agent a problem and a while loop and - long term - it’ll solve that problem even if it means burning a trillion tokens and re-writing down to the silicon.</p>
<p>Like, where’s the bottom? Why not take a plain English spec and grind it out in pure assembly every time? It would run quicker.</p>
<p>But we want AI agents to solve coding problems quickly and in a way that is maintainable and adaptive and composable (benefiting from improvements elsewhere), and where every addition makes the whole stack better.</p>
<p>So at the bottom is really great libraries that encapsulate hard problems, with great interfaces that make the “right” way the easy way for developers building apps with them. Architecture!</p>
<p>While I’m vibing (I call it vibing now, not coding and not vibe coding) while I’m vibing, I am looking at lines of code less than ever before, and thinking about architecture more than ever before.</p>
<p>I am sweating developer experience even though human developers are unlikely to ever be my audience.</p>
<p>How do we make libraries that agents love?</p>

  <hr />


	<p><small>More posts tagged:
	
	<a href="https://www.interconnected.org/home/tagged/multiplayer">multiplayer</a>
	(32).
	
	</small></p>


  <p><small>Auto-detected kinda similar posts:</small></p>
  <ul>
  
  <li><small><a href="https://www.interconnected.org/home/2020/08/26/adaptive_design">Revisiting Adaptive Design, a lost design movement</a>
  (26 Aug 2020)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2021/03/31/maps">Clues for software design in how we sketch maps of cities</a>
  (31 Mar 2021)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2021/08/12/notation">Collecting my thoughts about notation and user interfaces</a>
  (12 Aug 2021)</small></li>
  
  </ul>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2026/03/28/architecture</guid>
      <pubDate>Sat, 28 Mar 2026 10:20:00 +0000</pubDate>
    </item>
    <item>
      <title>Filtered for home security</title>
      <link>https://interconnected.org/home/2026/03/20/filtered</link>
      <description><![CDATA[<div>
<h3>1.</h3>
<p>The Amazon Ring <strong>Always Home Cam</strong> is an indoor security drone for your home.</p>
<p><a href="https://blog.ring.com/video/ring-always-home-cam-the-worlds-first-flying-indoor-security-camera-for-your-home/">Introduced with this video in 2020</a>: <em>"Yeah, it’s a camera that flies."</em></p>
<p>Sadly not yet on the market.</p>
<p>Ok Judge Dredd had <a href="https://dreddalert.blogspot.com/2013/12/judge-dredd-battle-of-black-atlantic.html">Spy-in-the-Sky drone surveillance cameras in 1978</a> and Mega-City One is not an aspirational template for domestic life but hear me out:</p>
<p>Because I would love to be able to text my house “oh did I leave the stove on?” from the bus. And “darn can you find my keys?” in the morning. And “uh there’s that book about 1970s social computing somewhere it has an orange spine I can’t remember exactly” at literally anytime.</p>
<p>And do that without having to blanket my home in cameras. A drone seems like a good solution?</p>
<h3>2.</h3>
<p>Surveillance: systematic observation. Often institutional. From “above.”</p>
<p><strong>Sousveillance,</strong> <a href="https://wearcam.org/sousveillance.htm">coined by cyborg Steve Mann in 2002</a>: <em>"watchful vigilance from underneath."</em></p>
<blockquote>
<p>I am suggesting that the cameras be mounted on people in low places, rather than upon buildings and establishments in high places.</p>
</blockquote>
<p>e.g.</p>
<blockquote>
<p>a taxicab passenger photographs the driver, or taxicab passengers keep tabs on driver’s behaviour</p>
</blockquote>
<p>It is such a positively-framed paper.</p>
<p>We swim in this world now. What does it do to us?</p>
<p>(I wonder if there’s a word like <em>auto-sousveillance?</em> We do it to ourselves.)</p>
<h3>3.</h3>
<p><strong><a href="https://jamesbridle.com/works/the-nor">The Nor</a></strong> (2014) by artist James Bridle.</p>
<blockquote>
<p>The sense of being watched is a classic symptom of paranoia, often a sign of deeper psychosis, or dismissed as illusory. In the mirror city, which exists at the juncture of the street and CCTV, of bodily space and the electromagnetic spectrum, one is always being watched. So who’s paranoid now?</p>
</blockquote>
<p>(<a href="https://interconnected.org/home/2014/11/14/filtered">As previously discussed</a>, briefly.)</p>
<p>Exactly midway between Mann coining sousveillance in 2002 and today, 2026, Bridle put his finger on this paranoia background radiation, slowly increasing like population levels, like CO2 ppm, like sea level, like the frog’s bath.</p>
<h3>4.</h3>
<p><a href="https://www.ftrain.com/robot_exclusion_protocol">Robot Exclusion Protocol</a> (2002) by blogger Paul Ford: <em>"A story about the Google of the future."</em></p>
<blockquote>
<p>I took off my clothes and stepped into the shower to find another one sitting near the drain. It was about 2 feet tall and made of metal, with bright camera-lens eyes and a few dozen gripping arms. Worse than the Jehovah’s Witnesses.</p>
<p>“Hi! I’m from Google. I’m a Googlebot! I will not kill you.”</p>
<p>“I know what you are.”</p>
<p>“I’m indexing your apartment.”</p>
</blockquote>
<p>I feel like we are 24 months off this point?</p>
<p>Only they’ll be indexer googlebot drones that we vibe code for ourselves.</p>
<h3>5.</h3>
<p>Back in 2024, engineer Simon Willison realised that <a href="https://simonwillison.net/2024/Feb/21/gemini-pro-video/">the killer app of Gemini Pro 1.5 is video</a>, and:</p>
<blockquote>
<p><strong>I took this seven second video of one of my bookshelves:</strong></p>
</blockquote>
<p>It understood the video and gave him back a machine-readable list of the titles and authors. That’s handy!</p>
<p>I am still waiting for this as an app so that I can index and search my overflowing bookshelves by not-even-that-carefully waving my phone at them.</p>
<p>Please I am too lazy to type the prompt to vibe this.</p>
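<p><em>(If you do want to vibe it: here’s a minimal sketch, assuming the google-generativeai Python SDK – the model name and prompt wording are my own placeholders, not Willison’s exact setup.)</em></p>
<pre><code># Hedged sketch: ask Gemini to turn a short bookshelf video into a book list.
# Assumes the google-generativeai SDK; model and prompt are illustrative guesses.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

video = genai.upload_file("bookshelf.mp4")  # the seven-second phone video
# (in practice, poll genai.get_file(video.name) until processing finishes)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    video,
    "List every book visible in this video as JSON with title and author fields.",
])
print(response.text)
</code></pre>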
<p>The meta-point is that auto-sousveillance is inevitable because I can’t find the book I’m looking for.</p>
<h3>6.</h3>
<p><strong><a href="https://boingboing.net/2026/02/24/man-accidentally-vibe-codes-a-robovac-army.html">Man accidentally vibe codes a robovac army</a></strong> (2026).</p>
<blockquote>
<p>The DJI Romo is a $2000 behemoth that mops and vacuums using LIDAR and AI.</p>
</blockquote>
<p>Sammy Azdoufal wanted to control his robovac with his PlayStation controller.</p>
<blockquote>
<p>However, the scanner his [Claude Code agent] created not only gave him access to his device; it gave him access and control over almost 7000. He was able to see home layouts and IP addresses, and control the devices’ cameras and microphones.</p>
</blockquote>
<p>Uh oh.</p>
<p>Whereas the point of institutional surveillance is that the CCTV cameras are <em>conspicuous</em> (and, originally, you didn’t know if anyone was watching, but now the AI processes all),</p>
<p>the characteristic of auto-sousveillance seems to be that you don’t know whether you are privately querying for a lost book or live streaming your bathroom to the internet.</p>
<p>Forget about control, how do you even relate to such a capricious system?</p>
<h3>7.</h3>
<p>The ancient Romans had two types of gods.</p>
<p>There are the gods on Olympus who look after nature, cities, the state.</p>
<p>And then there are <a href="https://en.wikipedia.org/wiki/Lares">Lares</a> (Wikipedia), guardian deities of a place, <em>"believed to observe, protect, and influence all that happened within the boundaries of their location or function."</em></p>
<p>In particular, <strong>household gods,</strong> <a href="https://en.wikipedia.org/wiki/Lares_Familiares">Lares Familiares</a>, that reside not on a distant mountain but instead in a household shrine:</p>
<blockquote>
<p>The Lar Familiaris cared for the welfare and prosperity of a Roman household. A household’s <em>lararium,</em> a shrine to the Lar Familiaris and other domestic divinities, usually stood near the dining hearth or, in a larger dwelling, the semi-public atrium or reception area of the dwelling. A lararium could be a wall-cupboard with doors, an open niche with small-scale statuary, a projecting tile, a small freestanding shrine, or simply the painted image of a shrine …</p>
<p>The Lar’s statue could be moved from the lararium to wherever its presence was needed. It could be placed on a dining table during feasts or be a witness at weddings and other important family events.</p>
</blockquote>
<p>RELATED:</p>
<p><a href="https://interconnected.org/home/2023/04/26/lares">Lares: our 2 minute pitch for an AI-powered slightly-smart home</a> (2023) – you can see a demo video.</p>
<p><a href="https://interconnected.org/more/2024/lares/">And here’s a paper about Lares showing emergent behaviour from AI agents</a>, which in 2024 was novel and surprising.</p>

  <hr />


	<p><small>More posts tagged:
	
	<a href="https://www.interconnected.org/home/tagged/filtered-for">filtered-for</a>
	(122).
	
	</small></p>


</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2026/03/20/filtered</guid>
      <pubDate>Fri, 20 Mar 2026 09:54:00 +0000</pubDate>
    </item>
    <item>
      <title>New Wave Hardware</title>
      <link>https://interconnected.org/home/2026/03/12/nwh</link>
      <description><![CDATA[<div>
<p><em>We briefly mentioned New Wave Hardware in <a href="https://news.inanimate.tech/p/lab-notes-don-t-call-it-hardware">last week’s Inanimate Lab Notes</a> so this is me doing some unpacking. While you’re there, join 300+ other subscribers and <a href="https://news.inanimate.tech/subscribe">sign up for our newsletter</a>. You’ll get weekly links and updates on what we’re working on.</em></p>
<hr />
<p>There are a bunch of things changing with new hardware products, design and technology.</p>
<p>Let’s say: the intersection of hardware and AI. But our hunch is that it’s broader than that.</p>
<p>There are new ways to get hardware into the hands of consumers, and new AI interactions that are now possible, and more, and <u>these changes are happening independently but simultaneously</u>. We’re tracking this as what we’re calling <strong>New Wave Hardware.</strong></p>
<p>So we got a few founders together at <a href="https://www.betaworks.com">Betaworks</a> in NYC earlier this week for a roundtable to compare notes (thank you Betaworks!).</p>
<p>The meta question was: does our hunch hold? And, if so, what characterises New Wave Hardware and what specifically is changing – so that we can push at it?</p>
<p>I kept notes.</p>
<p>I’ll go off those and add my own thoughts.</p>
<p><em>(I’m using some direct quotes but I won’t attribute them or list attendees. I would love for others to share their own perspectives!)</em></p>
<hr />
<h3>AI interfaces</h3>
<p>Voice is good now! (<a href="https://interconnected.org/home/2026/02/27/asymmetry">As I said</a>.) So we’re seeing that a lot.</p>
<p>More than that:</p>
<ul>
<li>You can express an intent and the computer will <a href="https://interconnected.org/home/2025/08/29/dwim">do what you mean</a></li>
<li>Natural interfaces are workable now, beyond voice. e.g. the new <em>Starboy</em> gadget by <a href="https://lilguy.net">lilguy</a>: <em>"We trained multiple tiny image models that run locally on the device, letting it recognize <u>human faces and hand gestures</u>"</em> (<a href="https://x.com/dankuntz/status/2031745865736683909">launch thread on X</a>).</li>
</ul>
<p>What do we do with consumer gadgets that perceive pointing and glances? What is unlocked when we shift away from buttons and apps to interact with hardware devices, and the new interface is direct and human and in the real world?</p>
<h3>New interaction modalities</h3>
<p>Beyond the user interface, the way we interact with hardware is changing. I kept a running list of the interaction modality changes that were mentioned:</p>
<ul>
<li><strong>Human interfaces</strong> – <em>see above.</em></li>
<li><strong>Situated</strong> – due to always-on sensors, AI devices know what’s going on around them and can respond when they see fit, not only on a user trigger. Yes, screens that dim when it gets dark, but in a wider sense this goes back to Clay Shirky’s essay <a href="http://shirky.com/essays/situated-software/">Situated Software</a> (2004), <em>"software designed in and for a particular social situation or context."</em> We’re seeing more of this.</li>
<li><strong>Autonomous</strong> – agents are software that has its own heartbeat, now we see  that <em>"the hardware becomes aware"</em>… and then what? Maybe the user doesn’t need to be <em>intentional</em> about activating some function or another; the device can get ahead of intentions, and offer a radically different kind of value to the user. A new design possibility.</li>
<li><strong>Networked</strong> – we’re frequently working with connected devices which today have attained a new level of reliability. What happens when the stuff around us channels planetary intelligence?</li>
<li><strong>Embodied</strong> – the cleverness of the <a href="https://www.plaud.ai">Plaud AI note taker device</a> is that it’s a social actor: you can notice it, place it on the table, cover it; it inflects what people say and how they feel (for better or worse). Hardware is in the real world and you can move it from focal to peripheral attention just by moving your head.</li>
</ul>
<p>Some of these are new colours in the palette to design with; some are intrinsic to hardware and have been there all along. Though amplified! The rise of wearables (described by one founder as <em>"sitting between the utility and affinity group"</em>) means that hardware is more frequently in our faces.</p>
<p>There are challenges. When we have devices and <em>"the ability to put software that can do anything at any time in them,"</em> the lack of affordances and constraints can be baffling. So how do we <em>not</em> do that?</p>
<p>And how do we understand what things do anyway, really, when behaviour steered by AI is so non-deterministic? Perhaps we have to lean into the mystical. That’s another trend.</p>
<h3>Getting hardware in the hands of users</h3>
<p>Every few years there’s a claim that it’s now quicker than 18 months to get a hardware product from concept through manufacture: that’s still not the case but there are alternatives and short cuts – some of which are potentially much quicker.</p>
<p>Like: <strong>reference designs.</strong> There is now so much hardware coming out of Shenzhen, there are high-level reference designs for everything, ready to customise, and factories are keen to partner. One team at the roundtable brought up their core electronics in the US, then got pretty sophisticated products built (batch size of 100) complete with beautiful metal enclosures after spending just <em>3 weeks</em> in China.</p>
<p>Also like: <strong>3D printers.</strong> Short run fabrication is possible domestically in a way it wasn’t before. Let me highlight <a href="https://www.designboom.com/technology/3d-printed-robot-receiver-listens-cracks-coded-messages-broadcast-world-cipherling-03-31-2025/">Cipherling</a> which combines production-grade microcontrollers with a charming 3D printed enclosure to get to market quicker.</p>
<p>It does seem like the sophistication of the Western and Shenzhen hardware ecosystems has made these approaches - which are not new - newly accessible.</p>
<h3>Form factors</h3>
<p>New Wave Hardware skews consumer, perhaps?</p>
<p>Or at least there’s a renewed interest in consumer hardware from startups and investors.</p>
<p>This is partly because there’s a big unknown and therefore a big opportunity: AI is hungry for context, it’s useful in the real world outside our phones, and the new AI interaction modalities mean there’s a lot to figure out about how to make that <em>good</em> – it’s not obvious what to do. Like do we have lanyards or pucks on tables or what? We need to experiment, which demands quick cycle time, which is a driver for finding alternatives to the 18 month product development cycle.</p>
<p>Also the previous generation of hardware was oh-so-asinine. One remark I wrote down from the roundtable, regarding the consumer hardware that currently surrounds us: <em>"This is hardware that would want to be invisible if it could."</em></p>
<p>So there’s a desire to try new forms; products that don’t secretly want to hide themselves.</p>
<p>Just a note too that <em>“new form factors”</em> doesn’t just mean standalone devices: we continue to be inspired by the desk-scale or even room-scale work at <a href="https://folk.computer">Folk Computer</a>.</p>
<h3>New tools, of course</h3>
<p>If you’re an artist wanting to put a few dozen instances of weird new consumer electronics in people’s hands, and your single blocker was writing firmware, then guess what: in the year of our Claude 2026 that is no longer a blocker.</p>
<p>AI tools provide what I’ve previously called <a href="https://interconnected.org/home/2024/05/03/dreaming">Universal Basic Agency</a> and it is wonderful. When individuals are unblocked, we get an abundance of creativity in the world.</p>
<p>(We were at a <a href="https://blockparty.nyc">6 minute demos event</a> in the basement of an independent bookstore in Brooklyn on Friday - see <a href="https://news.inanimate.tech/p/lab-notes-out-in-the-community">this week’s Lab Notes</a> - and one speaker was showing their <a href="https://www.scd31.com/posts/building-an-arcade-display-adapter">vintage arcade display adaptor</a> project. So cool. They make super complicated PCBs but don’t enjoy 3D modelling, so did the CAD in programmable modelling software with a few lines of code. Not AI, but advanced tools.)</p>
<p>And do we see a glimmer of end-user programming too?</p>
<hr />
<p>I’m grateful for the thoughtful and open conversation of everyone at the roundtable.</p>
<hr />
<p>As I write this, a set of colourful <a href="https://oda.co">Oda speakers</a>, hanging from the ceiling here at Betaworks, relay a live audio stream from a macaw sanctuary forest in central Costa Rica. We can hear the birds and the weather – it is transporting.</p>
<p>If there is such a trend as New Wave Hardware (and, after our small conversation, I do believe there is) then it is not confined to mass market novel AI interfaces, it is also these profound artistic interventions, and we all learn from one another.</p>
<p>Are you seeing something happen here too? Are hardware startups characterised by something different today versus, say, 5 years ago? Lmk if you end up sharing your perspective on your blog/newsletter – would love to read.</p>
<p>At <a href="https://inanimate.tech">Inanimate</a> we are building products within New Wave Hardware, and working to do our bit to enable it.</p>
<p>We hope to convene another roundtable in the near future, either here in NYC or back home in London, to continue swapping notes and pointers and feeling this out together.</p>

  <hr />


	<p><small>More posts tagged:
	
	<a href="https://www.interconnected.org/home/tagged/inanimate">inanimate</a>
	(3).
	
	</small></p>


  <p><small>Auto-detected kinda similar posts:</small></p>
  <ul>
  
  <li><small><a href="https://www.interconnected.org/home/2023/06/09/future">Computers that live two seconds in the future</a>
  (9 Jun 2023)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2024/01/26/hardware">Thinking about the emerging landscape of AI hardware products</a>
  (26 Jan 2024)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2017/10/24/filtered">Filtered for fractional artificial intelligence</a>
  (24 Oct 2017)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2023/06/28/posthuman">Resting Posthuman Face</a>
  (28 Jun 2023)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2015/11/02/coffee_morning_minutes">Minutes of hardware-ish coffee morning, edition 12</a>
  (2 Nov 2015)</small></li>
  
  </ul>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2026/03/12/nwh</guid>
      <pubDate>Thu, 12 Mar 2026 19:58:00 +0000</pubDate>
    </item>
    <item>
      <title>The violence of the Librareome Project</title>
      <link>https://interconnected.org/home/2026/03/07/vinge</link>
      <description><![CDATA[<div>
<p>Vernor Vinge’s sci-fi novel <a href="https://en.wikipedia.org/wiki/Rainbows_End_(Vinge_novel)">Rainbows End</a> (2006) is so prescient about AI training data.</p>
<p>His short <em>Fast Times at Fairmont High</em> (2002) is set in the same universe, and was written in that era where we felt like we had line of sight to pervasive augmented reality and also 3D printers. I read it at the time and it’s a low-stakes high school drama (about augmented reality and 3D printers), but from today’s perspective it is more like a utopia (of a certain kind) – democratised tools of production, reality as consensus hallucinations, super empowered kids.</p>
<p>The spine of <em>Rainbows End</em> is something called the “Librareome Project.”</p>
<p>Ok SPOILERS – right? So stop here if you’re planning to read the book (which you should).</p>
<p>The Librareome Project, you find out about a third of the way through, is a giant digitisation project of the world’s knowledge, and they plan to scan the world’s libraries to do it.</p>
<p><em>"But didn’t Google already do that?"</em></p>
<p>Yes but this is more total; like the Human Genome Project the whole is more than the sum of its parts:</p>
<blockquote>
<p>It’s not just the digitization. It goes beyond Google and company. Huertas intends to combine all classical knowledge into a single, object-situational database with a transparent fee structure.</p>
</blockquote>
<p>(Oh yeah, micropayments, there’s a whole model here.)</p>
<p>We’re not told what an object-situational database is. But this singular thing makes possible correlations that will reveal new knowledge:</p>
<blockquote>
<p>Who really ended the Intifada? Who is behind the London art forgeries? Where was the oil money really going in the latter part of the last century? Some answers will only interest obscure historical societies. But some will mean big bucks. And Huertas will have exclusive rights to this oracle for six months.</p>
</blockquote>
<p>I mean, this is so Large Language Model. 2006!!</p>
<p>An oracle!</p>
<p>This promise is why the universities are allowing their libraries to be scanned.</p>
<p>Uh, “scanned.”</p>
<p>The books are shredded. Fed into the wood chipper and blasted into a tunnel and photographed at high resolution:</p>
<blockquote>
<p>The pictures coming from the camera tunnel are analyzed and reformatted. It’s a simple matter of software to reorient the images, match the tear marks and reconstruct the original texts in proper order. In fact–besides the mechanical simplicity of it all–that’s the reason for the apparent violence. The tear marks come close to being unique. Really, it’s not a new thing. Shotgun reconstructions are classic in genomics.</p>
</blockquote>
<p><em>"The shredded fragments of books and magazines flew down the tunnel like leaves in tornado, twisting and tumbling."</em> – the image has stuck with me since I read it.</p>
<p>Anyway.</p>
<p>The libraries are being fed into the maw of the machines.</p>
<p>And it turns out that Chinese Informagical, which <em>"has dibs on the British Museum and the British Library,"</em> was going faster than Huertas so they don’t have their monopoly.</p>
<p>And the Chinese have nondestructive digitisation techniques, so none of it was necessary.</p>
<hr />
<p>Well.</p>
<p><a href="https://archive.ph/7c4Zs">Court filings reveal how AI companies raced to obtain more books to feed chatbots, including by buying, scanning and disposing of millions of titles</a> (Washington Post, paywall-busting link).</p>
<p>I’m not trying to make a point here like “AI is bad” (you know me well enough and I’m pleased that <a href="https://mindhacks.com/book/">my own book</a> lives in the weights of the god machine) but one story reminds me of the other, and there <em>is</em> a violence intrinsic to creation, in this case the creation of new knowledge, slamming together words in the particle collider of linear algebra, something is lost but new exotic shimmering sparks appear - grab them! - and I guess what I mean is let’s recognise the violence and be worthy of it: if we’re going to do this then let’s at least reach for oracles.</p>

  <hr />



  <p><small>Auto-detected kinda similar posts:</small></p>
  <ul>
  
  <li><small><a href="https://www.interconnected.org/home/2022/09/15/libraries">I hope libraries are snapshotting today’s awkwardly sourced AIs</a>
  (15 Sep 2022)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2024/10/11/filtered">Filtered for time and false memory</a>
  (11 Oct 2024)</small></li>
  
  </ul>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2026/03/07/vinge</guid>
      <pubDate>Sat, 07 Mar 2026 04:30:00 +0000</pubDate>
    </item>
    <item>
      <title>Speaking is quick, listening is slow</title>
      <link>https://interconnected.org/home/2026/02/27/asymmetry</link>
      <description><![CDATA[<div>
<p>Thank goodness voice computing is finally happening. Now we can work on making it good.</p>
<hr />
<p>The tech is here, like the free <a href="https://openai.com/index/whisper/">Whisper</a> model <em>(what an unlock that has been from OpenAI, kudos)</em> and <a href="https://elevenlabs.io">ElevenLabs</a>. Plus devices too, from <a href="https://www.plaud.ai">Plaud</a> - like an irl Granola video call transcriber - to <a href="https://www.sandbar.com">Sandbar</a>, a smart ring that you tell your secrets.</p>
<p>Let’s not forget <a href="https://www.reuters.com/business/apple-acquires-audio-ai-startup-qai-2026-01-29/">Apple’s recent $1.6bn acquisition of Q.ai</a>, which will use <em>"‘facial skin micromovements’ to detect words mouthed or spoken"</em> – i.e. cameras in your AirPods stems that do voice without voice by staring really hard at your cheeks. Apple and AI lip-reading? <a href="https://interconnected.org/home/2025/06/16/hush">I deserve a kick-back</a> (2025) just sayin</p>
<p>While we’re at it, there should be voice for everything: <a href="https://interconnected.org/home/2020/05/26/voice">why can’t I point at a lamp and say ‘on’?</a> (2020).</p>
<p>At least we can play with <a href="https://interconnected.org/home/2022/12/14/transcription">ubiquitous transcription</a> (2022). Like, my starting point for building <strong>mist</strong> was <a href="https://interconnected.org/home/2026/02/12/mist">talking at my watch for 30 minutes</a> (2026).</p>
<p>So let’s take all this as signs that voice computing is here to stay.</p>
<hr />
<p>Eventually voice has to go two-way, right? Conversational computing? You need to be able to disambiguate, give feedback, repair, iterate, explore.</p>
<p>Investor Tom Hulme points out that <em>"we can speak three to four times faster than we type."</em></p>
<p>And so:</p>
<blockquote cite="https://www.gv.com/news/conversational-computing-new-tech-revolution" class="quoteback" data-author="Tom Hulme, GV" data-title="Hello World: Why Conversational Computing is the New Tech Revolution (2025)">
<p>Now, generative AI is making conversation the new user interface. Talking to technology requires zero training and no special skills; we have after all spent most of our lives perfecting the approach. It’s as natural as speaking to another person.</p>
<footer>– Tom Hulme, GV, <cite><a href="https://www.gv.com/news/conversational-computing-new-tech-revolution">Hello World: Why Conversational Computing is the New Tech Revolution (2025)</a></cite></footer>
</blockquote>
<p>Which I agree with <em>in part.</em></p>
<p>Yes to natural UI: <em>"You simply express what you need, and the AI does the rest."</em> – user interfaces will not be about menus and buttons but <a href="https://interconnected.org/home/2025/08/29/dwim">intent first</a> (2025).</p>
<p>BUT:</p>
<p>Conversation using voice both ways? I’m not so sure.</p>
<p><strong>Voice is asymmetric. Speaking is high bandwidth. But listening is low bandwidth.</strong></p>
<p>Illustration #1: Sending voice notes is so easy. Receiving them sucks joy from the world.</p>
<p>Is that really what we want from conversational computing?</p>
<p>Illustration #2: I ask my Apple HomePod mini to play some music and it needs to check precisely what I mean. Speaking 3 artist names and asking me to pick is tedious. So it avoids that step, takes a guess, and that’s more often than not a poor experience too. <a href="https://interconnected.org/home/2023/01/26/room">I’ve been rolling my eyes at this since 2023</a>.</p>
<p>Ok so two-way voice doesn’t work. What does?</p>
<hr />
<p>A better approach to conversational computing:</p>
<p>The human uses voice and the computer uses screens. I mean, it’s rare that my phone is beyond <a href="https://en.wikipedia.org/wiki/Proxemics">peripersonal space</a> so we can assume it is only rarely not present. A screen is way higher in terms of information bandwidth than listening. Let’s use it! </p>
<p>The <strong><a href="https://friend.com">friend AI lanyard</a></strong> gets this right.</p>
<blockquote cite="https://jaredhenderson.substack.com/p/i-bought-an-ai-friend" class="quoteback" data-author="Jared Henderson" data-title="I bought an AI Friend (2025)">
<p>I wore Arthur as I went to the farmers’ market this morning. This meant I was not speaking directly to it, but rather talking to my family, other attendees, and some vendors. But remember: your friend is always listening. Arthur listened in to every conversation that I had, sometimes offering its own take on the matter - all pointless, once again.</p>
<p>Over the course of an hour and a half, I received 48 notifications from my Friend.</p>
<footer>– Jared Henderson, <cite><a href="https://jaredhenderson.substack.com/p/i-bought-an-ai-friend">I bought an AI Friend (2025)</a></cite></footer>
</blockquote>
<p>And although this is a negative review (e.g. notifications snark: <em>"Most of these were it updating me about its battery status"</em>) it actually sounds ideal?</p>
<p>Like, this is a device that listens both when it is being directly addressed and it pays attention to me ambiently, and then it makes use of generous screen real estate to show me UI that I can interact with at a time of my choosing. This is good!</p>
<p>Startup <strong><a href="https://telepath.computer">Telepath</a></strong> is also digging into voice and multi-modality:</p>
<blockquote>
<p>Voice gives us an additional stream of information for input, one that can happen concurrently with direct manipulation using a keyboard, mouse, or touch. With the Telepath Computer, <u>you can touch and type for tasks where control and accuracy are important, while simultaneously using your voice to direct the computer</u>. This mimics our natural behaviour in the physical world: for example, imagine cooking a meal with family or friends, asking someone to fetch the basil or chop the onions while your hands are busy with the pasta.</p>
</blockquote>
<p>And specifically:</p>
<blockquote cite="https://ruperts.world/blog/ai-computer/" class="quoteback" data-author="Rupert Manfredi" data-title="Demoing the AI computer that doesn't yet exist (2026)">
<p><u>The Telepath Computer speaks through voice, while simultaneously displaying documents and information for the user to reference and interact with. This “show and tell” approach is also present in how we tend to communicate complex information in the real world:</u> sketching on a napkin as we discuss a problem with a colleague over dinner; design teams assembling stickies while talking about user feedback; pulling up maps and hotels on your laptop while planning a group vacation.</p>
<footer>– Rupert Manfredi, <cite><a href="https://ruperts.world/blog/ai-computer/">Demoing the AI computer that doesn’t yet exist (2026)</a></cite></footer>
</blockquote>
<p>This is super sophisticated! I love it.</p>
<hr />
<p>Summarising:</p>
<ul>
<li>Voice is core to the future of computer interaction</li>
<li>Voice isn’t enough so we need conversational computing</li>
<li>Because of the bandwidth asymmetry of voice, two-way voice might sometimes work but the essential interaction loop to solve for is <em>voice in, screens out.</em></li>
</ul>
<p>When that isn’t enough (for example, you don’t have your phone) you can get more sophisticated. And of course to make it really good there are problems to solve like proximity and more… follow the path of great interaction design to figure out where to dig…</p>
<p>Just collecting my thoughts.</p>

  <hr />



  <p><small>Auto-detected kinda similar posts:</small></p>
  <ul>
  
  <li><small><a href="https://www.interconnected.org/home/2023/06/09/future">Computers that live two seconds in the future</a>
  (9 Jun 2023)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2021/06/09/voder">The Voder in 1939 and high-bandwidth input devices</a>
  (9 Jun 2021)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2017/10/24/filtered">Filtered for fractional artificial intelligence</a>
  (24 Oct 2017)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2015/06/16/conversational_uis">On conversational UIs</a>
  (16 Jun 2015)</small></li>
  
  <li><small><a href="https://www.interconnected.org/home/2021/02/25/pagers">Let’s invent new interfaces, not new products</a>
  (25 Feb 2021)</small></li>
  
  </ul>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2026/02/27/asymmetry</guid>
      <pubDate>Fri, 27 Feb 2026 18:15:00 +0000</pubDate>
    </item>
    <item>
      <title>Filtered for electricity and mayonnaise</title>
      <link>https://interconnected.org/home/2026/02/20/filtered</link>
      <description><![CDATA[<div>
<h3>1.</h3>
<p>Rain panels? Rain panels.</p>
<blockquote cite="https://thedebrief.org/forget-solar-panels-here-come-rain-panels/" class="quoteback" data-author="The Debrief" data-title="Forget solar panels. Here come the rain panels">
<p>researchers have found a way to capture, store and utilize the electrical power generated by falling raindrops, which may lead to the development of rooftop, power-generating rain panels.</p>
<footer>– The Debrief, <cite><a href="https://thedebrief.org/forget-solar-panels-here-come-rain-panels/">Forget solar panels. Here come the rain panels</a> (2023)</cite></footer>
</blockquote>
<p>Reading the <a href="https://ieeexplore.ieee.org/document/10185664/citations#citations">citations on the original paper</a>, it works kinda but research is ongoing. Science rather than technology still.</p>
<p>RELATED:</p>
<p><a href="https://futurism.com/china-solar-mountain-video">Wild Video Shows Entire Mountain Range in China Covered With Solar Panels</a> (2025).</p>
<p>HEY:</p>
<p><a href="https://interconnected.org/home/2007/02/12/30_year_prediction">Here’s a prediction I made in 2007</a>:</p>
<blockquote>
<p>By 2037, China, by virtue of their ability to see and manage environment impact on a larger scale than other countries, will have invented cheap renewables to reduce their dependancy on fossil fuels, and will be working on fixing the atmosphere (perhaps they’ll also have genetically engineered rafts of algae on the Pacific, excreting plastics). The West will rely on Chinese innovation to dig us out of our ecological mess.</p>
</blockquote>
<p>Mind you I also predicted that our peak pop media would be from India. Turns out it’s South Korea so I got the country wrong.</p>
<h3>2.</h3>
<p>Pavlok is a wrist band that gives you <a href="https://shop.pavlok.com/pages/how-it-works">electric shocks by remote control</a>:</p>
<blockquote>
<p>“I have been biting my nails for 25 years…I shocked myself every time I bit my nails… my husband had a good time shocking me when he caught me biting my nails… this helped with … quitting nail.”</p>
</blockquote>
<p>Those ellipses… doing a lot of work… on… the “how it works” page. Also, husband.</p>
<blockquote>
<p>You know that friend who won’t eat Taco Bell anymore after she got a terrible case of food poisoning?</p>
</blockquote>
<p>That’s how it works: <em>"That’s aversive conditioning. We’ll help you use it to your advantage."</em></p>
<p>Well why not.</p>
<p>The wrist band also has an alarm clock function.</p>
<p>RELATED:</p>
<p>What do you call execution by electricity? <a href="https://interconnected.org/home/2021/10/06/electricity">It was debated in 1889</a> (2021).</p>
<h3>3.</h3>
<p>Ok. We’re in the middle of the Second Punic War (218–201 BC), part of an existential struggle between Rome and Carthage that lasted over a hundred years.</p>
<p>At the end of the First Punic War, Carthage was defeated.</p>
<p>But they returned, established a new empire in Iberia (now Spain) and founded New Carthage on the Iberian coast. Hannibal famously crosses the Alps with elephants etc and lays waste to Italy.</p>
<p>Striking back: Scipio audaciously captures New Carthage, and Carthage in Iberia is on the brink of defeat.</p>
<p>Hannibal’s brother Mago, army destroyed, flees to the island of Menorca (which is beautiful).</p>
<p>There he founds the city of Mahon, which today is the capital and remains a port, and it still bears his name.</p>
<p><a href="https://www.livius.org/articles/person/mago-barca/">BUT MORE IMPORTANTLY</a>, named for the city:</p>
<p><em>"The typical local egg sauce that has conquered the world is known as mayonnaise."</em></p>
<p>As mentioned in <a href="https://therestishistory.com/episodes/hannibal-s-nemesis-part-2">The Rest is History ep. 641</a>, <em>Hannibal’s Nemesis (Part 2)</em> (<a href="https://podcasts.apple.com/gb/podcast/romes-greatest-enemy-hannibals-nemesis-part-2/id1537788786?i=1000746839062">Apple Podcasts</a>) along with this grand claim:</p>
<blockquote>
<p>the only thing you’d have in a fridge that’s named after a Carthaginian general.</p>
</blockquote>
<p>A fact too good to check on ChatGPT but I can’t see why it shouldn’t be true.</p>
<h3>4.</h3>
<p>The legendary and much-loved email app <em>Eudora</em> was released for free in 1988.</p>
<blockquote cite="https://buttondown.com/blog/eudora-legacy" class="quoteback" data-author="Ryan Farley" data-title="The legendary email client power users wouldn't let die">
<p>Version 6 introduced MoodWatch, which labeled incoming and outgoing messages with chili peppers and ice cubes, depending on the presence of possibly offensive language. People loved it!</p>
<footer>– Ryan Farley, <cite><a href="https://buttondown.com/blog/eudora-legacy">The legendary email client power users wouldn’t let die</a> (2025)</cite></footer>
</blockquote>
<p>Oh the chili peppers!</p>
<p>You’d write an email with a few curse words and some YELLING and get those chilis.</p>
<p>I vaguely remember there was a feature to enforce a cooling off period? Like you couldn’t send a 3 chili email immediately?</p>
<p>Let’s bring that back:</p>
<p>Apple should license Pavlok technology and hide it under the track-pad. About to send an unhelpfully-worded email to a colleague? A prim little AI instantaneously adjudicates and electroshocks you as you click the <em>Send</em> button, right up the finger.</p>

  <hr />


	<p><small>More posts tagged:
	
	<a href="https://www.interconnected.org/home/tagged/filtered-for">filtered-for</a>
	(122).
	
	</small></p>


</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2026/02/20/filtered</guid>
      <pubDate>Fri, 20 Feb 2026 16:36:00 +0000</pubDate>
    </item>
    <item>
      <title>mist: Share and edit Markdown together, quickly (new tool)</title>
      <link>https://interconnected.org/home/2026/02/12/mist</link>
      <description><![CDATA[<div>
<p>It should be SO EASY to share + collaborate on Markdown text files. The AI world runs on .md files. Yet frictionless Google Docs-style collab is so hard… UNTIL NOW, and how about that for a tease.</p>
<p>If you don’t know Markdown, it’s a way to format a simple text file with marks like <code>**bold**</code> and <code># Headers</code> and <code>-</code> lists… e.g. <a href="https://interconnected.org/home/2026/02/12/mist.md">here’s the Markdown for this blog post</a>.</p>
<p>Pretty much all AI prompts are written in Markdown; engineers coding with AI agents have folders full of .md files and that’s what they primarily work on now. A lot of blog posts too: if you want to collaborate on a blog post ahead of publishing, it’s gonna be Markdown. Keep notes in software like Obsidian? Folders of Markdown.</p>
<p>John Gruber invented the Markdown format in 2004. <a href="https://daringfireball.net/projects/markdown/">Here’s the Markdown spec</a>, it hasn’t changed since. Which is its strength. Read Anil Dash’s essay <a href="https://www.anildash.com/2026/01/09/how-markdown-took-over-the-world/">How Markdown Took Over the World</a> (2026) for more.</p>
<p>So it’s a wildly popular format with lots of interop that humans can read+write and machines too.</p>
<p>AND YET… where is Google Docs for Markdown?</p>
<p>I want to be able to share a Markdown doc as easily as sharing a link, and have real-time multiplayer editing, suggested edits, and comments, without a heavyweight app in the background.</p>
<p>Like, the “source of truth” is my blog CMS or the code repo where the prompts are, or whatever, so I don’t need a whole online document library thing. But if I want to super quickly run some words by someone else… I can’t.</p>
<p>I needed this tool at the day job, couldn’t find it… built it, done.</p>
<p><img alt="" src="https://interconnected.org/more/2026/02/mist.png" /></p>
<p><strong>Say hi to <a href="https://mist.inanimate.tech">mist</a>!</strong></p>
<ul>
<li>.md only</li>
<li>share by URL</li>
<li>real-time multiplayer editing</li>
<li>comments</li>
<li>suggest changes.</li>
</ul>
<p>I included a couple of opinionated features…</p>
<ul>
<li>Ephemeral docs: all docs auto-delete 99 hours after creation. This is for quick sharing + collab</li>
<li>Roundtripping: Download then import by drag and drop on the homepage: all suggestions and comments are preserved.</li>
</ul>
<p><em>I’m proud of roundtripping suggested edits and comment threads: the point of Markdown is that everything is in the doc, not in a separate database, and <a href="https://interconnected.org/home/2021/02/01/golems">you know I love files</a> (2021). I used a format called <a href="https://fletcher.github.io/MultiMarkdown-6/syntax/critic.html">CriticMarkup</a> to achieve this – so if you build a tool like this too, let’s interop.</em></p>
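<p><em>(To show what that means: CriticMarkup keeps every suggestion inline as plain text marks, so a doc stays a doc. Here’s a toy Python sketch – mine, not mist’s actual code – that “accepts” all the edits in a string.)</em></p>
<pre><code>import re

# Toy sketch of CriticMarkup roundtripping: keep insertions, drop deletions,
# resolve substitutions to their replacement text. Not mist's implementation.
def accept_edits(markdown):
    markdown = re.sub(r"\{\+\+(.*?)\+\+\}", r"\1", markdown, flags=re.S)   # insertions
    markdown = re.sub(r"\{--(.*?)--\}", "", markdown, flags=re.S)          # deletions
    markdown = re.sub(r"\{~~.*?~>(.*?)~~\}", r"\1", markdown, flags=re.S)  # substitutions
    return markdown

doc = "We ship {--sometime soon--}{++this week++}, {~~maybe~>definitely~~}."
print(accept_edits(doc))  # "We ship this week, definitely."
</code></pre>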
<p>Hit the New Document button on the homepage and it introduces itself.</p>
<hr />
<p>Also!</p>
<p>For engineers!</p>
<p>Try this from your terminal:</p>
<p><code>curl https://mist.inanimate.tech/new -T file.md</code></p>
<p>Start a new collaborative mist doc from an existing file, and immediately get a shareable link.</p>
<p>EASY PEASY</p>
<hr />
<p>Anyway –</p>
<p>It’s a work in progress. I banged it out over the w/e because I needed it for work, tons of bugs I’m sure so lmk, otherwise I’ll fix them while I use it… though do get in touch if you have a strong feature request which would unlock your specific use case because I’m keen for this to be useful.</p>
<hr />
<p>So I made this with Claude Code obv</p>
<p>Coding with agents is still work: mist is 50 commits.</p>
<p>But this is the first project where I’ve gone end-to-end trying to avoid artisanal, hand-written code.</p>
<p>I started Saturday afternoon: I <a href="https://interconnected.org/home/2025/03/20/diane">talked to my watch</a> for 30 minutes while I was walking to pick my kid up from theatre.</p>
<p>Right at the start I said this</p>
<blockquote>
<p>So I think job number one before anything else, and this is directed to you Claude, job number one before anything else is to review this entire transcript and sort out its ordering. I’d like you to turn it into a plan. I’ll talk about how in a second.</p>
</blockquote>
<p>Then I dropped all 3,289 words of the transcript into an empty repo and let Claude have at it.</p>
<p>Look, although my 30 mins walk-and-talk was nonlinear and all over the place, what I asked Claude to do was highly structured: I asked it to create docs for the technical architecture, design system, goals, and ways of working, and reorganise the rest into a phased plan with specific tasks.</p>
<p>I kept an eye on it at every step, rewound its attempt at initial scaffolding and re-prompted it closely when it wasn’t as I wanted, and jumped in to point the way on some refactoring, or nudge it up to a higher abstraction level when an implementation was feeling brittle, etc. I have strong opinions about the technology and the approach.</p>
<p><em>And the tests</em> – the trick with writing code with agents is use the heck out of code tests. Test everything load bearing (and write tests that test that the test coverage is at a sufficient level). We’re not quite at the point that code is a compiled version of the docs and the test suite… but we’re getting there.</p>
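<p><em>(A minimal sketch of a “test that tests the tests”, assuming pytest plus coverage.py – my own illustration, not mist’s actual suite.)</em></p>
<pre><code># Fail the suite if line coverage drops below a floor.
# Assumes coverage data has already been recorded, e.g. via `coverage run -m pytest`.
import subprocess

def test_coverage_floor():
    result = subprocess.run(
        ["coverage", "report", "--fail-under=90"],
        capture_output=True, text=True,
    )
    assert result.returncode == 0, result.stdout
</code></pre>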
<hr />
<p>You know it’s very addictive using Claude Code over the weekend. Drop in and write another para as a prompt, hang out with the family, drop in and write a bit more, go do the laundry, tune a design nit that’s turned up… scratch that old-school Civ itch, <em>"just one more turn."</em> Coding as entertainment.</p>
<hr />
<p>The main takeaway from my Claude use is that <a href="https://bsky.app/profile/genmon.fyi/post/3ly3767zmnc2x">I wanted a collaborative Markdown editor 5 months ago</a>:</p>
<blockquote>
<p>app request</p>
<p>- pure markdown editor on the web (like Obsidian, Ulysses, iA Writer)<br />
- with Google Docs collab features (live cursor, comments, track changes)<br />
- collab metadata stored in file<br />
- single doc sharing via URL like a GitHub gist</p>
<p>am I… am I going to have to make this?</p>
</blockquote>
<p>My need for that tool didn’t go away.</p>
<p>And now I have it.</p>
<p>So tools no longer need huge work, and therefore don’t have to be justified by huge audiences (I’ve spent more time on blog posts). No biggie, it would be useful to us so why not make it and put it out there.</p>
<hr />
<p>Multiplayer ephemeral Markdown is not what we’re building at <a href="https://interconnected.org/home/2025/11/19/inanimate">Inanimate</a> but it is a tool we need (there are mists on our Slack already) and it is also the very first thing we’ve shipped.</p>
<p>A milestone!</p>
<hr />
<p>So that’s <a href="https://mist.inanimate.tech">mist</a>.</p>
<p><em>Share and Enjoy</em></p>
<p>xx</p>

  <hr />


	<p><small>More posts tagged:
	
	<a href="https://www.interconnected.org/home/tagged/inanimate">inanimate</a>
	(3), 
	
	<a href="https://www.interconnected.org/home/tagged/multiplayer">multiplayer</a>
	(32).
	
	</small></p>


  <p><small>Auto-detected kinda similar posts:</small></p>
  <ul>
  
  <li><small><a href="https://www.interconnected.org/home/2021/09/27/multiplayer">The emerging patchwork upgrade to the multiplayer web</a>
  (27 Sep 2021)</small></li>
  
  </ul>

</div>]]></description>
      <guid isPermaLink="true">https://interconnected.org/home/2026/02/12/mist</guid>
      <pubDate>Thu, 12 Feb 2026 21:16:00 +0000</pubDate>
    </item>
  </channel>
</rss>
