Okay, so how about a pager with an AI in it?
In some ways the last few years have been inventive for consumer tech: smart watches, smart speakers, car dashboards, and smart glasses almost inevitably just over the horizon.
But take the watch: I feel like it suffers from being jammed through the same app-based interaction paradigm that was perfected on the smartphone, which itself was a simplified version of the desktop UI.
I don’t think we need to look far for alternate interaction paradigms.
- World Wide Web, as originally conceived, a hypertext of interlinking documents, with no homescreen and no prohibition against deeplinking – that was new.
- The command line (and then, in games, text adventures), with the ability to pipe data between commands – and hey, even Excel, as a canvas of data sources and user recombination, that’s novel.
- Dynamicland, a physical environment of smart light, human writeable code, and a social interaction with the computer.
All of those!
Yet… we’re stuck with apps?
I don’t know, it just feels like we might be missing a trick.
What if the goal was to come up with whole new ways of interacting?
You could play a game:
- Take an old product category. Say, the pager. A box that you wear on your belt that gives you three lines of text out of the air, and it can buzz.
- Give it superpowers. Fill this pager with a large language model so it can speak like a human, give it the internet, give it perfect dictation so it can understand me when I speak, and add a subvocal throat mic using Google’s radar-based Project Soli technology so it can detect my larynx from my waist – I don’t need to wear anything on my face, and I don’t even need to speak out loud. (Could that work? I just made it up.)
- And then imagine the interface.
I think that would take you somewhere new.
Like maybe such an interaction mode would slipstream directly into the internal monologue. You’d mumble to yourself “I think I’ll spend the next hour writing,” and a buzz at your waist would make you look at the screen: “No sorry, you have a call in 30 minutes.”
Or you would say, under your breath: “I’m heading into town now” – and the buzz would only come if it picked up that your usual train was delayed or whatever.
Or you would rapid-fire scan the headlines and triage your email, muttering quietly as you go through, with your phone by your side to pick up bigger tasks. What sophisticated data structures could you manipulate? If our primitives in the current world are basically apps, scrollable canvases, lists, form widgets, and tapping, and in the desktop world were windows, documents, and menus, what would they be for this?
I’ve been using dictation as an alternative to the keyboard on my iPad recently, and I feel like multimodal text+voice is under-explored right now, and more powerful than voice on its own.