New computing metaphors for AI co-created tools
Requirements for a new post-app computing metaphor
See this earlier post for context:
There will be effectively infinitely many “apps”, so a lot of the current metaphors break, both for organizing tools and for organizing data.
Today’s means of building applications, and their distribution mechanisms, mean that developers solve for the lowest common denominator at a few sweet spots of audience size: the service has to remain simple enough to be widely usable, yet powerful enough to be useful.
With AI-created tools, that equation disappears and we can reach for other extremes:
Extremely simple UIs for a very specific, tailored task, e.g. very quickly writing messages to your spouse, maybe even two of them: one for logistics and one for sweetly saying “I'm thinking of you”. (This is also a good example of a tool that is co-created by two people and AI.)
Or powerful but complex UIs tailored to one specific workflow: not Photoshop, but a tool for a specific process of retouching food photography for high-end Japanese restaurants. That tool would have a very steep learning curve, but that doesn’t matter, as it’s just for exactly one user, who created it and keeps evolving it over time. (This already happens today in e.g. investment banking, where making a particular trader faster is worth assigning someone to improve just that one person’s tool. Imagine this, but for everyone.)
Many of these tools are collaborative, forming social spaces. Many aren’t just about efficiency, but about doing something that is meaningful to oneself or a group. Today these live in services, and often, informally, across a few services. This can now be inverted, and that forms a new class of entities to manage.
So we need a new way to manage these new kinds of tools and spaces and anything in between.
Natural language interaction with the system AI is one option, especially for interrogating and changing the tools, but it is far from the only one. Common tasks could still be buttons in a launcher UI. And external events, such as actions by collaborators or the physical environment a user walks into, will trigger experiences to start or resurface. Where screens are available, the platform part of the experience will not resemble a chatbot UI.
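To make this more concrete, here is a minimal sketch, with entirely hypothetical names, of how such a shell might route triggers (an utterance, a launcher button, a collaborator’s action, a change of physical context) to experiences rather than launching apps:

```ts
// A minimal sketch (all names are hypothetical) of a post-app shell that
// routes triggers to experiences instead of launching "apps".

type Trigger =
  | { kind: "utterance"; text: string }                      // natural language to the system AI
  | { kind: "shortcut"; toolId: string }                     // a pinned button in the launcher
  | { kind: "collaborator"; from: string; toolId: string }   // someone acted in a shared space
  | { kind: "context"; place: string };                      // e.g. the user walked into the kitchen

interface Experience {
  id: string;
  matches(trigger: Trigger): boolean;  // does this experience want to (re)surface for this trigger?
  surface(): void;                     // bring it to the foreground, in whatever form fits the device
}

function route(trigger: Trigger, experiences: Experience[]): void {
  for (const exp of experiences) {
    if (exp.matches(trigger)) exp.surface();
  }
}
```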
AI will be part of the lower layers of the platform, effectively experienced as part of the platform UI.
Tools are a mix of AI and more classic code (whether written by AIs or by humans). As they are introspectable by AI, they might contain specialized AIs that guide their generation and adaptation for specific domains or aspects: maybe one optimized for effective data visualizations, or one for a particular UI style the user likes. The system-level UI just helps bootstrap into this.
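One way to picture such an introspectable tool is as a bundle of intent, code, and embedded specialist AIs; the shape below is an assumption for illustration, not an existing API:

```ts
// A speculative sketch of an introspectable tool: alongside its code it carries
// the intent it was generated from and the specialist AIs that help evolve it.

interface SpecialistAI {
  domain: string;                                                  // e.g. "data-visualization" or "preferred-ui-style"
  advise(intent: string, currentCode: string): Promise<string>;    // proposes a revised version of the code
}

interface Tool {
  id: string;
  intent: string;              // the human-readable description the tool was created from
  code: string;                // classic code, whether written by AIs or by humans
  specialists: SpecialistAI[];
}

// Evolving a tool is a conversation between the user's request and the
// specialists embedded in the tool itself.
async function evolve(tool: Tool, request: string): Promise<Tool> {
  let code = tool.code;
  for (const specialist of tool.specialists) {
    code = await specialist.advise(`${tool.intent}\nChange requested: ${request}`, code);
  }
  return { ...tool, intent: `${tool.intent}\n${request}`, code };
}
```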
The system metaphors are organized much more around tasks, data and relationships, not apps and services. And managing those is of course an AI task as well, in many cases itself expressed as emergent tools.
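A possible, purely illustrative shape for these system-level entities: a small graph of tasks, data, people, spaces and tools, which questions resolve against instead of against a list of installed apps.

```ts
// Illustrative only: system-level entities organized around tasks, data and
// relationships rather than apps and services.

interface Entity {
  id: string;
  kind: "task" | "data" | "person" | "space" | "tool";
  name: string;
}

interface Relationship {
  from: string;   // entity id
  to: string;     // entity id
  label: string;  // e.g. "works-on", "shared-with", "derived-from"
}

// Resolve a question like "everything shared with Amy" against the graph.
function related(entities: Entity[], rels: Relationship[], fromId: string, label: string): Entity[] {
  const targets = new Set(rels.filter(r => r.from === fromId && r.label === label).map(r => r.to));
  return entities.filter(e => targets.has(e.id));
}
```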
Identity builds up across collaboration experiences: "This is the same Amy who shared the landscaping proposal yesterday", together with the ability to connect to identities as atoms. Traditional identity providers and future self-sovereign ones are more about global namespaces and recovery mechanisms. Profiles are interactive tools published by others, maybe with their preferred way of doing things:
People and companies can shape how they are experienced, e.g. via a default tool for how to buy from them, which a user's system AI can adapt to its user's needs and context.
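As an illustration, a published profile could bundle a stable identity with the publisher's preferred tools, which the recipient's system AI adapts before surfacing them; all names below are assumptions, not an existing standard:

```ts
// Speculative sketch: a profile published by a person or company, carrying the
// tools they prefer others to use when interacting with them.

interface PublishedTool {
  intent: string;   // e.g. "order a tasting menu pickup from us"
  code: string;     // the tool as shipped by the publisher
}

interface PublishedProfile {
  identity: string;             // a stable handle from an identity provider or a self-sovereign ID
  displayName: string;
  defaultTools: PublishedTool[];
}

interface SystemAI {
  // Reshape a publisher's tool to the user's needs and context,
  // e.g. accessibility preferences or an existing payment method.
  adapt(tool: PublishedTool, userContext: string): Promise<PublishedTool>;
}

async function adoptProfileTools(profile: PublishedProfile, ai: SystemAI, userContext: string): Promise<PublishedTool[]> {
  // The publisher shapes how they are experienced; the user's AI reshapes it for the user.
  return Promise.all(profile.defaultTools.map(t => ai.adapt(t, userContext)));
}
```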
Across all of that, users are in control of their data and how it is used. These tools work for users, which comes with a big shift in the power balance with companies, and of course with trusting your AIs. Traditional permissions are not only too burdensome, they make no sense for AI-created tools. We need new, effective ways to manage this; it will be data-centric (vs. service-centric) and involve both AI assistance and delegation to trusted parties. This then opens the door to wider participation in governance.
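One way such data-centric permissions could be modeled, as a rough sketch rather than an existing standard: grants attach to a piece of data and a purpose, and can be approved by the user, by their AI, or by a delegated trusted party.

```ts
// Rough sketch of data-centric (rather than service-centric) permissions.
// Everything here is an assumption about shape, not an existing standard.

interface DataGrant {
  dataId: string;                                            // what is shared
  audience: string;                                          // with whom: a person, group or tool
  purpose: string;                                           // for what, e.g. "schedule the landscaping work"
  expires?: Date;                                            // optional expiry
  approvedBy: "user" | "user-ai" | { delegatedTo: string };  // who decided this grant
}

// A tool asks the platform whether it may use a piece of data for a purpose;
// the check is about the data, not about which service the tool came from.
function mayUse(grants: DataGrant[], dataId: string, audience: string, purpose: string): boolean {
  const now = new Date();
  return grants.some(g =>
    g.dataId === dataId &&
    g.audience === audience &&
    g.purpose === purpose &&
    (!g.expires || g.expires > now)
  );
}
```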