On the architecture of AI co-created tools
Requirements for an architecture supporting malleable, AI co-created and safe tools
See this earlier post for context.
Safety- and privacy-first: These tools will have access to a lot of data and can perform potentially dangerous actions. Apply the principle of least privilege: start with a sandbox, isolate both what goes in and what comes out, and give the tool only carefully scoped capabilities, following strong safety and privacy principles. Most operations should have no external side effects and thus be easily undoable. This implies trusted runtimes that host these applications, both on-device and server-side with confidential compute.
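To make "least privilege" a bit more concrete, here is a minimal, hypothetical sketch in TypeScript; the capability shapes and the Sandbox API are illustrative, not an existing interface:

```ts
// Hypothetical sketch: a tool only gets the capabilities it was explicitly
// granted; anything else is rejected before it can leave the sandbox.
// All names here are illustrative, not an existing API.

type Capability =
  | { kind: "read"; scope: string }   // e.g. read access to a single document
  | { kind: "write"; scope: string }  // scoped writes with no external side effects
  | { kind: "net"; host: string };    // one allow-listed host, nothing else

class Sandbox {
  constructor(private granted: Capability[]) {}

  // Every effectful request is checked against the grant list.
  request(cap: Capability): void {
    const ok = this.granted.some(
      (g) => JSON.stringify(g) === JSON.stringify(cap)
    );
    if (!ok) throw new Error(`capability not granted: ${cap.kind}`);
  }
}

// Start with the narrowest grant that still lets the tool do its job.
const sandbox = new Sandbox([{ kind: "read", scope: "doc:travel-plan" }]);
sandbox.request({ kind: "read", scope: "doc:travel-plan" }); // ok
// sandbox.request({ kind: "net", host: "example.com" });    // throws
```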
Malleable: These tools will adapt to their environments, and their users will be able to change them, including, and especially, while they are running. But today's software is mostly hostile to customization at the architectural level. We need a new architecture that is not just flexible and robust but also designed to be AI-friendly; a new framework with AIs as primary developers:
Composable: The tools consist of a combination of AI and traditional code (which may itself be written by AI). One key difference from the way Langchain or ChatGPT plugins currently work is that the AI doesn't necessarily come into play at every step; instead, it connects the different components. The tool is stateful, and the AI that creates it can monitor its progress and make adjustments as needed. This lets us represent a much wider range of tools, of which the Langchain/ChatGPT plugin style is just one limited example.
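As a rough illustration, here is a hypothetical sketch where the steps are plain code and the wiring is what the AI produces and can later adjust; the component names and the Component interface are made up for this example:

```ts
// Hypothetical sketch of a composed tool: the AI picks and wires the
// components, but the steps themselves are ordinary, deterministic code.

interface Component<I, O> {
  name: string;
  run(input: I): Promise<O>;
}

// Two plain-code components: no model call happens inside either step.
const fetchCalendar: Component<{ week: string }, { events: string[] }> = {
  name: "fetchCalendar",
  run: async ({ week }) => ({ events: [`standup (${week})`] }),
};

const summarize: Component<{ events: string[] }, { summary: string }> = {
  name: "summarize",
  run: async ({ events }) => ({ summary: `${events.length} events` }),
};

// The AI's job is to produce (and later adjust) this wiring, not to sit in
// the middle of every step. The tool itself stays stateful and inspectable,
// so the authoring AI can monitor it and re-plan if needed.
async function composedTool(week: string) {
  const calendar = await fetchCalendar.run({ week });
  return summarize.run(calendar);
}
```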
Correctness: How these components are wired up can be checked for formal properties: not just type checks, but also correctness properties of distributed systems and, of course, safety and privacy properties. An interesting case is where the AI composes a flow whose steps are all symbolic, like ChatGPT calling Wolfram|Alpha, so that every intermediate result is still expressed in symbolic terms: there the system can make correctness claims backed by the correctness claims of its (weakest) components. Likewise, this would allow flagging which parts a user might want to verify themselves.
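A minimal sketch of the simplest version of this idea, using nothing more than TypeScript's type system to check the wiring (richer properties, such as privacy labels or distributed-systems invariants, would need richer checkers); the names are illustrative:

```ts
// Hypothetical sketch: the wiring itself is an artifact we can check.
// Here the check is just that the output of one step matches the input
// of the next, enforced at compile time.

interface Step<I, O> {
  run(input: I): O;
}

// `pipe` only type-checks when the steps actually fit together.
function pipe<A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> {
  return { run: (input) => second.run(first.run(input)) };
}

const parse: Step<string, number> = { run: (s) => Number(s) };
const double: Step<number, number> = { run: (n) => n * 2 };

const flow = pipe(parse, double); // compiles: string -> number -> number
flow.run("21");                   // 42
// const bad = pipe(double, parse); // rejected at compile time
```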
Collaborative: Social software and real-time collaboration tools are probably the most important class of tools; we spend most of our computing time in them! But the service-centric architecture, with its high onboarding friction for everyone in a group, will be in tension with AI-created tools. We need a new notion of flexible collaborative spaces, owned and controlled by their participants, who can add tools to them and co-evolve them.
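As a very rough, hypothetical data-model sketch of what such a participant-owned space might look like (every field here is illustrative, not a proposed format):

```ts
// Hypothetical sketch: a collaborative space is owned by its participants
// rather than by a service, and tools are artifacts added to the space
// that everyone present can use and evolve.

interface Participant {
  id: string;
  publicKey: string;
}

interface ToolArtifact {
  id: string;
  addedBy: string; // participant id
  source: string;  // the (AI co-created) tool definition
}

interface CollaborativeSpace {
  participants: Participant[]; // joining should be as light as sharing a link
  tools: ToolArtifact[];       // anyone in the space can add or change tools
  state: unknown[];            // shared, replicated state the tools operate on
}
```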
Governable: Permissions as we know them don't make sense for AI-generated tools. We need new ways for users to feel in control, and we can do a lot better than today's permission dialogs and consent bumps. AI-generated tools work for their users, and the architecture must support translating users' preferences and guardrails into compliant behavior.
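One hedged sketch of what "guardrails instead of permission dialogs" could look like: the user's preferences live in an explicit structure, and every action an AI-generated tool proposes is checked against it before it runs (all names and categories are illustrative):

```ts
// Hypothetical sketch: user preferences as explicit, machine-checkable
// guardrails rather than one-time permission grants.

interface Guardrails {
  maxSpendUsd: number;
  allowContacting: string[]; // people the tool may message on the user's behalf
  neverShare: string[];      // data categories that must not leave the device
}

type Action =
  | { kind: "spend"; amountUsd: number }
  | { kind: "message"; to: string }
  | { kind: "share"; category: string };

// Checked before any proposed action is executed.
function allowed(action: Action, g: Guardrails): boolean {
  switch (action.kind) {
    case "spend":   return action.amountUsd <= g.maxSpendUsd;
    case "message": return g.allowContacting.includes(action.to);
    case "share":   return !g.neverShare.includes(action.category);
  }
}

const mine: Guardrails = {
  maxSpendUsd: 20,
  allowContacting: ["alice"],
  neverShare: ["health"],
};
allowed({ kind: "spend", amountUsd: 5 }, mine);        // true
allowed({ kind: "share", category: "health" }, mine);  // false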
Transparent and verifiable: The entire stack has to be externally auditable to earn widespread trust. And what users create can be signed, not just by its human authors but also by the tools that helped create it, so that when a user receives something, their system can help them decide whether to trust it, and so on.
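A minimal sketch of that dual signing, assuming Node's built-in Ed25519 support; the artifact format and the trust rule at the end are illustrative, not a defined standard:

```ts
// Hypothetical sketch: an artifact carries signatures from both its human
// author and the tool that helped create it, so the receiving system can
// decide how much to trust it.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey: authorPub, privateKey: authorPriv } =
  generateKeyPairSync("ed25519");
const { publicKey: toolPub, privateKey: toolPriv } =
  generateKeyPairSync("ed25519");

const artifact = Buffer.from(JSON.stringify({ source: "the tool definition" }));

// Both parties sign the same bytes; the signatures travel with the artifact.
const signatures = {
  author: sign(null, artifact, authorPriv),
  tool: sign(null, artifact, toolPriv),
};

// On the receiving side, each signature can be checked independently,
// and the result feeds into the local trust decision.
const trusted =
  verify(null, artifact, authorPub, signatures.author) &&
  verify(null, artifact, toolPub, signatures.tool);
```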