AI alignment & experiencing AI through tools
Why the embodiment of AI matters, and the advantages of AI co-created tools
I must admit, the conversational agent approach, despite its popularity in modern discussions and sci-fi, leaves me feeling uneasy. In contrast, the tools approach resonates with me on a deeper level, and here are three key reasons why:
More genuine embodiment of "AI serves you": The tools approach puts the user in control, emphasizing that AI is a resource to assist and augment human capabilities. This contrasts with the agent approach, where a conversational, human-like agent mediates tasks. In the agent approach, the AI system may inadvertently take on a more dominant role, potentially undermining the notion that AI should be working for the user. By treating AI as a tool rather than an anthropomorphized agent, the focus remains on augmenting the human, thereby more genuinely embodying the "AI serves you" concept.
Natural alignment with promising AI safety approaches: The tools approach resonates well with the AI safety strategy of prioritizing learning processes over achieving specific outcomes. Building and refining tools is inherently focused on establishing and enhancing processes. In contrast, the agent approach may inadvertently foster reliance on AI for specific outcomes, while imposing processes on an increasingly intelligent-seeming, anthropomorphized entity could come across as overbearing. By adopting the tools approach, AI and human users can cultivate a more synergistic relationship, encouraging mutual growth and co-evolution.
Promoting a democratic ecosystem for safety governance: An ecosystem centered on building and refining tools invites collaboration, innovation, and the sharing of best practices. This open atmosphere empowers diverse stakeholders to contribute to AI safety and governance, fostering a more inclusive and resilient approach. In comparison, relying on agents produced by a single or a few companies can centralize control and decision-making, potentially limiting the range of perspectives. By embracing the tools approach, the public can work together to address safety concerns and create a more equitable and accountable AI landscape.
None of that should imply that AI co-created tools are less capable than AI agents; they just manifest differently.
Much of the discussion around ensuring safe and aligned artificial intelligence focuses on hypothetical advanced agents: superintelligent machines with human-level autonomy and general reasoning abilities. But the embodiment of AI matters greatly, and I wish it were a more commonly covered aspect of the discussion.
The notion of AI as a tool, even if the tool itself is created by an AI, gives us interesting affordances for how AI safety mechanisms manifest themselves:
Conceptualizing AI systems as "tools" rather than autonomous agents can help reinforce the idea that they should be carefully designed, regulated, and constrained for safe and ethical use. We have a lot of experience building constraints and oversight into the design of complex technologies and tools.
Many real-world tools that could cause harm if misused are regulated or require licenses and training. We could require "AI licenses" to develop or operate advanced AI systems, with mandatory safety and ethics education. Some AI researchers have proposed similar ideas.
Building constraints and shutdown mechanisms into AI tools seems more natural, and less likely to provoke objections, than trying to overly constrain a fictional "free-willed" AI agent. Some interpretations of AGI as having human-level autonomy and free will can actually be counterproductive. See, for example, all the calls to "free Sydney".
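To make that concrete, here is a minimal sketch of what "constraints and a shutdown mechanism built into the tool" could look like. Every name in it (KillSwitch, SafetyPolicy, SummarizerTool) is hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: a tool with a built-in policy check and kill switch.
# None of these classes refer to a real library; they only illustrate the pattern.

class KillSwitch:
    """An externally controllable shutdown mechanism baked into the tool."""
    def __init__(self):
        self.engaged = False

class SafetyPolicy:
    """Constraints fixed at tool-creation time."""
    def __init__(self, blocked_topics):
        self.blocked_topics = set(blocked_topics)

    def allows(self, request: str) -> bool:
        return not any(t in request.lower() for t in self.blocked_topics)

class SummarizerTool:
    def __init__(self, policy: SafetyPolicy, kill_switch: KillSwitch):
        self.policy, self.kill_switch = policy, kill_switch

    def run(self, request: str) -> str:
        if self.kill_switch.engaged:
            raise RuntimeError("Tool was shut down by its operator.")
        if not self.policy.allows(request):
            raise ValueError("Request violates the tool's built-in constraints.")
        return f"[summary of: {request}]"  # stand-in for the actual capability
```

The specifics don't matter; the point is that the checks live in the tool itself, rather than being a negotiation with a supposedly free-willed agent.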
Where those constraints come from is the next big question. There will be many sources, of course, but the tools affordance lends itself to some approaches:
The act of co-creating a tool with AI doesn't necessarily imply a single AI model assisting the user. It could be an ensemble of specialized AIs working together, some specializing in skills (e.g. getting the visual design right) while others specialize in domains. Some of these tool-creating AIs would be designed to comply with their industry's regulation, while some potential tools might simply fall outside the domain of any available tool-creation AI.
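As a rough illustration of that routing, here is a sketch under invented names (CreatorAI, REGISTRY, plan_tool_creation); nothing here reflects a real system:

```python
# Hypothetical sketch: routing a tool-creation request across an ensemble of
# specialized creator AIs, refusing when no compliant creator covers the domain.

from dataclasses import dataclass

@dataclass
class CreatorAI:
    name: str
    domains: set   # domains this creator is certified/regulated for
    skills: set    # cross-domain skills, e.g. {"visual-design"}

REGISTRY = [
    CreatorAI("medgen", domains={"healthcare"}, skills={"forms", "dashboards"}),
    CreatorAI("fingen", domains={"finance"}, skills={"dashboards"}),
    CreatorAI("uxgen", domains=set(), skills={"visual-design"}),  # skill specialist
]

def plan_tool_creation(domain: str, needed_skills: set):
    """Pick a domain-compliant lead creator plus skill specialists, or refuse."""
    compliant = [c for c in REGISTRY if domain in c.domains]
    if not compliant:
        # The requested tool falls outside every available creation AI's domain.
        raise LookupError(f"No compliant creator AI covers the {domain!r} domain.")
    lead = compliant[0]
    helpers = [c for c in REGISTRY if c.skills & needed_skills and c is not lead]
    return lead, helpers

lead, helpers = plan_tool_creation("healthcare", {"visual-design"})
# -> medgen as the compliant lead, with uxgen assisting on visual design
```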
Shared spaces, i.e. collaborative tools, naturally have to keep every participant's safety in mind. Each participant's system could bring in their corresponding concerns, and the platform would aim to ensure that all constraints are met. And while some participants might have very specific concerns, most people will adopt broadly shared principles that emerge across the ecosystem. This is an example of the shared governance mentioned elsewhere in the newsletter.
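A toy sketch of how such a shared space might combine everyone's constraints, with all names invented for illustration:

```python
# Hypothetical sketch: each participant's system contributes constraints,
# and the platform permits only actions that every constraint allows.

from typing import Callable

Constraint = Callable[[dict], bool]  # returns True if the action is acceptable

class SharedSpace:
    def __init__(self):
        self.constraints: list[tuple[str, Constraint]] = []

    def join(self, participant: str, constraints: list[Constraint]):
        self.constraints += [(participant, c) for c in constraints]

    def permit(self, action: dict) -> bool:
        # An action goes through only if no participant's constraint objects.
        return all(check(action) for _, check in self.constraints)

space = SharedSpace()
space.join("alice", [lambda a: not a.get("shares_location", False)])
space.join("bob",   [lambda a: a.get("audience") != "public"])
print(space.permit({"audience": "group"}))  # True: acceptable to both participants
```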
Similarly, artifacts created by tools can be signed with the constraints that were in place during their creation, and others receiving those artifacts can then automatically judge whether they deem them safe (a later post will play out how this could limit the negative impact of highly personalized, overly persuasive automatically generated messages). This in turn encourages both applying safety constraints by default in many tools and developing widely shared common expectations across the ecosystem.
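Here is one way the signing step could be sketched, using a plain HMAC as a stand-in for whatever signature scheme a real ecosystem would standardize on; the constraint labels are invented:

```python
# Hypothetical sketch: artifacts carry the constraints active at creation time,
# signed so recipients can verify them and apply their own acceptance policy.

import hmac, hashlib, json

CREATOR_KEY = b"demo-key"  # a real system would use asymmetric signatures

def sign_artifact(content: str, constraints: list) -> dict:
    payload = json.dumps({"content": content, "constraints": constraints},
                         sort_keys=True).encode()
    tag = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "constraints": constraints, "signature": tag}

def accept(artifact: dict, required: set) -> bool:
    """Recipient checks the signature, then its own safety expectations."""
    payload = json.dumps({"content": artifact["content"],
                          "constraints": artifact["constraints"]},
                         sort_keys=True).encode()
    expected = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, artifact["signature"]):
        return False  # tampered with, or not signed by a trusted creator
    return required <= set(artifact["constraints"])

msg = sign_artifact("hello", ["no-personalized-persuasion", "rate-limited"])
print(accept(msg, {"no-personalized-persuasion"}))  # True
```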
These are just my early thoughts on the topic. There are many gaps to explore in the future:
The difference between "AI co-created tools" and "agent" isn't as clear-cut as implied above. Arguably, the AI creating the tools feels quite agent-like. And many tools will themselves feel like a thin layer on top of a lot of agent-like automation. Breaking things down into thinner layers might help, but this needs more exploration.
Of course, there are many situations where an agent mediating between the user and the world is exactly what the user wants. What if these agents are themselves users of these kinds of AI co-created tools? What implications does that have for safety, interpretability, and so on?
Overly restrictive constraints on AI tools could limit their beneficial use; constraints need to avoid being so inhibiting that they render the tools essentially useless. We need to develop paths to establishing and evolving that balance.
No technology, no matter how constrained or regulated, is immune to misuse. While conceptualizing AIs as tools can help promote better design and intent, it does not, on its own, prevent a sufficiently motivated actor from attempting to misuse the technology. Broader safety practices around development and deployment will still be needed.
Please subscribe for future updates, and send along any feedback and thoughts you have.