See this earlier post for context:
Even though AIs will be able to write increasingly complex applications, users will expect more than dynamic tools formed out of some generic goo, and feedback loops that go beyond aggregated signals.
Our tools are part of our culture, and humans will find a way to still play a very active and direct role in shaping them. We'll collectively seek to shape each other's experiences, whether for higher goals, to pursue economic opportunities, or to play status games.
AIs might create a first draft, and increasingly often that will be good enough, but we'll be able to improve on it: set the high-level requirements, propose new approaches, or improve the craft by refining the UI or even the code. And for quite a while, we'll also just fill in gaps where the AI can't do it by itself.
Those improvements can be shared. And when users start new tasks, they'll start from those existing tools and maybe modify them rather than starting from scratch, if for no other reason than familiarity.
And those improvements aren't just about efficiency, but also about emotion. After all, application designers spend a lot of time getting that right, even for the simplest tools. And how the experience feels becomes even more important when multiple parties are involved, be it an AI-powered storefront that stays on brand or a social space that fosters meaningful interactions among its participants.
So we'll still publish tools, but instead of monolithic tools, they will be AI-controllable tools: we'll publish recipes that describe how they are built, in as much or as little detail as makes sense; we'll publish reusable components, specific UIs, broadly applicable themes, and so on.
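To make that a little more concrete, here is a minimal sketch of what such a published recipe could look like. The schema, field names, and component ids are purely illustrative assumptions, not an established format.

```ts
// Hypothetical shape of a published "recipe": a declarative description that an
// AI could expand into a working tool, pinned down in as much or as little
// detail as the author chose. All names here are illustrative.
interface Recipe {
  name: string;
  intent: string;              // high-level requirements, in plain language
  components?: ComponentRef[]; // reusable building blocks the tool should use
  theme?: string;              // a broadly applicable theme, referenced by id
  constraints?: string[];      // guardrails the assembled tool must respect
}

interface ComponentRef {
  id: string;      // e.g. "payments/settle-up@2.0"
  pinned: boolean; // true: use exactly this component; false: the AI may substitute
}

// Example: a recipe for a small expense-splitting tool.
const splitExpenses: Recipe = {
  name: "split-expenses",
  intent: "Let a group of friends track shared expenses and settle up monthly.",
  components: [{ id: "payments/settle-up@2.0", pinned: true }],
  theme: "calm-minimal",
  constraints: ["no data leaves the group's shared store"],
};
```

The point is less the exact shape than the division of labor: the author decides which parts are pinned and which are left for the AI (or the next user) to fill in.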
There's an inversion of "this is a feature, not a product": assembled products will be abundant, but great features, specifically crafted for certain problems in specific domains, won't be, and that will be one new economic opportunity. These could be fine-tuned image generation models, a set of finance components that are trusted to comply with local regulation, or maybe an especially beautifully crafted showcase experience that a shop subsidizes on its potential customers' behalf.
Trust will be a new scarcity, and (re)building trusted institutions will hence be a key part of the vision.
Governance will be largely a function of the ecosystem, and trustworthiness another dimension of differentiation for participants. There will be new roles like verifiers, guardrail authors, and community organizers, often overlapping with other creator roles; some non-profit and some for-profit.
Automatically enforceable guardrails will govern everything from privacy (e.g. the role of privacy-preserving aggregation) to many aspects of AI safety (e.g. constraints on which mass-customized AI messages get through to the user) and other kinds of safety (e.g. that expiring messages can be flagged for abuse, and then don't disappear).
Such guardrails will both come from broadly trusted entities and emerge bottom-up from interactions. Emergent behavior will stem from the system's ability to set guardrails on data, to verify which guardrails were in place when incoming data was generated, and from the implicit negotiation this fosters.
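As a rough illustration of that verification step, here is a minimal sketch in which data travels with a record of the guardrails it was generated under, and a recipient checks that record against its own requirements. The types, the envelope fields, and the subset check are assumptions for illustration, not a defined protocol; signature verification is deliberately elided.

```ts
// Hypothetical envelope: data carries the guardrails that were in force when it
// was generated, plus an attestation from whoever enforced them.
interface GuardrailedData<T> {
  payload: T;
  guardrails: string[]; // ids of guardrails in force at generation time
  attestedBy: string;   // entity vouching that the guardrails were enforced
  signature: string;    // over payload + guardrails; verification elided here
}

// A recipient accepts incoming data only if its own required guardrails are a
// subset of those the data was generated under.
function meetsRequirements<T>(
  incoming: GuardrailedData<T>,
  required: string[],
): boolean {
  return required.every((g) => incoming.guardrails.includes(g));
}

// Example: a social space that requires abuse-flagging even for expiring messages.
const message: GuardrailedData<string> = {
  payload: "this disappears in 24h",
  guardrails: ["expiring-messages-flaggable", "no-engagement-optimization"],
  attestedBy: "community-verifier.example",
  signature: "placeholder", // a real system would verify this cryptographically
};

console.log(meetsRequirements(message, ["expiring-messages-flaggable"])); // true
```

The implicit negotiation falls out of this: if your data doesn't carry the guardrails a community requires, it doesn't get in, which pushes producers toward adopting them.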
All of that plays a key role in shifting the power dynamics to users and their communities. And of course in aligning AI with users and society at large. (We should note here that this is mostly an attempt to deal with nearer-term problems like influence campaigns and optimizing for engagement goals that aren't in the users' interests; it's so far unclear how much this helps with the risk of a runaway superintelligent AGI.)