Reframing Agents
Jan 28, 2026
I am not confident that “Agents” are the future of AI software.
We have a long history of mislabelling new technology. When something unfamiliar emerges, we reach for familiar metaphors to make sense of it. Using Agents as the dominant metaphor for AI follows this pattern.
Agents offer a low-friction way to introduce AI into existing workflows and organisations. They allow us to replicate human roles, behaviours, and computer use in a way that is intuitive to us. In that sense, Agents have been an effective transitional form.
That being said, AI does not need to be confined to our systems or our way of doing things. AI does not need to click buttons, open files, move cursors, or navigate tabs to retrieve information. It does not need explicit language to reason, nor does it need to be divided into discrete units or organised as teams to execute tasks. AI is not an entity, a form factor, or an interface. It is a capability.
My concern is that framing AI as Agents constrains how we think about what AI is, what it can do, and how it can manifest in software.
Agents have become so ubiquitous that we have started conflating AI with Agents. Treating Agents as the default expression of AI reduces AI to a single form factor and interaction model, narrowing the design space of the products we build and use cases we enable.
An Agent is a way to package AI, not AI itself. This packaging is part UX and part mental model. It provides us with the framework (and associated constraints) to operate in.
When we treat AI as Agents, we force its use into skeuomorphic patterns borrowed from human interaction. Many of these patterns are inefficient. Human coordination requires context sharing, instruction, and constant alignment. It is often where complexity accumulates and ideas are compromised. Language introduces abstraction, ambiguity, and personal filters that distort intent and meaning through layers of interpretation. Human coordination is a problem to be solved, not a model to be replicated.
By anthropomorphising AI into Agents, we build systems that inherit our human inefficiencies by design. As agentic systems scale, the human-modelled constraints we impose on them scale along with them.
From my own use of AI, I find myself wanting to explain less and command more. The best software enables me to execute faster, with little to no external coordination or dependencies. I want software to structure my work, surface the right capabilities, and guide me through well-designed interfaces. This is what software already does well, and what AI has the potential to augment significantly.
In their current state, Agents are often the opposite of what I look for in software. They are devoid of structure, require long paragraphs to prompt, and place the burden on the user to figure out what to do and how to ask for it. They increase overhead by simulating headcount. Agents can help explore and work on solutions, but are not the solutions themselves. Perhaps this is just semantics, but I have yet to see many products labelled as “Agents” that meaningfully differ in practice.
I see Agents as a stepping stone in our use of AI: from humans using software (before AI), to humans using AI to operate software (where we are now with Agents), and eventually to software itself using AI to serve humans (where we might go next).
While Agents allow us to scale capacity by multiplying ourselves, leveraging AI to make our software more intelligent would enable us to scale our capacity on our own.
I am more interested in embedding AI in software than in intermediating our interactions with it.
The vision I have for AI software is better framed as “Intelligent Interfaces” than as “Agents”. It is a shift from delegation to augmentation, and from assistance to disintermediation.
Intelligent Interfaces are a different way to package AI capabilities. They are a more responsive, proactive, and personal form of software. Unlike with Agents, AI is not embodied or treated as a separate entity. AI becomes ambient, invisible, and integrated into the system.
In this model, the interface becomes the environment where context is captured and user experience is rendered. Intelligent Interfaces adapt to user input and behaviour, surface relevant information and actions, and compress workflows proactively.
Intelligent Interfaces are not bound to human-to-human interaction patterns. They communicate primarily through components that render controls, content, and actions, acting as prompts for both the user and the system. Components can be encoded, prompted by the user, or generated by the system.
Users can engage with Intelligent Interfaces in the modality of their choice (e.g. text, voice, visuals, code, etc.). Once a workflow is initiated, the interface assists inline: suggesting actions, fetching assets, providing insights, formatting content, and completing tasks as the user progresses. Components emerge throughout, making the user faster and better at every step. The interface itself is a multi-modal "autocomplete".
This is where AI capabilities manifest most prominently: transforming software from a layer of interaction into a system of execution.
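To make this more concrete, here is a minimal sketch of what the component model could look like in code. All of the names and types below (SuggestedComponent, InterfaceContext, nextComponents) are illustrative assumptions, not an existing API: the point is that components carry both the content to render and the next action to take, and that the system proposes them inline from context rather than waiting for a prompt.

```typescript
// Illustrative sketch of an Intelligent Interface's "autocomplete" loop.
// Every name here is hypothetical.

type Modality = "text" | "voice" | "visual" | "code";

// A component doubles as UI and as a prompt: it carries the content to
// render and the action the user (or system) can take next.
interface SuggestedComponent {
  kind: "control" | "content" | "action";
  label: string;                 // what the user sees
  payload: unknown;              // asset, insight, or formatted content
  origin: "encoded" | "user-prompted" | "system-generated";
}

// A snapshot of what the user is doing, captured ambiently by the
// interface rather than explained by the user.
interface InterfaceContext {
  activeWorkflow: string;
  recentInputs: { modality: Modality; value: string }[];
  userPreferences: Record<string, string>;
}

// The multi-modal "autocomplete" step: given the current context,
// propose the next components inline as the user progresses.
async function nextComponents(
  ctx: InterfaceContext,
  suggest: (ctx: InterfaceContext) => Promise<SuggestedComponent[]> // model call, hidden from the user
): Promise<SuggestedComponent[]> {
  const candidates = await suggest(ctx);
  // Surface only components that have something to show for the active workflow.
  return candidates.filter(c => c.label.length > 0);
}
```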
The difference between embedding AI in software and packaging AI as Agents is ultimately about how we relate to intelligence. One is visible and anthropomorphic. The other is invisible and infrastructural. It is a subtle distinction, but an important one.
Agents encourage us to treat AI as something we work with. Intelligent Interfaces work for us by shortcutting our process automatically. One implies calling an intermediary; the other keeps users in control.
Intelligent Interfaces can technically leverage AI in the same way as Agents, but present it differently. Models are called, routed, and orchestrated in the back-end, not exposed to the user as Agents to manage. From my perspective, “Agent activity” should not affect our software front-ends. Interfaces are designed for human experience, not for AI. Models should interact at the infrastructure layer, not the interface. This would be more efficient, and it would avoid strange non-human UI/UX patterns.
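As a rough sketch of that separation (again with hypothetical names), the front-end could expose a single action handler while model selection and orchestration stay server-side; the interface only ever receives rendered components, never an Agent to supervise.

```typescript
// Hypothetical sketch: model orchestration stays behind the interface.
// Assumes a registry of models keyed by action name, with a "default" entry.

interface InterfaceAction {
  name: string;        // e.g. "summarise", "format", "fetch-asset"
  input: unknown;
}

interface RenderedComponent {
  kind: "control" | "content" | "action";
  label: string;
  payload: unknown;
}

type ModelCall = (input: unknown) => Promise<unknown>;

// Server-side routing: the user never sees which model ran,
// only the components it produced.
function makeActionHandler(models: Record<string, ModelCall>) {
  return async function handle(action: InterfaceAction): Promise<RenderedComponent[]> {
    const model = models[action.name] ?? models["default"];
    const output = await model(action.input);
    return [{ kind: "content", label: action.name, payload: output }];
  };
}
```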
Sometimes I wonder if the appeal of Agents is fundamentally less about leveraging AI capabilities, and more about “hacking interoperability”. Agents are effective at bypassing navigation and moving data across systems. They enable users to benefit from cross-platform communication without API access or configuration. This is valuable, but not sustainable. Calling Agents to source data or run workflows through front-ends is expensive - in terms of compute, money, and time. Perhaps this is a signal that the underlying system deserves to be re-engineered. The cost of Agents may be the exact forcing function we need to make open standards a priority.
In the shorter term, a shared context, memory, and foundation model layer at the OS or web level (or via protocols like MCP) can facilitate data exchange and function execution across Intelligent Interfaces. In this paradigm, data lock-in is no longer a defensible strategy (spoiler: it already is not). The return on user and ecosystem utility will increasingly outweigh the cost of competition.
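One way to picture such a layer, purely as an illustration and not as the MCP specification, is a minimal shared surface for reading context, writing it, and executing functions across products:

```typescript
// Illustrative shape of a shared context layer; not a real protocol or API.
interface SharedContextLayer {
  get(key: string): Promise<unknown>;                                 // read shared context or memory
  set(key: string, value: unknown): Promise<void>;                    // contribute context back
  call(fn: string, args: Record<string, unknown>): Promise<unknown>;  // execute a function exposed by another interface
}
```

With something like this in place, an Intelligent Interface would not need an Agent driving another product’s front-end to move data between them.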
Intelligent Interface defensibility will index on experience design and capability curation. Each Intelligent Interface is a distinct service, with its own design system and feature menu. Their “experience package” is what will differentiate them, enabling users to benefit from a variety of Intelligent Interfaces, each purpose-built for different use cases.
As for persistent Agent use, I expect general-purpose assistants (e.g. Gemini, Siri) to stay relevant for quick actions and content previews. I do not foresee them displacing Intelligent Interfaces. They provide a different utility, and will most likely act as complementary interfaces rather than substitutes.
Agents may continue to scale human interaction functions (e.g. customer service, mentorship, education), although I anticipate that this will change too. Intelligent Interfaces have the potential to reset user standards, preferences, and expectations. In the context of customer service, resolution could shift from being mediated by an agent (human or AI) to being rendered directly by the system. Instead of asking an airline agent for a refund, a user would default to their account interface, where execution-ready options would appear based on passenger status, preferences, and entitlements. In edge cases, context could be added, or new options generated. When resolution is rendered in this way, opening a chat or calling an agent becomes unnecessary, or even irrational.
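A minimal sketch of that airline example, with hypothetical types and invented entitlement rules, might look like the following. The point is that the options are derived from context the system already holds, so no conversation is needed to reach them.

```typescript
// Hypothetical sketch of the refund example: execution-ready options are
// rendered from account context, not negotiated in a chat.

interface PassengerContext {
  status: "standard" | "silver" | "gold";
  booking: { refundable: boolean; departureInHours: number };
  preferences: { prefersCredit: boolean };
}

interface RefundOption {
  label: string;
  action: "full-refund" | "travel-credit" | "rebook";
}

// Entitlement rules are invented for illustration; edge cases would fall
// through to added context or a newly generated option.
function refundOptions(ctx: PassengerContext): RefundOption[] {
  const options: RefundOption[] = [];
  if (ctx.booking.refundable || ctx.status === "gold") {
    options.push({ label: "Refund to original payment method", action: "full-refund" });
  }
  if (ctx.preferences.prefersCredit) {
    options.push({ label: "Convert to travel credit", action: "travel-credit" });
  }
  if (ctx.booking.departureInHours > 24) {
    options.push({ label: "Rebook on a later flight", action: "rebook" });
  }
  return options;
}
```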
Perhaps a more compelling role for Agents is as MVPs. Agents function well as minimum viable interfaces in early solution discovery. They are easy to introduce into existing workflows or channels with little to no learning curve. They are ideal as “AI capability prototypes”, from which Intelligent Interface design can be derived. We are already seeing examples emerge, from AI Slackbots evolving into applications, to prompts turning into templates or skills, to ChatGPT embedding app UI. In this way, Agents can be useful precursors to more comprehensive AI software products.
My bias is that I do not want my computer to feel human. I value the efficiency of neutral (and hopefully beautiful) interfaces that allow me to act without conversation or coordination. I do not aspire to a future where we are primarily Agent managers. Although possible, it is not clear that it would lead to better outcomes, or be more efficient.
The reality is perhaps more nuanced, as it always is. Agents are valuable and will remain important in certain contexts. I do not believe they are a replacement for software, nor the most expansive way to benefit from AI’s capabilities.
AI has the potential to inspire many more interfaces and mental models. The language we use to define new technology determines how we understand and use it. We should be intentional about choosing language that supports our most expansive and ambitious vision of what AI can be. I am not sure “Agents” is it.