Coheso Team
For decades, software design has been optimized for one user: humans. We've developed deep expertise in typography, spacing, color theory, and attention patterns. We know where users look, what they click, and how to guide them through complex workflows.
But we're entering a new era. AI agents are now using the same platforms as humans, accomplishing tasks on our behalf, working alongside us in shared workspaces. This isn't about replacing the human experience. It's about designing for both.
The Dual-User Challenge
Think about any enterprise platform built before 2023. Slack, Salesforce, and your internal tools were all designed with one assumption: a human would be on the other end, reading text, clicking buttons, making decisions.
Now consider how AI agents interact with these same systems. They don't care about font weights or visual hierarchy. They're parsing HTML, calling APIs, and trying to accomplish tasks programmatically. The visual interface that works beautifully for humans is largely irrelevant to them.
If you were building Slack from scratch today, it would look very different. There would be aspects where you could trigger AI natively from the system, and aspects requiring oversight and permission for the AI to act on your behalf.
This creates two distinct challenges:
- Interface design for agents - Building APIs and interaction patterns that AI can use effectively
- Oversight design for humans - Creating visibility into what agents are doing and appropriate permission structures
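To make the two challenges concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the `AgentTool` type, the `requires_approval` flag, the `execute` helper are illustrations, not any real framework's API): the tool schema is the agent-facing interface, and the approval callback is the human-facing oversight layer.

```python
# Sketch of the two design surfaces: a tool an agent can call
# (interface design) and a permission gate a human controls
# (oversight design). All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentTool:
    name: str
    description: str          # read by the agent, not rendered for humans
    requires_approval: bool   # oversight: gate the action behind a human
    run: Callable[[dict], str]

def execute(tool: AgentTool, args: dict,
            approve: Callable[[str, dict], bool]) -> str:
    """Run a tool, pausing for human approval when the tool demands it."""
    if tool.requires_approval and not approve(tool.name, args):
        return "declined by user"
    return tool.run(args)

send_message = AgentTool(
    name="send_message",
    description="Send a chat message to a named channel.",
    requires_approval=True,   # the human stays in the loop
    run=lambda args: f"sent to {args['channel']}",
)

# An auto-approving lambda stands in for a real confirmation UI.
print(execute(send_message, {"channel": "#general"}, lambda n, a: True))
print(execute(send_message, {"channel": "#general"}, lambda n, a: False))
```

The key design choice is that approval lives in the execution path, not in the tool itself, so the same tool can be auto-approved in low-stakes contexts and gated in high-stakes ones.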
What Changes (And What Doesn't)
Here's the nuance: human-to-human interactions within software would still look the same. The visual design principles we've developed over decades remain valid for those interactions.
But whenever an agent gets involved, everything changes. Suddenly we need:
- Permission gathering - Agents need to request approval before acting
- Context surfacing - Humans need enough information to grant that permission intelligently
- Thought process visibility - When an agent drafts a response, should it show its reasoning?
- Progressive disclosure - How do you show the AI's work without overwhelming the user?
Consider a simple example: an AI agent drafting a response to a colleague. Instead of sending it directly, perhaps the human's preference is that the agent should draft it and let them hit send. They want to edit before committing.
If that draft required underlying research, the agent needs to show its work, including which sources it consulted and what reasoning it applied. That's a lot of visual information that can get cluttered quickly.
The Pattern of Expandable Thought
One pattern emerging across AI products is expandable thought processes. ChatGPT shows "thought for x seconds" with a toggle to expand. Claude shows a status bar that you can click to see the full reasoning.
This pattern exists because:
- Showing all reasoning by default is overwhelming
- Hiding it entirely removes trust and oversight
- Progressive disclosure lets users dig in when they want to
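The pattern reduces to a simple rendering rule, sketched here with hypothetical names (`ThoughtProcess`, `render`): show a one-line summary by default, and expand to the full reasoning only on request.

```python
# Minimal sketch of the expandable-thought pattern: a collapsed
# summary by default, full reasoning only when the user expands it.
from dataclasses import dataclass

@dataclass
class ThoughtProcess:
    seconds: int
    steps: list[str]

    def render(self, expanded: bool = False) -> str:
        summary = f"Thought for {self.seconds} seconds"
        if not expanded:
            return summary + "  [expand]"
        return summary + "\n" + "\n".join(f"  - {s}" for s in self.steps)

tp = ThoughtProcess(
    seconds=4,
    steps=["Checked calendar", "Found a conflict", "Proposed a new time"],
)
print(tp.render())               # collapsed: one line plus an expand affordance
print(tp.render(expanded=True))  # expanded: the full step list
```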
We're in the early stages of figuring out this oversight problem. Collectively, we're still learning how to maintain appropriate control while benefiting from AI assistance.
The AI-Native Advantage
Here's the uncomfortable truth for existing platforms: retrofitting AI onto software built for humans is extremely difficult.
Consider how much optimization has gone into existing products. The gap between pixels has been decided. The interaction patterns are baked in. Adding a paradigm as significant as AI agents means battling against years of design decisions that assumed humans were the only users.
When you don't have that technical debt to clear, building AI-native becomes much easier.
New platforms building from scratch have a significant edge. They can design the API layer, the permission model, and the visual interface all in harmony. Incumbents who succeed will likely need to change their platforms so extensively that it's almost a rebuild anyway.
This doesn't mean incumbents can't adapt. But it does mean the playing field is more level than it might appear. The paradigm shift is real, and everyone is learning together.
This is Part 1 of a 3-part series on Building for AI. In Part 2, we'll dig into how LLMs work and why that matters for product design.
