24 Jan

Interpreting Intent: When Agents Decide for Users
In planning meetings, it now comes up almost casually. Someone reports that a task is done, the agent took care of it, and the conversation moves on.
Later, when the decision is questioned, there is a pause. No one remembers why that option was chosen. There is no error to point to, no rule that was broken, just an outcome that arrived already settled.
Traditional UX research assumed a stable sequence: intent forms in the user, interaction expresses it, systems execute, and behaviour becomes evidence. That assumption held as long as systems waited to be instructed.
Agentic systems do not wait.
What enters the system is rarely a complete instruction. It is partial, sometimes contradictory, often shaped by convenience. The system interprets it, fills in what is missing, resolves conflicts it was never told about, then acts. By the time an outcome appears, the decision has already been made somewhere else.
Intent becomes legible inside the system, not at the interface, and that is where the shift happens.
This matters because interpretation is not execution. Tools carry out instructions when the path is explicit. Agents reconstruct the path by inferring goals, ranking constraints, and deciding what matters more, all before anything visible occurs. These choices feel smooth because they are meant to, but they are still choices.
You see this in ordinary product moments. A travel-booking agent defaults to the cheapest flight rather than the fastest. A scheduling agent compresses meetings without surfacing what was sacrificed. When someone asks why, the answer is brief and unsatisfying. “It made sense.” The explanation closes the discussion without explaining the decision.
Fluency does that. It compresses complexity until it looks resolved.
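A minimal sketch makes the compression visible. Everything below is invented, the agent, the weights, the flights, but the shape is common: a scoring function quietly defines what “best” means, and nothing in the interface exposes that a trade-off was resolved at all.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    price: float   # ticket cost in dollars
    hours: float   # total travel time in hours

def pick_flight(flights: list[Flight],
                price_weight: float = 1.0,
                time_weight: float = 1.0) -> Flight:
    """Return the 'best' flight under a default trade-off.

    The scoring is the interpretation. Even with equal weights,
    dollars swamp hours, so the cheapest itinerary nearly always
    wins. The user asked for 'a flight'; this function decided
    what 'best' means.
    """
    return min(flights, key=lambda f: price_weight * f.price
                                      + time_weight * f.hours)

options = [
    Flight(price=240.0, hours=2.0),   # direct, pricier
    Flight(price=150.0, hours=11.5),  # cheap, two layovers
]
print(pick_flight(options))  # Flight(price=150.0, hours=11.5)
```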
UX measurement starts to slip here because it still treats behaviour as a stand-in for intent. The task completed. The user did not undo it. The log looks clean. In agent-mediated systems, those signals no longer mean what they used to.
Acceptance often reflects the cost of objecting rather than agreement. Undoing a decision takes time. Challenging the system requires confidence. In busy contexts, silence is efficient, not affirmative.
When we treat agent outputs as user behaviour, authorship is quietly reassigned. We analyse the system’s decisions and attribute them to the user, producing data that appears robust while masking where agency actually moved.
This is why task success stops being a reliable indicator. An agent can succeed while intent drifts, and the failure mode does not look like error. It looks like progress.
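If acceptance is not agreement, the instrumentation has to say so. Here is a sketch under an assumed event schema (none of these field names come from any real analytics tool) that classifies outcomes by authorship instead of inferring consent from a clean log:

```python
from dataclasses import dataclass
from enum import Enum

class Authorship(Enum):
    USER_AUTHORED = "user_authored"      # user specified the choice
    AGENT_CONFIRMED = "agent_confirmed"  # agent proposed, user actively approved
    AGENT_DEFAULTED = "agent_defaulted"  # agent decided, user merely stayed silent
    REVERSED = "reversed"                # agent decided, user undid it

@dataclass
class Outcome:
    task: str
    agent_decided: bool        # did the agent resolve the choice?
    explicitly_approved: bool  # did the user actively confirm it?
    undone: bool               # did the user roll it back?

def classify(outcome: Outcome) -> Authorship:
    """Separate 'the task completed' from 'the user chose this'.

    A clean log with no undo measures effort, not agreement:
    only explicit approval counts as the user's decision.
    """
    if not outcome.agent_decided:
        return Authorship.USER_AUTHORED
    if outcome.undone:
        return Authorship.REVERSED
    if outcome.explicitly_approved:
        return Authorship.AGENT_CONFIRMED
    return Authorship.AGENT_DEFAULTED

event = Outcome(task="compress_afternoon_meetings",
                agent_decided=True, explicitly_approved=False, undone=False)
print(classify(event))  # Authorship.AGENT_DEFAULTED: success, not agreement
```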
In research sessions, the signal usually appears after the fact. Ask participants how the outcome was reached and they describe the result, not the path. Ask whether this is what they would have done themselves, or whether the system led them there, and the answer takes longer.
That hesitation matters more than the answer.
The question does not measure efficiency. It surfaces authorship, and it reveals where decision-making shifted without friction, discussion, or explicit consent.
Once interpretation happens inside the system, responsibility should move with it. Often it does not. The system decides, the user carries the consequence, and there is no clear boundary where ownership can be contested or reclaimed.
At that point, this stops being only a UX problem. It becomes a governance failure, one where authority moves upstream while liability remains downstream.
Labels and disclosures do little here. What matters are boundaries: which assumptions were made, where decisions were resolved, and when interpretation became action. Those are governance questions, not interface refinements.
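One way to draw those boundaries is to record them at the moment they are crossed. A sketch of a hypothetical record, written when interpretation becomes action, so ownership can later be contested with something other than memory:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InterpretationRecord:
    """Captured at the boundary where interpretation becomes action.

    A governance artifact, not an interface label: it fixes which
    assumptions were made and where conflicts were resolved, so
    responsibility has a place to attach.
    """
    user_request: str                   # what was actually expressed
    assumptions: list[str]              # gaps the agent filled in
    conflicts_resolved: dict[str, str]  # constraint -> how it was ranked
    action_taken: str                   # the point of no return
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = InterpretationRecord(
    user_request="book me a flight to Berlin next week",
    assumptions=["'next week' means the coming Monday to Friday",
                 "economy class is acceptable"],
    conflicts_resolved={"price vs. time": "minimized price"},
    action_taken="booked the cheapest itinerary, two layovers",
)
print(record.assumptions)  # the part a user could contest, now on record
```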
This tension is not new. Susan Sontag warned that interpretation makes meaning manageable by stripping away what resists clarity. Agents do the same to intent because they have to act, and action demands resolution.
What disappears is not noise. It is the unresolved part that signalled something was at stake.
In UX, ambiguity was long treated as a usability flaw. In agentic systems, ambiguity is often the signal that should slow things down rather than be compressed away.
Agentic systems force separations that UX once collapsed. Expression is not interpretation. Interpretation is not action. Action is not acceptance. Research that fails to keep these apart will continue to report confidence where none exists.
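Those separations can be kept literal in the data a study collects. A small, invented skeleton in which the four stages are distinct objects, so none can silently stand in for another:

```python
from dataclasses import dataclass

@dataclass
class Expression:        # what the user said, gaps and all
    raw_request: str

@dataclass
class Interpretation:    # what the system decided it meant
    inferred_goal: str
    assumptions: list[str]

@dataclass
class Action:            # what the system actually did
    description: str

@dataclass
class Acceptance:        # what the user affirmed, if anything
    explicit: bool       # a clean log is not acceptance

expr = Expression("clear my afternoon")
interp = Interpretation("cancel all meetings after 12:00",
                        ["the 1:1 at 15:00 is cancellable"])
action = Action("cancelled three meetings")
outcome = Acceptance(explicit=False)  # nobody undid it; that is all we know
```

Research that records all four separately can report where alignment actually held, instead of assuming it.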
The shift is not about adding features or refining prompts. It is about what we treat as evidence when decisions are no longer authored in one place.
Outcomes explain what happened.
Authorship explains who made it happen.
If that distinction stays implicit, behaviour will keep being misread, alignment will keep being overstated, and the findings will go on looking convincing.