From Chat to Control: Why AI Interfaces Need Symbols, Not Sentences

I was reading a short post by Jakob Nielsen when something clicked uncomfortably into place.

His argument was clean. As AI agents mature, traditional user interfaces dissolve. Users stop navigating. They instruct. Screens become temporary. In some cases, they disappear.

That claim is directionally correct. But it leaves a gap that matters in practice.

If the interface recedes, control does not vanish with it. It relocates. And right now, that control is being pushed almost entirely onto conversational language.

I started noticing the cost of that shift in small moments. Planning meetings where prompts kept getting longer. Reviews where nobody could explain why an answer felt wrong, only that it did. Research summaries that sounded confident until someone asked where a claim came from.

Language was doing too much work.

 

The Roman Numeral Phase of AI

Natural language is powerful. It is also inefficient when used as a control surface.

We are already compensating. Prompts expand, the same constraints reappear in request after request, and tone gets negotiated instead of enforced. When the system hesitates, users explain themselves again, usually in longer and more careful ways, hoping precision will emerge from volume.

This is the Roman Numeral phase of AI.

Roman numerals were fine for labelling. They failed at calculation. The system broke not because people lacked intelligence, but because the notation could not express state, absence, or transformation. What changed mathematics was not fluency. It was the introduction of zero and positional logic.

Zero mattered because it altered what the system could do, not how politely it described itself.

That distinction matters here.

What we are missing in AI interaction is not better wording. It is a symbolic layer that compresses intent into something the system can execute reliably, without requiring the user to restate rules every time.

Not a new language. Not “AI-speak”. Something closer to operators.

 

Symbols as Control, Not Style

I started sketching this out informally while working. Nothing formal. Just marks I kept wishing I could add without explanation.

Take a simple task.

Old way:

“Hey, can you help me summarise this article? Please don’t be too wordy, make sure you cite sources accurately, avoid your usual intro, and if there’s controversy, show both sides.”

It works. Sometimes. It also relies on interpretation, memory, and goodwill.

New way:

Summarise this article [-][#][~]

Those symbols are not shorthand. They change behaviour.

[-] strips conversational padding. No greetings. No framing. Output starts with content.

[#] enforces attribution. Claims must be grounded or marked as uncertain.

[~] allows synthesis without forcing convergence. Nuance stays visible.

Read left to right, they function as constraints. Remove one, and the output shifts. Combine them, and you get something closer to an instrument than a conversation.
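
To make that concrete, here is a minimal sketch of how such operators could be compiled out of a request before it reaches the model. Everything in it is an assumption on my part: the symbols are the ones above, but the function names and directive wording are invented, and a real implementation would sit at the system-prompt or policy layer rather than in application code.

```python
import re

# Hypothetical operator table: each symbol expands into an explicit,
# machine-enforceable directive instead of a polite request in prose.
OPERATORS = {
    "[-]": "Strip conversational padding: no greetings, no framing, start with content.",
    "[#]": "Ground every claim in a cited source, or mark it explicitly as uncertain.",
    "[~]": "Allow synthesis without forcing convergence; keep nuance and disagreement visible.",
}

def compile_request(raw: str) -> tuple[str, list[str]]:
    """Split a request such as 'Summarise this article [-][#][~]' into the bare
    task and its constraint directives, preserving left-to-right symbol order."""
    symbols = re.findall(r"\[[-#~]\]", raw)
    task = re.sub(r"\[[-#~]\]", "", raw).strip()
    return task, [OPERATORS[s] for s in symbols]

task, constraints = compile_request("Summarise this article [-][#][~]")
# task        -> "Summarise this article"
# constraints -> the three directives above, sent once as system-level rules
```

The parsing is trivial, and that is the point. Adding or removing a symbol changes the directives the system receives, rather than renegotiating behaviour in prose on every turn.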

This is not about efficiency theatre. It is about where errors surface.

Without explicit constraints, problems appear late. During review. During decision-making. Sometimes after shipping. With them, failure shows up earlier, where it is cheaper to deal with.

That is the practical difference.
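
One way to picture where the failure moves: if [#] is active, the output can be checked mechanically before anyone reads it. The tags below, [source: ...] and [uncertain], are an invented convention for illustration, not something the symbols above prescribe; the sketch only shows that an explicit constraint gives you something to fail against early.

```python
def check_attribution(output: str) -> list[str]:
    """Hypothetical enforcement of the [#] constraint: every paragraph must
    carry a source tag or an explicit uncertainty flag. Returns the offending
    paragraphs so the problem surfaces before review, not after shipping."""
    problems = []
    for para in (p.strip() for p in output.split("\n\n")):
        if para and "[source:" not in para and "[uncertain]" not in para:
            problems.append(para[:60] + "...")
    return problems

# An ungrounded paragraph fails while the request is still open, instead of
# being discovered later by a reviewer asking where a claim came from.
```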

 

When Friction Disappears Too Cleanly

Someone commented on my post a few days later, and her framing widened the picture.


She described adaptive UI as a bridge. A messy middle where voice, agents, and screens overlap. Hybrid systems that mostly disappoint, but still teach teams where things break. She is right about that phase. Anyone working in this space has seen it.

She also described hardware “kits”. Rings, glasses, watches. Personal ecosystems shaped by context and profession.

I like the vision. I share the concern.

Jaron Lanier’s You Are Not a Gadget keeps coming back to me here. Users rarely choose what is best. They choose what is bundled, frictionless, or already there. Hardware kits look like choice. In practice, they tend to collapse around defaults.

Once that happens, control becomes harder to recover.

The same risk applies to personal agents. The agent that “knows you best” may simply be the one that has collected the most data across the widest surface. That does not automatically make it the one that serves you best.

Continuity feels empowering until it becomes enclosing.

Without a portable grammar of intent, something you can carry across systems, you lose the ability to break the glass. You inherit behaviour you did not explicitly choose. Correction becomes verbose again, because it has to fight accumulated assumptions.
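
A portable grammar need not be exotic. It could be nothing more than plain data: the operators and defaults you rely on, serialised so they travel with you rather than staying inside one vendor's agent. The schema below is invented purely for illustration.

```python
import json

# Hypothetical portable intent profile: the user's default constraints and
# operator meanings as plain data, inspectable and movable between systems.
intent_profile = {
    "version": 1,
    "defaults": ["[-]", "[#]"],  # applied to every request unless overridden
    "operators": {
        "[-]": "strip conversational padding",
        "[#]": "ground claims or mark them uncertain",
        "[~]": "synthesise without forcing convergence",
    },
}

print(json.dumps(intent_profile, indent=2))
```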

That is where symbolic control starts to matter. Not as elegance. As friction you can apply deliberately.

 

The Humanisation Problem

Caleb Sponheim’s article arrived later and closed the loop for me.

His argument is blunt. Humanising AI is a trap. Personality modes, conversational fluff, emotional language. All of it increases engagement. Much of it reduces reliability.

I have seen this play out in practice. A summary opens with “Love this brief!” and nobody questions the substance. A system says it is “thinking”, and users wait patiently for something that is not cognition at all, just computation wrapped in metaphor.

Human language invites human mental models. Those models expect judgement, consistency, accountability. LLMs offer none of those things.

Caleb cites evidence showing that warmth correlates with higher error rates and lower trust. Even without the studies, the pattern is familiar. When the interface feels like a person, people forgive it like one. That is rarely what organisations want from a tool.

Symbols cut through that. They do not pretend to care. They do not reassure. They specify.

That is their advantage.

 

Control Does Not Disappear

Nielsen is right about one thing that is easy to miss. UX is not dying. It is moving.

When UI recedes, control does not disappear. It relocates into language, defaults, policies, and unseen execution paths. If designers do not shape those layers, they still exist. They just harden without scrutiny.

Right now, conversational interfaces are carrying too much of that load. They are being asked to express intent, enforce boundaries, convey confidence, and negotiate tone, all at once. That is why prompts grow. That is why constraints repeat. That is where systems begin to break.

Symbolic grammar is not a solution in itself. It will fail in places. It will be misused. Some teams will treat it as style rather than control. Others will resist the friction entirely.

That tension is real and unresolved.

But the direction is clear enough to name. As interfaces fade, grammar becomes infrastructure. Not expressive grammar. Operational grammar. The kind that decides what the system is allowed to do before it decides how friendly it sounds.

When that layer is missing, language fills the gap. And language, on its own, is a fragile place to put control.