Heuristics as Reflective Practice

What’s left out when we rely on heuristics?

We were reviewing an onboarding flow for a government-facing service, a design that, on paper, respected several usability heuristics: clear feedback, minimalist design, consistency. But one line stood out. A user reached a confirmation screen, and the system displayed a success message in bright green with a tick. “You’ve completed this stage,” it said.
Only, they hadn’t.

The heuristic flagged it: a violation of “match between system and the real world.” But what it didn’t show us was why the message had been written that way, or why no one had changed it. It had passed three rounds of internal review.

In follow-up interviews, a participant summed it up:

“It says I’m done, but I know I’m not. So I don’t trust it.”

The issue wasn’t just mislabelling. It was a small moment of institutional self-protection: a pattern of overpromising to reduce call-centre load. The green tick wasn’t just a mistake. It was a compromise.

 

Heuristics are sharp, but shallow. They bring clarity at the cost of context. When you evaluate against them, you often find problems quickly, but that speed can lull you into stopping too soon.

In the case I mentioned, the heuristic diagnosis might have ended the inquiry: “Success message needs better labelling.” But lived behaviour showed us more. That surface error had roots in deeper dynamics: organisational habits, legacy fears, even the KPIs used to evaluate service calls.

This wasn’t triangulation; we didn’t combine methods. We traced meaning through layers of context, beginning from a violated heuristic and unfolding outward.

I’ve learned to treat heuristics as invitations to reflect, not conclusions. But that took time. Early in my practice, I used them like scorecards. What shifted was realising that a flagged issue isn’t the end of an investigation; it’s the beginning of one.

What changed was seeing the heuristic not as an authority, but as a prompt: something that signalled where to slow down.

Sometimes, it’s that slowness that makes the method meaningful.

 

I used to treat them as fixed criteria. Now I see them more as evolving patterns of judgement: context-sensitive, culturally dependent, and shaped by the teams that apply them.

Take Nielsen’s “Consistency and Standards.” It makes sense, but what counts as consistent varies with platform, history, and expectation. A swipe gesture may be intuitive in one app, opaque in another.

Heuristics don’t resolve those differences. But they surface where reflection is needed.

They don’t answer the question. They show you where to ask one.

One thing that helped was adapting the list itself. In one project, we created a localised version of the ten heuristics to evaluate internal tools. We combined Nielsen’s principles with our org’s own interface patterns. We even named the tensions, like where error prevention conflicted with user control. That friction didn’t weaken the evaluation. It made it real.

Heuristics became more useful when we stopped pretending they were universal.
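One way to make that adaptation concrete is to record each heuristic alongside its source and its named tensions, so the friction stays visible during review. This is a minimal sketch, not the list from the project; every name and tension below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Heuristic:
    """One entry in a localised heuristic checklist."""
    name: str                                       # the principle itself
    source: str                                     # e.g. "nielsen" or "internal"
    tensions: list = field(default_factory=list)    # named conflicts with other principles

# A hypothetical localised list: Nielsen principles mixed with an internal pattern,
# with one tension recorded explicitly rather than smoothed over.
checklist = [
    Heuristic("Error prevention", "nielsen",
              tensions=["conflicts with 'User control' on bulk-delete confirmations"]),
    Heuristic("User control and freedom", "nielsen"),
    Heuristic("Match internal form patterns", "internal"),
]

def named_tensions(heuristics):
    """Collect every recorded tension so the team can review them together."""
    return [(h.name, t) for h in heuristics for t in h.tensions]
```

Listing the tensions as data, rather than leaving them implicit, is what made the evaluation feel “real” in the sense described above: the conflicts become something the team can point at and argue about.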

 

Always. Especially in time-sensitive projects or procurement-led settings. There’s a comfort in having something objective to point to — “This violates heuristic #5” — even if, as I explored in The Method is the Medium, that objectivity is often constructed.

But that’s where we risk mistaking the method for the meaning. Heuristics aren’t answers; they’re artefacts of judgement.

A good heuristic shows you where something’s broken. A reflective team asks why it broke that way.

To be clear, I still use them. They’re fast, communicable, and surprisingly durable. But I no longer treat them as the end of the conversation.
They’re a first filter, not a full account.

 

On a personal note, I keep returning to the moment the user said, “I don’t trust it.”

The interface followed most of the rules, and yet trust broke. That tension stays with me. It reminds me that evaluation isn’t just about identifying problems, but understanding what they mean.

Heuristics can spotlight friction. But they rarely explain its cause.

So I use them, but never alone. What matters is what we do after they show us something. How we resist the temptation to tidy the issue away.
Because sometimes, the problem isn’t the design. It’s the compromise behind it.



Disclaimer: Articles are developed with the support of AI tools. I review and edit all work, and share this openly so readers can see how the writing is made. Peer feedback to correct or improve content is welcome.