Triangulation in UX Research
Why does the call for rigour so often collapse into a checklist?
Triangulation promises rigour. But too often, it delivers repetition. Researchers reach for it not to surface contradiction, but to secure consensus: three methods, three data types, one tidy insight. It becomes less a practice of inquiry than a form of pre-emption: anticipate critique by outnumbering it.
This article explores what triangulation in UX research might become if we resist that instinct. Not as a seal of truth, but as a process of interpretive tension. What happens when we treat friction between methods as a sign of insight, not failure?
The scenario that follows is fictional, but drawn from real patterns I’ve seen repeatedly in practice. The aim is not to simulate realism, but to examine how meaning takes shape when signals disagree.
A familiar trap
Picture a budgeting tool, just launched. In early usage, one pattern catches the team’s attention: users consistently ignore the pencil icon beside each category name, the one meant for renaming. Clip after clip shows it: people scroll, pause, skip.
The speculation begins.
“She doesn’t even see the rename icon.”
“He’s moving too fast to personalise.”
“They’re not trying to make it their own.”
A story forms: users aren’t engaging with customisation; they just want to get through setup.
It’s the kind of insight that travels easily. In one version, it supports a minimalist roadmap. In another, it’s flagged as a design flaw. Either way, the story seems self-evident. The behaviour is there. It repeats. It looks conclusive.
But behaviour alone doesn’t explain pacing, priority, or mental load. A pattern is not a preference. Not yet.
So the team brings in other data sources, not for confirmation, but to widen the aperture.
Three signals, no closure
Scroll maps show a hesitation near the top of the page: users slow down at the category labels, then move smoothly past. Click data reveals high interaction with the “+” icon (adding categories) but almost none on the pencil icon. These signals repeat the pattern, but don’t yet explain it.
Then, in a small interview set, a few participants give a different view:
> “I knew I’d want to rename things later, but it didn’t feel urgent. I just wanted to start tracking.”
> “I got what the categories meant — even if the names weren’t perfect. I made the change in my head.”
These aren’t disengaged users. They’re strategic. They’re managing attention, not opting out.
Suddenly, the scroll pause looks like momentary mapping, not confusion. The absence of a click becomes a form of deferral, not disinterest. The intention to personalise is there, but postponed, not performed.
This is where triangulation becomes something more than method layering. The different signals don’t converge. They collide. And the contradiction becomes instructive.
Contradiction as method
When triangulation is treated as proof, disagreement between methods looks like error. One source must be “off.” But when it’s treated as dialogue, disagreement becomes a doorway.
Here, the screen recordings offered one lens: what people skipped.
The scroll data offered another: where attention slowed.
The interviews reframed both: users were deciding what to delay.
Together, they changed the frame of inquiry. Not “Do users personalise their budget?” but “How do users pace their understanding of a system that feels incomplete?”
In practice, this shifted the structure of our readouts. Instead of presenting “findings,” we began mapping tensions: quote vs heatmap, interview vs clickstream. Anomalies were preserved, not smoothed out. We stopped looking for alignment, and started looking for friction.
As in A Week of Searching, contradiction became a signal of interpretive depth.
Personal reflection
I’ve come to understand triangulation less as an exercise in confirmation and more as a practice of patience.
The urge to wrap things up, to move from signal to statement, is strong. Especially under time pressure, especially in stakeholder meetings. But in many of the most formative projects I’ve worked on, clarity emerged not through alignment but through tension. A quote that didn’t match the graph. A metric that refused to cohere with the story.
These moments weren’t dead ends. They were invitations to look again, not at the user, but at how we were reading them.
So now, when I write a research plan, I ask different questions:
Where might methods pull away from each other?
Where might contradiction help me see the edge of what I think I know?
Triangulation doesn’t give certainty. It gives contour.