22 Aug UX Research and Sensemaking
What do we do when the data doesn’t agree?
In one study, we heard two conflicting things about the same feature. Some users described it as “super intuitive, just works”, while others couldn’t even find it. Both accounts came from usability sessions using the same prototype. The scroll heatmaps weren’t conclusive either: most users reached the section, but some bypassed it altogether, either scrolling past too quickly or getting distracted by another element.
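In practice, that check amounted to classifying each session by whether it reached the feature’s section and how long it lingered there. A minimal sketch of that kind of classification, assuming a hypothetical per-session event log with `session_id`, `timestamp` (in seconds), and `scroll_depth` (0–1 page fraction); all column names and thresholds are illustrative, not our actual pipeline:

```python
import pandas as pd

# Hypothetical export of scroll samples: one row per tick, per session.
events = pd.read_csv("scroll_events.csv")  # session_id, timestamp, scroll_depth

# Illustrative vertical position of the feature's section (fraction of page height).
SECTION_TOP, SECTION_BOTTOM = 0.55, 0.70

def classify(session: pd.DataFrame) -> str:
    """Label one session: never reached the section, scrolled past it, or lingered."""
    in_section = session[session.scroll_depth.between(SECTION_TOP, SECTION_BOTTOM)]
    if in_section.empty:
        return "never reached"
    dwell = in_section.timestamp.max() - in_section.timestamp.min()
    return "engaged" if dwell >= 2.0 else "scrolled past"  # 2 s dwell threshold is arbitrary

print(events.groupby("session_id").apply(classify).value_counts())
```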
At first, we treated it like a clarity problem. Maybe one group misunderstood the task. Maybe the findings were noise. But then came the temptation: we could just resolve the tension. Choose a story. Pick the more strategic finding. Tell the team what they needed to hear.
That pressure, subtle and unspoken, was real. But clarity, in this case, would have meant flattening something valuable: the contradiction itself. Because it wasn’t a glitch in the data. It was the pattern.
Step 1, The contradictory findings
What made this harder was that both groups of users were “right”. The design hadn’t failed outright, but it hadn’t adapted either. For some, its invisibility was a mark of success: the feature sat seamlessly inside their workflow. For others, the same invisibility meant confusion. When we filtered by device, we saw that most struggling users were on smaller screens. When we looked again at the session videos, a pattern emerged: the users who understood the feature tended to arrive via a different entry point, such as a help article or a campaign page.
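The cut itself was ordinary cross-tabulation; something like the sketch below, over a hypothetical session-level table (the column names `device`, `entry_point`, and `found_feature` are illustrative):

```python
import pandas as pd

# Hypothetical session-level table: one row per usability session.
sessions = pd.DataFrame({
    "device": ["desktop", "mobile", "mobile", "desktop", "mobile", "desktop"],
    "entry_point": ["help_article", "search", "search", "campaign", "search", "help_article"],
    "found_feature": [True, False, False, True, False, True],
})

# Discovery rate per (device, entry point) cell, with cell counts
# so small segments aren't over-read.
print(sessions.pivot_table(
    index="device", columns="entry_point",
    values="found_feature", aggfunc=["mean", "count"],
))
```

The point of a table like this isn’t the numbers; it’s that the same feature reads differently in each cell.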
The contradiction was real, not merely apparent. And that distinction mattered. We weren’t uncovering noise. We were surfacing a structural inconsistency: a product behaviour that held up differently under varied conditions.
One user put it plainly: “Oh, I thought that was just a label — not something I could click.” That comment shifted everything. The element hadn’t changed. But the reading of it had.
If we’d treated the dataset as binary, “some users found it, some didn’t”, we’d have missed what the contradiction was pointing to: a split in how the feature was contextualised. The disagreement wasn’t the problem. It was the clue.
Step 2, Sensemaking as iterative framing
This was where sensemaking became a method in itself. Not a step after analysis, but an approach threaded through it.
We used Karl Weick’s view of sensemaking as “retrospective, social, and identity-driven” to ask: what frame are we applying, and how are we adjusting it in response to the evidence? Donald Schön’s idea of “reflection-in-action” also helped: design and interpretation are not separate; the way we read the field already reshapes it.
Rather than declare an insight, we mapped multiple storylines:
- One for users arriving through guided flows.
- One for those landing from search.
- One for support-driven re-entries.
These were not segments or personas, but frames: interpretive pathways through which the same UI was being read. Each one highlighted different tensions. The “seamless” experience, for instance, only held up when onboarding had done the work upstream. In other contexts, the same feature became invisible in the wrong sense: under-described, under-signalled.
We returned to the prototype and re-ran two sessions. This time, we narrated aloud what we thought the experience would communicate, and asked users to do the same. Their interpretations broke our assumptions open. A coloured label we considered purely functional turned out to be read as a badge, making the element seem less interactive than intended. That changed how we framed the scroll data. Suddenly, the speed of descent made more sense.
Sensemaking here wasn’t resolution. It was a method of holding disagreement long enough to let new patterns surface.
Step 3, The ethics of interpretation
But we had to decide. The roadmap team was asking: should this feature be flagged for redesign?
Here the ethical dimension surfaced. Interpretation wasn’t neutral. If we led with the seamless users, we risked exclusion. If we led with the struggling ones, we risked undermining a design that worked well in specific flows. Either choice would be read back as the finding.
We brought in a facilitation technique: instead of framing one interpretation as dominant, we presented the three narrative lines side by side, asking stakeholders to annotate the friction points they found most risky. It changed the conversation. Suddenly, they weren’t asking “which story is true?”, but “which gap matters most to address?”
That shift in posture, from validation to framing, became the real outcome. It didn’t flatten complexity. It located it.
Personal reflections:
I learned that patience isn’t just a soft skill here; it’s methodological. What felt like indecision was actually a refusal to oversimplify.
Sensemaking, in this context, wasn’t a path to consensus. It was a way of preserving ambiguity until its contours became meaningful. In retrospect, what I thought was a stuck moment, where nothing agreed, was actually the turning point. That’s where we began to see the outlines of a system in flux, not a feature in failure.