As UX work expanded across multiple journeys and markets, the organisation faced a growing disconnect between behavioural insight and business decision-making.
Teams were improving checkout flows, refining product listings, adjusting navigation, and iterating on templates. Each initiative showed signs of behavioural change, yet there was no shared way to compare their value or prioritise investment across the portfolio.
The problem was not a lack of data. It was the absence of a common financial language for UX impact.
This case study documents how a scalable ROI framework was designed to translate UX behaviour into credible, comparable business signals.
Project context
Challenge
UX initiatives were evaluated in isolation.
Checkout changes affected a small proportion of users but carried high intent. Product listing and navigation changes reached more users but produced subtler behavioural shifts. Template updates varied by market and maturity.
Without a shared framework, prioritisation stalled and discussions defaulted to subjective judgement rather than evidence.
The core challenge was comparability and credibility, not measurement volume.
Process and method
Strategy
The strategy was to design a single, reusable ROI framework that could be applied consistently to any UX change, regardless of journey depth or market size.
Core Requirements
The framework needed to:
- Connect behavioural metrics to business impact
- Normalise performance across different exposure levels
- Support forecasting before launch and accountability after release
- Prevent inflated ROI claims in deep-funnel contexts
- Produce clear, trusted ROI tiers for decision-making
Scalability was a deliberate design goal, not an afterthought.
Execution Highlights
A Consistent Measurement Logic
The framework translates UX behaviour into business impact using a single principle: impact is a function of behavioural change and exposure.
Rather than relying on relative uplift or raw analytics, the model:
- Measures conversion change in percentage points
- Weights impact by the proportion of users actually exposed
- Applies consistent time normalisation across initiatives
- Evaluates performance over multiple post-launch windows
This ensured that improvements were neither overstated nor dismissed.
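As a sketch of this logic (the function name, figures, and 30-day normalisation window are illustrative assumptions, not the production model), the exposure-weighted calculation might look like:

```python
def ux_impact(baseline_rate, new_rate, exposed_users, period_days, norm_days=30):
    """Estimate the business impact of a UX change.

    Impact = behavioural change (in percentage points) x exposure,
    normalised to a standard time window so initiatives are comparable.
    """
    pp_change = new_rate - baseline_rate            # absolute change, not relative uplift
    incremental = pp_change * exposed_users         # extra conversions among exposed users
    return incremental * (norm_days / period_days)  # normalise to the standard window

# Deep-funnel change: large rate shift, but few exposed users
checkout = ux_impact(0.020, 0.025, exposed_users=10_000, period_days=30)

# Shallow-funnel change: subtle shift, but many exposed users
listing = ux_impact(0.0050, 0.0055, exposed_users=200_000, period_days=30)

print(round(checkout), round(listing))  # both expressed on the same scale
```

On these hypothetical figures, the subtler listing change outweighs the dramatic checkout change once exposure is accounted for, which is exactly the comparison the framework was designed to make visible.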
Time-Based Validation
To avoid premature conclusions:
- Early post-launch windows captured adoption effects
- Later checkpoints confirmed behavioural stabilisation
This approach allowed the team to detect short-term volatility, long-term consistency, and false positives driven by novelty or traffic noise.
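A minimal illustration of this validation logic, assuming hypothetical percentage-point readings taken at successive post-launch checkpoints (the tolerance threshold is an assumed value):

```python
def is_stable(windows, tolerance_pp=0.002):
    """Flag whether a conversion change (in percentage points) has
    stabilised across successive post-launch windows.

    The first window is treated as the adoption period; stability is
    judged on the later checkpoints only.
    """
    if len(windows) < 2:
        return False
    later = windows[1:]
    # Stable if later windows agree within tolerance and stay positive
    return max(later) - min(later) <= tolerance_pp and min(later) > 0

# Novelty spike that fades: +0.8pp early, near zero afterwards
print(is_stable([0.008, 0.001, 0.000]))   # flagged as a false positive

# Genuine improvement: the effect persists across checkpoints
print(is_stable([0.006, 0.005, 0.0055]))  # confirmed as stable
```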
Portfolio-Level Visibility
Each UX change was documented in a dedicated update view and rolled into an overview layer showing:
- Journey step
- Relative exposure
- Direction and stability of impact
- ROI tier classification
- Confidence notes
This shifted conversations from “Is this UX change good?” to “Where should we invest next for the strongest return?”
Discipline Through Rejection
Several commonly used ROI approaches were explicitly rejected:
- Relative uplift percentages that exaggerated deep-funnel impact
- Applying changes to total site traffic regardless of exposure
- Blind use of industry benchmarks without contextual adjustment
- Volatile revenue-per-session models
The final framework prioritised realism over persuasion.
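A small numeric sketch (all figures hypothetical) shows why relative uplift was rejected for deep-funnel contexts: a headline-grabbing percentage can describe a modest absolute effect once exposure is applied.

```python
# Deep-funnel step: conversion moves from 2.0% to 2.5%,
# but only 10% of sessions ever reach this step.
baseline, new = 0.020, 0.025
exposed_share = 0.10
sessions = 1_000_000

relative_uplift = (new - baseline) / baseline                    # reads as +25%
extra_conversions = (new - baseline) * exposed_share * sessions  # actual incremental volume

print(f"{relative_uplift:.0%} relative uplift, "
      f"but only {extra_conversions:.0f} extra conversions")
```

Quoting the 25% figure against total traffic would overstate the result by an order of magnitude; the exposure-weighted number is what the framework reports.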
Result and impact
Outcome
The framework was first validated through a checkout optimisation initiative, then adopted as the standard evaluation model for UX changes.
Key outcomes included:
- A shared, auditable ROI language across teams
- Increased trust in UX impact reporting
- Faster, evidence-based prioritisation decisions
- More disciplined allocation of engineering effort
Importantly, the framework was also used to deprioritise initiatives with limited exposure and low strategic leverage. It proved capable of constraining investment, not just justifying it.
UX shifted from a cost discussion to a decision-support function.
Reflection
This work changed one foundational assumption: conversion change has no meaning without exposure context.
Before the framework, impact discussions focused on the size of behavioural shifts. Afterwards, they focused on how many users those shifts actually affected.
That shift reframed UX ROI from advocacy to accountability.
The framework does not replace qualitative research, brand thinking, or accessibility judgement. It complements them by providing a clear validation layer where financial decisions require evidence.
In doing so, it raised the maturity of UX conversations not by inflating impact, but by making it comparable, bounded, and trustworthy.