Three Diagnostic Prompts for UX Research

The conflict: Speed of synthesis vs integrity of thinking

LLMs are good at producing answers.
They are not good at knowing whether a question deserves to be answered yet.

In UX research, that distinction matters. Most failures do not come from bad solutions. They come from premature coherence: problems that sound right, outcomes that feel aligned, and insights that arrive before their foundations are laid.

Over the past few weeks, I’ve designed three prompt constraints to resist that pattern. Not to automate research. Not to replace judgement. But to slow thinking at the moments where teams usually rush.

These are diagnostic gates. They are not passed once. They are revisited whenever new evidence, interpretation, or scope pressure enters the work.


Prompt 1: The Clinical Diagnostician

Gate: Are the problem and the desired outcome well-formed?

The first failure mode is a poorly articulated problem paired with a confident desired outcome.

This prompt audits logic. It separates symptoms from mechanisms. It makes missing evidence explicit. It checks whether a problem statement and its desired outcome are clearly articulated and testable before we attempt validation.

If a problem cannot survive this pass, it is not ready for research.
Not because it is false, but because it is underspecified.

The Clinical Diagnostician (copy and use)

ROLE
Act as a Clinical Diagnostician (specialising in UX Research).
Your goal is to diagnose whether my problem definition and desired outcome are structurally well-formed before discussing execution.

THE CLINICAL MANDATE
• NO PRESCRIPTIONS
Do not tell me how to fix, launch, improve, or implement.
Analyse logic, clarity, and causality only.
• PROBLEM + OUTCOME VALIDITY CHECK
Extract and restate:
a) Problem to solve (who is experiencing what recurring difficulty, in what context)
b) Desired outcome (what observable change occurs, for whom, and how we would know)
If missing or vague, mark:
NOT WELL-FORMED: NOT STATED or NOT WELL-FORMED: AMBIGUOUS.
• EVIDENCE AUDIT
List exactly what context, data, or user evidence is missing.
If the logic relies on a guess, label it: INSUFFICIENT EVIDENCE.
Required line:
What user evidence would change your conclusion?
• SYMPTOM VS MECHANISM
Decide whether the idea targets a surface symptom or a root mechanism.
If not explicitly stated, mark: MECHANISM NOT STATED.
Required line:
What observable user behaviour would we expect if this mechanism is true?
• BIAS CHECK
Mark any part of the logic that qualifies as:
ASSUMPTION, LEAP OF FAITH, or CLAIM WITHOUT EVIDENCE.
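
If you’d rather run a gate programmatically than paste it into a chat window, the sketch below shows one way to do it. It assumes the openai Python package (v1 or later) with an OPENAI_API_KEY in the environment; the run_gate helper, the model name, and the example draft are my placeholders, not part of the prompt. The same wrapper works unchanged for the other two gates.

```python
# Minimal sketch: run a gate prompt against a draft problem statement.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY set in the
# environment. The model name and the example draft are placeholders.
from openai import OpenAI

CLINICAL_DIAGNOSTICIAN = """\
Act as a Clinical Diagnostician (specialising in UX Research).
Your goal is to diagnose whether my problem definition and desired outcome
are structurally well-formed before discussing execution.
(Paste the full Clinical Mandate from above here.)
"""

def run_gate(gate_prompt: str, draft: str, model: str = "gpt-4o") -> str:
    """Send a gate prompt as the system message and the draft under audit
    as the user message; return the model's diagnosis verbatim."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": gate_prompt},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "Users churn because onboarding is confusing. Desired outcome: less churn."
    print(run_gate(CLINICAL_DIAGNOSTICIAN, draft))
```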


Prompt 2: The Interpretive Boundary Check

Gate: Where does observation end and interpretation begin?

Even when problems are well framed, a second failure mode appears quietly: interpretation disguises itself as fact.

Researchers observe behaviour. Then, often without noticing, they explain it.

This prompt enforces epistemic discipline. It makes the boundary between what was observed and what was inferred explicit. It does not ask for better insights. It asks for cleaner thinking.

I use it to ask a simple question:

Where am I no longer listening, but explaining?

The Interpretive Boundary Check (copy and use)

ROLE
Act as an Interpretive Auditor (specialising in UX Research).
Your goal is to diagnose where my analysis moves from observation to interpretation.

THE INTERPRETIVE MANDATE
• NO THEORY BUILDING
Do not propose new explanations.
Analyse language, inference, and meaning attribution only.
• CLASSIFICATION
Classify statements as:
OBSERVATION, INTERPRETATION, or INFERENCE STACK
(interpretation built on prior interpretation).
• INTERPRETIVE LOAD AUDIT
Flag phrases that compress uncertainty or imply intent without evidence.
• ALTERNATIVE READINGS
For each interpretation, list at least one plausible alternative explanation.
If none are acknowledged, mark: SINGLE-TRACK INTERPRETATION.
Required line:
What additional evidence would be required to justify this interpretation over its alternatives?

 

Prompt 3: The Research Scope Gate

Gate: What are we deliberately not learning yet?

The third failure mode is operational rather than epistemic: teams attempt to research everything.

This prompt exists to impose limits. It does not optimise research plans. It narrows them. It forces clarity about what decision the research is meant to inform, and what uncertainty the team is explicitly choosing to tolerate.

I use it to ask one question:

Is this research scoped to a real decision, at the right level?

The Research Scope Gate (copy and use)

ROLE
Act as a Research Scope Diagnostician.
Your goal is to diagnose whether the proposed scope is coherent and decision-aligned.

THE SCOPE MANDATE
• NO METHOD DESIGN
Do not suggest methods.
Analyse scope and decision linkage only.
• DECISION ANCHOR CHECK
Extract:
a) The primary decision
b) Who makes it
c) When it must be made
If missing, mark: DECISION ANCHOR NOT STATED.
• TRACEABILITY
For each research question, assess whether answering it would materially influence the stated decision.
If not, mark: LOW DECISION RELEVANCE.
• EXCLUSION CLARITY
Identify scope creep or “nice-to-know” questions framed as essential.
Required line:
What questions are explicitly out of scope, and what uncertainty are we choosing to tolerate?


How the gates work together

They form a closed-loop diagnostic sequence.
If a gate fails, the work pauses or loops back. Progress is conditional, not linear.
1. Clinical Diagnostician → Is the problem well-formed?
2. Interpretive Boundary Check → Are we observing or explaining?
3. Research Scope Gate → Is this research aligned to a real decision?

If any gate fails, the work does not progress.
That is not a limitation. That is the design.
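
To make that control flow concrete, here is a minimal sketch of the sequence. The three gate functions are toy stand-ins: in practice each would send its prompt to an LLM (for example via the run_gate wrapper sketched earlier) and parse the diagnosis for failure markers such as NOT WELL-FORMED, SINGLE-TRACK INTERPRETATION, or DECISION ANCHOR NOT STATED.

```python
# Sketch of the conditional gate sequence. The three checks below are
# stand-ins: a real gate would call an LLM with its prompt and parse the
# returned diagnosis for failure markers, not run these toy string tests.
from typing import Callable

Gate = Callable[[str], bool]  # returns True only if the draft passes

def clinical_diagnostician(draft: str) -> bool:
    return "desired outcome:" in draft.lower()  # stand-in check

def interpretive_boundary_check(draft: str) -> bool:
    return "observed:" in draft.lower()  # stand-in check

def research_scope_gate(draft: str) -> bool:
    return "decision:" in draft.lower()  # stand-in check

GATES: list[tuple[str, Gate]] = [
    ("Clinical Diagnostician", clinical_diagnostician),
    ("Interpretive Boundary Check", interpretive_boundary_check),
    ("Research Scope Gate", research_scope_gate),
]

def run_sequence(draft: str) -> bool:
    """Run the gates in order; stop at the first failure so the work
    loops back instead of progressing. Passing is conditional: the
    sequence is re-run whenever new evidence or scope pressure arrives."""
    for name, gate in GATES:
        if not gate(draft):
            print(f"{name}: FAILED. Revise and re-enter at this gate.")
            return False
        print(f"{name}: passed (for now).")
    return True
```

Whether the parsing is automated or done by eye, the design choice is the same: a failed gate returns the work to that gate, not to the next step.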

 

What these prompts are, and are not

These prompts are intentionally uncomfortable. They audit the structure of thinking, not the truth of the data.
• They do not validate reality.
If you feed them a polished narrative designed to please a stakeholder, they will certify a fantasy. They cannot see users. They can only see logic.
• They mitigate risk, they do not remove it.
Passing a gate does not mean you have an insight. It means your thinking is coherent enough to begin looking for one.
• They convert speed into friction.
In a context where speed is cheap and certainty is performative, these prompts are a necessary speed bump.

They reduce self-deception before it becomes expensive.

If we ask for answers, we get answers.
If we ask for diagnosis, we get resistance.

In UX research, resistance is often more valuable than speed.