The Self-Referential Loop

Before we get to metaphors, it helps to ask a practical question: how do we avoid self-referential loops in UX work, whether we are talking with users or prompting AI? The danger is the same in both cases: answers that circle back on themselves, giving the illusion of progress while nothing new is learned.

A few inputs can help break the loop:

  • Vary your questions. In usability tests, do not always ask “Was that easy?” Try “What would you do next?” or “What slowed you down?” In AI prompts, ask “Why might this design succeed, and why might it fail?” to invite critique as well as confirmation.
  • Encourage contrast. With participants, compare two flows instead of rating one. With LLMs, ask for “three different explanations and one possible outlier.” Contrast pulls the answer outward.
  • Follow up carefully. If a user says “I like it,” ask “What part?” or “Was anything missing?” If the model repeats a phrase, prompt: “Where are you circling back to yourself?” or “What new angle have we not covered?”
  • Rotate perspectives. In research, ask how a first-time user and a returning user might differ. In AI, shift frames: “How would a stakeholder see this?” versus “How would a competitor frame it?”
  • Anchor in evidence. For humans, triangulate with numbers and stories. For AI, push outward with “Give me a concrete example from practice or literature,” not just a generic statement.

With these inputs, loops can be broken before they harden.

 

Expansion versus Collapse

The golden ratio is often used as a symbol of beauty and growth. Its spiral expands forever, always outward, always balanced. But what happens when the movement goes the other way? Instead of expansion, what if the spiral folds back on itself, repeating the same thing? This is the self-referential loop.

The golden ratio spiral shows infinity as something generous. Each turn grows larger, and each step reveals something new but still connected. The self-referential loop shows infinity as something closed. Each turn brings us back to what was already said. Instead of widening our view, it makes it smaller. The lesson is simple: not all infinities are the same. Some open up, others close in.

Umberto Eco helps explain this. In The Open Work (1962), he described books and artworks that stay unfinished on purpose, so that readers and viewers can add their own meaning. The golden ratio spiral is like that: open, growing, never complete. The self-referential loop is the opposite: closed, repeating, not allowing anything from outside to enter.

The Semiotic Trap

Mathematicians such as Cantor showed that infinity can take different forms. Semiotics, the study of signs, shows another difference: signs can point outward to the world, or they can point inward to themselves.

Eco described this difference using the dictionary and the encyclopaedia. A dictionary can fall into a loop. For example:
  • “Truth” → “Fact”
  • “Fact” → “Truth”

The circle closes, with no way out. That is a self-referential loop. An encyclopaedia works differently. Instead of circling, it connects ideas outward: “truth” might link to law, science, philosophy, or religion. This keeps meaning alive.
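Eco’s distinction can be sketched in code. The following is a minimal, invented illustration (the term graph and the single-successor links are my own simplification; real dictionaries define a word with many words): follow each definition to the next term and report whether the chain closes on itself, as in the dictionary, or escapes outward, as in the encyclopaedia.

```python
def find_loop(definitions, start):
    """Follow definitions from `start`; return the path if it closes into a loop."""
    path = [start]
    seen = {start}
    current = start
    while current in definitions:
        current = definitions[current]
        if current in seen:
            return path + [current]  # the circle closes: a self-referential loop
        seen.add(current)
        path.append(current)
    return None  # the chain points outward and escapes: no loop

# Toy "dictionary": each term is defined only by another term in the set.
dictionary = {"truth": "fact", "fact": "truth"}
# Toy "encyclopaedia": terms link outward to wider domains.
encyclopaedia = {"truth": "law", "law": "philosophy"}

print(find_loop(dictionary, "truth"))     # ['truth', 'fact', 'truth']
print(find_loop(encyclopaedia, "truth"))  # None
```

The closed circle is detected the moment a term reappears in its own chain of definitions; the encyclopaedic chain simply runs out into the world.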

Large language models risk falling into the dictionary model at its worst, circling around the same definitions or references. In The Limits of Interpretation (1990), Eco warned against this kind of empty overinterpretation, where signs only chase each other instead of reaching reality.

Contexts of the Self-Referential Loop

The loop is not only a problem for AI. We can see it in many parts of life:

  • Mathematics: A student says, “I know 10 – 5 = 5, because 5 + 5 = 10.” Then, when asked why 5 + 5 = 10, they answer, “Because 10 – 5 = 5.” The reasoning circles back on itself. Nothing is really explained.
  • Media: A rumour starts on Twitter, gets quoted in a blog, then reported in the news. The story seems stronger, but all sources point back to the first tweet.
  • UX Research: A company asks customers only about speed at checkout. Customers answer about speed. The company concludes speed is the only thing that matters.
  • Everyday Life: Someone says, “Trust me, because I always say I can be trusted.” The claim supports itself, nothing more.

Each example shows the same trap: the loop looks like movement, but it never brings in anything new.
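The media example above can be made concrete. As a small sketch (the sources and citation links are invented, and the chains are assumed to be acyclic), trace each report back to whatever it ultimately cites; if every chain ends at the same origin, the apparent corroboration is circular.

```python
def trace_root(cites, source):
    """Follow citation links until a source that cites nothing else."""
    while source in cites:
        source = cites[source]
    return source

# Hypothetical provenance map: each report and what it cites.
cites = {
    "news article": "blog post",
    "blog post": "tweet",
    "radio segment": "news article",
}

roots = {trace_root(cites, s) for s in cites}
print(roots)  # {'tweet'}: every "independent" source is the same first tweet
```

Three outlets repeating one tweet look like three sources, but the set of roots has size one, which is exactly the loop masquerading as movement.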

Implications for Research

For researchers, this difference matters. The golden ratio spiral is a good metaphor for discovery, where each turn adds more. The self-referential loop warns us of closure, where repetition hides as insight.

Eco’s Kant and the Platypus (1997) offers a useful reminder. When the platypus was first discovered, it did not fit existing categories. Scientists had to adjust. If they had only circled within their old categories, they would have missed the truth. In research, the anomaly, the unexpected, is what breaks the loop.

Recent AI studies echo this point. Shumailov et al. (2024) showed that language models trained on their own outputs experience model collapse – a degenerative loop where the system loses touch with reality. Kommers et al. (2025) proposed computational hermeneutics as a framework for evaluating AI, arguing that meaning must emerge in context and dialogue. Both works highlight that loops without outside anchors erode meaning.
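The collapse dynamic can be shown with a toy simulation, far simpler than the setup in Shumailov et al.: a “model” that merely fits a Gaussian to its data, then trains the next generation on its own samples. The numbers (sample size, generation count) are arbitrary choices for illustration, not values from the paper.

```python
import random
import statistics

def next_generation(samples, rng, n=10):
    """Fit a Gaussian to the samples, then draw new data from the fit."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(10)]  # the only "real" data
start_spread = statistics.pstdev(data)

for generation in range(500):  # each generation trains only on the last one
    data = next_generation(data, rng)

end_spread = statistics.pstdev(data)
print(start_spread, end_spread)  # the spread shrinks: diversity drains away
```

Each refit loses a little variance to sampling noise and never regains it, so the distribution narrows toward a point: a numerical version of the loop that “loses touch with reality.”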

Without triangulation—using more than one method or viewpoint—the loop can trick us into thinking we have depth. What matters is not only the tools we use, but the ability to step outside the loop when it closes in.

Reflection

From my side, I see the self-referential loop as both a warning and a mirror. It warns us how easily movement is confused with progress, and repetition with growth. And it mirrors our own habits: we too can circle inside familiar categories instead of reaching outward. Eco’s semiotics gives us language for this choice: the golden ratio as an open work (infinity as growth) and the loop as the dictionary model (infinity as stasis). For research, the task is clear: notice when the spiral is opening, and when it is only turning back on itself.


References
Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., & Anderson, R. (2024). AI models collapse when trained on recursively generated data. Nature.
Kommers, C., et al. (2025). Evaluating Generative AI as a Cultural Technology. SSRN Preprint.
Eco, U. (1962). The Open Work. Harvard University Press.
Eco, U. (1976). A Theory of Semiotics. Indiana University Press.
Eco, U. (1990). The Limits of Interpretation. Indiana University Press.
Eco, U. (1997). Kant and the Platypus. Harcourt.
Hofstadter, D. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books.
Pattee, H. H. (2006). The Physics and Metaphysics of Biosemiotics. BioSystems. Elsevier.
Corballis, M. C. (2011). The Recursive Mind: The Origins of Human Language, Thought, and Civilization. Princeton University Press.



Disclaimer: Articles are developed with the support of AI tools. I review and edit all work, and share this openly so readers can see how the writing is made. Peer feedback to correct or improve content is welcome.