by Daniel Griffin

What do you mean by 'real'?

Recently, while driving family to the airport, we got stuck in traffic in the express lanes in downtown Seattle. As we crawled south, we had a tour of a wide range of graffiti (including street art, tagging, and warnings of various sorts). A family member, introduced to Hypandra just a couple of days before, wondered aloud what would drive someone to paint in such a seemingly dangerous location. Then, with an excited "let's ask Hypandra!," she talked aloud as she typed in her question: "What is the real motivation…" And then, "Oh, I know Hypandra is going to ask what I mean by 'real' right away. It's already influencing how I think about my questions."

Sure enough, that was one of the first aspects Hypandra flagged, along with the likelihood that multiple, fluctuating motivations shape each street artist differently. Searching elsewhere for "the real motivation" for something might return a satisfying answer, but will that answer just extinguish your curiosity or help it grow?

We want people to learn to think more clearly about their questions from the start. At Hypandra, we will help people build better habits when searching or prompting. Most of us can recognize that oversimplifications and lopsided phrasings are likely to return narrow, biased results. Yet we still so often just google it, bing it, plex it, chat it... How can we build what we know about search into how we actually search?

Hypandra's reflections are designed to challenge the framing of your questions. In part, we aim to identify unstated assumptions, root out ambiguity, and clarify scope. This can feel a little pedantic and maybe even harsh or unnecessary, but users quickly start catching themselves after just a few interactions.

That's the point. The friction introduced by Hypandra isn't designed to slow you down; it's designed to make you think.


Researchers Jutta Haider and Malte Rödl (2023) have labelled some of these problematic queries "search terms that confirm preconceptions."[1] Earlier, Francesca Tripodi (2018) found in her research into search practices that "simple syntax differences can create and reinforce ideological biases."[2] Such querying can amount to asking only "why am I right about X?" and never asking about different perspectives or deeper background on a subject.

In 2023, Gagan Ghotra, an SEO consultant, shared how slight differences in query phrasing can return very different results:

  • "Should I take a bath if I have a fever?"
  • "Should I not take a bath if I have a fever?"

Also in 2023, Zachary Lipton, CMU professor and cofounder and CTO of Abridge, described this aspect of LLMs with a technical term from mathematics: "Prompts are not Lipschitz. There are no 'small' changes to prompts. Seemingly minor tweaks can yield shocking jolts in model behavior." See more on Wikipedia: Lipschitz continuity.
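For readers who want the intuition behind the term: a function is Lipschitz continuous when a single constant bounds how much its output can change relative to a change in its input. A minimal sketch of the standard definition (the symbols f, K, x, and y here are ours for illustration, not Lipton's):

\[
|f(x) - f(y)| \le K \, |x - y| \quad \text{for all } x, y \text{ and some constant } K \ge 0
\]

Saying prompts are "not Lipschitz" means no such bound exists: an arbitrarily small edit to a prompt can produce an arbitrarily large shift in a model's response.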

Search and AI companies have attempted to address these issues, and of course asking a person slightly different questions might also elicit very different responses.

Imagine or explore different responses to these queries:

  • Why would someone graffiti on the freeway?
  • What do people who graffiti on freeways say motivates them?
  • What mix of motivations do researchers find in people who graffiti on freeways?

[1] Haider, J., & Rödl, M. (2023). Google Search and the creation of ignorance: The case of the climate crisis. Big Data & Society, 10(1). https://doi.org/10.1177/20539517231158997

[2] Tripodi, F. (2018). Searching for alternative facts: Analyzing scriptural inference in conservative news practices. Data & Society. https://datasociety.net/output/searching-for-alternative-facts/