by Daniel Griffin

How Do We Make Search Work?

This is the first post in a series exploring my dissertation. See the second post: We Admit We Search, but We Don’t Show How.


Questions Worth Asking

What questions do we ask when we're stuck on a problem at work? How do we know what to search for? Who taught us how to search? What does it mean when everyone says "just Google it" but nobody talks about how they Google it? When or where or for whom does search work?

While much of the academic world has focused on where and how web search fails—algorithmic bias, automation bias, commercial pressures, anti-competitive practices, data voids, filter bubbles—I became curious about a different question: When does web search work?

Learning from Successful Searchers

I turned to data engineers. These are the people building the infrastructure that powers modern data systems. Their work is highly technical, constantly changing, and intensely competitive. And they search the web. All the time. They weren't perfect searchers with special search literacy superpowers, but somehow web search had become essential to their work.

Through interviews with data engineers and digital ethnography (extensive lurking on Twitter, Stack Overflow, Reddit, and Hacker News), I examined how data engineers actually use web search. What I discovered challenged the conventional wisdom that says people need to understand how search engines work to search successfully.

The data engineers I talked to didn't know anything special about Google's algorithms. And it didn't seem to matter.

Instead, their successful searching depended on the social and technical context surrounding their work—community norms, workplace practices, documentation patterns, and the careful process of packaging and reformulating questions when searches failed. Search worked not because of something they knew, but because it was woven into their entire work environment.

I also found that this successful searching was happening in a surprisingly solitary way. Data engineers described searching as an individual responsibility, a practice they mostly kept to themselves. This protected the time, attention, and reputation needed to learn on the fly—a core expectation of their work. While this privacy is functional, it has a clear downside: it creates barriers for newcomers and limits shared learning across teams.

While engineers kept most of their specific search tactics private, they normalized the practice through what I call "search confessions"—open, often humorous acknowledgements of their total reliance on Google. This social practice legitimated searching, but it didn't teach anyone how to do it better. So how could others learn? How could searching be made to work outside the contexts where it thrived?

From Research to Design

These findings shaped my initial reactions to ChatGPT and to early attempts by others to build LLM-powered search engines. Now they form the basis of my thinking about the design of Hypandra. I saw how data engineers succeeded by:

  • Improving queries through professional context. Their ability to reformulate questions wasn't just trial and error. They were making use of community norms, documentation patterns, and shared technical language to refine their queries and better evaluate the results.
  • Using colleagues for repair and validation. When they did turn to a colleague after a failed search, it was more than just asking for an answer. It was a key moment of "repair" that also served to validate and verify their approach, share broader knowledge, and legitimate their own expertise within the team.
  • Embracing 'learning on the fly' as the work. Their workflow assumed that knowledge would be incomplete and that problems would be novel. Searching wasn't an interruption of their work; it was the work of flexibly learning what they needed to move forward.

But I also saw what was missing: explicit support for the questioning process itself.

Hypandra exists to make that process visible, shareable, and generative. Instead of hiding how we refine our questions or that we have questions, we can celebrate it. Instead of rushing past curiosity to get answers, we can slow down and explore what we're really asking.

About This Series

Over the coming weeks, I'll share pieces of this research through blog posts exploring the core chapters of my dissertation:

  • Admitting. Is it useful to admit that we search? How might changing how we talk about searching or using AI be good for us individually and collectively? How might we better show how we use search or AI tools?
  • Extending. Where is the knowledge that makes search work for data engineers? For AI? Can we apply those lessons to searches elsewhere? How do we come up with queries? How do we even think to search? How do we avoid automation bias? Might warnings about AI be wrong? Do we need to know what a token is?
  • Repairing. What happens when web searches fail? How do data engineers navigate failed searches? Who failed? Why don't people share more about their searches? What is the value of friction in search? Why might we still want to protect privacy in search?
  • Owning. Why is search, so heavily used and so openly admitted in work practices, still so solitary and secretive? Is it just fear of being thought ignorant? Is this changing with LLMs? Are searchers still the "uncertainty sinks"? What might the "technocratization of search" look like?

Each post will connect the research back to Hypandra's design and focus—showing how understanding how search is made to "work" for data engineers might help others. Maybe we can build tools and practices that help us share and grow everyone's curiosity rather than silo and extinguish it?

The full dissertation is available here, but these posts will make the journey more digestible and more useful for anyone thinking about search, learning, and curiosity.
