We Admit We Search, but We Don’t Show How
This is the second post in a series exploring my dissertation. See the first post: How Do We Make Search Work?
"It's kind of like 90% of my job to just look things up," Amar said, laughing as though revealing an embarrassing secret.
"Probably 90% of my job is Googling things," Christina chuckled.
When I interviewed data engineers about how they use web search, they all made similar confessions—hyperbolic, self-deprecating, yet somehow proud. I search constantly. We all do.
But that's only half the story: they would admit to searching all the time, yet they wouldn't normally show each other how they searched. Christina pulled searches up on a separate screen during pair programming. Ross did the same. Multiple people mentioned turning off screen sharing to search, then turning it back on once they found something.
Counterexamples
- Sophie Koonin twice shared her week's searches in blog posts
- Kevin Murphy presented and wrote up his searches in 'Browser History Confessional: Searching My Recent Searches' (2022)
- Eric Leung shared his searches in 'Everything I googled in a week as a professional data scientist' (2022)
Search Confessions
I call these admissions "search confessions"—the jokes, memes, and remarks data engineers make to each other about their reliance on web search.
These confessions serve a purpose. They make web search legitimate as professional practice—they establish it as acceptable, normal, and expected. Newcomers learn that it's okay to rely on search by hearing these confessions.
But the confessions don't teach anyone how to search (other than to keep it hidden). They crudely legitimate the practice without making it visible.
Still Sheepish
We see the same pattern today with AI tools. People admit to using ChatGPT constantly—often in hushed tones or with disclaimers. "I use it but I verify everything." "It's just for brainstorming." "I would never use it for real work." The admission comes with qualification.
The use of these tools is often hidden. Most prompts aren't shared. The process of reformulation stays private. We're repeating the same pattern: admitting reliance while hiding practice.
Counterexamples
- Andrej Karpathy, former Director of AI at Tesla and a founding member of OpenAI, has a 2-hour-long video: 'How I use LLMs'
- Simon Willison, co-creator of the Django web framework and creator of Datasette, regularly shares his prompts on his blog, in a series called 'How I use LLMs and ChatGPT'
- Mitchell Hashimoto, co-founder of HashiCorp and creator of tools like Terraform and Ghostty, shared a full transcript of building a new feature with LLM coding agents: 'Vibing a Non-Trivial Ghostty Feature'
These are individuals who have already been recognized for their expertise and are in a position to share their practices without fear of being judged.
Why do we hide the how? Perhaps showing our actual queries, with their false starts, basic questions, and multiple reformulations, feels more vulnerable than a general admission. A confession signals participation in a practice we all share, or even membership in an in-group of adventurous early adopters. In contrast, a raw search history might seem to say: "I didn't know X" or "I thought Y would work but it didn't."
Is this useful?[1] What if we changed how we talk about using these tools? How might we better show how we use search or AI tools?
Hypandra's Approach
- We want to make it easier to share our questions and queries, both technically and socially.
  - Technically: You can share links to standalone questions, letting the other person choose to explore them, build from them, or launch them directly to external tools. Like this: Does curiosity really kill the cat?
  - Socially: In Hypandra we work to help you share your question in the explicit context of being about curiosity, sidestepping (or perhaps providing a frame for addressing) questions of ignorance or competence. It is a chance to shift the conversation from "just Google it" to "let's explore this together."
- We want to make it easier to share reformulations.
  - We are building out different ways to share your path: a trace or trail of your work and learning.
  - Our classroom features include a version history that shows the evolution of a question. (Reach out for a preview!)
- We want to let you talk about your questions, searches, and AI chats with others.
  - We are developing features to share question sets (and reflections) with others, including commenting together and then using the combined context to generate further questions.
  - Try out our workspaces feature, built from feedback from early users.
What do you want to share? What do you want to show others about your searches? What do you want to see from how others search or use AI tools?
If we continue to rely on search and explore the use of new AI tools, we should ensure we learn from each other about how and where to use them well. And that requires admitting not just that we use them, but how we think through what to ask.
Legitimate Peripheral Participation
Researchers Lave and Wenger studied apprenticeships and developed the concept of "legitimate peripheral participation"—learning by doing in community (initially at the edges). They found that newcomers learn by starting on the periphery and gradually moving toward full participation. The key insight? Learning can't be separated from the social context where it happens, and starting on the edges gives you valuable perspectives.[2] This was a core analytical tool for my dissertation research.
Search confessions legitimate the practice but hide the expertise. By keeping the actual work invisible, data engineers prevent each other—newcomers and experienced practitioners alike—from learning better search practices. Legitimacy without visibility means everyone stays stuck figuring it out alone.
