
Handoffs: What Changes When AI Enters the Writing Process

Four things shift when AI is part of how you write — understanding, verification, transparency, and integrity. Here's what that means and what to do about it.


Writing norms — around attribution, integrity, verification, transparency — aren't arbitrary customs. They carry weight because they serve specific values: honest inquiry, reliable knowledge, fair recognition, the ability to build on each other's work (Nissenbaum, 2002). When a new technology changes how we write, the question isn't just "what are the new rules?" It's whether the practices we adopt still serve those values — or whether we've let the convenience of the tool dictate the shape of the norm.

Borrowing an analytic frame from Mulligan & Nissenbaum (2020), we call these shifts handoffs: moments where part of the writing process moves from you to the machine. Not all handoffs are problems. Some genuinely help. The question, for each one, is what changed, and whether your process accounts for the change.

For each handoff below, we describe what changes, why it matters, and what you can do about it. If you're looking for concrete tools to help, see our Tools page.


Understanding

What changes: Writing is thinking. The act of writing is one way we work through what we actually believe, discover gaps in our understanding, figure out who we're talking to and what they need to hear. When AI takes over parts of that process, it's not just that we end up with words we didn't write — it's that the thinking those words would previously have required may never have happened. We can have polished prose on a topic we haven't actually thought through.

Why it matters: Understanding here isn't just "can we explain what's in the text." It's whether we've done the work that writing is supposed to do: engaging with the subject, wrestling with our own position, considering our audience, making choices about what matters and what doesn't. If AI made those choices for us, our reader is encountering something that looks like our thinking but isn't — and we've lost the understanding that the work of writing was supposed to give us.

This is true even when you change the words. Paraphrasing AI output is not the same as writing. It skips the struggle of figuring out what you're trying to say — which is where understanding actually forms. If you used AI for the draft, you need additional process to make sure the understanding happened: reviewing, questioning, rewriting from your own knowledge.

What to do about it:

  • Start with your own draft. Write a messy first version before involving AI. This doesn't guarantee the ideas are genuinely yours, but it forces you to grapple with them — to struggle with what you're trying to say before handing any of it off.
  • Self-quiz before publishing. Can you answer questions about the content without looking? If not, you don't understand it well enough to claim it.
  • Set the drafts aside and write again. For some types of writing, after all the brainstorming and drafting, set everything aside and write the piece again. Not from scratch, but not just editing AI suggestions either. This is one way to test whether you've actually internalized the ideas or are just rearranging someone else's words.

Verification

What changes: AI models sound confident even when they're wrong. When AI assists with research, generates claims, or states facts, your verification burden increases — not decreases. The output looks authoritative whether or not it's accurate.

Why it matters: A hallucinated citation looks exactly like a real one. A confidently stated falsehood reads the same as a verified fact. If you used AI anywhere in the process that shaped the claims in your final text, those claims need checking — even if AI didn't write the final sentences.

This includes fake citations — AI can generate plausible-looking references to papers that don't exist. Never include a citation you haven't verified from the original source.

What to do about it:

  • Check factual claims against sources. Don't trust the model's confidence. Verify independently.
  • Watch for AI involvement you might not think of as "writing." If you used AI for research or brainstorming, it shaped the claims you're making. The fact that you wrote the prose yourself doesn't mean the verification question goes away.
  • Ask: "Could I point someone to the source for this?" If not, either find the source or cut the claim.
  • Flag claims even before checking them. The important thing is not that verification is perfect but that it's traceable. Mark which claims came from or were shaped by AI so you can see where the verification burden sits; one lightweight way to do this is sketched below. Depending on the type of writing, you may want to expose this to readers too.
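
If you want to make that flagging mechanical, here is a minimal sketch of one way to do it. The marker convention is our own illustration, not a standard: while drafting, append an [ai] tag to any sentence that came from or was shaped by AI, and rewrite the tag as [ai:checked] once you've verified the claim against its original source. The script below (hypothetical, Python standard library only) lists whatever is still unchecked.

    # verify_flags.py -- a sketch of traceable claim-flagging.
    # Convention (illustrative, not a standard): tag AI-shaped sentences
    # with [ai] while drafting; change the tag to [ai:checked] once the
    # claim has been verified against its original source.
    import re
    import sys
    from pathlib import Path

    # Matches both the unchecked marker "[ai]" and the checked "[ai:checked]".
    FLAG = re.compile(r"\[ai(:checked)?\]")

    def report(path: Path) -> None:
        unchecked, checked = [], []
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            for match in FLAG.finditer(line):
                # Group 1 is present only for "[ai:checked]".
                bucket = checked if match.group(1) else unchecked
                bucket.append((lineno, line.strip()))
        print(f"{path}: {len(checked)} verified, {len(unchecked)} still to check")
        for lineno, text in unchecked:
            print(f"  line {lineno}: {text}")

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            report(Path(name))

Run it over a draft (python verify_flags.py draft.md) before publishing; an empty "still to check" list means every flagged claim has at least been accounted for, even if readers never see the markers.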

Transparency

What changes: When AI is part of your process, your reader can't tell from the output alone what role it played. They don't know whether you wrote every word, edited AI drafts, or submitted generated text with minimal review. The process becomes invisible.

Why it matters: Different contexts have different expectations. An instructor expects student work to reflect the student's learning. A reader trusts a bylined author to stand behind their claims. A collaborator needs to know which sections have been human-verified. When the process is invisible, your reader can't calibrate their trust.

What to do about it:

  • Be transparent where it matters. Disclose AI use where it directly shaped the text and where your audience has a reasonable expectation of knowing.
  • Match disclosure to the context. A blog post, a student essay, and an internal doc have different stakes and different norms. There's no universal rule — but there is a question: what does your reader deserve to know?
  • Don't disclaim unnecessarily. If AI wasn't involved in shaping the text or claims, you don't need a writing-process disclaimer. Over-disclosure becomes noise.

See our Disclaimer Templates for context-specific language you can adapt.


Integrity

Writing has never been a solitary act. We always build on others — on prior work, on conversations, on the thinking of people who came before us. Using AI doesn't change that. What it changes is the scale and invisibility of the contribution, and the ease with which we can put our name on work we haven't fully engaged with.

When we put our name on something, we're not claiming we created it from nothing. We're claiming it meets the standards that authorship norms exist to protect: that the inquiry was honest, that the claims are reliable, that recognition is fair, that others can build on what we've produced (Nissenbaum, 2002). Those norms carry weight not because they're customs but because they serve real values — values like truth-seeking, open exchange, and the ability to trust each other's work.

The test, when AI enters the picture, is whether our practices still serve those values — or whether we've let the convenience of the tool reshape the norm. When we're called out for inappropriate language, a hallucinated citation, a false claim, or a confusing argument — "AI wrote it" is not an acceptable excuse. It's an admission that we put our name on something without doing the work those norms require.

This is where the handoffs above converge. Understanding asks whether we've engaged with the ideas. Verification asks whether we've checked the claims. Transparency asks whether our reader can see the process. Integrity is what holds those together: the commitment that when our name is on something, the values authorship was designed to protect are actually being served.


Questions That Clarify

If someone questions your work, or if you're unsure whether your process was sound, these questions can help:

Questions for yourself:

  • Can I explain these ideas without looking at the text?
  • Did I verify the factual claims independently?
  • Could I defend this work in conversation?
  • Was I transparent about my process where expected?
  • Do the values that authorship norms exist to protect — honest inquiry, reliable knowledge, fair recognition — still hold in what I produced?

Questions to ask in return:

  • What specifically concerns you about this work?
  • What would convince you I understand this material?
  • Would you like me to walk you through my research and thinking?

The goal isn't to prove we didn't use AI. The goal is to show that the values writing is supposed to serve are actually being served.

At Hypandra, we're interested in something more: can we serve these values better with AI? Can our writing process make us more curious — not just while working with a tool that augments our practice, but even after we set the phone down or step away from our computer? Can we build processes that leave us better off?



How we wrote this: These pages were drafted and revised with AI assistance (Claude Code). We verified all claims and citations independently, and we stand behind the content. See our Disclaimer Templates for language you can adapt for your own work.
