Writing with AI responsibly takes more than good intentions — it helps to have tools that support the process. Some of these are ours. Some are built by others. We include both because the goal is helping you write well, not selling you something.
Our tools
Write (work in progress) -- Our writing tool, currently in development. It will include quizzes that test whether you've internalized the material, and a rewrite mode that asks you to set a draft aside and write again from understanding. A provocation for the Understanding handoff.
Sparrow -- Pushes productive difficulty into search itself. The process of finding information builds understanding rather than bypassing it. When you use Sparrow for research, you arrive at the writing stage having done more of the thinking.
Probe -- Compares how different AI models respond to the same question. Useful for the Verification handoff: if models disagree, that's a signal the claim is worth checking independently.
Gadfly -- Raises the questions you should be asking about your own work. Designed to push back on your thinking before your reader does.
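The disagreement signal Probe relies on is simple to sketch. Everything below is illustrative, not Probe's actual implementation: the hard-coded answers stand in for real API calls, and the normalization is deliberately crude.

```python
# A toy version of the disagreement-as-signal idea: ask several models
# the same question and flag the claim when their answers diverge.
# The answers here are hard-coded stand-ins for real provider API calls.

def normalize(answer: str) -> str:
    """Crude normalization so trivially different phrasings compare equal."""
    return " ".join(answer.lower().strip().rstrip(".").split())

def models_disagree(answers: dict[str, str]) -> bool:
    """True when the models do not all give the same (normalized) answer."""
    return len({normalize(a) for a in answers.values()}) > 1

# Hypothetical responses from three models to the same factual question.
answers = {
    "model-a": "The Treaty of Westphalia was signed in 1648.",
    "model-b": "The Treaty of Westphalia was signed in 1648.",
    "model-c": "The Treaty of Westphalia was signed in 1658.",
}

if models_disagree(answers):
    print("Models disagree: verify this claim independently.")
```

Agreement between models is not proof of truth (they can share training-data errors), which is why disagreement is framed as a signal to check, not a verdict.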
External tools worth looking at
Tooling in this area is developing rapidly; treat these as starting points as you shape your own process.
Jina AI's Grounding API -- Returns factuality scores with source references. Useful for checking whether AI-generated claims have grounding in published sources.
Exa's Hallucination Detector -- Extracts claims from text and searches for supporting or refuting sources. Good for systematic claim-by-claim verification.
GPTZero's Hallucination Check -- Focuses on citation verification. Checks whether the references in your text actually exist and say what you claim they say.
GradPilot -- Helps students navigate the tension where schools use imperfect detection tools to enforce their AI policies. Includes an AI disclosure generator and essay review with detection analysis. Relevant to both the Transparency and Integrity handoffs.
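The claim-by-claim workflow the verification tools above share can be sketched in a few lines. This is a toy under loud assumptions: the `SOURCES` list and the word-overlap "search" are stand-ins for real web retrieval, not any of these tools' actual methods.

```python
# Sketch of claim-by-claim verification: split a draft into candidate
# claims, look each one up, and report which have support. The "search"
# is a stub over a tiny local source list; real tools query the live web.
import re

SOURCES = [
    "Water boils at 100 degrees Celsius at sea level.",
    "The Pacific is the largest ocean on Earth.",
]

def extract_claims(text: str) -> list[str]:
    """Naive claim extraction: treat each sentence as one claim."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def find_support(claim: str) -> list[str]:
    """Stub search: return sources sharing enough words with the claim."""
    claim_words = set(claim.lower().split())
    return [s for s in SOURCES if len(claim_words & set(s.lower().split())) >= 4]

draft = "Water boils at 100 degrees Celsius at sea level. The moon is made of cheese."
for claim in extract_claims(draft):
    status = "supported" if find_support(claim) else "unverified, check manually"
    print(f"{status}: {claim}")
```

The useful output is the "unverified" bucket: claims with no located support are exactly the ones worth checking by hand before publishing.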
