Included with every Full Build
HAIP Security Review
Hypandra AI Principles and Security Review
A review that checks both the technical security of your AI system and whether it behaves in line with your values.
What is a HAIP Security Review?
Most security reviews check whether your app leaks data or has broken access controls. That matters — but it's not enough when AI is involved. HAIP stands for Hypandra AI Principles. A HAIP Security Review adds a layer: does the AI behave in ways that match your values? Can users see what it's doing? Can they push back when it's wrong? Does it help people think, or does it think for them? We check both.
What we review
Principles
- Can users see where AI is involved and where it breaks?
- Can they push back, correct, or override AI decisions?
- Does the system preserve thinking — or just hand people answers?
- Is the system transparent about where it's weakest?
- Are edge cases handled honestly, or papered over?
Security
- What data does the system collect, store, and send to third parties?
- Who can access what — and are those boundaries enforced?
- How does the system handle errors, bad input, and unexpected behavior?
- Are dependencies up to date and free of known vulnerabilities?
- Are API keys, credentials, and secrets properly secured?
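As one concrete illustration of the last check: credentials should come from the environment or a secrets manager, never from source code. A minimal sketch, assuming a hypothetical variable name `SERVICE_API_KEY`:

```python
import os

def load_api_key(env_var: str = "SERVICE_API_KEY") -> str:
    """Read a secret from the environment instead of hardcoding it.

    Fails loudly if the variable is missing, so a misconfigured
    deployment stops rather than running with an empty credential.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return key

# Illustration only: set the variable, then read it back.
os.environ["SERVICE_API_KEY"] = "example-not-a-real-key"
print(load_api_key())
```

A review would flag the opposite pattern — a key pasted directly into code or a config file checked into version control.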
Pick one question from either list. Can you answer it for your system right now? If not — that's what the review is for.
Who it's for
Anyone building or using AI systems — whether you're shipping a tool to external users, adopting AI internally for your team, or relying on third-party AI tools in your workflows. A HAIP Security Review applies equally to a chatbot you built for customers, a triage system your staff uses every day, or an external tool you're trusting with sensitive decisions. If AI is making or shaping decisions that affect people, the review asks: are those decisions ones you'd stand behind?
What you get
- Written report — Clear findings organized by area, designed to be revisited as your system evolves
- Severity ratings — Each finding rated so you know what to fix first and what can wait
- Concrete repair suggestions — Specific steps to address each finding, not vague recommendations
- Principles profile — Seamfulness, Contestability, and Productive Difficulty ratings, plus handoff analysis showing what changed when functions moved to AI and which values are at stake
Seamfulness
We look at where AI is involved and whether users can tell. If a handoff from human to machine happened silently, we flag it.
Contestability
We check whether users can question, correct, or give feedback on AI decisions. If the system says "take it or leave it," that's a finding.
Productive Difficulty
We ask whether useful thinking survived the move to AI. If deliberation or judgment got optimized away, we name what was lost.
Building something with AI?
Every Full Build includes a HAIP Security Review. Or book a conversation to talk through what a standalone review would look like for your project.
