POSITION PAPER

The Missing Pillar of AI Literacy

A complementary 5th pillar for the OECD/European Commission AI Literacy Framework

The goal is not better AI users. The goal is better thinkers in an AI-saturated world.

Trevan Hauck · Hypandra · January 2026

THE PROBLEM

It's not AI capability. It's human substitution.

AI literacy today focuses on using AI well. But there’s a deeper risk: the better AI gets, the less we practice thinking for ourselves.

When answers come easy, we stop doing the hard work that builds real understanding:

  • Cognitive offloading — letting AI think so we don’t have to
  • Deskilling — losing abilities we no longer practice
  • False confidence — mistaking smooth answers for real knowledge
  • Blind spots — missing errors because we’ve stopped looking

THE CONTEXT

What the current framework gets right—and what it misses.

The OECD’s AI Literacy Framework covers four areas: engaging with AI, creating with AI, managing AI, and designing AI. It’s solid.

But it focuses on using AI systems. It doesn’t protect the human skills that fade when AI is always available—curiosity, questioning, and knowing what you actually understand.

You can become “AI fluent” while losing the ability to think independently.

THE GAP

The skills that fade fastest.

The current framework can’t fully protect:

  • The drive to ask questions worth exploring
  • The skill of forming questions that lead somewhere
  • The habit of checking whether you truly understand
  • The instinct to test answers before trusting them
  • The discipline to work without AI when learning requires it

These aren’t extras. They’re the missing pillar.

THE PROPOSAL

Questioning AI—a fifth pillar.

We propose adding a fifth pillar: Questioning AI.

Not “how to get better answers from AI.” How to stay a capable thinker when answers are everywhere.

This means managing your own mind—not just the tool. Knowing what to delegate, what to keep, what to verify, and what you can explain on your own.

AI doesn’t care if you understand. It won’t protect your curiosity. That’s your job.

THE FRAMEWORK

Five competencies for the AI age.

  1. Generative Curiosity: The drive to wonder and question—not just retrieve answers.
  2. Question Craft: Shaping curiosity into precise, useful questions.
  3. Epistemic Self-Defense: Catching yourself when you feel like you understand but don’t.
  4. Testing AI Limits: Probing for errors, bias, and weak spots before you trust or act.
  5. Strategic Refusal: Knowing when not using AI is the smarter choice.

THE PRACTICE

How this actually gets taught.

You can’t teach this as a checklist. It’s a stance—habits practiced repeatedly:

  • Friction by design — pause to clarify what you’re asking before getting answers
  • Curiosity loops — notice, wonder, identify what you need to learn
  • Verification traces — show how you checked, not just what you concluded
  • Independence practice — try without AI first, then use it to refine
  • Failure rehearsal — look for how answers could break or mislead
  • Social calibration — share early, invite pushback, revise based on feedback

THE STAKES

The bottom line.

Current frameworks teach how to use AI. This pillar teaches when, why, and whether to use it—and how to stay sharp either way.

Without it, we risk mass deskilling: people who can operate AI fluently but can’t think without it.

Build curiosity and thinking habits first. Tool skills follow naturally.

The goal isn’t better AI users. It’s better thinkers in an AI-saturated world.
