AI Literacy Crosswalk

DRAFT — This page is a work in progress.

Hypandra exists to protect and promote curiosity.

Read Hypandra's AI Principles →

A lot of "AI literacy" today is either hype ("this changes everything!") or fear ("keep it away!"). We think both miss the point. The future isn't inevitable — it's shaped by the tools we build, the norms we tolerate, and the questions we keep asking.

This page does two things:

  1. It shows how Hypandra's AI principles line up with existing AI literacy work.
  2. It explains what we add: a practice-first approach that keeps curiosity human, responsibility explicit, and systems contestable, and that builds the curiosity to look for what could be better, watch out for problems, and know where different choices can be made.

Coming soon

Two perspectives on what most AI literacy commentary skips:


What "AI literacy" usually covers

Across education, research, and policy, AI literacy is often treated as a blend of:

  • understanding how AI works (enough to reason about capabilities and limits),
  • using AI tools effectively (without outsourcing judgment),
  • evaluating and creating with AI (not just consuming),
  • and accounting for ethics and societal impacts (not as a footnote, but as part of competence).

We like frameworks because they give shared language. We don't like it when they become checklists that people "complete" without changing how they think, build, or take responsibility.


Frameworks we learn from (and why)

We don't claim one "right" definition. We borrow from multiple traditions:

  • Long & Magerko (2020): A canonical synthesis of AI literacy competencies and mental models, widely cited in AI literacy discussions.
  • Ng et al. (2021): A clean four-part framing (know/understand, use/apply, evaluate/create, ethical issues) that shapes how we organize our crosswalk.
  • UNESCO (2024): Student and teacher competency frameworks organized into four dimensions (human-centred mindset, ethics of AI, AI techniques and applications, AI system design) that map naturally onto our ethics/harm/power and understanding/creating groupings.
  • AILit Framework (EC/OECD, review draft): Four domains (engage with AI, create with AI, manage AI, design AI) that strongly motivate separating use/apply from manage/govern as distinct literacy concerns.
  • AI4K12: The "Five Big Ideas," a strong conceptual scaffold for foundational understanding, especially Big Idea 5: Societal Impact.
  • NIST AI RMF 1.0: Structures risk management into govern, map, measure, and manage, and treats governance as cross-cutting. Useful grounding for why "managing and governing AI" belongs on an AI literacy page, not only on an enterprise policy page.
  • The Royal Society rapid review: Defines AI literacy along technological, practical, and human dimensions. We split the "practical" dimension into two (use/apply and evaluate/create) and separate manage/govern to make repair and contestability explicit.

Crosswalk: common AI literacy domains → Hypandra principles


Why these five headings?

AI literacy frameworks don't all use the same words, but they tend to cover the same territory: how AI works, how people use it, how to evaluate it, what can go wrong, and how to keep responsibility anchored over time.

We use these five headings as a translation layer—a compact vocabulary that lets us compare frameworks without pretending there's one "official" checklist. The wording is our synthesis, but it's grounded in patterns that show up repeatedly across research and policy frameworks (e.g., "know/understand," "use/apply," "evaluate/create," "ethical issues," plus explicit attention to "managing" and "governing" AI over time).

Each section below names the domain, explains what it includes in plain language, and then maps it to Hypandra's principles. Expand the accordion for the deeper Hypandra take.


1) Understanding AI (concepts, limits, mechanisms)

This domain is the "how it works (enough to reason about it)" layer: what AI can and can't do, where its behavior comes from, and what kinds of failures are predictable from how the system is built.

Different frameworks slice this differently—some emphasize foundational concepts (how learning works, how systems represent the world, what "confidence" is), while others emphasize practical mental models ("when should I distrust this output?"). We keep it broad on purpose: understanding is not trivia—it's the ability to explain why something might be right or wrong.


2) Using and applying AI (prompting, workflows, tool choice, iteration)

This domain is "AI in the middle of real work." It includes prompting and iteration, but it's bigger than prompting: choosing tools, deciding where AI fits in a workflow, recognizing when AI is the wrong tool, and developing habits that keep humans in the loop.

Many frameworks describe this as "use/apply" or "engage with AI in daily life." We treat it as a creative competency: using AI as a drafting partner without letting it set your goals or your standards.


3) Evaluating and creating with AI (testing, measuring, building, improving)

This domain is the shift from "I used a tool" to "I can shape outcomes." It includes evaluating outputs (quality, relevance, failure modes), but also making things: building prototypes, improving prompts and workflows, selecting evaluation criteria, and iterating based on evidence rather than vibes.

Some frameworks explicitly pair "evaluate" with "create," because creation without evaluation turns into unearned confidence—and evaluation without building turns into spectatorship. We keep them together for the same reason: literacy is participation.


4) Ethics, harm, power, accountability (bias, privacy, manipulation, downstream effects)

This domain is the "human dimension": who is affected, who benefits, who carries the risks, and who is accountable when delegation shifts from people to systems.

We make two questions explicit here because they're often implied rather than named:

  • Curiosity about who wins / who pays: When this works, who benefits? When it fails, who absorbs the cost?
  • Curiosity about what gets normalized: What behavior becomes "standard" once this tool is common? Opacity, dependency, "no one's responsible," or extraction?

For creatives, this domain also includes provenance + IP + consent: where material comes from, what permissions apply, and what you owe collaborators and communities when provenance is uncertain.


5) Managing and governing AI (ongoing oversight, boundaries, documentation, repair)

This domain is "what happens after you ship." It includes ongoing oversight, boundary setting, documentation, monitoring, incident response, and repair. A lot of harm comes not from a model's existence, but from how it's integrated, maintained, and defended as 'normal.'

Some frameworks treat this as a first-class domain (managing AI systems; governing risk across a lifecycle). We do too—because responsibility is ongoing, and contestability requires artifacts: ways to critique, complain, diagnose, and change a system without begging for permission.

(…and our commitment to Repair + Contestability makes this real in practice.)


What we make explicit (because it's where things actually break)

Many AI literacy resources talk about "risks" in general. We like risk language, but we prefer named questions.


Curiosity about who wins / who pays

When this works, who benefits?

When it fails, who eats the cost — time, money, reputational harm, surveillance, exclusion, degraded craft, or shifted responsibility?


Curiosity about what gets normalized

What behavior becomes "standard" once this tool is common?

What does it quietly make acceptable — opacity, dependency, copy-paste culture, or treating people as optional?


Creative workflows need provenance questions:

  • Where might this material come from?
  • What permissions apply?
  • Whose consent is missing?

If we can't establish clean provenance, we label uncertainty, choose safer inputs, or redesign the workflow.


Repair + contestability

Responsibility isn't a disclaimer. It includes maintenance, listening to feedback, and making tools challengeable and changeable.

Contestability needs artifacts: ways for people to critique, complain, and improve systems without begging for permission.


Our approach: from competencies to practices

Frameworks often tell you what to know. We focus on how to keep learning without outsourcing your agency.

Two interconnected commitments run through everything: humility (we need to keep learning and reflecting with others about how to use these tools responsibly—or when to refuse them) and demanded curiosity (neither uncritical adoption nor reflexive dismissal—a duty to be curious about what is happening, what is possible, and what choices we have).

We turn these commitments into five Curiosity Practices for Creatives: Question Surfacing, Handoff Spotting, Exploratory Prototyping, Critical Evaluation, and Collaboration.

Read the full Curiosity Practices →