What is generative AI? (The non-techy guide)
It's a creator, not a search engine. It learns patterns to build new stuff.
Read article →

For educators
AI is already in your classroom, whether or not it's in your policy. Your students are using it. Most of them are winging it. And most of what passes for guidance is either a blanket ban or a shallow tutorial. Neither one teaches them to think. This guide is built around one idea: AI literacy isn't about knowing every tool. It's about knowing how to judge. By the end, you'll have the frameworks, lesson structures, activities, and classroom language to teach AI well across any subject, with or without devices. No specialist training required.
💡 The single most important reframe
Don't teach AI as a subject. Teach it as a lens.
A history class can examine AI in media and democracy.
A science class can examine AI in research and data.
A business class can examine AI in labour and decision-making.
Same competencies, different contexts.
AI literacy is a critical thinking class, not a technology class. Here are the seven core concepts your students need.
What AI is (and isn't).
Systems that generate outputs from inputs. Not magic, not a brain, not a truth machine.
Learning from data.
Models learn from patterns, not from understanding. What's in the training data shapes what comes out.
Bias and fairness.
Performance disparities across groups arise from unrepresentative data or flawed design. Bias is a consequence of choices, not a glitch.
Privacy and data minimisation.
What you put into public AI tools matters. Consent, sensitivity, and re-identification risk are real.
Confidence is not correctness.
AI is optimised to produce fluent, confident-sounding output. Fluency is not accuracy. Verification is not optional.
Prompting as a literacy.
Prompts are structured instructions. Iteration matters. Disclosure is part of responsible use.
AI in society.
Labour, democracy, media, accessibility, governance. Who benefits, who is harmed, who is accountable.
Core capabilities by grade. Each builds on the last.
| Grade | Core capability |
|---|---|
| 9 (age ~14–15) | Identify AI in daily life, distinguish automation from learning, and apply a verification checklist to AI-generated claims. |
| 10 (age ~15–16) | Build or simulate a simple classifier and identify at least two plausible sources of bias in a dataset. |
| 11 (age ~16–17) | Conduct a structured risk/benefit analysis using a trustworthiness lens and apply a classroom privacy protocol. |
| 12 (age ~17–18) | Produce and defend a public-facing AI impact artifact with clear claims, evidence, and stated limitations. |
This structure works for secondary school through university level. Each module can stand alone if you have less time.
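The Grade 10 capability above, building or simulating a simple classifier and spotting bias, can be run as a no-library classroom demo. The sketch below is one possible simulation, not a prescribed lesson: it "trains" a single score threshold on a toy dataset, then measures accuracy separately for two groups. All numbers, group labels, and function names are invented for illustration; the point is that a training set dominated by one group produces a rule that serves the other group worse.

```python
# Classroom demo: a one-threshold "classifier" trained on invented data.
# Each example is (score, true_label, group). The training set is mostly
# group A -- a deliberate sampling-bias choice students should spot.

def train_threshold(examples):
    """Pick the cutoff score that best separates True/False labels in training."""
    best_t, best_acc = 0, 0.0
    for t in range(0, 101):
        correct = sum((score >= t) == label for score, label, _ in examples)
        acc = correct / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy_by_group(examples, t):
    """Fraction of correct predictions, reported per group."""
    tallies = {}
    for score, label, group in examples:
        hit = (score >= t) == label
        ok, total = tallies.get(group, (0, 0))
        tallies[group] = (ok + hit, total + 1)
    return {g: ok / total for g, (ok, total) in tallies.items()}

# Training data: six examples from group A, only two from group B.
train = [(55, True, "A"), (70, True, "A"), (40, False, "A"), (45, False, "A"),
         (60, True, "A"), (35, False, "A"), (80, True, "B"), (30, False, "B")]
t = train_threshold(train)

# Test data: group B's scores sit closer to the boundary, so the
# threshold learned mostly from group A misclassifies more of them.
test = [(52, True, "A"), (38, False, "A"), (65, True, "A"),
        (48, True, "B"), (50, True, "B"), (44, False, "B"), (46, False, "B")]
print("learned threshold:", t)
print("accuracy by group:", accuracy_by_group(test, t))
```

In discussion, students can name the two bias sources the demo bakes in: unrepresentative sampling (six A examples, two B) and a decision rule tuned to the majority group. Both map directly onto the "Bias and fairness" concept above.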
Module 1
Students read "What is generative AI?" (4 min). Class discussion: what surprised them, what they already assumed. Key takeaway: AI predicts plausible text. It does not retrieve verified facts.
Module 2
Students read "Why AI hallucinates" (4 min). Class discussion: has anyone seen this happen? Key takeaway: hallucination is a structural feature, not a bug. It will not simply be fixed.
Module 3
Share a real AI-generated response with the class. Work through the verification checklist together. Assign students to find one claim to verify independently using an outside source.
No prior knowledge required
This lesson assumes students have used a chatbot but do not know how it works. No technical background is needed for the teacher or the students.
A few principles from educators who have taught this material.
Start with something they've already seen go wrong.
Students engage faster when the lesson starts with a real AI failure, preferably one they can relate to. The case studies in "AI in the Real World" work well here.
Let them use the tool.
Students learn the limits of AI faster by using it than by reading about it. Assign them to ask an AI something verifiable, then check it.
Reframe the conversation.
Avoid "is AI good or bad?" It's a dead-end. The better questions are: What is this tool doing? When does it help? When does it not? Who is accountable if it's wrong?
Model verification out loud.
Show students your own verification process in real time. Checking a claim openly is more powerful than describing what verification is.
Most consumer AI tools (ChatGPT, Claude, Gemini) use conversation data to train future models unless you opt out. Before asking students to use these tools in class, check your institution's policies and the tool's privacy settings.
Students should never enter personally identifiable information into a consumer AI tool. Demonstrate this expectation explicitly.
Use these to open class discussion or as written reflection prompts.
An AI confidently tells you something that turns out to be false. Who is responsible: the AI, the user, or the company that built it?
You are writing an essay. You use AI to draft a paragraph. How much of the essay is still yours?
A doctor uses AI to help diagnose a patient. The AI is wrong. What safeguards should be in place?
AI training data reflects history, including its biases. What are the implications for who AI works well for, and who it doesn't?
If you couldn't verify a piece of AI-generated information, what would you do before sharing it?
Share this checklist with students before they use AI for any assignment where accuracy matters. It is designed to be printed or shared as a link.
Treat AI responses as starting points for thinking, not final answers.
Free to print and share in classrooms.
Each article is written for a non-technical audience and takes 3–5 minutes to read. They can be assigned individually or in sequence as a full unit.
What is generative AI? (The non-techy guide)
It's a creator, not a search engine. It learns patterns to build new stuff.
Read article →

Curated curricula, professional development, and classroom-ready materials.
Download as PDF →

The Five Big Ideas framework plus curated activity guides. Not a full course on its own, but the best conceptual spine available.
Free, designed for Kβ12, teacher-friendly. Browse the curriculum portal. You will still need to align it to local policy and assessment.
Downloadable, no devices required. Pair with real-world connections so students see why it matters.
Covers age restrictions, equity, privacy, and governance. Start here if your school doesn't have an AI policy yet.
Focused on using generative AI in educator workflows and classroom integration. Good for building confidence before teaching it.
A good starting point if you're less familiar with generative AI. Educator-friendly introduction with practical classroom resources.
For broader context on AI literacy as a global education competency, see the OECD's work on AI and the future of skills.
Your students are already using AI for assignments, research, and writing. Most of them are doing it without a framework for evaluating what it produces or understanding why it sometimes gets things badly wrong.
AI literacy is not a technology class. It is a critical thinking class. The skills it builds (evaluating sources, questioning confident-sounding claims, and verifying information independently) are the same skills that make a good reader, a good researcher, and an engaged citizen.
These materials don't ask students to become AI experts. They ask students to think clearly about a tool they are already using.
Human+AI is a weekly newsletter written for professionals and educators who need to understand AI without wading through hype or panic. One email a week. Clear, skeptical, and free.