Start here
Your no-gatekeeping guide to AI
AI literacy is not about knowing every tool. It is about knowing how to think alongside one — and when to question it. Find your starting point below.
Where do you want to go?
Pick the path that matches where you are right now.
I'm new to AI and want to start using it
AI Starter Kit →
A 7-day plan. Fifteen to thirty minutes a day. Real tasks, real tools.
I want structured learning, in order
The Curriculum →
From understanding how AI works to questioning it critically.
I'm a parent thinking about my kids
Parents' Guide →
Age-by-age AI literacy for families. Written for the real questions.
I'm an educator
Teacher's Lounge →
Classroom-ready resources. A full intro fits in a 50-minute period.
I want to read sharp AI analysis
Deep Dives →
The Human+AI archive. Plain-English takes on AI that actually matter.
I'm not sure where I stand
Take the quiz →
10 questions covering hallucination, bias, deepfakes, and more.
The three pillars
Everything on this site is organised around three skills: understanding AI, using it, and questioning it. They are interdependent: you can't use AI well without understanding it, and you can't question it without using it.
What you need to know first
AI predicts. It does not know.
When you ask ChatGPT or Claude a question, it is not searching for an answer. It is building a response word by word based on statistical patterns. It is a world-class guesser, not a librarian.
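If you're curious what "predicting word by word from patterns" means in practice, here is a toy sketch. It is a two-word lookup model, vastly simpler than any real system, but the mechanism is the same: count which words tend to follow which, then pick the statistically likely next word. The tiny corpus and function names are just for illustration.

```python
from collections import Counter, defaultdict

# A toy "bigram" model: like a language model, it predicts the next
# word purely from statistical patterns in its training text.
# It has no idea what any of these words mean.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

Notice the model answers confidently even though it understands nothing. That is exactly why the next point matters.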
It will sound confident when it is wrong.
Because AI predicts plausible text, it can state incorrect information with complete authority. It will invent citations, dates, and quotes without any signal that it is guessing. This is called hallucination, and it happens with every model, no matter how advanced.
Verification is not asking it again.
Check claims against primary sources: official data, peer-reviewed research, reputable journalism. If an AI cites a source, verify the source exists and says what the AI claims. Do not ask the same AI the same question.
Treat every AI response as a first draft that needs your judgement.
Common questions
What is generative AI?
Generative AI is a type of artificial intelligence that creates new content (text, images, or code) by learning patterns from large amounts of existing data. It predicts plausible outputs rather than retrieving facts from a database.
Why does AI sometimes give wrong answers?
AI language models generate text by predicting what is statistically likely to follow previous words. They do not verify facts. This means they can produce confident-sounding but incorrect information, a phenomenon called hallucination.
How do I verify information from an AI?
Check specific claims against independent primary sources: official databases, peer-reviewed articles, or reputable news organisations. If an AI cites a source, verify it exists and supports the claim. Do not ask the same AI the same question; use independent sources.
What is AI literacy?
AI literacy is the ability to understand how artificial intelligence works, use it effectively, and question it critically. It does not require a technical background. It requires good judgement.
What is an AI hallucination?
An AI hallucination is when a large language model generates information that sounds plausible but is factually wrong. This happens because AI predicts likely text rather than retrieving verified facts. Always check AI-generated claims against primary sources.
Is AI going to take my job?
The more useful question is whether AI will change how your job works, and the answer is almost certainly yes. AI literacy helps you stay in the driver's seat; the bigger risk is being outpaced by someone who uses AI more effectively than you do.
Not sure where you stand?
Take the quiz and find out how AI literate you really are.
