AI literacy for parents
AI literacy isn't about banning tools. It's about protecting your child's developing judgement in a world that rewards effortless output. The risk isn't that kids use AI. It's that they stop believing in their own ability to think. Here's what to do about it, by age.
“Students with uninhibited AI access did 48% better on practice problems — and 17% worse on the final test. They’d outsourced their thinking.”
Wharton School study, via the Brookings Institution
Protecting the boundary between tools and relationships
Young children naturally anthropomorphize objects. They believe devices that respond to their voice might have feelings, preferences, and inner lives. Dr. Ying Xu, a professor of education at Harvard, notes that kids who believe AI has agency may feel it is “choosing” to talk to them.
UNESCO recommends children under 13 not have independent access to general-purpose chatbots. At this age, the rule of thumb is adult presence, not a parental filter. AI literacy comes down to two foundations: this is a tool, and private information stays private.
Explain it plainly
“Alexa doesn’t understand feelings. It’s a computer that matches word patterns.” When your child asks if a device is alive, say no simply and consistently. Avoid language that frames AI as a friend or a character with an inner life.
The ‘guess the next word’ game
Ask your child what word comes next in a sentence, then show them your phone’s autocomplete to see if it matches. Letting kids in on AI’s core mechanism before they’re old enough to be dazzled by it is a useful head start.
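For the technically curious, the mechanism the game illustrates can be sketched in a few lines. This is a purely illustrative toy, not how any real chatbot is built: it simply counts which word most often follows another in a small sample of text, which is the same "pattern matching, not understanding" idea the game teaches, at a vastly smaller scale.

```python
# A toy "guess the next word" predictor: no understanding, just counts
# of which word most often follows another in the sample text.
from collections import Counter, defaultdict

text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Tally which word follows each word in the sample
following = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    following[current][nxt] += 1

def guess_next(word):
    """Return the most common word seen after `word`, like autocomplete."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else None

print(guess_next("the"))  # "cat" — the most frequent follower of "the"
```

Running `guess_next("the")` returns "cat" only because "cat" follows "the" most often in this sample; feed it different text and it confidently "predicts" something else, which is exactly the point worth showing a child.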
Watch for
- A child sharing personal information with a device
- Distress when access to a device is removed
- Treating a smart speaker as a friend
The 'Is it alive?' sorting game
Printable card templates for ages 3–6. Lay out the cards, ask the sorting questions together, and help your child begin separating tools from relationships.
Separating confidence from evidence
By this age, kids move from treating AI as a friend to using it as an information source. The risk is outsourcing effort: letting the bot do the thinking. Brookings suggests steering kids toward using AI for drafts and ideas, but insisting they do the core reasoning themselves.
The preteen years are also when algorithmic systems become a dominant force in a child’s information environment. TikTok, Instagram, and YouTube are AI systems that have learned with precision what keeps each user watching. The subtler risk is psychological: when identity conversations happen through an algorithm, kids can struggle to distinguish between who they are and who their feed says they are.
The two-source rule
When a child asks AI a factual question, check one additional trusted source together. A library book, a museum’s educational page, or a kids’ news site all work. Frame it simply: “AI is a fast first draft. Real learning is checking.”
Recommendation diary (one week)
Each day, have your child screenshot or note three items served to them via social media. Then discuss together: “What did you watch or search that might have triggered this?” “How does it make you feel?” “What would you choose on purpose?” This connects the abstract concept of algorithms to their lived experience.
Watch for
- Copying AI answers without engaging with them
- Insisting "the computer said so" when challenged
- Social conflict linked to images or audio
- A child becoming fearful of being photographed or recorded
Choosing capacity over convenience
A 2025 KPMG survey found that while most Canadian students believed AI improved the quality of their submitted work, nearly half said their critical thinking had deteriorated since they started using it. The tension between short-term output and long-term capacity is one teenagers are old enough to understand directly.
High school teacher Adam Davidson-Harden uses a weight-lifting analogy: if a robot did your reps for you, your muscles wouldn’t grow. The discomfort of not knowing, of having to work something out, isn’t a bug in the learning process. It is the process.
The mental health conversation
Emerging research suggests that a meaningful share of teens use AI chatbots for emotional support. A non-judgmental opener: “If you ever use a chatbot for support, I want to know, not to punish you, but to keep you safe.” Then two rules: no identifying details shared, and any self-harm or crisis conversation goes to an adult immediately.
Consent and synthetic media
Pick two or three realistic scenarios: a fabricated voice message, an edited video, an image made to look like your teenager. Ask: “What would you do in the first 10 minutes? Who would you tell? How would you preserve evidence without spreading it?” Have this conversation before it’s urgent.
Watch for
- Secretive AI use framed as emotional intimacy
- Refusal to discuss the sources behind schoolwork
- A sudden reputational crisis linked to content no one admits to creating
Family AI collaboration contract
A one-page printable agreement covering the Cognitive Muscle rule, what AI use is and isn't allowed for schoolwork, privacy rules, and a disclosure format. Fill it in together — the point is the conversation.
Modelling healthy AI habits
Over-reliance on AI can reduce autonomy and critical thinking. We can inadvertently model this by treating AI like an oracle. The dinner table, free of devices, remains one of the most protective spaces a family can maintain.
We don’t need to become AI experts. But narrating our own judgement out loud, so kids can hear what deliberate thinking sounds like, is how they learn. Some things worth saying:
- "I'm using AI to brainstorm options, but I'm making the final decision."
- "I don't put personal information into this tool."
- "I'm checking that claim with another source."
- "I used AI here, so I'm disclosing it."
AI is probably already a part of your child’s world. Whether it strengthens or weakens their judgement depends less on the tool, and more on the habits built around it.
