AI Literacy Guide

Start here

Your no-gatekeeping guide to AI

AI literacy is not about knowing every tool. It is about knowing how to think alongside one — and when to question it. Find your starting point below.

The three pillars

Everything on this site is organised around three skills. They are interdependent: you can't use AI well without understanding it, and you can't question it without using it.

A diagram showing the three skills of AI literacy:
01 Understanding AI: knowing how AI systems work and their limitations.
02 Using AI: working effectively with AI tools.
03 Questioning AI: critically evaluating AI outputs.

Understanding AI

How training data, prediction, and hallucination actually work.

Understand AI →

Using AI

Better prompts, the right tools, and knowing when it actually helps.

Use AI →

Questioning AI

Spotting errors, verifying claims, and knowing when to close the tab.

Question AI →

What you need to know first

AI predicts. It does not know.

When you ask ChatGPT or Claude a question, it is not searching for an answer. It is building a response word by word based on statistical patterns. It is a world-class guesser, not a librarian.
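If you are curious what "statistical patterns" means in practice, the short Python sketch below builds a response one word at a time by picking whichever word most often followed the previous one in a tiny sample text. It is a deliberate toy, not how ChatGPT or Claude actually work internally (real models use neural networks trained on vastly more data and consider far more context than one word), and the names in it (training_text, follows, next_word) are made up for illustration.

from collections import Counter, defaultdict
import random

# A tiny sample of "training" text. Real models learn from trillions of words.
training_text = "the cat sat on the mat the cat chased the dog the dog sat on the rug"

# Count which word follows which in the sample.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(word):
    # Pick a likely next word, weighted by how often it appeared after this one.
    candidates = follows[word]
    if not candidates:
        return None
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

# Build a "response" one word at a time, starting from "the".
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat the dog sat"

Notice that the output sounds plausible but means nothing in particular, and the program never checked a single fact. That is the core point: prediction, not knowledge.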

It will sound confident when it is wrong.

Because AI predicts plausible text, it can state incorrect information with complete authority. It will invent citations, dates, and quotes without any signal that it is guessing. This is called hallucination, and no model is immune.

Verification is not asking it again.

Check claims against primary sources: official data, peer-reviewed research, reputable journalism. If an AI cites a source, verify the source exists and says what the AI claims. Do not ask the same AI the same question; asking again just gets you another prediction, not a check.

Treat every AI response as a first draft that needs your judgement.

Common questions

What is generative AI?

Generative AI is a type of artificial intelligence that creates new content (text, images, or code) by learning patterns from large amounts of existing data. It predicts plausible outputs rather than retrieving facts from a database.

Why does AI sometimes give wrong answers?

AI language models generate text by predicting what is statistically likely to follow previous words. They do not verify facts. This means they can produce confident-sounding but incorrect information, a phenomenon called hallucination.

How do I verify information from an AI?

Check specific claims against independent primary sources: official databases, peer-reviewed articles, or reputable news organisations. If an AI cites a source, verify it exists and supports the claim. Do not ask the same AI the same question; use independent sources.

What is AI literacy?

AI literacy is the ability to understand how artificial intelligence works, use it effectively, and question it critically. It does not require a technical background. It requires good judgement.

What is an AI hallucination?

An AI hallucination is when a large language model generates information that sounds plausible but is factually wrong. This happens because AI predicts likely text rather than retrieving verified facts. Always check AI-generated claims against primary sources.

Is AI going to take my job?

The more useful question is whether AI will change how your job works, and the answer is almost certainly yes. The bigger risk is not being replaced by AI itself but by someone who uses it more effectively than you do. AI literacy is how you stay in the driver's seat.

10 questions · 5 minutes

Not sure where you stand?

Take the quiz and find out how AI literate you really are.

Take the quiz →