AI Literacy Guide

AI Glossary

Jargon, decoded.

Every term here is explained without assumed technical knowledge. If you see a word in an article that isn't clear, this is where to look.

AI agent

An AI system capable of taking autonomous actions like browsing the web, running code, or managing files to complete a multi-step task. Unlike a standard chatbot that just talks, an agent does. Because of this autonomy, an agent's errors can have more significant real-world consequences.

Bias

Systematic imbalances in AI outputs that reflect the inequalities and gaps in the training data or the human decisions made during development. Bias isn't a patchable bug. It is a statistical reflection of how human knowledge has been recorded.

Read: Decoding AI bias

Context window

The specific amount of text an AI can "see" or remember at one time during a conversation. When your chat exceeds this window, the model starts to "forget" the beginning of the thread, which can lead to inconsistencies or loss of detail.
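For the curious, this "forgetting" can be pictured in a few lines of code. This is a toy sketch, not how any real chatbot is built: the 8-token budget is invented, and it counts words where real models count tokens.

```python
# Toy illustration of a context window: when the conversation exceeds
# the model's budget, the oldest messages are dropped ("forgotten").

def fit_to_window(messages, budget=8):
    """Keep the most recent messages that fit within the budget."""
    kept = []
    used = 0
    for msg in reversed(messages):      # walk backwards from the newest
        cost = len(msg.split())         # crude stand-in: one token per word
        if used + cost > budget:
            break                       # anything older falls out of view
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

chat = ["My name is Priya", "What is AI?", "AI stands for artificial intelligence"]
print(fit_to_window(chat))              # the opening message is gone
```

Run this and the first message, the one with the name, is missing from the output: exactly the inconsistency you notice in a long chat.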

Fine-tuning

An additional layer of training where a pre-existing model is refined on a smaller, specialized dataset. This process shapes the model's behaviour, making it safer or better at a specific job, without rebuilding its entire architecture.

Read: What is a large language model (LLM)?

Foundation model

A massive, general-purpose AI model (like GPT-4 or Gemini) trained on broad data that serves as the "base" for other applications. These are the engines that power the specific AI tools you use every day.

Generative AI

AI that creates brand-new content (text, images, or code) by identifying patterns in massive amounts of existing data. Crucially, it generates responses from scratch based on statistical probability. It doesn't retrieve them from a database.

Read: What is generative AI? (The non-techy guide)

Hallucination

When an AI generates false information but presents it with absolute confidence. This isn't a glitch. It's a structural byproduct of how language models work. They are designed to predict plausible-sounding text, not verified facts.

Read: The hallucination problem

Large language model (LLM)

An AI system trained on billions of words to understand and generate human-like language. The "Large" refers to the scale of the training data and the "parameters" (internal settings) that help it make predictions. ChatGPT, Claude, and Gemini are all LLMs.

Read: What is a large language model (LLM)?

Multimodal AI

An AI system that can process and generate multiple types of information at once, such as text, images, and audio. A multimodal model can look at a photo and describe it to you in text.


Neural network

A type of AI architecture loosely inspired by the human brain. It is built from layers of connected nodes that process information. While "neural" is a metaphor, these systems learn by adjusting the strength of connections based on the data they see.

Parameter

The internal numerical values (the "dials") that an AI model adjusts during training to improve its predictions. Generally, more parameters allow for more complex behaviour, but they also require more computing power to run.

Read: How AI actually 'learns'
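The "dial" metaphor can be made concrete with a model that has exactly one parameter. This is a deliberately tiny sketch: the data, learning rate, and step count are all invented for the example, and real models adjust billions of dials at once.

```python
# Toy illustration: a "model" with one parameter (w), nudged during
# training to shrink its prediction error.

def train(data, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        for x, target in data:
            error = w * x - target      # how wrong was the prediction?
            w -= lr * error * x         # nudge the dial to reduce the error
    return w

data = [(1, 2), (2, 4), (3, 6)]         # examples of the true rule, y = 2x
w = train(data)
print(round(w, 2))                      # the dial settles near 2.0
```

After training, the parameter has settled at a value that makes the predictions match the examples. Scale that idea up by a factor of billions and you have an LLM's training run.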

Prompt

The specific instruction or question you provide to an AI. The quality of the prompt determines the utility of the response. The art of refining these instructions is known as "prompt engineering."


Read: Write better prompts

RAG (Retrieval-augmented generation)

A technique that gives an AI model access to a specific, verified set of documents (like your company's handbook) before it generates an answer. RAG significantly reduces hallucinations by grounding the AI in real facts.
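The retrieve-then-answer flow can be sketched in miniature. Everything here is a stand-in: the two "documents" are invented, and counting shared words is a crude substitute for the real search systems RAG pipelines use.

```python
# Toy illustration of RAG: find the most relevant document first, then
# hand it to the model alongside the question.

documents = [
    "Holiday policy: staff receive 25 days of paid leave per year.",
    "Expenses: claims must be filed within 30 days of purchase.",
]

def retrieve(question, docs):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "How many days of paid leave do staff get?"
context = retrieve(question, documents)

# The model now answers from the retrieved text, not from memory alone.
prompt = f"Using only this document:\n{context}\n\nAnswer: {question}"
print(prompt)
```

The key move is in the last step: the verified document travels inside the prompt, so the model is "grounded" in it rather than guessing from its training data.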

RLHF (Reinforcement learning from human feedback)

A training method where humans rank AI responses to teach the model what humans find "helpful" or "safe." This process polishes the AI's personality but doesn't change its underlying text-prediction math.

Read: What is a large language model (LLM)?

Token

The basic unit of text an AI model processes, usually a word or a fragment of a word. Models "think" in tokens, and AI services bill by them, not by character count.

Read: What is a large language model (LLM)?
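To see why a single word can cost several tokens, here is a toy tokenizer. The five-entry vocabulary and the splitting rule are invented for the example; real tokenizers vary by model and learn their fragments from data.

```python
# Toy illustration of tokenization: text is split into known fragments,
# so one word can become several tokens.

vocab = ["un", "break", "able", "the", "code"]

def tokenize(text):
    """Split each word into fragments from the tiny vocabulary above."""
    tokens = []
    for word in text.lower().split():
        while word:
            # take the first vocabulary entry the word starts with,
            # or fall back to a single character
            piece = next((v for v in vocab if word.startswith(v)), word[0])
            tokens.append(piece)
            word = word[len(piece):]
    return tokens

print(tokenize("the unbreakable code"))
# → ['the', 'un', 'break', 'able', 'code']
```

Three words, five tokens: "unbreakable" alone costs three. That is why token counts, not word counts, determine context-window usage and API bills.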

Training data

The massive library of text, images, or code that an AI model studies to learn patterns. The quality and bias of this data directly determine how the model behaves and what it "knows."

Read: The truth about training data

Transformer

The specific type of neural network architecture that makes modern AI (like ChatGPT) possible. Introduced in 2017, Transformers are uniquely good at tracking the relationships between words across long stretches of text.

Read: What is a large language model (LLM)?

Want to go deeper on any of these concepts?

Browse the full AI literacy curriculum →