AI Literacy Guide

About

Smart AI literacy for non-techies

Written by

Nicolle Weeks

Journalist · Skeptic · Human

Twenty years of reporting for the Globe and Mail, Chatelaine, Today's Parent, CBC, and Channel 4. She has published the Human+AI newsletter on Substack since April 2024.

This site grew out of a simple observation: the people who most need to understand AI (teachers, journalists, policy professionals, parents) are the least well served by existing AI coverage, which tends to assume either deep technical knowledge or no critical thinking at all. AI Literacy Guide exists to fill that gap.

Professional disclosure: Nicolle Weeks works in communications at Manulife. This site is independent, written in a personal capacity, and has no affiliation with her employer.

Mission

AI Literacy Guide exists to cut through the noise. The tools are moving fast, the hype is moving faster, and most of the coverage assumes you either have a CS degree or a tin-foil hat. This is for everyone else.

The goal isn't to make you an AI expert. It's to give you the conceptual vocabulary and practical habits to use AI thoughtfully and to question it when it matters.

No technical background required. No uncritical enthusiasm. No doom. Just clear thinking about a genuinely complicated set of tools.

The Human + AI literacy framework

The site is organised around three interconnected skills, each a distinct dimension of AI literacy. They are developed together, not one after another.

Understanding AI

What AI systems actually are: how language models work, what training data is, why hallucination happens, and what the fundamental limitations of these technologies are. You cannot use AI well without this foundation.

Using AI

Working with AI tools effectively: writing clearer prompts, asking better questions, knowing which tasks actually benefit from AI assistance, and recognising when it's the wrong tool entirely. Practical skill, not hype.

Questioning AI

Building your lie detector: recognising when AI outputs may be wrong, verifying information through independent sources, spotting where bias shows up, and knowing the contexts where human expertise and accountability cannot be replaced by AI.

Editorial principles

Independence

This site has no commercial relationships with AI companies and does not receive funding from technology industry sources. Editorial decisions are made independently. See the professional disclosure above.

Accuracy over accessibility

Where there is tension between making something easy to read and making it accurate, accuracy comes first. Simplification that misleads is not useful.

Plain language

Everything is written for readers without a technical background. Technical terms are explained when they are introduced. No assumed knowledge, ever.

Scepticism, not cynicism

AI tools are genuinely useful. They are also genuinely limited and sometimes harmful. The goal is to give readers the judgement to tell the difference, without dismissing AI wholesale or uncritically celebrating it.

Attribution and transparency

Sources are cited. Claims are supported. When something is uncertain or contested, that is stated clearly. If something is the author's opinion, it is labelled as such.

Keep up with Human+AI

One email a week. Clear, sceptical, independent takes for people who work with AI but didn't study computer science. Join 406+ subscribers on Substack.