Hypothesis Testing Student Guide
Introduction
Alright everyone, grab your pens — we’re going to tackle hypothesis testing today.
Now before you sigh, hear me out. This isn’t as awful as it sounds. It’s basically a fancy way of saying, “Do the numbers back up what someone’s claiming, or not?” That’s it.
I always tell my classes this: if you can tell whether something’s believable or not based on evidence, you already understand hypothesis testing. The rest is just learning how exam boards want you to show it.
What’s a Hypothesis Anyway?
So, a hypothesis is just a claim — an idea we want to test.
When you do a hypothesis test, you start with what’s called the null hypothesis. That’s the statement you assume is true until the data says otherwise.
Then you’ve got the alternative hypothesis, which is what you’ll accept if the evidence against the first one is strong enough.
It’s a bit like a court case — the null hypothesis is “innocent until proven guilty.” We don’t throw it out unless the data makes it look really suspicious.
Let’s Picture It
Imagine a company that claims its new snack bars have, on average, 200 calories. You test a bunch and find the average seems lower — maybe 190 or so.
Now, is that small difference just random luck, or does it actually show the company’s off with their numbers?
That’s where hypothesis testing comes in. You’re checking whether the data you’ve got fits their story or suggests something’s up.
I once had a student test whether my coffee shop’s “medium” really meant 300 ml. Turns out… it didn’t. The data spoke.
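To make the snack-bar idea concrete, here's a minimal Python sketch using a normal model. The standard deviation (15), sample size (25), and sample mean (190) are invented for illustration — the guide above doesn't specify them:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical numbers: claimed mean 200 kcal, known sd 15,
# sample of 25 bars with sample mean 190.
mu0, sigma, n, x_bar = 200, 15, 25, 190

# Under H0, the sample mean is N(mu0, sigma / sqrt(n)).
z = (x_bar - mu0) / (sigma / sqrt(n))
p_value = NormalDist().cdf(z)  # one-tailed: P(mean <= 190 if H0 true)

print(round(z, 2), round(p_value, 4))  # -3.33 0.0004
```

A p-value that small says a sample mean of 190 would be very rare if the claim were true — so, with these made-up numbers, the company's story looks shaky.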
The Testing Logic — Step by Step
Every exam board — AQA, Edexcel, OCR — basically wants the same flow:
- State your two hypotheses clearly: what’s being claimed and what you’re testing instead.
- Decide your model — like, are you counting successes or measuring something continuous?
- Work out how surprising your result is if the original claim were true.
- Compare that surprise value (that’s the p-value) with your threshold for doubt — usually 5%.
- Write your conclusion in context — what the result actually means in plain English.
If you can follow those steps calmly, you’ll always pick up most of the marks.
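The five steps above can be sketched in code. Here's a minimal Python example, assuming a made-up claim that a coin is fair (H₀: p = 0.5) and an observation of 3 heads in 20 tosses:

```python
from math import comb

def binomial_lower_tail(n, x, p):
    """P(X <= x) for X ~ B(n, p): the one-tailed p-value for 'too few successes'."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

# Step 1: H0: p = 0.5, H1: p < 0.5 (one-tailed)
# Step 2: model - we are counting successes, so binomial B(20, 0.5)
# Step 3: how surprising is 3 or fewer heads if H0 is true?
p_value = binomial_lower_tail(n=20, x=3, p=0.5)
# Step 4: compare with the 5% significance level
alpha = 0.05
reject = p_value < alpha
# Step 5: conclusion in context
print(f"p-value = {p_value:.4f}; reject H0: {reject}")
```

Same five steps, whatever the question — only the model and the numbers change.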
The Significance Level — How “Sure” You Want to Be
Right, so that 5% thing you always see? That’s called the significance level.
It’s basically your “risk budget.” You’re saying, “I’m willing to be wrong 5% of the time.”
Sometimes questions use 1% instead — that’s just being extra cautious.
If the probability of your data happening is smaller than that level, you say, “That’s too rare to ignore — something’s probably changed.” And boom, you reject the original claim.
One-Tailed or Two-Tailed — What Are We Even Talking About?
This one trips up loads of students.
If the alternative hypothesis only looks one way — like “the average is less than 200” — that’s a one-tailed test. You’re only checking one direction.
If it’s “the average is different,” that’s a two-tailed test — you’re looking both ways.
And in two-tailed tests, you split that 5% level across both ends — 2.5% each. OCR loves that trick. Don’t fall for it.
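One way to see that split in code — a tiny sketch (the 0.03 tail probability is invented for illustration):

```python
def is_significant(tail_prob, alpha=0.05, two_tailed=False):
    """Compare a one-side tail probability with the correct threshold."""
    threshold = alpha / 2 if two_tailed else alpha
    return tail_prob < threshold

# The same tail probability can pass a one-tailed test
# but fail the matching two-tailed test:
print(is_significant(0.03, two_tailed=False))  # True: 0.03 < 0.05
print(is_significant(0.03, two_tailed=True))   # False: 0.03 >= 0.025
```

That's exactly the trap: a result significant at 5% one-tailed may not be significant two-tailed, because each tail only gets 2.5%.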
The Famous p-Value — Your “How Weird Is This?” Number
Alright, this is where everyone zones out, but stay with me.
The p-value is just a measure of how surprising your data would be if the claim were true.
If that p-value is tiny — smaller than 5% — your result is too weird to fit the original story. You say, “Yep, the claim doesn’t hold up,” and you reject it.
If it’s bigger, you shrug and keep the claim. You’re not saying it’s definitely right, just that the data didn’t prove it wrong.
Think of it as your “surprise rating.” Small p-value? You’re shocked. Big p-value? Meh, nothing unusual here.
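If the "surprise rating" idea feels abstract, a quick simulation makes it concrete. This hypothetical sketch estimates the p-value for seeing 15 or more heads in 20 tosses of a fair coin, simply by counting how often that happens across many simulated fair-coin experiments:

```python
import random

random.seed(1)  # fixed seed so the estimate is reproducible

trials = 100_000
extreme = sum(
    sum(random.random() < 0.5 for _ in range(20)) >= 15  # 15+ heads this experiment?
    for _ in range(trials)
)

# Proportion of fair-coin experiments at least as extreme as the data:
print(extreme / trials)  # close to the exact p-value, about 0.021
```

So the p-value really is just a long-run "how often would luck alone produce this?" number — about 2% of the time here, which at the 5% level would count as too weird to ignore.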
Common Exam Traps (and How to Avoid Them)
Let’s be real — the maths isn’t the hard part here; it’s the exam wording.
Here’s what catches people out again and again:
- Forgetting to define your hypotheses. AQA and OCR hand out a mark just for that line. Write it clearly — always.
- Mixing up tails. “Greater than” means right-hand side. “Less than” means left. Simple, but easy to flip when you’re stressed.
- Skipping the conclusion in words. Every paper wants a short English sentence — “There is evidence that the average is lower,” or “No significant evidence of change.”
- Confusing the p-value with the probability of H₀ being true. Nope. The p-value just tells you how rare your data would be if H₀ were true.
- Not checking which model to use. Counting successes? Binomial. Measuring something like height or time? Normal. That one decision can make or break your whole answer.
Real Classroom Moment
A few years back, one of my students — let’s call her Amira — got a question about testing whether a dice was fair.
She ran her numbers, got a p-value of about 0.04, and wrote, “Reject H₀ because 0.04 < 0.05.”
Then she added, “The dice is cursed.”
Okay, bit dramatic — but the logic was flawless.
That’s the point: you’re not proving anything absolutely; you’re deciding whether the evidence is strong enough to stop believing in the original claim.
How to Structure a Perfect Answer
If you freeze up in exams, just follow this pattern. It works for every board.
Step 1: Write both hypotheses clearly.
Step 2: Say what you’re measuring or counting.
Step 3: Show how you calculated or found the p-value.
Step 4: Compare p with 0.05 (or whatever α they give).
Step 5: Finish with a short, clear sentence in context.
If you hit all five, you’re already looking at full method marks.
Why This Topic Actually Matters
Outside the exam hall, this is real science.
Doctors use it to test whether treatments work. Companies use it to see if new adverts boost sales. Even psychologists use it to check if a result’s genuine or just random luck.
So, once you understand hypothesis testing, you’re learning how people in the real world make decisions using data.
And honestly, that’s a superpower.
Teacher Reflection
I’ll be honest — when I first taught this topic, half the class looked terrified.
But once they realised it’s just structured decision-making, they relaxed.
You’re not expected to prove things with 100% certainty — you’re just weighing evidence. It’s logical, not mystical.
And when you see it that way, the formulas stop feeling like formulas — they’re just the maths version of “Hmm… does this look believable?”
Make Hypothesis Testing Click for Good
Start your revision for A-Level Maths today with our A Level Maths Easter Revision, where we teach statistics, mechanics, and pure maths step by step, in plain English.
We’ll walk you through tricky topics like hypothesis testing until they finally make sense — and you can walk into your exam confident, not guessing.
Author Bio
S. Mahandru • Head of Maths, Exam.tips
S. Mahandru is Head of Maths at Exam.tips. With over 15 years of experience, he simplifies complex calculus topics and provides clear worked examples, strategies, and exam-focused guidance.