Why do researchers test hypotheses?
Because data never speaks for itself; it needs interpretation. Hypothesis testing is the structured way of deciding whether the evidence supports a claim. Without it, business decisions would rely on instinct rather than analysis. Let's walk carefully through this cornerstone of statistics.

What is a Statistical Hypothesis?
A statistical hypothesis is simply an assumption or claim about a population parameter.
For example, a company might claim that the average productivity of its workers is 8 hours per day. This assumption becomes a hypothesis that can be tested using sample data. In essence, it’s about comparing what we believe versus what the data suggests.
Key Point: A hypothesis is always framed in terms of population parameters (like mean, proportion, variance), never sample statistics.
Types of Hypotheses
a. Null Hypothesis (H₀)
The default claim of "no effect" or "no difference," assumed true until the data provides strong evidence against it. Example: "The mean salary of employees is ₹30,000."
b. Alternative Hypothesis (H₁ or Hₐ)
The claim the researcher seeks to establish, accepted only if the data contradicts H₀. Example: "The mean salary of employees is not ₹30,000."
Think of H₀ as the “status quo” and H₁ as the “new claim.” Hypothesis testing is about deciding whether there’s enough evidence to reject H₀.
One-Tailed vs. Two-Tailed Tests
When framing hypotheses, direction matters:
- One-tailed test: Tests for effect in a specific direction (greater than or less than). Example: H₀: μ ≤ 50, H₁: μ > 50.
- Two-tailed test: Tests for effect in both directions (not equal). Example: H₀: μ = 50, H₁: μ ≠ 50.
A two-tailed test is more cautious, since it checks for differences on both sides. A one-tailed test is appropriate only when there is a strong prior reason to expect a specific direction.
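Here is a minimal sketch of the contrast, using SciPy's one-sample t-test with the μ = 50 example above; the sample data is randomly generated for illustration, and the `alternative` argument assumes SciPy ≥ 1.6:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=52, scale=5, size=40)  # illustrative data, true mean 52

# Two-tailed: H0: mu = 50 vs H1: mu != 50
t_two, p_two = stats.ttest_1samp(sample, popmean=50, alternative='two-sided')

# One-tailed: H0: mu <= 50 vs H1: mu > 50
t_one, p_one = stats.ttest_1samp(sample, popmean=50, alternative='greater')

print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```

For the same data, the one-tailed p-value is half the two-tailed one (when the sample mean lies on the hypothesized side), which is why a one-tailed test rejects H₀ more readily in its chosen direction.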
Type I and Type II Errors
Errors are inevitable in decision-making. Hypothesis testing is no exception.
- Type I Error (α): Rejecting H₀ when it’s actually true. This is a “false alarm.” Example: Concluding a new drug works when it doesn’t.
- Type II Error (β): Failing to reject H₀ when H₁ is true. This is a “missed detection.” Example: Overlooking that a new drug is effective.
In short: a Type I error is like convicting an innocent person, and a Type II error is like letting a guilty one go free.
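Both error rates can be checked by simulation. Below is a minimal sketch, assuming a one-sample t-test at α = 0.05; the means (50 under H₀, 52 under H₁), the standard deviation, and the sample size are illustrative choices, not values from the text:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 10_000

# Type I error rate: H0 is true (mu really is 50); count false rejections.
type1 = 0
for _ in range(trials):
    sample = rng.normal(loc=50, scale=5, size=n)
    if stats.ttest_1samp(sample, 50).pvalue < alpha:
        type1 += 1

# Type II error rate: H1 is true (mu is actually 52); count missed detections.
type2 = 0
for _ in range(trials):
    sample = rng.normal(loc=52, scale=5, size=n)
    if stats.ttest_1samp(sample, 50).pvalue >= alpha:
        type2 += 1

print(f"Type I rate  ~ {type1 / trials:.3f} (should be close to alpha)")
print(f"Type II rate ~ {type2 / trials:.3f} (depends on effect size and n)")
```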
Level of Significance (α) and Power of a Test (1–β)
Level of Significance (α): The probability of committing a Type I error. Common values are 0.05 (5%) and 0.01 (1%). It represents how much risk we are willing to take in rejecting a true null hypothesis.
Power of a Test (1–β): The probability of correctly rejecting H₀ when H₁ is true. A powerful test reduces the risk of Type II error. Researchers aim for high power (often 80% or more).
Balancing α and β is critical. Lowering α reduces false alarms but increases missed detections. Raising power requires larger samples or stronger effects.
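To make that trade-off concrete, here is a minimal sketch of an analytical power calculation for a one-sided z-test with known σ; the function name and all the numbers are assumptions chosen for illustration:

```python
from scipy.stats import norm

def z_test_power(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a one-sided z-test of H0: mu = mu0 against H1: mu = mu1 > mu0."""
    z_crit = norm.ppf(1 - alpha)           # critical value at level alpha
    shift = (mu1 - mu0) * n**0.5 / sigma   # standardized effect under H1
    return norm.cdf(shift - z_crit)        # P(reject H0 | H1 is true)

# Illustrative numbers: power grows with sample size for a fixed effect.
for n in (10, 30, 100):
    print(f"n = {n:3d}: power = {z_test_power(50, 52, 5, n):.3f}")
```

Notice how, with the effect size and σ held fixed, only a larger sample pushes power toward the commonly targeted 80%.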
General Procedure for Hypothesis Testing
- Step 1: State the null (H₀) and alternative (H₁) hypotheses.
- Step 2: Choose the significance level (α).
- Step 3: Select the appropriate statistical test (z-test, t-test, etc.) based on sample size and data type.
- Step 4: Compute the test statistic from the sample data.
- Step 5: Determine the critical value(s) or p-value.
- Step 6: Compare the test statistic with the decision rule:
  - If the test statistic falls in the rejection region, reject H₀.
  - If not, fail to reject H₀.
- Step 7: Draw the conclusion in simple words, relating back to the problem context.
Example: A soft drink company claims its bottles contain 500 ml. A sample of 30 bottles is tested. A hypothesis test then determines whether the average content differs significantly from 500 ml.
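A minimal sketch of those seven steps applied to the bottle example, using a one-sample t-test (appropriate here: small sample, unknown population σ); the 30 measurements are randomly generated stand-ins, since the original data isn't given:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
bottles = rng.normal(loc=498.5, scale=4.0, size=30)  # hypothetical fill volumes (ml)

# Steps 1-2: H0: mu = 500 vs H1: mu != 500, at alpha = 0.05 (two-tailed).
# Steps 3-5: one-sample t-test; SciPy returns the statistic and p-value.
alpha = 0.05
result = stats.ttest_1samp(bottles, popmean=500)

# Steps 6-7: compare the p-value with alpha and state the conclusion.
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
if result.pvalue < alpha:
    print("Reject H0: mean fill volume differs significantly from 500 ml.")
else:
    print("Fail to reject H0: no significant evidence against 500 ml.")
```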
Summary
Hypothesis testing transforms uncertainty into structured decision-making. By distinguishing the null from the alternative hypothesis, recognizing the possible errors, setting a significance level, and following systematic steps, researchers draw conclusions that are informed judgments rather than guesses.
Final Thought: Statistics doesn't eliminate doubt; it manages it. Next time you hear a bold claim backed by "data," ask yourself: has it passed the test?