p-value
The probability of observing your data (or something more extreme) if the null hypothesis were true.
More formally, if we have a null hypothesis H₀ and we observe some test statistic T, then:

p = P(T ≥ t | H₀ is true)

where t is the observed value of our test statistic.
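This definition can be sketched with a Monte Carlo estimate. Purely for illustration, assume T follows a standard normal distribution under H₀ and that the observed statistic is t = 2.1 (both values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed test statistic from our experiment (hypothetical value).
t_observed = 2.1

# Simulate the sampling distribution of T under H0. Here we assume
# T is standard normal when H0 is true.
t_under_h0 = rng.standard_normal(100_000)

# Two-sided p-value: the fraction of simulated statistics at least as
# extreme (in absolute value) as the one we observed.
p_value = np.mean(np.abs(t_under_h0) >= abs(t_observed))
print(f"Monte Carlo p-value: {p_value:.4f}")
```

The estimate should land near the exact two-sided normal tail probability for t = 2.1 (about 0.036), up to simulation noise.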
Interactive p-value Simulation
Interactive Parameters
- Sample size (n): the number of observations in the sample
- Hypothesized mean (μ₀): the population mean under H₀
- True mean (μ): the actual population mean used to generate the data
- Significance level (α): the threshold for statistical significance
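A minimal sketch of the experiment the interactive plot runs, using illustrative parameter values (n = 30, μ₀ = 0, true μ = 0.5, α = 0.05) and unit-variance normal data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Parameters mirroring the simulation (hypothetical values).
n = 30          # number of observations in the sample
mu_0 = 0.0      # hypothesized population mean under H0
mu_true = 0.5   # actual population mean used to generate data
alpha = 0.05    # significance threshold

# Draw one sample and run a one-sample t-test against mu_0.
sample = rng.normal(loc=mu_true, scale=1.0, size=n)
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu_0)

significant = p_value < alpha
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, reject H0: {significant}")
```

Rerunning with a different seed (or with mu_true equal to mu_0) shows how the p-value varies from sample to sample.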
Common Misconceptions
p-values are frequently misunderstood. Let's clarify some common misconceptions:
**Misconception:** The p-value is the probability that the null hypothesis is true.
**Reality:** The p-value is calculated *assuming* the null hypothesis is true, so it cannot tell us the probability that H₀ itself is true.

**Misconception:** A smaller p-value means a larger effect.
**Reality:** p-values depend on both effect size AND sample size. A tiny effect measured with a huge sample can still produce a very small p-value.

**What p-values do measure:** evidence against the null hypothesis. Smaller p-values indicate that the observed data are more inconsistent with H₀.
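The second misconception is easy to demonstrate: the same tiny true effect (assumed here to be 0.02 on unit-variance data) yields very different p-values at different sample sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# The same tiny true effect (mean 0.02 vs. a hypothesized mean of 0),
# measured with a small sample and with a huge one.
small_n = rng.normal(loc=0.02, scale=1.0, size=50)
large_n = rng.normal(loc=0.02, scale=1.0, size=1_000_000)

_, p_small = stats.ttest_1samp(small_n, popmean=0.0)
_, p_large = stats.ttest_1samp(large_n, popmean=0.0)

print(f"n = 50:        p = {p_small:.4f}")
print(f"n = 1,000,000: p = {p_large:.2e}")
```

The effect is identical in both cases; only the sample size changed, yet the large sample produces an extremely small p-value.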
Mathematical Foundation
For a one-sample t-test, we calculate the t-statistic as:

t = (x̄ − μ₀) / (s / √n)
Where:
- x̄ is the sample mean
- μ₀ is the hypothesized population mean
- s is the sample standard deviation
- n is the sample size
The p-value is then calculated using the t-distribution with n − 1 degrees of freedom:

p = 2 · P(Tₙ₋₁ ≥ |t|)

The factor of 2 accounts for the two-tailed test (we care about differences in either direction).
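The formulas above can be checked directly against scipy's built-in one-sample t-test (the sample below is synthetic, generated only for the comparison):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.normal(loc=0.4, scale=1.0, size=25)

n = sample.size
x_bar = sample.mean()
s = sample.std(ddof=1)   # sample standard deviation (n - 1 denominator)
mu_0 = 0.0

# t = (x_bar - mu_0) / (s / sqrt(n))
t_stat = (x_bar - mu_0) / (s / np.sqrt(n))

# Two-tailed p-value from the t-distribution with n - 1 degrees of freedom.
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)

# Sanity check against scipy's implementation.
t_ref, p_ref = stats.ttest_1samp(sample, popmean=mu_0)
print(f"manual: t = {t_stat:.4f}, p = {p_value:.4f}")
print(f"scipy:  t = {t_ref:.4f}, p = {p_ref:.4f}")
```

The hand-computed values and `stats.ttest_1samp` should agree to floating-point precision.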
Key Takeaways
- p-values measure evidence against the null hypothesis
- They depend on both effect size and sample size
- Statistical significance ≠ practical significance
- Always interpret p-values in context
- Consider complementary measures like confidence intervals
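As an example of a complementary measure, a 95% confidence interval for the mean can be built from the same ingredients as the t-test (synthetic data for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.normal(loc=0.3, scale=1.0, size=40)

n = sample.size
x_bar = sample.mean()
se = sample.std(ddof=1) / np.sqrt(n)   # standard error of the mean

# 95% CI for the mean: x_bar +/- t* . SE, where t* is the 97.5th
# percentile of the t-distribution with n - 1 degrees of freedom.
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (x_bar - t_crit * se, x_bar + t_crit * se)
print(f"95% CI for the mean: ({ci[0]:.3f}, {ci[1]:.3f})")
```

Unlike a bare p-value, the interval conveys both the estimated effect size and its uncertainty.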