Effect Size
Definition
Effect size is a quantitative measure of the strength of a relationship between two variables in a study, or of the magnitude of the difference between two groups. It tells us how large a difference or relationship actually is, going beyond the verdict that a result is "statistically significant". Statistical significance only tells you whether a result is unlikely to be due to chance; it does not tell you how meaningful that result is. Effect size does. It is usually expressed as a single number, and larger values indicate larger, more meaningful effects. Common effect size measures include Cohen's d (for the difference between two group means) and the correlation coefficient r (for the strength of a relationship between two variables). Think of it as the "real world" importance of a finding.
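For concreteness, here is a minimal sketch of how Cohen's d is typically computed: the difference in group means divided by the pooled standard deviation. The example scores are made up for illustration; they do not come from the article.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    # Pooled variance weights each group's sample variance by its degrees of freedom.
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical test scores (not from the article), just to show the call.
new_method = [78, 85, 92, 88, 75, 83, 90, 86]
traditional = [72, 80, 88, 83, 70, 79, 85, 81]
print(round(cohens_d(new_method, traditional), 2))  # roughly 0.8 for these made-up scores
```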
Example
A researcher is testing a new teaching method. They compare a class taught with the new method to a class taught with the traditional method on a standardized test.
- Scenario 1: Statistically Significant, Small Effect: The researcher finds a statistically significant difference: the new-method group scores, on average, 5 points higher on the test (with a p-value less than 0.05). However, the effect size (Cohen's d) is 0.2, conventionally considered a small effect. The difference between the groups, while unlikely to be due to chance, is small relative to the spread of scores; the new method isn't dramatically improving them.
- Scenario 2: Statistically Significant, Large Effect: Another researcher tests a different new teaching method and finds a statistically significant difference of 20 points on the same test (with a p-value less than 0.05). The effect size (Cohen's d) is 0.8, conventionally considered a large effect. The difference between the groups is substantial, and the new method is clearly having a major impact on test scores.
Both scenarios are statistically significant, but the effect size tells us which finding is more practically important. The short sketch below shows how the two d values follow from the same mean differences once a spread of scores is assumed.
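The standard deviation of 25 points used here is an assumption for illustration, not a figure from the scenarios; with that spread, the 5-point and 20-point differences reproduce the quoted d values.

```python
# Illustrative only: the pooled standard deviation (25 points) is assumed,
# not taken from the text. Same spread, different mean differences.
pooled_sd = 25.0

for label, mean_difference in [("Scenario 1", 5.0), ("Scenario 2", 20.0)]:
    d = mean_difference / pooled_sd
    print(f"{label}: difference = {mean_difference} points, Cohen's d = {d:.1f}")
# Scenario 1: difference = 5.0 points, Cohen's d = 0.2
# Scenario 2: difference = 20.0 points, Cohen's d = 0.8
```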
Why it Matters
Understanding effect size is essential for interpreting research findings. Statistical significance is affected by sample size: a very large sample can make even a tiny difference statistically significant. A statistically significant result with a small effect size may therefore not be meaningful in the real world. Effect size gives you a sense of the practical importance of a finding, and it helps researchers, practitioners, and consumers of research judge whether a new treatment, intervention, or relationship is truly worthwhile. When comparing studies, focusing on effect sizes lets you compare the magnitude of the findings even when the studies used different sample sizes or statistical tests. Effect size goes beyond whether something works and helps us understand how well it works. The simulation sketch below illustrates the sample-size point.
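This is a minimal simulation sketch, not a result from the article: the group means, spread, and sample size are all assumed, chosen so that a trivially small true difference still comes out "significant" when the sample is huge.

```python
# Assumed scenario: two groups of 100,000 scores differing by only half a point.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000                                            # very large sample per group
control = rng.normal(loc=100.0, scale=15.0, size=n)
treated = rng.normal(loc=100.5, scale=15.0, size=n)    # tiny true difference

t_stat, p_value = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd

print(f"p-value   = {p_value:.2e}")  # far below 0.05: "statistically significant"
print(f"Cohen's d = {d:.3f}")        # around 0.03: well under the 0.2 "small" benchmark
```

The p-value alone would suggest an impressive finding; the effect size makes clear that the half-point difference is practically negligible.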