    Effect Size (German: Effektgröße)

    Also known as:
    Effect Magnitude
    Cohen's d
    Practical Significance
    Updated: 2/11/2026

    Quantifies the strength of a difference or relationship – independent of sample size, unlike the p-value.

    Quick Summary

    Effect Size measures HOW STRONG an effect is (not just whether it exists) – the missing piece alongside the p-value for real business decisions.

    Explanation

    Common measures: Cohen's d (difference of means in units of the pooled standard deviation), r² (proportion of variance explained), odds ratio, relative risk. Rule of thumb (Cohen): d = 0.2 small, d = 0.5 medium, d = 0.8 large.
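    The Cohen's d formula above (mean difference divided by pooled standard deviation) can be sketched with Python's standard library; the helper names `cohens_d` and `label` are illustrative, not from any specific package:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference of means in units of the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    # Pooled variance from the two Bessel-corrected sample variances.
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

def label(d):
    """Cohen's rules of thumb; context-dependent (see Common Pitfalls)."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"
```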

    Marketing Relevance

    p-value only says "significant or not"; Effect Size says "is it worth it?" – critical for business decisions in marketing and product.

    Common Pitfalls

    Cohen's rules of thumb are context-dependent. Small effects can still be business-relevant in large populations. Effect size ≠ causality.

    Origin & History

    Jacob Cohen published "Statistical Power Analysis for the Behavioral Sciences" in 1969 and standardized effect size measures. The APA has recommended reporting effect sizes alongside p-values since 2001. The replication crisis has made them even more important.

    Comparisons & Differences

    Effect Size vs. p-Value

    The p-value depends on sample size (with large N, almost any difference becomes significant); effect size does not grow with N and shows practical relevance.
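    This contrast can be shown numerically. A sketch under the usual large-sample normal approximation (for two equal groups of size n, the t statistic is roughly d·√(n/2); the function name `approx_p_value` is illustrative):

```python
import math

def approx_p_value(d, n_per_group):
    """Two-sided p-value for a two-sample comparison, normal approximation:
    t ≈ d * sqrt(n/2), p = erfc(|t| / sqrt(2))."""
    t = d * math.sqrt(n_per_group / 2)
    return math.erfc(abs(t) / math.sqrt(2))

# The same tiny effect (d = 0.05) stays tiny, but its p-value
# crosses the 0.05 threshold once the sample is large enough.
for n in (100, 1_000, 100_000):
    print(n, approx_p_value(0.05, n))
```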

    Effect Size vs. Confidence Interval

    A confidence interval shows the uncertainty around an estimate; an effect size standardizes the estimate so it can be compared across studies and metrics.
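    A minimal illustration of the difference, using a normal-approximation 95% interval for the raw mean difference (the helper name `diff_ci95` and the 1.96 z-multiplier are assumptions of this sketch):

```python
import math
from statistics import mean, stdev

def diff_ci95(group_a, group_b):
    """Approximate 95% CI for the raw (unstandardized) mean difference.
    It keeps the original units and conveys uncertainty, whereas a
    standardized effect size like Cohen's d conveys a comparable magnitude."""
    diff = mean(group_a) - mean(group_b)
    se = math.sqrt(stdev(group_a) ** 2 / len(group_a) +
                   stdev(group_b) ** 2 / len(group_b))
    return (diff - 1.96 * se, diff + 1.96 * se)
```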
