1. Definition

  • A priori power analysis = calculating the required sample size (n) for a study before collecting data, based on:
    • Desired significance level (α) → risk of Type I error
    • Desired power (1 – β) → probability of detecting a true effect
    • Expected effect size (δ) → the smallest difference you expect or care about detecting
    • Variance in the data (σ²)

Purpose: Make sure your study is large enough to detect meaningful effects but not unnecessarily large (wasting time/resources).


2. Why It’s Important

  • Prevents underpowered studies → risk of false negatives (missing real effects).
  • Prevents overpowered studies → wasting resources, detecting trivial effects.
  • Forces researchers to define the minimum detectable effect (MDE) in advance.
  • Strengthens study credibility (avoids post-hoc justifications).

3. Formula (for a two-sample mean test)

$n = 2\left(\frac{Z_{1-α/2} + Z_{1-β}}{δ}\right)^2 \quad \text{(per group)}$

Where:

  • $Z_{1-α/2}$ = critical z-value for the significance level (e.g., 1.96 for α = 0.05 two-tailed)
  • $Z_{1-β}$ = z-value for the desired power (e.g., 0.84 for 80% power)
  • $δ = \frac{μ_1 - μ_2}{σ}$ = standardized effect size (Cohen's d)

In practice, we use software (e.g., G*Power, R, Python’s statsmodels) to compute this.
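The closed-form calculation can be sketched directly in Python, using scipy for the normal quantiles. The factor of 2 is for the two-sample (per-group) case; the function name is illustrative, not from any library:

```python
import math
from scipy.stats import norm

def sample_size_two_mean(delta, alpha=0.05, power=0.80):
    """Per-group n to detect a standardized effect delta (Cohen's d)
    with a two-sided, two-sample z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # e.g. 0.84 for 80% power
    # Factor of 2: the variance of a difference of two group means is 2σ²/n
    return math.ceil(2 * ((z_alpha + z_beta) / delta) ** 2)

print(sample_size_two_mean(delta=0.5))  # medium effect → 63 per group
```

Note how sensitive n is to δ: halving the effect size roughly quadruples the required sample.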


4. Example

A/B Test Example

  • Goal: Detect if new website design increases conversion from 10% → 11%.
  • Inputs:
    • α = 0.05
    • Power = 0.80
    • Effect size (difference in proportions) = 0.01

→ A priori power analysis shows:

  • Need ≈ 14,750 users per group.

If you only test 1,000 per group, the study will be underpowered → high risk of missing the 1-percentage-point lift.
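The A/B test numbers can be reproduced with statsmodels: `proportion_effectsize` converts the 10% → 11% lift into a standardized effect (Cohen's h), and `NormalIndPower` solves for the per-group sample size:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Cohen's h for the jump from a 10% to an 11% conversion rate
h = proportion_effectsize(0.11, 0.10)

n = NormalIndPower().solve_power(
    effect_size=h, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided"
)
print(round(n))  # per-group sample size, roughly 14,750
```

The exact figure varies slightly with the approximation used (arcsine vs. pooled-variance), but all methods land near 14,700–14,800 per group.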


5. Steps in A Priori Power Analysis

  1. Define research question and minimum meaningful effect.
  2. Choose α (commonly 0.05).
  3. Choose desired power (commonly 0.80 or 0.90).
  4. Estimate effect size (from theory, past studies, pilot data).
  5. Compute required n using formulas or software.

6. Relation to Other Power Analyses

  • A priori power analysis → before study, determines sample size.
  • Post hoc power analysis → after study, estimates achieved power (controversial, often misleading).
  • Sensitivity analysis → given n, α, and power, what is the smallest effect size detectable?

7. Key Takeaways

  • A priori power analysis ensures study is designed to detect effects worth caring about.
  • Prevents both false negatives (β too high) and detecting trivial effects.
  • Common standard: α = 0.05, Power = 0.80.

In short:
A priori power analysis is done before running an experiment to calculate the minimum sample size required to detect a meaningful effect, given α, power, and expected effect size. It ensures your study is neither underpowered nor wasteful.