Effect size in regression analysis
In regression analysis, effect size refers to the strength or practical importance of the relationship between the predictor(s) and the outcome variable. Unlike the effect sizes typically reported with t-tests (such as Cohen's d), regression effect sizes focus on how much variance is explained, or how much change in the dependent variable is associated with a change in the predictors.
📐 Common Effect Size Measures in Regression
✅ 1. R-squared (R²)
- Represents the proportion of variance in the dependent variable that is explained by the predictors.
- Ranges from 0 to 1:
  - 0 = model explains none of the variance
  - 1 = model explains all of the variance
Interpretation (Cohen’s guidelines, very general):
- 0.01 = small
- 0.09 = medium
- 0.25 = large
📝 Note: R² never decreases as predictors are added, so it tends to overstate fit as the model grows. Use Adjusted R² to correct for that.
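As a quick illustration, here is a minimal sketch in Python (statsmodels used in place of SPSS; the data and variable names are made up) showing where R² and Adjusted R² come from in a fitted model:

```python
# Minimal sketch: R-squared and Adjusted R-squared for a two-predictor model
# (illustrative data; statsmodels used instead of SPSS).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))                         # two made-up predictors
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()         # add_constant supplies the intercept
print(f"R-squared:          {model.rsquared:.3f}")
print(f"Adjusted R-squared: {model.rsquared_adj:.3f}")
```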
✅ 2. f² (Cohen’s f-squared)
Used to measure the local effect size of an individual predictor (or block of predictors) or the effect size of the model as a whole.
Formula (whole model): f² = R² / (1 − R²)
For a single predictor or block, compare the full model with a reduced model that omits it: f² = (R²_full − R²_reduced) / (1 − R²_full)
Interpretation:
- 0.02 = small effect
- 0.15 = medium effect
- 0.35 = large effect
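Both versions of f² are simple arithmetic on R² values. The sketch below (plain Python, with made-up example numbers) computes the whole-model f² and the local f² for one predictor:

```python
# Minimal sketch: Cohen's f-squared from R-squared values.
# Whole-model f² uses the model R²; a predictor's local f² compares the
# full model with a reduced model that omits that predictor.

def f_squared_model(r2: float) -> float:
    """f² for the model as a whole: R² / (1 - R²)."""
    return r2 / (1.0 - r2)

def f_squared_local(r2_full: float, r2_reduced: float) -> float:
    """Local f² for one predictor: (R²_full - R²_reduced) / (1 - R²_full)."""
    return (r2_full - r2_reduced) / (1.0 - r2_full)

# Example values (made up): R² = 0.25 for the full model,
# R² = 0.18 after dropping one predictor.
print(f_squared_model(0.25))         # 0.333... -> roughly a "large" effect
print(f_squared_local(0.25, 0.18))   # 0.093... -> between "small" and "medium"
```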
✅ 3. Standardized Beta Coefficients (β)
- SPSS can give standardized coefficients, which show the relative effect size of each predictor.
- These coefficients are expressed in standard deviation units, making them directly comparable across predictors measured on different scales.
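Outside SPSS, one common way to obtain standardized betas is to z-score the outcome and the predictors and refit the model; the slopes of that model are the standardized coefficients. A minimal sketch (made-up data, statsmodels for illustration):

```python
# Minimal sketch: standardized beta coefficients via z-scored variables
# (illustrative data; statsmodels used in place of SPSS).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2)) * np.array([2.0, 10.0])   # predictors on different scales
y = 1.5 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(size=n)

def zscore(a):
    return (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)

# Fit on z-scored X and y: the slopes are the standardized betas,
# comparable across predictors regardless of their original units.
betas = sm.OLS(zscore(y), sm.add_constant(zscore(X))).fit().params[1:]
print("Standardized betas:", np.round(betas, 3))
```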
📊 Where to See This in SPSS
- Linear regression path:
  - Go to: Analyze > Regression > Linear
  - Under Statistics, select:
    - R squared change
    - Descriptives
    - Part and partial correlations (the squared part/semi-partial correlation is another effect size; see the sketch after this list)
- Standardized Coefficients:
  - Shown in the regression output in the Coefficients table under “Beta”.
- f² Calculation:
  - Manually calculate using R² from the model:
    f² = R² / (1 - R²)
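For the part (semi-partial) correlation mentioned above, its square equals the increase in R² when that predictor is entered last, i.e. the drop in R² when it is removed from the full model. A minimal sketch of that calculation (made-up data; statsmodels used in place of the SPSS output):

```python
# Minimal sketch: squared semi-partial (part) correlation for each predictor,
# computed as the drop in R² when that predictor is removed from the model
# (illustrative data; statsmodels used instead of SPSS).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 3))
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)

full = sm.OLS(y, sm.add_constant(X)).fit()
for j in range(X.shape[1]):
    reduced = sm.OLS(y, sm.add_constant(np.delete(X, j, axis=1))).fit()
    sr2 = full.rsquared - reduced.rsquared   # squared part correlation of predictor j
    print(f"predictor {j}: squared part correlation = {sr2:.3f}")
```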
✅ Final Tip
Use effect size in regression alongside p-values. A predictor might be statistically significant but still explain a tiny amount of variance, which effect sizes will reveal.