Effect size in regression analysis
In regression analysis, effect size refers to the strength or practical importance of the relationship between the predictor(s) and the outcome variable. Unlike the effect sizes used with t-tests (e.g., Cohen's d), regression effect sizes focus on how much variance is explained or how much change in the dependent variable is associated with a change in the predictors.
Common Effect Size Measures in Regression
1. R-squared (R²)
- Represents the proportion of variance in the dependent variable that is explained by the predictors.
- Ranges from 0 to 1:
- 0 = model explains none of the variance
- 1 = model explains all the variance
Interpretation (Cohen's guidelines, very general):
- 0.01 = small
- 0.09 = medium
- 0.25 = large
Note: R² increases with more predictors. Use Adjusted R² to correct for that.
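If you want to verify these numbers outside SPSS, here is a minimal Python sketch (with made-up data) that computes R² and adjusted R² by hand:

```python
# Minimal sketch (not SPSS): R² and adjusted R² for a toy multiple regression.
# All data values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 2                       # n observations, k predictors
X = rng.normal(size=(n, k))         # hypothetical predictors
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=n)  # hypothetical outcome

# Ordinary least squares with an intercept column
X_design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
y_hat = X_design @ beta

# R² = 1 - SS_residual / SS_total
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Adjusted R² penalizes the number of predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(f"R² = {r2:.3f}, adjusted R² = {adj_r2:.3f}")
```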
2. f² (Cohen's f-squared)
Used to measure the local effect size of an individual predictor or of the model as a whole.
Formula (for the model as a whole): f² = R² / (1 - R²)
Interpretation:
- 0.02 = small effect
- 0.15 = medium effect
- 0.35 = large effect
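A minimal Python sketch of the whole-model f² calculation, using a hypothetical R² value rather than output from any real dataset:

```python
# Minimal sketch: Cohen's f² for the model as a whole, computed from R².
def cohens_f2(r2: float) -> float:
    """f² = R² / (1 - R²)."""
    return r2 / (1 - r2)

r2 = 0.13                      # hypothetical R² from a model summary
f2 = cohens_f2(r2)
print(f"f² = {f2:.3f}")        # ≈ 0.149, roughly a medium effect by Cohen's guidelines
```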
3. Standardized Beta Coefficients (β)
- SPSS can give standardized coefficients, which show the relative effect size of each predictor.
- These coefficients are expressed in standard-deviation units (the expected change in the outcome, in SDs, for a one-SD change in the predictor), making them easier to compare across variables.
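A minimal Python sketch of how standardized betas can be obtained outside SPSS, by z-scoring the outcome and predictors before fitting ordinary least squares (data are made up):

```python
# Minimal sketch: standardized beta coefficients via z-scored variables.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))                              # hypothetical predictors
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)   # hypothetical outcome

def zscore(a):
    return (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)

Xz, yz = zscore(X), zscore(y)

# With standardized variables, the OLS slopes are the standardized betas
# (no intercept is needed because all means are 0 after z-scoring).
betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
print("standardized betas:", np.round(betas, 3))
```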
Where to See This in SPSS
- Linear regression path:
- Go to: Analyze > Regression > Linear
- Under Statistics, select:
- R squared change
- Descriptives
- Part and partial correlations (for the squared semi-partial correlation as another effect size; see the sketch after this list)
- Standardized Coefficients:
- Shown in the regression output's Coefficients table under the "Beta" column.
- f² Calculation:
- Manually calculate using R² from the model:
f² = R² / (1 - R²)
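The squared part (semi-partial) correlation that SPSS reports for each predictor can also be reproduced as the drop in R² when that predictor is removed from the model. A minimal Python sketch with made-up data:

```python
# Minimal sketch: squared semi-partial (part) correlation for one predictor,
# computed as the drop in R² when that predictor is dropped from the model.
import numpy as np

def r_squared(X, y):
    """R² from an OLS fit with an intercept."""
    X_design = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
    resid = y - X_design @ beta
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(2)
n = 150
X = rng.normal(size=(n, 3))                      # three hypothetical predictors
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=n)

r2_full = r_squared(X, y)
r2_without_first = r_squared(X[:, 1:], y)        # drop predictor 0
sr2 = r2_full - r2_without_first                 # squared semi-partial correlation
print(f"sr² for predictor 0 = {sr2:.3f}")
```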
Final Tip
Use effect size in regression alongside p-values. A predictor might be statistically significant but still explain a tiny amount of variance, which effect sizes will reveal.