I would like to discuss the issue of what effect size to report for factors in multiple regression. Of course, for the total model the standard is to report the multiple correlation R-squared. My concern is with the effect sizes for the individual predictors.
It seems to me that several different effect sizes are being reported for the individual predictors in multiple regression: usually the standardized regression coefficient (beta), but also the partial r, the semi-partial r, and the semi-partial r squared. One problem with the standardized regression coefficient is that it is rather meaningless for dichotomous predictors (you cannot increase a dichotomous variable by 1 SD). It seems to me that the partial r is more useful as an effect size in multiple regression, and it can easily be computed from the t value and the residual degrees of freedom.
When comparing groups (i.e., when using t-tests and ANOVAs) a handy effect size is r, as it can easily be computed from t and the degrees of freedom: r = sqrt(t^2 / (t^2 + df)).
It seems to me that the effect size r can be computed in the same way in a multiple regression context. The r so computed is equal to the partial r between the predictor and the outcome. (At least for the continuous and dichotomous predictors in the examples I computed.)
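To make the claim concrete, here is a small sketch (using NumPy and simulated data, which are my own illustrative assumptions) that fits an OLS model, computes r = sqrt(t^2 / (t^2 + df)) from a coefficient's t value and the residual degrees of freedom, and checks that it matches the partial correlation obtained the long way, by correlating the residuals of y and of the predictor after regressing each on the remaining predictors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

# Fit y ~ 1 + x1 + x2 by ordinary least squares
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
df = n - X.shape[1]                      # residual degrees of freedom
sigma2 = resid @ resid / df
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t = beta / se                            # t value for each coefficient

# Effect size r from t and df, for the x1 coefficient
r_from_t = np.sqrt(t[1] ** 2 / (t[1] ** 2 + df))

# Partial correlation of y and x1 controlling for x2:
# correlate the residuals after regressing each on [1, x2]
Z = np.column_stack([np.ones(n), x2])
ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
rx = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
partial_r = (ry @ rx) / np.sqrt((ry @ ry) * (rx @ rx))

# The two agree up to floating-point error (r_from_t gives |partial_r|,
# since the square root discards the sign)
print(r_from_t, partial_r)
```

Note that the formula recovers only the magnitude of the partial r; the sign has to be read off the coefficient (or the t value) itself.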
Is there any reason that r and partial r are not widely used as effect sizes in a regression context? Should partial r become the standard effect size reported in a regression context?