Initial setup

rm(list=ls())     # clear the workspace
library(ggplot2)  # plotting
library(MASS)     # e.g., mvrnorm() for simulating multivariate normal data
library(effects)  # allEffects() for plotting model effects
ts = 25           # presumably the base text size used in the plots below



Correlation

Example of a scatter plot featuring a correlation of exactly r = 0.40.

It can be understood as follows: a +1.00 SD increase in one variable corresponds to an expected +0.40 SD increase in the other.

The standardized regression coefficient in a linear model (with a continuous predictor) can be interpreted in a similar fashion.
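A scatter plot like that can be simulated directly (a minimal sketch: sample size, seed, and variable names are arbitrary; it relies on MASS and ggplot2 from the initial setup, and mvrnorm() with empirical=TRUE forces the sample correlation to be exactly 0.40):

set.seed(1)
Sigma = matrix(c(1, 0.40,
                 0.40, 1), nrow=2)   # desired correlation matrix
dat = as.data.frame(mvrnorm(n=200, mu=c(0,0), Sigma=Sigma, empirical=TRUE))
colnames(dat) = c("x","y")
cor(dat$x, dat$y)   # exactly 0.40
ggplot(dat, aes(x=x, y=y)) + geom_point(size=3) + theme_bw(base_size=ts)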




Standardized Mean Difference

SMDs (e.g., Cohen’s d, possibly with Hedges’ correction for small samples) are reported to compare group average values on a standardized metric. Here is an example of Cohen's d = 1.50 between “population green” and “population purple”.
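As a minimal sketch (group labels, colors, and sample size are purely illustrative), two normal populations whose means differ by d = 1.50 could be simulated and plotted like this:

set.seed(1)
dat = data.frame(value = c(rnorm(1e4, mean=0.0, sd=1),
                           rnorm(1e4, mean=1.5, sd=1)),   # same SD, so the true d is 1.50
                 population = rep(c("green","purple"), each=1e4))
ggplot(dat, aes(x=value, fill=population)) +
  geom_density(alpha=.5) +
  scale_fill_manual(values=c("darkgreen","purple")) +
  theme_bw(base_size=ts)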

Height is commonly used to illustrate SMDs, because it is an immediately visible, non-latent, and more or less normally distributed trait. For example, from Funder and Ozer (2019):

Is this true? Here are the WHO growth-curve data for height. In fact, it looks like there is virtually no change between 16- and 17-year-old girls.

… and here is a shiny app for producing plots and computing effect sizes based on those data.

https://enricotoffalini.shinyapps.io/Dati_altezza_WHO/

Indeed, for 16- vs 17-year-old girls we get Cohen's d = 0.05, which is not just “small”: it is practically negligible.

SMD estimated from model parameters

Let’s take this linear model with group as a predictor, plus three variables adjusted for (aka “covariates”). The coefficients are clearly not standardized. We want to estimate an effect size for the between-group difference (i.e., in terms of an SMD), while adjusting for the covariates. What shall we do?
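For reference, a model of this kind can be fitted along these lines (with df, y, group, and x1–x3 as in the output below):

fit = lm(y ~ group + x1 + x2 + x3, data = df)
summary(fit)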

## 
## Call:
## lm(formula = y ~ group + x1 + x2 + x3, data = df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -45.855 -10.479  -0.155  10.741  47.004 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 56.46380    6.71143   8.413 4.31e-16 ***
## group1       3.91832    1.44061   2.720  0.00676 ** 
## x1           0.07567    0.04982   1.519  0.12944    
## x2          -0.22857    0.05073  -4.505 8.27e-06 ***
## x3           1.85766    0.22476   8.265 1.29e-15 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 15.52 on 495 degrees of freedom
## Multiple R-squared:  0.159,  Adjusted R-squared:  0.1522 
## F-statistic:  23.4 on 4 and 495 DF,  p-value: < 2.2e-16
plot(allEffects(fit))

Here is the trick: take the raw “group” coefficient, which represents the between-group contrast net of all other predictors. Then divide it by the residual standard deviation, which is an estimate of the pooled standard deviation (net of all predictors).

→ Here it is: 3.92 / 15.52 = 0.25
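In R, the same computation can be obtained directly from the fitted model (a minimal sketch, assuming the model object is called fit as in the summary above; sigma() returns the residual standard deviation):

coef(fit)["group1"] / sigma(fit)   # about 3.92 / 15.52 = 0.25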




Correlation and Cohen’s d can be transformed into each other

From Borenstein et al. (2009), SUPER useful for meta-analysis:
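The conversion formulas (as implemented in the functions below) are d = 2r / √(1 − r²) and r = d / √(d² + a), where a = (n1 + n2)² / (n1·n2) is a correction factor based on the group sizes (a = 4 for equal groups).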

Let’s write the conversion functions (here only for effect size estimates):

r2d = function(r){
  # convert a correlation r into Cohen's d (Borenstein et al., 2009)
  d = 2*r / sqrt(1 - r^2)
  return(d)
}
d2r = function(d,n1=NA,n2=NA){
  # convert Cohen's d into a correlation r; n1 and n2 are the group sizes
  # (if not provided, two large equal groups are assumed, so a = 4)
  if(is.na(n1)|is.na(n2)) n1=n2=1e5
  a = (n1 + n2)^2 / (n1 * n2)
  r = d / sqrt(d^2 + a)
  return(r)
}
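As a quick check, the r = 0.40 from the scatter plot example above corresponds to roughly d = 0.87:

r2d(0.40)   # about 0.87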

EXAMPLE OF CONVERSION TABLE:

tab = data.frame(Cohens.d=seq(-1.6,1.6,.2),Correlation=NA)
tab$Correlation = round(d2r(tab$Cohens.d),2)
tab
##    Cohens.d Correlation
## 1      -1.6       -0.62
## 2      -1.4       -0.57
## 3      -1.2       -0.51
## 4      -1.0       -0.45
## 5      -0.8       -0.37
## 6      -0.6       -0.29
## 7      -0.4       -0.20
## 8      -0.2       -0.10
## 9       0.0        0.00
## 10      0.2        0.10
## 11      0.4        0.20
## 12      0.6        0.29
## 13      0.8        0.37
## 14      1.0        0.45
## 15      1.2        0.51
## 16      1.4        0.57
## 17      1.6        0.62



Small, medium, large?

From Funder and Ozer (2019):

In fact, meta-analyses of effects in individual differences generally suggest a more sobering reality in psychological research. This makes sense! Spontaneous behavior depends on a lot of factors, so any single effect is bound to be small compared with the overall observed variability:

In the end, Funder and Ozer (2019) suggest the following interpretation (a conversion to Cohen's d follows the list):

r = 0.05 → very small effect “for the explanation of single events but potentially consequential in the not-very-long run”

r = 0.10 → small effect “at the level of single events but potentially more ultimately consequential”

r = 0.20 → medium effect “that is of some explanatory and practical use even in the short run and therefore even more important”

r = 0.30 → large effect “that is potentially powerful in both the short and the long run”
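Using the r2d() function defined above, these benchmarks correspond roughly to Cohen's d values of 0.10, 0.20, 0.41, and 0.63:

round(r2d(c(0.05, 0.10, 0.20, 0.30)), 2)   # 0.10 0.20 0.41 0.63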

Example by Funder and Ozer (2019) of a very small correlation for a single event that is relevant in the long run:

→ Agreeableness may correlate around r = 0.05 with the success of a single social interaction … But what happens at the end of the year if a person has had 20 social interactions per day? (My computation: out of 7300 interactions in a year, having +1 SD in agreeableness implies about 185 additional positive interactions.)
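One way to roughly reproduce that back-of-the-envelope figure (a sketch that assumes the “success” of each interaction is a 50/50 binary outcome, so its SD is 0.5; the exact assumptions behind the +185 above may differ slightly):

r = 0.05
n_interactions = 20 * 365   # 7300 interactions in a year
extra_p = r * 0.5           # +1 SD in agreeableness -> about +0.025 probability of success per interaction
n_interactions * extra_p    # about 183 additional positive interactions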

→ Another example: a “very small” treatment effect… but what happens if that very small effect acts every single day in the life of a person?