This library computes Cohen's dp and its confidence interval under any experimental design. In the past, researchers developed distinct versions of the standardized mean difference for between- and within-subject designs. As a consequence, these various estimators could not be compared with one another and, more importantly, across experimental designs. Lakens (2013) noted the existence of two different measures in within-subject designs, and Westfall (2016) noted the existence of at least five different sorts of standardized mean differences. He concluded by making this very important point: all these estimators ARE NOT Cohen's d measures.
The measure that J. Cohen created (Cohen, 1969) is the mean difference standardized by the pooled standard deviation. Hence, measures such as dav, dz, da, etc., are not Cohen's d and, more importantly, they cannot be compared! They all return different values because they measure different things. They are not just different; they can be markedly different. As an example, given the same means and standard deviations, a dz can be smaller or larger than Cohen's d depending on the amount of correlation across the pairs of data.
This whole mess implies a lack of comparability and confusion as to which statistic was actually reported. For that reason, I chose to denote the true Cohen's d with a distinct subscript p, as in dp, so that (i) the difference is clearly visible (the reader is not left guessing what d represents); (ii) it is clear that the pooled standard deviation, and only this statistic, was used to standardize the mean difference. Further, advocating a unique statistic for the standardized mean difference allows comparisons across studies, whether they used a within-subject or a between-subject design.
MBESS is an excellent package that already computes standardized mean differences and returns confidence intervals (Kelley, 2022). However, it does not compute confidence intervals in within-subject designs directly. The Algina and Keselman approximate method can be implemented within MBESS with some programming (Cousineau & Goulet-Pelletier, 2021). This package, on the other hand, can be used with any experimental design. It only requires an argument design which specifies the type of experimental design.
The confidence interval in within-subject designs was unknown until recently. In recent work (Cousineau, 2022, submitted), its exact expression was found when the population correlation is known, and an approximation was proposed when only the sample correlation is known.
You can install this library on your computer from CRAN (note the uppercase C and uppercase L):
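A sketch of the install command, assuming the package is published on CRAN under the name CohensdpLibrary (the name used in the help() call at the end of this document):

```r
install.packages("CohensdpLibrary")
```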
or if the library devtools is installed with:
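Presumably via install_github, pointing at the repository mentioned at the end of this document:

```r
devtools::install_github("dcousin3/CohensdpLibrary")
```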
and before using it:
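That is, load the library into your session:

```r
library(CohensdpLibrary)
```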
The main function is Cohensdp, which returns Cohen's dp and its confidence interval under various designs. For example, the following returns the triplet (lower 95% confidence interval bound, dp, upper 95% confidence interval bound) given the sample means, the sample standard deviations, and the correlation:
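Presumably the call producing this triplet is the unwrapped version of the summarize() example shown next:

```r
Cohensdp( statistics = list(m1=76, m2=72, n=20, s1=14.8, s2=18.8, r=0.2), design = "within")
```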
```
##  -0.3340867 0.2364258 0.7998962
```
You get a more readable output with
```r
summarize(Cohensdp( statistics = list(m1=76, m2=72, n=20, s1=14.8, s2=18.8, r=0.2), design = "within") )
```

```
## Cohen's dp = 0.236
## 95.0% Confidence interval = [-0.334, 0.800]
```
The design can be replaced with between for a between-subject design:
```r
summarize(Cohensdp( statistics = list(m1=76, m2=72, n1=10, n2=10, s1=14.8, s2=18.8), design = "between") )
```

```
## Cohen's dp = 0.236
## 95.0% Confidence interval = [-0.647, 1.113]
```
Here r is removed, as there is no correlation in a between-group design, and n is provided separately for each group.
Finally, it is also possible to get a Cohen's dp from a single group as long as you have a hypothetical mean m0 to compare the sample mean to, e.g.,
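A sketch of such a call; the parameter names m, m0, s, and the design label "single" are my assumptions (only m0 is confirmed by the text above), with values chosen so that (76 - 72)/14.8 ≈ 0.270 matches the output below:

```r
summarize(Cohensdp( statistics = list(m=76, m0=72, n=20, s=14.8), design = "single") )
```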
```
## Cohen's dp = 0.270
## 95.0% Confidence interval = [-0.180, 0.713]
```
Use explain for additional information on the result.
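For instance, assuming explain wraps a result the same way summarize does:

```r
explain(Cohensdp( statistics = list(m1=76, m2=72, n=20, s1=14.8, s2=18.8, r=0.2), design = "within") )
```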
Check the web site https://github.com/dcousin3/CohensdpLibrary for more. Also, help(CohensdpLibrary) will get you started.