
I've been asked by a reviewer to provide effect sizes for pairwise, planned comparisons. In the experimental design, I have one fixed factor with three conditions (Consistency); Passage and Subject are random effects. I would like to report Cohen's d for three planned comparisons (Consistency1 - Consistency2, Consistency1 - Consistency3, Consistency2 - Consistency3). However, when I compute Cohen's d, I get only two effect sizes, one labelled Consistency2 (d = .732) and the other labelled Consistency3 (d = .296). First, I'm not sure which comparisons these two effect sizes refer to. Second, how can I get Cohen's d for the third comparison?

Here is my code:

library(lme4)      # lmer()
library(car)       # Anova()
library(EMAtools)  # lme.dscore()
library(emmeans)   # emmeans(), pairs()

PdatFIN$RT_target <- as.numeric(PdatFIN$RT_target)
PdatFIN$Consistency <- as.factor(PdatFIN$Consistency)
str(PdatFIN)

m <- lmer(RT_target ~ Consistency + (1 | Subject) + (1 | Passage), PdatFIN)
summary(m)
Anova(m)
lme.dscore(m,data=PdatFIN,type = "lme4")
m.contrasts <- emmeans(m,"Consistency")
pairs(m.contrasts)

Here is my output:

group: 1
             vars   n    mean     sd median trimmed    mad min  max range skew kurtosis    se
RT_target       1 148 1709.04 527.09   1631 1697.14 650.12 846 2974  2128 0.22    -1.06 43.33
group: 2
             vars   n    mean     sd median trimmed    mad min  max range skew kurtosis    se
RT_target       1 162 2137.73 741.01 2056.5 2081.52 717.58 897 4414  3517 0.70     0.34 58.22
group: 3
             vars   n    mean     sd median trimmed    mad min  max range skew kurtosis    se
RT_target       1 166 1888.73 736.85   1740 1811.35 708.68 750 4318  3568 1.01     0.83 57.19

Linear mixed model fit by REML ['lmerMod']
Formula: RT_target ~ Consistency + (1 | Subject) + (1 | Passage)
   Data: PdatFIN

REML criterion at convergence: 7314.2

Scaled residuals:
    Min      1Q  Median      3Q     Max
-2.0682 -0.6320 -0.1771  0.5346  4.1376

Random effects:
 Groups   Name        Variance Std.Dev.
 Subject  (Intercept) 203923   451.6
 Passage  (Intercept)  23091   152.0
 Residual             239054   488.9
Number of obs: 476, groups:  Subject, 30; Passage, 18

Fixed effects:
             Estimate Std. Error t value
(Intercept)   1711.58      98.75  17.333
Consistency2   427.71      56.19   7.612
Consistency3   173.18      55.99   3.093

Correlation of Fixed Effects:
            (Intr) Cnsst2
Consistncy2 -0.301
Consistncy3 -0.303  0.532

Analysis of Deviance Table (Type II Wald chisquare tests)

Response: RT_target
             Chisq Df Pr(>Chisq)
Consistency 59.216  2  1.385e-13 ***
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

> # Cohen's d
> lme.dscore(m, data = PdatFIN, type = "lme4")
                    t       df         d
Consistency2 7.612028 432.5373 0.7320127
Consistency3 3.092820 434.6247 0.2967068

> m.contrasts <- emmeans(m, "Consistency")
> pairs(m.contrasts)
 contrast                    estimate   SE  df t.ratio p.value
 Consistency1 - Consistency2     -428 56.2 432  -7.607  <.0001
 Consistency1 - Consistency3     -173 56.1 434  -3.090  0.0060
 Consistency2 - Consistency3      255 54.3 429   4.688  <.0001

Degrees-of-freedom method: kenward-roger
P value adjustment: tukey method for comparing a family of 3 estimates

I really appreciate the help and guidance. I'm new to LME and it's been a challenge trying to interpret my output. Thank you.

Elizabeth

1 Answer


The Cohen's d's you get are for Consistency1 vs. Consistency2 and Consistency1 vs. Consistency3 (i.e., each level compared against the reference level, Consistency1, which is why you only see two). You get the final d (Consistency2 vs. Consistency3) by changing your reference group, e.g.:

# Note: Consistency needs to be an unordered factor for releveling to work
m2 <- lmer(RT_target ~ relevel(Consistency, ref = 2) + (1 | Subject) + (1 | Passage), PdatFIN)

lme.dscore(m2, data = PdatFIN, type = "lme4")

(There may be a more elegant way that I don't know about, but this at least gives you the final d relatively easily.)
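If you want to stay inside emmeans, there is also eff_size(), which standardizes the pairwise contrasts by a sigma (and its degrees of freedom) that you supply yourself. A minimal sketch, assuming you standardize by the square root of the summed variance components; that choice of sigma, and the use of df.residual() for edf, are assumptions you would need to justify:

# Sketch only: standardize each pairwise contrast by the model's total SD.
# The choice of sigma (all variance components vs. residual only) and of edf
# will change the resulting d's.
total.sd <- sqrt(sum(as.data.frame(VarCorr(m))$vcov))
eff_size(m.contrasts, sigma = total.sd, edf = df.residual(m))

These values generally won't match lme.dscore()'s, because the two functions standardize differently (lme.dscore() works from the t statistic and its degrees of freedom).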

However, please note that computing effect sizes from multilevel (mixed) models is controversial; see, for instance, this thread, this thread, and this discussion. It may be better to just report the estimated marginal means or contrast estimates along with CIs and/or p-values.
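With the objects you already have, that is just a couple of lines; a sketch (infer = c(TRUE, TRUE) asks emmeans for both confidence intervals and tests):

# Estimated marginal means with 95% CIs
summary(m.contrasts, infer = c(TRUE, TRUE))

# Pairwise contrasts with CIs and Tukey-adjusted p-values
summary(pairs(m.contrasts), infer = c(TRUE, TRUE))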

Sointu
  • Thank you so much, your feedback is very helpful. I did come across some of the controversy regarding effect sizes and mixed modeling in my search and would rather not report them; however, I need to satisfy the reviewers (although I like your suggestion to consider reporting estimates and CIs). The reviewer asked for Bayes factors or effect sizes for the planned comparisons. I computed Bayes factors but am not sure I did them correctly. I guess I can add another post! Again, thanks so much for your help!!!! – Elizabeth Jan 01 '24 at 16:55
  • I'm glad it helped! And yes, I forgot while typing that it was a reviewer request, probably best to give them what they want :) – Sointu Jan 01 '24 at 17:01