I've seen some similar questions (ex, ex2), but hopefully this is not a duplicate. As mentioned in one of them, I'm using emmeans to do pairwise comparisons after fitting my linear mixed-effects model. Question: what is the most appropriate effect size estimate for these comparisons? Eta squared? Partial eta squared?
- My model + my post-hoc:
library(lme4)
library(emmeans)
library(dplyr)
mod1 <- lmer(CONT_Y ~ YEAR * MY_GROUP + (1|PARTICIPANTS), data = data, REML = FALSE)
group <- emmeans(mod1, ~ MY_GROUP | YEAR) %>% pairs(adjust = "tukey")
year  <- emmeans(mod1, ~ YEAR | MY_GROUP) %>% pairs(adjust = "tukey")
I've also seen the eff_size() option (such as here) from the same package, but I couldn't understand from its documentation which estimate it is actually computing. Is eff_size() equivalent to Cohen's d? I need some help working out which estimate would be best for me and how to obtain it in R.
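For context, this is the kind of call I can get to run, based on my reading of ?eff_size. It is only a sketch: I don't know whether sigma(mod1) (just the residual SD) and df.residual(mod1) are the right inputs for a mixed model, which is part of what I'm asking.

emm_group <- emmeans(mod1, ~ MY_GROUP | YEAR)
## naive attempt: sigma(mod1) is only the residual SD of the lmer fit,
## and df.residual(mod1) is a naive residual df, so these may well be wrong
eff_size(emm_group, sigma = sigma(mod1), edf = df.residual(mod1))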
Thanks in advance! Tips on how to report these results would be very appreciated too :)

EDIT for bounty:
Russell kindly answered my issue, but I still have some remaining questions:
A) What type of Cohen's d am I getting?
B) Am I estimating it right?
### the model is:
mod1 <- lmer(CONT_Y ~ MY_GROUP * YEAR + (1|PARTICIPANTS), data = data, REML = FALSE)

### estimate the EMMs with emmeans:
group <- emmeans(mod1, ~ MY_GROUP | YEAR)
year  <- emmeans(mod1, ~ YEAR | MY_GROUP)

### pairwise comparisons:
group_p <- pairs(group, adjust = "tukey")
year_p  <- pairs(year, adjust = "tukey")
### correcting sigma and edf
sigmaValues <- VarCorr(mod1)
sigmaValues
sigma_total <- sqrt(0.25743^2 + 0.15054^2)  ## SDs taken from the VarCorr() output above
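A side note in case it helps: I think the same combined SD can be pulled straight out of the model instead of copying the numbers by hand. This is just my own sketch, assuming the only variance components are the random intercept and the residual, as in mod1 above.

vc <- as.data.frame(VarCorr(mod1))   ## columns: grp, var1, var2, vcov, sdcor
sigma_total <- sqrt(sum(vc$vcov))    ## same quantity as above, without the rounding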
### calculate Cohen's d:
### before, I had: eff1 <- eff_size(emm1, sigma = sigma(mod1), edf = df.residual(mod1))
group_p  ### check the lowest df in the output
eff1 <- eff_size(group, sigma = sigma_total, edf = 60)  ## effect sizes for group
year_p   ### check the lowest df in the output
eff2 <- eff_size(year, sigma = sigma_total, edf = 60)   ## and for year
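Also for question B: rather than reading the 60 off the printed output, I think the df column can be pulled from the contrast summaries directly. Again, this is just my own sketch, not necessarily the recommended way.

edf_group <- min(as.data.frame(group_p)$df)  ## smallest df among the group contrasts
edf_year  <- min(as.data.frame(year_p)$df)   ## smallest df among the year contrasts
eff_size(group, sigma = sigma_total, edf = edf_group)
eff_size(year,  sigma = sigma_total, edf = edf_year)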
C) How can I adapt this for lmer models?
simp <- lm(CONT_Y ~ MY_GROUP * YEAR, data = data)
emm  <- emmeans(simp, ~ MY_GROUP | YEAR)
eff_size(pairs(emm), sigma = sigma(simp), edf = df.residual(simp), method = "identity")
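For completeness, my best guess at the lmer version of that call, using the combined sigma and the df from above; whether those are the right inputs is exactly what I'm unsure about (the object names are mine).

## tentative lmer adaptation; sigma_total and edf_group are defined further up
eff_size(group_p, sigma = sigma_total, edf = edf_group, method = "identity")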
NOTE: I guess it goes without saying, but I DON'T have a background in math, so please bear with me :)
data:
data <- structure(list(PARTICIPANTS = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L,
3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L,
7L, 7L, 7L, 7L, 8L, 8L, 8L, 8L, 9L, 9L, 9L, 9L, 10L, 10L, 10L,
10L, 11L, 11L, 11L, 11L, 12L, 12L, 12L, 12L, 13L, 13L, 13L, 13L,
14L, 14L, 14L, 14L, 15L, 15L, 15L, 15L, 16L, 16L, 16L, 16L, 17L,
17L, 17L, 17L, 18L, 18L, 18L, 18L, 19L, 19L, 19L, 19L, 20L, 20L,
20L, 20L, 21L, 21L, 21L, 21L), CONT_Y = c(19.44, 20.07, 19.21,
16.35, 11.37, 12.82, 19.42, 18.94, 19.59, 20.01, 19.7, 17.92,
18.78, 19.21, 19.27, 18.46, 19.52, 20.02, 16.19, 19.97, 13.83,
15.93, 14.79, 21.55, 18.8, 19.42, 19.27, 19.37, 17.14, 14.45,
17.63, 20.01, 20.28, 17.93, 19.36, 20.15, 16.06, 17.04, 19.16,
20.1, 16.44, 18.39, 18.01, 19.05, 18.04, 19.69, 19.61, 16.88,
19.02, 20.42, 18.27, 18.43, 18.08, 17.1, 19.98, 19.43, 19.71,
19.93, 20.11, 18.41, 20.31, 20.1, 20.38, 20.29, 13.6, 18.92,
19.05, 19.13, 17.75, 19.15, 20.19, 18.3, 19.43, 19.8, 19.83,
19.53, 16.14, 21.14, 17.37, 18.73, 16.51, 17.51, 17.06, 19.42
), CATEGORIES = structure(c(1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L,
1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L,
1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L,
1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L,
1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L,
1L, 2L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 2L, 2L), .Label = c("A",
"B"), class = "factor"), MY_GROUP = structure(c(1L, 2L, 1L, 2L,
1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L,
1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L,
1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L,
1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L,
1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L
), .Label = c("G1", "G2"), class = "factor")), row.names = c(NA,
-84L), class = c("tbl_df", "tbl", "data.frame"))
rename column:
data <- data %>% rename(., YEAR = CATEGORIES)
I know it's Cohen's d from my issue on GitHub, but I don't know what kind of Cohen's d eff_size() is returning (link above), due to the change in the calculation. I came across this article yesterday, and I guess I'm being required to do something similar to calculate Cohen's d: https://www.tandfonline.com/doi/full/10.1080/13825585.2021.1991262 (but they reported the betas, while I was reporting it as a t-test). – Larissa Cury Feb 06 '23 at 11:48