Consider three different studies based on three different datasets. Each of them has a continuous predictor x, a dichotomous predictor z, and a continuous outcome y.
In each of these three studies, researchers analyze the x*z interaction on y.
Here are three reproducible datasets that meet these criteria:
# generating data
set.seed(1839) # set seed
# dataset 1
dat1 <- data.frame(x = rnorm(50), z = factor(c(rep("A", 25), rep("B", 25))))
dat1$y <- c(rnorm(25), dat1$x[26:50] + rnorm(25, 0, 2))
# dataset 2
dat2 <- data.frame(x = rnorm(50), z = factor(c(rep("A", 25), rep("B", 25))))
dat2$y <- c(rnorm(25), dat2$x[26:50] + rnorm(25, 0, 2.3))
# dataset 3
dat3 <- data.frame(x = rnorm(50), z = factor(c(rep("A", 25), rep("B", 25))))
dat3$y <- c(rnorm(25), dat3$x[26:50] + rnorm(25, 0, 1.5))
Now, here are the coefficient tables from each of the three models:
> # data 1 results
> summary(lm(y~x*z, dat1))$coef
               Estimate Std. Error     t value   Pr(>|t|)
(Intercept) -0.27501165  0.2685408 -1.02409633 0.31114537
x            0.02228078  0.3083321  0.07226228 0.94270647
zB          -0.59286879  0.3791864 -1.56352861 0.12478258
x:zB         0.70988125  0.3978048  1.78449621 0.08093913
> # data 2 results
> summary(lm(y~x*z, dat2))$coef
               Estimate Std. Error    t value    Pr(>|t|)
(Intercept)  0.09181368  0.3377780  0.2718166 0.786979289
x           -0.31598416  0.3566014 -0.8860990 0.380173233
zB          -0.06912280  0.4773531 -0.1448043 0.885497964
x:zB         1.66773706  0.4983741  3.3463555 0.001638411
> # data 3 results
> summary(lm(y~x*z, dat3))$coef
               Estimate Std. Error     t value  Pr(>|t|)
(Intercept)  0.14508485  0.2977212  0.48731777 0.6283475
x            0.03496345  0.3963061  0.08822337 0.9300821
zB          -0.34413335  0.4124389 -0.83438616 0.4083758
x:zB         0.30622182  0.4672678  0.65534538 0.5155096
Is there a way to get a meta-analytic estimate of x:zB across these three studies?
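My first thought is to pull the x:zB estimate and its standard error out of each model and feed them to a standard random-effects meta-analysis. Here is a minimal sketch of that idea, assuming the metafor package is appropriate for regression coefficients like these (and keeping in mind that any heterogeneity estimate based on only three studies will be very imprecise):
library(metafor)
# fit the same interaction model in each dataset
fits <- list(lm(y ~ x*z, dat1), lm(y ~ x*z, dat2), lm(y ~ x*z, dat3))
# extract the x:zB estimate and its standard error from each study
est <- sapply(fits, function(f) coef(summary(f))["x:zB", "Estimate"])
se  <- sapply(fits, function(f) coef(summary(f))["x:zB", "Std. Error"])
# random-effects meta-analysis of the interaction coefficient
rma(yi = est, sei = se)
Is treating the per-study coefficients as effect sizes in this way defensible, or is there a better-established approach?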
Other things to consider:
I have access to the raw data, and the scales of x and y are the same across studies, but the manipulations involved in z were slightly different (although conceptually the same). I feel as if it is not best practice to simply collapse across the three datasets; is there any support for my intuition here? Or would collapsing across the datasets be defensible?
I know that one could always get the effect of z as a Cohen's d, meta-analyze that, and use the mean of x from each study as a predictor in a meta-regression. But the small number of studies here (three) makes that untenable.
Would a mixed model along the lines of lmer(y ~ x*z + (1 + x + z | studyid)) on the stacked data be a reasonable alternative?
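To make that last idea concrete, here is a minimal sketch, assuming the lme4 package and a made-up studyid column; with only three studies, a random-effects structure this rich may well come back singular, so I am not sure how far to trust it:
library(lme4)
# stack the three datasets and label each with a study identifier
dat1$studyid <- "study1"
dat2$studyid <- "study2"
dat3$studyid <- "study3"
pooled <- rbind(dat1, dat2, dat3)
# random intercept and random slopes for x and z across studies
# (likely singular or unidentifiable with only three studies)
lmer(y ~ x*z + (1 + x + z | studyid), data = pooled)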