So I went ahead and generated some data to demonstrate that these approaches work as expected.
library(tidyverse)
library(lme4)
if(!require(modelr)){
install.packages('modelr')
}
library(modelr)
pop_mean<-10
n_groups<-4
groups<-gl(n_groups, 20)
Z<-model.matrix(~groups-1)
group_means<-rnorm(n_groups, 0, 2.5)
y<- pop_mean + Z%*%group_means + rnorm(length(groups), 0, 0.5)
d<-tibble(y, groups)
The data generating mechanism from the top down is as follows...
$$ \theta_i \sim \mathcal{N}(10, 2.5) $$
$$y_{i,j} \sim \mathcal{N}(\theta_i, 0.5) $$
Let's take a look at complete, no, and partial pooling.
Complete Pooling
This should return the same as the sample mean of y. This assumes that all the data are generated from a single normal distribution, with some mean and variance. The complete pooling uses all the data to estimate that one mean.
complete_pooling<-lm(y~1, data = d)
summary(complete_pooling)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 9.264 0.214 43.29 <2e-16 ***
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.914 on 79 degrees of freedom
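As a quick sanity check (a minimal sketch on toy data, separate from the simulated `d` above): an intercept-only least-squares fit always returns the sample mean, because the mean is the value that minimizes the sum of squared deviations.

```r
# Intercept-only least squares minimises sum((y - mu)^2),
# and the minimiser is the sample mean, so the two agree exactly.
y_toy <- c(6.2, 10.9, 10.5, 9.4)          # toy data, not the simulated d above
fit <- lm(y_toy ~ 1)
all.equal(unname(coef(fit)), mean(y_toy)) # TRUE
```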
No Pooling
In this scenario, we agree that the groups are distinct, but we estimate each group's mean using only the data from that group.
no_pooling<-lm(y~groups-1, data = d) #remove the intercept from the model
summary(no_pooling)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
groups1 6.2116 0.1045 59.44 <2e-16 ***
groups2 10.9183 0.1045 104.48 <2e-16 ***
groups3 10.5156 0.1045 100.63 <2e-16 ***
groups4 9.4088 0.1045 90.04 <2e-16 ***
pop_mean + group_means # pretty close
[1]  6.311974 10.878787 10.354225  9.634138
So we estimate the group means fairly well.
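Equivalently (another small self-contained check on toy data): removing the intercept makes each dummy coefficient exactly the corresponding group's sample mean.

```r
# With the intercept removed, each dummy coefficient is just the
# corresponding group's sample mean.
g <- gl(2, 3)                    # toy grouping: 2 groups of 3 observations
y_toy <- c(1, 2, 3, 10, 11, 12)
fit <- lm(y_toy ~ g - 1)
unname(coef(fit))                # 2 and 11
tapply(y_toy, g, mean)           # the same values
```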
Partial Pooling
partial_pooling<-lmer(y ~ 1 + (1|groups), data = d) # random intercept per group; the parentheses around 1|groups are required
summary(partial_pooling)
Random effects:
Groups Name Variance Std.Dev.
groups (Intercept) 4.5362 2.1298
Residual 0.2184 0.4673
Number of obs: 80, groups: groups, 4
Fixed effects:
Estimate Std. Error t value
(Intercept) 9.264 1.066 8.688
modelr::data_grid(d, groups) %>% modelr::add_predictions(partial_pooling)
A tibble: 4 x 2
groups pred
<fct> <dbl>
1 1 6.22
2 2 10.9
3 3 10.5
4 4 9.41
As you can see, the estimates for the groups are partially pooled towards the population mean (they are slightly less extreme than the complete pooling model).
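To see why the shrinkage is so small here, note that (assuming the variance components were known rather than estimated, which is a simplification of what `lmer` actually does) the partially pooled estimate for a group is a precision-weighted average of that group's sample mean and the grand mean:

```r
# Precision-weighted shrinkage estimate for one group
# (a sketch assuming known variance components; lmer estimates them).
shrink <- function(ybar_j, n_j, mu, sigma2, tau2) {
  w <- (n_j / sigma2) / (n_j / sigma2 + 1 / tau2)  # weight on the group's own mean
  w * ybar_j + (1 - w) * mu
}
# Plugging in the fitted values above: residual sd ~ 0.467, group sd ~ 2.13,
# 20 observations per group, grand mean 9.264.
shrink(ybar_j = 6.2116, n_j = 20, mu = 9.264, sigma2 = 0.467^2, tau2 = 2.13^2)
# ~6.22, matching what add_predictions() reports for group 1
```

With 20 observations per group and a small residual variance, the weight on the group's own mean is close to 1, which is why the pooled estimates barely move.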
Here is some code to reproduce these results. The numbers will not match the output above exactly, because I did not set a random seed when I originally generated the data (the script below does set one, so your own runs will at least be reproducible).
library(tidyverse)
library(lme4)
if(!require(modelr)){
install.packages('modelr')
}
library(modelr)
#Generate data
set.seed(123)
pop_mean<-10
n_groups<-4
groups<-gl(n_groups, 20)
Z<-model.matrix(~groups-1)
group_means<-rnorm(n_groups, 0, 2.5)
y<- pop_mean + Z%*%group_means + rnorm(length(groups), 0, 0.5)
d<-tibble(y, groups)
complete_pooling<-lm(y~1, data = d)
no_pooling<-lm(y~groups-1, data = d)
partial_pooling<-lmer(y ~ 1 + (1|groups), data = d)
modelr::data_grid(d, groups) %>% modelr::add_predictions(partial_pooling)
EDIT:
Here is an example with a fixed effect.
library(tidyverse)
library(lme4)
if(!require(modelr)){
install.packages('modelr')
}
library(modelr)
#Generate data
set.seed(123)
pop_mean<-10
n_groups<-4
groups<-gl(n_groups, 20)
x<-rnorm(length(groups))
Z<-model.matrix(~groups-1)
group_means<-rnorm(n_groups, 0, 2.5)
y<- pop_mean + 2*x + Z%*%group_means + rnorm(length(groups), 0, 0.5)
d<-tibble(y, groups, x)
complete_pooling<-lm(y~x, data = d)
no_pooling<-lm(y~groups + x -1, data = d)
partial_pooling<-lmer(y ~ x + (1|groups), data = d)
modelr::data_grid(d, groups,x=0) %>% modelr::add_predictions(partial_pooling)
You will note that the fixed effect estimate in the partial pooling model is pooled towards the complete pooling estimate, though only ever so slightly.