Assuming that this is part of the model development process, I would recommend choosing the grouping variable with the lowest AIC (similar to what you would do in a stepwise AIC procedure).
require(lme4)
dataf <- data.frame(
individual = factor(rep(1:100, times = 2)),
plot = factor(rep(1:10, 20)),
region = factor(rep(1:4, 50))
)
# make the true dependence on the plot grouping
set.seed(1933)
dataf$val <- 7 + as.numeric(dataf$plot)/10 + rnorm(200, 0, 1)
mod_i <- lme4::lmer(val ~ (1|individual), data = dataf)
mod_p <- lme4::lmer(val ~ (1|plot), data = dataf)
mod_r <- lme4::lmer(val ~ (1|region), data = dataf)
boundary (singular) fit: see help('isSingular')
> extractAIC(mod_i)
[1] 3.000 617.753
> extractAIC(mod_p)
[1] 3.0000 608.7551
> extractAIC(mod_r)
[1] 3.0000 621.5314
mod_pi <- lme4::lmer(val ~ (1|plot) + (1|individual), data = dataf)
mod_pr <- lme4::lmer(val ~ (1|plot) + (1|region), data = dataf)
> extractAIC(mod_pi)
[1] 4.0000 610.0804
> extractAIC(mod_pr)
[1] 4.0000 610.7551
The final model is mod_p, which has the lowest AIC.
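As a side note, the per-model `extractAIC` calls above can be collapsed into a single comparison table with `stats::AIC`, which accepts several fitted models at once (shown here with the models from the first simulation):

```r
# one row per model: degrees of freedom and AIC, making the
# comparison across candidate random-effect structures easier to scan
AIC(mod_i, mod_p, mod_r, mod_pi, mod_pr)
```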
# make the true dependence on the individual grouping
set.seed(1933)
dataf$val <- 7 + as.numeric(dataf$individual)/100 + rnorm(200, 0, 1)
mod_i <- lme4::lmer(val ~ (1|individual), data = dataf)
mod_p <- lme4::lmer(val ~ (1|plot), data = dataf)
mod_r <- lme4::lmer(val ~ (1|region), data = dataf)
> extractAIC(mod_i)
[1] 3.000 603.226
> extractAIC(mod_p)
[1] 3.0000 604.1367
> extractAIC(mod_r)
[1] 3.0000 604.7195
mod_ip <- lme4::lmer(val ~ (1|individual) + (1|plot), data = dataf)
mod_ir <- lme4::lmer(val ~ (1|individual) + (1|region), data = dataf)
> extractAIC(mod_ip)
[1] 4.0000 604.9881
> extractAIC(mod_ir)
[1] 4.000 605.226
The final model is mod_i, which has the lowest AIC.
If the ultimate question is whether the grouping variable is statistically significant in the final model, then I also recommend the suggestion @rep_ho made in a comment on the original question; specifically, take a look at the RLRsim package.
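For completeness, a minimal sketch of how RLRsim could be applied here, assuming the mod_p fit from the first simulation above. For a model with a single random effect, `exactRLRT` performs a simulation-based restricted likelihood ratio test of whether that variance component is zero:

```r
# install.packages("RLRsim")  # if not already installed
library(RLRsim)

# exact (simulation-based) restricted likelihood ratio test of
# H0: variance of the plot random intercepts equals 0
exactRLRT(mod_p)
```

This avoids the boundary problem of the naive chi-squared likelihood ratio test, which is the usual reason RLRsim is recommended for testing random effects.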