
I'm attempting to use lm.cluster to produce a linear model with clustered standard errors. However, I've noticed that the output is identical to that of the same model fitted with lm(). Shouldn't the standard errors be larger with clustering? Any help would be appreciated. Below are my two models:

clustered_model <- lm.cluster(data = data, formula = wageincome ~ aca + state + year_factor + age + (age^2) + GDP + per_capita_income + unemployment_rate, cluster = "state")

regular_model <- lm(data = data, formula = wageincome ~ aca + state + year_factor + age + (age^2) + GDP + per_capita_income + unemployment_rate)
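
For reference, below is a cross-check that I would expect to produce larger standard errors than plain lm() (a sketch using the sandwich and lmtest packages, reusing regular_model and the "state" column from above):

# Sketch: cluster-robust standard errors computed directly from the lm() fit,
# using sandwich::vcovCL and lmtest::coeftest, clustering on state
library(sandwich)
library(lmtest)

vc <- vcovCL(regular_model, cluster = ~ state)   # cluster-robust vcov matrix
coeftest(regular_model, vcov. = vc)              # coefficient table with robust SEs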


Here are the outputs of my two models. Because of the number of coefficients, I've only included a sample to illustrate the problem. Here is the clustered output:

term           estimate        std.error      statistic      p.value
(Intercept)     84820.96004    10832.85098     7.829975707   4.91E-15
aca             -1345.979076    1916.421376   -0.702339837   0.482468403
stateAlaska     -6404.573152   12672.62485    -0.505386471   0.613288201
stateArizona   -13197.14804     4806.254937   -2.745827721   0.006036602

Here is the output from lm():

term           estimate        std.error      statistic      p.value
(Intercept)     84820.96004    10832.85098     7.829975707   4.91E-15
aca             -1345.979076    1916.421376   -0.702339837   0.482468403
stateAlaska     -6404.573152   12672.62485    -0.505386471   0.613288201
stateArizona   -13197.14804     4806.254937   -2.745827721   0.006036602
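
In case the way I'm reading the results matters: this is roughly how I would expect to pull the cluster-robust table out of the lm.cluster fit (a sketch; I'm assuming the summary(), coef(), and vcov() methods that miceadds provides for lm.cluster objects):

# Sketch: extracting cluster-robust results from the lm.cluster object
# (assumes miceadds' S3 methods for lm.cluster)
summary(clustered_model)                   # prints the cluster-robust coefficient table
b  <- coef(clustered_model)                # point estimates
se <- sqrt(diag(vcov(clustered_model)))    # cluster-robust standard errors
cbind(estimate = b, std.error = se)
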
  • I don't think we can answer this without seeing some [example data](https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example). – neilfws Jan 13 '22 at 21:55
