I am currently using (generalized) linear mixed models as a means to avoid pseudoreplication and to control for measures made on multiple individuals at the same location (transect). I'm now starting to think that mixed models may not be necessary, and I would really appreciate some guidance from more knowledgeable folks.
For clarity: I measured survival and size of several trees nested in transects along multiple shelterbelts (think ~1500 trees within ~250 transects within ~40 shelterbelts). I am trying to assess which factors influence tree survival and size in agroforestry systems. Each transect has several associated measures (e.g., soil texture, width), which are shared by every tree measured in that transect. The data are therefore structured as a list of trees, with each group of ~5-10 neighboring individuals having different response data but otherwise sharing exactly the same predictor values, inherited from their "home transect".
Since I felt this was problematic, I used mixed models with transect ID as a random effect, such as this model (in R):
size_mod <- lmer(size ~ various_fixed_effects + age + (1 | transect_id),
                 data = trees)
After fitting the model, I plotted the random effects using sjPlot::plot_model(size_mod, type = "re").
I then saw that the random-effect estimates were close to zero, with every single standard-error bar crossing zero (as seen below).
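For what it's worth, that plot shows the conditional modes (BLUPs) of the transect intercepts rather than the variance component itself; the variance can also be inspected directly. A minimal sketch, assuming the same fitted size_mod object and that lme4 is loaded:

```r
library(lme4)

VarCorr(size_mod)     # estimated variance / SD of the transect intercepts
isSingular(size_mod)  # TRUE if the fit is singular, i.e. the random-effect
                      # variance was estimated at (or very near) zero
```

If VarCorr() reports a transect variance near zero and isSingular() is TRUE, the near-zero BLUPs in the plot are exactly what one would expect.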
I then came across creutzml's comment on an unrelated post about singular fits, which made me question the utility of using mixed models at all, hence this post.
So, my questions are:
- Would the random-effect estimates be considered different from zero?
- Are mixed models necessary at all if the random-effect coefficients (odds ratios/estimates) are zero? In my case they ultimately don't seem to have an impact on the models, so I'm really not sure. But then,
- Wouldn't using non-mixed models be considered pseudoreplication?
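One way to put the second question on a formal footing is a likelihood-ratio comparison of the mixed model against a plain linear model. A sketch, where various_fixed_effects stands in for my actual predictors (note the test is conservative when the variance sits on the boundary of zero, so a small p-value is still trustworthy; lmerTest::ranova() is an alternative):

```r
library(lme4)

# Same model as above, plus a fixed-effects-only counterpart
m_mixed <- lmer(size ~ various_fixed_effects + age + (1 | transect_id),
                data = trees)
m_fixed <- lm(size ~ various_fixed_effects + age, data = trees)

# Likelihood-ratio test of the transect variance component
# (anova.merMod refits the mixed model with ML for the comparison)
anova(m_mixed, m_fixed)
```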
