
My analysis showed that one of my two factors was significant when I modeled it as a dummy-coded variable (each of my two predictors has just two levels, so "on" versus "off" was shown to be significant). However, when I use a factor-effects model, i.e. something like $\mu_{ij} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij}$, I no longer find either of my predictors significant; both have higher p-values under the factor-effects model.

I'm mostly just curious: why is this?

Edit: to be clear, I am not asking how introducing a variable changes the result; I am asking how two different ways of building a model with the exact same variables can produce different results.
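For concreteness, here is a minimal sketch on simulated data of the comparison I mean (the variable names, the data, and the statsmodels calls are illustrative assumptions, not my actual analysis):

```python
# A minimal sketch (not real data): simulate a 2x2 design where factor A
# has a large effect at B's reference level but little effect on average,
# then fit the same interaction model under the two codings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 80
df = pd.DataFrame({
    "A": rng.choice(["off", "on"], size=n),
    "B": rng.choice(["off", "on"], size=n),
})
a_on = (df["A"] == "on").astype(float)
b_on = (df["B"] == "on").astype(float)
# A raises y by 2 when B is "off" and lowers it by 2 when B is "on",
# so A's simple effect at B = "off" is large but its average effect is ~0.
df["y"] = 2.0 * a_on - 4.0 * a_on * b_on + rng.normal(size=n)

# Dummy (treatment) coding: the A coefficient tests the effect of A
# *at the reference level of B* -- here that simple effect is large.
m_treat = smf.ols("y ~ C(A, Treatment) * C(B, Treatment)", data=df).fit()

# Effects (sum) coding: the A coefficient tests the effect of A
# *averaged over the levels of B* -- here that average is near zero.
m_sum = smf.ols("y ~ C(A, Sum) * C(B, Sum)", data=df).fit()

print(m_treat.pvalues)
print(m_sum.pvalues)
```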

Dan W
  • Does this answer your question? "Covariate and Interaction Not Significant When Both in the Model" That's in the context of a different type of regression, but the same principles hold. Follow the link from this comment on that question for further examples. – EdM Mar 13 '22 at 17:55
  • @EdM no. That question is getting at how significance changes when an interaction term is added to a model, which I understand. My question is getting at the difference between two different approaches to fitting a model with the exact same terms included in each of them. A linear regression and a factor-effects model can come to different conclusions about the significance of the terms fitted to the model, apparently, even when each model is fitted with the exact same terms (interaction or otherwise). – Dan W Mar 13 '22 at 19:08
  • The two default tests are different, depending on how they encode the factors. So check the null hypotheses you are testing to understand why you get different results (spelled out in the note after these comments). – whuber Mar 14 '22 at 00:57
  • You already asked about this here. This is a lot like your two models: same underlying question, different formulations, same explanation. – dipetkov Mar 14 '22 at 09:07
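
To make whuber's point concrete: once the interaction $(\alpha\beta)_{ij}$ is in the model, the two codings test different null hypotheses for the "main effect" of a factor. A sketch in the cell-means notation above, writing $\mu_{ij}$ for the mean at level $i$ of factor A and level $j$ of factor B, with level 1 as the reference:

  • Treatment (dummy) coding: $H_0: \mu_{21} - \mu_{11} = 0$, the simple effect of A at B's reference level.
  • Sum (effects) coding: $H_0: \frac{\mu_{21} + \mu_{22}}{2} - \frac{\mu_{11} + \mu_{12}}{2} = 0$, the effect of A averaged over the levels of B.

When the effect of A differs across the levels of B, these two quantities, and hence their p-values, can differ substantially: dummy coding can find A "significant" while effects coding does not.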
