I have nearly 30 variables going into a large PCA, but they really fall into two conceptual categories. I want to test whether letting all the variables correlate freely with one another is a better fit than running a separate model for each concept.
I have "behavior" and "cognition" variables, and I do eventually want to see how behavior and cognition correlate. However, I need some way of determining whether the correlation between the two is so great that it warrants throwing out the conceptual distinction. I imagine that a full-model PCA will separate cognitive and behavioral variables into separate components to some extent, but I don't know how to quantify that extent statistically.
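To make the "full model" idea concrete, here's roughly the kind of check I have in mind, sketched on simulated stand-in data (numpy-only PCA via SVD; the sample size, variable counts, and block structure are all placeholders, not my actual data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 subjects, 15 "behavior" and
# 15 "cognition" variables, each block built around its own latent factor.
n = 200
behav = rng.normal(size=(n, 1)) @ rng.normal(size=(1, 15)) + rng.normal(size=(n, 15))
cogn = rng.normal(size=(n, 1)) @ rng.normal(size=(1, 15)) + rng.normal(size=(n, 15))
X = np.hstack([behav, cogn])

# Full-model PCA on the standardized variables.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
loadings = Vt.T * s / np.sqrt(n - 1)  # variable loadings on each component

# One informal way to "appreciate the extent" of separation:
# how much of each component's squared-loading mass comes from each block?
for k in range(2):
    b = np.sum(loadings[:15, k] ** 2)
    c = np.sum(loadings[15:, k] ** 2)
    print(f"PC{k + 1}: behavior share of squared loadings = {b / (b + c):.2f}")
```

A share near 1 or 0 would suggest a component dominated by one conceptual block; shares near 0.5 would suggest the blocks are mixing. This is descriptive rather than a formal test, which is exactly the gap I'm asking about.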
From what I understand about the model fit tests I'm aware of, there's no appropriate way to compare one model to TWO separate models simultaneously. When I forget the concepts and just think about the numbers, I can't imagine how splitting the models could fit better than combining them and letting the chips fall where they may. However, I really need a way to show statistically whether "cognition" and "behavior" are distinct groupings here.
Thanks for any insight!
(My previous plan was to run the two PCAs separately and then compute a correlation matrix between the resulting component scores. I'm not sure what I'd do with that matrix, but it was my initial train of thought.)
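For reference, that previous plan would look something like this sketch (again simulated stand-in data and a numpy-only PCA; the choice of retaining 3 components per block is an arbitrary placeholder):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: 200 subjects, 15 variables per concept.
n = 200
behav = rng.normal(size=(n, 15))
cogn = rng.normal(size=(n, 15))


def pca_scores(X, k):
    """Scores on the first k principal components of standardized X."""
    Z = (X - X.mean(0)) / X.std(0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:k].T


# Separate PCA per concept, keeping 3 components each.
b_scores = pca_scores(behav, 3)
c_scores = pca_scores(cogn, 3)

# Cross-correlation matrix between behavior and cognition component scores.
cross = np.corrcoef(b_scores.T, c_scores.T)[:3, 3:]
print(np.round(cross, 2))
```

Large cross-correlations here would be the sign that the two conceptual blocks aren't really distinct, but I don't know what threshold or formal test would justify that conclusion.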