I conducted a Principal Component Analysis to reduce the items and dimensions. But some items loaded on an unexpected construct, and those items have low face validity for that construct. Is that a problem? If so, what can I do?
1 Answer
Here's the thing with PCA. It's unsupervised and exploratory. You can't build a theory into the mathematics behind it. Here are some possible explanations and options:
- the item that loaded poorly is bad. It doesn't measure what you think, is worded poorly, or was misunderstood by respondents. Get rid of it.
- the dimensions you think exist don't exist in the way you thought. Reframe your theory.
- something else is going on with the way the item relates to other items in your data. That could be creating noise somewhere. If you think the item is needed, and you have a strong idea about how the item should load, then consider running a confirmatory factor analysis (CFA). That allows you to specify what loads where based on substantive theory.
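To make the options above concrete, here is a small sketch (mine, not part of the original answer; the data and item names are invented) of inspecting PCA loadings to spot an item that clusters with an unexpected component:

```python
# Sketch: simulated item data where one item ("item5") was intended for
# factor 1 but is actually driven by factor 2, so it loads unexpectedly.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
f1 = rng.normal(size=n)  # latent factor 1
f2 = rng.normal(size=n)  # latent factor 2
X = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),  # item1: loads on factor 1
    f1 + 0.3 * rng.normal(size=n),  # item2: loads on factor 1
    f2 + 0.3 * rng.normal(size=n),  # item3: loads on factor 2
    f2 + 0.3 * rng.normal(size=n),  # item4: loads on factor 2
    f2 + 0.3 * rng.normal(size=n),  # item5: intended for factor 1,
                                    # but actually driven by factor 2
])

pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(X))

# Loadings: correlations of the standardized items with the components
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
for i, row in enumerate(loadings, start=1):
    print(f"item{i}: {np.round(row, 2)}")
```

In output like this, item5's largest loading sits on the same component as items 3 and 4, which is the pattern the question describes: the item clusters with a construct it was not written for.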
If you are interested in the construct validity of what you are measuring, not just data reduction, you should be working with CFA, and there is a host of different tests you can run there. If you don't have exposure to CFA, dig into any introductory psychometrics or multivariate statistics text. Here's a post asking about recommended books: Book recommendations for multivariate analysis
I can't vouch for these but they should point you in the right direction.
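Before moving all the way to CFA, one intermediate check (my sketch, not the answerer's suggestion; data and names are invented) is an exploratory factor analysis with a varimax rotation, which often gives a cleaner simple structure than raw PCA components and makes it easier to see which items genuinely cluster together:

```python
# Sketch: exploratory factor analysis with varimax rotation on synthetic
# item data with two latent factors (three items on one, two on the other).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([f1, f1, f2, f2, f2]) + 0.3 * rng.normal(size=(n, 5))

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(StandardScaler().fit_transform(X))

# After rotation, each item should load mainly on one factor;
# rows of components_ are factors, columns are items.
print(np.round(fa.components_.T, 2))
```

If the rotated solution still shows an item loading on the "wrong" factor, that is evidence the issue is in the item or the theory, not merely in the unrotated PCA solution.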
robin.datadrivers
I was trying to build a scale to measure motivation. But four items that were supposed to measure two different factors clustered together. That creates a naming problem and low face validity if I name the factor based on only some of the items. Can I explain that I was trying to develop the instrument empirically rather than conceptually? By the way, THANKS THANKS THANKS. :-) – Ava XU Jan 08 '15 at 04:22
You should test construct validity using CFA, not PCA. Then you specify the model as you theorize, and run lots of diagnostics. If you still find those items loading on the factors in an unexpected way, you need to reevaluate your model or drop/alter the items. It is not uncommon for items not to support the theory as we expect. Face validity is important, but if a face-valid model isn't construct valid, that's just as problematic as the opposite. For example, math teachers may think a word problem tests algebra when it really tests reading comprehension. – robin.datadrivers Jan 08 '15 at 04:25
"some items loaded on unexpected construct" — what is the "expected", then? Do you have an interpretation of the factors before the analysis offers them? Third: have you checked that the data are all right for factor analysis? Fourth: did you try out several solutions with different numbers of factors and different rotations? Fifth: do you think face validity (a subjective expectation) should always be confirmed? – ttnphns Jan 07 '15 at 08:33