
If a communality = 1, we have a Heywood case, and if a communality is greater than 1, it is known as an ultra-Heywood case. I read in a SAS manual that an ultra-Heywood case renders a factor solution invalid, and that factor analysts disagree about whether a factor solution with a (non-ultra) Heywood case can be considered legitimate. I'm particularly interested in a Heywood case arising within a larger EFA/CFA. Does having a Heywood case (not an ultra-Heywood case) render the whole solution invalid or not? Why? Could someone point me to some literature on this specific topic?
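For concreteness, here is a minimal Python sketch of how the two cases are usually distinguished when a correlation matrix is analyzed; the loading matrix is made up purely to illustrate the definitions, assuming an orthogonal solution where a variable's communality is the sum of its squared loadings.

```python
import numpy as np

# Hypothetical orthogonal EFA loading matrix (5 variables, 2 factors);
# the values are invented purely to illustrate the definitions.
loadings = np.array([
    [0.80, 0.10],
    [0.75, 0.05],
    [0.95, 0.40],   # 0.95**2 + 0.40**2 = 1.0625 -> ultra-Heywood
    [0.20, 0.70],
    [0.30, 0.65],
])

# With a correlation matrix, each variable's total variance is 1, and
# (for orthogonal factors) its communality is the sum of squared loadings.
communalities = (loadings ** 2).sum(axis=1)

for i, h2 in enumerate(communalities, start=1):
    if h2 > 1:
        status = "ultra-Heywood case (communality > 1)"
    elif np.isclose(h2, 1):
        status = "Heywood case (communality = 1)"
    else:
        status = "ok"
    print(f"variable {i}: communality = {h2:.3f} -> {status}")
```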

Charlie Glez
  • I remember commenting on a similar question several days ago, but I can't locate it anymore; was it your question, since deleted? – ttnphns Jun 03 '16 at 09:08
    "Communality=1" case means there is zero unique variance in the variable. It is possible theoretically but unlikely practically. Some programs may set communality to 1 on iterations when they see communality becomes >1, so often "ultra-case" is hidden here. My sheer personal advice would be not to tolerate neither "ultra" nor "non-ultra" case. To struggle Heywood case, try to lessen the number of factors, try other initial communalities (in PAF method), try to drop variables with low KMO, check multicollinearity (see pt 5-6 here). – ttnphns Jun 03 '16 at 09:19
  • My opinion is that a Heywood case (even a "non-ultra" one) is degenerate and unnatural and should be treated as invalid. A very high communality, approaching the variable's variance (or 1, when analyzing correlations), is dubious. In reality, and in psychometrics in particular, all observed variables should normally have some unique variance. – ttnphns Jun 03 '16 at 09:30
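To illustrate the capping behaviour described in the comments, here is a small Python sketch of iterated principal axis factoring (not any particular program's implementation) that flags, and optionally truncates, communalities at 1. The correlation matrix is constructed from hypothetical population loadings (0.8, 0.8, 0.8, 1.02) on a single factor, so the PAF fixed point implies an ultra-Heywood communality of roughly 1.04 for the fourth variable; with capping switched on, the reported solution should merely look like a (non-ultra) Heywood case.

```python
import numpy as np

def paf(R, n_factors=1, n_iter=100, cap=True):
    """Iterated principal axis factoring with an explicit Heywood check.

    Returns (loadings, communalities, capped), where `capped` records whether
    any communality had to be truncated at 1 during the iterations.
    """
    # Initial communality estimates: squared multiple correlations.
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    capped = False
    for _ in range(n_iter):
        R_red = R.copy()
        np.fill_diagonal(R_red, h2)                  # reduced correlation matrix
        eigvals, eigvecs = np.linalg.eigh(R_red)
        idx = np.argsort(eigvals)[::-1][:n_factors]  # largest eigenvalues first
        lam = np.clip(eigvals[idx], 0, None)
        loadings = eigvecs[:, idx] * np.sqrt(lam)
        h2 = (loadings ** 2).sum(axis=1)
        if cap and np.any(h2 >= 1):
            capped = True
            h2 = np.minimum(h2, 1.0)                 # this is what hides an "ultra" case
    return loadings, h2, capped

# A valid (positive-definite) correlation matrix built from hypothetical
# population loadings (0.8, 0.8, 0.8, 1.02) on one factor; the implied
# communality of the 4th variable is about 1.04 (ultra-Heywood).
l = np.array([0.8, 0.8, 0.8, 1.02])
R = np.outer(l, l)
np.fill_diagonal(R, 1.0)

loadings, h2, capped = paf(R, n_factors=1)
print("final communalities:", np.round(h2, 3))
print("a communality was capped at 1 (ultra case hidden):", capped)
```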

1 Answer


I agree with the comments made by @ttnphns regarding the validity of results when a Heywood case is observed, specifically the point that in psychometrics all observed variables should typically have some nontrivial amount of unique variance, and certainly not zero. In addition to the reasons given by @ttnphns, treating results with Heywood case(s) as valid is problematic because such results upwardly bias factor score determinacy (i.e., a measure of how well the factor scores can be estimated from the observed data) and increase the variability of factor loading estimates (Cooperman & Waller, 2022). Further, specific consequences of Heywood cases aside, I have yet to meet a statistician or psychometrician who believes results contaminated with Heywood case(s) should be treated as valid. Of course, this does not mean all is lost if results contain a Heywood case. For example, Cooperman & Waller (2022) and others have shown that regularization can help ward off unrealistically low unique variances.
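For readers unfamiliar with the term, here is a minimal Python sketch of the standard determinacy formula for regression factor scores with orthogonal factors, rho_j = sqrt((L' R^{-1} L)_{jj}); the loading matrix is made up and the correlation matrix is the model-implied one, purely for illustration.

```python
import numpy as np

def factor_score_determinacy(loadings, R):
    """Determinacy of regression factor scores for an orthogonal solution:
    the correlation between each factor and its estimated scores,
    rho_j = sqrt((L' R^{-1} L)_{jj})."""
    M = loadings.T @ np.linalg.solve(R, loadings)
    return np.sqrt(np.diag(M))

# Hypothetical orthogonal two-factor solution (values made up for illustration).
L = np.array([
    [0.7, 0.0],
    [0.6, 0.1],
    [0.8, 0.0],
    [0.0, 0.7],
    [0.1, 0.6],
    [0.0, 0.8],
])
psi = 1.0 - (L ** 2).sum(axis=1)   # unique variances
R = L @ L.T + np.diag(psi)         # model-implied correlation matrix

print(np.round(factor_score_determinacy(L, R), 3))
# Driving any unique variance toward zero (a Heywood case) pushes these
# values upward -- the inflation attributed above to Cooperman & Waller (2022).
```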

Finally, regarding references, I recommend Kolenikov & Bollen (2012) and Rindskopf (1984). The former is more concerned with diagnostic testing of Heywood cases, but I believe it is well written and I have found it particularly helpful in the past.

References

Cooperman, A. W., & Waller, N. G. (2022). Heywood you go away! Examining causes, effects, and treatments for Heywood cases in exploratory factor analysis. Psychological Methods, 27(2), 156.

Kolenikov, S., & Bollen, K. A. (2012). Testing negative error variances: Is a Heywood case a symptom of misspecification? Sociological Methods & Research, 41(1), 124-167.

Rindskopf, D. (1984). Structural equation models: Empirical identification, Heywood cases, and related problems. Sociological Methods & Research, 13(1), 109-119.