Step 0.1. Compute centered variables X and covariance matrix S.
Step 0.2. Decide on the number of factors M to extract.
(Several well-known methods exist to help decide; let us omit them here. Most of them require that you do PCA first.)
Note that you have to select M before you proceed further because, unlike in PCA, in FA loadings and factor values depend on M.
Let M=2.
Step 0.3. Set initial communalities on the diagonal of S.
Most often quantities called "images" are used as initial communalities (see https://stats.stackexchange.com/a/43224/3277).
Images are diagonal elements of matrix S-D, where D is diagonal matrix with diagonal = 1 / diagonal of inv(S).
(If S is correlation matrix, images are the squared multiple correlation coefficients.)
With covariance matrix, image is the squared multiple correlation multiplied by the variable variance.
S with images as initial communalities on the diagonal
.07146025 .09921633 .01635510 .01033061
.09921633 .07946595 .01169796 .00929796
.01635510 .01169796 .00437017 .00606939
.01033061 .00929796 .00606939 .00167624
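As a sketch in Python/NumPy (variable names are mine; S hard-codes the full setosa covariance matrix quoted near the end of this answer), the images can be computed like this:

```python
import numpy as np

# Full covariance matrix S of the four setosa variables (quoted later in the text)
S = np.array([
    [.1242489796, .0992163265, .0163551020, .0103306122],
    [.0992163265, .1436897959, .0116979592, .0092979592],
    [.0163551020, .0116979592, .0301591837, .0060693878],
    [.0103306122, .0092979592, .0060693878, .0111061224]])

Sinv = np.linalg.inv(S)
images = np.diag(S) - 1.0 / np.diag(Sinv)  # image_i = s_ii - 1/(inv(S))_ii

S_reduced = S.copy()
np.fill_diagonal(S_reduced, images)        # S with images as initial communalities
```

The result reproduces the diagonal of the reduced matrix shown above.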
Step 1. Decompose that modified S to get eigenvalues and right eigenvectors.
Use eigen decomposition, not svd.
(Some of the last eigenvalues may be negative. This is because a reduced covariance matrix need not be positive semidefinite.)
Eigenvalues L
F1 .1782099114
F2 .0062074477
-.0030958623
-.0243488794
Eigenvectors V
F1 F2
SLength .6875564132 .0145988554 .0466389510 .7244845480
SWidth .7122191394 .1808121121 -.0560070806 -.6759542030
PLength .1154657746 -.7640573143 .6203992617 -.1341224497
PWidth .0817173855 -.6191205651 -.7808922917 -.0148062006
Keep only the first M=2 values in L and the first M=2 columns in V.
Step 2.1. Compute loadings A.
Loadings are eigenvectors scaled by the square roots of the respective eigenvalues: A value = V value * sqrt(L value).
F1 F2
SLength .2902513607 .0011502052
SWidth .3006627098 .0142457085
PLength .0487437795 -.0601980567
PWidth .0344969255 -.0487788732
Step 2.2. Compute row sums of squared loadings. These are updated communalities.
Reset the diagonal of S to them
S with updated communalities on the diagonal
.08424718 .09921633 .01635510 .01033061
.09921633 .09060101 .01169796 .00929796
.01635510 .01169796 .00599976 .00606939
.01033061 .00929796 .00606939 .00356942
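Steps 2.1 and 2.2 can be checked numerically (Python/NumPy; the M=2 eigenvalues and eigenvector columns are hard-coded from the Step 1 tables above):

```python
import numpy as np

# First M=2 eigenvalues and eigenvector columns of the reduced S (Step 1)
L2 = np.array([.1782099114, .0062074477])
V2 = np.array([[ .6875564132,  .0145988554],
               [ .7122191394,  .1808121121],
               [ .1154657746, -.7640573143],
               [ .0817173855, -.6191205651]])

A = V2 * np.sqrt(L2)          # loadings: eigenvector column times sqrt(eigenvalue)
comm = (A ** 2).sum(axis=1)   # row sums of squared loadings = updated communalities
```

comm reproduces the updated diagonal of S shown just above.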
REPEAT Steps 1-2 many times (iterations, say, 25)
Extraction of factors is done.
Let us look at the final eigenvalues of the reduced covariance
matrix after iterations:
Eigenvalues L
F1 .2026316056
F2 .0137096989
.0005000572
-.0005882867
The eigenvalues are the factors' (F1 and F2) variances. The overall common variance is .2026316056 + .0137096989 = .2163413036,
so F1, for example, explains .2026316056/.2163413036 = 93.7% of the common variance.
That 93.7% of the common variance amounts to .2026316056/.3092040816 = 65.5% of the total variability
(.3092040816, the total variance, is the trace of the initial, non-reduced covariance matrix).
[Note. The .0005000572 and -.0005882867 do not count as common variance; these "dross" eigenvalues are nonzero because the 2-factor model cannot reproduce the covariances without error.]
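A quick check of that arithmetic:

```python
f1, f2 = .2026316056, .0137096989    # final factor variances
common = f1 + f2                     # total common variance
total = .3092040816                  # trace of the original covariance matrix

share_of_common = 100 * f1 / common  # F1's share of the common variance
share_of_total = 100 * f1 / total    # F1's share of the total variance
print(round(share_of_common, 1), round(share_of_total, 1))  # 93.7 65.5
```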
Final loadings A and communalities (row sums of squares in A).
Loadings are the covariances between variables and factors.
Communality is the degree to which the factors load a variable; it is the "common variance" in the variable.
F1 F2 Comm
SLength .3125767362 .0128306509 .0978688416
SWidth .3187577564 -.0323523347 .1026531808
PLength .0476237419 .1034495601 .0129698323
PWidth .0324478281 .0423861795 .0028494498
Sums of squares in columns of A are the factors' variances: .2026316056 and .0137096989.
The main goal of factor analysis is to explain correlations or covariances by means of the loadings.
A*t(A) is the restored covariances:
.0978688416 .0992211576 .0162133990 .0106862785
.0992211576 .1026531808 .0118336023 .0089717050
.0162133990 .0118336023 .0129698323 .0059301186
.0106862785 .0089717050 .0059301186 .0028494498
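To reproduce the restored covariances (Python/NumPy, hard-coding the final loadings from the table above):

```python
import numpy as np

# Final loadings A
A = np.array([[.3125767362,  .0128306509],
              [.3187577564, -.0323523347],
              [.0476237419,  .1034495601],
              [.0324478281,  .0423861795]])

restored = A @ A.T   # model-implied (restored) covariance matrix
```

Its diagonal equals the communalities, and the off-diagonal elements approximate the input covariances.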
See that off-diagonal elements above are quite close to those of the input covariance matrix:
S
.1242489796 .0992163265 .0163551020 .0103306122
.0992163265 .1436897959 .0116979592 .0092979592
.0163551020 .0116979592 .0301591837 .0060693878
.0103306122 .0092979592 .0060693878 .0111061224
Standardized (rescaled) loadings and communalities.
St. loading is Loading / sqrt(Variable's variance);
these loadings are computed if you analyse covariances, and are suitable for interpretation of Fs
(if you analyse correlations, A are already standardized).
F1 F2 Comm
SLength .8867684574 .0364000747 .7876832626
SWidth .8409066701 -.0853478652 .7144082859
PLength .2742292179 .5956880078 .4300458666
PWidth .3078962532 .4022009053 .2565656710
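Numerically (dividing each row of the final A by the variable's standard deviation; the variances are the diagonal of the original S):

```python
import numpy as np

A = np.array([[.3125767362,  .0128306509],
              [.3187577564, -.0323523347],
              [.0476237419,  .1034495601],
              [.0324478281,  .0423861795]])
var = np.array([.1242489796, .1436897959, .0301591837, .0111061224])  # diag of S

A_std = A / np.sqrt(var)[:, None]      # standardized (rescaled) loadings
comm_std = (A_std ** 2).sum(axis=1)    # standardized communalities
```

Note that the signs of the F2 column here follow the hard-coded A; sign flips of a whole factor column are arbitrary.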
Step 3. Compute factor scores (values of Fs).
Unlike component scores in PCA, factor scores are not exact; they are only reasonable approximations.
Several methods of computation exist (https://stats.stackexchange.com/q/126885/3277).
Here is the regression method, which is the same as the one used in PCA.
Regression coefficients B to compute Standardized factor scores are: B = inv(S)*A (original S is used)
B
F1 F2
SLength 1.597852081 -.023604439
SWidth 1.070410719 -.637149341
PLength .212220217 3.157497050
PWidth .423222047 2.646300951
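To reproduce B (Python/NumPy; np.linalg.solve(S, A) computes inv(S) @ A without forming the inverse explicitly):

```python
import numpy as np

# Original (non-reduced) covariance matrix S
S = np.array([
    [.1242489796, .0992163265, .0163551020, .0103306122],
    [.0992163265, .1436897959, .0116979592, .0092979592],
    [.0163551020, .0116979592, .0301591837, .0060693878],
    [.0103306122, .0092979592, .0060693878, .0111061224]])

# Final loadings A
A = np.array([[.3125767362,  .0128306509],
              [.3187577564, -.0323523347],
              [.0476237419,  .1034495601],
              [.0324478281,  .0423861795]])

B = np.linalg.solve(S, A)   # B = inv(S) @ A
```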
Standardized factor scores = X*B
These "standardized factor scores" do not actually have variance 1; the variance of a computed factor is the SSregression of the factor on the variables, divided by (n-1).
F1 F2
.194641800 -.365588231
-.660133976 -.042292672
-.786844270 -.480751358
-1.011226507 .216823430
.141897664 -.426942721
1.250472186 .848980006
-.669003108 -.025440982
-.050962459 .016236852
...
Factors are extracted as orthogonal, and they are. However, factor scores computed by regression are not fully uncorrelated.
Covariance matrix between computed factor scores.
F1 F2
F1 .864 .026
F2 .026 .459
Factor variances are the column sums of their squared loadings.
You can easily recompute the above "standardized" factor scores to "raw" factor scores having those variances:
raw score = st. score * sqrt(factor variance / st. scores variance).
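For example (using the factor variances and the computed-score variances quoted above; only the scaling multipliers and the first case are shown):

```python
import numpy as np

factor_var = np.array([.2026316056, .0137096989])  # true factor variances
score_var = np.array([.864, .459])                 # variances of the computed st. scores
k = np.sqrt(factor_var / score_var)                # per-factor rescaling multipliers

first_st = np.array([.194641800, -.365588231])     # first row of the scores table above
first_raw = first_st * k                           # "raw" factor scores for that case
```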