
This question is from the perspective of a student who has only a fundamental grasp of eigenvectors and eigenspaces (and of linear algebra in general). If my understanding is correct, an eigenvector E of a vector V projects the vector V along its own direction, scaled by a factor of the associated eigenvalue λ:

Av = λv

Now, in PCA we try to find the eigenvectors of the covariance matrix of a dataset. I am unclear about the purpose of this step: what is its practical interpretation? A covariance matrix cannot be interpreted as a single vector, though perhaps as a set of vectors.
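
To make the step I am asking about concrete, here is a minimal sketch of it (the toy dataset, variable names, and NumPy calls are my own made-up illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D dataset with correlated features (made up purely for illustration).
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[3.0, 1.5],
                                 [1.5, 1.0]],
                            size=500)

# Center the data and form the sample covariance matrix.
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)          # 2 x 2 symmetric matrix

# The step my question is about: eigendecomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(C)  # eigh handles symmetric matrices

# Sanity check of the defining relation C v = λ v for each eigenpair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(C @ v, lam * v)

print("eigenvalues:", eigvals)
print("eigenvectors (as columns):\n", eigvecs)
```

Concretely, my question is how to interpret the columns of `eigvecs` with respect to the original dataset `X`.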

  • The context in your question would suggest that you should review the concept of eigenvectors. The covariance matrix is symmetric, and therefore has an eigenvector decomposition: http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/ – Alex R. Nov 15 '16 at 19:57
  • Several hundred posts are related to PCA eigenvectors: please have a look. It might help conceptually to be more careful with your language: vectors do not have eigenvectors: only matrices or linear transformations do. PCA does more than find eigenvectors of the covariance matrix--indeed, finding those eigenvectors is merely an algorithm for carrying out part of a PCA. Refer to the SVD for more information. – whuber Nov 15 '16 at 20:18
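
A minimal sketch of the relationship the last comment points to, assuming NumPy and a small made-up random dataset: the eigenvectors of the sample covariance matrix coincide, up to sign, with the right singular vectors of the centered data matrix, which is why the SVD is a standard way to carry out this part of a PCA.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))        # toy data: 200 samples, 3 features
Xc = X - X.mean(axis=0)              # center the columns
n = Xc.shape[0]

# Route 1: eigendecomposition of the sample covariance matrix.
C = Xc.T @ Xc / (n - 1)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]    # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Route 2: SVD of the centered data matrix itself.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# The right singular vectors (rows of Vt) match the covariance eigenvectors
# up to an arbitrary sign per vector, and the squared singular values
# (divided by n - 1) give the same eigenvalues.
assert np.allclose(np.abs(Vt), np.abs(eigvecs.T), atol=1e-8)
assert np.allclose(s**2 / (n - 1), eigvals)
```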

0 Answers