This question is from the perspective of a student who has only a fundamental grasp of eigenvectors and eigenspaces (and of linear algebra in general). If my understanding is correct, an eigenvector v of a matrix A is a nonzero vector that A maps onto a scalar multiple of itself, scaled by the associated eigenvalue lambda:
Av = λv
Now, in PCA we find the eigenvectors of the covariance matrix of a dataset. I am unclear about the purpose of this step: what is its practical interpretation? A covariance matrix is not itself a vector, though perhaps it could be viewed as a set of vectors.
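To make the question concrete, here is a minimal sketch (using numpy, with a small made-up 2-feature dataset) of what I understand the step to be: build the covariance matrix, take its eigenvectors, and check that they satisfy Av = λv. What I cannot interpret is what these eigenvectors *mean* for the data itself.

```python
import numpy as np

# Hypothetical toy dataset, assumed for illustration: 5 samples, 2 features.
X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

# Center the data and form the 2x2 covariance matrix.
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)

# Eigendecomposition; eigh is appropriate for a symmetric matrix like C.
eigenvalues, eigenvectors = np.linalg.eigh(C)

# Sort descending so the first column is the largest-eigenvalue direction.
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# The defining property A v = lambda v holds for each eigenvector of C.
v, lam = eigenvectors[:, 0], eigenvalues[0]
assert np.allclose(C @ v, lam * v)

# Projecting the centered data onto the eigenvectors gives the PCA scores;
# the variance of each score column equals the corresponding eigenvalue.
scores = Xc @ eigenvectors
print(eigenvalues)
print(scores.var(axis=0, ddof=1))
```

So mechanically the eigenvectors of C are directions in feature space, and the variance of the data along each direction equals the eigenvalue; my question is about the intuition behind why this decomposition of a *covariance matrix* is the right object to look at.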