I'm trying to implement the Local Coordinate System (LCS) of this paper.
It's all clear to me how it works, but the one thing I don't understand is the "rotation" mechanism. Quoting the paper (Sect. 4.2):
This processing actually encompasses three distinct operations: centering (C), **rotation with PCA basis (R)** and dimensionality reduction by a factor of 2 (D).
I don't understand the part in bold. Later in the same section:
we learn off-line (e.g., on Flickr60K for Holidays) a rotation matrix Q_i from training descriptors mapped to this word.
I understand the sentence except, again, for the "rotation matrix" part.
From my knowledge, PCA consists of obtaining two matrices: the eigenvector matrix of the covariance matrix of the centered data and the corresponding diagonal matrix of eigenvalues. I know that, given an n×p data matrix, if we want to reduce a vector v from 1×p to 1×d (where d < p), all we have to do is multiply v by the p×d matrix whose columns are the eigenvectors with the largest eigenvalues. But I've never heard of this "rotation" operation.
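To make my current understanding concrete, this is roughly how I apply PCA for dimensionality reduction in numpy (toy data; all names and sizes here are just placeholders I made up, not from the paper):

```python
import numpy as np

# Toy data: n descriptors of dimension p, reduced to d dimensions.
rng = np.random.default_rng(0)
n, p, d = 100, 8, 4
X = rng.standard_normal((n, p))

# Centering (C): subtract the mean descriptor.
mu = X.mean(axis=0)
Xc = X - mu

# Eigendecomposition of the covariance matrix of the centered data.
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # columns of eigvecs are eigenvectors

# Sort eigenvectors by decreasing eigenvalue and keep the top d.
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:d]]  # p x d projection matrix

# Reduce one descriptor v from 1 x p to 1 x d.
v = rng.standard_normal(p)
v_reduced = (v - mu) @ W
```

Is the "rotation" just this multiplication by the eigenvector matrix (before truncating to d columns), or is it something extra?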
Can someone please explain this to me?
Do I have to reorder v in the same way before doing the product with the eigenvector matrix? For example, if the third eigenvector is swapped with the first one, do I have to swap v[3] with v[1] before doing the product with the eigenvector matrix? – user6321 Jan 13 '17 at 15:00