
I am using principal component analysis (PCA) for noise reduction, i.e. projecting a point from the original space to the reduced space, and then back to the original one.

For this sort of usage, is it necessary that the PC vectors have unit length? When I try to scale them, I am getting different results, so I suspect they must be unit length, I'd just like to get confirmation and explanation as to why this is.

The Baron
  • If your data point is $\mathbf x \in \mathbb R^d$, and the $k$ PC vectors are stacked as columns into a $d\times k$ matrix $\mathbf V$, then your projection should be $\mathbf V \mathbf V^\top \mathbf x$. This is an orthogonal projector only if the columns of $\mathbf V$ have unit length. There are various ways to see it. Imagine that you take only one PC vector $\mathbf v$ and $\mathbf x = \alpha \mathbf v$ actually lies in the subspace spanned by it. Then the $\mathbf{vv}^\top$ transformation should do nothing. If $\mathbf v$ is of unit length, it works out; if not, then not! – amoeba Oct 15 '15 at 21:21
  • Another argument is that repeatedly applying $\mathbf{vv}^\top$ or $\mathbf{VV}^\top$ should be exactly equivalent to applying it only once. This is only possible if all columns are orthogonal and have unit length. – amoeba Oct 15 '15 at 23:00
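The two comments above can be checked numerically. In this sketch (names and dimensions are illustrative, not from the question), `V` holds unit-length basis vectors for a random 2-D subspace of $\mathbb R^5$; projecting twice with `V @ V.T` matches projecting once, while scaling the columns breaks idempotence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal basis for a random 2-D subspace of R^5 via QR:
# the columns of V have unit length and are mutually orthogonal.
A = rng.standard_normal((5, 2))
V, _ = np.linalg.qr(A)

x = rng.standard_normal(5)
P = V @ V.T  # orthogonal projector onto span(V)

# Idempotence: projecting twice equals projecting once.
print(np.allclose(P @ (P @ x), P @ x))  # True

# Scale the columns: W @ W.T is no longer a projector.
W = V * np.array([2.0, 0.5])
Q = W @ W.T
print(np.allclose(Q @ (Q @ x), Q @ x))  # False
```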

1 Answer


That depends on what you want to achieve by the subspace projection.

If your goal is to cut off all noise-only dimensions and keep the signal dimensions unchanged, then the PC vectors must have unit length. Otherwise you would distort the signal, stretching or shrinking the signal space along each PC vector according to its length.
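As a small end-to-end sketch of this use case (the data and noise level are made up for illustration): points near a 2-D plane in $\mathbb R^5$ are perturbed with noise, PCA is computed via the SVD (which returns unit-length PC vectors), and projecting to the top-2 subspace and back reduces the error relative to the clean data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 200 points in a 2-D plane inside R^5, plus noise.
B, _ = np.linalg.qr(rng.standard_normal((5, 2)))     # true signal basis
clean = rng.standard_normal((200, 2)) @ B.T          # points in the plane
noisy = clean + 0.1 * rng.standard_normal((200, 5))  # isotropic noise

# PCA via SVD of the centered data; rows of Vt are unit-length PC vectors.
centered = noisy - noisy.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
V = Vt[:2].T                                         # top-2 PCs, d x k

# Project to the PC subspace and back: V V^T x for each point.
denoised = noisy.mean(axis=0) + centered @ V @ V.T

# Cutting the noise-only dimensions should shrink the error to the clean data.
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)  # True
```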

If your goal is more elaborate, for example to increase the signal-to-noise ratio of a digital communications signal, then the weights of the PC vectors will in general differ from one. How they differ depends entirely on the signal-processing task at hand.
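One common way such non-unit weights arise (a hedged illustration, not something specified in the answer) is Wiener-style shrinkage: each component is weighted by its estimated SNR, $\lambda_i/(\lambda_i+\sigma^2)$, so strong components pass nearly unchanged while weak ones are attenuated. The variances below are arbitrary assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)

V, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # unit-length PC vectors
lam = np.array([4.0, 1.0])   # assumed signal variance along each PC
sigma2 = 0.5                 # assumed noise variance

# Wiener-style weight per component; always strictly less than one.
w = lam / (lam + sigma2)

x = rng.standard_normal(5)
filtered = V @ (w * (V.T @ x))  # weighted projection, not a pure projector
```

Because every weight is below one, the filtered vector is never longer than the plain orthogonal projection `V @ V.T @ x`; the weighting trades a little signal distortion for noise suppression.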

WeiXi