I'm trying to understand Fisher's LDA. In the book I am using, it is explained that one has to maximize the expression
$$\frac{\left( a^T\cdot \bar{x}_2-a^T\cdot \bar{x}_1 \right)^2}{a^T\cdot W\cdot a}$$
with respect to $a$, where $a$ is the vector defining the direction onto which the data should be projected, $\bar{x}_1, \bar{x}_2$ are the mean vectors of the two classes, and $W$ is the pooled within-class covariance matrix.
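If I understand correctly, the maximizer is determined only up to scale: setting the gradient of this ratio with respect to $a$ to zero gives
$$a \propto W^{-1}\left( \bar{x}_2-\bar{x}_1 \right),$$
which is what the computation sketched further below uses.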
I solved an example with two classes:
C1 = {{1, 2}, {2, 3}, {3, 3}, {4, 5}, {5, 5}};
C2 = {{1, 0}, {2, 1}, {3, 1}, {3, 2}, {5, 3}, {6, 5}};
and got the vector $a=\{-0.420776, 0.471853\}$. As far as I know, this vector should be perpendicular to the discriminant line, but when I plot it, it lies exactly along the discriminant line. My questions are:
Is my solution fine? How do I plot the obtained $a$ vector?
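For reference, here is a minimal sketch of one way to compute the direction from C1 and C2 as defined above (I am assuming $W$ is the pooled within-class scatter matrix; using the pooled covariance matrix instead only rescales $a$, not its direction):

m1 = Mean[C1]; m2 = Mean[C2];
(* scatter is a helper name I introduce: the sum of outer products of the deviations from the class mean *)
scatter[c_, m_] := With[{d = (# - m &) /@ c}, Transpose[d].d];
W = scatter[C1, m1] + scatter[C2, m2];
a = N[Inverse[W].(m2 - m1)]
(* -> {0.793578, -0.889908}, proportional to my {-0.420776, 0.471853} up to sign and scale *)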
Comments:

If you now rescale it to sum of squares = 1 you'll get -.665556, .746347. These are the cosines between the discriminant axis and the two variables' axes. Your eigenvector * sqrt(11-2) is the (unstandardized) discriminant coefficients that help to compute discriminant scores. And no, you are not correct saying the vector should be perpendicular to the discriminant line: it defines the discriminant line. – ttnphns May 01 '15 at 15:45

-.665556, .746347. Right? But it is the hypotenuse, and its projection on a variable axis, the cathetus, is hypotenuse*cos, where cos is again the value taken from -.665556, .746347. Easy. – ttnphns May 02 '15 at 11:59
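To illustrate the comments (a minimal sketch: Normalize rescales to sum of squares = 1; the choice of the overall mean as the point the line passes through, and the names aUnit and grandMean, are mine):

aUnit = Normalize[{-0.420776, 0.471853}]
(* -> {-0.665557, 0.746347}, the cosines quoted in the comments, up to rounding *)
grandMean = Mean[Join[C1, C2]];
Show[
 ListPlot[{C1, C2}, PlotStyle -> {Blue, Red}, AspectRatio -> Automatic],
 Graphics[{Thick, InfiniteLine[grandMean, aUnit]}]
]
(* the drawn line is the discriminant axis that a defines, not a perpendicular to it *)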