I am deriving likelihoods of class membership after LDA dimensionality reduction.
I understand from ttnphns's answer to this question that the pooled within-class covariance matrix of the discriminants is the identity matrix, which simplifies the multivariate normal density from
$$f(x\mid k)= \frac{1}{\sqrt{(2\pi)^d |{\textbf S}|}} \, e^{-\frac{1}{2}(x-\mu_k)^\top \textbf S ^{-1}(x-\mu_k)}$$ to the simpler $$f(x\mid k)= \frac{1}{\sqrt{(2\pi)^d}} \, e^{-\frac{1}{2}(x-\mu_k)^\top (x-\mu_k)}$$
where ${\textbf S}$ is the pooled within-class covariance matrix of the discriminants, $\mu_k$ is the centroid of class $k$, and $d$ is the number of discriminants.
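To see concretely what the simplification buys: when $\textbf S = \textbf I$, the Mahalanobis term collapses to plain squared Euclidean distance. A minimal numeric sketch (in Python/numpy rather than MATLAB; the values of `x`, `mu_k`, `S` are made up for illustration):

```python
import numpy as np

# With S = I, the quadratic form (x-mu_k)' S^{-1} (x-mu_k) reduces to
# the squared Euclidean distance (x-mu_k)' (x-mu_k).
rng = np.random.default_rng(0)
x = rng.normal(size=3)       # an arbitrary point
mu_k = rng.normal(size=3)    # an arbitrary class centroid
S = np.eye(3)                # identity pooled covariance

diff = x - mu_k
mahalanobis = diff @ np.linalg.inv(S) @ diff   # (x-mu_k)' S^{-1} (x-mu_k)
euclidean = diff @ diff                        # (x-mu_k)' (x-mu_k)

print(np.isclose(mahalanobis, euclidean))      # True: S^{-1} drops out
```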
Why the identity matrix? It's not obvious to me, and a simulation (based on my understanding of LDA) doesn't reproduce it:
w = rand(3) + eye(3);          % random factor for the within-class part
b = rand(3) + eye(3);          % random factor for the between-class part
W = w*w';                      % random SPD pooled within-class covariance
B = b*b';                      % random SPD between-class covariance
[V, D] = eig(inv(W)*B);        % discriminant axes as columns of V
[eigenvalues, indx] = sort(diag(D), 'descend');
V = V(:, indx);                % discriminants ordered by eigenvalue
V' * W * V                     % within-class covariance in discriminant space -- not I
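For what it's worth, the result seems to hinge on a scaling convention: `eig` returns eigenvectors normalized to unit Euclidean length, whereas (as I understand it) LDA scales each discriminant axis so that $V^\top \textbf W V = \textbf I$. A Python/numpy rework of the simulation under that assumption, using a Cholesky reduction of the generalized eigenproblem:

```python
import numpy as np

# Assumption: discriminant axes are W-normalized, i.e. scaled so that the
# pooled within-class covariance of the discriminant scores, V' W V, is I.
# MATLAB's eig() instead returns unit-length columns, hence the discrepancy.
rng = np.random.default_rng(1)
w = rng.random((3, 3)) + np.eye(3)
b = rng.random((3, 3)) + np.eye(3)
W = w @ w.T                      # random SPD pooled within-class covariance
B = b @ b.T                      # random SPD between-class covariance

L = np.linalg.cholesky(W)        # W = L L'
Li = np.linalg.inv(L)
M = Li @ B @ Li.T                # symmetric; same eigenvalues as inv(W) B
evals, U = np.linalg.eigh(M)     # U has orthonormal columns
order = np.argsort(evals)[::-1]  # sort discriminants by eigenvalue, descending
V = Li.T @ U[:, order]           # W-normalized discriminant axes

print(np.allclose(V.T @ W @ V, np.eye(3)))   # True: identity within-class covariance
```

With this scaling, $V^\top \textbf W V = U^\top L^{-1} W L^{-\top} U = U^\top U = \textbf I$, which would explain where the identity matrix comes from.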