
This is related to the question posted in:

The discriminant function in linear discriminant analysis

In the one-dimensional case, $p_k(x) = \dfrac{f_k(x)\pi_k}{P(X = x)}$, where $f_k(x) = \dfrac{1}{\sqrt{2\pi}\sigma}\exp\left(-\dfrac{1}{2\sigma^2}(x-\mu_k)^2\right)$.

After taking the logarithm, the discriminant function is given as:

$\delta_k(x) = \log \pi_k + \dfrac{x \mu_k}{\sigma^2} - \dfrac{\mu_k^2}{2\sigma^2}$.

Why is there no term involving $-\dfrac{x^2}{2\sigma^2}$? After all, $x^2$ is the first term in the expansion of $(x-\mu_k)^2$. Why is it neglected?

And now that I have a discriminant function, what is the next step?

gunes
cgo

2 Answers


It's ignored because you are trying to determine the class given $X = x$. The $-x^2/2\sigma^2$ term is therefore the same for every class $k$, so it cancels whenever two discriminant functions $\delta_k(x)$ and $\delta_l(x)$ are compared, and the decision is unaffected. Once you have the discriminant functions, you plug $x$ into each one and choose the class whose discriminant is largest. Note that this is also the Bayes classifier under normal class-conditional distributions with a common variance.
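As a concrete illustration of "plug $x$ in and take the maximum", here is a minimal sketch of the 1-D classifier. The means, shared variance, and priors below are made-up values for demonstration, not estimates from any real data:

```python
import numpy as np

# Assumed toy parameters: two classes with means mu_k, shared variance
# sigma^2, and prior probabilities pi_k (all values are hypothetical).
mus = np.array([-1.0, 2.0])    # mu_k for k = 0, 1
sigma2 = 1.5                   # shared variance sigma^2
priors = np.array([0.4, 0.6])  # pi_k

def discriminant(x, mu, pi, sigma2):
    """delta_k(x) = log pi_k + x*mu_k/sigma^2 - mu_k^2 / (2 sigma^2)."""
    return np.log(pi) + x * mu / sigma2 - mu**2 / (2 * sigma2)

def classify(x):
    # Evaluate each class's discriminant at x and pick the argmax.
    scores = [discriminant(x, mus[k], priors[k], sigma2)
              for k in range(len(mus))]
    return int(np.argmax(scores))

print(classify(-2.0))  # near the class-0 mean -> 0
print(classify(3.0))   # near the class-1 mean -> 1
```

In practice the $\mu_k$, $\sigma^2$, and $\pi_k$ would be replaced by their sample estimates from training data; the decision rule itself is unchanged.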

gunes

Only the terms that depend on the class index $k$ matter when writing the decision rule. Since $-\dfrac{x^2}{2\sigma^2}$ does not depend on $k$, the decision is unaffected by the numerical value of this quantity, so it is neglected.
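One can check this numerically: adding the common $-x^2/(2\sigma^2)$ term back into every discriminant shifts all the scores by the same amount and never changes the argmax. The parameters below are arbitrary toy values:

```python
import numpy as np

# Hypothetical toy parameters for two classes sharing variance sigma^2.
mus = np.array([0.0, 3.0])
priors = np.array([0.5, 0.5])
sigma2 = 2.0

def delta(x, k, with_quadratic=False):
    d = np.log(priors[k]) + x * mus[k] / sigma2 - mus[k]**2 / (2 * sigma2)
    if with_quadratic:
        d -= x**2 / (2 * sigma2)  # term common to all classes
    return d

def same_decision(x):
    # The argmax over k is identical with or without the common term.
    without = np.argmax([delta(x, k) for k in range(2)])
    with_q = np.argmax([delta(x, k, with_quadratic=True) for k in range(2)])
    return without == with_q

print(all(same_decision(x) for x in np.linspace(-5.0, 5.0, 21)))  # True
```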