
Cohen's kappa with two raters is:

$$\kappa = \frac{p_o - p_a}{1 - p_a}$$

where $p_a$ is the probability of agreement by chance and $p_o$ is the observed rate of agreement. I can't figure out why the denominator is $1 - p_a$ rather than $p_a$. Shouldn't it be $p_a$, since the score should be inversely proportional to $p_a$ (the more agreement that could occur by accident, the lower the score should be)?

Chip Huyen

2 Answers


The problem is that as $p_a$ goes up, both the denominator and the numerator change. This makes it hard to visualize how the entire expression changes, since the usual trick is something like "fix the numerator, vary the denominator," which doesn't work here.

It might be easier to understand if you rewrite it so that $p_a$ appears only once, for example:

$$\kappa = 1 - \frac{1-p_o}{1-p_a}$$
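To see that this is just an algebraic rearrangement of the original formula, put both terms over the common denominator $1 - p_a$:

$$1 - \frac{1-p_o}{1-p_a} = \frac{(1-p_a) - (1-p_o)}{1-p_a} = \frac{p_o - p_a}{1-p_a} = \kappa.$$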

Notice that you can also read the fraction as the proportion of observed disagreement divided by the proportion of disagreement expected by chance. Obviously, you want your observed disagreement to be small, and in particular, you want it to be small relative to the probability that you would disagree by chance. That is, you want the entire fraction to be small, so that $\kappa$ is large.
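As a numerical sanity check, here is a minimal Python sketch (the $2 \times 2$ contingency table of the two raters' labels is made up for illustration) that computes $\kappa$ both ways:

```python
import numpy as np

# Hypothetical contingency table of two raters' labels
# (rows: rater A, columns: rater B); the counts are illustrative only.
table = np.array([[45, 5],
                  [10, 40]])

n = table.sum()
p_o = np.trace(table) / n    # observed agreement: fraction on the diagonal
row = table.sum(axis=1) / n  # rater A's marginal label distribution
col = table.sum(axis=0) / n  # rater B's marginal label distribution
p_a = row @ col              # chance agreement from the marginals

kappa = (p_o - p_a) / (1 - p_a)              # original form
kappa_rewritten = 1 - (1 - p_o) / (1 - p_a)  # rearranged form
assert np.isclose(kappa, kappa_rewritten)
print(kappa)  # 0.7 for this table
```

Both forms give the same value, as the algebra above guarantees.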

Nathan

I think it is to standardize the kappa coefficient, so that you can compare kappa statistics across different models on the same classification task.
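Concretely, dividing by $1 - p_a$ pins the endpoints of the scale in place no matter how large the chance agreement is: perfect agreement always gives $\kappa = 1$, and chance-level agreement always gives $\kappa = 0$:

$$p_o = 1 \;\Rightarrow\; \kappa = \frac{1 - p_a}{1 - p_a} = 1, \qquad p_o = p_a \;\Rightarrow\; \kappa = \frac{p_a - p_a}{1 - p_a} = 0.$$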