
I have a few models which I would like to combine later on; for each model I have the sensitivity and specificity. I then calculate the positive likelihood ratio for each model to get a feel for how well it performs: $LR_{+} = \frac{\text{sensitivity}}{1-\text{specificity}}$ (https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing).

The problem is that a few of my models have perfect specificity (i.e. a specificity of 1), so the division produces a divide-by-zero error. How can I solve this? I thought of subtracting some small amount from the specificity in all cases, but that strongly influences the final likelihood ratios and, more importantly, the differences between the models (see the sketch below). Is there a fair way to solve this?
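
For concreteness, here is a minimal Python sketch of the subtraction idea (the model names, sensitivities and specificities are made up, and `lr_plus` is just an illustrative helper):

```python
# LR+ = sensitivity / (1 - specificity) blows up when specificity == 1.

def lr_plus(sensitivity, specificity, eps=0.0):
    """Positive likelihood ratio, with the specificity shrunk by eps."""
    return sensitivity / (1.0 - (specificity - eps))

models = {"A": (0.90, 0.97), "B": (0.85, 1.00)}  # (sensitivity, specificity)

for eps in (1e-2, 1e-3, 1e-4):
    ratios = {name: lr_plus(se, sp, eps) for name, (se, sp) in models.items()}
    print(eps, ratios)

# The reported LR+ for model B (perfect specificity) changes by orders of
# magnitude with the arbitrary choice of eps, so the comparison between
# A and B is driven by eps rather than by the models themselves.
```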

  • Could you just flip the ratios? – Alexander Etz May 03 '16 at 22:18
  • That would solve the divide-by-zero errors, but I still won't have a way to compare models. In the original formula a ratio of 30 for model A and 15 for model B could tell me how much more accurate model A is. If the ratios are 0 I can't do this :( – Héctor van den Boorn May 03 '16 at 22:25
  • The likelihood ratio in that case is infinite.

    If no one tests positive who doesn't have the disease, then a positive test result implies, with absolute certainty, that the person has the disease! A positive test result would imply it's infinitely more likely the person has the disease than not.

    Note that if a model says something can NEVER happen, and that thing then happens in the data, then $P(\text{data} \mid \text{model}) = 0$. That doesn't mean the model is useless from a practical standpoint, but you do know with certainty that the model can't be entirely correct.

    – Matthew Gunn May 04 '16 at 04:42
  • Be aware that the math will take your model quite literally. Imagine Model B almost perfectly predicts who does or doesn't have cancer, but in a rare case, something weird happens in the data which Model B says is impossible. If you compare Model B to a total junk Model A which says everything is possible (and is totally useless), you'll get the seemingly insane result that Model A is infinitely more likely than Model B. Of course Model B is more useful, but we can reject it with 100% certainty while we cannot reject Model A with absolute certainty. – Matthew Gunn May 04 '16 at 04:59
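
A small numeric sketch of the point in these comments (Python again, with made-up confusion-matrix counts): when no negatives test positive in the sample, the estimated $LR_{+}$ really is infinite, and treating it as `math.inf` avoids the divide-by-zero error, although it still gives no way to rank two such models against each other:

```python
import math

def lr_plus_from_counts(tp, fn, fp, tn):
    """Positive likelihood ratio estimated from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    if fp == 0:          # perfect specificity in this sample
        return math.inf  # the estimated LR+ is genuinely infinite
    return sensitivity / (1.0 - specificity)

print(lr_plus_from_counts(tp=45, fn=5, fp=0, tn=50))  # inf
print(lr_plus_from_counts(tp=48, fn=2, fp=3, tn=47))  # ≈ 16
```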

0 Answers