I have a few models that I would like to combine later on; for each model I have the sensitivity and specificity. To get a feel for how well each model performs, I then calculate its positive likelihood ratio: $LR_{+} = \frac{\text{sensitivity}}{1-\text{specificity}}$ (https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing).
The problem is that a few of my models have perfect specificity (i.e. a specificity of 1), so the division produces a divide-by-zero error. How can I solve this? I thought of subtracting some small amount from the specificity in all cases, but this strongly influences the final likelihood ratios and, more importantly, the differences between the models. Is there a way to solve this fairly?
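For concreteness, here is a minimal Python sketch of the situation (the model names and numbers are made up): rather than subtracting an arbitrary amount from the specificity, it simply reports the $LR_{+}$ of a perfectly specific model as infinite.

```python
import math

def positive_likelihood_ratio(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity); infinite when specificity == 1."""
    false_positive_rate = 1.0 - specificity
    if false_positive_rate == 0.0:
        return math.inf  # perfect specificity: no false positives, the ratio is unbounded
    return sensitivity / false_positive_rate

# Hypothetical models and numbers, purely for illustration
models = {"model_a": (0.80, 0.95), "model_b": (0.70, 1.00)}
for name, (sens, spec) in models.items():
    print(f"{name}: LR+ = {positive_likelihood_ratio(sens, spec):.3g}")
# model_a: LR+ = 16
# model_b: LR+ = inf
```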
If no one who doesn't have the disease ever tests positive, then a positive test result implies, with absolute certainty, that the person has the disease! In likelihood-ratio terms, a positive result makes it infinitely more likely that the person has the disease than not.
Note that if a model says something can NEVER happen, and the thing then happens in the data, then $P(\text{data} \mid \text{model}) = 0$. That doesn't mean the model is useless from a practical standpoint, but you do know with certainty that the model can't be entirely correct.
– Matthew Gunn May 04 '16 at 04:42
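To make the reasoning in the comment above explicit, here is a short Bayes-rule derivation (the notation is mine: $D$ for having the disease, $T^{+}$ for a positive test result):

$$P(D \mid T^{+}) = \frac{P(T^{+} \mid D)\,P(D)}{P(T^{+} \mid D)\,P(D) + P(T^{+} \mid \bar{D})\,P(\bar{D})}.$$

With perfect specificity, $P(T^{+} \mid \bar{D}) = 1 - \text{specificity} = 0$, so the second term in the denominator vanishes and $P(D \mid T^{+}) = 1$ whenever $P(T^{+} \mid D)\,P(D) > 0$: a positive result is certain evidence of disease, which is exactly what an infinite $LR_{+}$ expresses.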