
I have a multi-class classification model that predicts probabilities over 4 classes, with the following percentage of true positives per class: 0.74, 0.86, 0.87, 0.91.

I have predictions for 3 test cases:

1. (0.6, 0.1, 0.1, 0.2)  
2. (0.01, 0.09, 0.5, 0.4) 
3. (0.55, 0.25, 0.1, 0.1)
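
Here is how I read the predicted class off each case: I take the argmax and its probability (a minimal Python sketch that just restates the numbers above):

```python
import numpy as np

# The three predicted probability vectors listed above.
preds = np.array([
    [0.60, 0.10, 0.10, 0.20],
    [0.01, 0.09, 0.50, 0.40],
    [0.55, 0.25, 0.10, 0.10],
])

for i, p in enumerate(preds, start=1):
    # The predicted class is the argmax; its probability is what I am asking about.
    print(f"case {i}: predicted class {p.argmax()}, top probability {p.max():.2f}")
```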

Questions:

  1. Would it be correct to say that the class with probability 0.6 in case 1 is a confident prediction, while the class with probability 0.01 in case 2 is not a confident prediction?
  2. And what can we say about the class with probability 0.55 in case 3: is it a confident prediction or not?
  3. Would it be better to calculate a 95% confidence interval for accuracy rather than trying to evaluate the confidence of the prediction probabilities? (A sketch of what I mean is below.)
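
For question 3, this is roughly what I have in mind: a normal-approximation (Wald) 95% confidence interval for the overall test accuracy. The counts below are made up just to illustrate the calculation:

```python
import math

# Hypothetical counts: suppose the model classified 830 of 1000 test cases correctly.
correct, n = 830, 1000
acc = correct / n

# Normal-approximation (Wald) 95% confidence interval for accuracy:
# acc +/- z * sqrt(acc * (1 - acc) / n), with z ~= 1.96 for 95% coverage.
z = 1.96
half_width = z * math.sqrt(acc * (1 - acc) / n)
print(f"accuracy = {acc:.3f}, 95% CI = ({acc - half_width:.3f}, {acc + half_width:.3f})")
```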

Thanks!

dokondr
  • Why don't your probabilities add up to one? $0.74 + 0.86 + 0.87 + 0.91 = 3.38$. Also, doesn't a predicted probability of $0.01$ signal a model telling you something like, "No, absolutely not," while a predicted probability of $0.6$ signals a model telling you, "Maybe, I kind of think so, but maybe not"? – Dave Feb 06 '23 at 00:22
  • Sorry for the wrong description: 0.74, 0.86, 0.87, 0.91 are not probabilities but the percentage of TP in each class. – dokondr Feb 06 '23 at 00:56
  • How do your percents add up to 3.38? – Dave Feb 06 '23 at 00:57
  • You don't add percentages across different classes. You add the percentage of TP to the percentage of FP to get 100% of the samples in one class. These numbers come from the confusion matrix. – dokondr Feb 06 '23 at 01:06
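
To show what I mean by the percentage of TP per class, here is a minimal sketch with a made-up 4x4 confusion matrix whose row-normalized diagonal happens to reproduce 0.74, 0.86, 0.87, 0.91:

```python
import numpy as np

# Hypothetical confusion matrix (rows = true class, columns = predicted class).
# The counts are invented; only the diagonal / row-total ratios matter here.
cm = np.array([
    [74, 10,  8,  8],
    [ 5, 86,  4,  5],
    [ 4,  5, 87,  4],
    [ 3,  2,  4, 91],
])

# Percentage of TP per class = diagonal entry divided by the row total
# (this is the per-class recall).
tp_rate = np.diag(cm) / cm.sum(axis=1)
print(tp_rate)  # [0.74 0.86 0.87 0.91]
```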