
A typical confusion matrix from R's caret package might look like this:

> confusionMatrix(pred_gbm, vowel.test$y)   
Confusion Matrix and Statistics

          Reference
Prediction  1  2  3  4  5  6  7  8  9 10 11
        1  33  1  0  0  0  0  0  0  0  4  0
        2   8 23  3  0  0  0  2  0  1 16  1
        3   1 14 31  3  0  0  0  0  0  0  0
        4   0  0  2 31  3  1  0  0  0  0  2
        5   0  0  0  0 17  7  9  0  0  0  0
        6   0  0  6  8 16 23  3  0  0  0  4
        7   0  0  0  0  3  0 27  7  5  0  3
        8   0  0  0  0  0  0  0 29  5  0  0
        9   0  4  0  0  0  0  1  6 24  2 13
        10  0  0  0  0  0  0  0  0  2 20  0
        11  0  0  0  0  3 11  0  0  5  0 19

Overall Statistics

               Accuracy : 0.5996          
                 95% CI : (0.5533, 0.6446)
    No Information Rate : 0.0909          
    P-Value [Acc > NIR] : < 2.2e-16       

                  Kappa : 0.5595          
 Mcnemar's Test P-Value : NA              

In the output above there are several statistics describing classification performance: the overall accuracy, its 95% confidence interval, the no-information rate (NIR), a p-value for the test that accuracy exceeds the NIR, and Cohen's kappa. How do I interpret the p-value and the confidence interval to judge how good the classification is?
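For context on where these numbers come from: caret's documentation states that the accuracy CI is an exact (Clopper-Pearson) binomial interval and that the "Acc > NIR" p-value is a one-sided binomial test of the accuracy against the no-information rate. The sketch below reproduces both with base R's `binom.test`, using the counts recoverable from the matrix above (277 correct predictions on the diagonal, 462 test observations, NIR = 1/11 since the largest class has 42 of 462 cases); the specific counts are read off the printed output, not independently verified.

```r
# Counts taken from the confusion matrix shown above (assumed, not re-run):
n_correct <- 277   # sum of the diagonal entries
n_total   <- 462   # total test-set observations
nir       <- 1/11  # no-information rate: largest class proportion (42/462)

# 95% exact binomial (Clopper-Pearson) CI for the accuracy,
# matching the "95% CI" line in caret's output:
binom.test(n_correct, n_total)$conf.int

# One-sided binomial test that the true accuracy exceeds the NIR,
# matching the "P-Value [Acc > NIR]" line:
binom.test(n_correct, n_total, p = nir, alternative = "greater")$p.value
```

In words: the CI says that, given 277 successes in 462 trials, the plausible range for the true accuracy is roughly 0.55 to 0.64; the tiny p-value says this accuracy is far better than always guessing the most frequent class.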
