
Just a couple of quick questions to clarify my doubts, please.

I know that one can get precision/recall for each class in a multiclass problem, e.g. in this classification report:

Classification report
              precision    recall  f1-score   support

           0       0.65      0.62      0.63     14601
           1       0.07      0.27      0.11      1398
           2       0.43      0.06      0.10      8317
           3       0.58      0.70      0.64     20301
           4       0.00      0.00      0.00       904

    accuracy                           0.53     45521
   macro avg       0.34      0.33      0.30     45521
weighted avg       0.55      0.53      0.51     45521

Accuracy is not computed per class, since it is a global metric.
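
For reference, the report above looks like the output of scikit-learn's classification_report. A minimal, standalone sketch of how such a report is produced (toy labels stand in for my real data):

    # Minimal sketch: toy labels stand in for the real y_true / y_pred.
    from sklearn.metrics import classification_report

    y_true = [0, 1, 2, 2, 3, 3, 4, 0]   # true class labels (toy data)
    y_pred = [0, 1, 1, 2, 3, 0, 4, 0]   # predicted class labels (toy data)

    # Prints per-class precision/recall/F1 plus the global accuracy and
    # the macro / weighted averages, in the same layout as the report above.
    print(classification_report(y_true, y_pred, digits=2))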

  1. Can one also compute ROC AUC for each class? (See the sketch after this list for what I mean.)
  2. What is the difference between the macro/weighted avg precision/recall and the per-class values? Do they represent a global measure of precision/recall?
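
To illustrate question 1, here is a minimal sketch of what I mean by "ROC AUC for each class", scoring each class one-vs-rest. A synthetic dataset and a logistic regression are stand-ins for my real data/model (so the numbers will not match the report above); at the end it also recomputes macro/weighted averages from the per-class values, to check my understanding for question 2:

    # Minimal sketch with synthetic data; make_classification and
    # LogisticRegression are placeholders for my real dataset and model.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import label_binarize
    from sklearn.metrics import roc_auc_score, precision_recall_fscore_support

    X, y = make_classification(n_samples=2000, n_classes=5,
                               n_informative=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    y_score = clf.predict_proba(X_te)              # (n_samples, n_classes)
    y_pred = clf.predict(X_te)
    classes = clf.classes_
    y_bin = label_binarize(y_te, classes=classes)  # one 0/1 column per class

    # Question 1: one ROC AUC per class, treating each class as "positive vs rest".
    for i, c in enumerate(classes):
        print(f"class {c}: AUC = {roc_auc_score(y_bin[:, i], y_score[:, i]):.3f}")
    # scikit-learn can also average these one-vs-rest AUCs directly:
    print("macro OvR AUC:", roc_auc_score(y_te, y_score, multi_class="ovr"))

    # Question 2 (my understanding): macro avg = plain mean of the per-class
    # values; weighted avg = mean weighted by each class's support.
    prec, rec, f1, support = precision_recall_fscore_support(y_te, y_pred)
    print("macro precision   :", prec.mean())
    print("weighted precision:", np.average(prec, weights=support))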