
Consider an ML model trained to classify between two classes.

This particular classifier has consistently poor performance, say 30% accuracy (the consistency is required for the conversion below to work).

Does flipping the output of this classifier result in a 70% accuracy classifier?

CheeseS
Related… pay particular attention to the comment by Stephan Kolassa. (If only he'd expand that into an answer…) – Dave Nov 28 '22 at 17:36

1 Answer


Accuracy is (TP + TN)/(P + N), where TP are true positives, TN are true negatives, P are positive cases, and N are negative cases. The numerator counts the cases where predicted == actual. If you flip every prediction, each case that was counted as correct becomes incorrect and vice versa, so the numerator becomes the number of cases where the original prediction satisfied predicted != actual, i.e. (P + N) − (TP + TN), while the denominator is unchanged. The flipped classifier's accuracy is therefore 1 minus the original accuracy. So yes, flipping the output of a consistent 30% accuracy binary classifier gives a 70% accuracy classifier.
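A quick numerical check of this (a minimal sketch in Python/NumPy; the 30%-accurate classifier here is simulated by construction, not a real trained model):

```python
import numpy as np

# Hypothetical example: simulate a binary classifier that is correct ~30% of the time.
rng = np.random.default_rng(0)
n = 10_000
actual = rng.integers(0, 2, size=n)

# The simulated classifier keeps the true label with probability 0.3
# and flips it otherwise, giving ~30% accuracy by construction.
correct_mask = rng.random(n) < 0.3
predicted = np.where(correct_mask, actual, 1 - actual)

accuracy = np.mean(predicted == actual)
flipped_accuracy = np.mean((1 - predicted) == actual)

print(f"original accuracy: {accuracy:.3f}")  # ~0.30
print(f"flipped accuracy:  {flipped_accuracy:.3f}")  # ~0.70

# For 0/1 labels, (1 - predicted) == actual exactly when predicted != actual,
# so the two accuracies always sum to 1.
assert np.isclose(accuracy + flipped_accuracy, 1.0)
```

The assertion at the end is the whole argument in one line: with binary labels, flipping a prediction turns a correct case into an incorrect one and vice versa, so the two accuracies sum to exactly 1.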

Tim