Consider an ML model trained to classify between two classes.
This classifier performs consistently poorly, say 30% accuracy (the performance must be consistent for the following conversion to hold).
Does flipping the output of this classifier result in a 70% accuracy classifier?
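For a binary classifier, every prediction is either right or wrong, so inverting the output turns each wrong prediction into a right one and vice versa; the flipped accuracy is exactly 1 minus the original. A minimal simulation sketch (labels, sample size, and the 30% figure are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical setup: binary labels, and predictions that are
# correct only ~30% of the time.
y_true = rng.integers(0, 2, size=n)
correct = rng.random(n) < 0.30                 # which predictions are right
y_pred = np.where(correct, y_true, 1 - y_true) # wrong ones get the other class

acc = (y_pred == y_true).mean()                # ~0.30
flipped_acc = ((1 - y_pred) == y_true).mean()  # flip every prediction

# Each wrong prediction becomes right and vice versa,
# so the two accuracies always sum to exactly 1.
print(acc, flipped_acc)
```

Note this only works because the problem is binary: with more than two classes, a wrong prediction flipped to "some other class" is not guaranteed to land on the correct one.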