I have split my data into training and test sets, built several prediction models, and now I want to evaluate the models on the test set. The data is highly imbalanced, so I balanced the training data with SMOTE before model building.
I wanted to compute the area under the receiver operating characteristic curve (ROC AUC) to evaluate the models. Is that a good way to evaluate them? As far as I understand, I should not upsample the test data. Is it appropriate to use ROC AUC on imbalanced test data, or should I use the area under the precision-recall curve instead?
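For clarity, here is a minimal sketch of my setup, assuming scikit-learn and imbalanced-learn; the synthetic data and the logistic regression model are just stand-ins for my actual data and models.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from imblearn.over_sampling import SMOTE

# Imbalanced toy data standing in for my real data (about 5% positives).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=42)

# Split first, then oversample only the training portion.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
X_train_bal, y_train_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Fit on the SMOTE-balanced training data; evaluate on the untouched test data.
model = LogisticRegression(max_iter=1000).fit(X_train_bal, y_train_bal)
scores = model.predict_proba(X_test)[:, 1]

print("ROC AUC:", roc_auc_score(y_test, scores))
print("PR AUC (average precision):", average_precision_score(y_test, scores))
```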