One of the most important considerations is whether you truly have discrete classification predictions. The fact that you are analyzing an ROC curve for a binary outcome tells me that you do not have discrete predictions but something on a continuum, such as a predicted probability or log-odds (as would come from a logistic regression). Especially when the predictions have a reasonable interpretation as probabilities (often but not always the case), this opens up a world of proper scoring rules that evaluate the predicted probabilities directly. Among their advantages is that they allow for more nuanced decision-making. Frank Harrell discusses these extensively in two great blog posts and throughout his answers on Cross Validated.
- Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules
- Classification vs. Prediction
Among the common evaluation metrics for these kinds of predictions are the Brier score and the log-loss. These can be normalized into pseudo $R^2$ scores, the Brier score giving Efron's pseudo $R^2$ and the log-loss giving McFadden's, which may be easier to interpret: they equal $1$ for perfect predictions and are less than $1$ for imperfect predictions, analogous to $R^2$ in regression.
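As a minimal sketch of how these normalizations can be computed (the data and logistic model here are made up purely for illustration; `brier_score_loss` and `log_loss` come from scikit-learn):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, log_loss

# Hypothetical data: binary outcome y, feature matrix X
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=500) > 0).astype(int)

# Predicted probabilities of the positive class
p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

# Brier score and log-loss (lower is better for both)
brier = brier_score_loss(y, p)
ll = log_loss(y, p)

# Null model: predict the overall event rate for every observation
p_null = np.full_like(p, y.mean())

# Efron's pseudo R^2: Brier score relative to the null model's Brier score
r2_efron = 1 - brier / brier_score_loss(y, p_null)

# McFadden's pseudo R^2: log-loss relative to the null model's log-loss
r2_mcfadden = 1 - ll / log_loss(y, p_null)

print(f"Brier: {brier:.3f}, log-loss: {ll:.3f}")
print(f"Efron R^2: {r2_efron:.3f}, McFadden R^2: {r2_mcfadden:.3f}")
```

Because the null model always predicts the base rate, the pseudo $R^2$ values measure how much the model improves on simply guessing the overall event rate for everyone.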
The equations get messier in the multi-class setting, but they work about how you would expect when you generalize from the binary case.
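For instance, one common way to write the multi-class generalizations (conventions vary, e.g., some authors divide the Brier score by $2$), with $\hat p_{ik}$ the predicted probability that observation $i$ belongs to class $k$, is:

$$\text{Brier} = \frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}\left(\hat p_{ik} - \mathbb{1}[y_i = k]\right)^2 \qquad \text{log-loss} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}\mathbb{1}[y_i = k]\log \hat p_{ik}$$

Note that under this convention, the $K = 2$ Brier score works out to twice the usual binary Brier score, since each observation's error is counted once per class.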