Questions tagged [roc]

Receiver Operating Characteristic, also known as the ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system.

The Receiver Operating Characteristic curve, also known as an ROC curve, is a graphical plot of the true positive rate against the false positive rate of a classifier as its discrimination threshold is varied.

The true positive rate, defined as the fraction of true positives out of all positives, is also called the sensitivity or recall. The false positive rate, defined as the fraction of false positives out of all negatives, is equivalent to 1 - specificity.
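The definitions above can be sketched in a few lines of plain Python: sweep the decision threshold over the observed scores and record an (FPR, TPR) point at each step. This is an illustrative sketch with made-up data; in practice one would use a library routine such as `sklearn.metrics.roc_curve`.

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) pairs by sweeping the decision threshold.

    scores: classifier scores, higher = more likely positive.
    labels: true classes, 1 = positive, 0 = negative.
    """
    P = sum(labels)          # number of actual positives
    N = len(labels) - P      # number of actual negatives
    points = []
    # Use +inf (predict nothing positive) and each observed score as thresholds.
    thresholds = [float("inf")] + sorted(set(scores), reverse=True)
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        # (FPR, TPR) = (1 - specificity, sensitivity)
        points.append((fp / N, tp / P))
    return points

# Hypothetical scores and true labels, just for illustration
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(roc_points(scores, labels))
```

Lowering the threshold can only add predicted positives, so both coordinates are non-decreasing: the curve runs from (0, 0) at the strictest threshold to (1, 1) at the most permissive one.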

In its original form, the ROC curve was used to summarize performance of a binary classification task, although it can be extended for use in multi-class problems.

A classifier performing at chance is expected to have true positive and false positive rates that are equal, producing a diagonal line. Classifiers that exceed chance produce a curve above this diagonal. The area under the curve (or AUC) is commonly used as a summary of the ROC curve and as a measure of classifier performance. The AUC is equal to the probability that a classifier will rank a randomly chosen positive case higher than a randomly chosen negative one. This is equivalent to the Wilcoxon rank-sum (Mann-Whitney U) statistic.
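The ranking interpretation of the AUC can be checked directly: count, over all positive/negative pairs, how often the positive case scores higher (counting ties as one half, as in the Mann-Whitney U statistic). A minimal sketch with hypothetical data; real work would use e.g. `sklearn.metrics.roc_auc_score`.

```python
def auc_rank(scores, labels):
    """AUC as P(score of a random positive > score of a random negative),
    with ties counted as 1/2 (the Mann-Whitney U / Wilcoxon rank-sum form)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores and labels, for illustration only
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(auc_rank(scores, labels))  # same value as the trapezoidal area under the ROC curve
```

For these data, 8 of the 9 positive/negative pairs are ranked correctly, so the AUC is 8/9; the trapezoidal area under the corresponding ROC curve gives the same number.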

ROC curves enable visualizing and organizing classifier performance without regard to class distributions or error costs. This can be helpful when investigating learning with skewed distributions or cost-sensitive learning.


914 questions
29
votes
2 answers

Average ROC for repeated 10-fold cross validation with probability estimates

I am planning to use repeated (10 times) stratified 10-fold cross validation on about 10,000 cases using machine learning algorithm. Each time the repetition will be done with different random seed. In this process I create 10 instances of…
user97953
  • 291
17
votes
3 answers

ROC curve crossing the diagonal

I am running a binary classifier at the moment. When I plot the ROC curve I get a good lift at the beginning then it changes direction and crosses the diagonal then of course back up, making the curve a tilted S like shape. What can be an…
15
votes
2 answers

Cut-off point in a ROC curve. Is there a simple function?

I want to find the cut-off point for gender based on an anthropological measurement. I can draw the curves and I know that in case sensitivity and specificity are both similarly important, the point closest to the upper left corner of the frame (or…
Vic
  • 1,353
  • 6
  • 19
  • 30
5
votes
2 answers

How are True Negative and False Negative converted into True Positive and False Positive in ROC curve?

I found the ROC explanation at this link. It states that the ROC curve is TP vs FP. After the score has gone below 0.5, all predictions are negative. That makes them either TN or FN. Thus, how does it make sense to continue drawing as the axes…
CaTx
  • 233
5
votes
2 answers

Based only on these sensitivity and specificity values, what is the best decision method?

If I have the following sensitivity and specificity values, what is the best decision we can say in this case? sensitivity specificity ----------- ----------- 66.3 74.7 87.2 65.9 56.4 76.4 79.5 …
4
votes
4 answers

Where does a ROC curve of a perfect classifier start?

A ROC point is a point with a pair of x and y values in the ROC space where x is 1 – specificity and y is sensitivity and the ROC curve always starts at (0.0, 0.0) and ends at (1.0, 1.0). What about a "perfect" classifier that makes at some…
user247002
4
votes
2 answers

Can a ROC Curve have a continuous outcome variable?

I'm currently undertaking research creating cut-off scores using ROC curves. I have encountered some confusion regarding the outcome variable. My outcome variable can range in score from 10-50, and we are using a cut-off previously established of 20…
4
votes
1 answer

ROC curve with multiple points

In a typical ROC curve all I do is a line from (0, 0) to (tpr, fpr) and a line from (tpr, fpr) to (1, 1). Now I see ROC curves with more than one points. Can someone explain what these extra points represent?
DimChtz
  • 175
  • 1
  • 7
3
votes
1 answer

Why do we need to create a model when creating a ROC curve?

I do not have strong background in statistics but I believe I know the basics to understand what a ROC curve means. I have a table, first column with probabilities (from 0 to 1) from a predictive test and second with true outcomes with 1 and 0 (1…
3
votes
1 answer

CAP and ROC measures

Why is the area under the curve measure better than the accuracy ratio? I know that the AR lies between 0.5 and 1 and the relationship $AUC=\frac{1}{2}(AR+1)$ holds, so the AUC lies between 0.75 and 1, but this does not really seem to be an…
testify
  • 31
3
votes
1 answer

Significance testing for comparing ROC areas

I have been analyzing the accuracy of 3 prognostication scores in predicting a certain binary outcome using ROC curves and significance testing for differences in AUCs between the curves (a figure of the ROC curves and the AUCs + 95% confidence…
3
votes
1 answer

Slope of ROC increases

Hi, my ROC curve seems weird because in most of the cases, the slope of ROC curve should decrease. Can anyone help to interpret this? Thanks in advance.
M.K
  • 161
3
votes
1 answer

question regarding ROC curve

When I was learning data-analysis course online, the lecturer spoke of two advantages of ROC curve. He said "that AUC results do not change with changes in the incidents of the actual condition, nor is AUC affected by changes in the relative cost of…
Danny
  • 43
3
votes
3 answers

Is there any other measure of the performance of a classifier than the area under the ROC curve?

I am trying to draw an ROC curve for a classifier and wondered to know if there is any other measure for the performance of the classifier than the AUC. And is there any free software that I can use to draw either the histograms or the probability…
2
votes
1 answer

Does ROC assessment of a binomial classifier serve as a good performance measure given equal weights between positives and negatives?

My problem is that i have created four candidate models that I am comparing mainly via the following performance measures: F-measure, recall, precision, accuracy and visual ROC assessment. The problem is that as you see from the table, the…