96

I'm working on a classification problem with unbalanced classes (5% 1's). I want to predict the class, not the probability.

In a binary classification problem, is scikit's classifier.predict() using 0.5 by default? If it doesn't, what's the default method? If it does, how do I change it?

In scikit some classifiers have the class_weight='auto' option, but not all do. With class_weight='auto', would .predict() use the actual population proportion as a threshold?

What would be the way to do this in a classifier like MultinomialNB that doesn't support class_weight, other than using predict_proba() and then calculating the classes myself?

desertnaut
ADJ

5 Answers

62

A custom threshold can be applied to the output of clf.predict_proba(), for example:

from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier(random_state=2)
clf.fit(X_train, y_train)                       # X_train, y_train from your own split
# y_pred = clf.predict(X_test)                  # default threshold is 0.5
y_pred = (clf.predict_proba(X_test)[:, 1] >= 0.3).astype(bool)  # apply a 0.3 threshold instead
Yuchao Jiang
    For clarification, you don't *set the threshold*, because that would imply that you are permanently changing the behavior of `clf.predict()`, which you don't. – pcko1 Jun 10 '19 at 08:22
  • This is the correct answer. I couldn't see in the MLP source where they do the 0.5 threshold though... – eggie5 Sep 30 '19 at 16:38
  • 3
    How would you tie this into GridSearchCV where the prediction being performed is internal and not accessible to you? Say a threshold of 0.3 would yield me a different best model choice. – demongolem Sep 08 '20 at 21:14
  • 4
    I think GridSearchCV will only use the default threshold of 0.5. It is not reasonable to change this threshold during training, because we want everything to be fair. It is only in the final predicting phase that we tune the probability threshold to favor more positive or negative results. E.g., to have a larger capture rate (at the cost of more false alarms), we can manually lower the threshold. – Yuchao Jiang Sep 10 '20 at 01:43 [One way to score a custom cutoff inside GridSearchCV is sketched after these comments.]
  • Hi, I used `svm.predict(prediction_data)` to predict a given dataset. However, when I looked at the probability scores of the instances predicted as positive, some of the scores are lower than the expected 0.5 (e.g. 0.1, 0.2). Any thoughts on why I get this result? Thank you! – Anqi Jul 21 '21 at 20:07
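Regarding the GridSearchCV discussion above: one option (a sketch, not part of the original answer) is to have the grid search score each candidate with a probability-based metric evaluated at your chosen cutoff, via make_scorer with needs_proba=True (newer scikit-learn versions use response_method='predict_proba' instead). The 0.3 cutoff, the F1 metric and the max_depth grid are illustrative assumptions:

from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# score each candidate model on F1 computed at a 0.3 cutoff instead of the default 0.5
def f1_at_cutoff(y_true, y_prob, cutoff=0.3):
    return f1_score(y_true, (y_prob >= cutoff).astype(int))

scorer = make_scorer(f1_at_cutoff, needs_proba=True)

grid = GridSearchCV(DecisionTreeClassifier(random_state=2),
                    param_grid={'max_depth': [3, 5, None]},
                    scoring=scorer, cv=5)
grid.fit(X_train, y_train)   # X_train, y_train as in the example above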
49

The threshold in scikit-learn is 0.5 for binary classification, and whichever class has the greatest probability for multiclass classification. In many problems a much better result may be obtained by adjusting the threshold. However, this must be done with care, and NOT on the holdout test data but by cross-validation on the training data. If you do any adjustment of the threshold on your test data you are just overfitting the test data.

Most methods of adjusting the threshold are based on the receiver operating characteristic (ROC) curve and Youden's J statistic, but it can also be done by other methods, such as a search with a genetic algorithm.

Here is a peer-reviewed journal article describing how this is done in medicine:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2515362/

As far as I know there is no package for doing it in Python, but it is relatively simple (though inefficient) to find the threshold with a brute-force search in Python (a sketch follows the R example below).

This is some R code that does it.

## load data
DD73OP <- read.table("/my_probabilites.txt", header=T, quote="\"")

library("pROC")
# No smoothing
roc_OP <- roc(DD73OP$tc, DD73OP$prob)
auc_OP <- auc(roc_OP)
auc_OP
# Area under the curve: 0.8909
plot(roc_OP)

# Best threshold
# Method: Youden
# Youden's J statistic (Youden, 1950) is employed. The optimal cut-off is the
# threshold that maximizes the distance to the identity (diagonal) line.
# Can be shortened to "y". The optimality criterion is:
# max(sensitivities + specificities)
coords(roc_OP, "best", ret=c("threshold", "specificity", "sensitivity"), best.method="youden")
# threshold specificity sensitivity
# 0.7276835   0.9092466   0.7559022
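The same search can be done in Python with scikit-learn's roc_curve. Below is a minimal sketch; the data is synthetic just to make the snippet runnable, and in practice y_true and y_prob should come from cross-validated predictions on the training data:

import numpy as np
from sklearn.metrics import roc_curve

# synthetic stand-ins; replace with cross-validated labels and positive-class probabilities
rng = np.random.RandomState(0)
y_true = rng.binomial(1, 0.05, size=1000)                    # ~5% positives, as in the question
y_prob = np.clip(rng.normal(0.2 + 0.5 * y_true, 0.2), 0, 1)  # fake scores correlated with the label

fpr, tpr, thresholds = roc_curve(y_true, y_prob)

# Youden's J = sensitivity + specificity - 1 = tpr - fpr; take the maximizing cutoff
best_threshold = thresholds[np.argmax(tpr - fpr)]
print("Best threshold by Youden's J:", best_threshold)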
denson
  • 12
    Great post! Most important point: "If you do any adjustment of the threshold on your test data you are just overfitting the test data." – Sven R. Kunze Sep 13 '18 at 15:52
  • Is it greater than or equal to 0.5 that rounds up to 1, or just greater than 0.5? – wordsforthewise Sep 20 '21 at 19:22
  • SFAIK, in scikit learn and most other packages >= 0.5 is positive class and < 0.5 is negative class. This is completely arbitrary since you can always define either class as the positive class. – denson Sep 22 '21 at 00:00
48

is scikit's classifier.predict() using 0.5 by default?

In probabilistic classifiers, yes. It's the only sensible threshold from a mathematical viewpoint, as others have explained.

What would be the way to do this in a classifier like MultinomialNB that doesn't support class_weight?

You can set the class_prior, which is the prior probability P(y) per class y. That effectively shifts the decision boundary. E.g.

>>> from sklearn.naive_bayes import MultinomialNB
# minimal dataset
>>> X = [[1, 0], [1, 0], [0, 1]]
>>> y = [0, 0, 1]
# use the empirical prior, learned from y
>>> MultinomialNB().fit(X, y).predict([[1, 1]])
array([0])
# use a custom prior to make class 1 more likely
>>> MultinomialNB(class_prior=[.1, .9]).fit(X, y).predict([[1, 1]])
array([1])
Fred Foo
  • It appears there's no class_prior for RandomForestClassifier. How to go about that? – famargar Jul 28 '17 at 11:08
  • 3
    The RandomForestClassifier does not have a class_prior parameter, but it has a class_weight parameter which can be used. – lbcommer Sep 06 '17 at 12:28
  • 6
    Actually the 0.5 default is arbitrary and does not have to be optimal, as noticed e.g. [in this answer on CV by Frank Harrell](https://stats.stackexchange.com/a/73364/35989), who is a respected authority. – Tim Oct 06 '17 at 13:49
  • 1
    "In probabilistic classifiers, yes. It's the only sensible threshold from a mathematical viewpoint, as others have explained." - This seems completely off base. What if you want to weight recall over precision for example? – rump roast Jul 03 '20 at 15:34
8

You seem to be confusing concepts here. A threshold is not a concept for a "generic classifier": the most basic approaches are based on some tunable threshold, but most existing methods create complex rules for classification which cannot (or at least should not) be seen as thresholding.

So, first: one cannot answer your question about the default threshold of scikit's classifiers, because there is no such thing.

Second: class weighting is not about the threshold; it is about a classifier's ability to deal with imbalanced classes, and it depends on the particular classifier. For example, in the SVM case it is the way of weighting the slack variables in the optimization problem, or, if you prefer, the upper bounds on the Lagrange multiplier values connected with particular classes. Setting this to 'auto' means using some default heuristic, but once again, it cannot simply be translated into a threshold.

Naive Bayes, on the other hand, directly estimates the class probabilities from the training set. This is called the "class prior", and you can set it in the constructor with the class_prior parameter.

From the documentation:

Prior probabilities of the classes. If specified the priors are not adjusted according to the data.
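To make the distinction concrete, here is a small illustrative sketch (the specific weights and priors are arbitrary, not from the original answer): class_weight reshapes the training problem of classifiers such as SVC, while class_prior fixes P(y) in the Naive Bayes estimators:

from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB

# SVM: class_weight rescales the per-class penalty inside the optimization problem
# ('balanced' is the newer spelling of the old 'auto' heuristic)
svm_clf = SVC(class_weight={0: 1, 1: 10})        # or class_weight='balanced'

# Naive Bayes: class_prior fixes P(y) instead of estimating it from the training data
nb_clf = MultinomialNB(class_prior=[0.5, 0.5])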

lejlot
  • 3
    Let me explain this differently, then feel free to say I'm still confused :-). Say I have two classes. Most classifiers will predict a probability. I can use the probability to evaluate my model, say using an ROC. But if I wanted to predict a class, I would need to choose a cutoff, say 0.5, and say "every observation with p<0.5 goes into class 0, and those with p>0.5 go to class 1". That is usually a good choice if your priors are 0.5-0.5. But for unbalanced problems, I'll need a different cutoff. My question really was asking about how that cutoff is handled in scikit when using .predict(). – ADJ Nov 14 '13 at 23:17
  • Most classifiers are not probabilistic ones. The fact that they can somehow "produce" this probability (estimate) does not mean that they actually "use it" to do a prediction. This is why I am referring to this as a probable confusion. predict() calls the original model's routine used to make the prediction; it can be probabilistic (NB), geometric (SVM), regression based (NN) or rule based (Trees), so asking about a probability value inside predict() seems like a conceptual confusion. – lejlot Nov 14 '13 at 23:43
  • 4
    @lejlot, if that's the case then wouldn't the whole concept of roc curve plotted with predict_proba become irrelevant too? Aren't different points of the roc curve plotted at different thresholds applied to the results of predict_proba? – Eugene Bragin Sep 07 '20 at 10:38
6

In case someone visits this thread hoping for a ready-to-use function (Python 2.7): in this example the cutoff is designed to reflect the ratio of events to non-events in the original dataset df, while y_prob could be the result of the .predict_proba method (assuming a stratified train/test split).

import numpy as np

def predict_with_cutoff(colname, y_prob, df):
    # event rate (percentage of positives) in the original dataset
    n_events = df[colname].values
    event_rate = sum(n_events) / float(df.shape[0]) * 100
    # choose the threshold so that the same fraction of predictions comes out positive
    threshold = np.percentile(y_prob[:, 1], 100 - event_rate)
    print "Cutoff/threshold at: " + str(threshold)
    y_pred = [1 if x >= threshold else 0 for x in y_prob[:, 1]]
    return y_pred
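A hypothetical usage (clf, X_test and the 'target' column name are illustrative assumptions, not part of the original answer):

y_prob = clf.predict_proba(X_test)                    # probabilities from an already-fitted classifier
y_pred = predict_with_cutoff('target', y_prob, df)    # df holds the original data with a binary 'target' column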

Feel free to criticize/modify. Hope it helps in rare cases when class balancing is out of the question and the dataset itself is highly imbalanced.

michalw