33

I'm confused about the random_state parameter, and I'm not sure why decision tree training needs some randomness. My thoughts: (1) is it related to random forests? (2) is it related to splitting the training/testing data set? If so, why not use the train/test split method directly (http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html)?

http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html

>>> from sklearn.datasets import load_iris
>>> from sklearn.cross_validation import cross_val_score
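>>> # note: sklearn.cross_validation was deprecated and later removed;
>>> # in current scikit-learn, import cross_val_score from sklearn.model_selection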
>>> from sklearn.tree import DecisionTreeClassifier
>>> clf = DecisionTreeClassifier(random_state=0)
>>> iris = load_iris()
>>> cross_val_score(clf, iris.data, iris.target, cv=10)
array([ 1.     ,  0.93...,  0.86...,  0.93...,  0.93...,
        0.93...,  0.93...,  1.     ,  0.93...,  1.      ])

regards, Lin


4 Answers

37

This is explained in the documentation:

The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.

So, basically, a sub-optimal greedy algorithm is repeated a number of times using random selections of features and samples (a technique similar to the one used in random forests). The random_state parameter allows controlling these random choices.
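As a quick illustration (a minimal sketch of my own, not from the scikit-learn docs, assuming a reasonably recent scikit-learn): setting max_features forces the splitter to consider a random subset of the features at each node, so different seeds can produce visibly different trees.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
for seed in (0, 1, 2):
    # max_features=2 makes the splitter sample 2 of iris's 4 features per node
    clf = DecisionTreeClassifier(max_features=2, random_state=seed)
    clf.fit(iris.data, iris.target)
    print(seed, clf.tree_.feature[0], clf.get_depth())  # root split feature, tree depth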

The interface documentation specifically states:

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

So, the randomized algorithm will be used in any case. Passing any value (whether a specific int, e.g., 0, or a RandomState instance) will not change that. The only rationale for passing an int value (0 or otherwise) is to make the outcome consistent across calls: if you call this with random_state=0 (or any other fixed value), then each and every time you'll get the same result.
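A quick way to see this (a minimal check of my own, not part of the original answer): fit the same classifier twice with the same seed and compare the predictions.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
a = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)
b = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)
# identical seed -> identical tree -> identical predictions
print(np.array_equal(a.predict(iris.data), b.predict(iris.data)))  # True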

Ami Tavory
  • Thanks Ami for the clarification. Then the question comes: which value of `random_state` should I select? I often see people select the value `0`; does it mean they do not want the approximate greedy algorithm, but rather the NP-complete perfect algorithm? – Lin Ma Aug 26 '16 at 17:33
  • @LinMa You're welcome. The specific value you select doesn't really matter much - it just makes the results consistent (= deterministic), and it won't cause the NP-complete algorithm to be used in any case. I updated the answer to show that. – Ami Tavory Aug 26 '16 at 18:58
  • Thanks Ami, vote up for the reply. I'm wondering, do you have a bit more detail (papers or tutorials) about how the greedy algorithm works? – Lin Ma Aug 26 '16 at 20:53
  • @LinMa It works pretty much as you'd expect (at each node, it finds the feature that "best" splits the samples routed to that node), but [here is a link](http://mas.cs.umass.edu/classes/cs683/lectures-2010/Lec23_Learning2-F2010-4up.pdf); see also the sketch after this comment thread. – Ami Tavory Aug 26 '16 at 21:04
  • Thanks Ami, I marked your reply as the answer before reading. :) BTW, I browsed the slides quickly; there are different methods mentioned. Which one(s) do you mean by the greedy one? – Lin Ma Aug 26 '16 at 21:39
  • @LinMa The first 13 slides are relevant. They outline the greedy algorithm and then fill out the specifics, rather than providing alternatives (the information-theoretic considerations explain what *is* the best decision at each point). All the best. – Ami Tavory Aug 26 '16 at 21:55
  • @LinMa :-) Thanks, you too! – Ami Tavory Aug 26 '16 at 23:05
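To make the greedy step from that comment thread concrete, here is a toy sketch (my own illustration, not from the linked slides or from scikit-learn's implementation): at a single node, try every feature and every candidate threshold, and keep the split with the lowest weighted Gini impurity.

import numpy as np

def gini(y):
    """Gini impurity of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Exhaustively search (feature, threshold) pairs at one node."""
    best = (None, None, np.inf)  # feature index, threshold, weighted impurity
    n = len(y)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:  # candidate thresholds for feature f
            left = X[:, f] <= t
            score = (left.sum() * gini(y[left])
                     + (~left).sum() * gini(y[~left])) / n
            if score < best[2]:
                best = (f, t, score)
    return best

X = np.array([[1.0, 5.0], [2.0, 4.0], [3.0, 1.0], [4.0, 2.0]])
y = np.array([0, 0, 1, 1])
print(best_split(X, y))  # (0, 2.0, 0.0): split on feature 0 at threshold 2.0

Note that in this toy data, feature 1 admits an equally good split; it is exactly this kind of tie that scikit-learn breaks using the randomized feature order controlled by random_state, as a later answer below discusses.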
1

Decision trees use a heuristic process and do not guarantee the same solution globally; there will be variations in the tree structure each time you build a model. Passing a specific seed to random_state ensures that the same result is generated each time you build the model.
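A minimal check of this (my own sketch, assuming scikit-learn >= 0.21 for export_text): dump the learned structure twice with the same seed and compare the text.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
t1 = export_text(DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target))
t2 = export_text(DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target))
print(t1 == t2)  # True: same seed, same tree structure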

0

The above-cited part of the documentation is misleading; the underlying problem is not the greediness of the algorithm. The CART algorithm is deterministic (see e.g. here) and finds a global minimum of the weighted Gini indices.

Repeated runs of the decision tree can give different results because it is sometimes possible to split the data using different features and still achieve the same Gini index. This is described here: https://github.com/scikit-learn/scikit-learn/issues/8443

Setting the random state simply ensures that the CART implementation works through the same randomized list of features when looking for the minimum.
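A hedged illustration of that tie (my own sketch, not from the linked issue): duplicate a feature so two columns yield exactly the same impurity decrease, then check which column the root split uses under different seeds. Whether any two given seeds actually pick different columns is not guaranteed; the point is only that the tie is broken via the random_state-driven feature order.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(42)
x = rng.rand(100, 1)
X = np.hstack([x, x])            # columns 0 and 1 are identical
y = (x[:, 0] > 0.5).astype(int)  # perfectly separable on either column

for seed in range(5):
    clf = DecisionTreeClassifier(random_state=seed).fit(X, y)
    print(seed, clf.tree_.feature[0])  # root feature index: 0 or 1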

tetraeder
-1

Many machine learning models allow some randomness in model training. Specifying a number for random_state ensures you get the same results on each run; this is considered good practice. You can use any number, and model quality won't depend meaningfully on exactly which value you choose.
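For example (a minimal sketch, assuming a current scikit-learn where the train/test split lives in sklearn.model_selection rather than the sklearn.cross_validation module from the question): fix random_state both where the data is split and where the model is trained, and rerunning the script reproduces the same score.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # identical on every run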