It seems like you understand that you're able to have n levels, as opposed to n - 1, because, unlike in linear regression, you don't need to worry about perfect collinearity.
(I'm coming at this from an R perspective, but I assume it's the same in Python.) That depends on a couple of things: 1) which package you're using, and 2) how many factor levels you have.
1) If you are using R's randomForest package and you have fewer than 33 factor levels, you can go ahead and leave them in one feature if you want. That's because in R's random forest implementation, the algorithm checks which factor levels should go on one side of a split and which on the other (e.g., 5 of your levels might be grouped together on the left side, and 7 might be grouped together on the right). If you split the categorical feature out into n dummies, the algorithm would not have this option at its disposal, since each dummy split can only separate one level from all the rest. A sketch of the two encodings is below.
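Here's a minimal sketch of the two approaches in R (the data and column names are made up for illustration):

```r
library(randomForest)

set.seed(42)
df <- data.frame(
  color = factor(sample(c("red", "green", "blue", "yellow"), 200, replace = TRUE)),
  x     = rnorm(200)
)
df$y <- factor(ifelse(df$color %in% c("red", "blue"), "A", "B"))

# Option 1: keep the factor as one feature. randomForest can put several
# levels on each side of a single split (e.g., {red, blue} vs. {green, yellow}).
rf_factor <- randomForest(y ~ color + x, data = df)

# Option 2: expand into n dummies (dropping the intercept keeps all n columns
# rather than n - 1). Each split can now only peel off one level at a time.
dummies  <- model.matrix(~ color - 1, data = df)
rf_dummy <- randomForest(x = cbind(dummies, x = df$x), y = df$y)
```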
Obviously, if the particular package you're using can't handle categorical features at all, then you'd just need to create n dummy variables.
2) As I alluded to above, R's random forest implementation can only handle factors with up to 32 levels. If you have more than that, you either need to split your factors into smaller subsets or create a dummy variable for each level, as sketched below.
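A hedged sketch of the dummy-variable workaround (the 32-level cap is the one described above; newer randomForest releases raised the limit to 53, but the idea is the same, and the names here are invented):

```r
set.seed(1)
big <- data.frame(
  city = factor(paste0("city_", sample(1:40, 500, replace = TRUE))),
  y    = factor(sample(c("A", "B"), 500, replace = TRUE))
)
nlevels(big$city)  # 40 levels: too many to pass as a single factor

# randomForest(y ~ city, data = big)  # would error on the level limit

X  <- model.matrix(~ city - 1, data = big)  # one 0/1 column per level
rf <- randomForest(x = X, y = big$y)
```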
Comments:

… randomForest are coded automatically, I should go with n dummies because collinearity is not a problem for RF? – Sergey Bushmanov Oct 20 '15 at 19:47

sklearn… Practically speaking, is there any evidence (practical experience, research, etc.) that "dummified" variables will perform worse than "grouped" categorical variables [in R]? – Sergey Bushmanov Oct 20 '15 at 19:58