The answer to the post Including Interaction Terms in Random Forest shows how a random forest can learn a multiplicative interaction term without it being specified as a feature. On the other hand, I read in ESL that tree-based models cannot learn linear combinations of features that are not pre-specified. Can someone explain why a random forest is able to learn a multiplicative interaction but not an additive one?
Where in ESL can we find that passage? – Ben Reiniger Nov 03 '22 at 17:03
Some table in Section 10.7. – David Nov 03 '22 at 17:05
1 Answer
The two arguments rest on different definitions of "learn." A random forest can approximate a functional form containing additive and multiplicative interaction terms, but it cannot reproduce either exactly, and it will have a particularly hard time approximating the function outside the range of inputs seen during training.
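A minimal sketch of that extrapolation failure, assuming scikit-learn's `RandomForestRegressor` (the product target and the input ranges are my own illustration, not from the original posts): the forest fits $y = x_1 x_2$ well inside the training box, but since each leaf predicts a constant, predictions saturate outside it.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Train on a multiplicative interaction over [-1, 1]^2.
X_train = rng.uniform(-1, 1, size=(5000, 2))
y_train = X_train[:, 0] * X_train[:, 1]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Inside the training range the approximation is decent.
X_in = rng.uniform(-1, 1, size=(1000, 2))
print("in-range MSE: ", np.mean((rf.predict(X_in) - X_in[:, 0] * X_in[:, 1]) ** 2))

# Outside it, leaf predictions are frozen at training-range values,
# so the error grows with the distance from the training box.
X_out = rng.uniform(2, 3, size=(1000, 2))
print("out-of-range MSE:", np.mean((rf.predict(X_out) - X_out[:, 0] * X_out[:, 1]) ** 2))
```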
As a comment on the accepted answer in the linked question alludes, even the approximation on the training range comes with caveats. Here I play around with XOR-style interactions and, among other things, show that when other effects are being modeled, the trees may get distracted from faithfully representing the interaction.
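A hedged illustration of that "distraction" effect, again assuming scikit-learn and using the mixed target from the comment below, $y = x_1 + x_2 + \operatorname{sign}(x_3 x_4)$. A marginal split on $x_3$ alone does not reduce the variance of the sign term (each half still averages to zero), so greedy trees prefer the additive features; inspecting `feature_importances_` should show the XOR pair receiving comparatively little attention, though exact numbers will vary by run.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Two additive features plus an XOR-style pair.
X = rng.uniform(-1, 1, size=(5000, 4))
y = X[:, 0] + X[:, 1] + np.sign(X[:, 2] * X[:, 3])

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# The additive features tend to dominate the impurity-based importances,
# since no single split on x3 or x4 reduces variance from the sign term.
print(dict(zip(["x1", "x2", "x3", "x4"], rf.feature_importances_.round(3))))
```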
Ben Reiniger
No, I mean when the XOR is not the only part of the target; when the model is e.g. $x_1 + x_2 + \operatorname{sign}(x_3 \cdot x_4)$, the trees may spend more time splitting on $x_1$ and $x_2$, so that they only occasionally find the XOR structure in $x_3, x_4$. – Ben Reiniger Nov 03 '22 at 19:13