14

With decision trees, we can understand the tree structure and visualize how the model arrives at its decisions. In other words, decision trees have explainability: their output can be explained easily.
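For instance, the rules of a small fitted tree can be printed directly as if/else conditions (an illustrative scikit-learn example on the Iris data; the dataset and depth are just for demonstration):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# A shallow tree so the printed rules stay readable
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# export_text renders each split as a human-readable threshold rule
rules = export_text(tree, feature_names=list(iris.feature_names))
print(rules)
```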

Do we have explainability in Neural Networks like with Decision Trees?

smci
  • 331
  • 2
  • 12
navya
  • 141
  • 5
  • 1
    A recent model-agnostic framework is the LIME model. – Emre May 22 '17 at 16:56
  • In the field of object recognition/classification using neural networks, heatmaps are popular to visualize/explain a decision such as in http://www.heatmapping.org/. Tutorials and interactive demonstrations are available. – Nikolas Rieble May 23 '17 at 08:10
  • In fact according to this new paper https://arxiv.org/abs/2210.05189 NNs can be represented as a Decision Tree – charL Oct 18 '22 at 04:48

4 Answers

9

I disagree with the previous answer and with your suggestion for two reasons:

1) Decision trees are based on simple logical decisions which, combined together, can make more complex decisions. BUT if your input has 1000 dimensions and the learned features are highly non-linear, you get a really big and heavy decision tree that you won't be able to read or understand just by looking at the nodes.

2) Neural networks are similar in the sense that the function they learn is understandable only if they are very small. When they get big, you need other tricks to understand them. As @SmallChess suggested, you can read the article Visualizing and Understanding Convolutional Networks, which explains, for the particular case of convolutional neural networks, how you can read the weights to understand things like "it detected a car in this picture, mainly because of the wheels, not the rest of the components".

These visualizations have helped many researchers actually understand weaknesses in their neural architectures and improve their training algorithms.
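The simplest version of this idea is a saliency map: the gradient of the network's output with respect to each input tells you which inputs the decision is most sensitive to. A minimal sketch with NumPy, using a made-up fixed scoring function in place of a trained network (in practice you would use the model's own gradients):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10,))

def score(x):
    # Toy stand-in for a trained network's output
    return np.tanh(W @ x)

def saliency(x, eps=1e-5):
    # Central finite differences approximate d score / d x_i;
    # a large |gradient| marks an input the decision is sensitive to.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (score(x + e) - score(x - e)) / (2 * eps)
    return np.abs(grad)

x = rng.normal(size=10)
s = saliency(x)
```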

Robin
  • 1,337
  • 9
  • 19
  • :-) I found the paper itself harder to understand than the deep convolutional network itself. It's a very mathematical paper. – SmallChess May 22 '17 at 12:40
  • 1
    Sorry, I cited the wrong article :-) I just changed it, this one is more graphical, the idea of reversing the convnet is not really hard if you know how convnets work. In the same way, Google deep dream use back propagation to project a particular output in the input space. – Robin May 22 '17 at 12:47
  • There is a video where Matt Zeiler explains many of these ideas, called Deconvolution networks – Alex May 08 '18 at 13:30
7

No. Neural networks are generally difficult to understand: you trade interpretability for predictive power and model complexity. While it's possible to visualize the NN weights graphically, they don't tell you exactly how a decision is made. Good luck trying to understand a deep network.

There is a popular Python package (with an accompanying paper) that can model a NN locally with a simpler model. You may want to take a look.
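The idea behind such local-surrogate tools can be sketched without the package itself: perturb the instance you want to explain, label the perturbations with the black-box model, and fit an interpretable linear model to that local behaviour. Everything below (dataset, perturbation scale, surrogate choice) is an illustrative assumption, not the package's actual algorithm:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPClassifier

# A small black-box model to explain (stand-in for any NN)
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                          random_state=0).fit(X, y)

x0 = X[0]                                  # the instance to explain
rng = np.random.default_rng(0)
Z = x0 + 0.3 * rng.normal(size=(500, 5))   # perturbations around x0
p = black_box.predict_proba(Z)[:, 1]       # black-box outputs to imitate

# A simple, readable linear model fitted only to the local neighbourhood
surrogate = Ridge(alpha=1.0).fit(Z, p)
print(surrogate.coef_)                     # per-feature local importance
```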

Zephyr
  • 997
  • 4
  • 10
  • 20
SmallChess
  • 3,540
  • 2
  • 18
  • 30
0

In general, decision trees are easily understandable due to their structure. However, in most applications they become so big that you quickly lose the overview. Additionally, in most cases you would want to use a Random Forest as an ensemble method instead of a single decision tree, and then again there is not one single tree that you can explain.
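One aggregate view you do still get from a forest is impurity-based feature importances, averaged over all trees (an illustrative scikit-learn example; note these are a global summary, not a per-prediction explanation):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(iris.data, iris.target)

# feature_importances_ is normalized to sum to 1 across features
for name, imp in zip(iris.feature_names, forest.feature_importances_):
    print(f"{name}: {imp:.3f}")
```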

For neural networks, a new research direction called "Explainable AI" is emerging, which tries to make the reasons behind a neural network's prediction understandable. One method is so-called integrated gradients, which calculates the importance of each input feature for the prediction.
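A minimal sketch of the integrated-gradients computation, using a hypothetical tiny differentiable model in place of a trained network (real use applies the same averaging to the network's own gradients): the gradient is averaged along the straight path from a baseline to the input and scaled by the input difference, so the attributions sum to the change in model output.

```python
import numpy as np

# Hypothetical toy model: f(x) = tanh(w.x), with an analytic gradient
w = np.array([1.5, -2.0, 0.5])

def f(x):
    return np.tanh(w @ x)

def grad_f(x):
    return (1 - np.tanh(w @ x) ** 2) * w

def integrated_gradients(x, baseline, steps=100):
    # Average the gradient along the straight path baseline -> x
    # (midpoint rule), then scale by (x - baseline).
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

x = np.array([1.0, -0.5, 2.0])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline)
```

The "completeness" property is a quick sanity check: `attr.sum()` should match `f(x) - f(baseline)` up to numerical integration error.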

technik
  • 361
  • 1
  • 4
0

https://arxiv.org/abs/1704.02685 provides a NN-specific local explanation tool: DeepLIFT. It works by propagating the difference in activation between the instance you want to explain and a reference instance. Choosing a reference is a bit tricky, but overall the tool appears to be interpretable and scalable. It can be used on tabular data.
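To give a flavour of the difference-propagation idea, here is a toy sketch of DeepLIFT's "rescale" rule for a single dense+ReLU layer with made-up weights (the actual tool handles full networks and more rules; this is only the core arithmetic): each input's contribution is its weighted input difference scaled by the ratio of output change to pre-activation change, so per-unit contributions sum to the output difference.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))   # hypothetical layer weights: 6 inputs -> 4 units
b = rng.normal(size=4)

relu = lambda z: np.maximum(z, 0.0)

x = rng.normal(size=6)        # instance to explain
x_ref = np.zeros(6)           # reference instance

z, z_ref = W @ x + b, W @ x_ref + b
dz = z - z_ref                # difference at the pre-activation
dy = relu(z) - relu(z_ref)    # difference at the output

# Rescale rule: multiplier = (output change) / (pre-activation change),
# guarded against division by ~0 (where dy is also ~0)
mult = np.divide(dy, dz, out=np.zeros_like(dz), where=np.abs(dz) > 1e-9)

# Contribution of input i to unit j: W[j, i] * (x - x_ref)[i] * mult[j]
contrib = W * (x - x_ref) * mult[:, None]
```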

Lucas Morin
  • 2,196
  • 5
  • 21
  • 42