Decision Trees & Random Forests

“The clearest way into the Universe is through a forest wilderness” – John Muir

The success of Artificial Intelligence over the last 10 years is staggering – it seems a week does not go by without a new and exciting application being announced. To the general public and seasoned practitioners alike, the pace of progress is such that it’s nearly impossible to keep up.

This has imbued Artificial Intelligence and Machine Learning with a mysterious, almost magical quality. The technology is so powerful it seems there is nothing it can’t do, and yet we often have no idea how it does what it does.

This statement is perhaps a bit overly cryptic. Like a Rube Goldberg machine, we carefully arrange the pieces and let the algorithm explore the tightly constrained space we have designed; however, the exact mechanism, or “thought process”, by which it lands in its final configuration is unknown.

In most applications, exact knowledge of the “thought process” is not necessary – the proof is in the pudding, and we only care that the algorithm performs well at its designated task. However, this “black box” property of Artificial Intelligence can be troublesome for applications such as healthcare, where reliability and accountability are of the utmost importance – before we entrust something as precious as a person’s health and well-being to an algorithm, we would like to “check its work” and understand how it came to a particular decision.

Decision Trees

One of the simplest and yet most powerful machine learning algorithms in practice today is known as a decision tree. A decision tree is an algorithm used for solving classification tasks, i.e. when presented with a description of an object, it decides to which of a discrete number of classes that object belongs.

If you’ve ever had an eye exam or played 20 questions, you’ve experienced a decision tree first hand. A decision tree is simply a series of yes-no questions, where the next question asked depends on the answer to the previous one. After a sufficient number of questions have been asked, the algorithm can make a prediction as to which class an object belongs.

But how do we decide which questions to ask, and how many are sufficient to make a prediction? This is where the machine learning comes in. There are many variations, but the general idea is to build the tree one question at a time: the algorithm runs the training data through the candidate questions it could ask and keeps the one whose answers best separate the examples into their correct classes. It then repeats this process on each resulting group of examples, adding questions until the tree’s predictions match the data to a desired level of accuracy (or the tree reaches a preset size).

An example of a decision tree for classifying fruit based on various properties.1
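To make this concrete, here is a minimal sketch of training a decision tree with scikit-learn on a tiny, made-up fruit dataset; the weights, textures and labels below are invented purely for illustration.

```python
# A minimal sketch: training a decision tree to classify fruit.
# The features (weight in grams, skin texture) and labels are invented
# purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row is [weight_in_grams, texture], where texture is 0 = smooth, 1 = bumpy.
X = [[150, 0], [170, 0], [160, 0], [140, 1], [130, 1], [145, 1]]
y = ["apple", "apple", "apple", "orange", "orange", "orange"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Ask the tree to classify a new, unseen fruit.
print(tree.predict([[135, 1]]))  # expected to print ['orange']
```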

This is a very simple idea, but it is precisely this simplicity that gives the decision tree algorithm its strength: decision trees are transparent and easily interpretable, and thus provide much more than just a black-box prediction. They provide a detailed set of logical steps explaining how they came to a decision, giving insight into which features are the most salient for a given problem. This “white-box” quality of decision trees is especially valuable in highly regulated fields such as medicine and finance.
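To see this white-box quality for yourself, scikit-learn can print a fitted tree’s learned questions as plain text. The sketch below assumes the classic Iris dataset that ships with scikit-learn; the exact questions and thresholds will depend on the tree that gets fit.

```python
# Sketch: printing the series of yes-no questions a fitted tree has learned.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the tree as nested if/else rules over the feature names,
# i.e. the exact "questions" the tree asks and the class predicted at each leaf.
print(export_text(tree, feature_names=list(iris.feature_names)))
```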

Random Forests

Decision trees were designed to model the decision making process of a highly logical human being. However, just like human beings, they are not perfect, and a decision tree’s unique characteristics (the shape and depth of the tree) and particular experiences (its training history) can imbue it with biases. These biases are not necessarily a bad thing; they may simply mean that a particular tree is extremely good at identifying one type of object but not so good at telling others apart.

When trying to build algorithms that mimic human intelligence, we often look to our own behavior for inspiration on improving their performance. In the case of decision trees we need look no further than Who Wants to be a Millionaire?, and its audience participation features, for a solution. While an individual’s decision making process is highly sensitive to their unique characteristics, a group of individuals is far more robust.

Thus, we can democratize our algorithms – we can train a large collection, or ensemble, of decision trees, each with a different set of parameters and training history, and combine their individual predictions into a consensus prediction. Like humans, we expect each algorithm’s bias to be largely random and highly dependent on its unique history. Thus, by combining a sufficiently varied set of algorithms we expect these biases to cancel each other out, and, in fact, this is largely the case.
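As a rough sketch of this consensus idea (again assuming scikit-learn and its bundled Iris data purely for convenience), we can train a handful of trees, each on its own bootstrap sample of the data, and let them vote; a full random forest adds further randomization, but the voting mechanism is essentially the same.

```python
# Sketch: a hand-rolled ensemble of decision trees voting on each prediction.
# Uses the Iris data bundled with scikit-learn purely for convenience.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

trees = []
for i in range(25):
    # Give each tree its own "training history" via a bootstrap sample.
    idx = rng.integers(0, len(X), size=len(X))
    t = DecisionTreeClassifier(max_depth=3, random_state=i)
    t.fit(X[idx], y[idx])
    trees.append(t)

# Collect every tree's vote and take the majority for each example.
votes = np.array([t.predict(X) for t in trees])  # shape: (n_trees, n_samples)
consensus = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print("ensemble agreement with the labels:", (consensus == y).mean())
```

In practice, a library implementation such as scikit-learn’s RandomForestClassifier handles this bootstrapping and voting internally, and also randomizes which features each tree is allowed to ask about.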

These ensembles of decision trees are known as random forests and, like decision trees, despite their simplicity they are extremely powerful. In fact, random forests are among the most powerful and widely used classification algorithms available today, thanks to their improved accuracy and robustness. As the old adage goes: “None of us is as smart as all of us”.

Example of a random forest algorithm being used for spam detection.2

However, increased predictive power, like lunch, never comes for free. The price we pay for enhanced performance is a decrease in transparency – we no longer have a single, logical pathway showing precisely how we came to our decision; instead we have a (typically) large number of different pathways, each contributing only a vote towards the final decision. We can still gain insight, but now we must look at statistical properties such as average feature importance and the stability of decision pathways.
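For instance, scikit-learn’s random forest exposes exactly this kind of averaged feature importance. The snippet below (again assuming the bundled Iris data) fits a small forest and prints how much, on average, each feature contributed to its trees’ decisions.

```python
# Sketch: inspecting a random forest through its averaged feature importances.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(iris.data, iris.target)

# feature_importances_ averages each feature's contribution across all trees.
for name, score in zip(iris.feature_names, forest.feature_importances_):
    print(f"{name}: {score:.3f}")
```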

While it may be discouraging that we have to sacrifice the desirable white-box character of decision trees to gain an improvement in performance, it should hardly be surprising. As we continue to increase the complexity of questions we ask computers, we should expect a commensurate increase in complexity of their decision making processes.

Conversely, the human thought process itself is far from a white box: when asked, we often struggle to describe precisely how we arrived at a particular decision and defer simply to personal judgement and expertise. And though these algorithms are intelligent, their intelligence is still artificial (and will be for the foreseeable future), and that’s a good thing – unlike humans, computers only do what they are told, and they will do it the same way every time (unless we tell them not to).

To learn more about random forest classifiers and their uses in healthcare, check out our case study on readmission risk stratification.

Image Credits

[1] https://srirangatarun.wordpress.com/2016/11/02/machine-learning/

[2] https://adpearance.com/blog/automated-inbound-link-analysis