Advanced Machine Learning with R

Tree-Based Classification

"The classifiers most likely to be the best are the random forest (RF) versions, the best of which (implemented in R and accessed via caret), achieves 94.1 percent of the maximum accuracy, overcoming 90 percent in 84.3 percent of the data sets."
- Fernández-Delgado et al. (2014)

This quote from Fernández-Delgado et al. in the Journal of Machine Learning Research is meant to demonstrate that the techniques in this chapter are quite powerful, particularly when used for classification problems. 

In previous chapters, we examined techniques used to predict class labels on three different datasets. Here, we'll apply tree-based methods to see whether we can improve our predictive power on the Santander data used in Chapter 3, Logistic Regression, and the data used in Chapter 4, Advanced Feature Selection in Linear Models.

The first item of discussion is the basic decision tree, which is simple both to build and to understand. However, a single decision tree isn't likely to perform as well as the other methods you've already learned, for example, Support Vector Machines (SVMs), or the ones we've yet to learn, such as neural networks. Therefore, we'll discuss creating multiple trees, sometimes hundreds, and combining their individual results into a single overall prediction.
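
Before moving on to ensembles, a minimal sketch of a single classification tree may help fix ideas. It uses the rpart package and its bundled kyphosis data purely for illustration; this is not the dataset or the tuning we'll work with later in the chapter:

library(rpart)

# Fit a classification tree on the kyphosis data that ships with rpart;
# method = "class" requests a classification (not regression) tree
tree_fit <- rpart(Kyphosis ~ Age + Number + Start,
                  data = kyphosis,
                  method = "class")

# Inspect the splits and plot the tree structure
print(tree_fit)
plot(tree_fit, uniform = TRUE, margin = 0.1)
text(tree_fit, use.n = TRUE)

# Predicted class labels on the training data versus the actual labels
pred <- predict(tree_fit, type = "class")
table(Predicted = pred, Actual = kyphosis$Kyphosis)

Each terminal node of the fitted tree assigns a class, so prediction amounts to routing an observation down the splits until it lands in a leaf.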

These methods, as the paper referenced at the beginning of this chapter states, perform as well as, or better than, any technique in this book. These methods are known as random forests and gradient boosted trees. Additionally, we'll see how to use the random forest method to assist in feature elimination/selection.
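
To preview the ensemble idea, here is a minimal sketch of a random forest and its variable importance measures, using the randomForest package on R's built-in iris data; the ntree value and the use of importance scores as a first pass at feature selection are illustrative assumptions, not the chapter's actual workflow:

library(randomForest)

set.seed(123)  # for reproducible bootstrap sampling of the trees

# Grow an ensemble of 500 trees; each tree votes and the
# majority class becomes the forest's prediction
rf_fit <- randomForest(Species ~ ., data = iris,
                       ntree = 500, importance = TRUE)

# Out-of-bag (OOB) error estimate and confusion matrix
print(rf_fit)

# Variable importance: higher MeanDecreaseAccuracy and
# MeanDecreaseGini suggest a more useful feature, which is
# the idea behind using forests for feature selection
importance(rf_fit)
varImpPlot(rf_fit)

Because each tree is grown on a bootstrap sample, the observations left out of a given tree provide the out-of-bag error estimate, giving an honest performance measure without a separate validation set.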

Following are the topics that we'll be covering in this chapter:

  • An overview of the techniques
  • Datasets and modeling