
Classification and Regression by randomForest

Andy Liaw and Matthew Wiener

Introduction

Recently there has been a lot of interest in ensemble learning methods that
generate many classifiers and aggregate their results. Two well-known methods
are boosting (see, e.g., Schapire et al., 1998) and bagging (Breiman, 1996) of
classification trees. In boosting, successive trees give extra weight to
points incorrectly predicted by earlier predictors. In the end, a weighted
vote is taken for prediction. In bagging, successive trees do not depend on
earlier trees; each is independently constructed using a bootstrap sample of
the data set. In the end, a simple majority vote is taken for prediction.

Breiman (2001) proposed random forests, which add an additional layer of
randomness to bagging. In addition to constructing each tree using a
different bootstrap sample of the data, random forests change how the
classification or regression trees are constructed. In standard trees, each
node is split using the best split among all variables. In a random forest,
each node is split using the best among a subset of predictors randomly
chosen at that node. This somewhat counterintuitive strategy turns out to
perform very well compared to many other classifiers, including discriminant
analysis, support vector machines and neural networks, and is robust against
overfitting (Breiman, 2001). In addition, it is very user-friendly in the
sense that it has only two parameters (the number of variables in the random
subset at each node and the number of trees in the forest), and is usually
not very sensitive to their values.

The randomForest package provides an R interface to the Fortran programs by
Breiman and Cutler (available at http://www.stat.berkeley.edu/users/breiman/).
This article provides a brief introduction to the usage and features of the R
functions.

The algorithm

The random forests algorithm (for both classification and regression) is as
follows:

1. Draw ntree bootstrap samples from the original data.

2. For each of the bootstrap samples, grow an unpruned classification or
   regression tree, with the following modification: at each node, rather
   than choosing the best split among all predictors, randomly sample mtry
   of the predictors and choose the best split from among those variables.
   (Bagging can be thought of as the special case of random forests obtained
   when mtry = p, the number of predictors.)

3. Predict new data by aggregating the predictions of the ntree trees (i.e.,
   majority votes for classification, average for regression).

An estimate of the error rate can be obtained, based on the training data, by
the following:

1. At each bootstrap iteration, predict the data not in the bootstrap sample
   (what Breiman calls out-of-bag, or OOB, data) using the tree grown with
   the bootstrap sample.

2. Aggregate the OOB predictions. (On the average, each data point would be
   out-of-bag around 36% of the time, so aggregate these predictions.)
   Calculate the error rate, and call it the OOB estimate of error rate.

Our experience has been that the OOB estimate of error rate is quite
accurate, given that enough trees have been grown (otherwise the OOB estimate
can bias upward; see Bylander (2002)).
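As an aside, the remark above that bagging is the special case mtry = p can
be tried out directly. The sketch below is our own illustration (using R's
built-in iris data, not a data set from this article): it grows a "bagged"
forest by setting mtry to the number of predictors, and print() then reports
the OOB estimate of the error rate just described.

> library(randomForest)
> set.seed(42)
> p <- ncol(iris) - 1               # number of predictors
> iris.bag <- randomForest(Species ~ ., data = iris,
+                          ntree = 500, mtry = p)
> print(iris.bag)                   # OOB error rate and confusion matrix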
Extra information from Random Forests

The randomForest package optionally produces two additional pieces of
information: a measure of the importance of the predictor variables, and a
measure of the internal structure of the data (the proximity of different
data points to one another).

Variable importance: This is a difficult concept to define in general,
because the importance of a variable may be due to its (possibly complex)
interaction with other variables. The random forest algorithm estimates the
importance of a variable by looking at how much prediction error increases
when (OOB) data for that variable are permuted while all others are left
unchanged. The necessary calculations are carried out tree by tree as the
random forest is constructed. (There are actually four different measures of
variable importance implemented in the classification code. The reader is
referred to Breiman (2002) for their definitions.)

Proximity measure: The (i, j) element of the proximity matrix produced by
randomForest is the fraction of trees in which elements i and j fall in the
same terminal node. The intuition is that similar observations should be in
the same terminal nodes more often than dissimilar ones. The proximity matrix
can be used to identify structure in the data (see Breiman, 2002) or for
unsupervised learning with random forests (see below).
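Both quantities are returned as components of the fitted object when the
corresponding options are switched on. A minimal sketch (again using iris
purely as a stand-in; the component names importance and proximity are the
ones used in the examples later in this article):

> set.seed(1)
> iris.rf <- randomForest(Species ~ ., data = iris,
+                         importance = TRUE, proximity = TRUE)
> iris.rf$importance           # one row per predictor, one column per measure
> iris.rf$proximity[1:5, 1:5]  # fraction of trees in which pairs share a terminal node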


Usage in R

The user interface to random forest is consistent with that of other
classification functions such as nnet() (in the nnet package) and svm() (in
the e1071 package). (We actually borrowed some of the interface code from
those two functions.) There is a formula interface, and predictors can be
specified as a matrix or data frame via the x argument, with responses as a
vector via the y argument. If the response is a factor, randomForest performs
classification; if the response is continuous (that is, not a factor),
randomForest performs regression. If the response is unspecified,
randomForest performs unsupervised learning (see below). Currently
randomForest does not handle ordinal categorical responses. Note that
categorical predictor variables must also be specified as factors (or else
they will be wrongly treated as continuous).

The randomForest function returns an object of class "randomForest". Details
on the components of such an object are provided in the online
documentation. Methods provided for the class include predict and print.
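A minimal sketch of the two interfaces just described (iris is again only a
stand-in data set, not one used in this article):

> x <- iris[, 1:4]                       # predictors as a data frame
> y <- iris$Species                      # a factor, so classification is performed
> rf.xy <- randomForest(x = x, y = y)    # matrix/data-frame interface
> rf.fm <- randomForest(Species ~ ., data = iris)  # equivalent formula interface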
A classification example

The Forensic Glass data set was used in Chapter 12 of MASS4 (Venables and
Ripley, 2002) to illustrate various classification algorithms. We use it here
to show how random forests work:

> library(randomForest)
> library(MASS)
> data(fgl)
> set.seed(17)
> fgl.rf <- randomForest(type ~ ., data = fgl,
+                        mtry = 2, importance = TRUE,
+                        do.trace = 100)
100: OOB error rate=20.56%
200: OOB error rate=21.03%
300: OOB error rate=19.63%
400: OOB error rate=19.63%
500: OOB error rate=19.16%
> print(fgl.rf)
Call:
 randomForest.formula(formula = type ~ ., data = fgl,
     mtry = 2, importance = TRUE, do.trace = 100)
               Type of random forest: classification
                     Number of trees: 500
No. of variables tried at each split: 2

        OOB estimate of error rate: 19.16%
Confusion matrix:
      WinF WinNF Veh Con Tabl Head class.error
WinF    63     6   1   0    0    0   0.1000000
WinNF    9    62   1   2    2    0   0.1842105
Veh      7     4   6   0    0    0   0.6470588
Con      0     2   0  10    0    1   0.2307692
Tabl     0     2   0   0    7    0   0.2222222
Head     1     2   0   1    0   25   0.1379310

We can compare random forests with support vector machines by doing ten
repetitions of 10-fold cross-validation, using the errorest function in the
ipred package:

> library(ipred)
> set.seed(131)
> error.RF <- numeric(10)
> for(i in 1:10) error.RF[i] <-
+     errorest(type ~ ., data = fgl,
+              model = randomForest, mtry = 2)$error
> summary(error.RF)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.1869  0.1974  0.2009  0.2009  0.2044  0.2103
> library(e1071)
> set.seed(563)
> error.SVM <- numeric(10)
> for (i in 1:10) error.SVM[i] <-
+     errorest(type ~ ., data = fgl,
+              model = svm, cost = 10, gamma = 1.5)$error
> summary(error.SVM)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.2430  0.2453  0.2523  0.2561  0.2664  0.2710

We see that the random forest compares quite favorably with SVM.

We have found that the variable importance measures produced by random
forests can sometimes be useful for model reduction (e.g., use the important
variables to build simpler, more readily interpretable models). Figure 1
shows the variable importance of the Forensic Glass data set, based on the
fgl.rf object created above. Roughly, it is created by

> par(mfrow = c(2, 2))
> for (i in 1:4)
+     plot(sort(fgl.rf$importance[,i], dec = TRUE),
+          type = "h", main = paste("Measure", i))

We can see that measure 1 most clearly differentiates the variables. If we
run random forest again dropping Na, K, and Fe from the model, the error rate
remains below 20%.
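That reduced fit can be reproduced along the following lines (a sketch only;
the exact error rate obtained will depend on the random seed):

> set.seed(17)
> fgl.rf2 <- randomForest(type ~ . - Na - K - Fe, data = fgl,
+                         mtry = 2, importance = TRUE)
> print(fgl.rf2)   # OOB error rate for the reduced model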


[Figure 1 about here: four panels, "Measure 1" through "Measure 4", each
showing the sorted importance values of the predictors RI, Na, Mg, Al, Si, K,
Ca, Ba and Fe.]

Figure 1: Variable importance for the Forensic Glass data.

The gain can be far more dramatic when there are more predictors. In a data
set with thousands of predictors, we used the variable importance measures to
select only dozens of predictors, and we were able to retain essentially the
same prediction accuracy. For a simulated data set with 1,000 variables that
we constructed, random forest with the default mtry was able to clearly
identify the only two informative variables and totally ignore the other 998
noise variables.

A regression example

We use the Boston Housing data (available in the MASS package) as an example
for regression by random forest. Note a few differences between
classification and regression random forests:

- The default mtry is p/3, as opposed to p^{1/2} for classification, where p
  is the number of predictors.

- The default nodesize is 5, as opposed to 1 for classification. (In the tree
  building algorithm, nodes with fewer than nodesize observations are not
  split.)

- There is only one measure of variable importance, instead of four.

> data(Boston)
> set.seed(1341)
> BH.rf <- randomForest(medv ~ ., Boston)
> print(BH.rf)
Call:
 randomForest.formula(formula = medv ~ ., data = Boston)
               Type of random forest: regression
                     Number of trees: 500
No. of variables tried at each split: 4

          Mean of squared residuals: 10.64615
                    % Var explained: 87.39

The mean of squared residuals is computed as

$$\mathrm{MSE}_{\mathrm{OOB}} = n^{-1} \sum_{i=1}^{n} \{ y_i - \hat{y}_i^{\mathrm{OOB}} \}^2 ,$$

where $\hat{y}_i^{\mathrm{OOB}}$ is the average of the OOB predictions for
the ith observation. The percent variance explained is computed as

$$1 - \frac{\mathrm{MSE}_{\mathrm{OOB}}}{\hat{\sigma}_y^2} ,$$

where $\hat{\sigma}_y^2$ is computed with n as divisor (rather than n - 1).
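These two quantities can be reproduced by hand from the OOB predictions. The
sketch below assumes that the predicted component of the fitted regression
object holds the OOB predictions; that assumption is not stated in this
article and should be checked against the package documentation.

> y <- Boston$medv
> mse.oob <- mean((y - BH.rf$predicted)^2)
> pct.var <- 100 * (1 - mse.oob / mean((y - mean(y))^2))  # variance with n as divisor
> c(mse.oob, pct.var)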
We can compare the result with the actual data, as well as fitted values from
a linear model, shown in Figure 2.

[Figure 2 about here: a scatter plot matrix comparing the actual response
("Actual") with the linear model ("LM") and random forest ("RF") fitted
values.]

Figure 2: Comparison of the predictions from random forest and a linear model
with the actual response of the Boston Housing data.

An unsupervised learning example

Because random forests are collections of classification or regression
trees, it is not immediately apparent how they can be used for unsupervised
learning. The trick is to call the data class 1 and construct a class 2
synthetic data set, then try to classify the combined data with a random
forest. There are two ways to simulate the class 2 data (a sketch of the
first follows the list):

1. The class 2 data are sampled from the product of the marginal
   distributions of the variables (by independent bootstrap of each variable
   separately).


2. The class 2 data are sampled uniformly from the hypercube containing the
   data (by sampling uniformly within the range of each variable).
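As promised above, here is a sketch of the first construction. It is our own
illustration of the idea; when the response is left unspecified, randomForest
builds this kind of two-class problem internally (the article does not say
which of the two sampling schemes the function uses), so the lines below are
only meant to make the first scheme concrete, with iris again as a stand-in.

> real <- iris[, 1:4]                                          # "class 1": the real data
> synth <- as.data.frame(lapply(real, sample, replace = TRUE)) # independent bootstrap per column
> combined <- rbind(real, synth)
> cls <- factor(rep(1:2, each = nrow(real)))                   # class labels for the two-class problem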
The idea is that real data points that are similar to one another will
frequently end up in the same terminal node of a tree, which is exactly what
is measured by the proximity matrix that can be returned using the
proximity=TRUE option of randomForest. Thus the proximity matrix can be taken
as a similarity measure, and clustering or multi-dimensional scaling using
this similarity can be used to divide the original data points into groups
for visual exploration.

We use the crabs data in MASS4 to demonstrate the unsupervised learning mode
of randomForest. We scaled the data as suggested on pages 308-309 of MASS4
(also found in lines 28-29 and 63-68 in $R_HOME/library/MASS/scripts/ch11.R),
resulting in the dslcrabs data frame below. Then run randomForest to get the
proximity matrix. We can then use cmdscale() (in package mva) to visualize
1 - proximity, as shown in Figure 3. As can be seen in the figure, the two
color forms are fairly well separated.

> library(mva)
> set.seed(131)
> crabs.prox <- randomForest(dslcrabs,
+     ntree = 1000, proximity = TRUE)$proximity
> crabs.mds <- cmdscale(1 - crabs.prox)
> plot(crabs.mds, col = c("blue",
+     "orange")[codes(crabs$sp)], pch = c(1,
+     16)[codes(crabs$sex)], xlab = "", ylab = "")

[Figure 3 about here: the two-dimensional scaling configuration, with the
four groups B/M, B/F, O/M and O/F distinguished by color and plotting
symbol.]

Figure 3: The metric multi-dimensional scaling representation for the
proximity matrix of the crabs data.

There is also an outscale option in randomForest which, if set to TRUE,
returns a measure of outlyingness for each observation in the data set. This
measure of outlyingness for the jth observation is calculated as the
reciprocal of the sum of squared proximities between that observation and
all other observations in the same class. The Example section of the help
page for randomForest shows the measure of outlyingness for the Iris data
(assuming they are unlabelled).
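The outlyingness measure just described can be written down directly in terms
of a proximity matrix. The sketch below is our own transcription of that
definition (prox is a proximity matrix such as crabs.prox above, and cls is a
vector of class labels; for strictly unlabelled data one would use a single
class):

> outlying <- function(prox, cls) {
+     sapply(seq_len(nrow(prox)), function(j) {
+         same <- which(cls == cls[j])
+         same <- same[same != j]        # all other observations in the same class
+         1 / sum(prox[j, same]^2)       # reciprocal of the sum of squared proximities
+     })
+ }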
Some notes for practical use

- The number of trees necessary for good performance grows with the number
  of predictors. The best way to determine how many trees are necessary is
  to compare predictions made by a forest to predictions made by a subset of
  a forest. When the subsets work as well as the full forest, you have
  enough trees.

- For selecting mtry, Prof. Breiman suggests trying the default, half of the
  default, and twice the default, and picking the best. In our experience,
  the results generally do not change dramatically. Even mtry = 1 can give
  very good performance for some data! If one has a very large number of
  variables but expects only very few to be important, using a larger mtry
  may give better performance.

- A lot of trees are necessary to get stable estimates of variable
  importance and proximity. However, our experience has been that even
  though the variable importance measures may vary from run to run, the
  ranking of the importances is quite stable.

- For classification problems where the class frequencies are extremely
  unbalanced (e.g., 99% class 1 and 1% class 2), it may be necessary to
  change the prediction rule to other than majority votes. For example, in a
  two-class problem with 99% class 1 and 1% class 2, one may want to predict
  the 1% of the observations with the largest class 2 probabilities as class
  2, and use the smallest of those probabilities as the threshold for
  prediction of test data (i.e., use the type = "prob" argument in the
  predict method and threshold the second column of the output; a sketch
  follows this list). We have routinely done this to get ROC curves. Prof.
  Breiman is working on a similar enhancement for his next version of random
  forest.

- By default, the entire forest is contained in the forest component of the
  randomForest object. It can take up quite a bit of memory for a large data
  set or a large number of trees. If prediction of test data is not needed,
  set the argument keep.forest=FALSE when running randomForest. This way,
  only one tree is kept in memory at any time, and thus lots of memory (and
  potentially execution time) can be saved.

- Since the algorithm falls into the embarrassingly parallel category, one
  can run several random forests on different machines and then aggregate
  the votes component to get the final result.
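The class-probability device described in the unbalanced-classes note can be
sketched as follows; rf.fit, test.x and cutoff are hypothetical placeholders
(a fitted two-class forest, test predictors, and a threshold chosen from the
training data as the note describes), not objects defined in this article.

> pr <- predict(rf.fit, newdata = test.x, type = "prob")
> head(pr)                           # one column of class probabilities per class
> pred.class2 <- pr[, 2] >= cutoff   # threshold the second column, as described above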


Acknowledgment

We would like to express our sincere gratitude to Prof. Breiman for making
the Fortran code available, and answering many of our questions. We also
thank the reviewer for very helpful comments, and pointing out the reference
Bylander (2002).

Bibliography

L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.

L. Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.

L. Breiman. Manual on setting up, using, and understanding random forests
v3.1, 2002. http://oz.berkeley.edu/users/breiman/Using_random_forests_V3.1.pdf.

T. Bylander. Estimating generalization error on two-class datasets using
out-of-bag estimates. Machine Learning, 48:287-297, 2002.

R. Schapire, Y. Freund, P. Bartlett, and W. Lee. Boosting the margin: A new
explanation for the effectiveness of voting methods. Annals of Statistics,
26(5):1651-1686, 1998.

W. N. Venables and B. D. Ripley. Modern Applied Statistics with S. Springer,
4th edition, 2002.

Andy Liaw
Matthew Wiener
Merck Research Laboratories
[email protected]
[email protected]

Some Strategies for Dealing with Genomic Data

by R. Gentleman

Introduction

Recent advances in molecular biology have enabled the exploration of many
different organisms at the molecular level. These technologies are being
employed in a very large number of experiments. In this article we consider
some of the problems that arise in the design and implementation of software
that associates biological meta-data with the experimentally obtained data.
The software is being developed as part of the Bioconductor project,
www.bioconductor.org.

Perhaps the most common experiment of this type examines a single species
and assays samples using a single common instrument. The samples are usually
a homogeneous collection of a particular type of cell. A specific example is
the study of mRNA expression in a sample of leukemia patients using the
Affymetrix U95A v2 chips (Affymetrix, 2001). In this case a single type of
human cell is being studied using a common instrument.

These experiments provide estimates for thousands (or tens of thousands) of
sample specific features. In the Affymetrix experiment described previously,
data are obtained on mRNA expression for approximately 10,000 genes (there
are 12,600 probe sets, but these correspond to roughly 10,000 genes). The
experimental data, while valuable and interesting, require additional
biological meta-data to be correctly interpreted. Considering once again the
example, we see that knowledge of chromosomal location, sequence,
participation in different pathways and so on provide substantial
interpretive benefits.

Meta-data is not a new concept for statisticians. However, the scale of the
meta-data in genomic experiments is. In many cases the meta-data are larger
and more complex than the experimental data! Hence, new tools and strategies
for dealing with meta-data are going to be needed. The design of software to
help manage and manipulate biological annotation and to relate it to the
experimental data will be of some importance. As part of the Bioconductor
project we have made some preliminary studies and implemented some software
designed to address some of these issues. Some aspects of our investigations
are considered here.
