Bag of decision trees
TreeBagger bags an ensemble of decision trees for either classification or regression. Bagging stands for bootstrap aggregation. Every tree in the ensemble is grown on an independently drawn bootstrap replica of input data. Observations not included in this replica are "out of bag" for this tree.
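As a minimal sketch of the sampling scheme (illustrative variable names, base MATLAB functions only):

% One bootstrap replica of N training observations; the observations
% never drawn are "out of bag" for the corresponding tree.
N = 100;                        % number of training observations
inBag = randi(N,N,1);           % draw N row indices with replacement
oob = setdiff((1:N)',inBag);    % indices never drawn are out of bag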
TreeBagger relies on the ClassificationTree and RegressionTree functionality for growing individual trees. In particular, ClassificationTree and RegressionTree accept the number of features selected at random for each decision split as an optional input argument. That is, TreeBagger implements the random forest algorithm [1].
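For example, assuming a numeric predictor matrix X and response vector Y are in the workspace, a call like the following grows a bagged regression ensemble that samples three predictors at random for each split (the value 3 and the data are illustrative):

% Random predictor sampling at each decision split is what makes the
% bagged ensemble a random forest.
Mdl = TreeBagger(100,X,Y,'Method','regression', ...
    'NumPredictorsToSample',3);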
For regression problems, TreeBagger supports mean and quantile regression (that is, quantile regression forest [2]).
To predict mean responses or estimate the mean-squared error given data, pass a TreeBagger model and the data to predict or error, respectively. To perform similar operations for out-of-bag observations, use oobPredict or oobError.

To estimate quantiles of the response distribution or the quantile error given data, pass a TreeBagger model and the data to quantilePredict or quantileError, respectively. To perform similar operations for out-of-bag observations, use oobQuantilePredict or oobQuantileError.
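As a sketch of these pairings, assume a regression ensemble Mdl trained with 'OOBPrediction','on', plus hypothetical holdout data Xnew and Ynew:

yMean = predict(Mdl,Xnew);        % conditional mean responses
errs = error(Mdl,Xnew,Ynew);      % MSE as a function of the number of trees
yQrt = quantilePredict(Mdl,Xnew,'Quantile',[0.25 0.5 0.75]); % conditional quartiles
yOOB = oobPredict(Mdl);           % mean predictions for out-of-bag observations
yMed = oobQuantilePredict(Mdl,'Quantile',0.5); % out-of-bag conditional medians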
|TreeBagger||Create bag of decision trees|
|append||Append new trees to ensemble|
|compact||Compact ensemble of decision trees|
|error||Error (misclassification probability or MSE)|
|fillprox||Proximity matrix for training data|
|growTrees||Train additional trees and add to ensemble|
|mdsprox||Multidimensional scaling of proximity matrix|
|meanMargin||Mean classification margin|
|oobMeanMargin||Out-of-bag mean margins|
|oobPredict||Ensemble predictions for out-of-bag observations|
|oobQuantileError||Out-of-bag quantile loss of bag of regression trees|
|oobQuantilePredict||Quantile predictions for out-of-bag observations from bag of regression trees|
|partialDependence||Compute partial dependence|
|plotPartialDependence||Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots|
|predict||Predict responses using ensemble of bagged decision trees|
|quantileError||Quantile loss using bag of regression trees|
|quantilePredict||Predict response quantile using bag of regression trees|
ClassNames: A cell array containing the class names for the response variable. This property is empty for regression trees.

ComputeOOBPrediction: A logical flag specifying whether out-of-bag predictions for training observations should be computed. The default is false.

If this flag is true, the OOBIndices and OOBInstanceWeight properties are available.

If this flag is true, you can call the out-of-bag functions oobError, oobMargin, and oobMeanMargin.

ComputeOOBPredictorImportance: A logical flag specifying whether out-of-bag estimates of variable importance should be computed. The default is false.

If this flag is true, then ComputeOOBPrediction is true as well, and the OOBPermutedPredictorDeltaError, OOBPermutedPredictorDeltaMeanMargin, and OOBPermutedPredictorCountRaiseMargin properties are available.

Cost: Square matrix, where Cost(i,j) is the cost of classifying a point into class j if its true class is i (the rows correspond to the true class and the columns correspond to the predicted class). This property is read-only, and it is empty ([]) for ensembles of regression trees.

DefaultYfit: Default value returned by predict and oobPredict when no prediction is possible (for example, when oobPredict predicts a response for an observation that is in-bag for all trees).

DeltaCriterionDecisionSplit: A numeric array of size 1-by-Nvars of changes in the split criterion, summed over splits on each variable and averaged across the entire ensemble of grown trees.

InBagFraction: Fraction of observations that are randomly selected with replacement for each bootstrap replica. The size of each replica is Nobs×InBagFraction, where Nobs is the number of observations in the training set. The default value is 1.

MergeLeaves: A logical flag specifying whether decision tree leaves with the same parent are merged for splits that do not decrease the total risk. The default value is false.

Method: Method used by trees. The possible values are 'classification' for classification trees and 'regression' for regression trees.

MinLeafSize: Minimum number of observations per tree leaf. By default, MinLeafSize is 1 for classification and 5 for regression.

NumTrees: Scalar value equal to the number of decision trees in the ensemble.

NumPredictorSplit: A numeric array of size 1-by-Nvars, where every element gives the number of splits on this predictor summed over all trees.

NumPredictorsToSample: Number of predictor or feature variables to select at random for each decision split. By default, NumPredictorsToSample is equal to the square root of the number of variables for classification and one third of the number of variables for regression.

OOBIndices: Logical array of size Nobs-by-NumTrees, where Nobs is the number of observations in the training data and NumTrees is the number of trees in the ensemble. A true value for the (i,j) element indicates that observation i is out of bag for tree j (see the sketch after this list).

OOBInstanceWeight: Numeric array of size Nobs-by-1 containing the number of trees used for computing the out-of-bag response for each observation. Nobs is the number of observations in the training data used to create the ensemble.
OOBPermutedPredictorCountRaiseMargin: A numeric array of size 1-by-Nvars containing a measure of variable importance for each predictor variable (feature). For any variable, the measure is the difference between the number of raised margins and the number of lowered margins if the values of that variable are permuted across the out-of-bag observations. This measure is computed for every tree, then averaged over the entire ensemble and divided by the standard deviation over the entire ensemble. This property is empty for regression trees.

OOBPermutedPredictorDeltaError: A numeric array of size 1-by-Nvars containing a measure of importance for each predictor variable (feature). For any variable, the measure is the increase in prediction error if the values of that variable are permuted across the out-of-bag observations. This measure is computed for every tree, then averaged over the entire ensemble and divided by the standard deviation over the entire ensemble.

OOBPermutedPredictorDeltaMeanMargin: A numeric array of size 1-by-Nvars containing a measure of importance for each predictor variable (feature). For any variable, the measure is the decrease in the classification margin if the values of that variable are permuted across the out-of-bag observations. This measure is computed for every tree, then averaged over the entire ensemble and divided by the standard deviation over the entire ensemble. This property is empty for regression trees.

OutlierMeasure: A numeric array of size Nobs-by-1, where Nobs is the number of observations in the training data, containing outlier measures for each observation.

Prior: Numeric vector of prior probabilities for each class. The order of the elements of Prior corresponds to the order of the classes in ClassNames. This property is read-only, and it is empty ([]) for ensembles of regression trees.

Proximity: A numeric matrix of size Nobs-by-Nobs, where Nobs is the number of observations in the training data, containing measures of the proximity between observations. For any two observations, their proximity is defined as the fraction of trees for which these observations land on the same leaf. This is a symmetric matrix with 1s on the diagonal and off-diagonal elements ranging from 0 to 1 (see the sketch after this list).
SampleWithReplacement: A logical flag specifying whether data are sampled for each decision tree with replacement. This property is true if TreeBagger samples data with replacement and false otherwise. The default value is true.

Trees: A cell array of size NumTrees-by-1 containing the trees in the ensemble.

SurrogateAssociation: A matrix of size Nvars-by-Nvars with predictive measures of variable association, averaged across the entire ensemble of grown trees. If you grew the ensemble setting 'Surrogate' to 'on', this matrix, for each tree, is filled with predictive measures of association averaged over the surrogate splits. If you grew the ensemble setting 'Surrogate' to 'off' (default), SurrogateAssociation is diagonal.

PredictorNames: A cell array containing the names of the predictor variables (features).

W: Numeric vector of weights of length Nobs, where Nobs is the number of observations (rows) in the training data.

X: A table or numeric matrix of size Nobs-by-Nvars, where Nobs is the number of observations (rows) and Nvars is the number of variables (columns) in the training data. If you train the ensemble using a table of predictor values, then X is a table.

Y: A size Nobs array of response data. Elements of Y correspond to the rows of X. For classification, Y is the set of true class labels; for regression, Y is a numeric vector.
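As a short sketch of working with these properties, assume an ensemble Mdl trained with 'OOBPrediction','on'; the outlier cutoff of 10 is an arbitrary illustration, not a documented default:

% Trees for which each observation is out of bag; this count should agree
% with Mdl.OOBInstanceWeight.
oobCount = sum(Mdl.OOBIndices,2);

% Fill the proximity matrix (and related outlier measures), then flag
% observations that are unusually far from all others.
Mdl = fillprox(Mdl);
P = Mdl.Proximity;                       % Nobs-by-Nobs, symmetric, 1s on diagonal
idxOut = find(Mdl.OutlierMeasure > 10);  % arbitrary cutoff for illustration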
Train Ensemble of Bagged Classification Trees
Load Fisher's iris data set.

load fisheriris
Train an ensemble of bagged classification trees using the entire data set. Specify 50 weak learners. Store which observations are out of bag for each tree.
rng(1); % For reproducibility
Mdl = TreeBagger(50,meas,species,'OOBPrediction','On',...
    'Method','classification')
Mdl = 
  TreeBagger
Ensemble with 50 bagged decision trees:
               Training X: [150x4]
               Training Y: [150x1]
                   Method: classification
            NumPredictors: 4
    NumPredictorsToSample: 2
              MinLeafSize: 1
            InBagFraction: 1
    SampleWithReplacement: 1
     ComputeOOBPrediction: 1
    ComputeOOBPredictorImportance: 0
                Proximity: []
               ClassNames: 'setosa'    'versicolor'    'virginica'

  Properties, Methods
Mdl is a TreeBagger ensemble. Mdl.Trees stores a 50-by-1 cell vector of the trained classification trees (CompactClassificationTree model objects) that compose the ensemble.
Plot a graph of the first trained classification tree.
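For example, you can display an individual tree with the view function of the compact tree objects:

view(Mdl.Trees{1},'Mode','graph')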
By default, TreeBagger grows deep trees.
Mdl.OOBIndices stores the out-of-bag indices as a matrix of logical values.
Plot the out-of-bag error over the number of grown classification trees.
figure;
oobErrorBaggedEnsemble = oobError(Mdl);
plot(oobErrorBaggedEnsemble)
xlabel 'Number of grown trees';
ylabel 'Out-of-bag classification error';
The out-of-bag error decreases with the number of grown trees.
To label out-of-bag observations, pass Mdl to oobPredict.
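For example:

% Class labels predicted for each training observation using only the
% trees for which that observation is out of bag.
oobLabels = oobPredict(Mdl);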
Train Ensemble of Bagged Regression Trees
Load the carsmall data set. Consider a model that predicts the fuel economy of a car given its engine displacement.

load carsmall
Train an ensemble of bagged regression trees using the entire data set. Specify 100 weak learners.
rng(1); % For reproducibility
Mdl = TreeBagger(100,Displacement,MPG,'Method','regression');
Mdl is a TreeBagger ensemble.
Using a trained bag of regression trees, you can estimate conditional mean responses or perform quantile regression to predict conditional quantiles.
For ten equally spaced engine displacements between the minimum and maximum in-sample displacement, predict conditional mean responses and conditional quartiles.
predX = linspace(min(Displacement),max(Displacement),10)';
mpgMean = predict(Mdl,predX);
mpgQuartiles = quantilePredict(Mdl,predX,'Quantile',[0.25,0.5,0.75]);
Plot the observations, estimated mean responses, and quartiles in the same figure.
figure;
plot(Displacement,MPG,'o');
hold on
plot(predX,mpgMean);
plot(predX,mpgQuartiles);
ylabel('Fuel economy');
xlabel('Engine displacement');
legend('Data','Mean Response','First quartile','Median','Third quartile');
Unbiased Predictor Importance Estimates
Load the carsmall data set. Consider a model that predicts the mean fuel economy of a car given its acceleration, number of cylinders, engine displacement, horsepower, manufacturer, model year, and weight. Consider Cylinders, Mfg, and Model_Year as categorical variables.
load carsmall
Cylinders = categorical(Cylinders);
Mfg = categorical(cellstr(Mfg));
Model_Year = categorical(Model_Year);
X = table(Acceleration,Cylinders,Displacement,Horsepower,Mfg,...
    Model_Year,Weight,MPG);
rng('default'); % For reproducibility
Display the number of categories represented in the categorical variables.
numCylinders = numel(categories(Cylinders))
numCylinders = 3
numMfg = numel(categories(Mfg))
numMfg = 28
numModelYear = numel(categories(Model_Year))
numModelYear = 3
Because there are only 3 categories each in Cylinders and Model_Year, the standard CART predictor-splitting algorithm prefers splitting a continuous predictor over these two variables.
Train a random forest of 200 regression trees using the entire data set. To grow unbiased trees, specify usage of the curvature test for splitting predictors. Because there are missing values in the data, specify usage of surrogate splits. Store the out-of-bag information for predictor importance estimation.
Mdl = TreeBagger(200,X,'MPG','Method','regression','Surrogate','on',...
    'PredictorSelection','curvature','OOBPredictorImportance','on');
TreeBagger stores predictor importance estimates in the property OOBPermutedPredictorDeltaError. Compare the estimates using a bar graph.
imp = Mdl.OOBPermutedPredictorDeltaError;
figure;
bar(imp);
title('Curvature Test');
ylabel('Predictor importance estimates');
xlabel('Predictors');
h = gca;
h.XTickLabel = Mdl.PredictorNames;
h.XTickLabelRotation = 45;
h.TickLabelInterpreter = 'none';
In this case, Model_Year is the most important predictor, followed by Cylinders. Compare the estimates in imp to predictor importance estimates computed from a random forest that grows trees using standard CART.
MdlCART = TreeBagger(200,X,'MPG','Method','regression','Surrogate','on',...
    'OOBPredictorImportance','on');
impCART = MdlCART.OOBPermutedPredictorDeltaError;
figure;
bar(impCART);
title('Standard CART');
ylabel('Predictor importance estimates');
xlabel('Predictors');
h = gca;
h.XTickLabel = Mdl.PredictorNames;
h.XTickLabelRotation = 45;
h.TickLabelInterpreter = 'none';
In this case, Weight, a continuous predictor, is the most important. The next two most important predictors are Model_Year, followed closely by Horsepower, which is a continuous predictor.
Copy semantics: Value. To learn how this affects your use of the class, see Comparing Handle and Value Classes in the MATLAB® Object-Oriented Programming documentation.
For a TreeBagger model object Mdl, the Trees property stores a cell vector of Mdl.NumTrees CompactClassificationTree or CompactRegressionTree model objects. For a textual or graphical display of tree t in the cell vector, enter view(Mdl.Trees{t}).
Statistics and Machine Learning Toolbox™ offers three objects for bagging and random forests: TreeBagger, created by the TreeBagger function; ClassificationBaggedEnsemble, created by the fitcensemble function; and RegressionBaggedEnsemble, created by the fitrensemble function.
For details about the differences between TreeBagger and the bagged ensembles (ClassificationBaggedEnsemble and RegressionBaggedEnsemble), see Comparison of TreeBagger and Bagged Ensembles.
[1] Breiman, L. "Random Forests." Machine Learning, Vol. 45, 2001, pp. 5–32.

[2] Meinshausen, N. "Quantile Regression Forests." Journal of Machine Learning Research, Vol. 7, 2006, pp. 983–999.
Cost property stores the user-specified cost matrix
Behavior changed in R2022a