Fit ensemble of learners for classification and regression
To train classification or regression ensembles, use fitcensemble or fitrensemble, respectively, instead.
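For new code, the recommended functions accept similar arguments. The following is a minimal sketch, assuming a predictor matrix X with class labels Y, and a table Tbl with the numeric response MPG (variable names here are illustrative):

```matlab
% Classification: boosted trees via the recommended fitcensemble
MdlC = fitcensemble(X,Y,'Method','AdaBoostM1','NumLearningCycles',100);

% Regression: boosted trees via the recommended fitrensemble
MdlR = fitrensemble(Tbl,'MPG','Method','LSBoost','NumLearningCycles',100);
```

Unlike fitensemble, these functions take the aggregation method and the number of cycles as name-value pair arguments.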
Mdl = fitensemble(Tbl,ResponseVarName,Method,NLearn,Learners) returns a trained ensemble model object that contains the results of fitting an ensemble of NLearn classification or regression learners (Learners) to all variables in the table Tbl. ResponseVarName is the name of the response variable in Tbl. Method is the ensemble-aggregation method.
Mdl = fitensemble(___,Name,Value) trains an ensemble using additional options specified by one or more Name,Value pair arguments and any of the previous syntaxes. For example, you can specify the class order, implement 10-fold cross-validation, or specify the learning rate.
Estimate the resubstitution loss of a trained, boosted classification ensemble of decision trees.
Load the ionosphere data set.
load ionosphere;
Train a decision tree ensemble using AdaBoost, 100 learning cycles, and the entire data set.
ClassTreeEns = fitensemble(X,Y,'AdaBoostM1',100,'Tree');
ClassTreeEns is a trained ClassificationEnsemble ensemble classifier.
Determine the cumulative resubstitution losses (i.e., the cumulative misclassification error of the labels in the training data).
rsLoss = resubLoss(ClassTreeEns,'Mode','Cumulative');
rsLoss is a 100-by-1 vector, where element k contains the resubstitution loss after the first k learning cycles.
Plot the cumulative resubstitution loss over the number of learning cycles.
plot(rsLoss);
xlabel('Number of Learning Cycles');
ylabel('Resubstitution Loss');
In general, as the number of decision trees in the trained classification ensemble increases, the resubstitution loss decreases.
A decrease in resubstitution loss might indicate that the software trained the ensemble sensibly. However, you cannot infer the predictive power of the ensemble from this decrease. To measure the predictive power of an ensemble, estimate the generalization error by:
Randomly partitioning the data into training and cross-validation sets. Do this by specifying 'Holdout',holdoutProportion when you train the ensemble using fitensemble.
Passing the trained ensemble to kfoldLoss, which estimates the generalization error.
Use a trained, boosted regression tree ensemble to predict the fuel economy of a car. Choose the number of cylinders, volume displaced by the cylinders, horsepower, and weight as predictors. Then, train an ensemble using fewer predictors and compare its in-sample predictive accuracy against the first ensemble.
Load the carsmall data set. Store the training data in a table.
load carsmall
Tbl = table(Cylinders,Displacement,Horsepower,Weight,MPG);
Specify a regression tree template that uses surrogate splits to improve predictive accuracy in the presence of NaN values.
t = templateTree('Surrogate','On');
Train the regression tree ensemble using LSBoost and 100 learning cycles.
Mdl1 = fitensemble(Tbl,'MPG','LSBoost',100,t);
Mdl1 is a trained RegressionEnsemble regression ensemble. Because MPG is a variable in the MATLAB® Workspace, you can obtain the same result by entering
Mdl1 = fitensemble(Tbl,MPG,'LSBoost',100,t);
Use the trained regression ensemble to predict the fuel economy for a four-cylinder car with a 200-cubic-inch displacement, 150 horsepower, and a weight of 3000 lbs.
predMPG = predict(Mdl1,[4 200 150 3000])
predMPG = 22.8462
The average fuel economy of a car with these specifications is 21.78 mpg.
Train a new ensemble using all predictors in Tbl except Displacement.
formula = 'MPG ~ Cylinders + Horsepower + Weight';
Mdl2 = fitensemble(Tbl,formula,'LSBoost',100,t);
Compare the resubstitution MSEs between Mdl1 and Mdl2.
mse1 = resubLoss(Mdl1)
mse2 = resubLoss(Mdl2)
mse1 = 6.4721
mse2 = 7.8599
The in-sample MSE for the ensemble that trains on all predictors is lower.
Estimate the generalization error of a trained, boosted classification ensemble of decision trees.
Load the ionosphere data set.
load ionosphere;
Train a decision tree ensemble using AdaBoostM1, 100 learning cycles, and half of the data chosen randomly. The software validates the algorithm using the remaining half.
rng(2); % For reproducibility
ClassTreeEns = fitensemble(X,Y,'AdaBoostM1',100,'Tree',...
    'Holdout',0.5);
ClassTreeEns is a trained ClassificationEnsemble ensemble classifier.
Determine the cumulative generalization error (i.e., the cumulative misclassification error of the labels in the validation data).
genError = kfoldLoss(ClassTreeEns,'Mode','Cumulative');
genError is a 100-by-1 vector, where element k contains the generalization error after the first k learning cycles.
Plot the generalization error over the number of learning cycles.
plot(genError);
xlabel('Number of Learning Cycles');
ylabel('Generalization Error');
The cumulative generalization error decreases to approximately 7% when 25 weak learners compose the ensemble classifier.
You can control the depth of the trees in an ensemble of decision trees. You can also control the tree depth in an ECOC model containing decision tree binary learners using the MaxNumSplits, MinLeafSize, or MinParentSize name-value pair arguments.
When bagging decision trees, fitensemble grows deep decision trees by default. You can grow shallower trees to reduce model complexity or computation time.
When boosting decision trees, fitensemble grows stumps (a tree with one split) by default. You can grow deeper trees for better accuracy.
Load the carsmall data set. Specify the variables Acceleration, Displacement, Horsepower, and Weight as predictors, and MPG as the response.
load carsmall
X = [Acceleration Displacement Horsepower Weight];
Y = MPG;
The default values of the tree depth controllers for boosting regression trees are:
1 for MaxNumSplits. This option grows stumps.
5 for MinLeafSize
10 for MinParentSize
To search for the optimal number of splits:
Train a set of ensembles. Exponentially increase the maximum number of splits for subsequent ensembles from stump to at most n – 1 splits. Also, decrease the learning rate for each ensemble from 1 to 0.1.
Cross-validate the ensembles.
Estimate the cross-validated mean-squared error (MSE) for each ensemble.
Compare the cross-validated MSEs. The ensemble with the lowest one performs the best, and indicates the optimal maximum number of splits, number of trees, and learning rate for the data set.
Grow and cross-validate a deep regression tree and a stump. Specify to use surrogate splits because the data contain missing values. These serve as benchmarks.
MdlDeep = fitrtree(X,Y,'CrossVal','on','MergeLeaves','off',...
    'MinParentSize',1,'Surrogate','on');
MdlStump = fitrtree(X,Y,'MaxNumSplits',1,'CrossVal','on','Surrogate','on');
Train the boosting ensembles using 150 regression trees. Cross-validate each ensemble using 5-fold cross-validation. Vary the maximum number of splits using the values in the sequence 2^0, 2^1, ..., 2^m, where m is such that 2^m is no greater than n – 1. For each variant, adjust the learning rate to each value in the set {0.1, 0.25, 0.5, 1}.
n = size(X,1);
m = floor(log2(n - 1));
lr = [0.1 0.25 0.5 1];
maxNumSplits = 2.^(0:m);
numTrees = 150;
Mdl = cell(numel(maxNumSplits),numel(lr));
rng(1); % For reproducibility
for k = 1:numel(lr)
    for j = 1:numel(maxNumSplits)
        t = templateTree('MaxNumSplits',maxNumSplits(j),'Surrogate','on');
        Mdl{j,k} = fitensemble(X,Y,'LSBoost',numTrees,t,...
            'Type','regression','KFold',5,'LearnRate',lr(k));
    end
end
Compute the cross-validated MSE for each ensemble.
kflAll = @(x)kfoldLoss(x,'Mode','cumulative');
errorCell = cellfun(kflAll,Mdl,'Uniform',false);
error = reshape(cell2mat(errorCell),[numTrees numel(maxNumSplits) numel(lr)]);
errorDeep = kfoldLoss(MdlDeep);
errorStump = kfoldLoss(MdlStump);
Plot how the cross-validated MSE behaves as the number of trees in the ensemble increases for a few of the ensembles, the deep tree, and the stump. Plot the curves with respect to learning rate in the same plot, and plot separate plots for varying tree complexities. Choose a subset of tree complexity levels.
mnsPlot = [1 round(numel(maxNumSplits)/2) numel(maxNumSplits)];
figure;
for k = 1:3
    subplot(2,2,k);
    plot(squeeze(error(:,mnsPlot(k),:)),'LineWidth',2);
    axis tight;
    hold on;
    h = gca;
    plot(h.XLim,[errorDeep errorDeep],'-.b','LineWidth',2);
    plot(h.XLim,[errorStump errorStump],'-.r','LineWidth',2);
    plot(h.XLim,min(min(error(:,mnsPlot(k),:))).*[1 1],'--k');
    h.YLim = [10 50];
    xlabel('Number of trees');
    ylabel('Cross-validated MSE');
    title(sprintf('MaxNumSplits = %0.3g', maxNumSplits(mnsPlot(k))));
    hold off;
end
hL = legend([cellstr(num2str(lr','Learning Rate = %0.2f'));...
    'Deep Tree';'Stump';'Min. MSE']);
hL.Position(1) = 0.6;
Each curve contains a minimum cross-validated MSE occurring at the optimal number of trees in the ensemble.
Identify the maximum number of splits, number of trees, and learning rate that yield the lowest MSE overall.
[minErr,minErrIdxLin] = min(error(:));
[idxNumTrees,idxMNS,idxLR] = ind2sub(size(error),minErrIdxLin);
fprintf('\nMin. MSE = %0.5f',minErr)
fprintf('\nOptimal Parameter Values:\nNum. Trees = %d',idxNumTrees);
fprintf('\nMaxNumSplits = %d\nLearning Rate = %0.2f\n',...
    maxNumSplits(idxMNS),lr(idxLR))
Min. MSE = 18.42979
Optimal Parameter Values:
Num. Trees = 1
MaxNumSplits = 4
Learning Rate = 1.00
For a different approach to optimizing this ensemble, see Optimize a Boosted Regression Ensemble.
Tbl — Sample data
Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Tbl can contain one additional column for the response variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.
If Tbl contains the response variable and you want to use all remaining variables as predictors, then specify the response variable using ResponseVarName.
If Tbl contains the response variable, and you want to use only a subset of the remaining variables as predictors, then specify a formula using formula.
If Tbl does not contain the response variable, then specify the response data using Y. The length of the response variable and the number of rows of Tbl must be equal.
Note: To save memory and execution time, supply X and Y instead of Tbl.
Data Types: table
ResponseVarName — Response variable name
Response variable name, specified as the name of the response variable in Tbl.
You must specify ResponseVarName as a character vector. For example, if Tbl.Y is the response variable, then specify ResponseVarName as 'Y'. Otherwise, fitensemble treats all columns of Tbl as predictor variables.
The response variable must be a categorical or character array, logical or numeric vector, or cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.
For classification, you can specify the order of the classes using the ClassNames name-value pair argument. Otherwise, fitensemble determines the class order, and stores it in Mdl.ClassNames.
Data Types: char
formula — Explanatory model of response and subset of predictor variables
Explanatory model of the response and a subset of the predictor variables, specified as a character vector in the form of 'Y~X1+X2+X3'. In this form, Y represents the response variable, and X1, X2, and X3 represent the predictor variables. The variables must be variable names in Tbl (Tbl.Properties.VariableNames).
To specify a subset of variables in Tbl as predictors for training the model, use a formula. If you specify a formula, then the software does not use any variables in Tbl that do not appear in formula.
Data Types: char
X — Predictor data
Predictor data, specified as a numeric matrix.
Each row corresponds to one observation, and each column corresponds to one predictor variable. The length of Y and the number of rows of X must be equal.
To specify the names of the predictors in the order of their appearance in X, use the PredictorNames name-value pair argument.
Data Types: double | single
Y — Response data
Response data, specified as a categorical or character array, logical or numeric vector, or cell array of character vectors. Each entry in Y is the response to or label for the observation in the corresponding row of X or Tbl. The length of Y and the number of rows of X or Tbl must be equal. If the response variable is a character array, then each element must correspond to one row of the array.
For classification, Y can be any of the supported data types. You can specify the order of the classes using the ClassNames name-value pair argument. Otherwise, fitensemble determines the class order, and stores it in Mdl.ClassNames.
For regression, Y must be a numeric column vector.
Data Types: char | cell | categorical | logical | single | double
Method — Ensemble-aggregation method
Ensemble-aggregation method, specified as one of the method names in this list.
For classification with two classes:
'AdaBoostM1'
'LogitBoost'
'GentleBoost'
'RobustBoost' (requires Optimization Toolbox™)
'LPBoost' (requires Optimization Toolbox)
'TotalBoost' (requires Optimization Toolbox)
'RUSBoost'
'Subspace'
'Bag'
For classification with three or more classes:
'AdaBoostM2'
'LPBoost' (requires Optimization Toolbox)
'TotalBoost' (requires Optimization Toolbox)
'RUSBoost'
'Subspace'
'Bag'
For regression:
'LSBoost'
'Bag'
Because you can specify 'Bag' for classification and regression problems, specify the problem type using the Type name-value pair argument.
Data Types: char
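Because 'Bag' alone does not identify the problem type, calls that bag learners must also pass Type. A minimal sketch, assuming a predictor matrix X with class labels Y, and a table Tbl with numeric response MPG:

```matlab
% Classification bag of trees; Type disambiguates the problem type.
MdlCls = fitensemble(X,Y,'Bag',100,'Tree','Type','classification');

% Regression bag of trees over a table, MPG as the response.
MdlReg = fitensemble(Tbl,'MPG','Bag',100,'Tree','Type','regression');
```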
NLearn — Number of ensemble learning cycles
positive integer | 'AllPredictorCombinations'
Number of ensemble learning cycles, specified as a positive integer or 'AllPredictorCombinations'.
If you specify a positive integer, then, at every learning cycle, the software trains one weak learner for every template object in Learners. Consequently, the software trains NLearn*numel(Learners) learners.
If you specify 'AllPredictorCombinations', then set Method to 'Subspace' and specify one learner only in Learners. With these settings, the software trains learners for all possible combinations of predictors taken NPredToSample at a time. Consequently, the software trains nchoosek(size(X,2),NPredToSample) learners.
The software composes the ensemble using all trained learners and stores them in Mdl.Trained.
For more details, see Tips.
Data Types: single | double | char
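The 'AllPredictorCombinations' setting can be sketched as follows, assuming a predictor matrix X with class labels Y:

```matlab
% Random subspace ensemble over every 2-predictor combination.
% With d = size(X,2) predictors, this trains nchoosek(d,2) learners.
Mdl = fitensemble(X,Y,'Subspace','AllPredictorCombinations',...
    'Discriminant','NPredToSample',2);
numel(Mdl.Trained)   % equals nchoosek(size(X,2),2)
```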
Learners — Weak learners to use in ensemble
Weak learners to use in the ensemble, specified as a weak-learner name, weak-learner template object, or cell array of weak-learner template objects.
Weak Learner | Weak-Learner Name | Template Object Creation Function | Method Settings
Discriminant analysis | 'Discriminant' | templateDiscriminant | Recommended for 'Subspace'
k nearest neighbors | 'KNN' | templateKNN | For 'Subspace' only
Decision tree | 'Tree' | templateTree | All methods except 'Subspace'
For more details, see NLearn and Tips.
Example: For an ensemble composed of two types of classification trees, supply {t1 t2}, where t1 and t2 are classification tree templates.
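The two-template example can be sketched as follows; the depth limits chosen here are illustrative, and X and Y are assumed to hold the predictors and class labels:

```matlab
% Two classification tree templates with different depth limits.
t1 = templateTree('MaxNumSplits',1);   % stumps
t2 = templateTree('MaxNumSplits',7);   % deeper trees

% Each learning cycle trains one learner per template, so
% NLearn = 50 yields 100 trained trees in total.
Mdl = fitensemble(X,Y,'AdaBoostM1',50,{t1 t2});
```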
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: 'CrossVal','on','LearnRate',0.05 specifies to implement 10-fold cross-validation and to use 0.05 as the learning rate.
'CategoricalPredictors' — Categorical predictors list
Categorical predictors list, specified as the comma-separated pair consisting of 'CategoricalPredictors' and one of the following:
A numeric vector with indices from 1 through p, where p is the number of columns of X.
A logical vector of length p, where a true entry means that the corresponding column of X is a categorical variable.
A cell array of character vectors, where each element in the array is the name of a predictor variable. The names must match the entries in PredictorNames.
A character matrix, where each row of the matrix is a name of a predictor variable. The names must match the entries in PredictorNames. Pad the names with extra blanks so each row of the character matrix has the same length.
'all', meaning all predictors are categorical.
Specification of CategoricalPredictors is appropriate if Learners is:
'tree'
'knn', when all predictors are categorical
By default, if the predictor data is in a matrix (X), fitensemble assumes that none of the predictors are categorical. If the predictor data is in a table (Tbl), fitensemble assumes that a variable is categorical if it contains logical values, categorical values, or a cell array of character vectors.
Example: 'CategoricalPredictors','all'
Data Types: single | double | char
'NPrint' — Printout frequency
'off' (default) | positive integer
Printout frequency, specified as the comma-separated pair consisting of 'NPrint' and a positive integer or 'off'.
To track the number of weak learners or folds that fitensemble trained so far, specify a positive integer. That is, if you specify the positive integer m:
Without also specifying any cross-validation option (for example, CrossVal), then fitensemble displays a message to the command line every time it completes training m weak learners.
And a cross-validation option, then fitensemble displays a message to the command line every time it finishes training m folds.
If you specify 'off', then fitensemble does not display a message when it completes training weak learners.
Tip: When training an ensemble of many weak learners on a large data set, specify a positive integer for NPrint.
Example: 'NPrint',5
Data Types: single | double | char
'PredictorNames' — Predictor variable names
Predictor variable names, specified as the comma-separated pair consisting of 'PredictorNames' and a cell array of unique character vectors. The functionality of 'PredictorNames' depends on the way you supply the training data.
If you supply X and Y, then you can use 'PredictorNames' to give names to the predictor variables in X.
The order of the names in PredictorNames must correspond to the column order of X. That is, PredictorNames{1} is the name of X(:,1), PredictorNames{2} is the name of X(:,2), and so on. Also, size(X,2) and numel(PredictorNames) must be equal.
By default, PredictorNames is {x1,x2,...}.
If you supply Tbl, then you can use 'PredictorNames' to choose which predictor variables to use in training. That is, fitensemble uses only the predictor variables in PredictorNames and the response in training.
PredictorNames must be a subset of Tbl.Properties.VariableNames and cannot include the name of the response variable.
By default, PredictorNames contains the names of all predictor variables.
It is good practice to specify the predictors for training using either 'PredictorNames' or formula, but not both.
Example: 'PredictorNames',{'SepalLength','SepalWidth','PedalLength','PedalWidth'}
Data Types: cell
'ResponseName' — Response variable name
'Y' (default) | character vector
Response variable name, specified as the comma-separated pair consisting of 'ResponseName' and a character vector.
If you supply Y, then you can use 'ResponseName' to specify a name for the response variable.
If you supply ResponseVarName or formula, then you cannot use 'ResponseName'.
Example: 'ResponseName','response'
Data Types: char
'Type' — Supervised learning type
'classification' | 'regression'
Supervised learning type, specified as the comma-separated pair consisting of 'Type' and 'classification' or 'regression'.
If Method is 'Bag', then the supervised learning type is ambiguous. Therefore, specify Type when bagging.
Otherwise, the value of Method determines the supervised learning type.
Example: 'Type','classification'
Data Types: char
'CrossVal' — Cross-validation flag
'off' (default) | 'on'
Cross-validation flag, specified as the comma-separated pair consisting of 'CrossVal' and 'on' or 'off'.
If you specify 'on', then the software implements 10-fold cross-validation.
To override this cross-validation setting, use one of these name-value pair arguments: CVPartition, Holdout, KFold, or Leaveout. To create a cross-validated model, you can use one cross-validation name-value pair argument at a time only.
Alternatively, cross-validate later by passing Mdl to crossval.
Example: 'CrossVal','on'
Data Types: char
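The cross-validate-later workflow can be sketched as follows, assuming a predictor matrix X with class labels Y as in the earlier examples:

```matlab
% Train first, then cross-validate the trained ensemble.
Mdl = fitensemble(X,Y,'AdaBoostM1',100,'Tree');
CVMdl = crossval(Mdl,'KFold',5);   % 5-fold partitioned model
genErr = kfoldLoss(CVMdl);         % cross-validated classification error
```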
'CVPartition' — Cross-validation partition
[] (default) | cvpartition partition object
Cross-validation partition, specified as the comma-separated pair consisting of 'CVPartition' and a cvpartition partition object as created by cvpartition. The partition object specifies the type of cross-validation, and also the indexing for training and validation sets.
To create a cross-validated model, you can use one of these four name-value pair arguments only: CVPartition, Holdout, KFold, or Leaveout.
'Holdout' — Fraction of data for holdout validation
Fraction of data used for holdout validation, specified as the comma-separated pair consisting of 'Holdout' and a scalar value in the range (0,1). If you specify 'Holdout',p, then the software:
Randomly reserves p*100% of the data as validation data, and trains the model using the rest of the data
Stores the compact, trained model in the Trained property of the cross-validated model.
To create a cross-validated model, you can use one of these four name-value pair arguments only: CVPartition, Holdout, KFold, or Leaveout.
Example: 'Holdout',0.1
Data Types: double | single
'KFold' — Number of folds
10 (default) | positive integer value greater than 1
Number of folds to use in a cross-validated classifier, specified as the comma-separated pair consisting of 'KFold' and a positive integer value greater than 1. If you specify, e.g., 'KFold',k, then the software:
Randomly partitions the data into k sets
For each set, reserves the set as validation data, and trains the model using the other k – 1 sets
Stores the k compact, trained models in the cells of a k-by-1 cell vector in the Trained property of the cross-validated model.
To create a cross-validated model, you can use one of these four name-value pair arguments only: CVPartition, Holdout, KFold, or Leaveout.
Example: 'KFold',5
Data Types: single | double
'Leaveout' — Leave-one-out cross-validation flag
'off' (default) | 'on'
Leave-one-out cross-validation flag, specified as the comma-separated pair consisting of 'Leaveout' and 'on' or 'off'. If you specify 'Leaveout','on', then, for each of the n observations, where n is size(Mdl.X,1), the software:
Reserves the observation as validation data, and trains the model using the other n – 1 observations
Stores the n compact, trained models in the cells of an n-by-1 cell vector in the Trained property of the cross-validated model.
To create a cross-validated model, you can use one of these four name-value pair arguments only: CVPartition, Holdout, KFold, or Leaveout.
Example: 'Leaveout','on'
Data Types: char
'ClassNames' — Names of classes to use for training
Names of classes to use for training, specified as the comma-separated pair consisting of 'ClassNames' and a categorical or character array, logical or numeric vector, or cell array of character vectors. ClassNames must be the same data type as Y.
If ClassNames is a character array, then each element must correspond to one row of the array.
Use ClassNames to:
Order the classes during training.
Specify the order of any input or output argument dimension that corresponds to the class order. For example, use ClassNames to specify the order of the dimensions of Cost or the column order of classification scores returned by predict.
Select a subset of classes for training. For example, suppose that the set of all distinct class names in Y is {'a','b','c'}. To train the model using observations from classes 'a' and 'c' only, specify 'ClassNames',{'a','c'}.
The default is the set of all distinct class names in Y.
Example: 'ClassNames',{'b','g'}
Data Types: categorical | char | logical | single | double | cell
'Cost' — Misclassification cost
Misclassification cost, specified as the comma-separated pair consisting of 'Cost' and a square matrix or structure. If you specify:
The square matrix Cost, then Cost(i,j) is the cost of classifying a point into class j if its true class is i. That is, the rows correspond to the true class and the columns correspond to the predicted class. To specify the class order for the corresponding rows and columns of Cost, also specify the ClassNames name-value pair argument.
The structure S, then it must have two fields:
S.ClassNames, which contains the class names as a variable of the same data type as Y
S.ClassificationCosts, which contains the cost matrix with rows and columns ordered as in S.ClassNames
The default is ones(K) - eye(K), where K is the number of distinct classes.
Example: 'Cost',[0 1 2 ; 1 0 2; 2 2 0]
Data Types: double | single | struct
'Prior' — Prior probabilities
'empirical' (default) | 'uniform' | numeric vector | structure array
Prior probabilities for each class, specified as the comma-separated pair consisting of 'Prior' and a value in this table.
Value | Description
'empirical' | The class prior probabilities are the class relative frequencies in Y.
'uniform' | All class prior probabilities are equal to 1/K, where K is the number of classes.
numeric vector | Each element is a class prior probability. Order the elements according to Mdl.ClassNames or specify the order using the ClassNames name-value pair argument. The software normalizes the elements such that they sum to 1.
structure array | A structure S with two fields: S.ClassNames, which contains the class names as a variable of the same data type as Y, and S.ClassProbs, which contains a vector of corresponding prior probabilities.
fitensemble normalizes the prior probabilities in Prior to sum to 1.
Example: struct('ClassNames',{{'setosa','versicolor','virginica'}},'ClassProbs',1:3)
Data Types: char | double | single | struct
'Weights' — Observation weights
Observation weights, specified as the comma-separated pair consisting of 'Weights' and a numeric vector of positive values or the name of a variable in Tbl. The software weighs the observations in each row of X or Tbl with the corresponding value in Weights. The size of Weights must equal the number of rows of X or Tbl.
If you specify the input data as a table Tbl, then Weights can be the name of a variable in Tbl that contains a numeric vector. In this case, you must specify Weights as a character vector. For example, if the weights vector W is stored as Tbl.W, then specify it as 'W'. Otherwise, the software treats all columns of Tbl, including W, as predictors or the response when training the model.
The software normalizes Weights to sum up to the value of the prior probability in the respective class.
By default, Weights is ones(n,1), where n is the number of observations in X or Tbl.
Data Types: double | single | char
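Passing Weights as a table variable name can be sketched as follows; the weight vector W is a hypothetical illustration, and the carsmall-style variables are assumed to be in the workspace:

```matlab
% W is a hypothetical per-observation weight vector (one weight per row).
W = rand(numel(MPG),1);
Tbl = table(Cylinders,Horsepower,Weight,MPG,W);

% Name the weight variable so it is not treated as a predictor.
Mdl = fitensemble(Tbl,'MPG','LSBoost',100,'Tree','Weights','W');
```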
'FResample' — Fraction of training set to resample
1 (default) | positive scalar in (0,1]
'Replace' — Flag indicating to sample with replacement
'on' (default) | 'off'
Flag indicating sampling with replacement, specified as the comma-separated pair consisting of 'Replace' and 'off' or 'on'.
For 'on', the software samples the training observations with replacement.
For 'off', the software samples the training observations without replacement. If you set Resample to 'on', then the software samples training observations assuming uniform weights. If you also specify a boosting method, then the software boosts by reweighting observations.
Unless you set Method to 'Bag' or set Resample to 'on', Replace has no effect.
Example: 'Replace','off'
Data Types: char
'Resample' — Flag indicating to resample
'off' | 'on'
Flag indicating to resample, specified as the comma-separated pair consisting of 'Resample' and 'off' or 'on'.
If Method is any boosting method, then:
'Resample','on' specifies to sample training observations using updated weights as the multinomial sampling probabilities.
'Resample','off' specifies to reweight observations at every learning iteration. This setting is the default.
If Method is 'Bag', then 'Resample' must be 'on'. The software resamples a fraction of the training observations (see FResample) with or without replacement (see Replace).
'LearnRate' — Learning rate for shrinkage
1 (default) | numeric scalar in (0,1]
Learning rate for shrinkage, specified as the comma-separated pair consisting of 'LearnRate' and a numeric scalar in the interval (0,1].
To train an ensemble using shrinkage, set LearnRate to a value less than 1; for example, 0.1 is a popular choice. Training an ensemble using shrinkage requires more learning iterations, but often achieves better accuracy.
Example: 'LearnRate',0.1
Data Types: single | double
'RatioToSmallest' — Sampling proportion with respect to lowest-represented class
Sampling proportion with respect to the lowest-represented class, specified as the comma-separated pair consisting of 'RatioToSmallest' and a numeric scalar or numeric vector of positive values with length equal to the number of distinct classes in the training data.
Suppose that there are K classes in the training data and the lowest-represented class has m observations in the training data.
If you specify the positive numeric scalar s, then fitensemble samples s*m observations from each class, that is, it uses the same sampling proportion for each class. For more details, see Algorithms.
If you specify the numeric vector [s1,s2,...,sK], then fitensemble samples si*m observations from class i, i = 1,...,K. The elements of RatioToSmallest correspond to the order of the class names specified using ClassNames (see Tips).
The default value is ones(K,1), which specifies to sample m observations from each class.
Example: 'RatioToSmallest',[2,1]
Data Types: single | double
'MarginPrecision' — Margin precision to control convergence speed
0.1 (default) | numeric scalar in [0,1]
Margin precision to control convergence speed, specified as the comma-separated pair consisting of 'MarginPrecision' and a numeric scalar in the interval [0,1]. MarginPrecision affects the number of boosting iterations required for convergence.
Tip: To train an ensemble using many learners, specify a small value for MarginPrecision.
Example: 'MarginPrecision',0.5
Data Types: single | double
'RobustErrorGoal' — Target classification error
0.1 (default) | nonnegative numeric scalar
Target classification error, specified as the comma-separated pair consisting of 'RobustErrorGoal' and a nonnegative numeric scalar. The upper bound on possible values depends on the values of RobustMarginSigma and RobustMaxMargin. However, the upper bound cannot exceed 1.
Tip: For a particular training set, usually there is an optimal range for RobustErrorGoal.
Example: 'RobustErrorGoal',0.05
Data Types: single | double
'RobustMarginSigma' — Classification margin distribution spread
0.1 (default) | positive numeric scalar
Classification margin distribution spread over the training data, specified as the comma-separated pair consisting of 'RobustMarginSigma' and a positive numeric scalar. Before specifying RobustMarginSigma, consult the literature on RobustBoost, for example, [19].
Example: 'RobustMarginSigma',0.5
Data Types: single | double
'RobustMaxMargin' — Maximal classification margin
0 (default) | nonnegative numeric scalar
Maximal classification margin in the training data, specified as the comma-separated pair consisting of 'RobustMaxMargin' and a nonnegative numeric scalar. The software minimizes the number of observations in the training data having classification margins below RobustMaxMargin.
Example: 'RobustMaxMargin',1
Data Types: single | double
'NPredToSample' — Number of predictors to sample
1 (default) | positive integer
Number of predictors to sample for each random subspace learner, specified as the comma-separated pair consisting of 'NPredToSample' and a positive integer in the interval 1,...,p, where p is the number of predictor variables (size(X,2) or size(Tbl,2)).
Data Types: single | double
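As an illustrative sketch of a random subspace ensemble (the learner choice and predictor count here are assumptions for illustration):

```matlab
load ionosphere                                  % 34 predictors
% Each k-NN subspace learner trains on 5 randomly sampled predictors.
Mdl = fitensemble(X,Y,'Subspace',50,'KNN', ...
    'NPredToSample',5);
```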
Mdl — Trained ensemble model
ClassificationBaggedEnsemble model object | ClassificationEnsemble model object | ClassificationPartitionedEnsemble cross-validated model object | RegressionBaggedEnsemble model object | RegressionEnsemble model object | RegressionPartitionedEnsemble cross-validated model object
Trained ensemble model, returned as one of the model objects in this table.
Model Object | Type Setting | Specify Any Cross-Validation Options? | Method Setting | Resample Setting
ClassificationBaggedEnsemble | 'classification' | No | 'Bag' | 'on'
ClassificationEnsemble | 'classification' | No | Any ensemble-aggregation method for classification | 'off'
ClassificationPartitionedEnsemble | 'classification' | Yes | Any classification ensemble-aggregation method | 'off' or 'on'
RegressionBaggedEnsemble | 'regression' | No | 'Bag' | 'on'
RegressionEnsemble | 'regression' | No | 'LSBoost' | 'off'
RegressionPartitionedEnsemble | 'regression' | Yes | 'LSBoost' or 'Bag' | 'off' or 'on'
The name-value pair arguments that control cross-validation are CrossVal, Holdout, KFold, Leaveout, and CVPartition.
To reference properties of Mdl, use dot notation. For example, to access or display the cell vector of weak learner model objects for an ensemble that has not been cross-validated, enter Mdl.Trained at the command line.
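A minimal sketch of dot notation, assuming an ensemble trained on the ionosphere data from the earlier example:

```matlab
load ionosphere                                  % predictor matrix X, label cell array Y
Mdl = fitensemble(X,Y,'AdaBoostM1',10,'Tree');   % small ensemble for illustration
trained = Mdl.Trained;                           % 10-by-1 cell vector of weak learners
firstLearner = trained{1};                       % a compact classification tree
```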
NLearn can vary from a few dozen to a few thousand. Usually, an ensemble with good predictive power requires from a few hundred to a few thousand weak learners. However, you do not have to train an ensemble for that many cycles at once. You can start by growing a few dozen learners, inspect the ensemble performance, and then, if necessary, train more weak learners using resume for classification problems or resume for regression problems.
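As a sketch of this incremental workflow (using the ionosphere data; the cycle counts are illustrative assumptions):

```matlab
load ionosphere
Mdl = fitensemble(X,Y,'AdaBoostM1',50,'Tree');   % start with 50 learning cycles
rsLoss = resubLoss(Mdl);                         % inspect performance so far
Mdl = resume(Mdl,50);                            % train 50 more cycles; Mdl now has 100 learners
```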
Ensemble performance depends on the ensemble settings and the settings of the weak learners. That is, if you specify weak learners with default parameters, then the ensemble can perform poorly. Therefore, as with the ensemble settings, it is good practice to adjust the parameters of the weak learners using templates, and to choose values that minimize generalization error.
If you specify to resample using Resample, then it is good practice to resample the entire data set. That is, use the default setting of 1 for FResample.
In classification problems (that is, Type is 'classification'):
If the ensemble-aggregation method (Method) is 'bag' and:
The misclassification cost (Cost) is highly imbalanced, then, for in-bag samples, the software oversamples unique observations from the class that has a large penalty.
The class prior probabilities (Prior) are highly skewed, then the software oversamples unique observations from the class that has a large prior probability.
For smaller sample sizes, these combinations can result in a low relative frequency of out-of-bag observations from the class that has a large penalty or prior probability. Consequently, the estimated out-of-bag error is highly variable and can be difficult to interpret. To avoid large estimated out-of-bag error variances, particularly for small sample sizes, set a more balanced misclassification cost matrix using Cost or a less skewed prior probability vector using Prior.
Because the order of some input and output arguments corresponds to the distinct classes in the training data, it is good practice to specify the class order using the ClassNames name-value pair argument.
To determine the class order quickly, remove all
observations from the training data that are unclassified (that is,
have a missing label), obtain and display an array of all the distinct
classes, and then specify the array for ClassNames
.
For example, suppose the response variable (Y
)
is a cell array of labels. This code specifies the class order in
the variable classNames
.
Ycat = categorical(Y); classNames = categories(Ycat)
categorical assigns <undefined> to unclassified observations, and categories excludes <undefined> from its output. Therefore, if you use this code for cell arrays of labels, or similar code for categorical arrays, then you do not have to remove observations with missing labels to obtain a list of the distinct classes.
To specify the class order from lowest-represented label to most-represented, quickly determine the class order (as in the previous bullet), but arrange the classes in the list by frequency before passing the list to ClassNames. Following from the previous example, this code specifies the class order from lowest- to most-represented in classNamesLH.
Ycat = categorical(Y);
classNames = categories(Ycat);
freq = countcats(Ycat);
[~,idx] = sort(freq);
classNamesLH = classNames(idx);
For details of ensembleaggregation algorithms, see Ensemble Algorithms.
If you specify Method to be a boosting algorithm and Learners to be decision trees, then the software grows stumps by default. A decision stump is one root node connected to two terminal leaf nodes. You can adjust tree depth by specifying the MaxNumSplits, MinLeafSize, and MinParentSize name-value pair arguments using templateTree.
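For instance, a sketch of boosting deeper trees via a template (the split count is an illustrative assumption):

```matlab
load ionosphere
t = templateTree('MaxNumSplits',10);             % deeper trees than the default stumps
Mdl = fitensemble(X,Y,'AdaBoostM1',100,t);       % boost 100 of the deeper trees
```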
fitensemble generates in-bag samples by oversampling classes with large misclassification costs and undersampling classes with small misclassification costs. Consequently, out-of-bag samples have fewer observations from classes with large misclassification costs and more observations from classes with small misclassification costs. If you train a classification ensemble using a small data set and a highly skewed cost matrix, then the number of out-of-bag observations per class can be low. Therefore, the estimated out-of-bag error can have a large variance and can be difficult to interpret. The same phenomenon can occur for classes with large prior probabilities.
For the RUSBoost ensemble-aggregation method (Method), the name-value pair argument RatioToSmallest specifies the sampling proportion for each class with respect to the lowest-represented class. For example, suppose that there are two classes in the training data: A and B. Class A has 100 observations and class B has 10 observations, so the lowest-represented class has m = 10 observations in the training data.
If you set 'RatioToSmallest',2, then s*m = 2*10 = 20. Consequently, fitensemble trains every learner using 20 observations from class A and 20 observations from class B. If you set 'RatioToSmallest',[2 2], then you obtain the same result.
If you set 'RatioToSmallest',[2,1], then s1*m = 2*10 = 20 and s2*m = 1*10 = 10. Consequently, fitensemble trains every learner using 20 observations from class A and 10 observations from class B.
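A minimal sketch of this setup, using hypothetical imbalanced data (the class sizes and predictor values are illustrative assumptions):

```matlab
rng(1)                                           % for reproducibility
XA = randn(100,2);  XB = randn(10,2) + 2;        % class A: 100 obs, class B: 10 obs
X = [XA; XB];
Y = [repmat({'A'},100,1); repmat({'B'},10,1)];
Mdl = fitensemble(X,Y,'RUSBoost',50,'Tree', ...
    'ClassNames',{'A','B'},'RatioToSmallest',[2 1]);
% Each learner trains on 2*10 = 20 class A and 1*10 = 10 class B observations.
```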
For ensembles of decision trees, and for dual-core systems and above, fitensemble parallelizes training using Intel® Threading Building Blocks (TBB). For details on Intel TBB, see https://software.intel.com/en-us/intel-tbb.
[1] Breiman, L. "Bagging Predictors." Machine Learning. Vol. 26, pp. 123–140, 1996.
[2] Breiman, L. "Random Forests." Machine Learning. Vol. 45, pp. 5–32, 2001.
[3] Freund, Y. "A more robust boosting algorithm." arXiv:0905.2138v1, 2009.
[4] Freund, Y. and R. E. Schapire. "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting." J. of Computer and System Sciences, Vol. 55, pp. 119–139, 1997.
[5] Friedman, J. "Greedy function approximation: A gradient boosting machine." Annals of Statistics, Vol. 29, No. 5, pp. 1189–1232, 2001.
[6] Friedman, J., T. Hastie, and R. Tibshirani. "Additive logistic regression: A statistical view of boosting." Annals of Statistics, Vol. 28, No. 2, pp. 337–407, 2000.
[7] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, second edition, Springer, New York, 2008.
[8] Ho, T. K. "The random subspace method for constructing decision forests." IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 8, pp. 832–844, 1998.
[9] Schapire, R. E., Y. Freund, P. Bartlett, and W.S. Lee. "Boosting the margin: A new explanation for the effectiveness of voting methods." Annals of Statistics, Vol. 26, No. 5, pp. 1651–1686, 1998.
[10] Seiffert, C., T. Khoshgoftaar, J. Hulse, and A. Napolitano. "RUSBoost: Improving classification performance when training data is skewed." 19th International Conference on Pattern Recognition, pp. 1–4, 2008.
[11] Warmuth, M., J. Liao, and G. Ratsch. "Totally corrective boosting algorithms that maximize the margin." Proc. 23rd Int'l. Conf. on Machine Learning, ACM, New York, pp. 1001–1008, 2006.
ClassificationBaggedEnsemble | ClassificationEnsemble | ClassificationPartitionedEnsemble | RegressionBaggedEnsemble | RegressionEnsemble | RegressionPartitionedEnsemble | templateDiscriminant | templateKNN | templateTree