[label,score] = oobPredict(ens)
[label,score] = oobPredict(ens,Name,Value)

[label,score] = oobPredict(ens) returns class labels
and scores for the ensemble ens for out-of-bag data.

[label,score] = oobPredict(ens,Name,Value) computes
labels and scores with additional options specified by one or more Name,Value pair
arguments.

A classification bagged ensemble, constructed with fitensemble.
Specify optional comma-separated pairs of Name,Value
arguments. Name is the argument
name and Value is the corresponding
value. Name must appear
inside single quotes (' ').
You can specify several name and value pair
arguments in any order as Name1,Value1,...,NameN,ValueN.

Indices of weak learners in the ensemble ranging from 1 to NumTrained. Default: 1:NumTrained
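As an illustration, a prediction can be restricted to a subset of the weak learners through this pair. This is a sketch, assuming a trained classification bagged ensemble ens and that the pair is named 'learners', as in the related out-of-bag methods:

```matlab
% Sketch: out-of-bag prediction using only the first 50 weak learners.
% ens is assumed to be a classification bagged ensemble from fitensemble.
[label,score] = oobPredict(ens,'learners',1:50);
```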

Classification labels of the same data type as the training
data 

An N-by-K matrix, where N is the number of observations
in the training data and K is the number of classes. A high score
indicates that an observation is likely to come from the corresponding class.
Bagging, which stands for "bootstrap
aggregation", is a type of ensemble learning. To bag a weak
learner such as a decision tree on a dataset, fitensemble
generates
many bootstrap replicas of the dataset and grows decision trees on
these replicas. fitensemble
obtains each bootstrap
replica by randomly selecting N
observations out
of N
with replacement, where N
is
the dataset size. To find the predicted response of a trained ensemble, predict
takes
an average over predictions from individual trees.
Drawing N
out of N
observations
with replacement omits on average 37% (1/e) of
observations for each decision tree. These are "out-of-bag" observations.
For each observation, oobLoss
estimates the out-of-bag
prediction by averaging over predictions from all trees in the ensemble
for which this observation is out of bag. It then compares the computed
prediction against the true response for this observation. It calculates
the out-of-bag error by comparing the out-of-bag predicted responses
against the true responses for all observations used for training.
This out-of-bag average is an unbiased estimator of the true ensemble
error.
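The 37% (1/e) out-of-bag fraction mentioned above can be checked empirically with a short simulation; this is a sketch, and the dataset size N is an arbitrary choice:

```matlab
% Each bootstrap replica draws N observations out of N with replacement.
% Observations never drawn are out of bag for that tree.
N = 1e5;
idx = randi(N,N,1);                      % one bootstrap replica
oobFraction = 1 - numel(unique(idx))/N   % close to exp(-1), about 0.368
```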
For ensembles, a classification score represents the confidence of a classification into a class. The higher the score, the higher the confidence.
Different ensemble algorithms have different definitions for their scores. Furthermore, the range of scores depends on ensemble type. For example:
AdaBoostM1 scores range from –∞ to ∞.
Bag scores range from 0 to 1.
Find the out-of-bag predictions and scores for the Fisher iris
data. Find the scores in the range (0.2,0.8); these
are the scores where there is notable uncertainty in the resulting
classifications.

load fisheriris
ens = fitensemble(meas,species,'Bag',100,...
    'Tree','type','classification');
[label score] = oobPredict(ens);
unsure = ( (score > .2) & (score < .8));
sum(sum(unsure)) % How many uncertain predictions?

ans =

    16
oobPredict
and predict
similarly predict
classes and responses.
In regression problems:
For each observation that is out of bag for at least
one tree, oobPredict
composes the weighted mean
by selecting responses of trees in which the observation is out of
bag. For this computation, the 'TreeWeights'
name-value
pair argument specifies the weights.
For each observation that is in bag for all trees,
the predicted response is the weighted mean of all of the training
responses. For this computation, the W
property
of the TreeBagger
model (i.e., the observation
weights) specifies the weights.
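A minimal regression sketch of this behavior, using assumed synthetic data and 50 trees (the 'OOBPrediction' flag name is an assumption and may be 'oobpred' in older releases):

```matlab
% Sketch: out-of-bag predicted responses from a regression TreeBagger.
rng(1);                                   % for reproducibility
X = rand(200,2);
Y = 3*X(:,1) - 2*X(:,2) + 0.1*randn(200,1);
b = TreeBagger(50,X,Y,'Method','regression','OOBPrediction','on');
Yoob = oobPredict(b);                     % out-of-bag predictions
oobMSE = mean((Yoob - Y).^2)              % out-of-bag mean squared error
```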
In classification problems:
For each observation that is out of bag for at least
one tree, oobPredict
composes the weighted mean
of the class posterior probabilities by selecting the trees in which
the observation is out of bag. Consequently, the predicted class is
the class corresponding to the largest weighted mean. For this computation,
the 'TreeWeights'
name-value pair argument specifies
the weights.
For each observation that is in bag for all trees,
the predicted class is the weighted, most popular class over all training
responses. For this computation, the W
property
of the TreeBagger
model (i.e., the observation
weights) specifies the weights. If there are multiple most popular classes, oobPredict
considers
the one listed first in the ClassNames
property
of the TreeBagger
model the most popular.
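The classification case can be sketched the same way on the Fisher iris data (again assuming the 'OOBPrediction' flag; the score columns are ordered as in b.ClassNames):

```matlab
% Sketch: out-of-bag class labels and posterior scores from TreeBagger.
load fisheriris
b = TreeBagger(50,meas,species,'OOBPrediction','on');
[labels,scores] = oobPredict(b);   % labels: cell array of class names
% Each row of scores holds the weighted mean class posteriors over the
% trees for which that observation is out of bag.
```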
oobEdge | oobLoss | oobMargin | oobPredict | predict