label = resubPredict(tree)
[label,posterior] = resubPredict(tree)
[label,posterior,node] = resubPredict(tree)
[label,posterior,node,cnum] = resubPredict(tree)
[label,...] = resubPredict(tree,Name,Value)
label = resubPredict(tree) returns the labels tree predicts for the data tree.X. label is the predictions of tree on the data that fitctree used to create tree.

[label,posterior] = resubPredict(tree) also returns the posterior class probabilities for the predictions.

[label,posterior,node] = resubPredict(tree) also returns the node numbers of tree for the resubstituted data.

[label,posterior,node,cnum] = resubPredict(tree) also returns the predicted class numbers for the predictions.

[label,...] = resubPredict(tree,Name,Value) returns resubstitution predictions with additional options specified by one or more Name,Value pair arguments.
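As a quick illustration of the calling forms above, here is a minimal sketch (assuming Fisher's iris data set, fisheriris, which ships with the toolbox):

```
load fisheriris                    % sample data: meas (150-by-4), species (150-by-1 cell)
tree = fitctree(meas,species);     % grow a classification tree

% Resubstitution predictions on the training data tree.X
[label,posterior,node,cnum] = resubPredict(tree);

% Fraction of training rows the tree reproduces correctly
resubAccuracy = mean(strcmp(label,species))
```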

tree — A classification tree constructed by fitctree.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'Subtrees' — A vector of nonnegative integers in ascending order, or 'all'. If you specify a vector, then all elements must be at least 0 and at most max(tree.PruneList). If you specify 'all', then resubPredict operates on all subtrees (the entire pruning sequence). To invoke 'Subtrees', the PruneList and PruneAlpha properties of tree must be nonempty; that is, grow tree with 'Prune' set to 'on', or prune tree using prune.
Default: 0
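For instance, assuming tree carries pruning information (grow it with 'Prune','on', or prune it with prune), a sketch of querying several pruning levels at once:

```
% Predictions at pruning levels 0, 1, and 2;
% label then has one column per requested subtree
label = resubPredict(tree,'Subtrees',0:2);
size(label)
```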

label — The response tree predicts for the data tree.X. If the 'Subtrees' name-value pair contains multiple entries, label has one column per subtree, each column containing the predictions of the corresponding subtree.

posterior — Matrix or array of posterior probabilities for the classes tree predicts. If the 'Subtrees' name-value pair is a scalar or is missing, posterior is an n-by-k matrix, where n is the number of rows in tree.X and k is the number of classes. If 'Subtrees' contains m > 1 entries, posterior is an n-by-k-by-m array, where each page gives the probabilities for the corresponding subtree.

node — The node numbers of tree where each data row resolves. If the 'Subtrees' name-value pair is a scalar or is missing, node is a numeric column vector with n rows. If 'Subtrees' contains m > 1 entries, node is an n-by-m matrix, with one column per subtree.

cnum — The class numbers that tree predicts for the resubstituted data. If the 'Subtrees' name-value pair is a scalar or is missing, cnum is a numeric column vector with n rows. If 'Subtrees' contains m > 1 entries, cnum is an n-by-m matrix, with one column per subtree.
The posterior probability of the classification at a node is the number of training sequences that lead to that node with this classification, divided by the number of training sequences that lead to that node.
For example, consider classifying a predictor X as true when X < 0.15 or X > 0.95, and as false otherwise.
Generate 100 random points and classify them:
rng(0) % For reproducibility
X = rand(100,1);
Y = (abs(X - .55) > .4);
tree = fitctree(X,Y);
view(tree,'Mode','graph')
Prune the tree:
tree1 = prune(tree,'Level',1);
view(tree1,'Mode','graph')
The pruned tree correctly classifies observations that are less than 0.15 as true. It also correctly classifies observations between .15 and .94 as false. However, it incorrectly classifies observations that are greater than .94 as false. Therefore, the score for observations that are greater than .15 should be about .05/.85 = .06 for true, and about .8/.85 = .94 for false.
Compute the prediction scores for the first 10 rows of X:
[~,score] = predict(tree1,X(1:10));
[score X(1:10,:)]
ans =

    0.9059    0.0941    0.8147
    0.9059    0.0941    0.9058
         0    1.0000    0.1270
    0.9059    0.0941    0.9134
    0.9059    0.0941    0.6324
         0    1.0000    0.0975
    0.9059    0.0941    0.2785
    0.9059    0.0941    0.5469
    0.9059    0.0941    0.9575
    0.9059    0.0941    0.9649
Indeed, every value of X (the rightmost column) that is less than 0.15 has associated scores (the left and center columns) of 0 and 1, while the other values of X have associated scores of 0.9059 and 0.0941.
See Also: fitctree | predict | resubEdge | resubLoss | resubMargin