`label = predict(tree,TBL)`

`label = predict(tree,X)`

`label = predict(___,Name,Value)`

`[label,score,node,cnum] = predict(___)`

`label = predict(___,Name,Value)` returns labels with additional options specified by one or more `Name,Value` pair arguments, using any of the previous syntaxes. For example, you can specify subtrees.

`[label,score,node,cnum] = predict(___)` also returns a matrix of scores indicating the likelihood that a label comes from a particular class (`score`), a vector of predicted node numbers for the classification (`node`), and a vector of predicted class numbers for the classification (`cnum`), using any of the previous syntaxes.

`predict` classifies so as to minimize the expected classification cost:

$$\widehat{y}=\underset{y=1,\mathrm{...},K}{\mathrm{arg}\mathrm{min}}{\displaystyle \sum _{k=1}^{K}\widehat{P}\left(k|x\right)C\left(y|k\right)},$$

where:

- $$\widehat{y}$$ is the predicted classification.
- *K* is the number of classes.
- $$\widehat{P}\left(k|x\right)$$ is the posterior probability of class *k* for observation *x*.
- $$C\left(y|k\right)$$ is the cost of classifying an observation as *y* when its true class is *k*.
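This decision rule can be sketched directly in MATLAB. The posterior vector and cost matrix below are hypothetical stand-ins, not values produced by `predict`:

```
% Hypothetical posterior probabilities for one observation (K = 3 classes)
Phat = [0.2 0.5 0.3];
% C(y,k): cost of classifying an observation as y when its true class is k
C = ones(3) - eye(3);       % default 0-1 misclassification cost
expCost = C * Phat';        % expected cost of each candidate label y
[~,yhat] = min(expCost);    % predicted class = argmin of expected cost
```

With the default 0-1 cost, minimizing expected cost reduces to picking the class with the largest posterior probability, so `yhat` is 2 here.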

For trees, the *score* of a classification
of a leaf node is the posterior probability of the classification
at that node. The posterior probability of the classification at a
node is the number of training sequences that lead to that node with
the classification, divided by the number of training sequences that
lead to that node.

For example, consider classifying a predictor `X` as `true` when `X` < `0.15` or `X` > `0.95`, and `X` is `false` otherwise.

Generate 100 random points and classify them:

```
rng(0,'twister') % for reproducibility
X = rand(100,1);
Y = (abs(X - .55) > .4);
tree = fitctree(X,Y);
view(tree,'Mode','Graph')
```

Prune the tree:

```
tree1 = prune(tree,'Level',1);
view(tree1,'Mode','Graph')
```

The pruned tree correctly classifies observations that are less than 0.15 as `true`. It also correctly classifies observations from .15 to .94 as `false`. However, it incorrectly classifies observations that are greater than .94 as `false`. Therefore, the score for observations that are greater than .15 should be about .05/.85=.06 for `true`, and about .8/.85=.94 for `false`.

Compute the prediction scores for the first 10 rows of `X`:

```
[~,score] = predict(tree1,X(1:10));
[score X(1:10,:)]
```

```
ans =
    0.9059    0.0941    0.8147
    0.9059    0.0941    0.9058
         0    1.0000    0.1270
    0.9059    0.0941    0.9134
    0.9059    0.0941    0.6324
         0    1.0000    0.0975
    0.9059    0.0941    0.2785
    0.9059    0.0941    0.5469
    0.9059    0.0941    0.9575
    0.9059    0.0941    0.9649
```

Indeed, every value of `X` (the right-most column) that is less than 0.15 has associated scores (the left and center columns) of `0` and `1`, while the other values of `X` have associated scores of `0.91` and `0.09`. The difference (score `0.09` instead of the expected `.06`) is due to a statistical fluctuation: there are `8` observations in `X` in the range `(.95,1)` instead of the expected `5` observations.

There are two costs associated with classification: the true misclassification cost per class, and the expected misclassification cost per observation.

You can set the true misclassification cost per class in the `Cost` name-value pair when you create the classifier using the `fitctree` method. `Cost(i,j)` is the cost of classifying an observation into class `j` if its true class is `i`. By default, `Cost(i,j)=1` if `i~=j`, and `Cost(i,j)=0` if `i=j`. In other words, the cost is `0` for correct classification, and `1` for incorrect classification.
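As an illustration (the data and cost values below are made up for this sketch), you can supply an asymmetric cost matrix when training:

```
rng(1)                        % hypothetical example data
X = rand(100,1);
Y = X > 0.5;                  % two classes: false and true
% Make misclassifying a true observation five times more expensive
% than misclassifying a false one. C(i,j): cost of classifying into
% class j when the true class is i.
C = [0 1; 5 0];
tree = fitctree(X,Y,'Cost',C);
```

With this cost matrix, the tree shifts its decisions to avoid the expensive kind of error, at the price of making more of the cheap kind.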


Suppose you have `Nobs` observations that you want to classify with a trained classifier, and you have `K` classes. You place the observations into a matrix `Xnew` with one observation per row.

The expected cost matrix `CE` has size `Nobs`-by-`K`. Each row of `CE` contains the expected (average) cost of classifying the observation into each of the `K` classes. `CE(n,k)` is

$$\sum _{i=1}^{K}\widehat{P}\left(i|Xnew(n)\right)C\left(k|i\right),$$

where:

- *K* is the number of classes.
- $$\widehat{P}\left(i|Xnew(n)\right)$$ is the posterior probability of class *i* for observation *Xnew*(*n*).
- $$C\left(k|i\right)$$ is the true misclassification cost of classifying an observation as *k* when its true class is *i*.
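Because `CE(n,k)` is an inner product of a posterior row with a cost column, the whole matrix can be formed with one matrix product. In this sketch, `posterior` is a hypothetical `Nobs`-by-`K` matrix of posterior probabilities and `Cost` is the `K`-by-`K` cost matrix from training, with `Cost(i,k)` the cost of classifying into class `k` when the true class is `i`:

```
% Hypothetical inputs: posterior(n,i) = Phat(i | Xnew(n)),
% Cost(i,k) = C(k | i)
posterior = [0.7 0.2 0.1; 0.1 0.1 0.8];   % Nobs = 2, K = 3
Cost = ones(3) - eye(3);                  % default 0-1 cost

CE = posterior * Cost;    % CE(n,k) = sum_i posterior(n,i)*Cost(i,k)
```

With 0-1 cost, each entry of `CE` is simply one minus the corresponding posterior probability.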

The *predictive measure of association* is
a value that indicates the similarity between decision rules that
split observations. Among all possible decision splits that are compared
to the optimal split (found by growing the tree), the best surrogate decision
split yields the maximum predictive measure of association.
The second-best surrogate split has the second-largest predictive
measure of association.

Suppose *x*_{j} and *x*_{k} are predictor variables *j* and *k*, respectively, with *j* ≠ *k*. At node *t*, the predictive measure of association between the optimal split *x*_{j} < *u* and a surrogate split *x*_{k} < *v* is

$${\lambda}_{jk}=\frac{\text{min}\left({P}_{L},{P}_{R}\right)-\left(1-{P}_{{L}_{j}{L}_{k}}-{P}_{{R}_{j}{R}_{k}}\right)}{\text{min}\left({P}_{L},{P}_{R}\right)},$$

where:

- *P*_{L} is the proportion of observations in node *t*, such that *x*_{j} < *u*. The subscript *L* stands for the left child of node *t*.
- *P*_{R} is the proportion of observations in node *t*, such that *x*_{j} ≥ *u*. The subscript *R* stands for the right child of node *t*.
- $${P}_{{L}_{j}{L}_{k}}$$ is the proportion of observations at node *t*, such that *x*_{j} < *u* and *x*_{k} < *v*.
- $${P}_{{R}_{j}{R}_{k}}$$ is the proportion of observations at node *t*, such that *x*_{j} ≥ *u* and *x*_{k} ≥ *v*.
- Observations with missing values for *x*_{j} or *x*_{k} do not contribute to the proportion calculations.

*λ*_{jk} is a value in (–∞,1]. If *λ*_{jk} > 0, then *x*_{k} < *v* is a worthwhile surrogate split for *x*_{j} < *u*.
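As a worked illustration, the formula can be evaluated directly. The proportions below are made-up numbers, not taken from any particular tree:

```
% Hypothetical node proportions for an optimal split x_j < u
% and a surrogate split x_k < v at the same node t
PL = 0.6;  PR = 0.4;          % left/right proportions under x_j < u
PLjLk = 0.55;                 % both splits send the observation left
PRjRk = 0.35;                 % both splits send the observation right

lambda = (min(PL,PR) - (1 - PLjLk - PRjRk)) / min(PL,PR)
% lambda = (0.4 - 0.1)/0.4 = 0.75, a strongly associated surrogate
```

The numerator compares the surrogate's disagreement rate (1 − agreement) against the error of the trivial rule that always predicts the larger child, so λ is positive only when the surrogate beats that baseline.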

`predict` generates predictions by following the branches of `tree` until it reaches a leaf node or a missing value. If `predict` reaches a leaf node, it returns the classification of that node.

If `predict` reaches a node with a missing value for a predictor, its behavior depends on the setting of the `Surrogate` name-value pair when `fitctree` constructs `tree`.

- `Surrogate` = `'off'` (default) — `predict` returns the label with the largest number of training samples that reach the node.
- `Surrogate` = `'on'` — `predict` uses the best surrogate split at the node. If all surrogate split variables with positive *predictive measure of association* are missing, `predict` returns the label with the largest number of training samples that reach the node. For a definition, see Predictive Measure of Association.
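A small sketch of this behavior, with made-up data chosen so that the second predictor is a useful surrogate for the first:

```
rng(2)                            % hypothetical example data
X1 = rand(100,1);
X2 = X1 + 0.05*randn(100,1);      % correlated predictor -> good surrogate
X  = [X1 X2];
Y  = X1 > 0.5;
tree = fitctree(X,Y,'Surrogate','on');

% The first predictor is missing; predict can still classify by
% following the best surrogate split on the second predictor.
label = predict(tree,[NaN 0.8]);
```

With `'Surrogate','off'` (the default), the same call would stop at the node that splits on the missing predictor and return that node's majority label instead.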
