`classregtree` will be removed in a future release. Use `fitctree`, `fitrtree`, `ClassificationTree`, or `RegressionTree` instead.

`yfit = eval(t,X)`

`yfit = eval(t,X,s)`

`[yfit,nodes] = eval(...)`

`[yfit,nodes,cnums] = eval(...)`

`[...] = t(X)`

`[...] = t(X,s)`

`yfit = eval(t,X)` takes a classification or regression tree `t` and a matrix `X` of predictors, and produces a vector `yfit` of predicted response values. For a regression tree, `yfit(i)` is the fitted response value for a point having the predictor values `X(i,:)`. For a classification tree, `yfit(i)` is the class into which the tree assigns the point with data `X(i,:)`.

`yfit = eval(t,X,s)` takes an additional vector `s` of pruning levels, with 0 representing the full, unpruned tree. `t` must include a pruning sequence as created by `classregtree` or by `prune`. If `s` has *k* elements and `X` has *n* rows, the output `yfit` is an *n*-by-*k* matrix, with the `j`th column containing the fitted values produced by the `s(j)` subtree. `s` must be sorted in ascending order.

To compute fitted values for a tree that is not part of the optimal pruning sequence, first use `prune` to prune the tree.
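For instance, a minimal sketch of evaluating several pruning levels at once (assuming `t` carries the pruning sequence that `classregtree` computes by default):

```
% Evaluate the tree at pruning levels 0 (full tree), 1, and 2.
% Each column of yfit corresponds to one pruning level.
load fisheriris
t = classregtree(meas,species);   % t includes a pruning sequence
s = [0 1 2];                      % must be sorted in ascending order
yfit = eval(t,meas,s);            % one column of predictions per level
size(yfit)                        % n-by-k: 150 rows, 3 columns
```

Higher pruning levels produce coarser subtrees, so columns for larger `s(j)` typically show fewer distinct predicted values.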

`[yfit,nodes] = eval(...)` also returns a vector `nodes`, the same size as `yfit`, containing the node number assigned to each row of `X`. Use `view` to display the node numbers for any node you select.
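A short sketch of using the `nodes` output to see how observations are distributed over the tree's nodes (`tabulate` is assumed to be available from the same toolbox):

```
% Return the node reached by each observation, then count how many
% observations land in each node of the tree.
load fisheriris
t = classregtree(meas,species);
[yfit,nodes] = eval(t,meas);
tabulate(nodes)   % node number, count, and percentage per node
```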

`[yfit,nodes,cnums] = eval(...)` is valid only for classification trees. It returns a vector `cnums` containing the predicted class numbers.
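As a sketch, the numeric output can stand in for the class names when you need integer labels (the mapping from numbers to class names is assumed to follow the order in which classes appear in the training data):

```
% Request all three outputs from a classification tree.
% yfit holds class names; cnums holds the corresponding class numbers.
load fisheriris
t = classregtree(meas,species);
[yfit,nodes,cnums] = eval(t,meas);
unique(cnums)   % one integer per class in the training data
```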

`NaN` values in `X` are treated as missing. If `eval` encounters a missing value when it attempts to evaluate the split rule at a branch node, it cannot determine whether to proceed to the left or right child node. Instead, it sets the corresponding fitted value equal to the fitted value assigned to the branch node.
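A minimal sketch of this fallback behavior, assuming the root node of the iris tree splits on the third predictor (petal length, as in the example below):

```
% With a NaN in the predictor used at the root split, evaluation
% cannot descend, so the prediction falls back to the fitted value
% assigned to that branch node.
load fisheriris
t = classregtree(meas,species);
x = meas(1,:);
x(3) = NaN;               % petal length, used at the root split
[yfit,nodes] = eval(t,x)  % nodes reports the branch node reached
```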

`[...] = t(X)` or `[...] = t(X,s)` also invoke `eval`.
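For example, calling the tree object directly is equivalent to calling `eval` on it:

```
% These two calls produce the same predictions.
load fisheriris
t = classregtree(meas,species);
yfit1 = eval(t,meas);
yfit2 = t(meas);      % shorthand that invokes eval
isequal(yfit1,yfit2)
```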

Create a classification tree for Fisher's iris data:

```
load fisheriris;
t = classregtree(meas,species,...
                 'names',{'SL' 'SW' 'PL' 'PW'})

t =
Decision tree for classification
1  if PL<2.45 then node 2 elseif PL>=2.45 then node 3 else setosa
2  class = setosa
3  if PW<1.75 then node 4 elseif PW>=1.75 then node 5 else versicolor
4  if PL<4.95 then node 6 elseif PL>=4.95 then node 7 else versicolor
5  class = virginica
6  if PW<1.65 then node 8 elseif PW>=1.65 then node 9 else versicolor
7  class = virginica
8  class = versicolor
9  class = virginica

view(t)
```

Find assigned class names:

```
sfit = eval(t,meas);
```

Compute the proportion that is correctly classified:

```
pct = mean(strcmp(sfit,species))

pct =
    0.9800
```


`classregtree` | `prune` | `test` | `view`