
# ClassificationTree.fit

Class: ClassificationTree

Fit classification tree (to be removed)

`ClassificationTree.fit` will be removed in a future release. Use `fitctree` instead.

## Syntax

`tree = ClassificationTree.fit(x,y)`

`tree = ClassificationTree.fit(x,y,Name,Value)`

## Description

`tree = ClassificationTree.fit(x,y)` returns a classification tree based on the input variables (also known as predictors, features, or attributes) `x` and output (response) `y`. `tree` is a binary tree, where each branching node is split based on the values of a column of `x`.

`tree = ClassificationTree.fit(x,y,Name,Value)` fits a tree with additional options specified by one or more `Name,Value` pair arguments. You can specify several name-value pair arguments in any order as `Name1,Value1,…,NameN,ValueN`.

Note that using the `'CrossVal'`, `'KFold'`, `'Holdout'`, `'Leaveout'`, or `'CVPartition'` options results in a tree of class `ClassificationPartitionedModel`. You cannot use a partitioned tree for prediction, so this kind of tree does not have a `predict` method.

Otherwise, `tree` is of class `ClassificationTree`, and you can use the `predict` method to make predictions.
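
For instance, here is a minimal sketch (assuming Fisher's iris data, which ships with the toolbox) that grows a tree and queries it:

```
% Grow a classification tree on Fisher's iris data
load fisheriris
tree = ClassificationTree.fit(meas, species, 'MinLeaf', 5);

% Predict the class of a new observation
label = predict(tree, [5.9 3.0 5.1 1.8])
```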

## Input Arguments


`x` — Predictor values, specified as a matrix of floating-point values.

`ClassificationTree.fit` considers `NaN` values in `x` as missing values. `ClassificationTree.fit` does not use observations with all missing values for `x` in the fit. `ClassificationTree.fit` uses observations with some missing values for `x` to find splits on variables for which these observations have valid values.

Data Types: `single` | `double`
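
For example, a minimal sketch with hypothetical toy data showing how missing predictor values are handled:

```
% The second row of x is all NaN, so it is dropped from the fit;
% the third row is only partially missing and is still used.
x = [1 2; NaN NaN; 3 NaN; 4 5; 5 6; 6 7];
y = {'a'; 'a'; 'b'; 'b'; 'a'; 'b'};
tree = ClassificationTree.fit(x, y, 'MinParent', 2);
```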

`y` — Classification values (class labels), specified as a numeric vector, categorical vector, logical vector, character array, or cell array of character vectors.

Each row of `y` represents the classification of the corresponding row of `x`. For numeric `y`, consider using `fitrtree` instead. `ClassificationTree.fit` considers `NaN` values in `y` to be missing values.

`ClassificationTree.fit` does not use observations with missing values for `y` in the fit.

Data Types: `single` | `double` | `char` | `logical` | `cell`

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside single quotes (`' '`). You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.


Algorithm to find the best split on a categorical predictor with L levels for data with K ≥ 3 classes, specified as the comma-separated pair consisting of `'AlgorithmForCategorical'` and one of the following.

| Value | Meaning |
| --- | --- |
| `'Exact'` | Consider all $2^{L-1}-1$ combinations |
| `'PullLeft'` | Pull left by purity |
| `'PCA'` | Principal component-based partitioning |
| `'OVAbyClass'` | One versus all by class |

`ClassificationTree.fit` selects the optimal subset of algorithms for each split using the known number of classes and levels of a categorical predictor. For K = 2 classes, `ClassificationTree.fit` always performs the exact search.

Example: `'AlgorithmForCategorical','PCA'`
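
A minimal sketch with hypothetical random data (one categorical predictor with 8 levels, K = 3 classes) that forces the PCA-based algorithm:

```
% Hypothetical data: one categorical predictor, three classes
x = randi(8, 100, 1);     % predictor with 8 levels
y = randi(3, 100, 1);     % three classes
tree = ClassificationTree.fit(x, y, ...
    'CategoricalPredictors', 1, 'AlgorithmForCategorical', 'PCA');
```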

Categorical predictors list, specified as the comma-separated pair consisting of `'CategoricalPredictors'` and one of the following.

• A numeric vector with indices from `1` to `p`, where `p` is the number of columns of `x`.

• A logical vector of length `p`, where a `true` entry means that the corresponding column of `x` is a categorical variable.

• A cell array of character vectors, where each element in the array is the name of a predictor variable. The names must match entries in `PredictorNames` values.

• A character matrix, where each row of the matrix is a name of a predictor variable. The names must match entries in `PredictorNames` values. Pad the names with extra blanks so each row of the character matrix has the same length.

• `'all'`, meaning all predictors are categorical.

Example: `'CategoricalPredictors','all'`

Data Types: `single` | `double` | `char`

Class names, specified as the comma-separated pair consisting of `'ClassNames'` and an array representing the class names. Use the same data type as the values that exist in `y`.

Use `ClassNames` to order the classes or to select a subset of classes for training. The default is the class names that exist in `y`.

Data Types: `single` | `double` | `char` | `logical` | `cell`
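
For instance, a sketch (assuming Fisher's iris data) that trains on only two of the three classes by listing them in `ClassNames`:

```
% Observations whose class is not listed in ClassNames are ignored
load fisheriris
tree = ClassificationTree.fit(meas, species, ...
    'ClassNames', {'setosa','versicolor'});
```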

Cost of misclassifying a point, specified as the comma-separated pair consisting of `'Cost'` and one of the following.

• Square matrix, where `Cost(i,j)` is the cost of classifying a point into class `j` if its true class is `i`.

• Structure `S` having two fields: `S.ClassNames` containing the group names as a variable of the same type as `y`, and `S.ClassificationCosts` containing the cost matrix.

The default is `Cost(i,j)=1` if `i~=j`, and `Cost(i,j)=0` if `i=j`.

Data Types: `single` | `double` | `struct`
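
As an illustration, a sketch on the ionosphere data that makes misclassifying a true `'b'` (bad radar return) five times as costly as misclassifying a true `'g'`:

```
load ionosphere
C = [0 5; 1 0];   % rows = true class, columns = predicted class
tree = ClassificationTree.fit(X, Y, 'ClassNames', {'b','g'}, 'Cost', C);
```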

Flag to grow a cross-validated decision tree, specified as the comma-separated pair consisting of `'CrossVal'` and either `'on'` or `'off'`.

If `'on'`, `ClassificationTree.fit` grows a cross-validated decision tree with 10 folds. You can override this cross-validation setting using one of the `'KFold'`, `'Holdout'`, `'Leaveout'`, or `'CVPartition'` name-value pair arguments. Note that you can only use one of these four options (`'KFold'`, `'Holdout'`, `'Leaveout'`, or `'CVPartition'`) at a time when creating a cross-validated tree.

Alternatively, cross-validate `tree` later using the `crossval` method.

Example: `'CrossVal','on'`
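
For example, a minimal sketch that grows a 10-fold cross-validated tree and estimates its misclassification rate:

```
load ionosphere
cvtree = ClassificationTree.fit(X, Y, 'CrossVal', 'on');
err = kfoldLoss(cvtree)   % 10-fold cross-validation loss
```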

Partition to use in a cross-validated tree, specified as the comma-separated pair consisting of `'CVPartition'` and an object of the `cvpartition` class created using `cvpartition`.

Note that if you use `'CVPartition'`, you cannot use any of the `'KFold'`, `'Holdout'`, or `'Leaveout'` name-value pair arguments.
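
A sketch supplying an explicit five-fold partition instead of the default 10 folds:

```
load ionosphere
cvp = cvpartition(Y, 'KFold', 5);    % stratified 5-fold partition
cvtree = ClassificationTree.fit(X, Y, 'CVPartition', cvp);
```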

Fraction of data used for holdout validation, specified as the comma-separated pair consisting of `'Holdout'` and a scalar value in the range `[0,1]`. Holdout validation tests the specified fraction of the data, and uses the rest of the data for training.

Note that if you use `'Holdout'`, you cannot use any of the `'CVPartition'`, `'KFold'`, or `'Leaveout'` name-value pair arguments.

Example: `'Holdout',0.1`

Data Types: `single` | `double`

Number of folds to use in a cross-validated tree, specified as the comma-separated pair consisting of `'KFold'` and a positive integer value.

Note that if you use `'KFold'`, you cannot use any of the `'CVPartition'`, `'Holdout'`, or `'Leaveout'` name-value pair arguments.

Example: `'KFold',8`

Data Types: `single` | `double`

Leave-one-out cross-validation flag, specified as the comma-separated pair consisting of `'Leaveout'` and either `'on'` or `'off'`. Specify `'on'` to use leave-one-out cross validation.

Note that if you use `'Leaveout'`, you cannot use any of the `'CVPartition'`, `'Holdout'`, or `'KFold'` name-value pair arguments.

Example: `'Leaveout','on'`

Maximum category levels, specified as the comma-separated pair consisting of `'MaxCat'` and a nonnegative scalar value. `ClassificationTree.fit` splits a categorical predictor using the exact search algorithm if the predictor has at most `MaxCat` levels in the split node. Otherwise, `ClassificationTree.fit` finds the best categorical split using one of the inexact algorithms.

Note that passing a small value can lead to loss of accuracy and passing a large value can lead to long computation time and memory overload.

Example: `'MaxCat',8`

Leaf merge flag, specified as the comma-separated pair consisting of `'MergeLeaves'` and either `'on'` or `'off'`. When `'on'`, `ClassificationTree.fit` merges leaves that originate from the same parent node and that give a sum of risk values greater than or equal to the risk associated with the parent node. When `'off'`, `ClassificationTree.fit` does not merge leaves.

Example: `'MergeLeaves','off'`

Minimum number of leaf node observations, specified as the comma-separated pair consisting of `'MinLeaf'` and a positive integer value. Each leaf has at least `MinLeaf` observations. If you supply both `MinParent` and `MinLeaf`, `ClassificationTree.fit` uses the setting that gives larger leaves: `MinParent=max(MinParent,2*MinLeaf)`.

Example: `'MinLeaf',3`

Data Types: `single` | `double`

Minimum number of branch node observations, specified as the comma-separated pair consisting of `'MinParent'` and a positive integer value. Each branch node in the tree has at least `MinParent` observations. If you supply both `MinParent` and `MinLeaf`, `ClassificationTree.fit` uses the setting that gives larger leaves: `MinParent=max(MinParent,2*MinLeaf)`.

Example: `'MinParent',8`

Data Types: `single` | `double`
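
For example, a sketch showing the interaction of the two settings; with `MinParent` 12 and `MinLeaf` 10, the effective minimum parent size is `max(12,2*10) = 20`:

```
load fisheriris
tree = ClassificationTree.fit(meas, species, 'MinParent', 12, 'MinLeaf', 10);
view(tree)   % print the (shallower) tree structure in the Command Window
```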

Number of predictors to select at random for each split, specified as the comma-separated pair consisting of `'NVarToSample'` and a positive integer value. You can also specify `'all'` to use all available predictors.

Example: `'NVarToSample',3`

Data Types: `single` | `double`

Predictor variable names, specified as the comma-separated pair consisting of `'PredictorNames'` and a cell array of character vectors containing the names for the predictor variables, in the order in which they appear in `x`.

Prior probabilities for each class, specified as the comma-separated pair consisting of `'Prior'` and one of the following.

• `'empirical'` determines class probabilities from class frequencies in `y`. If you pass observation weights, they are used to compute the class probabilities.

• `'uniform'` sets all class probabilities equal.

• A vector (one scalar value for each class).

• A structure `S` with two fields:

• `S.ClassNames` containing the class names as a variable of the same type as `y`

• `S.ClassProbs` containing a vector of corresponding probabilities

If you set values for both `Weights` and `Prior`, the weights are renormalized to add up to the value of the prior probability in the respective class.

Example: `'Prior','uniform'`
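
A sketch using the structure form on Fisher's iris data (the probability values here are arbitrary):

```
S.ClassNames = {'setosa','versicolor','virginica'};
S.ClassProbs = [0.5 0.25 0.25];   % must correspond to ClassNames order
load fisheriris
tree = ClassificationTree.fit(meas, species, 'Prior', S);
```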

Pruning flag, specified as the comma-separated pair consisting of `'Prune'` and either `'on'` or `'off'`. When `'on'`, `ClassificationTree.fit` grows the classification tree and computes the optimal sequence of pruned subtrees. When `'off'`, `ClassificationTree.fit` grows the classification tree without pruning.

Example: `'Prune','off'`

Pruning criterion, specified as the comma-separated pair consisting of `'PruneCriterion'` and either `'error'` or `'impurity'`.

Example: `'PruneCriterion','impurity'`

Response variable name, specified as the comma-separated pair consisting of `'ResponseName'` and a character vector representing the name of the response variable `y`.

Example: `'ResponseName','Response'`

Score transform function, specified as the comma-separated pair consisting of `'ScoreTransform'` and a function handle for transforming scores. Your function should accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).

Alternatively, you can specify one of the following values representing a built-in transformation function.

| Value | Formula |
| --- | --- |
| `'doublelogit'` | $1/(1+e^{-2x})$ |
| `'invlogit'` | $\log(x/(1-x))$ |
| `'ismax'` | Set the score for the class with the largest score to `1`, and scores for all other classes to `0` |
| `'logit'` | $1/(1+e^{-x})$ |
| `'none'` or `'identity'` | $x$ (no transformation) |
| `'sign'` | $-1$ for $x<0$; $0$ for $x=0$; $1$ for $x>0$ |
| `'symmetric'` | $2x-1$ |
| `'symmetriclogit'` | $2/(1+e^{-x})-1$ |
| `'symmetricismax'` | Set the score for the class with the largest score to `1`, and scores for all other classes to `-1` |

Example: `'ScoreTransform','logit'`
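
A sketch with a hypothetical custom handle; any function that maps a score matrix to a same-size matrix is acceptable:

```
% Hypothetical transform: rescale each row of scores to sum to 1
rowNormalize = @(s) bsxfun(@rdivide, s, sum(s,2));
load ionosphere
tree = ClassificationTree.fit(X, Y, 'ScoreTransform', rowNormalize);
[label, score] = predict(tree, X(1,:));   % score is transformed
```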

Split criterion, specified as the comma-separated pair consisting of `'SplitCriterion'` and `'gdi'` (Gini's diversity index), `'twoing'` (twoing rule), or `'deviance'` (maximum deviance reduction, also known as cross entropy).

Example: `'SplitCriterion','deviance'`

Surrogate decision splits flag, specified as the comma-separated pair consisting of `'Surrogate'` and `'on'`, `'off'`, `'all'`, or a positive integer value.

• When set to `'on'`, `ClassificationTree.fit` finds at most 10 surrogate splits at each branch node.

• When set to a positive integer value, `ClassificationTree.fit` finds at most the specified number of surrogate splits at each branch node.

• When set to `'all'`, `ClassificationTree.fit` finds all surrogate splits at each branch node. The `'all'` setting can consume considerable time and memory.

Use surrogate splits to improve the accuracy of predictions for data with missing values. The setting also enables you to compute measures of predictive association between predictors.

Example: `'Surrogate','on'`
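
For instance, a sketch showing that surrogate splits let `predict` classify an observation with missing predictor values:

```
load fisheriris
tree = ClassificationTree.fit(meas, species, 'Surrogate', 'on');
label = predict(tree, [NaN 3.0 5.1 NaN])   % routed via surrogate splits
```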

Vector of observation weights, specified as the comma-separated pair consisting of `'Weights'` and a vector of scalar values. The length of `Weights` is equal to the number of rows in `x`. `ClassificationTree.fit` normalizes the weights in each class to add up to the value of the prior probability of the class.

Data Types: `single` | `double`
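
A sketch that doubles the weight of one class (hypothetical weighting on Fisher's iris data):

```
load fisheriris
w = ones(150,1);
w(strcmp(species,'virginica')) = 2;   % emphasize one class
tree = ClassificationTree.fit(meas, species, 'Weights', w);
```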

## Output Arguments


`tree` — Classification tree, returned as a classification tree object.

Note that using the `'CrossVal'`, `'KFold'`, `'Holdout'`, `'Leaveout'`, or `'CVPartition'` options results in a tree of class `ClassificationPartitionedModel`. You cannot use a partitioned tree for prediction, so this kind of tree does not have a `predict` method. Instead, use `kfoldPredict` to predict responses for observations not used for training.

Otherwise, `tree` is of class `ClassificationTree`, and you can use the `predict` method to make predictions.

## Definitions

### Impurity and Node Error

`ClassificationTree` splits nodes based on either impurity or node error.

Impurity means one of several things, depending on your choice of the `SplitCriterion` name-value pair argument:

• Gini's diversity index (`'gdi'`) — The Gini index of a node is

$1-\sum_i p^2(i),$

where the sum is over the classes i at the node, and p(i) is the observed fraction of observations of class i that reach the node. A node with just one class (a pure node) has Gini index `0`; otherwise the Gini index is positive. So the Gini index is a measure of node impurity.

• Deviance (`'deviance'`) — With p(i) defined the same as for the Gini index, the deviance of a node is

$-\sum_i p(i)\log p(i).$

A pure node has deviance `0`; otherwise, the deviance is positive.

• Twoing rule (`'twoing'`) — Twoing is not a purity measure of a node, but is a different measure for deciding how to split a node. Let L(i) denote the fraction of members of class i in the left child node after a split, and R(i) denote the fraction of members of class i in the right child node after a split. Choose the split criterion to maximize

$P(L)\,P(R)\left(\sum_i |L(i)-R(i)|\right)^2,$

where P(L) and P(R) are the fractions of observations that split to the left and right, respectively. If the expression is large, the split made each child node purer. Similarly, if the expression is small, the split made the child nodes similar to each other, and hence similar to the parent node, so the split did not increase node purity.

• Node error — The node error is the fraction of misclassified observations at a node. If j is the class with the largest number of training samples at a node, the node error is

$1-p(j).$
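
As a numeric check, here is a minimal sketch computing all three measures for a hypothetical node with class fractions 0.7, 0.2, and 0.1:

```
p = [0.7 0.2 0.1];             % hypothetical class fractions at a node
gini     = 1 - sum(p.^2)       % Gini's diversity index
deviance = -sum(p .* log(p))   % cross entropy (deviance)
nodeErr  = 1 - max(p)          % node error, 1 - p(j)
```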

## Examples


Construct a classification tree for the data in `ionosphere.mat`.

```
load ionosphere
tc = ClassificationTree.fit(X,Y)
```

```
tc = 

  ClassificationTree
           PredictorNames: {1x34 cell}
             ResponseName: 'Y'
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
    CategoricalPredictors: []
          NumObservations: 351
```
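
A possible follow-up (not part of the original example) is to estimate the resubstitution error and predict on a few training rows:

```
resubLoss(tc)                   % fraction of training data misclassified
label = predict(tc, X(1:5,:))   % predicted labels for the first five rows
```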

