fitctree

Fit classification tree

Syntax

tree = fitctree(x,y)
tree = fitctree(x,y,Name,Value)

Description

tree = fitctree(x,y) returns a classification tree based on the input variables (also known as predictors, features, or attributes) x and output (response) y. The returned tree is a binary tree, where each branching node is split based on the values of a column of x.

tree = fitctree(x,y,Name,Value) fits a tree with additional options specified by one or more name-value pair arguments. For example, you can specify the algorithm used to find the best split on a categorical predictor, grow a cross-validated tree, or hold out a fraction of the input data for validation.

Examples

Construct a Classification Tree

Construct a classification tree using the data in ionosphere.mat.

load ionosphere
tc = fitctree(X,Y)
tc = 

  ClassificationTree
           PredictorNames: {1x34 cell}
             ResponseName: 'Y'
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
    CategoricalPredictors: []
          NumObservations: 351


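Grow a Cross-Validated Classification Tree

As a further illustration (a minimal sketch, not part of the original example set), grow a 10-fold cross-validated tree on the ionosphere data and estimate its out-of-fold misclassification rate with kfoldLoss.

load ionosphere
% Grow a tree cross-validated with the default 10 folds
cvtree = fitctree(X,Y,'CrossVal','on');
% Estimate the out-of-fold misclassification rate
L = kfoldLoss(cvtree)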

Input Arguments

x — Predictor values
matrix of floating-point values

Predictor values, specified as a matrix of floating-point values.

fitctree considers NaN values in x as missing values. fitctree does not use observations with all missing values for x in the fit. fitctree uses observations with some missing values for x to find splits on variables for which these observations have valid values.

Data Types: single | double

y — Class labels
numeric vector | categorical vector | logical vector | character array | cell array of strings

Class labels, specified as a numeric vector, categorical vector, logical vector, character array, or cell array of strings.

Each row of y represents the classification of the corresponding row of x. For numeric y, consider using fitrtree instead. fitctree considers NaN values in y to be missing values.

fitctree does not use observations with missing values for y in the fit.

Data Types: single | double | char | logical | cell

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'CrossVal','on','MinLeaf',40 specifies a cross-validated classification tree with a minimum of 40 observations per leaf.

'AlgorithmForCategorical' — Algorithm for best categorical predictor split
'Exact' | 'PullLeft' | 'PCA' | 'OVAbyClass'

Algorithm to find the best split on a categorical predictor with C categories for data and K ≥ 3 classes, specified as the comma-separated pair consisting of 'AlgorithmForCategorical' and one of the following.

'Exact' — Consider all 2^(C–1) – 1 combinations.

'PullLeft' — Start with all C categories on the right branch. Consider moving each category to the left branch as it achieves the minimum impurity for the K classes among the remaining categories. From this sequence, choose the split that has the lowest impurity.

'PCA' — Compute a score for each category using the inner product between the first principal component of a weighted covariance matrix (of the centered class probability matrix) and the vector of class probabilities for that category. Sort the scores in ascending order, and consider all C – 1 splits.

'OVAbyClass' — Start with all C categories on the right branch. For each class, order the categories based on their probability for that class. For the first class, consider moving each category to the left branch in order, recording the impurity criterion at each move. Repeat for the remaining classes. From this sequence, choose the split that has the minimum impurity.

fitctree automatically selects the optimal subset of algorithms for each split using the known number of classes and levels of a categorical predictor. For K = 2 classes, fitctree always performs the exact search. Use the 'AlgorithmForCategorical' name-value pair argument to specify a particular algorithm.

Example: 'AlgorithmForCategorical','PCA'
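
For instance, the following minimal sketch (using synthetic data invented for illustration) forces the 'PullLeft' search on a single categorical predictor with 20 levels and three classes:

rng(1)                                    % for reproducibility
x = randi(20,300,1);                      % one categorical predictor with 20 levels
y = randi(3,300,1);                       % three classes, so K >= 3
tc = fitctree(x,y,'CategoricalPredictors',1, ...
    'AlgorithmForCategorical','PullLeft');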

'CategoricalPredictors' — Categorical predictors list
numeric or logical vector | cell array of strings | character matrix | 'all'

Categorical predictors list, specified as the comma-separated pair consisting of 'CategoricalPredictors' and one of the following:

  • A numeric vector with indices from 1 through p, where p is the number of columns of x.

  • A logical vector of length p, where a true entry means that the corresponding column of x is a categorical variable.

  • A cell array of strings, where each element in the array is the name of a predictor variable. The names must match entries in PredictorNames values.

  • A character matrix, where each row of the matrix is a name of a predictor variable. The names must match entries in PredictorNames values. Pad the names with extra blanks so each row of the character matrix has the same length.

  • 'all', meaning all predictors are categorical.

Example: 'CategoricalPredictors','all'

Data Types: single | double | char
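
For instance, a minimal sketch using the carsmall sample data set (assuming its Cylinders, Model_Year, Weight, and Origin variables) flags the first two columns as categorical:

load carsmall
x = [Cylinders Model_Year Weight];
% Treat Cylinders and Model_Year as categorical; Weight stays continuous
tc = fitctree(x,Origin,'CategoricalPredictors',[1 2], ...
    'PredictorNames',{'Cylinders','Model_Year','Weight'});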

'ClassNames' — Class names
numeric vector | categorical vector | logical vector | character array | cell array of strings

Class names, specified as the comma-separated pair consisting of 'ClassNames' and an array representing the class names. Use the same data type as the values that exist in y.

Use ClassNames to order the classes or to select a subset of classes for training. The default is the class names that exist in y.

Data Types: single | double | char | logical | cell

'Cost' — Cost of misclassification
square matrix | structure

Cost of misclassification of a point, specified as the comma-separated pair consisting of 'Cost' and one of the following:

  • Square matrix, where Cost(i,j) is the cost of classifying a point into class j if its true class is i.

  • Structure S having two fields: S.ClassNames containing the group names as a variable of the same data type as y, and S.ClassificationCosts containing the cost matrix.

The default is Cost(i,j)=1 if i~=j, and Cost(i,j)=0 if i=j.

Data Types: single | double | struct
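
For example, a minimal sketch on the ionosphere data (the cost values here are hypothetical) uses the structure form to impose an asymmetric penalty:

load ionosphere
S.ClassNames = {'b','g'};
S.ClassificationCosts = [0 2; 1 0];   % hypothetical: a true 'b' classified as 'g' costs 2
tc = fitctree(X,Y,'Cost',S);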

'CrossVal' — Flag to grow cross-validated decision tree
'off' (default) | 'on'

Flag to grow a cross-validated decision tree, specified as the comma-separated pair consisting of 'CrossVal' and either 'on' or 'off'.

If 'on', fitctree grows a cross-validated decision tree with 10 folds. You can override this cross-validation setting using one of the 'KFold', 'Holdout', 'Leaveout', or 'CVPartition' name-value pair arguments. Note that you can only use one of these four arguments at a time when creating a cross-validated tree.

Alternatively, you can cross-validate tree later using the crossval method.

Example: 'CrossVal','on'

'CVPartition' — Partition for cross-validated tree
cvpartition object

Partition to use in a cross-validated tree, specified as the comma-separated pair consisting of 'CVPartition' and an object created using cvpartition.

If you use 'CVPartition', you cannot use any of the 'KFold', 'Holdout', or 'Leaveout' name-value pair arguments.

'Holdout' — Fraction of data for holdout validation
0 (default) | scalar value in the range [0,1]

Fraction of data used for holdout validation, specified as the comma-separated pair consisting of 'Holdout' and a scalar value in the range [0,1]. Holdout validation tests the specified fraction of the data, and uses the rest of the data for training.

If you use 'Holdout', you cannot use any of the 'CVPartition', 'KFold', or 'Leaveout' name-value pair arguments.

Example: 'Holdout',0.1

Data Types: single | double

'KFold' — Number of folds
10 (default) | positive integer value

Number of folds to use in a cross-validated tree, specified as the comma-separated pair consisting of 'KFold' and a positive integer value.

If you use 'KFold', you cannot use any of the 'CVPartition', 'Holdout', or 'Leaveout' name-value pair arguments.

Example: 'KFold',8

Data Types: single | double

'Leaveout' — Leave-one-out cross-validation flag
'off' (default) | 'on'

Leave-one-out cross-validation flag, specified as the comma-separated pair consisting of 'Leaveout' and either 'on' or 'off'. Specify 'on' to use leave-one-out cross-validation.

If you use 'Leaveout', you cannot use any of the 'CVPartition', 'Holdout', or 'KFold' name-value pair arguments.

Example: 'Leaveout','on'

'MaxCat' — Maximum category levels
10 (default) | nonnegative scalar value

Maximum category levels, specified as the comma-separated pair consisting of 'MaxCat' and a nonnegative scalar value. fitctree splits a categorical predictor using the exact search algorithm if the predictor has at most MaxCat levels in the split node. Otherwise, fitctree finds the best categorical split using one of the inexact algorithms.

Passing a small value can lead to a loss of accuracy, and passing a large value can increase computation time and memory usage.

Example: 'MaxCat',8

'MergeLeaves' — Leaf merge flag
'on' (default) | 'off'

Leaf merge flag, specified as the comma-separated pair consisting of 'MergeLeaves' and either 'on' or 'off'. If you specify 'on', fitctree merges leaves that originate from the same parent node and whose risk values sum to a value greater than or equal to the risk associated with the parent node. If you specify 'off', fitctree does not merge leaves.

Example: 'MergeLeaves','off'

'MinLeaf' — Minimum number of leaf node observations
1 (default) | positive integer value

Minimum number of leaf node observations, specified as the comma-separated pair consisting of 'MinLeaf' and a positive integer value. Each leaf has at least MinLeaf observations. If you supply both MinParent and MinLeaf, fitctree uses the setting that gives larger leaves: MinParent=max(MinParent,2*MinLeaf).

Example: 'MinLeaf',3

Data Types: single | double

'MinParent' — Minimum number of branch node observations
10 (default) | positive integer value

Minimum number of branch node observations, specified as the comma-separated pair consisting of 'MinParent' and a positive integer value. Each branch node in the tree has at least MinParent observations. If you supply both MinParent and MinLeaf, fitctree uses the setting that gives larger leaves: MinParent=max(MinParent,2*MinLeaf).

Example: 'MinParent',8

Data Types: single | double
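
To see how MinParent and MinLeaf interact, the following minimal sketch (on the ionosphere data) grows a deliberately coarse tree; view is the ClassificationTree display method:

load ionosphere
tcDefault = fitctree(X,Y);                % MinLeaf = 1, MinParent = 10
tcCoarse  = fitctree(X,Y,'MinLeaf',40);   % forces MinParent = max(10,2*40) = 80
view(tcCoarse,'Mode','graph')             % the coarse tree has fewer, larger leaves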

'NVarToSample' — Number of predictors for split
'all' | positive integer value

Number of predictors to select at random for each split, specified as the comma-separated pair consisting of 'NVarToSample' and a positive integer value. You can also specify 'all' to use all available predictors.

Example: 'NVarToSample',3

Data Types: single | double

'PredictorNames' — Predictor variable names
{'x1','x2',...} (default) | cell array of strings

Predictor variable names, specified as the comma-separated pair consisting of 'PredictorNames' and a cell array of strings containing the names for the predictor variables, in the order in which they appear in x.

'Prior' — Prior probabilities
'empirical' (default) | 'uniform' | vector of scalar values | structure

Prior probabilities for each class, specified as the comma-separated pair consisting of 'Prior' and one of the following.

  • A string:

    • 'empirical' determines class probabilities from class frequencies in y. If you pass observation weights, fitctree uses the weights to compute the class probabilities.

    • 'uniform' sets all class probabilities equal.

  • A vector (one scalar value for each class)

  • A structure S with two fields:

    • S.ClassNames containing the class names as a variable of the same type as y

    • S.ClassProbs containing a vector of corresponding probabilities

If you set values for both weights and prior, the weights are renormalized to add up to the value of the prior probability in the respective class.

Example: 'Prior','uniform'
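
For example, a minimal sketch on the ionosphere data (the probability values here are hypothetical) sets class-specific priors with the structure form:

load ionosphere
S.ClassNames = {'b','g'};
S.ClassProbs = [0.4 0.6];             % hypothetical prior probabilities
tc = fitctree(X,Y,'Prior',S);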

'Prune' — Pruning flag
'on' (default) | 'off'

Pruning flag, specified as the comma-separated pair consisting of 'Prune' and either 'on' or 'off'. When 'on', fitctree grows the classification tree and computes the optimal sequence of pruned subtrees. When 'off', fitctree grows the classification tree without pruning.

Example: 'Prune','off'

'PruneCriterion' — Pruning criterion
'error' (default) | 'impurity'

Pruning criterion, specified as the comma-separated pair consisting of 'PruneCriterion' and either 'error' or 'impurity'.

Example: 'PruneCriterion','impurity'

'ResponseName' — Response variable name
'Y' (default) | string

Response variable name, specified as the comma-separated pair consisting of 'ResponseName' and a string representing the name of the response variable y.

Example: 'ResponseName','Response'

'ScoreTransform' — Score transform function
'none' | 'symmetric' | 'invlogit' | 'ismax' | function handle | ...

Score transform function, specified as the comma-separated pair consisting of 'ScoreTransform' and a function handle for transforming scores. Your function should accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).

Alternatively, you can specify one of the following strings representing a built-in transformation function.

'doublelogit' — 1/(1 + e^(–2x))

'invlogit' — log(x / (1 – x))

'ismax' — Set the score for the class with the largest score to 1, and set the scores for all other classes to 0.

'logit' — 1/(1 + e^(–x))

'none' — x (no transformation)

'sign' — –1 for x < 0; 0 for x = 0; 1 for x > 0

'symmetric' — 2x – 1

'symmetriclogit' — 2/(1 + e^(–x)) – 1

'symmetricismax' — Set the score for the class with the largest score to 1, and set the scores for all other classes to –1.

Example: 'ScoreTransform','logit'
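
A custom handle works the same way. The following minimal sketch uses an arbitrary element-wise transform (invented for illustration, not a built-in option):

load ionosphere
softsign = @(s) s ./ (1 + abs(s));    % maps raw scores into (-1,1)
tc = fitctree(X,Y,'ScoreTransform',softsign);
[label,score] = predict(tc,X(1:3,:)); % score is returned on the transformed scale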

'SplitCriterion' — Split criterion
'gdi' (default) | 'twoing' | 'deviance'

Split criterion, specified as the comma-separated pair consisting of 'SplitCriterion' and 'gdi' (Gini's diversity index), 'twoing' for the twoing rule, or 'deviance' for maximum deviance reduction (also known as cross entropy).

Example: 'SplitCriterion','deviance'

'Surrogate' — Surrogate decision splits flag
'off' | 'on' | 'all' | positive integer value

Surrogate decision splits flag, specified as the comma-separated pair consisting of 'Surrogate' and 'on', 'off', 'all', or a positive integer value.

  • When set to 'on', fitctree finds at most 10 surrogate splits at each branch node.

  • When set to 'all', fitctree finds all surrogate splits at each branch node. The 'all' setting can use considerable time and memory.

  • When set to a positive integer value, fitctree finds at most the specified number of surrogate splits at each branch node.

Use surrogate splits to improve the accuracy of predictions for data with missing values. The setting also lets you compute measures of predictive association between predictors.

Example: 'Surrogate','on'

'Weights' — Observation weights
ones(size(x,1),1) (default) | vector of scalar values

Vector of observation weights, specified as the comma-separated pair consisting of 'Weights' and a vector of scalar values. The length of Weights equals the number of rows in x. fitctree normalizes the weights in each class to add up to the value of the prior probability of the class.

Data Types: single | double
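
For instance, a minimal sketch on the ionosphere data (the weighting scheme here is hypothetical) emphasizes class 'b' by doubling the weight of its observations:

load ionosphere
w = ones(size(X,1),1);
w(strcmp(Y,'b')) = 2;                 % hypothetical: double the weight of class 'b'
tc = fitctree(X,Y,'Weights',w);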

Output Arguments

tree — Classification tree
classification tree object

Classification tree, returned as a classification tree object.

Using the 'CrossVal', 'KFold', 'Holdout', 'Leaveout', or 'CVPartition' options results in a tree of class ClassificationPartitionedModel. You cannot use a partitioned tree for prediction, so this kind of tree does not have a predict method. Instead, use kfoldPredict to predict responses for observations not used for training.

Otherwise, tree is of class ClassificationTree, and you can use the predict method to make predictions.
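
The following minimal sketch contrasts the two cases on the ionosphere data, using the predict and kfoldPredict methods described above:

load ionosphere
tc = fitctree(X,Y);                       % ClassificationTree
label = predict(tc,X(1:5,:));             % a regular tree supports predict
cvtc = fitctree(X,Y,'CrossVal','on');     % ClassificationPartitionedModel
cvLabel = kfoldPredict(cvtc);             % out-of-fold predictions for every observation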

More About

Impurity and Node Error

ClassificationTree splits nodes based on either impurity or node error. Impurity means one of several things, depending on your choice of the SplitCriterion name-value pair argument:

  • Gini's Diversity Index (gdi) — The Gini index of a node is

    1 – Σi p(i)^2,

    where the sum is over the classes i at the node, and p(i) is the observed fraction of observations of class i that reach the node. A node with just one class (a pure node) has Gini index 0; otherwise the Gini index is positive. So the Gini index is a measure of node impurity.

  • Deviance ('deviance') — With p(i) defined the same as for the Gini index, the deviance of a node is

    –Σi p(i) log p(i).

    A pure node has deviance 0; otherwise, the deviance is positive.

  • Twoing rule ('twoing') — Twoing is not a purity measure of a node, but is a different measure for deciding how to split a node. Let L(i) denote the fraction of members of class i in the left child node after a split, and R(i) denote the fraction of members of class i in the right child node after a split. Choose the split criterion to maximize

    P(L) P(R) (Σi |L(i) – R(i)|)^2,

    where P(L) and P(R) are the fractions of observations that split to the left and right, respectively. If the expression is large, the split made each child node purer. Similarly, if the expression is small, the split made the child nodes similar to each other, and hence similar to the parent node, and the split did not increase node purity.

  • Node error — The node error is the fraction of misclassified observations at a node. If j is the class with the largest number of training samples at a node, the node error is

    1 – p(j).
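
As a quick numeric check of these definitions (using hypothetical class fractions), you can evaluate them directly:

p = [0.75 0.25];               % hypothetical class fractions p(i) at a node
gini      = 1 - sum(p.^2)      % Gini index: 0.375
deviance  = -sum(p.*log(p))    % deviance: about 0.5623
nodeError = 1 - max(p)         % node error: 0.25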
