fitctree

Fit binary classification decision tree for multiclass classification

Syntax

  • tree = fitctree(TBL,ResponseVarName)
  • tree = fitctree(TBL,formula)
  • tree = fitctree(TBL,Y)
  • tree = fitctree(X,Y)
  • tree = fitctree(___,Name,Value)

Description

tree = fitctree(TBL,ResponseVarName) returns a fitted binary classification decision tree based on the input variables (also known as predictors, features, or attributes) contained in the table TBL and output (response or labels) contained in ResponseVarName. The returned binary tree splits branching nodes based on the values of a column of TBL.

tree = fitctree(TBL,formula) returns a fitted binary classification decision tree based on the input variables contained in the table TBL. formula is a formula string that identifies the response and predictor variables in TBL used to fit tree. The returned binary tree splits branching nodes based on the values of a column of TBL.

tree = fitctree(TBL,Y) returns a fitted binary classification decision tree based on the input variables contained in the table TBL and output in vector Y. The returned binary tree splits branching nodes based on the values of a column of TBL.

tree = fitctree(X,Y) returns a fitted binary classification decision tree based on the input variables contained in matrix X and output Y. The returned binary tree splits branching nodes based on the values of a column of X.

tree = fitctree(___,Name,Value) fits a tree with additional options specified by one or more name-value pair arguments, using any of the previous syntaxes. For example, you can specify the algorithm used to find the best split on a categorical predictor, grow a cross-validated tree, or hold out a fraction of the input data for validation.

Examples

Grow a Classification Tree

Grow a classification tree using the ionosphere data set.

load ionosphere
tc = fitctree(X,Y)
tc = 

  ClassificationTree
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
          NumObservations: 351
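
To see how the trained tree classifies new data, pass observations to predict. This is a minimal follow-up sketch, assuming tc and the ionosphere matrix X from above are still in the workspace; the choice of the first five rows is arbitrary.

label = predict(tc,X(1:5,:))   % predicted class labels ('b' or 'g') for five observations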


Control Tree Depth

You can control the depth of the trees using the MaxNumSplits, MinLeafSize, or MinParentSize name-value pair parameters. fitctree grows deep decision trees by default. You can grow shallower trees to reduce model complexity or computation time.

Load the ionosphere data set.

load ionosphere

The default values of the tree depth controllers for growing classification trees are:

  • n - 1 for MaxNumSplits. n is the training sample size.

  • 1 for MinLeafSize.

  • 10 for MinParentSize.

These default values tend to grow deep trees for large training sample sizes.

Train a classification tree using the default values for tree depth control. Cross validate the model using 10-fold cross validation.

rng(1); % For reproducibility
MdlDefault = fitctree(X,Y,'CrossVal','on');

Draw a histogram of the number of imposed splits on the trees. Also, view one of the trees.

numBranches = @(x)sum(x.IsBranch);
mdlDefaultNumSplits = cellfun(numBranches, MdlDefault.Trained);

figure;
histogram(mdlDefaultNumSplits)

view(MdlDefault.Trained{1},'Mode','graph')

The average number of splits is around 15.

Suppose that you want a classification tree that is not as complex (deep) as the ones trained using the default number of splits. Train another classification tree, but set the maximum number of splits at 7, which is about half the mean number of splits from the default classification tree. Cross validate the model using 10-fold cross validation.

Mdl7 = fitctree(X,Y,'MaxNumSplits',7,'CrossVal','on');
view(Mdl7.Trained{1},'Mode','graph')

Compare the cross validation classification errors of the models.

classErrorDefault = kfoldLoss(MdlDefault)
classError7 = kfoldLoss(Mdl7)
classErrorDefault =

    0.1140


classError7 =

    0.1254

Mdl7 is much less complex and performs only slightly worse than MdlDefault.

Input Arguments

TBL — Sample data
table

Sample data used to train the model, specified as a table. Each row of TBL corresponds to one observation, and each column corresponds to one predictor variable. Optionally, TBL can contain one additional column for the response variable. Multi-column variables and cell arrays other than cell arrays of strings are not allowed.

If TBL contains the response variable, and you want to use all remaining variables in TBL as predictors, then specify the response variable using ResponseVarName.

If TBL contains the response variable, and you want to use only a subset of the remaining variables in TBL as predictors, then specify a formula string using formula.

If TBL does not contain the response variable, then specify the response variable using Y. The length of the response variable and the number of rows of TBL must be equal.

Data Types: table
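
For illustration, here is one hedged way to assemble such a table and train with the ResponseVarName syntax, using the fisheriris data set (the variable names below are chosen for this sketch, not required by fitctree):

load fisheriris
TBL = array2table(meas,'VariableNames',{'SL','SW','PL','PW'});  % four numeric predictors
TBL.Species = species;                                          % store the response in the table
treeTbl = fitctree(TBL,'Species');                              % all other table variables are predictors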

X — Predictor values
matrix of floating-point values

Predictor values, specified as a matrix of floating-point values.

fitctree considers NaN values in X as missing values. fitctree does not use observations with all missing values for X in the fit. fitctree uses observations with some missing values for X to find splits on variables for which these observations have valid values.

Data Types: single | double

ResponseVarName — Response variable name
name of a variable in TBL

Response variable name, specified as the name of a variable in TBL.

You must specify ResponseVarName as a string. For example, if the response variable Y is stored as TBL.Y, then specify it as 'Y'. Otherwise, the software treats all columns of TBL, including Y, as predictors when training the model.

The response variable must be a categorical or character array, logical or numeric vector, or cell array of strings. If Y is a character array, then each element must correspond to one row of the array.

It is good practice to specify the order of the classes using the ClassNames name-value pair argument.

formula — Response and predictor variables to use in model training
string in the form of 'Y~X1+X2+X3'

Response and predictor variables to use in model training, specified as a string in the form of 'Y~X1+X2+X3'. In this form, Y represents the response variable, and X1, X2, and X3 represent the predictor variables.

To specify a subset of variables in TBL as predictors for training the model, use a formula string. If you specify a formula string, then any variables in TBL that do not appear in formula are not used to train the model.
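
Continuing the hypothetical fisheriris table from the earlier sketch, a formula restricts training to the named predictors only:

treeFormula = fitctree(TBL,'Species~SL+SW');   % PL and PW are ignored during training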

Y — Class labels
numeric vector | categorical vector | logical vector | character array | cell array of strings

Class labels, specified as a numeric vector, categorical vector, logical vector, character array, or cell array of strings. Each row of Y represents the classification of the corresponding row of X.

When fitting the tree, fitctree considers NaN, '' (empty string), and <undefined> values in Y to be missing values. fitctree does not use observations with missing values for Y in the fit.

For numeric Y, consider fitting a regression tree using fitrtree instead.

Data Types: single | double | char | logical | cell

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'CrossVal','on','MinLeafSize',40 specifies a cross-validated classification tree with a minimum of 40 observations per leaf.

'AlgorithmForCategorical' — Algorithm for best categorical predictor split
'Exact' | 'PullLeft' | 'PCA' | 'OVAbyClass'

Algorithm to find the best split on a categorical predictor with C categories for data and K ≥ 3 classes, specified as the comma-separated pair consisting of 'AlgorithmForCategorical' and one of the following.

  • 'Exact' — Consider all 2^(C–1) – 1 combinations.

  • 'PullLeft' — Start with all C categories on the right branch. Consider moving each category to the left branch as it achieves the minimum impurity for the K classes among the remaining categories. From this sequence, choose the split that has the lowest impurity.

  • 'PCA' — Compute a score for each category using the inner product between the first principal component of a weighted covariance matrix (of the centered class probability matrix) and the vector of class probabilities for that category. Sort the scores in ascending order, and consider all C – 1 splits.

  • 'OVAbyClass' — Start with all C categories on the right branch. For each class, order the categories based on their probability for that class. For the first class, consider moving each category to the left branch in order, recording the impurity criterion at each move. Repeat for the remaining classes. From this sequence, choose the split that has the minimum impurity.

fitctree automatically selects the optimal subset of algorithms for each split using the known number of classes and levels of a categorical predictor. For K = 2 classes, fitctree always performs the exact search. To specify a particular algorithm, use the 'AlgorithmForCategorical' name-value pair argument.

Example: 'AlgorithmForCategorical','PCA'

'CategoricalPredictors' — Categorical predictors list
numeric or logical vector | cell array of strings | character matrix | 'all'

Categorical predictors list, specified as the comma-separated pair consisting of 'CategoricalPredictors' and one of the following:

  • A numeric vector with indices from 1 through p, where p is the number of columns of X.

  • A logical vector of length p, where a true entry means that the corresponding column of X is a categorical variable.

  • A cell array of strings, where each element in the array is the name of a predictor variable. The names must match entries in PredictorNames values.

  • A character matrix, where each row of the matrix is a name of a predictor variable. The names must match entries in PredictorNames values. Pad the names with extra blanks so each row of the character matrix has the same length.

  • 'all', meaning all predictors are categorical.

By default, if the predictor data is in a matrix (X), the software assumes that none of the predictors are categorical. If the predictor data is in a table (TBL), the software assumes that a variable is categorical if it contains logical values, values of the unordered data type categorical, or a cell array of strings.

Example: 'CategoricalPredictors','all'

Data Types: single | double | char

'ClassNames' — Names of classes to use for training
categorical array | cell array of strings | character array | logical vector | numeric vector

Names of classes to use for training, specified as the comma-separated pair consisting of 'ClassNames' and a categorical or character array, logical or numeric vector, or cell array of strings. ClassNames must be the same data type as Y.

If ClassNames is a character array, then each element must correspond to one row of the array.

Use ClassNames to:

  • Order the classes during training.

  • Specify the order of any input or output argument dimension that corresponds to the class order. For example, use ClassNames to specify the order of the dimensions of Cost or the column order of classification scores returned by predict.

  • Select a subset of classes for training. For example, suppose that the set of all distinct class names in Y is {'a','b','c'}. To train the model using observations from classes 'a' and 'c' only, specify 'ClassNames',{'a','c'}.

The default is the set of all distinct class names in Y.

Example: 'ClassNames',{'b','g'}

Data Types: categorical | char | logical | single | double | cell

'Cost' — Cost of misclassification
square matrix | structure

Cost of misclassification of a point, specified as the comma-separated pair consisting of 'Cost' and one of the following:

  • Square matrix, where Cost(i,j) is the cost of classifying a point into class j if its true class is i (i.e., the rows correspond to the true class and the columns correspond to the predicted class). To specify the class order for the corresponding rows and columns of Cost, also specify the ClassNames name-value pair argument.

  • Structure S having two fields: S.ClassNames containing the group names as a variable of the same data type as Y, and S.ClassificationCosts containing the cost matrix.

The default is Cost(i,j)=1 if i~=j, and Cost(i,j)=0 if i=j.

Data Types: single | double | struct
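
As a hedged sketch with the ionosphere classes, the following cost matrix makes misclassifying a true 'g' observation twice as costly as misclassifying a true 'b' observation (the 2:1 ratio is arbitrary):

load ionosphere
C = [0 1; 2 0];                                            % rows are true classes, columns are predicted classes
MdlCost = fitctree(X,Y,'ClassNames',{'b','g'},'Cost',C);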

'CrossVal' — Flag to grow cross-validated decision tree
'off' (default) | 'on'

Flag to grow a cross-validated decision tree, specified as the comma-separated pair consisting of 'CrossVal' and 'on' or 'off'.

If 'on', fitctree grows a cross-validated decision tree with 10 folds. You can override this cross-validation setting using one of the 'KFold', 'Holdout', 'Leaveout', or 'CVPartition' name-value pair arguments. You can only use one of these four arguments at a time when creating a cross-validated tree.

Alternatively, cross validate tree later using the crossval method.

Example: 'CrossVal','on'

'CVPartition' — Partition for cross-validated tree
cvpartition object

Partition to use in a cross-validated tree, specified as the comma-separated pair consisting of 'CVPartition' and an object created using cvpartition.

If you use 'CVPartition', you cannot use any of the 'KFold', 'Holdout', or 'Leaveout' name-value pair arguments.
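
A minimal sketch, assuming X and Y from the ionosphere data set are in the workspace:

cvp = cvpartition(Y,'KFold',5);            % 5-fold partition, stratified by class
MdlCV = fitctree(X,Y,'CVPartition',cvp);   % returns a ClassificationPartitionedModel
cvLoss = kfoldLoss(MdlCV);                 % cross-validation classification error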

'Holdout' — Fraction of data for holdout validation
0 (default) | scalar value in the range [0,1]

Fraction of data used for holdout validation, specified as the comma-separated pair consisting of 'Holdout' and a scalar value in the range [0,1]. Holdout validation tests the specified fraction of the data, and uses the rest of the data for training.

If you use 'Holdout', you cannot use any of the 'CVPartition', 'KFold', or 'Leaveout' name-value pair arguments.

Example: 'Holdout',0.1

Data Types: single | double

'KFold' — Number of folds
10 (default) | positive integer value greater than 1

Number of folds to use in a cross-validated classifier, specified as the comma-separated pair consisting of 'KFold' and a positive integer value greater than 1. If you specify, e.g., 'KFold',k, then the software:

  1. Randomly partitions the data into k sets

  2. For each set, reserves the set as validation data, and trains the model using the other k – 1 sets

  3. Stores the k compact, trained models in the cells of a k-by-1 cell vector in the Trained property of the cross-validated model.

To create a cross-validated model, you can use one of these four options only: 'CVPartition', 'Holdout', 'KFold', or 'Leaveout'.

Example: 'KFold',8

Data Types: single | double

'Leaveout' — Leave-one-out cross-validation flag
'off' (default) | 'on'

Leave-one-out cross-validation flag, specified as the comma-separated pair consisting of 'Leaveout' and 'on' or 'off'. Specify 'on' to use leave-one-out cross-validation.

If you use 'Leaveout', you cannot use any of the 'CVPartition', 'Holdout', or 'KFold' name-value pair arguments.

Example: 'Leaveout','on'

'MaxNumCategories' — Maximum category levels
10 (default) | nonnegative scalar value

Maximum category levels, specified as the comma-separated pair consisting of 'MaxNumCategories' and a nonnegative scalar value. fitctree splits a categorical predictor using the exact search algorithm if the predictor has at most MaxNumCategories levels in the split node. Otherwise, fitctree finds the best categorical split using one of the inexact algorithms.

Passing a small value can lead to a loss of accuracy, and passing a large value can increase computation time and memory usage.

Example: 'MaxNumCategories',8

'MaxNumSplits' — Maximal number of decision splits
size(X,1) - 1 (default) | positive integer

Maximal number of decision splits (or branch nodes), specified as the comma-separated pair consisting of 'MaxNumSplits' and a positive integer. fitctree splits MaxNumSplits or fewer branch nodes. For more details on splitting behavior, see Algorithms.

Example: 'MaxNumSplits',5

Data Types: single | double

'MergeLeaves' — Leaf merge flag
'on' (default) | 'off'

Leaf merge flag, specified as the comma-separated pair consisting of 'MergeLeaves' and 'on' or 'off'.

If MergeLeaves is 'on', then fitctree:

  • Merges leaves that originate from the same parent node and whose risk values sum to a value greater than or equal to the risk associated with the parent node

  • Estimates the optimal sequence of pruned subtrees, but does not prune the classification tree

Otherwise, fitctree does not merge leaves.

Example: 'MergeLeaves','off'

'MinLeafSize' — Minimum number of leaf node observations
1 (default) | positive integer value

Minimum number of leaf node observations, specified as the comma-separated pair consisting of 'MinLeafSize' and a positive integer value. Each leaf of the tree has at least MinLeafSize observations. If you supply both MinParentSize and MinLeafSize, fitctree uses the setting that gives larger leaves: MinParentSize = max(MinParentSize,2*MinLeafSize).

Example: 'MinLeafSize',3

Data Types: single | double

'MinParentSize' — Minimum number of branch node observations
10 (default) | positive integer value

Minimum number of branch node observations, specified as the comma-separated pair consisting of 'MinParentSize' and a positive integer value. Each branch node in the tree has at least MinParentSize observations. If you supply both MinParentSize and MinLeafSize, fitctree uses the setting that gives larger leaves: MinParentSize = max(MinParentSize,2*MinLeafSize).

Example: 'MinParentSize',8

Data Types: single | double
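
To see how the two parameters interact, consider this sketch: with 'MinLeafSize',6 and 'MinParentSize',8, the effective minimum parent size becomes max(8,2*6) = 12 (assuming X and Y from ionosphere are loaded).

MdlLeaf = fitctree(X,Y,'MinLeafSize',6,'MinParentSize',8);   % branch nodes need at least 12 observations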

'NumVariablesToSample' — Number of predictors to select at random for each split
'all' | positive integer value

Number of predictors to select at random for each split, specified as the comma-separated pair consisting of 'NumVariablesToSample' and a positive integer value. You can also specify 'all' to use all available predictors.

Example: 'NumVariablesToSample',3

Data Types: single | double

'PredictorNames' — Predictor variable names
{'x1','x2',...} (default) | cell array of strings

Predictor variable names, specified as the comma-separated pair consisting of 'PredictorNames' and a cell array of strings containing the names for the predictor variables, in the order in which they appear in X or TBL.

If you specify the predictors as a table (TBL), PredictorNames must be a subset of the variable names in TBL. In this case, the software uses only the variables in PredictorNames to fit the model. If you use formula to specify the model, then you cannot use the PredictorNames name-value pair.

Example: 'PredictorNames',{'PetalWidth','PetalLength'}

'Prior' — Prior probabilities
'empirical' (default) | 'uniform' | vector of scalar values | structure

Prior probabilities for each class, specified as the comma-separated pair consisting of 'Prior' and one of the following.

  • A string:

    • 'empirical' determines class probabilities from class frequencies in Y. If you pass observation weights, fitctree uses the weights to compute the class probabilities.

    • 'uniform' sets all class probabilities equal.

  • A vector (one scalar value for each class). To specify the class order for the corresponding elements of Prior, also specify the ClassNames name-value pair argument.

  • A structure S with two fields:

    • S.ClassNames containing the class names as a variable of the same type as Y

    • S.ClassProbs containing a vector of corresponding probabilities

If you set values for both 'Weights' and 'Prior', the weights are renormalized to add up to the value of the prior probability in the respective class.

Example: 'Prior','uniform'
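
As another hedged sketch with the ionosphere classes, supply a prior vector aligned with ClassNames (the 0.4/0.6 split is arbitrary):

MdlPrior = fitctree(X,Y,'ClassNames',{'b','g'},'Prior',[0.4 0.6]);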

'Prune' — Flag to estimate optimal sequence of pruned subtrees
'on' (default) | 'off'

Flag to estimate the optimal sequence of pruned subtrees, specified as the comma-separated pair consisting of 'Prune' and 'on' or 'off'.

If Prune is 'on', then fitctree grows the classification tree without pruning it, but estimates the optimal sequence of pruned subtrees. Otherwise, fitctree grows the classification tree without estimating the optimal sequence of pruned subtrees.

To prune a trained ClassificationTree model, pass it to prune.

Example: 'Prune','off'

'PruneCriterion' — Pruning criterion
'error' (default) | 'impurity'

Pruning criterion, specified as the comma-separated pair consisting of 'PruneCriterion' and 'error' or 'impurity'.

Example: 'PruneCriterion','impurity'

'ResponseName' — Response variable name
'Y' (default) | string

Response variable name, specified as the comma-separated pair consisting of 'ResponseName' and a string representing the name of the response variable.

This name-value pair is not valid when using the ResponseVarName or formula input arguments.

Example: 'ResponseName','IrisType'

'ScoreTransform' — Score transform function
'none' | 'symmetric' | 'invlogit' | 'ismax' | function handle | ...

Score transform function, specified as the comma-separated pair consisting of 'ScoreTransform' and a function handle for transforming scores. Your function must accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).

Alternatively, you can specify one of the following strings representing a built-in transformation function.

  • 'doublelogit' — 1/(1 + e^(–2x))
  • 'invlogit' — log(x / (1 – x))
  • 'ismax' — Set the score for the class with the largest score to 1, and scores for all other classes to 0.
  • 'logit' — 1/(1 + e^(–x))
  • 'none' or 'identity' — x (no transformation)
  • 'sign' — –1 for x < 0, 0 for x = 0, 1 for x > 0
  • 'symmetric' — 2x – 1
  • 'symmetriclogit' — 2/(1 + e^(–x)) – 1
  • 'symmetricismax' — Set the score for the class with the largest score to 1, and scores for all other classes to –1.

Example: 'ScoreTransform','logit'
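
As a sketch of the function-handle form, the anonymous function below maps each row of scores through a softmax. It only illustrates the required matrix-in, matrix-out contract and is not a recommended transform.

softmaxRows = @(s) bsxfun(@rdivide,exp(s),sum(exp(s),2));   % returns a matrix the same size as s
MdlST = fitctree(X,Y,'ScoreTransform',softmaxRows);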

'SplitCriterion' — Split criterion
'gdi' (default) | 'twoing' | 'deviance'

Split criterion, specified as the comma-separated pair consisting of 'SplitCriterion' and 'gdi' (Gini's diversity index), 'twoing' for the twoing rule, or 'deviance' for maximum deviance reduction (also known as cross entropy).

Example: 'SplitCriterion','deviance'

'Surrogate' — Surrogate decision splits flag
'off' | 'on' | 'all' | positive integer value

Surrogate decision splits flag, specified as the comma-separated pair consisting of 'Surrogate' and 'on', 'off', 'all', or a positive integer value.

  • When set to 'on', fitctree finds at most 10 surrogate splits at each branch node.

  • When set to 'all', fitctree finds all surrogate splits at each branch node. The 'all' setting can use considerable time and memory.

  • When set to a positive integer value, fitctree finds at most the specified number of surrogate splits at each branch node.

Use surrogate splits to improve the accuracy of predictions for data with missing values. The setting also lets you compute measures of predictive association between predictors. For more details, see Node Splitting Rules.

Example: 'Surrogate','on'

'Weights' — Observation weights
ones(size(x,1),1) (default) | vector of scalar values

Observation weights, specified as the comma-separated pair consisting of 'Weights' and a vector of scalar values. The software weights the observations in each row of X or TBL with the corresponding value in Weights. The size of Weights must equal the number of rows in X or TBL.

If you specify the input data as a table TBL, then Weights can be the name of a variable in TBL that contains a numeric vector. In this case, you must specify Weights as a variable name string. For example, if weights vector W is stored as TBL.W, then specify it as 'W'. Otherwise, the software treats all columns of TBL, including W, as predictors when training the model.

fitctree normalizes the weights in each class to add up to the value of the prior probability of the class.

Data Types: single | double

Output Arguments

tree — Classification tree
classification tree object

Classification tree, returned as a classification tree object.

Using the 'CrossVal', 'KFold', 'Holdout', 'Leaveout', or 'CVPartition' options results in a tree of class ClassificationPartitionedModel. You cannot use a partitioned tree for prediction, so this kind of tree does not have a predict method. Instead, use kfoldPredict to predict responses for observations not used for training.

Otherwise, tree is of class ClassificationTree, and you can use the predict method to make predictions.
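
A brief sketch of the two cases, assuming X and Y from the ionosphere data set:

Mdl   = fitctree(X,Y);                 % ClassificationTree; supports predict
MdlCV = fitctree(X,Y,'KFold',5);       % ClassificationPartitionedModel; no predict method
labels   = predict(Mdl,X);             % resubstitution predictions
labelsCV = kfoldPredict(MdlCV);        % out-of-fold predictions for the training observations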

More About

Impurity and Node Error

ClassificationTree splits nodes based on either impurity or node error.

Impurity means one of several things, depending on your choice of the SplitCriterion name-value pair argument:

  • Gini's Diversity Index (gdi) — The Gini index of a node is

    1 − Σi p²(i),

    where the sum is over the classes i at the node, and p(i) is the observed fraction of observations of class i that reach the node. A node with just one class (a pure node) has Gini index 0; otherwise the Gini index is positive. So the Gini index is a measure of node impurity.

  • Deviance ('deviance') — With p(i) defined the same as for the Gini index, the deviance of a node is

    −Σi p(i) log p(i).

    A pure node has deviance 0; otherwise, the deviance is positive.

  • Twoing rule ('twoing') — Twoing is not a purity measure of a node, but is a different measure for deciding how to split a node. Let L(i) denote the fraction of members of class i in the left child node after a split, and R(i) denote the fraction of members of class i in the right child node after a split. Choose the split criterion to maximize

    P(L) P(R) (Σi |L(i) − R(i)|)²,

    where P(L) and P(R) are the fractions of observations that split to the left and right respectively. If the expression is large, the split made each child node purer. Similarly, if the expression is small, the split made each child node similar to each other, and hence similar to the parent node, and so the split did not increase node purity.

  • Node error — The node error is the fraction of misclassified observations at a node. If j is the class with the largest number of training samples at a node, the node error is

    1 – p(j).
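
As a quick numeric illustration of these measures, consider a node whose observed class fractions are p = [0.8 0.2]:

p = [0.8 0.2];
gini    = 1 - sum(p.^2);       % Gini's diversity index = 0.32
dev     = -sum(p.*log(p));     % deviance (cross entropy), approximately 0.5004
nodeErr = 1 - max(p);          % node error = 0.20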

Predictive Measure of Association

The predictive measure of association is a value that indicates the similarity between decision rules that split observations. Among all possible decision splits that are compared to the optimal split (found by growing the tree), the best surrogate decision split yields the maximum predictive measure of association. The second-best surrogate split has the second-largest predictive measure of association.

Suppose xj and xk are predictor variables j and k, respectively, with j ≠ k. At node t, the predictive measure of association between the optimal split xj < u and a surrogate split xk < v is

λjk = [min(PL,PR) − (1 − PLjLk − PRjRk)] / min(PL,PR).

  • PL is the proportion of observations in node t, such that xj < u. The subscript L stands for the left child of node t.

  • PR is the proportion of observations in node t, such that xj ≥ u. The subscript R stands for the right child of node t.

  • PLjLk is the proportion of observations at node t, such that xj < u and xk < v.

  • PRjRk is the proportion of observations at node t, such that xj ≥ u and xk ≥ v.

  • Observations with missing values for xj or xk do not contribute to the proportion calculations.

λjk is a value in (–∞,1]. If λjk > 0, then xk < v is a worthwhile surrogate split for xj < u.
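
To make the formula concrete, here is an illustrative calculation with made-up node proportions (not taken from any data set):

PL = 0.6; PR = 0.4;                                        % optimal split sends 60%/40% of observations left/right
PLjLk = 0.5; PRjRk = 0.3;                                  % fractions on which the surrogate agrees, left and right
lambda = (min(PL,PR) - (1 - PLjLk - PRjRk))/min(PL,PR)     % = 0.5 > 0, so the surrogate split is worthwhile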

Surrogate Decision Splits

A surrogate decision split is an alternative to the optimal decision split at a given node in a decision tree. The optimal split is found by growing the tree; the surrogate split uses a similar or correlated predictor variable and split criterion.

When the value of the optimal split predictor for an observation is missing, the observation is sent to the left or right child node using the best surrogate predictor. When the value of the best surrogate split predictor for the observation is also missing, the observation is sent to the left or right child node using the second-best surrogate predictor, and so on. Candidate splits are sorted in descending order by their predictive measure of association.

Tips

By default, Prune is 'on'. However, this specification does not prune the classification tree. To prune a trained classification tree, pass the classification tree to prune.

Algorithms

Node Splitting Rules

fitctree follows these steps to determine how to split node t. For all predictors xi, i = 1,...,p:

  1. fitctree computes the weighted impurity of node t, denoted i(t). For supported impurity measures, see SplitCriterion.

  2. fitctree estimates the probability that an observation is in node t using

    P(T) = Σ(j ∈ T) wj.

    wj is the weight of observation j, and T is the set of all observation indices in node t. If you do not specify Prior or Weights, then wj = 1/n, where n is the sample size.

  3. fitctree sorts xi in ascending order. Each element of the sorted predictor is a splitting candidate or cut point. fitctree stores any indices corresponding to missing values in the set TU, which is the unsplit set.

  4. fitctree determines the best way to split node t using xi by maximizing the impurity gain (ΔI) over all splitting candidates. That is, for all splitting candidates in xi:

    1. fitctree splits the observations in node t into left and right child nodes (tL and tR, respectively).

    2. fitctree computes ΔI. Suppose that for a particular splitting candidate, tL and tR contain observation indices in the sets TL and TR, respectively.

      • If xi does not contain any missing values, then the impurity gain for the current splitting candidate is

        ΔI = P(T) i(t) − P(TL) i(tL) − P(TR) i(tR).

      • If xi contains missing values then, assuming that the observations are missing at random, the impurity gain is

        ΔIU = P(T − TU) i(t) − P(TL) i(tL) − P(TR) i(tR).

        T − TU is the set of all observation indices in node t that are not missing.

      • If you use surrogate decision splits, then:

        1. fitctree computes the predictive measures of association between the decision split xj < u and all possible decision splits xk < v, j ≠ k.

        2. fitctree sorts the possible alternative decision splits in descending order by their predictive measure of association with the optimal split. The surrogate split is the decision split yielding the largest measure.

        3. fitctree decides the child node assignments for observations with a missing value for xi using the surrogate split. If the surrogate predictor also contains a missing value, then fitctree uses the decision split with the second largest measure, and so on, until there are no other surrogates. It is possible for fitctree to split two different observations at node t using two different surrogate splits. For example, suppose the predictors x1 and x2 are the best and second best surrogates, respectively, for the predictor xi, i ∉ {1,2}, at node t. If observation m of predictor xi is missing (i.e., xmi is missing), but xm1 is not missing, then x1 is the surrogate predictor for observation xmi. If observations x(m + 1),i and x(m + 1),1 are missing, but x(m + 1),2 is not missing, then x2 is the surrogate predictor for observation m + 1.

        4. fitctree uses the appropriate impurity gain formula. That is, if fitctree fails to assign all missing observations in node t to children nodes using surrogate splits, then the impurity gain is ΔIU. Otherwise, fitctree uses ΔI for the impurity gain.

    3. fitctree chooses the candidate that yields the largest impurity gain.

fitctree splits the predictor variable at the cut point that maximizes the impurity gain.
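
The following is a simplified, conceptual sketch of this search for a single predictor using the Gini index with uniform weights. It ignores observation weights, missing values, and surrogate splits, and it is not fitctree's internal implementation (it assumes X and Y from the ionosphere data set are loaded).

giniFun = @(y) 1 - sum((countcats(categorical(y))/numel(y)).^2);   % Gini impurity of a label vector
x = X(:,3);                                % one predictor, chosen arbitrarily
bestGain = -Inf;
for u = unique(x).'                        % each distinct value is a candidate cut point
    left = x < u;
    if ~any(left) || all(left), continue, end
    gain = giniFun(Y) - mean(left)*giniFun(Y(left)) - mean(~left)*giniFun(Y(~left));
    if gain > bestGain
        bestGain = gain;
        bestCut  = u;
    end
end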

Tree Depth Control

  • If MergeLeaves is 'on' and PruneCriterion is 'error' (which are the default values for these name-value pair arguments), then the software applies pruning only to the leaves and by using classification error. This specification amounts to merging leaves that share the most popular class per leaf.

  • To accommodate MaxNumSplits, fitctree splits all nodes in the current layer, and then counts the number of branch nodes. A layer is the set of nodes that are equidistant from the root node. If the number of branch nodes exceeds MaxNumSplits, fitctree follows this procedure:

    1. Determine how many branch nodes in the current layer must be unsplit so that there are at most MaxNumSplits branch nodes.

    2. Sort the branch nodes by their impurity gains.

    3. Unsplit that many of the least successful branches.

    4. Return the decision tree grown so far.

    This procedure produces maximally balanced trees.

  • The software splits branch nodes layer by layer until at least one of these events occurs:

    • There are MaxNumSplits branch nodes.

    • A proposed split causes the number of observations in at least one branch node to be fewer than MinParentSize.

    • A proposed split causes the number of observations in at least one leaf node to be fewer than MinLeafSize.

    • The algorithm cannot find a good split within a layer (i.e., the pruning criterion (see PruneCriterion), does not improve for all proposed splits in a layer). A special case is when all nodes are pure (i.e., all observations in the node have the same class).

    MaxNumSplits and MinLeafSize do not affect splitting at their default values. Therefore, if you set 'MaxNumSplits', splitting might stop due to the value of MinParentSize, before MaxNumSplits splits occur.

Parallelization

For dual-core systems and above, fitctree parallelizes training decision trees using Intel® Threading Building Blocks (TBB). For details on Intel TBB, see https://software.intel.com/en-us/intel-tbb.

