
# predictorImportance

Estimates of predictor importance

## Syntax

imp = predictorImportance(tree)

## Description

imp = predictorImportance(tree) computes estimates of predictor importance for tree by summing changes in the risk due to splits on every predictor and dividing the sum by the number of branch nodes.

## Input Arguments

tree — A classification tree created by fitctree, or by the compact method.

## Output Arguments

imp — A row vector with the same number of elements as the number of predictors (columns) in tree.X. The entries are the estimates of predictor importance, with 0 representing the smallest possible importance.

## Definitions

### Predictor Importance

predictorImportance computes estimates of predictor importance for tree by summing changes in the risk due to splits on every predictor and dividing the sum by the number of branch nodes. If tree is grown without surrogate splits, this sum is taken over the best splits found at each branch node. If tree is grown with surrogate splits, this sum is taken over all splits at each branch node, including surrogate splits. imp has one element for each input predictor in the data used to train tree. The predictor importance associated with a single split is computed as the difference between the risk for the parent node and the total risk for the two children.
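As an illustration, here is a minimal sketch of this computation for a tree grown without surrogate splits, using documented ClassificationTree properties. This is not the library implementation, the variable names are illustrative, and the sketch ignores the pruning behavior described below.

nBranch   = sum(tree.IsBranchNode);                      % number of branch nodes
impSketch = zeros(1,numel(tree.PredictorNames));
for t = find(tree.IsBranchNode)'                         % loop over branch nodes
    kids  = tree.Children(t,:);                          % left and right child nodes
    dRisk = tree.NodeRisk(t) - sum(tree.NodeRisk(kids)); % parent risk minus total child risk
    p     = strcmp(tree.PredictorNames,tree.CutPredictor{t}); % predictor used by this split
    impSketch(p) = impSketch(p) + dRisk;                 % credit that predictor
end
impSketch = impSketch/nBranch;                           % divide by the number of branch nodes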

Estimates of predictor importance do not depend on the order of predictors if you use surrogate splits, but do depend on the order if you do not use surrogate splits.

If you use surrogate splits, predictorImportance computes estimates before the tree is reduced by pruning or merging leaves. If you do not use surrogate splits, predictorImportance computes estimates after the tree is reduced by pruning or merging leaves. Therefore, reducing the tree by pruning affects the predictor importance for a tree grown without surrogate splits, and does not affect the predictor importance for a tree grown with surrogate splits.
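For example, the following sketch (using the fisheriris data purely for illustration) shows pruning changing the estimates for a tree grown without surrogate splits:

load fisheriris
Mdl  = fitctree(meas,species);            % grown without surrogate splits
imp0 = predictorImportance(Mdl);          % estimates for the unpruned tree
MdlPruned = prune(Mdl,'Level',1);         % remove the lowest pruning level
imp1 = predictorImportance(MdlPruned);    % generally differs from imp0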

### Impurity and Node Error

ClassificationTree splits nodes based on either impurity or node error.

Impurity means one of several things, depending on your choice of the SplitCriterion name-value pair argument; each measure is evaluated numerically in the sketch after this list:

• Gini's Diversity Index ('gdi') — The Gini index of a node is

$1 - \sum_i p^2(i),$

where the sum is over the classes i at the node, and p(i) is the observed fraction of observations of class i that reach the node. A node with just one class (a pure node) has Gini index 0; otherwise the Gini index is positive. So the Gini index is a measure of node impurity.

• Deviance ('deviance') — With p(i) defined the same as for the Gini index, the deviance of a node is

$-\sum_i p(i) \log p(i).$

A pure node has deviance 0; otherwise, the deviance is positive.

• Twoing rule ('twoing') — Twoing is not a purity measure of a node, but is a different measure for deciding how to split a node. Let L(i) denote the fraction of members of class i in the left child node after a split, and R(i) denote the fraction of members of class i in the right child node after a split. Choose the split criterion to maximize

$P(L)\,P(R)\left(\sum_i \left|L(i) - R(i)\right|\right)^2,$

where P(L) and P(R) are the fractions of observations that split to the left and right, respectively. If the expression is large, the split made each child node purer. Similarly, if the expression is small, the split made the child nodes similar to each other, and hence similar to the parent node, so the split did not increase node purity.

• Node error — The node error is the fraction of misclassified observations at a node. If j is the class with the largest number of training samples at a node, the node error is

$1 - p(j).$
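The following sketch evaluates each measure for made-up class fractions; the values of p, L, R, PL, and PR are hypothetical:

p = [0.7 0.2 0.1];                 % hypothetical class fractions at a node
gdi     = 1 - sum(p.^2);           % Gini's diversity index
pn      = p(p > 0);                % drop zero fractions (treat 0*log(0) as 0)
dev     = -sum(pn.*log(pn));       % deviance
nodeErr = 1 - max(p);              % node error

L  = [0.9 0.1 0.0];                % hypothetical class fractions, left child
R  = [0.2 0.4 0.4];                % hypothetical class fractions, right child
PL = 0.5; PR = 0.5;                % fractions of observations going left/right
twoing = PL*PR*sum(abs(L - R))^2;  % twoing criterion for this split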

## Examples


Load Fisher's iris data set.

load fisheriris

Grow a classification tree.

Mdl = fitctree(meas,species);

Compute predictor importance estimates for all predictor variables.

imp = predictorImportance(Mdl)
imp =

0         0    0.0907    0.0682

The first two elements of imp are zero, meaning that the tree never splits on the first two predictors. Therefore, these predictors do not enter into the calculations Mdl performs when classifying irises.

Estimates of predictor importance do not depend on the order of predictors if you use surrogate splits, but do depend on the order if you do not use surrogate splits.

Permute the order of the data columns in the previous example, grow another classification tree, and then compute predictor importance estimates.

measPerm  = meas(:,[4 1 3 2]);
MdlPerm = fitctree(measPerm,species);
impPerm = predictorImportance(MdlPerm)
impPerm =

0.1515         0    0.0074         0

The estimates of predictor importance are not a permutation of imp.

Load Fisher's iris data set.

load fisheriris

Grow a classification tree. Specify usage of surrogate splits.

Mdl = fitctree(meas,species,'Surrogate','on');

Compute predictor importance estimates for all predictor variables.

imp = predictorImportance(Mdl)
imp =

0.0791    0.0374    0.1530    0.1529

All predictors have some importance. The first two predictors are less important than the final two.

Permute the order of the data columns in the previous example, grow another classification tree specifying usage of surrogate splits, and then compute predictor importance estimates.

measPerm  = meas(:,[4 1 3 2]);
MdlPerm = fitctree(measPerm,species,'Surrogate','on');
impPerm = predictorImportance(MdlPerm)
impPerm =

0.1529    0.0791    0.1530    0.0374

The estimates of predictor importance are a permutation of imp.

Load the census1994 data set. Consider a model that predicts a person's salary category given their age, working class, education level, marital status, race, sex, capital gain and loss, and number of working hours per week.

load census1994
X = adultdata(:,{'age','workClass','education_num','marital_status','race',...
    'sex','capital_gain','capital_loss','hours_per_week','salary'});

Display the number of categories represented in the categorical variables using summary.

summary(X)
Variables:

age: 32561×1 double
Values:

min       17
median    37
max       90

workClass: 32561×1 categorical
Values:

Federal-gov           960
Local-gov            2093
Never-worked            7
Private             22696
Self-emp-inc         1116
Self-emp-not-inc     2541
State-gov            1298
Without-pay            14
<undefined>          1836

education_num: 32561×1 double
Values:

min        1
median    10
max       16

marital_status: 32561×1 categorical
Values:

Divorced                  4443
Married-AF-spouse           23
Married-civ-spouse       14976
Married-spouse-absent      418
Never-married            10683
Separated                 1025
Widowed                    993

race: 32561×1 categorical
Values:

Amer-Indian-Eskimo      311
Asian-Pac-Islander     1039
Black                  3124
Other                   271
White                 27816

sex: 32561×1 categorical
Values:

Female    10771
Male      21790

capital_gain: 32561×1 double
Values:

min           0
median        0
max       99999

capital_loss: 32561×1 double
Values:

min          0
median       0
max       4356

hours_per_week: 32561×1 double
Values:

min        1
median    40
max       99

salary: 32561×1 categorical
Values:

<=50K    24720
>50K      7841

Because there are few categories represented in the categorical variables compared to the number of levels in the continuous variables, the standard CART predictor-splitting algorithm prefers splitting on a continuous predictor over the categorical variables.
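As a sketch (not part of the original example), you can observe this preference by training with the default predictor selection and comparing the resulting estimates:

MdlCART = fitctree(X,'salary','Surrogate','on');   % default CART predictor selection
impCART = predictorImportance(MdlCART)             % tends to favor the continuous predictors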

Train a classification tree using the entire data set. To grow unbiased trees, specify usage of the curvature test for splitting predictors. Because there are missing observations in the data, specify usage of surrogate splits.

Mdl = fitctree(X,'salary','PredictorSelection','curvature',...
'Surrogate','on');

Estimate predictor importance values by summing changes in the risk due to splits on every predictor and dividing the sum by the number of branch nodes. Compare the estimates using a bar graph.

imp = predictorImportance(Mdl);

figure;
bar(imp);
title('Predictor Importance Estimates');
ylabel('Estimates');
xlabel('Predictors');
h = gca;
h.XTickLabel = Mdl.PredictorNames;
h.XTickLabelRotation = 45;
h.TickLabelInterpreter = 'none';

In this case, capital_gain is the most important predictor, followed by education_num.
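As an optional follow-up (not in the original example), you can list the predictor names in descending order of estimated importance:

[~,idx] = sort(imp,'descend');          % rank predictors by estimated importance
disp(Mdl.PredictorNames(idx)')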