loss

Class: CompactClassificationTree

Classification error

Syntax

L = loss(tree,X,Y)
L = loss(tree,X,Y,Name,Value)
L = loss(tree,X,Y,'subtrees',subtreevector)
[L,se] = loss(tree,X,Y,'subtrees',subtreevector)
[L,se,NLeaf] = loss(tree,X,Y,'subtrees',subtreevector)
[L,se,NLeaf,bestlevel] = loss(tree,X,Y,'subtrees',subtreevector)
[L,...] = loss(tree,X,Y,'subtrees',subtreevector,Name,Value)

Description

L = loss(tree,X,Y) returns a scalar representing how well tree classifies the data in X, when Y contains the true classifications.

When computing the loss, loss normalizes the class probabilities in Y to the class probabilities used for training, stored in the Prior property of tree.

L = loss(tree,X,Y,Name,Value) returns the loss with additional options specified by one or more Name,Value pair arguments.

L = loss(tree,X,Y,'subtrees',subtreevector) returns a vector of classification errors for the trees in the pruning sequence subtreevector.

[L,se] = loss(tree,X,Y,'subtrees',subtreevector) returns the vector of standard errors of the classification errors.

    Note:   loss returns se and further outputs only when the lossfun name-value pair is set to 'classiferror'.

[L,se,NLeaf] = loss(tree,X,Y,'subtrees',subtreevector) returns the vector of numbers of leaf nodes in the trees of the pruning sequence.

[L,se,NLeaf,bestlevel] = loss(tree,X,Y,'subtrees',subtreevector) returns the best pruning level as defined in the treesize name-value pair. By default, bestlevel is the pruning level that gives loss within one standard deviation of minimal loss.

[L,...] = loss(tree,X,Y,'subtrees',subtreevector,Name,Value) returns loss statistics with additional options specified by one or more Name,Value pair arguments.
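
For example, a minimal sketch (using the Fisher iris data purely for illustration; any labeled training data works) that evaluates the loss over the entire pruning sequence and returns a pruning level:

load fisheriris
tree = fitctree(meas,species);
[L,se,NLeaf,bestlevel] = loss(tree,meas,species,'subtrees','all','treesize','se');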

Input Arguments

tree

A classification tree or compact classification tree constructed by fitctree or compact.

X

Matrix of data to classify. Each row of X represents one observation, and each column represents one predictor. X must have the same number of columns as the data used to train tree. X should have the same number of rows as the number of elements in Y.

Y

Classification of X. Y should be of the same type as the classification used to train tree, and its number of elements should equal the number of rows of X.

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'lossfun'

Function handle or string representing a loss function. The built-in loss functions are 'binodeviance', 'classiferror', 'exponential', and 'mincost' (see Loss Functions).

You can write your own loss function in the syntax described in Loss Functions.

Default: 'mincost'
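
For example, a quick sketch (assuming tree, X, and Y already exist in your workspace) comparing two of the built-in loss functions:

Lclass = loss(tree,X,Y,'lossfun','classiferror')  % fraction of misclassified observations
Lcost  = loss(tree,X,Y,'lossfun','mincost')       % average expected misclassification cost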

'weights'

A numeric vector of length N, where N is the number of rows of X. The weights must be nonnegative. loss normalizes the weights so that the observation weights in each class sum to the prior probability of that class. When you supply weights, loss computes weighted classification loss.

Default: ones(N,1)
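
For instance, a minimal sketch (with arbitrary example weights, assuming tree, X, and Y are in your workspace) that doubles the influence of the first 50 observations:

w = ones(size(X,1),1);
w(1:50) = 2;                      % weight the first 50 observations twice as heavily
Lw = loss(tree,X,Y,'weights',w)   % weighted classification loss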

Name,Value arguments associated with pruning subtrees:

'subtrees'

A vector with integer values from 0 (full unpruned tree) to the maximal pruning level max(tree.PruneList). You can set subtrees to 'all', meaning the entire pruning sequence.

Default: 0

'treesize'

One of the following strings:

  • 'se' — loss returns the highest pruning level with loss within one standard deviation of the minimum (L+se, where L and se relate to the smallest value in subtrees).

  • 'min' — loss returns the element of subtrees with smallest loss, usually the smallest element of subtrees.

Output Arguments

L

Classification error, a vector the length of subtrees. The meaning of the error depends on the values in weights and lossfun; see Classification Error.

se

Standard error of loss, a vector the length of subtrees.

NLeaf

Number of leaves (terminal nodes) in the pruned subtrees, a vector the length of subtrees.

bestlevel

A scalar whose value depends on treesize:

  • treesize = 'se' — loss returns the highest pruning level with loss within one standard deviation of the minimum (L+se, where L and se relate to the smallest value in subtrees).

  • treesize = 'min' — loss returns the element of subtrees with smallest loss, usually the smallest element of subtrees.

Definitions

Classification Error

The default classification error is the fraction of data X that tree misclassifies, where Y represents the true classifications.

Weighted classification error is the sum of weight i times the Boolean value that is 1 when tree misclassifies the ith row of X, divided by the sum of the weights.
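
As an illustrative sketch only (not the internal implementation), and assuming Y is a cell array of class labels and w is a vector of observation weights, this definition corresponds to:

yhat = predict(tree,X);            % predicted class labels
mis  = ~strcmp(yhat,Y);            % 1 where tree misclassifies the ith row of X
Lw   = sum(w .* mis) / sum(w)      % weighted classification error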

Loss Functions

The built-in loss functions are:

  • 'binodeviance' — For binary classification, assume the classes yn are -1 and 1. With weight vector w normalized to have sum 1, and predictions of row n of data X as f(Xn), the binomial deviance is

        sum_n wn log(1 + exp(-2 yn f(Xn)))

  • 'exponential' — With the same definitions as for 'binodeviance', the exponential loss is

        sum_n wn exp(-yn f(Xn))

  • 'classiferror' — Predict the label with the largest posterior probability. The loss is then the fraction of misclassified observations.

  • 'mincost' — Predict the label with the smallest expected misclassification cost, with expectation taken over the posterior probability, and cost as given by the Cost property of the classifier (a matrix). The loss is then the true misclassification cost averaged over the observations.

To write your own loss function, create a function file in this form:

function loss = lossfun(C,S,W,COST)

where:

  • N is the number of rows of X.

  • K is the number of classes in the classifier, represented in the ClassNames property.

  • C is an N-by-K logical matrix, with one true per row for the true class. The index for each class is its position in the ClassNames property.

  • S is an N-by-K numeric matrix of posterior probabilities for classes, with one row per observation, similar to the posterior output from predict.

  • W is a numeric vector with N elements, the observation weights. If you pass W, the elements are normalized to sum to the prior probabilities in the respective classes.

  • COST is a K-by-K numeric matrix of misclassification costs. For example, you can use COST = ones(K) - eye(K), which means a cost of 0 for correct classification, and 1 for misclassification.

  • The output loss should be a scalar.

Pass the function handle @lossfun as the value of the lossfun name-value pair.
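
For example, a hypothetical custom loss function (the name weightedClassifError and its body are illustrative only) that reproduces weighted classification error from these inputs:

function loss = weightedClassifError(C,S,W,COST)
% Weighted fraction of misclassifications, written against the
% C, S, W, COST interface described above.
[~,predicted] = max(S,[],2);    % column index of the largest posterior per row
[~,truth]     = max(C,[],2);    % column index of the true class per row
loss = sum(W .* (predicted ~= truth)) / sum(W);
end

Save the function on your path and pass its handle:

Lcustom = loss(tree,X,Y,'lossfun',@weightedClassifError)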

True Misclassification Cost

There are two costs associated with classification: the true misclassification cost per class, and the expected misclassification cost per observation.

You can set the true misclassification cost per class in the Cost name-value pair when you create the classifier using the fitctree method. Cost(i,j) is the cost of classifying an observation into class j if its true class is i. By default, Cost(i,j)=1 if i~=j, and Cost(i,j)=0 if i=j. In other words, the cost is 0 for correct classification, and 1 for incorrect classification.
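
For example, a minimal sketch (cost values chosen arbitrarily, assuming X and Y define a two-class problem) where misclassifying a true class 1 observation is five times as expensive as misclassifying a true class 2 observation:

C = [0 5; 1 0];                    % C(i,j): cost of classifying into class j when the true class is i
tree = fitctree(X,Y,'Cost',C);
L = loss(tree,X,Y)                 % the default 'mincost' loss uses the Cost property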

Expected Misclassification Cost

There are two costs associated with classification: the true misclassification cost per class, and the expected misclassification cost per observation.

Suppose you have Nobs observations that you want to classify with a trained classifier. Suppose you have K classes. You place the observations into a matrix Xnew with one observation per row.

The expected cost matrix CE has size Nobs-by-K. Each row of CE contains the expected (average) cost of classifying the observation into each of the K classes. CE(n,k) is

        sum_i P(i|Xnew(n)) C(k|i),   summed over i = 1,...,K

where

  • K is the number of classes.

  • P(i|Xnew(n)) is the posterior probability of class i for observation Xnew(n).

  • C(k|i) is the true misclassification cost of classifying an observation as k when its true class is i.
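
As a sketch of that computation (assuming Xnew holds the new observations, one per row, and tree is a trained classifier), the expected cost matrix follows from the posterior probabilities and the Cost property:

[~,posterior] = predict(tree,Xnew);   % posterior(n,i) is the posterior probability of class i for Xnew(n,:)
CE = posterior * tree.Cost;           % CE(n,k) = sum over i of posterior(n,i) * Cost(i,k)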

Score (tree)

For trees, the score of a classification of a leaf node is the posterior probability of the classification at that node. The posterior probability of the classification at a node is the number of training sequences that lead to that node with the classification, divided by the number of training sequences that lead to that node.

For example, consider a predictor X whose true classification is true when X < 0.15 or X > 0.95, and false otherwise.

Generate 100 random points and classify them:

rng(0,'twister') % for reproducibility
X = rand(100,1);
Y = (abs(X - .55) > .4);
tree = fitctree(X,Y);
view(tree,'Mode','Graph')

Prune the tree:

tree1 = prune(tree,'Level',1);
view(tree1,'Mode','Graph')

The pruned tree correctly classifies observations that are less than 0.15 as true. It also correctly classifies observations from .15 to .95 as false. However, it incorrectly classifies observations that are greater than .95 as false. Therefore, the score for observations that are greater than .15 should be about .05/.85=.06 for true, and about .8/.85=.94 for false.

Compute the prediction scores for the first 10 rows of X:

[~,score] = predict(tree1,X(1:10));
[score X(1:10,:)]
ans =

    0.9059    0.0941    0.8147
    0.9059    0.0941    0.9058
         0    1.0000    0.1270
    0.9059    0.0941    0.9134
    0.9059    0.0941    0.6324
         0    1.0000    0.0975
    0.9059    0.0941    0.2785
    0.9059    0.0941    0.5469
    0.9059    0.0941    0.9575
    0.9059    0.0941    0.9649

Indeed, every value of X (the right-most column) that is less than 0.15 has associated scores (the left and center columns) of 0 and 1, while the other values of X have associated scores of 0.91 and 0.09. The difference (score 0.09 instead of the expected .06) is due to a statistical fluctuation: there are 8 observations in X in the range (.95,1) instead of the expected 5 observations.

Examples

Compute the resubstituted classification error for the ionosphere data:

load ionosphere
tree = fitctree(X,Y);
L = loss(tree,X,Y)
L =

    0.0114
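
As a follow-up sketch (split fraction chosen arbitrarily, reusing the ionosphere X and Y above), you can estimate generalization error by holding out part of the data instead of resubstituting the training set:

rng(1)                                              % for reproducibility
cvp = cvpartition(Y,'holdout',0.3);                 % 70% training, 30% test
treeTrain = fitctree(X(training(cvp),:),Y(training(cvp)));
Ltest = loss(treeTrain,X(test(cvp),:),Y(test(cvp)))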
