kfoldLoss

Class: ClassificationPartitionedEnsemble

Classification loss for observations not used for training

Syntax

L = kfoldLoss(ens)
L = kfoldLoss(ens,Name,Value)

Description

L = kfoldLoss(ens) returns the loss obtained by the cross-validated classification model ens. For every fold, this method computes the classification loss for in-fold observations using a model trained on out-of-fold observations.

L = kfoldLoss(ens,Name,Value) calculates loss with additional options specified by one or more Name,Value pair arguments. You can specify several name-value pair arguments in any order as Name1,Value1,…,NameN,ValueN.
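
For example, a sketch combining two options in one call, assuming cvens is a cross-validated ensemble such as the one built in the Examples section below:

L = kfoldLoss(cvens,'folds',[1 3 5],'mode','individual')  % one loss per requested fold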

Input Arguments

ens

Object of class ClassificationPartitionedEnsemble. Create ens with fitensemble along with one of the cross-validation options: 'crossval', 'kfold', 'holdout', 'leaveout', or 'cvpartition'. Alternatively, create ens from a classification ensemble with crossval.
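
For instance, a minimal sketch of both construction routes, using the ionosphere data as in the Examples section:

load ionosphere
cv1 = fitensemble(X,Y,'AdaBoostM1',100,'Tree','kfold',5);  % cross-validate while fitting
ens = fitensemble(X,Y,'AdaBoostM1',100,'Tree');            % or fit on all the data first,
cv2 = crossval(ens,'kfold',5);                             % then cross-validate the ensemble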

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'folds'

Indices of folds ranging from 1 to ens.KFold. Use only these folds for predictions.

Default: 1:ens.KFold
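
For example, to evaluate only the first three folds of a cross-validated ensemble cvens (as built in the Examples section):

L = kfoldLoss(cvens,'folds',1:3)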

'lossfun'

Function handle or string representing a loss function. The built-in loss functions are 'binodeviance', 'classiferror', 'exponential', 'hinge', and 'mincost'; see Loss Functions for their definitions.

You can also write your own loss function using the syntax described in Loss Functions.

Default: 'classiferror'

'mode'

A string for determining the output of kfoldLoss:

  • 'average' — L is a scalar, the loss averaged over all folds.

  • 'individual' — L is a vector of length ens.KFold, where each entry is the loss for one fold.

  • 'cumulative' — L is a vector in which element J is obtained by using learners 1:J from the input list of learners.

Default: 'average'
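
A sketch contrasting the three modes, again assuming cvens from the Examples section; plotting the cumulative loss is a common way to judge how many weak learners the ensemble needs:

Lavg = kfoldLoss(cvens,'mode','average')      % scalar
Lind = kfoldLoss(cvens,'mode','individual')   % one entry per fold
Lcum = kfoldLoss(cvens,'mode','cumulative');  % element J uses learners 1:J
plot(Lcum)
xlabel('Number of learners')
ylabel('Cross-validated classification error')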

Output Arguments

L

Loss, by default the fraction of misclassified data. L can be a scalar or a vector, depending on the settings of the 'mode' and 'lossfun' name-value pairs.

Definitions

Loss Functions

The built-in loss functions are:

  • 'binodeviance' — For binary classification, assume the classes yn are -1 and 1. With weight vector w normalized to have sum 1, and predictions of row n of data X as f(Xn), the binomial deviance is

    $$\sum_n w_n \log\bigl(1 + \exp(-2 y_n f(X_n))\bigr).$$

  • 'classiferror' — Fraction of misclassified data, weighted by w.

  • 'exponential' — With the same definitions as for 'binodeviance', the exponential loss is

    $$\sum_n w_n \exp\bigl(-y_n f(X_n)\bigr).$$

  • 'hinge' — Classification error measure that has the form

    $$L = \frac{\sum_{j=1}^{n} w_j \max\{0,\, 1 - y_j f(X_j)\}}{\sum_{j=1}^{n} w_j},$$

    where:

    • wj is weight j.

    • For binary classification, yj = 1 for the positive class and -1 for the negative class. For problems with K ≥ 3 classes, yj is a vector of 0s with a 1 in the position corresponding to the true class; for example, if the second observation is in the third class and K = 4, then y2 = [0 0 1 0]′.

    • f(Xj) is, for binary classification, the posterior probability or, for K ≥ 3, the vector of posterior probabilities for each class, given observation j.

  • 'mincost' — Predict the label with the smallest expected misclassification cost, with expectation taken over the posterior probability, and cost as given by the Cost property of the classifier (a matrix). The loss is then the true misclassification cost averaged over the observations.
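
As a quick way to compare the built-in loss functions on one cross-validated ensemble (a sketch assuming cvens from the Examples section; whether each loss is meaningful depends on whether the ensemble's scores behave like the f values or posterior probabilities in the definitions above):

lossNames = {'binodeviance','classiferror','exponential','hinge','mincost'};
for k = 1:numel(lossNames)
    fprintf('%-14s %g\n',lossNames{k},kfoldLoss(cvens,'lossfun',lossNames{k}))
end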

To write your own loss function, create a function file of the form

function loss = lossfun(C,S,W,COST)

where:
  • N is the number of rows of ens.X.

  • K is the number of classes in ens, represented in ens.ClassNames.

  • C is an N-by-K logical matrix, with one true per row for the true class. The index for each class is its position in ens.ClassNames.

  • S is an N-by-K numeric matrix. S is a matrix of posterior probabilities for classes with one row per observation, similar to the score output from predict.

  • W is a numeric vector with N elements, the observation weights.

  • COST is a K-by-K numeric matrix of misclassification costs. The default cost matrix, as used by 'classiferror', has a cost of 0 for correct classification and 1 for misclassification.

  • The output loss should be a scalar.

Pass the function handle @lossfun as the value of the 'lossfun' name-value pair.
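
For example, a minimal custom loss that reproduces the built-in 'classiferror' from the C, S, and W inputs (the file name myclassiferror.m is hypothetical):

function loss = myclassiferror(C,S,W,COST)
% Weighted fraction of misclassified observations (COST is unused in this sketch).
[~,yhat]  = max(S,[],2);        % predicted class: column with the largest score
[~,ytrue] = max(C,[],2);        % true class: position of the true entry in each row of C
W = W/sum(W);                   % normalize weights to sum to 1
loss = sum(W(yhat ~= ytrue));   % total weight of misclassified observations
end

Then call kfoldLoss(cvens,'lossfun',@myclassiferror).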

Examples

Find the average cross-validated classification error for an ensemble model of the ionosphere data:

load ionosphere
ens = fitensemble(X,Y,'AdaBoostM1',100,'Tree');
cvens = crossval(ens);
L = kfoldLoss(cvens)

L =
    0.0826