kfoldEdge

Class: ClassificationPartitionedModel

Classification edge for observations not used for training

Syntax

E = kfoldEdge(obj)
E = kfoldEdge(obj,Name,Value)

Description

E = kfoldEdge(obj) returns the classification edge (the average classification margin) obtained by the cross-validated classification model obj. For every fold, kfoldEdge computes the classification edge for in-fold observations using a classifier trained on out-of-fold observations.

E = kfoldEdge(obj,Name,Value) calculates the edge with additional options specified by one or more Name,Value pair arguments. You can specify several name-value pair arguments in any order as Name1,Value1,…,NameN,ValueN.

Input Arguments

obj

Object of class ClassificationPartitionedModel.

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'folds'

Indices of folds ranging from 1 to obj.KFold. Use only these folds for predictions.

Default: 1:obj.KFold

'mode'

String representing the meaning of the output edge:

  • 'average': edge is a scalar value, the average over all folds.

  • 'individual': edge is a vector of length obj.KFold with one element per fold.

Default: 'average'
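
For illustration, here is a minimal sketch (it assumes Fisher's iris data, as in the example below) that restricts the computation to a subset of folds:

load fisheriris
cvmodel = crossval(fitctree(meas,species));
eSubset = kfoldEdge(cvmodel,'folds',[1 3 5]) % average edge over folds 1, 3, and 5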

Output Arguments

E

The average classification margin. E is a scalar or vector, depending on the setting of the mode name-value pair.

Definitions

Edge

The edge is the weighted mean value of the classification margin. The weights are class prior probabilities. If you supply additional weights, those weights are normalized to sum to the prior probabilities in the respective classes, and are then used to compute the weighted average.
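
With the default unit observation weights and empirical class priors, every normalized weight is 1/n, so the edge reduces to the plain mean of the cross-validation margins. A quick sketch to check this (using the same iris model as in the example below):

load fisheriris
cvtree = crossval(fitctree(meas,species));
m = kfoldMargin(cvtree);          % one margin per observation
abs(kfoldEdge(cvtree) - mean(m))  % approximately zero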

Margin

The classification margin is the difference between the classification score for the true class and maximal classification score for the false classes.

The classification margin is a column vector with the same number of rows as the matrix X. A high margin value indicates a more reliable prediction than a low value.
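
As a sketch (not the library implementation), you can recompute the margins from the scores returned by kfoldPredict, assuming a cross-validated model cvtree such as the one built in the Examples section:

[~,score] = kfoldPredict(cvtree);                   % one score column per class
[~,trueCol] = ismember(cvtree.Y,cvtree.ClassNames); % column of the true class
idx = sub2ind(size(score),(1:numel(trueCol))',trueCol);
sTrue = score(idx);                                 % score of the true class
sOther = score;
sOther(idx) = -Inf;                                 % mask the true-class entries
m = sTrue - max(sOther,[],2);                       % matches kfoldMargin(cvtree)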

Score

For discriminant analysis, the score of a classification is the posterior probability of the classification. For the definition of posterior probability in discriminant analysis, see Posterior Probability.

For trees, the score of a classification at a leaf node is the posterior probability of the classification at that node. The posterior probability of the classification at a node is the number of training observations that lead to that node with the given classification, divided by the total number of training observations that lead to that node.

For example, consider classifying a predictor X as true when X < 0.15 or X > 0.95, and as false otherwise.

Generate 100 random points and classify them:

rng(0,'twister') % for reproducibility
X = rand(100,1);
Y = (abs(X - .55) > .4);
tree = fitctree(X,Y);
view(tree,'Mode','Graph')

Prune the tree:

tree1 = prune(tree,'Level',1);
view(tree1,'Mode','Graph')

The pruned tree correctly classifies observations that are less than 0.15 as true. It also correctly classifies observations from 0.15 to 0.95 as false. However, it incorrectly classifies observations that are greater than 0.95 as false. Therefore, the score for observations that are greater than 0.15 should be about 0.05/0.85 = 0.06 for true, and about 0.80/0.85 = 0.94 for false.

Compute the prediction scores for the first 10 rows of X:

[~,score] = predict(tree1,X(1:10));
[score X(1:10,:)]
ans =

    0.9059    0.0941    0.8147
    0.9059    0.0941    0.9058
         0    1.0000    0.1270
    0.9059    0.0941    0.9134
    0.9059    0.0941    0.6324
         0    1.0000    0.0975
    0.9059    0.0941    0.2785
    0.9059    0.0941    0.5469
    0.9059    0.0941    0.9575
    0.9059    0.0941    0.9649

Indeed, every value of X (the right-most column) that is less than 0.15 has associated scores (the left and center columns) of 0 and 1, while the other values of X have associated scores of 0.91 and 0.09. The difference (a score of 0.09 instead of the expected 0.06) is due to a statistical fluctuation: there are 8 observations in X in the range (0.95,1) instead of the expected 5 observations.
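
Continuing the example, you can verify the counts behind this fluctuation directly (the false leaf holds 85 observations, 8 of which are actually true):

sum(X > 0.95)  % 8 observations in (0.95,1), versus the 5 expected
8/85           % 0.0941, the observed true-class score in the false leaf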

Examples


Estimate the k-fold Edge of a Classifier

Compute the k-fold edge for a model trained on Fisher's iris data.

Load Fisher's iris data set.

load fisheriris

Train a classification tree classifier.

tree = fitctree(meas,species);

Cross-validate the classifier using 10-fold cross-validation.

cvtree = crossval(tree);

Compute the k-fold edge.

edge = kfoldEdge(cvtree)
edge =

    0.8578
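
To see how the edge varies across folds, request per-fold values with the 'mode' name-value pair:

edges = kfoldEdge(cvtree,'mode','individual')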
