Classification edge for observations not used for training
E = kfoldEdge(obj)
E = kfoldEdge(obj,Name,Value)
E = kfoldEdge(obj) returns the classification edge (average classification margin) obtained by the cross-validated classification model obj. For every fold, this method computes the classification edge for in-fold observations using an ensemble trained on out-of-fold observations.
E = kfoldEdge(obj,Name,Value) calculates the edge with additional options specified by one or more Name,Value pair arguments. You can specify several name-value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
obj — Object of class ClassificationPartitionedModel.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
folds — Indices of folds ranging from 1 to obj.KFold. Use only these folds for predictions. Default: 1:obj.KFold
mode — Character vector representing the meaning of the output edge: 'average' (E is the average over all folds; the default) or 'individual' (E is a vector with one element per fold).
E — The average classification margin. E is a scalar or a vector, depending on the setting of the mode name-value pair argument.
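As a hedged usage sketch (cvmodel is an illustrative name for any cross-validated classification model created with crossval):
eAvg  = kfoldEdge(cvmodel);                      % scalar: average over all folds
eFold = kfoldEdge(cvmodel,'mode','individual');  % vector: one edge per fold
eSub  = kfoldEdge(cvmodel,'folds',[1 3 5]);      % use only folds 1, 3, and 5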
Compute the k-fold edge for a model trained on Fisher's iris data.
Load Fisher's iris data set.
load fisheriris
Train a classification tree.
tree = fitctree(meas,species);
Cross-validate the classifier using 10-fold cross-validation.
cvtree = crossval(tree);
Compute the k-fold edge.
edge = kfoldEdge(cvtree)
edge = 0.8578
The edge is the weighted mean value of the classification margin. The weights are the class prior probabilities. If you supply additional weights, they are normalized to sum to the prior probabilities of their respective classes and are then used to compute the weighted average.
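As a hedged illustration of this definition, you can recompute the edge of the iris example from its per-observation margins (this sketch assumes the cross-validated model cvtree from above and its scaled observation weights stored in the W property):
m = kfoldMargin(cvtree);        % cross-validated margin of each observation
w = cvtree.W / sum(cvtree.W);   % observation weights, normalized to sum to 1
edgeByHand = sum(w .* m)        % should agree with kfoldEdge(cvtree)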
The classification margin is the difference between the classification score for the true class and the maximal classification score for the false classes.
The classification margin is a column vector with the same number of rows as the matrix X. A high value of margin indicates a more reliable prediction than a low value.
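A hedged sketch of this computation for the iris example, built from cross-validated scores (the true-class lookup via ismember is illustrative, not part of kfoldEdge itself):
[~,score] = kfoldPredict(cvtree);                  % cross-validated class scores
[~,trueIdx] = ismember(species,cvtree.ClassNames); % column index of each true class
n = numel(trueIdx);
idx = sub2ind(size(score),(1:n)',trueIdx);
trueScore = score(idx);                            % score for the true class
other = score;
other(idx) = -Inf;                                 % mask out the true class
m = trueScore - max(other,[],2);                   % should match kfoldMargin(cvtree)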
For discriminant analysis, the score of a classification is the posterior probability of the classification. For the definition of posterior probability in discriminant analysis, see Posterior Probability.
For trees, the score of a classification of a leaf node is the posterior probability of the classification at that node. The posterior probability of the classification at a node is the number of training sequences that lead to that node with the classification, divided by the number of training sequences that lead to that node.
For example, consider classifying a predictor X as true when X < 0.15 or X > 0.95, and false otherwise.
Generate 100 random points and classify them:
rng(0,'twister') % for reproducibility
X = rand(100,1);
Y = (abs(X - .55) > .4);
tree = fitctree(X,Y);
view(tree,'Mode','Graph')
Prune the tree:
tree1 = prune(tree,'Level',1);
view(tree1,'Mode','Graph')
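To connect the pruned tree with the leaf-posterior definition above, one hedged way to inspect the posteriors is through the tree's per-node class probabilities (this assumes the ClassProbability property, which stores one row of class posteriors per node):
tree1.ClassProbability   % rows are tree nodes; columns are the classes false and true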
The pruned tree correctly classifies observations that are less than 0.15 as true. It also correctly classifies observations from .15 to .94 as false. However, it incorrectly classifies observations that are greater than .94 as false. Therefore, the score for observations that are greater than .15 should be about .05/.85 = .06 for true, and about .8/.85 = .94 for false (each fraction is the width of the corresponding interval divided by .85, the total width of the region above .15).
Compute the prediction scores for the first 10 rows of X:
[~,score] = predict(tree1,X(1:10));
[score X(1:10,:)]
ans =
    0.9059    0.0941    0.8147
    0.9059    0.0941    0.9058
         0    1.0000    0.1270
    0.9059    0.0941    0.9134
    0.9059    0.0941    0.6324
         0    1.0000    0.0975
    0.9059    0.0941    0.2785
    0.9059    0.0941    0.5469
    0.9059    0.0941    0.9575
    0.9059    0.0941    0.9649
Indeed, every value of X (the right-most column) that is less than 0.15 has associated scores (the left and center columns) of 0 and 1, while the other values of X have associated scores of 0.91 and 0.09. The difference (a score of 0.09 instead of the expected .06) is due to a statistical fluctuation: there are 8 observations in X in the range (.95,1) instead of the expected 5 observations.
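A hedged check of that count, using the X generated above:
sum(X > .95)   % returns 8 here per the text above; 100*(1 - .95) = 5 expected on average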