
resubMargin

Class: ClassificationSVM

Classification margins for support vector machine classifiers by resubstitution

Syntax

m = resubMargin(SVMModel)

Description

m = resubMargin(SVMModel) returns the resubstitution classification margins (m) for the support vector machine (SVM) classifier SVMModel using the training data stored in SVMModel.X and corresponding class labels stored in SVMModel.Y.

Input Arguments


SVMModel — Full, trained SVM classifier

Full, trained SVM classifier, specified as a ClassificationSVM model trained using fitcsvm.

Output Arguments


m — Classification margins

Classification margins, returned as a numeric vector.

m has the same length as Y. The software estimates each entry of m using the trained SVM classifier SVMModel, the corresponding row of X, and the corresponding true class label in Y.

Definitions

Classification Edge

The edge is the weighted mean of the classification margins.

The weights are the prior class probabilities. If you supply weights, then the software normalizes them so that, within each class, they sum to the prior probability of that class. The software uses the renormalized weights to compute the weighted mean.

One way to choose among multiple classifiers, e.g., to perform feature selection, is to choose the classifier that yields the highest edge.
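For illustration, here is a minimal sketch (assuming the ionosphere data set used in the Examples below) that checks this relationship by comparing resubEdge against the weighted mean of the resubstitution margins. SVMModel.W holds the observation weights, normalized within each class to sum to that class's prior probability.

load ionosphere
SVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'},'Standardize',true);
m = resubMargin(SVMModel);                     % resubstitution margins
e = resubEdge(SVMModel)                        % resubstitution edge
eManual = sum(SVMModel.W.*m)/sum(SVMModel.W)   % should match e (up to rounding)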

Classification Margin

The classification margin for binary classification is, for each observation, the difference between the classification score for the true class and the classification score for the false class.

The software defines the classification margin for binary classification as

m = 2yf(x).

x is an observation. If the true label of x is the positive class, then y is 1, and –1 otherwise. f(x) is the positive-class classification score for the observation x. The literature commonly defines the margin as m = yf(x).

If the margins are on the same scale, then they serve as a classification confidence measure, i.e., among multiple classifiers, those that yield larger margins are better.
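As a hedged illustration of this definition, the margins can be reproduced from the positive-class scores that resubPredict returns (again assuming the ionosphere model from the Examples below). The score columns follow the ClassNames order, so column 2 holds the positive-class score here.

load ionosphere
SVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'},'Standardize',true);
[~,score] = resubPredict(SVMModel);  % score(:,2) is the positive-class score f(x)
y = 2*strcmp(SVMModel.Y,'g') - 1;    % +1 for the positive class 'g', -1 otherwise
mManual = 2*y.*score(:,2);           % should match resubMargin(SVMModel)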

Classification Score

The SVM classification score for classifying observation x is the signed distance from x to the decision boundary, ranging from −∞ to +∞. A positive score for a class indicates that x is predicted to be in that class; a negative score indicates otherwise.

The score for predicting x into the positive class, which is also the numerical predicted response for x, f(x), is the trained SVM classification function

f(x) = ∑_{j=1}^{n} α_j y_j G(x_j, x) + b,

where (α_1,...,α_n,b) are the estimated SVM parameters, G(x_j, x) is the dot product in the predictor space between x and the support vectors, and the sum includes the training set observations. The score for predicting x into the negative class is −f(x).

If G(x_j, x) = x_j′x (the linear kernel), then the score function reduces to

f(x) = (x/s)′β + b.

s is the kernel scale and β is the vector of fitted linear coefficients.
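For the linear kernel, you can recover the score directly from the fitted parameters, as in the sketch below (an illustration, not part of the shipped documentation). It assumes an unstandardized model, so no centering or scaling of X is needed; Beta, Bias, and KernelParameters.Scale are properties of the trained model.

load ionosphere
LinModel = fitcsvm(X,Y,'ClassNames',{'b','g'});  % linear kernel by default
s = LinModel.KernelParameters.Scale;             % kernel scale
beta = LinModel.Beta;                            % fitted linear coefficients
b = LinModel.Bias;                               % bias term
fManual = (X./s)*beta + b;                       % positive-class scores f(x)
[~,score] = resubPredict(LinModel);              % score(:,2) should match fManual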

Examples


Load the ionosphere data set.

load ionosphere

Train an SVM classifier. It is good practice to specify the class order and standardize the data.

SVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'},'Standardize',true);

SVMModel is a ClassificationSVM classifier. The negative class is 'b' and the positive class is 'g'.

Estimate the in-sample classification margins.

m = resubMargin(SVMModel);
m(10:20)
ans =

    5.5622
    4.2918
    1.9993
    4.5520
   -1.4897
    3.2816
    4.0260
    4.5419
   16.4449
    2.0006
   23.3782

An observation's margin is its observed (true) class score minus its maximal false-class score. Classifiers that yield relatively large margins are desirable.
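Because the margin here is 2yf(x), a negative in-sample margin corresponds to a misclassified training observation. As an illustrative check (reusing m and SVMModel from above), the fraction of negative margins should match the resubstitution classification error, up to observations with a score of exactly zero.

sum(m < 0)/numel(m)    % fraction of negative margins
resubLoss(SVMModel)    % resubstitution classification error, for comparison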

The classification margin measures, for each observation, the difference between the score for the observed (true) class and the maximal false-class score. One way to perform feature selection is to compare in-sample margins from multiple models: based solely on this criterion, the model that yields the highest margins is the best model.

Load the ionosphere data set. Define two data sets:

  • fullX contains all of the predictors.

  • partX contains the last 21 predictors.

load ionosphere
fullX = X;
partX = X(:,end-20:end);

Train SVM classifiers for each predictor set.

FullSVMModel = fitcsvm(fullX,Y);
PartSVMModel = fitcsvm(partX,Y);

Estimate the in-sample margins for each classifier.

fullMargins = resubMargin(FullSVMModel);
partMargins = resubMargin(PartSVMModel);
n = size(X,1);
p = sum(fullMargins < partMargins)/n
p =

    0.2194

Approximately 22% of the margins from the full model are less than those from the model with fewer predictors. This suggests that the model trained using all of the predictors is better.
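As a rough additional summary (an illustrative sketch, not part of the original example), you can also compare the mean in-sample margins; by the margin criterion, the model with the larger mean is preferable.

mean(fullMargins)   % mean in-sample margin, full model
mean(partMargins)   % mean in-sample margin, reduced model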

Algorithms

For binary classification, the software defines the margin for observation j, m_j, as

m_j = 2y_j f(x_j),

where y_j ∈ {−1,1}, and f(x_j) is the predicted score of observation j for the positive class. However, the literature commonly uses m_j = y_j f(x_j) to define the margin.
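If you need margins under the literature's convention m_j = y_j f(x_j), a minimal sketch (assuming a trained ClassificationSVM model named SVMModel) is to halve the returned values.

mLiterature = resubMargin(SVMModel)/2;   % margins under m_j = y_j*f(x_j)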

