resubMargin

Class: ClassificationSVM

Classification margins for support vector machine classifiers by resubstitution

Syntax

m = resubMargin(SVMModel)

Description

m = resubMargin(SVMModel) returns the resubstitution classification margins (m) for the support vector machine (SVM) classifier SVMModel, using the training data stored in SVMModel.X and the corresponding class labels stored in SVMModel.Y.

Input Arguments

SVMModel — Full, trained SVM classifier
ClassificationSVM classifier

Full, trained SVM classifier, specified as a ClassificationSVM model trained using fitcsvm.

Output Arguments

m — Classification margins
numeric vector

Classification margins, returned as a numeric vector.

m has the same length as Y. The software estimates each entry of m using the trained SVM classifier SVMModel, the corresponding row of X, and the corresponding true class label in Y.

Definitions

Margins

For each observation, the classification margin is the difference between the score for the true class and the maximal score for the false classes. Provided that the margins are on the same scale, they serve as a classification confidence measure; that is, among multiple classifiers, those that yield larger margins are better [2].
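For a binary classifier, you can recover the margins from the per-class scores that resubPredict returns. The following is a minimal sketch, assuming a trained binary ClassificationSVM model named SVMModel such as the one built in the Examples section:

[~,score] = resubPredict(SVMModel);          % one score column per class
[~,trueIdx] = ismember(SVMModel.Y,SVMModel.ClassNames);
n = numel(trueIdx);
trueScore  = score(sub2ind(size(score),(1:n)',trueIdx));
falseScore = score(sub2ind(size(score),(1:n)',3-trueIdx)); % binary: the other column
mManual = trueScore - falseScore;            % equals resubMargin(SVMModel)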

Edge

The edge is the weighted mean of the classification margins.

The weights are the prior class probabilities. If you supply weights, then the software normalizes them so that, within each class, they sum to that class's prior probability. The software uses the renormalized weights to compute the weighted mean.

One way to choose among multiple classifiers, e.g., to perform feature selection, is to choose the classifier that yields the highest edge.
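As a concrete check, resubEdge computes this weighted mean directly. The sketch below reproduces it from resubMargin and the Prior property, assuming that no observation weights were supplied to fitcsvm (so each unit weight renormalizes to its class prior divided by the class count):

m = resubMargin(SVMModel);
[~,k] = ismember(SVMModel.Y,SVMModel.ClassNames); % class index per observation
nk = accumarray(k,1);                             % observations per class
w  = SVMModel.Prior(k)'./nk(k);                   % unit weights renormalized to priors
eManual = sum(w.*m);                              % equals resubEdge(SVMModel)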

Score

The SVM score for classifying observation x is the signed distance from x to the decision boundary, ranging from -∞ to +∞. A positive score for a class indicates that x is predicted to be in that class; a negative score indicates otherwise.

The score is also the numerical, predicted response for x, f(x), computed by the trained SVM classification function

f(x) = Σ_{j=1}^{n} α_j y_j G(x_j, x) + b,

where (α_1, ..., α_n, b) are the estimated SVM parameters, G(x_j, x) is the dot product in the predictor space between x and the support vectors, and the sum includes the training set observations.
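With the default linear kernel (kernel scale 1, no standardization), you can evaluate this sum from the Alpha, SupportVectorLabels, SupportVectors, and Bias properties of the trained model. A sketch that assumes those defaults:

load ionosphere
Mdl = fitcsvm(X,Y);                          % defaults: linear kernel, scale 1

coeff = Mdl.Alpha.*Mdl.SupportVectorLabels;  % alpha_j*y_j, one per support vector
G = Mdl.SupportVectors*X(1,:)';              % dot products G(x_j,x) for x = X(1,:)
f = coeff'*G + Mdl.Bias                      % f(x) = sum_j alpha_j*y_j*G(x_j,x) + b

[~,score] = predict(Mdl,X(1,:));
score(2)                                     % positive-class score; matches f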

Examples

Estimate In-Sample Classification Margins of SVM Classifiers

Load the ionosphere data set.

load ionosphere

Train an SVM classifier. It is good practice to specify the class order and standardize the data.

SVMModel = fitcsvm(X,Y,'ClassNames',{'b','g'},'Standardize',true);

SVMModel is a ClassificationSVM classifier. The negative class is 'b' and the positive class is 'g'.

Estimate the in-sample classification margins.

m = resubMargin(SVMModel);
m(10:20)
ans =

    5.5618
    4.2924
    1.9994
    4.5524
   -1.4892
    3.2813
    4.0256
    4.5420
   16.4434
    2.0001
   23.3757

An observation's margin is the observed (true) class score minus the maximum false-class score. Classifiers that yield relatively large margins are desirable.

Select SVM Classifier Features by Examining In-Sample Margins

The classifier margins measure, for each observation, the difference between the observed (true) class score and the maximal false-class score. One way to perform feature selection is to compare in-sample margins from multiple models. Based solely on this criterion, the best model is the one that yields the highest margins.

Load the ionosphere data set. Define two data sets:

  • fullX contains all predictors (except the removed column of 0s).

  • partX contains the last 21 predictors.

load ionosphere
fullX = X;
partX = X(:,end-20:end);

Train SVM classifiers for each predictor set.

FullSVMModel = fitcsvm(fullX,Y);
PartSVMModel = fitcsvm(partX,Y);

Estimate the in-sample margins for each classifier.

fullMargins = resubMargin(FullSVMModel);
partMargins = resubMargin(PartSVMModel);
n = size(X,1);
p = sum(fullMargins < partMargins)/n
p =

    0.2194

Approximately 22% of the margins from the full model are less than those from the model with fewer predictors. This suggests that the model trained using all of the predictors is better.
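Because the edge is the weighted mean of the margins (see Definitions), you can also summarize the comparison in a single number per model, as in this short sketch:

fullEdge = resubEdge(FullSVMModel)
partEdge = resubEdge(PartSVMModel)

Under this criterion, the classifier with the higher edge is preferred.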

Algorithms

For binary classification, the software defines the margin for observation j, mj, as

m_j = 2 y_j f(x_j),

where y_j ∈ {-1,1}, and f(x_j) is the predicted score of observation j for the positive class. However, the literature commonly uses m_j = y_j f(x_j) to define the margin.
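You can check this definition numerically against resubMargin. A sketch, assuming the SVMModel from the first example, whose positive class 'g' occupies the second score column:

[~,score] = resubPredict(SVMModel);
y = 2*strcmp(SVMModel.Y,'g') - 1;          % y_j in {-1,1}
mCheck = 2*y.*score(:,2);                  % m_j = 2*y_j*f(x_j)
max(abs(mCheck - resubMargin(SVMModel)))   % approximately zero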

References

[1] Cristianini, N., and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, UK: Cambridge University Press, 2000.

[2] Hu, Q., X. Che, L. Zhang, and D. Yu. "Feature Evaluation and Selection Based on Neighborhood Soft Margin." Neurocomputing. Vol. 73, 2010, pp. 2114–2124.
