edge

Class: CompactClassificationSVM

Classification edge for support vector machine classifiers

Syntax

  • e = edge(SVMModel,X,Y)
  • e = edge(SVMModel,X,Y,Name,Value)

Description

e = edge(SVMModel,X,Y) returns the classification edge (e) for the support vector machine (SVM) classifier SVMModel using predictor data X and class labels Y.

e = edge(SVMModel,X,Y,Name,Value) computes the classification edge with additional options specified by one or more Name,Value pair arguments.

Input Arguments

SVMModel — SVM classifier
ClassificationSVM classifier | CompactClassificationSVM classifier

SVM classifier, specified as a ClassificationSVM classifier or CompactClassificationSVM classifier returned by fitcsvm or compact, respectively.

X — Predictor data
numeric matrix

Predictor data, specified as a numeric matrix.

Each row of X corresponds to one observation (also known as an instance or example), and each column corresponds to one variable (also known as a feature). The variables in the columns of X must be the same as the variables used to train the SVMModel classifier.

The length of Y and the number of rows of X must be equal.

If you set 'Standardize',true in fitcsvm to train SVMModel, then the software standardizes the columns of X using the corresponding means in SVMModel.Mu and standard deviations in SVMModel.Sigma.
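
For instance, a minimal sketch of this behavior (assuming X and Y from the ionosphere data set used in the examples below):

SVMModel = fitcsvm(X,Y,'Standardize',true); % training stores the standardization parameters
SVMModel.Mu(1:5)        % per-predictor training means
SVMModel.Sigma(1:5)     % per-predictor training standard deviations
e = edge(SVMModel,X,Y)  % pass raw X; edge standardizes it internally using Mu and Sigma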

Data Types: double | single

Y — Class labels
categorical array | character array | logical vector | vector of numeric values | cell array of strings

Class labels, specified as a categorical or character array, logical or numeric vector, or cell array of strings. The data type of Y must be the same as the data type of SVMModel.ClassNames.

The length of Y and the number of rows of X must be equal.

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

'Weights' — Observation weights
ones(size(X,1),1) (default) | numeric vector

Observation weights, specified as the comma-separated pair consisting of 'Weights' and a numeric vector. The length of Weights must equal the number of rows of X, that is, size(X,1).

If you supply weights, edge computes the weighted classification edge.

Output Arguments

e — Classification edge
scalar

Classification edge, returned as a scalar. e represents the (weighted) mean of the classification margins.

Definitions

Classification Edge

The edge is the weighted mean of the classification margins.

The weights are the prior class probabilities. If you supply weights, then the software normalizes them to sum to the prior probabilities in the respective classes. The software uses the renormalized weights to compute the weighted mean.

One way to choose among multiple classifiers, e.g., to perform feature selection, is to choose the classifier that yields the highest edge.
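
As a minimal sketch of this definition (assuming the CompactSVMModel, XTest, and YTest variables from the first example below), renormalize unit observation weights so that they sum to the prior probability of each class, and then take the weighted mean of the margins:

m = margin(CompactSVMModel,XTest,YTest); % per-observation classification margins
w = ones(size(XTest,1),1);               % default unit observation weights
for k = 1:numel(CompactSVMModel.ClassNames)
    inClass = strcmp(YTest,CompactSVMModel.ClassNames{k});
    w(inClass) = w(inClass)*CompactSVMModel.Prior(k)/sum(w(inClass));
end
eManual = sum(w.*m)/sum(w)               % matches edge(CompactSVMModel,XTest,YTest)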

Classification Margins

The classification margins are, for each observation, the difference between the score for the true class and the maximal score for the false classes. Provided that they are on the same scale, margins serve as a classification confidence measure, i.e., among multiple classifiers, those that yield larger margins are better [2].
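
As a sketch of this definition for a binary SVM (assuming a trained model SVMModel, test data XTest and YTest, and the positive class listed second in SVMModel.ClassNames), you can recover the margins from the predict scores:

[~,scores] = predict(SVMModel,XTest);               % one score column per class, ordered as ClassNames
isPos = strcmp(YTest,SVMModel.ClassNames{2});       % observations whose true class is the second class
sTrue  = scores(:,2).*isPos + scores(:,1).*~isPos;  % score of each observation's true class
sFalse = scores(:,1).*isPos + scores(:,2).*~isPos;  % score of the other (false) class
m = sTrue - sFalse;                                 % matches margin(SVMModel,XTest,YTest)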

Score

The SVM score for classifying observation x is the signed distance from x to the decision boundary, ranging from -∞ to +∞. A positive score for a class indicates that x is predicted to be in that class; a negative score indicates otherwise.

The score is also the numerical predicted response for x, f(x), computed by the trained SVM classification function

f(x) = \sum_{j=1}^{n} \alpha_j y_j G(x_j,x) + b,

where (\alpha_1,\ldots,\alpha_n,b) are the estimated SVM parameters, G(x_j,x) is the dot product in the predictor space between x and the support vectors, and the sum includes the training set observations.
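
For a linear kernel, a minimal sketch of this computation (assuming SVMModel was trained by fitcsvm with 'Standardize',true, that x is a single observation stored as a row vector, and using the documented CompactClassificationSVM properties):

xs = (x - SVMModel.Mu)./SVMModel.Sigma; % standardize x as the software does internally
G  = SVMModel.SupportVectors*xs';       % linear kernel: dot products G(x_j,x) with each support vector
f  = sum(SVMModel.Alpha.*SVMModel.SupportVectorLabels.*G) + SVMModel.Bias % score f(x)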

Examples

Estimate the Test Sample Edge of SVM Classifiers

Load the ionosphere data set.

load ionosphere
rng(1); % For reproducibility

Train an SVM classifier. Specify a 15% holdout sample for testing. It is good practice to specify the class order and standardize the data.

CVSVMModel = fitcsvm(X,Y,'Holdout',0.15,'ClassNames',{'b','g'},...
    'Standardize',true);
CompactSVMModel = CVSVMModel.Trained{1}; % Extract trained, compact classifier
testInds = test(CVSVMModel.Partition);   % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);

CVSVMModel is a ClassificationPartitionedModel classifier. It contains the property Trained, which is a 1-by-1 cell array holding a CompactClassificationSVM classifier that the software trained using the training set.

Estimate the test sample edge.

e = edge(CompactSVMModel,XTest,YTest)
e =

    5.0765

The estimated test sample margin average is approximately 5.

Estimate the Test Sample Weighted Margin Mean of SVM Classifiers

Load the ionosphere data set.

load ionosphere
rng(1); % For reproducibility

Suppose that the observations were measured sequentially, and that the last 150 observations are of better quality due to a technology upgrade. One way to incorporate this improvement is to weight the better quality observations more than the other observations.

Define a weight vector that weights the better quality observations twice as much as the other observations.

n = size(X,1);
weights = [ones(n-150,1);2*ones(150,1)];

Train an SVM classifier. Specify the weighting scheme and a 15% holdout sample for testing. It is good practice to specify the class order and standardize the data.

CVSVMModel = fitcsvm(X,Y,'Weights',weights,'Holdout',0.15,...
    'ClassNames',{'b','g'},'Standardize',true);
CompactSVMModel = CVSVMModel.Trained{1};
testInds = test(CVSVMModel.Partition);   % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);
wTest = weights(testInds,:);

CVSVMModel is a trained ClassificationPartitionedModel classifier. It contains the property Trained, which is a 1-by-1 cell array holding a CompactClassificationSVM classifier that the software trained using the training set.

Estimate the test sample weighted edge using the weighting scheme.

e = edge(CompactSVMModel,XTest,YTest,'Weights',wTest)
e =

    4.8339

The test sample weighted average margin is approximately 5.

Select SVM Classifier Features by Comparing Test Sample Edges

The classifier edge measures the average of the classifier margins. One way to perform feature selection is to compare test sample edges from multiple models. Based solely on this criterion, the classifier with the highest edge is the best classifier.

Load the ionosphere data set.

load ionosphere
rng(1); % For reproducibility

Partition the data set into training and test sets. Specify a 15% holdout sample for testing.

Partition = cvpartition(Y,'Holdout',0.15);
testInds = test(Partition); % Indices for the test set
XTest = X(testInds,:);
YTest = Y(testInds,:);

Partition defines the data set partition.

Define these two data sets:

  • fullX contains all of the predictors.

  • partX contains the last 21 predictors.

fullX = X;
partX = X(:,end-20:end);

Train SVM classifiers for each predictor set. Specify the partition definition.

FullCVSVMModel = fitcsvm(fullX,Y,'CVPartition',Partition);
PartCVSVMModel = fitcsvm(partX,Y,'CVPartition',Partition);
FCSVMModel = FullCVSVMModel.Trained{1};
PCSVMModel = PartCVSVMModel.Trained{1};

FullCVSVMModel and PartCVSVMModel are ClassificationPartitionedModel classifiers. They contain the property Trained, which is a 1-by-1 cell array holding a CompactClassificationSVM classifier that the software trained using the training set.

Estimate the test sample edge for each classifier.

fullEdge = edge(FCSVMModel,XTest,YTest)
partEdge = edge(PCSVMModel,XTest(:,end-20:end),YTest)
fullEdge =

    2.8320


partEdge =

    1.5540

The edge for the classifier trained on the complete data set is greater, suggesting that the classifier trained using all of the predictors is better.

Algorithms

For binary classification, the software defines the margin for observation j, m_j, as

m_j = 2 y_j f(x_j),

where y_j \in \{-1,1\} and f(x_j) is the predicted score of observation j for the positive class. However, the literature commonly uses m_j = y_j f(x_j) to define the margin.
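
A short sketch of this convention (assuming a trained binary model SVMModel, test data XTest and YTest, and the positive class listed second in SVMModel.ClassNames):

[~,scores] = predict(SVMModel,XTest);
f = scores(:,2);                                % predicted positive-class score f(x_j)
y = 2*strcmp(YTest,SVMModel.ClassNames{2}) - 1; % map the class labels to {-1,1}
m = 2*y.*f;                                     % equals margin(SVMModel,XTest,YTest)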

References

[1] Cristianini, N., and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, UK: Cambridge University Press, 2000.

[2] Hu, Q., X. Che, L. Zhang, and D. Yu. "Feature Evaluation and Selection Based on Neighborhood Soft Margin." Neurocomputing. Vol. 73, 2010, pp. 2114–2124.
