predict

Class: CompactClassificationSVM

Predict classification

Syntax

  • label = predict(SVMModel,X)
  • [label,Score] = predict(SVMModel,X)

Description

label = predict(SVMModel,X) returns a vector of predicted class labels for predictor data X, based on the trained, full, or compact SVM classifier SVMModel.

[label,Score] = predict(SVMModel,X) additionally returns class likelihood measures, i.e., either scores or posterior probabilities.

Input Arguments

SVMModel — SVM classifier
ClassificationSVM classifier | CompactClassificationSVM classifier

SVM classifier that was trained using fitcsvm, specified as a ClassificationSVM or CompactClassificationSVM classifier.

X — Predictor data
matrix of numeric values

Predictor data, specified as a matrix of numeric values.

Each row of X corresponds to one observation (also known as an instance or example), and each column corresponds to one variable (also known as a feature). The variables in the columns of X must be the same variables that were used to train SVMModel.

The number of columns of X must equal the number of predictor variables used to train SVMModel.

If you set 'Standardize',true in fitcsvm to train SVMModel, then the software standardizes the columns of X using the corresponding means in SVMModel.Mu and standard deviations in SVMModel.Sigma.
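
For reference, this internal standardization is equivalent to the following sketch, assuming SVMModel was trained with 'Standardize',true:

mu = SVMModel.Mu;        % column means of the training data
sigma = SVMModel.Sigma;  % column standard deviations of the training data
XStd = bsxfun(@rdivide,bsxfun(@minus,X,mu),sigma);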

Data Types: double | single

Output Arguments

label — Predicted class labels
categorical array | character array | logical vector | vector of numeric values | cell array of strings

Predicted class labels, returned as a categorical or character array, logical or numeric vector, or cell array of strings.

label:

  • Is the same data type as the observed class labels (Y) that trained SVMModel

  • Has length equal to the number of rows of X
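
For example, you can verify both properties after prediction. This is a quick check, assuming Y is the response data used to train SVMModel:

isa(label,class(Y))        % same type as the training labels
numel(label) == size(X,1)  % one predicted label per observation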

Score — Predicted class likelihoods
numeric matrix

Predicted class likelihoods, returned as a numeric matrix.

If the ScoreTransform property of SVMModel is none, then Score is a matrix of scores indicating the likelihood that an observation comes from a particular class.

  • For one-class learning, Score has the same number of rows as X, and one column. The elements are the positive class scores for the corresponding observations.

  • For two-class learning, Score has the same number of rows as X, and two columns. The elements of the first column of Score are the negative class (SVMModel.ClassNames{1}) scores, and the elements of the second column are the positive class (SVMModel.ClassNames{2}) scores for the corresponding observations. The predicted label corresponds to the column of Score with the maximal value (see the sketch below).

If the ScoreTransform property of SVMModel is not none, then Score is a matrix of posterior probabilities that an observation comes from a particular class.

Data Types: double | single
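
As an illustration, this sketch recovers the predicted labels from Score in two-class learning. It mirrors, but is not, the internal implementation:

[~,maxIdx] = max(Score,[],2);              % column index of the larger score
manualLabel = SVMModel.ClassNames(maxIdx); % should match label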

Definitions

Score

The SVM score for classifying observation x is the numerical predicted response, f(x), computed by the trained SVM classification function

    f(x) = \sum_{j=1}^{n} \alpha_j y_j G(x_j,x) + b,

where (\alpha_1,\ldots,\alpha_n,b) are the estimated SVM parameters, G(x_j,x) is the dot product in the predictor space between x and the support vectors, and the sum includes the training set observations.

Posterior Probability

The probability that an observation belongs in a particular class, given the data.

For SVM, the posterior probability is a function of the score, P(s), giving the probability that observation j is in class k ∈ {–1,+1}.

  • For separable classes, the posterior probability is the step function

        P(s_j) = \begin{cases} 0 & \text{if } s_j < \max_{y_k=-1} s_k \\ \pi & \text{if } \max_{y_k=-1} s_k \le s_j \le \min_{y_k=+1} s_k \\ 1 & \text{if } s_j > \min_{y_k=+1} s_k \end{cases}

    where:

    • s_j is the score of observation j.

    • +1 and –1 denote the positive and negative classes, respectively.

    • π is the prior probability that an observation is in the positive class.

  • For inseparable classes, the posterior probability is the sigmoid function (see the sketch after this list)

        P(s_j) = \frac{1}{1 + \exp(A s_j + B)},

    where the parameters A and B are the slope and intercept parameters, respectively.

  • For one-class learning, the posterior probability is 1 or 0. A posterior probability of 1 indicates that the observation is from the positive class, and 0 indicates that it is not from the positive class.
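
As an illustration of the sigmoid case, a fitted transformation maps raw scores to posterior probabilities as in this sketch. The values of A and B here are hypothetical; fitPosterior estimates them for you:

A = -1.97; B = 0.31;          % hypothetical fitted slope and intercept
s = [-2 0 2];                 % raw positive class scores
post = 1./(1 + exp(A*s + B))  % posterior probabilities of the positive class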

Prior Probability

The prior probability of a class is the believed relative frequency with which observations from that class occur in the population.

Examples

Label Test Sample Observations of SVM Classifiers

Load the ionosphere data set.

load ionosphere
rng(1); % For reproducibility

Train an SVM classifier. Specify a 15% holdout sample for testing. It is good practice to specify the class order and standardize the data.

CVSVMModel = fitcsvm(X,Y,'Holdout',0.15,'ClassNames',{'b','g'},...
    'Standardize',true);
CompactSVMModel = CVSVMModel.Trained{1}; % Extract trained, compact classifier
testInds = test(CVSVMModel.Partition);   % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);

CVSVMModel is a ClassificationPartitionedModel classifier. It contains the property Trained, which is a 1-by-1 cell array holding a CompactClassificationSVM classifier that the software trained using the training set.

Label the test sample observations. Display the results for the first 10 observations in the test sample.

[label,score] = predict(CompactSVMModel,XTest);
table(YTest(1:10),label(1:10),score(1:10,2),'VariableNames',...
    {'TrueLabel','PredictedLabel','Score'})
ans = 

    TrueLabel    PredictedLabel     Score  
    _________    ______________    ________

    'b'          'b'                -1.7178
    'g'          'g'                 2.0003
    'b'          'b'                -9.6847
    'g'          'g'                 2.5619
    'b'          'b'                -1.5481
    'g'          'g'                 2.0984
    'b'          'b'                -2.7017
    'b'          'b'               -0.66307
    'g'          'g'                 1.6047
    'g'          'g'                 1.7731
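
To assess performance over the entire test sample rather than the first 10 observations, you can compute the misclassification rate. This is a quick check using the variables above:

testErr = sum(~strcmp(label,YTest))/numel(YTest)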

Predict Labels and Posterior Probabilities of SVM Classifiers

A goal of classification is to predict labels of new observations using a trained algorithm. Many applications train algorithms on large data sets, which can consume resources that are better allocated elsewhere. This example shows how to efficiently label new observations using an SVM classifier.

Load the ionosphere data set. Suppose that the last 10 observations become available after training the SVM classifier.

load ionosphere

n = size(X,1);       % Training sample size
isInds = 1:(n-10);   % In-sample indices
oosInds = (n-9):n;   % Out-of-sample indices

Train an SVM classifier. It is good practice to standardize the predictors and specify the order of the classes. Conserve memory by reducing the size of the trained SVM classifier.

SVMModel = fitcsvm(X(isInds,:),Y(isInds),'Standardize',true,...
    'ClassNames',{'b','g'});
CompactSVMModel = compact(SVMModel);
whos('SVMModel','CompactSVMModel')
  Name                 Size             Bytes  Class                                                 Attributes

  CompactSVMModel      1x1              29562  classreg.learning.classif.CompactClassificationSVM              
  SVMModel             1x1             137539  ClassificationSVM                                               

The positive class is 'g'. The CompactClassificationSVM classifier (CompactSVMModel) uses less space than the ClassificationSVM classifier (SVMModel) because the latter stores the data.

Estimate the optimal score-to-posterior-probability-transformation function.

CompactSVMModel = fitPosterior(CompactSVMModel,...
    X(isInds,:),Y(isInds))
CompactSVMModel = 

  classreg.learning.classif.CompactClassificationSVM
         PredictorNames: {1x34 cell}
           ResponseName: 'Y'
             ClassNames: {'b'  'g'}
         ScoreTransform: '@(S)sigmoid(S,-1.968351e+00,3.122242e-01)'
                  Alpha: [88x1 double]
                   Bias: -0.2142
       KernelParameters: [1x1 struct]
                     Mu: [1x34 double]
                  Sigma: [1x34 double]
         SupportVectors: [88x34 double]
    SupportVectorLabels: [88x1 double]


The optimal score transformation function (CompactSVMModel.ScoreTransform) is the sigmoid function because the classes are inseparable.

Predict the out-of-sample labels and positive class posterior probabilities. Since true labels are available, compare them with the predicted labels.

[labels,postProbs] = predict(CompactSVMModel,X(oosInds,:));
table(Y(oosInds),labels,postProbs(:,2),'VariableNames',...
    {'TrueLabels','PredictedLabels','PosteriorProbabilities'})
ans = 

    TrueLabels    PredictedLabels    PosteriorProbabilities
    __________    _______________    ______________________

    'g'           'g'                0.98419               
    'g'           'g'                0.95545               
    'g'           'g'                0.67792               
    'g'           'g'                0.94447               
    'g'           'g'                0.98744               
    'g'           'g'                 0.9248               
    'g'           'g'                 0.9711               
    'g'           'g'                0.96986               
    'g'           'g'                0.97803               
    'g'           'g'                0.94361               

postProbs is a 10-by-2 matrix; its first column contains the negative class posterior probabilities, and its second column contains the positive class posterior probabilities for the new observations.
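
If you later need raw scores instead of posterior probabilities, one option is to reset the ScoreTransform property on a copy of the model, as in this sketch:

RawSVMModel = CompactSVMModel;
RawSVMModel.ScoreTransform = 'none';               % remove the sigmoid transform
[~,rawScores] = predict(RawSVMModel,X(oosInds,:)); % second column: positive class scores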

Plot Posterior Probability Regions for SVM Classifiers

Load Fisher's iris data set. Train the classifier using the petal lengths and widths, and remove the virginica species from the data.

load fisheriris
classKeep = ~strcmp(species,'virginica');
X = meas(classKeep,3:4);
y = species(classKeep);

Train an SVM classifier using the data. It is good practice to specify the order of the classes.

SVMModel = fitcsvm(X,y,'ClassNames',{'setosa','versicolor'});

Estimate the optimal score transformation function.

rng(1); % For reproducibility
[SVMModel,ScoreParameters] = fitPosterior(SVMModel);
ScoreParameters
Warning: Classes are perfectly separated. The optimal score-to-posterior
transformation is a step function. 

ScoreParameters = 

                        Type: 'step'
                  LowerBound: -0.8431
                  UpperBound: 0.6897
    PositiveClassProbability: 0.5000

The optimal score transformation function is the step function because the classes are separable. The fields LowerBound and UpperBound of ScoreParameters indicate the lower and upper end points of the interval of scores corresponding to observations within the class-separating hyperplanes (the margin). No training observation falls within the margin. If a new score is in the interval, then the software assigns the corresponding observation a positive class posterior probability, i.e., the value in the PositiveClassProbability field of ScoreParameters.
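
This sketch shows how the step transformation maps hypothetical new scores, using the ScoreParameters structure above:

s = [-1 0 1];                                  % hypothetical new scores
post = double(s > ScoreParameters.UpperBound); % 1 above the margin, 0 below
inMargin = s >= ScoreParameters.LowerBound & s <= ScoreParameters.UpperBound;
post(inMargin) = ScoreParameters.PositiveClassProbability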

Define a grid of values in the observed predictor space. Predict the posterior probabilities for each instance in the grid.

xMax = max(X);
xMin = min(X);
h = 0.01;
[x1Grid,x2Grid] = meshgrid(xMin(1):h:xMax(1),xMin(2):h:xMax(2));

[~,PosteriorRegion] = predict(SVMModel,[x1Grid(:),x2Grid(:)]);

Plot the positive class posterior probability region and the training data.

contourf(x1Grid,x2Grid,...
        reshape(PosteriorRegion(:,2),size(x1Grid,1),size(x1Grid,2)));
hcb = colorbar;
set(get(hcb,'YLabel'),'String','P({\it versicolor})','FontSize',16);
hold on
gscatter(X(:,1),X(:,2),y,'mc','.x',[15,10])
sv = X(SVMModel.IsSupportVector,:);
plot(sv(:,1),sv(:,2),'ro','MarkerSize',15,'LineWidth',2);
axis tight
hold off

In two-class learning, if the classes are separable, then there are three regions: one where observations have positive class posterior probability 0, one where it is 1, and one where it equals the positive class prior probability.

Algorithms

  • By default, the software computes optimal posterior probabilities using Platt's method [1]:

    1. Performing 10-fold cross validation

    2. Fitting the sigmoid function parameters to the scores returned from the cross validation

    3. Estimating the posterior probabilities by entering the cross-validation scores into the fitted sigmoid function

  • The software incorporates prior probabilities in the SVM objective function during training.

  • For SVM, predict classifies observations into the class yielding the largest score (i.e., the largest posterior probability). The software accounts for misclassification costs by applying the average-cost correction before training the classifier. That is, given the class prior vector P, misclassification cost matrix C, and observation weight vector w, the software defines a new vector of observation weights (W) such that

        W_j = w_j P_{y_j} \sum_{k=1}^{K} C_{y_j,k},

    where y_j is the class of observation j and K is the number of classes.
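
A numeric sketch of this reweighting; all values here are hypothetical:

P = [0.6;0.4];                    % class prior probabilities
C = [0 1;2 0];                    % misclassification cost matrix
w = ones(4,1);                    % original observation weights
yIdx = [1;2;1;2];                 % class index of each observation
W = w.*P(yIdx).*sum(C(yIdx,:),2)  % reweighted observations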

References

[1] Platt, J. "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods." In Advances in Large Margin Classifiers. MIT Press, 1999, pages 61–74.

See Also

ClassificationSVM | CompactClassificationSVM | fitcsvm | fitPosterior
