
**Class:** CompactClassificationSVM

Predict labels using support vector machine classification model

`[label,score] = predict(SVMModel,X)` also returns a matrix of scores (`score`) indicating the likelihood that a label comes from a particular class. For SVM, likelihood measures are either classification scores or class posterior probabilities. For each observation in `X`, the predicted class label corresponds to the maximum score among all classes.
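As a sketch of this decision rule only (not MATLAB's implementation), the predicted label is the per-row maximum of the score matrix; the class names and scores below are illustrative:

```python
# Sketch of the prediction rule: each row of `scores` holds one
# observation's scores for each class; the predicted label is the
# class with the maximum score. Class names and values are made up.
classes = ["setosa", "versicolor", "virginica"]

scores = [
    [0.9, -0.2, -1.1],
    [-0.4, 0.3, 0.1],
]

labels = [classes[max(range(len(row)), key=row.__getitem__)] for row in scores]
print(labels)  # ['setosa', 'versicolor']
```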

Code Generation support: Yes.

MATLAB Function Block support: Yes.

The SVM *classification score* for
classifying observation *x* is the signed distance
from *x* to the decision boundary, ranging from –∞
to +∞. A positive score for a class indicates that *x* is
predicted to be in that class; a negative score indicates otherwise.

The score for predicting *x* into the positive
class, which is also the numerical predicted response for *x*, $$f(x)$$, is the trained SVM classification
function

$$f(x)=\sum_{j=1}^{n}\alpha_j y_j G(x_j,x)+b,$$

where $$(\alpha_1,\ldots,\alpha_n,b)$$ are
the estimated SVM parameters, $$G(x_j,x)$$ is
the dot product in the predictor space between *x* and
the support vectors, and the sum includes the training set observations.
The score for predicting *x* into the negative class
is –*f*(*x*).
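The kernel sum above can be sketched numerically. This is an illustration of the formula, not MATLAB's implementation; the Gaussian kernel, support vectors, and coefficients below are all made-up examples:

```python
import math

# Sketch of the SVM score f(x) = sum_j alpha_j * y_j * G(x_j, x) + b,
# using an illustrative Gaussian (RBF) kernel; all values are made up.
def rbf_kernel(u, v, scale=1.0):
    return math.exp(-sum((a - c) ** 2 for a, c in zip(u, v)) / scale ** 2)

def svm_score(x, support_vectors, alphas, ys, b, kernel=rbf_kernel):
    return sum(a * y * kernel(sv, x)
               for a, y, sv in zip(alphas, ys, support_vectors)) + b

support_vectors = [[0.0, 0.0], [1.0, 1.0]]
alphas = [0.5, 0.5]   # estimated alpha_j (nonzero only for support vectors)
ys = [1, -1]          # class labels in {-1, +1}
b = 0.1

f = svm_score([0.0, 0.1], support_vectors, alphas, ys, b)
# f > 0 here, so this observation is predicted into the positive class;
# the score for the negative class is -f.
```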

If *G*(*x _{j}*, *x*) = *x _{j}*′*x* (the linear kernel), then the score function reduces to

$$f\left(x\right)=\left(x/s\right)\prime \beta +b.$$

*s* is
the kernel scale and *β* is the vector of fitted
linear coefficients.
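The reduction can be checked numerically: with the scaled linear kernel *G*(*x _{j}*, *x*) = (*x _{j}*/*s*)′(*x*/*s*), the kernel sum equals (*x*/*s*)′*β* + *b* with *β* = Σ _{j} *α _{j}* *y _{j}* *x _{j}*/*s*. A sketch with made-up values:

```python
# Numerical check (illustrative values) that with the linear kernel
# G(x_j, x) = (x_j/s)'(x/s), the kernel-sum score equals (x/s)'beta + b,
# where beta = sum_j alpha_j * y_j * x_j / s.
s = 2.0                              # kernel scale
b = -0.05
support_vectors = [[1.0, 2.0], [3.0, -1.0]]
alphas = [0.7, 0.2]
ys = [1, -1]
x = [0.5, 1.5]

def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

# Kernel-sum form: sum_j alpha_j y_j (x_j/s)'(x/s) + b
f_kernel = sum(a * y * dot([v / s for v in sv], [v / s for v in x])
               for a, y, sv in zip(alphas, ys, support_vectors)) + b

# Linear form: (x/s)'beta + b
beta = [sum(a * y * sv[i] / s for a, y, sv in zip(alphas, ys, support_vectors))
        for i in range(len(x))]
f_linear = dot([v / s for v in x], beta) + b

# The two forms agree up to floating-point rounding.
assert abs(f_kernel - f_linear) < 1e-12
```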

The *posterior probability* is the probability that an observation belongs in a particular class, given the data.

For SVM, the posterior probability is a function of the score, *P*(*s*),
that observation *j* is in class *k* =
{-1,1}.

For separable classes, the posterior probability is the step function

$$P(s_j)=\begin{cases}0, & s_j<\underset{y_k=-1}{\max} s_k\\ \pi, & \underset{y_k=-1}{\max} s_k\le s_j\le \underset{y_k=+1}{\min} s_k\\ 1, & s_j>\underset{y_k=+1}{\min} s_k,\end{cases}$$

where:

*s _{j}* is the score of observation *j*.

+1 and –1 denote the positive and negative classes, respectively.

*π* is the prior probability that an observation is in the positive class.
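The step function above can be sketched directly. This is an illustration with made-up scores, not MATLAB's implementation:

```python
# Sketch of the step-function posterior for separable classes.
# neg_scores / pos_scores are illustrative training scores for the
# negative and positive classes; pi_prior is the positive-class prior.
def step_posterior(s_j, neg_scores, pos_scores, pi_prior):
    lo = max(neg_scores)          # max over s_k with y_k = -1
    hi = min(pos_scores)          # min over s_k with y_k = +1
    if s_j < lo:
        return 0.0
    if s_j > hi:
        return 1.0
    return pi_prior               # scores falling between the two classes

neg_scores = [-2.0, -1.2]
pos_scores = [0.8, 1.5]
print(step_posterior(-3.0, neg_scores, pos_scores, 0.5))  # 0.0
print(step_posterior(0.0, neg_scores, pos_scores, 0.5))   # 0.5
print(step_posterior(2.0, neg_scores, pos_scores, 0.5))   # 1.0
```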

For inseparable classes, the posterior probability is the sigmoid function

$$P({s}_{j})=\frac{1}{1+\mathrm{exp}(A{s}_{j}+B)},$$

where *A* and *B* are the slope and intercept parameters.

The *prior probability* of a class is
the believed relative frequency with which observations from that class occur
in the population.

By default, the software computes optimal posterior probabilities using Platt's method [1]:

1. Performing 10-fold cross-validation

2. Fitting the sigmoid function parameters to the scores returned from the cross-validation

3. Estimating the posterior probabilities by entering the cross-validation scores into the fitted sigmoid function
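A rough sketch of the sigmoid-fitting step, using plain gradient descent rather than the optimizer Platt's method actually uses; the cross-validation scores, labels, and learning rate below are all made up:

```python
import math

# Illustrative cross-validation scores and labels (+1/-1); not real data.
scores = [-2.1, -1.3, -0.4, 0.2, 0.9, 1.8]
labels = [-1, -1, -1, 1, 1, 1]
targets = [1.0 if y == 1 else 0.0 for y in labels]

def sigmoid(s, A, B):
    return 1.0 / (1.0 + math.exp(A * s + B))

# Minimize the cross-entropy between targets and P(s_j) = 1/(1+exp(A s + B))
# by gradient descent on A and B (a simplification of Platt's fitting step).
A, B, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    gA = gB = 0.0
    for s, t in zip(scores, targets):
        p = sigmoid(s, A, B)
        gA += (p - t) * (-s)      # d(cross-entropy)/dA
        gB += (p - t) * (-1.0)    # d(cross-entropy)/dB
    A -= lr * gA
    B -= lr * gB

# A comes out negative, so larger scores map to larger posteriors P(s).
```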

The software incorporates prior probabilities in the SVM objective function during training.

For SVM, `predict` classifies observations into the class yielding the largest score (that is, the largest posterior probability). The software accounts for misclassification costs by applying the average-cost correction before training the classifier. That is, given the class prior vector *P*, misclassification cost matrix *C*, and observation weight vector *w*, the software defines a new vector of observation weights (*W*) such that

$$W_j = w_j P_j \sum_{k=1}^{K} C_{jk}.$$
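The weight correction is a simple elementwise computation. A sketch with illustrative priors, costs, and weights for *K* = 2 classes (one observation per class, for brevity):

```python
# Sketch of the average-cost weight correction W_j = w_j * P_j * sum_k C_jk,
# with illustrative priors P, cost matrix C, and weights w for K = 2 classes.
P = [0.6, 0.4]                     # class priors
C = [[0.0, 1.0],                   # C[j][k]: cost of classifying class j as k
     [2.0, 0.0]]
w = [1.0, 1.0]                     # original observation weights

W = [w[j] * P[j] * sum(C[j]) for j in range(len(P))]
print(W)  # [0.6, 0.8]
```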

`predict` generates reference C code. Notes and limitations for code generation include:

You must call `predict` within a function that you declare (that is, you cannot call `predict` at the top level).

This table contains input-and-output-argument notes and limitations.

| Argument | Notes and Limitations |
| --- | --- |
| `SVMModel` | You must load the model using `loadCompactModel` within a function that you declare. Must be a compile-time constant; that is, its value cannot change while `codegen` generates the code. |
| `X` | Must be a single- or double-precision matrix and can be variable sized. However, the number of columns in `X` must be `numel(Mdl.PredictorNames)`. Rows and columns must correspond to observations and predictors, respectively. |
| `score` | Returned as the same data type as `X`; that is, a single- or double-precision matrix. |

For code generation notes and limitations on `Mdl`

,
see Code Generation Support, Usage Notes, and Limitations.

You can use this function in the MATLAB^{®} Function Block
in Simulink^{®}.

[1] Platt, J. "Probabilistic outputs
for support vector machines and comparisons to regularized likelihood
methods." *In Advances in Large Margin Classifiers*.
MIT Press, 1999, pages 61–74.
