
**Class:** `CompactClassificationSVM`

Predict labels using support vector machine classification model

`label = predict(SVMModel,X)`

`[label,score] = predict(SVMModel,X)`

`[label,score] = predict(SVMModel,X)` also returns a matrix of scores (`score`), indicating the likelihood that a label comes from a particular class. For SVM, likelihood measures are either classification scores or class posterior probabilities. For each observation in `X`, the predicted class label corresponds to the maximum score among all classes.

By default and irrespective of the model kernel function, MATLAB^{®} uses the dual representation of the score function to classify observations based on trained SVM models, specifically

$$\widehat{f}(x)=\sum_{j=1}^{n}\widehat{\alpha}_{j}y_{j}G(x,x_{j})+\widehat{b}.$$
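The dual-form score above can be sketched as follows. This is an illustrative NumPy example, not MathWorks code; the Gaussian kernel, toy support vectors, and coefficient values are assumptions chosen only to show how the sum over support vectors is evaluated.

```python
# Illustrative sketch (not MathWorks code) of the dual-form SVM score
#   f(x) = sum_j alpha_j * y_j * G(x, x_j) + b
# using a Gaussian (RBF) kernel and a made-up two-support-vector model.
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian kernel G(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def dual_score(x, support_vectors, alpha, y, b, gamma=1.0):
    """Evaluate the dual score: one kernel term per support vector, plus bias."""
    return sum(a * yj * rbf_kernel(x, xj, gamma)
               for a, yj, xj in zip(alpha, y, support_vectors)) + b

# Toy trained model: two support vectors with labels +1 and -1
sv = np.array([[0.0, 0.0], [1.0, 1.0]])
alpha = np.array([0.5, 0.5])
y = np.array([1.0, -1.0])
b = 0.1

print(dual_score(np.array([0.0, 0.0]), sv, alpha, y, b))
```

Note that the cost of one prediction grows with the number of support vectors, which motivates the linear-model shortcut described next.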

This prediction method requires, among other things, the trained support vectors and *α* coefficients (see the `SupportVectors` and `Alpha` properties of the SVM model). If you are using a linear SVM model for classification and there are many support vectors, then this prediction method can be slow. To efficiently classify observations based on a linear SVM model, remove the support vectors from the model object using `discardSupportVectors`. The resulting model uses the simple linear score function for prediction instead, specifically

$$\widehat{f}(x)={x}^{\prime}\widehat{\beta}+\widehat{b}.$$
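For a linear kernel, the dual sum collapses into a single coefficient vector, which is why discarding the support vectors is possible. The sketch below (illustrative only; the support vectors, coefficients, and bias are made up) checks that the collapsed primal score matches the dual score.

```python
# Illustrative sketch: for a linear kernel G(x, x_j) = x'x_j, the dual score
# collapses to the primal form f(x) = x'beta + b with beta = sum_j alpha_j y_j x_j.
# This mirrors what discarding support vectors enables, using made-up values.
import numpy as np

sv = np.array([[0.0, 1.0], [2.0, 0.0]])   # support vectors
alpha = np.array([0.3, 0.7])              # dual coefficients
y = np.array([1.0, -1.0])                 # class labels (+1 / -1)
b = -0.2                                  # bias term

beta = (alpha * y) @ sv                   # collapse: beta = sum_j alpha_j y_j x_j

x = np.array([1.0, 1.0])
dual = np.sum(alpha * y * (sv @ x)) + b   # O(n_sv * d) per observation
primal = x @ beta + b                     # O(d) per observation: no support vectors needed

print(dual, primal)
```

Both expressions give the same score, but the primal form needs only `beta` and `b`, so prediction cost no longer depends on the number of support vectors.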

For more details, see Support Vector Machines for Binary Classification.

By default, the software computes optimal posterior probabilities using Platt's method [1]:

1. Performing 10-fold cross-validation
2. Fitting the sigmoid function parameters to the scores returned from the cross-validation
3. Estimating the posterior probabilities by entering the cross-validation scores into the fitted sigmoid function
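The last step above can be sketched as follows. This is an illustrative example only: the sigmoid parameters `A` and `B` are made-up values, whereas in practice they are fit by maximum likelihood to the cross-validation scores.

```python
# Illustrative sketch of the final step of Platt's method: mapping an SVM
# score s to a posterior probability via a fitted sigmoid
#   P(y = 1 | s) = 1 / (1 + exp(A*s + B)).
# A and B are assumed values here; Platt's method fits them to CV scores.
import math

def platt_posterior(score, A=-2.0, B=0.0):
    """Posterior probability of the positive class given a raw SVM score."""
    return 1.0 / (1.0 + math.exp(A * score + B))

for s in (-1.0, 0.0, 2.0):
    print(round(platt_posterior(s), 4))
```

With these parameters, a score of 0 maps to a posterior of 0.5, and large positive scores map toward 1, preserving the ranking of the raw scores.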

The software incorporates prior probabilities in the SVM objective function during training.

For SVM, `predict` classifies observations into the class yielding the largest score (i.e., the largest posterior probability). The software accounts for misclassification costs by applying the average-cost correction before training the classifier. That is, given the class prior vector *P*, misclassification cost matrix *C*, and observation weight vector *w*, the software defines a new vector of observation weights (*W*) such that

$${W}_{j}={w}_{j}{P}_{j}\sum_{k=1}^{K}{C}_{jk}.$$
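The average-cost correction above can be sketched numerically. The priors, costs, and weights below are made-up values chosen only to illustrate the formula.

```python
# Illustrative sketch of the average-cost correction W_j = w_j * P_j * sum_k C_jk,
# using made-up priors, cost matrix, and weights for a two-class problem.
import numpy as np

P = np.array([0.6, 0.4])        # class prior probabilities P_j
C = np.array([[0.0, 1.0],       # misclassification cost matrix C_jk
              [2.0, 0.0]])      # (misclassifying class 2 costs twice as much)
w = np.array([1.0, 1.0])        # original observation weights w_j

W = w * P * C.sum(axis=1)       # W_j = w_j * P_j * sum_k C_jk
print(W)
```

Here the rarer but more costly class 2 ends up with the larger corrected weight, which is how the cost matrix is folded into training before the classifier is fit.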

[1] Platt, J. "Probabilistic outputs
for support vector machines and comparisons to regularized likelihood
methods." *Advances in Large Margin Classifiers*.
MIT Press, 1999, pages 61–74.

`ClassificationSVM` | `codegen` | `CompactClassificationSVM` | `fitcsvm` | `fitSVMPosterior` | `loadCompactModel` | `loss` | `saveCompactModel`
