`m = margin(SVMModel,X,Y)`


The *classification margins* are,
for each observation, the difference between the score for the true
class and the maximal score for the false classes. Provided that they
are on the same scale, margins serve as a classification confidence
measure; that is, among multiple classifiers, those that yield larger
margins are better [2].
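As a sketch of this definition (in plain Python, with hypothetical score values rather than output from a fitted model), the margin of each observation is its true-class score minus the largest score among the other classes:

```python
# Hypothetical per-class scores for three observations (rows) over
# three classes (columns), all on a common scale.
scores = [
    [2.0, 0.5, -1.0],
    [0.2, 0.9,  0.1],
    [-0.3, 0.4, 1.5],
]
true_class = [0, 1, 2]  # index of the true class for each observation

def classification_margin(row, k):
    """Score of the true class minus the maximal false-class score."""
    false_scores = [s for j, s in enumerate(row) if j != k]
    return row[k] - max(false_scores)

margins = [classification_margin(row, k) for row, k in zip(scores, true_class)]
print([round(m, 4) for m in margins])  # [1.5, 0.7, 1.1]
```

A larger margin means the true class outscored its closest competitor by more, so the classification is more confident.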

The *edge* is the weighted
mean of the *classification margins*.

The weights are the prior class probabilities. If you supply weights, then the software normalizes them to sum to the prior probabilities in the respective classes. The software uses the renormalized weights to compute the weighted mean.
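The renormalization described above can be sketched as follows (hypothetical margins, labels, weights, and priors; this is an illustration of the weighting scheme, not the software's implementation):

```python
# Hypothetical data: margins, class labels, observation weights, and
# prior class probabilities.
margins = [1.5, 0.7, 1.1, -0.2]
labels  = [0, 0, 1, 1]
weights = [1.0, 3.0, 2.0, 2.0]
priors  = {0: 0.5, 1: 0.5}

# Renormalize the weights so that, within each class, they sum to
# that class's prior probability.
class_totals = {}
for y, w in zip(labels, weights):
    class_totals[y] = class_totals.get(y, 0.0) + w
norm_w = [w * priors[y] / class_totals[y] for y, w in zip(labels, weights)]

# The edge is the weighted mean of the margins. Because the priors sum
# to 1, the renormalized weights also sum to 1, so the weighted sum is
# already the weighted mean.
edge = sum(w * m for w, m in zip(norm_w, margins))
print(round(edge, 4))  # 0.675
```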

One way to choose among multiple classifiers, e.g., to perform feature selection, is to choose the classifier that yields the highest edge.

The SVM *score* for classifying
observation *x* is the signed distance from *x* to
the decision boundary, ranging from -∞ to +∞. A positive
score for a class indicates that *x* is predicted
to be in that class; a negative score indicates otherwise.

The score is also the numerical, predicted response for *x*, $$f(x)$$, computed by the trained SVM
classification function

$$f(x)=\sum_{j=1}^{n}{\alpha}_{j}{y}_{j}G({x}_{j},x)+b,$$

where $$({\alpha}_{1},\mathrm{...},{\alpha}_{n},b)$$ are the estimated SVM parameters, $$G({x}_{j},x)$$ is the dot product in the predictor
space between *x* and the support vectors, and the
sum includes the training set observations.
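The score function above can be sketched in Python. All parameter values below are hypothetical (not taken from a fitted model), and a Gaussian kernel stands in for *G*:

```python
import math

# Sketch of f(x) = sum_j alpha_j * y_j * G(x_j, x) + b.
# Support vectors, multipliers, labels, and bias are hypothetical.
support_vectors = [[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]]
alphas = [0.8, 0.5, 0.3]   # estimated coefficients (hypothetical)
ys     = [1, -1, 1]        # labels of the support vectors
b      = -0.1              # bias term (hypothetical)

def gaussian_kernel(u, v, sigma=1.0):
    """One common choice of G: exp(-||u - v||^2 / (2 sigma^2))."""
    sq_dist = sum((a - c) ** 2 for a, c in zip(u, v))
    return math.exp(-sq_dist / (2 * sigma ** 2))

def score(x):
    """Signed score of x; its sign gives the predicted class."""
    return sum(a * y * gaussian_kernel(sv, x)
               for a, y, sv in zip(alphas, ys, support_vectors)) + b

print(score([1.0, 1.0]))  # positive, so this x is assigned the +1 class
```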

If *G*(*x _{j}*, *x*) is the linear kernel, then the score function reduces to

$$f\left(x\right)=\left(x/s\right)\prime \beta +b.$$

*s* is
the kernel scale and *β* is the vector of fitted
linear coefficients.
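In the linear case, the score is a sketch away (hypothetical *β*, *b*, and *s* below; this illustrates the formula, not a fitted model):

```python
# Sketch of the linear-kernel score f(x) = (x/s)' * beta + b.
beta = [0.4, -1.2]   # fitted linear coefficients (hypothetical)
b    = 0.05          # bias term (hypothetical)
s    = 2.0           # kernel scale (hypothetical)

def linear_score(x):
    """Scale x by s, then take the dot product with beta and add b."""
    return sum((xi / s) * bi for xi, bi in zip(x, beta)) + b

print(round(linear_score([1.0, 1.0]), 4))  # -0.35
```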

For binary classification, the software defines the margin for
observation *j*, *m _{j}*,
as

$${m}_{j}=2{y}_{j}f({x}_{j}),$$

where *y _{j}* ∊
{-1,1}, and *f*(*x _{j}*) is the score of observation *j* for the positive class.
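This binary margin is a one-liner; the values below are hypothetical:

```python
def binary_margin(y, f_x):
    """m_j = 2 * y_j * f(x_j), with y_j in {-1, +1}."""
    return 2 * y * f_x

# A correctly classified observation (label and score agree in sign)
# gets a positive margin; a misclassified one gets a negative margin.
print(binary_margin(1, 0.75))   # 1.5
print(binary_margin(-1, 0.75))  # -1.5
```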

[1] Cristianini, N., and J. Shawe-Taylor. *An
Introduction to Support Vector Machines and Other Kernel-Based Learning
Methods*. Cambridge, UK: Cambridge University Press, 2000.

[2] Hu, Q., X. Che, L. Zhang, and D. Yu. "Feature
Evaluation and Selection Based on Neighborhood Soft Margin." *Neurocomputing*.
Vol. 73, 2010, pp. 2114–2124.

`ClassificationSVM` | `CompactClassificationSVM` | `edge` | `fitcsvm` | `loss` | `predict`
