CompactClassificationSVM
Compact support vector machine for binary classification

CompactClassificationSVM is a compact support vector machine (SVM) classifier.

The compact classifier does not include the data used for training the SVM classifier. Therefore, you cannot perform some tasks, such as cross-validation, using the compact classifier. Use a compact SVM classifier for labeling new data (that is, predicting the labels of new data).
CompactSVMModel = compact(SVMModel) returns a compact SVM classifier (CompactSVMModel) from a full, trained support vector machine classifier (SVMModel).
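As a sketch of the workflow this syntax supports (assuming the ionosphere sample data set that ships with Statistics and Machine Learning Toolbox):

```matlab
% Train a full SVM classifier, then discard the training data by
% compacting the model. Variable names are illustrative.
load ionosphere                       % predictors X, class labels Y
SVMModel = fitcsvm(X, Y);             % full ClassificationSVM model
CompactSVMModel = compact(SVMModel);  % CompactClassificationSVM object
whos SVMModel CompactSVMModel         % the compact model is much smaller
```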

SVMModel — A full, trained ClassificationSVM classifier, as returned by fitcsvm.

Alpha — s-by-1 numeric vector of trained classifier coefficients from the dual problem, that is, the estimated Lagrange multipliers. s is the number of support vectors in the trained classifier.

Beta — Numeric vector of linear predictor coefficients. If your predictor data contains categorical variables, then the software uses full dummy encoding for these variables: it creates one dummy variable for each level of each categorical variable. For a linear kernel, the classification score for the observation x is $$f\left(x\right)={\left(x/s\right)}^{\prime}\beta +b.$$ Mdl stores β, b, and s in the properties Beta, Bias, and KernelParameters.Scale, respectively.
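For a linear kernel, the stored properties are enough to reproduce this score by hand. A minimal sketch, assuming the ionosphere sample data:

```matlab
load ionosphere
Mdl = compact(fitcsvm(X, Y, 'KernelFunction', 'linear'));
x = X(1,:);                        % one observation
s = Mdl.KernelParameters.Scale;
f = (x/s)*Mdl.Beta + Mdl.Bias;     % manual positive-class score
[~, score] = predict(Mdl, x);      % f should match the positive-class column of score
```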

Bias — Scalar corresponding to the trained classifier bias term.

CategoricalPredictors — Indices of categorical predictors, stored as a vector of positive integers.

ClassNames — List of the elements in the training-data class labels, with duplicates removed.

Cost — Square matrix, where Cost(i,j) is the cost of classifying a point into class j if its true class is i. During training, the software updates the prior probabilities by incorporating the penalties described in the cost matrix. This property is read-only. For more details, see Algorithms.

ExpandedPredictorNames — Expanded predictor names, stored as a cell array of character vectors. If the model uses dummy variable encoding for categorical variables, then ExpandedPredictorNames includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames is the same as PredictorNames.

KernelParameters — Structure array containing the kernel name and parameter values. To display the values of KernelParameters, use dot notation, for example, Mdl.KernelParameters.Scale.

Mu — Numeric vector of predictor means. If your predictor data contains categorical variables, then the software uses full dummy encoding for these variables: it creates one dummy variable for each level of each categorical variable.

PredictorNames — Cell array of character vectors containing the predictor names, in the order in which they appear in the training data.

Prior — Numeric vector of prior probabilities for each class. The order of the elements of Prior corresponds to the order of the classes in ClassNames. For two-class learning, if you specify a cost matrix, then the software updates the prior probabilities by incorporating the penalties described in the cost matrix. This property is read-only. For more details, see Algorithms.

ScoreTransform — Character vector representing a built-in transformation function, or a function handle for transforming predicted classification scores. To change the score transformation function, use dot notation.
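To change the transformation on an existing model, assign to the property with dot notation. A sketch ('logit' is one of the built-in transformations):

```matlab
load ionosphere
Mdl = compact(fitcsvm(X, Y));
Mdl.ScoreTransform = 'logit';                % built-in transformation
Mdl.ScoreTransform = @(s) 1./(1 + exp(-s)); % equivalent custom function handle
```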

Sigma — Numeric vector of predictor standard deviations. If your predictor data contains categorical variables, then the software uses full dummy encoding for these variables: it creates one dummy variable for each level of each categorical variable.
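When the classifier is trained with standardization, Mu and Sigma let you apply the same scaling to new data. A sketch using the first two classes of the fisheriris sample data (SVM classification is binary):

```matlab
load fisheriris
X = meas(1:100,:);  Y = species(1:100);   % two classes: setosa, versicolor
Mdl = compact(fitcsvm(X, Y, 'Standardize', true));
z = (X(1,:) - Mdl.Mu) ./ Mdl.Sigma;       % standardize an observation as during training
```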

SupportVectors — s-by-p numeric matrix containing the rows of the predictor data that the software considers to be support vectors, where s is the number of support vectors and p is the number of predictors.

SupportVectorLabels — s-by-1 numeric vector of support vector class labels. s is the number of support vectors in the trained classifier. A value of +1 indicates that the corresponding support vector belongs to the positive class, and −1 indicates the negative class.
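These two properties together describe the retained support vectors. A sketch, assuming the ionosphere sample data:

```matlab
load ionosphere
Mdl = compact(fitcsvm(X, Y));
size(Mdl.SupportVectors)          % s-by-p: one row per support vector
unique(Mdl.SupportVectorLabels)   % the +1/-1 class indicators
```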
compareHoldout  Compare accuracies of two classification models using new data 
discardSupportVectors  Discard support vectors for linear support vector machine models 
edge  Classification edge for support vector machine classifiers 
fitPosterior  Fit posterior probabilities 
loss  Classification error for support vector machine classifiers 
margin  Classification margins for support vector machine classifiers 
predict  Predict labels using support vector machine classification model 
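A typical use of these methods with a compact model is to label held-out data and measure the classification error. A sketch with a simple holdout split (the split itself is illustrative):

```matlab
load ionosphere                                    % 351 observations
trainIdx = 1:300;  testIdx = 301:size(X,1);        % illustrative holdout split
Mdl = compact(fitcsvm(X(trainIdx,:), Y(trainIdx)));
[labels, scores] = predict(Mdl, X(testIdx,:));     % predicted labels and scores
L = loss(Mdl, X(testIdx,:), Y(testIdx));           % classification error on new data
```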
Copy Semantics — Value. To learn how value classes affect copy operations, see Copying Objects (MATLAB).