Compact support vector machine for binary classification
CompactClassificationSVM is a compact support vector machine (SVM) classifier.
The compact classifier does not include the data used to train the SVM classifier. Therefore, you cannot perform tasks such as cross-validation using the compact classifier.
Use a compact SVM classifier for labeling new data (i.e., predicting the labels of new data).
CompactSVMModel = compact(SVMModel) returns a compact SVM classifier (CompactSVMModel) from a full, trained support vector machine classifier (SVMModel).

A full, trained ClassificationSVM classifier, as trained by fitcsvm.

Numeric vector of trained classifier coefficients from the dual
problem (i.e., the estimated Lagrange multipliers).  

Numeric vector of linear predictor coefficients. If your predictor data contains categorical variables, then the software uses full dummy encoding for these variables, creating one dummy variable for each level of each categorical variable. If the model uses a linear kernel, then the classification score for an observation x is $$f\left(x\right)=\left(x/s\right)^{\prime}\beta +b,$$ where s is the kernel scale and b is the bias term (Bias).
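As a minimal sketch of how these coefficients enter the linear classification score (assuming a linear-kernel model trained on Fisher's iris data without standardization, so the kernel scale s defaults to 1):

```matlab
% Train a linear-kernel SVM on a two-class subset of Fisher's iris data.
load fisheriris
inds = ~strcmp(species,'setosa');
X = meas(inds,:);
Y = species(inds);
CompactSVMModel = compact(fitcsvm(X,Y));   % linear kernel by default

% Manual score for one observation: f(x) = (x/s)'*beta + b.
x = X(1,:);
s = CompactSVMModel.KernelParameters.Scale;
fManual = (x/s)*CompactSVMModel.Beta + CompactSVMModel.Bias;

% The second output of predict contains, per column, the score for each
% class in CompactSVMModel.ClassNames; fManual matches the score of the
% positive (second) class.
[~,score] = predict(CompactSVMModel,x);
```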

Scalar corresponding to the trained classifier bias term.  

List of categorical predictors, which is always empty ([]) for SVM classifiers.

List of the distinct class labels used in training, with the same data type as the training labels.

Square matrix, where Cost(i,j) is the cost of classifying a point into class j if its true class is i. During training, the software updates the prior probabilities by incorporating the penalties described in the cost matrix. This property is read-only. For more details, see Algorithms.

Expanded predictor names, stored as a cell array of strings. If the model uses dummy-variable encoding for categorical variables, then ExpandedPredictorNames includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames is the same as PredictorNames.

Structure array containing the kernel name and parameter values. To display the values of KernelParameters, use dot notation, for example, CompactSVMModel.KernelParameters.Scale.

Numeric vector of predictor means. If you standardize the predictors during training, then this vector contains the means used for standardization; otherwise, it is empty ([]). If your predictor data contains categorical variables, then the software uses full dummy encoding for these variables, creating one dummy variable for each level of each categorical variable.

Cell array of strings containing the predictor names, in the order in which they appear in the training data.

Numeric vector of prior probabilities for each class. The order of the elements corresponds to the order of the classes in ClassNames. For two-class learning, if you specify a cost matrix, then the software updates the prior probabilities by incorporating the penalties described in the cost matrix. This property is read-only. For more details, see Algorithms.

String representing a built-in transformation function, or a function handle for transforming predicted classification scores. To change the score transformation function to, e.g., 'logit', use dot notation.
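A minimal sketch of changing the score transformation via dot notation, again using Fisher's iris data for illustration:

```matlab
% Train and compact a two-class SVM classifier.
load fisheriris
inds = ~strcmp(species,'setosa');
CompactSVMModel = compact(fitcsvm(meas(inds,:),species(inds)));

% Switch from raw scores to the built-in 'logit' transformation;
% predict then returns transformed scores in (0,1).
CompactSVMModel.ScoreTransform = 'logit';
[labels,scores] = predict(CompactSVMModel,meas(inds,:));
```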
 

Numeric vector of predictor standard deviations. If you standardize the predictors during training, then this vector contains the standard deviations used for standardization; otherwise, it is empty ([]). If your predictor data contains categorical variables, then the software uses full dummy encoding for these variables, creating one dummy variable for each level of each categorical variable.

Matrix containing the rows of the predictor data that the software considers to be the support vectors. If you standardize the predictors during training, then SupportVectors contains the standardized rows.

Numeric vector of support vector class labels.
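A minimal sketch of inspecting the stored support vectors, and of removing them from a linear model to reduce memory use (assuming a linear-kernel classifier, since discardSupportVectors applies to linear models):

```matlab
% Train and compact a linear-kernel SVM classifier.
load fisheriris
inds = ~strcmp(species,'setosa');
CompactSVMModel = compact(fitcsvm(meas(inds,:),species(inds)));

sv = CompactSVMModel.SupportVectors;            % rows kept as support vectors
svLabels = CompactSVMModel.SupportVectorLabels; % their class labels (+1/-1)

% For linear models, the support vectors can be discarded;
% prediction then uses only Beta and Bias.
ReducedModel = discardSupportVectors(CompactSVMModel);
isempty(ReducedModel.SupportVectors)            % true after discarding
```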

compareHoldout  Compare accuracies of two classification models using new data 
discardSupportVectors  Discard support vectors for linear support vector machine models 
edge  Classification edge for support vector machine classifiers 
fitPosterior  Fit posterior probabilities 
loss  Classification error for support vector machine classifiers 
margin  Classification margins for support vector machine classifiers 
predict  Predict labels for support vector machine classifiers 
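For example, the methods above can be applied directly to a compact classifier. A minimal sketch with predict and loss (note that loss requires you to supply the true labels, since the compact model stores no training data):

```matlab
% Train and compact a two-class SVM classifier.
load fisheriris
inds = ~strcmp(species,'setosa');
X = meas(inds,:);
Y = species(inds);
CompactSVMModel = compact(fitcsvm(X,Y));

% Label observations and estimate the classification error.
[labels,scores] = predict(CompactSVMModel,X);
err = loss(CompactSVMModel,X,Y);
```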
Value class. To learn how value classes affect copy operations, see Copying Objects in the MATLAB documentation.
[1] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, Second Edition. NY: Springer, 2008.
[2] Scholkopf, B., J. C. Platt, J. C. Shawe-Taylor, A. J. Smola, and R. C. Williamson. "Estimating the Support of a High-Dimensional Distribution." Neural Computation, Vol. 13, No. 7, 2001, pp. 1443–1471.
[3] Cristianini, N., and J. C. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, UK: Cambridge University Press, 2000.
[4] Scholkopf, B., and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. Adaptive Computation and Machine Learning. Cambridge, MA: The MIT Press, 2002.