(Removed) Train support vector machine classifier
SVMStruct = svmtrain(Training,Group)
SVMStruct = svmtrain(Training,Group,Name,Value)
Training — Matrix of training data, where each row corresponds to an observation or replicate, and each column corresponds to a feature or variable.
Group — Grouping variable, which can be a categorical, numeric, or logical vector, a cell array of character vectors, or a character matrix with each row representing a class label. Each element of Group specifies the group to which the corresponding row of Training belongs.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
autoscale — Boolean specifying whether svmtrain centers the data points at their mean and scales them to have unit standard deviation before training.
boxconstraint — Value of the box constraint C for the soft margin. C can be either a scalar or a vector of the same length as the training data.
kernelcachelimit — Value that specifies the size of the kernel matrix cache for the SMO training method. The algorithm keeps a matrix with up to kernelcachelimit × kernelcachelimit double-precision numbers in memory.
kktviolationlevel — Value that specifies the fraction of variables allowed to violate the Karush-Kuhn-Tucker (KKT) conditions for the SMO training method. Set any value in [0,1). For example, if you set kktviolationlevel to 0.05, then 5% of the variables are allowed to violate the KKT conditions.
Set this option to a positive value to help the algorithm converge if it is fluctuating near a good solution.
For more information on KKT conditions, see Cristianini and Shawe-Taylor (2000).
method — Method used to find the separating hyperplane. Options are 'QP' (quadratic programming), 'SMO' (sequential minimal optimization, the default), and 'LS' (least squares).
mlp_params — Parameters of the multilayer perceptron (MLP) kernel, specified as a two-element vector.
options — Options structure for the training method, such as one created by optimset or statset.
polyorder — Order of the polynomial kernel.
rbf_sigma — Scaling factor (sigma) in the radial basis function kernel.
showplot — Boolean indicating whether to plot the grouped data and the separating line. Creates a plot only when the data has two columns (features).
tolkkt — Value that specifies the tolerance with which the Karush-Kuhn-Tucker (KKT) conditions are checked for the SMO training method. For a definition of the KKT conditions, see Karush-Kuhn-Tucker (KKT) Conditions.
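As a hedged illustration of these name-value pairs, for releases before R2018a where svmtrain was still available (the specific values below are arbitrary examples, not recommended settings; the fisheriris sample data set bundled with the product is assumed):

```matlab
% Sketch for releases before R2018a; the option values are arbitrary examples.
load fisheriris                     % provides meas (150-by-4) and species (150-by-1)
X = meas(51:end, :);                % two classes: versicolor and virginica
y = species(51:end);
SVMStruct = svmtrain(X, y, ...
    'kernel_function', 'rbf', ...   % radial basis function kernel
    'rbf_sigma', 1.5, ...           % scaling factor sigma
    'boxconstraint', 10, ...        % box constraint for the soft margin
    'method', 'SMO', ...            % sequential minimal optimization
    'tolkkt', 1e-4);                % tolerance on the KKT conditions
```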
SVMStruct — Structure containing information about the trained SVM classifier in the following fields: SupportVectors, Alpha, Bias, KernelFunction, KernelFunctionArgs, GroupNames, SupportVectorIndices, ScaleData, and FigureHandles.
The Karush-Kuhn-Tucker (KKT) conditions are analogous to the condition that the gradient must be zero at a minimum, modified to take constraints into account. The difference is that the KKT conditions hold for constrained problems. The KKT conditions use the auxiliary Lagrangian function:

L(x,λ) = f(x) + λgᵀ g(x) + λhᵀ h(x)
Here f(x) is the objective function, g(x) is a vector of constraint functions g(x) ≤ 0, and h(x) is a vector of constraint functions h(x) = 0. The vector λ, which is the concatenation of λg and λh, is the Lagrange multiplier vector. Its length is the total number of constraints.
The KKT conditions are:

∇x L(x,λ) = 0
λg,i gi(x) = 0 for all i
g(x) ≤ 0
h(x) = 0
λg ≥ 0
For more information, see Karush-Kuhn-Tucker conditions.
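The conditions can be checked numerically on a toy problem that is not part of this page: minimize f(x) = x² subject to x ≥ 1, written as g(x) = 1 − x ≤ 0, whose minimizer is x* = 1 with multiplier λ = 2:

```matlab
% Toy KKT check (illustrative example, not from this reference page).
x = 1;  lambda = 2;
gradL = 2*x - lambda;          % d/dx [x^2 + lambda*(1 - x)] = 2x - lambda
g = 1 - x;                     % constraint value, g(x) <= 0
assert(abs(gradL) < eps)       % stationarity of the Lagrangian
assert(g <= 0)                 % primal feasibility
assert(lambda >= 0)            % dual feasibility
assert(abs(lambda*g) < eps)    % complementary slackness
```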
To classify new data, use the result of training, SVMStruct, with the svmclassify function.

The svmtrain function uses an optimization method to identify support vectors si, weights αi, and bias b that are used to classify vectors x according to the following equation:

c = Σi αi k(si,x) + b
where k is a kernel function. In the case of a linear kernel, k is the dot product. If c ≥ 0, then x is classified as a member of the first group, otherwise it is classified as a member of the second group.
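For a linear kernel the rule above reduces to a sign test on a dot product; the support vectors, weights, and bias below are made up for illustration:

```matlab
% Sketch of the decision rule c = sum_i alpha_i*k(s_i,x) + b with a linear kernel.
s = [1 0; 0 1];          % support vectors, one per row (made-up values)
alpha = [0.7; -0.7];     % signed weights alpha_i
b = 0.1;                 % bias
x = [2 1];               % vector to classify
c = sum(alpha .* (s * x')) + b;   % linear kernel: k(s_i,x) is the dot product
if c >= 0
    disp('first group')
else
    disp('second group')
end
```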
When you set 'Method' to 'QP', the svmtrain function operates on a data set containing N elements, and it creates an (N+1)-by-(N+1) matrix to find the separating hyperplane. This matrix needs at least 8*(N+1)^2 bytes of contiguous memory. If this size of contiguous memory is not available, the software displays an “out of memory” error message.
When you set 'Method' to
'SMO' (default), memory
consumption is controlled by the
kernelcachelimit option. The SMO
algorithm stores only a submatrix of the kernel matrix, limited by the size specified by the
kernelcachelimit option. However, if the number of data points exceeds
the size specified by the
kernelcachelimit option, the SMO algorithm
slows down because it has to recalculate the kernel matrix elements.
If you run svmtrain on large data sets and run out of memory, or the optimization step is very time consuming, try either of the following:
Use a smaller number of samples and use cross-validation to test the performance of the classifier.
Set 'Method' to 'SMO', and set the kernelcachelimit option as large as your system permits.
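On a large data set, the second suggestion might look like the sketch below (the cache value is an assumption; kernelcachelimit is the row and column count of the cached kernel submatrix, so memory use grows with its square):

```matlab
% Sketch for releases before R2018a: favor SMO and enlarge the kernel cache.
load fisheriris
X = meas(51:end, :);                 % two classes: versicolor and virginica
y = species(51:end);
SVMStruct = svmtrain(X, y, ...
    'method', 'SMO', ...             % SMO bounds memory through the kernel cache
    'kernelcachelimit', 20000);      % assumed value; raise only as memory permits
```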
Errors starting in R2018a

svmtrain and svmclassify have been removed.
Instead, use the fitcsvm function to train a binary SVM classifier, and use the predict object function of ClassificationSVM to predict labels. Several differences between these functions require updates to your code.
The fitcsvm function was introduced in R2014a as a new way to train an SVM classifier for one-class or two-class learning. It returns a trained SVM classifier as a ClassificationSVM object, for accessing and performing operations on the training data and storing configurations of trained models, and it provides several advantages over the removed functions, as described here.

The new functionality:
Supports computation of soft classification scores
Supports fitting posterior probabilities
Has improved training speed, especially on big data with well-separated classes, by providing shrinkage
Allows a warm restart by accepting an initial α value
Allows training to resume after the maximum number of iterations is exceeded
Supports robust learning in the presence of outliers
ClassificationSVM is built on the same framework as other classification objects such as ClassificationKNN, so the syntax, options, and object functions resemble those in the existing objects.
This table shows some typical usages of svmtrain and svmclassify and how to update your code to use fitcsvm and predict instead.

Removed Functionality | Recommended Replacement
SVMStruct = svmtrain(X,Y) | Mdl = fitcsvm(X,Y)
label = svmclassify(SVMStruct,XNew) | label = predict(Mdl,XNew)
For details, see Train SVM Classifier.
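A minimal before/after sketch of the replacement, again assuming the fisheriris sample data:

```matlab
% Before R2018a (removed):
%   SVMStruct = svmtrain(X, y);
%   label = svmclassify(SVMStruct, XNew);

% Recommended replacement:
load fisheriris
X = meas(51:end, :);               % two classes: versicolor and virginica
y = species(51:end);
Mdl = fitcsvm(X, y);               % returns a ClassificationSVM object
label = predict(Mdl, X(1:5, :));   % predict labels for new observations
```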
This table shows the name-value pair arguments of svmtrain and the corresponding arguments of fitcsvm. The default values and detailed usages can differ. For details, see the name-value pair argument descriptions of fitcsvm.

Name-Value Pair Arguments of svmtrain | Name-Value Pair Arguments of fitcsvm
autoscale | Standardize
boxconstraint | BoxConstraint
kernel_function | KernelFunction
kernelcachelimit | CacheSize
polyorder | PolynomialOrder
rbf_sigma | KernelScale
Kecman, V. Learning and Soft Computing. Cambridge, MA: MIT Press, 2001.
Suykens, J.A.K., T. Van Gestel, J. De Brabanter, B. De Moor, and J. Vandewalle. Least Squares Support Vector Machines. Singapore: World Scientific, 2002.
Schölkopf, B., and A.J. Smola. Learning with Kernels. Cambridge, MA: MIT Press, 2002.
Cristianini, N., and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge: Cambridge University Press, 2000.