MMGDX: a maximum-margin training method for neural networks
Maximizing the classification margin is one of the most common approaches to achieving good generalization. The best-known maximum-margin algorithm is the SVM, for which several kernels have been investigated. Unfortunately, in some real-world problems, such as on-the-fly object detection, the use of nonlinear kernels entails a prohibitive computational cost, due to the large number of windows to be classified while scanning each image frame. Note that, for an SVM with a nonlinear kernel, the decision function requires a kernel evaluation against every support vector, which demands a large amount of time when the number of support vectors is large. MMGDX was therefore proposed to enable nonlinear classification with good generalization at a lower computational cost. For instance, in pedestrian detection applications, neural networks trained by MMGDX can achieve accuracy similar to that of an SVM with an RBF kernel, while classifying the cropped images about a hundred times faster.

The algorithm receives the following input data:

- F_train: the input data matrix, where each column represents one input vector;
- t_train: a row vector where each element is the target output of its respective input vector, with the value -1 for negative occurrences or 1 for positive occurrences;
- nneu: the number of hidden neurons.

The algorithm outputs the MLP parameters Nor, W1, W2, b1, and b2, which are the normalization matrix, the synaptic weight matrices, and the biases of the net. These parameters are required by the routine that simulates the trained MLP (sim_MMGDX); see the usage sketch below.

After the training session, the algorithm may suggest increasing or decreasing the number of hidden neurons. This is only a suggestion, based on an analysis of the bias-variance tradeoff over the training data; therefore, the user should evaluate the result before changing the number of hidden neurons.

In case of publication of any application of this method, please cite the original work: O. Ludwig and U. Nunes, "Novel Maximum-Margin Training Algorithms for Supervised Neural Networks," IEEE Transactions on Neural Networks, vol. 21, no. 6, pp. 972-984, Jun. 2010.
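As a quick orientation, the sketch below shows how the training and simulation routines fit together. The exact call signatures of MMGDX and sim_MMGDX are assumptions inferred from the description above, so check the shipped MMGDX.m and sim_MMGDX.m for the actual interfaces.

% Minimal usage sketch. The argument orders of MMGDX and sim_MMGDX
% are assumed from the description above; check the shipped files.

% Toy data set: each column of F_train is one input vector.
F_train = [randn(4,50) - 1, randn(4,50) + 1];  % 4 features, 100 samples
t_train = [-ones(1,50), ones(1,50)];           % targets: -1 or 1
nneu    = 10;                                  % number of hidden neurons

% Training returns the normalization matrix, the synaptic weight
% matrices, and the biases of the MLP.
[Nor, W1, W2, b1, b2] = MMGDX(F_train, t_train, nneu);

% Classify new data with the companion routine (assumed interface).
F_test = randn(4, 20);
y = sim_MMGDX(Nor, W1, W2, b1, b2, F_test);
labels = sign(y);   % threshold the MLP output at zero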
Cite As
Oswaldo Ludwig (2024). MMGDX: a maximum-margin training method for neural networks (https://www.mathworks.com/matlabcentral/fileexchange/28749-mmgdx-a-maximum-margin-training-method-for-neural-networks), MATLAB Central File Exchange. Retrieved .