23 Downloads (last 30 days) | File Size: 4.38 KB | File ID: #28749

MMGDX: a maximum-margin training method for neural networks

by Oswaldo Ludwig

17 Sep 2010 (Updated 04 Nov 2010)

Maximum-margin training method applicable to MLP in the context of binary classification.


File Information
Description

A large classification margin is the most common approach to achieving good generalization. The best-known maximal-margin algorithm is the SVM, for which many different kernels have been investigated. Unfortunately, in some real-world problems, such as on-the-fly object detection, the use of nonlinear kernels entails a prohibitive computational cost, due to the large number of windows to be classified during the scanning of each image frame. Note that, in the case of an SVM with a nonlinear kernel, the decision function requires a summation over the support vectors, which demands a large amount of time when the number of support vectors is big. MMGDX was therefore proposed to enable nonlinear classification with good generalization at a lower computational cost. For instance, in pedestrian detection applications, neural networks trained by MMGDX can achieve accuracy similar to an SVM with RBF kernel, yet the neural network can classify the cropped images a hundred times faster than the SVM-RBF.

The algorithm takes the following input data: F_train, which contains all the input data (each column represents one input vector); t_train, a row vector in which each element is the target output of its respective input vector; and nneu, which determines the number of hidden neurons. Each element of t_train must be -1 for negative occurrences or 1 for positive occurrences. The algorithm outputs the MLP parameters Nor, W1, W2, b1, and b2, which are the normalization matrix, the synaptic weight matrices, and the biases of the net. These parameters are required by the routine that simulates the trained MLP (sim_MMGDX). After training, the algorithm may suggest increasing or decreasing the number of neurons. This is only a suggestion, based on an analysis of the bias-variance tradeoff over the training data; the user should therefore test the result before changing the number of hidden neurons.
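Based on the interface described above, a typical session might look like the following sketch. The function names (MMGDX, sim_MMGDX) and argument order follow the description; the exact signatures in the downloaded package may differ, and F_test here is a hypothetical matrix of new input vectors.

```matlab
% Hypothetical usage sketch based on the description above.
F_train = rand(10, 200);         % training data: each column is one input vector
t_train = sign(randn(1, 200));   % row vector of targets: -1 (negative) or 1 (positive)
nneu = 15;                       % number of hidden neurons

% Train the MLP; outputs are the normalization matrix, the synaptic
% weight matrices, and the biases, as described above:
[Nor, W1, W2, b1, b2] = MMGDX(F_train, t_train, nneu);

% Classify new data (F_test, each column one input vector) with the
% companion simulation routine:
F_test = rand(10, 50);
y = sim_MMGDX(Nor, W1, W2, b1, b2, F_test);
```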
If you publish any application of this method, please cite the original work: O. Ludwig and U. Nunes, “Novel Maximum-Margin Training Algorithms for Supervised Neural Networks,” IEEE Transactions on Neural Networks, vol. 21, no. 6, pp. 972-984, Jun. 2010.

Required Products: Neural Network Toolbox
MATLAB release: MATLAB 7 (R14)
Comments and Ratings (1)
20 Sep 2010 Oswaldo Ludwig

Download a tutorial with two experiments on two UCI data sets from my webpage: http://webmail.isr.uc.pt/~oludwig/

This new version fixes some problems in translating from MATLAB 6 (the environment in which this code was developed) to MATLAB 7, which caused optimization errors on some data sets.

Updates
04 Nov 2010

Only the description was updated
