For applications that do not require real-time predictions, a user may tolerate Matlab ANN runs that take a very long time. Case in point: with about 300 variables (descriptors), 2000 data points, 5 hidden-layer sizes (5 to 25 neurons in increments of 5), 3 re-initializations (to avoid local minima), and no division into training and validation sets (unnecessary with Bayesian regularization), trainbr with 1 hidden layer takes about a week on a Windows 7 laptop with 4 GB RAM running the 64-bit version of Matlab.
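For context, the sweep described above might look something like the following sketch, assuming the Neural Network Toolbox, with `X` (300 x 2000 descriptors) and `T` (targets) standing in for my data; the variable names and selection-by-performance logic are illustrative, not my exact script:

```matlab
% Sketch of the training sweep: 5 hidden-layer sizes x 3 re-initializations,
% Bayesian regularization (trainbr), no train/validation/test split.
hiddenSizes = 5:5:25;
nReinit = 3;
bestNet = []; bestPerf = Inf;
for h = hiddenSizes
    for r = 1:nReinit
        net = feedforwardnet(h, 'trainbr');   % 1 hidden layer of h neurons
        net.divideFcn = 'dividetrain';        % use all data for training (trainbr)
        net = init(net);                      % fresh random weights each re-init
        [net, tr] = train(net, X, T);
        perf = perform(net, T, net(X));
        if perf < bestPerf                    % keep the best-performing network
            bestPerf = perf; bestNet = net;
        end
    end
end
```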
The run also includes, for comparison, trainings based on 30 linear or nonlinear principal components or selected descriptors instead of the 300 raw descriptors. So suggesting that I explore "dimension reduction" is not the answer.
It would be nice to save the network (weights, etc.) trained at such an expense of time, so that it can be evaluated on test data that may become available in the future, without having to spend a week retraining the network every time a new set of test data arrives.
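In case it helps frame the question: what I have in mind is something along these lines, where the trained network object is written to a MAT-file and reloaded later in a fresh session (`trainedNet.mat` and `net` are hypothetical names):

```matlab
% After training: save the network object, plus anything needed to
% reproduce preprocessing (e.g. normalization settings), to a MAT-file.
save('trainedNet.mat', 'net');

% Later, in a new Matlab session:
s = load('trainedNet.mat');
yTest = s.net(Xtest);   % evaluate the saved network on new test data
```

My question is whether this (or something like it) is the supported way to persist and reuse a trained network, and whether there are pitfalls to watch for.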