Good day dear forum members and dear MathWorks team.
I am having a hard time implementing batch learning for a multi-layer perceptron by myself. For certain reasons (including teaching myself how it works), I won't use the Neural Network Toolbox.
Similar problems are described in several places on the Internet, but nowhere is a clear answer given. Basically, in batch learning the net only learns to reproduce the average of the targets, even though backpropagation itself seems to work just fine.
I'd really appreciate it if someone here who knows neural nets well took a little time to look at my code. I'm completely stuck and have no idea how to go further.
I've tried everything:
- Introduced bias weights
- Tried with and without updating of input weights
- Shuffled the patterns in batch learning
- Tried updating after each pattern as well as accumulating the updates
- Initialized the weights in various ways
- Double-checked the code 10 times
- Normalized accumulated updates by the number of patterns
- Tried different numbers of layers and neurons
- Tried different activation functions
- Tried different learning rates
- Tried different numbers of epochs, from 50 to 10000
- Tried to normalize the data
And nothing helps. Doing just a simple scalar function approximation, I always get an output that sits almost flat at the mean of the targets.
In one epoch of batch learning, I compute the weight updates for all patterns, accumulate them, and then apply the accumulated deltas once. However, I've tried every possible way of rearranging the loops etc., and nothing helped.
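For comparison, here is a minimal NumPy sketch (not my MATLAB code, just an illustration of the same scheme) of one-hidden-layer batch learning on a scalar function: the per-pattern gradients are accumulated over the whole batch, averaged, and applied once per epoch. One detail that often causes the "net only outputs the target mean" symptom is squashing the output layer; the sketch below deliberately uses a linear output for regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar regression task: approximate sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
T = np.sin(X)

# One hidden layer with tanh units, linear output (important for regression)
n_in, n_hid, n_out = 1, 16, 1
W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hid), (n_hid, n_out))
b2 = np.zeros(n_out)

lr = 0.2
for epoch in range(3000):
    # Forward pass for the whole batch at once
    H = np.tanh(X @ W1 + b1)       # hidden activations, shape (N, n_hid)
    Y = H @ W2 + b2                # linear output, shape (N, n_out)
    E = Y - T                      # error for every pattern

    # Backward pass: the matrix products below already sum the
    # per-pattern gradient contributions; divide by N to average them
    dW2 = H.T @ E / len(X)
    db2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H**2)  # backprop through tanh (derivative 1 - h^2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    # One update per epoch with the accumulated (averaged) deltas
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - T) ** 2))
print(mse)
```

A useful sanity check: always predicting the mean of `sin(x)` (which is 0 here) would give an MSE of about 0.5, so any working batch learner should end up well below that.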
Here is my code: