Why does the gradient increase in each epoch of gradient descent?
Hi everyone,
I am training a neural network using the gradient descent with momentum algorithm. I have tried different combinations of learning rate and momentum, but the gradient keeps increasing. Why?
This is the structure of my program:
inputDelays = (1:2);                     % feedback delays for the NAR network
hiddenSizes = [3 2 2];                   % three hidden layers
net = narnet(inputDelays, hiddenSizes);
net.layers{1}.transferFcn = 'mytransfer';
net.layers{2}.transferFcn = 'mytransfer';
net.layers{3}.transferFcn = 'mytransfer';
net.layers{4}.transferFcn = 'purelin';   % output layer
net.layers{4}.size = 1;                  % output layer is layer 4; this net has no layer 6
net.trainFcn = 'traingdm';
net.divideParam.trainRatio = 0.8;
net.divideParam.valRatio = 0.1;
net.divideParam.testRatio = 0.1;
net.trainParam.epochs = 60000;
net.trainParam.max_fail = 60;
net.trainParam.lr = 0.1;
net.trainParam.mc = 0.9;
net.trainParam.goal = 1e-4;
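For reference, gradient descent with momentum applies updates of the general form v = mc*v - lr*g, w = w + v. The sketch below (plain MATLAB on a made-up one-dimensional quadratic loss, not traingdm itself) shows that the same lr and mc pair can converge on one error surface and diverge on another, so tuning those two parameters alone may not stop the gradient from growing:

lr = 0.1; mc = 0.9;                 % same settings as above
for a = [1 25]                      % two curvatures of a toy loss L(w) = a*w^2
    w = 1; v = 0;                   % weight and momentum term
    for epoch = 1:15
        g = 2 * a * w;              % dL/dw
        v = mc * v - lr * g;        % generic momentum update
        w = w + v;
    end
    fprintf('curvature %2d: final |grad| = %g\n', a, abs(2 * a * w));
end
% With a = 1 the gradient shrinks; with a = 25 it blows up under the
% identical lr and mc settings.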
First I used the logsig activation function and the gradient decreased, but when I use a custom activation function that approximates logsig, the gradient increases.
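One possible cause, shown as a minimal sketch (plain MATLAB, not the toolbox API; the one-weight model and all values are made up): if the derivative coded for a custom transfer function does not match the forward function, for example through a sign error, backpropagation moves the weights uphill and the error grows each epoch.

x = linspace(-3, 3, 50);
t = 1 ./ (1 + exp(-x));                 % toy targets from a logsig curve
act  = @(n) 1 ./ (1 + exp(-n));         % forward pass: logsig
dact = @(a) -a .* (1 - a);              % derivative with a sign error
w = 0.5; lr = 0.1;
for epoch = 1:30
    a = act(w * x);
    e = a - t;
    g = mean(e .* dact(a) .* x);        % backprop "gradient" for w
    w = w - lr * g;                     % update follows the wrong direction
    fprintf('epoch %2d  mse = %.4f\n', epoch, mean(e.^2));
end
% With the correct derivative, dact = @(a) a .* (1 - a), the mse decreases.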
Can someone help me, please?
Accepted Answer
Greg Heath
on 3 Mar 2015
1. Use as many defaults as possible (see the sketch after this list).
2. Don't use the custom activation function.
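For illustration, a mostly-default NAR setup could look like the sketch below; the target series T is a made-up placeholder, and everything not set explicitly (training function, hidden transfer functions, division ratios) stays at narnet's defaults:

T = num2cell(sin(0.1 * (1:200)));              % hypothetical target series
net = narnet(1:2, 10);                         % defaults beyond delays/size
[Xs, Xi, Ai, Ts] = preparets(net, {}, {}, T);  % shift series for the delays
net = train(net, Xs, Ts, Xi, Ai);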