# Thread Subject: fmincon where is the learning rate value for gradient descent??

Subject: fmincon where is the learning rate value for gradient descent?? Date: 11 May, 2012 14:33:34 Message: 1 of 7

I want to know the value of the learning rate in gradient descent that produces the minimization result when using fmincon. Normally we keep changing the learning rate until we get a suitable minimization step. Is there any output from fmincon that gives a hint about the learning rate it used, even if it was variable? I want to know the best learning rate fmincon used to reach the local optimum it returns.
Subject: fmincon where is the learning rate value for gradient descent?? From: Alan Weiss Date: 11 May, 2012 15:09:46 Message: 2 of 7

On 5/11/2012 10:33 AM, besbesmany besbesmany wrote:
> I want to know the value of the learning rate in gradient descent that
> produces the minimization result when using fmincon [...]
> Is there any output from fmincon that gives a hint about the learning
> rate it used, even if it was variable?

I don't know what you mean by "learning rate." fmincon does not exactly use a gradient descent algorithm, though it does use gradients in its calculations. You can read about the algorithms fmincon uses here:
http://www.mathworks.com/help/toolbox/optim/ug/brnoxzl.html

Alan Weiss
MATLAB mathematical toolbox documentation
Subject: fmincon where is the learning rate value for gradient descent?? From: Matt J Date: 11 May, 2012 15:15:25 Message: 3 of 7

"besbesmany besbesmany" wrote in message ...
> I want to know the value of the learning rate in gradient descent that
> produces the minimization result when using fmincon [...]
> I want to know the best learning rate fmincon used to reach the local
> optimum it returns.
================

Your terminology "learning rate" is not standard, but I think you mean the "stepsize". You can use the OutputFcn option to get the stepsize:
http://www.mathworks.com/help/releases/R13sp2/toolbox/optim/ref_int7.html

It is one of the fields of OptimValues.
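As a minimal sketch of the OutputFcn approach Matt J describes: the function below prints `optimValues.stepsize` at each iteration. The objective, starting point, and bounds are illustrative placeholders, not from the thread.

```matlab
% Sketch: print the stepsize at each fmincon iteration via an output function.
% The objective and bounds here are arbitrary placeholders for demonstration.
function outputfcn_demo
    opts = optimset('OutputFcn', @showstep, 'Display', 'off');
    fmincon(@(x) x(1)^2 + x(2)^2, [1; 2], [], [], [], [], ...
            [0; 0], [10; 10], [], opts);
end

function stop = showstep(x, optimValues, state)
    stop = false;                       % never request early termination
    if strcmp(state, 'iter') && ~isempty(optimValues.stepsize)
        fprintf('iter %d: stepsize = %g\n', ...
                optimValues.iteration, optimValues.stepsize);
    end
end
```

Note that `optimValues.stepsize` can be empty at iteration 0, which is why the sketch guards with `isempty`.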
Subject: fmincon where is the learning rate value for gradient descent?? Date: 11 May, 2012 15:30:24 Message: 4 of 7

I mean alpha in this equation:
http://en.wikipedia.org/wiki/Stochastic_gradient_descent

I want a MATLAB function that performs the minimization and gives me the output or detailed values of the best alpha. Can you help me with which MATLAB function to use?

"Matt J" wrote in message ...
> Your terminology "learning rate" is not standard, but I think you mean
> the "stepsize". You can use the OutputFcn option to get the stepsize [...]
> It is one of the fields of OptimValues.
Subject: fmincon where is the learning rate value for gradient descent?? From: Matt J Date: 11 May, 2012 15:53:26 Message: 5 of 7

"besbesmany besbesmany" wrote in message ...
> I mean alpha in this equation
> http://en.wikipedia.org/wiki/Stochastic_gradient_descent

Yes, that's the stepsize.

> I want a MATLAB function that performs the minimization and gives me the
> output or detailed values of the best alpha.
> Can you help me with which MATLAB function to use?
===============

My advice from my last post still applies. Using OutputFcn, you can obtain whatever stepsize FMINCON is using. However, as Alan told you, FMINCON does not use gradient descent. Some of its algorithms might involve line searches (I'm not entirely sure), but there are other operations in the iteration updating that greatly distinguish FMINCON's algorithms from conventional direction-of-descent algorithms. The value of OptimValues.stepsize might have more relevance in FMINUNC, which does offer an option to use gradient descent.
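To illustrate the fminunc option Matt J mentions: in the medium-scale algorithm of that era, the HessUpdate option can be set to 'steepdesc' to make fminunc search along the negative gradient, so that optimValues.stepsize plays roughly the role of the alpha in the gradient-descent update. The objective and starting point below are placeholders, a sketch rather than a recipe.

```matlab
% Sketch: ask fminunc's medium-scale algorithm to use steepest descent.
% With 'steepdesc', each iterate moves along the negative gradient and the
% stepsize reported by an output function approximates the "learning rate".
opts = optimset('LargeScale', 'off', ...     % use the medium-scale algorithm
                'HessUpdate', 'steepdesc', ... % steepest-descent search direction
                'Display', 'iter');
[x, fval] = fminunc(@(x) (x(1) - 1)^2 + 10*(x(2) + 2)^2, [0; 0], opts);
```

Steepest descent converges slowly on ill-conditioned problems like this one, which is why the default quasi-Newton ('bfgs') update usually performs better.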
Subject: fmincon where is the learning rate value for gradient descent?? Date: 11 May, 2012 16:03:23 Message: 6 of 7

How can I get the stepsize vector for all iterations, not only the final iteration that appears in the output?

> I don't know what you mean by "learning rate." fmincon does not exactly
> use a gradient descent algorithm, though it does use gradients in its
> calculations.
>
> You can read about the algorithms fmincon uses here:
> http://www.mathworks.com/help/toolbox/optim/ug/brnoxzl.html
>
> Alan Weiss
> MATLAB mathematical toolbox documentation
Subject: fmincon where is the learning rate value for gradient descent?? From: Matt J Date: 11 May, 2012 16:12:12 Message: 7 of 7

"besbesmany besbesmany" wrote in message ...
> How can I get the stepsize vector for all iterations, not only the final
> iteration that appears in the output?
==============

Here is a section of the documentation for OutputFcn that explains how to get data from prior iterations:
http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-56.html#brjhnxu
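The documentation pattern Matt J links to can be sketched as a nested output function that appends to a shared variable, giving one stepsize per iteration. Again, the objective and bounds are illustrative placeholders.

```matlab
% Sketch: collect the stepsize from every fmincon iteration in a vector,
% using a nested output function that writes into a shared workspace variable.
function history = stepsize_history
    history = [];                       % grows by one entry per iteration
    opts = optimset('OutputFcn', @record, 'Display', 'off');
    fmincon(@(x) x(1)^2 + x(2)^2, [1; 2], [], [], [], [], ...
            [0; 0], [10; 10], [], opts);

    function stop = record(x, optimValues, state)
        stop = false;
        if strcmp(state, 'iter') && ~isempty(optimValues.stepsize)
            history(end+1) = optimValues.stepsize;  % shared with the outer scope
        end
    end
end
```

After the call, `history` holds the full sequence of stepsizes rather than only the final one.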
