

1-D minimization using golden section search


[a,gX,perf,retcode,delta,tol] = srchgol(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf)


srchgol is a linear search routine. It searches in a given direction to locate the minimum of the performance function in that direction. It uses a technique called the golden section search.

[a,gX,perf,retcode,delta,tol] = srchgol(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf) takes these inputs,


net - Neural network

X - Vector containing current values of weights and biases

Pd - Delayed input vectors

Tl - Layer target vectors

Ai - Initial input delay conditions

Q - Batch size

TS - Time steps

dX - Search direction vector

gX - Gradient vector

perf - Performance value at current X

dperf - Slope of performance value at current X in direction of dX

delta - Initial step size

tol - Tolerance on search

ch_perf - Change in performance on previous step

and returns


a - Step size that minimizes performance

gX - Gradient at new minimum point

perf - Performance value at new minimum point

retcode - Return code that has three elements. The first two elements correspond to the number of function evaluations in the two stages of the search. The third element is a return code. These have different meanings for different search algorithms. Some might not be used in this function.

 0  Normal
 1  Minimum step taken
 2  Maximum step taken
 3  Beta condition not met

delta - New initial step size, based on the current step size

tol - New tolerance on search

Parameters used for the golden section algorithm are

alpha - Scale factor that determines sufficient reduction in perf

bmax - Largest step size

scale_tol - Parameter that relates the tolerance tol to the initial step size delta, usually set to 20

The defaults for these parameters are set in the training function that calls them. See traincgf, traincgb, traincgp, trainbfg, and trainoss.

Dimensions for these variables are

Pd - No-by-Ni-by-TS cell array, where each element Pd{i,j,ts} is a Dij-by-Q matrix

Tl - Nl-by-TS cell array, where each element Tl{i,ts} is a Vi-by-Q matrix

Ai - Nl-by-LD cell array, where each element Ai{i,k} is an Si-by-Q matrix


Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)


Here is a problem consisting of inputs p and targets t to be solved with a network.

p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];

A two-layer feed-forward network is created. The network's input ranges from 0 to 5. The first layer has two tansig neurons, and the second layer has one logsig neuron. The traincgf network training function and the srchgol search function are to be used.

Create and Test a Network

net = newff([0 5],[2 1],{'tansig','logsig'},'traincgf');
a = sim(net,p)

Train and Retest the Network

net.trainParam.searchFcn = 'srchgol';
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = sim(net,p)

Network Use

You can create a standard network that uses srchgol with newff, newcf, or newelm.

To prepare a custom network to be trained with traincgf, using the line search function srchgol,

  1. Set net.trainFcn to 'traincgf'. This sets net.trainParam to traincgf’s default parameters.

  2. Set net.trainParam.searchFcn to 'srchgol'.

The srchgol function can be used with any of the following training functions: traincgf, traincgb, traincgp, trainbfg, trainoss.

More About


Golden Section Search

The golden section search srchgol is a linear search that does not require the calculation of the slope. This routine begins by locating an interval in which the minimum of the performance function occurs. This is accomplished by evaluating the performance at a sequence of points, starting at a distance of delta and doubling in distance each step, along the search direction. When the performance increases between two successive iterations, a minimum has been bracketed. The next step is to reduce the size of the interval containing the minimum. Two new points are located within the initial interval. The values of the performance at these two points determine a section of the interval that can be discarded, and a new interior point is placed within the new interval. This procedure is continued until the interval of uncertainty is reduced to a width of tol, which is equal to delta/scale_tol.
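The two-stage procedure described above (bracket the minimum by doubling the step, then shrink the bracket by golden sectioning until its width falls below delta/scale_tol) can be sketched language-neutrally. The following Python function is a hypothetical illustration, not the MATLAB implementation; the names delta, scale_tol, and bmax mirror the parameters on this page, and the bmax default is an assumed value.

```python
import math

def golden_section_search(f, delta=0.01, scale_tol=20.0, bmax=26.0):
    """Minimize f along [0, bmax] with the two-stage golden section
    scheme: bracket the minimum by doubling the step, then shrink the
    bracket until its width falls below tol = delta / scale_tol.
    A generic sketch; parameter names mirror this page."""
    tau = (math.sqrt(5.0) - 1.0) / 2.0  # golden ratio conjugate, ~0.618

    # Stage 1: step out from 0 at a distance of delta, doubling the
    # distance each step, until the function value increases between
    # two successive points (or the largest step bmax is reached).
    xs = [0.0, delta]
    fs = [f(0.0), f(delta)]
    while fs[-1] < fs[-2] and xs[-1] < bmax:
        xs.append(min(2.0 * xs[-1], bmax))
        fs.append(f(xs[-1]))
    lo = xs[-3] if len(xs) >= 3 else xs[0]
    hi = xs[-1]

    # Stage 2: shrink [lo, hi]. Two interior points split the interval
    # in the golden ratio; each comparison discards one outer section,
    # and the surviving interior point is reused, so only one new
    # function evaluation is needed per step.
    tol = delta / scale_tol
    c = hi - tau * (hi - lo)
    d = lo + tau * (hi - lo)
    fc, fd = f(c), f(d)
    while (hi - lo) > tol:
        if fc < fd:               # minimum lies in [lo, d]
            hi, d, fd = d, c, fc
            c = hi - tau * (hi - lo)
            fc = f(c)
        else:                     # minimum lies in [c, hi]
            lo, c, fc = c, d, fd
            d = lo + tau * (hi - lo)
            fd = f(d)
    return 0.5 * (lo + hi)
```

Because each iteration shrinks the bracket by the constant factor tau, the number of stage-2 evaluations needed to reach a given tolerance is known in advance, which is why no slope information is required.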

See [HDB96], starting on page 12-16, for a complete description of the golden section search. Try the Neural Network Design demonstration nnd12sd1 [HDB96] for an illustration of the performance of the golden section search in combination with a conjugate gradient algorithm.


srchgol locates the minimum of the performance function in the search direction dX, using the golden section search. It is based on the algorithm described on page 33 of Scales (see reference below).


Scales, L.E., Introduction to Non-Linear Optimization, New York: Springer-Verlag, 1985.

Introduced before R2006a
