## General simulated annealing algorithm

version 1.0 (3.77 KB) by Joachim Vandekerckhove

Minimizes a function with the method of simulated annealing

ANNEAL minimizes a function with the method of simulated annealing (Kirkpatrick et al., 1983).

ANNEAL takes three input parameters, in this order:

LOSS is a function handle (anonymous function or inline) with a loss function, which may be of any type, and needn't be continuous. It does, however, need to return a single value.

PARENT is a vector with initial guess parameters. You must input an initial guess.

OPTIONS is a structure with settings for the simulated annealing. If no OPTIONS structure is provided, anneal uses a default structure. OPTIONS can contain any or all of the following fields (missing fields are filled with default values):

Verbosity: Controls output to the screen.
0 suppresses all output
1 gives final report only [default]
2 gives temperature changes and final report
Generator: Generates a new solution from an old one. Any function handle that takes a solution as input and returns a valid solution (i.e. some point in the solution space) as output. The default function generates a row vector which differs slightly from the input vector in one element: @(x) (x+(randperm(length(x))==length(x))*randn/100). Other examples of possible solution generators:
@(x) (rand(3,1)): picks a random point in the unit cube.
@(x) (ceil([9 5].*rand(1,2))): picks a point in a 9-by-5 discrete grid.
Note that if you use the default generator, ANNEAL only works on row vectors. For loss functions that operate on column vectors, use this generator instead of the default: @(x) (x(:)'+(randperm(length(x))==length(x))*randn/100)'
InitTemp: The initial temperature, can be any positive number. Default is 1.
StopTemp: Temperature at which to stop, can be any positive number smaller than InitTemp. Default is 1e-8.
StopVal: Value at which to stop immediately, can be any output of LOSS that is sufficiently low for you. Default is -Inf.
CoolSched: Generates a new temperature from the previous one. Any function handle that takes a scalar as input and returns a smaller but positive scalar as output. Default is @(T) (.8*T).
MaxConsRej: Maximum number of consecutive rejections, can be any positive number. Default is 1000.
MaxTries: Maximum number of tries within one temperature, can be any positive number. Default is 300.
MaxSuccess: Maximum number of successes within one temperature, can be any positive number. Default is 20.

Usage:
[MINIMUM,FVAL] = ANNEAL(LOSS,PARENT,[OPTIONS]);
MINIMUM is the solution which generated the smallest encountered value when input into LOSS.
FVAL is the value of the LOSS function evaluated at MINIMUM.
OPTIONS = ANNEAL();
OPTIONS is the default options structure.
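For instance, a minimal sketch of customizing the defaults before calling ANNEAL (the particular field values and the toy loss function here are illustrative assumptions, not recommendations):

```matlab
% Start from the default options structure and override selected fields.
options = anneal();              % returns the default structure described above
options.Verbosity = 2;           % also report temperature changes
options.InitTemp  = 10;          % explore more at the start
options.CoolSched = @(T) .9*T;   % cool more slowly than the default @(T) .8*T
options.MaxTries  = 500;         % more proposals per temperature

loss = @(p) sum(p.^2);           % toy loss function with its minimum at the origin
[x, fval] = anneal(loss, [1 2], options);
```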

Example:
The so-called six-hump camelback function has several local minima in the range -3<=x<=3 and -2<=y<=2. It has two global minima, namely f(-0.0898,0.7126) = f(0.0898,-0.7126) = -1.0316. We can define and minimize it as follows:
camel = @(x,y)(4-2.1*x.^2+x.^4/3).*x.^2+x.*y+4*(y.^2-1).*y.^2;
loss = @(p)camel(p(1),p(2));
[x f] = anneal(loss,[0 0])
We get output:
Initial temperature: 1
Final temperature: 3.21388e-007
Consecutive rejections: 1027
Number of function calls: 6220
Total final loss: -1.03163
x =
-0.0899 0.7127
f =
-1.0316
This reasonably approximates the analytical global minimum (note that, due to randomness, your results will likely not be exactly the same).

### Aniruddha Vijaykumar Shembekar

Thank you.

### Dalya

### Felix Goldberg

I would like to use this for a problem of optimal column selection from a fixed matrix. Thus my solution vector x is binary. Is there a principled way to handle this situation using this implementation?
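Since the Generator option accepts any function handle that maps a solution to a valid new solution, one possible approach for a binary vector is a generator that flips a single randomly chosen bit, so every proposal stays in {0,1} (a sketch only; the loss function is assumed to accept binary row vectors):

```matlab
% Flip one randomly chosen entry of a binary row vector x.
% randperm(length(x))==length(x) marks exactly one position, as in the default generator.
flipOneBit = @(x) double(xor(x, randperm(length(x)) == length(x)));

options = anneal();
options.Generator = flipOneBit;  % proposals remain binary vectors
```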

### Joshua

From my understanding, this isn't a strict simulated annealing program, but more of a pure Monte Carlo.
To be simulated annealing, the 'Generator' would need to be modified so that the size of the changes it makes to the model parameters shrinks as the temperature shrinks.

### lvlin

### Joachim Vandekerckhove

To those who are looking to constrain parameters to a certain range, the easiest way is to constrain the Generator option. For example: Generator = @(x) (rand(3,1)); will only return results in the unit cube. You can also make your own Generator = @(x)myRand(x); to return only points in any domain you choose.
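As a concrete sketch of that idea, a generator can perturb the current point and then clip the result to box bounds (the bounds lb and ub and the step size below are hypothetical examples):

```matlab
lb = [0 0];    % lower bounds (example values)
ub = [5 5];    % upper bounds (example values)

% Perturb one element, as the default generator does, then clip to [lb, ub].
step    = @(x) x + (randperm(length(x)) == length(x))*randn/100;
bounded = @(x) min(max(step(x), lb), ub);

options = anneal();
options.Generator = bounded;   % every proposal stays inside the box
```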

### Emmanuel Farhi

That's a very good optimizer, especially on noisy problems.

### Luke S

Hi,

This function worked really well right off the bat, thanks! Is there a way to incorporate bounds for the parameters? I'm simulating some electrophysiological data, and I want to constrain the parameters to realistic values.

Thanks,
Luke

### LIAO xj

### Sanjay Manohar

Thanks so much! Worked first time, out of the box, without modification to my previous code, to solve a family of problems that fminsearch could not. Which is better than most built-in MATLAB functions manage!

### aymen

How can we adjust PID parameters with this algorithm?

### Nik

Can anybody suggest how to incorporate constraints (inequality/equality) to the optimization? Especially in a discrete space?

Thank you.

### Brecht

An implementation of the SIMPSA algorithm, a combination of simulated annealing and the simplex algorithm, can be found here:

The SIMPSA algorithm was developed and described in:

Cardoso, M. F., Salcedo, R., and Feyo de Azevedo, S. (1996). The simplex-simulated annealing approach to continuous non-linear optimization. Computers and Chemical Engineering, 20(9):1065-1080.

### David Schwartz

There is a minor bug in anneal: it fails to keep/return the best solution found when it is not the final cooled solution. Easily fixed.

Can anybody help me out with how to generate an examination timetable using a hybrid of GA/SA in MATLAB?

### Chih-Ying Hsiao

I optimized a likelihood function of 15 parameters. Some parameters still do not attain their minimum. However, this method improved my previous results a lot. Thank you.

### Christophe

### Terrance Nearey

### Jorge Garriga

I am using your SA code from this repository for parameter estimation of reaction rate constants. Is there any way to constrain the optimization? I keep getting negative rate constants when they should be positive.

### Seyed Iman

I have a nonconvex problem, and I have a gradient method that finds local optima. Can I use this code to generate initial values for the gradient method?

Can I give multiple initial guesses to this code (the local optima I get from the gradient method)?

### David

### Joachim Vandekerckhove

Unless I'm misunderstanding, you can do that by defining the anonymous function dynamically, like so:

min2logL = @(x,y) x*log(y);
const = -2;
loss = @(p)min2logL(const,p);
[x f] = anneal(loss,[0 0])

... or similar.


http://www-stat.wharton.upenn.edu/~stroud/classics/KirkpatrickGelattVecchi83.pdf

### Allen

You should allow the Loss function to take additional arguments, like csminwel does. I could just use global variables, but that is bad programming.

### ato Xu

Thanks.

### Yongguo SHI

### Joachim Vandekerckhove

Yes, for an erratic function like that, you'd need to change the options a bit. I would expect that you need a lot of exploring in the beginning (set InitTemp to 100), slow cooling (set CoolSched to @(T) .95*T), and probably steps in multiple directions simultaneously (set Generator to @(x) x+randn(1,2)/10). Those settings always find the global optimum (although after many steps).

### Hom Gharti

It's very nice code, but I could not obtain the correct minimum value for the Goldstein-Price function as defined at http://www.geatbx.com/docu/fcnindex-01.html. Do I have to change something in the options?

### woosung yang

Good.

It's a nice general-purpose implementation. It could be made more versatile by allowing uphill steps with different probability distributions (this uses Boltzmann) and different freezing conditions.

### Joachim Vandekerckhove

Thank you for the comment, Travis. The error is thrown by the default generator function, which only operates on row vectors. To work with column vectors, you can pass the following generator to ANNEAL:
@(x) (x(:)'+(randperm(length(x))==length(x))*randn/100)'

I'll add that to the help shortly (the submission portal seems to be down at the moment).

### Travis Johnson

Great job, I like the program and so do the faculty here.

One suggestion though: right now it fails (in a non-helpful way) if you give it a loss function that works on column vectors (and a parent that is a column vector). Everything in the Optimization Toolbox works on both row and column vector functions.

If you can't fix it to work on column vectors, at least add a note to the comments saying that it needs to be a row vector.

### Joachim V

Any suggestions? The file now allows you to provide a custom generator, so the choice of generator is basically unlimited, unless it should be giving multiple outputs (which I can't really imagine).

### MAS Sahabi

Thank you, good program, though the generator needs some improvements.

### Joachim Vandekerckhove

Since I receive many requests for the Kirkpatrick et al. paper, it can be found online here:

http://www.cs.virginia.edu/cs432/documents/sa-1983.pdf

### pyari lal

Good one.

Hello.

### huilin zhou

### Alireza Sarand

### L Taylor

Extremely thankful that I found this. I found it accessible, even though I'm new to simulated annealing. Thank you! It pairs great with Wikipedia's entry on SA.

### robert bolding

The code accepts the input function to be optimized only via a function handle; that is, it has to be known in advance in its explicit form. I wonder whether it is possible to use the code with functions whose explicit form is not known. Thank you.

### Graeme Doole

A nice, efficient annealing algorithm to adapt as required.

### Laurent Ferro-Famil

Simple, efficient and generic.
Can be easily adapted to particular contexts.

Congratulations

### Saeed Soltani

### Ahmed Bin Ezra

Very well-explained and very easily traced.

### Peng YU

### Dmitrey Kroshko

I connected Joachim Vandekerckhove's files to the OpenOpt project and informed him. If any objections are received, I promise to remove the files.

### Joachim Vandekerckhove

Corrected the typo. Incidentally, the function doesn't work on 5.1 anymore (it now uses anonymous functions).

### Christopher Schwalm

I've found the function useful.

Please change ct to camel in the example.

### Joachim Vandekerckhove

John, I've taken all of your suggestions to heart. The interface is now closer to the standard in the Optimization Toolbox, I've put in defaults for everything, and given the user (optional) control over the annealing schedule.
The previous help didn't include anonymous functions because the algorithm was written in version 5.1, now it's written in 7.0.1. I'm not sure if it'll still work in 5.1, perhaps someone could test?

### ralph guo

### John D'Errico

This does work, after a fashion. What did I not like about the code? Mainly the interface. The author has given you very much (perhaps too much) control over one part of the annealing process, in terms of how perturbations are generated. The help is somewhat confusing, especially about the newsol function that you must provide. Why not provide some default for this? Surely there is an easier interface to use here?
The annealing schedule itself is something you have been given absolutely no control over, so if your problem did not converge quickly enough, your only choice is to edit the code, or keep rerunning it until you are happy. There is no tolerance to decide that it has converged before the time is up.
Finally, a minor point - the author refers often to the use of inline functions, but uses anonymous functions in the examples. Either type of function should work here.