A new metaheuristic optimization algorithm, called Cuckoo Search (CS), is fully implemented, and the vectorized version is given here. This code demonstrates how CS works for unconstrained optimization, and it can easily be extended to solve various global optimization problems efficiently.

Three versions are provided:
Cuckoo_search.m is for a given tolerance.
Cuckoo_search_new.m is for a fixed number of iterations.
Cuckoo_search_spring.m is for constrained optimization (a spring-design problem).

Dear Sir, can you please provide the Cuckoo Search optimization code for job-shop scheduling? If possible, please send it to my email ID: satyendra.gla@gmail.com

Dear Sir,
Can you please upload the Cuckoo Search code for phase-equilibrium calculation?
If possible, can you please send it to my mail ID: m.abbasi.pche@gmail.com

Dear Sir,
Can you please upload the Cuckoo Search code for tuning PID controller gain parameters with an ITAE objective function?
If possible, can you please send it to my mail ID: chandu76522@gmail.com

Sir,
Can I use the same code for tuning the PID controller gain parameters with the cuckoo algorithm?
(Previously I tried the same problem using GA, with IAE as the objective function.)

Hi! What would the upper and lower bounds be if we define them for a 2D image? What do the upper and lower bounds represent, and how can we choose them?
I would be grateful for your response.

Hi Dr.Yang,
I am a PhD student from Durham University. I am trying to apply your Cuckoo Search algorithm in power system analysis. It works quite well in the initial study.
I wonder whether you could explain a bit more about the eggs, cuckoos and nests. For example, how does the algorithm work if there are multiple eggs in one nest?
Many thanks!
Best regards,
Pengfei

Hello, Xin-She Yang,
I am Gurpreet Singh, an M.Tech Computer Science student, currently working on my thesis, and I need your help with it. Can you please tell me how we can implement Cuckoo Search for a Vehicle Routing Problem? Can you guide me?
My mail ID is gurujaswal@gmail.com

I am interested in your cuckoo algorithm. May I know whether a 'train' function has been coded? I would like to use it to train a neural network.
Thank you.

Thanks. The fraction (pa) is checked using the following line (find it in the code):
K=rand(size(nest))>pa;
which provides a vectorized implementation. That is, for n nests, you can check this in one go.

If the condition is true, then you update/replace the solutions by generating new solutions.
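For reference, the full discovery step in the published demo reads roughly as follows (variable names follow the demo code, but treat the exact form as a sketch, since versions differ slightly):

```matlab
% Vectorized discovery: K flags, entry by entry, which nest
% components are to be updated this generation.
K = rand(size(nest)) > pa;
% Biased random walk built from the difference of two random
% permutations of the current set of n nests:
stepsize = rand*(nest(randperm(n),:) - nest(randperm(n),:));
new_nest = nest + stepsize.*K;   % only flagged entries actually move
```

Because K is a logical matrix of the same size as nest, the whole population is processed in a single vectorized statement, with no loop over nests.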

Hello Sir,
I am trying to apply CS to design a digital FIR filter. My objective function returns values in a 60x60 matrix, so I have changed the values of n and nd to 60, but I am still not sure how to attach my objective function to CS (at line 175) to get an optimized result. Your code takes one row at a time and applies 'sum' to reduce it to a single value, which is then compared with the fitness. In my problem, however, how am I supposed to return just one value from 'fobj', given that it does not contain 'sum' or any similar reduction? How can I get the best value in one row to compare with the fitness, so that my 60x60 matrix is optimized according to CS?
Kindly help me; I will be highly obliged.
Thank you.

Thanks. Yes, of course, you can use any linear constraints. Strictly speaking, you can use both linear and nonlinear constraints in the lines where the nonlinear function is defined.
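As a hedged illustration of that idea (this is not taken from the demo files; base_obj and the constraint function g are hypothetical placeholders), a common way to fold constraints into the objective is a penalty term:

```matlab
% Penalty-style constraint handling (sketch): each violated
% inequality constraint g_i(x) <= 0 adds a large quadratic penalty.
lam  = 1e15;                                  % assumed penalty weight
fobj = @(x) base_obj(x) + lam*sum(max(0, g(x)).^2);
% A feasible x leaves fobj(x) = base_obj(x); infeasible points are
% heavily penalized, which pushes the search back into the domain.
```

The spring-design version uses a similar idea: the constraints are evaluated where the nonlinear function is defined, and violations inflate the objective.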

I had an optimal control problem with 56 control parameters for an industrial application. After trying PSO and other algorithms, I tried cuckoo search and found that, among the four methods I tried, cuckoo search obtained the best results. Well done!

I want to ask about implementing Cuckoo Search for a discrete optimization problem...

I am now trying to implement it for the TSP (Travelling Salesman Problem).

Right now, I am representing the edges of the TSP using fuzzy matrices.
I am using Levy flights as the random step for the probability change, but the step is too big. First I tried normalizing it, but then the probability value can become zero, because of the large differences between values in the resulting matrices; if I do not normalize, the search gets stuck in local optima and, of course, the value becomes bigger than 1...

To convert a fuzzy matrix into a path, I am using the Max Number Method.

Besides that, I am a little afraid that my implementation of cuckoo search itself is wrong...

Can you give me a suggestion/idea about this? I am really confused...

Thanks. That's a good question. The demo implementation uses
a given tolerance, but you can easily change it to a given
number of iterations by replacing the line "while (fmin>Tol)"
with the following two lines

N_numEval=1000;
for t=1:N_numEval,

and removing the line "N_iter=N_iter+n", because it becomes irrelevant.
The new stopping criterion should allow you to do things
more flexibly. Of course, to get better accuracy, you may need to
increase N_numEval from 1000 to 10000 or even higher.
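Putting those two lines in context, the modified main loop might look like this (a sketch only; the comments stand in for the unchanged demo code):

```matlab
N_numEval = 1000;          % fixed iteration budget (increase for accuracy)
for t = 1:N_numEval
    % ... generate new cuckoos via Levy flights and evaluate them ...
    % ... abandon a fraction pa of nests and build replacements ...
    % ... keep track of the best nest and fmin found so far ...
end
% The tolerance test "while (fmin>Tol)" and the counter update
% "N_iter=N_iter+n" are no longer needed in this variant.
```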

I am curious about something. Supposedly, cuckoo search can be used as an optimization technique, which means we can find either the maximum or the minimum.

But after testing it with several objective functions, I found that we have to know the tolerance in order to find the best solutions, which means I have to know in advance what value the maximum or minimum of the function is.

Isn't cuckoo search supposed to find the max or min value by itself?

What I found is that when I change the tolerance, the minimum value changes; that is, the minimum found depends on the tolerance.

So it seems this cuckoo search does not find the minimum value; instead, we have to know the right tolerance to obtain it. Isn't that so?

Can you explain this? I am really confused about this matter.

Thanks. Of course, cuckoo search can solve that sort of problem. In fact, it has been designed to solve nonlinear problems in higher dimensions (nd, the number of dimensions), where nd can be 1, 2, 100, 4000, several thousand or even higher. The search principle is the same. In this demo, nd=15, so problems with two variables are usually considered "easy". Thanks.

Hi, thanks. The function you mentioned is too simple. Anyway,
if you want to test any function, just change the last line
(Line 175), and also change line 51 (the number of dimensions).

Hi, can someone help me? I want to use this cuckoo search algorithm to test a function. It is stated that we can replace the function with our own, but I don't know how, or which part I should change. I want to test this function:
y = 6x - x^2

Can someone help me get the values of x and y at the maximum point? Thank you.
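Since the demo minimizes its objective, one way to handle this maximization (a sketch, assuming the demo's fobj convention and nd = 1) is to minimize the negative of y:

```matlab
% Maximize y = 6x - x^2 by minimizing -y:
fobj = @(x) -(6*x - x.^2);
% With bounds such as Lb = 0 and Ub = 6, the search should converge
% near x = 3, where y = 6*3 - 3^2 = 9.
```

The best x returned by the solver is then the maximizer, and negating the returned fmin recovers the maximum value of y.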

Hi, I have read the paper showing that cuckoo search outperforms the particle swarm optimization technique. How can we use the CS algorithm for test-effort estimation?

Thanks. If you can define an appropriate objective function that links to
the parameters of the SVM, it becomes an optimization problem, which
should then be solved efficiently by CS.
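As a hypothetical sketch of that idea (my_cv_error is a placeholder for a user-supplied routine, not part of the demo), the hyperparameters become the search variables:

```matlab
% Treat the SVM hyperparameters as the search variables, e.g.
% x(1) = C and x(2) = gamma, and minimize a validation error:
fobj = @(x) my_cv_error(x(1), x(2));   % returns cross-validation error
% Plugging this fobj into the demo (with nd = 2 and suitable
% bounds Lb, Ub) turns model selection into a CS problem.
```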

Hi Yang, currently I am focusing on selecting an optimal set of features and model parameters (e.g., for an SVM). I was wondering whether it is possible to use CS for my case? Would you please give me some suggestions?

Hi again,
I finally received (and read) your book. I am comparing the code in the book with the latest version published here. I understand this one is supposed to be better and more refined, but I'd like to ask for some clarification nonetheless, since some things don't add up for me.
1) The first thing I noticed is that in get_cuckoo in the book's algorithm (henceforth BA) the cuckoos make random walks around the best solution so far, whereas in the latest algorithm (LA) they move from their own current position. Chapter 12.2 of the book seems to explain these two strategies, but I wonder whether random walks from the current solution (not even around a local best, as in PSO) make intensification too sparse?
2) The second issue has to do with empty_nest. As I understand it, the objective here is to apply "selection of the fittest". In BA the worst nests are indeed selected as candidates for replacement (again with a random walk, this time around the current position). In LA the concept of "worst nests" seems lost and a kind of Differential Evolution approach is taken, where a nest is moved (uniformly at random) toward one of the other nests.
Both approaches seem reasonable to me, but they are rather different conceptually. Under the principle of "survival of the fittest", shouldn't we select (as in BA) the worst nests as candidates for this mutation rather than all of them (provided rand<pa, obviously)? In other words, shouldn't we always pick the worst nest rather than process all nests together in parallel?
Thanks a lot for any help.

Thanks a lot for your prompt reply. Indeed, I ordered your book, as Google's version won't allow full access. In this paper
http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBwQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.130.5359%26rep%3Drep1%26type%3Dpdf&rct=j&q=COMPARISON%20OF%20THREE%20ALGORITHMS%20FOR%20L%C3%89VY%20NOISE&ei=Gb6RTcWeE83EswbxsLDQBg&usg=AFQjCNEQ4Xdw6mUnPesejAzxI4yjUcKP2w&cad=rja

they compare three algorithms for generating Levy-distributed random numbers, and they say McCulloch's algorithm performs better than Mantegna's. They also provide a more complex implementation of Mantegna's algorithm than yours (MATLAB lines 35-49):
35 invalpha = 1/alpha;
36 sigx = ((gamma(1+alpha)*sin(pi*alpha/2))/(gamma((1+alpha)/2)...
37 *alpha*2^((alpha-1)/2)))^invalpha;
38 v = sigx*randn(n,N)./abs(randn(n,N)).^invalpha;
39 kappa = (alpha*gamma((alpha+1)/(2*alpha)))/gamma(invalpha)...
40 *((alpha*gamma((alpha+1)/2))/(gamma(1+alpha)*sin(pi*alpha/2)))^invalpha;
41 p = [-17.7767 113.3855 -281.5879 337.5439 -193.5494 44.8754];
42 c = polyval(p, alpha);
43 w = ((kappa-1)*exp(-abs(v)/c)+1).*v;
44 if(n>1)
45 z = (1/n^invalpha)*sum(w);
46 else
47 z = w;
48 end
49 z = c^invalpha*z;

It seems your version is a simplified one that avoids steps 39-49? By the way, all my questioning is because I am trying to port your algorithm to Java and I need to understand all the nuances. Thanks a lot for any help.
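For comparison, the simplified Mantegna step used in the demo code goes roughly like this (beta = 3/2 in the published version; treat the variable names as illustrative):

```matlab
beta  = 3/2;               % Levy exponent used in the demo
sigma = (gamma(1+beta)*sin(pi*beta/2) / ...
        (gamma((1+beta)/2)*beta*2^((beta-1)/2)))^(1/beta);
u = randn(size(s))*sigma;  % numerator: N(0, sigma^2) samples
v = randn(size(s));        % denominator: N(0, 1) samples
step = u./abs(v).^(1/beta);   % approximately Levy-distributed step
% The extra correction in lines 39-49 above (kappa, the polynomial
% in alpha, the averaging over n) is omitted in this simpler version.
```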

a) For the formula s=s+stepsize.*randn(size(s)):
the stepsize is a random vector, but it is biased, because if
s<best (in the sense of a component-wise comparison), then the
stepsize is biased to one side, which leads to a biased random
walk. To explore the search space more efficiently, a symmetric
random walk should be used; hence the extra randn factor.

Here you might argue that the vector "stepsize" is already
symmetric, but this is only true for the step sizes themselves,
not for s. Ideally (using 2D as an example), a random walk should
consist of a step length and an angle (0 to 360 degrees) on a 2D
surface. The step length should be Gaussian (or Levy) distributed,
but the angle should be uniformly distributed; otherwise,
some regions cannot be reached.
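To make the point concrete, a symmetric 2D random walk can be sketched as follows (illustrative only, not a line from the demo):

```matlab
theta = 2*pi*rand;            % direction: uniform on [0, 2*pi)
r     = abs(randn);           % step length: |Gaussian| (or a Levy draw)
s     = s + r*[cos(theta), sin(theta)];   % s is a 1x2 position vector
% Because the angle is uniform, no direction is favoured, so the
% walk is symmetric even though the step length itself is not.
```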

b) The factor 0.01 (i.e., 1/100) mainly limits the step size.
Otherwise, Levy flights become too aggressive, and the newly
generated solutions may even fall outside the domain.
For a more detailed description of this factor, please see
formulas (4.14) to (4.17) on page 33 of the book
"Nature-Inspired Metaheuristic Algorithms", Second Edition
(Yang 2010), Luniver Press,
or at the link
http://books.google.co.uk/books?id=iVB_ETlh4ogC&printsec=frontcover&dq=nature-inspired+metaheuristic&hl=en&ei=tLaRTbLyMo-0hAeukMybDw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCoQ6AEwAA#v=onepage&q&f=false

Hi there,
could you please clarify why you apply
s=s+stepsize.*randn(size(s));
(line 123)? Isn't stepsize already Levy-distributed? Why the need to multiply by a Gaussian?
Also, could you please explain the statement that "the factor 0.01 comes from the fact that L/100 should be the typical step size of walks/flights, where L is the typical length scale"? Has it anything to do with the lower/upper bound limits of the problem? Thanks a lot.