Rank: 248 based on 248 downloads (last 30 days) and 2 files submitted

George Evers

Company/University
The University of Texas - Pan American

Personal Profile:

www.georgeevers.org

Professional Interests:
population-based optimizers, facial expression recognition, forecasting

 

Files Posted by George
Updated      File                                           Downloads (last 30 days)   Comments   Rating

15 May 2011  Particle Swarm Optimization Research Toolbox   245                        23         3.9 (7 ratings)
             Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations. Author: George Evers.
             Tags: particle swarm code, particle swarm algori..., pso algorithm, particle swarm toolbo..., particle swarm optimi..., particle swarm optimi...

06 Dec 2010  Traditionalized Ackley                         3                          0
             Implements the traditional benchmark formulation used in optimization literature. Author: George Evers.
             Tags: ackleyfcn, ackleyfcnm, ackley, benchmark, global optimization, global optimization t...
Comments and Ratings by George
10 Mar 2012 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers

Rohit,

Rather than copying the syntax and functionality of the GA toolbox, I wrote the Particle Swarm Optimization Research Toolbox from the ground up. Among other benefits, this provides the convenience of setting parameters via the control panel without needing to learn a host of functions. While there is still a learning curve to harness the many capabilities, it should not be steep for users specializing in particle swarm optimization, who should find the toolbox versatile enough to support a myriad of research directions.

The documentation contains a section called “Adding Your Problem to the Toolbox”, which describes how to apply the PSO Research Toolbox to solve new problems*. Should you encounter any difficulty, just let me know.

Regards,
George

* http://www.georgeevers.org/pso_research_toolbox_documentation.pdf

13 Nov 2011 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers

Abdul,

Hu and Eberhart designed an approach for implementing PSO with constraints [1], which should work fine with "randn" in lieu of "rand". Section "Adding Constraints" within Chapter "IV. Guide to Conducting Your Own Research" of the documentation explains how to implement it.
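
For orientation, here is a minimal sketch of the feasibility-preserving idea from [1], independent of the toolbox: personal and global bests are only updated by positions that satisfy the constraints. The objective, constraint, and variable names below (e.g. "is_feasible") are illustrative placeholders, not toolbox identifiers.

f = @(x) sum(x.^2, 2);                        % example objective (Sphere); rows are particles
is_feasible = @(x) all(x >= -1 & x <= 1, 2);  % example constraint: every dimension in [-1, 1]

np = 5; dim = 2;
x = 2*rand(np, dim) - 1;                      % initialize only at feasible positions
pbest = x;  pbest_val = f(x);

x_new = x + 0.5*randn(np, dim);               % stand-in for one velocity/position update
vals = f(x_new);
improved = (vals < pbest_val) & is_feasible(x_new);   % accept only feasible AND better positions
pbest(improved, :) = x_new(improved, :);
pbest_val(improved) = vals(improved);
[gbest_val, idx] = min(pbest_val);            % global best is drawn from feasible personal bests
gbest = pbest(idx, :);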

Sampling from a normal distribution at initialization alone would probably affect overall performance negligibly. It would standardize the initial velocities a bit more, but since initialization is only one 'iteration', any difference in behavior would most likely be diluted over the course of the entire search. Realizing a sustainable difference might require applying the same distribution to velocity updates as well.

To conveniently switch between “randn” and "rand" for comparison purposes, simply (i) create a switch, (ii) use its status to select which code to implement at velocity initialization, and (iii) use either the same switch or a unique one to select which code to implement during velocity updates. These steps are elaborated below.

(i) Create switch "OnOff_randn_velocity" at a relevant location within the control panel (e.g. below switch “OnOff_v_reset”), and set it to "logical(1)".

(ii) At lines 20 and 23 of "gbest_initialization.m" and "lbest_initialization.m" respectively, replace

"v = 2*vmax.*rand(np, dim) - vmax;"

with

"if OnOff_randn_velocity
v = 2*vmax.*randn(np, dim) - vmax;
else
v = 2*vmax.*rand(np, dim) - vmax;
end".

(iii) At line 41 of "gbest_core_loop.m" and "lbest_core_loop.m", replace each occurrence of

"r1 = rand(np, dim);
r2 = rand(np, dim);"

with

"if OnOff_randn_velocity
    r1 = randn(np, dim);
    r2 = randn(np, dim);
else
    r1 = rand(np, dim);
    r2 = rand(np, dim);
end".

The traditional uniform distribution has a mean of 1/2, which stochastically models the kinematic physics equation for translating a particle: x_f = x_0 + v_0*t + 1/2*a*t^2 [2: pp. 1-2]. To preserve this mean, "1/2 + randn(np, dim)" could be used in place of "randn(np, dim)" at each mention above. Going a step further, different standard deviations, D, could be experimented with by using "1/2 + D*randn(np, dim)", and D could be set in the control panel for easy access. Personal experimentation has shown that performance deteriorates if stochasticity is removed from PSO altogether by using the static 1/2 of the aforementioned kinematics equation in lieu of pseudo-random numbers; hence, poor behavior should be expected as the standard deviation approaches zero.
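
As a concrete illustration of the shifted-and-scaled sampling just described (the variable name "randn_std_dev" is hypothetical; in practice it would be defined in the control panel for easy access):

np = 10; dim = 2;                          % example swarm size and dimensionality
randn_std_dev = 0.25;                      % the standard deviation D to experiment with
r1 = 1/2 + randn_std_dev*randn(np, dim);   % mean of 1/2 preserved; spread controlled by D
r2 = 1/2 + randn_std_dev*randn(np, dim);
% As randn_std_dev approaches 0, r1 and r2 collapse to the deterministic 1/2 of the
% kinematics equation, which the experimentation mentioned above suggests degrades performance.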

Chapter III of the thesis prompts the question, "What mechanism would most effectively grant particles a healthy degree of distrust by which to avoid converging too quickly?" [3] Sampling random numbers from a normal distribution for velocity updates would be one candidate mechanism. With a standard deviation of 1 and a mean of 1/2, for example, particles would 'trust' the personal and global bests about 69% of the time and 'distrust' them about 31% of the time, so positive accelerations toward the bests would occur over twice as often as negative accelerations away from them. Positive accelerations would also generally be of greater magnitude than negative accelerations, since the bell curve's shift in the positive direction makes positive random numbers generally larger in magnitude than negative ones.
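
Those trust/distrust figures are easy to check in base MATLAB (no toolboxes assumed), both analytically and by simulation:

phi = @(z) 0.5*erfc(-z/sqrt(2));     % standard normal CDF via erfc
p_trust = phi(0.5)                   % P(r > 0) for r ~ N(1/2, 1): about 0.69
p_distrust = 1 - p_trust             % about 0.31
mean(0.5 + randn(1e6, 1) > 0)        % Monte Carlo sanity check, also about 0.69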

It stands to reason that if PSO can encapsulate a healthy degree of distrust, particles should be far less prone to premature convergence, which should improve overall solution quality. In summary, do not be afraid to try sampling from a normal distribution for velocity updates as well as for velocity initialization, even though doing so would occasionally produce negative random numbers: the small negative numbers might beneficially model a healthy degree of 'distrust'.

Feel free to contact me personally through my website with any further questions.

Regards,
George
http://www.georgeevers.org

[1] X. Hu and R. C. Eberhart, "Solving constrained nonlinear optimization problems with particle swarm optimization," in Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics (SCI 2002), Orlando, 2002.
[2] G. Evers, “The No Free Lunch Theorem Does not Apply to Continuous Optimization,” 2011 International Conference on Swarm Intelligence, Cergy, France.
http://icsi11.eisti.fr/papers/paper_25.pdf
[3] G. Evers, “An Automatic Regrouping Mechanism to Deal with Stagnation in Particle Swarm Optimization,” M.S. thesis, The University of Texas – Pan American, Edinburg, TX, 2009.
http://www.georgeevers.org/thesis.pdf
Note: To see that a healthy degree of distrust can be beneficial, compare Gbest PSO tested with: (i) a positive, static inertia weight for Table II-1 (Adobe p. 42), (ii) a linearly decreasing inertia weight for Table II-2 (Adobe p. 44), and (iii) a slightly negative inertia weight with predominantly social acceleration for Table III-3 (Adobe p. 68). All PSO parameters (e.g. w, c1, c2, vmax) are inter-related, such that the ideal choice of any one parameter depends on the values of the others; but for Gbest PSO with a negative inertia weight to significantly outperform even more complicated PSO models certainly warrants further investigation into distrust and parameter selection.
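
For reference, the generic Gbest PSO update in which these parameters interact is sketched below; the numeric values are commonly cited literature values chosen purely for illustration, not the settings used in the tables cited above, and the variable names are generic rather than toolbox identifiers.

np = 10; dim = 2;
w = 0.72; c1 = 1.49; c2 = 1.49; vmax = 0.5;             % illustrative values only
x = rand(np, dim);  v = zeros(np, dim);
pbest = x;  gbest = x(1, :);                            % placeholders for the tracked bests
r1 = rand(np, dim);  r2 = rand(np, dim);
v = w*v + c1*r1.*(pbest - x) + c2*r2.*(repmat(gbest, np, 1) - x);
v = max(min(v, vmax), -vmax);                           % clamp velocities to [-vmax, vmax]
x = x + v;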

10 Nov 2011 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers

Troy,

Without evaluating the quality of each position visited, the optimization process would not work. You should be able to access your Simulink model from within the objective function for this purpose.
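
For example, a hypothetical objective function along these lines could run the model once per function evaluation; the model name "my_model", the parameter names, and the cost computed from the logged output are placeholders to adapt to your own setup, not toolbox code.

function f = simulink_objective(x)
% Save as simulink_objective.m: evaluates one candidate position by simulating a Simulink model.
% Push the decision variables to the base workspace, from which the model reads its tunable parameters.
assignin('base', 'Kp', x(1));
assignin('base', 'Ki', x(2));

% Run the model and retrieve the logged output.
simOut = sim('my_model', 'StopTime', '10', 'ReturnWorkspaceOutputs', 'on');
y = simOut.get('yout');              % format depends on the model's logging settings

% Example cost: integrated squared error against a unit step reference.
f = sum((1 - y(:, 1)).^2);
end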

Regards,
George

01 Sep 2011 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers

Rishi, I've responded at the thread started at [1]. Feel free to post more questions there on the topic, and thank you for investing the time to digest the documentation.

If you encounter any errors, please zip the toolbox folder together with the files it contains (i.e. minus the Data folder) and send the whole thing to me. This will make debugging quicker and easier.

Regards,
George

[1] http://www.mathworks.com/matlabcentral/newsreader/view_thread/312084

14 Feb 2011 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers

Mark, since Jongwon is new to the community, he probably had the impression that a rating was required to leave a comment.

Your balancing perspective more than offsets the rating. Thanks!

Comments and Ratings on George's Files
15 May 2013 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers sehrish

Hi, I want to train a neural network using the Bat Algorithm, which is available on MATLAB Central, but I don't know how to use it as the training algorithm. Please guide me on how to do the same NN training as is done with PSO.

06 Feb 2013 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers Amer, Motaz

Dear George,

I'd like to ask: if I want the full version of the PSO toolbox, how can I download it, and where can I find it?

15 Jan 2013 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers MTOLERA, IBRAHIM

Sorry, I am new to using PSO. How can I create the switch OnOff_Constraints in the control panel?

27 May 2012 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers anjaneya

Hi George!
Does RegPSO work well for symmetric cost functions only?

10 Mar 2012 Particle Swarm Optimization Research Toolbox Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations Author: George Evers Rohit

Top Tags Applied by George
ackley, ackley bug, ackley fix, ackleyfcn, ackleyfcn bug
