Code covered by the BSD License  

4.0 | 9 ratings | 291 Downloads (last 30 days) | File Size: 125 KB | File ID: #28291

Particle Swarm Optimization Research Toolbox

by George Evers

24 Jul 2010 (Updated)

Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation, and hybrid combinations


File Information
Description

The Particle Swarm Optimization Research Toolbox was written to assist with thesis research combating the premature convergence problem of particle swarm optimization (PSO). The control panel offers ample flexibility to accommodate various research directions; after specifying your intentions, the toolbox will automate several tasks to free up time for conceptual planning.

EXAMPLE FEATURES

+ Choose from Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation of global best, and hybrid combinations.

+ The benchmark suite consists of Ackley, Griewangk, Quadric, noisy Quartic, Rastrigin, Rosenbrock, Schaffer's f6, Schwefel, Sphere, and Weighted Sphere.

+ Each trial uses its own sequence of pseudo-random numbers to ensure both replicability and uniqueness of data.

+ Choose a maximum number of function evaluations or iterations. Terminate early if the threshold for success is reached or premature convergence is detected.

+ Choose a static or linearly varying inertia weight.

+ Activate velocity clamping and specify the percentage.

+ Choose symmetric or asymmetric initialization. (This and the preceding settings are illustrated conceptually in the sketch following this feature list.)

+ A suite of pre-made graph types facilitates understanding of swarm behavior.
-- Automated Graph Features --
> Specify where on the screen to generate figures.
> Automatically generate titles, legends, and labels.
> Automatically save figures to any supported format.
-- Graph Types --
> Phase plots trace each particle's path across a contour map of the search space with iteration numbers overlaid.*
> Swarm trajectory snapshots capture the swarm state at intervals, with optional tags marking global and personal bests.*
> The global best's function value vs iteration shows how solution quality progresses and stagnates over the course of the search.
> The global best vs iteration shows how each decision variable progresses and stagnates with time.
> Each particle's function value vs iteration shows how its own solution quality oscillates with time.
> Each particle's position vector vs iteration shows how its decision variables oscillate toward the local or global minimizer.
> Each particle's velocity vector vs iteration shows how velocity components diminish with time.
> Each particle's personal best vs iteration shows how regularly and significantly each personal best updates.
* Note: Graph types marked with an asterisk are for 2D optimization problems by nature of the contour map.

+ Confine particles to the initialization space when physical limitations or a priori knowledge mandate doing so; but if the initialization space is merely an educated guess for an unfamiliar application problem, particles can be allowed to roam outside it. (Confinement is also shown in the sketch following this feature list.)

+ Activate or de-activate the following histories to control execution speed and the size of automatically saved workspaces.
ITERATIVE HISTORIES
> Global bests
> Function values of global bests
> Personal bests
> Function values of personal bests
> Positions
> Function values of positions
> Velocities
> Cognitive velocity components
> Social velocity components
Note: Disabling lengthy histories is recommended except when generating data to be published or verifying proper toolbox functioning, in which case histories should be analyzed.

+ Automatic input validation corrects conflicting settings and displays the changes made.

+ Automatically save the workspace after each trial and set of trials.

+ Automatically generate statistics.

+ Free yourself from the computer with a progress meter estimating completion time. A "choo choo" sound conveniently signals completion.

+ An Introductory Walk-through in the documentation teaches the basic functionalities of the toolbox, including how to analyze workspace variables.
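
To make the settings above concrete, below is a minimal, self-contained MATLAB sketch of a Gbest PSO run on the Sphere benchmark that illustrates per-trial seeding, a linearly varying inertia weight, velocity clamping, asymmetric initialization, optional confinement to the search space, and early termination on success. It is an illustration only: apart from names that appear elsewhere on this page (np, dim, vmax, v), the identifiers and values below are hypothetical and are not the toolbox's actual control-panel settings.

% Illustrative sketch only -- not toolbox code.
np = 20; dim = 30;                         % swarm size and problem dimensionality
max_iters = 2000;                          % iteration budget
success_threshold = 1e-6;                  % early-termination threshold

xmin = -100; xmax = 100;                   % Sphere search space
vmax = 0.15*(xmax - xmin);                 % velocity clamp: 15% of the range per dimension
w_start = 0.9; w_end = 0.4;                % linearly decreasing inertia weight
c1 = 1.49618; c2 = 1.49618;                % acceleration coefficients

sphere_fcn = @(x) sum(x.^2, 2);            % objective: one value per row (particle)

for trial = 1:5
    rand('state', trial); randn('state', trial);      % unique, replicable sequence per trial

    % Asymmetric initialization: the swarm starts in the upper half of each dimension.
    x = 0.5*(xmax + xmin) + (xmax - 0.5*(xmax + xmin)).*rand(np, dim);
    v = 2*vmax.*rand(np, dim) - vmax;

    p = x;  fp = sphere_fcn(p);                        % personal bests
    [fg, g_index] = min(fp);  g = p(g_index, :);       % global best

    for k = 1:max_iters
        w = w_start - (w_start - w_end)*k/max_iters;   % linearly varying inertia weight
        r1 = rand(np, dim); r2 = rand(np, dim);
        v = w*v + c1*r1.*(p - x) + c2*r2.*(repmat(g, np, 1) - x);
        v = max(min(v, vmax), -vmax);                  % velocity clamping
        x = x + v;
        x = max(min(x, xmax), xmin);                   % optional: confine particles to the space

        f = sphere_fcn(x);
        improved = f < fp;                             % update personal and global bests
        p(improved, :) = x(improved, :);  fp(improved) = f(improved);
        [fg, g_index] = min(fp);  g = p(g_index, :);

        if fg <= success_threshold, break, end         % early termination on success
    end
    fprintf('Trial %d: best value %.3e after %d iterations\n', trial, fg, k);
end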

ADD-IN
+ ANN Training Add-in by Tricia Rambharose
http://www.mathworks.com/matlabcentral/fileexchange/29565-neural-network-add-in-for-psort

HELPFUL LINKS
+ A history of toolbox updates is available at www.georgeevers.org/pso_research_toolbox.htm, where you can subscribe to be notified of future updates.
+ An introduction to the particle swarm algorithm is available at www.georgeevers.org/particle_swarm_optimization.htm.
+ A well-maintained list of PSO toolboxes is available at www.particleswarm.info/Programs.html.
+ My research on: (i) regrouping the swarm to liberate particles from the state of premature convergence for continued progress, and (ii) empirically searching for high-quality PSO parameters, is available at www.georgeevers.org/publications.htm.

Acknowledgements

Cauchy inspired this file.

This file inspired Neural Network Add In For Psort.

MATLAB release: MATLAB 7.8 (R2009a)
Comments and Ratings (35)
06 Oct 2014 Sampada

Hello George, can I use this toolbox for object detection and tracking in video? If yes, please help me with how to do it.

30 Jun 2014 Muhammad Bilal

Hi George, I work with GA to get the best results for different differential equations. But now I have heard about PSO and want to work with it. How is it possible to use PSO? Please tell me, may I use the same .m file which I made for GA with PSO? Thanks.

02 May 2014 George Evers

Narin, to perform a controlled experiment, the number of layers, number of neurons per layer, bias connections, input connections, layer connections, output connections, target connections, transfer functions, and performance functions need to be set identically for backpropagation and RegPSO. I have an alpha version of the next release I could email you that allows you to set these values in the control panel: my email address is at http://www.georgeevers.org

02 May 2014 George Evers

Chen, opposition-based PSO is activated in the control panel via switch OnOff_OPSO. When active, gbest_core_loop.m or lbest_core_loop.m will implement it. Just open the appropriate file, press Ctrl + F, and type OPSO to jump straight to that chunk of code.
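
For readers unfamiliar with the opposition idea, the following is a conceptual, self-contained sketch of an opposition-based step, not the toolbox's actual code: the opposite of a position x within bounds [xmin, xmax] is xmin + xmax - x, and the better of each position/opposite pair is kept.

% Conceptual sketch of an opposition-based step (illustrative only).
np = 10; dim = 2; xmin = -5; xmax = 5;
objective = @(x) sum(x.^2, 2);               % example objective: Sphere
x = xmin + (xmax - xmin).*rand(np, dim);     % current positions
x_opp = xmin + xmax - x;                     % opposite positions
keep = objective(x_opp) < objective(x);      % rows where the opposite is better
x(keep, :) = x_opp(keep, :);                 % keep the better of each pair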

02 May 2014 George Evers

Motaz, it's available at http://www.georgeevers.org

Feel free to contact me through that site.

02 May 2014 George Evers

Ibrahim, switches appear in the first section of the control panel. You can add yours there along with a comment about how to use it. The user's guide contains the remaining steps.
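
As a concrete illustration, such a declaration could look something like the line below; the exact placement and the companion code it activates are described in the user's guide.

OnOff_Constraints = logical(1); % when logical(1), execute the constraint-handling code added to the core loop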

User's guide in pdf format: http://www.georgeevers.org/pso_research_toolbox_documentation.pdf
User's guide in doc format: http://www.georgeevers.org/pso_research_toolbox_documentation.doc

02 May 2014 George Evers

Anjaneya, no, it works quite well on asymmetric functions as well. Sorry I don't have time to look up the data and post it right now. Feel free to email me if you'd like to see it.

02 May 2014 Narin Sovann

Dear George,
I am using the Neural Network add-in for PSORT. It works very well when it trains networks with the RegPSO algorithm (George Evers) for NN_train_demo.m. But when I apply it to my project, the error of the output is quite high (with the same input variables, ANN training by BP algorithms gives 1.81%, but ANN training by PSO (the RegPSO algorithm) gives more than 20%, which is not acceptable).
Any suggestions? Please help me.

regards,
Narin

25 Apr 2014 rajae

Please help me, I need the code for the particle swarm method.

03 Sep 2013 chen

Dear George, I want to know: is there an OPSO file in the toolbox? I didn't find the OPSO m-file. Thanks!

15 May 2013 sehrish

Hi, I want to train a neural network using the Bat algorithm. The Bat algorithm is provided on MathWorks, but I don't know how to use the Bat algorithm as a training algorithm. Please guide me: how can I do the same NN training as is done by PSO?

06 Feb 2013 Motaz Amer

Dear George

I'd like to ask: if I want the full version of the PSO toolbox, how can I download it, and where can I find it?

15 Jan 2013 IBRAHIM MTOLERA

Sorry, I am new to using PSO. How can I create switch OnOff_Constraints in the control panel?

27 May 2012 anjaneya

Hi George!
Does RegPSO work well for symmetric cost functions only?

10 Mar 2012 Rohit  
10 Mar 2012 George Evers

Rohit,

Rather than copying the syntax and functionality of the GA toolbox, I wrote the Particle Swarm Optimization Research Toolbox from the ground up. Among other benefits, this provides the convenience of setting parameters via the control panel without needing to learn how to use a lot of functions. While there is still a learning curve required to harness the many capabilities, it should not be steep for users specialized in particle swarm, who should find the toolbox quite capable and versatile enough to support a myriad of research directions.

The documentation contains a section called “Adding Your Problem to the Toolbox”, which describes how to apply the PSO Research Toolbox to solve new problems*. Should you encounter any difficulty, just let me know.

Regards,
George

* http://www.georgeevers.org/pso_research_toolbox_documentation.pdf

09 Mar 2012 Rohit

Respected Sir,
I really appreciate your work on the optimization toolbox.
It is really a great help to college students like me.
Presently, I'm in the final semester of B.Tech, and my project topic
is "Optimization of PID controller using PSO".
I tried to use your toolbox and tried calling my objective function, but
it is showing an error on the "feedback" syntax.
The same objective function worked for the Genetic Algorithm toolbox,
so why is it not working for this one?

When I called the "pidobj" function:
>> pso_Trelea_vectorized('pidobj',3)
PSO: 1/2000 iterations, GBest = 1.008433808145666e-007.
??? Error using ==> plot3
Vectors must be the same lengths.

Error in ==> goplotpso at 23
plot3(pos(:,1),pos(:,D),out,'b.','Markersize',7)

Error in ==> pso_Trelea_vectorized at 344
eval(plotfcn); % defined at top of script

20 Dec 2011 ehs

many thanks

13 Nov 2011 George Evers

Abdul,

Hu and Eberhart designed an approach for implementing PSO with constraints [1], which should work fine with "randn" in lieu of "rand". Section "Adding Constraints" within Chapter "IV. Guide to Conducting Your Own Research" of the documentation explains how to implement it.
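
As a rough, self-contained sketch of the core feasibility rule in that approach (a paraphrase for illustration, not the toolbox's code): a personal best is replaced only when the new position both satisfies the constraints and does not worsen the function value. The constraint check and objective below are stand-ins.

% Rough sketch of the feasibility-preserving personal-best update (illustrative only).
constraints_ok = @(x) all(x >= -5 & x <= 5, 2);   % stand-in feasibility test, one flag per particle
objective      = @(x) sum(x.^2, 2);               % stand-in objective

np = 10; dim = 3;
p  = -5 + 10*rand(np, dim);  fp = objective(p);   % current (feasible) personal bests
x  = -6 + 12*rand(np, dim);                       % new positions, possibly infeasible

Satisfied = constraints_ok(x);                    % per-particle feasibility flags
f = objective(x);
update = Satisfied & (f <= fp);                   % replace pbest only if feasible and no worse
p(update, :) = x(update, :);
fp(update)   = f(update);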

Sampling from a normal distribution at initialization alone would probably affect overall performance negligibly. It would standardize the initial velocities a bit more, but since initialization is only one 'iteration', any difference in behavior would most likely be diluted over the course of the entire search. Realizing a sustainable difference might require applying the same distribution to velocity updates as well.

To conveniently switch between “randn” and "rand" for comparison purposes, simply (i) create a switch, (ii) use its status to select which code to implement at velocity initialization, and (iii) use either the same switch or a unique one to select which code to implement during velocity updates. These steps are elaborated below.

(i) Create switch "OnOff_randn_velocity" at a relevant location within the control panel (e.g. below switch “OnOff_v_reset”), and set it to "logical(1)".

(ii) At lines 20 and 23 of "gbest_initialization.m" and "lbest_initialization.m" respectively, replace

"v = 2*vmax.*rand(np, dim) - vmax;"

with

"if OnOff_randn_velocity
v = 2*vmax.*randn(np, dim) - vmax;
else
v = 2*vmax.*rand(np, dim) - vmax;
end".

(iii) At line 41 of "gbest_core_loop.m" and "lbest_core_loop.m", replace each occurrence of

"r1 = rand(np, dim);
r2 = rand(np, dim);"

with

"if OnOff_randn_velocity
r1 = randn(np, dim);
r2 = randn(np, dim);
else
r1 = rand(np, dim);
r2 = rand(np, dim);
end".

The traditional uniform distribution has a mean of 1/2, which stochastically models the kinematic physics equation for translating a particle: x_f = x_0 + v_0*t + 1/2*a*t^2 [2: pp. 1-2]. To preserve this mean, "1/2 + randn(np, dim)" could be used in place of "randn(np, dim)" at each mention above. Going a step further, different standard deviations, D, could be experimented with by using "1/2 + D*randn(np, dim)"; and D could be set in the control panel for easy access. Personal experimentation has shown that performance deteriorates if stochasticity is removed from PSO altogether by using the static 1/2 of the aforementioned kinematics equation in lieu of pseudo-random numbers; hence, poor behavior should be expected as the standard deviation approaches zero.

Chapter III of the thesis prompts the question, "What mechanism would most effectively grant particles a healthy degree of distrust by which to avoid converging too quickly?" [3] Sampling random numbers from a normal distribution for velocity updates would be one candidate mechanism. Using a standard deviation of 1 and a mean of 1/2, for example, particles would 'trust' personal and global bests 69% of the time and 'distrust' them 31% of the time - positive accelerations toward the bests occurring over twice as often as negative accelerations away from them. Positive accelerations would also generally be of greater magnitude than negative accelerations since positive random numbers would generally be larger than negative random numbers due to the shift of the bell curve in the positive direction.
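
As a quick standalone check of those percentages (not toolbox code): drawing random multipliers as 1/2 + D*randn, the expected fraction of positive ('trust') samples is Phi(0.5/D), which is roughly 69% for D = 1.

% Fraction of positive ('trust') samples when drawing 1/2 + D*randn.
D = 1;                                             % standard deviation under consideration
p_trust_exact = 0.5*(1 + erf(0.5/(D*sqrt(2))));    % Phi(0.5/D); about 0.6915 for D = 1
samples = 0.5 + D*randn(1e6, 1);                   % empirical confirmation
p_trust_empirical = mean(samples > 0);
fprintf('trust fraction: exact %.4f, empirical %.4f\n', p_trust_exact, p_trust_empirical)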

It stands to reason that if PSO can encapsulate a healthy degree of distrust, particles should not prematurely converge nearly as quickly, which should increase overall solution quality. In summary, do not be afraid to try sampling from a normal distribution for velocity updates as well as for velocity initialization even though doing so would occasionally produce negative random numbers: the small negative numbers might beneficially model a healthy degree of 'distrust'.

Feel free to contact me personally through my website with any further questions.

Regards,
George
http://www.georgeevers.org

[1] X. Hu and R. C. Eberhart, "Solving constrained nonlinear optimization problems with particle swarm optimization," in Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics (SCI 2002), Orlando, 2002.
[2] G. Evers, “The No Free Lunch Theorem Does not Apply to Continuous Optimization,” 2011 International Conference on Swarm Intelligence, Cergy, France.
http://icsi11.eisti.fr/papers/paper_25.pdf
[3] G. Evers, “An Automatic Regrouping Mechanism to Deal with Stagnation in Particle Swarm Optimization,” M.S. thesis, The University of Texas – Pan American, Edinburg, TX, 2009
http://www.georgeevers.org/thesis.pdf
> To see that a healthy degree of distrust can be beneficial, compare Gbest PSO tested with: (i) a positive, static inertia weight for Table II-1 (Adobe p. 42), (ii) a linearly decreasing inertia weight for Table II-2 (Adobe p. 44), and (iii) a slightly negative inertia weight with predominantly social acceleration for Table III-3 (Adobe p. 68). All PSO parameters are inter-related (e.g. w, c1, c2, vmax) such that the ideal choice of any one parameter depends on the values of the other parameters, but for Gbest PSO with a negative inertia weight to significantly outperform even more complicated PSO models certainly warrants further investigation into the topics of distrust and parameter selection.

10 Nov 2011 Abdul Hameed

Dear George,

I am working with PSO for optimizing controller parameters within constraints for a system. I have used the randn command for generating initial velocities. It converges to a solution outside the constraints. Can we limit the velocity to within a constraint? If we can do so, what will be the limit for it?
Thank you,

Hameed

10 Nov 2011 George Evers

Troy,

Without evaluating the quality of each position visited, the optimization process would not work. You should be able to access your Simulink model from within the objective function for this purpose.
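
As a rough sketch of what that can look like (all names below are placeholders rather than anything from the toolbox or your project): the objective function passes each candidate's decision variables to the Simulink model, runs one simulation per particle, and returns the logged cost, e.g. the waiting time.

function [f] = ObjFun_Simulink_Sketch(position_matrix, num_particles_2_evaluate)
% Hypothetical sketch: evaluate each candidate position by simulating a Simulink
% model whose root-level Outport carries the quantity to minimize (e.g. waiting time).
% 'my_queue_model' and 'mf_params' are placeholder names.
f = zeros(num_particles_2_evaluate, 1);
for i = 1:num_particles_2_evaluate
    assignin('base', 'mf_params', position_matrix(i, :)); % parameters the model reads from the base workspace
    [t, states, y] = sim('my_queue_model');               % classic sim outputs: time, states, outport signals
    f(i) = mean(y(:, 1));                                 % average waiting time over the run
end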

Regards,
George

28 Oct 2011 troy lim

hi George,

I have tried to understand and read through the PSO documentation so that I can implement your PSO toolbox in my college project.

My project is about designing a queuing system to control the service time (output) according to the entity flow rate (input 1) and queue length (input 2) using an FLC, and the system was developed in Simulink and is running well.

Now I have come to the last part of my project, which is to optimize the FLC MF parameters (63 in total) so that the waiting time can be minimized.

From the PSO documentation I found that the objective function could be the waiting time, but I realize that I need to formulate the objective function according to my system in Simulink, which seems very tough.

Is it possible to achieve my goal without the objective function? I have the data on waiting time from the simulation; can I utilize it to do the optimization?

I wish you could help me out or give me some ideas, as this is the last part of my project towards the end... thanks

01 Sep 2011 George Evers

Rishi, I've responded at the thread started at [1]. Feel free to post more questions there on the topic, and thank you for investing the time to digest the documentation.

If you encounter any errors, please zip the folder and the files it contains (i.e. minus the Data folder), and send the whole thing to me. This will make it quicker and easier to debug.

Regards,
George

[1] http://www.mathworks.com/matlabcentral/newsreader/view_thread/312084

31 Aug 2011 Rishi

The only problem now remaining is graphing_options, which I had to switch off as it could not recognise the variable Z in mesh(X1,X2,Z).

31 Aug 2011 Rishi

Hi George,

I solved my previous problem (Link:http://www.mathworks.com/matlabcentral/newsreader/view_thread/312084) by changing my objective function as follows,

*******************************************
function [f] = ObjFun_Zeta_Projection(position_matrix, num_particles_2_evaluate)

load KG_106.mat

SET_1 = M(1:26,:);
Dynset = SET_1;

Run_SET = [Dynset(:,1), Dynset(:,2), Dynset(:,3), Dynset(:,4), Dynset(:,5)];

xp = Run_SET(:,3);
yp = Run_SET(:,4);

for j = 1:26
    % Returns the difference between the simulated
    % equation and the experimental data.

    theta1 = position_matrix(1:num_particles_2_evaluate, 1);
    theta2 = position_matrix(1:num_particles_2_evaluate, 2);
    theta3 = position_matrix(1:num_particles_2_evaluate, 3);
    theta4 = position_matrix(1:num_particles_2_evaluate, 4);

    y_calc(j,:) = theta1.*(1 - exp(-theta2.*xp(j,:))) + ...
        theta3.*(exp(theta4.*xp(j,:)) - 1);
    y_min(j,:) = 100.*y_calc(j,:);
    diffmat(j,:) = (y_min(j,:) - yp(j,:)).^2;
end

f = sum(diffmat);

*******************************************

where j is basically the size of xdata and ydata.

My second problem is now clamping the variables to my desired space. I tried to implement Hu & Eberhart's approach but am receiving the following error:

??? Undefined function or method 'Satisfied' for input arguments of type 'double'.

Error in ==> gbest_core_loop at 167
if (Satisfied(Internal_i) && (f(Internal_i) <= fp(Internal_i))) %if the new function value is better

I have sent you some files by mail though for verification.

Thanks,

Regards,
Rishi

14 Feb 2011 George Evers

Mark, since Jongwon is new to the community, he probably had the impression that a rating was required to leave a comment.

Your balancing perspective more than offsets the rating. Thanks!

11 Feb 2011 Mark Shore

jongwon chae, do you think it is appropriate to give a mediocre rating to a submission that you clearly have not tested in any way?

I have not had the opportunity to use this PSO submission in any detail yet, and will avoid giving a rating until such time as it might be a meaningful one. The extensive documentation George Evers provides suggests that the software will be of at least moderately high quality.

10 Feb 2011 George Evers

Jongwon, the optimization functionality is built from the ground up to avoid any such dependency, so it does not require any other products.

If you encounter any ambiguities in the documentation, please let me know so I can address them: I value your feedback. Minor improvements to the documentation made between toolbox versions are uploaded at [1, 2], depending on which format you prefer.

Regards,
George
http://www.georgeevers.org

[1] http://www.georgeevers.org/pso_research_toolbox_documentation.doc
[2] http://www.georgeevers.org/pso_research_toolbox_documentation.pdf

10 Feb 2011 jongwon chae

Do I have to buy the MATLAB Optimization Toolbox and the MATLAB Global Optimization Toolbox to run the PSO toolbox?

Can it run without them?

Thank you for your work.

10 Dec 2010 Yen Hanning  
29 Nov 2010 Tricia Rambharose

Good day users,

An add-in for this PSO toolbox is now available that allows a Neural Network (NN) to be trained by PSO. It can be found in the following file on MathWorks:

http://www.mathworks.com/matlabcentral/fileexchange/29565-neural-network-add-in-for-psort

Regards,
Tricia

24 Nov 2010 George Evers

Khalid, the README included in the zip will introduce you to the toolbox and walk you through some example simulations. Afterward, section "Conducting Your Own Research" will explain how to write your problem as a compatible function. At that point, if you require additional functionalities and are knowledgeable of your research area, I will provide some guidance as time permits to help you contribute the functionality (e.g. as done at http://www.mathworks.com/matlabcentral/newsreader/view_thread/154538 ).
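
For orientation, the user-contributed objective functions elsewhere on this page follow a two-argument form like the placeholder sketch below; the documentation gives the authoritative requirements, including the expected orientation of the returned values.

function [f] = ObjFun_MyProblem(position_matrix, num_particles_2_evaluate)
% Placeholder sketch: one row of position_matrix per particle to evaluate, and one
% cost per particle returned in f. Check the documentation for the exact requirements.
x = position_matrix(1:num_particles_2_evaluate, :);   % rows = particles, columns = decision variables
f = sum(x.^2, 2);                                     % example cost: the Sphere function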

23 Nov 2010 khalid qora

Is it easy to use PSO in medical image segmentation, and can you help me with that?

13 Oct 2010 George Evers

Eric,

The most restricting part of the agreement is probably, "You are expressly prohibited from doing the following with or to the PSO Research Toolbox: ... duplicating, reproducing, copying, or modifying it or otherwise creating derivative works from it **AND** either (1) claiming the resulting program as the intellectual property of any person, organization, or entity other than George Evers or (2) calling the resulting program by any name other than PSO Research Toolbox...." The Toolbox Use Agreement strives to protect and respect the intellectual property of authors of PSO Research Toolbox Add-ins and myself. Given the extensive amount of work that has gone into developing the code, I don't believe that anyone who modifies it should reasonably expect either to claim it as one's own intellectual property or to distribute it under a different name.

The main idea is to allow other researchers to contribute functionality in a structured manner that facilitates the evolution of the toolbox - much as MATLAB itself has evolved through the structured accumulation of an immense number of functions. As a very simple example, if any toolbox functionality is found to be inconsistent with a researcher's particular goals, I expect the researcher to be able to add a simple switch to de-activate the functionality rather than deleting it entirely, thus enabling modifications to move in a consistent direction and improve for the good of the community. This should be viewed as an attempt to organize contributions rather than as a restriction - i.e. as a request to channel creative energies in a common direction rather than merely seeing them spent in different directions. MATLAB is the perfect example of what can be accomplished by the organized development of functions, and the Particle Swarm Optimization Research Toolbox Community-Based Development Project is merely a specialized implementation of a similar thought process.

Other researchers are modifying the code to suit their own research while giving back to the community from which they downloaded the code. As the agreement states, "New code contributed either as a new file or as a new section of code within an existing file shall be referred to as a PSO Research Toolbox Add-in and shall be deemed the intellectual property of its author or authors"; hence, users are encouraged to improve the code and, furthermore, can expect formal acknowledgement of their contributions.

As for the reserved right to improve the terms of use, this is solely to allow the quick resolution of any flaws that might evidence themselves in the future. It has been some time since I made any substantial changes, and I have no plans to do so in the future; but I certainly expect the right to resolve promptly any problems that might come to light.

Sincerely,
George Evers

23 Sep 2010 Eric

This looks to be a complete and ready-to-use PSO implementation; however, do note that the downloaded package includes additional terms of use which appear contrary to the BSD License claimed on this File Exchange page. The rights to modify the code appear to be heavily restricted, and the author claims the right to change the terms of use without notice. Consequently, I don't think this really belongs here on the File Exchange, where it is implied that submissions are not encumbered with additional restrictions.

Updates
06 Aug 2010

Modified "User Input Validation" phase to reflect MPSO termination criteria, completed section IX of documentation

25 Aug 2010

Velocity clamping is now set identically for symmetric and asymmetric initialization, graphing switches are only created when switch OnOff_graphs is active, and the walk-through has been iterated.

13 Oct 2010

Fixed reported bugs related to switch "OnOff_graphs"; improved README chapter "Conducting Your Own Research," improved Appendix A's definitions of variables in response to questions, improved Appendix A's definitions of benchmarks (work in progress)

21 Oct 2010

Removed toolbox use agreement from the free version of the toolbox posted here, and merged or converted to appendices the shortest sections of the README

27 Oct 2010

Corrected errors resulting from unorthodox input combinations, which caused graph generation to terminate unexpectedly: details are available at http://www.georgeevers.org/toolbox_updates.rtf

12 Nov 2010

"Load_trial_data_for_stats.m" no longer attempts to access "fghist" when it does not exist for a previously problematic input combination.

22 Nov 2010

Compatibility has been added to accommodate calls from the neural net toolbox through Tricia Rambharose's wrapper.

06 Dec 2010

Minor improvements to documentation

10 Jan 2011

Fixed bug causing axes to be overlaid for some swarm trajectory snapshots, improved readability of code related to 3D graphs, improved clarity in certain sections of the documentation

15 May 2011

Comments added to control panel by user request, definitions added to Appendix A by user request, documentation linked to online to ensure that users have the most recent information
