The Particle Swarm Optimization Research Toolbox was written to assist with thesis research combating the premature convergence problem of particle swarm optimization (PSO). The control panel offers ample flexibility to accommodate various research directions; after specifying your intentions, the toolbox will automate several tasks to free up time for conceptual planning.
+ Choose from Gbest PSO, Lbest PSO, RegPSO, GCPSO, MPSO, OPSO, Cauchy mutation of global best, and hybrid combinations.
+ The benchmark suite consists of Ackley, Griewangk, Quadric, noisy Quartic, Rastrigin, Rosenbrock, Schaffer's f6, Schwefel, Sphere, and Weighted Sphere.
+ Each trial uses its own sequence of pseudo-random numbers to ensure both replicability and uniqueness of data.
+ Choose a maximum number of function evaluations or iterations. Terminate early if the threshold for success is reached or premature convergence is detected.
+ Choose a static or linearly varying inertia weight.
+ Activate velocity clamping and specify the percentage of the search space's range to use as the maximum velocity.
+ Choose symmetric or asymmetric initialization.
+ A suite of pre-made graph types facilitates understanding of swarm behavior.
-- Automated Graph Features --
> Specify where on the screen to generate figures.
> Automatically generate titles, legends, and labels.
> Automatically save figures to any supported format.
-- Graph Types --
> Phase plots trace each particle's path across a contour map of the search space with iteration numbers overlaid.*
> Swarm trajectory snapshots capture the swarm state at intervals, with optional tags marking global and personal bests.*
> The global best's function value vs iteration shows how solution quality progresses and stagnates over the course of the search.
> The global best vs iteration shows how each decision variable progresses and stagnates with time.
> Each particle's function value vs iteration shows how its own solution quality oscillates with time.
> Each particle's position vector vs iteration shows how its decision variables oscillate toward the local or global minimizer.
> Each particle's velocity vector vs iteration shows how velocity components diminish with time.
> Each particle's personal best vs iteration shows how regularly and significantly each personal best updates.
* Note: Graph types marked with an asterisk are for 2D optimization problems by nature of the contour map.
+ Confine particles to the initialization space when physical limitations or a priori knowledge mandate doing so; but if the initialization space is merely an educated guess at an unfamiliar application problem, particles can be allowed to roam outside.
+ Activate or de-activate the following histories to control execution speed and the size of automatically saved workspaces.
> Global bests
> Function values of global bests
> Personal bests
> Function values of personal bests
> Function values of positions
> Cognitive velocity components
> Social velocity components
Note: Disabling lengthy histories is recommended except when generating data to be published or verifying proper toolbox functioning, in which case histories should be analyzed.
+ Automatic input validation assertively corrects conflicting settings and displays changes made.
+ Automatically save the workspace after each trial and set of trials.
+ Automatically generate statistics.
+ Free yourself from the computer with a progress meter estimating completion time. A "choo choo" sound conveniently signals completion.
+ An Introductory Walk-through in the documentation teaches the basic functionalities of the toolbox, including how to analyze workspace variables.
+ ANN Training Add-in by Tricia Rambharose
+ A history of toolbox updates is available at www.georgeevers.org/pso_research_toolbox.htm, where you can subscribe to be notified of future updates.
+ An introduction to the particle swarm algorithm is available at www.georgeevers.org/particle_swarm_optimization.htm.
+ A well-maintained list of PSO toolboxes is available at www.particleswarm.info/Programs.html.
+ My research on: (i) regrouping the swarm to liberate particles from the state of premature convergence for continued progress, and (ii) empirically searching for high-quality PSO parameters, is available at www.georgeevers.org/publications.htm.
I need to optimize and tune a fuzzy logic controller (fuzzy membership functions and gain values) using PSO. I use Simulink for modeling. How can I use the PSO toolbox for tuning the fuzzy controller and call the function in Simulink? Please let me know.
Thanks and regards
Eliazar, since I wasn't the author of the artificial neural network add-in for the PSO Research Toolbox, I don't support it.
I am having a problem with ANN-PSO Add-In. I am getting error messages as shown below:
Error using trainpso (line 54)
Error in network/subsasgn>getDefaultParam (line
Error in network/subsasgn>setTrainFcn (line
net.trainParam = getDefaultParam(trainFcn);
Error in network/subsasgn>network_subsasgn
Error in network/subsasgn (line 10)
Error in NN_training_demo (line 58)
Thank you for uploading the file, sir. I am new to MATLAB. I am working with a PSO-based UPQC system, so please can you guide me with the above? If there is a relevant file, kindly mail me at firstname.lastname@example.org
Shaista, to optimize the characteristics of your fuzzy system, you need to write the objective function to be evaluated. Each dimension will represent one characteristic to be optimized, and each particle will represent one candidate solution. Please refer to section "Writing Your Function" in chapter "IV. Guide to Conducting Your Own Research" of the PSO Research Toolbox documentation for guidance. You may always download the most recent version of the documentation:
(i) in PDF format from http://www.georgeevers.org/pso_research_toolbox_documentation.pdf
(ii) or in Word format from http://www.georgeevers.org/pso_research_toolbox_documentation.doc
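As a rough sketch only (the function name and placeholder cost below are illustrative, not toolbox code), a compatible objective function evaluates all particles at once: each row of the position matrix is one candidate solution, and each column is one characteristic to be optimized.

```matlab
% Hypothetical sketch of a toolbox-compatible objective function.
% Each row of position_matrix holds one particle's candidate solution;
% each column holds one decision variable (e.g. one fuzzy characteristic).
function [f] = ObjFun_MyProblem(position_matrix, num_particles_2_evaluate)
    x = position_matrix(1:num_particles_2_evaluate, :);
    % Placeholder cost: replace with your own quality measure, such as
    % the error between the desired and simulated system responses.
    f = sum(x.^2, 2);  % one function value per particle (column vector)
end
```

The toolbox minimizes f, so smaller values should correspond to better fuzzy systems.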
I am using the anfisedit function in MATLAB and have generated an ANFIS structure using training data. Now I want to optimize it using the PSO method. How can I do it? Please guide me with an example. I am new to this field.
Arif, you'd just need to write the function to be optimized - as per the users' guide - and any constraints and constraint method into the toolbox.
Since research involves doing something new, there should always be code for a researcher to write. By starting with a good toolbox, you would minimize the amount to be written.
If you're interested in contributing to the toolbox, I might be able to provide some feedback on occasion.
Dear George Evers, very good code!
I want to use PSO for optimal DG placement to minimize reactive power loss; for testing I am using MATLAB. Is it possible to use this code?
Is it possible to use this code for model order reduction problems and minimize the error between systems?
If yes, can you help me in that?
Where can I find Opposition PSO?
Can the problem of optimal placement and sizing of distributed generation in a distribution system for voltage improvement be solved with this toolbox (free version)?
Thank you for answers
I want to optimize a nontraditional machining response using desirability-based PSO. I have 6 parameters at 3 levels and 11 responses, and I have conducted 54 experiments. Kindly help me with how to set this up. I am new to MATLAB.
Can anyone explain how to use this code?
My project is about finding the optimal size of a distributed generator on a 14-bus system using PSO. I'm a newbie; please help me. Thank you.
Thank You very much
Can anyone explain to me how to use this toolbox, as I am new to MATLAB?
Thank you very much!
Hello George, can I use this toolbox for object detection and tracking in video? If yes, please help me with how to do it.
Hi George, I work with GA to get the best results for different differential equations. But now I have heard about PSO, and I want to work with it. How is it possible to use PSO? Please tell me: may I use the same M-file which I made for GA with PSO? Thanks.
Narin, to perform a controlled experiment, the number of layers, number of neurons per layer, bias connections, input connections, layer connections, output connections, target connections, transfer functions, and performance functions need to be set identically for backpropagation and RegPSO. I have an alpha version of the next release I could email you that allows you to set these values in the control panel: my email address is at http://www.georgeevers.org
Chen, opposition-based PSO is activated in the control panel via switch OnOff_OPSO. When active, gbest_core_loop.m or lbest_core_loop.m will implement it. Just open the appropriate file, press Ctrl + F, and type OPSO to jump straight to that chunk of code.
Motaz, it's available at http://www.georgeevers.org
Feel free to contact me through that site.
Ibrahim, switches appear in the first section of the control panel. You can add yours there along with a comment about how to use it. The user's guide contains the remaining steps.
User's guide in pdf format: http://www.georgeevers.org/pso_research_toolbox_documentation.pdf
User's guide in doc format: http://www.georgeevers.org/pso_research_toolbox_documentation.doc
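As a sketch (following the naming and logical(0)/logical(1) conventions of the toolbox's existing switches), a new switch amounts to one line in the control panel plus a conditional wherever the new code should run:

```matlab
% In the first section of the control panel, alongside the other switches:
OnOff_Constraints = logical(1);  % logical(1) = active, logical(0) = inactive

% Then, wherever the new functionality belongs:
if OnOff_Constraints
    % constraint-handling code goes here
end
```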
Anjaneya, no, it works quite well on asymmetric functions as well. Sorry I don't have time to look up the data and post it right now. Feel free to email me if you'd like to see it.
I am using the Neural Network add-in for PSORT. It works very well when it trains networks with the RegPSO algorithm (George Evers) for NN_train_demo.m. But when I apply it to my project, the error of the output is quite high (with the same input variables, ANNs trained by BP algorithms give 1.81% error, but ANNs trained by PSO (RegPSO algorithm) give more than 20%, which is not acceptable).
Any suggestions? Please help me.
Please help me, I need the code for the particle swarm method.
Dear George, I want to know: is there an OPSO file in the toolbox? I didn't find the OPSO M-file. Thanks!
Hi, I want to train a neural network using the Bat algorithm. The Bat algorithm is provided on MathWorks, but I don't know how to use it as a training algorithm. Please guide me on how I can do the same NN training as done by PSO.
I'd like to ask: if I want the full version of the PSO toolbox, how can I download it, and where can I find it?
Sorry, I am new to using PSO. How can I create switch OnOff_Constraints in the control panel?
Does RegPSO work well for symmetric cost functions only?
Rather than copying the syntax and functionality of the GA toolbox, I wrote the Particle Swarm Optimization Research Toolbox from the ground up. Among other benefits, this provides the convenience of setting parameters via the control panel without needing to learn how to use a lot of functions. While there is still a learning curve required to harness the many capabilities, it should not be steep for users specialized in particle swarm, who should find the toolbox quite capable and versatile enough to support a myriad of research directions.
The documentation contains a section called “Adding Your Problem to the Toolbox”, which describes how to apply the PSO Research Toolbox to solve new problems*. Should you encounter any difficulty, just let me know.
I really appreciate your work on the optimization toolbox. It is really a great help to college students like me. Presently, I'm in the final semester of B.Tech, and my project topic is "Optimization of PID controller using PSO". I tried to use your toolbox and tried calling my objective function, but it is showing an error on the "feedback" syntax. The same objective function worked for the Genetic Algorithm toolbox, so why is it not working for this one? When I called the "pidobj" function:
PSO: 1/2000 iterations, GBest = 1.008433808145666e-007.
??? Error using ==> plot3
Vectors must be the same lengths.
Error in ==> goplotpso at 23
Error in ==> pso_Trelea_vectorized at 344
eval(plotfcn); % defined at top of script
Hu and Eberhart designed an approach for implementing PSO with constraints, which should work fine with "randn" in lieu of "rand". Section "Adding Constraints" within Chapter "IV. Guide to Conducting Your Own Research" of the documentation explains how to implement it.
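The heart of that approach - accepting a new personal best only when it is both feasible and at least as good - can be sketched as follows, assuming a user-written feasibility check named "Satisfied" (the loop and variable names are illustrative, not toolbox code):

```matlab
% Sketch of Hu & Eberhart-style constraint handling (illustrative names):
% a personal best updates only when the new position satisfies all
% constraints AND its function value is no worse than the old best.
for i = 1:np
    if Satisfied(i) && (f(i) <= fp(i))
        fp(i) = f(i);        % update personal best's function value
        p(i, :) = x(i, :);   % update personal best's position
    end
end
```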
Sampling from a normal distribution at initialization alone would probably affect overall performance negligibly. It would standardize the initial velocities a bit more, but since initialization is only one 'iteration', any difference in behavior would most likely be diluted over the course of the entire search. Realizing a sustainable difference might require applying the same distribution to velocity updates as well.
To conveniently switch between “randn” and "rand" for comparison purposes, simply (i) create a switch, (ii) use its status to select which code to implement at velocity initialization, and (iii) use either the same switch or a unique one to select which code to implement during velocity updates. These steps are elaborated below.
(i) Create switch "OnOff_randn_velocity" at a relevant location within the control panel (e.g. below switch “OnOff_v_reset”), and set it to "logical(1)".
(ii) At lines 20 and 23 of "gbest_initialization.m" and "lbest_initialization.m" respectively, replace
"v = 2*vmax.*rand(np, dim) - vmax;"
v = 2*vmax.*randn(np, dim) - vmax;
v = 2*vmax.*rand(np, dim) - vmax;
(iii) At line 41 of "gbest_core_loop.m" and "lbest_core_loop.m", replace each occurrence of
"r1 = rand(np, dim);
r2 = rand(np, dim);"
r1 = randn(np, dim);
r2 = randn(np, dim);
r1 = rand(np, dim);
r2 = rand(np, dim);
The traditional uniform distribution has a mean of 1/2, which stochastically models the kinematic physics equation for translating a particle: x_f = x_0 + v_0*t + 1/2*a*t^2 [2: pp. 1-2]. To preserve this mean, "1/2 + randn(np, dim)" could be used in place of "randn(np, dim)" at each mention above. Going a step further, different standard deviations, D, could be experimented with by using "1/2 + D*randn(np, dim)"; and D could be set in the control panel for easy access. Personal experimentation has shown that performance deteriorates if stochasticity is removed from PSO altogether by using the static 1/2 of the aforementioned kinematics equation in lieu of pseudo-random numbers; hence, poor behavior should be expected as the standard deviation approaches zero.
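Concretely, the mean-preserving substitution could look like the following sketch, where D is the experimenter's standard deviation (not an existing toolbox variable):

```matlab
np = 20; dim = 30;            % example swarm size and dimensionality
D = 0.5;                      % standard deviation to experiment with
r1 = 1/2 + D*randn(np, dim);  % mean of 1/2 preserved; spread set by D
r2 = 1/2 + D*randn(np, dim);
```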
Chapter III of the thesis prompts the question, "What mechanism would most effectively grant particles a healthy degree of distrust by which to avoid converging too quickly?" Sampling random numbers from a normal distribution for velocity updates would be one candidate mechanism. With a standard deviation of 1 and a mean of 1/2, for example, particles would 'trust' personal and global bests 69% of the time and 'distrust' them 31% of the time - positive accelerations toward the bests occurring over twice as often as negative accelerations away from them. Positive accelerations would also generally be of greater magnitude than negative accelerations, since positive random numbers would generally be larger in magnitude than negative ones due to the shift of the bell curve in the positive direction.
It stands to reason that if PSO can encapsulate a healthy degree of distrust, particles should not prematurely converge nearly as quickly, which should increase overall solution quality. In summary, do not be afraid to try sampling from a normal distribution for velocity updates as well as for velocity initialization even though doing so would occasionally produce negative random numbers: the small negative numbers might beneficially model a healthy degree of 'distrust'.
Feel free to contact me personally through my website with any further questions.
X. Hu and R. C. Eberhart, "Solving constrained nonlinear optimization problems with particle swarm optimization," in Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics (SCI 2002), Orlando, FL, 2002.
G. Evers, "The No Free Lunch Theorem Does Not Apply to Continuous Optimization," 2011 International Conference on Swarm Intelligence, Cergy, France, 2011.
G. Evers, "An Automatic Regrouping Mechanism to Deal with Stagnation in Particle Swarm Optimization," M.S. thesis, The University of Texas – Pan American, Edinburg, TX, 2009.
> To see that a healthy degree of distrust can be beneficial, compare Gbest PSO tested with: (i) a positive, static inertia weight for Table II-1 (Adobe p. 42), (ii) a linearly decreasing inertia weight for Table II-2 (Adobe p. 44), and (iii) a slightly negative inertia weight with predominantly social acceleration for Table III-3 (Adobe p. 68). All PSO parameters are inter-related (e.g. w, c1, c2, vmax) such that the ideal choice of any one parameter depends on the values of the other parameters, but for Gbest PSO with a negative inertia weight to significantly outperform even more complicated PSO models certainly warrants further investigations on the topics of distrust and parameter selection.
I am working with PSO to optimize controller parameters within constraints for a system. I have used the randn command for generating initial velocities. It converges to a solution outside the constraints. Can we limit the velocity to within a constraint? If we can do so, what will the limit be?
Without evaluating the quality of each position visited, the optimization process would not work. You should be able to access your Simulink model from within the objective function for this purpose.
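As a sketch (the model name, workspace variable, and logged signal are hypothetical), the objective function can run the model via sim() and read back a cost logged by the model:

```matlab
% Hypothetical sketch: evaluate each particle by simulating a Simulink model.
function [f] = ObjFun_Simulink_Sketch(position_matrix, num_particles_2_evaluate)
    f = zeros(num_particles_2_evaluate, 1);
    for i = 1:num_particles_2_evaluate
        % Expose this particle's decision variables to the model
        % (assumed to read 'controller_gains' from the base workspace).
        assignin('base', 'controller_gains', position_matrix(i, :));
        simOut = sim('my_model', 'SaveOutput', 'on');   % model name assumed
        cost = simOut.get('cost_signal');               % logged output assumed
        f(i) = cost.signals.values(end);                % final cost value
    end
end
```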
I have tried to understand and read through the PSO documentation so that I can implement your PSO toolbox in my college project.
My project is about designing a queuing system to control the service time (output) according to entity flow rate (input 1) and queue length (input 2) using an FLC, and the system was developed in Simulink and is running well.
Now I have come to the last part of my project: optimizing the FLC's MF parameters (63 in total) so that the waiting time is minimized.
From the PSO documentation, I found that the objective function could be the waiting time, but I realize that I need to formulate the objective function according to my system in Simulink, which seems very tough.
Is it possible to achieve my goal without the objective function? I have the waiting-time data from the simulation; can I utilize it to do the optimization?
I wish you could help me out or give me some ideas, as this is the last part of my project. Thanks.
Rishi, I've responded at the thread started at . Feel free to post more questions there on the topic, and thank you for investing the time to digest the documentation.
If you encounter any errors, please zip the folder with the containing files (i.e. minus the Data folder), and send the whole thing to me. This will make it quicker and easier to debug.
The only problem now remaining is graphing_options, which I had to switch off as it could not recognise the variable Z.
I solved my previous problem (Link:http://www.mathworks.com/matlabcentral/newsreader/view_thread/312084) by changing my objective function as follows,
function [f] = ObjFun_Zeta_Projection ( position_matrix, num_particles_2_evaluate )
Dynset = SET_1;
% Returns the difference between the simulated
% equation and the experimental data.
theta1 = position_matrix(1:num_particles_2_evaluate,1);
theta2 = position_matrix(1:num_particles_2_evaluate,2);
theta3 = position_matrix(1:num_particles_2_evaluate,3);
theta4 = position_matrix(1:num_particles_2_evaluate,4);
y_calc(j,:) = theta1.*(1-exp(-theta2.*xp(j,:))) + ...
diffmat(j,:) = (y_min(j,:)-yp(j,:)).^2;
where j is basically the size of xdata and ydata.
My second problem is now clamping the variables to my desired space. I tried to implement Hu & Eberhart's approach but am receiving the following error,
??? Undefined function or method 'Satisfied' for input arguments of type 'double'.
Error in ==> gbest_core_loop at 167
if (Satisfied(Internal_i) && (f(Internal_i) <= fp(Internal_i))) %if the new function value is
I have sent you some files by mail though for verification.
Mark, since Jongwon is new to the community, he probably had the impression that a rating was required to leave a comment.
Your balancing perspective more than offsets the rating. Thanks!
jongwon chae, do you think it is appropriate to give a mediocre rating to a submission that you clearly have not tested in any way?
I have not had the opportunity to use this PSO submission in any detail yet, and will avoid giving a rating until such time as it might be a meaningful one. The extensive documentation George Evers provides suggests that the software will be of at least moderately high quality.
Jongwon, the optimization functionality is built from the ground up to avoid any such dependency, so it does not require any other products.
If you encounter any ambiguities in the documentation, please let me know so I can address them: I value your feedback. Minor improvements to the documentation made between toolbox versions are uploaded at [1, 2], depending on which format you prefer.
Do I have to buy Matlab Optimization toolbox and Matlab Global Optimization toolbox to run PSO toolbox?
Can it run without them?
Thank you for your work.
Good day users,
An add-in for this PSO toolbox is now available that allows a Neural Network (NN) to be trained by PSO. It can be found at the following file on MathWorks
Khalid, the README included in the zip will introduce you to the toolbox and walk you through some example simulations. Afterward, section "Conducting Your Own Research" will explain how to write your problem as a compatible function. At that point, if you require additional functionalities and are knowledgeable of your research area, I will provide some guidance as time permits to help you contribute the functionality (e.g. as done at http://www.mathworks.com/matlabcentral/newsreader/view_thread/154538 ).
Is it easy to use PSO in medical image segmentation, and can you help me with that?
The most restricting part of the agreement is probably, "You are expressly prohibited from doing the following with or to the PSO Research Toolbox: ... duplicating, reproducing, copying, or modifying it or otherwise creating derivative works from it **AND** either (1) claiming the resulting program as the intellectual property of any person, organization, or entity other than George Evers or (2) calling the resulting program by any name other than PSO Research Toolbox...." The Toolbox Use Agreement strives to protect and respect the intellectual property of authors of PSO Research Toolbox Add-ins and myself. Given the extensive amount of work that has gone into developing the code, I don't believe that anyone who modifies it should reasonably expect either to claim it as one's own intellectual property or to distribute it under a different name.
The main idea is to allow other researchers to contribute functionality in a structured manner that facilitates the evolution of the toolbox - much as MATLAB itself has evolved through the structured accumulation of an immense number of functions. As a very simple example, if any toolbox functionality is found to be inconsistent with a researcher's particular goals, I expect the researcher to be able to add a simple switch to de-activate the functionality rather than deleting it entirely, thus enabling modifications to move in a consistent direction and improve for the good of the community. This should be viewed as an attempt to organize contributions rather than as a restriction - i.e. as a request to channel creative energies in a common direction rather than merely seeing them spent in different directions. MATLAB is the perfect example of what can be accomplished by the organized development of functions, and the Particle Swarm Optimization Research Toolbox Community-Based Development Project is merely a specialized implementation of a similar thought process.
Other researchers are modifying the code to suit their own research while giving back to the community from which they downloaded the code. As the agreement states, "New code contributed either as a new file or as a new section of code within an existing file shall be referred to as a PSO Research Toolbox Add-in and shall be deemed the intellectual property of its author or authors"; hence, users are encouraged to improve the code and, furthermore, can expect formal acknowledgement of their contributions.
Comments added to control panel by user request, definitions added to Appendix A by user request, documentation linked to online to ensure that users have the most recent information
Fixed bug causing axes to be overlaid for some swarm trajectory snapshots, improved readability of code related to 3D graphs, improved clarity in certain sections of the documentation
minor improvements to documentation
Compatibility has been added to accommodate calls from the neural net toolbox through Tricia Rambharose's wrapper.
"Load_trial_data_for_stats.m" no longer attempts to access "fghist" when it does not exist for a previously problematic input combination.
Corrected errors resulting from unorthodox input combinations, which caused graph generation to terminate unexpectedly: details are available at http://www.georgeevers.org/toolbox_updates.rtf
Removed toolbox use agreement from the free version of the toolbox posted here, and merged or converted to appendices the shortest sections of the README
Fixed reported bugs related to switch "OnOff_graphs"; improved README chapter "Conducting Your Own Research," improved Appendix A's definitions of variables in response to questions, improved Appendix A's definitions of benchmarks (work in progress)
Velocity clamping is now set identically for symmetric and asymmetric initialization, graphing switches are only created when switch OnOff_graphs is active, and the walk-through has been revised.
Modified "User Input Validation" phase to reflect MPSO termination criteria, completed section IX of documentation