Thread Subject: ANN Training with PSO

Subject: ANN Training with PSO

From: Burak

Date: 14 Aug, 2007 13:25:35

Message: 1 of 68

Hi there,

I am a graduate student working on particle swarm optimization. I want to learn more about ANN training with PSO. Although there is a good PSO toolbox release, its source code for neural network training seems complicated to me. There are some articles about this issue, but it is not clear how they apply PSO to ANN training.
Thanks for your answers and help.

Burak

Subject: relay coordination using PSO

From: vijayakumar

Date: 10 Oct, 2007 07:19:54

Message: 2 of 68

I am doing my thesis in the area of relay coordination (particularly directional overcurrent relays). I want to know about the use of the Optimization Toolbox in that area; the file name is lipsol.m. My main aim is to solve the relay-setting problem using PSO.

I am waiting for a reply.

Yours,
vijay

Subject: relay coordination using PSO

From: Ken Davis

Date: 10 Oct, 2007 08:00:29

Message: 3 of 68

"vijayakumar " <vijay.manit@mathworks.com> wrote in message
news:fehueq$5k$1@fred.mathworks.com...
>i am doing my thesis in the area of relay coordination,
> (perticularly Directional over current relay)
> i want to know about the working of optimization toolbox
> use in the that area file name is lipsol.m
> main aim is to do the problem of relay setting using the
> PSO...........
>
> i am waiting for reply
>
> your's
> vijay

A description (including references) is in the online documentation.

Subject: ANN Training with PSO

From: Shahril Sulaiman

Date: 23 Oct, 2007 08:20:28

Message: 4 of 68

Cool... I'm also working on a PSO-trained neural network in the energy area. The PSO toolbox won't be of much help if you are doing hybrid PSO-ANN. I'm also searching for MATLAB PSO-ANN source code at the moment. It's quite tricky to identify how the weights can be optimized. For articles on PSO-ANN, you may refer to the IEEE Xplore website. Hope this helps.

"Burak " <newsreader@mathworks.com> wrote in message
<f9sagf$dfn$1@fred.mathworks.com>...
> Hi there,
>
> I am a graduate student working on particle swarm
> optimization. I wanna to learn more about ANN training
with
> PSO. Although there is a good PSO toolbox release, it
seems
> complicated as I observe the source code for neural
network
> training. There are some articles about this issue, but it
> is not clear how they implement PSO to ANN training
> Thanks for your answers and help
>
> Burak

Subject: ANN Training with PSO

From: George

Date: 21 Jun, 2009 22:24:02

Message: 5 of 68

Burak,

To evolve an ANN using PSO or any other population-based heuristic, just make each characteristic of the ANN a dimension to be optimized - unless of course there is some characteristic to be held constant. For example, the number of nodes in the hidden layer could be either a dimension to be optimized or held constant. The same goes for the number of hidden layers, and so on.

For each position a particle assumes, the ANN is implemented with the characteristics contained in the particle's position to determine how well those parameters work. The result of that trial, or the median or mean of a set of trials, is the function value used to optimize the parameters.
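To make the encoding concrete, here is a minimal MATLAB sketch of one way a particle's position could be decoded into the weights of a fixed 2-input, 3-hidden-node, 1-output network and scored; the function name, sizes, and error measure are illustrative assumptions rather than a prescribed design:

function err = evaluate_particle(position, inputs, targets)
% Decode one particle's position (a row vector of 13 values) into the
% layers of a fixed 2-input, 3-hidden-node, 1-output network and score it.
W1 = reshape(position(1:9), 3, 3);    % hidden weights and biases, 3 x (2+1)
W2 = reshape(position(10:13), 1, 4);  % output weights and bias, 1 x (3+1)
hidden = tanh(W1 * [inputs; ones(1, size(inputs, 2))]);
outputs = W2 * [hidden; ones(1, size(hidden, 2))];
err = mean(abs(outputs - targets));   % the function value PSO minimizes
end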

Hope that helps.

George Evers
www.georgeevers.org

"Burak " <newsreader@mathworks.com> wrote in message <f9sagf$dfn$1@fred.mathworks.com>...
> Hi there,
>
> I am a graduate student working on particle swarm optimization. I want to learn more about ANN training with PSO. Although there is a good PSO toolbox release, its source code for neural network training seems complicated to me. There are some articles about this issue, but it is not clear how they apply PSO to ANN training.
> Thanks for your answers and help.
>
> Burak

Subject: PSO toolbox for latest Matlab versions

From: Trish R

Date: 4 Aug, 2009 03:09:03

Message: 6 of 68

Hi friends,

I too am working on ANN training with PSO in MATLAB. I have used the Brian Birge PSO toolbox with MATLAB version 5, but as yet I cannot locate a MATLAB PSO toolbox that works with the latest versions of MATLAB, i.e., version 7. I'd appreciate any help and advice.


Best Regards,
R. Tricia Rambharose

Subject: relay coordination using PSO

From: Mohammed Saeed Saeed

Date: 11 Jan, 2010 07:13:04

Message: 7 of 68

"vijayakumar " <vijay.manit@mathworks.com> wrote in message <fehueq$5k$1@fred.mathworks.com>...
> I am doing my thesis in the area of relay coordination (particularly directional overcurrent relays). I want to know about the use of the Optimization Toolbox in that area; the file name is lipsol.m. My main aim is to solve the relay-setting problem using PSO.
>
> I am waiting for a reply.
>
> Yours,
> vijay

Dear sir, how do you do? I am preparing my Master's on optimal relay coordination, and if you please, I need your help in providing me with a swarm code to determine the optimal relay settings (I cannot write the fitness function for each particle).
Thanks

Subject: PSO toolbox for latest Matlab versions

From: George

Date: 16 Mar, 2010 23:35:14

Message: 8 of 68

"Trish R" <RTRambharose@gmail.com> wrote in message <h588of$sqa$1@fred.mathworks.com>...
> Hi friends,
>
> I too am working on ANN training with PSO in MATLAB. I have used the Brian Birge PSO toolbox with MATLAB version 5, but as yet I cannot locate a MATLAB PSO toolbox that works with the latest versions of MATLAB, i.e., version 7. I'd appreciate any help and advice.
>
>
> Best Regards,
> R. Tricia Rambharose

Trish, I believe the PSO Research Toolbox [1] is fully compatible with the latest versions of MATLAB. One could straightforwardly write an objective function to implement an existing ANN code per particle: the column vector of function values would then iteratively be returned to the toolbox in order to optimize the network.

[1] http://www.georgeevers.org/pso_research_toolbox.htm

Cordially,
George Evers

Subject: ANN Training with PSO

From: Mohammed Ibrahim

Date: 2 Apr, 2010 17:53:06

Message: 9 of 68

"Burak " <newsreader@mathworks.com> wrote in message <f9sagf$dfn$1@fred.mathworks.com>...
> Hi there,
>
> I am a graduate student working on particle swarm optimization. I want to learn more about ANN training with PSO. Although there is a good PSO toolbox release, its source code for neural network training seems complicated to me. There are some articles about this issue, but it is not clear how they apply PSO to ANN training.
> Thanks for your answers and help.
>
> Burak

Subject: ANN Training with PSO

From: George

Date: 3 Apr, 2010 02:15:07

Message: 10 of 68

"Mohammed Ibrahim" <hammudy20@yahoo.com> wrote in message <hp5au2$3iu$1@fred.mathworks.com>...
> "Burak " <newsreader@mathworks.com> wrote in message <f9sagf$dfn$1@fred.mathworks.com>...
> > Hi there,
> >
> > I am a graduate student working on particle swarm optimization. I want to learn more about ANN training with PSO. Although there is a good PSO toolbox release, its source code for neural network training seems complicated to me. There are some articles about this issue, but it is not clear how they apply PSO to ANN training.
> > Thanks for your answers and help.
> >
> > Burak

Burak, to train an ANN using PSO, firstly, identify a well-performing ANN for your application. Find characteristics that seem to work well for problems similar to yours: e.g. novel concepts, number of hidden layers, number of inputs, and types of inputs. Keep a detailed bibliography and save all relevant papers. Though you will train with PSO, you should keep notes of other good training algorithms with which to compare in order to fully demonstrate the validity of PSO as a training mechanism.

Secondly, find a PSO type suitable to your application. For example, RegPSO [1: Chapters 4-6], [2] does a great job of escaping from premature convergence in order to find increasingly better solutions when there is time to regroup the swarm, which would seem to be the case for training ANNs in general.

Thirdly, locate a good PSO toolbox - preferably one that is already capable of implementing the strain of PSO you would like to use. Ideally, the toolbox would contain standard gbest and lbest PSOs as well as the more evolved PSO type found in step two. If the variation of PSO you would like to use is not available in a suitable toolbox, locate a powerful toolbox, and contribute the code. The PSO toolbox doesn't need to have code for training ANNs since you can locate solid code for implementing ANNs and simply interface the best of both worlds.

Fourthly, locate a good ANN code to interface with the toolbox - preferably written in the same language. As long as you can implement the ANN with code alone (e.g. as with MATLAB's neural net toolbox) rather than necessarily depending on a GUI, the two can be interfaced.

Fifthly, interface the PSO toolbox and ANN code by creating a new objective function for the toolbox. If you use the "PSO Research Toolbox," just create a new file called "benchmark_BurakANN.m" using the skeleton below to interface the two codes (your_ann_error is a placeholder for your own ANN evaluation code):
function [f] = benchmark_BurakANN(x, np)
% x: np-by-dim matrix of positions (one candidate weight vector per row)
% f: np-by-1 column vector of function values to be minimized
global dim
f = zeros(np, 1);
for Internal_j = 1:np
    f(Internal_j, 1) = your_ann_error(x(Internal_j, :)); % pass each row into your ANN code
end

What makes sense to me is that each function value in column vector "f" would reflect the error (e.g. the difference or biased difference) between the ANN's prediction and the desired target value, since it is the error that you want to minimize. To be more in line with real-world applications, you could translate each error into financial cost and minimize that value.

FYI, the problem dimensionality will be equal to the number of ANN parameters that you wish to optimize so that each dimension represents one decision variable of the network to be optimized. For example, will you keep the number of hidden layers constant or include this as one dimension to be optimized? Read up on the most recent ANN literature: I could be wrong, but it is my impression that while more complex ANNs (e.g. those with more hidden layers) might be capable of solving more complicated problems, they also tend to memorize the data more quickly, which is a big problem since the goal is not memorization but prediction. I personally would leave the number of hidden layers constant at whatever seems to have worked best in the literature and possibly experiment with changing it at a later time.
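As a quick worked example of the dimension count (the layer sizes here are made-up illustrations): a one-hidden-layer network with 4 inputs, 5 hidden nodes, 1 output, and one bias per neuron gives

n_in = 4; n_hidden = 5; n_out = 1;                 % illustrative sizes
dim = (n_in + 1)*n_hidden + (n_hidden + 1)*n_out   % = 31 decision variables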

Happy Researching!

Sincerely,
George I. Evers

[1] http://www.georgeevers.org/thesis.pdf
[2] http://www.georgeevers.org/Evers_BenGhalia_IEEEConfSMC09.pdf

Subject: ANN Training with PSO

From: George

Date: 5 Apr, 2010 21:17:05

Message: 11 of 68

I apologize; the last reply was intended for Mohammed rather than Burak.

Subject: ANN Training with PSO

From: Trish R

Date: 19 May, 2010 15:53:04

Message: 12 of 68

Hi George,

Your post here was helpful to me, thank you. I have a question. You stated,
"Fifthly, interface the PSO toolbox and ANN code by creating a new objective function for the toolbox."
Can you please clarify the use of objective functions with ANN and PSO? Thank you.


"George " <george@georgeevers.org> wrote in message <hp68bb$j0t$1@fred.mathworks.com>...
> "Mohammed Ibrahim" <hammudy20@yahoo.com> wrote in message <hp5au2$3iu$1@fred.mathworks.com>...
> > "Burak " <newsreader@mathworks.com> wrote in message <f9sagf$dfn$1@fred.mathworks.com>...
> > > Hi there,
> > >
> > > I am a graduate student working on particle swarm optimization. I want to learn more about ANN training with PSO. Although there is a good PSO toolbox release, its source code for neural network training seems complicated to me. There are some articles about this issue, but it is not clear how they apply PSO to ANN training.
> > > Thanks for your answers and help.
> > >
> > > Burak
>
> Burak, to train an ANN using PSO, firstly, identify a well-performing ANN for your application. Find characteristics that seem to work well for problems similar to yours: e.g. novel concepts, number of hidden layers, number of inputs, and types of inputs. Keep a detailed bibliography and save all relevant papers. Though you will train with PSO, you should keep notes of other good training algorithms with which to compare in order to fully demonstrate the validity of PSO as a training mechanism.
>
> Secondly, find a PSO type suitable to your application. For example, RegPSO [1: Chapters 4-6], [2] does a great job of escaping from premature convergence in order to find increasingly better solutions when there is time to regroup the swarm, which would seem to be the case for training ANNs in general.
>
> Thirdly, locate a good PSO toolbox - preferably one that is already capable of implementing the strain of PSO you would like to use. Ideally, the toolbox would contain standard gbest and lbest PSOs as well as the more evolved PSO type found in step two. If the variation of PSO you would like to use is not available in a suitable toolbox, locate a powerful toolbox, and contribute the code. The PSO toolbox doesn't need to have code for training ANNs since you can locate solid code for implementing ANNs and simply interface the best of both worlds.
>
> Fourthly, locate a good ANN code to interface with the toolbox - preferably written in the same language. As long as you can implement the ANN with code alone (e.g. as with MATLAB's neural net toolbox) rather than necessarily depending on a GUI, the two can be interfaced.
>
> Fifthly, interface the PSO toolbox and ANN code by creating a new objective function for the toolbox. If you use the "PSO Research Toolbox," just create a new file called "benchmark_BurakANN.m" using the skeleton below to interface the two codes (your_ann_error is a placeholder for your own ANN evaluation code):
> function [f] = benchmark_BurakANN(x, np)
> % x: np-by-dim matrix of positions (one candidate weight vector per row)
> % f: np-by-1 column vector of function values to be minimized
> global dim
> f = zeros(np, 1);
> for Internal_j = 1:np
>     f(Internal_j, 1) = your_ann_error(x(Internal_j, :)); % pass each row into your ANN code
> end
>
> What makes sense to me is that each function value in column vector "f" would reflect the error (e.g. the difference or biased difference) between the ANN's prediction and the desired target value, since it is the error that you want to minimize. To be more in line with real-world applications, you could translate each error into financial cost and minimize that value.
>
> FYI, the problem dimensionality will be equal to the number of ANN parameters that you wish to optimize so that each dimension represents one decision variable of the network to be optimized. For example, will you keep the number of hidden layers constant or include this as one dimension to be optimized? Read up on the most recent ANN literature: I could be wrong, but it is my impression that while more complex ANNs (e.g. those with more hidden layers) might be capable of solving more complicated problems, they also tend to memorize the data more quickly, which is a big problem since the goal is not memorization but prediction. I personally would leave the number of hidden layers constant at whatever seems to have worked best in the literature and possibly experiment with changing it at a later time.
>
> Happy Researching!
>
> Sincerely,
> George I. Evers
>
> [1] http://www.georgeevers.org/thesis.pdf
> [2] http://www.georgeevers.org/Evers_BenGhalia_IEEEConfSMC09.pdf

Subject: PSO toolbox for latest Matlab versions

From: Tricia Rambharose

Date: 19 May, 2010 16:55:22

Message: 13 of 68

Hi George,

I looked at your work on a new PSO toolbox as explained in your thesis and also the toolbox documentation provided on your website. I did not find any mention of the version of MATLAB you used to implement the toolbox. Can you please specify the version of MATLAB you used successfully with your PSO toolbox? Thank you.

"George " <george@georgeevers.org> wrote in message <hnp4ji$e1m$1@fred.mathworks.com>...
> "Trish R" <RTRambharose@gmail.com> wrote in message <h588of$sqa$1@fred.mathworks.com>...
> > Hi friends,
> >
> > I too am working on ANN training with PSO in MATLAB. I have used the Brian Birge PSO toolbox with MATLAB version 5, but as yet I cannot locate a MATLAB PSO toolbox that works with the latest versions of MATLAB, i.e., version 7. I'd appreciate any help and advice.
> >
> >
> > Best Regards,
> > R. Tricia Rambharose
>
> Trish, I believe the PSO Research Toolbox [1] is fully compatible with the latest versions of MATLAB. One could straightforwardly write an objective function to implement an existing ANN code per particle: the column vector of function values would then iteratively be returned to the toolbox in order to optimize the network.
>
> [1] http://www.georgeevers.org/pso_research_toolbox.htm
>
> Cordially,
> George Evers

Subject: PSO toolbox for latest Matlab versions

From: George

Date: 19 May, 2010 21:30:23

Message: 14 of 68

> Can you please specify the version of MATLAB you used successfully with your PSO toolbox? Thank you.

Tricia,

That is a good point: thank you. I believe the computers on which I wrote the PSO Research Toolbox were running version 2007a, and I have not observed any difficulties with the student version of 2009a. A bug reported on Windows 7 has been fixed, so to my knowledge the PSO Research Toolbox is compatible with all MATLAB versions since 2007a and all Windows versions since XP.

Please report any bugs you come across or difficulties you encounter so that I can keep the PSO Research Toolbox up-to-date.

Cordially,
George I. Evers

Subject: ANN Training with PSO

From: George

Date: 19 May, 2010 22:14:04

Message: 15 of 68

> Your post here was helpful to me, thank you. I have a question. You stated,
> "Fifthly, interface the PSO toolbox and ANN code by creating a new objective function for the toolbox."
> Can you please clarify the use of objective functions with ANN and PSO? Thank you.

Tricia,

You are welcome. Because this post is long, I created headings using capital letters. These don't mean I'm yelling at you though, lol.

* CREATION OF NEW "FUNCTION"
I used the PSO Research Toolbox to combat premature convergence across a suite of popular benchmark functions. To train an ANN instead, I suggest writing a "function" (i.e. in the programming sense of the word) as per step "Fifthly" in the 3 Apr, 2010 posting above in order to interface the toolbox with the ANN. This function will pass: (i) the matrix of position/row vectors from the toolbox to the ANN, and (ii) the column vector of function values to be minimized from the ANN to the toolbox. See the comments atop existing benchmark files for further clarification.

Caution: Be sure to properly define the function value that you will be minimizing. For example, if you define error as f = predicted_f - historical_f, this would produce negative values when the historical value is less than the ANN's predicted value, so the optimization process would not minimize the error: all negative error values would appear more attractive than the ideal error value of zero that occurs when predicted_f = historical_f. A simple solution would be to minimize the absolute value of the error, which creates a true minimum function value of zero where the actual and predicted values are identical. But this might not be practical for applications where erring on one side is more problematic than erring on the other, in which case you might need to add a penalty based on the sign of the error. For example, if I use an ANN to predict the 52-week maximum price of a stock in order to predict the ideal selling price for the year, an error of even one nickel could be devastating if it overestimates my selling price, since that could mean not being able to effect the trade (though realistically, I sold MDT above its "52-week maximum" since it is not a true mathematical maximum but appears to be nothing more than the maximum of the daily closing prices); whereas selling for a nickel less than the predicted 52-week maximum price doesn't hurt me unless I'm trading penny stocks. So I suggest giving some consideration to the most intelligent way to define the error you would like to minimize, since that is crucial to giving your problem real-world meaning.
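A minimal sketch of such a sign-sensitive error measure (the function name and penalty constant are illustrative assumptions, not toolbox settings):

function f = asymmetric_error(predicted_f, historical_f)
% Penalize overestimates more heavily than underestimates, as in the
% stock example above; overshoot_penalty is an arbitrary example value.
overshoot_penalty = 10;
err = predicted_f - historical_f;
if err > 0
    f = overshoot_penalty * abs(err);  % overestimate: trade may never execute
else
    f = abs(err);                      % underestimate: merely suboptimal
end
end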

* MODIFICATION OF BENCHMARKS.M
Once you have created the new function, copy the code below into "Benchmarks.m" to complete the interface between the PSO Research Toolbox and MATLAB's neural net toolbox. Internally, programming variables will refer to each function as a "benchmark": I have used the name "BurakANN" below for consistency with the pseudo code of the 3 Apr, 2010 post.

elseif benchmark_id == 11
    benchmark = 'BurakANN';
    range_ss0 = 10.24; % scalar or row vector: per-dimension search range (example value)
    center_ss0 = 0;    % scalar or row vector: per-dimension center (example value)

To illustrate the use of scalar inputs, the Rastrigin benchmark searches [-5.12, 5.12] per dimension. Since the search space is the same per dimension, you can simply input scalar values. In this case, range_ss0 = 5.12 - (-5.12) = 2*5.12 = 10.24, and center_ss0 = 0.

To illustrate the use of vector inputs, suppose you want to search between -5.12 and 5.12 on the first dimension (i.e. with a range of 10.24 centered at 0) and between 0 and 40 on the second dimension (i.e. with a range of 40 centered at 20). In this case, you could set range_ss0 = [10.24, 40] and center_ss0 = [0, 20]. This functionality was added to enable the PSO Research Toolbox to handle real-world application problems that might involve different feasible values per dimension. For example, if one dimension to be optimized stores the number of nodes in a particular ANN layer, multiple dimensions store the values of the weights, and perhaps another dimension stores the number of hidden layers, vector inputs allow you to define a reasonable center and range for each dimension. Velocities in this case are clamped to percentage "vmax_perc" of the range on each dimension.
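In code, the vector form of the two-dimensional example above would simply be:

range_ss0 = [10.24, 40]; % per-dimension ranges of the initial search space
center_ss0 = [0, 20];    % per-dimension centers of the initial search space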

* SELECTION OF NEW FUNCTION IN CONTROL_PANEL.M
Be sure to set "benchmark_id = 11" near the end of "Control_Panel.m," which tells the PSO Research Toolbox to optimize the function to which id 11 is assigned in "Benchmarks.m" (i.e. your new function).
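That is, near the end of Control_Panel.m:

benchmark_id = 11; % optimize the 'BurakANN' function assigned id 11 in Benchmarks.m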

Cordially,
George I. Evers

Subject: ANN Training with PSO

From: Tricia Rambharose

Date: 24 May, 2010 22:04:04

Message: 16 of 68

Hi George,

Very very informative. Thank you!

However, the interface is still somewhat unclear to me. I agree that the optimization in this case is minimizing the error between the ANN output and the target value(s). Using older PSO toolboxes, the interface with MATLAB is done simply when the network is created, by specifying e.g. 'trainpso' as the training function in the call to MATLAB's newff function. If using your PSO Research Toolbox, what exact call has to be made from a main program to link PSO to an ANN programmed in MATLAB? Your earlier explanations covered creation of a new benchmark function, but no mention was made of modifications or parameter settings for the existing functions of MATLAB's ANN toolbox to link PSO with the ANN. Or maybe I did not fully understand your explanation. Can you further clarify? Thank you.
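For reference, here is a minimal sketch of the older interface style I mean, where 'trainpso' would be supplied by the older PSO toolbox (e.g. Birge's) rather than being a built-in training function, and P is made-up example data:

P = rand(3, 50); % example training inputs (3 features, 50 samples)
net = newff(minmax(P), [5 1], {'tansig', 'purelin'}, 'trainpso'); % 'trainpso' assumed from the older PSO toolbox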



"George " <george@georgeevers.org> wrote in message <ht1nrc$3cf$1@fred.mathworks.com>...
> > Your post here was helpful to me, thank you. I have a question. You stated,
> > "Fifthly, interface the PSO toolbox and ANN code by creating a new objective function for the toolbox."
> > Can you please clarify the use of objective functions with ANN and PSO? Thank you.
>
> Tricia,
>
> You are welcome. Because this post is long, I created headings using capital letters. These don't mean I'm yelling at you though, lol.
>
> * CREATION OF NEW "FUNCTION"
> I used the PSO Research Toolbox to combat premature convergence across a suite of popular benchmark functions. To train an ANN instead, I suggest writing a "function" (i.e. in the programming sense of the word) as per step "Fifthly" in the 3 Apr, 2010 posting above in order to interface the toolbox with the ANN. This function will pass: (i) the matrix of position/row vectors from the toolbox to the ANN, and (ii) the column vector of function values to be minimized from the ANN to the toolbox. See the comments atop existing benchmark files for further clarification.
>
> Caution: Be sure to properly define the function value that you will be minimizing. For example, if you define error as f = predicted_f - historical_f, this would produce negative values when the historical value is less than the ANN's predicted value, so the optimization process would not minimize the error: all negative error values would appear more attractive than the ideal error value of zero that occurs when predicted_f = historical_f. A simple solution would be to minimize the absolute value of the error, which creates a true minimum function value of zero where the actual and predicted values are identical. But this might not be practical for applications where erring on one side is more problematic than erring on the other, in which case you might need to add a penalty based on the sign of the error. For example, if I use an ANN to predict the 52-week maximum price of a stock in order to predict the ideal selling price for the year, an error of even one nickel could be devastating if it overestimates my selling price, since that could mean not being able to effect the trade (though realistically, I sold MDT above its "52-week maximum" since it is not a true mathematical maximum but appears to be nothing more than the maximum of the daily closing prices); whereas selling for a nickel less than the predicted 52-week maximum price doesn't hurt me unless I'm trading penny stocks. So I suggest giving some consideration to the most intelligent way to define the error you would like to minimize, since that is crucial to giving your problem real-world meaning.
>
> * MODIFICATION OF BENCHMARKS.M
> Once you have created the new function, copy the code below into "Benchmarks.m" to complete the interface between the PSO Research Toolbox and MATLAB's neural net toolbox. Internally, programming variables will refer to each function as a "benchmark": I have used the name "BurakANN" below for consistency with the pseudo code of the 3 Apr, 2010 post.
>
> elseif benchmark_id == 11
>     benchmark = 'BurakANN';
>     range_ss0 = 10.24; % scalar or row vector: per-dimension search range (example value)
>     center_ss0 = 0;    % scalar or row vector: per-dimension center (example value)
>
> To illustrate the use of scalar inputs, the Rastrigin benchmark searches [-5.12, 5.12] per dimension. Since the search space is the same per dimension, you can simply input scalar values. In this case, range_ss0 = 5.12 - (-5.12) = 2*5.12 = 10.24, and center_ss0 = 0.
>
> To illustrate the use of vector inputs, suppose you want to search between -5.12 and 5.12 on the first dimension (i.e. with a range of 10.24 centered at 0) and between 0 and 40 on the second dimension (i.e. with a range of 40 centered at 20). In this case, you could set range_ss0 = [10.24, 40] and center_ss0 = [0, 20]. This functionality was added to enable the PSO Research Toolbox to handle real-world application problems that might involve different feasible values per dimension. For example, if one dimension to be optimized stores the number of nodes in a particular ANN layer, multiple dimensions store the values of the weights, and perhaps another dimension stores the number of hidden layers, vector inputs allow you to define a reasonable center and range for each dimension. Velocities in this case are clamped to percentage "vmax_perc" of the range on each dimension.
>
> * SELECTION OF NEW FUNCTION IN CONTROL_PANEL.M
> Be sure to set "benchmark_id = 11" near the end of "Control_Panel.m," which tells the PSO Research Toolbox to optimize the function to which id 11 is assigned in "Benchmarks.m" (i.e. your new function).
>
> Cordially,
> George I. Evers

Subject: ANN Training with PSO

From: Tricia Rambharose

Date: 16 Jul, 2010 18:08:04

Message: 17 of 68

Hi George,

I am using the 2007 version of your PSO Research Toolbox.

I may have found a bug in the toolbox code. When I set

OnOff_SuccessfulUnsuccessful = logical(0);

in the Control_Panel, I get an error from the Standard_Output function:

"Undefined function or variable 'w'."

If OnOff_SuccessfulUnsuccessful = logical(1), however, the toolbox seems to work fine.

Can you provide some insight into this? Thanks!

Subject: ANN Training with PSO

From: George Evers

Date: 25 Jul, 2010 03:51:04

Message: 18 of 68

Tricia, I've fixed the bug; thank you for reporting it. The problem occurred for the input combination (OnOff_w_time_vary) && (~OnOff_SuccessfulUnsuccessful), which I have not used in quite a while since I generally track the number of successful trials.

Concerning our email collaboration on neural net training, I am about to email you the updated version of the toolbox, which I just submitted to The MathWorks (i.e. pending approval). At least one fix is relevant to your goals as I will clarify further in the email.

Subject: ANN Training with PSO

From: George Evers

Date: 26 Jul, 2010 03:49:04

Message: 19 of 68

Tricia,

If you want to call the PSO Research Toolbox as a function by which to return values to the neural net toolbox, you could use a wrapper between the toolboxes. As in an earlier post, I'll use capitalization below for headings.

WRAPPER CODE
function [fg, g_best] = ANNbox_calls_PSORT_wrapper() % This would wrap the PSO Research Toolbox for input/output (list any inputs between the parentheses).
Control_Panel % This would execute the PSO Research Toolbox within the wrapper according to the settings specified in the control panel.
g_best = g(1, :); % Any data produced by the PSO Research Toolbox could be passed to the neural net toolbox at this point, though you are probably only interested in the global best, g(1, :), and its function value, fg.

ONE TRIAL
If you decide to use a wrapper, be sure to set "num_trials" within the control panel to 1 since you presumably want to execute one PSO trial per call. Inadvertently setting it to a larger value would produce valid data so that no error message would be generated, but excess computations would be performed.

PASSING INFO INTO WRAPPER?
If you will be passing information in through a wrapper, "clear all" atop Control_Panel.m would need to be removed to avoid clearing the input information; it would no longer serve a purpose anyway since the PSO Research Toolbox would have a local workspace within the wrapper. You might, however, be able to specify all settings within the control panel rather than passing anything in, which would render this modification unnecessary; in this case, the wrapper would exist only to pass data generated according to the settings in Control_Panel.m back to the neural net toolbox.

HOW TO EVALUATE WEIGHTS?
The PSO Research Toolbox iteratively passes the position matrix, x, into the objective/benchmark function and utilizes the returned vector of function values, f, to update the global best, g, and personal bests, p, before updating velocities, v, and positions. If calling the PSO Research Toolbox from the neural net toolbox through a wrapper, a neural net would be pre-defined, and the PSO Research Toolbox would need to iteratively pass the position/weight matrix into that network to evaluate the weights' effectiveness in order to optimize them. One way to do this would be to (i) declare the network and all pertinent information global before calling the PSO Research Toolbox, and (ii) declare the same variables global atop ObjFun_ANN.m/benchmark_ANN.m in order to grant the objective function access to them.
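As a minimal sketch of that global-variable handoff (the variable names net, P, and T are illustrative assumptions):

% (i) Before calling the PSO Research Toolbox:
global net P T
P = rand(3, 50); % example training inputs
T = rand(1, 50); % example training targets
net = newff(minmax(P), [5 1], {'tansig', 'purelin'}); % pre-defined network
% ... call the wrapper / Control_Panel here ...

% (ii) Atop ObjFun_ANN.m / benchmark_ANN.m:
global net P T % the same declarations grant the objective function access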

There are other possible approaches that we might discuss before proceeding. Would 8-9 AM or 9:30PM onward work for a phone call? It would be good to see clearly where we're going before trying to get there.

George

Subject: ANN Training with PSO

From: George Evers

Date: 29 Jul, 2010 06:03:07

Message: 20 of 68

Tricia,

It was nice chatting with you this evening. The updated Particle Swarm Optimization Research Toolbox is available at www.mathworks.com/matlabcentral/fileexchange/28291-particle-swarm-optimization-research-toolbox. If the updates described at www.georgeevers.org/toolbox_updates.rtf seem relevant to your research, you might consider pasting your code into the updated version. I understand that you have added quite a bit of code, but you can paste in chunks; and it might be easier now than later.

George

Subject: ANN Training with PSO

From: George Evers

Date: 29 Jul, 2010 06:43:23

Message: 21 of 68

Tricia,

Sorry, reposting with clickable links: the updated Particle Swarm Optimization Research Toolbox is available at http://www.mathworks.com/matlabcentral/fileexchange/28291-particle-swarm-optimization-research-toolbox. If the updates described at http://www.georgeevers.org/toolbox_updates.rtf seem relevant to your research, you might consider pasting your code into the updated version. I understand that you have added quite a bit of code, but you can paste in chunks; and it might be easier now than later.

George

Subject: ANN Training with PSO

From: Tricia Rambharose

Date: 29 Jul, 2010 16:58:04

Message: 22 of 68

Hi George,

Just a few questions. I experimented with 'fg' and 'g' to pass to the NN as we discussed. I see that 'fg' is a single value, the function value at the global best. 'g' seems to be a matrix, however. Is 'g' supposed to be the single position vector where 'fg' occurs? Thanks.


"George Evers" <george@georgeevers.org> wrote in message <i2r7ub$dnu$1@fred.mathworks.com>...
> Tricia,
>
> Sorry, reposting with clickable links: the updated Particle Swarm Optimization Research Toolbox is available at http://www.mathworks.com/matlabcentral/fileexchange/28291-particle-swarm-optimization-research-toolbox. If the updates described at http://www.georgeevers.org/toolbox_updates.rtf seem relevant to your research, you might consider pasting your code into the updated version. I understand that you have added quite a bit of code, but you can paste in chunks; and it might be easier now than later.
>
> George

Subject: ANN Training with PSO

From: George Evers

Date: 29 Jul, 2010 20:14:20

Message: 23 of 68

Tricia, you are correct that the global best is a single position vector; however, that row vector is replicated across each row of "g" to accommodate matrix subtraction in the velocity update equation (i.e. to have the same dimensionality as the matrix of positions, "x," and the matrix of personal bests, "p"). Please access the global best as a vector using "g(1, :)," which will always work regardless of the swarm size.
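In other words:

best_position = g(1, :); % the global best as a row vector, for any swarm size
best_value = fg;         % function value at the global best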

Also, the link above is non-working due to the period interpreted as part of the link. The correct location of the updated Particle Swarm Optimization Research Toolbox is [*].

[*] http://www.mathworks.com/matlabcentral/fileexchange/28291-particle-swarm-optimization-research-toolbox

"Tricia Rambharose" <rtrambharose@gmail.com> wrote in message <i2sbus$jbk$1@fred.mathworks.com>...
> Hi George,
>
> Just a few questions. I experimented with 'fg' and 'g' to pass to the NN as we discussed. I see that 'fg' is a single value, the function value at the global best. 'g' seems to be a matrix, however. Is 'g' supposed to be the single position vector where 'fg' occurs? Thanks.

Subject: ANN Training with PSO

From: Tricia Rambharose

Date: 2 Aug, 2010 19:19:24

Message: 24 of 68

Hi George,

I replaced the following lines of code in RegPSO_main.m

if OnOff_v_reset || OnOff_position_clamping
    xmax_matrix = center_ss0 + range_ss0./2; % Calculate once and for all
    xmin_matrix = center_ss0 - range_ss0./2; % rather than iteratively.
end

with the code you sent to me via email.

When I run PSO using gbest, I get an error saying:

"Undefined function or variable 'xmax_matrix'.
Error in ==> gbest_core_loop at 82"

It seems that since I removed the above lines of code, xmax_matrix, which is referenced in gbest_core_loop, no longer exists.

How do you suggest fixing this? Thanks again!


"George Evers" <george@georgeevers.org> wrote in message <i2snes$u7$1@fred.mathworks.com>...
> Tricia, you are correct that the global best is a single position vector; however, that row vector is replicated across each row of "g" to accommodate matrix subtraction in the velocity update equation (i.e. to have the same dimensionality as the matrix of positions, "x," and the matrix of personal bests, "p"). Please access the global best as a vector using "g(1, :)," which will always work regardless of the swarm size.
>
> Also, the link above is non-working due to the period interpreted as part of the link. The correct location of the updated Particle Swarm Optimization Research Toolbox is [*].
>
> [*] http://www.mathworks.com/matlabcentral/fileexchange/28291-particle-swarm-optimization-research-toolbox
>
> "Tricia Rambharose" <rtrambharose@gmail.com> wrote in message <i2sbus$jbk$1@fred.mathworks.com>...
> > Hi George,
> >
> > Just a few questions. I experimented with 'fg' and 'g' to pass to the NN as we discussed. I see that 'fg' is a single value, the function value at the global best. 'g' seems to be a matrix, however. Is 'g' supposed to be the single position vector where 'fg' occurs? Thanks.

Subject: ANN Training with PSO

From: George Evers

Date: 3 Aug, 2010 07:06:04

Message: 25 of 68

Tricia, in newer versions of the toolbox, "xmin_matrix" and "xmax_matrix" have been renamed "xmin" and "xmax" for simplicity. You could either: (a) paste your code into the most recent version of the toolbox using current variable names to avoid all such errors in the future, or (b) simply rename "xmin" and "xmax" as "xmin_matrix" and "xmax_matrix" in the fix.

George

"Tricia Rambharose" <rtrambharose@gmail.com> wrote in message <i375ns$mq9$1@fred.mathworks.com>...
> Hi George,
>
> I replaced the following lines of code in RegPSO_main.m
>
> if OnOff_v_reset || OnOff_position_clamping
>     xmax_matrix = center_ss0 + range_ss0./2; % Calculate once and for all
>     xmin_matrix = center_ss0 - range_ss0./2; % rather than iteratively.
> end
>
> with the code you sent to me via email.
>
> When I run PSO using gbest, I get an error saying:
>
> "Undefined function or variable 'xmax_matrix'.
> Error in ==> gbest_core_loop at 82"
>
> It seems that since I removed the above lines of code, xmax_matrix, which is referenced in gbest_core_loop, no longer exists.
>
> How do you suggest fixing this? Thanks again!
>
>
> "George Evers" <george@georgeevers.org> wrote in message <i2snes$u7$1@fred.mathworks.com>...
> > Tricia, you are correct that the global best is a single position vector; however, that row vector is replicated across each row of "g" to accommodate matrix subtraction in the velocity update equation (i.e. to have the same dimensionality as the matrix of positions, "x," and the matrix of personal bests, "p"). Please access the global best as a vector using "g(1, :)," which will always work regardless of the swarm size.
> >
> > Also, the link above is non-working due to the period interpreted as part of the link. The correct location of the updated Particle Swarm Optimization Research Toolbox is [*].
> >
> > [*] http://www.mathworks.com/matlabcentral/fileexchange/28291-particle-swarm-optimization-research-toolbox

Subject: ANN Training with PSO

From: Tricia Rambharose

Date: 3 Aug, 2010 21:55:04

Message: 26 of 68

Hi George,

Thanks again! For now I made the change in the fix. I want to switch over to the latest version of your PSO Research Toolbox, but there's a bug in the interfacing code I developed that I need to fix first.

What I need to know now is: what are the stopping conditions for PSO in your toolbox, and where are they found? I ask because I want PSO to stop when a certain fitness value is found. After some tests I realised that the PSO continues long after reaching this specified stopping fitness value. Also, from my tests, at the minimum the PSO continues for many iterations even though all particles continuously give the same fitness value. This is a waste of time since no change is observed in the particles' positions for quite a while before it stops.

In ANN training the user can specify a training goal. If applying PSO to ANN training, I think it would make sense to stop the PSO algorithm when this ANN goal is met. I see in Control_Panel.m there is a variable 'true_global_minimum'. I set this to be the ANN goal, but PSO still does not stop when it is reached. I also experimented with setting 'thresh_for_succ' in Benchmarks.m to the ANN goal, but with no better results.

Can you differentiate true_global_minimum and thresh_for_succ, and also provide an explanation for this stagnation of PSO? Thanks!

Subject: ANN Training with PSO

From: Tricia Rambharose

Date: 3 Aug, 2010 22:54:22

Message: 27 of 68

Good day, I realised the fix. Now it works, stopping at the specified ANN goal!

What I still need assistance on is:
"at the minimum the PSO continues for many iterations even though all particles continuously give the same fitness value. This is a waste of time since no change is observed in the particles' positions for quite a while before it stops."

and

> Can you differentiate true_global_minimum and thresh_for_succ, and also provide an explanation for this stagnation of PSO? Thanks!

Subject: ANN Training with PSO

From: George Evers

Date: 5 Aug, 2010 05:48:05

Message: 28 of 68

Tricia,

The true global minimum, "true_global_minimum," is set in section “(6) Objectives” to enable any trial that successfully minimizes the fitness value to its lowest possible value to terminate rather than spending excess time in computation. Once the significant figures carried internally by MATLAB agree without error with the specified "true_global_minimum," no further optimization is possible. Of course, when many decision variables map through a complex formula to one fitness value, the decision variables will not be optimized to the same number of decimal places as the fitness value itself.

When switch "OnOff_SuccessfulUnsuccessful" is active in subsection "REGPSO & PSO Success Measures" of the first section, the concurrent activation of switch "OnOff_Terminate_Upon_Success" causes each trial to terminate when the threshold for success, "thresh_for_succ," is achieved as specified in Objectives.m. This will stop PSO when the subjective threshold required for a trial to be considered successful is reached.
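Concretely, the relevant control-panel settings would be:

OnOff_SuccessfulUnsuccessful = logical(1); % track successful vs. unsuccessful trials
OnOff_Terminate_Upon_Success = logical(1); % terminate each trial once thresh_for_succ is achieved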

As for stopping execution if PSO stagnates before reaching the specified threshold for success, Van den Bergh's normalized swarm radius criterion terminates the search when all particles are near the global best, which is a good indication of stagnation [1]. According to quick empirical testing done during my thesis work, monitoring particles' proximities to detect stagnation is more efficient than monitoring the number of iterations for which the function value fails to improve; this stands to reason since monitoring the source of a problem is more efficient than waiting for an undesirable side effect to occur, which is also the logic behind preventive medicine and firewalls. Switch "OnOff_NormR_stag_det" can be activated in subsection "Termination Criteria for REGPSO & PSO Algorithms" in the first section. At this point, RegPSO goes a step farther than terminating the search by regrouping the swarm about the global best for continued progress when stagnation is detected, which allows the function value to improve over multiple groupings. RegPSO can be activated via switch "OnOff_RegPSO" in subsection "PSO Algorithm Selection" of the first section.

I've improved Appendix A of the documentation in response to your questions [2]. Thank you, and keep them coming.

Cordially,
George

[1] F. van den Bergh, “An Analysis of Particle Swarm Optimizers,” PhD thesis, Department of Computer Science, University of Pretoria, Pretoria, South Africa, 2002
[2] http://www.georgeevers.org/pso_research_toolbox_documentation.doc

Subject: ANN Training with PSO

From: Tricia Rambharose

Date: 17 Sep, 2010 17:21:05

Message: 29 of 68

Good day George,

I am now using the updated version of the PSO Research Toolbox you emailed to me. I found one bug, however. In the Control Panel the settings are as follows:

OnOff_Autosave_Workspace_Per_Grouping = logical(0);
OnOff_Autosave_Workspace_Per_Trial = logical(0);
OnOff_Autodelete_Trial_Data = logical(1);
OnOff_Autosave_Workspace_Per_Column = logical(0);
OnOff_Autosave_Workspace_Per_Table = logical(0);


Then in execution I get the error:

??? Undefined function or variable 'DateString_dir'.
Error in ==> Display_Settings at 207
    disp(['Start Time: ', DateString_dir])


I think this should be a simple fix, however. Can you please check? Thank you!


"George Evers" <george@georgeevers.org> wrote in message <i3djal$9ks$1@fred.mathworks.com>...
> Tricia,
>
> The true global minimum, "true_global_minimum," is set in section “(6) Objectives” to enable any trial that successfully minimizes the fitness value to its lowest possible value to terminate rather than spending excess time in computation. Once the significant figures carried internally by MATLAB agree without error with the specified "true_global_minimum," no further optimization is possible. Of course, when many decision variables map through a complex formula to one fitness value, the decision variables will not be optimized to the same number of decimal places as the fitness value itself.
>
> When switch "OnOff_SuccessfulUnsuccessful" is active in subsection "REGPSO & PSO Success Measures" of the first section, the concurrent activation of switch "OnOff_Terminate_Upon_Success" causes each trial to terminate when the threshold for success, "thresh_for_succ," is achieved as specified in Objectives.m. This will stop PSO when the subjective threshold required for a trial to be considered successful is reached.
>
> As for stopping execution if PSO stagnates before reaching the specified threshold for success, Van den Bergh's normalized swarm radius criterion terminates the search when all particles are near the global best, which is a good indication of stagnation [1]. According to quick empirical testing done during my thesis work, monitoring particles' proximities to detect stagnation is more efficient than monitoring the number of iterations for which the function value fails to improve; this stands to reason since monitoring the source of a problem is more efficient than waiting for an undesirable side effect to occur, which is also the logic behind preventive medicine and firewalls. Switch "OnOff_NormR_stag_det" can be activated in subsection "Termination Criteria for REGPSO & PSO Algorithms" in the first section. At this point, RegPSO goes a step farther than terminating the search by regrouping the swarm about the global best for continued progress when stagnation is detected, which allows the function value to improve over multiple groupings. RegPSO can be activated via switch "OnOff_RegPSO" in subsection "PSO Algorithm Selection" of the first section.
>
> I've improved Appendix A of the documentation in response to your questions [2]. Thank you, and keep them coming.
>
> Cordially,
> George
>
> [1] F. van den Bergh, “An Analysis of Particle Swarm Optimizers,” PhD thesis, Department of Computer Science, University of Pretoria, Pretoria, South Africa, 2002
> [2] http://www.georgeevers.org/pso_research_toolbox_documentation.doc

Subject: ANN Training with PSO

From: George Evers

Date: 22 Sep, 2010 23:55:26

Message: 30 of 68

Tricia,

The bug only affected the alpha version (i.e. not the publicly posted version) since the final lines of "Display_Settings.m" were modified to bypass the creation of "DateString_dir" in the case of ANN training. I infer that the intention was to bypass the need for the user to validate the correctness of displayed settings prior to execution; hence, I have created the new switch "OnOff_user_validation_required" in subsection "Miscellaneous Features" of the control panel's Section 2 and modified the first and last lines of "Display_Settings.m" accordingly. In the modified code, which I'll email you shortly, the switch is already de-activated to eliminate the need for the user to validate displayed settings. Let me know whether or not the intended functionality is achieved.

Best Regards,
George

Subject: Alpha Version Feedback

From: George Evers

Date: 23 Sep, 2010 02:13:09

Message: 31 of 68

Tricia,

New file “pathdef.m” will produce many “Name is nonexistent or not a directory” warnings on most machines due to: (i) attempted access to drive “I:”, and (ii) attempted access to other directories that may not exist on many machines. Perhaps the path should be specified by the user prior to execution rather than by the toolbox during execution.

Throughout the code, I replaced variable “trials” with already existing variable “num_trials” to avoid unnecessary creation of a new variable. The control panel now sets “num_trials” to 1 atop section “(3) Particle Swarm Algorithm Settings” if “OnOff_NN_training” is active; if inactive, the user specifies the number of trials as before.

I reworded copyright statements within existing files to say “This paragraph copyright” instead of just “Copyright” to explicitly state which lines of code are affected. Please advise if you agree with the changes made.

I made “first_pso” a logical variable since it only serves to check whether or not inputs have been validated by previous calls to the toolbox. As a general habit, using logical variables when possible produces quicker checks and reduces workspace size.

Rather than setting “first_pso” to zero within “Display_Settings.m” after each occurrence of "RegPSO_main," I have set it to zero at the end of "RegPSO_main.m" itself. The motivation for this change was twofold: (i) the line appears only once this way, which simplifies the logic for readability; and (ii) the switch appeared out of context within “Display_Settings.m” since it does not affect settings displayed.

I deleted line “OnOff_NN_training = logical(0);” from the first section of the control panel since that portion of the code only executes if OnOff_NN_training == logical(0) so that setting it to zero becomes redundant.

I removed what appeared to be an extra "End" command on the final line of “ObjFun_NN.m.”

Let me know what you think of the changes. And please consider an approach for "pathdef.m" that allows flexibility across machines. For example, you might execute the file independently of the toolbox, which would allow other users to specify the path manually rather than requiring them to make modifications for their machines.

Cordially,
George

Subject: More questions about the working of the PSO Research toolbox

From: Tricia Rambharose

Date: 4 Oct, 2010 02:58:04

Message: 32 of 68

Hi George,

Thank you for the comments and suggestions for my ANN add-on to your PSO Research Toolbox. I look forward to your further questions and suggestions on the latest version of this add-on, as discussed in our email communications.

In the meantime, I did some more examination of the working of your PSO toolbox and I have a few questions which are as follows:
1. Do the neighbours of each particle change when regrouping happens?

2. If using LBEST PSO, is it that the neighbourhood best of each neighbourhood is determined for each iteration and then the best of all the neighbourhood bests is the final PSO solution?

3. Is RegPSO used with LBEST or GBEST or both?

4. Why is the regrouping factor inversely proportional to the stagnation threshold?

That's it for now.

Keep up the good work!

Subject: More questions about the working of the PSO Research toolbox

From: Tricia Rambharose

Date: 11 Oct, 2010 13:45:06

Message: 33 of 68

Good day again George,

Hope you are well. Here are a few more questions I have to better understand the working of your PSO toolbox:

1. Does regrouping happen after max iters per grouping, or are iterations interrupted due to some condition? If the latter, where is this condition found in the code?
2. In lbest_core and gbest_core, one stopping condition in the while loop is
                    fg ~= true_global_minimum
    If however fg <= true_global_minimum, then shouldn't the PSO stop as well?

I appreciate your help in understanding your toolbox fully!

Subject: More questions about the working of the PSO Research toolbox

From: George Evers

Date: 13 Oct, 2010 07:17:04

Message: 34 of 68

Q1: Do the neighbours of each particle change when regrouping happens?

A1: No, only positions change when the swarm regroups: neighborhoods remain the same.
----------
Q2: If using LBEST PSO, is it that the neighbourhood best of each neighbourhood is determined for each iteration and then the best of all the neighbourhood bests is the final PSO solution?

A2: Yes, that's correct.
----------
Q3: Is RegPSO used with LBEST or GBEST or both?

A3: When Gbest and Lbest PSO compete over a small number of iterations, Gbest PSO generally outperforms since it converges more quickly; but over many iterations, Lbest PSO sometimes outperforms by continuing to make progress once Gbest PSO has stagnated. This behavior is evident in the final graph of the RegPSO slides [1] and Table V-1 of thesis [2]. The slow and steady characteristic of Lbest PSO results from information about the global best gradually traveling from neighborhood to neighborhood rather than immediately being considered by all particles, which also happens to provide a more realistic simulation of actual social processes.

That said, the regrouping mechanism is a good solution to the early onset of premature convergence in Gbest PSO. Regrouping works better in conjunction with Gbest PSO than with Lbest PSO since the resulting RegPSO offers both quick convergence and efficient regrouping, which combine to provide a more aggressive overall rate of convergence toward the global minimizer than the slow and steady approach of Lbest PSO [1].

The Particle Swarm Optimization Research Toolbox allows the concurrent activation of various algorithmic switches such as "OnOff_RegPSO" and "OnOff_lbest" in order to facilitate the testing of hybrid combinations; hence, the regrouping mechanism can be activated in conjunction with Lbest PSO, but it is generally more effective with Gbest PSO since the combination of quick convergence and efficient regrouping produce quicker overall progress than Lbest PSO - regardless of whether regrouping is employed with Lbest PSO or not.

[1] http://www.georgeevers.org/RegPSO_slides.ppt (Ctrl + End: Slide 54)
[2] http://www.georgeevers.org/thesis.pdf
----------
Q4: Why is the regrouping factor inversely proportional to the stagnation threshold?

A4: The lower the stagnation threshold is set, the longer the exploitation phase is allowed to continue, and the more closely particles converge before stagnation is detected and regrouping is triggered. Setting the threshold too low produces excessive exploitation of presumably suboptimal solutions prior to regrouping, which slows the overall rate of progress; whereas, setting the threshold too high triggers regrouping too early while the state of the swarm is still largely affected by initial positions and velocities (i.e. by randomness) so that the uncertainty per dimension cannot be inferred from the swarm state with any certainty. A proper balance between exploitation and exploration lies between premature overanalysis and hasty regrouping.

While adjusting the stagnation threshold in search of this balance, it was desirable for the regrouping factor to adjust automatically so that only one parameter required manual fine tuning. This was the initial motivation for mathematically formulating the relation between "stag_thresh" and "reg_fact." The smaller the stagnation threshold is set, the more closely particles become grouped so that a larger regrouping factor becomes necessary to achieve a regrouping space sufficiently large to facilitate continued exploration; hence, I defined the two variables as inversely proportional so that "reg_fact" was automatically adjusted in response to "stag_thresh." The value of the proportionality constant was then refined with time and experimentation.
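In symbols, the relation amounts to something like the following, where k stands for the empirically refined proportionality constant (the placeholder value below is not the toolbox's actual constant):

stag_thresh = 1.1e-4;       % example stagnation threshold
k = 1;                      % placeholder for the refined constant
reg_fact = k / stag_thresh; % smaller threshold => larger regrouping factor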
----------
Q5: Does regrouping happen after max iters per grouping or are iterations interrupted due to some condition? If the latter, where is this condition found in code?

A5: Regrouping is triggered when the swarm radius becomes smaller than fraction "stag_thresh" of the current grouping's original swarm radius, as defined by formulas 4.1 - 4.4 of thesis [1]. These formulas are implemented within "gbest_core.m" and "lbest_core.m" via the while-loop continuation condition
"max(sqrt(sum((x - g).^2, 2)))/sqrt(sum((range_IS(1, :)).^2)) > stag_thresh",
which, once false, triggers regrouping.

The user also has the option of setting "max_iter_per_grouping" less than "max_iter_over_all_groupings" to prompt swarm regrouping even if a stray particle is still wandering about the search space after a considerable number of iterations. This seems to improve general performance and is akin to a group of individuals making a decision rather than being paralyzed by indecision should one individual still dissent after a considerable amount of time.
----------
Q6: In lbest_core and gbest_core, one stopping condition in the while loop is
                    Fg ~= true_global_minimum
    If however fg <= true_global_minimum then shouldn't the PSO stop as well?

A6: Depending on the status of switch "OnOff_lbest," either "gbest_core_loop" or "lbest_core_loop" is executed *while* the conditions are true rather than *until* they are true; so, instead of "do until" logic, it's "do while" logic.

In other words, once the true global minimum is achieved, the system has no way of detecting further improvements to the approximation of the global minimizer, so the search should cease. In the walkthrough of the toolbox documentation, one RegPSO trial demonstrates this functionality by fully minimizing the Rastrigin function, which causes the search to terminate rather than wasting time attempting further optimization. This also ties in with the first paragraph of the August 5th posting.
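
As a minimal illustration of the "do while" logic (variable names assumed; the toolbox's actual loop contains more conditions and, of course, the swarm updates):

fg = 3; true_global_minimum = 0; k = 0; max_iter = 10;
while (k < max_iter) && (fg ~= true_global_minimum)
    k = k + 1;
    fg = fg - 1;  % stands in for one iteration of the core loop improving fg
end
fprintf('stopped at iteration %d with fg = %g\n', k, fg)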
----------
In response to your questions, I have further improved the definitions of Appendix A. Thank you, and keep them coming.

Cordially,
George

Subject: More questions about the working of the PSO Research toolbox

From: Tricia Rambharose

Date: 13 Oct, 2010 14:01:05

Message: 35 of 68

Good day George,

Thank you once again for your detailed explanations. They are appreciated!

Concerning fg ~= true_global_minimum however, I need further clarification. My thinking is that true_global_minimum is like the PSO goal. Goal means that for a minimizing PSO, the PSO's aim is to find a function value that is at least as good as this true_global_minimum value. Therefore, since the PSO aims to minimize, a function value found that is below the true_global_minimum is as good as achieving the true_global_minimum. Hence, the stopping condition in the while loop of Lbest_core and Gbest_core should perhaps be

While .....
         fg > true_global_minimum
.
.
.
end

At least this is my thinking.

Subject: More questions about the working of the PSO Research toolbox

From: Tricia Rambharose

Date: 18 Oct, 2010 20:58:04

Message: 36 of 68

Good day George,

While running some tests using your PSO toolbox, I observe that the value of 'fg' returned to gbest_core / lbest_core after the respective loop function sometimes remains unchanged for many iterations, even when RegPSO is activated. Can you please explain this? Thanks!

Subject: Alpha Version Feedback

From: George Evers

Date: 27 Oct, 2010 21:52:04

Message: 37 of 68

Tricia,

I have outlined changes according to the files affected. Most are specific to the NNbox-calls-PSORT alpha version, but some are general improvements that have also been applied to the live version.

TOOLBOX-WIDE IMPROVEMENTS
* Since another researcher is experimenting with PSORT-calls-NNbox functionality, I have changed switch "OnOff_NN_training" to "OnOff_Tricias_NN_training" to ensure that the two functionalities do not conflict.

* The fix already applied to the live version concerning switch "OnOff_graphs" has been applied to the NNbox-calls-PSORT alpha version.

* Since automatically saved workspaces are transparently named using the values of the most important variables to help users distinguish the desired workspace from others, long filenames are used. This, combined with the long name of containing folder "PSO Research Toolbox 2010_modified for NN training", sometimes prevented workspaces from being saved, since Windows counts the directory as part of the full path when saving, loading, or moving files. To minimize the risk of this problem occurring, the folder's name has been shortened to "PSORT".

* Switch "OnOff_user_validation_required" is now more explicitly named "OnOff_user_input_validation_required".

* Each occurrence of "logical(1)" and "logical(0)" has been replaced with the MATLAB-preferred "true" and "false", except in the control panel, where switching between 1's and 0's is more efficient.

CONTROL_PANEL.M
Please open the current version of "Control_Panel.m" and press Alt + {o, c, b} to compare with the version you last emailed me, which will automatically highlight all changes made to the file.

* The user can now conveniently set the number of trials in section "PARTICLE SWARM ALGORITHM SETTINGS" of the control panel along with swarm size, acceleration coefficients, inertia weight, and other basic PSO settings. This should be less confusing for users not training ANN's, who would not expect to look for a general setting such as the number of trials in the ANN training section. This also eliminates extraneous variable "trials".

* Similarly, "dim = ..." is now set in section "PARTICLE SWARM ALGORITHM SETTINGS" regardless of whether ANN training is active or not. This gives variable "dim" a fixed location, which prevents confusion for users who otherwise might attempt to set the problem dimensionality within the control panel when training an ANN. Please also note the corresponding removal of "dim = " from "Objectives.m".

INPUT_VALIDATION.M
Please open the current version of "Input_Validation.m" and press Alt + {o, c, b} to compare it with the previous version.

* Based on the statement "The following switches must be Off" in the NN section of the control panel, switch "OnOff_func_evals" is now automatically deactivated if switch "OnOff_Tricias_NN_training" is active. However, switch "OnOff_SuccessfulUnsuccessful" is not deactivated since it might be useful for terminating the optimization process once the cost function has been reduced to value "thresh_for_succ". Should you encounter any problems using switch "OnOff_SuccessfulUnsuccessful" or "thresh_for_succ", please communicate them to me.

* When switch "OnOff_graph_fg_mean" is active and "num_trials > 1", switch "OnOff_Autosave_Workspace_Per_Trial" needs to be active in order for history "fghist_over_all_trials" to be computed within "Load_trial_data_for_stats.m". The input validation phase now automatically accounts for this and displays the appropriate notification.

* Graphing switches "OnOff_graph_ObjFun_f_vs_2D", "OnOff_swarm_trajectory", and "OnOff_phase_plot" are now automatically deactivated for objectives that are not yet compatible (e.g. NN training).

DISPLAY_SETTINGS.M
Please open the current version of "Display_Settings.m" and press Alt + {o, c, b} to compare with the previous version.

* The final lines have been modified to make "DateString_dir" compatible with the ANN training add-in.

* "&& ~OnOff_Autodelete_Trial_Data" has been deleted from both occurrences of "if (OnOff_Autosave_Workspace_Per_Grouping || (OnOff_Autosave_Workspace_Per_Trial && ~OnOff_Autodelete_Trial_Data) || OnOff_Autosave_Workspace_Per_Column || OnOff_Autosave_Workspace_Per_Table)" to prevent an error when switches "OnOff_Autosave_Workspace_Per_Trial" and "OnOff_Autodelete_Trial_Data" are both active.

* "if ~OnOff_NN_training" has been replaced with "if OnOff_user_input_validation_required" since some users who are not training ANN's might wish to skip the explicit check as well.

ETHICS.M & REG_PSO_MAIN.M
I have changed "first_pso" to a logical variable in "Ethics.m" and "RegPSO_main.m" for reasons mentioned in the Sept. 23 post. "if ~first_pso" at the end of "Ethics.m" means, "If it is not the first call to the PSO Research Toolbox from Tricia's ANN training add-in...." The relevant changes can be located quickly by searching for "first_pso" within these files.

REGPSO_MAIN.M
Even when mean function value versus iteration is not to be graphed, the toolbox seeks to create row vector "fg_mean" in the workspace for analysis. This produced an error in conjunction with inactive switch "OnOff_Autosave_Workspace_Per_Trial" since the toolbox attempted to construct matrix "fghist_over_all_trials" from nonexistent workspaces in order to compute "fg_mean". Vector "fg_mean" is now only created if switch "OnOff_Autosave_Workspace_Per_Trial" is active.

REG_METHODS_0AND1.M
Please open the current version of "Reg_Methods_0And1.m" and press Alt + {o, c, b} to compare with the previous version.

Within the PSO Research Toolbox, each trial number maps uniquely to an initial state of the randomizer in order to ensure reproducibility of results. I see that you have seeded the randomizer from the system clock since you want each call to the PSO Research Toolbox from the NN training add-in to produce unique results rather than being tied to the trial number, which is 1 for each call. Clock-based seeding of the randomizer is now gated by "if OnOff_Tricias_NN_training" to avoid conflicting functionality.
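
A minimal sketch of the two seeding strategies (the calls shown are assumptions in the classic rand('state', ...) style; the toolbox's actual mechanism may differ):

OnOff_Tricias_NN_training = false;   % illustrative value
trial_num = 1;                       % illustrative value
if OnOff_Tricias_NN_training
    rand('state', sum(100*clock));   % clock-based: unique results per call
else
    rand('state', trial_num);        % trial-based: reproducible per trial number
end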

LBEST_CORE.M & LBEST_CORE_LOOP.M
"disp('ENTERED LBEST CORE')" and "disp('ENTERED LBEST CORE LOOP')" are now only displayed "if OnOff_Tricias_NN_training" in order to avoid modifying toolbox functionality when NN training is not employed.

GRAPHS.M
The first lines of "Graphs.m" have been modified to prevent an error when function values are graphed versus iteration for only the last of multiple trials.

TITLE_GRAPHS.M
* Each occurrence of typo "['num2str(max_iter_per_grouping), '" has been replaced with "[num2str(max_iter_per_grouping), '".

* "gbest PSO" and "lbest PSO" are now capitalized in graph titles.

Please review these changes using MATLAB's "Compare against" feature, and let me know if you have any remaining questions. I have primarily checked that changes made to accommodate the NN add-in do not present difficulties when NN training is inactive: I will leave ANN-specific testing to you since you will naturally discover any remaining bugs during the course of your studies.

You might want to add a README file explaining how to use the add-in. And remember to account for file "pathdef.m" before publishing since it was still present in the version you last emailed me.

Cordially,
George

Subject: More questions about the working of the PSO Research toolbox

From: George Evers

Date: 27 Oct, 2010 22:11:04

Message: 38 of 68

> My thinking is that true_global_minimum is like the PSO goal.

That's correct.

> ...the PSO's aim is to find a function value that is at least this true_global_minimum value.

The goal is to minimize the function to this value precisely.

> ...a function value found that is below the true_global_minimum is as good as achieving the true_global_minimum.

There's no function value less than variable "true_global_minimum", except possibly for application problems having an unknown or dynamic global minimum. Since you're minimizing error, "true_global_minimum" should be set to zero, and there is no smaller value.

The simplest argument supporting the validity of "while ... fg ~= true_global_minimum" may be that it works as intended. In the walkthrough, the only RegPSO trial to minimize the 30-D Rastrigin benchmark to 0 without error in any of the digits carried internally by MATLAB - i.e. f < 10^(-323) - terminated early as intended; however, your recommendation could be useful for application problems having an unknown global minimum. I'll implement and test this change for a later round of updates.

It seems that you want optimization to cease once the cost function has been minimized to a pre-specified value. If so, "thresh_for_succ" is the setting to use, which is set in "Objectives.m" according to the problem and its dimensionality (i.e. since problem difficulty depends both on the objective and its dimensionality).

To use "thresh_for_succ":
(1) Activate switch "OnOff_SuccessfulUnsuccessful" in the control panel to specify that the number of trials successful and unsuccessful should be counted.
(2) Activate switch "OnOff_Terminate_Upon_Success" in the control panel to specify that trials should terminate upon reaching the desired level of success.
(3) Within "Objectives.m," set "thresh_for_succ" equal to the function value at which optimization should cease (i.e. true global minimum + acceptable error). I've added "thresh_for_succ" to the code for objective "NN" and temporarily set its value to 1.

Subject: More questions about the working of the PSO Research toolbox

From: George Evers

Date: 27 Oct, 2010 22:14:04

Message: 39 of 68

Tricia,

Regrouping is triggered when particles are very close together. For more prompt triggering, either increase the stagnation threshold, "stag_thresh," or decrease the maximum number of iterations per grouping, "max_iter_per_grouping". There is a balance to be maintained between exploitation and exploration, however; and optimizing the overall rate of progress toward the global minimizer across groupings is more important than minimizing the number of iterations for which the function value remains the same within each grouping. A decent amount of exploitation prior to regrouping can lead to more reliable inferences of the uncertainty per dimension by which to construct the regrouping space in an informed manner, which can improve the overall rate of convergence.

"Tricia Rambharose" <rtrambharose@gmail.com> wrote in message <i9iccs$33t$1@fred.mathworks.com>...
> Good day George,
>
> While running some tests using your PSO toolbox and I observe that in the value for 'fg' returned to gbest_core / lbest_core after the respective loop function, sometimes is unchanged for many iterations, even when RegPSO is activated. Can you please explain this. Thanks!

Subject: ANN Training with PSO

From: Manish

Date: 29 Oct, 2010 13:37:05

Message: 40 of 68

Hi,

I am working on the PSO toolbox. I would like to train a neural network using the PSO algorithm, with training and validation on a dataset. I am using MATLAB 7.10.0.
It is not clear how PSO is implemented for ANN training.

Looking forward to your help.

Manish

"George Evers" <george@georgeevers.org> wrote in message <i7e51e$ork$1@fred.mathworks.com>...
> Tricia,
>
> The bug only affected the alpha version (i.e. not the publicly posted version) since the final lines of "Display_Settings.m" were modified to bypass the creation of "Date_String_dir" in the case of ANN training. I infer that the intention was to bypass the need for the user to validate the correctness of displayed settings prior to execution; hence, I have created new switch "OnOff_user_validation_required" in subsection "Miscellaneous Features" of the control panel's Section 2 and modified the first and last lines of "Display_Settings.m" accordingly. In the modified code, which I'll email you shortly, the switch is already de-activated to eliminate the need for the user to validate displayed settings. Let me know whether or not the intended functionality is achieved.
>
> Best Regards,
> George

Subject: More questions about the working of the PSO Research toolbox

From: Tricia Rambharose

Date: 29 Oct, 2010 14:06:03

Message: 41 of 68

Hi George,

Thank you for the emails. I will check and get back to you. I am also doing further research on PSO variations. Does your PSO toolbox cater for the FIPS (Fully Informed Particle Swarm) version of PSO?

Subject: ANN Training with PSO

From: Tricia Rambharose

Date: 2 Nov, 2010 22:14:04

Message: 42 of 68

Hi Manish,

There are several papers in the literature about using PSO for NN training. NN training by PSO can be done in several ways. I am currently working on extending the PSO toolbox by George Evers for NN training. Testing of this extension should be done soon, and I aim to make it freely available to the public.



"Manish " <vipmkgoyal@gmail.com> wrote in message <iaeim1$ap2$1@fred.mathworks.com>...
> Hi ,
>
> I am working on PSO toolbox. I would like to train neural network with using PSO algorithm for training and validation of a dataset. I am using MATLAB 7.10.0
> It is not clear how they implement PSO to ANN training.
>
> looking forward for help?
>
> Manish
>
> "George Evers" <george@georgeevers.org> wrote in message <i7e51e$ork$1@fred.mathworks.com>...
> > Tricia,
> >
> > The bug only affected the alpha version (i.e. not the publicly posted version) since the final lines of "Display_Settings.m" were modified to bypass the creation of "Date_String_dir" in the case of ANN training. I infer that the intention was to bypass the need for the user to validate the correctness of displayed settings prior to execution; hence, I have created new switch "OnOff_user_validation_required" in subsection "Miscellaneous Features" of the control panel's Section 2 and modified the first and last lines of "Display_Settings.m" accordingly. In the modified code, which I'll email you shortly, the switch is already de-activated to eliminate the need for the user to validate displayed settings. Let me know whether or not the intended functionality is achieved.
> >
> > Best Regards,
> > George

Subject: More questions about the working of the PSO Research toolbox

From: Tricia Rambharose

Date: 2 Nov, 2010 22:21:04

Message: 43 of 68

Hi George,

I also noticed that no constriction factor is used in your PSO toolbox. Or did I miss it? I read that this factor is used in Clerc's velocity update method for PSO. Is this considered within the scope of work for your PSO toolbox?

Thanks again for the responses. I will respond to your questions and comments soon.

Subject: ANN Training with PSO

From: Tricia Rambharose

Date: 11 Nov, 2010 16:46:04

Message: 44 of 68

Good day,

For those of you who asked about ANN training using PSO, I have completed the creation and testing of an ANN add-in for the PSO Research Toolbox, which was created by Mr. George Evers. The next step is to make this available to the public on my website

@ tricia-rambharose.com

and hopefully on Mathworks. I will give updates when it is available for your use and feedback.

Subject: ANN Training with PSO

From: George Evers

Date: 12 Nov, 2010 21:29:05

Message: 45 of 68

"Manish " <vipmkgoyal@gmail.com> wrote in message <iaeim1$ap2$1@fred.mathworks.com>...
> Hi ,
>
> I am working on PSO toolbox. I would like to train neural network with using PSO algorithm for training and validation of a dataset. I am using MATLAB 7.10.0
> It is not clear how they implement PSO to ANN training.
>
> looking forward for help?
>
> Manish

Manish,

The following searches will jump directly to posts about ANN training with PSO:
"21 Jun"
"16 Mar"
"3 Apr"
"19 May, 2010 22:14:04"

Tricia has developed an NN training add-in for the Particle Swarm Optimization Research Toolbox [1]. When the beta version is released, portions of the following posts will also become relevant, some of which pertain to RegPSO [2]:
"26 Jul"
"29 Jul, 2010 20:14:20"
"5 Aug"
"13 Oct"
"27 Oct, 2010 22:11:04"
"27 Oct, 2010 22:14:04"

If you are new to the particle swarm algorithm, the overview at [3] might also be helpful. Should you have any specific questions about either the add-in or the training process itself, Tricia and I will be happy to help as we can.

Best Regards,
George

[1] http://www.mathworks.com/matlabcentral/fileexchange/28291-particle-swarm-optimization-research-toolbox
[2] http://www.georgeevers.org/RegPSO.pdf
[3] http://www.georgeevers.org/particle_swarm_optimization.htm

Subject: More questions about the working of the PSO Research toolbox

From: George Evers

Date: 12 Nov, 2010 21:35:04

Message: 46 of 68

"Tricia Rambharose" <rtrambharose@gmail.com> wrote in message <iaekcb$4gs$1@fred.mathworks.com>...
> Hi George,
>
> Thank you for the emails. I will check and get back to you. I am also doing further research on PSO variations. Does your PSO toolbox cater for the FIPS (Fully Informed Particle Swarm) version of PSO?

Tricia,

After experimenting with various FIPS topologies, Mendes, Kennedy, and Neves wrote, "The very worst FIPS conditions in the study were the UAll and All topologies, where the particle is truly fully informed, gathering information from every single member of the population. The best were the Ring and Square versions, where the particle has three and five neighbors (counting itself), respectively, plus their U-versions, which subtract one" [1]. Therefore, using a literally fully informed model would probably not be the best approach unless recent research suggests doing so.

The URing topology was the best performer in terms of solution quality - with the only downside being its more cautious rate of convergence [1]; hence, URing is the FIPS model of choice unless your application is considerably time-sensitive. But since basic Lbest PSO uses the same Ring topology, and since the Type 1" constriction model used for the FIPS experiments is mathematically equivalent to standard PSO with inertia weight [2], the URing FIPS should be regarded as very similar to "standard" Lbest PSO; the experiment could even be construed as a validation of the basic Lbest formulation.
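
For intuition, here is a minimal sketch of the Ring (Lbest) neighborhood indexing underlying the URing discussion; the wraparound arithmetic is standard, and the variable names are illustrative:

np = 5;                                                   % illustrative swarm size
for i = 1:np
    neighbors = [mod(i - 2, np) + 1, i, mod(i, np) + 1];  % left neighbor, self, right neighbor
    fprintf('particle %d informed by %s\n', i, mat2str(neighbors))
end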

I like the experiments with which Kennedy has been involved regarding different PSO schemata [1, 3, 4, 5, 6]; they are akin to a thorough exploration of the search space before settling on any one PSO type. It is quite interesting that the simple ring topology performed so well.

I have not yet added the fully informed model to the toolbox.

[1] R. Mendes, J. Kennedy, and J. Neves, "The Fully Informed Particle Swarm: Simpler, Maybe Better," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 204-210, June 2004.
[2] http://www.georgeevers.org/thesis.pdf (Ctrl + F: "Type 1")
[3] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95), Nagoya, Japan, 1995, pp. 39-43.
[4] J. Kennedy, "The particle swarm: social adaptation of knowledge," in Proceedings of the IEEE International Conference on Evolutionary Computation, Indianapolis, IN, 1997, pp. 303-308.
[5] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," in Proceedings of the 1999 Congress on Evolutionary Computation, Washington, DC, 1999.
[6] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, pp. 58-73, Feb. 2002.

Subject: More questions about the working of the PSO Research toolbox

From: Tricia Rambharose

Date: 13 Nov, 2010 20:11:03

Message: 47 of 68

Hi George,

Thanks for your reply concerning FIPS. For my research I am currently testing the performance of FIPS and to facilitate this I am working on adding a FIPS option to your PSO toolbox. As usual your support is very much appreciated.

Subject: More questions about the working of the PSO Research toolbox

From: George Evers

Date: 15 Nov, 2010 06:35:04

Message: 48 of 68

"Tricia Rambharose" <rtrambharose@gmail.com> wrote in message <iaq2sg$9l6$1@fred.mathworks.com>...
> Hi George,
>
> I also noticed no constriction factor used in your PSO toolbox. Or did I miss it? I read that this factor is used in the Clerc velocity update method in PSO. Is this considered in the scope of work for your PSO toolbox?

Tricia,

I have not written the constriction models into the Particle Swarm Optimization Research Toolbox; however, parameters for the popular Type 1" model can be converted to Clerc's equivalents for use in "standard" PSO via the following steps.
(1) Select phi_max1 and phi_max2 in the constriction model (i.e. c1 and c2 in the “standard” model).
(2) Calculate chi according to equation (5.3) of [1], which is restated as (2.17) in thesis [2].
(3) Set the inertia weight equal to the value calculated for chi in step 2.
(4) From the values selected in step 1, calculate Clerc's equivalents according to equation (2.16) of thesis [2].
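
As a minimal worked example of steps (1)-(4), with parameter values assumed for illustration:

phi1 = 2.05; phi2 = 2.05;                      % step (1)
phi = phi1 + phi2;                             % phi must exceed 4
chi = 2/abs(2 - phi - sqrt(phi^2 - 4*phi));    % step (2): chi is approximately 0.7298
w = chi;                                       % step (3): inertia weight
c1 = chi*phi1; c2 = chi*phi2;                  % step (4): each approximately 1.4962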

Should you experiment with constriction models, you might look into Type 1. As can be seen from Table V of [1], it slightly outperformed Type 1", though it seems to have received less attention. However, "standard" PSO with velocity clamping (i.e. "E&S" in Table V for "Eberhart & Shi") was the best performer overall.

I think the interesting question Clerc raised was whether or not well-balanced parameters could alleviate the tendency of particles to take unnecessarily large steps – since if unreasonably large steps could be prevented via proper parameter selection, velocity clamping would become unnecessary. So from an inside-out perspective, attempting to eliminate the parameter made sense. But from an outside-in perspective, velocity clamping ensures that step sizes are never unreasonably large, and it might as well be kept for the time being based on the results of Table V [1].

Looking at the big picture, empirically searching for well-performing parameters might eventually tell whether velocity clamping: (i) becomes unnecessary once the "drunkard's walk" is eliminated, or (ii) is a critical part of the algorithm - more analogous to limiting in magnitude each individual's emotive responses. Table III-3 of the thesis makes some progress in this regard with parameters that appear to work well in general despite the fact that parameter selection is always somewhat problem-dependent [2]. Pedersen independently searched for quality parameters during his thesis studies as well [3].

[1] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in multidimensional complex space", IEEE Transactions on Evolutionary Computation, vol. 6, pp. 58-73, Feb. 2002.
http://clerc.maurice.free.fr/pso/ (see especially part 2)
- Ctrl + F, "5.3" to find Equation 5.3
- Ctrl + F, "empirical results" to find Table V

[2] G. Evers, “An Automatic Regrouping Mechanism to Deal with Stagnation in Particle Swarm Optimization,” M.S. thesis, The University of Texas – Pan American, Edinburg, TX, 2009
http://www.georgeevers.org/publications.htm
- Ctrl + F, "Type 1" to find the discussion of the Clerc’s equivalents
- Ctrl + F, “Table III-3” to find the results of an empirical search for quality parameters
- Ctrl + F, “Table V-2” to find the same alongside RegPSO results

[3] Pedersen, M.E.H. (2010). Tuning & Simplifying Heuristical Optimization (PhD thesis). University of Southampton, School of Engineering Sciences, Computational Engineering and Design Group.
http://www.hvass-labs.org/people/magnus/thesis.php
- Ctrl + F, “Table 5.10” to find the results of an empirical search for quality parameters

Subject: Finalizing Alpha Version

From: George Evers

Date: 19 Nov, 2010 05:41:03

Message: 49 of 68

Tricia,

I made a few more improvements. Once we both approve of the most recent round of changes, I will upload the compatible toolbox.

TRAINPSO.m
Variable "first_pso" is now set to "true" instead of "1" for consistency with its change to a logical variable within the toolbox.

CONTROL_PANEL.M
Rather than modifying each section number throughout the documentation and in all relevant toolbox comments, the section number has been removed from the switch check at "Neural Network training using PSO settings" in order to begin the count with (1) at "BASIC SWITCHES & PSO ALGORITHM SELECTION" as before.

OBJECTIVES.M
To ensure that toolbox variable “thresh_for_succ” is compatible with the add-in, the relevant code has been simplified. For NN training, the problem dimensionality is always equal to "length(NN_wb)"; hence, the dimensionality check has been removed for objective "NN". "thresh_for_succ" is now set to zero to ensure that it will not inadvertently affect add-in functionality (i.e. it will not cause the search to terminate unless either there is zero error, or the value of "thresh_for_succ" is intentionally set equal to the user's acceptable error). Comments have also been added clarifying how to use the variable. If there are any displays to be suppressed when NN training is active, we can insert an "if OnOff_Tricias_NN_training" check before them.

ETHICS.M
This file has been removed from the NNbox-calls-PSORT demo version, which will instead be governed by the BSD license as per the community guidelines. It now pertains only to the professional version.

AUTOSAVE_WORKSPACE_PER_TABLE.M
This file has been removed from the demo since only the professional version organizes the statistics of multiple sets of trials into tables while automatically incrementing parameters between sets of trials and new tables. Each "table" in the demo consists of only one column of statistics, and "Autosave_Workspace_Per_Column.m" suffices to save the workspace once all trials have completed.

README.DOCX
* I recommend saving the README in .doc format instead of .docx to ensure compatibility with older word processors.

* You might include a list of steps by which to implement the add-in. For example:
(1) Atop "Control_Panel.m", activate switch "OnOff_Tricias_NN_training" by setting it to "logical(1)" (i.e. since this is not the default setting).
[Potential Topic for 2] Q: Is it necessary to change the current directory atop MATLAB's command window in order for the add-in to work from the command line?

Thank you for working with me to ensure full compatibility of the NN training add-in with the Particle Swarm Optimization Research Toolbox for the convenience of the user community.

Subject: Finalizing Alpha Version

From: Tricia Rambharose

Date: 22 Nov, 2010 04:34:04

Message: 50 of 68

George,

Thanks for the suggestions. You can go ahead and upload the PSORT modified for NN training functionality. As agreed, I will upload the NN add-in afterward.

Keep me informed.

Subject: Finalizing Alpha Version

From: George Evers

Date: 23 Nov, 2010 06:48:03

Message: 51 of 68

Tricia,

The compatible version of the toolbox has been submitted to the upload queue, so the add-in can be uploaded any time.

It might be helpful to clarify in the README that the add-in folder should be unzipped alongside PSO Research Toolbox folder "PSORTyyyymmdd".

A step might also be added suggesting the deactivation of switch "OnOff_user_input_validation_required" in section "(1) BASIC SWITCHES & PSO ALGORITHM SELECTION" of the control panel under heading "MISCELLANEOUS FEATURES".

Please paste a link to the add-in once it's approved so visitors to the thread will be able to find it quickly.

Best Regards,
George

Subject: Finalizing Alpha Version

From: Tricia Rambharose

Date: 23 Nov, 2010 14:55:04

Message: 52 of 68

Good day George,

Thanks for the update. I will send you the README file for the add-in. Can you please check this file for any other information that may be necessary for users? Then I will upload the add-in.


Regards,
Tricia

Subject: Finalizing Alpha Version

From: George Evers

Date: 29 Nov, 2010 22:24:05

Message: 53 of 68

Tricia's add-in, which allows MATLAB's NN toolbox to call the PSO Research Toolbox for training purposes, is now available at http://www.mathworks.com/matlabcentral/fileexchange/29565-neural-network-add-in-for-psort.

Subject: image pattern recognition

From: Mahmoud ABURUB

Date: 30 Nov, 2010 10:36:05

Message: 54 of 68

Hello,
I am a computer engineering student, and I don't know much about MATLAB. I am having problems with the image processing functions: for example, how to reduce the size of an image, and what the best approach is for performing edge detection and then extracting an object from the image.

Subject: image pattern recognition

From: ImageAnalyst

Date: 30 Nov, 2010 10:39:07

Message: 55 of 68

On Nov 30, 5:36 am, "Mahmoud ABURUB" <mkmahmoud2...@gmail.com> wrote:
> hello
> i m computer engineering student, i dont know too much
> about matlab, i face problem with image processin functions
> like how minimize and image and what it the best functionality can be applied in order
> to make edge detection method and then extract an object from that message

----------------------------
You should start your own thread, rather than replying to the "Alpha
Version Feedback" thread and changing the subject line. In the
meantime, you can check out my demo:
http://www.mathworks.com/matlabcentral/fileexchange/25157

Subject: image pattern recognition

From: George Evers

Date: 11 Dec, 2010 20:52:05

Message: 56 of 68

"Mahmoud ABURUB" <mkmahmoud2100@gmail.com> wrote in message <id2k2l$4f1$1@fred.mathworks.com>...
> hello
> i m computer engineering student, i dont know too much
> about matlab, i face problem with image processin functions
> like how minimize and image and what it the best functionality can be applied in order
> to make edge detection method and then extract an object from that message

Mahmoud,

At a minimum, most universities subscribe to IEEE and Science Direct: to find out how to apply PSO, you might search for "PSO edge detection" and "ANN edge detection". I have only read one paper on the topic, but the results might help you shortlist alternatives.

Once you have relatively efficient pseudo code, I would be happy to walk you through the steps to add it to the Particle Swarm Optimization Research Toolbox in a relevantly named thread as time permits.

Concerning related MATLAB functions, ImageAnalyst's BlobsDemo looks like a great introduction using the image processing toolbox.

Subject: image pattern recognition

From: Geant Bepi

Date: 17 Dec, 2010 15:54:20

Message: 57 of 68

Hey guys!!

sorry for posting something irrelevant to the topic being discussed here.

but I need some urgent help.. (only a little help)

I invite the brains here to look at my simple problem mentioned below and help me with some tips;
link: http://www.mathworks.com/matlabcentral/newsreader/view_thread/297252#804997

appreciate any knowledge you'd like to share on the matter.

thanks heaps!

Subject: Problem in PSORT

From: Tricia Rambharose

Date: 4 Feb, 2011 19:53:03

Message: 58 of 68

Good day George,

I still have one persistent problem with the working of your PSORT. I have not fully tested the latest version you have posted for download yet; however, can you please check that wherever the PSORT files contain "if OnOff_swarm_trajectory", "if OnOff_graphs" is checked first? I usually set OnOff_graphs to logical(0), and therefore I keep getting an error in Reg_Methods_0And1 which states "??? Undefined function or variable 'OnOff_swarm_trajectory'."

Subject: Problem in PSORT

From: George Evers

Date: 5 Feb, 2011 05:05:36

Message: 59 of 68

>... can you please check that where ever in the PSORT files you have "if OnOff_swarm_trajectory" that first it is checked "if OnOff_graphs".

Tricia,

According to the revision history, this bug was fixed in version 20101013 [1]. I just examined the most recent upload to be sure, and switch OnOff_graphs is definitely checked before each reference to switch OnOff_Swarm_Trajectory. If the current version presents any bugs, please paste the error notification, and I will get to the source of any problem.

The documentation has also been revised and is now available in .pdf format [2]. I'm sure it can still use some improvements though, and I hope you will let me know of any weaknesses that become evident.

Cordially,
George

[1] http://www.georgeevers.org/toolbox_updates.rtf
[2] http://www.georgeevers.org/pso_research_toolbox_documentation.pdf

Subject: Problem in PSORT

From: Tricia Rambharose

Date: 5 Feb, 2011 13:57:03

Message: 61 of 68

Hi George,

Thanks for the reply. I will download the PSORT version currently available on Mathworks. Additionally, I want to discuss with you some ideas I have for the PSORT and applying it to my exact project. I prefer to discuss by Skype as email correspondence is slower and more difficult to keep track of. Are you able to Skype with me within the next 3 days?

"George Evers" wrote in message <iiilr1$74d$1@fred.mathworks.com>...
> >... can you please check that where ever in the PSORT files you have "if OnOff_swarm_trajectory" that first it is checked "if OnOff_graphs".
>
> Tricia,
>
> According to the revision history, this bug was fixed in version 20101013 [1]. I just examined the most recent upload to be sure, and switch OnOff_graphs is definitely checked before each reference to switch OnOff_Swarm_Trajectory. If the current version presents any bugs, please paste the error notification, and I will get to the source of any problem.
>
> The documentation has also been revised and is now available in .pdf format [2]. I'm sure it can still use some improvements though, and I hope you will let me know of any weaknesses that become evident.
>
> Cordially,
> George
>
> [1] http://www.georgeevers.org/toolbox_updates.rtf
> [2] http://www.georgeevers.org/pso_research_toolbox_documentation.pdf

Subject: Problem in PSORT

From: George Evers

Date: 10 Feb, 2011 03:41:03

Message: 62 of 68

"Tricia Rambharose" wrote in message <iijkvf$fb3$1@fred.mathworks.com>...
> Hi George,
>
> I want to discuss with you some ideas I have for the PSORT and applying it to my exact project. I prefer to discuss by Skype as email correspondence is slower and more difficult to keep track of. Are you able to Skype with me within the next 3 days?

Tricia,

It was nice Skyping with you earlier this week. I'll be online for at least a little while most evenings after your conference. Message me when you see me.

Regards,
George

Subject: ANN Training with PSO

From: Saby

Date: 2 Apr, 2011 03:29:04

Message: 63 of 68

Hi George,

I guess this thread is already solved, but I have some doubts about your little code down there. Please excuse my dumbness.

You said the following thing....
Fifthly, interface the PSO toolbox and ANN code by creating a new objective function for the toolbox. If you use the "PSO Research Toolbox," just create a new file called "benchmark_BurakANN.m" using the pseudo code below to interface the two codes:

function [f] = benchmark_BurakANN(x, np)
global dim
f = zeros(np, 1);
for Internal_j = 1:np
     f(Internal_j, 1) = the result of passing x(Internal_j, :) into your ANN code
end

In this code can I know what is x, what is np, and what is internal_j?
Also I don't understand what you mean by ''the result of passing x(Internal_j, :) into your ANN code"? In short, I want to know what's going on in this code.

PS - I've downloaded your 'PSO Research Toolbox', but haven't seen the codes in it yet. Maybe that's the reason I don't understand this code?

Also, I will be grateful if you/anyone know the solution to following...
http://www.mathworks.com/matlabcentral/newsreader/view_thread/305532

"George Evers" wrote in message <hp68bb$j0t$1@fred.mathworks.com>...
> "Mohammed Ibrahim" <hammudy20@yahoo.com> wrote in message <hp5au2$3iu$1@fred.mathworks.com>...
> > "Burak " <newsreader@mathworks.com> wrote in message <f9sagf$dfn$1@fred.mathworks.com>...
> > > Hi there,
> > >
> > > I am a graduate student working on particle swarm
> > > optimization. I wanna to learn more about ANN training with
> > > PSO. Although there is a good PSO toolbox release, it seems
> > > complicated as I observe the source code for neural network
> > > training. There are some articles about this issue, but it
> > > is not clear how they implement PSO to ANN training
> > > Thanks for your answers and help
> > >
> > > Burak
>
> Burak, to train an ANN using PSO, firstly, identify a well-performing ANN for your application. Find characteristics that seem to work well for problems similar to yours: e.g. novel concepts, number of hidden layers, number of inputs, and types of inputs. Keep a detailed bibliography and save all relevant papers. Though you will train with PSO, you should keep notes of other good training algorithms with which to compare in order to fully demonstrate the validity of PSO as a training mechanism.
>
> Secondly, find a PSO type suitable to your application. For example, RegPSO [1: Chapters 4-6], [2] does a great job of escaping from premature convergence in order to find increasingly better solutions when there is time to regroup the swarm, which would seem to be the case for training ANN's in general.
>
> Thirdly, locate a good PSO toolbox - preferably one that is already capable of implementing the strain of PSO you would like to use. Ideally, the toolbox would contain standard gbest and lbest PSO's as well as the more evolved PSO type found in step two. If the variation of PSO you would like to use is not available in a suitable toolbox, locate a powerful toolbox, and contribute the code. The PSO toolbox doesn't need to have code for training ANN's since you can locate solid code for implementing ANN's and simply interface the best of both worlds.
>
> Fourthly, locate a good ANN code to interface with the toolbox - preferably written in the same language. As long as you can implement the ANN with code alone (e.g. as with MATLAB's neural net toolbox) rather than necessarily depending on a GUI, the two can be interfaced.
>
> Fifthly, interface the PSO toolbox and ANN code by creating a new objective function for the toolbox. If you use the "PSO Research Toolbox," just create a new file called "benchmark_BurakANN.m" using the pseudo code below to interface the two codes:
> function [f] = benchmark_BurakANN(x, np)
> global dim
> f = zeros(np, 1);
> for Internal_j = 1:np
> f(Internal_j, 1) = the result of passing x(Internal_j, :) into your ANN code
> end
>
> What makes sense to me is that each function value in column vector "f" would reflect the error (e.g. the difference or biased difference) between the ANN's prediction and the actually desired function value since it is the error that you want to minimize. To be more in line with real-world applications, you could translate each error into financial cost and minimize that value.
>
> FYI, the problem dimensionality will be equal to the number of ANN parameters that you wish to optimize so that each dimension represents one decision variable of the network to be optimized. For example, will you keep the number of hidden layers constant or include this as one dimension to be optimized? Read up on the most recent ANN literature: I could be wrong, but it is my impression that while more complex ANN's (e.g. those with more hidden layers) might be capable of solving more complicated problems, they also tend to memorize the data more quickly, which is a big problem since the goal is not memorization but prediction. I personally would leave the number of hidden layers constant at whatever seems to have worked best in the literature and possibly experiment with changing it at a later time.
>
> Happy Researching!
>
> Sincerely,
> George I. Evers
>
> [1] http://www.georgeevers.org/thesis.pdf
> [2] http://www.georgeevers.org/Evers_BenGhalia_IEEEConfSMC09.pdf

Subject: ANN Training with PSO

From: George Evers

Date: 15 May, 2011 13:19:04

Message: 64 of 68

Saby, thank you for your questions. I apologize for not responding sooner: the usual email notification somehow never arrived from the watch list.

Q: "In this code can I know what is x, what is np, and what is internal_j?"
A: I have added "Internal_j" to the list of variables defined in Appendix A of the documentation, the most recent version of which can always be downloaded from http://www.georgeevers.org/pso_research_toolbox_documentation.pdf : "x" and "np" are defined there as well. Just left-click on a variable's name in the Table of Contents to jump straight to its definition.

Q: Also I don't understand what you mean by ''the result of passing x(Internal_j, :) into your ANN code"?
> function [f] = benchmark_BurakANN(x, np)
> global dim
> f = zeros(np, 1);
> for Internal_j = 1:np
> f(Internal_j, 1) = the result of passing x(Internal_j, :) into your ANN code
> end
>
A: Two approaches were discussed for the ANN-training add-in, both of which involve a wrapper. In this case, the PSO Research Toolbox (PSORT) would call MATLAB's NN toolbox from an objective function to determine the efficacy of each position encountered by the swarm; the error would then be returned as the function value to be minimized, and the PSORT would execute as usual. "Internal_j" is just a counter incrementing from 1 by 1 until reaching the number of particles. The documentation explains why it is capitalized and why it is prefaced with "Internal_".

Q: In short, I want to know what's going on in this code.
(a) > function [f] = benchmark_BurakANN(x, np)
This would define the wrapper as a function that accepts the position matrix and number of particles as inputs and returns the np x 1 vector of function values.
(b) > global dim
The control panel declares the problem dimensionality global so that it can be accessed by any function that requires it. This line would load the global variable into the local workspace of the function.
(c) > f = zeros(np, 1);
This would pre-allocate the desired size for the column vector of function values and initialize each entry to zero.
(d) > for Internal_j = 1:np
> f(Internal_j, 1) = the result of passing x(Internal_j, :) into your ANN code
> end
Each particle's position vector (i.e. each row of position matrix "x") is a candidate solution that proposes a value for each ANN characteristic to be optimized. This pseudo code would test the ANN at each position visited, and the function value returned would be an error value to be minimized. Though specific variable names are used, this is only pseudo code: you'd have to replace it with actual code to implement your particular ANN and compute the error value appropriate to your application.
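
To make the pseudo code concrete, here is a minimal, hypothetical sketch of such an objective function. It assumes a pre-built feed-forward network "net" and training data "inputs"/"targets" are shared via global variables; apart from "x", "np", and "Internal_j", none of these names come from the toolbox itself, and your own ANN code and error measure would replace the body of the loop:

function [f] = benchmark_BurakANN(x, np)
global net inputs targets
f = zeros(np, 1);                                 % pre-allocate the cost vector
for Internal_j = 1:np
    candidate = setwb(net, x(Internal_j, :)');    % unpack weights & biases from the particle
    outputs = sim(candidate, inputs);             % evaluate the network on the training inputs
    err = targets - outputs;
    f(Internal_j, 1) = mean(err(:).^2);           % mean squared error to be minimized
end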

Q: PS - I've downloaded your 'PSO Research Toolbox', but haven't seen the codes in it yet. Maybe that's the reason I don't understand this code?
A: This was just one approach discussed during brainstorming, and another direction was taken. Tricia made a good point that having MATLAB's ANN toolbox call the PSORT instead of vice versa would allow people to harness the full functionality of the ANN toolbox's GUI rather than requiring users to hard-code everything ANN. She proceeded with that approach while I began writing the other approach, which I have not yet had need to finish.

Q: Also, I will be grateful if you/anyone know the solution to following...
 http://www.mathworks.com/matlabcentral/newsreader/view_thread/305532
A: Sorry, I haven't used the fuzzy logic toolbox.

Subject: ANN Training with PSO

From: Saby

Date: 15 May, 2011 18:17:04

Message: 65 of 68

Hi George,

Thank you very much for all of your detailed answers. You really are a very helpful person, and this is all very useful information.

As I could not understand your PSORT until my queries were satisfied by your answers, I wrote my own MATLAB code of PSO for training ANN - in the mean time. My code is working perfectly fine as per my requirements. I am not a pro-programmer, but in case if you would like to see my code, just let me know. Maybe you would not need to see it, as Tricia has made enough improvements in your toolbox to make it compatible for ANN training. I think it's just my fault that I could not understand your toolbox.

In any case, thank you once again for your detailed help!


Thanks,
Saby

"George Evers" wrote in message <iqojs8$295$1@newscl01ah.mathworks.com>...
> Saby, thank you for your questions. I apologize for not responding sooner: the usual email notification somehow never arrived from the watch list.
>
> Q: "In this code can I know what is x, what is np, and what is internal_j?"
> A: I have added "Internal_j" to the list of variables defined in Appendix A of the documentation, the most recent version of which can always be downloaded from http://www.georgeevers.org/pso_research_toolbox_documentation.pdf : "x" and "np" are defined there as well. Just left-click on a variable's name in the Table of Contents to jump straight to its definition.
>
> Q: Also I don't understand what you mean by ''the result of passing x(Internal_j, :) into your ANN code"?
> > function [f] = benchmark_BurakANN(x, np)
> > global dim
> > f = zeros(np, 1);
> > for Internal_j = 1:np
> > f(Internal_j, 1) = the result of passing x(Internal_j, :) into your ANN code
> > end
> >
> A: Two approaches were discussed for the ANN-training add-in, both of which involve a wrapper. In this case, the PSO Research Toolbox (POSRT) would call MATLAB's NN toolbox from an objective function to determine the efficacy of each position encountered by the swarm; the error would then be returned as the function value to be minimized, and the PSORT would execute as usual. "Internal_j" is just a counter incrementing from 1 by 1 until reaching the number of particles. The documentation explains why it is capitalized and why it is prefaced with "Internal_".
>
> Q: In short, I want to know what's going on in this code.
> (a) > function [f] = benchmark_BurakANN(x, np)
> This would define the wrapper as a function that accepts the position matrix and number of particles as inputs and returns the np x 1 vector of function values.
> (b) > global dim
> The control panel declares the problem dimensionality global so that it can be accessed by any function that requires it. This line would load the the global variable into the local workspace of the function.
> (c) > f = zeros(np, 1);
> This would pre-allocate the desired size for the column vector of function values and initialize each value to zero before summing.
> (d) > for Internal_j = 1:np
> > f(Internal_j, 1) = the result of passing x(Internal_j, :) into your ANN code
> > end
> Each particle's position vector (i.e. each row of position matrix "x") is a candidate solution that proposes a value for each ANN characteristic to be optimized. This pseudo code would test the ANN at each position visited, and the function value returned would be an error value to be minimized. Though specific variable names are used, this is only pseudo code: you'd have to replace it with actual code to implement your particular ANN and compute the error value appropriate to your application.
>
> Q: PS - I've downloaded your 'PSO Research Toolbox', but haven't seen the codes in it yet. Maybe that's the reason I don't understand this code?
> A: This was just one approach discussed during brainstorming, and another direction was taken. Tricia made a good point that having MATLAB's ANN toolbox call the PSORT instead of vice versa would allow people to harness the full functionality of the ANN toolbox's GUI rather than requiring users to hard-code everything ANN. She proceeded with that approach while I began writing the other approach, which I have not yet had need to finish.
>
> Q: Also, I will be grateful if you/anyone know the solution to following...
> http://www.mathworks.com/matlabcentral/newsreader/view_thread/305532
> A: Sorry, I haven't used the fuzzy logic toolbox.

Subject: ANN Training with PSO

From: George Evers

Date: 24 May, 2011 20:45:05

Message: 66 of 68

"Saby" wrote in message <iqp5b0$kvf$1@newscl01ah.mathworks.com>...
> ...in case if you would like to see my code, just let me know.

Saby, I'd like to see your approach. To keep the thread organized, please delete any irrelevant portions of the post to which you reply.

Best Regards,
George
http://www.georgeevers.org

Subject: ANN Training with PSO

From: omegayen

Date: 17 Jun, 2011 16:26:05

Message: 67 of 68

"George Evers" wrote in message <irh5ch$fqh$1@newscl01ah.mathworks.com>...
> "Saby" wrote in message <iqp5b0$kvf$1@newscl01ah.mathworks.com>...
> > ...in case if you would like to see my code, just let me know.
>
> Saby, I'd like to see your approach. To keep the thread organized, please delete any irrelevant portions of the post to which you reply.
>
> Best Regards,
> George
> http://www.georgeevers.org

Saby, I would be interested in seeing your approach as well. Is there any way to contact you? Thanks.

Subject: ANN Training with PSO

From: Muhammad Qamar Raza

Date: 3 Sep, 2012 12:32:15

Message: 68 of 68

I am also using the same toolbox and your add-in, but I am facing many errors.
When I run the trainpso file directly, it gives
 trainpso
Error using trainpso (line 25)
Not enough input arguments.
(in code)
 function [net, tr] = trainpso(net, tr, trainV, valV, testV, varargin)

And when I use this toolbox to train the network on my own model, it works with trainlm but does not work with trainpso, giving the following error.

SWITCH expression must be a scalar or string constant.

Error in network/subsref (line 140)
        switch (subs)

Error in trainpso (line 65)
max_perf_inc = net.trainParam.max_perf_inc;

Error in LoadScriptNN (line 84)
    net = trainpso(net, trainX', trainY');


Please reply ASAP.
I would be very grateful for your help.
I am waiting for your reply.

Thank you
