
Thread Subject:
simulation loop vectorization and slowdown

Subject: simulation loop vectorization and slowdown

From: Daphne

Date: 21 Jul, 2010 20:29:08

Message: 1 of 7


I am running a small simulation program that I wrote, changing parameters for each run, doing some analysis, and saving the results into an Excel file. Since there are many parameters, I end up with 7 loops nested within each other...
My program is basically an m-file with several inputs, which I choose in the loops...

Here's an example of what it may look like.
for rate = [ 5, 15, 30, 60, 90, 120 ]
    for a = 1e-9*[ 10, 25, 50, 100, 200, 300, 400, 500 ]
        for divide_nr_by = [ 1000, 1550, 2200, 2900, 3650, 4550 ]
            for window = [ 10, 15 ]
                for perm = 1:20
                    % run some analysis
                end
            end
        end
    end
end

Is there any way to vectorize this, even partially?
I would be happy to get rid of some of the loops, but I still need every combination of parameters to run.
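
One way to at least flatten the nesting (not a true vectorization) would be to enumerate every parameter combination up front with ndgrid and loop over the rows of the resulting list. This is only a sketch based on the example values above; run_analysis is a placeholder name for the actual analysis code.

% Sketch: collapse the nested loops into a single loop over a parameter grid.
rates   = [5, 15, 30, 60, 90, 120];
amps    = 1e-9*[10, 25, 50, 100, 200, 300, 400, 500];
divs    = [1000, 1550, 2200, 2900, 3650, 4550];
windows = [10, 15];
perms   = 1:20;

[R, A, D, W, P] = ndgrid(rates, amps, divs, windows, perms);
params = [R(:), A(:), D(:), W(:), P(:)];   % one row per parameter combination

for k = 1:size(params, 1)
    rate         = params(k, 1);
    a            = params(k, 2);
    divide_nr_by = params(k, 3);
    window       = params(k, 4);
    perm         = params(k, 5);
    % run some analysis, e.g. run_analysis(rate, a, divide_nr_by, window, perm)
end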

Another question: when I run the loops, they start out pretty fast but then slow down. I don't carry any variables between runs; I write everything into a file, and each matrix gets overwritten. Why would the slowdown occur?

Thanks!
Daphne

Subject: simulation loop vectorization and slowdown

From: Jan Simon

Date: 21 Jul, 2010 21:23:03

Message: 2 of 7

Dear Daphne,

> Heres an example of what it may look like.
> for rate = [ 5, 15, 30, 60, 90, 120 ]
> for a = 1e-9*[ 10, 25, 50, 100, 200, 300, 400, 500 ]
> for divide_nr_by = [ 1000, 1550, 2200, 2900, 3650, 4550 ]
> for window = [ 10, 15 ]
> for perm = [1:20]
> % run some analysis
> end end end end end

The FOR loops themselves cannot be vectorized.
You describe the interesting part of your program only as "% run some analysis". We definitely need many more details.

Jan

Subject: simulation loop vectorization and slowdown

From: James Tursa

Date: 21 Jul, 2010 23:04:03

Message: 3 of 7

"Daphne" <daphnew_too_nospam@yahoo.com> wrote in message <i27lak$l25$1@fred.mathworks.com>...
>
> Another question, when I do run the loops, they initially run pretty fast, but then slow down. I don't carry any variables with me, but write it all into a file, and each matrix gets run over. Why would the slowdown occur?

This is usually the result of an array growing in size within the loop(s). We can't say for sure unless we see your code.
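
For what it's worth, here is a minimal illustration of that pattern (the variable names are made up): growing an array forces MATLAB to reallocate and copy it on every pass, so each iteration gets slower, while preallocating keeps the cost constant.

n = 1e5;

% Slow: results grows on every iteration, so MATLAB reallocates and
% copies the whole array each time through the loop.
results = [];
for k = 1:n
    results(k, :) = [k, k^2];
end

% Fast: preallocate once, then just fill in the rows.
results = zeros(n, 2);
for k = 1:n
    results(k, :) = [k, k^2];
end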

James Tursa

Subject: simulation loop vectorization and slowdown

From: someone

Date: 22 Jul, 2010 00:34:04

Message: 4 of 7

"Daphne" <daphnew_too_nospam@yahoo.com> wrote in message <i27lak$l25$1@fred.mathworks.com>...
>
> I am running a small simulation program that I wrote, changing parameters for each run, doing some analysis, and saving into an excel file. Since there are many parameters, I end up with 7 loops running within eachother...
> My program is basically an m-file with several inputs, which I choose in the loops...
>
> Heres an example of what it may look like.
> for rate = [ 5, 15, 30, 60, 90, 120 ]
> for a = 1e-9*[ 10, 25, 50, 100, 200, 300, 400, 500 ]
> for divide_nr_by = [ 1000, 1550, 2200, 2900, 3650, 4550 ]
> for window = [ 10, 15 ]
> for perm = [1:20]
> % run some analysis
> end end end end end
>
> Is there any way to somehow vectorize this even partially?

It MAY be possible to completely vectorize it and remove ALL the for loops (which doesn't necessarily mean it will run any faster). But from what little information you gave (% run some analysis), it's impossible for ANYONE to tell.
 
> I would be happy to get rid of some of the loops, but still need all the conditions to run.
>
> Another question, when I do run the loops, they initially run pretty fast, but then slow down. I don't carry any variables with me, but write it all into a file, and each matrix gets run over. Why would the slowdown occur?

Are you writing to the same file or to different files? If you unnecessarily keep the files open between writes/loops, that COULD be part of the problem.

It's usually because vectors or matrices aren't properly preallocated. But, again, given what little information you provided, it's impossible to tell.

Can you provide a SMALL sample of your code to show what's going on inside the for loops?

>
> Thanks!
> Daphne

Subject: simulation loop vectorization and slowdown

From: Daphne

Date: 22 Jul, 2010 04:42:03

Message: 5 of 7


Sorry for being so vague. I really didn't know where to start, since inside the loops there are about 400 lines of code in the main function plus about 5 subfunctions (the simulation, image processing, iterations on various parameters, and other goodies).
I guess what I would like to do is not vectorize the code itself (I've vectorized as much as I could think of), but perhaps find a way to reduce the number of loops needed to send the parameter list into the main subfunction. I'm guessing that's not possible.
I really don't know what to post here, and I can't post the whole code (it's too long and would reveal too much).
I do make sure to preallocate and try to clear any unnecessary variables between runs; I also use ~ for unneeded output variables.

What the function basically does is collect the parameters from the loops, call the main subfunction, and generate an image according to specifications. Then I run image-processing procedures on a thresholded image (using a previously determined threshold) and compare the processed image to an original to measure quality (another time-consuming bit, small enough to post, so here it is):

A = matrix;             % original image (treated as binary)
B = estimated_matrix;   % processed/estimated image (treated as binary)
W = weight_factor;
% Weighted overlap score: fraction of A's foreground pixels also set in B,
% plus fraction of A's background pixels also clear in B.
Q = W *(sum(sum(and( A, B)))/sum(sum( A))) + ...
    (1-W)*(sum(sum(and(1-A,1-B)))/sum(sum(1-A)));

Once good quality is obtained, many parameters are calculated and saved into a file.
Sorry I can't be more specific...

About the files: yes, I open (fopen) one file at the beginning and use fprintf to write the line of final data to it after each run. Perhaps it would be better to just collect it all into a matrix and dump it to the file every few hundred lines (~60 columns' worth of numerical data each run).

dlmwrite (as opposed to fprintf) doesn't require keeping the file open, does it?
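
A rough sketch of the buffer-and-flush idea with fprintf follows; the file name, block size, column count, and number of runs are all placeholders, and the random resultRow stands in for one run's actual results.

nCols     = 60;     % columns of numerical data per run (assumed)
blockSize = 500;    % flush to disk every 500 runs (assumed)
nRuns     = 2000;   % stand-in for the total number of runs
buffer    = zeros(blockSize, nCols);
row       = 0;
fmt       = [repmat('%g\t', 1, nCols - 1) '%g\n'];

fid = fopen('results.txt', 'w');      % placeholder file name

for k = 1:nRuns
    resultRow = rand(1, nCols);       % stand-in for one run's results
    row = row + 1;
    buffer(row, :) = resultRow;
    if row == blockSize
        fprintf(fid, fmt, buffer');   % transpose: fprintf reads data column-wise
        row = 0;
    end
end

if row > 0                            % flush whatever is left over
    fprintf(fid, fmt, buffer(1:row, :)');
end
fclose(fid);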

Daphne

Subject: simulation loop vectorization and slowdown

From: James Allison

Date: 23 Jul, 2010 15:09:28

Message: 6 of 7

If your objective is to find specific parameter values that produce the
best image quality, I would suggest using an optimization approach
instead of an exhaustive full-factorial search like you are performing.
You will be able to identify a better solution with fewer function
evaluations.

The fminsearch algorithm might work, but if you have any bounds on
variables, or if the number of parameters is too large for fminsearch to
handle, you will need to try something from the Optimization Toolbox,
such as lsqnonlin or fmincon. If the objective function is non-smooth or
has multiple optima, you may need to use functions from the Global
Optimization Toolbox:

http://www.mathworks.com/products/global-optimization/

If for some reason you need more than just the set of parameters that
produces the best image, I would recommend using a more efficient design
of experiments technique than full-factorial. Something like lhsdesign
can help you extract more information with fewer function evaluations
(and no loop).
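
For illustration only: lhsdesign (Statistics Toolbox) returns points in the unit hypercube, so they have to be scaled to the actual parameter ranges. The sample count and bounds below are placeholders taken from the loop values earlier in this thread.

nSamples = 200;                              % placeholder sample count
lb = [5,   10e-9,  1000, 10];                % lower bounds: rate, a, divide_nr_by, window
ub = [120, 500e-9, 4550, 15];                % upper bounds

X = lhsdesign(nSamples, numel(lb));          % nSamples-by-4 points in [0,1]
params = repmat(lb, nSamples, 1) + X .* repmat(ub - lb, nSamples, 1);

for k = 1:nSamples
    % evaluate the simulation / image-quality metric at params(k, :)
    % (discrete parameters such as window would need to be rounded or
    %  snapped to their allowed values)
end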

Best Regards,

-James


Subject: simulation loop vectorization and slowdown

From: Daphne

Date: 25 Jul, 2010 11:29:03

Message: 7 of 7


Thanks for the suggestions!
Unfortunately, in this case I do need all the permutations of the variables, as I am testing the parameters.
What you sent is great for later experiments of mine, thanks!

Does anyone know how much slower fprintf is compared to dlmwrite?
I wonder if it's worth my time to convert...
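
One way to answer that for your own data would be to time both approaches on a representative block with tic/toc; the sizes and file names below are placeholders.

data = rand(500, 60);                 % placeholder: ~500 runs x 60 columns
fmt  = [repmat('%g\t', 1, 59) '%g\n'];

% fprintf to a file that stays open for the whole write
tic
fid = fopen('test_fprintf.txt', 'w');
fprintf(fid, fmt, data');
fclose(fid);
t_fprintf = toc

% dlmwrite with -append (opens and closes the file on every call)
if exist('test_dlmwrite.txt', 'file'), delete('test_dlmwrite.txt'); end
tic
for k = 1:size(data, 1)
    dlmwrite('test_dlmwrite.txt', data(k, :), '-append', 'delimiter', '\t');
end
t_dlmwrite = toc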

Daphne


