Why are the optimization results with and without using parallel computing different?

Why do I not see the optimization speedup I expected using parallel computing?

Why does the optimization using parallel computing not make any progress?

The optimization time is dominated by the time it takes to simulate the model. When optimizing a Simulink® model, you can enable the Accelerator mode by selecting **Simulation** > **Mode** > **Accelerator** in the Simulink Editor to dramatically reduce the optimization time.

### Note

The Rapid Accelerator mode in Simulink software is not supported for speeding up the optimization. For more information, see Use Accelerator Mode During Simulations.
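You can also set the simulation mode programmatically. A minimal sketch, assuming a hypothetical model named `myModel`:

```matlab
% Load the model without opening the Simulink Editor
% ('myModel' is a hypothetical model name)
load_system('myModel');

% Enable Accelerator mode, equivalent to
% Simulation > Mode > Accelerator in the Simulink Editor
set_param('myModel', 'SimulationMode', 'accelerator');

% Confirm the mode took effect
get_param('myModel', 'SimulationMode')
```

Setting the mode this way is convenient when you configure the model from an optimization script rather than interactively.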

The choice of ODE solver can also significantly affect the overall optimization time. Use a stiff solver when the simulation takes many small steps, and use a fixed-step solver when such solvers yield accurate enough simulations for your model. (These solvers must be accurate in the entire parameter search space.)
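The solver can likewise be changed programmatically with `set_param`. A hedged sketch, again assuming a hypothetical model named `myModel`:

```matlab
load_system('myModel');   % 'myModel' is a hypothetical model name

% For stiff dynamics, use a stiff variable-step solver such as ode15s
set_param('myModel', 'SolverType', 'Variable-step', 'Solver', 'ode15s');

% Or, when a fixed-step solver is accurate enough for your model,
% pick one with an explicit step size
set_param('myModel', 'SolverType', 'Fixed-step', 'Solver', 'ode3', ...
          'FixedStep', '0.01');
```

Whichever solver you choose, verify its accuracy across the entire parameter search space, not just at the nominal parameter values.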

Reduce the number of tuned compensator elements or parameters and constrain their range to narrow the search space.

When specifying parameter uncertainty (not available when optimizing responses in a SISO Design Task), keep the number of sample values small, because the number of simulations grows exponentially with the number of uncertain parameters. For example, a grid of 3 parameters with 10 sample values for each parameter requires 10^3 = 1000 simulations per iteration.
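The growth is easy to quantify: with the same number of samples per parameter, the simulation count per iteration is the sample count raised to the number of parameters.

```matlab
% Simulations per iteration for a full grid of uncertain parameters
numParams  = 3;    % number of uncertain parameters
numSamples = 10;   % sample values per parameter

simsPerIteration = numSamples ^ numParams   % 10^3 = 1000
```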

Different numerical precision on the client and worker machines can produce marginally different simulation results. Thus, the optimization method can take a different solution path and produce a different result.

When you use parallel computing with the `Pattern search` method, the search is more comprehensive and can result in a different solution. To learn more, see Parallel Computing with the Pattern Search Method.

When you optimize a model that does not have a large number of parameters or does not take long to simulate, you might not see a speedup in the optimization time. In such cases, the overhead associated with creating and distributing the parallel tasks outweighs the benefits of running the optimization in parallel.
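One way to check whether the parallel overhead dominates is to time a few iterations both ways. A hedged sketch using `sdo.OptimizeOptions`, assuming a cost function `myCost` and a parameter vector `p` have already been set up for `sdo.optimize` (both are hypothetical names, and the `MethodOptions.MaxIter` setting assumes the default `fmincon` method):

```matlab
opts = sdo.OptimizeOptions;
opts.MethodOptions.MaxIter = 5;   % a few iterations suffice for timing

% Serial baseline
opts.UseParallel = false;
tic; sdo.optimize(@myCost, p, opts); tSerial = toc;

% Parallel run (starts a parallel pool if one is not already open)
opts.UseParallel = true;
tic; sdo.optimize(@myCost, p, opts); tParallel = toc;

fprintf('Serial: %.1f s, Parallel: %.1f s\n', tSerial, tParallel);
```

If the parallel run is not meaningfully faster over these few iterations, the task-distribution overhead is likely outweighing the benefit for your model.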

Using the `Pattern search` method with parallel computing might not speed up the optimization time. Without parallel computing, the method stops the search at each iteration when it finds a solution better than the current solution. The candidate solution search is more comprehensive when you use parallel computing. Although the number of iterations might be larger, the optimization without parallel computing might be faster. To learn more about the expected speedup, see Parallel Computing with the Pattern Search Method.

To troubleshoot the problem:

- Run the optimization for a few iterations without parallel computing to see if the optimization progresses.
- Check whether the remote workers have access to all model dependencies. Model dependencies include data variables and files required by the model to run. To learn more, see Model Dependencies.
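To check the dependencies yourself, one approach is to list the files the model requires and attach them to the parallel pool. A sketch, assuming a hypothetical model named `myModel` and an open pool (requires Parallel Computing Toolbox):

```matlab
% List the files the model depends on, so you can confirm that the
% workers can access each one ('myModel' is a hypothetical model name)
files = dependencies.fileDependencyAnalysis('myModel')

% If the files are not on a path shared with the workers,
% attach them to the current parallel pool
pool = gcp;
addAttachedFiles(pool, files);
```

Data variables the model reads from the base workspace must also be available on the workers, for example by loading them in a worker setup script.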

When you use parallel computing with the `Pattern search` method, the software must wait until the current optimization iteration completes before it notifies the workers to stop. The optimization does not terminate immediately when you click **Stop** and instead appears to continue running.
