
The optimization time is dominated by the time it takes to simulate the model. When optimizing a Simulink® model, you can enable the Accelerator mode using **Simulation** > **Mode** > **Accelerator** in the Simulink Editor to dramatically reduce the optimization time.

**Note:** The Rapid Accelerator mode in Simulink software is not supported for speeding up the optimization. For more information, see Accelerating Model Simulations During Optimization.

The choice of ODE solver can also significantly affect the overall optimization time. Use a stiff solver when the simulation takes many small steps, and use a fixed-step solver when such solvers yield accurate enough simulations for your model. (These solvers must be accurate over the entire parameter search space.)
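You can also set the simulation mode and solver programmatically with `set_param`. A minimal sketch, assuming a model named `myModel` (a placeholder for your own model):

```matlab
% Enable Accelerator mode programmatically; 'myModel' is a placeholder
% for your own model name.
mdl = 'myModel';
open_system(mdl);
set_param(mdl, 'SimulationMode', 'accelerator');

% Optionally choose a stiff variable-step solver such as ode15s, or a
% fixed-step solver, depending on which simulates your model accurately.
set_param(mdl, 'Solver', 'ode15s');
```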

Reduce the number of tuned compensator elements or parameters and constrain their range to narrow the search space.

When specifying parameter uncertainty (not available when optimizing responses in a SISO Design Task), keep the number of sample values small because the number of simulations grows exponentially with the number of uncertain parameters. For example, a grid of 3 parameters with 10 sample values each requires 10^3 = 1000 simulations per iteration.
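As a quick check on this growth, the simulation count for a full uncertainty grid is the number of samples per parameter raised to the number of parameters. A small sketch mirroring the example above:

```matlab
% Rough count of simulations per iteration for a full uncertainty grid.
% These values mirror the 3-parameter, 10-sample example in the text.
numParams = 3;         % number of uncertain parameters
samplesPerParam = 10;  % sample values per parameter
numSims = samplesPerParam ^ numParams;  % 10^3 = 1000
```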

Different numerical precision on the client and worker machines can produce marginally different simulation results. Thus, the optimization method can take a completely different solution path and produce a different result.

The client and worker machines must have models in identical states. For example, you must verify that the model running on the client uses exactly the same variable values as the workers. You must also verify that the client and workers are accessing model dependencies in identical states.
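One way to help keep file dependencies consistent is to attach them to the parallel pool explicitly. A sketch assuming a Parallel Computing Toolbox pool; the file names are placeholders for your model's actual dependencies:

```matlab
% Sketch: give workers the same dependency files as the client.
% The file names here are placeholders.
pool = gcp;  % get (or start) the current parallel pool
addAttachedFiles(pool, {'modelData.mat', 'initParams.m'});

% If you edit an attached file on the client, re-send it so the
% workers pick up the updated copy.
updateAttachedFiles(pool);
```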

When you use parallel computing with the `Pattern search` method, the search is more comprehensive and can result in a different solution. To learn more, see Parallel Computing with the Pattern Search Method.

When you optimize a model that does not have a large number of parameters or does not take long to simulate, you might not see a speedup in the optimization time. In such cases, the overhead associated with creating and distributing the parallel tasks outweighs the benefits of running the optimization in parallel.

Using the `Pattern search` method with parallel computing might not speed up the optimization time. Without parallel computing, the method stops the search at each iteration when it finds a solution better than the current solution. The candidate solution search is more comprehensive when you use parallel computing; although each iteration examines more candidates, the number of iterations might be larger, and the optimization without parallel computing might be faster. To learn more about the expected speedup, see Parallel Computing with the Pattern Search Method.
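At the command line, the analogous parallel setting for the Global Optimization Toolbox `patternsearch` solver is `UseParallel`. A sketch with a toy objective standing in for a simulation-based cost function:

```matlab
% Sketch: enable parallel polling for the command-line patternsearch
% solver (Global Optimization Toolbox). The objective is a placeholder
% for a real simulation-based cost function.
opts = optimoptions('patternsearch', ...
    'UseParallel', true, ...     % evaluate poll points in parallel
    'UseCompletePoll', true);    % poll the full candidate set each iteration

cost = @(x) (x(1) - 1)^2 + (x(2) + 2)^2;   % placeholder objective
x0 = [0 0];
xOpt = patternsearch(cost, x0, [], [], [], [], [], [], [], opts);
```

With `UseCompletePoll` enabled, all candidate points at an iteration are evaluated rather than stopping at the first improvement, which is what makes the parallel search more comprehensive.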

In some cases, the gradient computations on the remote worker
machines may silently error out when you use parallel computing. In
such cases, the Optimization Progress window shows that the `f(x)` and `max
constraint` values do not change, and the optimization terminates
after two iterations with the message `Unable to satisfy constraints`.
To troubleshoot the problem:

1. Run the optimization for a few iterations without parallel computing to see if the optimization progresses.

2. Check whether the remote workers have access to all model dependencies. Model dependencies include data variables and files required by the model to run.

   To learn more, see Model Dependencies.
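To see which files and products a model depends on, one option is MATLAB's dependency analysis. A sketch, with `myModel.slx` as a placeholder model file:

```matlab
% Sketch: list the files and MathWorks products a model depends on,
% so you can confirm the workers can access all of them.
% 'myModel.slx' is a placeholder.
[fileList, productList] = ...
    matlab.codetools.requiredFilesAndProducts('myModel.slx');
disp(fileList.');          % required files
disp({productList.Name});  % required MathWorks products
```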

When you use parallel computing, the software must wait until
the current optimization iteration completes before it notifies the
workers to stop the optimization. The optimization does not terminate
immediately when you click **Stop**, and, instead,
appears to continue running.
