Analyze Sensitivity Using the Tuning Advisor

When you design MPC controllers, you can use the Tuning Advisor to help determine which weights have the most influence on closed-loop performance, and in which direction to change each weight to improve that performance. Using the Advisor, you can see numerically how each weight affects closed-loop performance, which makes designing MPC controllers easier when the closed-loop response does not depend intuitively on the weights.

To start the Tuning Advisor, click Tuning Advisor in a simulation scenario view (see Tuning Advisor Button). The next figure shows the default Tuning Advisor window for a distillation process in which there are two controlled outputs, two manipulated variables, and one measured disturbance (which the Tuning Advisor ignores). In this case, the originating scenario is Scenario1.

The Tuning Advisor populates the Current Tuning column with the most recent tuning weights of the controller displayed in the Controller in Design box. In this case, Obj is the controller. The Advisor also initializes the Performance Weight column to the same values. The Scenario in Design box displays the scenario from which you started the Tuning Advisor; the Advisor uses this scenario to evaluate the controller's performance.

The columns highlighted in grey are Tuning Advisor displays and are read-only. For example, signal names come from the Signal Definition View and are blank unless you defined them there.

To tune the weights using the Tuning Advisor:

  1. Specify the performance metric.

  2. Compute the baseline performance.

  3. Adjust the weights based on the computed sensitivities.

  4. Recompute the performance metric.

  5. Update the controller.
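If you prefer to work at the command line, some MPC Toolbox releases provide a sensitivity command that mirrors steps 2–4 of this workflow. The following is a minimal sketch under that assumption; mpcobj, r, Tstop, and the weight values are placeholders, not values from this example.

    % Hedged sketch: a command-line analogue of steps 2-4, assuming your
    % release provides the sensitivity command for MPC objects.
    PerfWeights = struct(...
        'OutputVariables',          [1 1], ...  % w_y: outputs equally important
        'ManipulatedVariables',     [0 0], ...  % w_u: no MV targets
        'ManipulatedVariablesRate', [0 0]);     % w_du: MV moves acceptable
    Tstop = 50;                                 % scenario length (sampling intervals)

    % Step 2: baseline J and sensitivities of J to each controller tuning weight
    [J, sens] = sensitivity(mpcobj, 'ISE', PerfWeights, Tstop, r);

    % Step 3: adjust a tuning weight in the direction the sensitivities suggest
    mpcobj.Weights.OutputVariables(2) = 2*mpcobj.Weights.OutputVariables(2);

    % Step 4: recompute the performance metric with the revised weights
    J2 = sensitivity(mpcobj, 'ISE', PerfWeights, Tstop, r);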

Defining the Performance Metric

To obtain tuning advice, you must first define a quantitative scalar performance measure, J.

Select the Performance Function

Select a performance metric from the Select a performance function drop-down list in the upper-right corner of the Advisor. You can choose one of four standard ways to compute the performance measure, J; a computational sketch of these metrics follows the list. In each case, the goal is to minimize J.

  • ISE (Integral of Squared Error, the default). This is the standard linear quadratic weighting of setpoint tracking errors, manipulated variable movements, and deviations of manipulated variables from targets (if any). The formula is

    $$J = \sum_{i=1}^{T_{stop}} \left[ \sum_{j=1}^{n_y} \left(w_j^y e_{yij}\right)^2 + \sum_{j=1}^{n_u} \left( \left(w_j^u e_{uij}\right)^2 + \left(w_j^{\Delta u} \Delta u_{ij}\right)^2 \right) \right]$$

    where $T_{stop}$ is the number of controller sampling intervals in the scenario, $e_{yij}$ is the deviation of output $j$ from its setpoint (reference) at time step $i$, $e_{uij}$ is the deviation of manipulated variable $j$ from its target at time step $i$, $\Delta u_{ij}$ is the change in manipulated variable $j$ at time step $i$ (i.e., $\Delta u_{ij} = u_{ij} - u_{(i-1)j}$), and $w_j^y$, $w_j^u$, and $w_j^{\Delta u}$ are non-negative performance weights.

  • IAE (Integral of Absolute Error). Similar to the ISE but with the squared terms replaced by absolute values:

    $$J = \sum_{i=1}^{T_{stop}} \left[ \sum_{j=1}^{n_y} \left|w_j^y e_{yij}\right| + \sum_{j=1}^{n_u} \left( \left|w_j^u e_{uij}\right| + \left|w_j^{\Delta u} \Delta u_{ij}\right| \right) \right]$$

    The IAE gives less emphasis to large deviations.

  • ITSE (time-weighted Integral of Squared Errors)

    $$J = \sum_{i=1}^{T_{stop}} i\,\Delta t \left[ \sum_{j=1}^{n_y} \left(w_j^y e_{yij}\right)^2 + \sum_{j=1}^{n_u} \left( \left(w_j^u e_{uij}\right)^2 + \left(w_j^{\Delta u} \Delta u_{ij}\right)^2 \right) \right]$$

    where $\Delta t$ is the controller sampling interval. The ITSE penalizes deviations at long times more heavily than the ISE, i.e., it favors controllers that rapidly eliminate steady-state offset.

  • ITAE (time-weighted Integral of Absolute Errors)

    $$J = \sum_{i=1}^{T_{stop}} i\,\Delta t \left[ \sum_{j=1}^{n_y} \left|w_j^y e_{yij}\right| + \sum_{j=1}^{n_u} \left( \left|w_j^u e_{uij}\right| + \left|w_j^{\Delta u} \Delta u_{ij}\right| \right) \right]$$

    which is like the ITSE but with less emphasis on large deviations.

Specify Performance Weights

Each of the above formulas uses the same three performance weights, $w_j^y$, $w_j^u$, and $w_j^{\Delta u}$. All must be non-negative real numbers. Use the weights to:

  • Eliminate a term by setting its weight to zero. For example, a manipulated variable rarely has a target value, in which case you should set its $w_j^u$ to zero. Similarly, if a plant output is monitored but has no setpoint, set its $w_j^y$ to zero.

  • Scale the variables so their absolute or squared errors influence J appropriately. For example, an $e_{yij}$ of 0.01 in one output might be as important as a value of 100 in another. If you have chosen the ISE, the first should have a weight of 100 and the second 0.01 (see the numerical check after this list). In other words, scale all equally important expected errors to be of order unity.

    A Model Predictive Controller uses weights internally as tuning devices. Although there is some common ground, the performance weights and tuning weights should differ in most cases. Choose performance weights to define good performance and then tune the controller weights to achieve it. The Tuning Advisor's main purpose is to make this task easier.
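As a quick numerical check of the scaling rule in the second bullet (values hypothetical):

    wy = [100 0.01];     % performance weights chosen by the scaling rule
    ey = [0.01 100];     % equally important expected errors in the two outputs
    (wy .* ey).^2        % both squared weighted errors equal 1: order unity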

Baseline Performance

After you define the performance metric and specify the performance weights, compute a baseline J for the scenario by clicking Baseline. The next figure shows how this transforms the above example (the two $w_j^{\Delta u}$ performance weights have also been set to zero because manipulated variable changes are acceptable if needed to achieve good setpoint tracking for the two equally weighted outputs). The computed J = 3.435 is displayed in Baseline Performance, to the right of the Baseline button.

The Tuning Advisor also displays response plots for the scenario with the baseline controller (not shown but discussed in Customize Response Plots).

Sensitivities and Tuning Advice

Click Analyze to compute the sensitivities, as shown in the next figure. The columns labeled Sensitivity and Tuning Direction now contain advice.

Each sensitivity value is the partial derivative of J with respect to the controller tuning weight in the last entry of the same row. For example, the first output has a sensitivity of 0.08663. If we could assume linearity, a 1-unit increase in this tuning weight, currently equal to 1, would increase J by 0.08663 units. Since we want to minimize J, we should decrease the tuning weight, as suggested by the Tuning Direction entry.

The challenge is to choose an adjustment magnitude. The behavior is nonlinear, so the sensitivity value is only a rough indication of the likely impact.

You must also consider the tuning weight's current magnitude. For example, if the current value were 0.01, a 1-unit increase would be extreme and a 1-unit decrease impossible, whereas if it were 1000, a 1-unit change would be insignificant.
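A first-order estimate can guide the choice: the predicted change in J is the sensitivity times the proposed weight change, and sizing the step relative to the current weight avoids the extremes just described. The numbers below reuse the example values; the 50% step is an illustrative heuristic, not a toolbox rule.

    w     = 1;          % current tuning weight for the first output
    dJdw  = 0.08663;    % its sensitivity, as reported by the Advisor
    dw    = -0.5*w;     % proposed step: a 50% reduction, sized to w itself
    dJest = dJdw*dw     % approx. -0.0433, valid only while J is near-linear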

It's best to focus on a small subset of the tuning weights for which the sensitivities suggest good possibilities for improvement.

In the above example, the $w_j^{\Delta u}$ tuning weights are poor candidates. The maximum possible change in the suggested direction (decrease) is 0.1, and the sensitivities indicate that this would have a negligible impact on J. The $w_j^u$ weights are already zero and can't be decreased.

The $w_j^y$ weights are the only tuning weights worth considering. Again, it seems unlikely that a change will help much. The display below shows the effect of doubling the tuning weight on the bottoms purity (second) output; note the 2 in the last column of this row. After you click Analyze, the response plots (not shown) make it clear that this output tracks its setpoint more accurately, but at the expense of the other, and the overall J actually increases.

Notice also that the sensitivities have been recomputed with respect to the revised controller tuning weights. Again, there are no obvious opportunities for improved performance.

Thus, we have quickly determined that the default controller tuning weights are near-optimal in this case, and further tuning is not worth the effort.

Updating the Controller

If you decide a set of modified tuning weights is significantly better than the baseline set, click Update Controller in MPC Tool. The tuning weights in the Advisor's last column permanently replace those stored in the Controller in Design and become the new baseline. All displays update accordingly.
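At the command line, the equivalent update is to assign the revised weights to the controller object's Weights property (the values below are illustrative):

    % Make the revised tuning weights permanent on the controller object.
    mpcobj.Weights.OutputVariables          = [1 2];      % revised w_y
    mpcobj.Weights.ManipulatedVariables     = [0 0];      % w_u (MV targets)
    mpcobj.Weights.ManipulatedVariablesRate = [0.1 0.1];  % w_du (MV moves)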

Restoring Baseline Tuning

If you click Restore Baseline Weights, the Advisor will revert to the most recent baseline condition.

Modal Dialog Behavior

By default, the Advisor window is modal, meaning that you can't access any other MATLAB® windows while the Advisor is active. You can disable this behavior by clearing Tuning Advisor is Modal, as shown in the above example. This is not recommended, however. In particular, if you return to the Design Tool and modify your controller, your changes won't be communicated to the Advisor. Instead, close the Advisor, modify the controller, and then reopen the Advisor.

Scenarios for Performance Measurement

The scenario used with the Advisor should be a true test of controller performance. It should include a sequence of typical setpoint changes and disturbances. It is also good practice to test controller robustness with respect to prediction model error. The usual approach is to define a scenario in which the plant being controlled differs from the controller's prediction model.
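At the command line, one way to build such a robustness test is through the simulation options, whose Model property lets the simulated plant differ from the controller's prediction model. This is a sketch; mpcobj, truePlant, Tstop, and r are placeholders.

    % Simulate the controller against a plant that differs from its
    % prediction model to probe robustness.
    simOpts = mpcsimopt(mpcobj);
    simOpts.Model = truePlant;     % "real" plant, deliberately mismatched
    [y, t, u] = sim(mpcobj, Tstop, r, simOpts);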
