When you design an MPC controller, you can use the Tuning Advisor to determine which weight has the most influence on closed-loop performance, and in which direction to change that weight to improve performance. The Advisor quantifies numerically how each weight affects closed-loop performance, which makes designing MPC controllers easier when the closed-loop responses do not depend intuitively on the weights.

To start the Tuning Advisor, click **Tuning
Advisor** in a simulation scenario view (see Tuning Advisor Button). The next figure shows the default
Tuning Advisor
window for a distillation process in which there are two controlled
outputs, two manipulated variables, and one measured disturbance (which
the Tuning Advisor ignores). In this case, the originating scenario
is **Scenario1**.

The Tuning Advisor populates the **Current
Tuning** column with the most recent tuning weights of the
controller displayed in **Controller in Design**.
In this case, `Obj` is the controller. The Advisor
also initializes the **Performance Weight** column
to the same values. The **Scenario in Design** displays
the scenario from which you started the Tuning Advisor. The Advisor
uses this scenario to evaluate the controller's performance.

The columns highlighted in grey are Tuning Advisor displays and are read-only. For example, signal names come from the Signal Definition View and are blank unless you defined them there.

To tune the weights using the Tuning Advisor:

1. Specify the performance metric.

2. Compute the baseline performance.

3. Adjust the weights based on the computed sensitivities.

4. Recompute the performance metric.

5. Update the controller.

In order to obtain tuning advice, you must first provide a quantitative
scalar performance measure, *J*.

Select a performance metric from the **Select
a performance function** drop-down list in the upper right-hand
corner of the Advisor. You can choose one of four standard ways to
compute the performance measure, *J*. In each case,
the goal is to minimize *J*.

ISE (Integral of Squared Error, the default). This is the standard linear quadratic weighting of setpoint tracking errors, manipulated variable movements, and deviations of manipulated variables from targets (if any). The formula is

$$J={\displaystyle \sum _{i=1}^{Tstop}\left({\displaystyle \sum _{j=1}^{{n}_{y}}{({w}_{j}^{y}{e}_{yij})}^{2}+{\displaystyle \sum _{j=1}^{{n}_{u}}[{({w}_{j}^{u}{e}_{uij})}^{2}+{({w}_{j}^{\Delta u}\Delta {u}_{ij})}^{2}]}}\right)}$$

where

`Tstop` is the number of controller sampling intervals in the scenario, *e*_{yij} is the deviation of output *j* from its setpoint (reference) at time step *i*, *e*_{uij} is the deviation of manipulated variable *j* from its target at time step *i*, Δ*u*_{ij} is the change in manipulated variable *j* at time step *i* (i.e., Δ*u*_{ij} = *u*_{ij} – *u*_{i–1,j}), and $${w}_{j}^{y}$$, $${w}_{j}^{u}$$, and $${w}_{j}^{\Delta u}$$ are nonnegative *performance weights*.

IAE (Integral of Absolute Error). Similar to the ISE but with squared terms replaced by absolute values

$$J={\displaystyle \sum _{i=1}^{Tstop}\left({\displaystyle \sum _{j=1}^{{n}_{y}}\left|{w}_{j}^{y}{e}_{yij}\right|+{\displaystyle \sum _{j=1}^{{n}_{u}}(|{w}_{j}^{u}{e}_{uij}|+|{w}_{j}^{\Delta u}\Delta {u}_{ij}|)}}\right)}$$

The IAE gives less emphasis to any large deviations.

ITSE (time-weighted Integral of Squared Errors)

$$J={\displaystyle \sum _{i=1}^{Tstop}i\Delta t\left({\displaystyle \sum _{j=1}^{{n}_{y}}{({w}_{j}^{y}{e}_{yij})}^{2}+{\displaystyle \sum _{j=1}^{{n}_{u}}[{({w}_{j}^{u}{e}_{uij})}^{2}+{({w}_{j}^{\Delta u}\Delta {u}_{ij})}^{2}]}}\right)}$$

which penalizes deviations at long times more heavily than the ISE, i.e., it favors controllers that rapidly eliminate steady-state offset.

ITAE (time-weighted Integral of Absolute Errors)

$$J={\displaystyle \sum _{i=1}^{Tstop}i\Delta t}\left({\displaystyle \sum _{j=1}^{{n}_{y}}\left|{w}_{j}^{y}{e}_{yij}\right|+{\displaystyle \sum _{j=1}^{{n}_{u}}(|{w}_{j}^{u}{e}_{uij}|+|{w}_{j}^{\Delta u}\Delta {u}_{ij}|)}}\right)$$

which is like the ITSE but with less emphasis on large deviations.
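As an illustration only (this is not toolbox code), all four performance measures can be computed from the same weighted error trajectories. The function and array names below are hypothetical:

```python
def performance_metric(metric, dt, ey, eu, du, wy, wu, wdu):
    """Compute the scalar performance measure J for one scenario.

    ey[i][j]: deviation of output j from its setpoint at step i
    eu[i][j]: deviation of manipulated variable j from its target at step i
    du[i][j]: change in manipulated variable j at step i
    wy, wu, wdu: nonnegative performance weights, one per channel
    metric: 'ISE', 'IAE', 'ITSE', or 'ITAE'
    """
    squared = metric in ('ISE', 'ITSE')   # squared vs. absolute errors
    timed = metric in ('ITSE', 'ITAE')    # time weighting i*dt
    J = 0.0
    for i, (ey_i, eu_i, du_i) in enumerate(zip(ey, eu, du), start=1):
        term = sum((w * e) ** 2 if squared else abs(w * e)
                   for w, e in zip(wy, ey_i))
        term += sum((w * e) ** 2 if squared else abs(w * e)
                    for w, e in zip(wu, eu_i))
        term += sum((w * d) ** 2 if squared else abs(w * d)
                    for w, d in zip(wdu, du_i))
        J += (i * dt if timed else 1.0) * term
    return J
```

Because the ITSE and ITAE multiply each term by `i*dt`, late deviations dominate, matching the formulas above.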

Each of the above formulas uses the same three performance weights, $${w}_{j}^{y}$$, $${w}_{j}^{u}$$, and $${w}_{j}^{\Delta u}$$. All must be nonnegative real numbers. Use the weights to:

Eliminate a term by setting its weight to zero. For example, a manipulated variable rarely has a target value, in which case you should set its $${w}_{j}^{u}$$ to zero. Similarly, if a plant output is monitored but doesn't have a setpoint, set its $${w}_{j}^{y}$$ to zero.

Scale the variables so their absolute or squared errors influence *J* appropriately. For example, an *e*_{yij} of 0.01 in one output might be as important as a value of 100 in another. If you have chosen the ISE, the first should have a weight of 100 and the second 0.01. In other words, scale all equally important expected errors to be of order unity.

A Model Predictive Controller uses weights internally as tuning devices. Although there is some common ground, the performance weights and tuning weights should differ in most cases. Choose performance weights to define good performance, and then tune the controller weights to achieve it. The Tuning Advisor's main purpose is to make this task easier.
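To make the scaling concrete, here is a tiny sketch with hypothetical numbers, showing how the two weights above make equally important errors contribute equally to an ISE-type *J*:

```python
# Hypothetical, equally important output errors of very different magnitudes:
e1, e2 = 0.01, 100.0

# ISE performance weights chosen to scale both weighted errors to order unity:
w1, w2 = 100.0, 0.01

term1 = (w1 * e1) ** 2   # contribution of the first output to J
term2 = (w2 * e2) ** 2   # contribution of the second output to J
# Both terms are now of order unity, so both outputs matter equally in J.
```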

After you define the performance metric and specify the performance
weights, compute a baseline *J* for the scenario
by clicking **Baseline**. The next figure
shows how this transforms the above example (the two $${w}_{j}^{\Delta u}$$ performance weights have also
been set to zero because manipulated variable changes are acceptable
if needed to achieve good setpoint tracking for the two equally weighted
outputs). The computed *J* = 3.435 is displayed
in **Baseline Performance**, to the right
of the **Baseline** button.

The Tuning Advisor also displays response plots for the scenario with the baseline controller (not shown but discussed in Customize Response Plots).

Click **Analyze** to compute the
sensitivities, as shown in the next figure. The columns labeled **Sensitivity** and **Tuning
Direction** now contain advice.

Each sensitivity value is the partial derivative of *J* with
respect to the controller tuning weight in the last entry of the same
row. For example, the first output has a sensitivity of 0.08663. If
we could assume linearity, a 1-unit increase in this tuning weight,
currently equal to 1, would increase *J* by 0.08663
units. Since we want to minimize *J*, we should
decrease the tuning weight, as suggested by the **Tuning
Direction** entry.
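Under that local linearity assumption, a sensitivity translates directly into a first-order prediction of the new *J*. A sketch using the numbers from this example (the step size `dw` is an arbitrary illustrative choice):

```python
# Values from the distillation example: baseline J and the sensitivity of J
# with respect to the first output's tuning weight (currently 1).
J_baseline = 3.435
sensitivity = 0.08663   # dJ/dw, as reported in the Sensitivity column
dw = -0.5               # a proposed decrease in the tuning weight

# First-order (linearized) prediction of the new performance; valid only
# locally, because the true relationship is nonlinear:
J_predicted = J_baseline + sensitivity * dw
```

In practice you would make the change, click **Analyze** again, and compare the recomputed *J* to this rough prediction.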

The challenge is to choose an adjustment magnitude. The behavior is nonlinear so the sensitivity value is just a rough indication of the likely impact.

You must also consider the tuning weight's current magnitude. For example, if the current value were 0.01, a 1-unit increase would be extreme and a 1-unit decrease impossible, whereas if it were 1000, a 1-unit change would be insignificant.

It's best to focus on a small subset of the tuning weights for which the sensitivities suggest good possibilities for improvement.

In the above example, the $${w}_{j}^{\Delta u}$$ are
poor candidates. The maximum possible change in the suggested direction
(decrease) is 0.1, and the sensitivities indicate that this would
have a negligible impact on *J*. The $${w}_{j}^{u}$$ are already zero and can't be
decreased.

The $${w}_{j}^{y}$$ are the only tuning
weights worth considering. Again, it seems unlikely that a change
will help much. The display below shows the effect of doubling the
tuning weight on the bottoms purity (second) output. Note the 2 in
the last column of this row. After you click **Analyze**, the
response plots (not shown) make it clear that this output tracks its
setpoint more accurately but at the expense of the other, and the
overall *J* actually increases.

Notice also that the sensitivities have been recomputed with respect to the revised controller tuning weights. Again, there are no obvious opportunities for improved performance.

Thus, we have quickly determined that the default controller tuning weights are near-optimal in this case, and further tuning is not worth the effort.

The Tuning Advisor can help you to refine controller tuning weights for better performance. It also provides a quantitative performance measurement.

You can access the Tuning Advisor from the **Scenarios** node
in the Control and Estimation Tools Manager. Before you use the Advisor,
choose the controller horizons and sampling period, specify constraints,
and select a disturbance estimator (if the default estimator is inappropriate).
The Advisor does not provide help with these parameters.

The example considered here is a plant with four controlled outputs and four manipulated variables. There are no measured disturbances and the unmeasured disturbances are unmodeled.

After starting the design tool and importing the plant model, *G*,
which becomes the controller design basis, we accept the default values
for all controller parameters. We also load a second plant model, *G _{p}*,
in which all parameters of

The scenario shown in the previous figure specifies the controller
based on *G* and the plant *G _{p}*.
In other words, it tests the controller's robustness with respect to
plant-model mismatch. It also defines a series of setpoint changes
and disturbances.

Clicking **Tuning Advisor** opens
the MPC Tuning Advisor window. In the Tuning Advisor window, we specify
the following settings:

Select the IAE performance function (an arbitrary choice for illustration only).

Set all input performance weights to zero because the application does not have input targets.

Set all input rate performance weights to zero because the application has no cost for manipulated variable movement.

Leave the output performance weights at their default values (unity) because all controller outputs are of roughly equal magnitude and the application gives equal priority to the tracking of all four setpoints.

Click **Baseline**.

Click **Analyze**.

The Tuning Advisor resembles the previous figure. The sensitivity
values indicate that a decrease in the `Out4` weight
or an increase in the `Out2` weight would have the
most impact. In general, however, the output tuning weights should
reflect the setpoint tracking priorities, so it's preferable to adjust
the input rate tuning weights.

Sensitivities for **Input Rate Weights** `In1` and `In4` are
of roughly equal magnitude, but the `In4` suggestion
is a decrease and this weight is already near its lower bound of zero.
Thus, we focus on the `In1` weight.

The next figure shows the Advisor after the `In1` weight
has been increased in several steps from 0.1 to 4. Performance has
improved by nearly 20% relative to the baseline. Sensitivities indicate
that further adjustments to the input rate tuning weights will have
little impact.

At this point, we can consider adjusting the output tuning weights. It is possible that an attempt to control a particular output might be causing upsets in other outputs (because of model error).

The next figure shows the Tuning Advisor after additional adjustments. At this point, some sensitivities are still rather large, but a small change in the indicated tuning weight causes the sensitivity to change sign. Therefore, further progress will be difficult.

Overall, we have improved the performance by (26.69 − 20.14)/26.69, which is more than 20%.
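The improvement figure is simply the relative reduction in *J* from the baseline, using the IAE values from this example:

```python
# Baseline IAE and final IAE after tuning (values from this example):
J_baseline, J_final = 26.69, 20.14

# Fractional improvement relative to the baseline performance:
improvement = (J_baseline - J_final) / J_baseline   # about 0.25, i.e. ~25%
```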

If you decide a set of modified tuning weights is significantly
better than the baseline set, click **Update Controller
in MPC Tool**. The tuning weights in the Advisor's last column
permanently replace those stored in the **Controller
in Design** and become the new baseline. All displays update
accordingly.

If you click **Restore Baseline Weights**,
the Advisor will revert to the most recent baseline condition.

By default, the Advisor window is modal, meaning that you won't
be able to access any other MATLAB^{®} windows while the Advisor
is active. You can disable **Tuning Advisor is
Modal**, as shown in the above example. This is *not
recommended*, however. In particular, if you return to the
Design Tool and modify your controller, your changes won't be communicated
to the Advisor. Instead, close the Advisor, modify the controller
and then reopen the Advisor.

The scenario used with the Advisor should be a true test of controller performance. It should include a sequence of typical setpoint changes and disturbances. It is also good practice to test controller robustness with respect to prediction model error. The usual approach is to define a scenario in which the plant being controlled differs from the controller's prediction model.
