
Algorithms for Recursive Estimation

Types of Recursive Estimation Algorithms

You can choose from the following four recursive estimation algorithms:

    Kalman filter
    Forgetting factor
    Unnormalized gradient
    Normalized gradient

You specify the type of recursive estimation algorithm as an argument in the recursive estimation commands.

For detailed information about these algorithms, see the corresponding chapter in System Identification: Theory for the User by Lennart Ljung (Prentice Hall PTR, Upper Saddle River, NJ, 1999).

General Form of Recursive Estimation Algorithm

The general recursive identification algorithm is given by the following equation:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \hat{y}(t)\right)$$

$\hat{\theta}(t)$ is the parameter estimate at time t. y(t) is the observed output at time t, and $\hat{y}(t)$ is the prediction of y(t) based on observations up to time t-1. The gain, K(t), determines how much the current prediction error $y(t)-\hat{y}(t)$ affects the update of the parameter estimate. The estimation algorithms minimize the prediction-error term $y(t)-\hat{y}(t)$.

The gain has the following general form:

$$K(t) = Q(t)\,\psi(t)$$

The recursive algorithms supported by the System Identification Toolbox™ product differ in how they choose the form of Q(t) and compute ψ(t), where ψ(t) represents the gradient of the predicted model output $\hat{y}(t|\theta)$ with respect to the parameters θ.

The simplest way to visualize the role of the gradient ψ(t) is to consider models that have a linear-regression form:

$$y(t) = \psi^{T}(t)\,\theta_{0}(t) + e(t)$$

In this equation, ψ(t) is the regression vector that is computed based on previous values of measured inputs and outputs. θ0(t) represents the true parameters. e(t) is the noise source (innovations), which is assumed to be white noise. The specific form of ψ(t) depends on the structure of the polynomial model.

For linear regression equations, the predicted output is given by the following equation:

$$\hat{y}(t) = \psi^{T}(t)\,\hat{\theta}(t-1)$$
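
To make these quantities concrete, the following MATLAB sketch (an illustration, not part of the toolbox documentation) applies the general update equation to simulated linear-regression data. The data generation, the regression vector, and the constant placeholder Q are assumptions; the algorithms described in the rest of this topic replace Q with their own time-varying formulas for Q(t).

    % Minimal sketch of the general recursive update for a linear-regression
    % model y(t) = psi'(t)*theta0 + e(t). The constant Q below is only a
    % placeholder; each algorithm in this topic defines its own Q(t).
    N = 200;
    u = randn(N,1);                                     % input signal (assumed)
    y = filter([0 0.5],[1 -0.8],u) + 0.05*randn(N,1);   % simulated data (assumed)
    theta = zeros(2,1);                                 % parameter estimate
    Q = 0.1*eye(2);                                     % placeholder gain factor
    for t = 2:N
        psi   = [-y(t-1); u(t-1)];        % regression vector for this structure
        y_hat = psi'*theta;               % y_hat(t) = psi'(t)*theta_hat(t-1)
        K     = Q*psi;                    % K(t) = Q(t)*psi(t)
        theta = theta + K*(y(t) - y_hat); % general recursive update
    end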

For models that do not have the linear-regression form, it is not possible to compute the predicted output and the gradient ψ(t) exactly for the current parameter estimate $\hat{\theta}(t-1)$. To learn how you can compute approximations of ψ(t) and $\hat{y}(t)$ for general model structures, see the section on recursive prediction-error methods in System Identification: Theory for the User by Lennart Ljung (Prentice Hall PTR, Upper Saddle River, NJ, 1999).

Kalman Filter Algorithm

Mathematics of the Kalman Filter Algorithm

The following set of equations summarizes the Kalman filter adaptation algorithm:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \hat{y}(t)\right)$$

$$\hat{y}(t) = \psi^{T}(t)\,\hat{\theta}(t-1)$$

$$K(t) = Q(t)\,\psi(t)$$

$$Q(t) = \frac{P(t-1)}{R_{2} + \psi^{T}(t)\,P(t-1)\,\psi(t)}$$

$$P(t) = P(t-1) + R_{1} - \frac{P(t-1)\,\psi(t)\,\psi^{T}(t)\,P(t-1)}{R_{2} + \psi^{T}(t)\,P(t-1)\,\psi(t)}$$

This formulation assumes the linear-regression form of the model:

$$y(t) = \psi^{T}(t)\,\theta_{0}(t) + e(t)$$

The Kalman filter is used to obtain Q(t).

This formulation also assumes that the true parameters θ0(t) are described by a random walk:

$$\theta_{0}(t) = \theta_{0}(t-1) + w(t)$$

w(t) is Gaussian white noise with the following covariance matrix, or drift matrix R1:

$$E\left[w(t)\,w^{T}(t)\right] = R_{1}$$

R2 is the variance of the innovations e(t) in the following equation:

$$y(t) = \psi^{T}(t)\,\theta_{0}(t) + e(t)$$

The Kalman filter algorithm is entirely specified by the sequence of data y(t), the gradient ψ(t), R1, R2, and the initial conditions $\hat{\theta}(t=0)$ (the initial guess of the parameters) and P(t=0) (a covariance matrix that indicates the uncertainty in the initial parameter estimates).

    Note:   To simplify the inputs, you can scale R1, R2, and P(t=0) of the original problem by the same value such that R2 is equal to 1. This scaling does not affect the parameter estimates.
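
The following MATLAB sketch (an illustration, not toolbox code) writes out these Kalman filter equations for simulated linear-regression data. The data, the regression vector, and the values of R1, R2, and the initial conditions are assumptions chosen only to make the sketch runnable.

    % Sketch of the Kalman filter adaptation equations for a linear-regression model.
    N  = 300;
    u  = randn(N,1);
    y  = filter([0 0.5],[1 -0.8],u) + 0.05*randn(N,1);   % simulated data (assumed)
    R1 = 1e-4*eye(2);            % drift matrix (assumed)
    R2 = 0.05^2;                 % innovations variance (assumed)
    theta = zeros(2,1);          % theta_hat(t=0), initial parameter guess
    P     = 10*eye(2);           % P(t=0), initial covariance
    for t = 2:N
        psi   = [-y(t-1); u(t-1)];                 % regression vector
        y_hat = psi'*theta;                        % predicted output
        den   = R2 + psi'*P*psi;
        K     = (P*psi)/den;                       % K(t) = Q(t)*psi(t)
        theta = theta + K*(y(t) - y_hat);          % parameter update
        P     = P + R1 - (P*psi)*(psi'*P)/den;     % covariance update
    end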

Using the Kalman Filter Algorithm

The general syntax for the command described in Algorithms for Recursive Estimation is the following:

[params,y_hat]=command(data,nn,adm,adg)

To specify the Kalman filter algorithm, set adm to 'kf' and adg to the value of the drift matrix R1 (described in Mathematics of the Kalman Filter Algorithm).
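
For example, with rarx (the recursive ARX estimation command), the call might look like the following; the data set, model orders, and value of R1 are illustrative assumptions:

    % Illustrative call; adjust the orders and R1 for your own problem.
    load iddata1 z1                  % example data set shipped with the toolbox
    z  = [z1.y, z1.u];               % output-input data matrix
    nn = [2 2 1];                    % ARX orders [na nb nk] (assumed)
    R1 = 0.01*eye(sum(nn(1:2)));     % drift matrix, one row/column per parameter
    [params,y_hat] = rarx(z,nn,'kf',R1);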

Forgetting Factor Algorithm

Mathematics of the Forgetting Factor Algorithm

The following set of equations summarizes the forgetting factor adaptation algorithm:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \hat{y}(t)\right)$$

$$\hat{y}(t) = \psi^{T}(t)\,\hat{\theta}(t-1)$$

$$K(t) = Q(t)\,\psi(t)$$

$$Q(t) = \frac{P(t-1)}{\lambda + \psi^{T}(t)\,P(t-1)\,\psi(t)}$$

$$P(t) = \frac{1}{\lambda}\left(P(t-1) - \frac{P(t-1)\,\psi(t)\,\psi^{T}(t)\,P(t-1)}{\lambda + \psi^{T}(t)\,P(t-1)\,\psi(t)}\right)$$

To obtain Q(t), the following function is minimized at time t:

$$\sum_{k=1}^{t} \lambda^{t-k}\left(y(k) - \hat{y}(k)\right)^{2}$$

This approach discounts old measurements exponentially such that an observation that is τ samples old carries a weight that is equal to $\lambda^{\tau}$ times the weight of the most recent observation. $\tau = 1/(1-\lambda)$ represents the memory horizon of this algorithm. Measurements older than $\tau = 1/(1-\lambda)$ typically carry a weight that is less than about 0.3.
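
For example (values assumed for illustration), with λ = 0.98 the memory horizon is τ = 1/(1 - 0.98) = 50 samples:

    % Relative weights of observations that are 0, tau, and 2*tau samples old.
    lambda = 0.98;
    tau    = 1/(1 - lambda);            % memory horizon: 50 samples
    w      = lambda.^[0 tau 2*tau]      % approximately [1 0.36 0.13]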

λ is called the forgetting factor and typically has a positive value between 0.97 and 0.995.

    Note:   In the linear regression case, the forgetting factor algorithm is known as the recursive least-squares (RLS) algorithm. The forgetting factor algorithm for λ = 1 is equivalent to the Kalman filter algorithm with R1=0 and R2=1. For more information about the Kalman filter algorithm, see Kalman Filter Algorithm.
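
As with the Kalman filter case, these equations can be written out directly in MATLAB. The following sketch (an illustration, not toolbox code) uses the same kind of simulated linear-regression data as the earlier sketches, with an assumed forgetting factor and assumed initial conditions.

    % Sketch of the forgetting factor (recursive least-squares) adaptation equations.
    N      = 300;
    u      = randn(N,1);
    y      = filter([0 0.5],[1 -0.8],u) + 0.05*randn(N,1);   % simulated data (assumed)
    lambda = 0.98;               % forgetting factor (assumed)
    theta  = zeros(2,1);         % theta_hat(t=0)
    P      = 10*eye(2);          % P(t=0)
    for t = 2:N
        psi   = [-y(t-1); u(t-1)];                    % regression vector
        y_hat = psi'*theta;                           % predicted output
        den   = lambda + psi'*P*psi;
        K     = (P*psi)/den;                          % K(t) = Q(t)*psi(t)
        theta = theta + K*(y(t) - y_hat);             % parameter update
        P     = (P - (P*psi)*(psi'*P)/den)/lambda;    % covariance update
    end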

Using the Forgetting Factor Algorithm

The general syntax for the command described in Algorithms for Recursive Estimation is the following:

[params,y_hat]=command(data,nn,adm,adg)

To specify the forgetting factor algorithm, set adm to 'ff' and adg to the value of the forgetting factor λ (described in Mathematics of the Forgetting Factor Algorithm).

    Tip   λ typically has a positive value from 0.97 to 0.995.
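
For example, with rarx and assumed ARX orders, a forgetting factor of 0.98 is specified as follows (the data set and values are illustrative only):

    % Illustrative call; adjust the orders and lambda for your own problem.
    load iddata1 z1
    z  = [z1.y, z1.u];
    nn = [2 2 1];                    % ARX orders [na nb nk] (assumed)
    [params,y_hat] = rarx(z,nn,'ff',0.98);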

Unnormalized and Normalized Gradient Algorithms

Mathematics of the Unnormalized and Normalized Gradient Algorithm

In the linear regression case, the gradient methods are also known as the least mean squares (LMS) methods.

The following set of equations summarizes the unnormalized gradient and normalized gradient adaptation algorithm:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \hat{y}(t)\right)$$

$$\hat{y}(t) = \psi^{T}(t)\,\hat{\theta}(t-1)$$

$$K(t) = Q(t)\,\psi(t)$$

In the unnormalized gradient approach, Q(t) is the product of the gain γ and the identity matrix:

$$Q(t) = \gamma\,I$$

In the normalized gradient approach, Q(t) is the product of the gain γ and the identity matrix, normalized by the square of the magnitude of the gradient ψ(t):

$$Q(t) = \frac{\gamma\,I}{\left|\psi(t)\right|^{2}}$$

These choices of Q(t) update the parameters in the direction of the negative gradient of the squared prediction error, where the gradient is computed with respect to the parameters.
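
The following MATLAB sketch (an illustration, not toolbox code) applies both gradient updates to the same kind of simulated linear-regression data used in the earlier sketches; the gain value and the small bias added to the normalization are assumptions.

    % Sketch of the unnormalized ('ug') and normalized ('ng') gradient updates.
    N     = 300;
    u     = randn(N,1);
    y     = filter([0 0.5],[1 -0.8],u) + 0.05*randn(N,1);   % simulated data (assumed)
    gamma = 0.01;                    % gain (assumed)
    th_ug = zeros(2,1);              % unnormalized gradient estimate
    th_ng = zeros(2,1);              % normalized gradient estimate
    for t = 2:N
        psi   = [-y(t-1); u(t-1)];                         % regression vector
        th_ug = th_ug + gamma*psi*(y(t) - psi'*th_ug);     % Q(t) = gamma*I
        th_ng = th_ng + gamma*psi*(y(t) - psi'*th_ng) ...
                        /(psi'*psi + eps);                 % Q(t) = gamma*I/|psi|^2
        % The small eps term guards against division by zero; it is a common
        % practical safeguard, not part of the equations above.
    end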

Using the Unnormalized and Normalized Gradient Algorithms

The general syntax for the command described in Algorithms for Recursive Estimation is the following:

[params,y_hat]=command(data,nn,adm,adg)

To specify the unnormalized gradient algorithm, set adm to 'ug' and adg to the value of the gain γ (described in Mathematics of the Unnormalized and Normalized Gradient Algorithm).

To specify the normalized gradient algorithm, set adm to 'ng' and adg to the value of the gain γ.
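
For example, with rarx and assumed ARX orders, the two gradient variants are invoked as follows (the data set and gain value are illustrative only):

    % Illustrative calls; adjust the orders and gain for your own problem.
    load iddata1 z1
    z  = [z1.y, z1.u];
    nn = [2 2 1];                    % ARX orders [na nb nk] (assumed)
    [params_ug,yhat_ug] = rarx(z,nn,'ug',0.01);   % unnormalized gradient
    [params_ng,yhat_ng] = rarx(z,nn,'ng',0.01);   % normalized gradient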
