Recursive Algorithms for Online Estimation

General Form of Recursive Estimation

The general form of the recursive estimation algorithm is as follows:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \hat{y}(t)\right)$$

θ̂(t) is the parameter estimate at time t. y(t) is the observed output at time t, and ŷ(t) is the prediction of y(t) based on observations up to time t − 1. The gain, K(t), determines how much the current prediction error y(t) − ŷ(t) affects the update of the parameter estimate. The estimation algorithms minimize the prediction-error term y(t) − ŷ(t).

The gain has the following form:

$$K(t) = Q(t)\,\psi(t)$$

The recursive algorithms supported by the System Identification Toolbox™ product differ in how they choose the form of Q(t) and compute ψ(t), where ψ(t) represents the gradient of the predicted model output ŷ(t|θ) with respect to the parameters θ.
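To make this structure concrete, the following Python/NumPy sketch shows one generic update step. It is illustrative only, not toolbox code; it assumes the linear-regression prediction discussed below, with Q(t) supplied by whichever adaptation algorithm is in use.

```python
import numpy as np

def recursive_update(theta_prev, psi, y_meas, Q):
    """One generic recursive-estimation step (linear-regression prediction).

    theta_prev : parameter estimate theta_hat(t-1), shape (n,)
    psi        : gradient/regression vector psi(t), shape (n,)
    y_meas     : measured output y(t), scalar
    Q          : algorithm-specific factor Q(t), scalar or (n, n) matrix
    """
    y_hat = psi @ theta_prev                              # prediction y_hat(t)
    K = (Q @ psi) if np.ndim(Q) == 2 else Q * psi         # gain K(t) = Q(t) * psi(t)
    theta = theta_prev + K * (y_meas - y_hat)             # update driven by the prediction error
    return theta, y_hat
```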

The simplest way to visualize the role of the gradient ψ(t) with respect to the parameters is to consider models with a linear-regression form:

$$y(t) = \psi^{T}(t)\,\theta_0(t) + e(t)$$

In this equation, ψ(t) is the regression vector that is computed based on previous values of measured inputs and outputs. θ₀(t) represents the true parameters. e(t) is the noise source (innovations), which is assumed to be white noise. The specific form of ψ(t) depends on the structure of the polynomial model.

For linear regression equations, the predicted output is given by the following equation:

$$\hat{y}(t) = \psi^{T}(t)\,\hat{\theta}(t-1)$$

For models that do not have the linear-regression form, it is not possible to compute the predicted output and the gradient ψ(t) exactly for the current parameter estimate θ̂(t − 1). To learn how to compute approximations of ψ(t) and the predicted output for general model structures, see the section on recursive prediction-error methods in [1].
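As a hedged illustration (not toolbox code), consider a first-order ARX-type structure, y(t) = −a1·y(t−1) + b1·u(t−1) + e(t), for which one common convention is ψ(t) = [−y(t−1), u(t−1)]ᵀ and θ = [a1, b1]ᵀ. The model order and numeric values below are assumptions made purely for illustration:

```python
import numpy as np

def arx_regressor(y_prev, u_prev):
    """psi(t) for the first-order ARX structure y(t) = -a1*y(t-1) + b1*u(t-1) + e(t)."""
    return np.array([-y_prev, u_prev])

theta_hat = np.array([0.5, 1.2])     # illustrative estimate [a1, b1] from time t-1
psi_t = arx_regressor(y_prev=0.8, u_prev=1.0)
y_hat = psi_t @ theta_hat            # predicted output y_hat(t) = psi(t)^T theta_hat(t-1)
```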

Types of Recursive Estimation Algorithms

The System Identification Toolbox software provides the following recursive estimation algorithms for online estimation: forgetting factor, Kalman filter, and normalized and unnormalized gradient.

The forgetting factor and Kalman filter formulations are more computationally intensive than the normalized and unnormalized gradient methods. However, they have much better convergence properties.

Forgetting Factor

The following set of equations summarizes the forgetting factor adaptation algorithm:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \hat{y}(t)\right)$$

$$\hat{y}(t) = \psi^{T}(t)\,\hat{\theta}(t-1)$$

$$K(t) = Q(t)\,\psi(t)$$

$$Q(t) = \frac{P(t-1)}{\lambda + \psi^{T}(t)\,P(t-1)\,\psi(t)}$$

$$P(t) = \frac{1}{\lambda}\left(P(t-1) - \frac{P(t-1)\,\psi(t)\,\psi^{T}(t)\,P(t-1)}{\lambda + \psi^{T}(t)\,P(t-1)\,\psi(t)}\right)$$

Q(t) is obtained by minimizing the following function at time t:

$$\sum_{k=1}^{t} \lambda^{t-k}\left(y(k) - \hat{y}(k)\right)^{2}$$

See section 11.2 in [1] for details.

This approach discounts old measurements exponentially such that an observation that is τ samples old carries a weight that is equal to λ^τ times the weight of the most recent observation. τ = 1/(1 − λ) represents the memory horizon of this algorithm. Measurements older than τ = 1/(1 − λ) typically carry a weight that is less than about 0.3.

λ is called the forgetting factor and typically has a positive value between 0.97 and 0.995.
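A minimal Python/NumPy sketch of the recursion defined by the equations above follows. It is illustrative only, not the toolbox implementation, and the defaults λ = 0.98 and P(0) = 10⁴·I are assumptions chosen for the example:

```python
import numpy as np

def forgetting_factor_rls(Psi, y, lam=0.98, theta0=None, P0=None):
    """Recursive least squares with a forgetting factor lambda.

    Psi : (N, n) array whose rows are the regression vectors psi(t)
    y   : (N,) array of measured outputs y(t)
    """
    N, n = Psi.shape
    theta = np.zeros(n) if theta0 is None else np.asarray(theta0, dtype=float).copy()
    P = 1e4 * np.eye(n) if P0 is None else np.asarray(P0, dtype=float).copy()
    for t in range(N):
        psi = Psi[t]
        y_hat = psi @ theta                      # y_hat(t) = psi(t)^T theta_hat(t-1)
        denom = lam + psi @ P @ psi              # lambda + psi^T P(t-1) psi
        K = (P @ psi) / denom                    # K(t) = Q(t) psi(t)
        theta = theta + K * (y[t] - y_hat)       # parameter update
        P = (P - np.outer(P @ psi, psi @ P) / denom) / lam   # covariance update
    return theta, P
```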

    Note: The forgetting factor algorithm for λ = 1 is equivalent to the Kalman filter algorithm with R1 = 0 and R2 = 1. For more information about the Kalman filter algorithm, see Kalman Filter.

Kalman Filter

The following set of equations summarizes the Kalman filter adaptation algorithm:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \hat{y}(t)\right)$$

$$\hat{y}(t) = \psi^{T}(t)\,\hat{\theta}(t-1)$$

$$K(t) = Q(t)\,\psi(t)$$

$$Q(t) = \frac{P(t-1)}{R_2 + \psi^{T}(t)\,P(t-1)\,\psi(t)}$$

$$P(t) = P(t-1) + R_1 - \frac{P(t-1)\,\psi(t)\,\psi^{T}(t)\,P(t-1)}{R_2 + \psi^{T}(t)\,P(t-1)\,\psi(t)}$$

This formulation assumes the linear-regression form of the model:

$$y(t) = \psi^{T}(t)\,\theta_0(t) + e(t)$$

Q(t) is computed using a Kalman filter.

This formulation also assumes that the true parameters θ₀(t) are described by a random walk:

$$\theta_0(t) = \theta_0(t-1) + w(t)$$

w(t) is Gaussian white noise with the following covariance matrix, or drift matrix R1:

$$E\,w(t)\,w^{T}(t) = R_1$$

R2 is the variance of the innovations e(t) in the following equation:

$$y(t) = \psi^{T}(t)\,\theta_0(t) + e(t)$$

The Kalman filter algorithm is entirely specified by the sequence of data y(t), the gradient ψ(t), R1, R2, and the initial conditions θ(t=0) (initial guess of the parameters) and P(t=0) (covariance matrix that indicates the parameter errors).

    Note: It is assumed that the R1 and P(t = 0) matrices are scaled such that R2 = 1. This scaling does not affect the parameter estimates.
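The step below is a hedged Python/NumPy sketch of this adaptation (illustrative only, not the toolbox implementation), following the R2 = 1 scaling convention from the note above:

```python
import numpy as np

def kalman_param_step(theta, P, psi, y_meas, R1, R2=1.0):
    """One Kalman-filter adaptation step for the linear-regression model.

    theta  : theta_hat(t-1), shape (n,)
    P      : parameter-error covariance P(t-1), shape (n, n)
    psi    : regression vector psi(t), shape (n,)
    y_meas : measured output y(t), scalar
    R1     : drift (random-walk) covariance matrix, shape (n, n)
    R2     : variance of the innovations e(t), scaled to 1 by convention
    """
    y_hat = psi @ theta                        # y_hat(t) = psi(t)^T theta_hat(t-1)
    denom = R2 + psi @ P @ psi                 # R2 + psi^T P(t-1) psi
    K = (P @ psi) / denom                      # gain K(t) = Q(t) psi(t)
    theta_new = theta + K * (y_meas - y_hat)   # parameter update
    P_new = P + R1 - np.outer(P @ psi, psi @ P) / denom   # covariance update
    return theta_new, P_new
```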

Normalized and Unnormalized Gradient

In the linear regression case, the gradient methods are also known as the least mean squares (LMS) methods.

The following set of equations summarizes the unnormalized gradient and normalized gradient adaptation algorithm:

$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \hat{y}(t)\right)$$

$$\hat{y}(t) = \psi^{T}(t)\,\hat{\theta}(t-1)$$

$$K(t) = Q(t)\,\psi(t)$$

In the unnormalized gradient approach, Q(t) is given by:

$$Q(t) = \gamma$$

so that the gain is K(t) = γψ(t).

In the normalized gradient approach, Q(t) is given by:

$$Q(t) = \frac{\gamma}{\left|\psi(t)\right|^{2}}$$

so that the gain is K(t) = γψ(t)/|ψ(t)|².

These choices of Q(t) update the parameters in the negative gradient direction, where the gradient is computed with respect to the parameters. See pg. 372 in [1] for details.
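A short Python/NumPy sketch of both gradient updates follows. It is illustrative only, and the small constant added to the squared norm is an assumption made to avoid division by zero, not part of the equations above:

```python
import numpy as np

def gradient_step(theta, psi, y_meas, gamma, normalized=False, eps=1e-8):
    """One unnormalized (LMS) or normalized (NLMS) gradient adaptation step."""
    y_hat = psi @ theta                              # y_hat(t) = psi(t)^T theta_hat(t-1)
    Q = gamma / (psi @ psi + eps) if normalized else gamma
    K = Q * psi                                      # K(t) = Q(t) psi(t)
    return theta + K * (y_meas - y_hat)              # updated parameter estimate
```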

References

[1] Ljung, L. System Identification: Theory for the User. Upper Saddle River, NJ: Prentice-Hall PTR, 1999.
