RLS Adaptive Filters

Compare RLS and LMS Adaptive Filter Algorithms

This section provides an introductory example that uses the RLS adaptive filter function adaptfilt.rls.

If the LMS algorithms represent the simplest and most easily applied adaptive algorithms, the recursive least squares (RLS) algorithm represents increased complexity, computational cost, and fidelity. In performance, RLS approaches the Kalman filter in adaptive filtering applications, with somewhat reduced throughput required in the signal processor.

Compared to the LMS algorithm, the RLS approach offers faster convergence and smaller error with respect to the unknown system, at the expense of requiring more computations.

In contrast to the least mean squares algorithm, from which it can be derived, the RLS adaptive algorithm minimizes the total squared error between the desired signal and the output of the adaptive filter.

Note that the signal paths and identifications are the same whether the filter uses RLS or LMS. The difference lies in the adapting portion.

Within limits, you can use any of the adaptive filter algorithms to solve an adaptive filter problem by replacing the adaptive portion of the application with a new algorithm.

The examples of the sign variants of the LMS algorithm exploited this feature to demonstrate the differences between the sign-data, sign-error, and sign-sign variations.

One interesting input option that applies to the RLS algorithm but is not present in the LMS processes is a forgetting factor, λ, which determines how the algorithm treats past data input to the algorithm.

When the LMS algorithm looks at the error to minimize, it considers only the current error value. In the RLS method, the error considered is the total error from the beginning to the current data point.

Said another way, the RLS algorithm has infinite memory — all error data is given the same consideration in the total error. In cases where the error value might come from a spurious input data point or points, the forgetting factor lets the RLS algorithm reduce the value of older error data by multiplying the old data by the forgetting factor.

Since 0 < λ ≤ 1, applying the factor is equivalent to weighting the older error. When λ = 1, all previous error is considered of equal weight in the total error.
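
Written out, the quantity the RLS algorithm minimizes at time n is the exponentially weighted total error

J(n) = Σ [k = 1 to n] λ^(n-k) e²(k)

so λ = 1 weights every past error equally (infinite memory), while λ < 1 discounts an error that is n - k samples old by the factor λ^(n-k).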

As λ approaches zero, the past errors play a smaller role in the total. For example, when λ = 0.9, the RLS algorithm multiplies an error value from 50 samples in the past by an attenuation factor of 0.9^50 = 5.15 × 10^-3, considerably deemphasizing the influence of the past error on the current total error.
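
You can verify that attenuation factor directly at the MATLAB prompt:

lambda = 0.9;       % forgetting factor
k = 50;             % age of the error sample, in samples
weight = lambda^k   % returns 5.1538e-03, the weight applied to that old error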

Inverse System Identification Using adaptfilt.rls

Rather than use a system identification or noise cancellation application to demonstrate the RLS adaptive algorithm, this example uses the inverse system identification model shown here.

Cascading the adaptive filter with the unknown filter causes the adaptive filter to converge to a solution that is the inverse of the unknown system.

If the transfer function of the unknown system is H(z) and the adaptive filter transfer function is G(z), the error measured between the desired signal and the signal from the cascaded system reaches its minimum when the product G(z)H(z) = 1. For this relation to hold, G(z) must equal 1/H(z), the inverse of the transfer function of the unknown system.
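
To see the cascade relation at work before bringing in adaptation, consider a small sketch with a hypothetical three-tap, minimum-phase FIR system (the coefficient values are illustrative only); because h is minimum phase, its exact inverse 1/H(z) is a stable all-pole filter:

h = [1 0.5 0.25];      % hypothetical minimum-phase FIR "unknown" system
x0 = randn(1,100);     % test input
y0 = filter(h,1,x0);   % pass through H(z)
r = filter(1,h,y0);    % pass through the exact inverse G(z) = 1/H(z)
max(abs(r - x0))       % essentially zero: the cascade G(z)H(z) = 1 recovers x0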

To demonstrate that this is true, create a signal to input to the cascaded filter pair.

x = randn(1,3000); % 3000 samples of Gaussian white noise

In the cascaded filters case, the unknown filter results in a delay in the signal arriving at the summation point after both filters. To prevent the adaptive filter from trying to adapt to a signal it has not yet seen (equivalent to predicting the future), delay the desired signal by 12 samples, the order of the unknown system.

Generally, you do not know the order of the system you are trying to identify. In that case, delay the desired signal by the number of samples equal to half the order of the adaptive filter. Delaying the input requires prepending 12 zero-valued samples to x.

delay = zeros(1,12);   % 12 zero-valued samples form the delay
d = [delay x(1:2988)]; % Concatenate the delay and the signal.

You have to keep the desired signal vector d the same length as x, so adjust the signal element count to allow for the delay samples.
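
A quick check confirms the bookkeeping:

length(d) == length(x) % returns 1 (true); both vectors contain 3000 samples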

Although not generally true, for this example you know the order of the unknown filter, so you add a delay equal to the order of the unknown filter.

For the unknown system, use a lowpass, 12th-order FIR filter.

ufilt = fir1(12,0.55,'low'); % 12th-order lowpass FIR, normalized cutoff 0.55

Filtering x provides the input data signal for the adaptive algorithm function.

xdata = filter(ufilt,1,x); % output of the unknown system; input to the adaptive filter

To set the input argument values for the adaptfilt.rls object, use the constructor adaptfilt.rls, providing the needed arguments l (the filter length), lambda (the forgetting factor), and invcov (the initial inverse covariance matrix).

For more information about the input conditions to prepare the RLS algorithm object, refer to adaptfilt.rls in the reference section of this user's guide.

p0 = 2*eye(13);                   % initial inverse covariance matrix, invcov
lambda = 0.99;                    % RLS forgetting factor
ha = adaptfilt.rls(13,lambda,p0); % 13-coefficient RLS adaptive filter object

Most of the process to this point is the same as the preceding examples. However, since this example seeks to develop an inverse solution, you need to be careful about which signal carries the data and which is the desired signal.

Earlier examples of adaptive filters use the filtered noise as the desired signal. In this case, the filtered noise (xdata) carries the unknown system information and is the input to the adaptive filter. The unfiltered Gaussian noise, with variance 1 and delayed as d, is the desired signal. The code to run this adaptive filter example is

[y,e] = filter(ha,xdata,d);

where y returns the output of the adaptive filter and e contains the error signal as the filter adapts to find the inverse of the unknown system. You can review the coefficients of the adapted filter in the properties of ha.
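
For instance, a minimal sketch of that review, assuming the adapted tap weights are exposed through the object's Coefficients property:

b = ha.Coefficients; % tap weights of the adapted inverse filter G(z)
semilogy(abs(e));    % the error magnitude should decay as the filter converges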

The next figure presents the results of the adaptation, showing the magnitude response curves for the unknown and adapted filters. As a reminder, the unknown filter was a lowpass filter with cutoff at 0.55 on the normalized frequency scale from 0 to 1.
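
Although the figure itself is not reproduced here, a sketch along the following lines, continuing the example session above and again assuming the Coefficients property, would generate a comparable plot:

[Hu,w] = freqz(ufilt,1,512);       % unknown lowpass system H(z)
Hg = freqz(ha.Coefficients,1,512); % adapted inverse filter G(z)
plot(w/pi,20*log10(abs(Hu)),w/pi,20*log10(abs(Hg)));
xlabel('Normalized Frequency (\times\pi rad/sample)');
ylabel('Magnitude (dB)');
legend('Unknown system H(z)','Adapted inverse G(z)');
% The cascaded response should sit near 0 dB across the passband:
% plot(w/pi,20*log10(abs(Hu.*Hg)))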

Viewed alone (refer to the following figure), the inverse system looks like a fair compensator for the unknown lowpass filter: a highpass filter with linear phase.
