
This section provides introductory examples using some of the least mean squares (LMS) adaptive filter functionality in the toolbox.

The toolbox provides `dsp.LMSFilter`, a System object™ that uses LMS algorithms to search for the optimal solution to the adaptive filter. The `dsp.LMSFilter` object supports these algorithms:

The LMS algorithm, which solves the Wiener-Hopf equation and finds the filter coefficients for an adaptive filter

The normalized variation (NLMS) of the LMS algorithm

The sign-data variation of the LMS algorithm, where the correction to the filter weights at each iteration depends on the sign of the input *x(k)*

The sign-error variation of the LMS algorithm, where the correction applied to the current filter weights for each successive iteration depends on the sign of the error *e(k)*

The sign-sign variation of the LMS algorithm, where the correction applied to the current filter weights for each successive iteration depends on both the sign of *x(k)* and the sign of *e(k)*
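The weight updates behind these variants differ only in which factors are replaced by their signs. A sketch in plain Python (an illustration of the update equations only, with hypothetical helper names; the toolbox implements these inside `dsp.LMSFilter`):

```python
def sgn(v):
    # Three-valued sign: 1, 0, or -1.
    return (v > 0) - (v < 0)

def lms_update(w, x, e, mu):
    # LMS: w(k+1) = w(k) + mu*e(k)*x(k)
    return [wi + mu * e * xi for wi, xi in zip(w, x)]

def sign_data_update(w, x, e, mu):
    # Sign-data: w(k+1) = w(k) + mu*e(k)*sgn(x(k))
    return [wi + mu * e * sgn(xi) for wi, xi in zip(w, x)]

def sign_error_update(w, x, e, mu):
    # Sign-error: w(k+1) = w(k) + mu*sgn(e(k))*x(k)
    return [wi + mu * sgn(e) * xi for wi, xi in zip(w, x)]

def sign_sign_update(w, x, e, mu):
    # Sign-sign: w(k+1) = w(k) + mu*sgn(e(k))*sgn(x(k))
    return [wi + mu * sgn(e) * sgn(xi) for wi, xi in zip(w, x)]
```

Here `w` is the weight vector, `x` the tap-delayed input, `e` the scalar error, and `mu` the step size.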

To demonstrate the differences and similarities among the various LMS algorithms supplied in the toolbox, the LMS and NLMS adaptive filter examples use the same filter for the unknown system. The unknown filter is the constrained lowpass filter from the `fircband` examples.

[b,err,res] = fircband(12,[0 0.4 0.5 1],[1 1 0 0],[1 0.2],{'w' 'c'});
fvtool(b,1);

From the figure you see that the filter is indeed lowpass and constrained to 0.2 ripple in the stopband. With this as the baseline, the adaptive LMS filter examples use the adaptive LMS algorithms to identify this filter in a system identification role.

To review the general model for system ID mode, look at System Identification for the layout.

For the sign variations of the LMS algorithm, the examples use noise cancellation as the demonstration application, as opposed to the system identification application used in the LMS examples.

To use the adaptive filter functions in the toolbox, you need to provide three things:

The adaptive LMS algorithm to use. You can select the algorithm of your choice by setting the `Method` property of `dsp.LMSFilter` to the desired algorithm.

An unknown system or process to adapt to. In this example, the filter designed by `fircband` is the unknown system.

Appropriate input data to exercise the adaptation process. In terms of the generic LMS model, these are the desired signal *d(k)* and the input signal *x(k)*.

Start by defining an input signal `x`.

x = 0.1*randn(250,1);

The input is broadband noise. For the unknown system filter, use `fircband` to create a twelfth-order lowpass filter:

[b,err,res] = fircband(12,[0 0.4 0.5 1],[1 1 0 0],[1 0.2],{'w','c'});

Although you do not need them here, include the `err` and `res` output arguments.

Now filter the signal through the unknown system to get the desired signal.

d = filter(b,1,x);

With the unknown filter designed and the desired signal in place, construct and apply the adaptive LMS filter object to identify the unknown filter.

Preparing the adaptive filter object requires that you provide starting values for estimates of the filter coefficients and the LMS step size. You could start with estimated coefficients of some set of nonzero values; this example uses zeros for the 13 initial filter weights. Set the `InitialConditions` property of `dsp.LMSFilter` to the desired initial values of the filter weights.

For the step size, 0.8 is a reasonable value: large enough to converge well within the 250 iterations (250 input sample points), yet small enough to create an accurate estimate of the unknown filter.

mu = 0.8;
lms = dsp.LMSFilter(13,'StepSize',mu,'WeightsOutputPort',true);

Finally, using the `dsp.LMSFilter` object `lms`, the desired signal `d`, and the input to the filter `x`, run the adaptive filter to determine the unknown system and plot the results, comparing the actual coefficients from `fircband` to the coefficients found by `dsp.LMSFilter`.

[y,e,w] = lms(x,d);
stem([b.' w])
title('System Identification by Adaptive LMS Algorithm')
legend('Actual Filter Weights','Estimated Filter Weights',...
    'Location','NorthEast')

As an experiment, try changing the step size to 0.2. Repeating the example with `mu = 0.2` results in the following stem plot. The estimated weights fail to approximate the actual weights closely.

mu = 0.2;
lms = dsp.LMSFilter(13,'StepSize',mu,'WeightsOutputPort',true);
[y,e,w] = lms(x,d);
stem([b.' w])
title('System Identification by Adaptive LMS Algorithm')
legend('Actual Filter Weights','Estimated Filter Weights',...
    'Location','NorthEast')

This may be because the algorithm did not iterate over enough samples, so try using 1000 samples. With 1000 samples, the stem plot, shown in the next figure, looks much better, albeit at the expense of much more computation. Clearly, you should select the step size with both the required computation and the fidelity of the estimated filter in mind.

for index = 1:4
    x = 0.1*randn(250,1);
    d = filter(b,1,x);
    [y,e,w] = lms(x,d);
end
stem([b.' w])
title('System Identification by Adaptive LMS Algorithm')
legend('Actual Filter Weights','Estimated Filter Weights',...
    'Location','NorthEast')

To improve the convergence performance of the LMS algorithm, the normalized variant (NLMS) uses an adaptive step size based on the signal power. As the input signal power changes, the algorithm calculates the input power and adjusts the step size to maintain an appropriate value. Thus the step size changes with time.

As a result, the normalized algorithm converges more quickly with fewer samples in many cases. For input signals that change slowly over time, the normalized LMS can represent a more efficient LMS approach.
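The normalization can be sketched as dividing a nominal step size by the current input power (a simplified illustration with a hypothetical helper name; the exact formula and any regularization constant inside `dsp.LMSFilter` may differ):

```python
def nlms_step_size(mu, x, eps=1e-8):
    # Scale the nominal step mu by the instantaneous input power
    # ||x||^2; eps (an assumed regularizer) avoids division by zero.
    power = sum(xi * xi for xi in x)
    return mu / (eps + power)
```

A strong input block shrinks the effective step, while a weak one enlarges it, which is why the step size changes with time.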

In the preceding LMS example, you used `fircband` to create the filter that you would identify. So that you can compare the results, use the same filter, and set the `Method` property of `dsp.LMSFilter` to `'Normalized LMS'` to use the normalized LMS algorithm variation. You should see better convergence with similar fidelity.

First, generate the input signal and the unknown filter.

x = 0.1*randn(500,1);
[b,err,res] = fircband(12,[0 0.4 0.5 1],[1 1 0 0],[1 0.2],{'w' 'c'});
d = filter(b,1,x);

Again, `d` represents the desired signal *d(k)* as you defined it earlier, and `b` contains the filter coefficients for your unknown filter.

lms = dsp.LMSFilter(13,'StepSize',mu,'Method',...
    'Normalized LMS','WeightsOutputPort',true);

Use the preceding code to initialize the normalized LMS algorithm. For more information about the optional input arguments, refer to `dsp.LMSFilter`.

Running the system identification process is a matter of calling the `dsp.LMSFilter` object with the desired signal and the input signal as input arguments, using the initial filter coefficients and conditions you specified when constructing `lms`. Then plot the results to compare the adapted filter to the actual filter.

[y,e,w] = lms(x,d);
stem([b.' w])
title('System Identification by Normalized LMS Algorithm')
legend('Actual Filter Weights','Estimated Filter Weights',...
    'Location','NorthEast')

As shown in the following stem plot (a convenient way to compare the estimated and actual filter coefficients), the two are nearly identical.

If you compare the convergence performance of the regular LMS algorithm to the normalized LMS variant, you see the normalized version adapts in far fewer iterations to a result almost as good as the nonnormalized version.

lms_normalized = dsp.LMSFilter(13,'StepSize',mu,...
    'Method','Normalized LMS','WeightsOutputPort',true);
lms_nonnormalized = dsp.LMSFilter(13,'StepSize',mu,...
    'Method','LMS','WeightsOutputPort',true);
[~,e1,~] = lms_normalized(x,d);
[~,e2,~] = lms_nonnormalized(x,d);
plot([e1,e2]);
title('Comparing the LMS and NLMS Convergence Performance');
legend('NLMS Derived Filter Weights',...
    'LMS Derived Filter Weights','Location','NorthEast');

When the amount of computation required to derive an adaptive filter drives your development process, the sign-data variant of the LMS (SDLMS) algorithm may be a very good choice as demonstrated in this example.

Fortunately, the current state of digital signal processor (DSP) design has relaxed the need to minimize the operations count by making DSPs whose multiply and shift operations are as fast as add operations. Thus some of the impetus for the sign-data algorithm (and the sign-error and sign-sign variations) has been lost to DSP technology improvements.

In the standard and normalized variations of the LMS adaptive filter, coefficients for the adapting filter arise from the mean square error between the desired signal and the output signal from the unknown system. The sign-data algorithm replaces this calculation: it uses the sign of the input data when updating the filter coefficients.

When the input is positive, the new coefficients are the previous coefficients plus the error multiplied by the step size *µ*. If the input is negative, the new coefficients are the previous coefficients minus the error multiplied by *µ*; note the sign change.

When the input is zero, the new coefficients are the same as the previous set.

In vector form, the sign-data LMS algorithm is

$$w(k+1)=w(k)+\mu e(k)\,\mathrm{sgn}\left[x(k)\right],\qquad \mathrm{sgn}\left[x(k)\right]=\begin{cases}1, & x(k)>0\\ 0, & x(k)=0\\ -1, & x(k)<0\end{cases}$$

with vector **w** containing the weights applied to the filter coefficients and vector **x** containing the input data. *e(k)* (equal to the desired signal minus the filtered signal) is the error at time *k* and is the quantity the SDLMS algorithm seeks to minimize. *µ* (`mu`) is the step size.

As you make `mu` smaller, the correction to the filter weights gets smaller for each sample and the SDLMS error falls more slowly. A larger `mu` changes the weights more for each step, so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure a good convergence rate and stability, select `mu` within the following practical bounds:

$$0<\mu <\frac{1}{N\left\{InputSignalPower\right\}}$$

where *N* is the number of samples in the signal. Also, define `mu` as a power of two for efficient computation.
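As a quick numeric check of this bound (hypothetical figures: 250 samples of the 0.1*randn input used earlier, whose power is roughly 0.01):

```python
def mu_upper_bound(n_samples, input_power):
    # Practical stability bound: 0 < mu < 1 / (N * input_power)
    return 1.0 / (n_samples * input_power)

# For N = 250 samples with input power ~0.01, mu must stay below 0.4.
bound = mu_upper_bound(250, 0.01)
```

Note that a longer signal or a stronger input tightens the bound, forcing a smaller step size.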

A series of large input values, coupled with the quantization process, may result in the error growing beyond all bounds. You restrain the tendency of the sign-data algorithm to get out of control by choosing a small step size (µ ≪ 1) and setting the initial conditions for the algorithm to nonzero positive and negative values.

In this noise cancellation example, set the `Method` property of `dsp.LMSFilter` to `'Sign-Data LMS'`. This example requires two input data sets:

Data containing a signal corrupted by noise. In Using an Adaptive Filter to Remove Noise from an Unknown System, this is *d(k)*, the desired signal. The noise cancellation process removes the noise, leaving the signal.

Data containing random noise (*x(k)* in Using an Adaptive Filter to Remove Noise from an Unknown System) that is correlated with the noise that corrupts the signal data. Without the correlation between the noise data, the adapting algorithm cannot remove the noise from the signal.

For the signal, use a sine wave. Note that `signal` is a column vector of 1000 elements.

signal = sin(2*pi*0.055*[0:1000-1]');

Now, add correlated white noise to `signal`. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter, and then add the filtered noise to the signal.

noise = randn(1000,1);
nfilt = fir1(11,0.4);            % Eleventh-order lowpass filter
fnoise = filter(nfilt,1,noise);  % Correlated noise data
d = signal + fnoise;

`fnoise` is the correlated noise, and `d` is now the desired input to the sign-data algorithm.

To prepare the `dsp.LMSFilter` object for processing, set the weight initial conditions (`InitialConditions`) and mu (`StepSize`) for the object. As noted earlier in this section, the values you set for `coeffs` and `mu` determine whether the adaptive filter can remove the noise from the signal path.

In System Identification Using the LMS Algorithm, you constructed a default filter that sets the filter coefficients to zeros. In most cases that approach does not work for the sign-data algorithm. The closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively.

For this example, start with the coefficients in the filter you used to filter the noise (`nfilt`), and modify them slightly so the algorithm has to adapt.

coeffs = nfilt.' - 0.01;  % Set the filter initial conditions.
mu = 0.05;                % Set the step size for algorithm updating.

With the required input arguments for `dsp.LMSFilter` prepared, construct the LMS filter object, run the adaptation, and view the results.

lms = dsp.LMSFilter(12,'Method','Sign-Data LMS',...
    'StepSize',mu,'InitialConditions',coeffs);
[~,e] = lms(noise,d);
L = 200;
plot(0:L-1,signal(1:L),0:L-1,e(1:L));
title('Noise Cancellation by the Sign-Data Algorithm');
legend('Actual Signal','Result of Noise Cancellation',...
    'Location','NorthEast');

When `dsp.LMSFilter` runs, it uses far fewer multiply operations than either of the LMS algorithms. Also, performing the sign-data adaptation requires only bit-shifting multiplies when the step size is a power of two.
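To see why a power-of-two step size avoids multiplications, consider fixed-point data, where scaling by mu = 2^-k reduces to an arithmetic right shift (a sketch under an assumed integer fixed-point representation, not how the object is implemented):

```python
def scale_by_pow2_step(value_fixed, shift):
    # Multiplying a fixed-point value by mu = 2**-shift is just an
    # arithmetic right shift; no hardware multiplier is needed.
    return value_fixed >> shift

# An error sample of 4096 scaled by mu = 2**-3 (= 0.125):
corr = scale_by_pow2_step(4096, 3)  # 512, i.e. 4096 * 0.125
```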

Although the performance of the sign-data algorithm as shown in the next figure is quite good, the sign-data algorithm is much less stable than the standard LMS variations. In this noise cancellation example, the signal after processing is a very good match to the input signal, but the algorithm could very easily grow without bound rather than achieve good performance.

Changing `InitialConditions`, `mu`, or even the lowpass filter you used to create the correlated noise can cause noise cancellation to fail and the algorithm to become useless.

In some cases, the sign-error variant of the LMS algorithm (SELMS) may be a very good choice for an adaptive filter application.

In the standard and normalized variations of the LMS adaptive filter, the coefficients for the adapting filter arise from calculating the mean square error between the desired signal and the output signal from the unknown system, and applying the result to the current filter coefficients. The sign-error algorithm replaces this calculation: it uses the sign of the error when modifying the filter coefficients.

When the error is positive, the new coefficients are the previous coefficients plus the input multiplied by the step size *µ*. If the error is negative, the new coefficients are the previous coefficients minus the input multiplied by *µ*; note the sign change. When the error is zero, the new coefficients are the same as the previous set.

In vector form, the sign-error LMS algorithm is

$$w(k+1)=w(k)+\mu\,\mathrm{sgn}\left[e(k)\right]x(k),\qquad \mathrm{sgn}\left[e(k)\right]=\begin{cases}1, & e(k)>0\\ 0, & e(k)=0\\ -1, & e(k)<0\end{cases}$$

with vector **w** containing the weights applied to the filter coefficients and vector **x** containing the input data. *e(k)* (equal to the desired signal minus the filtered signal) is the error at time *k* and is the quantity the SELMS algorithm seeks to minimize. *µ* (`mu`) is the step size. As you make `mu` smaller, the correction to the filter weights gets smaller for each sample and the SELMS error falls more slowly.

A larger `mu` changes the weights more for each step, so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure a good convergence rate and stability, select `mu` within the following practical bounds:

$$0<\mu <\frac{1}{N\left\{InputSignalPower\right\}}$$

where *N* is the number of samples in the signal. Also, define `mu` as a power of two for efficient computation.

A series of large error values, coupled with the quantization process, may result in the error growing beyond all bounds. You restrain the tendency of the sign-error algorithm to get out of control by choosing a small step size (µ ≪ 1) and setting the initial conditions for the algorithm to nonzero positive and negative values.

In this noise cancellation example, the `dsp.LMSFilter` object requires two input data sets:

Data containing a signal corrupted by noise. In Using an Adaptive Filter to Remove Noise from an Unknown System, this is *d(k)*, the desired signal. The noise cancellation process removes the noise, leaving the signal.

Data containing random noise (*x(k)* in Using an Adaptive Filter to Remove Noise from an Unknown System) that is correlated with the noise that corrupts the signal data. Without the correlation between the noise data, the adapting algorithm cannot remove the noise from the signal.

For the signal, use a sine wave. Note that `signal` is a column vector of 1000 elements.

signal = sin(2*pi*0.055*[0:1000-1]');

Now, add correlated white noise to `signal`. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter, and then add the filtered noise to the signal.

noise = randn(1000,1);
nfilt = fir1(11,0.4);            % Eleventh-order lowpass filter.
fnoise = filter(nfilt,1,noise);  % Correlated noise data.
d = signal + fnoise;

`fnoise` is the correlated noise, and `d` is now the desired input to the sign-error algorithm.

To prepare the `dsp.LMSFilter` object for processing, set the weight initial conditions (`InitialConditions`) and mu (`StepSize`) for the object. As noted earlier in this section, the values you set for `coeffs` and `mu` determine whether the adaptive filter can remove the noise from the signal path. In System Identification Using the LMS Algorithm, you constructed a default filter that sets the filter coefficients to zeros.

Setting the coefficients to zero often does not work for the sign-error algorithm. The closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively.

For this example, start with the coefficients in the filter you used to filter the noise (`nfilt`), and modify them slightly so the algorithm has to adapt.

coeffs = nfilt.' - 0.01;  % Set the filter initial conditions.
mu = 0.05;                % Set the step size for algorithm updating.

With the required input arguments for `dsp.LMSFilter` prepared, run the adaptation and view the results.

lms = dsp.LMSFilter(12,'Method','Sign-Error LMS',...
    'StepSize',mu,'InitialConditions',coeffs);
[~,e] = lms(noise,d);
L = 200;
plot(0:L-1,signal(1:L),0:L-1,e(1:L));
title('Noise Cancellation Performance by the Sign-Error LMS Algorithm');
legend('Actual Signal','Error After Noise Reduction',...
    'Location','NorthEast')

When the sign-error LMS algorithm runs, it uses far fewer multiply operations than either of the LMS algorithms. Also, performing the sign-error adaptation requires only bit shifting multiplies when the step size is a power of two.

Although the performance of the sign-error algorithm as shown in the next figure is quite good, the sign-error algorithm is much less stable than the standard LMS variations. In this noise cancellation example, the signal after processing is a very good match to the input signal, but the algorithm could very easily become unstable rather than achieve good performance.

Changing the weight initial conditions (`InitialConditions`) and mu (`StepSize`), or even the lowpass filter you used to create the correlated noise, can cause noise cancellation to fail and the algorithm to become useless.

One more example of a variation of the LMS algorithm in the toolbox is the sign-sign variant (SSLMS). The rationale for this version matches those for the sign-data and sign-error algorithms presented in preceding sections. For more details, refer to Noise Cancellation Using the Sign-Data LMS Algorithm.

The sign-sign algorithm (SSLMS) replaces the mean square error calculation by using the signs of both the error and the input data to change the filter coefficients. When the error and the input have the same sign, the new coefficients are the previous coefficients plus the step size *µ*.

When the error and the input have opposite signs, the new coefficients are the previous coefficients minus *µ*; note the sign change. When either the error or the input is zero, the new coefficients are the same as the previous set.

In essence, the algorithm quantizes both the error and the input by applying the sign operator to them.

In vector form, the sign-sign LMS algorithm is

$$w(k+1)=w(k)+\mu\,\mathrm{sgn}\left[e(k)\right]\mathrm{sgn}\left[x(k)\right],\qquad \mathrm{sgn}\left[z(k)\right]=\begin{cases}1, & z(k)>0\\ 0, & z(k)=0\\ -1, & z(k)<0\end{cases}$$

where

$$z(k)=e(k)\,\mathrm{sgn}\left[x(k)\right]$$

Vector **w** contains the weights applied to the filter coefficients and vector **x** contains the input data. *e(k)* (equal to the desired signal minus the filtered signal) is the error at time *k* and is the quantity the SSLMS algorithm seeks to minimize. *µ* (`mu`) is the step size. As you make `mu` smaller, the correction to the filter weights gets smaller for each sample and the SSLMS error falls more slowly.
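Because both factors are quantized to their signs, each per-weight correction in SSLMS can take only one of three values: -µ, 0, or +µ. A small illustrative check, with hypothetical helper names:

```python
def sgn(v):
    # Three-valued sign: 1, 0, or -1.
    return (v > 0) - (v < 0)

def sign_sign_correction(e, x, mu):
    # Per-weight correction mu*sgn(e)*sgn(x) from the SSLMS update.
    return mu * sgn(e) * sgn(x)

# Sweep sample error/input values: only -mu, 0, and +mu appear.
corrections = {sign_sign_correction(e, x, 0.05)
               for e in (-1.5, 0.0, 2.0) for x in (-0.3, 0.0, 0.7)}
```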

A larger `mu` changes the weights more for each step, so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely. To ensure a good convergence rate and stability, select `mu` within the following practical bounds:

$$0<\mu <\frac{1}{N\left\{InputSignalPower\right\}}$$

where *N* is the number of samples in the signal. Also, define `mu` as a power of two for efficient computation.

A series of large error values, coupled with the quantization process, may result in the error growing beyond all bounds. You restrain the tendency of the sign-sign algorithm to get out of control by choosing a small step size (µ ≪ 1) and setting the initial conditions for the algorithm to nonzero positive and negative values.

In this noise cancellation example, the `dsp.LMSFilter` object requires two input data sets:

Data containing a signal corrupted by noise. In Using an Adaptive Filter to Remove Noise from an Unknown System, this is *d(k)*, the desired signal. The noise cancellation process removes the noise, leaving the cleaned signal as the content of the error signal.

Data containing random noise (*x(k)* in Using an Adaptive Filter to Remove Noise from an Unknown System) that is correlated with the noise that corrupts the signal data. Without the correlation between the noise data, the adapting algorithm cannot remove the noise from the signal.

For the signal, use a sine wave. Note that `signal` is a column vector of 1000 elements.

signal = sin(2*pi*0.055*[0:1000-1]');

Now, add correlated white noise to `signal`. To ensure that the noise is correlated, pass the noise through a lowpass FIR filter, and then add the filtered noise to the signal.

noise = randn(1000,1);
nfilt = fir1(11,0.4);            % Eleventh-order lowpass filter
fnoise = filter(nfilt,1,noise);  % Correlated noise data
d = signal + fnoise;

`fnoise` is the correlated noise, and `d` is now the desired input to the sign-sign algorithm.

To prepare the `dsp.LMSFilter` object for processing, set the weight initial conditions (`InitialConditions`) and mu (`StepSize`) for the object. As noted earlier in this section, the values you set for `coeffs` and `mu` determine whether the adaptive filter can remove the noise from the signal path. In System Identification Using the LMS Algorithm, you constructed a default filter that sets the filter coefficients to zeros. Usually that approach does not work for the sign-sign algorithm.

The closer you set your initial filter coefficients to the expected values, the more likely it is that the algorithm remains well behaved and converges to a filter solution that removes the noise effectively. For this example, start with the coefficients in the filter you used to filter the noise (`nfilt`), and modify them slightly so the algorithm has to adapt.

coeffs = nfilt.' - 0.01;  % Set the filter initial conditions.
mu = 0.05;                % Set the step size for algorithm updating.

With the required input arguments for `dsp.LMSFilter` prepared, run the adaptation and view the results.

lms = dsp.LMSFilter(12,'Method','Sign-Sign LMS',...
    'StepSize',mu,'InitialConditions',coeffs);
[~,e] = lms(noise,d);
L = 200;
plot(0:L-1,signal(1:L),0:L-1,e(1:L));
title('Noise Cancellation Performance by the Sign-Sign LMS Algorithm');
legend('Actual Signal','Error After Noise Reduction',...
    'Location','NorthEast')

When `dsp.LMSFilter` runs, it uses far fewer multiply operations than either of the LMS algorithms. Also, performing the sign-sign adaptation requires only bit-shifting multiplies when the step size is a power of two.

Although the performance of the sign-sign algorithm as shown in the next figure is quite good, the sign-sign algorithm is much less stable than the standard LMS variations. In this noise cancellation example, the signal after processing is a very good match to the input signal, but the algorithm could very easily become unstable rather than achieve good performance.

Changing the weight initial conditions (`InitialConditions`) and mu (`StepSize`), or even the lowpass filter you used to create the correlated noise, can cause noise cancellation to fail and the algorithm to become useless.

As an aside, the sign-sign LMS algorithm is part of the international CCITT standard for 32 kbit/s ADPCM telephony.
