msepred — Predicted mean-squared error for adaptive filter


[mmse,emse] = msepred(ha,x,d)
[mmse,emse,meanw,mse,tracek] = msepred(ha,x,d)
[mmse,emse,meanw,mse,tracek] = msepred(ha,x,d,m)


[mmse,emse] = msepred(ha,x,d) predicts the steady-state values at convergence of the minimum mean-squared error (mmse) and the excess mean-squared error (emse), given the input and desired response signal sequences in x and d and the property values of the adaptfilt object ha.
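For example, a minimal two-output call might look like the following sketch. The signal lengths, step size, and system coefficients here are illustrative choices, not values prescribed by msepred:

```matlab
x = randn(1000,1);                       % input signal (illustrative)
b = fir1(15,0.4);                        % unknown FIR system (illustrative)
d = filter(b,1,x) + 0.01*randn(1000,1);  % desired response plus noise
ha = adaptfilt.lms(16,0.02);             % 16-tap LMS filter, step size 0.02
[mmse,emse] = msepred(ha,x,d);           % predicted steady-state MMSE and EMSE
```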

[mmse,emse,meanw,mse,tracek] = msepred(ha,x,d) calculates three sequences corresponding to the analytical behavior of the LMS adaptive filter defined by ha:

  • meanw — contains the sequence of coefficient vector means. The columns of matrix meanw contain predictions of the mean values of the LMS adaptive filter coefficients at each time instant. The dimensions of meanw are (size(x,1))-by-(ha.length).

  • mse — contains the sequence of mean-square errors. This column vector contains predictions of the mean-square error of the LMS adaptive filter at each time instant. The length of mse is equal to size(x,1).

  • tracek — contains the sequence of total coefficient error powers. This column vector contains predictions of the total coefficient error power of the LMS adaptive filter at each time instant. The length of tracek is equal to size(x,1).

[mmse,emse,meanw,mse,tracek] = msepred(ha,x,d,m) specifies an optional input argument m that is the decimation factor for computing meanw, mse, and tracek. When m > 1, msepred saves every mth predicted value of each of these sequences. When you omit the optional argument m, it defaults to one.
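As a sketch of the effect of the decimation factor (signals and parameter values here are illustrative): since msepred saves every mth predicted value, the returned sequences are shorter by roughly a factor of m.

```matlab
x = randn(2000,1); d = x;                % illustrative signals
ha = adaptfilt.lms(32,0.008);            % 32-tap LMS filter
[mmse,emse,meanw,mse,tracek] = msepred(ha,x,d,4);  % keep every 4th value
% meanw is now about (2000/4)-by-32, and mse and tracek
% each have about 2000/4 = 500 rows instead of 2000
```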

    Note   msepred is available only for supported LMS-family adaptive filters. Using msepred is the same for any adaptfilt object constructed with one of the supported algorithms.


Analyze and simulate a 32-coefficient adaptive filter using 25 trials of 2000 iterations each.

x = zeros(2000,25); d = x;     % Initialize variables
ha = fir1(31,0.5);             % FIR system to be identified
x = filter(sqrt(0.75),[1 -0.5],sign(randn(size(x))));
n = 0.1*randn(size(x));        % Observation noise signal
d = filter(ha,1,x)+n;          % Desired signal
l = 32;                        % Filter length
mu = 0.008;                    % LMS step size
m  = 5;                        % Decimation factor for analysis
                               % and simulation results
ha = adaptfilt.lms(l,mu);
[mmse,emse,meanW,mse,traceK] = msepred(ha,x,d,m);
[simmse,meanWsim,Wsim,traceKsim] = msesim(ha,x,d,m);
nn = m:m:size(x,1);
subplot(2,1,1);
plot(nn,meanWsim(:,12),'b',nn,meanW(:,12),'r',...
     nn,meanWsim(:,13:15),'b',nn,meanW(:,13:15),'r');
PlotTitle = {'Average Coefficient Trajectories for';...
             'W(12), W(13), W(14), and W(15)'};
title(PlotTitle);
legend('Simulation','Theory');
xlabel('Time Index'); ylabel('Coefficient Value');
subplot(2,2,3);
semilogy(nn,simmse,[0 size(x,1)],[(emse+mmse)...
    (emse+mmse)],nn,mse,[0 size(x,1)],[mmse mmse]);
title('Mean-Square Error Performance');
axis([0 size(x,1) 0.001 10]);
legend('MSE (Sim.)','Final MSE','MSE','Min. MSE');
xlabel('Time Index'); ylabel('Squared Error Value');
subplot(2,2,4);
semilogy(nn,traceKsim,nn,traceK);
title('Sum-of-Squared Coefficient Errors');
axis([0 size(x,1) 0.0001 1]);
legend('Simulation','Theory');
xlabel('Time Index'); ylabel('Squared Error Value');

The resulting plots compare the simulated and theoretical error values. Each subplot shows the simulation converging toward the predicted theoretical performance.
