
Thread Subject:
Parameter estimation of RFPA with RLS


From: drgz

Date: 29 May, 2010 18:28:05

Message: 1 of 1


I'm trying to estimate the coefficients of a memory polynomial (MP) model using a recursive least-squares (RLS) approach, but there are a few things that confuse me and make me wonder whether I'm doing this the right way. I'm using RLS so I can avoid inverting the data regressor matrix, which I've found to be quite ill-conditioned in most cases, almost regardless of the model parameters I choose.
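For reference, this is how I check how ill-conditioned the regressor matrix actually is (a quick sketch; the threshold and the column-scaling trick are just things I've tried, not part of the model itself):

```matlab
% Quantify the conditioning of the regressor matrix X before solving.
% A large condition number (roughly > 1e8 in double precision) means a
% direct least-squares solve will lose significant accuracy.
kappa = cond(X);
fprintf('cond(X) = %.3g\n', kappa);

% One common mitigation: scale each column to unit norm before solving,
% then undo the scaling on the estimated coefficients afterwards.
d  = sqrt(sum(abs(X).^2, 1));   % column norms, dim (1 x num_cols)
Xs = X./repmat(d, size(X,1), 1);% column-scaled regressor matrix
% ...solve with Xs, then recover: coeffs = coeffs_scaled(:)./d(:)
```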

I build the regressor matrix with the following code:
function X = get_regressor_matrix(x,K,P)
% inputs:
% x - input signal (e.g. a 16-QAM signal) - dim (N x 1)
% K - max. memory order (e.g. 15) - dim (1 x 1)
% P - max. nonlinearity order (e.g. 7) - dim (1 x 1)
% outputs:
% X - regressor matrix - dim (N x (K*P + 1))

% Initialize regressor matrix
X = zeros(length(x),(K*P)+1);
idx = 1;
X(:,idx) = ones(size(x));

for k = 0:K-1
    % Delay signal with k samples
    x_kp = [zeros(k,1); x(1:end-k)];
    for p = 1:P
        idx = idx+1;
        X(:,idx) = x_kp.*abs(x_kp).^(p-1);
    end
end
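As a quick sanity check on the regressor builder above, I run it on synthetic data (the signal here is just a random complex stand-in, not my actual 16-QAM data) and check the dimensions:

```matlab
% Sketch: build a regressor from a random complex baseband signal.
N = 1024; K = 15; P = 7;
x = (randn(N,1) + 1j*randn(N,1))/sqrt(2);  % stand-in for a 16-QAM signal
X = get_regressor_matrix(x, K, P);
size(X)   % expected: N x (K*P + 1), i.e. 1024 x 106
```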

This works as expected when I solve the system directly:

coeffs = pseudoinverse(X)*y;   % pseudoinverse by Bruno Luong (File Exchange)
y_model = X*coeffs;
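As an aside, MATLAB's backslash operator (which uses a QR factorization for rectangular systems) gives the same least-squares solution and is usually numerically safer than forming a pseudoinverse explicitly:

```matlab
% Least-squares fit via mldivide (QR) instead of an explicit pseudoinverse.
coeffs  = X\y;                      % solves min ||X*c - y||_2
y_model = X*coeffs;
mse     = mean(abs(y - y_model).^2);
```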

As this is my first time with the RLS filter, I'm just curious whether I've done this correctly. The following code is what I use for the RLS filter:
function [ym,e] = rls_matrix_input(X,y,delta,lambda)
% inputs:
% X - regressor matrix - dim (N x (K*P + 1))
% y - measured/desired output - dim (N x 1)
% delta - for initialization of inverse covariance matrix - dim (1 x 1)
% lambda - forgetting factor - dim (1 x 1)
% outputs:
% ym - model output - dim (N x 1)
% e - a priori error - dim (N x 1)

% Number of coefficients to be estimated, taken as the smaller
% dimension of the data regressor matrix
num_of_coeff = min(size(X));

% Initialize coefficient vector and inverse covariance matrix
C = zeros(num_of_coeff,1);
Qi = delta*eye(num_of_coeff);

% Number of samples in signal
N = max(size(X));

% Initialize error vector and estimated model output signal vector
e = zeros(N,1);
ym = zeros(N,1);

% Make sure dimensions are correct for rest of script
X = X.';

for n = 1:N
    % Data vector, dim = (num_of_coeff x 1)
    p_vec = Qi*X(:,n);
    % Calculate gain vector, dim = (num_of_coeff x 1)
    k = p_vec./(lambda+X(:,n)'*p_vec);
    % A priori error
    e(n) = y(n)-C'*X(:,n);
    % Coefficient vector update
    C = C+k*conj(e(n));
    % Current model output
    ym(n) = C'*X(:,n);
    % Covariance matrix update
    Qi = lambda^(-1)*Qi-lambda^(-1)*k*X(:,n)'*Qi;
end
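To convince myself the filter converges at all, I test it on a synthetic linear system with a known coefficient vector (the regressor, noise level, and the values of delta and lambda below are just illustrative choices):

```matlab
% Sketch: identify a known complex coefficient vector with the RLS routine.
N = 2000; num_coeff = 8;
X = (randn(N,num_coeff) + 1j*randn(N,num_coeff))/sqrt(2); % random regressor
c_true = randn(num_coeff,1) + 1j*randn(num_coeff,1);      % true coefficients
y = X*c_true + 1e-3*(randn(N,1) + 1j*randn(N,1));         % noisy measurements
[ym, e] = rls_matrix_input(X, y, 100, 0.999);             % delta=100, lambda=0.999
% The a priori error power should decay toward the noise floor:
mean(abs(e(end-99:end)).^2)
```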

Is it correct to say that the model output is ym(n) = C'*X(:,n) after the coefficient update? I would assume so, as this gives quite a low MSE when I compare it to the measured/desired output; however, I can't say for sure.

Any help is greatly appreciated.

Best regards,

