Asked by Craig Cowled
on 31 Jan 2013

I'm using the cftool toolbox to find fits for a complex-valued transfer function. The toolbox clearly can't handle complex numbers, so I have separated the data into its real and imaginary components and I now have two curve fits.

By way of background, my data is an Accelerance Frequency Response Function obtained from an impact hammer test of a structure and ought to look something like this:

H = (-w^2) / (k-(w^2)m+iwc)

where w is frequency in rad/s, k is stiffness, m is mass and c is damping.

My question is this: is there a simple way for cftool to find the best fit for both data sets without giving me two separate values for mass, stiffness and damping? Put another way, is it possible to force the coefficients of the two curve fits to be co-dependent and thereby find the best fit for the complex-valued data?


Answer by Matt J
on 31 Jan 2013

Accepted answer

I don't think so, but LSQCURVEFIT can fit vector-valued functions, if you've got it.

Craig Cowled
on 31 Jan 2013

Thanks Matt, I'll have a look at lsqcurvefit. Was hoping there was an easier way rather than having to learn a new function. Honestly, I would think that this ought to be a basic option in the cftool toolbox.

Answer by Craig Cowled
on 3 May 2013

I just noticed another question on the forum in which someone suggested to use *mldivide*. I'll have a look at this when I get some time, but has anyone else used *mldivide* to curve fit a complex valued transfer function?


Matt J
on 3 May 2013

In order to use MLDIVIDE, you need a model with linear dependence on the unknown parameters. The equation you posted can be re-arranged to be linear:

H .* (k - (w^2)m + iwc) = -w^2

This is now a system of linear equations in the unknown parameters (k,m,c), which you can express in matrix/vector form A*kmc=b by defining the matrix A and column vector b as below. You can then solve this system using mldivide:

H = H(:); w = w(:);

A = [H, -H.*w.^2, 1i*H.*w]; b = -w.^2;

kmc = real(A\b);

k = kmc(1); m = kmc(2); c = kmc(3);

The problem with this, however, is that re-arranging the original equation can affect your error statistics. It would be more advisable to use this to get an initial guess for kmc and then feed that, together with the original model equation, to LSQCURVEFIT.
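A minimal sketch of that two-stage idea (assuming `w` and the measured `H` are column vectors and the Optimization Toolbox is available; the residual is split into real and imaginary parts so the solver only ever sees real numbers):

```matlab
% Stage 1: linear re-arrangement gives an initial guess (as above)
A = [H, -H.*w.^2, 1i*H.*w];
b = -w.^2;
kmc0 = real(A\b);

% Stage 2: refine against the ORIGINAL model equation.
% p(1) = k (stiffness), p(2) = m (mass), p(3) = c (damping)
model = @(p, w) -w.^2 ./ (p(1) - p(2)*w.^2 + 1i*p(3)*w);

% Stack real/imag parts so lsqcurvefit works with real residuals
fun   = @(p, w) [real(model(p, w)); imag(model(p, w))];
ydata = [real(H); imag(H)];
kmc   = lsqcurvefit(fun, kmc0, w, ydata);
```

This way the linear solve only supplies the starting point, and the error statistics come from the un-rearranged model.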

Craig Cowled
on 24 May 2013

Thank you MattJ and ZhangLu for your suggestions. I have successfully managed to use the LSQNONLIN function to curve fit my complex function. The LSQNONLIN function is exceptionally fast and produces a better curvefit of my data than I am able to achieve using the much slower Genetic Algorithm.

If anyone out there is interested in curve-fitting a complex function, which is essentially a locus in 3D space, you might find the following code useful. It is worth noting that you do NOT need to separate the real and imaginary components of your data or the curve fitting function.

Firstly, I create a function that takes a vector, x, as the input and then outputs a vector which, in my case, is the absolute difference between my experimental data and the function I am trying to fit. Code follows ...

function deltaH = LSQcurvefit_H_2modes(x)

% Load MAT file containing the variables f and y (Exp Data)
load('fy.mat');

% Assign variables for resonant frequency, modal mass and damping
w1 = x(1); w2 = x(2);
m1 = x(3); m2 = x(4);
c1 = x(5); c2 = x(6);

% FRF function for curve-fitting (complex valued)
z = -(f.^2) .* ((1 ./ ((w1.^2 * m1) - (f.^2 * m1) - (1i * f * c1))) ...
    + (1 ./ ((w2.^2 * m2) - (f.^2 * m2) - (1i * f * c2))));

% Create a weighting function that emphasises values near resonance
L = length(f);
jj = find(f > w1, 1, 'first') - 1;   % index of first peak
kk = find(f > w2, 1, 'first') - 1;   % index of second peak
ll = 20;                             % affects the steepness of the weighting function
c = (w1 + w2) / 2;                   % mid-point of both peaks
window(1:jj) = (2 .^ (ll * f(1:jj) * 1 / w1)) / (2 ^ ll);   % < 1st peak
window(jj + 1:kk) = 0.5 * (((f(jj + 1:kk) - c) / ((w2 - w1) / 2)) .^ 2) + 0.5;   % 1st peak < f < 2nd peak
window(kk + 1:L) = (1.5 .^ (-ll * ((f(kk + 1:L) - f(L)) / (f(L) - w2)))) / (5 * 1.5 ^ ll) + 0.8;   % > 2nd peak

% Apply the weighting function to Exp Data as well as the
% curve-fitting function. IMPORTANT to use absolute values,
% otherwise LSQNONLIN will find complex values for w1, m1, c1 etc.
y1 = abs(window' .* y);
z1 = abs(window' .* z);

% Objective function for LSQNONLIN to process
deltaH = z1 - y1;

end % LSQcurvefit_H_2modes()

Prior to calling LSQNONLIN, you need to define the starting point for the optimisation problem as well as the lower and upper bounds. In my case, I know that w1 and w2 will be very close to the resonance peaks, and I know that m1, m2, c1, and c2 will be real positive numbers. I am also quite certain that, in my case, the mass will be a lot less than 10,000 kg and the damping will be relatively low, so I can limit my upper bounds accordingly.

x0 = [41.6 48.8 1000 1000 500 500];
LB = [41 48 0 0 0 0];
UB = [42 49 10000 10000 10000 10000];

Then I call LSQNONLIN:

[x, resnorm, residual, exitflag, output] = ...
    lsqnonlin(@LSQcurvefit_H_2modes, x0, LB, UB);

The solver was very, very fast (less than 0.1 seconds). Very Happy.

Hopefully this code can help others who may want to curve fit a locus in 3D space.

Matt J
on 24 May 2013

**y1 = abs(window'.*y)**

I'm sure you'll be skeptical, since you've already reached results that you like. However, the above part isn't the best idea, because the use of ABS() makes your residuals non-differentiable in regions where window'.*y is close to zero. The algorithms used by LSQNONLIN, however, assume differentiability. One way to avoid this is to use abs().^2 instead.
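For instance, a sketch of that squared-magnitude variant, changing only the last few lines of LSQcurvefit_H_2modes and leaving everything else as posted:

```matlab
% Squared magnitude is smooth in the underlying parameters,
% unlike abs(), which has a kink wherever its argument crosses zero
y1 = abs(window' .* y).^2;
z1 = abs(window' .* z).^2;

% Objective function for LSQNONLIN to process
deltaH = z1 - y1;
```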

However, it's probably even better just to split the residual into real and imaginary parts, as Zhang proposed. This really doesn't involve code any more complicated than what you're doing now, and there is no real advantage in trying to avoid it:

delt = window' .* (y - z);
deltaH = [real(delt(:)); imag(delt(:))];

That's it. Only simple changes to your last two lines of code are required.

Finally, note that this same technique could have been applied in LSQCURVEFIT as well, and with the same ease.
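As a sketch of how that would look with LSQCURVEFIT (reusing f, y, x0, LB, and UB from above; the function name FRF_2modes is made up for illustration):

```matlab
% Fit against the measured FRF, stacked the same way as the model output
ydata = [real(y(:)); imag(y(:))];
x = lsqcurvefit(@FRF_2modes, x0, f, ydata, LB, UB);

% Model returns stacked real/imag parts so lsqcurvefit sees only real data
function H2 = FRF_2modes(x, f)
    w1 = x(1); w2 = x(2);
    m1 = x(3); m2 = x(4);
    c1 = x(5); c2 = x(6);
    z = -(f.^2) .* (1 ./ ((w1.^2 - f.^2).*m1 - 1i.*f.*c1) ...
                  + 1 ./ ((w2.^2 - f.^2).*m2 - 1i.*f.*c2));
    H2 = [real(z(:)); imag(z(:))];
end
```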

Answer by Craig Cowled
on 25 May 2013

MattJ, not sceptical ... always worthwhile to know the intrinsic quirks of an algorithm.

I used your code and compared the differences:

1) The size of the residuals vector doubled. It must be calculating residuals for the real and imaginary parts separately. Intuitively, this ought to produce a better result.

2) The difference in results for w1 and w2 (tightly bounded) was less than 0.1%.

3) The difference in results for the other variables, however, was between 1% and 7%.

4) Both methods produced plots that closely approximated my data.

I think I'll use the code you suggested; it seems better to use a method that calculates residuals of the real and imaginary parts separately.

Thank you for sharing.

Matt J
on 25 May 2013

The 7% difference is interesting, though. You might experiment by solving with simulated data where you know the ground truth k, m, c. Then you can know which approach is really more accurate. Similar to what you say in (1), my intuition is also that the real/imag approach should do better, since it uses a larger amount of directly observable data.
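A minimal sketch of such a ground-truth check (single-mode model; the parameter values and noise level are made up for illustration):

```matlab
% Ground truth for a single mode
k = 5e5; m = 100; c = 200;
w = (1:0.1:200)';                        % frequency in rad/s
H = -w.^2 ./ (k - m*w.^2 + 1i*w*c);      % noiseless FRF

% Add complex measurement noise
H = H + 0.01*max(abs(H)) * (randn(size(H)) + 1i*randn(size(H)));

% Fit with the real/imag-stacked residual and compare to truth
model = @(p, w) -w.^2 ./ (p(1) - p(2)*w.^2 + 1i*p(3)*w);
fun   = @(p) [real(model(p, w) - H); imag(model(p, w) - H)];
p     = lsqnonlin(fun, [4e5 80 150]);

disp(abs(p - [k m c]) ./ [k m c]);       % relative errors per parameter
```

Running the same synthetic data through the abs()-based residual instead would show which approach recovers k, m, c more accurately.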
