LSQ of exponential function without linearizing

Hi. I have a problem solving the inverse problem of this form: y = a*exp(b*x1)*exp(c*x2)
I need to solve for a,b,c given data: y, x1, x2. I believe this requires an iterative LSQ approach using damping by the Levenberg-Marquardt algorithm. However, I find that the solution is highly dependent on starting point. y is in the range 10^10 while x1 and x2 are in the range [0:1]. I have an expectation that b and c are in the range [-2:2] meaning that "a" is much bigger in the range [10^9:10^10].
Do you have any good advice for how to make the inverse method stable? Standardization / normalization?
Best regards, Commat
  3 Comments
Matt J
Matt J on 4 Mar 2014
Edited: Matt J on 4 Mar 2014
I expect b and c to be in the range [-2, 2], meaning that "a" is much bigger, in the range [10^9, 10^10].
I don't think that follows. The magnitude of a will depend on the magnitudes of x1 and x2 as well as on b and c.
commat
commat on 4 Mar 2014
Edited: commat on 4 Mar 2014
Hi Matt. Thanks for your reply.
cond([x1(:), x2(:)]) = 2.044
and
cond([x1(:), x2(:), ones(size(x1(:)))]) = 27.0614
What is this telling me?
I agree with you on the magnitude, but getting an approximate solution by linearizing the expression and then solving with traditional Gauss-Newton yields b = 1.68 and c = -1.70.
I believe the problem with my LSQ approach is that the Jacobian columns are very different in magnitude: J(:,1) is small while J(:,2) and J(:,3) are very large. I think this would disturb how the solution is iteratively steered?
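For readers who want to see the scale imbalance concretely, here is a minimal Python/NumPy sketch (not MATLAB, and using made-up parameter values consistent with the ranges described above) that builds the Jacobian of the model with respect to [a, b, c]:

```python
import numpy as np

# Hypothetical synthetic data at the scales described in the question:
# a ~ 1e10, b and c in [-2, 2], x1 and x2 in [0, 1].
rng = np.random.default_rng(0)
a, b, c = 1e10, 1.5, -1.5
x1 = rng.uniform(0, 1, 50)
x2 = rng.uniform(0, 1, 50)

# Model: y = a*exp(b*x1)*exp(c*x2). Jacobian columns w.r.t. [a, b, c]:
e = np.exp(b * x1 + c * x2)
J = np.column_stack([e,            # dy/da ~ O(1)
                     a * x1 * e,   # dy/db ~ O(1e10)
                     a * x2 * e])  # dy/dc ~ O(1e10)

# The column scales differ by roughly ten orders of magnitude,
# which is exactly the imbalance described above.
print(np.abs(J).max(axis=0))
```

With columns this unbalanced, an undamped Gauss-Newton step is dominated by the large columns, which is consistent with the behavior described.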


Answers (1)

Matt J
Matt J on 4 Mar 2014
Edited: Matt J on 5 Mar 2014
I don't know exactly what you mean by "linearizing". If you take the log of both sides of your model equation, you obtain equations that are linear in b, c, and log(a):
log(y(:))=x1(:)*b +x2(:)*c +log(a)
Your result for cond([x1(:), x2(:), ones(size(x1(:)))]) says you should be able to get a pretty stable solution using mldivide,
params=[x1(:), x2(:), ones(size(x1(:)))]\log(y(:));
a=exp(params(3));
b=params(1);
c=params(2);
These kinds of transformations don't always play well when you have additive measurement noise, but the above should at least be a good initial guess [a0,b0,c0] for an iterative method.
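For reference, the same log-linear fit can be sketched in Python/NumPy (a translation of the mldivide solve above, on hypothetical noise-free synthetic data, so the true parameters are recovered):

```python
import numpy as np

# Hypothetical synthetic data; the true parameters are only for the demo.
rng = np.random.default_rng(1)
a_true, b_true, c_true = 5e9, 1.68, -1.70
x1 = rng.uniform(0, 1, 100)
x2 = rng.uniform(0, 1, 100)
y = a_true * np.exp(b_true * x1) * np.exp(c_true * x2)

# Linear least squares in (b, c, log(a)) after taking logs --
# the NumPy equivalent of MATLAB's [x1 x2 1]\log(y):
A = np.column_stack([x1, x2, np.ones_like(x1)])
params, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
b0, c0, a0 = params[0], params[1], np.exp(params[2])
```

On noise-free data this recovers the parameters essentially exactly; with additive noise on y it only approximates them, which is why it is best used as the initial guess.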
Without actual data to run, I can only guess what might be wrong with the iterative method you're using. However, I would recommend normalizing your data to smaller, more manageable numbers, e.g. y = y/1e10. This should have no effect other than to re-express a in different units. You might also try using FMINSPLEAS, which can take advantage of the fact that y is linear w.r.t. a.
flist={@(bc,x) exp(x*bc(:))};
[bc,a] = fminspleas(flist,[b0,c0],[x1(:),x2(:)],y(:)/1e10);
a=a*1e10;
b=bc(1);
c=bc(2);
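The idea behind FMINSPLEAS (variable projection: eliminate the linear parameter a in closed form and iterate only over b and c) can be sketched in Python with scipy, again on hypothetical synthetic data and with the 1e10 scaling suggested above:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical synthetic data, scaled by 1e10 as suggested above.
rng = np.random.default_rng(2)
a_true, b_true, c_true = 8e9, 1.2, -0.8
x1 = rng.uniform(0, 1, 100)
x2 = rng.uniform(0, 1, 100)
ys = a_true * np.exp(b_true * x1 + c_true * x2) / 1e10  # scaled data

def residual(bc):
    # For fixed (b, c) the model is linear in a, so the optimal
    # scaled a has a closed form (the variable-projection idea
    # behind FMINSPLEAS):
    e = np.exp(bc[0] * x1 + bc[1] * x2)
    a_s = (e @ ys) / (e @ e)
    return a_s * e - ys

sol = least_squares(residual, x0=[0.0, 0.0])  # only b, c are iterated
b, c = sol.x
e = np.exp(b * x1 + c * x2)
a = (e @ ys) / (e @ e) * 1e10  # undo the scaling
```

Reducing the search to the two well-scaled parameters b and c sidesteps the huge Jacobian-column imbalance caused by a.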
  4 Comments
commat
commat on 6 Mar 2014
Hi Matt. Yes, I understand that the cost function can have local minima, but if a solution actually returns the least misfit to the data, shouldn't the least-squares iteration stay in that minimum?
Matt J
Matt J on 6 Mar 2014
Yes, it should, but I still don't understand what you say you're witnessing. You think it starts to converge to the global solution and then jumps to another, sub-optimal one? What is the evidence of that?

