How to fit a curve above/below another one?

So, I need to tweak a few variables and fit a curve to another one - at least at some region (using MATLAB).
However, I am only interested in fits that are above the curve that I have. Also, what I use to fit is a function (there is no straightforward equation).
I tried checking a few points and applying a penalty (either by multiplying the difference by a large number or by setting the difference to infinity). Neither worked: MATLAB's optimization algorithms cannot seem to figure out what I am trying to do and fall into terrible local minima that are not a fit at all.
I know this sounds a little complicated. But, do you have any suggestion to how to do this (maybe a little simpler)?
I understand this is not something limited to MATLAB. But, I googled a little and was not able to find anything similar.
Thanks!
PS. The problem has constraints. I generally use the lsqnonlin solver, but I have tried other solvers, too.

8 Comments

Help you, without you even giving us any screenshots to visualize what you're describing? How many "fits above the curve" do you want/need to do?
Well, I can't help without plots/diagrams/screenshots, but if you're lucky someone might. Otherwise read this link
Thanks for the reply.
Attached is the curve I want to fit to.
The resulting curve (output of the optimization) behaves similarly (decreases) but can be tweaked by 3-4 input parameters the curve generating function has.
Is this helpful?
Are you using fminsearch for fitting? If so, I would have expected this to work: "multiplying the difference by a large number".
"Is this helpful?" Not particularly, no.
You have some understanding of what you're envisioning, but we have no clues other than what you tell/show us... and that's precious little.
What is the "curve generating function" and what are the parameters of which you speak? Or, if that is just some wished-for result, at a bare minimum show what generated the existing curve (or is it just data points from somewhere?) and what an acceptable result would look like (and how we would know it is acceptable, as opposed to some other member of an infinite population of possibilities that wouldn't be).
Jeff,
I am sorry, I forgot to mention one huge part of it: this problem has constraints! I added this to the question description.
dpb,
That is fair.
Here are more details:
There is a model that currently exists and there are some experimental results. What I am trying to do is to modify the current model so that its results are closer to the experimental results. The curve is basically stress in a material and is generated by a function I have written (I can only treat it as a black box as I can't easily figure out how each variable can affect the shape of the output). There are a number of variables (the number can also change. But, let's say 3 for now) that I can tweak to generate a curve similar to the experimental results and the current model.
An acceptable curve is anything between the two curves. Ideally, of course, it matches the experimental curve, but anything between the two would be fine. I attached an example of an unacceptable result (temp1.png). That curve was obtained by running a curve fit against the current model, not the experimental one. As you see, the fitted curve is "below" the target curve. The same thing happens (at best, when it is actually a fit) when I target the experimental curve: what I get is basically below it, which is not what I am looking for. Also, the two ends of the curve are very important and must match the target curve almost exactly.
Another thing that I should mention is that since what I use to generate the results (curve) is complex and nonlinear, the fitted curve may change a lot and may even look strange (jagged for example) depending on the starting point. My solution for that so far was to run it multiple times with randomly generated starting points which are within constraints.
Also, the main solver I have used was lsqnonlin, but I gave other solvers that can handle constraints a shot, too.
I hope these help. Please let me know if I am still not clear enough.
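For reference, the multi-start strategy described above might look roughly like the sketch below. All of the names here (stressModel, xData, yTarget, the bounds) are illustrative placeholders for the actual black-box model and data, which were not posted in the thread.

```matlab
% Sketch: repeated lsqnonlin runs from random starting points inside the box
nStarts = 20;
lb = [0 0 0];  ub = [10 10 10];              % box constraints on the 3 parameters
bestErr = inf;  bestP = [];
for k = 1:nStarts
    p0 = lb + rand(size(lb)) .* (ub - lb);   % random start within the bounds
    resFun = @(p) stressModel(p, xData) - yTarget;  % residual vector for lsqnonlin
    [p, err] = lsqnonlin(resFun, p0, lb, ub);
    if err < bestErr                         % keep the best of the restarts
        bestErr = err;  bestP = p;
    end
end
```

Note that this keeps only the lowest-error solution; with a jagged error surface it can be worth saving all of the solutions and inspecting the distinct local minima.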
Siavash,
Having constraints isn't a reason to avoid fminsearch, because you can always build the constraints into the error function. Let fminsearch adjust as many free parameters (x's) as you need, and then compute your "real" constrained parameters with those x's, imposing exactly the constraints you want. E.g.
Realp1 = x(1)^2 + 10; % real parameter 1 must be at least 10
Realp2 = Realp1 - x(2)^2; % real parameter 2 must be less than real parameter 1
...
Once you have the real parameter values, use those to compute your error function, and add in some penalty if the function goes out of the bounds you want (e.g. below the other curve). Using inf as a penalty doesn't work well, though--fminsearch is much more likely to find good solutions if you use an error score that distinguishes results that are only slightly out of bounds from those that are extremely out of bounds.
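A minimal sketch of this idea, combining the reparameterized constraints with a graded (not infinite) out-of-bounds penalty. stressModel, xData, yTarget, and the penalty weight are assumptions, not from the thread:

```matlab
% Error function for fminsearch: x is the unconstrained vector it adjusts.
function err = penalizedError(x, xData, yTarget)
    Realp1 = x(1)^2 + 10;        % real parameter 1 must be at least 10
    Realp2 = Realp1 - x(2)^2;    % real parameter 2 must be less than parameter 1
    yFit   = stressModel([Realp1 Realp2], xData);   % black-box model (placeholder)
    err    = sum((yFit - yTarget).^2);              % base least-squares error
    % Graded penalty: grows with how far the fit dips below the target curve,
    % so "slightly below" scores better than "far below" (unlike an inf penalty).
    below  = max(yTarget - yFit, 0);
    err    = err + 1e3 * sum(below.^2);
end
% Usage: xBest = fminsearch(@(x) penalizedError(x, xData, yTarget), x0);
```

The key design point is that the penalty is a smooth function of the violation, which gives fminsearch a gradient to follow back into the feasible region.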
Thank you Jeff. This is a brilliant idea. I am playing with fminsearch now.
One question/concern: I am thinking that when I multiply the error by a large number, since fminsearch still has no heuristic method, it may still fall into a local minimum. The error is large, but it is still a local minimum. Isn't that correct? And do you think I can do anything other than using randomly generated starting points to overcome this?
Thanks again!
[Sorry for the slow answer--away from internet for a few days.]
I'm glad you like the fminsearch idea. If the answer helps you, please accept it.
Yes, you are right that local minima may be a problem and that trying different starting points is often the only way to address it. You might generate starting points randomly, or using a grid, or randomly within the cells of a grid, etc.
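The grid-based starting points mentioned above could be generated like this (the bounds and grid size are illustrative, for a two-parameter problem):

```matlab
% Starting points on a regular grid, plus a jittered variant: one random
% point inside each grid cell.
lb = [0 0];  ub = [1 1];  nPerDim = 5;
[g1, g2] = ndgrid(linspace(lb(1), ub(1), nPerDim), ...
                  linspace(lb(2), ub(2), nPerDim));
gridStarts = [g1(:) g2(:)];                    % 25 regular grid points
cellW = (ub - lb) / (nPerDim - 1);             % width of one grid cell per dimension
jittered = gridStarts + (rand(size(gridStarts)) - 0.5) .* cellW;
jittered = min(max(jittered, lb), ub);         % clip jittered points back into the box
```

Each row of gridStarts (or jittered) is then one starting point for a separate fminsearch run.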


 Accepted Answer

> I am afraid you posted the answer as a comment.
Oops. Let this be the follow-up to the above as an answer.
> do you have a suggestion on how to enforce double constraints (like a<x<b)
Well, there is almost always a way to map fminsearch's (-inf,+inf) onto whatever legal set of values that you want. Here's a trick for the a<x<b example:
Frac = x(1)^2 / (1+x(1)^2);
Realp1 = a + (b-a)*Frac;
> enforce a number to only be an integer (choosing between cases 1,2 or 3)
This one is much tougher. Floor definitely won't work--fminsearch gets frustrated/confused if it changes a parameter and the error value doesn't change. If there are really only 3 cases you might run fminsearch three times (with that parameter fixed to a different value each time) and simply see which one produces the smallest error.
With a lot more cases, what I usually do is compute a compromise error function as a weighted average of floor(iParm) and ceil(iParm). For example, suppose fminsearch nominates a value of 4.35 for a parameter that should be an integer. Compute errorBelow as the error at floor(4.35) and errorAbove as the error at ceil(4.35). Then tell fminsearch that the overall error at 4.35 is (0.65*errorBelow+0.35*errorAbove). As long as the error function isn't the same at the two integers, fminsearch will always drift toward the integer with the lower error score. There is a bit more explanation and some code to do this at fminsearcharb (including for problems with more than one integer parameter).
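As a rough sketch, the floor/ceil compromise could be wrapped up like this (errFun is a placeholder for the real error function, evaluated with the integer parameter fixed):

```matlab
% Weighted-average error for a parameter that should be an integer.
function err = integerCompromise(iParm, errFun)
    lo = floor(iParm);  hi = ceil(iParm);
    if lo == hi
        err = errFun(lo);                         % already an integer
    else
        w   = iParm - lo;                         % e.g. 0.35 for iParm = 4.35
        err = (1 - w)*errFun(lo) + w*errFun(hi);  % 0.65*errorBelow + 0.35*errorAbove
    end
end
```

Because the weight varies continuously with iParm, fminsearch sees the error change as it nudges the parameter, avoiding the flat regions that plain floor would create.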

More Answers (1)

Thank you.
I am afraid you posted the answer as a comment, so I am unable to accept it as an answer.
Also, do you have a suggestion on how to enforce double constraints (like a<x<b), or how to enforce a parameter to be an integer (choosing between cases 1, 2, or 3) using fminsearch, similar to the one-way constraint ideas you suggested earlier?
I mean, I can always use the floor function. But will it work with this algorithm?

Asked on 30 May 2019
Answered on 4 Jun 2019
