
Thread Subject: Linear interpolation problem

Subject: Linear interpolation problem

From: Heinrich Acker

Date: 7 Nov, 2012 19:46:09

Message: 1 of 19

Dear Matlab users,

I have the following problem, which seems too general not to have been solved by somebody already. The interpolation error I find when using 'interp1' with the default linear method is not as small as it could be, given the linear model and the number of data points. Please consider this example:

% interval
x = 0:0.01:1;

% function
y = x.^2;

% knots to use for interpolation
xi = 0:0.1:1;

% interpolate
yi = interp1(x,y,xi);

% show function and interpolation result
plot(x,y)
hold all
plot(xi,yi,'.-')

% interpolate again, this time to find the errors at all values of x
yi2 = interp1(xi,yi,x);

% plot the error
figure
plot(x,yi2-y)

The error shown in the second figure is always positive, because of the sign of the curvature of the function. In this particular case, it would be easy to shift the yi values in order to minimize the error. In general, it seems not so easy. I can think of heavily iterative optimization algorithms that try to trim the result of 'interp1' in order to minimize the error. But for the data sets I am interested in, such an algorithm is likely to be too slow. I wonder if there is a solution more clever than iteratively improving the result of 'interp1'. My questions are: Is there a proven, available algorithm that minimizes the error in the general case? Is one available in Matlab? What is it called, if it has a name?
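For reference, a quick check of the numbers in this example (a small sketch reusing the variables defined above): for y = x.^2, the linear interpolation error between two knots a distance h apart peaks at h^2/4 at the interval midpoint, and a constant vertical shift of half that value cuts the maximum absolute error in half.

% error of the piecewise linear interpolant from the code above
err = yi2 - y;
max(err)                 % about h^2/4 = 0.0025 for h = 0.1

% shifting the whole interpolant down by half the peak error halves
% the maximum absolute error (it then swings between -h^2/8 and +h^2/8)
shift = (max(err) + min(err))/2;
max(abs(err - shift))    % about 0.00125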

Thank you.

Heinrich

Subject: Linear interpolation problem

From: TideMan

Date: 7 Nov, 2012 19:55:17

Message: 2 of 19

On Thursday, November 8, 2012 8:46:09 AM UTC+13, Heinrich Acker wrote:
>
> [Heinrich's original post and example code quoted in full -- snipped]

Approximating a quadratic with piecewise linear portions is obviously inaccurate, never mind the program (Matlab, Excel, whatever).
If this concerns you, you should use a higher order interpolation method. Indeed, quadratic interpolation would be exact for your function. But in general, I find PCHIP works for most functions (available within interp1).
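For instance, a minimal sketch reusing the variables from the original post ('pchip' is one of the method strings interp1 accepts):

% piecewise cubic Hermite interpolation through the same knots
yp = interp1(xi, yi, x, 'pchip');
figure
plot(x, yp - y)   % compare with the error of the default linear method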

Subject: Linear interpolation problem

From: Heinrich Acker

Date: 8 Nov, 2012 10:23:08

Message: 3 of 19

Thank you, TideMan, for these hints. In my application, the function is unknown, and linear approximation is already chosen for runtime reasons. With my example, I did not use real data, but wanted to illustrate the fact that the result delivered by 'interp1' is not the best result possible with linear approximation. I am looking for an algorithm that performs better with the same type of approximation. As you can see from my example, 'interp1' does not minimize the maximum error in the data set.

>
> Approximating a quadratic with piecewise linear portions is obviously inaccurate, never mind the program (Matlab, Excel, whatever).
> If this concerns you, you should use a higher order interpolation method. Indeed, quadratic interpolation would be exact for your function. But in general, I find PCHIP works for most functions (available within interp1).

Subject: Linear interpolation problem

From: Bruno Luong

Date: 8 Nov, 2012 12:35:09

Message: 4 of 19

"Heinrich Acker" wrote in message <k7g16b$3j6$1@newscl01ah.mathworks.com>...
> Thank you, TideMan for these hints. In my application, the function is unknown, and linear approximation is already chosen for runtime reasons. With my example, I did not use real data, but wanted to illustrate the fact that the result delivered by 'interp1' is not the best result possible with linear approximation. I am looking for an algorithm that performs better with the same type of approximation. As you can see from my example, 'interp1' does not minimize the maximum error in the data set.
>

So what is your preference: runtime or accuracy? They are a tradeoff; you have to pick one.

BTW, interp1's poor runtime is mainly due to the overhead, not the numerical method behind it.

Bruno

Subject: Linear interpolation problem

From: Heinrich Acker

Date: 8 Nov, 2012 13:57:07

Message: 5 of 19

Perhaps I should have explained this in more detail at the beginning. The application is in embedded control: a tiny 16-bit µC will do the actual approximation, and it already has a critical workload. Because of this, the project has already settled on linear interpolation, since the combination of indexing, one multiplication, and one addition fits into the loop. I want to use Matlab to supply the embedded software with calibration data, and I currently use 'interp1' to do this. From plots similar to the code example given, I have seen that it does not deliver the optimal result (if the maximum error is regarded as the cost). I have to deliver to the µC system a set of gain and offset values with a fixed number of entries. How can I minimize the maximum error?

>
> So what is your preference: runtime or accuracy? They are tradeoff, you have to pick a choice.
>
> BTW, interp1 poor runtime is mainly due to the overhead, not the numerical method behind it.
>
> Bruno

Subject: Linear interpolation problem

From: Bruno Luong

Date: 8 Nov, 2012 15:22:12

Message: 6 of 19

"Heinrich Acker" wrote in message <k7gdnj$g6r$1@newscl01ah.mathworks.com>...
>How can I minimize the maximum error?

You could, but you would have to give up some runtime performance. Are you willing to?

Bruno

Subject: Linear interpolation problem

From: Heinrich Acker

Date: 8 Nov, 2012 16:08:17

Message: 7 of 19

"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <k7gin4$55m$1@newscl01ah.mathworks.com>...
> "Heinrich Acker" wrote in message <k7gdnj$g6r$1@newscl01ah.mathworks.com>...
> >How can I minimize the maximum error?
>
> You could, but you would have to give up some runtime performance. Are you willing to?
>
> Bruno

Yes! Computing the coefficients for the µC currently takes 5 ms on my i7 notebook, based on 'interp1' - that's fast. The dumb one-coefficient-at-a-time optimizer I could write myself would probably run for seconds; that's too slow. But an optimized solution that runs in 100 ms would be great. By the way, I'm familiar with stealing code from the library m-files and reducing the runtime by deleting error checks, deleting array reshapings, and reducing the number of function calls.

Heinrich

Subject: Linear interpolation problem

From: Bruno Luong

Date: 8 Nov, 2012 19:12:17

Message: 8 of 19

"Heinrich Acker" wrote in message <k7gldh$gdg$1@newscl01ah.mathworks.com>...

>
> Yes! Computing the coefficients for the µC currently takes 5 ms on my i7 notebook, based on 'interp1' - that's fast. The dumb one-coefficient-at-a-time optimizer I could write myself would probably run seconds, that's too slow. But an optimized solution that runs in 100 ms would be great. By the way, I'm familiar with stealing code from the library m-files and reducing the runtime by deleting error checks, deleting array reshapings, and reducing number of function calls.

OK, so did you try interp1 with 'spline' option?

Bruno

Subject: Linear interpolation problem

From: dpb

Date: 8 Nov, 2012 20:42:45

Message: 9 of 19

On 11/8/2012 10:08 AM, Heinrich Acker wrote:
> "Bruno Luong" <b.luong@fogale.findmycountry> wrote in message
> <k7gin4$55m$1@newscl01ah.mathworks.com>...
>> "Heinrich Acker" wrote in message
>> <k7gdnj$g6r$1@newscl01ah.mathworks.com>...
>> >How can I minimize the maximum error?
>>
>> You could but you have to give up in runtime performance. Willing you?
>>
...

> Yes! Computing the coefficients for the µC currently takes 5 ms on my i7
> notebook, based on 'interp1' - that's fast. The dumb
> one-coefficient-at-a-time optimizer I could write myself would probably
> run seconds, that's too slow. But an optimized solution that runs in 100
> ms would be great. By the way, I'm familiar with stealing code from the
> library m-files and reducing the runtime by deleting error checks,
> deleting array reshapings, and reducing number of function calls.

Is this a one-time factory calibration or is it required in the field?

How many terms can you stand in the field, and how different would the
data sets be if they are field-calibrated? IOW, does it matter how long it
takes to find an optimal set of coefficients since once done it's done,
or do you also need a field calibration function?

If it's reasonably fixed, you may find that actual evaluation time could
be cut down by using fewer terms even if they were higher order.

What's the best attack is more than likely highly dependent on the
actual dataset characteristics and whether they're variable or not, as
mentioned above.

Since this is for an instrument, are you aware of the old Embedded
Systems Programming/Design magazine and, through it, Jack Crenshaw's
column? He wrote over the years on many topics, one of which was such
calibration issues. I don't have a link at hand; if I get a chance this
evening I'll see if I can find it--I know he wrote a series of articles
on almost precisely this topic. One really nice thing of Jack's
approach--he always ended up w/ stuff for the implementation.

You might start w/

<http://www.embedded.com/search?keyword=crenshaw>

I'm not sure if all the old archives are still up or not...

--

Subject: Linear interpolation problem

From: Heinrich Acker

Date: 9 Nov, 2012 13:58:08

Message: 10 of 19

dpb <none@non.net> wrote in message <k7h5dl$3ts$1@speranza.aioe.org>...
>
> Is this a one-time factory calibration or is it required in the field?
> [...]
> You might start w/
>
> <http://www.embedded.com/search?keyword=crenshaw>
>
> I'm not sure if all the old archives are still up or not...

Now I have bookmarked this valuable site I did not know; thank you for the link. I knew Jack Crenshaw as the author of "Let's Build A Compiler", a series of articles where he first says that he is not an expert in the field, and then writes the best introduction to compiler design I have ever seen. But searching embedded.com, I could not find anything closely related. Instead, I found a note from the editor saying that they sometimes put old material back online on request.

Subject: Linear interpolation problem

From: Heinrich Acker

Date: 9 Nov, 2012 14:31:13

Message: 11 of 19

"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <k7h06h$ab$1@newscl01ah.mathworks.com>...
>
> OK, so did you try interp1 with 'spline' option?
>
> Bruno

Well, not really, because I don't know how to use splines in my application. The complete operation, which I probably should have explained at the beginning at the price of a lengthy post, is as follows:

1. The uncalibrated device with the µC is fixed on the test bench and gets a signal to start measuring. A stimulus is applied. The test bench collects both stimulus and measurement data.

2. The data, a large number of noisy coordinate pairs, are fed into the Matlab application, which has the task of computing the calibration table for the µC. It is this task that currently requires 5 ms, but for which I would be happy with an optimized result that takes 100 ms.

3. The calibration table, a set of gain/offset pairs, is flashed into the µC memory.

4. The µC gets the signal that calibration is done and it has to work with the calibration data for the rest of its life. Since the system is stable, this does not cause long-term problems.

5. During operation, the µC has about 50 µs to get from the uncalibrated raw measurement to the calibrated output. The only way I know to do this is piecewise linear interpolation: some LSBs of the uncalibrated measurement are taken as an index into the table of gain/offset pairs, then the calibrated output is computed in the form y = m(index)*x+b(index). Having the number of segments be a power of two means that those LSBs *are* the index without further computation. In embedded code this translates to a shift operation, perhaps with a typecast, instead of a multiplication or division (see the sketch below).
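To make the per-sample step in point 5 concrete, here is a minimal Matlab model of the table lookup (illustration only: the bit widths, the table size, and the choice of which bits form the index are assumptions, and xraw, m, b merely stand in for one raw sample and the gain/offset tables):

% model of the µC's per-sample calibration step
nraw  = 12;                              % assumed width of the raw measurement in bits
nseg  = 16;                              % assumed number of segments (a power of two)
m = ones(1, nseg);  b = zeros(1, nseg);  % placeholder gain/offset tables
xraw  = 3000;                            % one raw sample (0 .. 2^nraw-1)
k     = nraw - log2(nseg);               % shift that turns the raw value into a segment index
index = bitshift(xraw, -k) + 1;          % +1 because Matlab tables are 1-based
ycal  = m(index)*xraw + b(index)         % one multiplication, one addition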

The decision to have the embedded code use piecewise linear interpolation is already fixed, based on the assumption of the whole team that there is nothing faster. I am trying to find a better solution on the Matlab side. The code example in my first post was meant to illustrate the finding that interp1 does not deliver the optimal solution, as the second figure shows: the error is always positive. If the result were shifted by one half of the maximum error, this maximum error would be reduced by 50%. In the meantime, I have written an extension to my code. It takes the result of interp1 and does exactly this shifting. It is reasonably fast and successful in reducing the error, but since I do not touch the gain values, this is still not the optimal solution.
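A minimal sketch of that shifting step, reusing the variables from the example in the first post (the real code would work on the measured calibration data instead):

% centre the error of the piecewise linear table around zero
err     = interp1(xi, yi, x) - y;            % signed error of the current table
shift   = (max(err) + min(err))/2;           % halfway between the extreme errors
yishift = yi - shift;                        % shifted knot values (offsets only)
max(abs(interp1(xi, yishift, x) - y))        % maximum error is roughly halved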

Heinrich

Subject: Linear interpolation problem

From: dpb

Date: 9 Nov, 2012 14:39:59

Message: 12 of 19

On 11/9/2012 7:58 AM, Heinrich Acker wrote:
...

> Now I have bookmarked this valuable site I did not know, thank you for
> the link. I knew Jack Crenshaw as the author of "Let's Build A
> Compiler", a series of articles where he first tells that he is not an
> expert in the field, and then writes the best introduction to compiler
> design I have ever seen. But searching embedded.com, I could not find
> something closely related. Instead, a note from the editor that they
> sometimes put old stuff back online on request.

That's a bummer...I've got all the old hardcopy of ESP from the initial
issue--I'll try to see if I can find the series of articles I'm recalling.

You might try and ask if they can put all the old "Programmers' Toolbox"
series of columns back--or at least an index of the columns published.
They were too valuable to lose, and you're right--Jack was/is one of the
most successful I've run across at finding the right level between
theory and application, giving the deadline-stressed the info needed in
a form usable almost immediately yet w/ enough background to extend
beyond the actual specific problem/example.

He published a book called Programmers Toolbox, or similar--I don't have
a copy at hand and I don't know otomh whether the particular subject of
the calibration issue was included in it or not.

Oh...another resource--Jack Ganssle. Jack also wrote a column for
ESP/ESD and has a consulting business for embedded system development.
He tells stories about, and has written extensively about, one of the
very early systems he worked on that had a very similar calibration
issue--you might contact Jack and see if he has anything on hand that
is online or could easily be linked to...

<www.ganssle.com>
jack@ganssle.com

I hesitate to try to suggest too much more specifically w/o more
background on the problem/data...

--

Subject: Linear interpolation problem

From: dpb

Date: 9 Nov, 2012 18:50:38

Message: 14 of 19

On 11/9/2012 8:31 AM, Heinrich Acker wrote:
...

> 3. The calibration table, a set of gain/offset pairs, is flashed into
> the µC memory.
> [...]
> The decision to have the embedded code use piecewise linear
> interpolation is already fixed, based on the assumption of the whole
> team that there is nothing faster. I am trying to find a better
> solution on the Matlab side. [...]

I presume the response of each instrument is similar enough that this is
a fixed-size table for every instrument?

If so, how many points are being used, and if you look at the historical
data, how similar are the coefficients? The question being: can you
study previous results and use them in a statistical sense to find an
optimal solution that can then be tweaked on a unit-by-unit basis?

Again, I'd need far more details of the dataset than we have here to
offer really specific ideas beyond the general solution of the
boundary-condition problem of moving n points to minimize error for a
production "cheap but cheery" solution...

--

Subject: Linear interpolation problem

From: dpb

Date: 9 Nov, 2012 21:47:24

Message: 15 of 19

On 11/9/2012 8:31 AM, Heinrich Acker wrote:
...

> The decision to have the embedded code use piecewise linear
> interpolation is already fixed, based on the assumption of the whole
> team that there is nothing faster. I am trying to find a better soultion
> on the Matlab side....

I've been blathering all around it w/o ever actually saying it--your
problem is "piecewise regression".

I remembered my notebook of specific useful papers from ages ago--don't
have electronic copy but if have access to a uni or research library
somewhere

JASA (Journal of the American Statistical Assoc.)
Sept. 1970, Vol. 65, No. 331
Piecewise Regression
V. E. McGee and W. T. Carleton

A quick google found a pretty nice exposition on the two-segment
version; the paper deals with multiple.

<http://www.fs.fed.us/rm/pubs/rmrs_gtr189.pdf>

I didn't look to see if the Curve Fitting Toolbox has an
implementation...or you should check the File Exchange...

--

Subject: Linear interpolation problem

From: John D'Errico

Date: 9 Nov, 2012 23:54:11

Message: 16 of 19

"Heinrich Acker" wrote in message <k7edq1$akq$1@newscl01ah.mathworks.com>...
>
> [original post and example code quoted in full -- snipped]

It simply is no longer interpolation if you adjust the linear
segments to minimize the overall error.

Anyway, is it easy? No, especially not if you hope to
find an optimal set of segments. I've played with writing
tools that do try to do it in some sense optimally, but they
are slow and not at all efficient, more of a greedy
algorithm. Even at that, it's not that fast.

If you want more accuracy, use a spline interpolant. Or
perhaps try out chebfun. It is on the FEX, and does a very
nice job.

If you don't really want to do interpolation, then why are
you trying to use interp1 anyway?

John

Subject: Linear interpolation problem

From: Heinrich Acker

Date: 12 Nov, 2012 17:54:16

Message: 17 of 19

dpb <none@non.net> wrote in message <k7jtmg$jtn$1@speranza.aioe.org>...
>
> I've been blathering all around it w/o ever actually saying it--your
> problem is "piecewise regression".
> [...]
> <http://www.fs.fed.us/rm/pubs/rmrs_gtr189.pdf>
>
> I didn't look to see if the Curve Fitting Toolbox has an
> implementation...or you should check the File Exchange...
>
> --

Thanks again for an interesting link. That's the kind of paper I can work with. I could not find the JASA paper, but I'm not too sad about that. Usually, the products of math research are described in a way that is much too difficult for me as an electronics design engineer; it is not feasible to create code from them. Also, thank you for pointing me to the word "regression" - I actually did not use it in my searches ... There is an interesting submission on the File Exchange called "BrokenStickRegression" that I found because of this. Now I have two leads to follow ...

Heinrich

Subject: Linear interpolation problem

From: Heinrich Acker

Date: 12 Nov, 2012 18:29:15

Message: 18 of 19

"John D'Errico" <woodchips@rochester.rr.com> wrote in message <k7k533$r8b$1@newscl01ah.mathworks.com>...
>
> It simply no longer is interpolation if you adjust the linear
> segments to minimize the overall error.
>
> Anyway, is it easy? No, especially not if you will hope to
> find an optimal set of segments. I've played with writing
> tools that do try to do it in some sense optimally, but they
> are slow and not at all efficient, instead more of a greedy
> algorithm. Even at that, its not that fast.
>
> If you want more accuracy, use a spline interpolant. Or
> perhaps try out chebfun. It is on the FEX, and does a very
> nice job.
>
> If you don't really want to do interpolation, then why are
> you trying to use interp1 anyway?
>
> John

Thank you for your help.

For me, it is already valuable information that the problem is difficult and that a standard algorithm for all applications is not available. This makes it feasible to try a greedy algorithm that uses as much application-specific information as possible. In my case, I could restrict the work to the few segments where an error limit is exceeded. I could also let it run until the time the process allows is up, and use the best result known at that time.
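One possible shape for such a greedy, time-boxed refinement (a sketch only, not actual project code; it reuses x, y, xi, yi from the example in the first post, and budget and step are made-up parameters):

% anytime greedy refinement of the knot values
budget = 0.1;                                   % assumed time budget in seconds
step   = 1e-4;                                  % assumed basic adjustment step
best   = yi;
bestE  = max(abs(interp1(xi, best, x) - y));
tic
while toc < budget
    trial = best;
    err = interp1(xi, trial, x) - y;
    [~, iworst] = max(abs(err));                % sample with the largest error
    k = find(xi <= x(iworst), 1, 'last');       % knot to the left of that sample
    trial(k) = trial(k) - step*rand*sign(err(iworst));  % nudge it against the error
    trialE = max(abs(interp1(xi, trial, x) - y));
    if trialE < bestE                           % keep the change only if it helps
        best = trial;
        bestE = trialE;
    end
end
bestE                                           % best maximum error found within the budget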

Chebfun looks very interesting and has examples that come close to my problem, I will study that.

I used interp1 because it provides a result that comes close to what I need, so I wanted to use it as a starting point for the optimization. Not good?

Heinrich

Subject: Linear interpolation problem

From: dpb

Date: 12 Nov, 2012 20:23:17

Message: 19 of 19

On 11/12/2012 11:54 AM, Heinrich Acker wrote:
...

>> JASA (Journal of the American Statistical Assoc.)
>> Sept. 1970, Vol. 65, No. 331
>> Piecewise Regression
>> V. E. McGee and W. T. Carleton
>>
>> A quick google found a pretty nice exposition on the two-segment
>> version; the paper deals with multiple.
>>
>> <http://www.fs.fed.us/rm/pubs/rmrs_gtr189.pdf>
...

> Thanks again for an interesting link. That's the kind of paper I can
> work with. I could not find the JASA paper, but I'm not too sad about
> that. Usually, the products of the math research are much too difficult
> in their description for me as an electronics design engineer. Not
> doable to create code from them. Also thank you for pointing me to the
> word "regression" - I actually did not use it in searching ... There is
> an interesting submission on the File Exchange called
> "BrokenStickRegression" that I found because of this. Now I have two
> traces to follow ...
...

Yeah, JASA isn't oriented towards implementation, true. I didn't look
at the index for Technometrics, nor have I looked at possible code
implementations at Netlib, etc.

I let my ASA membership lapse around the mid/latter '90s, so I don't have
anything recent on hand (and I haven't unpacked the dozen or so boxes of
journals that include JASA/Technometrics since the move back to the
farm, so only the few papers I had copied and stuck in my notebook are
easily at hand here)... :(

Here is a link to the JASA site--as you say, it may not be worth doing
but if you are interested it is a pretty good paper and not _terribly_
arcane.

<http://www.jstor.org/discover/10.2307/2284278?uid=3739256&uid=2&uid=4&sid=21101427584797>

--
