Knot distribution “optimal” for interpolation

`knots = optknt(tau,k)`

`knots = optknt(tau,k,maxiter)`

`knots = optknt(tau,k,maxiter)` provides the knot sequence `t` that is *best* for interpolation from *S*_{k,t} at the site sequence `tau`, with `10` the default for the optional input `maxiter` that bounds
the number of iterations to be used in this effort. Here, *best* or *optimal* is
used in the sense of Micchelli/Rivlin/Winograd and Gaffney/Powell,
and this means the following: For any *recovery scheme* *R* that
provides an interpolant *Rg* that matches a given *g* at
the sites `tau(1)`, ..., `tau(n)`,
we may determine the smallest constant const_{R} for which ‖*g* – *Rg*‖ ≤ const_{R} ‖*D*^{k}*g*‖
for all smooth functions *g*.
Here, ‖*f*‖ := sup_{tau(1) < x < tau(n)} |*f*(*x*)|.
Then we may look for the optimal recovery scheme as the scheme *R* for
which const_{R} is as small
as possible. Micchelli/Rivlin/Winograd have shown this to be interpolation
from *S*_{k,t},
with `t` uniquely determined by the following conditions:

`t(1) = ... = t(k) = tau(1);`

`t(n+1) = ... = t(n+k) = tau(n);`

Any absolutely constant function *h* with sign changes at the sites `t(k+1)`, ..., `t(n)` and nowhere else satisfies

$$\int_{\mathtt{tau}(1)}^{\mathtt{tau}(n)} f(x)\,h(x)\,dx = 0 \quad \text{for all } f \in S_{k,t}$$

Gaffney/Powell called this interpolation scheme *optimal* since
it provides the *center* function in the band formed
by all interpolants to the given data that, in addition, have their *k*th
derivative between *M* and –*M* (for
large *M*).

`optknt(tau,k)` is the same as `optknt(tau,k,10)`.
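As a quick illustration, the optimal knots can be fed straight into an interpolation command. This is only a sketch: it assumes the companion toolbox functions `spapi` and `fnplt` are available, and the site sequence and test function are made up for the example.

```matlab
tau = sort([0 rand(1,10) 1]);          % hypothetical nonuniform site sequence
k = 4;                                 % order 4, i.e., cubic splines
knots = optknt(tau,k);                 % knots optimal for interpolation at tau
sp = spapi(knots, tau, sin(2*pi*tau)); % spline from S_{k,t} matching sin at tau
fnplt(sp), hold on
plot(tau, sin(2*pi*tau), 'o'), hold off
```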

See the last part of the example “Spline Interpolation” for an illustration. For the following highly nonuniform knot sequence

`t = [0, .0012+[0, 1, 2+[0,.1], 4]*1e-5, .002, 1];`

the command `optknt(t,3)` will fail, while the command `optknt(t,3,20)`, using a high value for the optional parameter `maxiter`, will succeed.
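The two calls can be compared in one short script; this sketch wraps the failing default call in `try`/`catch` so the script runs to completion (the exact error message is not specified here):

```matlab
t = [0, .0012+[0, 1, 2+[0,.1], 4]*1e-5, .002, 1];
try
    knots = optknt(t,3);      % default maxiter = 10: fails on this sequence
catch err
    disp(err.message)         % report why the iteration did not converge
end
knots = optknt(t,3,20);       % maxiter = 20: succeeds
```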

This is the Fortran routine `SPLOPT` in *PGS*. It is based on an algorithm described in [1] for the construction of that sign function *h* mentioned above. It is essentially Newton's method for the solution of the resulting nonlinear system of equations, with `aveknt(tau,k)` providing the first guess for `t(k+1)`, ..., `t(n)`, and some damping used to maintain the Schoenberg-Whitney conditions.
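That first guess can be inspected directly. This sketch (assuming `aveknt` is available; the site sequence is made up) shows the knot averages that start the Newton iteration:

```matlab
tau = [0 .1 .4 1 1.5 3 4 4.2 5];   % hypothetical site sequence
k = 3;
guess = aveknt(tau,k);  % averages of k-1 successive sites, used as the
                        % starting values for the interior knots t(k+1),...,t(n)
```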

[1] C. de Boor. "Computational
aspects of optimal recovery." In *Optimal Estimation in Approximation
Theory*, C.A. Micchelli & T.J. Rivlin eds., Plenum Publ., New
York, 1977, 69-91.

[2] P.W. Gaffney & M.J.D. Powell. "Optimal interpolation." In
*Numerical Analysis*, G.A. Watson ed., *Lecture
Notes in Mathematics, No. 506*, Springer-Verlag, 1976, 90-99.

[3] C.A. Micchelli, T.J. Rivlin & S. Winograd. "The optimal recovery of
smooth functions." *Numer. Math.* **26**, (1976), 191-200.