# spap2

Least-squares spline approximation

## Syntax

```
spap2(knots,k,x,y)
spap2(l,k,x,y)
sp = spap2(...,x,y,w)
spap2({knorl1,...,knorlm},k,{x1,...,xm},y)
spap2({knorl1,...,knorlm},k,{x1,...,xm},y,w)
```

## Description

`spap2(knots,k,x,y)` returns the B-form of the spline f of order `k` with the given knot sequence `knots` for which

```
(*)   y(:,j) = f(x(j)),  all j
```

in the weighted mean-square sense, meaning that the sum

$\sum_{j} w(j)\,|y(:,j)-f(x(j))|^{2}$

is minimized, with default weights equal to 1. The data values `y(:,j)` may be scalars, vectors, matrices, even ND-arrays, and $|z|^{2}$ stands for the sum of the squares of all the entries of $z$. Data points with the same site are replaced by their average.
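For concreteness, here is a minimal sketch of the basic call; the data, knot sequence, and noise level are illustrative, not from the original documentation:

```
% Hypothetical data: noisy samples of a smooth function
x = linspace(0,2*pi,51);
y = sin(x) + 0.05*randn(size(x));

% Cubic (order k = 4) least-squares fit on a given knot sequence
knots = augknt([0 pi/2 pi 3*pi/2 2*pi],4);
sp = spap2(knots,4,x,y);

% Evaluate the fitted spline at the data sites
yfit = fnval(sp,x);
```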

If the sites `x` satisfy the (Schoenberg-Whitney) conditions

`(**)   knots(j) < x(j) < knots(j+k),   j = 1:length(x) = length(knots)-k`

then there is a unique spline (of the given order and knot sequence) satisfying (*) exactly. No spline is returned unless (**) is satisfied for some subsequence of `x`.

`spap2(l,k,x,y)`, with `l` a positive integer, returns the B-form of a least-squares spline approximant, but with the knot sequence chosen for you. The knot sequence is obtained by applying `aptknt` to an appropriate subsequence of `x`. The resulting piecewise polynomial consists of `l` polynomial pieces and has `k-2` continuous derivatives. If you feel that a different distribution of the interior knots might do a better job, follow this up with

```
sp1 = spap2(newknt(sp),k,x,y);
```
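A sketch of this two-step pattern, with hypothetical data:

```
x = linspace(0,1,101);
y = exp(-20*(x-0.5).^2);        % hypothetical data with a sharp bump
sp  = spap2(6,4,x,y);           % 6 cubic pieces, knots chosen by spap2
sp1 = spap2(newknt(sp),4,x,y);  % refit on a redistributed knot sequence
```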

`sp = spap2(...,x,y,w)` lets you specify the weights `w` in the error measure (given above). `w` must be a vector of the same size as `x`, with nonnegative entries. All the weights corresponding to data points with the same site are summed when those data points are replaced by their average.

`spap2({knorl1,...,knorlm},k,{x1,...,xm},y)` provides a least-squares spline approximation to gridded data. Here, each `knorli` is either a knot sequence or a positive integer. Further, `k` must be an `m`-vector, and `y` must be an (`r+m`)-dimensional array, with `y(:,i1,...,im)` the datum to be fitted at the site `[x{1}(i1),...,x{m}(im)]`, all `i1`, ..., `im`. However, if the spline is to be scalar-valued, then, in contrast to the univariate case, `y` is permitted to be an `m`-dimensional array, in which case `y(i1,...,im)` is the datum to be fitted at the site `[x{1}(i1),...,x{m}(im)]`, all `i1`, ..., `im`.
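A minimal sketch of a bivariate gridded fit (the grid, orders, and piece counts are all illustrative):

```
% Hypothetical data on a 6-by-9 grid
x = 0:0.2:1;  y = 0:0.25:2;
[xx,yy] = ndgrid(x,y);
z = sin(xx).*cos(yy);            % scalar-valued data, so z may be 2-D

% 2 quadratic pieces in the first variable, 3 cubic pieces in the second
sp = spap2({2,3},[3 4],{x,y},z);
zfit = fnval(sp,{x,y});          % evaluate on the same grid
```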

`spap2({knorl1,...,knorlm},k,{x1,...,xm},y,w)` also lets you specify the weights. In this `m`-variate case, `w` must be a cell array with `m` entries, with `w{i}` a nonnegative vector of the same size as `xi`, or else `w{i}` must be empty, in which case the default weights are used in the `i`th variable.
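Continuing the gridded sketch above, weights might be supplied in the first variable only (an illustrative choice), with `[]` requesting the default weights in the second:

```
wx = ones(size(x));  wx([1 end]) = 10;    % emphasize the boundary sites in x
sp = spap2({2,3},[3 4],{x,y},z,{wx,[]});  % default weights in the y variable
```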

## Examples

```
sp = spap2(augknt([a,xi,b],4),4,x,y)
```

is the least-squares approximant to the data `x`, `y`, by cubic splines with two continuous derivatives, basic interval [`a`..`b`], and interior breaks `xi`, provided `xi` has all its entries in `(a..b)` and the conditions (**) are satisfied in some fashion. In that case, the approximant consists of `length(xi)+1` polynomial pieces. If you do not want to worry about the conditions (**) but merely want to get a cubic spline approximant consisting of `l` polynomial pieces, use instead

```
sp = spap2(l,4,x,y);
```

If the resulting approximation is not satisfactory, try using a larger `l`. Otherwise, use

```
sp = spap2(newknt(sp),4,x,y);
```

for a possibly better knot distribution. In fact, if that helps, repeating it may help even more.
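Here is an illustrative, runnable instance of the calls above (the interval, breaks, and data are hypothetical):

```
a = 0;  b = 4;  xi = [1 2 3];            % basic interval and interior breaks
x = linspace(a,b,41);
y = x.*exp(-x);                          % hypothetical data
sp  = spap2(augknt([a,xi,b],4),4,x,y);   % length(xi)+1 = 4 cubic pieces
sp1 = spap2(newknt(sp),4,x,y);           % optional refit on redistributed knots
```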

As another example, `spap2(1,2,x,y)` provides the least-squares straight-line fit to the data `x`,`y`, while

```
w = ones(size(x));
w([1 end]) = 100;
spap2(1,2,x,y,w);
```

forces that fit to come very close to the first data point and to the last.
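As a runnable sketch (with hypothetical data), compare the residuals at the end points with and without the large end weights:

```
x = linspace(0,1,21);
y = x.^2 + 0.02*randn(size(x));     % hypothetical data
sp  = spap2(1,2,x,y);               % plain least-squares line
w = ones(size(x));  w([1 end]) = 100;
spw = spap2(1,2,x,y,w);             % line pulled toward the end points
fnval(spw,x([1 end])) - y([1 end])  % much smaller than for sp
```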


## Algorithms

`spcol` is called on to provide the almost block-diagonal collocation matrix $(B_{j,k}(x_i))$, and `slvblk` solves the linear system (*) in the (weighted) least-squares sense, using a block QR factorization.

Gridded data are fitted, in tensor-product fashion, one variable at a time, taking advantage of the fact that a univariate weighted least-squares fit depends linearly on the values being fitted.
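As a sanity check on this description, the unweighted least-squares problem can be reproduced directly from the collocation matrix; this sketch uses hypothetical data and solves with backslash rather than `slvblk`:

```
x = linspace(0,2*pi,51);
y = sin(x);                           % hypothetical data
knots = augknt(linspace(0,2*pi,8),4);
C = spcol(knots,4,x);                 % C(i,j) = B_{j,4}(x(i))
a = C\y(:);                           % least-squares B-spline coefficients
sp = spmak(knots,a.');
max(abs(fnval(sp,x) - fnval(spap2(knots,4,x,y),x)))  % should be near eps
```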