## Contents

- How does residueEst work?
- Use a polynomial to fit a subsequence of points. Polyfit would do.
- The trick is to learn from Goldilocks.
- Residue of exp(z)/z, around z0 == 0
- A second order pole, exp(3*z)./z.^2
- A second order pole, 1./z.^2, at z0 = 0
- Another 2nd order pole with a zero residue, 1./sin(z).^2, z0 = pi

## How does residueEst work?

How might one best compute the residue of a complex function at a known pole of low order using numerical techniques? Since the function is not finite at the pole, we cannot evaluate it there directly. The approach taken by residueEst is to sample the function at a geometric sequence of points approaching the pole, not unlike the approach I took with my derivest tools.

For example, suppose we wish to compute the residue of the first-order pole of 1./sin(z) at z = pi. (The residue is -1.) For a first-order pole, the residue is given by the limit of f(z)*(z-z0) as z --> z0.

residueEst solves this problem by choosing some small offset dz from z0, say dz = 1e-16, and evaluating the function at the sequence of points z0 + dz, z0 + k*dz, z0 + k^2*dz, ..., z0 + k^n*dz. As in derivest, the default geometric factor is k = 4.

```
format long g
z0 = pi;
dz = 1e-16;
k = 4;
delta = dz*k.^(0:30)

% Now evaluate the function at each of these points, z0 + delta, and
% multiply by (z0 + delta - z0) == delta.
fun = @(z) 1./sin(z);
fun_of_z = fun(z0+delta).*delta
```

```
         delta                  fun_of_z
         1e-16          0.816561967659769
         4e-16          -1.24368623276475
       1.6e-15          -0.96741494953197
       6.4e-15          -1.05007818637943
      2.56e-14         -0.998645996304325
     1.024e-13         -0.999393504822813
     4.096e-13           -1.0006650249676
    1.6384e-12          -1.00016932159893
    6.5536e-12           -1.0000454724843
   2.62144e-11         -0.999997574134254
  1.048576e-10           -1.0000025407217
  4.194304e-10          -1.00000060598878
 1.6777216e-09          -1.00000012230672
 6.7108864e-09          -1.00000000138628
2.68435456e-08          -1.00000000424339
1.073741824e-07         -1.00000000082177
4.294967296e-07        -0.999999999966395
1.7179869184e-06         -1.0000000000115
6.8719476736e-06        -1.00000000003004
2.74877906944e-05       -1.00000000013473
0.0001099511627776       -1.0000000020163
0.0004398046511104      -1.00000003223861
0.0017592186044416      -1.00000051580866
0.0070368744177664      -1.00000825298128
0.0281474976710656      -1.00013205914403
0.112589990684262       -1.00211587978403
0.45035996273705        -1.03462138368425
1.8014398509482         -1.85044046871399
7.20575940379279        -9.03930662202117
28.8230376151712          55.26083092194
115.292150460685        -142.066423215372
```
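residueEst itself is MATLAB code, but the sampling step is easy to sketch in NumPy. This is an illustrative re-implementation, not residueEst's actual code; the variable names are mine:

```python
import numpy as np

# Sample f(z)*(z - z0) on a geometric ladder of offsets approaching the pole.
# z0 = pi is a first-order pole of 1/sin(z); the true residue is -1.
z0 = np.pi
k = 4.0
delta = 1e-16 * k ** np.arange(31)      # 1e-16, 4e-16, 1.6e-15, ...

f = lambda z: 1.0 / np.sin(z)
products = f(z0 + delta) * delta        # f(z0+dz)*dz -> residue as dz -> 0

# Offsets in the middle of the ladder give products very close to the
# residue; the smallest offsets are polluted by roundoff, the largest
# ones by truncation.
print(products[10:15])
```

Just as in the MATLAB output above, the products are wild at the tiny-offset end, settle very near -1 in the middle, and drift away again as delta grows.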

## Use a polynomial to fit a subsequence of points. Polyfit would do.

Note that if we fit the points that lie very close to z0, the resulting polynomial can have strange coefficients; polyfit warns that those fits are badly conditioned.

```
% For a first order pole, we are really only interested in the value
% of the constant term of this polynomial model. Essentially, this is
% the prediction of the limiting value of our product, extrapolated
% down to delta == 0.
P0 = polyfit(delta(1:4),fun_of_z(1:4),2)

% A nice feature of this sequence of points is that we can re-use the
% function values, building a sliding sequence of models.
P0 = polyfit(delta(2:5),fun_of_z(2:5),2)
P0 = polyfit(delta(3:6),fun_of_z(3:6),2)
P0 = polyfit(delta(4:7),fun_of_z(4:7),2)
P0 = polyfit(delta(5:8),fun_of_z(5:8),2)

% See how, as we move along this sequence using a sliding window of 4
% points, the constant term approaches -1.
P0 = polyfit(delta(5:8),fun_of_z(5:8),2)
P0 = polyfit(delta(12:15),fun_of_z(12:15),2)

% At some point, we move just too far away from the pole, and our
% extrapolated estimate at the pole becomes poor again.
P0 = polyfit(delta(26:29),fun_of_z(26:29),2)
P0 = polyfit(delta(27:30),fun_of_z(27:30),2)
```

```
Warning: Polynomial is badly conditioned. Add points with distinct X
         values, reduce the degree of the polynomial, or try centering
         and scaling as described in HELP POLYFIT.
   (the same warning is issued for each of the first seven fits)

P0 =
      1.40722697171131e+29     -1.08866090694287e+15        0.165206417147826
P0 =
     -7.74680625721916e+26          25552618526783.7        -1.14677041257162
P0 =
      4.67339267331006e+24         -451780266912.317        -1.00159018384202
P0 =
     -1.05363184798865e+24          514903808559.053        -1.03508462443342
P0 =
      3.39997173195685e+21         -6525897035.24524         -0.9986021313655
P0 =
      3.39997173195685e+21         -6525897035.24524         -0.9986021313655
P0 =
         -3107361376.59391          101.434422959505        -1.00000049047424
P0 =
        -0.114884717006806        -0.297892275588167       -0.928218832274888
P0 =
         0.149132944160628          -2.4122832973759        0.881916750679264
```
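The sliding-window extrapolation can also be sketched in NumPy (again an illustration, not residueEst's code):

```python
import warnings
import numpy as np

# Rebuild the sampled products f(z0 + delta)*delta from the ladder above,
# then slide a 4-point window along it, fitting a quadratic each time.
z0, k = np.pi, 4.0
delta = 1e-16 * k ** np.arange(31)
products = delta / np.sin(z0 + delta)

consts = []
with warnings.catch_warnings():
    warnings.simplefilter('ignore')     # polyfit warns near the pole
    for i in range(len(delta) - 3):
        p = np.polyfit(delta[i:i+4], products[i:i+4], 2)
        consts.append(p[-1])            # constant (delta^0) coefficient

# Mid-ladder windows extrapolate to very nearly the true residue, -1;
# the first and last windows do not.
print(consts[0], consts[11], consts[-1])
```

The constant coefficient of each fit is the extrapolated value of the product at delta == 0, so watching it stabilize as the window slides is exactly the behavior shown in the MATLAB output above.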

## The trick is to learn from Goldilocks.

Choose the prediction from a model that is neither too close to the pole nor too far away. The choice is made by a simple set of rules. First, discard any predictions of the limit that are NaN or inf. Then trim off a few more predictions from each end, leaving only those in the middle. Next, note that each prediction comes from a polynomial model with ONE more data point than coefficients to estimate; that spare point yields an estimate of the uncertainty in the constant term using standard statistical methodology. While a 95% confidence limit has no true statistical meaning here, since the data are not truly random, we can still use the information.

residueEst then chooses the model with the narrowest confidence bounds around the constant term.

```
[res,errest] = residueEst(@(z) 1./sin(z),pi)

% See that if the true residue really is -1, then we should have
% (approximately)
%
%   abs(res - (-1)) < errest
[abs(res - (-1)),errest]
```

```
res =
         -1.00000000000222
errest =
      1.23720470391972e-11
ans =
      2.22333262911434e-12      1.23720470391972e-11
```
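The whole Goldilocks selection can be sketched in NumPy as follows. This is a crude stand-in for residueEst's actual rules (the trimming range and the use of `polyfit`'s coefficient covariance are my choices here, not residueEst's implementation):

```python
import warnings
import numpy as np

z0, k = np.pi, 4.0
delta = 1e-16 * k ** np.arange(31)
products = delta / np.sin(z0 + delta)   # f(z)*(z - z0) on the ladder

best = None
with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    # Crude stand-in for the trimming rule: keep only windows away
    # from both ends of the ladder.
    for i in range(3, 22):
        x, y = delta[i:i+4], products[i:i+4]
        if not np.all(np.isfinite(y)):  # discard NaN/inf predictions
            continue
        # 4 points, 3 coefficients: the one spare point gives an error
        # estimate for the constant term via the coefficient covariance.
        p, C = np.polyfit(x, y, 2, cov=True)
        err = np.sqrt(C[-1, -1])        # std. error of the constant term
        if best is None or err < best[1]:
            best = (p[-1], err)

res, errest = best
print(res, errest)                      # estimate and its error bar
```

Picking the window whose constant term has the smallest standard error is the "not too close, not too far" rule in action: windows hugging the pole are roundoff-dominated, distant windows are truncation-dominated, and the winner sits in between.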

## Residue of exp(z)/z, around z0 == 0

Here the residue should be exp(0) == 1.

```
[res,errest] = residueEst(@(z) exp(z)./z,0)
```

```
res =
     1
errest =
     0
```

## A second order pole, exp(3*z)./z.^2

The residue should be 3.

```
[res,errest] = residueEst(@(z) exp(3*z)./z.^2,0,'poleorder',2)

% The error estimate is a reasonable one here too.
[abs(res - 3),errest]
```

```
res =
          3.00000000019214
errest =
      2.26398426905353e-10
ans =
      1.92144078425827e-10      2.26398426905353e-10
```
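Why does the same fitting machinery work for a second-order pole? Multiplying by delta^2 instead of delta makes the product analytic: delta^2 * f(z0 + delta) = a_(-2) + a_(-1)*delta + a_0*delta^2 + ..., so the residue a_(-1) is now the *linear* coefficient of the fitted polynomial rather than the constant term. A small NumPy illustration (my sketch, with a hand-picked window, not residueEst's code):

```python
import warnings
import numpy as np

# For an order-2 pole, delta^2 * f(z0 + delta) is analytic:
#   a_(-2) + a_(-1)*delta + a_0*delta^2 + ...
# so the residue a_(-1) is the linear coefficient of the fit.
z0, k = 0.0, 4.0
delta = 1e-8 * k ** np.arange(12)
f = lambda z: np.exp(3*z) / z**2
g = delta**2 * f(z0 + delta)            # equals exp(3*delta) analytically

with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    p = np.polyfit(delta[4:8], g[4:8], 2)

print(p[-2])                            # linear coefficient: close to 3
```

Since exp(3z)/z^2 has Laurent series 1/z^2 + 3/z + 9/2 + ..., the linear coefficient of the fit recovers the residue 3.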

## A second order pole, 1./z.^2, at z0 = 0

The residue should be 0.

```
[res,errest] = residueEst(@(z) 1./z.^2,0,'poleorder',2)
```

```
res =
      1.77121275308468e-15
errest =
      1.83193682355051e-14
```

## Another 2nd order pole with a zero residue, 1./sin(z).^2, z0 = pi

The residue here is zero.

```
[res,errest] = residueEst(@(z) 1./(sin(z).^2),pi,'poleorder',2)
```

```
res =
     -6.43430530792682e-09
errest =
      3.61963048877858e-08
```
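The zero residue can be checked directly with the same order-2 trick sketched earlier (again my NumPy illustration with a hand-picked window, not residueEst's code):

```python
import warnings
import numpy as np

# Around z0 = pi, sin(z0 + delta)^2 == sin(delta)^2, and
#   delta^2 / sin(delta)^2 = 1 + delta^2/3 + ...
# has no linear term, so the fitted residue should come out near zero.
z0, k = np.pi, 4.0
delta = 1e-6 * k ** np.arange(8)
g = delta**2 / np.sin(z0 + delta)**2

with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    p = np.polyfit(delta[2:6], g[2:6], 2)

print(p[-1], p[-2])                     # constant near 1, linear near 0
```

A true zero is the hardest value to estimate numerically: any answer whose magnitude is well inside the error bar, as residueEst reports above, is consistent with a residue of zero.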