Code covered by the BSD License

### Highlights from Adaptive Robust Numerical Differentiation

4.8 | 25 ratings | 153 Downloads (last 30 days) | File Size: 166 KB | File ID: #13490

27 Dec 2006 (Updated 01 Jun 2011)

Numerical derivative of an analytically supplied function, also gradient, Jacobian & Hessian

File Information
Description

The DERIVEST suite provides a fully adaptive numerical differentiation tool for both scalar and vector valued functions. Tools for derivatives (up to 4th order) of a scalar function are provided, as well as the gradient vector, directional derivative, Jacobian matrix, and Hessian matrix. Error estimates are provided for all tools.

DERIVEST provides robust adaptive numerical differentiation (up to the fourth derivative) of a user supplied function, much as quad does for integration. It is semi-intelligent, choosing the step size that minimizes its estimate of the uncertainty in the derivative.

High order methods are used by default, although full control is provided when you want it. You can direct the order of the method to be used, the general class of difference method employed (forward, backward, or central differences), the number of terms employed in its generalized Richardson acceleration scheme, step sizes, etc.
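As a rough illustration of the Richardson acceleration idea mentioned above (a hypothetical Python sketch, not the MATLAB code in the suite): two central-difference estimates taken at step sizes h and h/ratio can be combined so that the leading O(h^2) error terms cancel, leaving a much more accurate O(h^4) estimate.

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_first_derivative(f, x, h=1e-2, ratio=2.0):
    """One step of Richardson extrapolation: combine estimates at
    step sizes h and h/ratio to cancel the O(h^2) error term."""
    d1 = central_diff(f, x, h)
    d2 = central_diff(f, x, h / ratio)
    r2 = ratio ** 2
    return (r2 * d2 - d1) / (r2 - 1)

# Derivative of exp(x) at x = 1 should be e = 2.718281828...
approx = richardson_first_derivative(math.exp, 1.0)
print(approx)
```

DERIVEST applies this kind of extrapolation with a configurable number of terms; the sketch shows only a single extrapolation step.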

Although you cannot supply a tolerance, DERIVEST does return an estimate of its uncertainty in the final result.

For example, the derivative of exp(x), at x=1 is exp(1)==2.71828182845905. DERIVEST does quite well.

[d,err]=derivest(@(x) exp(x),1)
d =
2.71828182845904

err =
1.02015503167879e-14

See the provided demos for many more examples.

Acknowledgements

Numerical Derivative Of Analytic Function inspired this file.

MATLAB release MATLAB 7.0.1 (R14SP1)
18 May 2013

Absolutely awesome work! So easy to use, brilliantly done. This is probably the best thing to happen to mankind since sliced bread. And to put it up for free? John, you sir are a good man!

30 Jan 2013

Just great

11 Jul 2011

Thanks a lot for this useful tool!

I have a question about how far away from the initial point the program will evaluate the function. I have a function which only exists in a certain range around the initial point (big enough in my sense, e.g. starting at 0.1 and requiring x > 0). However, the program breaks down by evaluating out-of-range points after a fair amount of calculation. Could you please give me some idea about why this happens?

(BTW, my function has some randomness, does this matter?)

Thanks so much!

26 Jun 2011

Excellent! Some of its predecessors also do accurate derivatives, but this one is particularly easy to use for gradients and Hessians.

18 Apr 2011

Hi John,

This looks great. I'm trying to use the Hessian function to find the Fisher Information Matrix for maximum likelihood. However, the Hessian program is choosing parameters which are not allowable in my chosen model. How can I restrict the Hessian algorithm to permissible parameter values?

04 Apr 2011

Your problem is in the way you called the code.

H=hessian(@(p) @reg2002, [0.03 3 -5])

Here is part of the error message:

"??? The following error occurred converting from function_handle to double:
Error using ==> double
Conversion to double from function_handle is not possible."

Is reg2002 a function m-file, into which you are trying to pass the variable p? If so, then you could have done this

H=hessian('reg2002', [0.03 3 -5])

or, you could have done this

H=hessian(@reg2002, [0.03 3 -5])

In either case, it assumes that reg2002 is an m-file, and passes in a variable with three elements in it as a vector. It would return a 3x3 Hessian matrix. The use of @reg2002 turns that into a function handle.

See the error message you had though. Matlab sees this:

@(p) @reg2002

and tries to return a function handle as the result of a function. Then my code looks at it, and tries to use the function handle as a number. Of course it fails.

Finally, you might also have done this

H=hessian(@(p) reg2002(p), [0.03 3 -5])

Here, you would be creating an anonymous function, a function of the variable p. It is yet another function handle that the code can use properly.

04 Apr 2011

hi John, many thanks for the wonderful suite!

I ran into a problem when I use the function from an m-file. My code is:

H=hessian(@(p) @reg2002, [0.03 3 -5])

??? The following error occurred converting from function_handle to double:
Error using ==> double
Conversion to double from function_handle is not possible.

Error in ==> derivest at 330
f_x0(j) = fun(x0(j));

Error in ==> hessdiag at 59
[HD(ind),err(ind),finaldelta(ind)] = derivest( ...

Error in ==> hessian at 74
[hess,err] = hessdiag(fun,x0);

I don't know what is wrong here. Are there mistakes I made? Many thanks in advance.

Emma

04 Apr 2011

Felipe - Note that the submissions #15235 and #26807 actually appeared AFTER my tool (#13490) did. Submissions are assigned sequential numbers on the FEX. I see no reason to acknowledge a later submission on the FEX. In general, since forward acknowledgement would seem to require precognition of some sort, it is rather difficult to accomplish. In fact, most of the references in that Wikipedia article appeared several years after this tool was written and posted on the FEX.

I will modify the title in light of your comment though, since the phrase might conceivably confuse some people.

04 Apr 2011

"Automatic" in the title might lead viewers to believe that it implements "automatic differentiation"; see <http://en.wikipedia.org/wiki/Automatic_differentiation> and submission #26807.

You might want to use the acknowledgement field so that related submission #15235 back-links to yours.

Haven't tried it yet, which is why I'm not rating it.

22 Feb 2011

Hi John,

Thank you very much for the great suite!

The information on the Hessian function explicitly states that it is not for frequent use on an expensive-to-evaluate objective function. I tried it anyway, because that is what I need. Most of the time, I get something reasonable (hard to check, but I'll worry about that later), but sometimes I get NaNs. Any idea what the problem might be?

Perhaps some more details are warranted: The function I want to differentiate is a likelihood-function. It is a function of time, and I need the Hessian at the maximum for each timestep. (The maximum is one point associated to the last time period.) The function takes 5 parameters.

Any help or ideas appreciated!

Thank you very much,
Sturla

27 Jan 2011

If you are looking for all of the second partials, then yes, a loop on the Hessian computation seems the solution.

26 Jan 2011

Hi John,
Thanks much for the quick response.
jacobianest was indeed what I needed; I should have looked into the whole package first.

I was looking at the hessian function, which doesn't allow me to use a vector function (perhaps understandably, as the output would be a tensor otherwise).

Proceeding with the matrix-vector example
@(x) A*x;
Should I loop over the rows of A and call hessian m times to get all the Hessians?

for i = 1:m
    f = @(x) A(i,:)*x;  % returns a scalar
    [h(i), herr(i)] = hessian(f, x0);
end

Thanks,
- Aydin

26 Jan 2011

Hi Aydin,

The trick is, derivest itself takes only a function of a single variable. Suppose I had a scalar function of three variables. Thus it takes a set of three variables, returning a single number. If you wish to compute a partial derivative of that function, then you get a single number out. But that is a partial derivative, NOT a derivative. "The" derivative of a scalar function of three variables is given by the gradient vector, composed of three partial derivatives. So I have provided the tool gradest. Use it for scalar functions of multiple variables.

Now, suppose I have a function of n variables, that produces m output results? (This is what you have by the way.) Now you must compute a Jacobian MATRIX, an m by n matrix of partial derivatives. Again, derivest is not the tool, nor is gradest. This is a different problem that requires a different tool yet, so I wrote jacobianest.

>> A = magic(3)
A =
8 1 6
3 5 7
4 9 2

>> fun = @(x) A*x;
>> [J,Jerr] = jacobianest(fun,[1;2;3])
J =
8 1 6
3 5 7
4 9 2

Jerr =
7.7667e-14 9.7084e-15 5.3057e-14
2.6528e-14 4.3706e-14 7.09e-14
3.8833e-14 9.4059e-14 1.9417e-14

In the case that you really only wanted to know the derivative of a specific element of the result, with respect to a specific single variable out of the several inputs, you need to define a function where the OTHER variables are held fixed. Thus...

>> x1 = 1;
>> x2 = 2;
>> fun2 = @(x3) [0 1 0]*fun([x1;x2;x3]);
>> [d23,derr] = derivest(fun2,3,'vectorized','no')
d23 =
7
derr =
4.785e-14

Thus, fun2 fixes the first two variables, then extracts out the second output result. derivest can now operate, as long as I tell it that this function is not vectorized. It is not set up to take a vector of inputs and produce one result for each. fun2 takes only a scalar input, and produces only a scalar output.

John

25 Jan 2011

Hi John,
Thanks for the lovely tool.
I couldn't figure out how to differentiate multivariable functions with it though. For example,
f = @(x) A*x

where A is a matrix and x is a vector.

Your code seems to do the following:

% Loop over the elements of x0, reducing it to
% a scalar problem. Sorry, vectorization is not
% complete here, but this IS only a single loop.
der = zeros(nx0);
errest = der;
finaldelta = der;
for i = 1:n
...
end

while A*x doesn't really reduce to a single scalar problem as far as I can see. Any suggestions?

21 Jan 2011

Sorry John, sorted out the problem, was my fault... thanks again for these useful codes.
Alessandro

21 Jan 2011

Hi John,

I tried to run the Rosenbrock example included in the help of your routine hessian, i.e. I ran

rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;
[h,err] = hessian(rosen,[1 1])

I get the following error:

"??? Error using ==> subsindex
Function 'subsindex' is not defined for values of class 'function_handle'."

do you know how to solve this?
thank you.
Alessandro

18 Dec 2010

I believe that fmincon has improved since my statement in the past, so that the hessian as returned is now a better approximation. Of course, if you supply the hessian, no updates are needed.

17 Dec 2010

Hi Errico,

In another post, you stated that MATLAB does not compute the Hessian very well. Is this why the Hessian may not get updated in fmincon? Secondly, would providing a user-supplied Hessian (e.g., through the DERIVEST routines) improve the ability of fmincon to update the Hessian in all cases? Thanks!

08 Dec 2010

Hi Errico,

I tried to extend your code to differentiate Bilinear Matrix Inequalities. The program is still in a rough form; in particular I didn't implement the error part of the estimation, but I was able to produce derivatives of a BMI function correctly. If you are interested, let me know and I will send you the program. I will try one of these days to implement the error term as well.

Kimonas

06 Dec 2010

Hi Andrew,

The point is, derivest uses a sequence of nested approximations, using a geometric sequence of step sizes. For example, suppose we choose to compute the derivative of tan(x), at x = pi*2/3.

[D,errest] = derivest(@tan,pi*2/3)

D =
4

errest =
7.0302e-13

tan(x) has a singularity at x = pi/2, so if the step size is chosen large enough, the approximations will actually span that singularity. This will cause the corresponding estimates to be useless, but since multiple estimates are produced, some of them will be good, and I can figure out which estimates in the sequence ARE good and which ones are bad.

I pick that estimate with the lowest predicted error, along with some additional criteria.

A nice thing about the nested sequence of approximations is that it allows me to deal with functions that have scaling problems. I can almost always find a viable solution.
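A toy sketch of this scheme (hypothetical Python, not the actual derivest code): generate central-difference estimates over a geometric sequence of step sizes, use the disagreement between neighbouring estimates as a crude error measure, and keep the estimate whose neighbours agree best. Estimates whose stencil spans the singularity disagree wildly with their neighbours and are automatically rejected.

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def adaptive_first_derivative(f, x, h0=1.0, ratio=2.0, nsteps=20):
    """Evaluate a geometric sequence of step sizes h0, h0/ratio, ...
    and keep the estimate whose neighbours agree best; the spread of
    adjacent estimates serves as a crude uncertainty measure."""
    hs = [h0 / ratio**k for k in range(nsteps)]
    ests = [central_diff(f, x, h) for h in hs]
    best, best_err = None, math.inf
    for i in range(1, len(ests) - 1):
        # crude uncertainty: disagreement with neighbouring step sizes
        err = max(abs(ests[i] - ests[i - 1]), abs(ests[i] - ests[i + 1]))
        if err < best_err:
            best, best_err = ests[i], err
    return best, best_err

# True derivative of tan(x) at 2*pi/3 is sec(2*pi/3)^2 = 4;
# the largest step sizes here span the singularity at pi/2.
d, errest = adaptive_first_derivative(math.tan, 2 * math.pi / 3)
```

This is only the skeleton of the idea; derivest combines it with Richardson acceleration and a real error model rather than this neighbour-disagreement heuristic.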

John

06 Dec 2010

In derivest_demo.m, I found the following statement:
"Although a central rule may put some samples in the wrong places, it may still succeed"

How would it put samples in the "wrong places"?

29 Nov 2010

Hi Prasanna,
In theory, I could have written derivest to provide higher derivatives. In practice, high order numerical differentiation is difficult to do. Differentiation is a noise amplifier. It takes any tiny noise in your system, and boosts it. By the time you get up to the 4th derivative or so, the noise down in the least significant bits can start to cause numerical problems. For this reason, I limited it there. Any further, and I was no longer able to trust what was coming out in some test cases. And since I had to put a limit somewhere, I chose 4 as the maximum order of derivative it would provide.
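The noise amplification is easy to demonstrate with plain central differences (a hypothetical Python sketch, independent of derivest): at a step size that is perfectly adequate for a first derivative, the h^4 denominator of a fourth-difference stencil magnifies the roundoff in the function values into the leading digits.

```python
import math

def d1_central(f, x, h):
    """Second-order central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d4_central(f, x, h):
    """Central difference for f''''(x); the h**4 in the denominator
    amplifies roundoff noise in the function values."""
    return (f(x - 2*h) - 4*f(x - h) + 6*f(x)
            - 4*f(x + h) + f(x + 2*h)) / h**4

x, h = math.pi / 4, 1e-4
err1 = abs(d1_central(math.sin, x, h) - math.cos(x))  # f'(pi/4)    = sqrt(2)/2
err4 = abs(d4_central(math.sin, x, h) - math.sin(x))  # f''''(pi/4) = sqrt(2)/2
print(err1, err4)  # err4 is typically many orders of magnitude larger
```

With h = 1e-4 the first-derivative error is tiny, while the fourth-difference numerator is pure cancellation noise divided by 1e-16, which is exactly why a much larger step, and careful error estimation, are needed for high orders.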

You can use the error estimate provided as a measure of how well I think I was able to do. As an example, try a simple trig function. Differentiate sin(x), at x = pi/4. The result will always be plus or minus sqrt(2)/2, depending on the derivative order.

>> fun = @sin;
>> [D,DE] = derivest(fun,pi/4,'derivativeorder',1)
D =
0.70711
DE =
7.0617e-15

For the first derivative, I can give virtually full precision. However, the second derivative is not quite so verifiably computable, and the error estimate reflects that fact.

>> [D,DE] = derivest(fun,pi/4,'derivativeorder',2)
D =
-0.70711
DE =
1.666e-12

>> [D,DE] = derivest(fun,pi/4,'derivativeorder',3)
D =
-0.70711
DE =
6.6146e-11

>> [D,DE] = derivest(fun,pi/4,'derivativeorder',4)
D =
0.70711
DE =
1.9122e-05

See that by the time I've gotten to the 4th derivative, derivest tells me only to trust the first 4 or 5 digits in the result. Had I allowed higher orders here, the result would rapidly become a virtual random number generator.

Now, with care, one could modify the code to work for 5th derivatives. Just look for par.DerivativeOrder, and see what was done, extending the code for 5th order derivatives. A quick check through the code finds only one or two spots where serious understanding of the methods employed was necessary. (To be honest, leave such a modification of the code to me.)

Can you nest derivest calls? Yes, in theory you can do so to produce higher order derivatives, and the error estimate will tell you if it thinks it is generating garbage for a result. This test computes the 5th derivative of sin(x), at pi/4.

>> format long g
>> d2 = @(x) derivest(fun,pi/4 + x,'derivativeorder',2);

>> [d5,d5err] = derivest(d2,0,'derivativeorder',3)
d5 =
0.707106775613828
d5err =
4.17633428512851e-08

>> sqrt(2)/2
ans =
0.707106781186548

Note that derivest thinks it was able to do so reasonably well, and it was just about right on where it tells you to no longer trust the digits in the result. In fact, in this case, the nested result is probably more accurate than a direct computation of the 5th derivative. So the 3rd derivative of the second derivative is more accurate (but a bit slower!) than a direct computation of the 5th derivative.

HTH,
John

28 Nov 2010

Hello John,
This piece of code is great. I had one question: I have a function for which I have no analytical expression, hence cannot vectorize it. But I can do pretty fast numerical computation of it, and I know that it has smooth first, second, third and fourth derivatives by applying your derivest function to it. I need one more derivative to complete my calculation, and I was wondering if I should make an anonymous function out of the fourth derivative calculation using derivest and take its first derivative to get the fifth. I wanted to know if there is some specific reason why this code goes only up to the 4th derivative?

19 Nov 2010

Thanks very much, that was exceptionally helpful. These are a nice set of functions and very much appreciated.

Unsurprisingly, calling gradest in a loop over several points inside fminsearch is not the fastest thing in the world, but such are the trade-offs, and it does seem to do the job!

18 Nov 2010

I have provided the related function gradest to compute a gradient, zipped with derivest. For example, I'll pick arbitrary values for the parameters k1,k2,k3,k4,k5, and then define a function handle.

k1 = 1/2;
k2 = 1/3;
k3 = 1/4;
k4 = 1/5;
k5 = 1/6;

fun = @(xyz) k1*xyz(1)*(1- ...
exp(-k2*(xyz(2)^k3)*(xyz(3)^k4)*(xyz(1)^k5)))

ans =
0.21997 0.018836 0.010046

For multiple points, wrap a loop around the call to gradest.

John

17 Nov 2010

I may be missing something obvious here, but is there a clever way to use derivest to calculate a set of partial derivatives?

I have a function something like
f = k1 * x * (1 - exp(-k2*(y^k3)*(z^k4)*(x^k5)))
and would like to be able to calculate partial derivatives df/dx, df/dy, and df/dz for sets of specific values of x, y, and z.

I am doing this in order to calculate an error weighting for least squares fitting -- I have uncertainties in x, y, and z which I would like to map onto f in order to determine a weighting factor for the minimization (so I want the gradient at each point). I would prefer to determine the derivative numerically because that will allow me the most flexibility in being able to try different forms of the function, etc.
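The mapping of uncertainties onto f described here is the standard first-order propagation formula, sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2. A hypothetical Python sketch of the idea, using a simple central-difference gradient as a crude stand-in for gradest (which does this job in MATLAB, with error estimates):

```python
import math

def num_gradient(f, x, h=1e-6):
    """Simple central-difference gradient (a crude stand-in for gradest)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def propagated_sigma(f, x, sigmas):
    """First-order error propagation:
    sigma_f = sqrt( sum_i (df/dx_i)^2 * sigma_i^2 )."""
    g = num_gradient(f, x)
    return math.sqrt(sum((gi * si) ** 2 for gi, si in zip(g, sigmas)))

# Hypothetical example: f(x, y) = x*y at (2, 3) has gradient (3, 2),
# so sigma_f = sqrt((3*0.1)^2 + (2*0.2)^2) = 0.5
f = lambda v: v[0] * v[1]
sigma = propagated_sigma(f, [2.0, 3.0], [0.1, 0.2])
print(sigma)
```

The weighting factor for the least squares fit would then be 1/sigma_f^2 at each data point.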

Many thanks.

27 Mar 2010

Large relative reported errors might mean that the derivative computed is small compared to the uncertainty in it. It may also mean that there is some problem in the computation, leaving the derivative uncertain. It is difficult to know what has happened from only this description.

One option is to compute the derivative in question using derivest instead of the hessian tool. While this will be less efficient, derivest itself offers many more options for overcoming problems. That way you could be sure of what the problem is, and hopefully overcome it.

17 Mar 2010

Hi,

Sometimes when I compute the Hessian, err contains terms that are larger than the corresponding terms in the reported Hessian. What are the reasons for this, and are there any general ways to rewrite the problem to avoid it?

01 Mar 2010

I'll get release 14, i.e., version 7.

28 Feb 2010

Hi V,

You must be using an old MATLAB release. Here is what happens when I try your example in R2010a:

>> myfun1 = inline('exp(x)');
[d,err]=derivest(myfun1,1,'Vectorized','no')

d =

2.7183

err =

2.1641e-14

So then I looked more carefully at your comment. I see the warning message gives the answer away.

Warning: File: C:\MATLAB6p5\work\derivest.m Line: 463 Column: 1
Unmatched "end".

You are running version 6.5 (i.e., release 13.) Derivest was written in Release 14, or version 7. As such, it uses several things that will cause your MATLAB release to be quite unhappy.

Sorry,
John

28 Feb 2010

hi john,

i tried this

myfun1 = inline('exp(x)');
[d,err]=derivest(myfun1,1,'Vectorized','no')

and i got the error below,

Warning: File: C:\MATLAB6p5\work\derivest.m Line: 463 Column: 1
Unmatched "end".
(Type "warning off MATLAB:m_warning_end_without_block" to suppress this warning.)
[the same 'Unmatched "end"' warning repeats for lines 523, 538, 573, 659, and 759]
??? Error using ==> factorial
N must be a positive integer.

Error in ==> C:\MATLAB6p5\work\derivest.m (fdamat)
On line 564 ==> c = 1./factorial(1:2:(2*nterms));

Error in ==> C:\MATLAB6p5\work\derivest.m
On line 275 ==> fdarule = [1 0]/fdamat(par.StepRatio,1,2);

10 Jan 2010

Another really excellent piece of code. I hope TMW buy you a nice Christmas present every year for extending their software so extensively for them.

19 Oct 2009

Hello,

Hessian does not work, neither in any example nor when I try to use it. It gives the message:
Function 'subsindex' is not defined for values of class 'function_handle'.
All other functions seem to be great though.
Best,

19 Aug 2009

Hi John,

Thanks for the timely and detailed reply! I had assumed that derivest handles all cases. Now I see jacobianest is a standalone function. Thanks again! And great code!

Michelle

18 Aug 2009

Hi Michelle,

The problem is, when you define myfun as you have, it is not a scalar valued function. It returns a vector result. Thus we can do this:

myfun=inline('[exp(x);exp(2*x)]');

myfun(2)
ans =
7.3891
54.598

See that there are two numbers returned for a scalar input.

(By the way, I'd strongly recommend the use of function handles and anonymous functions instead of inline functions. An inline function is comparatively very slow to evaluate.)

The derivest code computes the derivative of a scalar valued function. Change your function to return a scalar value, as do myfun1 and myfun2 below, and derivest works as designed.

myfun1 = inline('exp(x)');
[d,err]=derivest(myfun1,1,'Vectorized','no')

d =
2.7183
err =
9.3127e-15

myfun2 = inline('exp(2*x)');
[d,err]=derivest(myfun2,1,'Vectorized','no')
d =
14.778
err =
1.8401e-13

Now, how might one define the derivative of a vector valued function? This is defined mathematically as the Jacobian. In fact, in the derivest suite, I have provided the jacobianest function to compute what you wish. It works on a vector valued function, returning a jacobian matrix. Here, since the parameter of these functions is a scalar, the Jacobian matrix is just a vector of length 2.

[d,err] = jacobianest(myfun,1)
d =
2.7183
14.778
err =
3.6841e-14
1.4736e-13

John

18 Aug 2009

Thanks so much for making such a useful suite available!

I was experimenting with the code and typed in the following two commands:

>> myfun=inline('[exp(x);exp(2*x)]')
>> [d,err]=derivest(myfun,1,'Vectorized','no')

but got back error messages that read

??? In an assignment A(I) = B, the number of elements in B and
I must be the same.

Error in ==> derivest at 362
f_plusdel(j) = fun(x0i+h*delta(j));

I'm posting it here because I thought you might be able to spot the problem much more quickly than I could. Could you let me know?

27 Jun 2009

Benn,

My guess is that you have written your own factorial function, probably as a homework assignment, since the error messages show a recursive implementation of factorial. This causes a failure in derivest, since your version is not properly vectorized. By vectorized, I mean that when called with a list of numbers, the Mathworks version of factorial computes the factorial of each number.

You can learn which factorial version is called by typing this at the command line:

which factorial -all

MATLAB will show which version of the factorial function will be used. You should see the version that you wrote listed at the very top.

At first, I thought that possibly you were simply using an old version of MATLAB, but this is unlikely, since the Mathworks would never have written the factorial function with a recursive definition like the one I see here. (Recursive factorial definitions are a nice way to teach a student recursive coding styles, but they are a terrible coding style in practice because of the extraneous, wasted function call overhead just to multiply a list of consecutive numbers together.)

The solution is to remove the bad (homework assignment) version of factorial from your search path. It will be inefficient anyway compared to the supplied version of factorial that MATLAB already comes with, and your version is not properly vectorized. At the very least, move it to another place far down on your search path so that matlab will use the correct version of factorial instead.

John

27 Jun 2009

Help! I just downloaded the toolbox and tried to run the most basic example in the derivest documentation:

[d,e]=derivest(@(x) exp(x),1)

and got a bunch of errors of this form:

Error in ==> factorial at 8
x = n*factorial(n-1);

Error in ==> factorial at 8
x = n*factorial(n-1);

Error in ==> derivest>fdamat at 564
c = 1./factorial(1:2:(2*nterms));

Error in ==> derivest at 283
fdarule = [0 1 0]/fdamat(par.StepRatio,1,3);

16 Feb 2009

Works really fine! very helpful! Thanks...

16 Feb 2009

This worked better than fminsearch when used as a damped newton method for finding a root with extremely high sensitivity. fminsearch was overly slow, and often overshot the root, even though the tolerance was high, and fzero didn't work at all. Derivest was fast.
This is the second time I've used great code from John D'Errico.
Thanks!

13 Jan 2009

I've just uploaded a new release that repairs the problem of complex domain differentiation. For example, the current release now gets this correct:

exp(1+i)
ans =
1.4687 + 2.2874i

derivest(@(z) exp(z),1+i)
ans =
1.4687 + 2.2874i

27 Dec 2008

When I use this function to get the derivative at complex numbers, a problem appears. For example:

derivest(@(z) exp(z),1+i)
ans =
1.46869393991590 - 2.28735528717885i

However, the derivative of exp(z) at a complex number z should be exp(z):

exp(1+i)
ans =
1.46869393991589 + 2.28735528717884i

which is the conjugate of the above.

08 Aug 2008

Rock 'n' Roll


06 Mar 2008

Works just great. I was in a situation where using hessiancs was problematic because the functions being evaluated had a transiently complex nature. hessiancs works well with functions that never use any complex arithmetic, but watch out for using complex steps if the internals use any complex arithmetic, even if the inputs and outputs are both real. Downloaded the package, changed function names, and it worked right out of the box - slower, but 'bullet-proof' sometimes beats 'fast'. Good documentation, etc., too. I appreciate that hessiancs used a similar form to the one you used - I'll go back to it when I need more speed and have the time to fix my loss functions.

Scott

08 Feb 2008

Can anyone let me know what conflicted with lightspeed? I'll try to fix it if possible. It might be a function name that I chose. Otherwise, this is just basic Matlab code.

08 Feb 2008

Very useful! I was able to use this to check my symbolically derived hessian matrix in an optimization routine... optimset('DerivativeCheck','on') checks the gradient numerically, but apparently not the hessian.

In case anyone else has a problem: I found that this was incompatible with the MATLAB Lightspeed toolbox, but that is no doubt the Lightspeed toolbox's fault. Removing Lightspeed fixed the issue.

07 Oct 2007

THIS WEBSITE IS VERY HELPFUL :)

01 Sep 2007

I get this error sometimes:
??? Improper assignment with rectangular empty matrix.

Error in ==> derivest at 457
[errest(i),ind] = min(errors);

Why would that be?

02 Apr 2007

The best!

27 Dec 2006

Minor changes

08 Feb 2007

Added an option for differentiating a non-vectorized function.

12 Feb 2007

Fixed a bug in derivest which sometimes occurred for very small values of the variable.

08 Mar 2007