What do you think of my numerical Jacobian, using the central-difference method?

Hi everyone!
Here's my 6x6 numerical Jacobian, using the central-difference method.
For now, I've used the same step-size h for each differentiation variable.
What do you think of it? Any suggestions?
Thanks in advance!
%% write six first-order equations
rdots = [vGx; vGy; omega];
rddots = [aGx; aGy; omegadot];
%% specify step-size
h = .001;
%% use simple function notation
F1 = rdots(1);
F2 = rdots(2);
F3 = rdots(3);
F4 = rddots(1);
F5 = rddots(2);
F6 = rddots(3);
%% write a central-difference method with respect to the six state variables
% gradient in the first component function F1
J11 = ( F1( (xG+h), yG, theta, vGx, vGy, omega ) - F1( (xG-h), yG, theta, vGx, vGy, omega ) ) / 2*h;
J12 = ( F1( xG, (yG+h), theta, vGx, vGy, omega ) - F1( xG, (yG -h), theta, vGx, vGy, omega ) ) / 2*h;
J13 = ( F1( xG, yG, (theta+h), vGx, vGy, omega ) - F1( xG, yG, (theta-h), vGx, vGy, omega ) ) / 2*h;
J14 = ( F1( xG, yG, theta, (vGx+h), vGy, omega ) - F1( xG, yG, theta, (vGx-h), vGy, omega ) ) / 2*h;
J15 = ( F1( xG, yG, theta, vGx, (vGy+h), omega ) - F1( xG, yG, theta, vGx, (vGy-h), omega ) ) / 2*h;
J16 = ( F1( xG, yG, theta, vGx, vGy, (omega+h) ) - F1( xG, yG, theta, vGx, vGy, (omega-h) ) ) / 2*h;
% gradient in the second component function F2
J21 = ( F2( (xG+h), yG, theta, vGx, vGy, omega ) - F2( (xG-h), yG, theta, vGx, vGy, omega ) ) / 2*h;
J22 = ( F2( xG, (yG+h), theta, vGx, vGy, omega ) - F2( xG, (yG -h), theta, vGx, vGy, omega ) ) / 2*h;
J23 = ( F2( xG, yG, (theta+h), vGx, vGy, omega ) - F2( xG, yG, (theta-h), vGx, vGy, omega ) ) / 2*h;
J24 = ( F2( xG, yG, theta, (vGx+h), vGy, omega ) - F2( xG, yG, theta, (vGx-h), vGy, omega ) ) / 2*h;
J25 = ( F2( xG, yG, theta, vGx, (vGy+h), omega ) - F2( xG, yG, theta, vGx, (vGy-h), omega ) ) / 2*h;
J26 = ( F2( xG, yG, theta, vGx, vGy, (omega+h) ) - F2( xG, yG, theta, vGx, vGy, (omega-h) ) ) / 2*h;
% gradient in the third component function F3
J31 = ( F3( (xG+h), yG, theta, vGx, vGy, omega ) - F3( (xG-h), yG, theta, vGx, vGy, omega ) ) / 2*h;
J32 = ( F3( xG, (yG+h), theta, vGx, vGy, omega ) - F3( xG, (yG -h), theta, vGx, vGy, omega ) ) / 2*h;
J33 = ( F3( xG, yG, (theta+h), vGx, vGy, omega ) - F3( xG, yG, (theta-h), vGx, vGy, omega ) ) / 2*h;
J34 = ( F3( xG, yG, theta, (vGx+h), vGy, omega ) - F3( xG, yG, theta, (vGx-h), vGy, omega ) ) / 2*h;
J35 = ( F3( xG, yG, theta, vGx, (vGy+h), omega ) - F3( xG, yG, theta, vGx, (vGy-h), omega ) ) / 2*h;
J36 = ( F3( xG, yG, theta, vGx, vGy, (omega+h) ) - F3( xG, yG, theta, vGx, vGy, (omega-h) ) ) / 2*h;
% gradient in the fourth component function F4
J41 = ( F4( (xG+h), yG, theta, vGx, vGy, omega ) - F4( (xG-h), yG, theta, vGx, vGy, omega ) ) / 2*h;
J42 = ( F4( xG, (yG+h), theta, vGx, vGy, omega ) - F4( xG, (yG -h), theta, vGx, vGy, omega ) ) / 2*h;
J43 = ( F4( xG, yG, (theta+h), vGx, vGy, omega ) - F4( xG, yG, (theta-h), vGx, vGy, omega ) ) / 2*h;
J44 = ( F4( xG, yG, theta, (vGx+h), vGy, omega ) - F4( xG, yG, theta, (vGx-h), vGy, omega ) ) / 2*h;
J45 = ( F4( xG, yG, theta, vGx, (vGy+h), omega ) - F4( xG, yG, theta, vGx, (vGy-h), omega ) ) / 2*h;
J46 = ( F4( xG, yG, theta, vGx, vGy, (omega+h) ) - F4( xG, yG, theta, vGx, vGy, (omega-h) ) ) / 2*h;
% gradient in the fifth component function F5
J51 = ( F5( (xG+h), yG, theta, vGx, vGy, omega ) - F5( (xG-h), yG, theta, vGx, vGy, omega ) ) / 2*h;
J52 = ( F5( xG, (yG+h), theta, vGx, vGy, omega ) - F5( xG, (yG -h), theta, vGx, vGy, omega ) ) / 2*h;
J53 = ( F5( xG, yG, (theta+h), vGx, vGy, omega ) - F5( xG, yG, (theta-h), vGx, vGy, omega ) ) / 2*h;
J54 = ( F5( xG, yG, theta, (vGx+h), vGy, omega ) - F5( xG, yG, theta, (vGx-h), vGy, omega ) ) / 2*h;
J55 = ( F5( xG, yG, theta, vGx, (vGy+h), omega ) - F5( xG, yG, theta, vGx, (vGy-h), omega ) ) / 2*h;
J56 = ( F5( xG, yG, theta, vGx, vGy, (omega+h) ) - F5( xG, yG, theta, vGx, vGy, (omega-h) ) ) / 2*h;
% gradient in the sixth component function F6
J61 = ( F6( (xG+h), yG, theta, vGx, vGy, omega ) - F6( (xG-h), yG, theta, vGx, vGy, omega ) ) / 2*h;
J62 = ( F6( xG, (yG+h), theta, vGx, vGy, omega ) - F6( xG, (yG -h), theta, vGx, vGy, omega ) ) / 2*h;
J63 = ( F6( xG, yG, (theta+h), vGx, vGy, omega ) - F6( xG, yG, (theta-h), vGx, vGy, omega ) ) / 2*h;
J64 = ( F6( xG, yG, theta, (vGx+h), vGy, omega ) - F6( xG, yG, theta, (vGx-h), vGy, omega ) ) / 2*h;
J65 = ( F6( xG, yG, theta, vGx, (vGy+h), omega ) - F6( xG, yG, theta, vGx, (vGy-h), omega ) ) / 2*h;
J66 = ( F6( xG, yG, theta, vGx, vGy, (omega+h) ) - F6( xG, yG, theta, vGx, vGy, (omega-h) ) ) / 2*h;
% Form the 6x6 Jacobian
J = [ J11, J12, J13, J14, J15, J16;
J21, J22, J23, J24, J25, J26;
J31, J32, J33, J34, J35, J36;
J41, J42, J43, J44, J45, J46;
J51, J52, J53, J54, J55, J56;
J61, J62, J63, J64, J65, J66 ];

2 Comments

Hi John!
What's the best way for me to learn to write my own central-difference code? I've practiced a bit with simple functions such as x^2 and x^2 + y and x + y, and have no issues with toy problems such as those. I played with the step-size h, too, and noticed that too small a step-size gives the incorrect derivative. Is that very popular Jacobest / derivest code on the File Exchange your code? I see your name on it. If so, should I take a peek at that, and see how you wrote it? Or should I try something else first?
The Matlab syntax is the thing that is difficult here, I think.
Ideally, I'd love to write my own method, rather than use something from the File Exchange.
Thanks!
I moved my comment to an answer, and there I show how I would write a finite difference Jacobian. There are many ways to do it of course. (Yes, Jacobest is mine. But you want to learn to write your own. And jacobest does some tricky stuff with adaptive estimation of the derivatives. Those tools are all about adapting the finite difference to the function itself. It works nicely, but, tricky, and that would just be confusing.)


 Accepted Answer

I'm assuming rdots and rddots are symbolic functions, based on your previous posts and on the fact that you have concatenated them together as vectors (which can only be done if the functions are symbolic). Assuming this, I would do,
F=matlabFunction([rdots;rddots], 'vars',{{xG, yG, theta, vGx, vGy, omega}});
function J = numericalJacobian(F,X,h)
    arguments
        F
        X (6,1) double % numerical values for [xG; yG; theta; vGx; vGy; omega]
        h = 1e-5 % finite differencing step size
    end
    Delta = speye(6);
    J = nan(6);
    for i = 1:6
        J(:,i) = ( F(X+h*Delta(:,i)) - F(X-h*Delta(:,i)) )/(2*h); % central difference calcs
    end
end
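A usage sketch (X0 below is a placeholder value, not an actual fixed point of the system; it assumes the F returned by matlabFunction accepts a single 6-by-1 state vector, as numericalJacobian requires):

```matlab
% Hypothetical usage of numericalJacobian; X0 is a made-up point.
X0 = zeros(6,1);               % placeholder state [xG; yG; theta; vGx; vGy; omega]
J  = numericalJacobian(F, X0); % 6x6 central-difference Jacobian at X0
```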

32 Comments

Hi Matt!
Thanks for your answer!
I'm afraid I don't understand it:
What are F, X, Delta, and speye?
Also, in Matlab code, how do we differentiate one variable while holding the others constant?
My naive code doesn't work at all, it turns out.
I've already practiced with toy examples such as x^2, x+y, and x^2 + y, and played with the step-size h, and I noticed that the central-difference method gives an exact derivative, which is nice -- so long as the step-size isn't too small.
I think the biggest issue for me here is the Matlab syntax: how to differentiate one variable while holding all others constant.
I look forward to hearing from you.
Thanks again!
What are F, X, Delta, and speye?
F is the output of matlabFunction, which you are familiar with. It converts your symbolic function into a numeric function.
X is the point at which the Jacobian of F is to be calculated.
Delta is the output of the Matlab command speye which creates a (sparse) identity matrix.
Also, in Matlab code, how do we differentiate one variable while holding the others constant?
That's what this line does,
J(:,i)=( F(X+h*Delta(:,i)) - F(X-h*Delta(:,i)) )/(2*h); %central difference calcs
The i-th column J(:,i) of the Jacobian contains the derivatives with respect to the i-th unknown, with the other unknowns held constant.
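To see concretely why Delta(:,i) perturbs only one variable at a time, here is a minimal sketch (the numbers are illustrative only):

```matlab
% Delta(:,i) is the i-th column of the identity matrix, so
% X + h*Delta(:,i) bumps only the i-th state variable.
X = [1; 2; 3; 4; 5; 6];   % example state vector
h = 1e-5;
Delta = speye(6);
Xplus = X + h*Delta(:,2); % only the 2nd entry changes: [1; 2+h; 3; 4; 5; 6]
```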
Hi Matt!
This looks super cool, as soon as I understood what Delta does!
I'm off to dinner now and will resume working on this shortly.
Thanks again for your help!!
Hi Matt!
If I have a bunch of symbolic variables for physical parameters, which I want to later specify the values for in the numerical codes (so that I can change them quickly without having to use matlabFunction again), should these physical parameters be added to your 'vars' list and get passed into matlabFunction? That would be like 30 variables to add. Or, are we just passing in the dynamical state variables to matlabFunction?
Thanks!
@Matt J never used X + h*Delta, but X + h*Delta(:,i), and this is a 6x1 vector.
Oo very cool -- thanks Torsten!!
I'm trying to do things slightly differently, primarily because I like differentiating across, row by row (think gradient in component function 1, gradient in component function 2, etc.), and partly because I want to write my own code. I have this so far, but somehow J is just an identity matrix, when its expression (I think) should be massive, because zdot is massive. Note that I haven't used matlabFunction at this point yet. So far, within my workflow, I use matlabFunction last. So, perhaps I want a symbolic Jacobian, and then matlabFunction it, to convert it to a numerical Jacobian function file. Also, none of the parameters in the system are specified here in this symbolic code. I specify all parameter values in the later numerical code, where I can quickly change parameter values and not have to use matlabFunction again.
%% use equationsToMatrix to derive symbolic equations in the form Ax = b
alphas = [aGx; aGy; omegadot];
[M, b] = equationsToMatrix(eqns,alphas);
M = simplify(M);
b = simplify(b);
rdots = [vGx; vGy; omega]; % first 3 ODEs
rddots = M\b; % solves Mx = b for x, last 3 ODEs
zdot = [rdots; rddots]; % all six first-order ODEs
vars = [xG, yG, theta, vGx, vGy, omega]; % differentiation variables
h = 1e-6; % finite-differencing step size
J = nan(6); % preallocate a 6x6 Jacobian matrix
Delta = speye(6); % 6x6 sparse identity matrix
for i = 1:6
J(i,:) = ( ( zdot(i) + h*Delta(i,:) ) - ( zdot(i) - h*Delta(i,:) ) ) / (2*h);
end
Where did you lose the ODE functions f1,f2,...,f6 depending on the ODE variables xG, yG, theta, vGx, vGy, omega?
It seems that "zdot" in your code from above is f1,...,f6, but evaluated at a specific point z. That doesn't help. From this numerical (6x1) vector, it's not possible to derive the Jacobian. You need the functions f1,...,f6 depending on the (not yet specified) variables xG, yG, theta, vGx, vGy, omega that constitute your ODE system:
dxG/dt = f1(xG, yG, theta, vGx, vGy, omega)
dyG/dt = f2(xG, yG, theta, vGx, vGy, omega)
dtheta/dt = f3(xG, yG, theta, vGx, vGy, omega)
dvGx/dt = f4(xG, yG, theta, vGx, vGy, omega)
dvGy/dt = f5(xG, yG, theta, vGx, vGy, omega)
domega/dt = f6(xG, yG, theta, vGx, vGy, omega)
Only with the help of these functions is it possible to evaluate f1,f2,...,f6 at the perturbed vectors z+h*ei and z-h*ei, from which you can get an approximate Jacobian.
To clarify: zdot, i.e. f1, f2, ..., f6, are my six RHS equations of motion, which are not yet evaluated at any point. The first 3 equations of zdot are trivial, and the last 3 are massive (they exceed the line length in the Command Window).
No. If you want to determine the Jacobian numerically (e.g. using central differences), you only need the 6 ODE functions f1,...,f6 that constitute your ODE system:
dxG/dt = f1(xG, yG, theta, vGx, vGy, omega)
dyG/dt = f2(xG, yG, theta, vGx, vGy, omega)
dtheta/dt = f3(xG, yG, theta, vGx, vGy, omega)
dvGx/dt = f4(xG, yG, theta, vGx, vGy, omega)
dvGy/dt = f5(xG, yG, theta, vGx, vGy, omega)
domega/dt = f6(xG, yG, theta, vGx, vGy, omega)
Where are these functions (that you already used as input to ode45 to solve your ODE system, I guess)?
Perhaps it's because I have not yet specified inputs to these 6 functions?
I think that's what you're saying and showing, too.
I basically am suffering from only knowing how to do simulations: in the function file, I construct the system of equations and specify the inputs. Then, in the simulation file, I call that function file and pass it into ode45.
So, for this Stability workflow, it is a bit different.
To get the approximate Jacobian, my zdot has functions with no inputs specified yet.
I have to specify them, which is what you're saying, I think.
Thanks so much for your patience; I really appreciate your time.
I'm heading home now, and I'll resume work on this shortly.
should these physical parameters be added to your 'vars' list and get passed into matlabFunction?
matlabFunction could be used to create a function of two vector-valued inputs, where one set are your fixed problem parameters. You can then reduce this to a one-input function of your free variables only, as in the example below.
syms a b c x y z
Fsym=[a*x;b*y;c*z]; %symbolic function, [a,b,c] are fixed problem parameters
Fnum=matlabFunction(Fsym,'vars',{[x,y,z], [a,b,c]}) %numeric function
Fnum = function_handle with value:
@(in1,in2)[in2(:,1).*in1(:,1);in2(:,2).*in1(:,2);in2(:,3).*in1(:,3)]
f=@(xyz)Fnum(xyz,[a0,b0,c0]) %[a0,b0,c0] are numerical values assigned to [a,b,c].
That would be like 30 variables to add
The size of this problem and large number of variables makes me wonder if symbolic manipulation is appropriate for this problem at all. It looks like you could have written the whole thing as a numerical function from the get-go.
I wonder in particular why you need to develop complicated symbolic expressions for rddots=M\b. Once you have expressions for M and b separately, you could just calculate rddots numerically.
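As a sketch of that suggestion (Mfun and bfun are hypothetical names, standing in for whatever numeric functions evaluate M and b at a state z):

```matlab
% Hypothetical numeric route: evaluate M and b at a state z, then solve.
% Mfun and bfun are assumed to return numeric M (3x3) and b (3x1).
rddots = Mfun(z) \ bfun(z); % solves M*rddots = b without symbolic algebra
```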
Yes, I indeed calculate rddots = M\b numerically in a Matlab function file, within my Simulation workflow.
I just am confused about where and how to start my Stability workflow, so I've been changing things around, just feeling a little lost, at the moment. Today is like Day 2 or Day 3, so it's pretty new to me.
Why I used symbolic coding was because my equations were complicated in this sense:
The equations are coupled, and variables need to show up in both the LHS and RHS of the equations.
Something like x = x + y + z is fine in the math world, but in numerical programming, Matlab would throw an error saying x is undefined.
Would you happen to know of a solution to this problem, so that maybe I can bypass the symbolic workflow?
I should say that my simulation workflow, from symbolic to numerical, to solutions obtained from ode45 have now passed every known consistency test / check that I've performed, so I'm somewhat confident that the process is correct -- but perhaps you're right, and that I could've bypassed working symbolically.
Please feel free to share your thoughts.
Thanks!
Do you think I should go to my numerical workflow, and compute the Jacobian there, instead of computing it within my symbolic code?
Thanks!
If I simulate my system, and look at my six equations when ode45 makes a first call to the numerical file where the equations are stored, I have these values below (accelerations are pretty close to zero, because I am using a fixed point in the initial conditions). Could I then estimate a Jacobian with these values, using the central-difference method? (Sorry if this question is very basic!)
ans =
1.079334150722090
-0.646522132307390
0
0.000000000000007
-0.000000000000018
0.000000000009776
I don't think I understand the question. If you have the means of evaluating a function at different points, you obviously have the means of calculating its finite differences.

Hi Matt!

Yes, I think my error in thinking is this: trying to find a Jacobian function to apply globally, to any point. The Jacobian has to be applied locally around a point of interest — it changes, when we change the point of interest, just like the Taylor expansion around some point of interest.

Today, I'm going to write a central-difference method for a very simple system of two equations in the variables x and y, and then I'll take it from there. The math isn't hard for me, but the coding is. So, I should take a step back for now, and start slower. I think I was told to eventually use, or try, Jacobianest( ), John's highly popular method. So, I'll do that too, to compare results, once I have my own method written up correctly.
Thanks again for your help!
Hi @Matt J,
I now have this toy problem code, which I think might work, and might follow closely, if not exactly, what you did:
%% Jacobian Practice
zdot = @(z) myrhs(z); % call function zdot
InitialGuess = [3*pi/2,-3*pi/2]; % pass a good initial guess to fsolve
X = fsolve(zdot,InitialGuess) % a fixed point
h = 1e-6; % finite-differencing step-size
delta = speye(2);
J = nan(2);
for i = 1:2
J(:,i) = ( zdot( X + h*delta(:,i) ) - zdot( X - h*delta(:,i) ) ) / (2*h);
end
%% write a system of two equations in two unknowns
function zdot = myrhs(z)
x = z(1);
y = z(2);
xdot = x^2 + y;
ydot = 3*sin(x) - 3 ;
zdot = [xdot; ydot];
end
I would prefer to loop over i and j, for the ith row and jth column of the Jacobian. So, I want to see if I can do that now. Thanks!
Looping over i and j would feel most natural for me -- computing the gradient in each row for a component function -- so I want to see if I can do that.
Hi @Matt J,
Here's my central difference code. Now I know 3 ways to do it: Yours, Torsten's, and my own.
There's a 4th way to do it: Jacobianest( ), John's code, which I'll soon get to.
Thanks again!
%% Jacobian Practice
zdot = @(z) myrhs(z);
InitialGuess = [1,1]; % pass a good initial guess to fsolve
X = fsolve(zdot,InitialGuess) % a fixed point
h = .000001; % finite-differencing step-size
delta = speye(2);
J = nan(2);
e = zeros(2,1);
% for i = 1:2
%
% J(:,i) = ( zdot( X + h*delta(:,i) ) - zdot( X - h*delta(:,i) ) ) / (2*h);
%
% end
for i = 1:2
e(i) = 1;
FL = ( zdot( X + h * e ) - zdot( X - h * e ) ) / (2*h);
J(:,i) = FL;
e(i) = 0;
end
eigNum = eig(J)
%% write a system of two equations in two unknowns
function zdot = myrhs(z)
x = z(1);
y = z(2);
xdot = x + y^2 - pi;
ydot = x + y;
zdot = [xdot; ydot];
end
Take a look again at my code.
The line
X = X(:)
is important to make the X from "fsolve" a column vector. Otherwise you get nonsense when computing X + h*e.
If you had checked J against the analytical Jacobian, you would have noticed that your result for J is wrong.

Hi @Torsten!

Oo maybe I didn’t clear my workspace; I should’ve done a clear all. Then I would’ve noticed. Will resume work on this tomorrow.

Thanks for letting me know!

Hi @Torsten,
I'm back at this now. I've done a "clear all", re-ran my code, and still get the correct numerical Jacobian, without writing X = X(:). Even before you left this comment for me, I had wondered about this, by checking the output of fsolve. Whether I leave the output from fsolve as-is, or transpose it, the numerical Jacobian is the same in each case. Is that concerning?
Now that I know how to write a central difference method (probably 80% comfortable with it) evaluated at some fixed point gotten from fsolve, I'm thinking about going back to my symbolic math code and writing a central-difference code there to estimate a Jacobian -- without evaluating it at any particular point yet. I just want that symbolic Jacobian gotten from using the central-difference method, and then use matlabFunction to write the Jacobian to a numerical file. Later, I'd evaluate that numerical Jacobian function file at any fixed point that I find in my system (as I change the system's parameter values).

Would that exercise be utterly pointless? Or could there be some value in writing a central-difference method within my symbolic math code, instead of using the symbolic jacobian( ) function (which creates a massive Jacobian)? The real hope, maybe a long shot, is that a symbolic Jacobian gotten via the central-difference method gives a matrix that is easy to understand and make sense of. I wonder if it'll be just as massive as the Jacobian gotten from using the jacobian( ) MATLAB command.
I'm back at this now. I've done a "clear all", re-ran my code, and still get the correct numerical Jacobian, without writing X = X(:).
Don't you see the difference?
%% Jacobian Practice
zdot = @(z) myrhs(z);
InitialGuess = [1,1]; % pass a good initial guess to fsolve
X = fsolve(zdot,InitialGuess) % a fixed point
Equation solved. fsolve completed because the vector of function values is near zero as measured by the value of the function tolerance, and the problem appears regular as measured by the gradient.
X = 1×2
-2.3416 2.3416
h = .000001; % finite-differencing step-size
delta = speye(2);
J = nan(2);
e = zeros(2,1);
% for i = 1:2
%
% J(:,i) = ( zdot( X + h*delta(:,i) ) - zdot( X - h*delta(:,i) ) ) / (2*h);
%
% end
for i = 1:2
e(i) = 1;
FL = ( zdot( X + h * e ) - zdot( X - h * e ) ) / (2*h);
J(:,i) = FL;
e(i) = 0;
end
J
J = 2×2
    1.0000   -4.6833
    1.0000    1.0000
X = X(:);
for i = 1:2
e(i) = 1;
FL = ( zdot( X + h * e ) - zdot( X - h * e ) ) / (2*h);
J(:,i) = FL;
e(i) = 0;
end
J
J = 2×2
    1.0000    4.6833
    1.0000    1.0000
%% write a system of two equations in two unknowns
function zdot = myrhs(z)
x = z(1);
y = z(2);
xdot = x + y^2 - pi;
ydot = x + y;
zdot = [xdot; ydot];
end
I think my first fixed point that I've evaluated is a stable one!
Would that exercise be utterly pointless?
I don't understand why you make things that complicated. Use the functions for your ODE system that you took as input to ode45 and apply the central-differencing code from above to get the Jacobians in your fixed points.
Yes, I've done that now and think I have a stable fixed point.
The hope was due to the small chance of getting a manageable-looking symbolic Jacobian, to see if I could learn anything from it. That chance is slim-to-none, I think -- unfortunately.
Goodnight, talk soon!
Thanks for all of your help!

Sign in to comment.

More Answers (2)

What do we think of it? I'm sorry, but blech!
Why? Don't number all of your variables! Don't create long lists of numbered variables, then build a matrix like that. SHIVER. Blech squared. Hey, you asked.
Instead, learn to use arrays. Even loops. The code you do show is not complete, so we can't really know exactly what you have done, nor can we change your code in any easy way to show you how to improve it. For example, it seems that vGx, vGy, etc., are functions.
Think about how nasty code like yours would get when you actually have a problem of any serious size. An actual, real problem. As well, using those numbered variables will be hell to debug. It is easy to mistype something, and then it can be difficult to see.
How would I want to do this?
You have 3 variables. Keep them in a vector. I'll call it Vars. Inside the function, if it is an m-file function, you can unpack the vector into named variables, if that will make your code easier to read for you.
You have three functions. Create a vector of functions. I'll make up some functions.
Funs = {@(X) X(1)^2 + X(2)^2 + X(3)^2;...
@(X) sin(X(1));...
@(X) cos(X(1) + X(3))};
Next, I'll create a delta vector, allowing me to perturb any of the variables, by some small amount.
nvars = 3;
perturb = @(n,incr) ((1:nvars) == n)*incr;
So the perturb function creates a vector of length nvars, which is zero at all positions except for the indicated one. For example, perturb the third variable by an increment of 0.001. (It can handle a negative increment, too.)
perturb(3,0.001)
ans = 1×3
   1.0e-03 *
        0         0    1.0000
Now, create a Jacobian matrix.
nfuns = 3;
Jac = zeros(nfuns,nvars);
incr = 0.001;
X0 = [1 2 4]; % point around which we will create the jacobian matrix
for ifun = 1:nfuns
for ivar = 1:nvars
Jac(ifun,ivar) = (Funs{ifun} (X0 + perturb(ivar,incr)) - Funs{ifun} (X0 + perturb(ivar,-incr)))/(2*incr);
end
end
Jac
Jac = 3×3
    2.0000    4.0000    8.0000
    0.5403         0         0
    0.9589         0    0.9589
The code is fairly simple. It is easy to see what was done. If there is a problem in the code, you need only verify ONE line of code. Everything else is obvious. And even that line of code is easy to read, since I created the perturb function, which makes it clear how it works.
Finally, I should check: is that a good estimate of the Jacobian matrix? This is an important step, always. Verify that your code works. Don't just trust that it looks right. Every time I do that, I'm wrong. (I could tell some painful autobiographical stories...) But that means you should always check your code as you go along, one code fragment at a time. Then when you put it all together, it works.
x = sym('x',[1,3]);
funsyms = [subs(Funs{1} (x));subs(Funs{2} (x));subs(Funs{3} (x))];
fjac = jacobian(funsyms,x)
fjac =
[         2*x1,    2*x2,         2*x3]
[      cos(x1),       0,            0]
[-sin(x1 + x3),       0, -sin(x1 + x3)]
vpa(subs(fjac,x,X0),5)
ans =
[     2.0,    4.0,     8.0]
[  0.5403,      0,       0]
[ 0.95892,      0, 0.95892]
And that seems perfect. Again, the trick is to get used to working with vectors. Avoid those lists of numbered variables. That is a place you don't want your code to be going. Just imagine what your code would look like if you had a hundred variables.
Better code of course, would be a function, that would take a vector of functions, an increment, and a point to evaluate those derivatives at. I might do that by just encapsulating the code I wrote above into a single function. Could I have written it without the loops? Surely yes. But the loops are easy to read and write. Loops are not a terrible thing in MATLAB. And code that lacks loops might be more complicated, harder to read, harder to debug.

2 Comments

Hi John!
Thanks so much for taking the time and care to write this answer!
I was told to use your method, eventually, but that for now, it would be good for me to write my own central-difference method.
I look forward to using your code!
Thanks again for your help!

Hi John!

As of tonight, I was able to write my own central-difference method! Now I know four ways to do it: Matt J’s, Torsten’s, my own, and I will soon get to yours! I notice two things:

1. The numerical Jacobian agrees with the exact Jacobian to 6, 7, or maybe 8 digits after the decimal point. Presumably, your Jacobianest code is more accurate.

2. Using a smaller step-size doesn’t necessarily mean that the numerical Jacobian agrees better with the exact Jacobian.

So, those are my two observations for now.
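Observation 2 is a known trade-off: below some h, floating-point cancellation in the subtraction dominates the truncation error, so accuracy degrades again. One quick way to see it, using sin, whose derivative is known exactly (a sketch, not part of the original thread code):

```matlab
% Sweep h for the central difference of sin at x = 1; the error is
% smallest near h ~ eps^(1/3) ~ 6e-6, and grows again for smaller h.
x = 1;
for h = 10.^(-2:-2:-12)
    d   = ( sin(x+h) - sin(x-h) ) / (2*h);
    err = abs(d - cos(x));
    fprintf('h = %8.0e   error = %8.1e\n', h, err);
end
```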

Just wanted to say thanks again for your time and help!

Goodnight!


It cannot be correct because F1,...,F6 are scalars, not functions.
And even if F1,...,F6 were functions, the 2*h expression would have to be bracketed when calculating the Jacobian elements:
/ (2*h)
instead of
/ 2*h
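The reason the brackets matter: in MATLAB, / and * have equal precedence and evaluate left to right. A tiny demonstration with toy numbers:

```matlab
% d / 2*h means (d/2)*h, not d/(2*h).
d = 12; h = 0.001;
wrong = d / 2*h    % (12/2)*0.001 = 0.006
right = d / (2*h)  % 12/0.002    = 6000
```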

1 Comment

Hi Torsten!
Yes, I also have indexing errors.
For an array, F1( ... ) would index into F1; I mistakenly used ( ) to supply input arguments.
What do you think is a good way for me to learn to write my own central-difference method in Matlab?
For instance, is there a good code online that I can glance at, to have an idea of how I should write my own code?
Thanks!



Release: R2024b
Asked: 29 Apr 2025
Edited: 1 May 2025
