I have a function of two variables, V(x1, x2), that's something like:
syms x1 x2
V = 0.5*(x1^2 + x2^2)
The partial derivatives V_x1 and V_x2 (with respect to x1 and x2) should be
V_x1 = x1
V_x2 = x2
This can be verified by something like jacobian(V, [x1 x2]).
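For completeness, the same symbolic check can be sketched outside MATLAB, e.g. in Python with SymPy (my own translation, not part of the original setup):

```python
import sympy as sp

# Symbolic analogue of the MATLAB setup above
x1, x2 = sp.symbols('x1 x2')
V = sp.Rational(1, 2) * (x1**2 + x2**2)

# Jacobian of V with respect to [x1, x2]; expected: [x1, x2]
J = sp.Matrix([V]).jacobian([x1, x2])
print(J)  # Matrix([[x1, x2]])
```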
I'm in the situation where I only know V numerically, as a matrix of scalars sampled on a grid, for example:
[x1,x2] = meshgrid(-1:.1:1, -1:.1:1)
V = 0.5*(x1.^2+x2.^2) % used just for generating the scalar version of V
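In case the layout matters, here is the grid reproduced in NumPy (a sketch on my part; np.meshgrid with the default indexing='xy' lays out the axes the same way as MATLAB's meshgrid):

```python
import numpy as np

# NumPy analogue of the MATLAB grid; linspace avoids the floating-point
# pitfalls of a step-based range, and 21 points on [-1, 1] give spacing h = 0.1
x1, x2 = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
V = 0.5 * (x1**2 + x2**2)  # scalar samples of V on the grid

# With indexing='xy', x1 varies along columns (axis 1)
# and x2 varies along rows (axis 0)
```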
I'm interested in obtaining V_x1 and V_x2 as above. Obviously, I can no longer use the jacobian command, because V is now just a numeric array rather than a symbolic expression. I'm currently doing something like:
Vx1 = gradient(V, 1)
Vx2 = gradient(V, 2)
However, I get different numerical values than if I compute the Jacobian by hand and evaluate it on the mesh grid. If I calculate the difference between Vx1 and the exact solution (which is x1),
norm(Vx1 - x1)
I expect it to be zero, but it is not. Yet if I plot Vx1, it looks visually very similar to the exact solution.
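For reference, translating the check to NumPy (my own sketch; note that np.gradient's second positional argument is the sample spacing, not a dimension index) shows the same kind of mismatch:

```python
import numpy as np

x1, x2 = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
V = 0.5 * (x1**2 + x2**2)

# np.gradient(V, 1.0) treats 1.0 as the grid spacing and returns the
# derivatives along axis 0 (x2 direction) and axis 1 (x1 direction)
Vx2_num, Vx1_num = np.gradient(V, 1.0)

# Far from zero, mirroring the nonzero norm I see in MATLAB
print(np.linalg.norm(Vx1_num - x1))
```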
Am I calling gradient() incorrectly? How do I explain the discrepancy between its result and the analytical one?