Minimum Euclidean distance to a general surface is a somewhat nasty problem to solve. With ONE independent variable, IF the function is a polynomial one, there is a solution, of sorts. Not trivial, but a solution. If the surface is non-polynomial, it gets nasty. And in multiple dimensions, things can get messy too. Let's see what happens with a one-variable problem.
I'll pick a random cubic polynomial. Not even something where I'll pick the coefficients myself.
format long g
coef = randn(1,4)
0.318765239858981 -1.30768829630527 -0.433592022305684 0.34262446653865
I'll use these as coefficients of a cubic polynomial, to be evaluated by polyval.
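As an aside, the same evaluation can be sketched outside MATLAB too; here in Python with NumPy, using the coefficients printed above (np.polyval expects the same highest-power-first ordering as polyval):

```python
import numpy as np

# Coefficients from the randn draw above, highest power first.
coef = np.array([0.318765239858981, -1.30768829630527,
                 -0.433592022305684, 0.34262446653865])

# Evaluate the cubic at a few sample points, as polyval(coef, x) would.
x = np.array([0.0, 1.0, 2.0])
y = np.polyval(coef, x)
print(y)
```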
Now, suppose we choose some arbitrary point in the plane.
xy = randn(1,2)*2
What is the point of closest approach on the curve? Be careful: it may not be the point you expect. :)
How would I solve for the projection in terms of minimum Euclidean distance? I would solve for the point that minimizes the squared distance function

D2(x) = (P(x) - xy(2))^2 + (x - xy(1))^2

as a function of x, where P(x) is the cubic above. Do that by differentiating, and then searching for the roots of the resulting polynomial. So the square of the Euclidean distance in the (x,y) plane is just:
syms x
P = poly2sym(coef, x);
D2 = (P - xy(2))^2 + (x - xy(1))^2;
xroots = double(solve(diff(D2,x)))
0.167082630638436 + 0i
2.91225594186202 + 0i
4.72410209755873 + 0i
-0.48309085270852 - 1.29552403846027i
-0.48309085270852 + 1.29552403846027i
If we ignore the complex roots, and evaluate the squared distance at each root,
double(subs(D2, x, xroots))
12.8937799521223 + 0i
50.8355152126218 + 0i
5.09171123781745 + 0i
6.70892471381755 + 7.41419202614555i
6.70892471381755 - 7.41419202614555i
the real root with the minimum distance (squared) in the (x,y) plane is the third root. (Note that there will ALWAYS be at least one real solution to this problem. I'll leave the simple proof of that claim to the interested student.)
As you can see in a plot with axis equal, the connecting line is orthogonal to the curve, as it must be for a projected point of minimum distance.
So not trivial to solve for the point of minimum Euclidean distance, but not terribly difficult either, at least in one dimension. With very little extra thought, I could have written it as a call to roots, and not gone the symbolic route at all.
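That roots-based alternative can be sketched as follows, here in Python with NumPy for concreteness. The query point xy below is made up, since the random draw in the text is not shown; everything else follows the scheme above: build the squared-distance polynomial, differentiate, take the real roots, and keep the one of minimum distance.

```python
import numpy as np

# Cubic from the session above; xy is a hypothetical query point.
coef = np.array([0.318765239858981, -1.30768829630527,
                 -0.433592022305684, 0.34262446653865])
xy = np.array([1.5, -2.0])

# D2(x) = (P(x) - y0)^2 + (x - x0)^2, built as polynomial coefficients.
Pm = coef.copy()
Pm[-1] -= xy[1]                    # P(x) - y0
D2 = np.polymul(Pm, Pm)            # (P(x) - y0)^2, degree 6
D2[-3] += 1.0                      # + x^2
D2[-2] -= 2.0 * xy[0]              # - 2*x0*x
D2[-1] += xy[0]**2                 # + x0^2

# Differentiate and use roots instead of a symbolic solve.
r = np.roots(np.polyder(D2))
r = r[np.abs(r.imag) < 1e-8].real  # discard the complex roots

# Keep the real root of minimum squared distance.
xbest = r[np.argmin(np.polyval(D2, r))]
ybest = np.polyval(coef, xbest)

# Sanity check: the chord from xy is orthogonal to the curve tangent there.
tangent = np.array([1.0, np.polyval(np.polyder(coef), xbest)])
chord = np.array([xbest - xy[0], ybest - xy[1]])
print(xbest, ybest, tangent @ chord)
```

Note the orthogonality check: at a stationary point of D2, (x - x0) + (P(x) - y0)*P'(x) = 0, which is exactly the dot product printed above.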
With two independent variables, though, we will end up with two multivariate polynomial equations in two unknowns. A solution for the minimum distance will again always exist, but we will not have recourse to a tool like roots at all. And in general there will again be multiple solutions, so we would need to find the one with the smallest distance; solving for a zero of the gradient only yields a stationary point.
So possible, but not perfectly trivial. And you will not be able to compute a solution in a vectorized form.
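To make that concrete, here is a rough numerical sketch in Python with SciPy. The surface f and the query point p are invented for illustration (they are assumptions, not from the text), and scipy.optimize.minimize stands in for whatever local solver you prefer: minimize the squared distance from several starting points, and keep the best stationary point found.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical surface z = f(x, y) and 3-D query point (made up for
# this example, not taken from the text).
f = lambda x, y: x**2 - 0.5*x*y + 0.3*y**3
p = np.array([1.0, -0.5, 2.0])

def d2(v):
    # Squared Euclidean distance from p to the surface point (x, y, f(x, y)).
    x, y = v
    return (x - p[0])**2 + (y - p[1])**2 + (f(x, y) - p[2])**2

# The gradient has multiple zeros in general, so start from several points
# and keep the smallest local minimum found. No vectorized shortcut here.
best = None
for start in [(-2.0, -2.0), (0.0, 0.0), (2.0, 2.0), (1.0, -0.5)]:
    res = minimize(d2, start)
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```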