(a-b)^2 + (c-d)^2 + (e-f)^2 = 0 is differentiable only because it is smoothly invertible, in the sense that it can be translated into a series of variable reductions. Inequalities cannot be inverted that way. You cannot even code a > 0 invertibly -- if you could, then c > d could be coded as c - d - delta_c = 0 together with however you coded delta_c > 0.
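To make the first point concrete, here is a minimal Python sketch (names residual and grad are my own, just for illustration) of why the sum-of-squares encoding is friendly to a gradient-based solver: the single scalar equation vanishes exactly when all three equalities hold, and its gradient is a polynomial, hence smooth everywhere.

```python
def residual(a, b, c, d, e, f):
    # Single scalar equation encoding three equalities simultaneously:
    # each square is nonnegative, so the sum is zero exactly when
    # a == b, c == d, and e == f.
    return (a - b)**2 + (c - d)**2 + (e - f)**2

def grad(a, b, c, d, e, f):
    # Hand-computed gradient; every component is a polynomial, so the
    # encoding is continuously differentiable everywhere -- the property
    # that the Heaviside trick below will fail to have.
    return (2*(a - b), -2*(a - b),
            2*(c - d), -2*(c - d),
            2*(e - f), -2*(e - f))
```

For example, residual(1, 1, 2, 2, 3, 3) is 0 and the gradient there is all zeros, while any violated equality makes the residual strictly positive.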
Unless, that is, you are okay with coding Heaviside functions, in which case delta_c > 0 translates to Heaviside(delta_c) - 1 = 0, after having defined Heaviside(0) as 0 (Heaviside(0) does not really have a fixed value; one common convention says Heaviside(0) = 1/2).
But diff(Heaviside(delta_c), delta_c) is Dirac(delta_c), which is a distribution rather than an ordinary function -- it does not take a particular value at 0 and is certainly not continuously differentiable. I would not consider it suitable for use in this situation, but perhaps the theory of GRG is more flexible than I am.
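A quick numerical illustration of why that derivative is unusable for a solver (plain Python, using the Heaviside(0) = 0 convention from above): a symmetric finite difference at 0 does not settle toward any value, it grows like 1/(2h).

```python
def heaviside(x):
    # Heaviside with the convention H(0) = 0, as in the text, so that
    # heaviside(delta) - 1 == 0 encodes the strict inequality delta > 0.
    return 1.0 if x > 0 else 0.0

# Symmetric finite-difference "derivative" at 0 blows up as h shrinks:
# distributionally the derivative is Dirac, not a number a solver can use.
for h in (1e-1, 1e-3, 1e-6):
    fd = (heaviside(h) - heaviside(-h)) / (2 * h)
    print(h, fd)  # fd grows like 1/(2h)
```

Away from 0 the derivative is identically 0, which is just as useless: a gradient-based method sees either no slope or an unbounded one.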