Asked by Del
on 30 Jan 2013

I am trying to find the optimal Lagrange multipliers for this problem:

min 100*(V4 - V2 + (V1 - V3)^2)^2 + (V3 - V1 + 1)^2

s.t. [V5 - (V1 - V3)*(V2 - V4) + 1; V3 - V1 + V6 - (V2 - V4)^2; V1 - V3 + V7 - 1/2] = 0

[V1; V2; V3; V4; V5; V6; V7] >= 0

The optimal minimizer that I am getting is:

V =

0.5000 2.0687 0.0000 0.0687 -0.0000 4.5000 0.0000

MATLAB is giving me the Lagrange multipliers:

lambda.eqnonlin=

1.0e+03 *

-0.7000 0.0000 1.7510

lambda.lower=

1.0e+03 *

0 0 0 0 0.7000 0 1.7510

However, when I take the gradient of the Lagrangian function at the optimal solution V, the answer is not 0!

Any idea why?


Answer by Matt J
on 30 Jan 2013

**Here is what was written: "fmincon stopped because the size of the current search direction is less than twice the default value of the step size tolerance and constraints are satisfied to within the default value of the constraint tolerance."**

which means you didn't converge with respect to the first-order optimality measure. Your objective is a variant of the Rosenbrock function, so presumably it's supposed to be hard to converge to a proper solution. Try increasing MaxIter to something ridiculously large and make sure you get exitflag = 1.
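In code, that suggestion might look like this (a sketch only; the specific limits and tolerance values are assumptions, not values tested on this problem):

```matlab
% Raise the iteration and function-evaluation budgets together
% (MaxIter is not the only cap fmincon enforces), and tighten TolX
% so the solver does not stop early on a small search step:
options = optimset('MaxIter', 1e5, 'MaxFunEvals', 1e5, 'TolX', 1e-12);
```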

Del
on 30 Jan 2013

Here is what I am typing:

options = optimset( 'MaxIter', Inf); V0=zeros(7,1);lb=zeros(7,1);

[V,fval,exitflag,output,lambda,grad] = fmincon(@function_TEST2_V,V0,[],[],[],[],lb,[],@constraints_TEST2_V)

I am not sure if that is what I am supposed to do; however, it is not making any difference. The exit flag is still 4.
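For reference, here is a sketch of what the two function files might contain, reconstructed from the problem statement in the question (the actual files were not posted, so treat these implementations as assumptions):

```matlab
% function_TEST2_V.m -- Rosenbrock-like objective from the question
function f = function_TEST2_V(V)
    f = 100*(V(4) - V(2) + (V(1) - V(3))^2)^2 + (V(3) - V(1) + 1)^2;
end
```

```matlab
% constraints_TEST2_V.m -- nonlinear constraints from the question
function [c, ceq] = constraints_TEST2_V(V)
    c = [];  % no nonlinear inequality constraints
    ceq = [V(5) - (V(1) - V(3))*(V(2) - V(4)) + 1;
           V(3) - V(1) + V(6) - (V(2) - V(4))^2;
           V(1) - V(3) + V(7) - 1/2];
end
```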


## 5 Comments

## Matt J


Floating point precision? Inexact convergence? What did you get for the Lagrangian gradient, and what code did you use to calculate it?

## Del


I used the gradient of the Lagrangian formula:

gradient_f(V) + (lambda.eqnonlin')*Jacobian_h(V) + lambda.lower'*(-I)

where h is the vector function of the equality constraints; and I is the identity matrix.
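As a sketch, that check can be written in column-vector form like this (grad_f and Jacobian_h are assumed helper functions, not from the original post, returning the 7-by-1 analytic gradient and the 3-by-7 Jacobian of the equality constraints):

```matlab
% Column-vector form of the Lagrangian gradient; the lambda.lower'*(-I)
% term for the bounds V >= 0 becomes a plain subtraction of lambda.lower:
gradL = grad_f(V) + Jacobian_h(V)' * lambda.eqnonlin - lambda.lower;
norm(gradL)   % should be near zero at a true KKT point
```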

For LAMBDA = lambda.eqnonlin and MU = lambda.lower, it is giving me the answer:

grad_f+((LAMBDA')*Jacobian_h)-MU' =

ans =

which, as you can see, is not close to the zero vector (within any reasonable tolerance).

## Matt J


What solver are you using? Are you sure that the optimization succeeded/completed? What exitflag did it return? Are you sure your gradient and Jacobian functions are correct (it would help to show them)? Did the optimization use them or did you use the default finite difference differentiation?

## Del


I used fmincon, starting from V0 = zeros(7,1); it returned exitflag 4. I checked the gradient and Jacobian functions by hand, and they are correct. I used fmincon only to find the minimizer V and the optimal Lagrange multipliers; I computed the gradient and Jacobian separately so that I could plug them into the Lagrangian-gradient formula, and I was hoping to get 0.
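One way to quantify how far the returned point is from satisfying the KKT conditions is fmincon's own first-order optimality measure in the output structure (a sketch, assuming the same call as shown earlier in the thread):

```matlab
[V,fval,exitflag,output,lambda] = fmincon(@function_TEST2_V, V0, ...
    [],[],[],[], lb, [], @constraints_TEST2_V);
% exitflag 4 means the step-size (TolX) test fired, not the optimality
% test; a large firstorderopt confirms the KKT conditions are not met:
disp(output.firstorderopt)
```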

## Del


Here is what was written: "fmincon stopped because the size of the current search direction is less than twice the default value of the step size tolerance and constraints are satisfied to within the default value of the constraint tolerance."

and

grad_f_V'=

Jacobian_h_V =