How do I add positivity and negativity constraints to a convex optimization problem solved with a quasi-Newton method?

I have a convex optimization problem that I solve with a quasi-Newton method: the inverse-Hessian approximation and the descent direction are computed with L-BFGS, and the step length along that direction is chosen by a line search with the Armijo rule.
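For concreteness, the line search I am using looks roughly like this (a minimal sketch; armijoStep is just an illustrative name, and c1, beta are typical values, not necessarily mine):

function [x, alpha] = armijoStep(f, x, g, d)
% Backtracking line search with the Armijo sufficient-decrease rule.
% d is the L-BFGS direction (roughly inv(H)*g), so the step is x - alpha*d.
alpha = 1; c1 = 1e-4; beta = 0.5;
fx = f(x);
while f(x - alpha*d) > fx - c1*alpha*(g'*d)
    alpha = beta*alpha;    % shrink until sufficient decrease holds
end
x = x - alpha*d;           % accept the step
end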
However, I now want to add the constraint that some of the variables must be positive and others must be negative.
How do I enforce this constraint?
I have an idea, but I am not sure it is the right way to do it. For a variable constrained to be positive I can optimize over the log of the parameter value instead, taking gradients with respect to log(param); by the chain rule,
df/d(log(param)) = param * df/d(param).
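In code, that chain-rule step would look like this (a sketch; gradParam stands for the ordinary gradient of f with respect to param). Conveniently, the same formula also covers the variables constrained to be negative if they are written as param = -exp(theta):

% param = exp(theta) for positive variables, param = -exp(theta) for
% negative ones; in both cases dparam/dtheta = param, so one formula:
theta     = log(abs(param));      % unconstrained theta-space coordinate
gradTheta = param .* gradParam;   % df/dtheta = param .* df/dparam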
Then I can use L-BFGS in this theta-space to approximate the descent direction, via the usual two-loop recursion, sketched below.
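Here is the standard two-loop recursion for reference (a sketch; it assumes at least one stored pair, with S{i} holding the theta-steps and Y{i} the gradTheta-differences, most recent last):

function d = lbfgsDirection(gradTheta, S, Y)
% L-BFGS two-loop recursion: d approximates inv(H)*gradTheta using the
% stored theta-space pairs, so the update is theta - alpha*d.
m = numel(S);
q = gradTheta;
a = zeros(m,1); rho = zeros(m,1);
for i = m:-1:1
    rho(i) = 1/(Y{i}'*S{i});
    a(i)   = rho(i)*(S{i}'*q);
    q      = q - a(i)*Y{i};
end
gamma = (S{m}'*Y{m})/(Y{m}'*Y{m});   % common initial scaling H0 = gamma*I
d = gamma*q;
for i = 1:m
    b = rho(i)*(Y{i}'*d);
    d = d + (a(i) - b)*S{i};
end
end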
Then I can use the line search to take the descent steps in this log-space, but when I check the Armijo criterion I have to evaluate the objective at the actual parameter values,
param(new) = exp(log(param) - alpha * (descent direction)).
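Concretely, the theta-space line search I have in mind is (a sketch; fParam, sgn, and thetaLineSearch are illustrative names, with sgn holding +1 or -1 per variable to encode the sign constraints):

function [thetaNew, paramNew] = thetaLineSearch(fParam, theta, gradTheta, d, sgn)
% Armijo backtracking in theta-space; the objective is always evaluated
% at the real parameters, recovered via sgn .* exp(theta).
fOld  = fParam(sgn .* exp(theta));
slope = -(gradTheta'*d);            % directional derivative along -d
alpha = 1; c1 = 1e-4; beta = 0.5;
while fParam(sgn .* exp(theta - alpha*d)) > fOld + c1*alpha*slope
    alpha = beta*alpha;
end
thetaNew = theta - alpha*d;
paramNew = sgn .* exp(thetaNew);    % signs hold by construction
end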
So my question is: the curvature pairs and gradients that L-BFGS stores are all with respect to the log(param) values, but whenever we evaluate the objective function we exponentiate to recover the actual parameter values.
Will this cause an issue, or is this not the right way to do it?
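To be explicit, the quantities I would store for L-BFGS are purely theta-space ones (again a sketch, with thetaNew and gradThetaNew being the values after the accepted step):

% Everything L-BFGS sees lives in theta-space; only objective
% evaluations map back to param-space via exp().
s = thetaNew - theta;            % step pair
y = gradThetaNew - gradTheta;    % gradient-difference pair
S{end+1} = s;  Y{end+1} = y;     % append; drop oldest when > m pairs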
