When is minimum p-norm solution independent of p?
I have created three optimization models for the same objective function but with different norms (L1, L2, Linf), subject to the same constraints, as shown below.
‖F_1*w + F^0‖_L1 = min (first objective function)
‖F_1*w + F^0‖_L2 = min (second objective function)
‖F_1*w + F^0‖_L∞ = min (third objective function)
subject to the same constraints.
The results of the first two models (based on L1 and L2) are identical, while the result of the third model (based on Linf) is very close to those of L1 and L2.
My question is: is there any explanation for the identical results of L1 and L2, and for the very close result of Linf?
Thanks
Accepted Answer
Bruno Luong
on 6 Dec 2020
Edited: Bruno Luong
on 6 Dec 2020
This code is a 2 x 2 toy example with random f0. It shows that, most of the time, the solution is driven by the linear constraints and all three norms give [1;1] as the solution. [1;1] is a sharp vertex of the feasible region, so the solver converges there regardless of the selected norm.
Other times they give different results (when the solution is not a vertex of the feasible region). As long as the dimension of the span of the active constraints is less than the dimension of the problem (the size of x), the solution retains some degrees of freedom, and the objective norm effectively matters (e.g., the simple example of mean/median/(min+max)/2).
x = optimvar('x',2,1,'LowerBound',0);
A = [-7  8;
      8 -7];
b = [1; 1];
f0 = 2*rand(2,1);

L1prob = optimproblem();
L1prob.Constraints.ineq = A*x <= b;
L1prob.Objective = sum(f0-x);        % linear L1 surrogate (abs cannot be used in the problem-based framework)
L1sol = solve(L1prob);
L1sol.x

L2prob = L1prob;
L2prob.Objective = sum((f0-x).^2);   % squared L2 norm
L2sol = solve(L2prob);
L2sol.x

L10prob = L1prob;
L10prob.Objective = sum((f0-x).^10); % ~Linf: high even power, since max/abs cannot be translated by the MATLAB problem-based framework
L10sol = solve(L10prob, struct('x',[0;0]));
L10sol.x
So yes, it's possible. Whether this explanation applies to YOUR problem is hard to tell with so few details.
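The mean/median/(min+max)/2 remark above can be checked numerically: for an unconstrained 1-D fit, the L2, L1, and Linf minimizers are the mean, the median, and the midrange of the data, which generally differ. A minimal sketch (in Python/NumPy rather than the thread's MATLAB, purely for illustration) that minimizes each norm over a dense grid of candidate centers:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 10.0])  # sample data, skewed on purpose
c = np.linspace(-5.0, 15.0, 200001)  # dense grid of candidate centers

l1   = np.abs(x[:, None] - c).sum(axis=0)   # L1 objective: sum |x_i - c|
l2   = ((x[:, None] - c) ** 2).sum(axis=0)  # L2 objective: sum (x_i - c)^2
linf = np.abs(x[:, None] - c).max(axis=0)   # Linf objective: max |x_i - c|

print(c[l1.argmin()])    # somewhere in [1, 2]: the median interval of x
print(c[l2.argmin()])    # ~3.25: the mean of x
print(c[linf.argmin()])  # ~5.0: the midrange (min+max)/2 of x
```

With constraints added, any of these minimizers can get pinned to the same vertex of the feasible set, which is exactly the situation described above.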
13 Comments
Ahmed Galal
on 6 Dec 2020
Ahmed Galal
on 6 Dec 2020
Matt J
on 6 Dec 2020
But I have linear and nonlinear constraints.
But are any of the nonlinear constraints active at the solution you're getting?
Ahmed Galal
on 6 Dec 2020
Bruno Luong
on 6 Dec 2020
Edited: Bruno Luong
on 6 Dec 2020
Whether the constraint is linear or non-linear is irrelevant to the explanation I suggest. The important thing is that the solution can get stuck at a vertex. The sharpness is more relevant, meaning the linearized (active) constraints are strongly mutually anti-correlated.
Ahmed Galal
on 6 Dec 2020
Edited: Ahmed Galal
on 6 Dec 2020
Then they are not active, and may as well not be there. The solution you're getting will remain the solution and satisfy the nonlinear constraints even if you tell the solver to ignore them.
But Bruno is right. Whether your constraints are linear or nonlinear, they can still determine the solution independently of the objective norm.
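The point about inactive constraints can be illustrated with a toy 1-D check (a sketch in Python/NumPy, not tied to the thread's models): a constraint that is not active at the minimizer can be dropped without changing the solution, while an active one pins the solution to its boundary regardless of the objective.

```python
import numpy as np

t = np.linspace(-1.0, 2.0, 30001)  # candidate grid, step 1e-4
obj = (t - 0.5) ** 2               # unconstrained minimizer is t = 0.5

# Constraint t <= 10 is never active: the constrained and
# unconstrained minimizers coincide.
feas = t <= 10.0
sol_inactive = t[feas][obj[feas].argmin()]

# Constraint t <= 0.2 is active: the solution is pinned to the boundary.
feas = t <= 0.2
sol_active = t[feas][obj[feas].argmin()]

print(sol_inactive)  # ~0.5, same as with no constraint at all
print(sol_active)    # ~0.2, determined by the constraint, not the objective
```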
Bruno Luong
on 6 Dec 2020
Edited: Bruno Luong
on 6 Dec 2020
Comment restored (please do not delete a comment that has been addressed):
Ahmed Galal: "Does this mean that the norm in the objective function is useless?"
NOOOOOO, sometimes it does and sometimes it does not.
From what we have shown you, nothing supports such a black-and-white conclusion.
It depends on the constraints you impose, which of them are active, their sharpness, and the location of F0. It can give the same solution or not.
Ahmed Galal
on 6 Dec 2020
Ahmed Galal
on 6 Dec 2020
Ahmed Galal
on 7 Dec 2020
Bruno Luong
on 7 Dec 2020
Edited: Bruno Luong
on 7 Dec 2020
1. If you have a nice (convex) objective function f(x), define g(x) := 2*f(x), and search for
xminf = argmin f(x)
xming = argmin g(x)
by whatever method. Is xminf = xming? What are the minimum values of f and g at the argmin?
2. Do you know you can check the minimum value using the second output when calling a MATLAB minimizer (fmincon is given as an example, but any MATLAB solver has the same output convention)?
[xmin, fmin, ...] = fmincon(...)
These two should give you hints to answer your question.
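The point being hinted at can be verified with a tiny grid check (a Python/NumPy sketch, not the thread's code): scaling an objective by 2 leaves the argmin unchanged but doubles the reported minimum value, so comparing minimum objective values across differently scaled objectives says nothing, while comparing minimizers is fine.

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 20001)  # grid with step 1e-3
f = (t - 3.0) ** 2 + 1.0             # convex objective, argmin at t = 3
g = 2.0 * f                          # same objective scaled by 2

print(t[f.argmin()], f.min())  # argmin ~3.0, minimum value 1.0
print(t[g.argmin()], g.min())  # same argmin ~3.0, minimum value 2.0
```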
Bruno Luong
on 7 Dec 2020
Edited: Bruno Luong
on 7 Dec 2020
Another possibility: the optimizers fail during initialization to find a first feasible point and simply return the same initial guess, which is why the results are identical.
6 Comments
Ahmed Galal
on 7 Dec 2020
Edited: Ahmed Galal
on 7 Dec 2020
Bruno Luong
on 7 Dec 2020
Hence you can conclude that the L2 minimization is not working: it fails, does not converge, or converges to a local minimum.
Ahmed Galal
on 7 Dec 2020
The standard deviation for the L2 norm must always be less than the STD for the L1 norm!
That might be true if the elements of F0(i) are uniformly distributed, if F1=eye(N), and if you dropped the constraints, but I don't see why that would be true otherwise. The L2-norm you are using is not weighted by the distribution of F0 as far as you've told us.
Ahmed Galal
on 8 Dec 2020
Edited: Ahmed Galal
on 8 Dec 2020
I'm just saying I'm struggling to see a statistical argument that would guarantee that STD for the L2 norm would be less than for the L1 norm. Assuming F0 is supposed to be Gaussian distributed N(-F1*wtrue,sig*I), then it is known that a minimum variance unbiased estimator of wtrue is a linear function of F0, but the min. L2 norm estimator is only a linear function of F0 in the unconstrained case, so I don't know how we extend things to the constrained case. Also, F1 is rank deficient in this case, so I don't immediately see how that will affect things either.