How to minimize the 12th norm of a difference vector?

Hi everybody, I have two vectors, let's say x and y. I calculate the difference vector d = x - y. I want to minimize d through the 12-norm (Euclidean distance) of the difference vector, over 100 iterations. Is there anyone who could help me?
Kind regards, Joaquim
  7 Comments
Joaquim on 28 Feb 2017
I was confused by that too. I'm following a paper that states first the Euclidean distance, and afterwards the 12-norm. However, from what I read, if it were the Euclidean distance, there would be no need to minimize over 100 iterations? Am I correct?
John D'Errico on 28 Feb 2017
Exactly. Minimizing the Euclidean norm would simply be a linear least squares solution, i.e., the 2-norm.
Somehow, I wonder if the number 12 is a typo. Euclidean implies a 2-norm, and does so quite strongly.
At the same time, the number 12 could so easily be a typo that I have far less confidence in it. After all, why 12? Personally, I think many people would say 42 is a much better number, if you just want some number. I like 17 myself. The point is, if you are going for the 12-norm, then why not the infinity norm of the difference?
As for the iterative need of the problem, that is hard to say. It seems to imply they were using some higher order norm, since a 2-norm problem is simple to solve directly, whereas a 12-norm requires an optimization. UNLESS some people do not appreciate how to solve a simple linear least squares problem using direct methods, so maybe all they knew was how to use an iterative scheme. How can I know?
In my gut, I'd say what happened is that the authors were thinking about using a 1-norm, but maybe swapped to the idea of a 2-norm, as the more common practice. Or maybe it was just a complete typo. The issue of iteration is another puzzle that may or may not be explained away. Editors will simply not catch this kind of thing. That is why they have referees for papers.
So, UGH. Can you contact the authors? Does the paper give any indication as to why they might use a 12-norm? If not, it would be so unusual that it is suspect. Were I writing a paper, I would want to put in a serious, in-depth justification for using a 12-norm instead of the utterly common 2-norm. I also expect that any referee worth their salt would have said the same thing. Sadly, I think some journals are now less demanding, and sometimes don't even bother with the referee process. I recall being told about non-refereed journals at some time in the past, although everything I ever published went through referees. I only got one paper through in my career that did not require any revision, and that was aided by my co-author, who is pretty demanding himself.


Accepted Answer

John D'Errico on 28 Feb 2017
Edited: John D'Errico on 28 Feb 2017
This is quite confusing, because you have never explained what you really want. In your words:
"x is a weighted sum of two vectores A and B, x= weight1 *A + weigth2*B. The purpose of the minimization is to find the weights that minimize x-y."
You say that you have a vector y, and two vectors A and B. Then you need to compute the weighted linear combination x of vectors A and B that minimizes the norm (let's not worry about which norm for the moment), so
norm(x - y) = norm((w1*A + w2*B) - y)
If you really want the 12-norm here, this is a straight optimization problem, but the 12-norm is really not much different from the infinity norm. Solving an inf-norm problem reduces to a linear programming problem, whereas solving the 2-norm problem is simple linear least squares.
For example, to solve the 2-norm (i.e., Euclidean distance, which you did say above) problem, all you need do is:
W12 = [A(:),B(:)]\y(:);
weight1 = W12(1);
weight2 = W12(2);
That simple. So it is trivial to do. There are at least a half dozen other ways to compute the same set of weights in MATLAB, but the above solution is the simplest. It yields the set of weights that minimizes the 2-norm of the difference between x and y. No iterations are needed. That is the true solution, IF you really wanted a 2-norm.
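For instance, here is a minimal sketch of two of those alternatives, using pinv and lscov (both in base MATLAB); up to floating point noise, they return the same weights as the backslash solution above:
M = [A(:), B(:)];
W12_pinv = pinv(M)*y(:);     % Moore-Penrose pseudoinverse solution
W12_lscov = lscov(M, y(:));  % lscov solves the same least squares problem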
However, if you really wanted a 12-norm, and would not settle for the infinity norm, then some iterative scheme would be employed.
Do you really need to do this as a 12-norm? Why would you pick such an arbitrary value as 12? As I said, it will be relatively little different from the infinity norm. For example:
v = randn(1,100);
norm(v,12)
ans =
2.926
norm(v,inf)
ans =
2.811
Whereas the 2-norm is quite different.
norm(v,2)
ans =
9.8925
So if you insist on computing the solution under a 12-norm via some (unspecified) iterative scheme, I can show you some ways to do so. Will ANY reasonable iterative scheme suffice?
For example, let's make up some data, then I'll solve it under a 12-norm using fminsearch.
y = randn(100,1);
A = randn(100,1);
B = randn(100,1);
The 2 norm solution is:
W12_2norm = [A(:),B(:)]\y(:)
W12_2norm =
-0.0024203
0.11984
Now, create an objective function to use for an optimizer.
fun_12 = @(W12) norm((A*W12(1) + B*W12(2)) - y,12);
fun_12(W12_2norm)
ans =
2.8766
The 2-norm solution is not the minimizing set of weights for the 12-norm problem. But it will provide starting values for the optimization.
[W12_12norm,fval] = fminsearch(fun_12,W12_2norm)
W12_12norm =
0.31792
-0.11325
fval =
2.7072
So a different set of weights has been obtained, which yields a somewhat lower value for the 12-norm.
Again, the 12-norm seems a bit arbitrary to me. A different solution comes from the infinity norm.
fun_inf = @(W12) norm((A*W12(1) + B*W12(2)) - y,inf);
[W12_infnorm,fval] = fminsearch(fun_inf,W12_2norm)
W12_infnorm =
-0.0047671
-0.68259
fval =
2.7561
I need to be a bit careful there, because the inf-norm problem is really not well solved using fminsearch: the inf-norm is not a smooth, differentiable function, and fminsearch would prefer differentiability. I'd have been better off using a linprog solution for that, but since you have not asked for an inf-norm solution at all, I'm not going to go that deeply into it, especially since I have no idea what you really want.
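For the record, a minimal sketch of that linprog formulation (this assumes the Optimization Toolbox is available): introduce a bound t on the absolute residuals r = A*w1 + B*w2 - y, and minimize it, with unknowns z = [w1; w2; t].
% Inf-norm fit as a linear program: minimize t subject to
%   -t <= A*w1 + B*w2 - y <= t   (elementwise)
n = numel(y);
f = [0; 0; 1];                       % objective: minimize t
Aineq = [ A(:),  B(:), -ones(n,1);   %  r - t <= 0
         -A(:), -B(:), -ones(n,1)];  % -r - t <= 0
bineq = [y(:); -y(:)];
z = linprog(f, Aineq, bineq);
W12_infLP = z(1:2);                  % weights minimizing the inf-norm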
If you can explain more clearly what you really need, I could give a more precise answer. And there are lots of iterative ways I could have solved this problem (fminbnd, for example), but I don't see why any particular iterative solution would be better than what I've shown already.
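One aside: if the paper's "over 100 iterations" simply means the optimizer was capped at 100 iterations, that is trivial to reproduce with any of these solvers. A sketch with fminsearch:
opts = optimset('MaxIter', 100);                           % cap at 100 iterations
[W12_12norm, fval] = fminsearch(fun_12, W12_2norm, opts);  % same call as before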
  2 Comments
Joaquim on 28 Feb 2017
I'll try to be clearer. First, I have two images, A and B. From these two images, I'll obtain X through
X= weight1*A+weight2*B, with weight1+weight2=1
Then I calculate the difference
difference=abs(X-Y);
What I want to do is optimize the weights by minimizing the difference matrix. The paper I'm following is not clear about whether it is the Euclidean distance or the 12-norm, since both are mentioned. However, it always states that the minimization is done over 100 iterations. From the answers I'm getting, the Euclidean distance doesn't need the iterations.
If you could help me get possible solutions for both ways, I would be very grateful. Please notice that the sum of the weights has to be 1.
I'm sorry if I still wasn't very clear.
Much appreciation, Joaquim
John D'Errico on 28 Feb 2017
Edited: John D'Errico on 28 Feb 2017
Ok. One extra piece of information: the sum of the weights must be 1. I'd probably implement that as:
X = w*A + (1-w)*B
or it can be written as:
X = B + w*(A-B)
In either case, the constraint is built in implicitly. The former perhaps makes it more obvious what is being done, though. Whatever weight you get for w, subtract it from 1 to get the second weight.
There is no need to compute the absolute difference between them though, since any norm deals with that. The norm of the simple difference will suffice. If they are images, make sure the result is a double. And if they are image arrays, then make sure you string out the difference array as a vector, since you really want a vector norm here, not a matrix norm.
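To see why that matters, here is a small sketch with a made-up array: for a 2-D array D, norm(D) returns the largest singular value (a matrix norm), whereas norm(D(:)) is the elementwise Euclidean norm you actually want, identical to norm(D,'fro').
D = magic(4) - eye(4);  % an arbitrary example array
norm(D)                 % spectral norm: largest singular value of D
norm(D(:))              % Euclidean norm of the stacked elements
norm(D, 'fro')          % Frobenius norm, identical to norm(D(:))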
So, a simple algorithm for the 2-norm takes one line of code, assuming image arrays A, B, and Y. They will need to be doubles of course, since otherwise integer arithmetic will cause problems.
w = (A(:)-B(:))\(Y(:)-B(:));
Effectively, what I did was use the form:
X = B + w*(A-B)
Then, we will minimize the norm of the difference,
norm(Y - (B + w*(A-B)),2)
A, B and Y are known here. Only w is an unknown. The solution in MATLAB is as I wrote it.
w = (A(:)-B(:))\(Y(:)-B(:));
But that is the 2-norm solution, also known as the linear least squares solution. Note that the least squares solution has no explicit constraint on the weights. So a weight could arise that is less than zero, or greater than 1. In the latter case, the counter-weight would be negative, since they were forced to sum to 1.
For a different norm, be it 12, 17, or 42 (take your pick :)), I would do it using fminbnd.
fun = @(w) norm(Y(:) - (B(:) + w*(A(:)-B(:))),12);
w = fminbnd(fun,0,1);
weight1 = w;
weight2 = 1-w;
So minimize fun as a function of w, over the range [0,1] for w. That presumes both weights must be bounded in the interval [0,1].
Can one of the weights be greater than 1 or less than zero? I guess I should have asked that. But if you are willing to see weights that range from -1 to 2, say, while still summing to 1, then this will suffice:
w = fminbnd(fun,-1,2);
Now, how would I solve the 2-norm problem, if we added the information about bounding the weights to the interval [0,1]? Let me know if that is an issue, and I can do that for you too. (I'm mulling over at least 3 choices of algorithm for that as I write. Do you have the optimization toolbox?)
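In the meantime, here is one toolbox-free sketch for the bounded 2-norm case, assuming the weights must lie in [0,1]: the 2-norm objective is a convex parabola in the scalar w, so the bounded minimizer is just the unconstrained solution clipped to the interval.
% Bounded 2-norm solution, no toolbox required.
w_unc = (A(:)-B(:)) \ (Y(:)-B(:));  % unconstrained least squares
w = min(max(w_unc, 0), 1);          % project onto [0,1]
weight1 = w;
weight2 = 1 - w;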


More Answers (0)
