From: <HIDDEN>
Newsgroups: comp.soft-sys.matlab
Subject: Re: Quadratic Cost Function x^T Q x
Date: Fri, 21 May 2010 05:48:04 +0000 (UTC)
Organization: Imperial College London
Lines: 49
Message-ID: <ht56qk$qce$>
References: <ht3lnt$92i$> <ht3oj8$gpr$> <ht3tl0$mbi$> <ht3vtl$o1d$> <ht41ak$at$> <ht42pg$8he$> <ht495s$eq2$> <ht4al2$jcc$> <ht4bq9$4t9$> <ht4em4$853$> <ht4feg$q4b$> <ht4urj$p85$>
Reply-To: <HIDDEN>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Trace: 1274420884 27022 (21 May 2010 05:48:04 GMT)
NNTP-Posting-Date: Fri, 21 May 2010 05:48:04 +0000 (UTC)
X-Newsreader: MATLAB Central Newsreader 1192337
Xref: comp.soft-sys.matlab:637920

"Matt J " <mattjacREMOVE@THISieee.spam> wrote in message <ht4urj$p85$>...
> "Jason" <> wrote in message <ht4feg$q4b$>...
> > > You mean x is a vector of homogeneous coordinates!?!
> > I agree with what you say: "for any solution x, another solution is c*x for any scalar c." I ran into exactly this, but what can I do about it?
> > 
> ======================
> Well, you've already answered that question. By applying the constraint
> x(1)^2+x(2)^2=1, for example, you not only get rid of the continuous 1D space of non-unique solutions c*x, but also exclude the degenerate solution x=0 from the search.
> I'm emphasizing this because you still seem to think the constraint is unnecessary, when in fact it has very important consequences for the performance of the minimization algorithm.
> > I am reading about how to do this using polyfit, but I have no idea how to start. As I said, I want to compute the common tangent to a set of ellipses, which is solved when x'Qx = 0.
> ===============
> No, this is the first time you've explicitly said anything about multiple ellipses. However, now that I understand the problem, I have a few more options to recommend.
> First, you can divide the problem into a minimization over two separate regions:
> (1) The region where we constrain x(3)=0, i.e., where the tangent line passes through the origin.
> (2) The region where we constrain x(3)=1, i.e., the region where the tangent line does not pass through the origin. Note that this is equivalent to minimizing over the entire region x(3)~=0 because in homogeneous coordinates x(3)~=0 and x(3)=1 form an equivalence class.
> Once you've found the minimum in each case, you can pick the one with the least value of J.
> Once we've constrained x(3) either to 0 or 1, there are a few consequences:
> (I) The cost function J(x) reduces to a 2D function of x(1) and x(2)
> (II) It becomes relatively easy to search the 2D space for the global solution by parametrizing the space as 
> x(1) = t*cosd(alpha) 
> x(2) = t*sind(alpha)
> For each fixed alpha, the cost function J reduces to a 4th-order (quartic) 1D polynomial in t, which can therefore be minimized rapidly, analytically, and globally in t. I can do this in a loop over a finely sampled range of angles alpha, say alpha=0:0.1:180, until I find a good approximation of the global minimum over all alpha and t, and hence over all
> x(1) and x(2). This should be fairly quick, considering how easy it is to minimize 1D polynomials.
> A related approach, but slightly less robust, would be to code your own version of the coordinate descent algorithm. In this algorithm, you alternately minimize over each
> x(i) while the other x(j) are held fixed. Here again, J reduces to a quartic polynomial in x(i), which can be globally/analytically minimized in each iteration. Because you are globally minimizing at each step, you have a better than average chance of not getting stuck in a local minimum. This approach can be applied either to the original 3D problem or to one of the above reductions to 2D, although the latter would be better for reasons already discussed.
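To make the coordinate-descent variant above concrete: with all but one coordinate held fixed, each x'*Qk*x is quadratic in x(i), so the cost is quartic in x(i) and can be minimized analytically via the roots of its derivative. The following is my own rough sketch, not Matt's code; I assume the cost has the form J(x) = sum_k (x'*Qk*x)^2 with symmetric 3x3 conic matrices Qk, and show the x(3)=1 branch with two placeholder example matrices.

```matlab
% Coordinate descent sketch (my own illustration, not Matt's code).
% Assumed cost: J(x) = sum_k (x'*Qk*x)^2, Qk symmetric 3x3, x(3) fixed to 1.
% Example Qk: dual conics of two unit circles centered at (0,0) and (4,0);
% their common tangent lines include y = 1 and y = -1.
Qs = {diag([-1 -1 1]), [15 0 4; 0 -1 0; 4 0 1]};
x  = [1; 1; 1];                       % starting point, x(3) fixed to 1
for sweep = 1:20
    for i = 1:2
        idx = setdiff(1:3, i);        % coordinates held fixed this step
        p = zeros(1,5);               % quartic in x(i), highest power first
        for k = 1:numel(Qs)
            Q = Qs{k};
            a = Q(i,i);                        % x'*Q*x = a*x(i)^2 ...
            b = 2*Q(i,idx)*x(idx);             %        + b*x(i)
            d = x(idx)'*Q(idx,idx)*x(idx);     %        + d
            p = p + conv([a b d], [a b d]);    % square and accumulate
        end
        % Global 1D minimum: evaluate p at its real stationary points
        ts = roots(polyder(p));
        ts = real(ts(abs(imag(ts)) < 1e-9));
        [~, j] = min(polyval(p, ts));
        x(i) = ts(j);
    end
end
```

For the example data this converges to x = [0; +/-1; 1], i.e., one of the two common tangents y = -1 or y = 1.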

Dear Matt, 

Your reply was excellent. This is exactly what I have implemented now.
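In case it helps anyone else reading the thread, here is a boiled-down sketch of the alpha sweep (my own simplification, not my production code). I assume the cost is J(x) = sum_k (x'*Qk*x)^2 over symmetric 3x3 conic matrices Qk, and show only the x(3)=1 branch; the two example matrices below are placeholders of my own.

```matlab
% Alpha sweep over x = [t*cosd(alpha); t*sind(alpha); 1] (simplified sketch).
% Example Qk: dual conics of two unit circles centered at (0,0) and (4,0),
% whose common tangents include y = 1 and y = -1.
Qs = {diag([-1 -1 1]), [15 0 4; 0 -1 0; 4 0 1]};
bestJ = inf;  bestX = [];
for alpha = 0:0.1:180
    c = cosd(alpha);  s = sind(alpha);
    % For this alpha, each x'*Qk*x = a*t^2 + b*t + d, so J(t) is a quartic.
    p = zeros(1,5);                         % quartic coefficients in t
    for k = 1:numel(Qs)
        Q = Qs{k};
        a = [c s]*Q(1:2,1:2)*[c; s];
        b = 2*[c s]*Q(1:2,3);
        d = Q(3,3);
        p = p + conv([a b d], [a b d]);     % (quadratic)^2, accumulated
    end
    % Minimize the quartic analytically via the real roots of its derivative
    ts = roots(polyder(p));
    ts = real(ts(abs(imag(ts)) < 1e-9));
    for t = ts(:).'
        if polyval(p, t) < bestJ
            bestJ = polyval(p, t);
            bestX = [t*c; t*s; 1];
        end
    end
end
```

For the example matrices the sweep finds bestJ = 0 at alpha = 90, t = +/-1, i.e., the tangent lines y = 1 and y = -1, as expected.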

Many thanks.

Kind regards,