
Thread Subject:
Unoptimisationable

Subject: Unoptimisationable

From: aaron brocq

Date: 19 Apr, 2009 20:45:46

Message: 1 of 26

I'm very much stuck with this problem; I have looked high and low for at least a month now but cannot find it. I am looking for an equation whereby one of BFGS, DFP or Steepest Descent cannot optimise but the remaining two will.

I then have to prove this fact using Matlab. It would help me hugely if anyone knows of any solutions.

Thanks for your help!

Aaron

Subject: Unoptimisationable

From: Image Analyst

Date: 19 Apr, 2009 21:22:01

Message: 2 of 26

aaron brocq:
I don't know, but I like your subject line!
Why are you asking for this? Usually we just want to solve a problem and find a way that solves it, not one that doesn't. Can you give some background? (e.g. is this homework or idle curiosity?)

Subject: Unoptimisationable

From: Matt

Date: 20 Apr, 2009 18:27:01

Message: 3 of 26

aaron brocq <arun.september@gmail.com> wrote in message <32065160.57252.1240173976942.JavaMail.jakarta@nitrogen.mathforum.org>...
> I'm very much stuck with this problem; I have looked high and low for at least a month now but cannot find it. I am looking for an equation whereby one of BFGS, DFP or Steepest Descent cannot optimise but the remaining two will.
------------

What does it mean to optimize an "equation"? Does it mean that your objective function must be linear least squares? Also, what step size rules are these 3 algs to be run with?

Subject: Unoptimisationable

From: aaron brocq

Date: 20 Apr, 2009 21:12:04

Message: 4 of 26

Hi ImageAnalyst,
I thought the post title would get people's attention; guess it worked! I need to know this for a project I'm doing, and I was given the hint that it would be worth big marks if I could find it.

''What does it mean to optimize an "equation"? Does it mean that your objective function must be linear least squares? Also, what step size rules are these 3 algs to be run with?''

Matt, there are no conditions on this objective function, other than it be unconstrained.

Thanks guys.

Aaron

Subject: Unoptimisationable

From: John D'Errico

Date: 20 Apr, 2009 21:48:06

Message: 5 of 26

aaron brocq <arun.september@gmail.com> wrote in message <28114932.2595.1240261955314.JavaMail.jakarta@nitrogen.mathforum.org>...
> Hi ImageAnalyst,
> I thought the post title would get people's attention; guess it worked! I need to know this for a project I'm doing, and I was given the hint that it would be worth big marks if I could find it.
>
> ''What does it mean to optimize an "equation"? Does it mean that your objective function must be linear least squares? Also, what step size rules are these 3 algs to be run with?''
>
> Matt, there are no conditions on this objective function, other than it be unconstrained.
>

No. You misunderstand. You cannot optimize an equation.

http://en.wikipedia.org/wiki/Equation

See that a characteristic of an equation is the presence of a relational operator, generally an equality. So optimizing an equation makes no sense.

Do you perhaps wish to optimize a function?

John

Subject: Unoptimisationable

From: aaron brocq

Date: 20 Apr, 2009 22:15:29

Message: 6 of 26

That is correct, John; I meant optimise a function.
Sincere apologies.
Aaron

Subject: Unoptimisationable

From: Bruno Luong

Date: 20 Apr, 2009 22:49:01

Message: 7 of 26

aaron brocq <arun.september@gmail.com> wrote in message <32065160.57252.1240173976942.JavaMail.jakarta@nitrogen.mathforum.org>...
> I'm very much stuck with this problem; I have looked high and low for at least a month now but cannot find it. I am looking for an equation whereby one of BFGS, DFP or Steepest Descent cannot optimise but the remaining two will.
>
> I then have to prove this fact using Matlab. It would help me hugely if anyone knows of any solutions.

You cannot *PROVE* anything valuable, because the problem is incompletely specified. Let me explain:

Fact 1: If the starting point is in the basin of attraction of a local minimum, all three methods will perform OK, converging to the very same minimum sooner or later.

Fact 2: every convex function has the entire space as its basin of attraction!

Thus: the only way to trick one of these methods is to build a function with two or more local minima and artificially trap that method in a "wrong" basin. This is possible because the three methods produce three different descent directions, and we can build an artificial wall to isolate the current point of one method (peak any of them). But the wall has to be tuned to where the starting point is in order to work as expected.

Therefore this does not PROVE anything, because of Fact 1.

Who gave you this silly assignment? Sorry, I cannot resist.

Bruno

Subject: Unoptimisationable

From: Bruno Luong

Date: 20 Apr, 2009 22:55:03

Message: 8 of 26

Sorry, should read:
> (pick any of them).

Bruno

Subject: Unoptimisationable

From: Matt

Date: 21 Apr, 2009 05:04:01

Message: 9 of 26

aaron brocq <arun.september@gmail.com> wrote in message <28114932.2595.1240261955314.JavaMail.jakarta@nitrogen.mathforum.org>...
>
> Matt, there are no conditions on this objective function, other than it be unconstrained.
-------

But you still didn't answer my question about what step size rule you are meant to assume.

It's an important detail, because if you are allowed to assume a constant stepsize at all iterations, the problem becomes a lot easier. In particular, we know that Newton's method with a constant stepsize can converge for functions with unbounded curvature whereas steepest descent with constant stepsize may not be able to. It probably wouldn't be much more work to establish this for quasi-Newton methods as well.

As an example, consider the scalar function f(x)=x^(3/2).

For steepest descent, the stepsize at a point x has to satisfy at the very least

stepsize<4*sqrt(x)/3

merely to ensure monotonic descent of the objective function. Thus, a constant stepsize will never work if the iterations are to monotonically approach the global min at x=0.

However, Newton's method will converge to x=0 for any stepsize<0.5.
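
A minimal sketch of this comparison (my own construction; the starting point, stepsize, and iteration count are arbitrary example values):

% Constant-stepsize steepest descent vs. Newton's method on f(x) = x^(3/2)
grad = @(x) (3/2)*sqrt(x);    % f'(x); real only for x >= 0
hess = @(x) (3/4)./sqrt(x);   % f''(x)

x_sd = 1;  x_nw = 1;          % common starting point
s = 0.4;                      % constant stepsize, s < 0.5

for k = 1:50
    x_sd = x_sd - s*grad(x_sd);              % overshoots past 0, after which
                                             % sqrt() goes complex: breakdown
    x_nw = x_nw - s*grad(x_nw)/hess(x_nw);   % simplifies to x <- (1 - 2*s)*x
end
fprintf('steepest descent: %s   Newton: %g\n', num2str(x_sd), x_nw)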

Subject: Unoptimisationable

From: Matt

Date: 21 Apr, 2009 05:23:02

Message: 10 of 26

"Matt " <xys@whatever.com> wrote in message <gsjk41$mjq$1@fred.mathworks.com>...

> For steepest descent, the stepsize at a point x has to satisfy at the very least
>
> stepsize<4*sqrt(x)/3
>
> merely to ensure monotonic descent of the objective function. Thus, a constant stepsize will never work if the iterations are to monotonically approach the global min at x=0.
-------------

This was a little imprecise. I should have said "for any given constant stepsize, there are initial points from which convergence cannot occur" for steepest descent. In particular, from initial points x0 satisfying

abs(x0)=9*(stepsize^2)/16

the minimum will never be reached. The algorithm will oscillate between the two points x0 satisfying the above.
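
A quick numerical check of this (standalone; I write the objective as abs(x)^(3/2) so that both points +/-x0 are in its domain, and the stepsize is an arbitrary example value):

% Verify the two-point oscillation of constant-stepsize steepest descent
s  = 0.5;
x0 = 9*s^2/16;                          % claimed oscillation point
g  = @(x) (3/2)*sqrt(abs(x)).*sign(x);  % gradient of abs(x)^(3/2)
x1 = x0 - s*g(x0);                      % lands exactly at -x0
x2 = x1 - s*g(x1);                      % returns exactly to +x0
fprintf('x0 = %g, x1 = %g, x2 = %g\n', x0, x1, x2)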

Subject: Unoptimisationable

From: aaron brocq

Date: 21 Apr, 2009 11:37:31

Message: 11 of 26

> But you still didn't answer my question about what step size rule you are meant to assume.
> In particular, we know that Newton's method with a constant stepsize can converge for functions with unbounded curvature whereas steepest descent with constant stepsize may not be able to. It probably wouldn't be much more work to establish this for quasi-Newton methods as well.

Matt, I have just been told to compare quasi-Newton optimisation methods for various functions. Any variables, such as step size etc., are up to me.

> Fact 1: If the starting point is in the basin of attraction of a local minimum, all three methods will perform OK, converging to the very same minimum sooner or later.
> Thus: the only way to trick one of these methods is to build a function with two or more local minima and artificially trap that method in a "wrong" basin. This is possible because the three methods produce three different descent directions, and we can build an artificial wall to isolate the current point of one method (pick any of them). But the wall has to be tuned to where the starting point is in order to work as expected.

Regarding Fact 1, Bruno, I was told by my senior lecturer that the opposite was possible and that there are functions whereby one of the methods will not optimise.
Do you have an example I could test?

Aaron

Subject: Unoptimisationable

From: Bruno Luong

Date: 21 Apr, 2009 11:56:01

Message: 12 of 26

aaron brocq <arun.september@gmail.com> wrote in message <20007910.5671.1240313881491.JavaMail.jakarta@nitrogen.mathforum.org>...

>
> Regarding Fact 1, Bruno, I was told by my senior lecturer that the opposite was possible and that there are functions whereby one of the methods will not optimise.
> Do you have an example I could test?
>

Once upon a time, I built a simple 2D example where the same problem, solved with the same optimizer and started from the same point, converged to different local minima. I can hear you ask: what changed, then? Only the Hilbertian norm of the parameter space (or, in simplistic terms: the scaling of the parameters).

It is not hard to build one: just take a look at what the minimizers are doing and put a "wall" in the objective function to stop the current point from crossing back into the basin of the global minimum. A sketch of this construction is below.
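
For example, a hypothetical 1D illustration of the idea (the function and all constants are arbitrary choices, not from any toolbox):

% A tilted double well plus a narrow Gaussian bump acting as the "wall".
% The global minimum is near x = -1, a local minimum is near x = +1, and
% the bump at x = 0.5 blocks iterates in the right basin from crossing back.
f = @(x) (x.^2 - 1).^2 + 0.3*x + 2*exp(-((x - 0.5)/0.1).^2);
fplot(f, [-2 2])   % visualize the two basins and the wall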

Sorry, but I find this kind of work is not worth spending time on, at least mine. It does not bring any valuable information about the algorithms, besides the fact that minimization is delicate work and needs care in every aspect. This is of course my own opinion.

Bruno

Subject: Unoptimisationable

From: John D'Errico

Date: 21 Apr, 2009 11:58:02

Message: 13 of 26

aaron brocq <arun.september@gmail.com> wrote in message <20007910.5671.1240313881491.JavaMail.jakarta@nitrogen.mathforum.org>...
> > But you still didn't answer my question about what step size rule you are meant to assume.
> > In particular, we know that Newton's method with a constant stepsize can converge for functions with unbounded curvature whereas steepest descent with constant stepsize may not be able to. It probably wouldn't be much more work to establish this for quasi-Newton methods as well.
>
> Matt, I have just been told to compare quasi-Newton optimisation methods for various functions. Any variables, such as step size etc., are up to me.
>
> > Fact 1: If the starting point is in the basin of attraction of a local minimum, all three methods will perform OK, converging to the very same minimum sooner or later.
> > Thus: the only way to trick one of these methods is to build a function with two or more local minima and artificially trap that method in a "wrong" basin. This is possible because the three methods produce three different descent directions, and we can build an artificial wall to isolate the current point of one method (pick any of them). But the wall has to be tuned to where the starting point is in order to work as expected.
>
> Regarding Fact 1, Bruno, I was told by my senior lecturer that the opposite was possible and that there are functions whereby one of the methods will not optimise.
> Do you have an example I could test?
>
> Aaron

Perhaps your lecturer was referring to the inability of steepest descent (coupled with an inexact line search) to handle problems with long, curved, narrow valleys. Any problem where the eigenvalues of the Hessian matrix near the solution differ greatly in magnitude will be a difficulty.

This is a known problem for steepest descent.
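
The classic example of such a valley is Rosenbrock's banana function. A minimal MATLAB version (a test sketch, not a toolbox function):

% Rosenbrock's function: a long, curved, narrow valley with the minimum
% at [1, 1]; the Hessian eigenvalues there differ by orders of magnitude.
rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
% e.g. y = fminunc(rosen, [-1.9, 2]);  % a steepest-descent-style method
%                                      % crawls along the valley floor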

John

Subject: Unoptimisationable

From: Matt

Date: 21 Apr, 2009 12:59:02

Message: 14 of 26

aaron brocq <arun.september@gmail.com> wrote in message <20007910.5671.1240313881491.JavaMail.jakarta@nitrogen.mathforum.org>...

> Regarding Fact 1, Bruno, I was told by my senior lecturer that the opposite was possible and that there are functions whereby one of the methods will not optimise.
-----------

And you've been told this by me as well. And I've given you an example: the function f(x)=x^(3/2) will not be optimized by steepest descent when you use a constant stepsize.

Subject: Unoptimisationable

From: aaron brocq

Date: 21 Apr, 2009 20:56:04

Message: 15 of 26

Appreciate your help, guys.

I have a function which I am trying to minimise:

f = x(1)^(3/2);

When applied to the fminunc command I get the following problem:

Line search cannot find an acceptable point along the current search direction.

y = 0

If I then try to optimise it using the DFP method, the following output comes up:

                                                      First-order
 Iteration  Func-count       f(x)      Step-size       optimality
     0           2              1                          1.5
     1           4              0       0.666667           0.0001

Line search cannot find an acceptable point along the current search direction.

This is also the same when using BFGS and steepest descent. So it seems none of the three can solve this!?

Subject: Unoptimisationable

From: aaron brocq

Date: 22 Apr, 2009 13:25:44

Message: 16 of 26

I'm having greater problems now.

function f=Fun6a(x)
f=x(1)^(3/2);
end

is my M-file. Using fminunc, I get the following output:
y=fminunc(@Fun6a,1);

??? Error using ==> feval
Undefined function or method 'Fun6a' for input arguments of type 'double'.

Error in ==> fminunc at 218
   f = feval(funfcn{3},x,varargin{:});

And again this is the same for BFGS, DFP and steepest descent. Does anyone have any ideas on how to correct this?

Subject: Unoptimisationable

From: Matt

Date: 22 Apr, 2009 14:44:02

Message: 17 of 26

aaron brocq <arun.september@gmail.com> wrote in message <17002811.8040.1240347416664.JavaMail.jakarta@nitrogen.mathforum.org>...

> This is also the same when using BFGS and steepest descent. So it seems none of the three can solve this!?
------

I made a mistake: the function should be f(x)=abs(x)^(3/2).

The line search operation is probably failing because, without the abs(), the function f(x)=x^(3/2) is undefined for x<0.

I just want to point out, however, that if you use a proper line search algorithm, all 3 algorithms will probably work fine. My proposal was that you don't do line search, but instead use a constant stepsize.
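
For reference, a corrected M-file could look something like this (returning the gradient as a second output is my own addition, so the same file can also drive a hand-coded constant-stepsize loop; fminunc does not require it here):

function [f, g] = Fun6a(x)
% Corrected objective: abs() keeps f real-valued for x < 0.
f = abs(x(1))^(3/2);
if nargout > 1
    g = (3/2)*sqrt(abs(x(1)))*sign(x(1));   % analytic gradient of f
end
end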

Subject: Unoptimisationable

From: aaron brocq

Date: 22 Apr, 2009 14:51:05

Message: 18 of 26

I've now been told by my senior lecturer that this is not the correct function, and that I should be looking at test functions or something linked to Rosenbrock's function.

Subject: Unoptimisationable

From: Matt

Date: 22 Apr, 2009 16:03:03

Message: 19 of 26

aaron brocq <arun.september@gmail.com> wrote in message <25125789.11455.1240411895845.JavaMail.jakarta@nitrogen.mathforum.org>...
> I've now been told by my senior lecturer that this is not the correct function, and that I should be looking at test functions or something linked to Rosenbrock's function.

The problem is no longer clear. Since you haven't said what is incorrect, in your instructor's view, about the proposal f(x)=abs(x)^(3/2), there isn't much to go on in terms of suggesting an alternative.

Subject: Unoptimisationable

From: aaron brocq

Date: 22 Apr, 2009 23:50:00

Message: 20 of 26

> I just want to point out, however, that if you use a proper line search algorithm, all 3 algorithms will probably work fine. My proposal was that you don't do line search, but instead use a constant stepsize.

Using fminunc on the corrected function:

y = fminunc(@Fun6a, 1);
Warning: Gradient must be provided for trust-region method;
  using line-search method instead.
> In fminunc at 281
Line search cannot find an acceptable point along the current search direction.

Using BFGS, DFP and Steepest Descent yields the following:

y = fminunc(@Fun6a, 1, options);
Warning: Gradient must be provided for trust-region method;
  using line-search method instead.
> In fminunc at 281

                                                      First-order
 Iteration  Func-count       f(x)      Step-size       optimality
     0           2              1                          1.5
     1           4              0       0.666667           0.0001

Line search cannot find an acceptable point along the current search direction.

This doesn't seem correct, as each method has the exact same outcome?

Subject: Unoptimisationable

From: aaron brocq

Date: 23 Apr, 2009 11:39:52

Message: 21 of 26

> I just want to point out, however, that if you use a proper line search algorithm, all 3 algorithms will probably work fine. My proposal was that you don't do line search, but instead use a constant stepsize.

The quasi-Newton methods all work for abs(x)^(3/2) with line search, as you suggested. How do I prevent them from using line search? Will using GradObj be sufficient?

Subject: Unoptimisationable

From: Matt

Date: 23 Apr, 2009 14:40:18

Message: 22 of 26

aaron brocq <arun.september@gmail.com> wrote in message <21387750.277.1240486823479.JavaMail.jakarta@nitrogen.mathforum.org>...
> > I just want to point out, however, that if you use a proper line search algorithm, all 3 algorithms will probably work fine. My proposal was that you don't do line search, but instead use a constant stepsize.
>
> The quasi-Newton methods all work for abs(x)^(3/2) with line search, as you suggested. How do I prevent them from using line search? Will using GradObj be sufficient?

I was thinking that you would code the algorithms yourself. It's a 1D problem. You should be able to do it in about 5-10 lines.

Subject: Unoptimisationable

From: aaron brocq

Date: 23 Apr, 2009 17:43:01

Message: 23 of 26

I'm not fluent in how to do this; part of my project was also to learn how to use Matlab.

Subject: Unoptimisationable

From: Steven Lord

Date: 24 Apr, 2009 14:45:44

Message: 24 of 26


"aaron brocq" <arun.september@gmail.com> wrote in message
news:6768720.1645.1240508612321.JavaMail.jakarta@nitrogen.mathforum.org...
> I'm not fluent in how to do this; part of my project was also to learn how to use Matlab.

In my opinion, one of the best ways to learn how to do something is to do
it. Now for some things that's not a great option (brain surgery? Start
off with simulations of the real thing.) but for MATLAB programming it is a
pretty good idea. I'm assuming from the other posts you've made that you
have test cases you've been working with, so you've got a way to try to
locate any bugs in your code. If you really get stuck, post what you've got
to the newsgroup, ask a specific question about where you're stuck, and
indicate that you're rewriting the wheel to get practice in wheel
construction (so people don't suggest "Forget writing your own, just use
$FUNCTION") and you should receive some assistance.

--
Steve Lord
slord@mathworks.com

Subject: Unoptimisationable

From: Matt

Date: 24 Apr, 2009 15:43:02

Message: 25 of 26

aaron brocq <arun.september@gmail.com> wrote in message <6768720.1645.1240508612321.JavaMail.jakarta@nitrogen.mathforum.org>...
> I'm not fluent in how to do this; part of my project was also to learn how to use Matlab.

You're fluent enough to write functions to be passed to fminunc. What I'm proposing shouldn't require more. It should look something like this:

x = 1;             % starting point (pick any value)
stepsize = 0.1;    % constant stepsize (pick a value to experiment with)
NumIterations = 100;

for i = 1:NumIterations
    [fval, gradient] = Fun6a(x);   % Fun6a must return the value and gradient
    x = x - stepsize*gradient;     % constant-stepsize descent update
end

Subject: Unoptimisationable

From: aaron brocq

Date: 24 Apr, 2009 23:55:44

Message: 26 of 26

Much appreciated, Matt, I'll give that a go.

>In my opinion, one of the best ways to learn how to do something is to do
>it.

I understand your point, Steve. The problem is that time is against me: my deadline is this Monday and I'm putting the final touches to my project. The help I asked for would make a vast difference to my grade, so at this point the quicker I can put a function to use, the better. I'm sure you understand.
But thanks anyway.
