
Thread Subject:
convex optimization problem

Subject: convex optimization problem

From: kumar vishwajeet

Date: 24 Aug, 2012 08:25:08

Message: 1 of 7

Hi,
I have an objective function of the following type:
J = x log(x/k) + cx
Subject to : f1(x) = alpha and f2(x) = beta
where alpha, beta, k and c are constants and x is a vector to be solved for. So, it is a convex function. Which is the best optimization routine in MATLAB to solve such problems? I'm currently using fmincon and it is slower than a snail.

Thanks.

Subject: convex optimization problem

From: Matt J

Date: 24 Aug, 2012 12:12:07

Message: 2 of 7

"kumar vishwajeet" wrote in message <k17dp4$sol$1@newscl01ah.mathworks.com>...
> Hi,
> I have an objective function of the following type:
> J = x log(x/k) + cx
> Subject to : f1(x) = alpha and f2(x) = beta
> where alpha, beta, k and c are constants and x is a vector to be solved for. So, it is a convex function. Which is the best optimization routine in MATLAB to solve such problems. I'm currently using fmincon and it is slower than snail.
==========

Is c a vector? I.e., is cx the same as dot(c,x)?

Also, you haven't said what f1(x) and f2(x) look like. Unless they are linear, the problem is not a convex program overall.

One reason things might be slow is that x log(x/k) is not differentiable at x = 0, which violates fmincon's smoothness assumptions if x = 0 is in the feasible set. Do you also have a constraint x >= epsilon > 0?
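A numeric illustration of that boundary behavior (not part of the thread; plain Python, with k = 1 as an illustrative value): the derivative of x*log(x/k) is log(x/k) + 1, which diverges to minus infinity as x approaches 0 from the right.

```python
import math

def obj(x, k=1.0):
    # one coordinate of the objective: x*log(x/k)
    return x * math.log(x / k)

def deriv(x, k=1.0):
    # analytic derivative: log(x/k) + 1, valid only for x > 0
    return math.log(x / k) + 1.0

# the slope diverges as x approaches the boundary x = 0 from the right
for x in (1e-2, 1e-6, 1e-12):
    print(x, deriv(x))
```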

Subject: convex optimization problem

From: Alan_Weiss

Date: 24 Aug, 2012 12:51:05

Message: 3 of 7

On 8/24/2012 4:25 AM, kumar vishwajeet wrote:
> Hi,
> I have an objective function of the following type:
> J = x log(x/k) + cx Subject to : f1(x) = alpha and f2(x) = beta
> where alpha, beta, k and c are constants and x is a vector to be
> solved for. So, it is a convex function. Which is the best
> optimization routine in MATLAB to solve such problems. I'm currently
> using fmincon and it is slower than snail.
>
> Thanks.
This is a convex problem only under restrictions on the functions f1 and
f2; in general there can be multiple local minima, depending on those
functions.

But as far as your question goes, the only solver in the toolbox to
handle this type of problem is fmincon. If you are dissatisfied with its
speed, you might want to try some of the suggestions in
http://www.mathworks.com/help/toolbox/optim/ug/br44iv5-1.html#br44pif
You might also want to try various algorithms, such as sqp.

Good luck,

Alan Weiss
MATLAB mathematical toolbox documentation

Subject: convex optimization problem

From: kumar vishwajeet

Date: 24 Aug, 2012 21:14:07

Message: 4 of 7

Alan_Weiss <aweiss@mathworks.com> wrote in message <k17tbp$k7a$1@newscl01ah.mathworks.com>...
> On 8/24/2012 4:25 AM, kumar vishwajeet wrote:
> > Hi,
> > I have an objective function of the following type:
> > J = x log(x/k) + cx Subject to : f1(x) = alpha and f2(x) = beta
> > where alpha, beta, k and c are constants and x is a vector to be
> > solved for. So, it is a convex function. Which is the best
> > optimization routine in MATLAB to solve such problems. I'm currently
> > using fmincon and it is slower than snail.
> >
> > Thanks.
> This is a convex problem only with restrictions on the functions f1 and
> f2. I mean, there can be multiple local minima, depending on the
> functions f1 and f2.
>
> But as far as your question goes, the only solver in the toolbox to
> handle this type of problem is fmincon. If you are dissatisfied with its
> speed, you might want to try some of the suggestions in
> http://www.mathworks.com/help/toolbox/optim/ug/br44iv5-1.html#br44pif
> You might also want to try various algorithms, such as sqp.
>
> Good luck,
>
> Alan Weiss
> MATLAB mathematical toolbox documentation


I'm sorry. The actual cost function is:
J = summation(i = 1:N) (x_i log(x_i/k_i) + c_i*x_i)
Subject to : x_i >= 0 and summation(i=1:N) (x_i) = 1

where x_i and c_i are scalars.
Is it a good idea to change the constraint "x_i >= 0" to "x_i >= eps", where eps is epsilon? I know that x log x is not defined at x = 0, so I intentionally set x log x = 0 at x = 0. This is valid for maximum entropy problems.
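As an aside (not part of the thread): with only the simplex constraint as stated, this is the classic maximum-entropy form, and Lagrange multipliers give a closed-form minimizer. Stationarity, log(x_i/k_i) + 1 + c_i + lambda = 0, yields x_i proportional to k_i*exp(-c_i). A plain-Python sketch, with illustrative values for k_i and c_i:

```python
import math

def closed_form(k, c):
    # stationarity of the Lagrangian: log(x_i/k_i) + 1 + c_i + lam = 0
    # => x_i = k_i*exp(-c_i) / Z, with Z chosen so that sum(x) = 1
    w = [ki * math.exp(-ci) for ki, ci in zip(k, c)]
    Z = sum(w)
    return [wi / Z for wi in w]

def J(x, k, c):
    # objective: sum_i x_i*log(x_i/k_i) + c_i*x_i
    return sum(xi * math.log(xi / ki) + ci * xi
               for xi, ki, ci in zip(x, k, c))

k = [0.5, 1.0, 2.0]   # illustrative constants, not from the thread
c = [0.1, 0.3, 0.2]
x = closed_form(k, c)
print(x, J(x, k, c))
```

Note that all x_i come out strictly positive here, so the minimizer sits in the interior where the objective is smooth.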

Subject: convex optimization problem

From: Alan_Weiss

Date: 27 Aug, 2012 12:33:14

Message: 5 of 7

On 8/24/2012 5:14 PM, kumar vishwajeet wrote:
> Alan_Weiss <aweiss@mathworks.com> wrote in message
> <k17tbp$k7a$1@newscl01ah.mathworks.com>...
>> On 8/24/2012 4:25 AM, kumar vishwajeet wrote:
>> > Hi,
>> > I have an objective function of the following type:
>> > J = x log(x/k) + cx Subject to : f1(x) = alpha and f2(x) = beta
>> > where alpha, beta, k and c are constants and x is a vector to be
>> > solved for. So, it is a convex function. Which is the best
>> > optimization routine in MATLAB to solve such problems. I'm currently
>> > using fmincon and it is slower than snail.
>> >
>> > Thanks.
>> This is a convex problem only with restrictions on the functions f1
>> and f2. I mean, there can be multiple local minima, depending on the
>> functions f1 and f2.
>>
>> But as far as your question goes, the only solver in the toolbox to
>> handle this type of problem is fmincon. If you are dissatisfied with
>> its speed, you might want to try some of the suggestions in
>> http://www.mathworks.com/help/toolbox/optim/ug/br44iv5-1.html#br44pif
>> You might also want to try various algorithms, such as sqp.
>>
>> Good luck,
>>
>> Alan Weiss
>> MATLAB mathematical toolbox documentation
>
>
> I'm sorry. The actual cost function is :
> J = summation(i = 1:N) (x_i log(x_i/k_i) + c_i*x_i)
> Subject to : x_i >= 0 and summation(i=1:N) (x_i) = 1
>
> where x_i and c_i are scalars.
> Is it a good idea to change the constraint "x_i >= 0" to "x_i >= eps",
> where eps is epsilon. I knew that x log x is not defined at x = 0. So
> I intentionally made x log x = 0 for x = 0. This is valid for maximum
> entropy problems.
When using the sqp or interior-point algorithms there is no need to set
x(i) >= eps; these algorithms obey bounds at all iterations, and your
extension of the definition to the boundary is fine.
http://www.mathworks.com/help/toolbox/optim/ug/brhkghv-11.html#br9p_ry
You might find that your problem runs faster if you provide gradients
and, for the interior-point algorithm, even a Hessian. If you have the
Symbolic Math Toolbox you can do this automatically:
http://www.mathworks.com/help/toolbox/optim/ug/brn4nh7.html#brv_i_1
But for your problem, it is easy enough to calculate the gradients by hand.
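For this cost function the hand-computed gradient is g_i = log(x_i/k_i) + 1 + c_i. A quick finite-difference spot check (not part of the thread; plain Python, with illustrative values at an interior point):

```python
import math

def grad(x, k, c):
    # analytic gradient of sum_i x_i*log(x_i/k_i) + c_i*x_i
    return [math.log(xi / ki) + 1.0 + ci for xi, ki, ci in zip(x, k, c)]

def J(x, k, c):
    return sum(xi * math.log(xi / ki) + ci * xi
               for xi, ki, ci in zip(x, k, c))

# spot-check against central differences at an interior point
k = [0.5, 1.0, 2.0]; c = [0.1, 0.3, 0.2]   # illustrative constants
x = [0.2, 0.3, 0.5]
h = 1e-6
for i, gi in enumerate(grad(x, k, c)):
    xp = x[:]; xp[i] += h
    xm = x[:]; xm[i] -= h
    fd = (J(xp, k, c) - J(xm, k, c)) / (2 * h)
    print(i, gi, fd)
```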

Good luck,

Alan Weiss
MATLAB mathematical toolbox documentation

Subject: convex optimization problem

From: Matt J

Date: 27 Aug, 2012 13:31:16

Message: 6 of 7

Alan_Weiss <aweiss@mathworks.com> wrote in message <k1fpea$e5r$1@newscl01ah.mathworks.com>...
>
> When using the sqp or interior-point algorithms there is no need to set
> x(i)>=eps; these algorithms obey bounds at all iterations, and your
> extension of the definition to the boundary is fine.
=================

I'm not so sure. The function is not differentiable at the boundary. The directional derivative goes to infinity when the boundary is approached from the right and is undefined when approached from the left. This violates a lot of textbook assumptions.

A solution might be to make the change of variables

x_i=k_i*exp(z_i)

leading to the reformulated problem


J = summation(i = 1:N) k_i*exp(z_i)*(z_i + c_i)

Subject to : summation(i=1:N) k_i*exp(z_i) = 1


This is now differentiable everywhere, with finite derivatives. It's true that this breaks convexity, but it is a monotonic change of variables, so the problem should still be unimodal (I think).
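A numeric sanity check of that substitution (not part of the thread; plain Python, with illustrative values for k_i, c_i, z_i): with x_i = k_i*exp(z_i), each original term x_i*log(x_i/k_i) + c_i*x_i indeed equals k_i*exp(z_i)*(z_i + c_i).

```python
import math

k = [0.5, 1.0, 2.0]; c = [0.1, 0.3, 0.2]   # illustrative constants
z = [-1.0, 0.25, -0.4]

def term_x(x, ki, ci):
    # original per-coordinate term
    return x * math.log(x / ki) + ci * x

def term_z(zi, ki, ci):
    # reformulated term after the substitution x = k*exp(z)
    return ki * math.exp(zi) * (zi + ci)

for ki, ci, zi in zip(k, c, z):
    xi = ki * math.exp(zi)
    print(term_x(xi, ki, ci), term_z(zi, ki, ci))
```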

Subject: convex optimization problem

From: Matt J

Date: 27 Aug, 2012 13:48:21

Message: 7 of 7

"Matt J" wrote in message <k1fsr4$rf0$1@newscl01ah.mathworks.com>...
>
> I'm not so sure. The function is not differentiable at the boundary. The directional derivative goes to infinity when approached from the right and has no definition when approached from the left. This violates a lot of textbook assumptions.
>
> A solution might be to make the change of variables
>
> x_i=k_i*exp(z_i)
>
> leading to the reformulated problem


Hmmm. On second thought, that doesn't fix things, because if x_i wants to go to 0, then z_i will want to go to -Inf. You'll need to impose lower bounds on z_i, which is just the same as if you imposed x_i>=eps.
