Code covered by the BSD License  

Representing Polyhedral Convex Hulls by Vertices or (In)Equalities

by Matt J

30 Mar 2011 (Updated)

Express bounded polyhedron via equalities/inequalities or vertices.


File Information
Description

This submission contains VERT2LCON and LCON2VERT, which will find the linear constraints defining a bounded polyhedron in R^n, given its vertices, or vice versa. They are extensions of Michael Kleder's VERT2CON and CON2VERT functions that can handle cases where the polyhedron is not solid in R^n. Points in a non-solid polyhedron satisfy both a set of inequalities A*x<=b and a set of equalities Aeq*x=beq. The inequalities define a solid polyhedron while the equalities define an affine set intersecting its interior (illustrated in 3D in the above screenshot). Some other refinements to the original VERT2CON and CON2VERT subroutines, meant to improve speed and reliability, have also been made.
 

SYNTAX:
 
   [A,b,Aeq,beq]=VERT2LCON(V,TOL)

   [V,nr,nre]=LCON2VERT(A,b,Aeq,beq,TOL)
 
The rows of the N x n matrix V are a series of N vertices of the polyhedron
in R^n, defined by the linear constraints
   
    A*x <= b
    Aeq*x = beq
 
For LCON2VERT, Aeq=beq=[] are the default inputs, implying no equality constraints. The output "nr" lists the rows of A,b corresponding to non-redundant inequality constraints, and "nre" lists the rows of Aeq,beq corresponding to non-redundant equality constraints.
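
A small sketch of how the index outputs can be used to prune redundant constraints (assuming nr and nre are row-index vectors into A,b and Aeq,beq respectively):

   [V,nr,nre] = lcon2vert(A,b,Aeq,beq);
   A   = A(nr,:);     b   = b(nr);      % keep only non-redundant inequalities
   Aeq = Aeq(nre,:);  beq = beq(nre);   % keep only non-redundant equalities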

It is to be emphasized that A,b must define a solid (full-dimensional) region on their own, while Aeq and beq should be used to further constrain that region to a sub-manifold of R^n, if needed. Do not use pairs of opposing inequality constraints to express a region in R^n that should be described by equality constraints.
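For instance, the line segment 0<=x1<=1, x2=0 in R^2 would be expressed with a solid pair of inequalities plus one equality (a minimal sketch, not taken from the package documentation; the vertex row order in the output is not guaranteed):

   A   = [1 0; -1 0];   b = [1; 0];    % 0 <= x1 <= 1  (solid direction)
   Aeq = [0 1];         beq = 0;       % x2 = 0, rather than x2<=0 and -x2<=0
   V   = lcon2vert(A,b,Aeq,beq)        % expected vertices: [0 0] and [1 0]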
 
The optional TOL argument is a tolerance used for rank-estimation purposes and to handle finite precision issues. Default=1e-10.
 
 
EXAMPLE:
 
Consider the 3D polyhedron defined by x+y+z=1, x>=0, y>=0, z>=0.
 
 
 >> [A,b,Aeq,beq]=vert2lcon(eye(3))
  
     A =
 
         0.4082 -0.8165 0.4082
         0.4082 0.4082 -0.8165
        -0.8165 0.4082 0.4082
 
 
     b =
 
         0.4082
         0.4082
         0.4082
 
 
     Aeq =
 
         0.5774 0.5774 0.5774
 
 
     beq =
 
         0.5774

And the reverse conversion is,

   >> V=lcon2vert(A,b,Aeq,beq)
 
          V =
  
              1.0000 0.0000 0.0000
              0.0000 0.0000 1.0000
             -0.0000 1.0000 0.0000

****
NEW as of 12/18/2013

QLCON2VERT has been added to the distribution as a quick version of LCON2VERT. QLCON2VERT needs to be passed a known point x0 in the relative interior of the polyhedron (see "help qlcon2vert"). This allows it to skip some intensive search steps sometimes needed in LCON2VERT.
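
A minimal usage sketch (this assumes x0 is passed as the first argument; check "help qlcon2vert" for the exact call signature and any additional options):

   [A,b,Aeq,beq] = vert2lcon(eye(3));
   x0 = [1/3; 1/3; 1/3];               % centroid, known to lie in the relative interior
   V  = qlcon2vert(x0,A,b,Aeq,beq);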

Acknowledgements

VERT2CON (Vertices To Constraints) and CON2VERT (Constraints To Vertices) inspired this file.

MATLAB release: MATLAB 7.11 (R2010b)
Comments and Ratings (45)
31 Jul 2014 Matt J

Cong,

Points on the boundary of the polytope are expected to violate the inequalities by small amounts due to finite precision arithmetic. When I compute the violations for your A,b data with the code below, I find that they are indeed very small, O(1e-13):

>> V=lcon2vert(A,b);
>> slacks=bsxfun(@minus, b,A*V');
>> violations = slacks(slacks<0)

violations =

1.0e-13 *

-0.1066
-0.0355
-0.5684
-0.0711
-0.0355
-0.0355
-0.5684
-0.0355
-0.2842
-0.5684
-0.1066
-0.2487
-0.3375

31 Jul 2014 Cong

I just did a test of your program, and I found that one of the extreme vertices obtained does not satisfy all the inequalities. Note that I only tested the inequality-only case (3D):
A=
-32.7680000000000 17.6400000000000 -3.20000000000000
-3.37500000000000 6.25000000000000 -1.50000000000000
17.5760000000000 2.56000000000000 2.60000000000000
79.5070000000000 10.8900000000000 4.30000000000000
32.7680000000000 -17.6400000000000 3.20000000000000
3.37500000000000 -6.25000000000000 1.50000000000000
-17.5760000000000 -2.56000000000000 -2.60000000000000
-79.5070000000000 -10.8900000000000 -4.30000000000000
b=
5
18
68
244
2
-11
-61
-219
One of the extreme vertices: [1.5530 4.9322 9.7233]
To test whether these three values satisfy all the inequalities, I multiplied A by [1.5530 4.9322 9.7233]' and compared the resulting values with the original b values, and I found that not all of them are less than the original b values, which means the inequalities are not all satisfied.

12 Mar 2014 Ivan Bogun  
19 Dec 2013 Matt J

Hi Richard,
You might already have a modification that you like, but I encourage you to look at QLCON2VERT, in my release yesterday. It allows you to supply an x0 as you asked, but it also works when there are equality constraints, i.e. x0 only has to be in the relative interior. Also, QLCON2VERT will skip the boundedness check by default, which can save a lot of time if you know a priori that the set is bounded. Finally, LCON2VERT itself should be faster now for certain problems. It now checks first whether the origin is an interior point, something it can do very quickly, before performing more complicated searches.

18 Dec 2013 Richard Katzwer

Disregard the previous question, I now see that it is easily modifiable. Simply
1. pass an extra argument x0 to con2vert
2. check whether x0 is in the polyhedron
2a. If yes, continue as normal with s = x0;
2b. Otherwise, set s = slackfun(pinv(A)*b) and proceed as if there were no x0.

17 Dec 2013 Richard Katzwer

Great software, Matt J. I've found it to be far more robust than con2vert.

Quick question, though. Suppose I have a set of linear constraints [A,b], and I *know* a point x0 that satisfies A*x0 <= b. Is there a way to modify lcon2vert so that it doesn't have to work to find an interior point and can use a user-supplied one?

01 Mar 2013 Matt J

Hi Andreas,

Thanks for your feedback.

Your example doesn't demonstrate a bug, as far as I can see. The rows of the matrix returned by lcon2vert are the vertices of the polyhedron. The vertices are not guaranteed to be listed in any particular order.

So, in your example, there is no reason that it would return eye(3), if that's what you were expecting -- only some permutation of the rows of eye(3).
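
If you want to check programmatically that two vertex lists agree up to row order and small numerical noise, something like this works (just a sketch, not part of the package):

>> [A,b,Aeq,beq] = vert2lcon(eye(3));
>> V = lcon2vert(A,b,Aeq,beq);
>> norm(sortrows(round(V*1e6)/1e6) - sortrows(eye(3)))   % ~0 when the vertex sets match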

01 Mar 2013 Andreas

Hi Matt,

first of all thanks for the great work!

It seems that the example with V=eye(3) does not work. Dimensions are swapped when using the output of vert2lcon with lcon2vert. This is what I get:

>> V=eye(3)

V =

1 0 0
0 1 0
0 0 1

>> [A,b,Aeq,beq]=vert2lcon(V)

A =

0.4082 -0.8165 0.4082
0.4082 0.4082 -0.8165
-0.8165 0.4082 0.4082

b =

0.4082
0.4082
0.4082

Aeq =

0.5774 0.5774 0.5774

beq =

0.5774

>> lcon2vert(A,b,Aeq,beq)

ans =

1.0000 0.0000 0.0000
0.0000 0.0000 1.0000
-0.0000 1.0000 0.0000

I am using R2012a. Is there something that can be done about this "bug"?

11 Dec 2012 Matt J

@Sandro: that would be a problem with the way you generate your constraints. It might be worth elaborating on how you do that in a post to the Newsgroup, or to Answers.

11 Dec 2012 Sandro

Hi Matt,

The problem is that by changing input parameters I get different sets of inequality constraints, and I don't know beforehand whether some will degenerate to equalities.

What do you think?

11 Dec 2012 Matt J

Hi Sandro,

The inequality constraints x2<=0, -x2<=0 are equivalent to x2=0. You should use the Aeq input argument to express this.

11 Dec 2012 Sandro

Hi Matt,

I just wonder whether lcon2vert can handle this case: 0<=x1<=1, x2<=0, -x2<=0, in R^2.

It seems to fail in this case. Any suggestion?

Best,
Sandro

12 Nov 2012 Matt J

If you cannot upgrade, just replace the line with

[trash,E]=max(P);

12 Nov 2012 Xu

My bad. Perhaps my Matlab version (2007) is too old.

12 Nov 2012 Xu

Syntax error in line 89..... Could you please check that?

19 Aug 2012 Matt J

Hi Cong,
No. There is no connection to Nelder-Mead that is apparent to me.

20 Jul 2012 Cong

Hi Matt,
Thanks for your explanation. I was wondering whether the algorithm is related to the simplex method proposed by Nelder and Mead?

09 Jul 2012 Matt J

Hi Cong,
Thanks for your feedback. There is no formal reference that I can give you, I'm afraid. I largely reverse-engineered Michael Kleder's code. Approximately speaking, though, it works like this: every bounded polyhedron has a dual polyhedron. The face normals of the original polyhedron are the vertices of the dual polyhedron and vice versa. Because we are given the face normals of the original polyhedron, we know the vertices of the dual. The code can use CONVHULLN to compute the dual face normals from the dual vertices. The dual face normals are the vertices of the original polyhedron.
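
For concreteness, a rough MATLAB sketch of that duality idea (it assumes the origin is an interior point, so b > 0 after any necessary shift; this is only an illustration, not the actual lcon2vert code):

   D = bsxfun(@rdivide, A, b);      % dual vertices: scaled face normals a_i/b_i
   k = convhulln(D);                % facets (simplices) of the dual polyhedron
   n = size(A,2);
   V = zeros(size(k,1),n);
   for i = 1:size(k,1)
       F = D(k(i,:),:);             % vertices of the i-th dual facet
       V(i,:) = (F\ones(n,1))';     % its hyperplane F*v = 1 gives a primal vertex v
   end
   V = unique(round(V*1e10)/1e10,'rows');   % merge duplicates from triangulated facets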

09 Jul 2012 Cong

Hi Matt,

I am sorry about my mistake, and you are right. I should only expand upper values or decrease lower values to test this program, and then it gives me the results quickly. To get a better understanding of this program, could you provide some references for the algorithm used?
Best regards,
Cong

09 Jul 2012 Matt J

Hi Cong,

I disagree that your second data set defines a non-empty polyhedron. If it were non-empty, then I should be able to sum together any 2 of your inequalities and obtain another true inequality. But if I add your 7th and 17th inequality together, I obtain

0*x(1)+0*x(2) <= -9.5583e+005

which clearly cannot be satisfied by any point [x(1), x(2)].
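
To spell that out with the numbers (taken from rows 7 and 17 of the A,b data in your post below):

>> A7  = [ 0.2287 -0.0668];  b7  = -968495.00;
>> A17 = [-0.2287  0.0668];  b17 =   12664.93;
>> A7 + A17     % = [0 0]
>> b7 + b17     % = -9.5583e+05, i.e. the summed inequality reads 0 <= -9.5583e+05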

08 Jul 2012 Cong

Matt, I agree with you that my previous data does not define a polytope. However, I think it also fails to give a polytope when I use another data set that definitely defines a polyhedron:
A=[0.1882 0.0936
0.2080 0.0853
0.2435 0.0393
0.2488 -0.0170
0.2482 -0.0211
0.2444 -0.0366
0.2287 -0.0668
0.2005 -0.0892
0.1939 -0.0918
0.1342 -0.0913
-0.1882 -0.0936
-0.2080 -0.0853
-0.2435 -0.0393
-0.2488 0.0170
-0.2482 0.0211
-0.2444 0.0366
-0.2287 0.0668
-0.2005 0.0892
-0.1939 0.0918
-0.1342 0.0913];

b=[-7495.00
-1048038.00
-11224.49
-11567.01
-10924.00
-11159.33
-968495.00
-9129.12
-7411.05
-6410.33
9718.79
1358994.00
14616.50
1512609.00
14285.24
14592.97
12664.93
11938.08
9650.65
8312.30];

If I do not let it run for a sufficiently long time (which would be quite long), the error is "Unable to locate a point near the interior of the feasible region". So is it possible that, if I just let it run, it will provide a polyhedron sooner or later?

Best regards,
Cong

08 Jul 2012 Matt J

Cong - it appears that the polyhedron is empty and therefore the "Initializers" cannot find a point inside it. I intend to modify the code soon so that it will bail out with V=[] when this occurs.

08 Jul 2012 Cong

Hi Matt,
Thanks for providing the code.
However, the program runs for a long (unknown) time if I use the data:
A=[0.188 0.094
0.208 0.085
0.243 0.039
0.249 -0.017
0.248 -0.021
0.244 -0.037
0.229 -0.067
0.201 -0.089
0.194 -0.092
0.134 -0.091
-0.188 -0.094
-0.208 -0.085
-0.243 -0.039
-0.249 0.017
-0.248 0.021
-0.244 0.037
-0.229 0.067
-0.201 0.089
-0.194 0.092
-0.134 0.091];
b=[-8495.00
-11480.40
-13224.50
-11567.00
-10924.00
-11159.30
-12684.95
-9129.12
-7411.05
-7410.33
9718.79
11589.90
17616.50
15126.10
14285.20
14593.00
16664.90
11938.10
9650.65
8312.30];
In addition, I find that "Initializer2" always spends much time on this calculation. Do you have any ideas for making it more efficient?

Best regards,
Cong

09 Apr 2012 Deepanshu Vasal  
16 Feb 2012 Matt J

@Deepanshu - actually the 2nd case doesn't appear to work for me, either. I think I've found the problem, but it will take me a few days until I can fix it.

15 Feb 2012 Deepanshu Vasal

Matt, thanks for your reply. Having an equality constraint was not the problem. If I take
A = - [
0 0 0 0 0 1 -2 1 ;
0 0 1 -2 1 0 0 0 ;
eye(8);
-eye(8)];
and
b = - [zeros(2,1); -10*ones(16,1)];

it does not work but if I reduce redundant dimensions and put

A = - [
0 0 0 1 -2 1 ;
1 -2 1 0 0 0 ;
eye(6);
-eye(6)];
and
b = - [zeros(2,1); -10*ones(12,1)];

it works.

But I used the lrs package at http://cgm.cs.mcgill.ca/~avis/C/lrs.html and I got the vertices for my problem.

Anyway, I appreciate your effort and your timely response. Good luck!

Deepanshu

14 Feb 2012 Matt J

@Deepanshu - Are you sure the first 8 constraints are not meant to be equality constraints?

In any case, I answered you in an earlier email, but here's the recap: the region described by your inequality constraints appears to be non-solid. Such inequality constraints are numerically unstable, which is why all MATLAB Optimization Toolbox functions ask you to represent your linear constraints in the form:

Aineq*x<=bineq
Aeq*x=beq

where Aineq*x<=bineq describes a SOLID region.

LCON2VERT works similarly. You must rewrite your non-solid region as above and invoke lcon2vert in 4-argument form:

lcon2vert(Aineq,bineq,Aeq,beq)
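
A hypothetical illustration of that rewrite (the index vector ke below is only a placeholder; it should contain whichever rows of A,b really are implicit equality constraints in your problem):

>> ke    = [1 2];                     % placeholder: rows assumed to be implicit equalities
>> Aeq   = A(ke,:);   beq   = b(ke);  % promote them to equality constraints
>> Aineq = A;  Aineq(ke,:) = [];      % the remaining, solid inequalities
>> bineq = b;  bineq(ke)   = [];
>> V = lcon2vert(Aineq,bineq,Aeq,beq);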

13 Feb 2012 Deepanshu Vasal

Hi Matt,

I am trying to use your code with my data set, but I am getting a singular-matrix warning, and the generated vertices are out of bounds (NaN, Inf).
My data set is as follows:

A = - [

0 1 0 -2 0 1 0 0 ;
0 0 0 0 0 1 -2 1 ;
0 0 1 -2 1 0 0 0 ;

-1 1 1 -1 0 0 0 0 ;
0 0 0 -1 1 1 -1 0 ;


0 0 0 1 -1 0 -1 1 ;
1 -1 0 -1 1 0 0 0 ;
0 0 1 -1 0 -1 1 0 ;
eye(8);
-eye(8)];

b = - [zeros(8,1); -10*ones(16,1)];

Is there something that cannot be handled by this code? I see that the initial point taken is fine (it satisfies the constraints), but I cannot figure out where the problem in the code is.
If you come across the problem, please let me know. Also, do you know of a reference for the algorithm used in this code?

Thanks so much.
Deepanshu

31 Oct 2011 Matt J

@david: You need to tell me how I can reproduce the error.

31 Oct 2011 david

Hi Matt,
I'd be very thankful if you could help me with this problem: when I apply your function I get the following error:
"??? Maximum recursion limit of 500 reached. Use set(0,'RecursionLimit',N)
to change the limit. Be aware that exceeding your available stack space can
crash MATLAB and/or your computer."

What do I have to do in this case?

23 Sep 2011 Matt J

Hi Sandro,

I'm not having any such difficulty in R2011a, so perhaps you need to upgrade. Here is the test that I ran, and no errors resulted:

>> [a,b,ae,be]=vert2lcon(eye(20));
>> V=lcon2vert(a,b,ae,be);
>> norm(V'*V-eye(20))

ans =

5.7516e-015

22 Sep 2011 Sandro

hi Matt,

Do you have any sense of how high the dimension can be? I am testing something like x1+x2+...+xn=1, xi>=0, i=1,2,...,n. I randomly generate some points within this region and also give the vertices like (0,0,...,0), (1,0,...,0), (0,1,0,...,0), ... to help with checking.

It looks like n=10 works, but n=20 gets stuck in the qhullmx function called by convhulln.

Is it possible to have a function like qhullmx that is more powerful in terms of dimension and speed?

Sandro

18 Sep 2011 Harshavardhan Sundar

Hi Matt, thanks again (officially). The latest versions of lcon2vert and vert2lcon are working absolutely fine.

08 Sep 2011 Matt J

@Harshavardhan - I've just updated the package with what I believe is a more robust version of LCON2VERT. At the very least, it successfully handles your example data set. Also, it will more reliably report cases where it cannot find a solution.

06 Sep 2011 Matt J

@Harshavardhan - This is a resend of a previous comment which seems to have gotten lost by the FEX.

The data set you've provided is quite intriguing! I've traced the problem you're seeing to Michael Kleder's CON2VERT code (my lcon2vert is mainly a wrapper for this).

The problem is that, in order to work, CON2VERT needs to find an initial point c inside the polytope. The simple method that it uses seems to have had a good historical rate of success, but lo and behold, your particular A,b data set has defeated it!

I know this is the problem however, because

(i) I've checked that the initial point c that it generates does not satisfy A*c<b

(ii) When I manually change CON2VERT, hardcoding the initial c to a point that I know is in your polytope, I get the correct vertices.

I've alerted Michael Kleder to this via the CON2VERT webpage, but he doesn't appear to have been monitoring the page for several years. I doubt he'll be responding with any input.

I have some ideas of my own to improve the initialization scheme, but they will take some time to develop.

In the meantime, if you're in a situation where you already know an initial point, c, inside your polytope, a workaround for you might be to modify CON2VERT so that it accepts this c as an additional input argument.

Beyond that, I don't know what to say. You have difficult data...

06 Sep 2011 Matt J

@Harshavardhan - one additional note. In the meantime, you should also modify this line of code

if ef ~= 1

to this

if ~all(A*c < b)

This will at least warn you properly that the function is about to fail.

05 Sep 2011 Harshavardhan Sundar

Hi Matt. First of all, thanks a lot for the wonderful piece of code. The code works fine in most cases. However, there are a few occasions where I get erroneous vertices. One such example is below. My A matrix is:
A=[1 0 0
0 1 0
0 0 1
-1 0 0
0 -1 0
0 0 -1
-0.387696185212551 -0.295276274520099 0.703237014010810
5.73326513862449 -0.422448628998633 -0.113292931747631
-3.79255233000727 -0.360645958414035 -0.673892379691217
-1.27188203250127 0.544750863219514 -0.791324376598499
-2.30540199426269 -2.50522755054822 -0.291771858420232
0.616578245027929 -0.654271120193835 0.574406987898971
-1.49508806389885 -2.68877782397954 -0.989225453798092
5.34556895341194 -0.717724903518731 0.589944082263178
-3.40485614479472 -0.0653696838939366 -1.37712939370203
-0.884185847288714 0.840027137739613 -1.49456139060931
-1.91770580905014 -2.20995127602812 -0.995008872431042
0.228882059815378 -0.949547394713933 1.27764400190978
-1.10739187868630 -2.39350154945944 -1.69246246780890
1.94071280861722 -0.783094587412668 -0.787185311438849
4.46138310612323 0.122302234220881 -0.904617308346131
3.42786314436180 -2.92767617954685 -0.405064790167863
5.11668689359657 0.231822491195202 -0.687699919646602
4.23817707472564 -3.11122645297817 -1.10251838554572
-2.52067029750601 -0.905396821633549 0.117431996907282
1.48715033574458 -2.14458159213419 0.382120521270985
-3.17597408497934 -1.01491707860787 -0.0994853917922465
2.29746426610842 -2.32813186556550 -0.315333074106875
-1.03351996176143 -3.04997841376774 0.499552518178267
-0.655303787473336 -0.109520256974320 -0.216917388699529
-0.223206031397585 -3.23352868719905 -0.197901077199593
-1.68882374923476 -3.15949867074206 0.282635129478739
-0.810313930363841 0.183550273431316 0.697453595377860
-0.878509818870922 -3.34304894417337 -0.414818465899121]
b=[6
4
2
0
0
0
-0.154147860211922
16.6671436561829
-8.83179250583593
-1.48693603688105
-8.92116339978305
0.423928365940654
-8.95704617639833
16.5129957959710
-8.67764464562401
-1.33278817666913
-8.76701553957113
0.269780505728732
-8.80289831618640
7.83535115034699
15.1802076193019
7.74598025639987
16.2432152902423
7.71009747978459
-7.34485646895489
-0.0893708939471196
-8.40786413989528
-0.125253670562397
-7.43422736290201
-1.06300767094039
-7.47011013951728
-8.49723503384240
0.0358827766152754
-8.53311781045768]

I used the plotregion command available on MATLAB Central to plot the region. I have reason to believe that plotregion is plotting the region accurately. However, for this particular example the vertices I get (graphically) from plotregion do not coincide with the vertices obtained from lcon2vert. I also tried con2vert by Michael Kleder with little success.

05 Sep 2011 Harshavardhan Sundar  
14 Jul 2011 Timothy Stratton

Nice very powerful!

16 Apr 2011 Sandro

Hi Matt,

My reason for suggesting consolidator is that the unique function has a very strict requirement for two vectors to be considered the same. When running vert2lcon on a large data set, I have seen a large A in which many rows are the same up to 6-8 significant digits. This enlarges the size of A, which may not be necessary for the user.

"A and b are produced by Michael Kleder's functions which seem to normalize it this way already."

I doubt it.

Best,
Sandro

14 Apr 2011 Matt J

Hi Sandro. Thanks for the feedback. I'll think about your recommendations. Could you explain the advantage of consolidator over unique? I chose a normalization for Aeq and beq that led to nice whole integers where possible. A and b are produced by Michael Kleder's functions which seem to normalize it this way already.

Regarding your rating. It's fine. The FEX only scores according to your most recent rating. This is to allow people to update their ratings if the author makes improvements to the submission.

14 Apr 2011 Sandro

Sorry Matt, I am not sure how to remove the first rating. Two comments: the "unique" function might be better replaced by "consolidator", and it may be good to normalize A, b, etc. before the final output (I am not quite sure about its necessity).

14 Apr 2011 Matt J

Ah, ok.

14 Apr 2011 Sandro

It is definitely a good job!!! I may have messed up the rating submission above.

14 Apr 2011 Sandro  
Updates
06 Apr 2011

added LCON2VERT, which does the inverse of VERT2LCON

08 Sep 2011

Improved the reliability of the CON2VERT subroutine, both in terms of performance and error reporting.

14 Sep 2011

*Further robustness of lcon2vert
*Improved weeding of non-unique constraints in vert2lcon output
*Outputs A,Aeq of vert2lcon will now have normalized rows.

07 Mar 2012

Various bug fixes and improvements in robustness. These address failure cases discovered recently by users.

06 Aug 2012

If lcon2vert fails to find an initial interior point after many iterations of the algorithm it uses, it will now quit with V=[].

18 Dec 2013

* improved error checking in lcon2vert
* some improved/faster search criteria added to lcon2vert.
* added qlcon2vert, a faster version of lcon2vert which skips certain checks and precomputations

06 Jan 2014

added screenshot
