In article <fcn4l4$8hd$1@canopus.cc.umanitoba.ca>,
roberson@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:
> In article
<ellieandrogerxyzzy1709071039060001@dialup4.232.15.10.dial1.losangeles1.level3.net>,
> Roger Stafford <ellieandrogerxyzzy@mindspring.com.invalid> wrote:
>
> > Yanis, here is another method for getting random convex polygons.
>
> Interesting method (what I've managed to understand of it). Experimentally,
> though, with a small number of sides, the observed number of 90 degree
> angles is inordinately high. The evidence suggests that when the
> random polygon would not normally close as convex, the adjustment
> done is likely to result in a 90 degree angle (or something visually
> close to 90 degrees, perhaps slightly less sometimes.)
>
> The conditions become rarer as the number of sides increases, it
> appears, but I've seen it a few times even with n=20.
>
> > a = rand(n,1);
> > a = 2*pi*(rand+cumsum(a/sum(a))); % Generate n increasing angles
>
> The rand() in the second statement threw me a bit, but I see now
> that it is just a single random number, acting to impart a rotation
> to the angles determined by cumsum(a/sum(a)) .
>
> What is the last (i.e., maximum) value of cumsum(a/sum(a))? It'll
> have to be 1. But what is the second last value? It turns out that
> it can be arbitrarily small:
>
> Suppose you have N-1 random values each less than 1/t, followed
> by a single 1. Then sum(a) will be 1 + (N-1)/t, and the second
> last value in the cumsum will be ((N-1)/t) / (1+(N-1)/t). This
> simplifies to (N-1)/(t+N-1), from which we see that as t grows
> increasingly large (and thus increasingly improbable), the second
> last value can become decreasingly small. [Crosscheck: for t=1,
> the usual range for random numbers, (N-1)/(1+N-1) is (N-1)/N for
> the second last value, which is reasonable.] This particular
> sequence is, of course, improbable, but it does illustrate that
> we might expect some fairly wide gaps between the second last and last
> vertex.
>
> For example, a test a moment ago generated
> a = [0.33776 0.74589 0.16304 0.06549 0.84123]
> cumsum(a/sum(a)) = [0.15685 0.50323 0.57894 0.60935 1];
> The next test,
> a = [0.13385 0.29837 0.613 0.12035 0.96812]
> cumsum(a/sum(a)) = [0.062732 0.20257 0.48986 0.54627 1]
> Second-last values around 0.71 are common; the 0.55 is the smallest
> I've seen, by a fair bit. The 0.61 shows there are other patterns
> that still lead to large gaps in the angles.

Hello Walter,
I am unable to reproduce the results you refer to for the
second-to-last value of a. With a zero appended at the beginning, the
increments in a are each intended to be initially a random value in [0,1]
that is subsequently normalized so that their total sum is 1. Thus each
increment should have the same expected value, namely 1/n (for n polygon
vertices.) In particular, the difference between the last value, which is
1 because of the normalization, and the second-to-last value should also
average out at 1/n. After the 'cumsum' operation, the a values represent
cumulative angles from the beginning. In other words, the angles produced
by a = 2*pi*(rand+cumsum(a/sum(a))) should be statistically uniformly
distributed over an interval of length 2*pi whose starting point is itself
randomly determined.
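For readers without MATLAB, the two lines above can be sketched in
Python/NumPy terms (a translation, not the original code; the function
name is mine):

```python
import numpy as np

def random_increasing_angles(n, rng=None):
    """Sketch of:  a = rand(n,1);  a = 2*pi*(rand + cumsum(a/sum(a)))
    Returns n strictly increasing angles whose last value sits exactly
    2*pi above a random starting offset."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.random(n)                 # n iid uniform increments
    # Normalize increments to sum to 1, accumulate, rotate by a
    # single random offset, and scale to radians.
    return 2 * np.pi * (rng.random() + np.cumsum(a / a.sum()))

angles = random_increasing_angles(10)
# angles[-1] - angles[0] is strictly less than 2*pi, since the first
# cumsum entry is already positive.
```

Since cumsum(a/sum(a)) ends exactly at 1, the final angle always lands
2*pi past the random offset, which is what imparts the overall rotation
Walter noted.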
I ran an experiment that repeated the two lines
a = rand(n,1); % (n = 10)
a = cumsum(a/sum(a));
one hundred thousand times and got an average of 0.90004749850469 for the
second-to-last a value, which is in good agreement with its expected
value of 1-1/n = 0.9. (I dropped the left 'rand' and the 2*pi multiplier
here for simplicity.) The idea behind this code was to give each of the n
polygon outer angles an expected value of pi+2*pi/n with a sum of
precisely (n+2)*pi, as they must have, and it seems to me those two lines
achieve just that.
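The same experiment is easy to replicate; here is a Python/NumPy
version of it (a translation sketch, function name mine):

```python
import numpy as np

def mean_second_to_last(n, trials, seed=0):
    """Average the second-to-last entry of cumsum(a/sum(a)) over many
    trials of a = rand(n,1); the expected value is 1 - 1/n."""
    rng = np.random.default_rng(seed)
    a = rng.random((trials, n))                    # one row per trial
    c = np.cumsum(a / a.sum(axis=1, keepdims=True), axis=1)
    return c[:, -2].mean()

print(mean_second_to_last(10, 100_000))  # close to 1 - 1/10 = 0.9
```

With n = 10 and 100,000 trials the result agrees with the 0.9000...
average quoted above to within sampling error.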
The cases of 90-degree angles you refer to are the chance result of 'a'
increment values that are near pi/2, or more accurately, that are near
pi/2 after the final adjustment. For fairly small n this should happen
frequently, but for larger n it should become increasingly rare. It is
true that the final adjustment will alter angles, but their sum remains
fixed, and some should increase while others decrease. I see no
particular reason that pi/2 should be especially favored in this respect
unless one is using an n value near 4.
I don't follow your reasoning in, "Suppose you have N-1 random values
each less than 1/t, followed by a single 1. ...." There are actually N
(i.e., n) random values, and they are not followed by a 1 to begin with,
but possess a sum of 1 only after normalization. I think there must be
some misunderstanding here.
The aspect of this method that bothers me the most is, as I mentioned
earlier, that I cannot rigorously prove that all polygons produced this
way will be convex after the final offset correction. The analysis I have
done leads me to suspect that it is actually impossible to produce a
nonconvex polygon with this algorithm, but the proof of this has eluded
me. Looked at from a practical point of view, in the many dozens of
polygons I have generated in tests, not one of them suffered from
nonconvexity, as shown by a convexity test that utilized the full
16-decimal-place accuracy of MATLAB. When n is large it is to be expected
that the polygon inner angles would often be close to pi, but I found none
that ever became equal to or greater than pi after adjustment.
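The convexity test itself is not shown here; one way such a check might
look (a sketch of the standard cross-product criterion, not necessarily
the test actually used) is to verify that every pair of consecutive
edges turns the same way:

```python
import numpy as np

def is_convex(x, y):
    """Return True if the closed polygon with vertices (x[i], y[i]),
    listed in order, is convex: all cross products of consecutive edge
    vectors must share one sign (zero allowed for collinear edges)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Edge vectors, including the closing edge back to vertex 0.
    dx = np.diff(np.append(x, x[0]))
    dy = np.diff(np.append(y, y[0]))
    # 2-D cross product of edge i with edge i+1 (cyclically).
    cross = dx * np.roll(dy, -1) - dy * np.roll(dx, -1)
    return bool(np.all(cross >= 0) or np.all(cross <= 0))
```

A polygon with one reflex vertex produces cross products of mixed sign,
so the test fails for it regardless of vertex ordering direction.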
Roger Stafford
