For this particular experiment, I'm clearly on the cusp of breaking into larger problems of around 2^60 in size. Ideally my problem size gets up to 100-200 bits, which, until now, didn't seem possible. It may still not be. My problem is not in storing big integers (I tried using uint64), but in manipulating them with existing functions. Much support has been added for these numbers, but many functions, like binary/decimal conversion, are limited to values below 2^53. This is where your tools come in perfectly; if you'll recall, you found me trying to convert binary to decimal with some inferior software.
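For anyone else landing on this page, here's a quick sketch of that 2^53 ceiling; it comes from IEEE-754 double precision, which is what MATLAB uses by default:

```matlab
% Doubles represent integers exactly only up to 2^53; beyond that,
% adjacent integers collapse onto the same double value:
2^53 == 2^53 + 1        % true: the +1 is silently lost
% uint64 stores integers exactly up to 2^64 - 1:
intmax('uint64')        % 18446744073709551615
% But any function that passes through double internally (as many
% binary/decimal conversions do) still hits the 2^53 ceiling.
```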
I know a lot of people use this page to gripe about your software not being perfect for what they want; I was just trying to express my gratitude and contribute something that might be useful to you or someone else. We're not all out to get you, John. Peace, love, and thanks for the software.
Here are some stats on the big numbers. I think I'll just write my code to ignore duplicates, since removing them isn't worth the time. Sorting takes a little while, but I think I can work out of order just fine. So I'm good for now; I just thought you'd be interested in some run times, and that you'd tell me if something looks off.
>> tic,vsamples = randint(N,[500000,1]);toc
Elapsed time is 1138.346514 seconds.
>> tic, vsamplesU=unique(vsamples);toc
Elapsed time is 1811.413225 seconds.
Elapsed time is 1833.296390 seconds.
I see also that unique() was contributed to the toolbox, so with only ~55 collisions per 500,000 samples, I could easily remove the duplicates using unique. I'm guessing that will be more efficient than the re-sampling approach.
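In case it's useful, here's a rough sketch of that dedupe-and-top-up idea, assuming randint(N,[m,1]) draws m samples with replacement as in the transcript above (the while loop and variable names are mine, not from the toolbox):

```matlab
% Draw m samples, then replace the rare duplicates until we have
% m distinct values (only ~55 collisions per 500,000 expected here).
m = 500000;
v = randint(N, [m, 1]);     % draw with replacement
v = unique(v);              % sorts and drops duplicates
while numel(v) < m          % top up the shortfall
    extra = randint(N, [m - numel(v), 1]);
    v = unique([v; extra]);
end
```

Since collisions are so rare, the loop should almost always finish in one or two passes, so the cost is dominated by the initial sort inside unique.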
I've just been diving into the documentation (thank you so much for all of that, by the way). I see we can generate vectors of random numbers with replacement, but is it possible (or would it be difficult) to generate random numbers from a population without replacement?
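For completeness, here's how sampling without replacement usually looks in plain MATLAB when the population is small enough to enumerate; for a population on the order of 2^60 this is infeasible, which is why drawing with replacement and removing the rare collisions is attractive:

```matlab
% k distinct draws from the population 1..n (n must fit in memory):
n = 1e6;  k = 500;
p = randperm(n);   % random permutation of 1..n
s = p(1:k);        % first k entries are k draws without replacement
```

Newer releases also support randperm(n,k) directly, and the Statistics Toolbox offers randsample(n,k); neither helps once n itself overflows double precision, though.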