From: <HIDDEN>
Newsgroups: comp.soft-sys.matlab
Subject: Help: Error when converting a very large integer (e.g. 2^256-1) to binary due to rounding error
Date: Mon, 20 Apr 2009 13:12:01 +0000 (UTC)
Organization: The MathWorks, Inc.
Lines: 34
Message-ID: <gshsb1$jpu$>
Reply-To: <HIDDEN>
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 8bit
X-Trace: 1240233121 20286 (20 Apr 2009 13:12:01 GMT)
NNTP-Posting-Date: Mon, 20 Apr 2009 13:12:01 +0000 (UTC)
X-Newsreader: MATLAB Central Newsreader 1809150
Xref: comp.soft-sys.matlab:534049

I want to use MATLAB to convert a very large integer (possibly larger than 2^255) to a 256-bit binary sequence. Because the integer is so large, MATLAB rounds it: 2^256-2 is treated the same as 2^256, and entering 2^256-(2^256-2) in MATLAB gives 0. So when I convert 2^256 and 2^256-2 to binary sequences, the results are identical.
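The rounding is a consequence of double precision: a double carries only 53 significand bits, so near 2^256 adjacent representable values are eps(2^256) = 2^204 apart, and 2^256-2 rounds to the same double as 2^256. A minimal check:

```matlab
% A double has a 53-bit significand, so the spacing between adjacent
% doubles near 2^256 is eps(2^256) = 2^204.
eps(2^256) == 2^204    % true
% Hence 2^256-2 and 2^256 round to the very same double:
(2^256 - 2) == 2^256   % true
2^256 - (2^256 - 2)    % 0, as described above
```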

 I have tried "de2bi", and I have also written a function myself, but neither helps:

%calculate binary representation of integer; bit i is the coefficient of 2^(i-1), so the highest-order digit is the right-most one.

function out = dediconv(integer,n)
if integer >= 2^n
    error('dediconv: the given length n is too small for representing the given integer.');
end
out = zeros(1,n);
for i = n:-1:1
    if integer >= 2^(i-1)
        out(i) = 1;
        integer = integer - 2^(i-1);
    end
    if integer == 0
        break      % remaining bits stay 0
    end
end
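For reference, one way I imagine this could be worked around is to keep the value exact instead of storing it as a double. A sketch, assuming the Symbolic Math Toolbox is available (same bit order as above, bit i = coefficient of 2^(i-1)):

```matlab
% Sketch: exact bit extraction via symbolic integers (assumes the
% Symbolic Math Toolbox is installed).
x = sym(2)^256 - 2;      % exact integer, no double rounding
bits = zeros(1, 256);
for i = 1:256
    b = mod(x, 2);       % exact remainder modulo 2
    bits(i) = double(b);
    x = (x - b) / 2;     % exact halving
end
% bits(1) is 0 and bits(2:256) are all 1, so 2^256-2 and 2^256
% no longer collapse to the same sequence.
```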