"Richard " <rfrank@dominionsw.com> wrote in message <jcfgck$geh$1@newscl01ah.mathworks.com>...
> If I have two sets of data that are very similar, and want to find the difference, is there a best practice for avoiding loss of significance? I.e., add a constant to one set prior to subtraction and then subtract the offset afterward?
      
When two nearly equal binary floating point numbers are subtracted, we often state that there has been a "loss of significance". However, it is important to understand just what this statement means.
If one assumes that each of the nearly equal numbers is itself precisely correct, then in such a case the result of the subtraction will also be precisely correct - no round-off error at all! On the other hand, if we assume they have equal two's exponents and that each is in error by something like half the least bit value, then the difference will have an error of no more than twice that absolute value in the worst case. For a 'double' format number in MATLAB this is about one part in 10^16 of the original numbers' magnitudes. There is absolutely nothing that can remedy such a possible increase in error, since it is inherent in the nature of the subtraction process itself. Still, from that point of view it does not seem like a catastrophic loss.
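A quick Python illustration of both points (Python floats are IEEE 754 doubles, the same format as MATLAB's 'double'). The exactness of the subtraction here is an instance of the Sterbenz lemma: when two doubles are within a factor of two of each other, their difference is representable and is computed with no rounding error at all.

```python
import math

# Two doubles that agree to about six significant figures:
x = 1.0 + 2**-20
y = 1.0

# Sterbenz lemma (y/2 <= x <= 2*y): this subtraction is exact.
d = x - y
print(d == 2**-20)    # True: no round-off in the subtraction itself

# Any uncertainty lives in the stored inputs, each good to about
# half an ulp; near 1.0 one ulp is 2^-52, roughly one part in 10^16:
print(math.ulp(1.0))  # 2.220446049250313e-16
```

So the subtraction itself adds nothing; the error in the result is just the (tiny) pre-existing uncertainty of the operands, at most about twice half an ulp.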
It is only when we compare this somewhat increased error with the magnitude of the difference obtained in the subtraction that it can assume a far greater importance. If the numbers are so close to one another that their difference is, say, only one-millionth their separate sizes, then this error looms up as one part in 10^10, a "loss" of some six decimal digits - twenty bits - in the ratio between size and error. However, that is strictly a matter of our perception of the significance of such an error.
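This magnification can be seen directly: perturb one operand by a single ulp (the typical stored-value uncertainty) and watch the relative error of the difference blow up by the same factor of a million. The numbers below are illustrative, not from the original post.

```python
import math

a = 1.0
b = 1.0 - 1e-6                     # difference is ~one-millionth of their size

# Perturb a by one ulp (~2.2e-16), mimicking input uncertainty:
a_pert = math.nextafter(a, 2.0)

# Relative error of the difference, caused by that one-ulp nudge:
rel_err = abs((a_pert - b) - (a - b)) / (a - b)
print(rel_err)                     # ~2.2e-10: one part in 10^10
```

The absolute error is still only one ulp; it is the division by the tiny difference that turns a 10^-16 relative uncertainty in the inputs into a 10^-10 relative uncertainty in the result.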
It should be evident from this discussion that for a given subtraction event there is absolutely nothing that can be done to lessen such an effect. The subtraction process has no magical built-in "error correction" abilities. It is inherent in the nature of the arithmetic process of subtraction itself. Even a human carrying out such arithmetic would be forced to make the same error.
The only kinds of remedy that exist either increase the accuracy of the numerical representation - that is, the number of bits used to represent numbers - or somehow alter the nature of the algorithm being used to accomplish the desired end results. (It should be said that there are indeed often many such ways around these problems using clever algorithmic changes.)
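One classic example of such an algorithmic change (my illustration, not from the original post): for small x, computing 1 - cos(x) directly subtracts two nearly equal numbers, but the algebraically identical form 2*sin(x/2)^2 contains no subtraction at all.

```python
import math

x = 1e-8

# Naive form: cos(1e-8) rounds to exactly 1.0 in double precision,
# so the cancellation destroys every significant digit:
naive = 1.0 - math.cos(x)

# Equivalent identity 1 - cos(x) == 2*sin(x/2)^2: no cancellation,
# full double-precision accuracy (true value is ~x^2/2 = 5e-17):
better = 2.0 * math.sin(x / 2.0)**2

print(naive)   # 0.0
print(better)  # ~5e-17
```

Nothing about the subtraction was "fixed"; the algorithm was rearranged so that the dangerous subtraction never happens.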
(End of floating point 1a lecture.)
Roger Stafford
