MOO-cows Mailing List Archive


Re: Floating point is like that... (was Re: Floating point error)

In message <11976708@prancer.Dartmouth.EDU>, you wrote:
> --- Tom Ritchford wrote:
>     If programmers are assuming that they are going to get
> "exact" numbers with their floating point calculations,
> I fear that they will be sadly disappointed. 
> --- end of quoted material ---
> Agreed.  The entire point of floating point numbers is increased range at the
> expense of precision.

I think the horse is dead!  We can stop beating it now.  The problem is
not that floating point is inherently imprecise, but that MOO prints out
more digits in the mantissa than the precision of the floating point type
provides.  If your floating point representation uses a 53-bit mantissa
for the `double' type, then you represent numbers as N/(2^53) * 10^E.  If
you do the obvious math:

	$ bc -l
	1 / 2^53
	.00000000000000011102

That is the smallest possible difference between any two mantissas.
Let's take everyone's favorite example, 1.6.  This is 0.16 * 10^1.  So the
mantissa is 0.16:
	.16 * 2^53
	1441151880758558.72

That result is the number we're going to cram into 53 bits.  But we can
only keep the integer part, so we'll round to 1441151880758559.  So we
no longer have 0.16, but THE FARTHEST WE CAN BE OFF is that first number,
about 1E-16:
	1441151880758559 / 2^53
	.16000000000000003108

So you can see we're off by about 3E-17.  That's no problem AS LONG AS WE
DON'T PRINT IT OUT!  That 3.108E-17 is just round-off error.  In this case,
if we only print out 15 digits of the mantissa, the round-off error is
rounded off and we don't see it.  If you elect to print 800 digits in the
name of precision, you'll just get a lot more junk.

This is not the famous Pentium bug.  This is not a bug in your math
library.  It's a bug in the format width specification used by MOO, and
Pavel is already looking into correcting it.


