From: bart.c on
david mainzer wrote:

> >>> sum = 0.0
> >>> for i in range(10):
> ...     sum += 0.1
> ...
> >>> sum
> 0.99999999999999989
> >>>
>
> But that looks a little bit wrong to me ... it must be a number
> greater than 1.0, because 0.1 =
> 0.100000000000000005551115123125782702118158340454101562500000000000
> in Python ... if I print it.
>
> So I created an example program:
>
> sum = 0.0
> n = 10
> d = 1.0 / n
> print "%.60f" % ( d )
> for i in range(n):
>     print "%.60f" % ( sum )
>     sum += d
>
> print sum
> print "%.60f" % ( sum )
>
>
> -------- RESULTS --------
> 0.100000000000000005551115123125782702118158340454101562500000
> 0.000000000000000000000000000000000000000000000000000000000000
> 0.100000000000000005551115123125782702118158340454101562500000
> 0.200000000000000011102230246251565404236316680908203125000000
> 0.300000000000000044408920985006261616945266723632812500000000
> 0.400000000000000022204460492503130808472633361816406250000000
> 0.500000000000000000000000000000000000000000000000000000000000
> 0.599999999999999977795539507496869191527366638183593750000000
> 0.699999999999999955591079014993738383054733276367187500000000
> 0.799999999999999933386618522490607574582099914550781250000000
> 0.899999999999999911182158029987476766109466552734375000000000
> 1.0
> 0.999999999999999888977697537484345957636833190917968750000000
>
> and the jump from 0.50000000000000*** to 0.59999999* looks wrong
> to me ... did I make a mistake, or is there something wrong in the
> representation of floating-point numbers in Python?

I think the main problem is that, as sum gets bigger, the less significant
bits of the 0.1 representation fall off the end (enough to make the value
you're effectively adding just under 0.1 instead of just over).
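You can see the flip at the halfway point. The stored 0.1 is a little
above 1/10, but once sum reaches 0.5, the rounded result of the addition
lands a little below the exact sum (a minimal demonstration; the exact
digits assume IEEE-754 double precision):

>>> "%.20f" % 0.1            # the stored 0.1 is slightly above 1/10
'0.10000000000000000555'
>>> "%.20f" % (0.5 + 0.1)    # but the rounded sum is slightly below 6/10
'0.59999999999999997780'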

> can anybody tell me how Python internally represents a float number?

Try "google ieee floating point". The problems aren't specific to Python.

--
Bartc


From: Nobody on
On Wed, 07 Jul 2010 15:08:07 +0200, Thomas Jollans wrote:

> you should never rely on a floating-point number to have exactly a
> certain value.

"Never" is an overstatement. There are situations where you can rely
upon a floating-point number having exactly a certain value.

First, floating-point values are exact. They may be an approximation
to some other value, but they are still exact values, not some kind of
indeterminate quantum state. Specifically, a floating-point value is a
rational number whose denominator is a power of two.
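Python can show you that rational value directly (a quick illustration
using the fractions module, available since Python 2.6):

>>> from fractions import Fraction
>>> Fraction.from_float(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> 36028797018963968 == 2**55   # the denominator is a power of two
True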

Second, if the result of performing a primitive arithmetic operation
(addition, subtraction, multiplication, division, remainder) or the
square-root function on the equivalent rational values is exactly
representable as a floating-point number, then the result will be exactly
that value.
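For example (a small sanity check; all of these hold on any IEEE-754
system, because every operand and every result is exactly representable):

>>> 0.5 + 0.25 == 0.75
True
>>> 3.0 * 4.0 == 12.0
True
>>> 10.0 / 4.0 == 2.5
True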

Third, if the result of performing a primitive arithmetic operation or the
square-root function on the equivalent rational values *isn't* exactly
representable as a floating-point number, then the floating-point result
will be obtained by rounding the exact value according to the FPU's
current rounding mode.

All of this is entirely deterministic, and follows relatively simple
rules. Even if the CPU has a built-in random number generator, it will
*not* be used to generate the least-significant bits of a floating-point
arithmetic result.
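In Python terms (assuming CPython on IEEE-754 hardware), an inexact
operation is rounded the same way every time, so repeating it yields a
bit-identical result:

>>> a = 1.0 / 3.0
>>> b = 1.0 / 3.0
>>> a == b        # deterministic rounding; no fuzz in the low bits
True
>>> a.hex()
'0x1.5555555555555p-2'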

The second and third cases above assume that floating-point arithmetic
follows IEEE-754 (the second case is likely to be true even for systems
which don't strictly adhere to IEEE-754). This is true for most modern
architectures, provided that:

1. You aren't using Borland C, which forcibly "optimises" x/y to x*(1/y),
so 12/3 isn't exactly equal to 4, as 1/3 isn't exactly representable. Real
compilers won't use this sort of approximation unless specifically
instructed (e.g. -funsafe-math-optimizations for gcc).

2. You aren't using one of the early Pentium chips.

In spite of this, there are some "gotchas". E.g. on x86, results are
computed to 80-bit (long double) precision internally. These will be
truncated to 64 bits if stored in memory. Whether the truncation occurs is
largely up to the compiler, although it can be forced with -ffloat-store
with gcc.

More complex functions (trigonometric, etc) are only accurate to within a
given relative error (e.g. +/- the value of the least significant bit), as
it isn't always possible to determine the correct value for the least
significant bit for a given rounding mode (and even if it is theoretically
possible, there is no limit to the number of bits of precision which would
be required).


From: Ethan Furman on
Nobody wrote:
> On Wed, 07 Jul 2010 15:08:07 +0200, Thomas Jollans wrote:
>
>> you should never rely on a floating-point number to have exactly a
>> certain value.
>
> "Never" is an overstatement. There are situations where you can rely
> upon a floating-point number having exactly a certain value.

It's not much of an overstatement. How many areas are there where you
need the number
0.100000000000000005551115123125782702118158340454101562500000000000?

If I'm looking for 0.1, I will *never* (except by accident ;) say

if var == 0.1:

it'll either be <= or >=.
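In practice that means comparing against a tolerance rather than testing
for equality. A minimal sketch (the tolerance here is an arbitrary choice
for illustration, not a universal constant):

total = 0.0
for i in range(10):
    total += 0.1

eps = 1e-9                       # illustrative tolerance only
print total == 1.0               # False: total is 0.99999999999999989
print abs(total - 1.0) <= eps    # True: close enough for this purpose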

By contrast, if I'm dealing with integers I can say if var == 4 because
I *know* that there are values that var can hold that are *exactly* 4.
Not 3.999999999817263 or 4.0000000019726.

~Ethan~
From: Raymond Hettinger on
On Jul 7, 5:55 am, Mark Dickinson <dicki...(a)gmail.com> wrote:
> On Jul 7, 1:05 pm, david mainzer <d...(a)tu-clausthal.de> wrote:
>
>
>
> > Dear Python-User,
>
> > today i create some slides about floating point arithmetic. I used an
> > example from
>
> >http://docs.python.org/tutorial/floatingpoint.html
>
> > so i start the python shell on my linux machine:
>
> > dm(a)maxwell $ python
> > Python 2.6.5 (release26-maint, May 25 2010, 12:37:06)
> > [GCC 4.3.4] on linux2
> > Type "help", "copyright", "credits" or "license" for more information.>>> >>> sum = 0.0
> > >>> >>> for i in range(10):
>
> > ...     sum += 0.1
> > ...>>> >>> sum
> > 0.99999999999999989
>
> > But that looks a little bit wrong to me ... it must be a number greater
> > than 1.0, because 0.1 = 0.100000000000000005551115123125782702118158340454101562500000000000
> > in Python ... if I print it.

[Mark Dickinson]
> So you've identified one source of error here, namely that 0.1 isn't
> exactly representable (and you're correct that the value stored
> internally is actually a little greater than 0.1).  But you're
> forgetting about the other source of error in your example: when you
> do 'sum += 0.1', the result typically isn't exactly representable, so
> there's another rounding step going on.  That rounding step might
> produce a number that's smaller than the actual exact sum, and if
> enough of your 'sum += 0.1' results are rounded down instead of up,
> that would easily explain why the total is still less than 1.0.

One key to understanding floating-point mysteries is to look at the
actual binary sums rather than their approximate representation as a
decimal string. The hex() method can make it easier to visualize
Mark's explanation:

>>> s = 0.0
>>> for i in range(10):
...     s += 0.1
...     print s.hex(), repr(s)


0x1.999999999999ap-4 0.10000000000000001
0x1.999999999999ap-3 0.20000000000000001
0x1.3333333333334p-2 0.30000000000000004
0x1.999999999999ap-2 0.40000000000000002
0x1.0000000000000p-1 0.5
0x1.3333333333333p-1 0.59999999999999998
0x1.6666666666666p-1 0.69999999999999996
0x1.9999999999999p-1 0.79999999999999993
0x1.cccccccccccccp-1 0.89999999999999991
0x1.fffffffffffffp-1 0.99999999999999989

Having used hex() to understand representation error (how the binary
partial sums are displayed), you can use the fractions module to gain
a better understanding of the rounding error introduced by each addition:

>>> from fractions import Fraction
>>> s = 0.0
>>> for i in range(10):
...     exact = Fraction.from_float(s) + Fraction.from_float(0.1)
...     s += 0.1
...     actual = Fraction.from_float(s)
...     error = actual - exact
...     print '%-35s%-35s\t%s' % (actual, exact, error)


3602879701896397/36028797018963968  3602879701896397/36028797018963968   0
3602879701896397/18014398509481984  3602879701896397/18014398509481984   0
1351079888211149/4503599627370496   10808639105689191/36028797018963968  1/36028797018963968
3602879701896397/9007199254740992   14411518807585589/36028797018963968  -1/36028797018963968
1/2                                 18014398509481985/36028797018963968  -1/36028797018963968
5404319552844595/9007199254740992   21617278211378381/36028797018963968  -1/36028797018963968
3152519739159347/4503599627370496   25220157913274777/36028797018963968  -1/36028797018963968
7205759403792793/9007199254740992   28823037615171173/36028797018963968  -1/36028797018963968
2026619832316723/2251799813685248   32425917317067569/36028797018963968  -1/36028797018963968
9007199254740991/9007199254740992   36028797018963965/36028797018963968  -1/36028797018963968

Hope this helps your slides,


Raymond