From: Giacomo Boffi on 8 Jul 2010 11:52
"Zooko O'Whielacronx" <zooko(a)zooko.com> writes:
> I'm starting to think that one should use Decimals by default and
> reserve floats for special cases.
would you kindly lend me your Decimals ruler? i need to measure the
sides of the triangle whose area i have to compute
From: Chris Rebert on 8 Jul 2010 12:31
On Thu, Jul 8, 2010 at 8:52 AM, Giacomo Boffi <giacomo.boffi(a)polimi.it> wrote:
> "Zooko O'Whielacronx" <zooko(a)zooko.com> writes:
>> I'm starting to think that one should use Decimals by default and
>> reserve floats for special cases.
> would you kindly lend me your Decimals ruler? i need to measure the
> sides of the triangle whose area i have to compute
If your ruler doesn't have a [second] set of marks for centimeters and
millimeters, that's really one cheap/cruddy ruler you're using.
From: Zooko O'Whielacronx on 8 Jul 2010 12:38
On Thu, Jul 8, 2010 at 4:58 AM, Adam Skutt <askutt(a)gmail.com> wrote:
> I can't think of any program I've ever written where the inputs are
> actually intended to be decimal. Consider a simple video editing
> program, and the user specifies a frame rate of 23.976 fps. Is that what
> they really wanted? No, they wanted 24000/1001 but didn't feel like
> typing that.
Okay, so there was a lossy conversion from the user's intention
(24000/1001) to what they typed in (23.976).
>>> instr = '23.976'
Now as a programmer you have two choices:
1. accept what they typed in and losslessly store it in a decimal:
>>> from decimal import Decimal as D
>>> x = D(instr)
>>> print x
2. accept what they typed in and lossily convert it to a float:
>>> x = float(instr)
>>> print "%.60f" % (x,)
option 2 introduces further "error" between what you have stored in
your program and what the user originally wanted and offers no
advantages except for speed, right?
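Updating the fragments above into a self-contained Python 3 script (where print is a function) makes the two options concrete:

```python
from decimal import Decimal

instr = '23.976'  # what the user typed

# Option 1: store the typed string losslessly as a Decimal.
d = Decimal(instr)

# Option 2: convert to a binary float, which cannot represent
# 23.976 exactly and silently rounds to the nearest double.
f = float(instr)

print(d)                # exactly what the user typed: 23.976
print('%.60f' % f)      # exposes the rounding error in the float
print(Decimal(f) == d)  # False: the stored float is not the typed value
```

The last line converts the float back to a Decimal exactly, showing that option 2 has already drifted from the user's input before any arithmetic happens.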
>> I'm sorry, what will never be true? Are you saying that decimals have
>> a disadvantage compared to floats? If so, what is their disadvantage?
> He's saying that once you get past elementary operations, you quickly
> run into irrational numbers, which you will not be representing
> accurately. Moreover, in general, it's impossible to even round
> operations involving transcendental functions to an arbitrary fixed
> precision; you may need effectively infinite precision in order to do
> the computation. In practice, this means the error induced by a lossy
> input conversion (assuming you hadn't already lost information) is
> entirely negligible compared to the inherent inability to do the
> necessary computation exactly.
But this is not a disadvantage of decimal compared to float, is it?
These problems affect both representations. Although perhaps they
affect them differently, I'm not sure.
I think sometimes people conflate the fact that decimals can easily
have higher and more variable precision than floats with the fact that
decimals are capable of losslessly storing decimal values while floats
are not.
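Those two properties can be pulled apart with a short Python 3 example: Decimal stores decimal literals exactly where binary floats cannot, but an irrational value like sqrt(2) remains an approximation in either representation.

```python
from decimal import Decimal, getcontext

# Decimal stores any decimal literal exactly...
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')

# ...where binary floats cannot:
assert 0.1 + 0.2 != 0.3

# But neither representation escapes irrational numbers: sqrt(2)
# rounded to 28 significant digits, squared, is still not exactly 2.
getcontext().prec = 28
approx = Decimal(2).sqrt()
assert approx * approx != Decimal(2)
```

So lossless storage of *inputs* is a genuine Decimal advantage, while the precision limits under *computation* apply to both.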
From: Adam Skutt on 8 Jul 2010 13:04
On Jul 8, 11:36 am, Mark Dickinson <dicki...(a)gmail.com> wrote:
> I think that's because we're talking at cross-purposes.
> To clarify, suppose you want to compute some value (pi; log(2);
> AGM(1, sqrt(2)); whatever...) to 1000 significant decimal places.
> Then typically the algorithm (sometimes known as Ziv's onion-peeling
> method) looks like:
> (1) Compute an initial approximation to 1002 digits (say), with known
> absolute error (given by a suitable error analysis); for the sake of
> argument, let's say that you use enough intermediate precision to
> guarantee an absolute error of < 0.6 ulps.
> (2) Check to see whether that approximation unambiguously gives you
> the correctly-rounded 1000 digits that you need.
> (3) If not, increase the target precision (say by 3 digits) and try
> again.
> It's the precision increase in (3) that I was calling small, and
> similarly it's step (3) that isn't usually needed more than once or
> twice. (In general, for most functions and input values; I dare say
> there are exceptions.)
> Step (1) will often involve using significantly more than the target
> precision for intermediate computations, depending very much on what
> you happen to be trying to compute. IIUC, it's the extra precision in
> step (1) that you don't want to call 'small', and I agree.
> IOW, I'm saying that the extra precision required *due to the table-
> maker's dilemma* generally isn't a concern.
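The loop Mark describes can be sketched with Python's decimal module. A caveat: decimal's own ln() is already correctly rounded, so this is purely illustrative of the structure; the "unambiguous rounding" test is simplified here to comparing results computed at two different working precisions, and the function name is mine.

```python
from decimal import Decimal, localcontext

def ln2_rounded(digits):
    """Ziv-style onion peeling (a sketch): compute ln(2) with guard
    digits, and retry with more precision until rounding to the
    target number of digits gives the same answer both times."""
    guard = 2
    while True:
        # Step (1): approximate at slightly more than target precision.
        with localcontext() as ctx:
            ctx.prec = digits + guard
            lo = Decimal(2).ln()
        with localcontext() as ctx:
            ctx.prec = digits + guard + 3
            hi = Decimal(2).ln()
        # Step (2): round both back to the target precision and compare
        # (unary plus applies the active context's rounding).
        with localcontext() as ctx:
            ctx.prec = digits
            if +lo == +hi:
                return +lo
        # Step (3): bump the working precision and try again.
        guard += 3
```

In practice step (3) rarely fires more than once or twice, which is exactly the point being debated above.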
Yes, though I think attributing only the precision added in step 3 to
the table-maker's dilemma isn't entirely correct. While it'd be
certainly less of a dilemma if we could precompute the necessary
precision, it doesn't help us if the precision is generally
unbounded. As such, I think it's really two dilemmas for the price of
one.
> > I actually agree with much of what you've said. It was just the
> "impossible" claim that went over the top (IMO). The MPFR library
> amply demonstrates that computing many transcendental functions to
> arbitrary precision, with correctly rounded results, is indeed
> possible.
That's because you're latching onto that word instead of the whole
sentence in context and making a much bigger deal out of it than is
appropriate. The fact that I may not be able to complete a given
calculation for an arbitrary precision is not something that can be
ignored. It's the same notional problem with arbitrary-precision
integers: is it better to run out of memory or overflow the
calculation? The answer, of course, is a trick question.
From: Mark Dickinson on 8 Jul 2010 13:07
On Jul 8, 2:59 pm, Stefan Krah <stefan-use...(a)bytereef.org> wrote:
> pow() is trickier. Exact results have to be weeded out before
> attempting the correction loop for correct rounding, and this is
> not straightforward.
> For example, in decimal this expression takes a long time (in cdecimal
> the power function is not correctly rounded):
> Decimal('100.0') ** Decimal('-557.71e-742888888')
Hmm. So it does. Luckily, this particular problem is easy to deal
with. Though I dare say that you have more up your sleeve. :)