From: Mark Dickinson on
On Jul 8, 2:00 pm, Adam Skutt <ask...(a)gmail.com> wrote:
> On Jul 8, 7:23 am, Mark Dickinson <dicki...(a)gmail.com> wrote:
> > On Jul 8, 11:58 am, Adam Skutt <ask...(a)gmail.com> wrote:
>
> > > accurately.  Moreover, in general, it's impossible to even round
> > > operations involving transcendental functions to an arbitrary fixed
> > > precision; you may need effectively infinite precision to do the
> > > computation.
>
> > Impossible?  Can you explain what you mean by this?  Doesn't the
> > decimal module do exactly that, giving correctly-rounded exp() and
> > log() results to arbitrary precision?
>
> You run into the table-maker's dilemma: there's no way to know in
> advance how many digits you need in order to have n bits of precision
> in the result.

Sure. But it's a bit of a stretch to go from not knowing what
resources you'll need in advance to calling something 'impossible'. :)

> For some computations, the number of bits required to
> get the desired precision can quickly overwhelm the finite limitations
> of your machine (e.g., you run out of RAM first or the time to compute
> the answer is simply unacceptable).

Perhaps in theory. In practice, though, it's very rare to need to
increase precision more than once or twice beyond an initial first
guesstimate, and the amount of extra precision needed is small. That
increase is unlikely to cause problems unless you were operating right
up against your machine's limits in the first place.

--
Mark
From: Stefan Krah on
Adam Skutt <askutt(a)gmail.com> wrote:
> On Jul 8, 7:23 am, Mark Dickinson <dicki...(a)gmail.com> wrote:
> > On Jul 8, 11:58 am, Adam Skutt <ask...(a)gmail.com> wrote:
> >
> > > accurately.  Moreover, in general, it's impossible to even round
> > > operations involving transcendental functions to an arbitrary fixed
> > > precision; you may need effectively infinite precision to do the
> > > computation.
> >
> > Impossible?  Can you explain what you mean by this?  Doesn't the
> > decimal module do exactly that, giving correctly-rounded exp() and
> > log() results to arbitrary precision?
>
> You run into the table-maker's dilemma: there's no way to know in
> advance how many digits you need in order to have n bits of precision
> in the result. For some computations, the number of bits required to
> get the desired precision can quickly overwhelm the finite limitations
> of your machine (e.g., you run out of RAM first or the time to compute
> the answer is simply unacceptable).

Yes, this appears to be unsolved as yet; see also:

http://www.cs.berkeley.edu/~wkahan/LOG10HAF.TXT

"Is it time to quit yet? That's the Table-Maker's Dilemma. No general
way exists to predict how many extra digits will have to be carried to
compute a transcendental expression and round it _correctly_ to some
preassigned number of digits. Even the fact (if true) that a finite
number of extra digits will ultimately suffice may be a deep theorem."


However, in practice, mpfr rounds correctly and seems to be doing fine.

In addition to this, I've been running at least 6 months of continuous
tests comparing cdecimal and decimal, and neither log() nor exp() poses
a problem.
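For reference, here's a quick stdlib check of those two functions (the precision value is chosen arbitrarily; this is a sanity check, not a proof of correct rounding):

```python
from decimal import Decimal, getcontext

# decimal documents exp() and ln() as correctly rounded at the
# current context precision.
getcontext().prec = 50
print(Decimal(2).ln())
# 0.69314718055994530941723212145817656807550013436026
print(Decimal(1).exp())
```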

pow() is trickier. Exact results have to be weeded out before
attempting the correction loop for correct rounding, and this is
complicated.


For example, in decimal this expression takes a long time (in cdecimal
the power function is not correctly rounded):


Decimal('100.0') ** Decimal('-557.71e-742888888')



Stefan Krah


From: Adam Skutt on
On Jul 8, 9:22 am, Mark Dickinson <dicki...(a)gmail.com> wrote:
> On Jul 8, 2:00 pm, Adam Skutt <ask...(a)gmail.com> wrote:
>
> > On Jul 8, 7:23 am, Mark Dickinson <dicki...(a)gmail.com> wrote:
> > > On Jul 8, 11:58 am, Adam Skutt <ask...(a)gmail.com> wrote:
>
> > > > accurately.  Moreover, in general, it's impossible to even round
> > > > operations involving transcendental functions to an arbitrary fixed
> > > > precision; you may need effectively infinite precision to do the
> > > > computation.
>
> > > Impossible?  Can you explain what you mean by this?  Doesn't the
> > > decimal module do exactly that, giving correctly-rounded exp() and
> > > log() results to arbitrary precision?
>
> > You run into the table-maker's dilemma: there's no way to know in
> > advance how many digits you need in order to have n bits of precision
> > in the result.
>
> Sure.  But it's a bit of a stretch to go from not knowing what
> resources you'll need in advance to calling something 'impossible'. :)
>
> > For some computations, the number of bits required to
> > get the desired precision can quickly overwhelm the finite limitations
> > of your machine (e.g., you run out of RAM first or the time to compute
> > the answer is simply unacceptable).
>
> Perhaps in theory.  In practice, though, it's very rare to need to
> increase precision more than once or twice beyond an initial first
> guesstimate, and the amount of extra precision needed is small.  That
> increase is unlikely to cause problems unless you were operating right
> up against your machine's limits in the first place.

I suspect your platitude isn't especially comforting for those who
need more computing capability than we can currently construct.
However, I wouldn't call the amount of extra needed precision "small"
for most transcendental functions, as it's frequently more than double
in the worst-case situations and increases non-linearly as the number
of desired digits increases.

Which almost brings us full circle to where I was originally pointing:
the "rounding" problem is inherent in the finite nature of a physical
computer, so you cannot make the rounding problem go away. As such,
talking about differences in rounding between decimal and binary
representations is somewhat of a corner case. Replacing "float" with
"decimal" won't get rid of the problems that floating-point brings to
the table in the first place. The issues that come up all have to do
with lack of human understanding of what the computer is doing. Take
even something as innocent as equality between two floating-point
numbers: even exact rounding of all operations doesn't solve this
entirely common problem. Accordingly, once we explain why this
doesn't work, we frequently find we don't need the enhanced
functionality decimal provides, and can hopefully make that
determination on our own.
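A concrete illustration of that equality pitfall, stdlib only (note the second half shows decimal hits the same wall once results are inexact):

```python
from decimal import Decimal, getcontext

# Binary floats: each operation is exactly rounded, yet the sum of
# two rounded values need not equal the rounded literal 0.3.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Fixed-precision decimal arithmetic has the same kind of surprise:
getcontext().prec = 3
print(Decimal(1) / Decimal(3) * Decimal(3) == Decimal(1))  # False: 0.999 != 1
```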

If you want to make elementary arithmetic (add, subtract, multiply,
divide) behave intuitively then you (arguably) want an arbitrary-
precision fractional/rational number class. After that, the right
solution is education.
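For what it's worth, Python already ships such a class in the stdlib (fractions.Fraction), and with it the intuitive identities hold:

```python
from fractions import Fraction

# Rational arithmetic is exact for the four elementary operations.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
print(Fraction(1, 3) * 3 == 1)                               # True
print(Fraction('0.1') + Fraction('0.2'))                     # 3/10
```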

Adam
From: Victor Eijkhout on
Zooko O'Whielacronx <zooko(a)zooko.com> wrote:

> I'm starting to think that one should use Decimals by default and
> reserve floats for special cases.

Only if one has a Power6 (or 7), which has hardware support for BCD.
Otherwise you will have slow applications.
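The slowdown is easy to measure with a throwaway micro-benchmark (illustrative only; absolute numbers depend on the machine, the Python version, and whether the platform has hardware decimal support):

```python
import timeit

# Compare software decimal multiplication against hardware binary floats.
n = 100_000
t_float = timeit.timeit('x * y', setup='x, y = 1.1, 2.2', number=n)
t_dec = timeit.timeit(
    'x * y',
    setup="from decimal import Decimal; x, y = Decimal('1.1'), Decimal('2.2')",
    number=n,
)
print(f"decimal/float time ratio: {t_dec / t_float:.1f}")
```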

Victor.
--
Victor Eijkhout -- eijkhout at tacc utexas edu
From: Mark Dickinson on
On Jul 8, 3:29 pm, Adam Skutt <ask...(a)gmail.com> wrote:
> On Jul 8, 9:22 am, Mark Dickinson <dicki...(a)gmail.com> wrote:
> > On Jul 8, 2:00 pm, Adam Skutt <ask...(a)gmail.com> wrote:
> > > For some computations, the number of bits required to
> > > get the desired precision can quickly overwhelm the finite limitations
> > > of your machine (e.g., you run out of RAM first or the time to compute
> > > the answer is simply unacceptable).
>
> > Perhaps in theory.  In practice, though, it's very rare to need to
> > increase precision more than once or twice beyond an initial first
> > guesstimate, and the amount of extra precision needed is small.  That
> > increase is unlikely to cause problems unless you were operating right
> > up against your machine's limits in the first place.
>
> I suspect your platitude isn't especially comforting for those who
> need more computing capability than we can currently construct.
> However, I wouldn't call the amount of extra needed precision "small"

I think that's because we're talking at cross-purposes.

To clarify, suppose you want to compute some value (pi; log(2);
AGM(1, sqrt(2)); whatever...) to 1000 significant decimal places.
Then typically the algorithm (sometimes known as Ziv's onion-peeling
method) looks like:

(1) Compute an initial approximation to 1002 digits (say), with known
absolute error (given by a suitable error analysis); for the sake of
argument, let's say that you use enough intermediate precision to
guarantee an absolute error of < 0.6 ulps.

(2) Check to see whether that approximation unambiguously gives you
the correctly-rounded 1000 digits that you need.

(3) If not, increase the target precision (say by 3 digits) and try
again.

It's the precision increase in (3) that I was calling small, and
similarly it's step (3) that isn't usually needed more than once or
twice. (In general, for most functions and input values; I dare say
there are exceptions.)
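A toy version of that loop using the decimal module (`ln_ziv` is a hypothetical helper; the one-ulp bound below stands in for a genuine error analysis, and decimal's ln() is of course already correctly rounded, so this is purely illustrative):

```python
from decimal import Decimal, localcontext

def ln_ziv(x, digits, max_rounds=10):
    """Round ln(x) to `digits` significant digits via a Ziv-style loop."""
    guard = 3
    for _ in range(max_rounds):
        # Step (1): approximate at target precision plus guard digits.
        with localcontext() as ctx:
            ctx.prec = digits + guard
            approx = Decimal(x).ln()
        # Assumed error bound: one ulp at the working precision.
        err = Decimal(1).scaleb(approx.adjusted() - (digits + guard) + 1)
        # Step (2): does the whole error interval round to one value?
        with localcontext() as ctx:
            ctx.prec = digits
            lo = approx - err   # rounded to `digits` digits by the context
            hi = approx + err
        if lo == hi:
            return lo
        guard += 3              # step (3): a few more digits, try again
    raise ArithmeticError("rounding still ambiguous (table-maker's dilemma)")

print(ln_ziv('2', 20))   # 0.69314718055994530942
```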

Step (1) will often involve using significantly more than the target
precision for intermediate computations, depending very much on what
you happen to be trying to compute. IIUC, it's the extra precision in
step (1) that you don't want to call 'small', and I agree.

IOW, I'm saying that the extra precision required *due to the table-
maker's dilemma* generally isn't a concern.

I actually agree with much of what you've said. It was just the
"impossible" claim that went over the top (IMO). The MPFR library
amply demonstrates that computing many transcendental functions to
arbitrary precision, with correctly rounded results, is indeed
possible.

--
Mark