From: Mark Dickinson on
On Dec 6, 7:34 pm, Anton81 <gerenu...(a)googlemail.com> wrote:
> I do some linear algebra and whenever the prefactor of a vector turns
> out to be zero, I want to remove it.

Hmm. Comparing against zero is something of a special case. You'd
almost certainly be doing an 'if abs(x) < tol: ...' check, but the
question is what value to use for tol, and that (again) depends on
what you're doing. Perhaps 'tol' could be something like 'eps *
scale', where 'eps' is an indication of the relative error you're
prepared to admit (eps = 1e-12 might be reasonable; to allow for
rounding errors, it should be something comfortably larger than the
machine epsilon, sys.float_info.epsilon, which is around 2.2e-16 on
a typical machine), and 'scale' is something closely related to the
scale of your problem: in your example, scale could be the largest
of all the prefactors you have, or some sort of average of them.
There's really no one-size-fits-all easy answer here.
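
Here's a minimal sketch of that idea (the function name, the choice
of the largest prefactor as 'scale', and the 1e-12 default are all
illustrative, not a standard recipe):

def is_negligible(x, prefactors, eps=1e-12):
    # Treat x as zero if it's tiny relative to the problem's scale.
    # 'scale' here is the largest prefactor in magnitude; some sort
    # of average would be another reasonable choice.
    scale = max(abs(p) for p in prefactors)
    return abs(x) < eps * scale

prefactors = [2.5, -1e-17, 0.75]
# Drop terms whose prefactor is negligible at this problem's scale:
kept = [p for p in prefactors if not is_negligible(p, prefactors)]
# kept == [2.5, 0.75]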

> I'd like to keep the system comfortable. So basically I should write a
> new class for numbers that has its own __eq__ operator?

That's probably not a good idea, for the reasons that Carl Banks
already enumerated.

Mark
From: Mark Dickinson on
On Dec 7, 12:16 am, David Cournapeau <courn...(a)gmail.com> wrote:
> If you can depend on IEEE 754 semantics, one relatively robust method
> is to use the number of representable floats between two numbers. The
> main advantage compared to the proposed methods is that it somewhat
> automatically takes into account the amplitude of input numbers:

FWIW, there's a function that can be used for this in
Lib/test/test_math.py in Python svn; it's used to check that
math.gamma isn't out by more than 20 ulps (for a selection of test
values).

import struct  # needed by to_ulps

def to_ulps(x):
    """Convert a non-NaN float x to an integer, in such a way that
    adjacent floats are converted to adjacent integers. Then
    abs(to_ulps(x) - to_ulps(y)) gives the difference in ulps between
    two floats.

    The results from this function will only make sense on platforms
    where C doubles are represented in IEEE 754 binary64 format.

    """
    # Reinterpret the 8 bytes of the double as a little-endian int64.
    n = struct.unpack('<q', struct.pack('<d', x))[0]
    # Remap negative floats so that the integer ordering matches the
    # float ordering (more negative floats -> more negative integers).
    if n < 0:
        n = ~(n+2**63)
    return n
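
For instance, using to_ulps, 1.0 and the next representable float up
differ by exactly one ulp, while 1.0 and 1.0 + 1e-12 are about 4504
ulps apart (illustrative values, assuming binary64 doubles):

import sys

next_up = 1.0 + sys.float_info.epsilon  # smallest float above 1.0
print(abs(to_ulps(next_up) - to_ulps(1.0)))      # 1
print(abs(to_ulps(1.0 + 1e-12) - to_ulps(1.0)))  # 4504, ~1e-12 / 2**-52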

--
Mark
From: dbd on
On Dec 7, 4:28 am, sturlamolden <sturlamol...(a)yahoo.no> wrote:
> ...
>
> You don't understand this at all do you?
>
> If you have a sine wave with an amplitude less than the truncation
> error, it will always be approximately equal to zero.
>
> Numerical maths is about approximations, not symbolic equalities.
>
> > 1.0 + eps is the smallest value greater than 1.0, distinguishable from
> > 1.0.
>
> Which is the reason 0.5*eps*sin(x) is never distinguishable from 0.
> ...

A calculated value of 0.5*eps*sin(x) has a truncation error on the
order of eps squared. 0.5*eps and 0.495*eps are readily distinguished
(well, at least for values of eps << 0.01 :).
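
A quick check (using sys.float_info.epsilon for eps; the values are
illustrative) bears this out:

import sys

eps = sys.float_info.epsilon  # about 2.2e-16 for binary64 doubles

# Both values are ordinary normal floats, far above the subnormal
# range, so they carry full relative precision and compare unequal.
a = 0.5 * eps
b = 0.495 * eps
print(a == b)  # False: readily distinguished
print(a - b)   # about 1.1e-18, small but nonzero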

At least one of us doesn't understand floating point.

Dale B. Dalrymple

From: Carl Banks on
On Dec 7, 10:53 am, dbd <d...(a)ieee.org> wrote:
> On Dec 7, 4:28 am, sturlamolden <sturlamol...(a)yahoo.no> wrote:
>
> > ...
>
> > You don't understand this at all do you?
>
> > If you have a sine wave with an amplitude less than the truncation
> > error, it will always be approximately equal to zero.
>
> > Numerical maths is about approximations, not symbolic equalities.
>
> > > 1.0 + eps is the smallest value greater than 1.0, distinguishable from
> > > 1.0.
>
> > Which is the reason 0.5*eps*sin(x) is never distinguishable from 0.
> > ...
>
> A calculated value of 0.5*eps*sin(x) has a truncation error on the
> order of eps squared. 0.5*eps and 0.495*eps are readily distinguished
> (well, at least for values of eps << 0.01 :).
>
> At least one of us doesn't understand floating point.

You're talking about machine epsilon? I think everyone else here is
talking about a number that is small relative to the expected smallest
scale of the calculation.


Carl Banks
From: dbd on
On Dec 7, 12:58 pm, Carl Banks <pavlovevide...(a)gmail.com> wrote:
> On Dec 7, 10:53 am, dbd <d...(a)ieee.org> wrote:
> > ...
>
> You're talking about machine epsilon?  I think everyone else here is
> talking about a number that is small relative to the expected smallest
> scale of the calculation.
>
> Carl Banks

When you implement an algorithm supporting floats (per the OP's
post), the expected scale of the calculation is the full range of
floating point numbers. Over the normalized range of the
representation, the intrinsic truncation error is proportional to
the value represented; at absolute values below the normalized range
(the subnormals), the truncation error has a fixed absolute size.
These are not necessarily 'machine' characteristics but
characteristics of the floating point format implemented.
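
A short demonstration of both regimes (this assumes Python 3.9+,
whose math.ulp gives the spacing between adjacent floats at a given
magnitude):

import math, sys

# Normalized range: the spacing between adjacent floats grows in
# proportion to the magnitude, so the *relative* truncation error
# stays roughly constant (around 2e-16 for binary64).
for x in (1.0, 1e100, 1e-100):
    print(x, math.ulp(x), math.ulp(x) / x)

# Below the normalized range, the spacing is a fixed absolute value:
# the smallest positive subnormal, about 5e-324.
tiny = sys.float_info.min  # smallest positive *normal* float
print(math.ulp(tiny / 4), math.ulp(tiny / 1024))  # both ~5e-324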

A useful description of floating point issues can be found at:

http://dlc.sun.com/pdf/800-7895/800-7895.pdf

Dale B. Dalrymple