From: Francois Grieu on
On 05/06/2010 20:54, Maaartin wrote:
> IIRC, the results of the four basic operations must equal
> the rounded exact values - according to the rounding mode.
> So they are always bit-for-bit identical, assuming you can
> control the rounding mode.

I wish I could find an online reference stating this clearly.

> But I know no portable way for controlling the rounding mode.

The following C idiom was standardized over a decade ago, and has
gained fair support among compiler vendors.

#include <fenv.h>

#ifdef FE_TONEAREST
if (fesetround(FE_TONEAREST)) /* nonzero return means failure */
#endif
{
// handle "no rounding-to-nearest support" error
}

Quoting the C99 standard:

Each of the macros
FE_DOWNWARD
FE_TONEAREST
FE_TOWARDZERO
FE_UPWARD
is defined if and only if the implementation supports getting and
setting the represented rounding direction by means of the fegetround
and fesetround functions. Additional implementation-defined rounding
directions, with macro definitions beginning with FE_ and an uppercase
letter, may also be specified by the implementation. The defined macros
expand to integer constant expressions whose values are distinct
nonnegative values.

The fesetround function establishes the rounding direction represented
by its argument round. If the argument is not equal to the value of a
rounding direction macro, the rounding direction is not changed.

The fesetround function returns a zero value if and only if the argument
is equal to a rounding direction macro (that is, if and only if the
requested rounding direction was established).
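
Put together, a minimal sketch of the idiom (my own example, not taken
from the standard): force round-to-nearest for a computation and restore
the caller's mode afterwards.

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON   /* we read and modify the FP environment */

int main(void)
{
#ifdef FE_TONEAREST
    int saved = fegetround();           /* remember the current direction */
    if (fesetround(FE_TONEAREST)) {     /* nonzero return means failure */
        fprintf(stderr, "cannot select round-to-nearest\n");
        return 1;
    }
    /* ... computation that relies on round-to-nearest goes here ... */
    fesetround(saved);                  /* restore the caller's mode */
    return 0;
#else
    fprintf(stderr, "round-to-nearest not supported here\n");
    return 1;
#endif
}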


Francois Grieu
From: Maaartin on
On Jun 13, 7:08 am, Phil Carmody <thefatphil_demun...(a)yahoo.co.uk>
wrote:
> Many of computer science's brightest minds and sharpest coders
> have used floating point operations in order to perform exact
> numeric computations. (Dan Bernstein comes to mind, for example.)

He used it for Poly1305 (and maybe others), where he wrote:
"Warning: The FreeBSD operating system starts each program by
instructing the CPU to round all floating-point mantissas to 53 bits,
rather than using the CPU's natural 64-bit precision. Make sure to
disable this instruction. Under gcc, for example, the code
asm volatile("fldcw %0"::"m"(0x137f)) specifies full 64-bit mantissas."
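
For the record, a sketch of what that fix can look like (x86/x87 only,
gcc inline assembly; the wrapper below is my own, and newer gcc wants the
control word in an actual memory object rather than a literal):

#include <stdio.h>

int main(void)
{
    /* x87 control word 0x137f: all exceptions masked, round to nearest,
       precision control = 64-bit (extended) mantissas. */
    unsigned short cw = 0x137f;
    __asm__ volatile("fldcw %0" : : "m"(cw));

    /* from here on, x87 operations keep the full 64-bit significand */
    printf("x87 control word set to 0x%04x\n", cw);
    return 0;
}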

So, it's possible to use FP for integer operations, but portability
may be a problem.

Moreover, the OP is doing something different: instead of optimizing an
integer-based program using FP, his algorithm is FP-based.
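
To make the integer-in-FP idea concrete, here is a minimal sketch of my
own (not DJB's code): keep every limb small enough that all intermediate
products and sums stay below 2^53, and every double operation is then
exact regardless of rounding mode.

#include <stdio.h>

/* Multiply two 52-bit numbers held as two 26-bit limbs each, using only
   double arithmetic. Every partial product is below 2^52 and every sum
   below 2^53, so all operations are exact. */
int main(void)
{
    const double B = 67108864.0;            /* 2^26, the limb base */

    /* x = x1*B + x0, y = y1*B + y0, with 0 <= limbs < 2^26 */
    double x0 = 12345678.0, x1 = 987654.0;
    double y0 = 23456789.0, y1 = 123456.0;

    /* column sums of the schoolbook product; still exact in doubles */
    double z0 = x0 * y0;
    double z1 = x0 * y1 + x1 * y0;
    double z2 = x1 * y1;

    printf("product = %.0f*B^2 + %.0f*B + %.0f\n", z2, z1, z0);
    return 0;
}

A real implementation would follow this with a carry step to bring each
column back under the limb base; that part is omitted here.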

> But I use floating point in order to implement arbitrary-precision
> integers.

I thought it was no longer necessary given current HW, but I'm sure
you know better. Could you elaborate on it a bit?
From: unruh on
On 2010-06-13, Phil Carmody <thefatphil_demunged(a)yahoo.co.uk> wrote:
> unruh <unruh(a)wormhole.physics.ubc.ca> writes:
>> On 2010-06-13, Phil Carmody <thefatphil_demunged(a)yahoo.co.uk> wrote:
>> > MrD <mrdemeanour(a)jackpot.invalid> writes:
>> >> Phoenix wrote:
>> >> > On 4 Jun, 13:31, Maaartin <grajc...(a)seznam.cz> wrote:
>> >> >
>> >> >> The scaling itself is no problem. An integer PRNG works
>> >> >> independently of all the floating-point problems, the resulting
>> >> >> real numbers may differ slightly, but this mustn't play any role in
>> >> >> e.g., simulation or equation solving (if it did, the whole result
>> >> >> would be garbage). For crypto you need no floating-point at all.
>> >> >> OTOH, using a floating-point based PRNG means that the generated
>> >> >> sequences may deviate substantially using different architectures,
>> >> >> compilers, and/or rounding modes. This should be no big problem for
>> >> >> non-cryptographic applications, but it makes it not exactly
>> >> >> repeatable. For crypto it's simply unusable.
>> >> > My question.
>> >> > IEEE 754-2008 is sufficient for all areas of science, except for
>> >> > crypto?
>> >>
>> >> Floating-point representation is *not* sufficient for all areas of
>> >> science. An FP number is only an approximation to a real number, and so
>> >> FP is not suitable for problems requiring precise results. For crypto,
>> >> results of computations must be accurate and repeatable in respect of
>> >> every bit, otherwise the output will not be useable. FP doesn't give you
>> >> that.
>> >
>> > You appear to be claiming that floating point operations are
>> > non-deterministic, a clear falsity. The fact that most languages
>> > don't give you control over how the floating point operations are
>> > performed is not a failing of floating point units.
>>
>> IF you do not specify precision, accuracy, rounding, they are
>> "non-deterministic", i.e., you cannot predict what the output will be.
>> IF you specify all those, then it is probably deterministic. But since
>> different manufacturers specify them differently, they will be
>> deterministically different.
>
> Yeah, and if you don't power them, they don't give any results
> at all. In order to get things done correctly, you need to ask for
> them to be done correctly.

What is "correctly" for example in rounding? Is .5 rounded to 1 or 0? Is
3.5 rounded to 3 or 4? Why is one more correct than the other?
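
For what it's worth, IEEE 754's default is round-to-nearest-even, which a
quick check (my own example) makes visible:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Under the default round-to-nearest-even direction, ties go to the
       even neighbour: 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4. */
    double ties[] = { 0.5, 1.5, 2.5, 3.5 };
    for (int i = 0; i < 4; i++)
        printf("rint(%.1f) = %.1f\n", ties[i], rint(ties[i]));
    return 0;
}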

>
> And I don't know of any FPU which will perform 1.0 + 1.0 and give
> an answer different from 2.0, no matter what rounding mode is used.

Try 1/3 + 1/3 - 2/3

> I can think of about 2^120 other operations which will be performed
> identically no matter what rounding mode is on any IEEE754-compliant
> double-precision unit. Those operations give one an awful lot of
> flexibility to get computations done correctly.

You gave me an integer operation. Yes, integer operations will probably
be done correctly. But then why not use integers and make sure the answer
is what you want?
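
To be fair to both sides, here is a small check of my own (assuming the
four standard rounding-direction macros are all defined) showing that
sums of small integers come out identically under every rounding mode:

#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    /* 3.0 + 4.0 is exactly representable, so the rounding direction
       cannot change the result. */
    int modes[] = { FE_TONEAREST, FE_DOWNWARD, FE_UPWARD, FE_TOWARDZERO };
    for (int i = 0; i < 4; i++) {
        fesetround(modes[i]);
        volatile double a = 3.0, b = 4.0;
        printf("mode %d: 3.0 + 4.0 = %g\n", modes[i], a + b);
    }
    fesetround(FE_TONEAREST);
    return 0;
}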

>
>> > Many of computer science's brightest minds and sharpest coders
>> > have used floating point operations in order to perform exact
>> > numeric computations. (Dan Bernstein comes to mind, for example.)
>> >
>> >> I can imagine various (non-portable) cryptographic uses that could be
>> >> made of floating-point hardware, but in general you have to use
>> >> arbitrary-precision integers.
>> >
>> > But I use floating point in order to implement arbitrary-precision
>> > integers.
>>
>> ????
>
> I, that is me, the poster of this post,
> use, as in utilise,
> floating point, as in those sign/exponent/mantissa things,
> in order to implement, as in programming computers,
> arbitrary-precision integers, like - integers with arbitrary precision.
>
> Which bit wasn't clear?

The bit where you explained how and why you would do that.

>
> At 128-512 bits, I was about two times faster than GMP, back in
> the days before x86_64 came along. (I've not been number-crunching
> since then, and I guess they've pulled their socks up since then.)
>
> Credit where credit is due - my code was heavily based on Dan
> Bernstein's zmodexp, he did the hard work, I just generalised.
>
> You might want to look into GIMPS - they do some pretty heavy
> arbitrary precision exact integer work, and they do it almost
> all with FPUs.

I will accept that an FPU will handle a number which can be exactly
represented by the floating-point format exactly. That is NOT what the
OP was doing. He was using true FP numbers, which were not representable
exactly by the format, as I recall.
(Actually, 2*(1 + 2^-25) - 1, assuming the mantissa is 24 bits, may well
not give the right answer.)
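
Indeed, a quick single-precision check of my own (volatile keeps gcc from
evaluating in a wider format) bears that out:

#include <stdio.h>

int main(void)
{
    /* With a 24-bit significand, 1 + 2^-25 rounds back to 1.0f, so
       2*(1 + 2^-25) - 1 comes out as 1 instead of the exact 1 + 2^-24. */
    volatile float one  = 1.0f;
    volatile float tiny = 0x1p-25f;      /* 2^-25 */
    volatile float x = one + tiny;       /* rounds to 1.0f */
    volatile float y = 2.0f * x - one;
    printf("computed = %.10g, exact = %.10g\n", (double)y, 1.0 + 0x1p-24);
    return 0;
}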


>
> Phil