From: Phoenix on
On 7 Jun, 18:40, unruh <un...(a)wormhole.physics.ubc.ca> wrote:
> What potential? And since you have now seen that there are huge
> disadvantages, that potential would seem to have disappeared.

No, Unruh, I have not disappeared.

I am here, reading your posts and the others'.

By now it is time to finish. It seems to me, and maybe to everyone,
that this is/was a bad idea.

Thank you all.
From: Phil Carmody on
MrD <mrdemeanour(a)jackpot.invalid> writes:
> Phoenix wrote:
> > On 4 Jun, 13:31, Maaartin <grajc...(a)seznam.cz> wrote:
> >
> >> The scaling itself is no problem. An integer PRNG works
> >> independently of all the floating-point problems; the resulting
> >> real numbers may differ slightly, but that cannot matter in,
> >> e.g., simulation or equation solving (if it did, the whole result
> >> would be garbage). For crypto you need no floating point at all.
> >> OTOH, using a floating-point based PRNG means that the generated
> >> sequences may deviate substantially across different architectures,
> >> compilers, and/or rounding modes. This should be no big problem for
> >> non-cryptographic applications, but it makes the results not
> >> exactly repeatable. For crypto it is simply unusable.
> > My question.
> > IEEE 754-2008 is sufficient for all areas of science, except for
> > crypto?
>
> Floating-point representation is *not* sufficient for all areas of
> science. An FP number is only an approximation to a real number, and so
> FP is not suitable for problems requiring precise results. For crypto,
> the results of computations must be accurate and repeatable in every
> bit, otherwise the output will not be usable. FP doesn't give you
> that.

You appear to be claiming that floating point operations are
non-deterministic, a clear falsity. The fact that most languages
don't give you control over how the floating point operations are
performed is not a failing of floating point units.

Many of computer science's brightest minds and sharpest coders
have used floating point operations in order to perform exact
numeric computations. (Dan Bernstein comes to mind, for example.)

> I can imagine various (non-portable) cryptographic uses that could be
> made of floating-point hardware, but in general you have to use
> arbitrary-precision integers.

But I use floating point in order to implement arbitrary-precision
integers.
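
To give the flavour, here is a toy sketch (an illustration written
for this post, nothing like the real zmodexp): hold 24-bit limbs in
doubles. Every partial product then fits in 48 bits and every column
sum stays below 2^53, so each double operation below is exact in
IEEE 754 double precision under any rounding mode.

#include <stdio.h>

#define BASE   16777216.0   /* 2^24 */
#define NLIMBS 4            /* toy size */

/* r[] = a[] * b[]; r must hold 2*NLIMBS limbs. Partial products of
   24-bit limbs fit in 48 bits and column sums stay below 2^53, so
   every double operation here is exact, whatever the rounding mode. */
static void mul_fp(double r[], const double a[], const double b[])
{
    int i, j;
    double carry = 0.0;

    for (i = 0; i < 2 * NLIMBS; i++)
        r[i] = 0.0;
    for (i = 0; i < NLIMBS; i++)
        for (j = 0; j < NLIMBS; j++)
            r[i + j] += a[i] * b[j];             /* exact */
    for (i = 0; i < 2 * NLIMBS; i++) {
        double t = r[i] + carry;                 /* exact */
        carry = (double)(long long)(t / BASE);   /* /2^24, truncate: exact */
        r[i] = t - carry * BASE;                 /* limb in [0, 2^24) */
    }
}

int main(void)
{
    double a[NLIMBS] = { 123456.0, 7890123.0, 456789.0, 1.0 };
    double b[NLIMBS] = { 3141592.0, 65358.0, 979323.0, 2.0 };
    double r[2 * NLIMBS];
    int i;

    mul_fp(r, a, b);
    for (i = 2 * NLIMBS - 1; i >= 0; i--)
        printf("%.0f ", r[i]);                   /* most significant first */
    printf("\n");
    return 0;
}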

Phil
--
I find the easiest thing to do is to k/f myself and just troll away
-- David Melville on r.a.s.f1
From: Phil Carmody on
some fule wrote:
> You don't have to go nearly that far. Are Intel floating-point
> units even supposed to produce identical results given identical
> input? Does Intel claim this anywhere? Was it even a design goal?
> Did Intel achieve this? (I have my doubts, given the Pentium F00F

FDIV, not F00F. F00F was purely integer (or even logical: it tests
simple bitwise equality).

Phil
--
I find the easiest thing to do is to k/f myself and just troll away
-- David Melville on r.a.s.f1
From: unruh on
On 2010-06-13, Phil Carmody <thefatphil_demunged(a)yahoo.co.uk> wrote:
> MrD <mrdemeanour(a)jackpot.invalid> writes:
>> Phoenix wrote:
>> > On 4 Jun, 13:31, Maaartin <grajc...(a)seznam.cz> wrote:
>> >
>> >> The scaling itself is no problem. An integer PRNG works
>> >> independently of all the floating-point problems; the resulting
>> >> real numbers may differ slightly, but that cannot matter in,
>> >> e.g., simulation or equation solving (if it did, the whole result
>> >> would be garbage). For crypto you need no floating point at all.
>> >> OTOH, using a floating-point based PRNG means that the generated
>> >> sequences may deviate substantially across different architectures,
>> >> compilers, and/or rounding modes. This should be no big problem for
>> >> non-cryptographic applications, but it makes the results not
>> >> exactly repeatable. For crypto it is simply unusable.
>> > My question.
>> > IEEE 754-2008 is sufficient for all areas of science, except for
>> > crypto?
>>
>> Floating-point representation is *not* sufficient for all areas of
>> science. An FP number is only an approximation to a real number, and so
>> FP is not suitable for problems requiring precise results. For crypto,
>> the results of computations must be accurate and repeatable in every
>> bit, otherwise the output will not be usable. FP doesn't give you
>> that.
>
> You appear to be claiming that floating point operations are
> non-deterministic, a clear falsity. The fact that most languages
> don't give you control over how the floating point operations are
> performed is not a failing of floating point units.

If you do not specify precision, accuracy, and rounding, they are
"non-deterministic", i.e. you cannot predict what the output will be.
If you specify all of those, then it is probably deterministic. But
since different manufacturers specify them differently, the results
will be deterministically different.

>
> Many of computer science's brightest minds and sharpest coders
> have used floating point operations in order to perform exact
> numeric computations. (Dan Bernstein comes to mind, for example.)
>
>> I can imagine various (non-portable) cryptographic uses that could be
>> made of floating-point hardware, but in general you have to use
>> arbitrary-precision integers.
>
> But I use floating point in order to implement arbitrary-precision
> integers.

????

>
> Phil
From: Phil Carmody on
unruh <unruh(a)wormhole.physics.ubc.ca> writes:
> On 2010-06-13, Phil Carmody <thefatphil_demunged(a)yahoo.co.uk> wrote:
> > MrD <mrdemeanour(a)jackpot.invalid> writes:
> >> Phoenix wrote:
> >> > On 4 Jun, 13:31, Maaartin <grajc...(a)seznam.cz> wrote:
> >> >
> >> >> The scaling itself is no problem. An integer PRNG works
> >> >> independently of all the floating-point problems; the resulting
> >> >> real numbers may differ slightly, but that cannot matter in,
> >> >> e.g., simulation or equation solving (if it did, the whole result
> >> >> would be garbage). For crypto you need no floating point at all.
> >> >> OTOH, using a floating-point based PRNG means that the generated
> >> >> sequences may deviate substantially across different architectures,
> >> >> compilers, and/or rounding modes. This should be no big problem for
> >> >> non-cryptographic applications, but it makes the results not
> >> >> exactly repeatable. For crypto it is simply unusable.
> >> > My question.
> >> > IEEE 754-2008 is sufficient for all areas of science, except for
> >> > crypto?
> >>
> >> Floating-point representation is *not* sufficient for all areas of
> >> science. An FP number is only an approximation to a real number, and so
> >> FP is not suitable for problems requiring precise results. For crypto,
> >> the results of computations must be accurate and repeatable in every
> >> bit, otherwise the output will not be usable. FP doesn't give you
> >> that.
> >
> > You appear to be claiming that floating point operations are
> > non-deterministic, a clear falsity. The fact that most languages
> > don't give you control over how the floating point operations are
> > performed is not a failing of floating point units.
>
> If you do not specify precision, accuracy, and rounding, they are
> "non-deterministic", i.e. you cannot predict what the output will be.
> If you specify all of those, then it is probably deterministic. But
> since different manufacturers specify them differently, the results
> will be deterministically different.

Yeah, and if you don't power them, they don't give any results
at all. In order to get things done correctly, you need to ask for
them to be done correctly.

And I don't know of any FPU which will perform 1.0 + 1.0 and give
an answer different from 2.0, no matter what rounding mode is used.
I can think of about 2^120 other operations which will be performed
identically, whatever the rounding mode, on any IEEE 754-compliant
double-precision unit. Those operations give one an awful lot of
flexibility to get computations done correctly.
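
For instance (a small check one can compile, not proof of anything
deeper): integer-valued doubles below 2^53 add exactly under every
rounding mode.

#include <assert.h>
#include <fenv.h>
#include <stdio.h>
#pragma STDC FENV_ACCESS ON

int main(void)
{
    const int modes[] = { FE_TONEAREST, FE_DOWNWARD,
                          FE_UPWARD, FE_TOWARDZERO };
    volatile double a = 9007199254740000.0;   /* an integer below 2^53 */
    volatile double b = 991.0;
    int i;

    for (i = 0; i < 4; i++) {
        fesetround(modes[i]);
        assert(a + b == 9007199254740991.0);  /* 2^53 - 1, exact every time */
    }
    fesetround(FE_TONEAREST);
    printf("exact under all four rounding modes\n");
    return 0;
}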

> > Many of computer science's brightest minds and sharpest coders
> > have used floating point operations in order to perform exact
> > numeric computations. (Dan Bernstein comes to mind, for example.)
> >
> >> I can imagine various (non-portable) cryptographic uses that could be
> >> made of floating-point hardware, but in general you have to use
> >> arbitrary-precision integers.
> >
> > But I use floating point in order to implement arbitrary-precision
> > integers.
>
> ????

I, that is me, the poster of this post,
use, as in utilise,
floating point, as in those sign/exponent/mantissa things,
in order to implement, as in programming computers,
arbitrary-precision integers, like - integers with arbitrary precision.

Which bit wasn't clear?

At 128-512 bits, I was about two times faster than GMP, back in
the days before x86_64 came along. (I've not been number-crunching
since, and I guess they've pulled their socks up in the meantime.)

Credit where credit is due - my code was heavily based on Dan
Bernstein's zmodexp; he did the hard work, I just generalised.

You might want to look into GIMPS - they do some pretty heavy
arbitrary precision exact integer work, and they do it almost
all with FPUs.
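
The trick, schematically (a sketch of the principle, not GIMPS's
actual code): the inverse FFT hands back convolution coefficients as
doubles that are within 0.5 of the true integers, so rounding to
nearest recovers them exactly, after which carries are pushed up in
the chosen base.

#include <math.h>

#define BASE 65536.0   /* 2^16 limbs, small enough that FFT error < 0.5 */

/* Round each post-FFT coefficient to the nearest integer and
   propagate carries; all quantities stay below 2^53, so every
   operation here is exact. */
static void normalize(double c[], int n)
{
    double carry = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        double x = floor(c[i] + 0.5);   /* recover the exact coefficient */
        double t = x + carry;
        carry = floor(t / BASE);        /* division by 2^16 is exact */
        c[i] = t - carry * BASE;        /* limb in [0, BASE) */
    }
}

If the accumulated floating-point error ever reaches 0.5 the
rounding step silently goes wrong, which is why such code checks the
distance to the nearest integer as it goes.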

Phil
--
I find the easiest thing to do is to k/f myself and just troll away
-- David Melville on r.a.s.f1