From: Gordon Burditt on
>>> One should not "always" demand standards that apply in
>>> all cases, remembering that even the metric system is not yet employed
>>> everywhere.

I'll accept proprietary standards.

>> How do you write a specification of what hardware you need to buy
>> to be compatible? I don't think you know, and perhaps even Intel
>> does not know, that all Intel desktop CPUs will be compatible with
>> each other. And you don't know that the ones released next year
>> will be.
>
>I know too little about hardware. But somewhere I read that Intel
>must publish their design specifications to the extent that the
>competitors like AMD could produce compatible products. Of course,

The definitions of "specification" and "compatible" feed each other.
If some floating-point operation is specified to return a result
accurate to +/- 1 ulp (unit in the last place) compared to the
mathematically correct answer, then it can round UP *or* DOWN and
still meet the specification. Specifying +/- half an ulp means it
has to round in the correct direction, except when the result is
exactly halfway between two representable values, in which case
there is still a choice.
And an implementation has to make a rounding choice for each different
input. Per the specification, two FPUs which make different choices
are compatible, but they do *NOT* get bit-for-bit identical results.

Basic operations, such as addition and multiplication, can require
+/- half an ulp (i.e. correct rounding). More complicated operations,
such as log and trig functions, cannot without requiring a lot more
hardware.
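
To make that concrete, here is a small illustrative C sketch: it takes
1/3 as rounded by the hardware, nudges it one ulp upward with
nextafter(), and dumps both bit patterns. Both values lie within
+/- 1 ulp of the true 1/3, so both would satisfy the looser
specification, yet they are not bit-for-bit identical.

    #include <inttypes.h>
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Return a double's exact bit pattern (illustrative helper). */
    static uint64_t bits_of(double d) {
        uint64_t u;
        memcpy(&u, &d, sizeof u);  /* well-defined way to inspect the bits */
        return u;
    }

    int main(void) {
        double below = 1.0 / 3.0;             /* nearest double, just below 1/3 */
        double above = nextafter(below, 1.0); /* adjacent double, just above 1/3 */

        /* Both lie within +/- 1 ulp of the true value of 1/3 ... */
        printf("%.17g  bits: %016" PRIx64 "\n", below, bits_of(below));
        printf("%.17g  bits: %016" PRIx64 "\n", above, bits_of(above));
        /* ... but the bit patterns differ, so anything keyed to the
           exact bits (as cryptography is) diverges. */
        return 0;
    }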

The trouble is, your cryptography requires bit-for-bit identical
results, which is beyond what the specification (even Intel's
proprietary specification) guarantees. "Compatible" isn't good
enough.


>if one is pedantic, one could even question whether two chips
>of the same series by the same manufacuturer work identically,
>because there is a probability of manufacturing errors. But
>would one go thus far?

You don't have to go nearly that far. Are Intel floating-point
units even supposed to produce identical results given identical
input? Does Intel claim this anywhere? Was it even a design goal?
Did Intel achieve it? (I have my doubts, given the Pentium FDIV
bug, and the fact that they fixed it.) Would they know if they
didn't? How would they prove it? (Forget, for the moment, manufacturing
defects, cosmic-ray bit flips, and overclocking.) Perhaps more
important: is it a design goal for future units to give bit-for-bit
identical results for the same input?


From: Bruce Stephens on
gordonb.kymhi(a)burditt.org (Gordon Burditt) writes:

[...]

> The trouble is, your cryptography requires bit-for-bit identical
> results, which is beyond what the specification (even Intel's
> proprietary specification) guarantees. "Compatible" isn't good
> enough.

IIUC the IEEE standards give algorithms for the common operations and
require that implementations produce equivalent results. (That is,
bit-for-bit identical results.)

IIUC JVMs conforming to the spec must also produce identical results (by
virtue of the requirement that JVM arithmetic has to conform to the IEEE
standards).

How that works in the real world is another question, of course. One
can anticipate bugs at various levels.

[...]

From: Maaartin on
On Jun 5, 8:20 pm, Bruce Stephens <bruce+use...(a)cenderis.demon.co.uk>
wrote:
> gordonb.ky...(a)burditt.org (Gordon Burditt) writes:
>
> [...]
>
> > The trouble is, your cryptography requires bit-for-bit identical
> > results, which is beyond what the specification (even Intel's
> > proprietary specification) guarantees.  "Compatible" isn't good
> > enough.
>
> IIUC the IEEE standards give algorithms for the common operations and
> require that implementations produce equivalent results.  (That is,
> bit-for-bit identical results.)

I think so. IIRC, the results of the four basic operations must equal
the exact values rounded according to the current rounding mode. So
they are always bit-for-bit identical, assuming you can control the
rounding mode. But I know of no portable way to control the rounding
mode.
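
(Strictly speaking, C99 did standardize an interface for this in
<fenv.h>, but compilers honor it unevenly -- GCC wants -frounding-math,
for instance -- which may be why it never feels portable in practice.
A minimal sketch, assuming the implementation supports the optional
rounding-mode macros:)

    #include <fenv.h>
    #include <stdio.h>

    /* C99: declare that we touch the floating-point environment.
       (Several compilers ignore this pragma.) */
    #pragma STDC FENV_ACCESS ON

    int main(void) {
        volatile double a = 1.0, b = 3.0;  /* volatile defeats constant folding */

        fesetround(FE_DOWNWARD);
        double lo = a / b;                 /* 1/3 rounded toward -infinity */
        fesetround(FE_UPWARD);
        double hi = a / b;                 /* 1/3 rounded toward +infinity */
        fesetround(FE_TONEAREST);          /* restore the default mode */

        /* The two results bracket the true 1/3 and differ in the last bit. */
        printf("down: %.17g\nup:   %.17g\n", lo, hi);
        return 0;
    }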

> IIUC JVMs conforming to the spec must also produce identical results (by
> virtue of the requirement that JVM arithmetic has to conform to the IEEE
> standards).

The JVM is required to produce identical results only when strictfp is
in effect.

"Within an expression that is not FP-strict, some leeway is granted
for an implementation to use an extended exponent range to represent
intermediate results; the net effect, roughly speaking, is that a
calculation might produce "the correct answer" in situations where
exclusive use of the float value set or double value set might result
in overflow or underflow."
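
That leeway can be imitated in C, assuming long double is the x87
80-bit extended format (it is not everywhere; MSVC, for one, maps it
to plain double):

    #include <stdio.h>

    int main(void) {
        double a = 1e308, b = 1e308;

        /* Pure double: a * b overflows to infinity before the division. */
        double strict = (a * b) / b;

        /* Extended intermediate: the wider exponent range absorbs the
           overflow and the final result comes out as 1e308 -- the
           "correct answer" the JVM spec's leeway is talking about.
           (On 32-bit x87 builds even the line above may stay in
           extended registers, which is exactly the leeway at issue.) */
        double relaxed = (double)(((long double)a * b) / b);

        printf("strict:  %g\n", strict);   /* inf */
        printf("relaxed: %g\n", relaxed);  /* 1e+308 */
        return 0;
    }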

The JVM is required to produce results within 1 ulp of the exact
result even for trigonometric functions, which (because of
compatibility with improper historical implementations) is not what
the Intel FPU delivers. That's why trigonometric functions of large
arguments are slow in Java.
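
A quick way to see why the careful reduction matters (illustrative C;
results depend on your libm -- glibc, for one, reduces the argument
in full precision):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = 1e22;

        /* A good libm reduces x mod 2*pi carrying many more than 53
           bits of pi (e.g. Payne-Hanek reduction); that extra work is
           what makes sin() of a huge argument slow. */
        double careful = sin(x);

        /* Naive reduction with a 53-bit 2*pi: the tiny error in TWO_PI
           is multiplied by roughly x / (2*pi) periods, destroying the
           result completely. */
        const double TWO_PI = 6.283185307179586;
        double naive = sin(fmod(x, TWO_PI));

        printf("sin(1e22), careful: %.17g\n", careful);
        printf("sin(1e22), naive:   %.17g\n", naive);
        return 0;
    }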

From: Mok-Kong Shen on
Maaartin wrote:
> Bruce Stephens wrote

>> IIUC the IEEE standards give algorithms for the common operations and
>> require that implementations produce equivalent results. (That is,
>> bit-for-bit identical results.)
>
> I think so. IIRC, the results of the four basic operations must equal
> the exact values rounded according to the current rounding mode. So
> they are always bit-for-bit identical, assuming you can control the
> rounding mode. But I know of no portable way to control the rounding
> mode.

I haven't studied the standard, so the following is my 'guess' based
on 'common sense' reasoning. The purpose of the standard is clearly to
guarantee that the same rounded result is obtained up to a certain
given precision. To achieve that precision, different hardware may be
implemented differently. The user gets the same result at "that"
precision for use in his computations, but not necessarily the same
result when the words implementing the numbers are dumped and compared
bit for bit. (The big- and little-endian issue is in principle akin to
this, isn't it?)
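
The endian part, at least, is easy to make concrete: dump the storage
bytes of one and the same double, and the order in memory depends on
the machine even though the value is identical (a small illustrative
C sketch):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        double d = 1.0 / 3.0;
        unsigned char bytes[sizeof d];
        memcpy(bytes, &d, sizeof d);

        /* On a little-endian machine the low-order byte prints first;
           on a big-endian machine the order is reversed, although the
           value itself is the same on both. */
        for (size_t i = 0; i < sizeof d; i++)
            printf("%02x ", bytes[i]);
        printf("\n");
        return 0;
    }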

M. K. Shen

From: Mok-Kong Shen on
Phoenix wrote:
> unruh wrote:

>> Then you have nothing since all floating point values are represented by
>> fixed point arithmetic on any computer I know of.
>
> Read this:
>
> http://en.wikipedia.org/wiki/Fixed-point_arithmetic
>
> and
>
> http://en.wikipedia.org/wiki/Floating_point

Any floating-point implementation covers of course only finitely
many values (an extremely small part) of R, and these values are
not uniformly distributed over the range implemented. Implemented
integers likewise cover only a small part of Z or N, but those
values are uniformly distributed over the range implemented.
Presumably this difference could be of some significance for
whether to base a PRNG on floating point or not. But I have no
idea whether this is an advantage or a disadvantage for crypto.
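
The non-uniform spacing is easy to demonstrate with nextafter()
(illustrative C):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* The gap to the next representable double grows with
           magnitude: doubles are dense near zero and sparse at the top
           of the range, unlike integers, which are evenly spaced
           throughout. */
        printf("gap above 1.0:   %g\n", nextafter(1.0, 2.0) - 1.0);       /* 2^-52, ~2.2e-16 */
        printf("gap above 1e15:  %g\n", nextafter(1e15, 1e16) - 1e15);    /* 0.125 */
        printf("gap above 1e300: %g\n", nextafter(1e300, 1e301) - 1e300); /* ~1.5e284 */
        return 0;
    }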

M. K. Shen