From: Gordon Burditt on
>Is the incompatibility between (e.g.) Intel/SUN communications present for
>all areas (Chemistry, Trigonometry, Biology, Geometry, etc.) or only
>for crypto?

The incompatibility is not only in hardware, but also in software
providing gratuitous extra precision.

I'd be very nervous about doing cryptography where the specification
doesn't guarantee interoperability, and you don't really know what's
needed to guarantee it. And do you *really* know that for all Intel
CPUs used in desktop systems (you can specify "introduced since
2005" if you like), the log of a number will come out exactly the
same on all of them, given the same input, same rounding modes, and
the same precision? I doubt it. Does Intel even claim this? They
make precision claims but that doesn't mean you'll always get the
same error on every Intel CPU for the same input. I'm not sure
that even Intel knows. Remember the infamous Pentium division
bug.
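For what it's worth, here is a minimal sketch (mine, not part of the
original post) of how you could check this: print the exact bit
pattern of log() for a fixed input on each machine and compare. Any
difference in the trailing bits is exactly the hardware/library
discrepancy in question.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    double x = 1.2345678901234567;   /* identical literal on every machine */
    double y = log(x);               /* result depends on CPU and libm     */
    uint64_t bits;

    memcpy(&bits, &y, sizeof bits);  /* portable way to view the raw bits  */
    printf("log(%.17g) = %.17g, bits = %016llx\n",
           x, y, (unsigned long long)bits);
    return 0;
}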

Most measurements you can make of physical quantities have far less
than 15 significant digits (about what you get from an IEEE 64-bit
floating point number), and those that get close to this require
very expensive equipment. Possible exceptions are currency and
time.
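As a quick check of that figure (my sketch, not in the original post),
<float.h> spells out what a 64-bit IEEE double delivers:

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("DBL_DIG     = %d\n", DBL_DIG);       /* 15 decimal digits guaranteed */
    printf("DBL_EPSILON = %g\n", DBL_EPSILON);   /* about 2.2e-16                */
    return 0;
}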

If you really *need* that many significant digits in other fields
of science such as Chemistry, Biology, and Physics, you're going
to have trouble matching your model with reality. Chemicals available
commercially have impurities. DNA does not reproduce exactly every
time. Densities of elements vary with the distribution of isotopes
in the sample, which may vary depending on where it came from. Real
data centers do not keep the temperature regulated at 72.000000000000
degrees F: you're lucky if they guarantee that the temperature stays
within 70-74 degrees F.

If it's going to blow up if you don't have 15 significant digits
of accuracy in everything, well, it's probably too late to move
further away from it.


Since you are using *all the bits* for crypto, you need all the
bits to match. You could *reduce* the problem, but not entirely
eliminate it, by rounding the result to, say, 10 significant digits
before using it. I don't think it's possible to eliminate it
entirely, as sometimes you're going to have the rounding error
straddling the point where you round one way or another.
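A minimal sketch of that straddling effect (mine, not from the
original post; the two inputs are contrived for illustration): the
values below differ by only about 2e-14, yet round to different
10-digit results because they sit on opposite sides of a rounding
boundary.

#include <stdio.h>

int main(void)
{
    double a = 1.23456789049999;   /* just below the ...8905 rounding boundary */
    double b = 1.23456789050001;   /* just above it                            */

    printf("a -> %.10g\n", a);     /* prints 1.23456789  */
    printf("b -> %.10g\n", b);     /* prints 1.234567891 */
    return 0;
}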


>If the answer is only for crypto, what is the solution to pass accurate
>and compatible values in R from one system to another?

*IF* you specify IEEE 754, and rounding modes, and which size to
use, you can communicate values using, say, C's hexadecimal floating
point representation. This will take care of things like byte-order.
*HOWEVER*, it won't take care of the problem that if two machines
take the log of that value they might get slightly different results
starting from an identical value.
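A minimal sketch of that exchange (mine, not part of the original
post), using C99's %a output and strtod() to parse it back; the
value 0.1 is just an example:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double x = 0.1;                        /* value to transmit             */
    char buf[64];

    snprintf(buf, sizeof buf, "%a", x);    /* exact, byte-order independent */
    printf("wire form: %s\n", buf);        /* e.g. 0x1.999999999999ap-4     */

    double y = strtod(buf, NULL);          /* receiving side parses it back */
    printf("round trip exact: %s\n", x == y ? "yes" : "no");
    return 0;
}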

From: Gordon Burditt on
>Are you aware that incompatibility is ubiquitous in CS? Microsoft
>files and Apple files are in different formats, if I'm not mistaken.
>I can't read PS documents, because I only have Adobe Reader on my PC.

I doubt very much that having Adobe Reader prevents you from
installing an application that understands Postscript documents.

>Currently I have the 64 bit version of Windows 7 and can no longer
>run the exe-files I previously got using Windows XP on a 32 bit PC.


>My point was that, 'if' the algorithm turns out to be 'sufficiently'
>superior, 'then' it could well find application areas that it
>deserves.
Why the scare quotes?

I suspect that the interoperability problem, plus the problem that
you have no specification that tells you what is compatible and
what isn't, makes it sufficiently inferior that truly infinite
speed will not make up for it.

How, for example, would you *TEST* that an Intel Core 2 Duo (with
a specific serial number) running the algorithm on core 1 is
compatible with the same algorithm running on core 2 of the same
physical CPU?
Running 500 YB of encrypted data might not catch the problem.


>One should not "always" demand standards that apply in
>all cases, remembering that even the metric system is not yet employed
>everywhere.

How do you write a specification of what hardware you need to buy
to be compatible? I don't think you know, and perhaps even Intel
does not know, that all Intel desktop CPUs will be compatible with
each other. And you don't know that the ones released next year
will be.

>To my knowledge, even different versions of certain ISO
>Standards may also not be (entirely) upwards compatible. In real
life, one often has to accept compromises, right?
From: Mok-Kong Shen on
Gordon Burditt wrote:
>> Are you aware that incompatibility is ubiquitous in CS? Microsoft
>> files and Apple files are in different formats, if I'm not mistaken.
>> I can't read PS documents, because I only have Adobe Reader on my PC.
>
> I doubt very much that having Adobe Reader prevents you from
> installing an application that understands Postscript documents.

No. The point was simply that there is no standard tool that can
read all documents.

>> One should not "always" demand standards that apply in
>> all cases, remembering that even the metric system is not yet employed
>> everywhere.
>
> How do you write a specification of what hardware you need to buy
> to be compatible? I don't think you know, and perhaps even Intel
> does not know, that all Intel desktop CPUs will be compatible with
> each other. And you don't know that the ones released next year
> will be.

I know too little about hardware. But somewhere I read that Intel
must publish their design specifications to the extent that
competitors like AMD can produce compatible products. Of course,
if one is pedantic, one could even question whether two chips
of the same series from the same manufacturer work identically,
because there is a probability of manufacturing errors. But
would one go that far?

M. K. Shen
From: Mok-Kong Shen on
unruh wrote:

> Since floating point values are represented on the system as integers
> (ie a finite number of bits) and since the representation can vary, why
> in the world would anyone design a floating point crypto with all its
> problems rather than the equivalent integer system. Ie, anything a
> floating point algorithm can do, an integer one can as well-- maybe with
> a bit more programming.

I don't know exactly. But maybe efficiency could be a point. Otherwise,
why isn't a lot of other computing done with integer arithmetic?

M. K. Shen

From: Bryan on
Mok-Kong Shen wrote:
> unruh wrote:
> > Since floating point values are represented on the system as integers
> > (ie a finite number of bits) and since the representation can  vary, why
> > in the world would anyone design a floating point crypto with all its
> > problems rather than the equivalent integer system. Ie, anything a
> > floating point algorithm can do, an integer one can as well-- maybe with
> > a bit more programming.
>
> I don't know exactly. But maybe efficiency could be a point. Otherwise,
> why isn't a lot of other computing done with integer arithmetic?

Lots of other computing *is* done with integer arithmetic. Also a lot
of computing is done with floating point arithmetic, and the reason
is, obviously, that the problem domain deals with finite-precision
numeric values from a continuous range. Floating-point representation
is particularly well-suited for physical measurements.

As a tangential and off-topic note: double-precision floating point
outlasted other size-limited computer standards, but now faces a
challenge. The sizes historically allocated for crypto keys, for
addresses to memory or disk, for network IDs -- they've all been
blown past within a decade or two. Double-precision floating point
numbers have been 64 bits from way back, and that's been more than
precise enough for any measurement until very recently. The most
precisely measured physical quantity is time, and clocks have now
exhausted the precision of double-length floats. Clocks have reached
+/- one second in 200 million to 400 million years, which exceeds
what the 53-bit mantissa of IEEE-754-1985 can resolve.
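A quick back-of-the-envelope check of that claim (my arithmetic, not
in the original post), taking 300 million years as a representative
figure:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double years   = 300e6;                    /* assumed clock stability span */
    double seconds = years * 365.25 * 86400;   /* about 9.5e15 seconds         */

    printf("one second in %.3g years = one part in %.3g\n", years, seconds);
    printf("2^53 (double significand) = %.3g\n", pow(2.0, 53.0));
    return 0;
}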

The IEEE-754 standard of 1985 deserves its acclaim. For those of us
who are naive about floating-point arithmetic, IEEE-754-1985 just
worked; countless problems never arose. The floating-point experts are
still on the job, and IEEE 754-2008 defines quadruple precision,
precise to one part in 2^113, i.e. 10384593717069655257060992658440192.
It looks like overkill; no one alive as I write this will see
measurements that come close to pushing it. Nevertheless, physicists
did exhaust double precision, so let's get IEEE 754-2008 implemented.

--
--Bryan