From: nmm1 on
In article <htdqcg$jlu$1(a)node1.news.atman.pl>,
Piotr Wyderski <piotr.wyderski(a)mothers.against.spam.gmail.com> wrote:
>
>> You are confusing correctness with consistency. They aren't the same
>> when either parallelism or approximations to real numbers (including
>> floating-point) are involved.
>
>No Nick, I am not confusing these terms. Correctness and consistency
>are orthogonal. If you have a piece of code claimed to be a PDE solver,
>it is correct if it solves PDEs, basically. ...

Yes, we are agreed, there.

>I prefer the program to execute exactly as implemented, because
>it is what it is for a reason. It reflects the design made by a human
>who is supposed to know all the arcane details. Floating-point
>calculations are not for everybody, if the programmer doesn't know
>what is going under the hood, no compiler will ever help him. And
>no compiler should try to outsmart an expert.

But that shows that you ARE confusing correctness with consistency!
What on earth does "exactly as implemented" mean? Obviously, it
executes exactly as implemented in one sense, but I assume that you
mean coded. So let's consider a trivial example:

REAL(KIND=KIND(0.0D0)) :: x
INTEGER :: n
PRINT *, x**n

Should this deliver the nearest number to the 'true' result, the same
as multiplying x by itself n-1 times, or the binary form that every
compiler has used for 50 years? They are all different.
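
For concreteness, here is a minimal sketch of three such forms (the routine names, and the particular x and n, are purely illustrative, not taken from any compiler); for suitably chosen operands the three printed values differ in their last bits:

PROGRAM power_forms
   IMPLICIT NONE
   REAL(KIND=KIND(0.0D0)) :: x
   INTEGER :: n
   x = 1.1D0
   n = 29
   PRINT *, EXP(REAL(n, KIND(0.0D0))*LOG(x)) ! stand-in for a routine aiming at the 'true' result
   PRINT *, naive_power(x, n)                ! multiply x by itself n-1 times
   PRINT *, binary_power(x, n)               ! the classic binary (square-and-multiply) form
CONTAINS
   FUNCTION naive_power(base, e) RESULT(p)
      REAL(KIND=KIND(0.0D0)), INTENT(IN) :: base
      INTEGER, INTENT(IN) :: e
      REAL(KIND=KIND(0.0D0)) :: p
      INTEGER :: i
      p = base
      DO i = 2, e
         p = p*base
      END DO
   END FUNCTION naive_power
   FUNCTION binary_power(base, e) RESULT(p)
      REAL(KIND=KIND(0.0D0)), INTENT(IN) :: base
      INTEGER, INTENT(IN) :: e
      REAL(KIND=KIND(0.0D0)) :: p, b
      INTEGER :: k
      p = 1.0D0
      b = base
      k = e
      DO WHILE (k > 0)
         IF (MOD(k, 2) == 1) p = p*b
         b = b*b
         k = k/2
      END DO
   END FUNCTION binary_power
END PROGRAM power_forms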

Similarly, consider a simple parallel reduction. Should it deliver the
nearest number to the 'true' result, the same as a serial accumulation,
or what?
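
To make the point concrete, here is a small sketch I would offer (the data, the chunk size of 16 and the names are all mine): a strict left-to-right sum and a tree-shaped sum of the same array, the latter being the natural shape of a parallel reduction, will in general disagree in the last bits.

PROGRAM reduction_orders
   IMPLICIT NONE
   INTEGER, PARAMETER :: dp = KIND(0.0D0)
   INTEGER, PARAMETER :: n = 100000
   REAL(dp) :: a(n), serial, tree
   INTEGER :: i
   DO i = 1, n
      a(i) = 1.0_dp/REAL(i, dp)      ! terms of varying magnitude, so the order shows
   END DO
   serial = 0.0_dp
   DO i = 1, n
      serial = serial + a(i)         ! strict left-to-right accumulation
   END DO
   tree = pair_sum(a, 1, n)          ! tree-shaped order, as a parallel reduction might use
   PRINT *, serial, tree, serial - tree
CONTAINS
   RECURSIVE FUNCTION pair_sum(v, lo, hi) RESULT(s)
      REAL(dp), INTENT(IN) :: v(:)
      INTEGER, INTENT(IN) :: lo, hi
      REAL(dp) :: s
      INTEGER :: mid
      IF (hi - lo < 16) THEN
         s = SUM(v(lo:hi))           ! small chunks summed directly
      ELSE
         mid = (lo + hi)/2           ! split and combine, as each worker would
         s = pair_sum(v, lo, mid) + pair_sum(v, mid + 1, hi)
      END IF
   END FUNCTION pair_sum
END PROGRAM reduction_orders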

The point is that only numerical algorithmists should be concerned
with details like that and we, almost uniformly, don't like Java's
approach and prefer Fortran's. That is largely because it isn't
even possible to specify what a deterministic result should be, in
general.

>> The expectation of determinism was introduced by the new 'computer
>> scientists'
>
>Numerical analysis is more applied math than anything else
>and centuries older than programmable computers.

Precisely. And the fault of the 'computer scientists' was to ignore
all of that, and its development on stored-program computers in the
1950s and 1960s.


Regards,
Nick Maclaren.
From: Piotr Wyderski on
nmm1(a)cam.ac.uk wrote:

> What on earth does "exactly as implemented" mean? Obviously, it
> executes exactly as implemented in one sense, but I assume that you
> mean coded.

First of all, it should preserve the order of braces, if that
particular order is important. The programmer alone
should specify whether it is or is not important.

> Should this deliver the nearest number to the 'true' result, the same
> as multiplying x by itself n-1 times, or the binary form that every
> compiler has used for 50 years? They are all different.

They are all different and they all are needed, hence all should be
available.
In most cases the "don't really care" variant will be used, so it should
be associated with the operator **, but the remaining ones should still
be there (as library functions, for instance).

> Similarly, consider a simple parallel reduction. Should it deliver the
> nearest number to the 'true' result, the same as a serial accumulation,
> or what?

It should deliver what is REALLY NEEDED. And the analysis, at
*design time*, is the right time and place to discover what is needed.
No compiler should transform e.g. compensated summation into plain
summation just because it is 2x faster and "looks similar".
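
To be explicit about what I mean, here is a textbook sketch of such a loop (the variable names are mine); the compensation c = (t - s) - y is algebraically zero, so any "simplification" that treats the reals as exact turns the whole thing back into plain summation:

FUNCTION kahan_sum(v) RESULT(s)
   IMPLICIT NONE
   REAL(KIND=KIND(0.0D0)), INTENT(IN) :: v(:)
   REAL(KIND=KIND(0.0D0)) :: s, c, y, t
   INTEGER :: i
   s = 0.0D0
   c = 0.0D0               ! running compensation for lost low-order bits
   DO i = 1, SIZE(v)
      y = v(i) - c         ! apply the compensation to the next term
      t = s + y            ! the big addition; low-order bits of y may be lost here
      c = (t - s) - y      ! recover what was lost: algebraically zero, numerically not
      s = t
   END DO
END FUNCTION kahan_sum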

Best regards
Piotr Wyderski

From: nmm1 on
In article <htdsrk$k50$1(a)node1.news.atman.pl>,
Piotr Wyderski <piotr.wyderski(a)mothers.against.spam.gmail.com> wrote:
>
>> What on earth does "exactly as implemented" mean? Obviously, it
>> executes exactly as implemented in one sense, but I assume that you
>> mean coded.
>
>First of all, it should preserve the order of braces, if that
>particular order is important. The programmer alone
>should specify whether it is or is not important.

If you mean parentheses (i.e. round brackets), not braces as such
(i.e. curly brackets), it's been tried many times, and has failed
every time. If you mean that there should be special brackets to
mark numerically significant grouping, that's been tried and has
never been very successful, either - but it's more plausible.

>> Should this deliver the nearest number to the 'true' result, the same
>> as multiplying x by itself n-1 times, or the binary form that every
>> compiler has used for 50 years? They are all different.
>
>They are all different and they all are needed, hence all should be
>available.
>In most cases the "don't really care" variant will be used, so it should
>be associated with the operator **, but the remaining ones should still
>be there (as library functions, for instance).

That's been tried, and it has failed dismally every time. It works
as long as you consider only a few simple functions, but falls apart
as soon as more complicated ones are built using them or when the
function is infeasible to implement in all of the required ways.
You rapidly end up with either a combinatorial explosion or splitting
the language into separate dialects, none of which are adequate for
what people need to do.

>> Similarly, consider a simple parallel reduction. Should it deliver the
>> nearest number to the 'true' result, the same as a serial accumulation,
>> or what?
>
>It should deliver what is REALLY NEEDED. And the analysis, at
>*design time*, is the right time and place to discover what is needed.

Well, yes, but .... The problem is that a specification has got to
specify something, and your specification is a reinvention of the
classic DWIM instruction. Sorry, but you can't have it.

>No compiler should transform e.g. compensated summation into plain
>summation just because it is 2x faster and "looks similar".

If you can think of an effective, clean, efficient way of flagging
numerically critical structuring without massively compromising
optimisability, then you will have achieved something new. Nobody
has succeeded very well in 50 years of trying.


Regards,
Nick Maclaren.
From: Terje Mathisen on
Piotr Wyderski wrote:
> Terje Mathisen wrote:
>
>> For some problems, Java makes things even worse due to its even stricter
>> insistence on "there can only be one possible answer here, and that is
>> the one you get by evaluating all fp operations in the exact order
>> specified".
>>
>> Not conducive to optimized code.
>
> Although I am a programmer working full-time on performance
> and scalability on SMPs, I like the above. Performance itself is
> not the most important thing. Correctness is. Then one can start
> optimizing.

I agree.

However, I contend that the definition of "correctness" for fp
operations is severely broken in Java.

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: Andrew Reilly on
On Mon, 24 May 2010 15:40:00 +0200, Terje Mathisen wrote:

> However, I contend that the definition of "correctness" for fp
> operations is severely broken in Java.

I agree that Java's (original) notion that the results should always be
bit-exact is obviously spurious. I think that I agree with Piotr,
though, that floating point code should (at least by default) be compiled
"exactly as written". Floating point numbers and operations really only
"work" at an assembly language level: order is vitally important. If
that turns out to make auto-optimisation painful, then so be it:
optimisation that requires things to happen in different orders
necessarily requires different numerical analysis, and so is rightly
something that happens at a library level (where that library might well
lean on explicit parallelism or thread mechanisms).
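
The usual three-line demonstration of why the order matters (my values; on an IEEE double system the two lines print different results):

PROGRAM not_associative
   IMPLICIT NONE
   REAL(KIND=KIND(0.0D0)) :: a, b, c
   a = 1.0D0
   b = 1.0D-16
   c = 1.0D-16
   PRINT *, (a + b) + c    ! b, then c, are each rounded away against a
   PRINT *, a + (b + c)    ! b + c survives and nudges a up to the next representable value
END PROGRAM not_associative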

Cheers,

--
Andrew