From: Walter Bright on
nmm1(a)cam.ac.uk wrote:
> In article <i0u2r7$tpa$1(a)news.eternal-september.org>,
> Walter Bright <walter(a)digitalmars-nospamm.com> wrote:
>>> I am afraid not. That is true for only some architectures and
>>> implementations, and is one of the great fallacies of the whole
>>> IEEE 754 approach; and that is even if a 'perfect' IEEE 754
>>> implementation were predictable, which it is not required to be.
>> Can you elucidate where the IEEE 754 spec allows unpredictability?
>
> Mainly in the handling of the signs and values of NaNs: "this
> standard does not interpret the sign of a NaN". That wouldn't
> matter too much, except that C99 (and hence C++0X and IEEE 754R)
> then proceeded to interpret them - despite not changing that!

You're right, I'd forgotten about the NaN "payload" bits. But I also think
that is irrelevant to getting accurate floating point results.

For the sign of NaN, you're right as well, but it's also irrelevant. The
only reason C99 mentions this is because it's efficient for testing/copying
the sign bit without also having to test for NaN'ness. Any code that
depends on the sign of NaN is broken.

> Also, in IEEE 754R, the rounding mode for decimal formats.

Does anyone use the decimal formats?


>> I understand that the FP may use higher precision than specified by the
>> programmer, but what I was seeing was *lower* precision. For example,
>> an 80 bit transcendental function is broken if it only returns 64 bits
>> of precision.
>
> Not at all. I am extremely surprised that you think that. It would
> be fiendishly difficult to do for some of the nastier functions
> (think erf, inverf, hypergeometric and worse) and no compiler I have
> used for an Algol/Fortran/C/Matlab-like language has ever delivered it.

I agree that if it's impractical to do it, then requiring it is, ahem,
impractical. But I see these accuracy problems in functions like tanh() and
acosh(), where some C libraries get them fully accurate and others are way
off the mark. Obviously, it is practical to get those right.
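
To illustrate the sort of thing I mean, here is a sketch (the function
names are mine, and it assumes C99's log1p is available):

#include <math.h>

// Naive acosh: catastrophic cancellation for x just above 1, because
// x*x - 1.0 loses almost all of its significant bits there.
double acosh_naive(double x)
{
    return log(x + sqrt(x * x - 1.0));
}

// Careful version: rewrite x*x - 1 as (x - 1)*(x + 1) and use log1p so
// accuracy is preserved as x -> 1. (x - 1.0 is exact for x in [1, 2].)
double acosh_careful(double x)
{
    double d = x - 1.0;
    return log1p(d + sqrt(d * (d + 2.0)));
}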


>> Other lowered precision sloppiness I've seen came from not implementing
>> the guard and sticky bits correctly.
>
> Well, yes. But those aren't enough to implement IEEE 754, anyway.

Aren't enough, sure, but necessary, yes.
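
For reference, here's a toy model of what those bits buy you, collapsing
everything below the guard bit into a single sticky bit (the names and
framing are mine, not any actual FPU's):

#include <stdint.h>

// Drop the low 'shift' bits of an integer significand, rounding to
// nearest, ties to even. 'guard' is the first discarded bit; 'sticky'
// is the OR of all the bits below it. Assumes 1 <= shift < 64.
uint64_t round_significand(uint64_t sig, int shift)
{
    uint64_t guard  = (sig >> (shift - 1)) & 1;
    int      sticky = (sig & ((1ULL << (shift - 1)) - 1)) != 0;
    uint64_t result = sig >> shift;
    if (guard && (sticky || (result & 1)))
        ++result;  // past halfway rounds up; exact ties round to even
    return result;
}

Get it wrong and results are off by an ulp in exactly the sloppy way
described above.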


>> Other problems are failure to deal properly with nan, infinity, and
>> overflow arguments.
>>
>> I don't believe such carelessness is allowed by IEEE 754, and even
>> if it was, it's still unacceptable in a professional implementation.
>
> Even now, IEEE 754 requires only the basic arithmetic operations,
> and recommends only some of the simpler transcendental functions.
> Have you ever tried to implement 'perfect' functions for the less
> simple functions? Everyone that has, has retired hurt - it's not
> practically feasible.

I was following the NCEG (Numerical C Extensions Group) back around 1991 or
so, and they came out with a document describing what each standard C
library function should do with NaN and infinity. I implemented all of that
in my C/C++ compiler. None of it was rocket science or impractical. It's a
darned shame that it took 15+ years for other compilers to get around to
doing it.


> Things have changed. Longer ago than that, a few computers had
> unpredictable hardware (the CDC 6600 divide was reported to, for
> example, but I didn't use it myself). But the big differences
> since 1980 and now are:
>
> 1) Attached processors (including GPUs) and alternate arithmetic
> units (e.g. vector units, SSE, Altivec etc.) These usually are
> not perfectly compatible with the original arithmetic units,
> usually for very good reasons.
>
> 2) The widespread use of dynamic optimisation, where the code or
> hardware chooses a path at run-time, based on some heuristics to
> optimise performance.
>
> 3) Parallelism - ah, parallelism! And therein hangs a tale ....

I have no problem with, for example, a "fast float" compiler switch that
explicitly compromises fp accuracy for speed. But such behavior should not
be enshrined in the Standard.


From: Joshua Maurice on
On Jul 6, 7:18 pm, Walter Bright <newshou...(a)digitalmars.com> wrote:
> So far, these decisions are holding up well in real life. There are
> still some issues, like should operator== be required to be pure?

On this question, could someone define "pure" exactly in this context,
please? The possible counterexample I immediately thought of is union-find.
A find operation on a forest of union-by-rank trees is "logically const" in
that it doesn't change observable state, but it does greatly mutate
internal state to get better performance on subsequent lookups. operator==
on some data structures may do operations like this to optimize future
operations. However, this reminds me a lot of the mutable keyword as "an
exception" to const correctness. Perhaps a similar concept is needed for
pure functions.
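
A minimal sketch of the kind of structure I mean (a toy disjoint-set class
of my own, not from any particular library):

#include <vector>

// find() is logically const -- the partition it represents never
// changes -- but it compresses paths as it goes, so the parent links
// are declared mutable.
class disjoint_sets
{
    mutable std::vector<int> parent;
public:
    explicit disjoint_sets(int n) : parent(n)
    {
        for (int i = 0; i < n; ++i)
            parent[i] = i;
    }

    int find(int x) const
    {
        int root = x;
        while (parent[root] != root)
            root = parent[root];
        while (parent[x] != root) {  // path compression: speeds up later
            int next = parent[x];    // lookups without changing anything
            parent[x] = root;        // observable
            x = next;
        }
        return root;
    }

    bool equivalent(int a, int b) const { return find(a) == find(b); }
};

Would a pure operator== be allowed to call such a find()?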



From: nmm1 on
In article <i101th$qku$1(a)news.eternal-september.org>,
Walter Bright <walter(a)digitalmars-nospamm.com> wrote:
>
>You're right, I'd forgotten about the NaN "payload" bits. But I also
>think that is irrelevant to getting accurate floating point results.

Agreed.

>For the sign of NaN, you're right as well, but it's also irrelevant.
>The only reason C99 mentions this is because it's efficient for
>testing/copying the sign bit without also having to test for NaN'ness.
>Any code that depends on the sign of NaN is broken.

Obviously, I agree with the last!

But, regrettably, that is NOT what C99 specifies - there would have been
much less opposition to it if that were the case. It defines the
semantics of the sign bit of NaNs in at least a couple of cases
(copysign and conversion, mainly), leading many people to believe that
it is reliable data.
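
For example, something like the following is required to work by C99
(assuming an implementation that provides the C99 <math.h> additions):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double n = copysign(NAN, -1.0);      // a NaN with its sign bit set
    printf("%d\n", signbit(n) ? 1 : 0);  // prints 1 on a conforming system
    // But nothing pins down the sign of a NaN produced by ordinary
    // arithmetic, so treating that bit as data is still broken.
    return 0;
}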

>> Also, in IEEE 754R, the rounding mode for decimal formats.
>
>Does anyone use the decimal formats?

Not yet, obviously. But the reason for their introduction was the
belief that they will. We shall see.

>I agree that if it's impractical to do it, then requiring it is, ahem,
>impractical. But I see these accuracy problems in functions like tanh()
>and acosh(), where some C libraries get them fully accurate and others
>are way off the mark. Obviously, it is practical to get those right.

Hang on. Firstly, 'right' is an emotive word. Many experts do not
subscribe to the 'perfect input' belief - it rapidly shows its defects
as soon as you do a bit of backwards error analysis, for a start.
Balancing 'accuracy', performance, the complexity of the code and the
cost of designing the functions is a matter of choice and not specified
by most languages (including C++).

Secondly, I am not denying that there are some thoroughly ghastly
implementations out there. I have seen cosh() both slower and less
accurate than using the standard formula on exp() - now, that's just
plain broken.
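
(For reference, the standard formula in question, as a sketch:)

#include <math.h>

// cosh via the textbook identity; a library cosh that is both slower
// and less accurate than this is getting nothing in return.
double cosh_textbook(double x)
{
    double e = exp(x);
    return 0.5 * (e + 1.0 / e);
}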

>I was following the NCEG (Numerical C Extensions Group) back around
>1991 or so, and they came out with a document describing what each
>standard C library function should do with NaN and infinity. I
>implemented all of that in my C/C++ compiler. None of it was rocket
>science or impractical. It's a darned shame that it took 15+ years
>for other compilers to get around to doing it.

So was I. I dropped out when it was agreed that it would always
remain a separate TR and not be included in C, and then took my eye
off the ball. I agree that it's not hard - whether it is desirable
or not is another matter (I don't agree that it is).

>I have no problem with, for example, a "fast float" compiler switch
>that explicitly compromises fp accuracy for speed. But such behavior
>should not be enshrined in the Standard.

Eh? The standard has the choice of forbidding it or permitting it. Are
you seriously saying that it should be forbidden? C++ is already slow
enough compared to Fortran on numeric code that making it worse would be
a bad idea.

What I am saying is that such matters are not currently specified
by the standard, and therefore a conforming implementation is free
to choose what it supports and what it makes the default.


Regards,
Nick Maclaren.


From: Mathias Gaunard on
On Jul 7, 3:21 am, Andre Kaufmann <akfmn...(a)t-online.de> wrote:

> Function objects yes. But the C++ compiler hasn't any notion of semantic
> of the function (internals) anymore - it has generated code.

I do not really understand what you're talking about, nor why it would be a
problem.


>
> You can't (that easily) pass function objects to another function

There is nothing hard about it.
Either take it as a template argument or use type erasure with
std::function (but once you erase the type, you necessarily make any
function object monomorphic, as you have to specify a signature).
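
A sketch of the two options (the names are mine):

#include <functional>

// 1. Template parameter: the function object keeps its exact type,
//    stays polymorphic, and calls can be inlined.
template <typename F>
double apply_twice(F f, double x) { return f(f(x)); }

// 2. Type erasure: anything callable with the right signature fits,
//    but that signature is now fixed once and for all (monomorphic).
double apply_twice_erased(std::function<double (double)> f, double x)
{
    return f(f(x));
}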


> use pattern matching (e.g. if parameter number2 is of type int and has
> value 5 then emit that code).

Pattern matching is a feature of sum types, not of functions. Sum types are
tagged unions of different possible types, and pattern matching is a
process that happens at runtime. Note that typical statically compiled
implementations do not do any code generation at runtime, so it's basically
a switch or an indirect function call.

You can do the same thing with boost::variant (whose visitors use a switch
under the hood), for example, although it's getting a bit old and is in
need of modernization. I've written myself a helper that generates a
visitor from a set of function objects (possibly lambdas), and the end
syntax is basically the same as pattern matching, except that it only
matches at the first level. Matching arbitrarily deep would be possible,
but it would require a DSEL-based visitor mechanism that would do
unification, rather than relying on plain old overloading to select the
function to invoke depending on the type.
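
For example, something along these lines (a plain static_visitor, not the
unification helper I mentioned):

#include <boost/variant.hpp>
#include <iostream>
#include <string>

typedef boost::variant<int, std::string> value;

struct print_visitor : boost::static_visitor<void>
{
    void operator()(int i) const
    { std::cout << "int: " << i << "\n"; }
    void operator()(const std::string& s) const
    { std::cout << "string: " << s << "\n"; }
};

int main()
{
    value v = std::string("hello");
    boost::apply_visitor(print_visitor(), v); // dispatches on the held type
}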


> It's more like a mixture of C++ templates
> and delegates, but without the restrictions.

I don't get what that is about.


> Besides that it's more compact to write:
>
> Example:
>
> let r f(x, y, z) = fx + fy + fz;

I do not know what this syntax is supposed to do. This is not valid
OCaml for example.


> would be something like:
>
> template <typename T, typename F, typename X, typename Y, typename Z>
> T r(F function, X x, Y y, Z z)
> {
>     return function(x) + function(y) + function(z);
> }

Well yeah, when you name the function "function", the code tends to be
more verbose than when you name it "f" ;).

Otherwise, it is true that the template<...> part is more verbose than type
inference. You can get something like type inference with the GCC
polymorphic lambda extension I discussed, however.

I don't think it would be good to have that in normal function
definition, because to do type-inference-like behaviour you need to
put the whole expression-body in a decltype return clause, and that's
not very nice when function declaration and definition are decoupled
like they are in C++.
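
Concretely, the C++0x spelling of the earlier example would be something
like this, with the whole body repeated in the decltype clause:

// The return type is inferred from the expression, at the cost of
// writing the expression twice.
template <typename F, typename X, typename Y, typename Z>
auto r(F f, X x, Y y, Z z) -> decltype(f(x) + f(y) + f(z))
{
    return f(x) + f(y) + f(z);
}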



> Hm, but this ABI standard (wasn't aware of it) isn't part of the C++
> standard ?

No it isn't, it's an industry standard.


> I don't think that a general ABI standard would be needed for all
> platforms (although this would be nice), but a single one for each
> platform would be IMHO sufficient, since you can't mix any libraries of
> different platforms either.
>
> But any "basic open (C++) ABI standard" for each platform should exist
> and supported by all C++ compilers for this platform.

Isn't that basically the state of things?
All platforms follow the Itanium C++ ABI adapted to their architecture,
except Windows, which follows the Microsoft ABI.



From: Mathias Gaunard on
On Jul 7, 3:18 am, Walter Bright <newshou...(a)digitalmars.com> wrote:

> 1. Exceptions were divided into two categories, recoverable and
> non-recoverable. Pure functions can throw both categories, but since
> non-recoverable ones are not recoverable (!) it is ok in such a case to
> violate purity. If recoverable exceptions are thrown, they must be
> thrown every time the same arguments are supplied. (Non-recoverable
> exceptions would be things like seg faults and assertion failures.)

I personally don't understand the point of non-recoverable exceptions at
all. If they're non-recoverable, that means something has happened that
should *never* happen, and is therefore a bug in the program itself. The
program might as well abort and terminate directly. Trying to clean up in
the face of something that should never happen in the first place cannot
work, and might actually lead to even more errors.



