From: Andre Kaufmann on
Mathias Gaunard wrote:
> On Jul 6, 2:50 am, Andre Kaufmann <akfmn...(a)t-online.de> wrote:
>
>> I don't know why B.S. has chosen to let macros propagate over header
>> file boundaries.
>
> You mean you would like that, when you #include a file, the current
> context doesn't get transported to the processing of the included
> file?

Yes, exactly.
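
To make the behaviour concrete - a minimal sketch, with made-up file
names: any macro defined before an #include is visible inside the
included file, because that file is processed textually in the
includer's macro context.

// config.h (hypothetical)
#ifdef USE_FAST_PATH        // sees whatever the includer #defined
int fast_path();
#endif

// main.cpp
#define USE_FAST_PATH       // leaks into every subsequent #include
#include "config.h"         // here fast_path() gets declared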

> That sounds like not such a bad idea, but wouldn't that prevent the
> preprocessor from being used for loops?

Do you mean that macros couldn't be propagated and used in another file?
E.g. to generate template code with "dynamic" parameters, without having
to repeat the template implementation multiple times?

Perhaps.
But this type of code commonly results in hard-to-debug code and is
really a kind of code generation.
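
For reference, this is the kind of cross-file preprocessor "loop" that
would stop working - the classic X-macro pattern, sketched here with
made-up names; it only works because the macro X crosses the file
boundary:

// types.def (hypothetical) - the "loop body", one entry per type
X(int)
X(double)
X(float)

// user.cpp - each #include re-runs the body under a new definition of X
#define X(T) void print(T value);             // pass 1: declarations
#include "types.def"
#undef X

#define X(T) void print(T value) { /*...*/ }  // pass 2: definitions
#include "types.def"
#undef X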

Wouldn't it be better to have a dedicated code generator which allows
just that, instead of the preprocessor?
We use exactly that in our projects, to generate code for multiple
languages from a single declaration. It is a kind of "precompilation
stage", which is only executed if the templates (from which the code is
generated) have changed.

Anyway, I think there should be a solution that gives us both: a smart,
integrated (not separate) preprocessor, and code generation macros whose
effect is either limited to a single file or global.

Andre


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Walter Bright on
Dragan Milenkovic wrote:
> GCC provides attributes "pure" and "const" which indeed allow
> taking advantage.

As I mentioned before, non-standard extensions are not part of C++ or C++0x.

> However, it seems that it doesn't verify
> the correctness of the implementation. I don't know how easily such
> a feature could be added (my guess is not too hard),
> and whether it would be beneficial.

Without support for transitive immutability of the pure function's arguments, I
don't see how the compiler can verify purity.
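
For readers who haven't seen the extension Dragan mentions, it looks
roughly like this (GCC-specific, as noted - not standard C++):

// GCC treats two calls with identical arguments as interchangeable,
// but does not check that the body actually has no side effects.
__attribute__((pure))  int count_matches(const char *s, char c);

// "const" is stricter still: the result may depend only on the
// argument values themselves, not even on pointed-to memory.
__attribute__((const)) int square(int x);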

C++ const falls short in 3 areas:

1. It is only a single level. Const applied to a type does not recursively apply
to any types that are embedded within that type. I.e. it's not transitive.

2. Const is a read-only view of the data; it does not apply to the data
itself - there is no guarantee that there isn't another, non-const,
mutating reference to the same data. (When using const as a type
constructor, not as a storage class.)

3. Const may be legally cast away and the underlying data mutated. (When using
const as a type constructor, not as a storage class.)

Without transitive immutability of the pure function arguments, there's no way
to guarantee that two calls to the pure function with the same arguments will
produce the same result.
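
To illustrate all three points in one place (a minimal sketch):

struct Node { int *data; };

int read(const Node &n)
{
    return *n.data;     // (1) const is shallow: n.data is 'int *const',
}                       //     not 'const int *' - *n.data stays mutable

int main()
{
    int payload = 1;
    Node node = { &payload };
    const Node &view = node;    // (2) only a read-only *view* - 'node'
    node.data[0] = 2;           //     still mutates the data behind it

    const int *p = &payload;
    *const_cast<int *>(p) = 3;  // (3) the cast is legal; writing is even
                                //     well-defined here, since 'payload'
                                //     itself isn't const
    return read(view);
}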

---
Walter Bright
free C, C++, and D programming language compilers (Javascript too!)
http://www.digitalmars.com

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: nmm1 on
In article <i0u2r7$tpa$1(a)news.eternal-september.org>,
Walter Bright <walter(a)digitalmars-nospamm.com> wrote:
>
>> I am afraid not. That is true for only some architectures and
>> implementations, and is one of the great fallacies of the whole
>> IEEE 754 approach. Even if a 'perfect' IEEE 754 implementation
>> were predictable, which it is not required to be.
>
>Can you elucidate where the IEEE 754 spec allows unpredictability?

Mainly in the handling of the signs and values of NaNs: "this
standard does not interpret the sign of a NaN". That wouldn't
matter too much, except that C99 (and hence C++0X and IEEE 754R)
then proceeded to interpret them - despite not changing that!
Also, in IEEE 754R, the rounding mode for decimal formats.
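
A concrete instance of the NaN point, as a small sketch (assuming a
C++0x <cmath> with the C99 additions; the output is
implementation-dependent, which is precisely the problem):

#include <cmath>
#include <cstdio>

int main()
{
    double n = 0.0 / 0.0;               // quiet NaN; IEEE 754 leaves
                                        // its sign uninterpreted...
    std::printf("%d %f\n",              // ...yet C99's signbit/copysign
                (int)std::signbit(n),   // happily read and report it
                std::copysign(1.0, n));
    return 0;
}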

>I understand that the FP may use higher precision than specified by the
>programmer, but what I was seeing was *lower* precision. For example,
>an 80 bit transcendental function is broken if it only returns 64 bits
>of precision.

Not at all. I am extremely surprised that you think that. It would
be fiendishly difficult to do for some of the nastier functions
(think erf, inverf, hypergeometric and worse), and no compiler I have
used for an Algol/Fortran/C/Matlab-like language has ever delivered it.

Recently, some people (who should have known better) have attempted
it, and have uniformly come unstuck once they crossed the boundary
from 'simple' to 'complicated' functions (where the exact location
of the boundary depends on their competence and how much of a
performance hit they are prepared to tolerate).
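
An easy way to watch this, as a sketch (how many mismatches appear
depends entirely on the libm in use; nothing here is guaranteed
either way):

#include <cmath>
#include <cstdio>

int main()
{
    // If sin(double) were correctly rounded, narrowing the long double
    // result would agree with it except in rare near-tie cases.
    int mismatches = 0;
    for (int i = 1; i <= 1000000; ++i) {
        double x = i * 1e-3;
        if (std::sin(x) != (double)std::sin((long double)x))
            ++mismatches;
    }
    std::printf("%d mismatches in 1000000 samples\n", mismatches);
    return 0;
}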

>Other lowered precision sloppiness I've seen came from not implementing
>the guard and sticky bits correctly.

Well, yes. But those aren't enough to implement IEEE 754, anyway.

>Other problems are failure to deal properly with nan, infinity, and
>overflow arguments.
>
>I don't believe such carelessness is allowed by IEEE 754, and even
>if it was, it's still unacceptable in a professional implementation.

Even now, IEEE 754 requires only the basic arithmetic operations,
and recommends only some of the simpler transcendental functions.
Have you ever tried to implement 'perfect' functions for the less
simple functions? Everyone that has, has retired hurt - it's not
practically feasible.

>(Just to see where I'm coming from, I used to do numerical analysis for Boeing
>airplane designs. I cared a lot about getting correct answers. The Fortran
>compilers I used never let me down. 30 years later, C and C++ compilers still
>haven't reached that level, and people wonder why Fortran is still
>preferred for numerical work.)

I have a more academic background, but it overlaps with that very
considerably. There is a fair amount of my code in the NAG library,
for example, though not in this area.

Things have changed. Longer ago than that, a few computers had
unpredictable hardware (the CDC 6600 divide was reported to be, for
example, though I didn't use it myself). But the big differences
between 1980 and now are:

1) Attached processors (including GPUs) and alternate arithmetic
units (e.g. vector units, SSE, AltiVec etc.). These are usually
not perfectly compatible with the original arithmetic units,
usually for very good reasons.

2) The widespread use of dynamic optimisation, where the code or
hardware chooses a path at run-time, based on some heuristics to
optimise performance.

3) Parallelism - ah, parallelism! And therein hangs a tale ....
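
The parallel one is easy to demonstrate: floating-point addition is not
associative, so any change in reduction order (threads, vector lanes,
GPU warps) can change the answer. A minimal sketch:

#include <cstdio>

int main()
{
    double a = 1e16, b = -1e16, c = 1.0;
    double serial  = (a + b) + c;   // left-to-right order: 1.0
    double regroup = a + (b + c);   // a reduction's regrouping: 0.0,
                                    // because c vanishes next to b
    std::printf("%g vs %g\n", serial, regroup);
    return 0;
}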


Regards,
Nick Maclaren.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Walter Bright on
nmm1(a)cam.ac.uk wrote:
> The first point is to decide what purity means - and that's not
> obvious, even at the hardware level! For example, floating-point
> is not strictly pure once you allow IEEE 754 exception flags or
> even (God help us!) fixup by interrupt. Not a C++ problem today,
> but becomes one in C++0X.
>
> Most of the C++ library could be defined as pure or impure, but
> there would be screams from implementors who did things differently.
> And there are a lot of borderline cases in the inherited C library.
> POSIX massively increases the confusion, of course :-(
>
> And then there is the dreaded exception question. Does the licence
> to raise an exception destroy purity? If so, the previous questions
> more-or-less state that nothing non-trivial can be pure. But, if
> not, some extremely careful design is needed to avoid making the
> concept meaningless.

Excellent observations. We faced those problems in D, and here's what we
decided:

1. Exceptions were divided into two categories, recoverable and
non-recoverable. Pure functions can throw both categories, but since
non-recoverable ones are not recoverable (!) it is ok in such a case to
violate purity. If recoverable exceptions are thrown, they must be
thrown every time the same arguments are supplied. (Non-recoverable
exceptions would be things like seg faults and assertion failures.)

2. Pure functions can allocate memory, for the practical reason that if
they couldn't, their usefulness is severely compromised. This has the
consequence of requiring that memory allocation failure inside a pure
function be regarded as a non-recoverable exception.

3. There was a looong thread about what to do about the floating point
exception flags & global modes. None of the ideas about how to fit them
in with purity seemed very practical. The end result was that we just
decided to leave that up to the end user. We felt this was justifiable
because it's very rare to have a program that fiddles with the rounding
modes or reads the FP exception flags.
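
Point 3 is easy to see in C++ terms: the same call can yield different
results once anything touches the rounding mode. A sketch using the C99
<fenv.h> interface (strictly, FENV_ACCESS should be enabled, and support
for it varies by compiler; an optimizer might otherwise fold the
division):

#include <cfenv>
#include <cstdio>

double third(double x) { return x / 3.0; }   // looks pure enough...

int main()
{
    std::fesetround(FE_DOWNWARD);
    double a = third(1.0);
    std::fesetround(FE_UPWARD);
    double b = third(1.0);           // same argument, different result
    std::printf("equal: %d\n", a == b);
    return 0;
}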

4. We've gone through the C standard library headers and tagged the
pure functions (like memcmp) as pure. It's true that the C standard
doesn't guarantee that any of them are pure, but all the ones we know
about are, and we reasoned that only a perverse implementation of them
wouldn't be.

So far, these decisions are holding up well in real life. There are
still some open issues, like: should operator== be required to be pure?

---
Walter Bright
free C, C++, and D programming language compilers (Javascript too!)
http://www.digitalmars.com

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Andre Kaufmann on
Mathias Gaunard wrote:
> On Jul 4, 6:41 pm, Andre Kaufmann <akfmn...(a)t-online.de> wrote:
>
>> In C++0x this could be simulated by using lambda functions:
>>
>> auto Div = [] (int x , int y) { return x / y; };
>> auto Div5 = [&Div](int x) { return Div(x, 5); };
>> int res = Div5(10);
>> printf("%d", res);
>
> Why aren't you using [&] and auto for res? That would be closer to the
> ML code.

Yes, you are right - there was no special intention. It was only meant
to indicate that Div5 here can only return int.

>> More equivalent would be this invalid C++0x code:
>>
>> auto Div = [] (auto x , auto y) { return x / y; };
>> auto Div5 = [&Div](int x) { return Div(x, 5); };
>
> Some people are working on providing this as an extension to C++0x
> lambdas in GCC.

Nice.
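
In the meantime the usual workaround is a hand-written function object,
which is polymorphic in exactly the way the rejected lambda above would
be (a sketch):

// What the invalid "auto-parameter" lambda would amount to:
struct DivFn
{
    template <typename X, typename Y>
    auto operator()(X x, Y y) const -> decltype(x / y) { return x / y; }
};

DivFn Div;                                       // callable with any types
auto Div5 = [&Div](int x) { return Div(x, 5); };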

> I don't really know why they chose to make lambdas monomorphic. This
> is disappointing. Thankfully DSELs don't have that limitation.

Yes, I don't know either.

>
>
>> b) If I understood it correctly, function and type expansion is deferred

> [...]

> Huh? It's exactly the same. You can copy and modify function objects
> just fine in C++.

Function objects, yes. But the C++ compiler no longer has any notion of
the semantics (internals) of the function - it has already generated code.

You can't (that easily) pass function objects to another function or
use pattern matching (e.g. if parameter number 2 is of type int and has
the value 5, then emit that code). It's more like a mixture of C++
templates and delegates, but without their restrictions.

Besides that, it's more compact to write:

Example:

let r f x y z = f x + f y + f z

which in C++0x would be something like:

template <typename F, typename X, typename Y, typename Z>
auto r(F f, X x, Y y, Z z) -> decltype(f(x) + f(y) + f(z))
{
    return f(x) + f(y) + f(z);
}

> [...]
> That can already be done just fine.
> Only problem is that one of the most important compilers, MSVC,
> doesn't follow the standard C++ ABI.

Hm, but this ABI standard (I wasn't aware of it) isn't part of the C++
standard?

I don't think a general ABI standard covering all platforms would be
needed (although this would be nice); a single one per platform would
IMHO be sufficient, since you can't mix libraries from different
platforms anyway.

But some "basic open (C++) ABI standard" should exist for each platform
and be supported by all C++ compilers for that platform.

Andre

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]