From: James Kanze on
Walter Bright wrote:
> James Kanze wrote:
> > Sure you can do complex numbers in a
> > library, in pre-C99 C. But the results will not be anywhere
> > near as convivial as the language support for them in Fortran.
> > One of the design goals of C++, as I understand it, is precisely
> > that you should be able to implement a library solution which is
> > as convivial and as performant as the language support in
> > Fortran. Rather than add a complex type to the language,
> > Stroustrup added operator overloading. The result is that
> > complex can be effectively relegated to the library, at no real
> > cost. (In theory, anyway. Depending on the compiler, the fact
> > that function calls etc. are involved may reduce optimization.)

> If your point is that std::complex is just as good as native complex,

That's not my point, at all. My point is that developing the
language in ways which would allow std::complex to be just as
good as a native type might be more productive than just
implementing it as a native type, and letting it go at that. In
over thirty years of programming, I've never needed a complex
type; I have needed fixed point decimal, IP addresses, and a lot
of other "small" objects. Obviously, adding all of them (and
everything everyone else needs) to the language isn't an option.
So why not move in a way that helps everyone, and not just a
small group?

> consider the following:

> 1) Digital Mars C and D implement complex natively, and complex function
> return values are in the floating point register pair ST1,ST0. I don't
> know of any C++ compiler that does that.

And why not? Wouldn't it be better to develop a compiler which
put any class type consisting of only two doubles in registers,
rather than special case complex? (I know that g++ does put
some simple structures in registers, at least in certain cases.
I don't know if complex falls into those cases, however.)

> It's certainly more efficient.

It's a cheap hack, yes, which allows compiler writers to get
efficiency simply for a benchmark case, while not providing it
in general. Why should complex be more performant than Point2D,
or ColorPixel, or any other small class of that sort? (That's
the Java situation, which is why Java beats C++ when dealing
with double[], but becomes significantly slower as soon as you
change it to Point2D[].)
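
To make the comparison concrete, here is a minimal sketch (the names
are purely illustrative): Point2D below has exactly the representation
of std::complex<double>, two doubles, so any calling convention that
can return complex in a floating point register pair could in
principle do the same for it.

    // Illustrative only: same representation as std::complex<double>.
    struct Point2D
    {
        double x;
        double y;
    };

    // A compiler that returns complex<double> in ST1,ST0 could, in
    // principle, return this in registers as well; nothing about the
    // type itself requires a trip through memory.
    Point2D midpoint(Point2D a, Point2D b)
    {
        Point2D m = { (a.x + b.x) / 2.0, (a.y + b.y) / 2.0 };
        return m;
    }

    int main()
    {
        Point2D a = { 0.0, 0.0 };
        Point2D b = { 2.0, 4.0 };
        Point2D m = midpoint(a, b);
        return (m.x == 1.0 && m.y == 2.0) ? 0 : 1;
    }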

> Native support also means the compiler can easily enregister the complex
> numbers within the function.

I'm not sure I understand. What do you mean by "enregister the
complex numbers within the function"?

At any rate, I know that it is easier for a compiler to optimize
a built in type. Which doesn't mean that it can't do as well
with a user defined type, just that it requires a lot more
sophistication on the part of the compiler. But that's an
argument which affects every type---in my current work, fixed
decimal would be more useful than complex, and in my preceding
job, an IP type. Where do you stop?

> 2) std::complex has no way to produce a complex literal. So, you have to
> write:
> complex<double>(6,7)
> instead of:
> 6+7i // D programming

Syntax is an issue, but isn't the solution developing ways to
provide comfortable syntax for user defined types?
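
For example (a sketch only, assuming a compiler that implements the
user defined literals proposed for C++0x; the _i suffix here is mine,
not part of any standard):

    #include <complex>

    // Sketch only: a user defined literal suffix that lets the library
    // type be written almost like D's 6+7i.
    std::complex<double> operator"" _i(long double im)
    {
        return std::complex<double>(0.0, static_cast<double>(im));
    }

    int main()
    {
        std::complex<double> z = 6.0 + 7.0_i;  // instead of complex<double>(6, 7)
        return (z.real() == 6.0 && z.imag() == 7.0) ? 0 : 1;
    }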

> 3) Error messages involving native types tend to be much more lucid than
> error messages on misuse of library code.

Especially if the type is a template:-).

Again, it's a problem that compiler writers should solve, if
only because library types aren't going to go away. (It is, I
think, a more difficult problem than just getting the two
doubles of a struct into registers.)

> 4) There is much better potential for core native type exploitation of
> mathematical identities and constant folding than there is for library
> types where meaning must be deduced.

I'm not sure about this. The "mathematical identities" of a
complex type are based on the identities of the underlying reals,
along with the operations performed on them (definition of
addition, multiplication, etc.). I would expect that the
compiler could find them in both cases.
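
As a sketch of what I mean (cplx and mul are made-up names, and I'm
using constexpr, also proposed for C++0x, purely to make the folding
visible): the product below reduces entirely to operations on the
underlying reals, which is exactly what a compiler already folds.

    // Made-up illustration, not std::complex: the identities of complex
    // multiplication are just identities on the underlying reals.
    struct cplx { double re; double im; };

    constexpr cplx mul(cplx a, cplx b)
    {
        // (a.re + i*a.im) * (b.re + i*b.im), using i*i == -1
        return cplx{ a.re * b.re - a.im * b.im,
                     a.re * b.im + a.im * b.re };
    }

    // Folded entirely at compile time: (2+3i) * (0+1i) == -3+2i.
    constexpr cplx z = mul(cplx{ 2.0, 3.0 }, cplx{ 0.0, 1.0 });
    static_assert(z.re == -3.0 && z.im == 2.0, "constant folded");

    int main() { return 0; }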

> 5) Lack of a separate imaginary type [...]

I won't argue about the qualities of a particular
implementation choice. My knowledge of numeric processing isn't
sufficient to really be able to judge such things. But I don't
see where this is a problem related to the issue of whether the
type is built-in or not: you could easily define an imaginary
type in the library, and you could just as easily define a
built-in complex without imaginary.
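
For instance, a minimal library-side sketch (the name Imaginary and
its operators are mine, purely illustrative):

    #include <complex>

    // Purely illustrative: an imaginary type defined entirely in a library.
    struct Imaginary
    {
        double v;   // represents v*i
    };

    inline Imaginary operator*(Imaginary a, double s)
    {
        Imaginary r = { a.v * s };
        return r;
    }

    inline std::complex<double> operator+(double re, Imaginary im)
    {
        return std::complex<double>(re, im.v);
    }

    int main()
    {
        Imaginary i = { 1.0 };
        std::complex<double> z = 6.0 + i * 7.0;   // 6 + 7i
        return (z.real() == 6.0 && z.imag() == 7.0) ? 0 : 1;
    }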

> So, why isn't there much of any outcry about these problems with
> std::complex?

Maybe because it's not a real problem. Or maybe just because
not enough people understand the issues. (I seem to recall
reading somewhere that IBM's base 16 floating point caused real
problems as well, but there wasn't much outcry about it,
either.)

> My theory is that few use C++ for serious numerical
> computing, derived from the observation that popular and widely used C++
> compilers have dropped support for extended doubles (80 bit floating
> point) without pushback from the users.

I think it's more than a theory. Numerical computing was, is
and ever shall be done in Fortran. I'd guess that C++ is a
distant number 2 (but well ahead of any of its followers);
people didn't put in the effort to develop (and continue
development of) Blitz++ for nothing.

As for extended doubles, I suspect that the main part of the
reason is a lack of hardware support on the platforms being
used. I know that Sparc doesn't have it, for example. In fact,
the only machines I know of today where it is present are PCs,
and compilers there DO support it.

> I used to do numerical analysis (calculating stress, inertia, dynamic
> response, etc.), and having 80 bits available is a big deal.

It can also be a trap. I'm sure that numerical analysts know how
to deal with it, but I've seen people caught out more than a few
times by the fact that intermediate calculations are done in
long double, and that the exact value of the results depends on
when and where the compiler spills to memory.

> > But... From that point of view, the fact that some things can be
> > effectively relegated to the library, rather than be implemented
> > in the core langage, would be a symptom of an advantage in the
> > language. (Supposing, of course, that th
From: James Kanze on
Walter Bright wrote:
> Bo Persson wrote:
> > The C++ language defines features that can optionally be implemented as a
> > library, or built into the compiler. The standard allows, but does not
> > require, specific compiler support for these features. In my opinion, they
> > are still part of the language.

> Such a design choice means that it combines the worst characteristics of
> a core feature (not modifiable/replaceable by the user) with the worst
> characteristics of a library feature (poor integration with the language).

It means that you can get the performance from the library type
without actually doing the work necessary to optimize library
types:-).

> > The D language requires compiler support for strings, the C++ language
> > allows it. That doesn't mean that there are no strings in C++.

> As far as I know, there is no existing C++ compiler that implements
> std::string in the compiler. Furthermore, since it is required to be
> indistinguishable from a library implementation, it isn't in any useful
> sense part of the core language.

> C++ does have core strings (char[]) and core arrays (T[]), it's just
> that they were abandoned and (unofficially) deprecated rather than fixed.

They are a little bit too broken to be easily fixed.

[...]
> Template improvements don't float everyone's boat. What is high on your
> core language wish list?

Thread support, garbage collection and modules. All of which
affect the basic definition of the language, and can't be done
in a library at all. (By thread support, I mean things like
redefining sequence points so they have a meaning in a
multithreaded environment, and such. I see no problem with the
interface to the various requests being in the library; in fact,
I rather prefer it that way. But parts of the library will
probably need to collaborate with the compiler---which is
nothing new, either, since that's already the case with such
"library" types as type_info.)

> >> But what I think is fruitful to discuss or not only pertains to
> >> what I decide to post - I don't control these discussions. I'm not
> >> a moderator here.

> > It was I who didn't find it fruitful to compare the languages, when you
> > concentrate on a core only comparison. Without the string, vector, and
> > complex classes, C++ admittedly is just half a language.

> Strings and arrays (and to a lesser extent complex) are such fundamental
> types that relegating them to the library means missing out on the
> syntactic and semantic niceties one can get by putting them in the core.
> After all, would you want 'int' to be library only, and turn your back
> on all the goodies a language can offer through direct support of it?

I think you can find arguments for types like string and arrays.
But that's not the point. If you argue that strings in D are
superior to strings in C++, that's one thing. And it may even
be that the reason is that strings in C++ aren't a native type.
But the argument that C++ doesn't have a string type is false.
It does. (It actually has two, which is probably a bigger
problem than any problem due to the fact that std::string is
part of the library, and not part of the core language.)
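
Just to illustrate the two types (and the kind of confusion having
both invites):

    #include <string>

    // Core string vs. library string: comparison means different things.
    bool demo()
    {
        const char* a = "hello";       // core language string (char const*)
        std::string  b = "hello";      // library string

        bool p = (a == "hello");       // compares pointers: unspecified result
        bool q = (b == "hello");       // compares contents: always true
        return p && q;
    }

    int main()
    {
        return demo() ? 0 : 1;
    }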

And if the language offered ways to define int as an efficient
library type, I'd be all for it. It doesn't. In the case of
int, I don't think it can---int generally corresponds to a basic
type in hardware. Strings and arrays are a bit less clear. And
about the only real argument I can think of for making complex a
basic type is performance (or the absence of any good
abstraction mechanisms---in Fortran IV, there really wasn't an
alternative).

--
James Kanze (GABI Software) email:james.kanze(a)gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34



From: Walter Bright on
Andrei Alexandrescu (See Website For Email) wrote:
> I think it would be goofy to limit comparison to core features.
> Otherwise a language with configurable syntax that allows things like
> custom statements, user-defined literals etc. will come last as they
> have a very small core.

I think it would come first, as it has core features far more advanced.

> Comparing how a library implementation fares versus a core language
> approach towards the same feature is more to the point. But let's not
> forget one important detail. The disadvantage of core implementation is
> the absolute dictatorship of the entity that defines the language. I
> view that as a major drawback.

Since the C++ Standard allows the Standard Library to be hardwired into
the compiler, hasn't that happened anyway (even if no actual C++
compiler has done so)?

> The obvious disadvantages of library-side
> implementations (less natural syntax, less opportunities for
> optimization etc.) should be considered in balance with the democracy
> advantage they enjoy. Therefore, I'm personally way more enthused by
> languages that offer their users access to (meta)linguistic tools, rather
> than languages that are closely bound by the view, limits, creativity,
> bias, preferences of their creator.

You seem to be arguing you're enthused by core features that enable
better libraries to be written, rather than the library itself. I would
agree with that.

I don't think anyone here is advocating a core language that somehow
prevents powerful libraries from being written. D's core support for
arrays doesn't in the slightest prevent someone from writing a vector
class any more than C++'s core arrays prevent it. There just isn't much
point to writing a vector class in D, since the core arrays are good enough.

It's like asking why there isn't a C++ std::int library type: because
the core int is good enough, not because you cannot write a library
integer.


From: Bo Persson on
AJ wrote:
> Hi there,
>
> James Kanze wrote:
> <snip>
>> That depends. Library support isn't as good as core support
>> when it isn't as good. The fact that it is library and not core
>> isn't a disadvantage per se---I'd say that almost the opposite
>> is true. But you do have to compare the level of support; there
>> are some things that simply cannot be supported in a library
>> alone.
>
> I think I agree, though I also think we need a clear definition of a
> "library" and of "core-language" support. It is unclear at this
> point whether a library can be "magic" in the sense that it does
> things a regular user couldn't.

It can be magic, in that the library writer can make a deal with the
compiler writer to have some formally undefined behaviour actually be
defined for a specific use. Some offsetof macros use that trick.
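
The traditional form of the trick looks something like this
(my_offsetof is just an illustrative name): formally it dereferences a
null pointer, which is undefined behaviour, but the vendor who ships
both the header and the compiler can promise that it works.

    #include <cstddef>

    // Formally undefined (null pointer "dereference"), but defined in
    // practice by agreement between the library and the compiler writer.
    #define my_offsetof(type, member) \
        ((std::size_t) &(((type*) 0)->member))

    struct S { int a; double b; };

    int main()
    {
        // On such an implementation this matches the real offsetof(S, b).
        return my_offsetof(S, b) == offsetof(S, b) ? 0 : 1;
    }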

>
> It is also unclear what is meant by C++ any more. Is C++ simply the
> language, or does it include the STL? I don't think I would include
> the STL into the definition of the "core" C++ language.

The C++ language is what is included in the standard, all chapters.

>>
[snip]
>> Formally, if it is part of the standard library, the compiler
>> can know about it, and do whatever is necessary. This is often
>> the case for standard library functions (like memcpy) in C. To
>> date, it's not become the case for anything in C++, at least to
>> my knowledge, although the standard certainly allows it. I
>> think, however, that part of the goal is to design things so
>> that the compiler can do the job well, not just for standard
>> components, but for any user defined class.
>
> I think this is what Walter disagrees with. If you're gonna make a
> special case for some part of the "standard" library, then why not
> make that feature part of the "core" language in the first place?
> You get the advantage of nicer syntax and semantics with the
> advantage of clean, intrinsic compiler support without having to
> "cheat" with a "magic" library (which is what he called the worst
> of both worlds).

Requiring compiler support doesn't make a feature clean, it makes it dirty.
I believe that's where James and I disagree with Walter.

If you have core language support that allows std::string to be
efficiently implemented as a library, that means that other similar classes
can be efficiently implemented as well. By someone other than the compiler
writer.

The fact that it can be done as a library only, with no specific magic,
actually shows that the language is stronger, not weaker.

The fact that Walter designs and implements his own language puts him in a
different position than most of us. Perhaps that influences his point of
view? Or maybe his views are the reason for doing it?

I am very happy when I can implement, or extend, some feature without
specific compiler support.


Bo Persson




From: Alf P. Steinbach on
* Walter Bright:
>
> 5) Lack of a separate imaginary type leads to problems such as
> identified by Professor W. Kahan: "A streamline goes astray when the
> complex functions SQRT and LOG are implemented, as is necessary in
> Fortran and in libraries currently distributed with C/C++ compilers,
> in a way that disregards the sign of 0.0 in IEEE 754 arithmetic and
> consequently violates identities like SQRT( CONJ( Z ) ) =
> CONJ( SQRT( Z ) ) and LOG( CONJ( Z ) ) = CONJ( LOG( Z ) ) whenever the
> COMPLEX variable Z takes negative real values. Such anomalies are
> unavoidable if Complex Arithmetic operates on pairs (x, y) instead of
> notional sums x + i*y of real and imaginary variables. The language of
> pairs is incorrect for Complex Arithmetic; it needs the Imaginary
> type."

It seems I've entered a period of posting "sorry, the above is not
meaningful to me" to clc++m.

Anyway, the above isn't meaningful to me.

Let complex(x,y) = x+i*y.

Is the professor's point then that

sqrt( conj( complex(-9, +0) ) ) === sqrt( complex(-9, -0) )

"should" ideally evaluate to

complex(+0, -3)

instead of complex(+0, +3)?

But even if so, how does an Imaginary type help with that, when x+i*y
can always be represented as Complex(x,y) and vice versa?

Furthermore, what is the relevance to library versus built-in?
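
A small test of the identity in question (what it prints depends on
whether the implementation honours the sign of zero, which is
precisely Kahan's complaint):

    #include <complex>
    #include <iostream>

    int main()
    {
        std::complex<double> z(-9.0, +0.0);

        // With IEEE 754 signed zero honoured, both lines print (0,-3)
        // and the identity sqrt(conj(z)) == conj(sqrt(z)) holds.  An
        // implementation that ignores the sign of the zero imaginary
        // part prints (0,3) for the first line, breaking the identity.
        std::cout << std::sqrt(std::conj(z)) << '\n';
        std::cout << std::conj(std::sqrt(z)) << '\n';
        return 0;
    }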

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
