From: David Abrahams
Al <two(a)haik.us> writes:

> 1. Anything sufficiently common and important deserves syntactic and
> semantic privilege.

Sure they "deserve" those privileges. But should they be granted?
Only where there's a really significant benefit to doing so over what
can be provided by a library written directly in the core language.

The whole argument here is about whether it's worth making the extra
effort to provide enough in the core language that more of these
capabilities are available to library authors.

> But other deficiencies are intrinsically linguistic. Starting with the
> fact that a "string" literal is really a pointer to memory that Must Not
> Be Touched, yet it unceremoniously decays to non-const. This seems
> fairly unsafe to me.

Legacy artifact.

> Another issue is that some concatenation is impossible:
>
> const char* b = "b";
> const char* x = "a" + "b"; // error
> const char* y = "a" + 'b'; // corrupt
> const char* z = "a" + b; // error

But that could have been fixed in the core language without making the
mutable string type a builtin. Yes, string literals almost certainly
have to be built in, but I would stop right there.

> People simply /expect/ an array to automatically know its size. A naked
> pointer is plain unreasonable. I've seen this very question asked a
> million times, with the same answer: No, the array does not know its own
> size. What's most ironic is that it would seem to me that arrays
> actually /must/ know their size internally in order for delete[] to work.

Another legacy artifact. Now all you're saying really is that C's
built-in arrays should have been removed from C++ because they're too
weak. Maybe, but that's a completely different argument.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: David Abrahams
"James Kanze" <james.kanze(a)gmail.com> writes:

> I'd even go further and say that I prefer the Java
> solution of consistently doing the wrong thing to that of C++,
> which results in inconsistent and untestable behavior.

I agree that the behavior is often inconsistent. I disagree that it's
untestable in practice. One of my points is that labelling some
behavior undefined can improve testability, because the system can
then do things observably outside the realm of defined behavior. It's
true that it's a crapshoot whether the behavior of most running C++
programs will be observably undefined, but that is technically
speaking an implementation artifact, for efficiency. There's no
reason in principle that a C++ system couldn't be written that
detects way, _way_ more of the errors that lead to undefined
behavior and immediately invokes a debugger. Every pointer
dereference could be fully checked, for example.

> Undefined behavior is bad,

I don't think that's been demonstrated, and I challenge it as an
axiom.

Whether undefined behavior is badly expressed in today's running C++
programs is another matter.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


From: Andrei Alexandrescu (See Website For Email)
David Abrahams wrote:
> "Andrei Alexandrescu (See Website For Email)"
>>>That said, even in a system with no undefined behavior, we have no
>>>idea what the value of x (or anything else in our program) is after a
>>>programming error, so the ability to continue on with the program
>>>executing the instructions you thought you were giving it originally
>>>is not as valuable as it might at first seem.
>>
>>It's not "anything else in our program". It's "anything else in our
>>program that was affected by x"
>
>
> No, not at all. Re-read the scenario; "x" didn't necessarily have
> anything to do with the programming error. From a practical point of
> view, by the time your internal checks/assertions have detected that
> there's been a programming error by inspecting some piece of program
> state (call it Z), you have no idea how far the damage has spread.
> That is, the program's own guarantees are out the window.

I disagree. As I explained before: in Java bugs can be made modular in
ways that are not possible in C++, because you have true memory
isolation between objects.

>>and because (say in Java) races only happen on numbers
>
>
> Meaning that in Java, all writes of "references" (a.k.a. pointers) are
> synchronized?

That is correct. They are guaranteed to be atomic; there is no invalid
reference in Java, ever, period.

>>and because there's no pointer forging, that reduces to "any other
>>number that was affected by x", which considerably reduces the rot
>>in the program and the difficulty in spotting it. I guess all I can
>>say is that I tend to see that guarantee as much more valuable. :o)
>
>
> Than what?

Than "all hell breaks loose starting at this point".


Andrei


From: Al
Hi,

James Kanze wrote:
> Mirek Fidler wrote:
>> Walter Bright wrote:
>>> Nevin :-] Liber wrote:
>>>> So what is it about C++ that is stopping you from applying the
>>>> optimization you use for the intrinsic complex to std::complex (and
>>>> then, in general, to objects that have a similar form to std::complex)?
>
>>> Having constructors causes problems for it. Even though indirection can
>>> often be removed by the optimizer, the decisions about how returns are
>>> done are made in the parsing stage, because it affects how constructors
>>> are set up to get the return type built on the caller's stack. The
>>> optimizer/code generator really only see the C-like result of all that.
>
>> Just to keep things clear, we are here speaking about non-inlined
>> functions, are we?
>
> I'm not sure it makes an important difference, since a compiler
> is free to inline any function it likes (and some compilers do
> inline functions not declared inline, if the profiler says they
> should).
>

I'm kind of curious about the inline keyword. If the compiler is allowed to:

a) Ignore it when it appears (i.e. not inline).
b) Ignore its omission when it doesn't appear (i.e. force inline).

Then what exactly is the point of it? Why not just let the compiler deal
entirely with the efficiency of inlining by itself? Clearly, from the
points above, the programmer has zero control over what actual inlining
goes on, so why pretend that they do via a bogus keyword? It seems like
it just needlessly complicates the function specifier set.

Thanks.
-Al.




From: Andrei Alexandrescu (See Website For Email)
David Abrahams wrote:
> "Andrei Alexandrescu (See Website For Email)"
>>But in a memory-safe program you don't even need Purify to tell you that
>>the program did something wrong. A logging module would suffice, and the
>>proof is in the trace.
>
>
> a. I don't see how the logging module can do that
> b. Anyway, that's often far too late to actually debug the problem.

Am I not getting a joke? Logs are the _best_ way to debug a program.

>>If the argument is that it leads to messier languages and slower
>>programs, I'd agree. But IMHO the arguments brought in this thread
>>didn't carry much weight.
>>
>>So my answer to "Purify can't tell you..." is "Because you don't
>>need Purify".
>
>
> Of course not. That's a cute comeback but misses the point entirely.
> In a GC'd system Purify is the wrong tool because there are no invalid
> pointers. Instead you need a tool that tells you that something has
> been kept alive too long, and nobody's figured out a tool to do that
> because it's effectively impossible for a tool to tell what "too long"
> is.

Ehm. I thought we were talking about arbitrary memory overwrites. Maybe
I did miss the point entirely.

>>>>and that random behaviour including crashes are replaced by
>>>>deterministic, often plausible but wrong results.
>>>
>>>
>>>Of course that can happen in a system with undefined behavior, too.
>>>That said, it looks like a wash to me: incorrect programs have
>>>different characteristics under the two systems but neither one wins
>>>in terms of debuggability.
>>
>>The memory-safe program wins because it never overwrites arbitrary
>>memory; so all objects unaffected by a bug respect their invariants.
>
>
> The same is trivially true of C++: all objects unaffected by a bug
> respect their invariants.

This is wrong. A memory bug in C++ can affect any object. You could say,
yeah, all objects unaffected by a bug have no problem. The problem is,
you can't define the set of objects unaffected by a bug :o). Right back
atcha.


Andrei
