From: Gerhard Menzl on
Bob Bell wrote:

> I should be more specific. I interpret "the function cannot continue"
> to mean that the function shouldn't be allowed to execute a single
> instruction more, not even to throw an exception.

Not even log and assert?

> Then how about a more pragmatic definition? When a precondition fails,
> it almost always indicates a programmer error (a bug). When a bug
> occurs, the last thing you want is to unwind the stack:
>
> -- unwinding the stack destroys state that could help you track
> down the bug
> -- unwinding the stack may do more damage
> -- throwing an exception allows the bug to go unnoticed if a
> caller catches and swallows it (e.g., catch (...))
> -- throwing an exception gives a (possibly indirect) caller a
> chance to respond to the bug; typically, there isn't anything
> reasonable a caller can do to respond to a bug
>
> What you really want is to stop the program in a debugger, generate a
> core dump, or otherwise examine the state of the program at the
> instant the bug was detected. If you throw an exception, you're just
> allowing the program to continue running with a bug.

Perhaps I am too pragmatic and customer/end-user-oriented. While this is
something that is perfectly suitable for the development phase, you
don't want it to happen at the user's site. According to this
definition, every precondition is a potential source of a crash (again
from a user's perspective: users don't distinguish between calls to
assert() or exit() and, say, memory access violations - they just
perceive the program crashing). It seems like an extremely pessimistic
approach to me: pull the emergency brake whenever a condition that
should hold doesn't. There is no such thing as local failure - the bug
is always assumed to be global and catastrophic. Thus, under pressure to
deliver systems that do not "crash", there would be strong motivation to
use preconditions sparingly - which would defeat the purpose of DbC.

> I don't think it is; from what I understand about Eiffel (which is
> little) the aim is to keep the program running if a contract is
> broken. But I could be wrong about that.

That would be the opposite of the when-in-doubt-pull-the-emergency-brake
approach. Considering that making software systems more reliable has been
a major driving force behind Eiffel and DbC, this difference puzzles me.

> You're right, undefined behavior, as defined by the language standard,
> is undetectable once it's occurred. It's clear from your response that
> applying the term "undefined behavior" to preconditions has been
> misleading. In the interest of clarity, I'm going to switch to
> "undefined state". Example:
>
> F() is documented to specify that it is the responsibility of all
> callers to establish condition Y. Now suppose F() is called and Y is
> false. What does this mean? All you know is that some caller failed to
> establish Y. Assuming the contract was valid and reasonable, you have
> detected a bug. (Even if the contract was invalid, you've still
> detected a bug -- only the bug is that F() demands condition Y.)
>
> In practical terms, the program has entered an undefined state -- it's
> doing something you didn't think it could do. Whether it entered the
> undefined state before Y was tested, as a result of testing Y, etc.,
> is not that important, as far as I'm concerned. What's important is
> what you do about it. If you throw an exception, you allow the program
> to continue running. But since it's entered an undefined state, you
> don't know what it will do.

I don't have - and never had - trouble understanding this reasoning.
What I am having doubts about is whether failure to meet Y automatically
means that the entire program is in an undefined state and aborting is
the only sensible reaction - unless, of course, you restrict the
definition of precondition to exactly those cases. From what I have been
able to survey, such a restrictive definition does not seem to be used
universally.

--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail address
with "kapsch" and the top level domain part with "net".

From: Gerhard Menzl on
David Abrahams wrote:

> Once again, I never said that stack corruption was a precondition
> violation. You seem to be looking hard for ways to find that what
> I've said is somehow inconsistent or incoherent. Poking holes in
> arguments I never made seems sorta pointless.
>
> It doesn't sound to me as though you're trying to understand what I'm
> saying; rather, it seems much more as though you simply don't _like_
> what I'm saying. If that's so, I'd like to stop trying to explain
> myself now. If not, I apologize in advance for even asking.

I am sorry if you got that impression. Nothing could be further from the
truth. My motivation behind participating in this newsgroup is to learn
about best practice from more experienced peers and pass on my knowledge
to the less experienced. I am really not in for
I-beat-a-Boost-guru-in-a-discussion games. My apologies if I made it
sound like I were.

Since you don't see any inconsistencies or contradictions where I do,
the problem must be communication, I guess.

> If you are merely suggesting that Microsoft's "Safer C" specification
> uses a different concept of the term "precondition," which allows a
> documented response to violations, you'll get no argument from me on
> that point. I never claimed that everyone in the world has the same
> concept. Clearly you and I don't, and I daresay the fact that
> somebody at Microsoft disagreed with me certainly doesn't prove
> anything about the coherence of my arguments.

Of course not. I am just trying to sort out who means what by
"precondition". As you said:

> A technical term is much more powerful and useful when it
> distinguishes one thing from another.

It is also more powerful and useful when there are not several slightly
different definitions being used in the industry.

By the way, I cited P. J. Plauger because of his role as an experienced
C++ Standard Library implementer, not because of his Microsoft
connection. When two experts who work in closely related fields don't
seem to use a technical term in the same way, how are ordinary
programmers supposed to agree on it?


--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail address
with "kapsch" and the top level domain part with "net".

From: Bob Bell on
Gerhard Menzl wrote:
> Bob Bell wrote:
>
> > I should be more specific. I interpret "the function cannot continue"
> > to mean that the function shouldn't be allowed to execute a single
> > instruction more, not even to throw an exception.
>
> Not even log and assert?

If I said "it's OK to log and assert", would that invalidate my point
or support yours? The point is to make as few assumptions about the
state of the system as possible, which leads to executing as little
code as possible. The problem with throwing is that it assumes that the
entire state of the system is still good, and that any and all code can
still run.
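
As a rough sketch of the kind of check I have in mind (the macro name
here is invented for illustration, not taken from any particular
library), the idea is to log what little we know and then stop on the
spot, without unwinding anything:

#include <cstdio>
#include <cstdlib>

// Invented macro for illustration: report the violated condition and
// stop immediately, leaving the program state intact for a debugger
// or core dump. No stack unwinding, no further code.
#define PRECONDITION(cond) \
    do { \
        if (!(cond)) { \
            std::fprintf(stderr, \
                "precondition failed: %s (%s:%d)\n", \
                #cond, __FILE__, __LINE__); \
            std::abort(); \
        } \
    } while (0)

double divide(double num, double den) {
    PRECONDITION(den != 0.0);   // documented as the caller's responsibility
    return num / den;
}

Whether it aborts, breaks into the debugger, or logs first is
secondary; the point is that nothing beyond the failed check gets to
run.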

> > Then how about a more pragmatic definition? When a precondition fails,
> > it almost always indicates a programmer error (a bug). When a bug
> > occurs, the last thing you want is to unwind the stack:
> >
> > -- unwinding the stack destroys state that could help you track
> > down the bug
> > -- unwinding the stack may do more damage
> > -- throwing an exception allows the bug to go unnoticed if a
> > caller catches and swallows it (e.g., catch (...))
> > -- throwing an exception gives a (possibly indirect) caller a
> > chance to respond to the bug; typically, there isn't anything
> > reasonable a caller can do to respond to a bug
> >
> > What you really want is to stop the program in a debugger, generate a
> > core dump, or otherwise examine the state of the program at the
> > instant the bug was detected. If you throw an exception, you're just
> > allowing the program to continue running with a bug.
>
> Perhaps I am too pragmatic and customer/end-user-oriented.

Too pragmatic for a pragmatic definition? ;-)

> While this is
> something that is perfectly suitable for the development phase, you
> don't want it to happen at the user's site.

If you mean that you want to avoid crashes/ungraceful shutdowns when
end-users use the system, I agree.

> According to this
> definition, every precondition is a potential source of a crash (again
> from a user's perspective: users don't distinguish between calls to
> assert() or exit() and, say, memory access violations - they just
> perceive the program crashing). It seems like an extremely pessimistic
> approach to me: pull the emergency brake whenever a condition that
> should hold doesn't.

Are you saying that it's OK to let the program continue running when
you know there is a bug, but you don't know anything about how
extensive it is? Isn't the right thing to do to fix the bug? Throwing
an exception gets in the way of fixing the bug (see the above list of
problems caused by throwing in response to a bug). How can throwing
possibly be a good idea?

> There is no such thing as local failure

Until you know differently, there isn't. One way to know that a failure
is local is to actually debug it and determine the cause. Another way
to know the failure is local is if the failure is wrapped in some kind
of firewall, like a separate address space. When all of the state of a
program is in a shared address space, a failure in one part of a
program can be caused by something entirely non-local.

> - the bug
> is always assumed to be global and catastrophic.

The alternative is to assume the bug is always local and not
catastrophic. In my experience, making this assumption causes far more
serious problems than assuming a bug is global and catastrophic.

> Thus, under pressure to
> deliver systems that do not "crash", there would be strong motivation to
> use preconditions sparingly - which would defeat the purpose of DbC.

In practice this doesn't happen (at least, in my practice; can't speak
for anyone else). Instead, liberal usage of assertions to trap
precondition violations as bugs leads to finding and fixing a lot of
bugs. Perhaps you should try it before deciding that it doesn't work.
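
Nothing fancy is needed for this; even the standard assert() at the
top of each function, checking whatever the documented contract says,
goes a long way. A trivial sketch (the function and its contract are
made up):

#include <cassert>
#include <cstddef>

// Made-up example: the contract says callers must pass a valid,
// non-empty buffer.
void fill_buffer(char* buf, std::size_t len, char value) {
    assert(buf != 0);   // precondition: buffer must not be null
    assert(len > 0);    // precondition: length must be non-zero
    for (std::size_t i = 0; i != len; ++i)
        buf[i] = value;
}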

> > I don't think it is; from what I understand about Eiffel (which is
> > little) the aim is to keep the program running if a contract is
> > broken. But I could be wrong about that.
>
> That would be the opposite of the when-in-doubt-pull-the-emergency-brake
> approach. Considering that making software systems more reliable has been
> a major driving force behind Eiffel and DbC, this difference puzzles me.

I don't know why it should. I'm not programming with Eiffel, and as far
as I know, neither are you, so why should it matter what "precondition"
means in Eiffel? Lots of terms are used differently by the two camps.
You don't seem to have trouble discussing exceptions, despite the fact
that the term means different things in the two languages.

I don't know much about Eiffel, but I don't see how letting a program
continue to run in an undefined state makes a system more reliable.

> > In practical terms, the program has entered an undefined state -- it's
> > doing something you didn't think it could do. Whether it entered the
> > undefined state before Y was tested, as a result of testing Y, etc.,
> > is not that important, as far as I'm concerned. What's important is
> > what you do about it. If you throw an exception, you allow the program
> > to continue running. But since it's entered an undefined state, you
> > don't know what it will do.
>
> I don't have - and never had - trouble understanding this reasoning.
> What I am having doubts about is whether failure to meet Y automatically
> means that the entire program is in an undefined state

What's the alternative? Saying that the program's state is partially
undefined? Or that some subset of the state is undefined, while the
remainder of the state is well-defined? That kind of fuzzy thinking is
something I don't understand. It often turns out to be wrong, and leads
to missed opportunities to fix bugs.

The point isn't "failure to meet Y automatically means that the entire
program is in an undefined state". The point is that any other
assumption is just less safe; it's safer to assume that, until proven
otherwise, the state of the entire program is in an undefined state.
Not coincidentally, the effort required to "prove otherwise" usually
involves gaining enough information to fix the bug.

One other pragmatic reason to stop the program and fix the bug the
moment it is detected is that you never know when the bug will recur
and give you another opportunity.

> and aborting is
> the only sensible reaction - unless, of course, you restrict the
> definition of precondition to exactly those cases. From what I have been
> able to survey, such a restrictive definition does not seem to be used
> universally.

Universally across languages? Or just within the C++ community? I'm
more interested in the term as it is used in the C++ community, and
there I see that there hasn't been much of a consensus. However, the
opinions of several experts I respect match my own intuitive
understanding, so I'm satisfied. Precondition failures indicate bugs,
and the right thing to do is fix the bug; just about the worst thing
you could do is throw an exception, since throwing an exception is
tantamount to ignoring the bug.
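
To make the "swallowed bug" point from the list above concrete, here
is a contrived sketch (the functions are invented for the example) of
how a thrown precondition violation can simply disappear:

#include <iostream>
#include <stdexcept>

void store_record(int index) {
    if (index < 0)                                  // precondition violated by the caller
        throw std::logic_error("negative index");   // report the bug by throwing
    // ... write the record ...
}

void save_all() {
    try {
        store_record(-1);                           // buggy call
    } catch (...) {
        // well-meaning catch-all "for robustness": the bug vanishes here
    }
}

int main() {
    save_all();
    std::cout << "finished normally\n";             // the program keeps running with the bug
}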

Bob


From: David Abrahams on
Gerhard Menzl <gerhard.menzl(a)hotmail.com> writes:

> David Abrahams wrote:
>
>> It doesn't sound to me as though you're trying to understand what I'm
>> saying; rather, it seems much more as though you simply don't _like_
>> what I'm saying. If that's so, I'd like to stop trying to explain
>> myself now. If not, I apologize in advance for even asking.
>
> I am sorry if you got that impression. Nothing could be further from the
> truth. My motivation behind participating in this newsgroup is to learn
> about best practice from more experienced peers and pass on my knowledge
> to the less experienced. I am really not in for
> I-beat-a-Boost-guru-in-a-discussion games. My apologies if I made it
> sound like I were.

Okay, thanks for clearing that up; I won't mention it again.

> As you said:
>
>> A technical term is much more powerful and useful when it
>> distinguishes one thing from another.
>
> It is also more powerful and useful when there are not several
> slightly different definitions being used in the industry.
>
> By the way, I cited P. J. Plauger because of his role as an experienced
> C++ Standard Library implementer, not because of his Microsoft
> connection. When two experts who work in closely related fields don't
> seem to use a technical term in the same way, how are ordinary
> programmers supposed to agree on it?

The best answer I can give you is:

1. I don't think that writing is based so much on Bill (P.J.)'s
_definition_ of "precondition" but on the _usage_ that was adopted
by the authors of the "safer C" specification. IMO, Bill was just
describing the system using the terminology its authors had
already established.

2. I haven't seen a _definition_ of precondition that's both
non-vacuous and consistent with that usage. I think that's an
indication that people using "precondition" that way haven't really
given rigorous thought to what it means when they use the word.

People use words casually and loosely all the time without giving a
second thought to what they mean. I've been suggesting that your
software will benefit from picking a rigorous definition for
"precondition" that clearly distinguishes preconditions from other
things.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

From: Nicola Musatti on

Bob Bell wrote:
> Gerhard Menzl wrote:
> > Bob Bell wrote:
> >
> > > I should be more specific. I interpret "the function cannot continue"
> > > to mean that the function shouldn't be allowed to execute a single
> > > instruction more, not even to throw an exception.
> >
> > Not even log and assert?
>
> If I said "it's OK to log and assert", would that invalidate my point
> or support yours? The point is to make as few assumptions about the
> state of the system as possible, which leads to executing as little
> code as possible. The problem with throwing is that it assumes that the
> entire state of the system is still good, and that any and all code can
> still run.

Excuse me, but don't you risk assuming too much in the other direction?
Consider, for example, a function like the following:

double safeSqrt(double arg) {
    if ( arg < 0 )
        ;   // what goes here?
    return std::sqrt(arg);
}

Wouldn't it be a bit extreme to assume the world has ended just because
this function was passed a negative number?

On the other hand, I agree that if the world has actually ended, we
wouldn't want to add to the damage. So what can we do about it? You are
probably right that exception handling is not to be trusted, and it
seems to me that the least drastic action you can take is to return a
conventional value.
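
For the safeSqrt example above, that could be as simple as returning a
quiet NaN as the conventional "no result" value (just a sketch of what
I mean):

#include <cmath>
#include <limits>

double safeSqrt(double arg) {
    if (arg < 0)
        return std::numeric_limits<double>::quiet_NaN();   // conventional out-of-domain result
    return std::sqrt(arg);
}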

Should we reach the conclusion that returning error codes is better
than exceptions for writing really robust code? ;-)

Cheers,
Nicola Musatti

