From: Gerhard Menzl on
David Abrahams wrote:

>>Do I, or better: does the original function have to?
>
> If it is going to make guarantees about robustness in the face of
> these conditions, then yes. That's where we started this discussion:
> you wanted to provide documented guarantees of behavior in the face of
> violated preconditions. If the original function is going to guess
> about the brokenness of the context in which it was called, then no,
> it doesn't need to know. However, as I've repeatedly said, the called
> function usually has little or no knowledge about the context in which
> it is called, so it's very difficult to make an educated guess.
>
>>Precondition specifications aren't normally about complex global
>>states, they demand that certain local conditions of limited scope be
>>met.
>
> Exactly. That's what makes the detecting code particularly unsuited
> to making educated guesses about severity.

Full agreement here. That's why I have reservations about terminating
the program (in shipping code) whenever a precondition is violated: it
is based on an educated guess of maximum severity.

>>They don't say "the stack is uncorrupted", they say: "this
>>particular vector must be sorted". If it isn't, it's usually because
>>the author of the client forgot to sort the vector, or called
>>another function after the sort that push_backs an element.
>
> On what do you base that assessment? Do you have data, or is it just
> intuition?

On my personal experience regarding the relative frequency of possible
causes:

Hardware failure: not that I remember
Compiler error: once or twice
Stack overflow: hardly ever
Buffer overrun: rare
Simple thinko: most of the time

I do not claim general validity. Your mileage may vary.
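
For concreteness, the kind of local precondition I have in mind looks
roughly like this (hypothetical function, plain assert):

    #include <algorithm>
    #include <cassert>
    #include <functional>
    #include <utility>
    #include <vector>

    // Hypothetical function: the caller must pass a vector sorted in
    // ascending order.  The check states a purely local condition; when
    // it fires, the usual cause is a thinko in the calling code.
    int count_occurrences(const std::vector<int>& sorted_values, int key)
    {
        // Precondition: sorted_values is sorted ascending.
        assert(std::adjacent_find(sorted_values.begin(), sorted_values.end(),
                                  std::greater<int>()) == sorted_values.end());

        std::pair<std::vector<int>::const_iterator,
                  std::vector<int>::const_iterator> range =
            std::equal_range(sorted_values.begin(), sorted_values.end(), key);
        return static_cast<int>(range.second - range.first);
    }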

> True, but what makes you think that sortedness is not part of some
> much larger global invariant? The sortedness of the vector might be
> fundamental to the operation of most of the program.

I have a hard time imagining a non-trivial and well-designed program for
which this is the case, but let's just assume it. If the sortedness is
part of a global invariant, it must be specified as such, not just as a
local precondition. In other words, the contract surveillance mechanism
would trigger whatever action it is supposed to trigger everywhere the
invariant matters.

>>But in that case, starting a separate recovery mechanism is acting
>>"on a wing and a prayer" as well.
>
> Exactly. At that point, everything is a shot in the dark. I bet on
> the recovery mechanism avoiding total catastrophe because it's the
> best I can do.

I understood that. It's still a bet - or an educated guess.

> Programmers in general seldom make the distinction carefully between
> violated preconditions and conditions that are known to be
> recoverable. You yourself seem to have had that problem. The pull to
> throw from a violated precondition, and hope that code somewhere else
can deal with the problem, is quite strong. We're loath to admit
> that the program is broken, so we bet that something can be done about
> it elsewhere. Once you start trying to unwind-and-continue from a
> violated precondition, you -- or someone on your team -- will
> typically begin to add code for defensive programming (which has a
> high development cost and often, doesn't actually work), because you
> now have to make the program "work" even in a broken state.

Maybe we have different views of what throwing an exception means. When
I throw an exception, I do not hope that some code up there fixes the
mess and carries on as before - unless the catching code can clearly
tell by the nature of the exception and the high-level program state
that it is safe to do so. In general, it is easier to make such
decisions upwards from the point of detection.

> When I say, "it's almost always a mistake to throw from a violated
> precondition," I am addressing that problem: I want people to think
> much more carefully about the consequences and be much more
> conservative about the idea of doing so. If you determine, for
> whatever reason, that your application is better off betting that
> things "aren't broken too badly," you should still design the program
> as though preconditions are never actually violated. In other words,
> the program should not count on these exceptions and expect to respond
> to them in useful ways. Anything else leads to a mess.

My bet is not that things "aren't broken too badly". My bet also isn't
that the program has degenerated to a pulp of random bits. My bet is
that there is just enough stability left to perform a graceful exit.
Again: I am referring to shipping code here.

I also do not advocate high-level code to *count* on these exceptions; I
merely want it to handle them accordingly. In an interactive program,
this could mean informing the user and asking him to exit.

> No, that's not the issue. In general, catch blocks like the one above
> do the right thing. They're not swallowing errors. In this case the
> precondition violation gets treated like a recoverable error simply
> because its exception passes through a translation layer. At such a
> language or subsystem boundary, even if the programmer can anticipate
> the "violated precondition" exception type thrown by low level code,
> what would you have him do? What sort of response would you deem
> "trustworthy?"

A precondition violation in a separate subsystem normally means that the
subsystem has a bug. The more encapsulated and self-sufficient a
component is, the more inappropriate I would consider it for such a
component to terminate the entire program. I have worked with components
that do. They caused havoc.

Ideally, upon catching a violated precondition exception, the subsystem
would enter a global error state that would cause all further calls to
fail instantly. The external caller would be notified of the partial
"shutdown" and could decide whether it is possible to continue without
the subsystem (e.g. work offline), or initiate a shutdown itself.
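
A rough sketch of the idea, with invented names (the exception type and
the fail-fast latch are assumptions for illustration, not an existing
interface):

    #include <stdexcept>

    // Hypothetical exception type thrown by the subsystem's internal checks.
    struct precondition_violation : std::logic_error
    {
        explicit precondition_violation(const char* msg)
            : std::logic_error(msg) {}
    };

    // Sketch of a subsystem facade that latches a failed state: after the
    // first detected contract breach, every further call fails instantly
    // instead of touching possibly inconsistent internal data.
    class Subsystem
    {
    public:
        Subsystem() : failed_(false) {}

        bool do_work()
        {
            if (failed_) return false;          // fail fast, no side effects
            try
            {
                do_work_impl();
                return true;
            }
            catch (const precondition_violation&)
            {
                failed_ = true;                 // partial "shutdown"
                return false;                   // caller decides: work offline
            }                                   // or shut down itself
        }

        bool has_failed() const { return failed_; }

    private:
        void do_work_impl() { /* real work; may detect a breach and throw */ }
        bool failed_;
    };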

> Sure. The way to avoid that is to eliminate bugs from the program,
> not to try to hobble along anyway when a bug is detected. The
> customer will usually be just as angry when the program doesn't behave
> as expected because some internal assumption is violated. And you
> know what? They are right!

Again, fully agreed. This is not about either terminating or continuing as
if nothing had happened. It's about gracefully handling situations that
should never happen.

> Anyway browsers are an unusual case, since they're primarily viewers.
> If they don't contain some carefully-crafted bit of the user's work
> that will be lost on a crash, it's probably okay... hmm, but wait:
> there's webmail. So I could lose this whole message unless it gets
> written to disk and I get the chance to start over. Oh, and there are
> all kinds of scam sites that masquerade as secure and trustworthy,
> which might be easily mistaken for legit if the user begins to
> overlook garbage on the screen. As a matter of fact, security is a
> big deal for web browsers. They're used in all kinds of critical
> applications, including banking. Oh, and there are plugins, which
> could be malicious and might be the cause of the violation detected.
> No, I don't think we want the browser pressing ahead when it detects a
> bug, not at all. I think this is a perfect case-in-point.

Again, we are not talking about pressing ahead. There's a trade-off.
Terminating minimizes the chance of a catastrophic chain reaction and
risks destroying user data for harmless reasons. Giving the user a
chance to bail out in a controlled way minimizes the chance of data loss
and risks executing malicious code. Your bet is always to assume the
worst case. Personally, I prefer my browser to be wary and suspicious,
but not paranoid and suicidal, especially because I am a much better
judge of whether a website might be forged or a plugin might be from a
dubious source.

>>That leaves the question what to do in shipping code. Standard C
>>practice (in the sense of what most platforms seem to do - I don't
>>know what the C Standard says) is to let the preprocessor suppress
>>the test and boldly stomp into what may be disastrous. Incidentally,
>>the Eiffel practice (thanks for the link, by the way) seems to be
>>similar: assertion monitoring is usually turned off in shipping
>>code.
>
> That can be a good policy, because programmers concerned about
> efficiency will never be deterred from writing assertions on the basis
> of slowing down the shipping program.

Now you've lost me. You go to great lengths to convince me that
pressing ahead is potentially disastrous, and then you call turning off
assertions in shipping mode a good policy? In other words, carefully
backing away (throwing an exception) is more dangerous than plunging
headlong into the abyss (ignoring the violation and executing the normal
case)? I'm sorry, but this doesn't make sense to me.
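
For reference, this is the standard C mechanism I mean: with NDEBUG
defined, assert expands to nothing, so the shipping build simply walks
past the violated precondition:

    // The same translation unit in two build modes:
    //
    //   debug build:     g++ example.cpp           -> a violated precondition
    //                                                 aborts via assert
    //   shipping build:  g++ -DNDEBUG example.cpp  -> the check expands to
    //                                                 nothing; execution walks
    //                                                 straight past the breach
    #include <cassert>

    int divide(int numerator, int divisor)
    {
        assert(divisor != 0);            // precondition check
        return numerator / divisor;      // undefined behaviour if violated
    }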

>>This is in stark contrast to what has been frequently advocated in
>>this newsgroup. The standard argument is: disabling assertions in
>>shipping code is like leaving the life jackets ashore when you set
>>sail.
>
> One or two vocal advocates of that approach do not a consensus make.
> I've never agreed with it.

We're not talking about a suggestion from a few passing amateurs. I
don't remember exactly who it was, but they were trusted experts. James
Kanze may have been one of them. What is more, I cannot remember having
seen objections posted.

>>I find this metaphor rather misleading - assertions are more like
>>self-destruction devices than life jackets - yet the argument cannot
>>be dismissed so easily. What is your position on this? Should
>>assertions in shipping code do nothing, do the same as in
>>non-shipping code, or do something else?
>
> The correct policy depends on the application and your degree of
> confidence in the code.

Which has been my position from the beginning. :-)

--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail address
with "kapsch" and the top level domain part with "net".

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: dave_abrahams on
> David Abrahams wrote:
>
> > Exactly. That's what makes the detecting code particularly unsuited
> > to making educated guesses about severity.
>
> Full agreement here. That's why I have reservations about terminating
> the program (in shipping code) whenever a precondition is violated: it
> is based on an educated guess of maximum severity.

Having read your whole message, I don't understand what policy you're
advocating, nor which viewpoint you're arguing with. Later in this
message you clearly state that you're not for "pressing ahead," but
instead are interested in a "graceful exit." I have recommended a
graceful exit to you in this thread, so surely you know I do not
support immediate termination in most cases. As far as I can tell, a
graceful exit means "take emergency measures and then terminate."
However, you also objected to my suggestion on the grounds that your
customers would see it as a crash. It only makes sense to terminate
immediately when no emergency measures are necessary or possible and
your customers can tolerate the rare core dump (or whatever) without
much human-readable explanation.
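
Concretely, by "graceful exit" I mean something along these lines; the
measures themselves are placeholders for whatever the application needs:

    #include <cstdio>
    #include <cstdlib>

    // "Take emergency measures and then terminate", at the outermost level.
    void emergency_exit(const char* reason)
    {
        std::fprintf(stderr, "internal error: %s\n", reason);
        // save_recovery_files();   // hypothetical: preserve the user's work
        // flush_diagnostics();     // hypothetical: keep the post-mortem trail
        std::abort();               // terminate; no attempt to carry on
    }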

> >>They don't say "the stack is uncorrupted", they say: "this
> >>particular vector must be sorted". If it isn't, it's usually because
> >>the author of the client forgot to sort the vector, or called
> >>another function after the sort that push_backs an element.
>
> > On what do you base that assessment? Do you have data, or is it just
> > intuition?
>
> On my personal experience regarding the relative frequency of possible
> causes:
>
> Hardware failure: not that I remember
> Compiler error: once or twice
> Stack overflow: hardly ever
> Buffer overrun: rare
> Simple thinko: most of the time

Sure. The question is whether the thinko is local to where the
violation is detected. That's a different issue altogether.

> > Programmers in general seldom make the distinction carefully between
> > violated preconditions and conditions that are known to be
> > recoverable. You yourself seem to have had that problem. The pull to
> > throw from a violated precondition, and hope that code somewhere else
> > can deal with the problem, is quite strong. We're loath to admit
> > that the program is broken, so we bet that something can be done about
> > it elsewhere. Once you start trying to unwind-and-continue from a
> > violated precondition, you -- or someone on your team -- will
> > typically begin to add code for defensive programming (which has a
> > high development cost and often, doesn't actually work), because you
> > now have to make the program "work" even in a broken state.
>
> Maybe we have different views of what throwing an exception means. When
> I throw an exception, I do not hope that some code up there fixes the
> mess and carries on as before -

If it's not going to carry on in some sense, why bother throwing? Why
not just quit?

> unless the catching code can clearly tell by the nature of the
> exception and the high-level program state that it is safe to do
> so. In general, it is easier to make such decisions upwards from the
> point of detection.

It's been my experience that in general, an application that can
recover from one kind of exception can recover from almost any
exception -- as long as exceptions aren't used to indicate that the
program is in a broken state from which no recovery is possible, of
course.

> > When I say, "it's almost always a mistake to throw from a violated
> > precondition," I am addressing that problem: I want people to think
> > much more carefully about the consequences and be much more
> > conservative about the idea of doing so. If you determine, for
> > whatever reason, that your application is better off betting that
> > things "aren't broken too badly," you should still design the program
> > as though preconditions are never actually violated. In other words,
> > the program should not count on these exceptions and expect to respond
> > to them in useful ways. Anything else leads to a mess.
>
> My bet is not that things "aren't broken too badly". My bet also isn't
> that the program has degenerated to a pulp of random bits. My bet is
> that there is just enough stability left to perform a graceful exit.
> Again: I am referring to shipping code here.

The question remains how throwing an exception is going to help you
achieve a graceful exit. And if you really mean "just enough
stability," then throwing an exception is the wrong choice because it
will almost always do more than is necessary for a graceful exit. So,
really, your bet is that there's enough stability left to run all the
catch blocks and destructors of automatic objects between the point of
the throw and the point where emergency measures are taken, and then
perform a graceful exit. There's nothing inherently wrong with making
that bet, but you ought to be honest with yourself about what you're
counting on.
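
To spell out what that bet relies on, here is a contrived sketch: every
destructor and catch block on the path from the throw to the handler runs
on the possibly broken state.

    #include <cstdio>
    #include <exception>
    #include <stdexcept>

    struct Transaction
    {
        ~Transaction() { std::puts("rolling back"); }  // runs during unwinding
    };

    void low_level()
    {
        throw std::logic_error("precondition violated");
    }

    void mid_level()
    {
        Transaction t;   // this destructor executes on the possibly broken state
        low_level();
    }

    int main()
    {
        try { mid_level(); }
        catch (const std::exception& e)
        {
            std::puts(e.what());   // emergency measures would go here
        }
    }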

> I also do not advocate high-level code to *count* on these exceptions; I
> merely want it to handle them accordingly.

The problem is that it's an extra discipline for the programmer to
carefully distinguish recoverable from unrecoverable exceptions. I'm
saying that any benefits you get from unwinding are usually not worth
the cost of maintaining that distinction, especially in a project with
developers who may not have considered all the issues that deeply.
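
For clarity, the discipline I mean looks roughly like this: keep the two
kinds of condition in visibly separate hierarchies (a sketch, not a
prescription):

    #include <stdexcept>

    // Recoverable: the environment misbehaved; the program itself is fine.
    struct connection_lost : std::runtime_error
    {
        connection_lost() : std::runtime_error("connection lost") {}
    };

    // Not recoverable: a bug was detected.  Catching this somewhere far away
    // and carrying on "as before" is exactly the trap described above.
    struct broken_invariant : std::logic_error
    {
        explicit broken_invariant(const char* msg) : std::logic_error(msg) {}
    };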

> In an interactive program,
> this could mean informing the user and asking him to exit.

I guess we have different user interface philosophies. I am not one
of those people who thinks every interface should be dumb, but one of
the things I expect from my programs is that they'll do their best to
protect me from really bad things. If I have open documents and I
save over the old ones before exiting, I could end up with nothing
useful. If I happen to hit return as the error message is coming up
and miss the dialog box, I don't want to miss the chance to save all
my documents.

> > No, that's not the issue. In general, catch blocks like the one above
> > do the right thing. They're not swallowing errors. In this case the
> > precondition violation gets treated like a recoverable error simply
> > because its exception passes through a translation layer. At such a
> > language or subsystem boundary, even if the programmer can anticipate
> > the "violated precondition" exception type thrown by low level code,
> > what would you have him do? What sort of response would you deem
> > "trustworthy?"
>
> A precondition violation in a separate subsystem normally means that the
> subsystem has a bug. The more encapsulated and self-sufficient a
> component is, the more inappropriate I would consider it for such a
> component to terminate the entire program.

Agreed.

> I have worked with components that do. They caused havoc.
>
> Ideally, upon catching a violated precondition exception, the subsystem
> would enter a global error state that would cause all further calls to
> fail instantly. The external caller would be notified of the partial
> "shutdown" and could decide whether it is possible to continue without
> the subsystem (e.g. work offline), or initiate a shutdown itself.

Not bad.

> > Anyway browsers are an unusual case, since they're primarily viewers.
> > If they don't contain some carefully-crafted bit of the user's work
> > that will be lost on a crash, it's probably okay... hmm, but wait:
> > there's webmail. So I could lose this whole message unless it gets
> > written to disk and I get the chance to start over. Oh, and there are
> > all kinds of scam sites that masquerade as secure and trustworthy,
> > which might be easily mistaken for legit if the user begins to
> > overlook garbage on the screen. As a matter of fact, security is a
> > big deal for web browsers. They're used in all kinds of critical
> > applications, including banking. Oh, and there are plugins, which
> > could be malicious and might be the cause of the violation detected.
> > No, I don't think we want the browser pressing ahead when it detects a
> > bug, not at all. I think this is a perfect case-in-point.
>
> Again, we are not talking about pressing ahead.

So, in the case of the browser, what _are_ we talking about? What do
you think should happen?

> There's a trade-off. Terminating minimizes the chance of a
> catastrophic chain reaction and risks destroying user data for
> harmless reasons. Giving the user a chance to bail out in a
> controlled way minimizes the chance of data loss and risks executing
> malicious code. Your bet is always to assume the worst
> case.

Yes. I would assume the worst and *force* the user to bail out in a
way that saves as much relevant data as possible.

> Personally, I prefer my browser to be wary and suspicious, but not
> paranoid and suicidal, especially because I am a much better judge
> of whether a website might be forged or a plugin might be from a
> dubious source.

That's true until the screen display begins to show you stuff that
doesn't correspond to what's actually going on at the website you're
visiting because of some broken invariant. Can't you imagine what
happens when the little "security lock icon" becomes permanently stuck
in the "on" state?

> >>That leaves the question what to do in shipping code. Standard C
> >>practice (in the sense of what most platforms seem to do - I don't
> >>know what the C Standard says) is to let the preprocessor suppress
> >>the test and boldly stomp into what may be disastrous. Incidentally,
> >>the Eiffel practice (thanks for the link, by the way) seems to be
> >>similar: assertion monitoring is usually turned off in shipping
> >>code.
>
> > That can be a good policy, because programmers concerned about
> > efficiency will never be deterred from writing assertions on the basis
> > of slowing down the shipping program.
>
> Now you've lost me. You go to great lengths to convince me that
> pressing ahead is potentially disastrous

No, I was trying to convince you that unwinding usually does more harm
than good when a precondition violation is detected.

> and then you call turning off assertions in shipping mode a good
> policy?

Depends on the application, your degree of confidence in your unit
tests, etc. Certainly the STL would have little value for many
applications if implementations were all forced to support the checks
used in many debugging implementations even in shipping mode.

> In other words, carefully backing away (throwing an exception) is
> more dangerous than plunging headlong into the abyss (ignoring the
> violation and executing the normal case)? I'm sorry, but this
> doesn't make sense to me.

Me neither. Fortunately, I never said that :)

> >>This is in stark contrast to what has been frequently advocated in
> >>this newsgroup. The standard argument is: disabling assertions in
> >>shipping code is like leaving the life jackets ashore when you set
> >>sail.
>
> > One or two vocal advocates of that approach do not a consensus make.
> > I've never agreed with it.
>
> We're not talking about a suggestion from a few passing amateurs. I
> don't remember exactly who it was, but they were trusted experts. James
> Kanze may have been one of them.

Yes, he was. James and I have had a few big disagreements in the past.

> What is more, I cannot remember having seen objections posted.

You may find it hard to believe, but I don't find it necessary to
argue with every assertion I disagree with. :)

> >>I find this metaphor rather misleading - assertions are more like
> >>self-destruction devices than life jackets - yet the argument cannot
> >>be dismissed so easily. What is your position on this? Should
> >>assertions in shipping code do nothing, do the same as in
> >>non-shipping code, or do something else?
>
> > The correct policy depends on the application and your degree of
> > confidence in the code.
>
> Which has been my position from the beginning. :-)

Maybe there's nothing left to say about all this, then.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Gerhard Menzl on
dave_abrahams wrote:

> Having read your whole message, I don't understand what policy you're
> advocating, nor which viewpoint you're arguing with. Later in this
> message you clearly state that you're not for "pressing ahead," but
> instead are interested in a "graceful exit." I have recommended a
> graceful exit to you in this thread, so surely you know I do not
> support immediate termination in most cases. As far as I can tell, a
> graceful exit means "take emergency measures and then terminate."

I advocate a policy that is tailored to the type of application and its
sensitivity to security issues. In the case of interactive applications,
I advocate a policy that - ideally - makes users feel they are still in
charge of the situation and doesn't dumb them down by pretending the
program is always smarter. What this means in detail is probably
off-topic here.

I understand that you consider the C++ exception mechanism largely
unsuitable for fulfilling these goals. Although your arguments have not
convinced me that this is the case most of the time, I am now more
aware of the dangers. Thanks for broadening my horizons.

> It's been my experience that in general, an application that can
> recover from one kind of exception can recover from almost any
> exception -- as long as exceptions aren't used to indicate that the
> program is in a broken state from which no recovery is possible, of
> course.

It depends on what you mean by "recover". An application that can easily
handle database-related exceptions and carry on may have more trouble
recovering from std::bad_alloc. Hm, this reminds me of std::bad_cast -
wouldn't you agree that this exception type usually signals a
programming error?
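
A minimal illustration of what I mean (contrived types): dynamic_cast to
a reference throws std::bad_cast when the assumption about the dynamic
type is wrong, i.e. when there is a bug:

    #include <typeinfo>    // std::bad_cast

    struct Shape  { virtual ~Shape() {} };
    struct Circle : Shape {};
    struct Square : Shape {};

    void resize_circle(Shape& s)
    {
        // The author assumed s always refers to a Circle.  If it is actually
        // a Square, dynamic_cast to a reference throws std::bad_cast -- it
        // reports a broken assumption, not a failure of the environment.
        Circle& c = dynamic_cast<Circle&>(s);
        (void)c;   // ... resize ...
    }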

> The problem is that it's an extra discipline for the programmer to
> carefully distinguish recoverable from unrecoverable exceptions. I'm
> saying that any benefits you get from unwinding are usually not worth
> the cost of maintaining that distinction, especially in a project with
> developers who may not have considered all the issues that deeply.

There is also extra discipline required of the programmer to maintain a
separate emergency cleanup mechanism and to distinguish carefully between
resources which need to be released even in case of a contract breach and
those that don't.

> I guess we have different user interface philosophies. I am not one
> of those people who thinks every interface should be dumb, but one of
> the things I expect from my programs is that they'll do their best to
> protect me from really bad things.

That may be the case. I don't think user interfaces should be dumb, but
I am more concerned about user interfaces that make users look dumb. But
I am straying into the off-topic zone again.

>>Personally, I prefer my browser to be wary and suspicious, but not
>>paranoid and suicidal, especially because I am a much better judge
>>of whether a website might be forged or a plugin might be from a
>>dubious source.
>
> That's true until the screen display begins to show you stuff that
> doesn't correspond to what's actually going on at the website you're
> visiting because of some broken invariant. Can't you imagine what
> happens when the little "security lock icon" becomes permanently stuck
> in the "on" state?

I can contrive lots of freak accidents caused by code that throws an
exception upon detecting a contract breach, just as I can contrive freak
accidents caused by code that doesn't throw and shuts down instead. All
I am saying is that there is a balance, and that there are odds, and that
I have doubts about the odds being as clear as you seem to think they are.

>>Now you've lost me. You go to great lengths to convince me that
>>pressing ahead is potentially disastrous
>
> No, I was trying to convince you that unwinding usually does more harm
> than good when a precondition violation is detected.
>
>>and then you call turning off assertions in shipping mode a good
>>policy?
>
> Depends on the application, your degree of confidence in your unit
> tests, etc. Certainly the STL would have little value for many
> applications if implementations were all forced to support the checks
> used in many debugging implementations even in shipping mode.

That's a key point, so I must insist. You claim that throwing is almost
always a bad choice because there is *a chance* that code executed
during unwinding is rendered broken. Yet turning off assertions may be
okay, although in case of a contract breach this would cause code to be
executed which is *known* to be broken. To me, this is a glaring
contradiction. In the awkward case of a programming error slipping
through your tightly knit mesh of unit tests, the odds of avoiding
further damage are surely better for throwing an exception than they are
for continuing normally, notwithstanding the fact that they may be even
better for aborting.
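
To make the three options explicit: imagine the contract check routed
through a project-wide handler, so that "ignore", "throw", and "abort"
become a deliberate shipping-mode choice rather than a side effect of the
preprocessor (a hypothetical sketch, names invented):

    #include <cstdlib>
    #include <stdexcept>

    // Hypothetical project-wide policy, selected at build or start-up time.
    enum ViolationPolicy { ignore_violation, throw_violation, abort_violation };

    const ViolationPolicy shipping_policy = throw_violation;

    inline void contract_check(bool ok, const char* message)
    {
        if (ok) return;
        switch (shipping_policy)
        {
        case ignore_violation:               // "press ahead"
            return;
        case throw_violation:                // "carefully back away"
            throw std::logic_error(message);
        case abort_violation:                // terminate on the spot
        default:
            std::abort();
        }
    }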

>>We're not talking about a suggestion from a few passing amateurs. I
>>don't remember exactly who it was, but they were trusted experts.
>>James Kanze may have been one of them.
>
> Yes he was. James and I have had a few big disagreements in the past.

It's a pity he hasn't taken the bait yet. I would be interested in his
views on this. Maybe his endless-thread-filter is on.

>>What is more, I cannot remember having seen objections posted.
>
> You may find it hard to believe, but I don't find it necessary to
> argue with every assertion I disagree with. :)

I didn't mean to say you do. But when strong opinions posted here
repeatedly by long-term participants remain unchallenged, this is often
a hint (although, of course, no proof) that something is established
best practice. Hence my surprise.


--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail address
with "kapsch" and the top level domain part with "net".


[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
