From: Peter Dimov on
Nicola Musatti wrote:
> Maxim Yegorushkin wrote:
> [...]
>> Why would one want a graceful exit when code is broken, rather than
>> dying as loud as possible leaving a core dump with all state
>> preserved, rather than unwound? std::abort() is a good tool for that.
>
> You and Peter seem to assume that there can be no knowledge about how
> and where the code is broken.

Not really.

Either

(a) you go the "correct program" way and use assertions to verify that your
expectations match the observed behavior of the program, or

(b) you go the "resilient program" way and use exceptions in an attempt to
recover from certain situations that may be caused by bugs.

(a) implies that whenever an assert fails, the program no longer behaves as
expected, so everything you do from this point on is based on _hope_ that
things aren't as bad.

(b) implies that whenever stack unwinding might occur, you must assume that
the conditions that you would've ordinarily tested with an assert do not
hold.

Most people do neither. They write incorrect programs and don't care about
the fact that every stack unwinding must assume a broken program. It's all
wishful thinking. We can't make our programs correct, so why even bother?
Just throw an exception.



[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Alf P. Steinbach on
* Peter Dimov:
> > >
> > > No recovery is possible after a failed assert.
>
> [The above] means that performing stack unwinding after a failed
> assert is usually a bad idea.

I didn't think of that interpretation, but OK.

The interpretation, or rather, what you _meant_ to say in the first
place, is an opinion, which makes it more difficult to discuss.

After a failed assert it's known that something, which could be anything
(e.g. full corruption of memory), is wrong. Attempting to execute even
one teeny tiny little instruction might do unimaginable damage. Yet you
think it's all right to not only terminate the process but also to log
things, which involves file handling, as long as one doesn't do a stack
rewind up from the point of the failed assert. This leads me to suspect
that you're confusing a failed assert with a corrupted stack, or that
you think that a failure to clean up 100% might be somehow devastating.
Anyway, an explanation of your opinion would be great, and this time,
please write what you mean, not something entirely different.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


From: David Abrahams on
"Peter Dimov" <pdimov(a)gmail.com> writes:

> Nicola Musatti wrote:
>> Maxim Yegorushkin wrote:
>> [...]
>>> Why would one want a graceful exit when code is broken, rather than
>>> dying as loud as possible leaving a core dump with all state
>>> preserved, rather than unwound? std::abort() is a good tool for that.
>>
>> You and Peter seem to assume that there can be no knowledge about how
>> and where the code is broken.
>
> Not really.
>
> Either
>
> (a) you go the "correct program" way and use assertions to verify that your
> expectations match the observed behavior of the program, or
>
> (b) you go the "resilient program" way and use exceptions in an attempt to
> recover from certain situations that may be caused by bugs.
>
> (a) implies that whenever an assert fails, the program no longer behaves as
> expected, so everything you do from this point on is based on _hope_ that
> things aren't as bad.
>
> (b) implies that whenever stack unwinding might occur, you must assume that
> the conditions that you would've ordinarily tested with an assert do not
> hold.

And while it is possible to do (b) in a principled way, it's much more
difficult than (a), because once you unwind and return to "normal"
code with the usual assumptions about program integrity broken, you
have to either:

1. Test every bit of data obsessively to make sure it's still
reasonable, or

2. Come up with a principled way to decide which kinds of
brokenness you're going to look for and try to circumvent, and
which invariants you're going to assume still hold.

In practice, I think doing a complete job of (1) is really impossible,
so you effectively have to do (2). Note also that once you unwind to
"normal" code, information about the particular integrity check that
failed tends to get lost: all the different throw points unwind into
the same instruction stream, so there really is a vast jungle of
potential problems to consider.

Programming with the basic assumption that the program state might be
corrupt is very difficult, and tends to work against the advantages of
exceptions, cluttering the "normal" flow of control with integrity
tests and attempts to work around the problems. And your program gets
bigger, harder to test and to maintain; if your work is correct, these
tests and workarounds will never be executed at all.

> Most people do neither. They write incorrect programs and don't care
> about the fact that every stack unwinding must assume a broken
> program.

I assume you mean that's the assumption you must make when you throw
in response to failed preconditions.

> It's all wishful thinking. We can't make our programs correct, so
> why even bother? Just throw an exception.

...and make it someone else's problem. Code higher up the call stack
might know how to deal with it, right? ;-)

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


From: Alf P. Steinbach on
* David Abrahams:
> "Peter Dimov" <pdimov(a)gmail.com> writes:
>
> > Nicola Musatti wrote:
> >> Maxim Yegorushkin wrote:
> >> [...]
> >>> Why would one want a graceful exit when code is broken, rather than
> >>> dying as loud as possible leaving a core dump with all state
> >>> preserved, rather than unwound? std::abort() is a good tool for that.
> >>
> >> You and Peter seem to assume that there can be no knowledge about how
> >> and where the code is broken.
> >
> > Not really.
> >
> > Either
> >
> > (a) you go the "correct program" way and use assertions to verify that your
> > expectations match the observed behavior of the program, or
> >
> > (b) you go the "resilient program" way and use exceptions in an attempt to
> > recover from certain situations that may be caused by bugs.

[Here responding to Peter Dimov's statement:]

Those are extremes, so the "either" is not very meaningful.

AFAIK the techniques of mathematical proof of program correctness are in
general not used in the industry. One reason is simply that the proofs
(and attendant machinery) tend to be more complex than the programs. Apart
from the work involved, that means a possibly higher chance of errors, for
example from over-generalization being employed as a valid proof technique.

When an assertion fails you have proof that the program isn't correct and,
due to the way we use asserts, an indication that the process should
terminate. So whether (a) or (b) has been employed (and I agree that if one
had to choose between the extremes, (a) would be a good choice) is no
longer relevant.


> > (a) implies that whenever an assert fails, the program no longer behaves as
> > expected, so everything you do from this point on is based on _hope_ that
> > things aren't as bad.

That is literally correct, but first of all, "the program" is an over-
generalization, because you usually know something much more specific than
that, and secondly, there are degrees of hope, including informed hope.

If you try to execute a halt instruction you're hoping the instruction
space is not corrupted, and further that the OS's handling of illegal
instructions (if any) still works. And I can hear you thinking: those
bad-case scenarios are totally implausible, and even the scenarios
leading someone to try a halt instruction are so implausible that no one
actually does that. But those scenarios are included in "the program" no
longer behaving as expected; that's what that over-generalization and
absolute -- incorrectly applied -- mathematical logic means.

If you try to terminate the process using a call to something, you're
hoping that this isn't a full stack you're up against, and likewise for
evaluation of any expression whatsoever. Whatever you do, you're doing a
gut-feeling potential-cost/clear-benefit analysis, and this should in my
opinion be a pragmatic decision, a business decision. It should not be a
decision based on absolute black-and-white thinking where every small
K.O. is equated to a nuclear attack because in both cases you're down.

As an example, the program might run out of handles to GUI objects. In
old Windows that meant that what earlier was very nice graphical
displays suddenly started showing as e.g. white, blank areas. If this is
detected (as it should be) then there's generally nothing that can be done
within this process, so the process should terminate, and that implies
detection by something akin to a C++ 'assert'. A normal exception won't
do, because it might be picked up by some general exception handler. On
the other hand, you'd like that program to clean up. E.g., if it's your
ATM example, you'd like it to eject the user's card before terminating.

And, you'd like it to log and/or report this likely bug, e.g. sending a mail.

And, you don't want to compromise your design by making everything global
just so a common pre-termination handler can do the job.


> > (b) implies that whenever stack unwinding might occur, you must assume that
> > the conditions that you would've ordinarily tested with an assert do not
> > hold.

(b) implies that whenever anything might occur, you must assume that anything
can be screwed up. ;-)


> And while it is possible to do (b) in a principled way, it's much more
> difficult than (a), because once you unwind and return to "normal"
> code with the usual assumptions about program integrity broken, you
> have to either:
>
> 1. Test every bit of data obsessively to make sure it's still
> reasonable, or
>
> 2. Come up with a principled way to decide which kinds of
> brokenness you're going to look for and try to circumvent, and
> which invariants you're going to assume still hold.
>
> In practice, I think doing a complete job of (1) is really impossible,
> so you effectively have to do (2).

[Here responding to David Abraham's statement:]

I think your points (1) and (2) summarize approach (b) well, and show
that it's not a technique one would choose if one had a choice.

But as mentioned above, it's an extreme, although in some other
languages (e.g. Java) you get null-pointer exceptions & the like.


> Note also that once you unwind to
> "normal" code, information about the particular integrity check that
> failed tends to get lost: all the different throw points unwind into
> the same instruction stream, so there really is a vast jungle of
> potential problems to consider.

I agree, for the C++ exceptions we do have.

If we did have some kind of "hard" exception supported by the language,
or even just standardized and supported by convention, then the vast jungle
of potential problems that stands in the way of further normal execution
wouldn't matter so much: catching that hard exception at some uppermost
control level, you know that the process has to terminate, not continue
with normal execution (which was the problem), and you know what actions
you've designated for that case (also known by throwers, or at least known
to be irrelevant to them), so that's what the code has to attempt to do.


> [snip]
> ...and make it someone else's problem. Code higher up the call stack
> might know how to deal with it, right? ;-)

In the case of a "hard" exception it's not "might", it's a certainty.



From: kanze on
Alf P. Steinbach wrote:
> * Peter Dimov:

> > > > No recovery is possible after a failed assert.

> > [The above] means that performing stack unwinding after a
> > failed assert is usually a bad idea.

> I didn't think of that interpretation, but OK.

> The interpretation, or rather, what you _meant_ to say in the
> first place, is an opinion, which makes it more difficult to
> discuss.

It's always difficult to discuss a sentence with "usually".
What percentage is "usually"?

My work is almost exclusively on large-scale servers, usually on
critical systems. In that field, it is always a mistake to do
anything more than necessary when a program invariant fails; you
back out as quickly as possible, and let the watchdog processes
clean up and restart.

At the client level, I agree that the question is less clear,
although the idea of executing tons of destructors when the
program invariants don't hold sort of scares me even there. As
does the idea that some important information might not be
displayed because of the error. For a game, on the other hand,
it's no big deal, and in many cases, of course, you can recover
enough to continue, or at least save the game so it can be
restarted.

> After a failed assert it's known that something, which could
> be anything (e.g. full corruption of memory), is wrong.
> Attempting to execute even one teeny tiny little instruction
> might do unimaginable damage.

Well, a no-op is probably safe:-).

> Yet you think it's all right to not only terminate the process
> but also to log things, which involves file handling, as long
> as one doesn't do a stack rewind up from the point of the
> failed assert. This leads me to suspect that you're confusing
> a failed assert with a corrupted stack, or that you think that
> a failure to clean up 100% might be somehow devastating.

I think the idea is that basically, you don't know what stack
unwinding may do, or try to do, because it depends on the global
program state. It's not local, and you have no control over
it. Most of the time, it's probably acceptable to do some
formatting (which, admittedly, may overwrite critical memory if
some pointers are corrupted, but you're not going to do anything
with the memory afterwards), and try to output the results
(which does entail a real risk -- if the file descriptor is
corrupted, you may end up overwriting something you shouldn't).
The point is, if you try to do this from the abort routine,
you know at least exactly what you are trying to do, and can
estimate the risk. Whereas stack unwinding leads you into the
unknown, and you can't estimate the risk.

And of course, there are cases where even the risk of trying to
log the data is unacceptable. You core, the watchdog process
picks up the return status (which under Unix tells you that the
process was terminated by an unhandled signal, and has generated
a core dump), and generates the relevant log entries.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


