From: Bob Bell on
David Abrahams wrote:
> "Peter Dimov" <pdimov(a)gmail.com> writes:
> > David Abrahams wrote:
> >> "Peter Dimov" <pdimov(a)gmail.com> writes:
> >>
> >> > Either
> >> >
> >> > (a) you go the "correct program" way and use assertions to verify that your
> >> > expectations match the observed behavior of the program, or
> >> >
> >> > (b) you go the "resilient program" way and use exceptions in an attempt to
> >> > recover from certain situations that may be caused by bugs.

[snip]

> > It's possible to do (b) when you know that the stack unwinding will
> > completely destroy the potentially corrupted state, and it seems
> > possible - in theory - to write programs this way.
>
> <snip example that copies program state, modifies, and swaps>
>
> What you've just done -- implicitly -- is to decide which kinds of
> brokenness you're going to look for and try to circumvent, and which
> invariants you're going to assume still hold. For example, your
> strategy assumes that whatever broke invariants in the copy of your
> document didn't also stomp on the memory in the original document.
> Part of what your strategy does is to increase the likelihood that
> your assumptions will be correct, but if you're going to go down the
> (b)(2) road in a principled way, you have to recognize where the
> limits of your program's resilience are.

And recognize that where those limits are exceeded, you're back to (a)
anyway.
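
For concreteness, the copy/modify/swap strategy under discussion might
look roughly like this (a minimal sketch; the Document type, its
invariant check, and the edit operation are all hypothetical):

    #include <cstddef>
    #include <stdexcept>
    #include <string>
    #include <utility>

    // Hypothetical document type with a checkable invariant: the cached
    // length must always match the text it describes.
    struct Document
    {
        std::string text;
        std::size_t cached_length = 0;

        bool invariant_holds() const { return cached_length == text.size(); }

        void append(const std::string& s)  // the edit that might misbehave
        {
            text += s;
            cached_length = text.size();   // a bug here breaks the invariant
        }
    };

    // Strategy (b): perform the edit on a copy, verify, then swap.
    void append_resiliently(Document& doc, const std::string& s)
    {
        Document copy(doc);                // work on a copy of the state
        copy.append(s);

        if (!copy.invariant_holds())       // brokenness detected in the copy
            throw std::runtime_error("append left the document inconsistent");

        using std::swap;
        swap(doc, copy);                   // commit only state believed good
    }

As pointed out above, this only guards against bugs confined to the
copy; it quietly assumes that whatever went wrong did not also stomp
on the original.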

Bob



From: Gerhard Menzl on
kanze(a)gabi-soft.fr wrote:

> My experience (for the most part, in systems which are more or
> less critical in some way, and under Unix) is that the operating
> system will clean up most of the mess anyway, and that any
> attempts should be carefully targeted, to minimize the risk.
> Throwing an exception means walking back the stack, which in
> turn means executing a lot of unnecessary and potentially
> dangerous destructors. I don't think that the risk is that
> great, typically, but it is very, very difficult, if not
> impossible, to really evaluate. For example, I usually have
> transaction objects on the stack. Calling the destructor
> without having called commit should normally provoke a roll
> back. But if I'm unsure of the global invariants of the
> process, it's a risk I'd rather not take; maybe the destructor
> will misinterpret some data, and cause a commit, although the
> transaction didn't finish correctly. Whereas if I abort, the
> connection to the data base is broken (by the OS), and the data
> base automatically does its roll back in this case. Why take
> the risk (admittedly very small), when a solution with zero risk
> exists?
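
For readers unfamiliar with the idiom, the transaction-on-the-stack
object described above is roughly a scope guard whose destructor rolls
back unless commit() was called first (a minimal sketch; the Database
interface here is hypothetical):

    #include <iostream>

    // Hypothetical database handle; a real interface would differ.
    struct Database
    {
        void execute(const char* sql)  { std::cout << "exec: " << sql << '\n'; }
        void commit_transaction()      { std::cout << "commit\n"; }
        void rollback_transaction()    { std::cout << "rollback\n"; }
    };

    // RAII transaction: destruction without commit() provokes a rollback.
    class Transaction
    {
    public:
        explicit Transaction(Database& db) : db_(db), committed_(false) {}

        void commit()
        {
            db_.commit_transaction();
            committed_ = true;
        }

        ~Transaction()
        {
            if (!committed_)
                db_.rollback_transaction();  // runs during stack unwinding
        }

    private:
        Database& db_;
        bool committed_;
    };

    void do_work(Database& db)
    {
        Transaction t(db);
        db.execute("UPDATE ...");  // work inside the transaction
        t.commit();                // skipped if an exception propagates,
                                   // so the destructor rolls back
    }

That destructor is exactly the code the poster would rather not run
while unwinding past state he no longer trusts; abort() skips it and
leaves the rollback to the database server when the connection drops.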

I think the problem with this discussion is that no-one seems to agree
about what we mean by global invariants and what kinds of programs we
are talking about. When flight control software encounters a negative
altitude value, it had better shut down (and, hopefully, let the backup
system take over). On the other hand, a word processor that aborts and
destroys tons of unsaved work just because the spellchecker has met a
violated invariant is just unacceptable.

It is generally agreed that modularity, loose coupling, and
encapsulation are cornerstones of good software design. Provided these
principles are adhered to, I wonder whether global invariants (or
preconditions) that require immediate shutdown when violated are really
as common as this discussion seems to suggest they are.

In my experience, the distinction is rarely that clear-cut, at least not
in interactive, user-centric systems. For example, right now I am
working on the front-end of a multi-user telecommunication system that
needs a central database for some, but not all operations. A corrupted
database table would certainly constitute violated preconditions, yet a
shutdown in such a case would be out of the question. Our customer
insists - justifiably - that operations which do not rely on database
transactions, such as emergency calls, continue to function even if the
database connection is completely broken.


--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail address
with "kapsch" and the top level domain part with "net".


From: Alf P. Steinbach on
* David Abrahams:
> alfps(a)start.no (Alf P. Steinbach) writes:
>
> > * David Abrahams:
> >> alfps(a)start.no (Alf P. Steinbach) writes:
> >>
> >> > * Peter Dimov:
> >> >> > >
> >> >> > > No recovery is possible after a failed assert.
> >> >>
> >> >> [The above] means that performing stack unwinding after a failed
> >> >> assert is usually a bad idea.
> >> >
> >> > I didn't think of that interpretation, but OK.
> >> >
> >> > The interpretation, or rather, what you _meant_ to say in the first
> >> > place,
> >>
> >> AFAICT that was a *conclusion* based on what Peter had said before.
> >
> > "It's impossible to do stack unwinding, therefore it's usually a bad
> > idea to do stack unwinding." I didn't think of that. It's, uh...
>
> You clipped everything but the first sentence of Peter's paragraph,

It so happens that I agree with the literal interpretation of the rest
(although not with the sense it imparts). I.e. there was nothing to
discuss there with Peter, and no need to quote. For completeness, here's
what I clipped and agree literally with, emphasis added:

A failed assert means that we no longer _know_ what's going on. [Right]
Generally logging and reporting should be done at the earliest
opportunity [right again, although what can be logged/reported at
that early moment, and of what use it can be, is very restricted]; if
you attempt to "recover" you may be terminated and no longer be able
to log or report [right, and that holds for anything you do].


> which makes what he's saying look like a simpleminded tautology,

I don't think Peter is simpleminded, quite the opposite, and anyway,
that discussion is off-topic and not one I'd like to participate in.


> and now you're ridiculing it. Nice.

If showing that a statement is incorrect, by quoting the parts it
refers to, is ridicule, then I ridiculed your statement. However,
quoting is normally not considered ridicule. You're off-topic
both regarding Peter's alleged intellectual capacity and my
alleged choice of rhetorical tools.

Cheers,

- Alf

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


From: Peter Dimov on
Gerhard Menzl wrote:
>
> I think the problem with this discussion is that no-one seems to agree
> about what we mean by global invariants and what kinds of programs we
> are talking about. When flight control software encounters a negative
> altitude value, it had better shut down (and, hopefully, let the backup
> system take over). On the other hand, a word processor that aborts and
> destroys tons of unsaved work just because the spellchecker has met a
> violated invariant is just unacceptable.

The two options aren't "abort and destroy hours of work" and "throw an
exception". The two options are "throw an exception" and "don't throw
an exception".

In particular, nothing prevents the failed assertion handler from attempting
an emergency save, using a different file name (to not clobber the
"last known good" save), a different file format (if the native format
consists of a dump of the data structures and will likely produce an
unreadable file), and a different, extra-paranoid code path. _Then_
abort.
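
A minimal sketch of what such a handler might look like (the Document
type, the file name, and the handler's signature are all hypothetical;
the point is only the shape: log, paranoid save, then abort, with no
unwinding in between):

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    // Hypothetical application state, assumed reachable from the handler.
    struct Document
    {
        const char* text;
        std::size_t length;
    };

    Document* g_current_document = nullptr;

    // Extra-paranoid emergency save: plain C I/O, a separate file name so
    // the last known good save is never clobbered, no clever code paths.
    void emergency_save(const Document* doc)
    {
        if (doc == nullptr || doc->text == nullptr)
            return;                          // nothing we can safely write

        std::FILE* f = std::fopen("recovered.emergency.txt", "wb");
        if (f == nullptr)
            return;

        std::fwrite(doc->text, 1, doc->length, f);
        std::fclose(f);
    }

    // Installed in place of the default assertion behaviour.
    void on_failed_assertion(const char* expr, const char* file, int line)
    {
        std::fprintf(stderr, "assertion '%s' failed at %s:%d\n",
                     expr, file, line);      // log at the first opportunity

        emergency_save(g_current_document);  // best effort; may itself fail

        std::abort();                        // then terminate: no unwinding
    }

Whether the handler can still do even this much depends on what is
broken, which is why it logs first and keeps the save path as dumb as
possible.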

> It is generally agreed that modularity, loose coupling, and
> encapsulation are cornerstones of good software design. Provided these
> principles are adhered to, I wonder whether global invariants (or
> preconditions) that require immediate shutdown when violated are really
> as common as this discussion seems to suggest they are.

Yes, I tried to give an example of that in the other post.
Unfortunately, two important global invariants are "the heap is not
corrupted" and "there are no dangling pointers that are causing
damage", and a violation of those is usually manifested as a violation
of another (possibly local) invariant.

> In my experience, the distinction is rarely that clear-cut, at least not
> in interactive, user-centric systems. For example, right now I am
> working on the front-end of a multi-user telecommunication system that
> needs a central database for some, but not all operations. A corrupted
> database table would certainly constitute violated preconditions, yet a
> shutdown in such a case would be out of the question. Our customer
> insists - justifiably - that operations which do not rely on database
> transactions, such as emergency calls, continue to function even if the
> database connection is completely broken.

A broken database connection, a corrupted table, and a corrupted data
file are not logic errors. The number of logic errors in a program is
constant and does not depend on external factors. (Unless the program
itself changes, i.e. you consider the information in the database
"code", a part of the program.)



From: David Abrahams on
Gerhard Menzl <gerhard.menzl(a)hotmail.com> writes:

> I think the problem with this discussion is that no-one seems to agree
> about what we mean by global invariants and what kinds of programs we
> are talking about.

No, that's not the problem, as shown by what you write here:

> When flight control software encounters a negative altitude value,
> it had better shut down (and, hopefully, let the backup system take
> over). On the other hand, a word processor that aborts and destroys
> tons of unsaved work just because the spellchecker has met a
> violated invariant is just unacceptable.

You seem to assume that aborting is the only alternative to unwinding
when a violated invariant is detected. In an interactive application
like a word processor it's usually possible to recover the state of
the document and offer the user an opportunity to save when a violated
invariant is detected. All of that can be done without any
unwinding.

> It is generally agreed that modularity, loose coupling, and
> encapsulation are cornerstones of good software design. Provided
> these principles are adhered to, I wonder whether global
> invariants (or preconditions) that require immediate shutdown when
> violated are really as common as this discussion seems to suggest
> they are.

a. Nobody's suggesting "immediate" shutdown.

b. I'm not saying that the conditions *require* immediate shutdown.
I'm saying that if you try to continue, making judgements about what
things you can rely on at that point can become very difficult, and
that we don't have a well-developed discipline for doing so. I'm also
saying that dealing with the possibility of broken invariants tends to
complicate and obfuscate regular code, usually to no benefit.

> In my experience, the distinction is rarely that clear-cut, at least
> not in interactive, user-centric systems. For example, right now I
> am working on the front-end of a multi-user telecommunication system
> that needs a central database for some, but not all operations. A
> corrupted database table would certainly constitute violated
> preconditions, yet a shutdown in such a case would be out of the
> question. Our customer insists - justifiably - that operations which
> do not rely on database transactions, such as emergency calls,
> continue to function even if the database connection is completely
> broken.

In that case, a corrupted database table is by definition *not* a
broken precondition. IIUC, you are expected to write software that is
guaranteed to work whether the database table is corrupt or not, and
you seem to accept that challenge. Great! It's similar to writing
software that is robust in the face of invalid user input. If the
user's input is invalid, well, there's some functionality they can't
get to until the input is corrected. Nobody I know would consider
valid user input a precondition in that case.

In your application, calling database table integrity a precondition
is only going to confuse things and make your code more complicated.
Once you understand that database integrity is not a precondition, it
becomes very clear that you need to check for corruption in certain
places and make sure that you do something sensible if you detect it.
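
In code, the distinction might be sketched like this (hypothetical
names throughout; what matters is which failure is checked and handled
and which is asserted):

    #include <cassert>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Hypothetical record read from the central database.
    struct Record
    {
        std::string name;
        int         type;   // valid types are 0..3 in this sketch
    };

    // Corrupt external data is an expected runtime condition: check it
    // and report it, don't assert on it.
    Record parse_record(const std::vector<char>& raw)
    {
        if (raw.size() < 8)
            throw std::runtime_error("corrupt record: too short");
        int type = raw[0];
        if (type < 0 || type > 3)
            throw std::runtime_error("corrupt record: unknown type");
        return Record{ std::string(raw.begin() + 4, raw.end()), type };
    }

    // A genuine precondition: callers must pass an already-validated
    // record. A violation here is a logic error in *this* program.
    std::string describe(const Record& r)
    {
        assert(r.type >= 0 && r.type <= 3);  // broken only by a bug
        static const char* const names[] = { "line", "trunk",
                                             "operator", "emergency" };
        return r.name + " (" + names[r.type] + ")";
    }

The check belongs where the external data enters the program; past
that point the assert documents a promise between pieces of your own
code.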

An application that continues in the face of broken preconditions is
-- by definition -- going on "a wing and a prayer." It's fine to do
so, as long as you know there are no guarantees at that point. My
favorite example of an appropriate place to hope for the best is a
lighting controller for a rock concert, where it might be better to
keep the lights flashing somehow than to have the stage go dark.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

