From: Claude on
On Jun 23, 9:22 am, Warren <ve3...(a)gmail.com> wrote:

Exceptions are not the best way to process errors. (i.e., not just a
SPARK topic).

> The downside of exceptions though, is that it requires
> extensive testing to provoke them (or to prove they
> don't occur).  So in a life-critical application, there
> may be the requirement that it not throw its hands in
> the air and give up (per exception). On the other hand,
> proceeding incorrectly may be equally disastrous.
>
> So my point is that there are two valid approaches and
> that "one size does not fit all".
>
> Warren

Warren, you've got quite a point: proceeding incorrectly may be
disastrous.
But not equally, because when the exception handler doesn't proceed
correctly and gets another exception, what's going to happen next?
Getting a third exception in a row, and so on ...
Processing anything within an exception handler is not recommended.
(That can even require extra procedures to correct state/data which
may have gone out of context/visibility.)

Safety-critical systems won't like any exceptions...
That's one of SPARK's advantages: it can demonstrate the absence of
run-time errors.
But operational hazards are another story (worse: the semantic
responses tend to be generic!)
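
A rough sketch of what I mean (the names here are made up, not from
any real system):

with Ada.Text_IO;

procedure Cascade_Demo is

   Sensor_Fault : exception;

   function Read_Sensor return Integer is
   begin
      raise Sensor_Fault;  --  the primary reading fails
      return 0;
   end Read_Sensor;

   function Read_Backup return Integer is
   begin
      raise Sensor_Fault;  --  ... and so does the backup
      return 0;
   end Read_Backup;

   Value : Integer := 0;

begin
   begin
      Value := Read_Sensor;
   exception
      when Sensor_Fault =>
         --  Real processing inside the handler: if this call fails
         --  too, the new exception leaves the handler and the
         --  original context is already gone.
         Value := Read_Backup;
   end;
   Ada.Text_IO.Put_Line ("Value =" & Integer'Image (Value));
exception
   when Sensor_Fault =>
      Ada.Text_IO.Put_Line ("second failure escaped the first handler");
end Cascade_Demo;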

Claude Defour
From: Warren on
Claude expounded in news:d90f60dd-b74f-4eff-b9d8-803ebb64c9d2
@z8g2000yqz.googlegroups.com:

> On Jun 23, 9:22 am, Warren <ve3...(a)gmail.com> wrote:
> Exceptions are not the best way to process errors. (i.e., not just a
> SPARK topic).

Well, I still think this is debatable (for run-of-the-mill
code). I haven't seen conclusive arguments for either side.

>> The downside of exceptions though, is that it requires
>> extensive testing to provoke them (or to prove they
>> don't occur).  So in a life-critical application, there
>> may be the requirement that it not throw its hands in
>> the air and give up (per exception). On the other hand,
>> proceeding incorrectly may be equally disastrous.
>
> Warren, you've got quite a point: proceeding incorrectly may be
> disastrous.
> But not equally, because when the exception handler doesn't proceed
> correctly and gets another exception, what's going to happen next?

I could counter that the non-exception code can continue
on blithely as if nothing went wrong. What then?

At some point, you just have to accept some level of code
responsibility (as a practical matter). With either
approach, you cannot entirely avoid disasters.
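
For instance, here's a little sketch (made-up names) of a status code
being quietly ignored:

with Ada.Text_IO;

procedure Ignored_Status_Demo is

   type Status is (Ok, Device_Error);

   procedure Read_Temperature (Value : out Float; Result : out Status) is
   begin
      --  Pretend the device failed; report it only through the code.
      Value  := 0.0;
      Result := Device_Error;
   end Read_Temperature;

   Temp : Float;
   Code : Status;

begin
   Read_Temperature (Temp, Code);
   --  Nothing forces us to look at Code, so the bogus 0.0 flows on
   --  into later computation exactly as if nothing went wrong.
   Ada.Text_IO.Put_Line ("Acting on temperature" & Float'Image (Temp));
end Ignored_Status_Demo;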

> Safety-critical systems won't like any exceptions...
> That's one of SPARK's advantages: it can demonstrate the absence of
> run-time errors.
> But operational hazards are another story (worse: the semantic
> responses tend to be generic!)
>
> Claude Defour

I can see some definite advantage in "proving" that
a system works under all expected cases. But the
responsibility then just shifts to the proof effort,
making sure that you've covered all possible real-life
conditions. The Ariane disaster comes to mind. ;-)

But you're right in that if you can prove that all the
bases are covered, you are ahead of the game. You don't
have to worry that some unanticipated exception (and
possible mishandling) may occur.

So I suppose that I do agree with you, provided you
are rigorous in your proofs. But for run-of-the-mill
stuff (that I work on), where that kind of testing is
not done, exceptions are in my mind "good enough",
and perhaps even preferred.

Warren
From: Stephen Leake on
Warren <ve3wwg(a)gmail.com> writes:

> So I suppose that I do agree with you, provided you
> are rigorous in your proofs. But for run-of-the-mill
> stuff (that I work on), where that kind of testing is
> not done, exceptions are in my mind "good enough",
> and perhaps even preferred.

In an ideal world, SPARK could handle programs with exceptions, and we'd
have both proofs and localized well-designed exception handling.

--
-- Stephe
From: Randy Brukardt on

"Claude" <claude.defour(a)orange.fr> wrote in message
news:d90f60dd-b74f-4eff-b9d8-803ebb64c9d2(a)z8g2000yqz.googlegroups.com...
>
>Exceptions are not the best way to process errors. (i.e., not just a
>SPARK topic).

There is an extra "not" in your statement. :-) You must have meant
"Exceptions are the best way to process errors."

Result codes are dangerous, as ignoring of result codes is one of the major
problems in programming, and the resulting bugs are very, very hard to find.
(That's because of the difficulty of finding errors of omission.)

Perhaps you are too young to remember, but early versions of Windows (3.0,
3.1) had a reputation for being unstable and crashing often. But the
problems were mostly not with Windows but rather with programs that tended
to ignore the error returns from the Windows API. As such, they tended to
continue running but using non-existent Windows handles and the like. Given
the lack of memory protection on 16-bit computers, these errors tended to
take the entire Windows subsystem with them.

We developed a "middle-level" binding for programming Windows, which used
exceptions rather than error codes, and programs developed with it tended to
be much more stable than the commercial programs we were using at the
time --- probably because even when they failed, they didn't take the entire
OS with them (just popped up a message box and terminated properly).
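
To sketch the idea (this isn't our actual binding, and the imported
routine below is just a stand-in for a call that reports failure only
through its return value):

with Interfaces.C;

package Win_Mid is

   Windows_Error : exception;

   type Handle is new Interfaces.C.long;

   --  Mid-level operation: failure becomes an exception instead of
   --  a status value the caller could forget to test.
   function Open_File (Name : String) return Handle;

end Win_Mid;

package body Win_Mid is

   --  Stand-in low-level import; the real Win32 declaration is more
   --  involved, but all that matters here is the error return.
   function Raw_Open (Name : Interfaces.C.char_array) return Handle;
   pragma Import (C, Raw_Open, "raw_open");

   Invalid_Handle : constant Handle := -1;

   function Open_File (Name : String) return Handle is
      H : constant Handle := Raw_Open (Interfaces.C.To_C (Name));
   begin
      if H = Invalid_Handle then
         raise Windows_Error with "Open_File failed for " & Name;
      end if;
      return H;
   end Open_File;

end Win_Mid;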

I understand the problems with safety-critical systems and the need to test
to check exceptions, but I believe that the proper solution to those
problems is along the lines of exception contracts and proving exception
absence when necessary to fulfill those contracts. In particular, if a
routine is declared to not propagate Constraint_Error, then the routine is
incorrect if it in fact could propagate that exception. (I'm mildly annoyed
that we didn't have time in Ada 2012 to pursue those sorts of contracts
seriously.)
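
Something along these lines; the dedicated "does not propagate
Constraint_Error" contract is still hypothetical, but an Ada 2012
precondition already carries a similar proof obligation:

package Checked_Buffers is

   subtype Index is Integer range 1 .. 100;
   type Buffer is array (Index) of Integer;

   --  If every call site is shown to satisfy the precondition, the
   --  indexing in the body cannot raise Constraint_Error. A dedicated
   --  exception contract would let us state that guarantee directly
   --  on the profile.
   function Element (B : Buffer; I : Integer) return Integer
     with Pre => I in Index;

end Checked_Buffers;

package body Checked_Buffers is

   function Element (B : Buffer; I : Integer) return Integer is
   begin
      return B (I);
   end Element;

end Checked_Buffers;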

There is no technical reason that tools like SPARK couldn't support
exceptions if they wanted to do so. Compiler optimizers are a form of proof
tool, and they have no problem dealing with exceptions. (That's because the
control flow caused by exceptions is quite limited, and is generally out of
blocks -- outbound flow has little effect on proofs within a block).

Returning to the dark ages of programming (that is BE - Before Exceptions)
and reintroducing all of the problems of early C code just because of
inadequate tools seems like a horrible step backwards to me.

This is exactly why I'm not much interested in SPARK itself: because it
forces programs back into the style that I originally learned for Fortran IV
in 1976: no dynamic allocation, no dynamic calls, no exceptions -- no
interest! I'm interested in what SPARK brings to the table in proof
capabilities, but I see no reason to give up three-quarters of Ada to get it.
(Remember that Ada allocators, Ada O-O dispatching, and Ada
access-to-subprogram calls are all type-safe and provide plenty of
opportunity for reasoning.)
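
For example, a dynamic call through an access-to-subprogram value is
still fully profile-checked:

with Ada.Text_IO;

procedure Dynamic_Call_Demo is

   --  The profile is part of the access type, so this "dynamic call"
   --  is statically checked even though the target routine is chosen
   --  at run time.
   type Reporter is access procedure (Message : String);

   procedure To_Console (Message : String) is
   begin
      Ada.Text_IO.Put_Line (Message);
   end To_Console;

   Current : Reporter := To_Console'Access;

begin
   Current ("target chosen at run time, profile checked at compile time");
end Dynamic_Call_Demo;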

Randy Brukardt.


From: Georg Bauhaus on
On 6/29/10 10:05 PM, Randy Brukardt wrote:

> Result codes are dangerous, as ignoring of result codes is one of the major
> problems in programming, and the resulting bugs are very, very hard to find.
> (That's because of the difficulty of finding errors of omission.)
>
> Perhaps you are too young to remember, but early versions of Windows (3.0,
> 3.1) had a reputation for being unstable and crashing often. But the
> problems were mostly not with Windows but rather with programs that tended
> to ignore the error returns from the Windows API.

The result code situation in Windows 6.0 (Vista) seems even
worse today, even inside Windows OS software. The example is
this: When a file in NTFS is encrypted (green in explorer) and an
existing program tries to open it (not knowing that it might be
encrypted), the system returns "Access Denied". MS says that they
don't have the OS return a more descriptive code because they know that
many old programs will start failing and behave erratically:
the old programs are not prepared to handle return codes that did
not exist when the program was written.

I learned this when trying to install driver files extracted from
a recent official ZIP archive by Linksys/Cisco, using Windows's own
driver installation program. The files had ended up encrypted in
the file system. Windows's standard installation stops with
"Access Denied" ...



Georg