From: Jerry on
Here is an interesting article on software quality in science:

http://www.guardian.co.uk/technology/2010/feb/05/science-climate-emails-code-release

From the article:

"There is enough evidence for us to regard a lot of scientific
software with worry. For example Professor Les Hatton, an
international expert in software testing resident in the Universities
of Kent and Kingston, carried out an extensive analysis of several
million lines of scientific code. He showed that the software had an
unacceptably high level of detectable inconsistencies.

"For example, interface inconsistencies between software modules which
pass data from one part of a program to another occurred at the rate
of one in every seven interfaces on average in the programming
language Fortran, and one in every 37 interfaces in the language C.
This is hugely worrying when you realise that just one error — just
one — will usually invalidate a computer program. What he also
discovered, even more worryingly, is that the accuracy of results
declined from six significant figures to one significant figure during
the running of programs.

"Hatton and other researchers' work indicates that scientific software
is often of poor quality. What is staggering about the research that
has been done is that it examines commercial scientific software –
produced by software engineers who have to undergo a regime of
thorough testing, quality assurance and a change control discipline
known as configuration management.

"By contrast scientific software developed in our universities and
research institutes is often produced by scientists with no training
in software engineering and with no quality mechanisms in place and
so, no doubt, the occurrence of errors will be even higher."
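
The claim about accuracy falling from six significant figures to one is
easy to believe if you have ever watched single-precision rounding error
pile up. Here is a minimal sketch of the mechanism (my own toy
illustration, not Hatton's experiment), assuming a compiler where Float
is IEEE single precision and Long_Float is double:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Precision_Drift is
      --  Sum 0.1 ten million times in single and in double precision.
      --  The true answer is 1_000_000.0.
      N        : constant := 10_000_000;
      S_Single : Float      := 0.0;
      S_Double : Long_Float := 0.0;
   begin
      for I in 1 .. N loop
         S_Single := S_Single + 0.1;
         S_Double := S_Double + 0.1;
      end loop;
      Put_Line ("Float      sum =" & Float'Image (S_Single));
      Put_Line ("Long_Float sum =" & Long_Float'Image (S_Double));
      --  On typical IEEE hardware the Float total agrees with 1.0E6 in
      --  roughly its leading digit only, while the Long_Float total is
      --  accurate to many digits.
   end Precision_Drift;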


For several years, I have used Ada for as much of my engineering
research as I can, having switched from Pascal and having dumped
FORTRAN many years ago (although it was my first language). Ada offers
the same advantages in this application as in other applications.
However, in my experience, Ada is hampered by having limited support
for standard and popular libraries such as the GNU Scientific Library.
(Feel free to add your own favorite library which is not supported by
Ada.)
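
To make the library point concrete, here is roughly what calling even
one GSL routine from Ada looks like: a thin binding written by hand.
This is only a sketch; gsl_sf_bessel_J0 is a real GSL function with the
C signature double gsl_sf_bessel_J0(const double), and the build line
assumes GNAT.

   with Interfaces.C; use Interfaces.C;
   with Ada.Text_IO;

   procedure GSL_Bessel_Demo is
      --  Thin, hand-written binding to one GSL routine.
      function Bessel_J0 (X : double) return double;
      pragma Import (C, Bessel_J0, "gsl_sf_bessel_J0");
   begin
      --  Build (GNAT): gnatmake gsl_bessel_demo.adb -largs -lgsl -lgslcblas -lm
      Ada.Text_IO.Put_Line ("J0(5.0) =" & double'Image (Bessel_J0 (5.0)));
   end GSL_Bessel_Demo;

Multiply that declaration by every routine a real project needs and you
see why the lack of ready-made bindings matters.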

Jerry
From: Jerry on
Here is the link in the Guardian article to the original work:

http://www.leshatton.org/Documents/Texp_ICSE297.pdf

It appears that this work was done in the 1990s. The paper is actually
fairly entertaining to read, if for nothing else than for how scary it is. Here
is a choice comment, about a commercial program for use in the nuclear
engineering industry:

"This package climbed to an awe-inspiring 140 weighted static faults
per 1000 lines of code, and in spite of the aspirations of its
designers, amounted to no more than a very expensive random number
generator."

And this comment which addresses the use of Ada:

"In C, note that function prototypes were well used only around 60% of
the time and as a result, interface faults accounted for about 24% of
the total. In other words, if function prototypes were mandated in all
C functions, 24% of all serious faults would disappear. The
computational scientist should not use this as an argument in favour
of C++ or Ada in which they are mandated. A large number of new
failure modes result from this action, which lack of space prohibits
further discussion here. The net result of changing languages appears
to be that the overall defect density appears to be about the same,
(Hatton 1997). In other words, when a language corrects one
deficiency, it appears to add one of its own."

Jerry
From: Hibou57 (Yannick Duchêne) on
On 9 Feb, 22:51, Jerry <lancebo...(a)qwest.net> wrote:
> And this comment which addresses the use of Ada:
>
> "In C, note that function prototypes were well used only around 60% of
> the time and as a result, interface faults accounted for about 24% of
> the total. In other words, if function prototypes were mandated in all
> C functions, 24% of all serious faults would disappear.  The
> computational scientist should not use this as an argument in favour
> of C++ or Ada in which they are mandated. A large number of new
> failure modes result from this action, which lack of space prohibits
> further discussion here. The net result of changing languages appears
> to be that the overall defect density appears to be about the same,
> (Hatton 1997). In other words, when a language corrects one
> deficiency, it appears to add one of its own."

Switching from C to C++ is nearly the same as switching from C to
another C (C++ is backward compatible with C, as far as I know), so
there is nothing surprising there.

About Ada now: it would be nice to better substantiate the assertion
that Ada may (as suggested) correct some errors but will add some
others. What is this assertion founded on?

A question: were the studies on migration from C to Ada done by C
developers who had just learned the basics of Ada, or by Ada developers
with at least some real experience in Ada?
From: Robert A Duff on
Jerry <lanceboyle(a)qwest.net> writes:

> Here is the link in the Guardian article to the original work:
>
> http://www.leshatton.org/Documents/Texp_ICSE297.pdf

Thanks for the link.

> And this comment which addresses the use of Ada:

Hmm. Looks like it fails to address Ada (or C++)
due to "lack of space". I skept.

> "In C, note that function prototypes were well used only around 60% of
> the time and as a result, interface faults accounted for about 24% of
> the total. In other words, if function prototypes were mandated in all
> C functions, 24% of all serious faults would disappear.

Surely that's no longer a problem in modern C!?

>...The
> computational scientist should not use this as an argument in favour
> of C++ or Ada in which they are mandated. A large number of new
> failure modes result from this action, which lack of space prohibits
> further discussion here.

Hmm...

>...The net result of changing languages appears
> to be that the overall defect density appears to be about the same,
> (Hatton 1997). In other words, when a language corrects one
> deficiency, it appears to add one of its own."

That assertion requires evidence, and I don't see it here!

- Bob
From: Georg Bauhaus on
Robert A Duff wrote:
> Jerry <lanceboyle(a)qwest.net> writes:
>
>> Here is the link in the Guardian article to the original work:
>>
>> http://www.leshatton.org/Documents/Texp_ICSE297.pdf

>> ...The
>> computational scientist should not use this as an argument in favour
>> of C++ or Ada in which they are mandated. A large number of new
>> failure modes result from this action, which lack of space prohibits
>> further discussion here.
>
> Hmm...
>
>> ...The net result of changing languages appears
>> to be that the overall defect density appears to be about the same,
>> (Hatton 1997). In other words, when a language corrects one
>> deficiency, it appears to add one of its own."
>
> That assertion requires evidence, and I don't see it here!

Indeed, looking at some of the things that Les Hatton appears to do
for a living, there might be an incentive not to perform a comparative
study of the effects of using statically checked C (with Safer C (TM))
versus statically "checked" Ada (SPARK, or SofCheck Inspector (TM)).
IOW, language choice does not matter as long as you use our tools and
participate in our training courses.

His arguments still seem to be based on studies from the mid-1990s;
at least a study is something. Is there anything in the Tokeneer
data that could serve as a basis for a comparison?
What failure modes might SPARK add?