From: Leif Roar Moldskred on
In comp.lang.java.programmer Arved Sandstrom <dcest61(a)hotmail.com> wrote:
>
> This is what I am getting at, although we need to have Brian's example
> as a baseline. In this day and age, however, I'm not convinced that a
> person could even give away a free car (it wouldn't be free in any case,
> it would still get taxed, and you'd have to transfer title) and be
> completely off the hook, although 99 times out of 100 I'd agree with
> Brian that it's not a likely scenario for lawsuits.

Where Brian's example falls down is that the previous owner of the car is,
in effect, just a reseller: he isn't likely to have manufactured the car
or modified it to any degree.

However, let us assume that he _has_ done modifications to the car such
as, say, replacing the fuel tank. If he messed up the repair and, without
realising it, turned the car into a potential firebomb, he would be
liable for this defect even if he gave the car away free of charge.

> With software the law is immature.

I don't think the law is immature when it comes to software. Ultimately,
software is covered by the same laws as Ford Pintos. That said, the
legal practice might be lagging behind, as might the market and users'
awareness of legal rights and duties.

> To my way of thinking there are some
> implied obligations that come into effect as soon as a software program
> is published, regardless of price. Despite all the "legal" disclaimers
> to the effect that all the risk is assumed by the user of the free
> software, the fact is that the author would not make the program
> available unless he believed that it worked, and unless he believed that
> it would not cause harm. This is common sense.

Indeed, and while the exact limit varies between legal jurisdictions, there
is a legal limit to how much responsibility for a product the manufacturer
can disclaim through contracts or licenses.

> It's early days, and clearly software publishers are able to get away
> with this for now. But things may change.

Let us hope they will.

--
Leif Roar Moldskred

From: Martin Gregorie on
On Fri, 12 Feb 2010 07:16:33 +0000, Richard Heathfield wrote:

> No, it was a bug that wasted a byte and threw away data. And it's still
> a bug - some of the "solutions" adopted by the industry just shifted the
> problem on a little, by using a "century window" technique. That will
> catch up with us eventually.
>
Let's not forget that up until some time in the '90s COBOL could not read
the century, which created a blind spot about four-digit years in many IT
people, COBOL being the language of choice for many mainframe systems
(and a lot of minicomputers too, thanks to the quality of the Micro Focus
implementation).

Until CODASYL changed the language spec, some time in the mid '90s, the
only way you could get the date from the OS was with "ACCEPT CURRENT-DATE
FROM DATE.", where CURRENT-DATE could only be defined as a six-digit
field:

01  CURRENT-DATE.
    05  CD-YY    pic 99.
    05  CD-MM    pic 99.
    05  CD-DD    pic 99.
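
The "century window" fix Richard mentioned amounts to guessing the century
from those two digits. A minimal sketch in C, with an arbitrary pivot of 70
(the arbitrariness is exactly why the fix only postpones the problem):

#include <stdio.h>

/* Expand a two-digit year through a fixed "century window":
 * 00-69 become 2000-2069, 70-99 become 1970-1999. The pivot
 * (70 here) is arbitrary, which is why the technique merely
 * postpones the problem until the window slides past reality. */
static int expand_year(int yy)
{
    return (yy < 70) ? 2000 + yy : 1900 + yy;
}

int main(void)
{
    printf("%d %d\n", expand_year(10), expand_year(85));
    /* prints: 2010 1985 */
    return 0;
}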



--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |
From: Martin Gregorie on
On Thu, 11 Feb 2010 20:07:23 -0500, Lew Pitcher wrote:

> Packed decimal (the COBOL COMP-3 datatype) wasn't a "COBOL" thing; it
> was an IBM S370 "mainframe" thing. IBM's 370 instruction set included a
> large number of operations on "packed decimal" values, including data
> conversions to and from fixed-point binary, and math operations.
>
You're right that it's an IBM thing, but it goes further back than S/370.
I'm unsure about the 1400s, but I know for sure that the smaller S/360s,
model 30 for instance, and several of the other IBM small business
machines, e.g. System/3 and System/36, could *ONLY* do packed decimal
arithmetic.

> IBM's COBOL took advantage of these facilities with the (non-ANSI)
> COMP-3 datatype.
>
It had to: you couldn't have run COBOL on the smaller machines if it
hadn't done so.
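
For anyone who hasn't met it: packed decimal stores two digits per byte,
with the sign in the low nibble of the final byte (0xC positive, 0xD
negative). A rough C sketch of the packing side, ignoring negatives and
scaling:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Pack a non-negative integer into packed-decimal (COMP-3) form:
 * two digits per byte, sign in the low nibble of the last byte. */
static void pack_decimal(unsigned value, uint8_t *out, size_t len)
{
    out[len - 1] = (uint8_t)((value % 10) << 4 | 0x0C); /* digit + sign */
    value /= 10;
    for (size_t i = len - 1; i-- > 0; ) {
        out[i] = (uint8_t)(value % 10);          /* low nibble  */
        value /= 10;
        out[i] |= (uint8_t)((value % 10) << 4);  /* high nibble */
        value /= 10;
    }
}

int main(void)
{
    uint8_t buf[3];                    /* room for 5 digits + sign */
    pack_decimal(12345, buf, sizeof buf);
    printf("%02X %02X %02X\n", buf[0], buf[1], buf[2]);
    /* prints: 12 34 5C */
    return 0;
}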


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |
From: Seebs on
On 2010-02-12, Arved Sandstrom <dcest61(a)hotmail.com> wrote:
> With software the law is immature. To my way of thinking there are some
> implied obligations that come into effect as soon as a software program
> is published, regardless of price. Despite all the "legal" disclaimers
> to the effect that all the risk is assumed by the user of the free
> software, the fact is that the author would not make the program
> available unless he believed that it worked, and unless he believed that
> it would not cause harm. This is common sense.

Common sense has the interesting attribute that it is frequently totally
wrong.

I have published a fair amount of code which I was quite sure had at
least some bugs, but which I believed worked well enough for recreational
use or to entertain. Or which I thought might be interesting to someone
with the time or resources to make it work. Or which I believed worked in
the specific cases I'd had time to test.

I do believe that software will not cause harm *unless people do something
stupid with it*. Such as relying on it without validating it.

> I don't know if there is a legal principle attached to this concept, but
> if not I figure one will get identified. Simply put, the act of
> publishing _is_ a statement of fitness for use by the author, and to
> attach completely contradictory legal disclaimers to the product is
> somewhat absurd.

I don't agree. I think it is a reasonable *assumption*, in the lack of
evidence to the contrary, that the publication is a statement of *suspected*
fitness for use. But if someone disclaims that, well, you should assume that
they have a reason to do so.

Such as, say, knowing damn well that it is at least somewhat buggy.

Wind River Linux 3.0 shipped with a hunk of code I wrote, which is hidden
and basically invisible in the infrastructure. We are quite aware that it
had, as shipped, at least a handful of bugs. We are pretty sure that these
bugs have some combination of the following attributes:

1. Failure will be "loud" -- you can't fail to notice that a particular
failure occurred, and the failure will call attention to itself in some
way.
2. Failure will be "harmless" -- operation of the final system image
built in the run which triggered the failure will be successful because
the failure won't matter to it.
3. Failure will be caught internally and corrected.

So far, out of however many users over the last year or so, plus huge amounts
of internal use, we've not encountered a single counterexample. We've
encountered bugs which had only one of these traits, or only two of them,
but we have yet to find an example of an installed system failing to operate
as expected as a result of a bug in this software. (And believe me, we
are looking!)

That's not to say it's not worth fixing these bugs; I've spent much of my
time for the last couple of weeks doing just that. I've found a fair number
of them, some quite "serious" -- capable of resulting in hundreds or thousands
of errors... All of which were caught internally and corrected.

The key here is that I wrote the entire program with the assumption that I
could never count on any other part of the program working. There's a
client/server model involved. The server is intended to be robust against
a broad variety of misbehaviors from the clients, and indeed, it has been
so. The client is intended to be robust against a broad variety of
misbehavior from the server, and indeed, it has been so. At one point in
early testing, a fairly naive and obvious bug resulted in the server
coredumping under fairly common circumstances. I didn't notice this for two
or three weeks because the code to restart the server worked consistently.
In fact, I only actually noticed it when I noticed the segfault log messages
on the console...
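
That restart code is nothing exotic, by the way; the usual shape is a
supervisor loop like the sketch below. Illustrative only, not the actual
code, and "./server" stands in for the real binary:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Respawn the server whenever it dies abnormally; stop on a
 * clean exit. */
int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid < 0)
            return 1;
        if (pid == 0) {
            execl("./server", "server", (char *)NULL);
            _exit(127);                /* exec itself failed */
        }
        int status;
        if (waitpid(pid, &status, 0) < 0)
            return 1;
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return 0;                  /* clean shutdown */
        if (WIFSIGNALED(status))
            fprintf(stderr, "server died on signal %d; restarting\n",
                    WTERMSIG(status));
        sleep(1);                      /* don't spin on instant death */
    }
}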

A lot of planning goes into figuring out how to handle bad inputs, how
to fail gracefully if you can't figure out how to handle bad inputs, and so
on. Do enough of that carefully enough and you have software that is at
least moderately durable.
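
Concretely, most of it is boundary checking in this vein. A hypothetical
length-field parser; the name and the 65536 limit are invented for the
example:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Validate a length field at the trust boundary; reject anything
 * that isn't a plain decimal number in a sane range. */
static int parse_msg_len(const char *s, long *out)
{
    char *end;
    errno = 0;
    long n = strtol(s, &end, 10);
    if (end == s || *end != '\0')
        return -1;                     /* not a (whole) number */
    if (errno == ERANGE || n < 0 || n > 65536)
        return -1;                     /* out of plausible range */
    *out = n;
    return 0;
}

int main(void)
{
    long len;
    if (parse_msg_len("not-a-number", &len) != 0)
        fprintf(stderr, "bad length field; dropping request\n");
    return 0;
}

None of it is clever on its own; the durability comes from doing it
everywhere, consistently.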

-s
p.s.: For the curious: It's something similar-in-concept to the "fakeroot"
tool used on Debian to allow non-root users to create tarballs or disk images
which contain filesystems with device nodes, root-owned files, and other
stuff that allows a non-root developer to do system development for targeting
of other systems. It's under GPLv2 right now, and I'm doing a cleanup pass
after which we plan to make it available more generally under LGPL. When
it comes out, I will probably announce it here, because even though it is
probably the least portable code I have EVER written, there is of course a
great deal of fairly portable code gluing together the various non-portable
bits, and some of it's fairly interesting.
--
Copyright 2010, all wrongs reversed. Peter Seebach / usenet-nospam(a)seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
From: Martin Gregorie on
On Fri, 12 Feb 2010 05:58:07 -0800, Nick Keighley wrote:

> On 12 Feb, 11:21, Michael Foukarakis <electricde...(a)gmail.com> wrote:
>> > Products have passed all the
>> > tests, yet still failed to meet spec in production.
>
> the testing was inadequate then. System test is supposed to test
> compliance with the requirement.
>
Quite. System tests should at least be written by the designers, and
preferably by the commissioning users.

Module tests should NOT be written by the coders.

> The System Test people do black box
> testing (no access to internals) and demonstrate that it meets the
> requirement. The customer then witnesses a System Acceptance Test (often
> a cut-down version of System test plus some goodies of his own
> (sometimes just ad hoc "what does this do then?")).
>
These are the only tests that really count apart from performance testing.

It's really important that the project manager keep an eye on all levels
of testing, and especially on how the coders design unit tests, or it can
all turn to worms.


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |