From: Malcolm McLean on
On Feb 14, 2:17 pm, James Kanze <james.ka...(a)gmail.com> wrote:
>
>The problem isn't the competence of
>the practitioners (which is the problem certification
>addresses), but the organizations in which they work.
>
Also the problem itself matters. It is impossible to test MiniBasic
on every path that every possible script could take through it, for
example. (I wrote little test scripts to exercise each statement
individually when developing it.) On the other hand, a game like
"Defender" has a very limited set of user inputs.
From: James Kanze on
On Feb 14, 12:23 am, Öö Tiib <oot...(a)hot.ee> wrote:
> On Feb 13, 5:09 pm, Lew <l...(a)lewscanon.com> wrote:

> > James Kanze wrote:
> > > Logically, I think that most of the techniques necessary
> > > for making really high quality software would be difficult
> > > to apply in the context of a free development. And at
> > > least up to a

> > Nonsense. Free software has a much higher rate of adoption
> > of best practices for high quality than for-pay software
> > does.

> > You say so, too. It's the "logically" with which I take
> > issue. That free software uses the best techniques and has
> > the highest quality in the marketplace is entirely logical,
> > in addition to being an observed fact. You just have to
> > avoid false assumptions and fallacies in reasoning.

> Not sure what you mean. There is no such logical binary
> connection. The opposite is just as easy to observe.

> Just download a few C++ code bases at random from places like
> sourceforge.net and review them.

I'm not sure that that's significant. It's less expensive to
publish free software, so you get a lot of idiots doing it. But
these are mostly products that no one is interested in. And
there are actually quite a few start-up companies which do
exactly the same thing. The difference is that the start-up
company will go out of business and disappear, whereas the
code on SourceForge just sits there.

If you're talking about successful projects, there are some good
free ones.

> A code base produced using good techniques is really hard to find
> there. Most code there is of such low quality that it would be
> unthinkable for it to pass QA peer review in a professional
> software house.

I've had the chance to work mostly in well-run shops, but
I've seen places where there was no peer review. Not all
commercial shops are better.

> It is easy to explain logically, since most of it is the hobby
> work of non-professionals who find software development amusing,
> or of professionals in other languages who are learning C++ as a
> hobby.

> Results are slightly better with larger and more popular open
> source products, but that is often thanks to a huge tester and
> developer base, not to good techniques.

At least some of the larger open source projects have a steering
committee, and do practice at least some sort of code review and
regression testing.

> In the best shape are open source projects that are popular and
> in which commercial companies actively participate, since they
> need them to build or support their commercial products. Again,
> it is easy to see how the companies actually enforce techniques
> and quality there, and it is likely that they apply even higher
> standards in-house.

> The worst I have seen is code written by the in-house software
> departments of some smaller non-software companies, but that
> again is easy to explain: the workers in those departments
> obfuscate their work to gain job security.

> So all these things have logical explanations, and there are no
> silly binary connections like free = quality and commercial =
> lack of quality.

Quality is largely determined by the development process. Some
of the better processes probably can't be applied to a free
project, at least not easily. But a lot of commercial projects
aren't applying even a minimal process, and some of the better freeware
projects have applied some of the easier and more obvious
techniques. In the end, if you want quality, you have to
consider the process used to develop the software, independently
of the costs.

--
James Kanze
From: James Kanze on
On Feb 13, 4:27 pm, Seebs <usenet-nos...(a)seebs.net> wrote:
> On 2010-02-13, James Kanze <james.ka...(a)gmail.com> wrote:

> > Logically, I think that most of the techniques necessary for
> > making really high quality software would be difficult to
> > apply in the context of a free development.

> They might be hard to apply, but consider that a great deal of
> free software is written without idiots saying "you need to
> get this done sooner so we can book revenue this quarter to
> please shareholders".

If your point is that some (most?) commercial vendors don't have
a good development process, I already pointed that out.

> It's also often written by particularly good developers, who
> care about their code.

That I don't believe. I've seen a lot of particularly good
developers in industry as well, people who care about their
code---in fact, one of the most important things in creating a
good process is to get people to care about their code.

> It probably also matters that free software writers
> expect the code itself to get feedback, not just the behavior
> of the application. I have submitted bug reports about
> poorly-expressed code, not just about code which didn't work.

In a well run development process, such feedback is guaranteed,
not just "expected". That's what code reviews are for.

> > And at least up to a point, they actually reduce the cost of
> > development. So theoretically, the quality of commercial
> > software should be considerably higher than that of free
> > software.

> Again, I don't think there's actually any force driving that.
> The benefits of well-written software are significant enough
> that it is likely worth it to some people to improve software
> they have access to, and if it's worth it to them to do that,
> it costs them virtually nothing to release the improvements.

> Free software often ends up with the best efforts of hundreds
> of skilled programmers, with active filtering in place to keep
> badly-written code from sneaking in.

I'm far from sure about the "often", and I have serious doubts
about "hundreds"---you don't want hundreds of cooks spoiling the
broth---but that's more or less the case for the best-run
freeware projects. That is no different from the best-run
commercial organizations, except that the commercial
organization has more power to enforce the rules it sets.

> > And Subversion is at least as good as any of the version
> > management tools, except ClearCase (and the two really
> > address different models of development).

> If you are implying that CC is actually usable to you, that
> marks a first in my experience. No one else I've known has
> ever found it preferable to any of the open source tools, of
> which git is probably currently the most elegant.

ClearCase is by far the best version management system for
large, well run projects. It's a bit overkill for smaller
things, and it causes no end of problems if the project isn't
correctly managed (but what doesn't?), but for any project over
about five or six people, I'd rather use ClearCase than anything
else.

> > I think that part of the problem is that a mistake in a
> > program will affect every instance of the program. Most
> > recalls for cars, on the other hand, only affect a small
> > subset of the total production.

> Another issue is that, if you give away open source software,
> people can modify it. If you modify my code, and your
> modification is not itself buggy, and my code is not itself
> buggy, but your modification causes some part of my code not
> to work as expected, whose fault is that? This kind of thing
> is a lot more complicated with code than it is with physical
> objects. You don't have a million people using a bridge, with
> a couple hundred thousand of them using the bridge recompiled
> for sports cars and another couple hundred thousand running it
> with a third-party tollbooth extension.

That is, of course, a weakness of free software. A company
using it, however, should be able to manage this (although I
once worked for a company where one employee would slip
modifications into the g++ we were trying to use for production
code, without telling anyone).

--
James Kanze
From: James Kanze on
On Feb 13, 5:42 pm, Brian <c...(a)mailvault.com> wrote:
> On Feb 13, 6:19 am, James Kanze <james.ka...(a)gmail.com> wrote:
> > On 12 Feb, 22:37, Arved Sandstrom <dces...(a)hotmail.com> wrote:

> > Logically, I think that most of the techniques necessary for
> > making really high quality software would be difficult to apply
> > in the context of a free development. And at least up to a
> > point, they actually reduce the cost of development.

[I really shouldn't have said "most" in the above. "Some"
would be more appropriate, because there are a lot of
techniques which can be applied to free development.]

> I'm not sure what you are referring to, but one thing we
> agree is important to software quality is code reviewing.
> That can be done in a small company and I'm sometimes
> given feedback on code in newsgroups and email.

To be really effective, design and code reviews require a
physical meeting. Depending on the organization of the project,
such physical meetings are more or less difficult.

Code review is *not* just some other programmer happening to
read your code by chance, and making some random comments on
it. Code review involves discussion. Discussion works best
face to face. (I've often wondered if you couldn't get similar
results using teleconferencing and emacs's make-frame-on-display
function, so that people at the remote site can edit with you.
But I've never seen it even tried. And I note that where I
work, with development at two main sites, one in the US and one
in London, we make extensive use of teleconferencing, and the
company still spends a fortune sending people from one site to
the other, because even teleconferencing isn't as good as face
to face.)

> > So theoretically, the quality of commercial software should
> > be considerably higher than that of free software.
> > Practically, when I actually check things out... g++ is one
> > of the better C++ compilers available, better than Sun CC or
> > VC++, for example.

> Maybe now that Sun CC and VC++ are free they'll improve. :)

I doubt it. Making something free doesn't change your
development process. (On the other hand, if it increases the
number of users, and thus your user feedback, it may help. But
I don't think any quality problems with VC++ can be attributed
to a lack of users.)

> I'm not sure about Sun CC, but I guess that it is free with
> OpenSolaris. Still, I'm not comfortable with g++'s foundation.
> I would like to think that VC++, written mostly in C++, is at
> least able to produce a draw when up against g++.

There are a lot of factors which affect quality, but the basic
development process is by far the most important one. And from
what I've seen, I'd guess that Microsoft doesn't have a
particularly good process. Note that it's a lot easier to have
a good process when relatively few people are involved. Which
works against Microsoft, and also to a degree against g++. And
may contribute to explaining why the EDG front-end is so good
(along with the fact that it's probably easier to find four
exceptional people than to find 400).

[...]
> That may be a reason why an online approach makes sense.
> Since you haven't shipped out instances of the program, you
> just make sure the instances that exist on your servers
> are corrected. The other way, a court in a distant country
> might hold you liable if some customers didn't receive a
> message that they should update their copy.

Who knows what a court in a distant country may decide. (Note
that Microsoft now uses the push model for patches---by default,
automatic upgrading is activated, and you get all of the latest
patches for Windows, whether you asked for them or not.)

--
James Kanze
From: James Kanze on
On Feb 13, 6:07 pm, LR <lr...(a)superlink.net> wrote:
> James Kanze wrote:
[...]
> > The "standard" life of a railway locomotive is thirty or forty
> > years. Some of the Paris suburban trainsets go back to the
> > early 1970's, or earlier, and they're still running.

> Do you happen to know if they've undergone any engineering
> changes over those 40 years for safety or performance
> enhancements?

Engineering changes, I don't know; I think in many cases, no.
(The "petit gris" commuter equipment in the Paris area certainly
hasn't changed much since its introduction.) But they are
maintained, with regular check-ups, replacement of worn parts,
etc., and if there were a safety defect, it would be corrected.

> With worn/damaged parts replacement how much of the original
> equipment remains? Wheel sets, motors, controls, seats,
> doors, couplers, windshields, etc. all get inspected and
> replaced on schedule.

Certainly. Hardware wears out. Even on your car, you'll
replace the brake pads from time to time (I hope). In the case
of locomotives, a lot more gets changed. But for the most part,
it's a case of replacing a standard component with a new, but
otherwise identical, component.

Not that that was my point. My point was that any embedded
software they're using was written before 1975 (more or
less---in the case of the "petit gris", before 1965, when the
first deliveries took place).

(The "petit gris" are the Z 5300 "automotrices" used by the
French railways in suburban service. They're very well known to
anyone commuting in the Paris area. I'm not aware of any
information about them in English, but
http://fr.wikipedia.org/wiki/Z_5300 has some information in
French, for those who can read French and are interested. The
main point is that they were put into service starting in 1965,
and are still in service, without any real changes, today.)

> Not all locomotives last 40 years.

> Design flaws can contribute to a shorter life. For example,
> the Erie Triplex:
> http://www.dself.dsl.pipex.com/MUSEUM/LOCOLOCO/triplex/triplex.htm

Certainly, and others might last longer. (But somehow, I doubt
that the Erie Triplex had any embedded software that could have
failed if the locomotive had still been in use in the year
2000.)

> Although design flaws played a part in the death of the Jawn
> Henry, I've heard that N&W's business was undergoing changes
> that undercut the company's desire to invest in coal-fired
> power.
> http://www.dself.dsl.pipex.com/MUSEUM/LOCOLOCO/nwturbine/nflkturb.htm

> >> Where do you get your conclusions that there was much software
> >> out there that was worth re-writing eighteen years ahead of
> >> time?

> To continue with our locomotives, the replacement of
> coal-fired steam by diesel and electric power (no, no, not this
> one: http://www.dself.dsl.pipex.com/MUSEUM/LOCOLOCO/swisselec/swisselc.htm ;))
> was largely driven by maintenance cost, the sort that
> replaces the lubricating oil, not the kind that replaces
> faulty brake systems, although that played a role too. It's
> nice to be able to buy parts OTS if you need them rather than
> have a huge work force ready to make parts.

Yes. But that's not really the issue here. I'm not sure when
the Swiss started using regenerative braking on the Gotthard
line, but when they did, they obviously had to retrofit a number
of locomotives in order for it to work. But that doesn't mean
that the original locomotives weren't designed with the idea
that they'd be used for 40 years, and it doesn't necessarily
mean that all of the programs embedded in them were replaced
(although I think that a move to regenerative braking might
affect most of them).

--
James Kanze