From: James Kanze
On Feb 14, 4:45 pm, Seebs <usenet-nos...(a)seebs.net> wrote:
> On 2010-02-14, James Kanze <james.ka...(a)gmail.com> wrote:

> > Really. I've not seen any free software which adopted all
> > of the best practices.

> Bespoke software may. But go to a store that sells discs in
> boxes, and tell me with a straight face that any of those
> boxes contain software developed through a development
> operation which adopted all of the best practices.

I've already stated that most commercial organizations aren't
doing a very good job either. There's a big difference between
what is feasible, and what is actually done.

[...]
> > First, free software doesn't have the highest quality. When
> > quality is really, really important (in critical systems), you
> > won't see any free software.

> I'm not totally sure of this.

I am. If only because such projects require a larger degree of
accountability than free software can offer. I can't see anyone
providing free software with contractual penalties for downtime;
most of the software I worked on in the 1990s had such
penalties.

--
James Kanze
From: James Kanze
On Feb 14, 4:54 pm, Lew <no...(a)lewscanon.com> wrote:
> James Kanze wrote:
> >> Did you actually try using any free software back in the early
> >> 1990's [sic]?
> Seebs wrote:
> > I did.

> Same here.

> > NetBSD was for the most part reliable and bulletproof during
> > that time; it ran rings around several commercial Unixes. I
> > had no interest in g++; so far as I could tell, at that
> > time, "a C++ compiler" was intrinsically unusable. But gcc
> > was stable enough to build systems that worked reliably, and
> > the BSD kernel and userspace were pretty livable.
> James Kanze wrote:
> >> Neither Linux nor g++ were even usable, and emacs (by

> That's pure fantasy.

> I used a couple of Linux distributions in the early nineties,
> and they worked better than commercial UNIX variants.

And I tried to use them, and they wouldn't stop crashing.
Even today, Linux is only gradually approaching the level of the
Unixes back then.

> I used emacs and knew many who used vi back then. They were
> solid.

I used vi back then. It didn't have many features, but it was
solid. It was also a commercial product. Emacs depended on the
version: some versions worked, some didn't.

> I used gcc by the mid-90s and it was rock solid, too.

G++ was a joke, and remained one until the mid-1990s. It was
usual to find more bugs in the compiler than in freshly written
code.
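
To give an idea of the level (a reconstructed illustration from
memory, not a verified bug against any particular release):
perfectly ordinary template code along these lines was enough to
expose compiler bugs in the g++ of that era.

    // Illustrative only: my reconstruction of the flavor of code
    // that tripped up early g++, not an actual bug report.
    #include <vector>

    template <typename T>
    class Stack
    {
    public:
        void push(T const& value) { data_.push_back(value); }
        T pop();                    // defined out of line, below
        bool empty() const { return data_.empty(); }
    private:
        std::vector<T> data_;
    };

    // Out-of-line definitions of class template members were a
    // typical trouble spot for compilers of the early 1990s.
    template <typename T>
    T Stack<T>::pop()
    {
        T value = data_.back();
        data_.pop_back();
        return value;
    }

Nothing exotic, and today any compiler handles it; back then, it
was a good way of finding out how much of the language your
compiler actually implemented.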

> I used free software even as far back as the late 80s that
> worked beautifully.

> The facts to back up your assertions are not in evidence.

They are for anyone who is open and honest about it. I did
compiler evaluations back then, so I know pretty well what I'm
talking about. We measured the differences.

--
James Kanze
From: James Kanze
On Feb 14, 4:56 pm, Seebs <usenet-nos...(a)seebs.net> wrote:
> On 2010-02-14, James Kanze <james.ka...(a)gmail.com> wrote:

> > To be really effective, design and code review requires a
> > physical meeting. Depending on the organization of the project,
> > such physical meetings are more or less difficult.

> Nonsense.

The more channels you have available, the better communication
works.

> > Code review is *not* just some other programmer happening to
> > read your code by chance, and making some random comments on
> > it. Code review involves discussion. Discussion works best
> > face to face.

> IMHO, this is not generally true. Of course, I'm autistic, so
> I'd naturally think that.

There are probably some special exceptions, but other people's
expressions and gestures are a vital part of communication.

Not to mention the informal communications which occur when you
meet at the coffee pot. I've worked from home, and in the end,
I was frustrated by it because I was missing so much of the
informal communications which make things go.

> But I've been watching a lot of code reviews (our review
> process has named reviewers, but also has reviews floating
> about on a list in case anyone else sees something of
> interest, which occasionally catches stuff). And what I've
> seen is that a whole lot of review depends on being able to
> spend an hour or two studying something, or possibly longer,
> and write detailed analysis -- and that kind of thing is
> HEAVILY discouraged for most people by a face-to-face meeting,
> because they can't handle dead air.

That sort of thing is essential for any review. You do it
before the face-to-face meeting. But the reviewer isn't God,
either; the purpose of the meeting is to discuss the issues, not
to say that the coder did it wrong.

> Certainly, discussion is essential to an effective review.
> But discussion without the benefit of the ability to spend
> substantial time structuring and organizing your thoughts will
> feel more effective but actually be less effective, because
> you're substituting primate instincts for reasoned analysis.

> I really don't think that one can be beaten. If what you need
> for a code review is for someone to spend hours (or possibly
> days) studying some code and writing up comments, then trying
> to do it in a face-to-face meeting would be crippling. Once
> you've got the comments, you could probably do them
> face-to-face, but again, that denies you the time to think
> over what you've been told, check it carefully, and so on.
> You want a medium where words sit there untouched by the
> vagaries of memory so you can go back over them.

> But!

> You do need people who are willing and able to have real
> discussions via text media. That's a learned skill, and not
> everyone's learned it.

> It is not universally true that discussion "works best face to
> face".

Almost universally. Ask any psychologist. We communicate
through many different channels.

--
James Kanze
From: Seebs
On 2010-02-16, James Kanze <james.kanze(a)gmail.com> wrote:
> And I tried to use them, and they just didn't stop crashing.
> Even today, Linux is only gradually approaching the level of the
> Unixes back then.

I guess it depends on which Unixes, and which Linux. When I went from
SVR4 Unix to NetBSD, though, I had a LOT less downtime.

> I used vi back then. It didn't have many features, but it was
> solid. It was also a commercial product. Emacs depended on the
> version. Some worked, some didn't.

The version I used (nvi) was nearly rock-solid. Which is to say, I
found and reported one bug, and it was fixed within a day. I've been
using that same 1994 version of nvi ever since, and I have not
encountered a single bug in over 15 years.

>> I used gcc by the mid-90s and it was rock solid, too.

> G++ was a joke. A real joke until the mid-1990's. It was usual
> to find more bugs in the compiler than in freshly written code.

I said gcc, not g++. And while, certainly, it has bugs, so has every
other compiler I've used. I had less trouble with gcc than with Sun
cc. I used a commercial SVR4 which switched to gcc because it was
noticeably more reliable than the SVR4 cc.

> They are for anyone who is open and honest about it. I did
> compiler evaluations back then, so I know pretty well what I'm
> talking about. We measured the differences.

I do not think it is likely that implying that anyone who disagrees
with you is being dishonest will lead to productive discussion. My
experiences with free software were apparently different from yours --
or perhaps my experiences with commercial software were different.

Whatever the cause, the net result is that by the mid-90s, I had a strong
preference for free tools and operating systems, because they had
consistently been more reliable for me.

-s
--
Copyright 2010, all wrongs reversed. Peter Seebach / usenet-nospam(a)seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
From: Seebs
On 2010-02-16, James Kanze <james.kanze(a)gmail.com> wrote:
> On Feb 14, 4:56 pm, Seebs <usenet-nos...(a)seebs.net> wrote:
>> On 2010-02-14, James Kanze <james.ka...(a)gmail.com> wrote:
>> > To be really effective, design and code review requires a
>> > physical meeting. Depending on the organization of the project,
>> > such physical meetings are more or less difficult.

>> Nonsense.

> The more channels you have available, the better communication
> works.

Not so. Some channels can swamp others. If you're busy picking up
facial expressions, instead of properly processing the raw data, the
extra channel has HARMED your quality of communication.

> There are probably some special exceptions, but other people's
> expressions and gestures are a vital part of communication.

They may well be -- but my experience has been that you can communicate
some things much better without them.

> Not to mention the informal communications which occur when you
> meet at the coffee pot. I've worked from home, and in the end,
> I was frustrated by it because I was missing so much of the
> informal communications which make things go.

I would miss that, except that in my workplace (which spans several
continents), the "coffee pot" is IRC.

> That sort of thing is essential for any review. You do it
> before the face-to-face meeting. But the reviewer isn't God,
> either; the purpose of the meeting is to discuss the issues, not
> to say that the coder did it wrong.

If you do it well enough, I don't think the face-to-face meeting does
anything but cater to superstition.

> Almost universally. Ask any psychologist. We communicate
> through many different channels.

I do, in fact, have a psych degree. And what I can tell you is that, while
there are many channels, sometimes you get better or more reliable
communication by *suppressing* the non-analytic channels. Say, if you
were trying to obtain accurate data about a thing subject to pure analysis,
rather than trying to develop a feel for someone else's emotional state.

The goal is not to have the largest possible total number of bits
communicated, no matter what those bits are or what they communicate about;
it's to communicate a narrowly-defined specific class of things, and for
that plain text can have advantages.

Most people I know have had the experience of discovering that a particular
communication worked much better in writing than it did in speech. Real-time
mechanisms can be a very bad choice for some communications.

You get more data per second if you are watching ten televisions than if
you're watching only one. That doesn't mean that, if you want to learn a
lot, the best way to do it is to watch multiple televisions at once. For
that matter, while a picture may be worth a thousand words, sometimes it's
only worth the exact thousand words it would take to describe the picture.
Why would we read code when we could watch a movie of someone reading it,
complete with facial expressions, tone, and gestures?

Because facial expressions, tone, and gestures swamp our capacity to
process input, and leave us feeling like we've really connected but with
a very high probability of having completely missed something because
we were too busy being connected to think carefully. It's like the way
that people *feel* more productive when they multitask, but they actually
get less done and don't do it as well.

-s
--
Copyright 2010, all wrongs reversed. Peter Seebach / usenet-nospam(a)seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!