From: Andy Champ on
Lew wrote:
> What, with a name like "Bloch" I wouldn't know what I'm saying?

I might point out that all _we_ could see is that you are called Lew...

From: Martin Gregorie on
On Sun, 14 Feb 2010 16:45:43 +0000, Seebs wrote:

> I have no idea what you're talking about. I cannot point to any point
> in the history of my exposure to free software (which predates the
> 1990s) at which any major project had no central management. Linux was
> pretty flaky early on, but then, in the early 1990s, all it had to do
> was be more stable than Windows 3.1, which was not a high bar to reach
> for.
About the best free software I remember from that era was Kermit. It
worked and worked well and had ports to a large range of OSen and
hardware of widely varying sizes: I first used it on a 48 KB 6809 running
Flex-09 and still use it under Linux. It had an open development model
though it was managed within a university department, so the project
owners had pretty good control over it.

martin@ | Martin Gregorie
gregorie. | Essex, UK
org |
From: Lew on
Lew wrote:
>> What, with a name like "Bloch" I wouldn't know what I'm saying?

Andy Champ wrote:
> I might point out that all _we_ could see is that you are called Lew...

You might, but that would be stupid.

From: Nick Keighley on
On 12 Feb, 21:22, James Kanze <james.ka...(a)> wrote:
> On Feb 11, 9:33 pm, Andy Champ <no....(a)nospam.invalid> wrote:
> > Lew wrote:
> > > Andy Champ wrote:

> > >> In 1982 the manager may well have been right to stop them
> > >> wasting their time fixing a problem that wasn't going to be
> > >> a problem for another 18 years or so.  The software was
> > >> probably out of use long before that.
> > > Sure, that's why so many programs had to be re-written in 1999.
> > > Where do you get your conclusions?
> > Pretty well everything I saw back in 1982 was out of use by
> > 1999.  How much software do you know that made the transition?
> > Let's see.. Operating systems.  The PC world was... umm.. CP/M
> > 80?  Maybe MS-Dos 1.0?  And by 1999 I was working on drivers
> > for Windows 2000.  That's at least two, maybe three depending
> > how you count it, ground-up re-writes of the OS.
> > With that almost all the PC apps had gone from 8 bit versions
> > in 64kb of RAM to 16-bit DOS to Windows 3.1 16-bit with
> > non-preemptive multitasking and finally to a 32-bit app with
> > multi-threading and pre-emptive multitasking running in
> > hundreds of megs.
> > OK, so how about embedded stuff?  That dot-matrix printer
> > became a laserjet.  The terminal concentrator lost its RS232
> > ports, gained a proprietary LAN, then lost that and got
> > ethernet.  And finally evaporated in a cloud of client-server
> > computing smoke.

I know of systems that still poke data down 9600 baud lines.

> The "standard" life of a railway locomotive is thirty or forty
> years.  Some of the Paris suburban trainsets go back to the
> early 1970's, or earlier, and they're still running.
> > I'm not so up on the mainframe world - but I'll be surprised
> > if the change from dumb terminals to PC clients didn't have a
> > pretty major effect on the software down the back.
> Have you been to a bank lately, and seen what the clerk uses to
> ask about your account?  In more than a few, what you'll see on
> his PC is a 3270 emulator.  Again, a technology which goes back
> to the late 1960's/early 1970's.

travel agencies seem to run some pretty old stuff

> > Where do you get your conclusions that there was much software
> > out there that was worth re-writing eighteen years ahead of
> > time?
> It depends on what you're writing, but planned obsolescence
> isn't the rule everywhere.

I believe the UK's National Grid (the high-voltage country-wide power
distribution system) wanted one-for-one replacements for very old
electronic components. What had been a rat's nest of TTL (or maybe
something older) was replaced with a board containing only a few more
modern components (maybe one). But the new board had to have the same
form factor, electrical power requirements etc. This was because they
didn't want to actually replace the computers the boards were part of.

I know of software that runs on an emulated VAX.

Sometimes software far outlives its hardware.

From: Brian on
On Feb 14, 7:26 am, James Kanze <james.ka...(a)> wrote:
> On Feb 13, 5:42 pm, Brian <c...(a)> wrote:
> > On Feb 13, 6:19 am, James Kanze <james.ka...(a)> wrote:
> > > On 12 Feb, 22:37, Arved Sandstrom <dces...(a)> wrote:
> > > Logically, I think that most of the techniques necessary for
> > > making really high quality software would be difficult to apply
> > > in the context of a free development. And at least up to a
> > > point, they actually reduce the cost of development.
> [I really shouldn't have said "most" in the above. "Some"
> would be more appropriate, because there are a lot of
> techniques which can be applied to free development.]
> > I'm not sure what you are referring to, but one thing we
> > agree is important to software quality is code reviewing.
> > That can be done in a small company and I'm sometimes
> > given feedback on code in newsgroups and email.
> To be really effective, design and code review requires a
> physical meeting. Depending on the organization of the project,
> such physical meetings are more or less difficult.
> Code review is *not* just some other programmer happening to
> read your code by chance, and making some random comments on
> it. Code review involves discussion. Discussion works best
> face to face. (I've often wondered if you couldn't get similar
> results using teleconferencing and emacs's make-frame-on-display
> function, so that people at the remote site can edit with you.
> But I've never seen it even tried. And I note that where I
> work, we develop at two main sites, one in the US, and one in
> London, we make extensive use of teleconferencing, and the
> company still spends a fortune sending people from one site to
> the other, because even teleconferencing isn't as good as face
> to face.)

It hadn't really dawned on me that my approach might be
thought of like that. The rabbis teach that G-d controls
everything; there's no such thing as chance or coincidence.
The Bible says, "And we know that all things work together
for good to them that love G-d, to them who are the called
according to His purpose." Romans 8:28. I get a lot of
intelligent and useful discussion on gamedev, here, and
on Boost. It's up to me, though, to sift through it and
decide how to use the feedback. I've incorporated at
least three suggestions mentioned on gamedev and quite a
few more from here. The latest gamedev suggestion was to
use variable-length integers in message headers -- say for
message lengths. I rejected that though as a redundant
step since I'm using bzip for compression of data. I
thought for awhile that was the end of that, but then
remembered that there's a piece of data that wasn't
compressed -- the length of the compressed data that is
sent just ahead of the compressed data. So now, when
someone uses compression, the length of the compressed
data is generally also compressed with the following:
(I say generally because it depends on the length of the
compressed data.)

uint8_t
CalculateIntMarshallingSize(uint32_t val)
{
  if (val < 128) { // 2**7
    return 1;
  } else if (val < 16384) { // 2**14
    return 2;
  } else if (val < 2097152) { // 2**21
    return 3;
  } else if (val < 268435456) { // 2**28
    return 4;
  } else {
    return 5;
  }
}
// Encodes integer into variable-length format: 7 payload bits
// per byte, least-significant group first, high bit set on all
// but the final byte.
void
encode(uint32_t N, unsigned char* addr)
{
  while (true) {
    uint8_t abyte = N & 127;
    N >>= 7;
    if (0 == N) {
      *addr = abyte; // last byte: high bit clear
      break;
    }
    abyte |= 128; // more bytes follow
    *addr = abyte;
    ++addr; // (reconstructed; this line was garbled in quoting)
  }
}
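For reference, a decoding counterpart would just reverse that scheme.
This is a minimal sketch of my own, not code from the file above; the
name decode and the bytesRead out-parameter are illustrative:

```cpp
#include <cstdint>

// Decodes an integer written by encode(): each byte carries 7
// payload bits, least-significant group first; a set high bit
// (128) means another byte follows.
uint32_t decode(unsigned char const* addr, uint8_t* bytesRead)
{
    uint32_t result = 0;
    int shift = 0;
    uint8_t count = 0;
    while (true) {
        uint8_t abyte = addr[count++];
        result |= static_cast<uint32_t>(abyte & 127) << shift;
        if ((abyte & 128) == 0) { // high bit clear: last byte
            break;
        }
        shift += 7;
    }
    if (bytesRead) {
        *bytesRead = count;
    }
    return result;
}
```

So 300 (= 2*128 + 44) comes over the wire as the two bytes
{44|128, 2} and decodes back in one pass.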

uint8_t maxBytes =
  CalculateIntMarshallingSize(compressedBufsize_); // reconstructed; the
                                                   // original expression
                                                   // was lost in quoting
uint32_t writabledstlen = compressedBufsize_ - maxBytes;
int bzrc = BZ2_bzBuffToBuffCompress(reinterpret_cast<char*>
                                    (compressedBuf_ + maxBytes),
                                    &writabledstlen,
                                    reinterpret_cast<char*>(buf_), index_,
                                    7, 0, 0);
if (BZ_OK != bzrc) {
  throw failure("Buffer::Flush -- bzBuffToBuffCompress failed");
}

uint8_t actualBytes = CalculateIntMarshallingSize(writabledstlen);

encode(writabledstlen, compressedBuf_ + (maxBytes - actualBytes));
PersistentWrite(sock_, compressedBuf_ + (maxBytes - actualBytes),
                actualBytes + writabledstlen);
index_ = 0;

Those functions are from this file --
compressedBuf_ is an unsigned char*. I've thought that the
calculation of maxBytes should be moved to the constructor,
but I have to update/improve the Resize() code first.
We've discussed the Receive function previously. I now have
a SendBuffer class and a SendCompressedBuffer class. This is
the SendCompressedBuffer version of Receive --

void
Receive(void const* data, uint32_t dlen)
{
  unsigned char const* d2 = reinterpret_cast<unsigned char const*>(data);
  while (dlen > bufsize_ - index_) {
    memcpy(buf_ + index_, d2, bufsize_ - index_);
    d2 += bufsize_ - index_;
    dlen -= bufsize_ - index_;
    index_ = bufsize_;
    Flush(); // compress and send the full buffer, resetting index_ to 0
             // (reconstructed; some such call must occur here or the
             // loop would never terminate)
  }
  memcpy(buf_ + index_, d2, dlen);
  index_ += dlen;
}
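To show the chunking behaviour of Receive in isolation, here's a
self-contained toy version (ToyBuffer and its Flush stub are
illustrative names, not the actual classes): when incoming data
exceeds the free space, the buffer is filled, flushed, and the loop
continues with the remainder.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Minimal sketch of the Receive() chunking loop: fill the fixed
// buffer, hand the full buffer to Flush(), repeat with what's left.
class ToyBuffer {
public:
    explicit ToyBuffer(uint32_t bufsize)
        : buf_(bufsize), bufsize_(bufsize), index_(0) {}

    void Receive(void const* data, uint32_t dlen) {
        unsigned char const* d2 =
            static_cast<unsigned char const*>(data);
        while (dlen > bufsize_ - index_) {
            uint32_t room = bufsize_ - index_;
            std::memcpy(&buf_[index_], d2, room);
            d2 += room;
            dlen -= room;
            index_ = bufsize_;
            Flush();                     // empties the buffer
        }
        std::memcpy(&buf_[index_], d2, dlen);
        index_ += dlen;
    }

    std::vector<unsigned char> flushed;  // everything Flush() "sent"

private:
    void Flush() {                       // stand-in for compress+send
        flushed.insert(flushed.end(), buf_.begin(),
                       buf_.begin() + index_);
        index_ = 0;
    }

    std::vector<unsigned char> buf_;
    uint32_t bufsize_;
    uint32_t index_;
};
```

Feeding 10 bytes through a 4-byte buffer produces two full flushes
with 2 bytes left pending, which is the behaviour the real class
relies on.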

> > > So theoretically, the quality of commercial software should
> > > be considerably higher than that of free software.
> > > Practically, when I actually check things out... g++ is one
> > > of the better C++ compilers available, better than Sun CC or
> > > VC++, for example.
> > Maybe now that Sun CC and VC++ are free they'll improve. :)
> I doubt it. Making something free doesn't change your
> development process. (On the other hand, if it increases the
> number of users, and thus your user feedback, it may help. But
> I don't think any quality problems with VC++ can be attributed
> to a lack of users.)

I think it changes the development process. If it doesn't
then they probably haven't thought much about the implications
of making it free. They are in a battle of perception. Many
people have thought that Microsoft is a greedy company that
makes mediocre products. Giving away some software, while
going against their nature, is done, I think, to help improve
their image. They are forced into what 25 years ago would
have been unthinkable. I don't really think it will radically
improve their product either, though. As I've indicated I
don't think they are coming to the decision because they've
had a change of heart. It's more of a necessity being
imposed upon them. However, as I often say -- better late
than never.

Brian Wood
(651) 251-938