From: Robert Myers on
On Oct 20, 3:49 pm, n...(a)cam.ac.uk wrote:
> In article <60e27937-1063-4241-9b96-88af251ba...(a)p23g2000vbl.googlegroups..com>,
> Robert Myers  <rbmyers...(a)gmail.com> wrote:
>

> >> >All that work that was done in the first six days of the history of
> >> >computing was aimed at doing the same thing that human "computers"
> >> >were doing: calculating the trajectories of artillery shells.  Leave
> >> >the computer alone, and it can still manage that sort of very
> >> >predictable calculation tolerably well.
>
> >> Sorry, but that is total nonsense.  I was there, from the late 1960s
> >> onwards.
>
> >Where do you think I was, Nick?  Do you know?  I don't want to make
> >this personal, but when you use comp.arch as a forum for your personal
> >mythology, I don't know how to avoid it.  I stand by my
> >characterization.
>
> If you check up on it, the first interactive 'real-time' games date
> from the 1950s, and they were widespread by the mid-1960s.  Again,
> before I started.  If you will stop deprecating the work of the first
> generation, I will stop correcting you.
>
I don't think I've ever deprecated the work of any generation. I've
responded in every way I can to correct the false and dangerous idea
that everything that can be thought of has been thought of, and over
half a century ago at that. What's on display is not information,
but something bizarre about your psychology that needs to put you
and what you happen to know at the center of the cosmos. If I wanted
to discourage progress in a field, I couldn't find a better way to do
it.

I pondered whether I should respond to your post at all. These
exchanges make both of us look like fools. I've always been kind of
an outsider, but you can be certain that I'm no fool, and I'm not
intimidated by your bluster any more than I am impressed by the name-
dropping of another poster.

As to your encyclopedic knowledge of everything, you have your facts
about interactive computer games wrong--at least as they are commonly
understood. I was ushered into a darkened room and played Space War
implemented on a PDP-1 with a round CRT in Cambridge, just not
Cambridge, England, and not all that many years after it was first
developed. The discrepancy in our "knowledge" may be that Space War
could be played by multiple players, even if others may have fiddled
(and I'm sure did) with man against machine before 1961. As far as I
know, there was no off-the-shelf hardware that would have supported a
game like Space War before the PDP-1.

> >For one thing, the computers that were available, even as late as the
> >late sixties, were pathetic in terms of what they could actually do.
>
> As the Wheelers frequently point out, many people did then with those
> 'pathetic' computers what people are still struggling to do.  At the
> start of this thread, I pointed out that the initial inventions were
> often no more than proof of concept, but their existence is enough
> to show that things have NOT changed out of all recognition.
>
People in the sixties had many wild and dangerous ideas that have been
proven to be dead ends, at best. Many of the ideas about computers
are in a category with Wernher von Braun's rotating donuts, which would
have been unstable. The self-confidence of the period corresponded to
very little in the way of reality, and space and computers are still
the showcase examples of delusional thinking.

In contrast to your posts, the Wheelers' posts are packed with facts
that jibe pretty well with what I personally remember.

> >> >Even though IBM and its camp-followers had to learn early how to cope
> >> >with asynchronous events ("transactions"), they generally did so by
> >> >putting much of the burden on the user: if you didn't talk to the
> >> >computer in just exactly the right way at just exactly the right time,
> >> >you were ignored.
>
> >> Ditto.
>
> >Nick.  I *know* when time-sharing systems were developed.  I wasn't
> >involved, but I was *there*, and I know plenty of people who were.
>
> What on earth are you on about?  Let's ignore the detail that Cambridge
> was one of the leading sites in the world in that respect.  You claim
> that the 'IBM' designs involved transactions being ignored - nothing
> could be further from the truth.  That is a design feature of the
> X Windowing System (and perhaps Xerox PARC before it), and was NOT
> a feature of the mainframe designs.
>
Yes, Nick. If bombs weren't being dropped on England, the building I
worked in might never have had the history it did in radar. Bombs
were being dropped all over Europe, and the center of gravity of
technology moved away from Europe, possibly forever. Get over it.

As to transactions being ignored, I don't know where that phrasing
comes from, because it didn't come from me. To initiate *anything* on
any computer at any time, you have to "get in." It was true then, and
it's true now. If you managed to get your transaction into an IBM
system, I'm sure it was no more likely to drop it than any other
system.

If you're close to a deadline, and everyone is trying to "get in,"
then a mistyped character or a syntax error could cost you your place
in line. It still can, except that computers are faster and more
tolerant of human error. Trying to deal with the problem of "getting
in" without making it seem as if the computer were at times out to get
you is still, so far as I know, an unworked problem, and probably
unworkable with any approach now in existence.

Robert.
From: dmackay on
On Oct 14, 11:44 pm, Jean <alertj...(a)rediffmail.com> wrote:
> In the last couple of decades, the exponential increase in computer
> performance was because of advancements in both computer
> architecture and fabrication technology.
> What will be the case in the future? Can I say that the next major
> leap in computer performance will come not from breakthroughs in
> computer architecture but rather from new underlying technology?

That certainly isn't the direction we've been heading. As process
tech has gone through the 90->65->45->32nm nodes, it's happily provided
us with a lot of additional logic to play with, but it hasn't provided
the same massive increase in circuit speed that everyone had grown
accustomed to. (Sure, it's slightly faster in most cases, and there
are other constraints such as power keeping the speed down, but no
matter how you slice it, you're getting a smaller proportional speed
bonus from simply shrinking a design.)

IMO (and it's just my opinion, there are some people in this thread
who are in a better position to speak about this than I am and they're
welcome to set me straight) a lot of the recent gains have been from
finding smarter ways to use the logic we already have and intelligent
ways to use the ridiculous amount of additional logic we're given at
each node. Moving an ever-increasing portion of the system onto the
same die. Moving away from the ancient FSB. Improving the memory
hierarchy (there's a long way to go here). Each of these falls within
the realm of comp arch and has provided a fairly significant
performance boost.

You need people who spend their time trying to find new uses for the
additional logic at our fingertips. Without that, all you gain with
new process nodes is the ability to pack more of the same chip on a
wafer and/or stuff more cache on each chip. What if your competitor
(there are still at least a few in the CPU world) has architects
plugging away at the endless list of remaining problems and they
happen to find a very good use for all the additional logic? Well, if
that happens, you're screwed.
From: ChrisQ on
Robert Myers wrote:

>>
> Yes, Nick. If bombs weren't being dropped on England, the building I
> worked in might never have had the history it did in radar. Bombs
> were being dropped all over Europe, and the center of gravity of
> technology moved away from Europe, possibly forever. Get over it.
>

'Scuse me butting in here, but I think I know where that is, and I have
a large number of the 30-volume set that was published shortly after
WWII. For anyone interested in the history of electronics, it's worth
looking up. The scale of the work is quite amazing, and it's still used
for reference even now...

Seriously good stuff...

Regards,

Chris

From: EricP on
Robert Myers wrote:
> People in the sixties had many wild and dangerous ideas that have been
> proven to be dead ends, at best. Many of the ideas about computers
> are in a category with Wernher von Braun's rotating donuts, which would
> have been unstable.

The rotating donut space stations are unstable?
Why?

Eric

From: Anne & Lynn Wheeler on

Robert Myers <rbmyersusa(a)gmail.com> writes:
> Even though IBM and its camp-followers had to learn early how to cope
> with asynchronous events ("transactions"), they generally did so by
> putting much of the burden on the user: if you didn't talk to the
> computer in just exactly the right way at just exactly the right time,
> you were ignored.

some from the CTSS group went to 5th flr and multics ... and some went
to the 4th flr and the science center. in 1965, science center did
(virtual machine) cp40 on 360/40 that had hardware modifications to
support virtual memory. cp40 morphed into cp67, when the science center
got 360/67 that came standard with hardware virtual memory support.

last week of jan68, three people from science center came out and
installed cp67 at univ. where i was undergraduate. over the next several
months i rewrote significant portions of the kernel to radically speed
things up. part of presentation that i made at aug68 SHARE user group
meeting ... about both speedups done for os/360 (regardless of
whether or not running in virtual machine or on real hardware) as well
as rewrites of major sections of cp67.
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

cp67 came standard with 2741 and 1052 terminal support (and did
automatic terminal type recognition). univ. also had ascii/tty machines
(33s & 35s) and i got to add tty support. I tried to do this in a
consistent way that did automatic terminal type recognition (including
being able to have a single rotary dial-in number for all terminals).
Turns out that the standard ibm terminal controller had a short-cut and
couldn't quite do everything that i wanted to do. Somewhat as a result,
univ was motivated to do a clone controller project ... where the
channel interface was reverse engineered and a channel interface board
was built for an interdata/3 minicomputer ... and the interdata/3 was
programmed to emulate the mainframe terminal controller (along with
being able to do automatic baud rate detection)

my automatic terminal recognition would work with the standard
controller for leased lines ... the standard controller could switch
the type of line scanner under program control ... but had the baud
rate oscillator hardwired to each port interface. This wouldn't work if
i wanted to have a common pool of ports (next available selected from
common dialin number) for terminals that operated at different baud
rates.

os/360 tended to have an operating system centric view of the world
... with initiation of things at the operating system ... and people
responding at the terminal. cp67 was just the opposite ... it had an
end-user centric view ... with the user at the terminal initiating
things and the operating system reacting. one of the things i really
worked on was being able to do pre-emptive dispatching and page fault
handling in a couple hundred instructions (i.e. take page fault, select
replacement page, initiate page read, switch to different process,
handle interrupt, and switch back to previous process all in a couple
hundred instructions aggregate).
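
a rough sketch of that path in c (purely illustrative, with
hypothetical names ... nothing here is actual cp67 source, which was
360 assembler):

struct frame   { int num; };
struct process { int pid; int waiting_on_io; };

/* stubs standing in for the real paging and dispatching machinery */
static struct frame   *select_replacement_frame(void) { static struct frame f; return &f; }
static void            start_page_read(struct frame *f, unsigned long vaddr) { (void)f; (void)vaddr; }
static struct process *next_runnable_process(void) { static struct process p; return &p; }
static void            run(struct process *p) { (void)p; }

/* take the fault, pick a replacement frame, start the asynchronous
   read, and dispatch another process while the i/o runs; the
   completion interrupt later makes the faulting process runnable */
void page_fault_handler(struct process *p, unsigned long vaddr)
{
    struct frame *f = select_replacement_frame();
    start_page_read(f, vaddr);
    p->waiting_on_io = 1;
    run(next_runnable_process());
}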

the univ. library had also gotten an ONR grant to do a computerized
catalogue and then was also selected to be a betatest site for the
original CICS product release (cics still one of the major transaction
processing systems). i got tasked to support (and debug) this betatest.

a little later, one of the things that came out of cp67 was charlie
inventing the compare&swap instruction when he was doing work on
fine-grain locking in cp67 (compare&swap was selected because CAS is
charlie's initials). initial foray into POK trying to get it included
in 370 architecture was rebuffed ... favorite son operating system
claiming that test&set from 360 SMP was more than sufficient. challenge
to science center was to come up with use of compare&swap that wasn't
smp specific ... thus was born all the stuff for multithreaded
implementation (independent of operation on single processor or
multiple processor machine) ... which started to see big uptake in
transaction processing and DBMS applications ... even starting to
appear on other hardware platforms.
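
to make the non-smp-specific use concrete ... a minimal sketch in
modern c11 atomics (an illustration, not the original 370 code): a
shared counter updated with a compare&swap retry loop, correct for
multithreaded code whether the threads share one processor or run on
several.

#include <stdatomic.h>

static _Atomic long counter = 0;

/* classic compare&swap retry loop: read the old value, try to install
   old+delta, and retry if some other thread got in between */
void add_to_counter(long delta)
{
    long old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
        ;   /* on failure, 'old' is refreshed with the current value */
}

the same retry-loop pattern (rather than holding a lock across the
update) is what made it usable from transaction processing and DBMS
code on uniprocessors as well as smp machines.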

minor digression about a get-together last year celebrating jim gray
... which also references how he tried to palm off some amount of his
stuff on me when he departed for tandem
http://www.garlic.com/~lynn/2008p.html#27 Father of Financial Dataprocessing
some old email from that period
http://www.garlic.com/~lynn/2007.html#email801006
http://www.garlic.com/~lynn/2007.html#email801016

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970