From: Bernd Paysan on
Mayan Moudgill wrote:

> Bernd Paysan wrote:
>
> >
> > Sending chunks of code around which are automatically executed by
> > the receiver is called "active messages".
>
> I'm not so sure. The original Active Messages stuff from Thorsten von
> Eicken et al. was more like passing a pointer to a user-space
> interrupt handler along with an inter-processor message, so that the
> message could be handled with zero copies/low latency (OK, it wasn't
> always quite that - but it's close in flavor). The interrupt handler
> code was already resident on the processor.

The original stuff is obviously a specialization. The idea is that the
message "processes itself", right at arrival. Threaded code (i.e. a
sequence of pointers to programs in memory) is certainly a possible
subset, and a single pointer is certainly a subset of threaded code.
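
A minimal C sketch of that idea (my illustration, not taken from von
Eicken's papers): the message carries one or more indices into a table of
handlers that are already resident on the receiver, and the receiver just
dispatches through the table on arrival. Names and layout here are
assumptions for the sake of the example.

/* Hypothetical active-message sketch: handlers are resident on the
 * receiver; the message carries indices into the handler table.  The
 * "threaded code" variant is a sequence of such indices executed in
 * order; a single index is the degenerate case. */
#include <stdio.h>

typedef void (*handler_t)(int payload);

static void deposit(int payload) { printf("deposit %d\n", payload); }
static void notify(int payload)  { printf("notify %d\n", payload); }

/* Table of handlers already resident on the receiving processor. */
static handler_t handlers[] = { deposit, notify };

/* An "active message": a short program (handler indices) plus a payload. */
struct message {
    int ncodes;
    int codes[4];    /* threaded code: indices into handlers[] */
    int payload;
};

/* What the receiver does on arrival: the message "processes itself". */
static void on_arrival(const struct message *m) {
    for (int i = 0; i < m->ncodes; i++)
        handlers[m->codes[i]](m->payload);
}

int main(void) {
    struct message m = { 2, { 0, 1 }, 42 };  /* run deposit, then notify */
    on_arrival(&m);
    return 0;
}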

> I've never heard of pushing code for execution on another processor
> being called "active messages" - citations? references?

What would you call it?

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Stephen Fuld on
Andy "Krazy" Glew wrote:
> Robert Myers wrote:
>> I can't see anything about channels that you can't do with modern PC I/O.
>
> AFAIK there isn't much that IBM mainframe channels could do that modern
> PC I/O controllers cannot do. Even a decade ago I saw SCSI controllers
> that were more sophisticated than IBM channels.

I think you are conflating two things here. One is the mechanism for
connecting peripheral devices to the host and the programmatic mechanism
used to control I/O (i.e. PCI), and the other is the "protocol" used
to control the devices (i.e. SCSI). I absolutely agree that SCSI can do
pretty much everything that is really useful in instructing a disk to do
something. Note that there are even commands defined within the SCSI
spec for things like search, though no vendor that I know of actually
implemented them.

As for channels proper, I think there are two main advantages over the
typical PC I/O control scheme, though PCI/E has alleviated some of the
problems.

The first is scalability. Say you want to connect each of eight
different server boxes to each of a set of hundreds of disks. Prior to
PCI/E, this would be very messy, if not impossible. You would need a
lot of PCI-to-PCI bridges and such. The resulting system would be very
difficult to isolate faults on, hard to performance tune, etc.
Furthermore, there would be issues with how to connect the disks to all
eight servers, which would be required for availability. This is
solvable, but it introduces extra things to worry about. Channels, with
their "network" model of connectivity, make this relatively easy.

The second issue is the difference between memory-mapped I/O and CPU I/O
instructions. There is a performance hit with memory mapping. This was
most apparent prior to PCI/E, where you had PCI bridges, etc., and
sometimes the "memory" was tens of microseconds away. Having the I/O
done by the CPU allows its speed to scale with the CPU speed, which is,
of course, better than scaling with the memory speed.
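
To put rough numbers on that (illustrative assumptions only, not
measurements): if a memory-mapped register access behind a couple of
bridges costs on the order of ten microseconds, while a CPU instruction
costs on the order of a nanosecond, then starting an I/O that needs a
handful of register accesses is dominated entirely by the bridge latency.

/* Back-of-the-envelope comparison, using the "tens of microseconds"
 * figure from above.  All numbers are illustrative assumptions. */
#include <stdio.h>

int main(void) {
    double mmio_access_us = 10.0;   /* assumed cost of one memory-mapped register access */
    double cpu_insn_us    = 0.001;  /* assumed cost of one CPU instruction (~1 ns) */
    int accesses_per_io   = 8;      /* assumed register accesses to start one I/O */

    printf("memory-mapped setup:   %.1f us\n", accesses_per_io * mmio_access_us);
    printf("CPU-instruction setup: %.3f us\n", accesses_per_io * cpu_insn_us);
    return 0;
}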

You may have noticed a pattern here. Had the Intel server guys won, had
NGIO or InfiniBand been implemented in the chipset (i.e. not memory
mapped), with their network model, and had the infrastructure of
adapters, switches, etc. grown up around it, we would be there now.
With the added advantage that the same hardware would be used for
peripheral attachment and for cluster interconnect.

> If anything, the problem is that there are too many different PC I/O
> controllers with similar, but slightly different, capabilities.
>
> Perhaps the biggest thing IBM channels had (actually, still have) going
> for them is that they are reasonably standard. They are sold by IBM,
> not a plethora of I/O device vendors. They can interface to many I/O
> devices. You could write channel programs without too much fear of
> getting locked in to a particular device (although, of course, you were
> largely locked in to IBM).

True of the channel, but not of CKD. With CKD, the application program
had to know the number of bytes in a disk track! This made it difficult
for IBM to move from, say, 3350 to 3380 to 3390 class disks, as each class
had a different number of bytes per track and thus required user
programs to change to take advantage of them.
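
As a toy illustration of why that coupling hurt (my sketch; the
capacities below are placeholders, not the real device geometries): any
program that computed records per track had the track capacity of a
specific device class baked into it, so a new disk class meant
recomputing, and often restructuring, the data layout.

/* Toy sketch of the CKD-era dependency: the application computes how
 * many records fit on a track, so it must know the track capacity of
 * the specific device class.  The capacities are placeholders, not the
 * real 3350/3380/3390 figures (which also involved per-record gap
 * overheads). */
#include <stdio.h>

static int records_per_track(int track_capacity_bytes, int record_bytes) {
    return track_capacity_bytes / record_bytes;   /* ignores inter-record gaps */
}

int main(void) {
    int record_bytes = 4096;
    int capacities[] = { 19000, 47000, 56000 };   /* placeholder "device classes" */
    for (int i = 0; i < 3; i++)
        printf("class %d: %d records/track\n",
               i, records_per_track(capacities[i], record_bytes));
    return 0;
}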

> Plus, of course, IBM channels were fairly well implemented.

Yes.

> From time to time Intel tried to create its own generic channel
> controllers. Even back in the 80s. But, unfortunately, sometimes it
> was a net performance loss to use these devices, particularly for
> latency sensitive applications.

And at least once, lost to inter-division warfare within the company. :-(


--
- Stephen Fuld
(e-mail address disguised to prevent spam)
From: Andrew Reilly on
On Wed, 23 Dec 2009 21:17:07 -0800, Andy "Krazy" Glew wrote:

> And dataflow, no matter how you
> gloss over it, does not really like stateful memory. Either we hide the
> fact that there really is memory back there (Haskell monads, anyone?),
> or there is another level of synchronization relating to when it is okay
> to overwrite a memory location. I vote for the latter.

Why? I've only been fooling around with functional programming for a
year or so, and have not graduated to the point where I think I'm up
for Haskell (I haven't convinced myself that I can let go of explicit
execution order yet). Compared to all of the Turing-school languages
(Fortran's descendants), which are all about modifying state, the
Church school (Lisp's descendants), which is more about the computations
themselves, is quite a liberating (and initially mind-altering) change.

Why prefer adding layers of protocol and mechanism so that you can
coordinate the overwriting of memory locations, instead of just writing
your results to a different memory location (or to none at all, if the
result is immediately consumed by the next computation)?

[I suspect that the answer has something to do with caching and hit-
rates, but clearly there are trade-offs and optimizations that can be
made on both sides of the fence.]
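
For what it's worth, here is a minimal C rendering of the two styles (my
sketch, not anything from the posts above): overwriting a buffer in
place, which forces producer and consumer to agree on when the overwrite
is safe, versus writing each result to a fresh location and leaving the
old value alone.

/* Two styles of handing a result to the next computation.  The in-place
 * version needs some protocol (a flag, a lock, a dataflow token) to say
 * when the old value may be overwritten; the "write elsewhere" version
 * does not. */
#include <stdio.h>
#include <stdlib.h>

/* In-place: caller must know the consumer is finished with buf[] first. */
static void scale_in_place(double *buf, int n, double k) {
    for (int i = 0; i < n; i++)
        buf[i] *= k;            /* old values are gone after this */
}

/* Functional style: results go to a different memory location. */
static double *scale_to_new(const double *in, int n, double k) {
    double *out = malloc(n * sizeof *out);
    for (int i = 0; i < n; i++)
        out[i] = in[i] * k;     /* old values remain readable */
    return out;
}

int main(void) {
    double a[3] = { 1.0, 2.0, 3.0 };
    double *b = scale_to_new(a, 3, 10.0);
    printf("a[0]=%g b[0]=%g\n", a[0], b[0]);   /* both still available */
    scale_in_place(a, 3, 10.0);
    printf("a[0]=%g\n", a[0]);                 /* original overwritten */
    free(b);
    return 0;
}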

Cheers,

--
Andrew
From: Paul Wallich on
Robert Myers wrote:
> On Dec 24, 6:43 pm, Bill Todd <billt...(a)metrocast.net> wrote:
>> Andy "Krazy" Glew wrote:
>>> Robert Myers wrote:
>>>> I can't see anything about channels that you can't do with modern PC I/O.
>>> AFAIK there isn't much that IBM mainframe channels could do that modern
>>> PC I/O controllers cannot do. Even a decade ago I saw SCSI controllers
>>> that were more sophisticated than IBM channels.
>> I may be missing something glaringly obvious here, but my impression is
>> that the main thing that channels can do that PC I/O controllers can't
>> is accept programs that allow them to operate extensively on the data
>> they access. For example, one logical extension of this could be to
>> implement an entire database management system in the channel controller
>> - something which I'm reasonably sure most PC I/O controllers would have
>> difficulty doing (not that I'm necessarily holding this up as a good
>> idea...).
>>
>> PC I/O controllers have gotten very good at the basic drudge work of
>> data access (even RAID), and ancillary DMA engines have added
>> capabilities like scatter/gather - all tasks which used to be done in
>> the host unless you had something like a channel controller to off-load
>> them. But AFAIK channels they ain't.
>>
>
> So, let's see.
>
> I'm still just trying to get at the heart of the matter.
>
> I could (and did) hang a complete 32-bit processor off a PC/XT bus.
> The 32-bit processor (which was a single board computer), depended on
> the host OS for I/O. If I used concurrent DOS, I could do whatever I
> wanted on the host 8-bit computer while the 32-bit mini ground away on
> problems formerly done at great expense on a Cray. Which was the
> channel? Why should I care? The most natural way of looking at it
> (as you describe things) is to think of the "host" PC, which was
> utterly programmable, as a mainframe channel, except that it cost
> hundreds of thousands of dollars less and didn't come with that
> priceless baby-blue logo.

And it didn't come with a promise that any of several tens (hundreds?)
of thousands of machines that a coder encountered with a similar
hardware configuration would work the same way, that it would continue
to work the same way if you got hit by a bus or just came up with a
brilliant new idea about how to do your communication protocol, and it
hadn't been stress-tested and debugged by a staff of zillions...

There are obviously a lot of technical and important-in-context details
about how channel controllers worked that the people who know them well
can talk about, but the simple fact that they were almost universally
present in a standard form, and that a huge community of people used
them also makes a crucial difference. If big parallel systems are going
to be useful, stupid people as well as brilliant ones are going to have
to be able to use them.

paul
From: Robert Myers on
On Dec 25, 8:25 pm, Paul Wallich <p...(a)panix.com> wrote:
> Robert Myers wrote:
> > On Dec 24, 6:43 pm, Bill Todd <billt...(a)metrocast.net> wrote:
> >> Andy "Krazy" Glew wrote:
> >>> Robert Myers wrote:
> >>>> I can't see anything about channels that you can't do with modern PC I/O.
> >>> AFAIK there isn't much that IBM mainframe channels could do that modern
> >>> PC I/O controllers cannot do.  Even a decade ago I saw SCSI controllers
> >>> that were more sophisticated than IBM channels.
> >> I may be missing something glaringly obvious here, but my impression is
> >> that the main thing that channels can do that PC I/O controllers can't
> >> is accept programs that allow them to operate extensively on the data
> >> they access.  For example, one logical extension of this could be to
> >> implement an entire database management system in the channel controller
> >> - something which I'm reasonably sure most PC I/O controllers would have
> >> difficulty doing (not that I'm necessarily holding this up as a good
> >> idea...).
>
> >> PC I/O controllers have gotten very good at the basic drudge work of
> >> data access (even RAID), and ancillary DMA engines have added
> >> capabilities like scatter/gather - all tasks which used to be done in
> >> the host unless you had something like a channel controller to off-load
> >> them.  But AFAIK channels they ain't.
>
> > So, let's see.
>
> > I'm still just trying to get at the heart of the matter.
>
> > I could (and did) hang a complete 32-bit processor off a PC/XT bus.
> > The 32-bit processor (which was a single board computer), depended on
> > the host OS for I/O.  If I used concurrent DOS, I could do whatever I
> > wanted on the host 8-bit computer while the 32-bit mini ground away on
> > problems formerly done at great expense on a Cray.  Which was the
> > channel?  Why should I care?  The most natural way of looking at it
> > (as you describe things) is to think of the "host" PC, which was
> > utterly programmable, as a mainframe channel, except that it cost
> > hundreds of thousands of dollars less and didn't come with that
> > priceless baby-blue logo.
>
> And it didn't come with a promise that any of several tens (hundreds?)
> of thousands of machines that a coder encountered with a similar
> hardware configuration would work the same way, that it would continue
> to work the same way if you got hit by a bus or just came up with a
> brilliant new idea about how to do your communication protocol, and it
> hadn't been stress-tested and debugged by a staff of zillions...
>
> There are obviously a lot of technical and important-in-context details
> about how channel controllers worked that the people who know them well
> can talk about, but the simple fact that they were almost universally
> present in a standard form, and that a huge community of people used
> them also makes a crucial difference. If big parallel systems are going
> to be useful, stupid people as well as brilliant ones are going to have
> to be able to use them.
>
I think your answer is pretty close to the fire, and it offers some
insight into the important ways in which things have changed.

One of the more frustrating things about the world of computers today
is that you don't have to be especially bright to do cutting-edge
stuff. With the Internet and Google, you can zero in on whatever
details you need, and it doesn't matter that your grasp of pretty much
everything is really shallow.

IBM used to own the only people in the world who knew certain things,
and, if you wanted access to that expertise, you had to deal with IBM,
and you had to be using IBM hardware. If you were using IBM hardware,
it didn't matter how rococo or obtuse the hardware or the software,
solving any problem whatsoever was only a phone call away. If you
weren't using IBM hardware, you might have to be pretty bright to
muddle through.

That monopoly has been broken by a series of events, all of which have
conspired to put the latest and greatest in hardware and software into
an amazing number of hands. *No one* owns the expertise the way IBM
once did. It hardly matters how obscure your problem is. Someone,
somewhere, has the same setup and maybe even the same problem, and you
no longer have to go through IBM to find them. Now, almost everyone
wants to give away some version of their software so that there will
be a free support network for it out there. Thomas J. Watson must be
rolling in his grave.

Bottom line: if you're big enough, you can standardize something, make
sure it winds up everywhere, make sure key details are proprietary,
make sure that whatever it is has a name that is on enough people's
lips, and you can keep your gross margin high. That's the real
explanation for mainframes and mainframe channels, as far as I'm
concerned. I haven't seen a compelling competing explanation.

Wintel learned to play the same game, only they understood the price/
volume tradeoff much better than IBM ever did or ever will.

That's not to say, Del, and others, that IBM didn't do some amazing
things and doesn't continue to do some amazing things and doesn't
deserve due credit. Nor does it say that mainframes and mainframe
channels don't bring enough value added in certain circumstances to
make them worth the price tag. In the end, though, there is no
particularly unique magic, certainly not in today's world, where you
might well get fired for buying IBM instead of something from Rackable
Systems or Dell.

Robert.