From: Robert Myers on
On Dec 23, 12:21 pm, Terje Mathisen <"terje.mathisen at tmsw.no">
wrote:
> Bernd Paysan wrote:
> > Sending chunks of code around which are automatically executed by the
> > receiver is called "active messages".  I not only like the idea, a
> > friend of mine has done that successfully for decades (the messages in
> > question were Forth source - it was a quite high level of active
> > messages).  Doing that in the memory controller looks like a good idea
> > for me, too, at least for that kind of code a memory controller can
> > handle.  The good thing about this is that you can collect all your
> > "orders", and send them in one go - this removes a lot of latency,
> > especially if your commands can include something like compare&swap or
> > even a complete "insert into list/hash table" (that, unlike
> > compare&swap, won't fail).
>
> Why do I feel that this feels a lot like IBM mainframe channel programs?
> :-)

Could I persuade you to take time away from your first love
(programming your own computers, of course) to elaborate/pontificate a
bit? After forty years, I'm still waiting for someone to tell me
something interesting about mainframes. Well, other than that IBM bet
big and won big on them.

And CHANNELS. Well. That's clearly like the number 42.

Robert.
From: Bernd Paysan on
Terje Mathisen <"terje.mathisen at tmsw.no"> wrote:
> Why do I feel that this feels a lot like IBM mainframe channel
> programs?
> :-)

But there's a fundamental difference: A channel program is executed on
your side. An active message is executed on the other side (when you
use a channel-based memory system, you'll send a message to your
communication channel to send the message over to the other computer).

> (Security is of course implicit here: If you _can_ send the message,
> you're obviously safe, right?)

It all depends. You can implement something similar to the Internet
using active messages, and there, of course, every message would be
potentially hostile. Solution: keep the message instruction set
simple, and have a rigid framework protecting you against malicious
messages.
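
A minimal sketch of what such a restricted instruction set might look
like on the receiving side - the opcodes, framing, bounds, and names
here are invented for illustration, not taken from any real system:

#include <stdint.h>
#include <stdatomic.h>

/* The receiver only ever executes these few operations, and only inside
   one bounded window of memory, so a hostile message can do no more than
   the framework allows. */
enum am_op { AM_WRITE64, AM_CAS64, AM_LIST_PUSH };

struct am_msg {
    uint8_t  op;     /* one of am_op                                  */
    uint64_t addr;   /* slot index inside the exported window         */
    uint64_t arg0;   /* value to write / expected value / node index  */
    uint64_t arg1;   /* new value for CAS                             */
};

#define AM_SLOTS 1024
static _Atomic uint64_t window[AM_SLOTS];  /* only memory a message can touch */

static int am_execute(const struct am_msg *m)
{
    if (m->addr >= AM_SLOTS)
        return -1;                          /* rigid framework: bounds check */
    _Atomic uint64_t *p = &window[m->addr];

    switch (m->op) {
    case AM_WRITE64:
        atomic_store(p, m->arg0);
        return 0;
    case AM_CAS64: {                        /* compare&swap: may fail */
        uint64_t expected = m->arg0;
        return atomic_compare_exchange_strong(p, &expected, m->arg1) ? 0 : 1;
    }
    case AM_LIST_PUSH: {                    /* "insert into list": never fails */
        if (m->arg0 >= AM_SLOTS)
            return -1;
        uint64_t head = atomic_load(p);
        do {
            atomic_store(&window[m->arg0], head);   /* node's next pointer */
        } while (!atomic_compare_exchange_weak(p, &head, m->arg0));
        return 0;
    }
    default:
        return -1;                          /* unknown opcode: drop the message */
    }
}

/* "Collect all your orders and send them in one go": the sender batches
   messages, the receiver walks the batch, and per-command latency goes away. */
static void am_execute_batch(const struct am_msg *msgs, int n)
{
    for (int i = 0; i < n; i++)
        am_execute(&msgs[i]);
}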

As said before, the successful active message system my friend has made
is based on Forth source code - this is probably the worst choice for
security, but also the most powerful and robust one. Calling it "Skynet"
would not be far off - it is extremely robust, since each node can
download all the code it needs from a repository or even from other
nodes. I would not use such a scheme to implement a secure network with
public access.

> Terje
> PS. This is my very first post from my personal leafnode installation:
> I have free news access via my home (fiber) ISP, but not here in
> Rauland on Christmas/New Year vacation, so today I finally broke down
> and installed leafnode on my home FreeBSD gps-based ntp server. :-)

I have been using leafnode locally for a decade or so now; it does a good
job of message prefetching, and it can also be used to hide details like
where my actual news feed comes from.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Robert Myers on
On Dec 23, 9:05 am, Bernd Paysan <bernd.pay...(a)gmx.de> wrote:

>
> It has been tried and it works - you can find a number of papers about
> active message passing from various universities.  However, it seems to
> be that most people try to implement some standard protocols like MPI on
> top of it, so the benefits might be smaller than expected.  And as Andy
> already observed: Most people seem to be more comfortable with
> sequential programming.  Using such an active message system makes the
> parallel programming quite explicit - you model a data flow graph, you
> create packets with code and data, and so on.

Maybe I can add, even if it's already in the literature, that such a
computing model makes the non-uniform address space problem disappear
as well. One process pushes its packets to another. It can even
happen by DMA, just so long as there is a way to refer uniformly to
the input buffers of the receiving locations. Then, both the sending
and the receiving process can address memory (which is entirely
private, except for the input buffer) in whatever idiosyncratic ways
they care to.
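
A small sketch of that model, assuming nothing more than a per-receiver
input ring that both sides can name uniformly (the names and sizes are
made up for illustration):

#include <stdint.h>
#include <string.h>
#include <stdatomic.h>

#define RING_SLOTS 256
#define PKT_BYTES  64

/* The one thing that is shared and uniformly addressable: the receiver's
   input buffer.  Everything else stays private to each process. */
struct input_ring {
    _Atomic uint32_t head;               /* advanced by the receiver */
    _Atomic uint32_t tail;               /* advanced by the sender   */
    uint8_t slot[RING_SLOTS][PKT_BYTES];
};

/* Sender side: push a packet into the receiver's ring.  A DMA engine could
   just as well do the copy; the sender never sees any other part of the
   receiver's memory. */
static int push_packet(struct input_ring *r, const void *pkt, size_t len)
{
    uint32_t tail = atomic_load(&r->tail);
    uint32_t head = atomic_load(&r->head);
    if (len > PKT_BYTES || tail - head == RING_SLOTS)
        return -1;                       /* full or oversized */
    memcpy(r->slot[tail % RING_SLOTS], pkt, len);
    atomic_store(&r->tail, tail + 1);    /* publish to the receiver */
    return 0;
}

/* Receiver side: drain the input buffer into entirely private memory,
   addressed in whatever idiosyncratic way the receiver cares to. */
static int pop_packet(struct input_ring *r, void *out)
{
    uint32_t head = atomic_load(&r->head);
    if (head == atomic_load(&r->tail))
        return -1;                       /* empty */
    memcpy(out, r->slot[head % RING_SLOTS], PKT_BYTES);
    atomic_store(&r->head, head + 1);
    return 0;
}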

Forgive me for this, please. I was prepared with a post like: look,
you can't trust people to do it (non-trivially concurrent programming)
in a flat, uniform SMP address space, so how in the name of heaven will
anyone do it correctly with heterogeneous everything? I think I just
answered my own question.

Robert.
From: Anne & Lynn Wheeler on

Terje Mathisen <"terje.mathisen at tmsw.no"> writes:
> Why do I feel that this feels a lot like IBM mainframe channel programs?
> :-)

downside was that mainframe channel programs were half-duplex end-to-end
serialization. there were all sorts of heat & churn in fiber-channel
standardization with the efforts to overlay mainframe channel program
(half-duplex, end-to-end serialization) paradigm on underlying
full-duplex asynchronous operation.

from the days of scarce, very expensive electronic storage
.... especially disk channel programs ... used "self-modifying" operation
.... i.e. a read operation would fetch the argument used by the following
channel command (both specifying the same real address). a couple of round
trips of this end-to-end serialization could happen over 400' of channel
cable within a small part of a disk rotation.
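
a toy model of that dependency (not the real channel-command format -
the commands and names are invented for illustration): the read deposits
data at a real address, and the next command fetches its argument from
that same address, so the channel cannot run ahead of the device.

#include <stdint.h>
#include <stdio.h>

struct cmd {
    enum { CMD_READ, CMD_SEEK, CMD_STOP } op;
    uint32_t addr;                /* real memory address used by the command */
};

static uint8_t memory[64];        /* stand-in for mainframe real storage */

static uint8_t device_read(void)  /* stand-in for the disk end of the cable */
{
    static uint8_t next = 7;
    return next++;
}

static void run_chain(const struct cmd *chain)
{
    for (int i = 0; chain[i].op != CMD_STOP; i++) {
        switch (chain[i].op) {
        case CMD_READ:
            /* round trip 1: data comes back from the device into memory */
            memory[chain[i].addr] = device_read();
            break;
        case CMD_SEEK:
            /* round trip 2: the argument was deposited by the previous READ
               at the same address, so this command cannot be issued until
               that data has made it all the way back up the cable */
            printf("seek to %d\n", memory[chain[i].addr]);
            break;
        default:
            break;
        }
    }
}

int main(void)
{
    /* READ into address 0, then SEEK using the argument now at address 0 */
    struct cmd chain[] = { { CMD_READ, 0 }, { CMD_SEEK, 0 }, { CMD_STOP, 0 } };
    run_chain(chain);
    return 0;
}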

trying to get a HYPERChannel "remote device adapter" (simulated
mainframe channel) working at extended distances with disk controller &
drives ... took a lot of sleight of hand. a copy of the
completed mainframe channel program was created and downloaded into the
memory of the remote device adapter .... to minimize the
command-to-command latency. the problem was that some of the disk
command arguments had very tight latencies ... and so those arguments
had to be recognized and also downloaded into the remote device adapter
memory (and the related commands redone to fetch/store to the local
adapter memory rather than the remote mainframe memory). this process
was never extended to be able to handle the "self-modifying" sequences.

on the other hand ... there was a serial-copper disk project that
effectively packetized SCSI commands ... sent them down outgoing
link ... and allowed asynchronous return on the incoming link
.... eliminating loads of the scsi latency. we tried to get this morphed
into interoperating with fiber-channel standard ... but it morphed into
SSA instead.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Robert Myers on
On Dec 23, 7:57 pm, Anne & Lynn Wheeler <l...(a)garlic.com> wrote:
> Terje Mathisen <"terje.mathisen at tmsw.no"> writes:
>
> > Why do I feel that this feels a lot like IBM mainframe channel programs?
> > :-)
>
> downside was that mainframe channel programs were half-duplex end-to-end
> serialization. there were all sorts of heat & churn in fiber-channel
> standardization with the efforts to overlay mainframe channel program
> (half-duplex, end-to-end serialization) paradigm on underlying
> full-duplex asynchronous operation.
>
> from the days of scarce, very expensive electronic storage
> ... especially disk channel programs ... used "self-modifying" operation
> ... i.e. read operation would fetch the argument used by the following
> channel command (both specifying the same real address).  couple round
> trips of this end-to-end serialization potentially happening over 400'
> channel cable within small part of disk rotation.
>
> trying to get a HYPERChannel "remote device adapter" (simulated
> mainframe channel) working at extended distances with disk controller &
> drives ... took a lot of sleight of hand. a copy of the
> completed mainframe channel program was created and downloaded into the
> memory of the remote device adapter .... to minimize the
> command-to-command latency. the problem was that some of the disk
> command arguments had very tight latencies ... and so those arguments
> had to be recognized and also downloaded into the remote device adapter
> memory (and the related commands redone to fetch/store to the local
> adapter memory rather than the remote mainframe memory). this process
> was never extended to be able to handle the "self-modifying" sequences.
>
> on the other hand ... there was a serial-copper disk project that
> effectively packetized SCSI commands ... sent them down outgoing
> link ... and allowed asynchronous return on the incoming link
> ... eliminating loads of the scsi latency. we tried to get this morphed
> into interoperating with fiber-channel standard ... but it morphed into
> SSA instead.
>
> --
> 40+yrs virtualization experience (since Jan68), online at home since Mar1970

What bothers me is the "it's already been thought of" response.

You worked with a different (and harsh) set of constraints.

The constraints are different now. Lots of resources are free that once
were expensive. I don't want just a walk down memory lane. The world
is going to change, believe me. Is anyone here interested in seeing
how?

What can we learn from the hard lessons you learned? That's a good
question. What's different now? That's a good question, too.
"Everything is the same except the time scale" - that answer requires a
detailed defense, and I think it's wrong. Sorry, Terje.

Robert.