From: MitchAlsup on
On May 26, 4:08 pm, n...(a)cam.ac.uk wrote:
>     2) The context of this wasn't interrupts versus something else,
> but funnelling ALL such actions through a single mechanism that is
> unsatisfactory for almost all of them.  For example, there is
> absolutely NO reason why a floating-point fixup need execute a
> FLIH in God mode, only to be restored to the mode of the process
> that was interrupted.  The Ferranti Atlas/Titan and ICL 1900
> didn't do it.

You forgot the CDC 6600, which did no interrupt processing whatsoever.

The PPs (peripheral processors) performed the I/O (polling) and then
scheduled the subsequent work for the CPU(s). If the CPU was to be
rescheduled, it was directed away from the task at hand, immediately
to the subsequent task: a context switch from user mode to user mode
in a single instruction!

In a modern context--one could perform all the I/O of a typical PC
in the southbridge chip with an instruction-set-compatible CPU,
and only interrupt the main CPU(s) after performing the I/O and
updating all the queues. Here, the interrupt would stop one CPU,
direct it to the run queue, where it would pick up a new, higher
priority unit of work, and context switch thereto.

Such a CPU would still be minuscule compared to the size of
these modern Southbridge chips.
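
Roughly, the shape of that split, sketched in C. Every name here is
invented for illustration, and the cross-processor locking on the run
queue is elided; it's the structure I'm after, not a real driver:

#include <stdbool.h>

typedef struct task {
    int priority;                   /* higher value = more urgent */
    struct task *next;
} task_t;

static task_t *run_queue;           /* ordered, highest priority first */

/* Assumed hardware hooks; a real poller would read device registers. */
extern bool device_ready(int dev);
extern task_t *complete_io_and_wake(int dev); /* finish I/O, return waiter */
extern void notify_main_cpu(void);  /* one doorbell, not per-event IRQs */

static void enqueue_by_priority(task_t *t)
{
    task_t **p = &run_queue;
    while (*p && (*p)->priority >= t->priority)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
}

/* The I/O processor's whole life: poll, finish the I/O, queue the
 * work.  The main CPU is only poked once useful work exists. */
void io_processor_loop(int ndev)
{
    for (;;)
        for (int dev = 0; dev < ndev; dev++)
            if (device_ready(dev)) {
                enqueue_by_priority(complete_io_and_wake(dev));
                notify_main_cpu();
            }
}

/* Main-CPU side of the doorbell: go straight to the highest-priority
 * unit of work -- one dispatch, no FLIH layering. */
task_t *pick_next_task(void)
{
    task_t *t = run_queue;
    if (t)
        run_queue = t->next;
    return t;
}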

Mitch
From: Tim McCaffrey on
In article <htjqa4$n0h$5(a)usenet01.boi.hp.com>, rick.jones2(a)hp.com says...
>
>Tim McCaffrey <timcaffrey(a)aol.com> wrote:
>> There is a bit of future shock with modern embedded systems. I've
>> worked on an I/O board that offloaded Disk & Network I/O, handled
>> Gigabit speeds on the network (with a complete TCP/IP stack) & 4G
>> Fiber channel, and it didn't (really) use interrupts. All with a
>> single 800 MHz MIPS processor.
>
>Ah, but with what size TCP segments or UDP datagrams?-) It is one
>thing to handle Gigabit speeds with CKO and TSO (and perhaps LRO) and
>large sends, but getting to link-rate with small segments is another
>matter entirely.
>

Because we wrote all the software on the card, we didn't have to force-fit
what the hardware did into the model the OS wanted. The neat thing was that
the busier the card got, the more efficient it was (basically from queueing
effects). It could handle 80K frames a second, worst case (when the I/O
requests didn't take advantage of multiple-frame sends). So, I guess that
works out to ~140 bytes of data per frame? (in one direction)

- Tim


From: Rick Jones on
Tim McCaffrey <timcaffrey(a)aol.com> wrote:

> Because we wrote all the software on the card, we didn't have to
> force-fit what the hardware did into the model the OS wanted. The
> neat thing was that the busier the card got, the more efficient it
> was (basically from queueing effects). It could handle 80K frames a
> second, worst case (when the I/O requests didn't take advantage of
> multiple-frame sends). So, I guess that works out to ~140 bytes of
> data per frame? (in one direction)

I think that is incorrect - not that I'm immune to math mistakes :) -
but I think 80K 140-byte frames per second would be 11,200,000 bytes
per second, or 89.6 Mbit/s. So, for ~GbE speed it would need to be
either 800K frames per second or 1400 bytes per frame.
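
For anyone who wants to check the arithmetic, a trivial C program using
just the numbers from this subthread:

#include <stdio.h>

int main(void)
{
    double fps   = 80e3;    /* frames per second claimed */
    double bytes = 140.0;   /* payload bytes per frame   */

    printf("80K  x 140B  frames = %5.1f Mbit/s\n",
           fps * bytes * 8.0 / 1e6);              /* 89.6 */

    /* What it takes to approach GbE instead: */
    printf("80K  x 1400B frames = %5.1f Mbit/s\n",
           fps * 1400.0 * 8.0 / 1e6);             /* 896.0 */
    printf("800K x 140B  frames = %5.1f Mbit/s\n",
           800e3 * bytes * 8.0 / 1e6);            /* 896.0 */
    return 0;
}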

rick jones
--
a wide gulf separates "what if" from "if only"
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
From: FredK on

<nmm1(a)cam.ac.uk> wrote in message
news:htk2ko$s8t$1(a)smaug.linux.pwf.cam.ac.uk...
> In article <htjt4h$pi0$1(a)usenet01.boi.hp.com>,
> FredK <fred.nospam(a)dec.com> wrote:
>>
>>>>For the life of me, I can't understand the logic of computer systems
>>>>that shovel all tasks into one hopper, even if it means constantly
>>>>interrupting tasks that might well have interrupted another task. I
>>>>suspect the influence of some legacy (PC?) mentality, but I'm sure
>>>>there is someone here who can set me straight.
>>>
>>> I can correct your query. The 'mentality' is older than personal
>>> computers, goes back as long as I can recall, and has NEVER made
>>> any sense! I think that it's a relic of the days when the hardware
>>> designers were subdeities, and the software people were expected to
>>> be thankful for whatever they were given. But the origin was before
>>> my time.
>>
>>You are on a PDP11 and you want to have IO. Propose the alternative to
>>interrupts that provides low latency servicing of the device. Today you
>>can create elaborate IO cards with offload engines, but ultimately you
>>need to talk to the actual computer system, which is otherwise engaged
>>in general computing.
>
> Sigh. You really haven't been following this group. There are two
> issues there:
>

The OP's question was about device interrupts.

> 1) There are plenty of well-tried alternatives to interrupts,
> though if you start with a PDP 10, I will agree that they are a
> plausible approach.
>

Sure, and ultimately you can offload much IO processing to intelligent
devices or (for alt.comp.folklore...) channel controllers. Ultimately these
still need to interrupt a CPU; they just reduce the number of interrupts.
Though I readily admit that while I am old enough to have learned
programming on an IBM 360, and know how to run a card sorter - I never wrote
a device driver for a mainframe, so perhaps there was "magic" that occurred
to "notify" the CPU of things that the channel controller needed serviced.

> 2) The context of this wasn't interrupts versus something else,
> but funnelling ALL such actions through a single mechanism that is
> unsatisfactory for almost all of them. For example, there is
> absolutely NO reason why a floating-point fixup need execute a
> FLIH in God mode, only to be restored to the mode of the process
> that was interrupted. The Ferranti Atlas/Titan and ICL 1900
> didn't do it.
>

I wasn't aware you had sidetracked into various other "faults", though I
agree that the specific example you give (FP exceptions) might have more
efficient implementations.



From: Andrew Reilly on
On Wed, 26 May 2010 14:30:06 -0700, MitchAlsup wrote:

> In a modern context--one could perform all the I/O of a typical PC in
> the southbridge chip with an instruction set compatible CPU, and only
> interrupt the main CPU(s) after performing the I/O and updating al the
> queues. Here, the interrupt would stop one CPU, direct it to the run
> queue, where it would pick up a new higher priority unit of work, and
> context switch thereto.
>
> Such a CPU would still be miniscule compared to the size of these modern
> Southbridge chips.

At Uni I actually worked on a computer that nearly fitted that
description: a Sony NEWS 3860 workstation. I suspect that it was the
evolution of an earlier 680x0 machine; rather than replacing the
680x0, the designers kept it, relegated it to running device drivers,
and bolted a new (at the time) MIPS R3000 plus memory system on top. It
seemed that the MIPS processor ran the top half of the BSD 4.3 OS, while
the 680x0 ran the device drivers and talked to the main system through a
shared-memory buffer. For its time it was *really* quick: significantly
better throughput for our applications than many of the broadly-as-
expensive single-processor or symmetric-multiprocessor machines of the time.
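
The shared-memory handoff is the interesting bit. Here is a minimal
sketch of that kind of single-producer/single-consumer ring, written
with C11 atomics for clarity (the real machine predated C11, and the
struct layout and names are my guesses for illustration, not Sony's):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SLOTS 64u              /* power of two, so wrapping works */

struct io_request {
    uint32_t device;
    uint32_t op;                    /* read / write / ioctl ...        */
    uint64_t buffer_phys;           /* physical address of the data    */
    uint32_t length;
};

/* One ring per direction lives in the region both processors can see. */
struct ring {
    _Atomic uint32_t head;          /* advanced by the producer only   */
    _Atomic uint32_t tail;          /* advanced by the consumer only   */
    struct io_request slot[RING_SLOTS];
};

/* Top-half CPU posts a request for the device-driver processor. */
bool ring_post(struct ring *r, const struct io_request *req)
{
    uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h - t == RING_SLOTS)
        return false;               /* ring full; caller retries       */
    r->slot[h % RING_SLOTS] = *req;
    /* Release: slot contents become visible before head moves. */
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return true;
}

/* Driver CPU pulls the next pending request, if any. */
bool ring_take(struct ring *r, struct io_request *out)
{
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t h = atomic_load_explicit(&r->head, memory_order_acquire);
    if (t == h)
        return false;               /* nothing pending                 */
    *out = r->slot[t % RING_SLOTS];
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return true;
}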

The common OS wisdom these days is that it's better to be symmetrical
and let the OS work it out at run-time, but it's a long time since I saw
a paper where anyone actually compared the alternatives. Probably
because I read the wrong sort of papers, rather than because none have
been written.

Cheers,

--
Andrew