From: nmm1 on
In article <868i53Ft0cU1(a)mid.individual.net>,
Andrew Reilly <areilly---(a)bigpond.net.au> wrote:
>On Thu, 27 May 2010 20:04:11 +0200, Morten Reistad wrote:
>
>> You would still need to signal other cpu's, but that signal does not
>> have to be a very precise interrupt. That cpu can easily handle a few
>> instructions more before responding. It could e.g. easily run its
>> pipeline dry first.
>
>Do any processors actually do something like this? That is, have some
>instructions or situations that are preferred as interruptible points,
>and others just not? It seems to me that most systems more complex than
>certain very low-level controllers could probably get away with only
>taking interrupts at the point of taken branches or (perhaps) i-cache or
>tlb miss or page fault. That would still guarantee sub-microsecond
>response to external events, which is probably fast enough...

Not usually. And that's a lot of my point. While I prefer
interrupt-free designs, interrupting ones work pretty well. But
funnelling a wildly heterogeneous set of interrupt requirements
through a single mechanism is just plain stupid.
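
For concreteness, here is a toy sketch in C of the scheme Andrew is
asking about: the core samples its pending-interrupt flag only at
preferred points (taken branches, in this sketch) rather than between
arbitrary instructions. The opcode set, irq_pending and take_interrupt()
are all inventions of the illustration, not taken from any real design.

/*
 * Toy sketch only: nothing here describes a real pipeline.
 */
#include <stdatomic.h>
#include <stdbool.h>

enum opcode { OP_ALU, OP_LOAD, OP_STORE, OP_BRANCH, OP_HALT };

struct insn { enum opcode op; int target; };

static atomic_bool irq_pending;     /* set asynchronously by "devices"   */

static void take_interrupt(int *pc) /* stub: save *pc, vector to handler */
{
    (void)pc;
    atomic_store(&irq_pending, false);
}

void run(const struct insn *mem, int pc)
{
    for (;;) {
        struct insn i = mem[pc];
        switch (i.op) {
        case OP_ALU:
        case OP_LOAD:
        case OP_STORE:
            pc++;                   /* no interrupt check mid-stream */
            break;
        case OP_BRANCH:
            pc = i.target;
            /* Preferred interruptible point: the only state to carry
             * into the handler is a PC, so recognising the interrupt
             * here is cheap and need not be "very precise". */
            if (atomic_load(&irq_pending))
                take_interrupt(&pc);
            break;
        case OP_HALT:
            return;
        }
    }
}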


Regards,
Nick Maclaren.
From: Terje Mathisen "terje.mathisen at tmsw.no" on
nmm1(a)cam.ac.uk wrote:
> In article<htmoal$u5$1(a)usenet01.boi.hp.com>,
> FredK<fred.nospam(a)dec.com> wrote:
>>
>>>
>>> If that PDP11 has a good number of processors, dedicating one of them
>>> to handle low-level I/O, building a FIFO with hardware-provided
>>> atomic reads and writes (not _that_ hard to do), and simply blocking
>>> on reads from it should solve that.
>>
>> The DEC PDP-11 was a single processor minicomputer from the 70's (there was
>> a dual CPU 11/74 IIRC). On these systems it was not feasible from a
>> cost/complexity viewpoint to implement "IO channel" processors. Just as it
>> wasn't reasonable for those systems/CPUs that ultimately resulted in the
>> x86.
>
> That's a very dubious statement. It was cost-effective on similar
> size machines a decade earlier, in other ranges (e.g. System/360).
> Yes, I know that IBM's target was different from DEC's.
>
> DEC took the design decision to go for efficient interrupts, which
> was not stupid, but that does NOT mean that it was the best (let
> alone only feasible!) solution.

On the Apple II, the design point was to use as little HW as the Woz
could get away with, including the (in)famous SW diskette interface.

On the first PC we had a very similar situation, up to and including the
choice of the 8088 instead of the 8086 in order to get fewer and cheaper
(8 vs 16 bits!) interface/memory chips.

Since then, there have been a number of attempts to get intelligent IO,
aka channels, implemented on the PC architecture, and for each and every
generation the real showstopper, except for very expensive server
designs, has been that almost all PCs are mostly idle, most of the time.

The next hurdle is the fact that if you have $N available for either an
IO engine or a second cpu/core, then it is nearly always more
cost-effective to use those dollars for another real core.

When/if we finally get lots of cores, some of which are really
low-power, in-order, with very fast context switching, then it makes
even more sense to allocate all IO processing to such cores and let the
big/power-hungry/OoO cores do the "real" processing.
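
A minimal sketch of what that allocation could look like in software
today, assuming Linux-style affinity control: the small core's number
and io_service_loop() are placeholders, and pthread_attr_setaffinity_np()
is a GNU extension, so none of this is portable or definitive.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

extern void *io_service_loop(void *arg); /* placeholder: drains the device
                                            rings and feeds the big cores */

int start_io_core(int small_core_id)
{
    pthread_t tid;
    pthread_attr_t attr;
    cpu_set_t cpus;
    int rc;

    CPU_ZERO(&cpus);
    CPU_SET(small_core_id, &cpus);      /* e.g. the low-power in-order core */

    pthread_attr_init(&attr);
    rc = pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);
    if (rc == 0)
        rc = pthread_create(&tid, &attr, io_service_loop, NULL);
    pthread_attr_destroy(&attr);
    return rc;                          /* 0 on success, error code otherwise */
}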

With proper non-blocking queue handling, those working cores can run
flat out with no interrupts as long as there is any work at all to be
done, then go to sleep.

Using an interrupt from an IO core to get out of sleep and start
processing again is a good idea from a power efficiency viewpoint.
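
A user-space stand-in for that arrangement might look roughly like the
sketch below, assuming a single IO core producing into one non-blocking
ring and a single worker core consuming from it; QSIZE, handle() and
the wake_* names are inventions of the sketch. The lock exists only
for the sleep/wake handshake: the data path itself never blocks, so
the worker takes no interrupts while the ring stays non-empty.

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QSIZE 256u                       /* power of two */

static struct {
    _Atomic unsigned head, tail;         /* head: consumer, tail: producer */
    void *slot[QSIZE];
} q;

static pthread_mutex_t wake_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wake_cv   = PTHREAD_COND_INITIALIZER;
static bool worker_asleep;

extern void handle(void *work);          /* placeholder work function */

static bool ring_push(void *p)           /* producer (IO core) side only */
{
    unsigned t = atomic_load_explicit(&q.tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&q.head, memory_order_acquire);
    if (t - h == QSIZE)
        return false;                    /* full */
    q.slot[t % QSIZE] = p;
    atomic_store_explicit(&q.tail, t + 1, memory_order_release);
    return true;
}

static void *ring_pop(void)              /* consumer (worker core) side only */
{
    unsigned h = atomic_load_explicit(&q.head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&q.tail, memory_order_acquire);
    if (h == t)
        return NULL;                     /* empty */
    void *p = q.slot[h % QSIZE];
    atomic_store_explicit(&q.head, h + 1, memory_order_release);
    return p;
}

static bool ring_empty(void)
{
    return atomic_load_explicit(&q.head, memory_order_relaxed) ==
           atomic_load_explicit(&q.tail, memory_order_acquire);
}

void *worker_core(void *arg)             /* the big, power-hungry OoO core */
{
    (void)arg;
    for (;;) {
        void *w;
        while ((w = ring_pop()) != NULL)
            handle(w);                   /* run flat out, no interrupts */

        pthread_mutex_lock(&wake_lock);  /* ring drained: go to sleep */
        worker_asleep = true;
        while (ring_empty())             /* re-check under the lock so a
                                            racing push cannot be lost */
            pthread_cond_wait(&wake_cv, &wake_lock);
        worker_asleep = false;
        pthread_mutex_unlock(&wake_lock);
    }
}

void io_core_submit(void *work)          /* called by the IO service core */
{
    while (!ring_push(work))
        sched_yield();                   /* ring full: back off (sketch only) */
    pthread_mutex_lock(&wake_lock);
    if (worker_asleep)
        pthread_cond_signal(&wake_cv);   /* the "interrupt" that ends sleep */
    pthread_mutex_unlock(&wake_lock);
}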

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: FredK on

"Terje Mathisen" <"terje.mathisen at tmsw.no"> wrote in message
news:89c4d7-5go.ln1(a)ntp.tmsw.no...
> nmm1(a)cam.ac.uk wrote:
>> In article<htmoal$u5$1(a)usenet01.boi.hp.com>,
>> FredK<fred.nospam(a)dec.com> wrote:
>>>

snip

>
> With proper non-blocking queue handling, those working cores can run flat
> out with no interrupts as long as there is any work at all to be done,
> then go to sleep.
>
> Using an interrupt from an IO core to get out of sleep and start
> processing again is a good idea from a power efficiency viewpoint.
>

The question being: how fast can you bring the CPU out of its "sleep"
state, and how do you schedule servicing of the non-blocking queues
without dedicating one or more cores strictly to handling them? The
clock interrupt, for example, is typically the mechanism used for
scheduling multiple processes competing for CPU time.
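
One common answer, sketched here in user space with invented
placeholders (SPIN_LIMIT, queue_empty() and park_until_wakeup() are
not any particular API): poll the queue for a bounded time, then park
the thread so the normal scheduler, driven by that same clock interrupt
or by a wakeup from the IO side, decides what runs on the core. Nothing
is permanently dedicated to queue servicing.

#include <sched.h>
#include <stdbool.h>

#define SPIN_LIMIT 1000                  /* tune: wakeup latency vs. burnt cycles */

extern bool queue_empty(void);           /* placeholder: non-blocking check  */
extern void park_until_wakeup(void);     /* placeholder: futex/condvar wait  */

void wait_for_work(void)
{
    for (int spins = 0; spins < SPIN_LIMIT; spins++) {
        if (!queue_empty())
            return;                      /* fast path: no sleep, no wakeup cost */
        sched_yield();                   /* be polite to other runnable work */
    }
    /* Slow path: the queue has been idle for a while, so give the core
     * back and accept the cost of being woken later. */
    while (queue_empty())
        park_until_wakeup();
}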



From: Andy 'Krazy' Glew on
On 5/28/2010 1:16 AM, nmm1(a)cam.ac.uk wrote:
> In article<htmoal$u5$1(a)usenet01.boi.hp.com>,
> FredK<fred.nospam(a)dec.com> wrote:
>>
>>>
>>> If that PDP11 has a good number of processors, dedicating one of them
>>> to handle low-level I/O, building a FIFO with hardware-provided
>>> atomic reads and writes (not _that_ hard to do), and simply blocking
>>> on reads from it should solve that.
>>
>> The DEC PDP-11 was a single processor minicomputer from the 70's (there was
>> a dual CPU 11/74 IIRC). On these systems it was not feasible from a
>> cost/complexity viewpoint to implement "IO channel" processors. Just as it
>> wasn't reasonable for those systems/CPUs that ultimately resulted in the
>> x86.
>
> That's a very dubious statement. It was cost-effective on similar
> size machines a decade earlier, in other ranges (e.g. System/360).
> Yes, I know that IBM's target was different from DEC's.
>
> DEC took the design decision to go for efficient interrupts, which
> was not stupid, but that does NOT mean that it was the best (let
> alone only feasible!) solution.


Best in a Darwinian sense?

From: Robert Myers on
On May 28, 4:42 am, Terje Mathisen <"terje.mathisen at tmsw.no">
wrote:

> When/if we finally get lots of cores, some of which are really
> low-power, in-order, with very fast context switching, then it makes
> even more sense to allocate all IO processing to such cores and let the
> big/power-hungry/OoO cores do the "real" processing.

But it would likely take Microsoft for such a step to be of any value
in the desktop/notebook space, no?

Servers not only have different workloads, they also use different
operating systems, and I'll take a wild guess that almost any server
OS can take advantage of intelligent I/O better than Desktop Windows,
which, I speculate, could hardly take advantage of it at all without a
serious rewrite.

Robert.