From: Andrew Reilly on
On Thu, 27 May 2010 20:04:11 +0200, Morten Reistad wrote:

> You would still need to signal other CPUs, but that signal does not
> have to be a very precise interrupt. That CPU can easily handle a few
> instructions more before responding. It could e.g. easily run its
> pipeline dry first.

Do any processors actually do something like this? That is, have some
instructions or situations that are preferred as interruptible points,
and others just not? It seems to me that most systems more complex than
certain very low-level controllers could probably get away with only
taking interrupts at the point of taken branches or (perhaps) I-cache or
TLB miss or page fault. That would still guarantee sub-microsecond
response to external events, which is probably fast enough...
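
Here is a toy model of what I have in mind (everything in it is invented
for illustration; no real pipeline works quite this way): an interpreter
that samples its IRQ line only when a taken branch retires.

/* Toy model, not any real pipeline: interrupts are sampled only when a
 * taken branch retires, so no other instruction is ever an interrupt
 * boundary.  Everything here is invented for illustration. */
#include <stdio.h>

enum op { ADDI, BNE, HALT };
struct insn { enum op op; int a, b, c; };

static int irq_pending(void)             /* stand-in for an IRQ line */
{
    static int n;
    return ++n % 3 == 0;
}

static void service_interrupt(const int *r, int pc)
{
    printf("interrupt taken at pc=%d, r0=%d\n", pc, r[0]);
}

static void run(const struct insn *prog, int *r)
{
    int pc = 0;
    for (;;) {
        struct insn i = prog[pc];
        switch (i.op) {
        case ADDI:                       /* r[a] = r[b] + c */
            r[i.a] = r[i.b] + i.c;
            pc++;                        /* never an interrupt point */
            break;
        case BNE:                        /* branch to c if r[a] != r[b] */
            if (r[i.a] != r[i.b]) {
                pc = i.c;
                /* A taken branch is the only interruptible point: the
                 * architectural state has a clean boundary here, so it
                 * is cheap to hand off to the handler. */
                if (irq_pending())
                    service_interrupt(r, pc);
            } else {
                pc++;
            }
            break;
        case HALT:
            return;
        }
    }
}

int main(void)
{
    /* Count r0 down from 5; IRQs can land only on the taken back-branch. */
    struct insn prog[] = {
        { ADDI, 0, 0, 5 },               /* r0 = 5 */
        { ADDI, 0, 0, -1 },              /* loop body */
        { BNE,  0, 1, 1 },               /* taken until r0 == r1 (== 0) */
        { HALT, 0, 0, 0 },
    };
    int r[2] = { 0, 0 };
    run(prog, r);
    return 0;
}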

Cheers,

--
Andrew
From: Roger Ivie on
On 2010-05-27, FredK <fred.nospam(a)dec.com> wrote:
> The DEC PDP-11 was a single-processor minicomputer from the '70s (there
> was a dual-CPU 11/74, IIRC). On these systems it was not feasible, from a
> cost/complexity viewpoint, to implement "IO channel" processors. Just as
> it wasn't reasonable for those systems/CPUs that ultimately resulted in
> the x86.

http://en.wikipedia.org/wiki/Intel_8089
--
roger ivie
rivie(a)ridgenet.net
From: Andy 'Krazy' Glew on
On 5/26/2010 11:14 AM, Robert Myers wrote:
> On May 26, 1:43 pm, timcaff...(a)aol.com (Tim McCaffrey) wrote:

> For the life of me, I can't understand the logic of computer systems
> that shovel all tasks into one hopper, even if it means constantly
> interrupting tasks that might well have interrupted another task. I
> suspect the influence of some legacy (PC?) mentality, but I'm sure
> there is someone here who can set me straight.

I partly agree with you, but then I read something in one of the design
magazines in the foyer at work - I think it was ECN - that might be
paraphrased as:

<<

Automotive engineers are tired of having too many microprocessors.

Not only are there too many chips, but even if the different processors
are put onto a reduced number of chips, it is still too much of a hassle
to manage. If you choose a given set of processors, some jobs turn out to
be too big for any single processor to handle, so you have to do expensive
and risky parallelization.

Worse, all of the little processors allocated to specific tasks have to be sized so that they can handle the worst case
workload.

It's much more efficient to have fewer, larger processors, since the
worst-case workloads for the separate tasks seldom all happen at the same
time. Instead of provisioning N*WorstCaseNeed across N dedicated
processors, you provision one processor of size N*AverageNeed plus the
sum of the M largest worst cases, where M is the largest number of worst
cases that will ever happen together.

>>

Truly, this isn't me; I'm only paraphrasing.

Now, magazines like ECN are often just shilling for whoever wrote the article. But obviously some embedded processor
vendor for automotive felt that an argument like the above was compelling.
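
To make the arithmetic concrete, here is a back-of-the-envelope sketch
under one reading of that sizing rule. The task loads are invented; only
the shape of the comparison matters.

/* Back-of-the-envelope sizing: N dedicated CPUs vs. one consolidated CPU,
 * under one reading of the rule above.  The task loads are invented. */
#include <stdio.h>
#include <stdlib.h>

struct task { int avg, worst; };          /* load in MIPS */

static int desc(const void *a, const void *b)
{
    return *(const int *)b - *(const int *)a;
}

int main(void)
{
    struct task t[] = { {10,50}, {5,40}, {20,60}, {2,30}, {8,45} };
    enum { N = sizeof t / sizeof t[0] };
    int M = 2;                  /* at most 2 tasks ever peak together */

    int dedicated = 0, avg_total = 0, excess[N];
    for (int i = 0; i < N; i++) {
        dedicated += t[i].worst;          /* N separate worst cases */
        avg_total += t[i].avg;
        excess[i] = t[i].worst - t[i].avg;
    }
    qsort(excess, N, sizeof excess[0], desc);

    int consolidated = avg_total;         /* N * AverageNeed ... */
    for (int i = 0; i < M; i++)
        consolidated += excess[i];        /* ... plus M largest peaks */

    printf("dedicated:    %d MIPS\n", dedicated);    /* 225 */
    printf("consolidated: %d MIPS\n", consolidated); /* 45 + 80 = 125 */
    return 0;
}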



From: Andy 'Krazy' Glew on
On 5/27/2010 5:47 PM, Andrew Reilly wrote:
> On Thu, 27 May 2010 20:04:11 +0200, Morten Reistad wrote:
>
>> You would still need to signal other CPUs, but that signal does not
>> have to be a very precise interrupt. That CPU can easily handle a few
>> instructions more before responding. It could e.g. easily run its
>> pipeline dry first.
>
> Do any processors actually do something like this? That is, have some
> instructions or situations that are preferred as interruptible points,
> and others just not? It seems to me that most systems more complex than
> certain very low-level controllers could probably get away with only
> taking interrupts at the point of taken branches or (perhaps) I-cache or
> TLB miss or page fault. That would still guarantee sub-microsecond
> response to external events, which is probably fast enough...


Yes.

I could swear I was looking at a webpage for one recently.

From: nmm1 on
In article <htmoal$u5$1(a)usenet01.boi.hp.com>,
FredK <fred.nospam(a)dec.com> wrote:
>
>>
>> If that PDP11 has a good number of processors, dedicate one of them
>> to handle low-level I/O; build a FIFO with hardware-provided atomic
>> reads and writes (not _that_ hard to do), and a simple block-on-read
>> on that should solve the problem.
>
>The DEC PDP-11 was a single-processor minicomputer from the '70s (there
>was a dual-CPU 11/74, IIRC). On these systems it was not feasible, from a
>cost/complexity viewpoint, to implement "IO channel" processors. Just as
>it wasn't reasonable for those systems/CPUs that ultimately resulted in
>the x86.

That's a very dubious statement. It was cost-effective on similar-sized
machines a decade earlier, in other ranges (e.g. the System/360). Yes, I
know that IBM's target market was different from DEC's.

DEC took the design decision to go for efficient interrupts, which was
not stupid, but that does NOT mean it was the best (let alone the only
feasible!) solution.
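
For what it's worth, the FIFO suggested above is not hard to sketch.
Here is a minimal single-producer/single-consumer ring buffer, with
C11-style atomics standing in for the "hardware-provided atomic reads
and writes"; all names are invented, and a real I/O processor would
block on read rather than spin.

/* Minimal single-producer/single-consumer ring buffer, in the spirit of
 * the FIFO suggested above.  All names are invented for illustration. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define SLOTS 16                     /* power of two, so masking works */

struct fifo {
    uint32_t buf[SLOTS];
    atomic_uint head;                /* written only by the consumer */
    atomic_uint tail;                /* written only by the producer */
};

/* Producer side: returns 0 if the queue is full. */
static int fifo_put(struct fifo *f, uint32_t v)
{
    unsigned tail = atomic_load_explicit(&f->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&f->head, memory_order_acquire);
    if (tail - head == SLOTS)
        return 0;                    /* full */
    f->buf[tail % SLOTS] = v;
    atomic_store_explicit(&f->tail, tail + 1, memory_order_release);
    return 1;
}

/* Consumer side: returns 0 if the queue is empty. */
static int fifo_get(struct fifo *f, uint32_t *v)
{
    unsigned head = atomic_load_explicit(&f->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&f->tail, memory_order_acquire);
    if (tail == head)
        return 0;                    /* empty; a real I/O CPU would block */
    *v = f->buf[head % SLOTS];
    atomic_store_explicit(&f->head, head + 1, memory_order_release);
    return 1;
}

int main(void)
{
    struct fifo f = {0};
    uint32_t v;
    fifo_put(&f, 42);
    if (fifo_get(&f, &v))
        printf("%u\n", v);
    return 0;
}

The free-running head/tail counters avoid a separate full/empty flag,
and with exactly one writer per index no compare-and-swap is needed -
which is why the quoted poster can say it's not _that_ hard to do in
hardware.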


Regards,
Nick Maclaren.