From: Peter Olcott on
Anyhow it is easy enough to implement both ways so that
testing can show which one is superior. This is far far
simpler than my original approach, thanks to you and others.

(1) Make a separate process that has a lower priority than
the high priority process.

(2) Make several separate processes such that the processes
with lower priority explicitly look to see if they need to
yield to a higher priority process, and thus put themselves
to sleep. A shared memory location could provide the number
of items pending in each priority queue. The lower priority
process could look at these memory locations inside
every tight loop. It would have to check no more than once
every 10 ms, and once every 100 ms may be often enough.

Apparently, according to the people in the hardware groups,
there would be no need to lock these memory locations
because reads and writes can occur concurrently without
garbling each other. I will have to look into this further
because memory shared between processes may have more
complex requirements than memory shared between threads.

"David Schwartz" <davids(a)webmaster.com> wrote in message
news:fbb61541-57d9-4a6a-9a53-58a9c7a82dec(a)g10g2000yqh.googlegroups.com...
On Apr 7, 9:22 am, David Schwartz <dav...(a)webmaster.com>
wrote:
> On Apr 7, 7:01 am, "Peter Olcott" <NoS...(a)OCR4Screen.com>
> wrote:
>
> > If it runs at all it ruins my cache spatial locality of
> > reference and makes the other process at least ten-fold
> > slower; that is why I want to make these jobs sequential.

Actually, I should point out that there is one exception. If you have
CPUs where cores share an L3 cache, code running in the other core can
use L3 cache. But to avoid that, you'd need a dummy thread/process at
a very high priority whose sole purpose is to use that core without
using cache. It would be very tricky to control that dummy
thread/process, and it would tank system performance if you didn't
make sure it wound up on the right core and ran only when needed. I
would only resort to such a thing if all other possibilities had
already been tried and had been proven to be unable to get working.

DS


From: Peter Olcott on
I still need to know what is involved in a context switch
for other reasons. I want a lower priority process to not
ever run at all while a higher priority process is running.

If a lower priority process is run for 1 ms every second (a
0.1% time slice) it would screw up my 8 MB L3 cache.

"David Schwartz" <davids(a)webmaster.com> wrote in message
news:441ffeae-b15a-4630-85b0-1c8d8d30c548(a)o30g2000yqb.googlegroups.com...
On Apr 7, 9:33 am, "Peter Olcott" <NoS...(a)OCR4Screen.com>
wrote:

> My 8 MB of L3 cache can be refilled in less time than the
> context switch?

No, that's not what I said.

> Exactly what is involved with a context switch besides
> saving and initializing the machine registers?

You missed my point completely. Please read every word I wrote one
more time. Here it is again:

"[A] lower-priority process is not going to pre-empt a
higher-priority process. So the context switch rate will be limited
to the scheduler's slice time. The slice time is specifically set
large enough so a full cache refill per slice time is lost in the
noise."

This has nothing whatsoever to do with how long it takes to perform
a context switch and everything to do with how *often* you perform
context switches.

DS


From: Moi on
On Wed, 07 Apr 2010 11:44:20 -0500, Peter Olcott wrote:

> Anyhow it is easy enough to implement both ways so that testing can show
> which one is superior. This is far far simpler than my original
> approach, thanks to you and others.
>
> (1) Make a separate process that has a lower priority than the high
> priority process.
>
> (2) Make several separate processes such that the processes with lower
> priority explicitly look to see if they need to yield to a higher
> priority process, and thus put themselves to sleep. A shared memory
> location could provide the number of items pending in each priority
> queue. The lower priority process could look at these memory locations
> inside every tight loop. It would have to check no more than once
> every 10 ms, and once every 100 ms may be often enough.

Again: you don't need to sleep. You can block on input, use select/poll,
or you could even block on msgget().

Creating your own queuing in SHM, while files, (named) pipes and
message-queues are available, does not seem wise to me. These facilities
are there for a reason.

AvK
From: Moi on
On Wed, 07 Apr 2010 11:49:13 -0500, Peter Olcott wrote:

> I still need to know what is involved in a context switch for other
> reasons. I want a lower priority process to not ever run at all while a
> higher priority process is running.
>
> If a lower priority process is run for 1 ms every second (a 0.1% time
> slice) it would screw up my 8 MB L3 cache.

Maybe you should pull the network cable, too.
Or disable interrupts, just to be sure.
:-)

AvK
From: Peter Olcott on

"Moi" <root(a)invalid.address.org> wrote in message
news:2bb77$4bbcbab9$5350c024$23768(a)cache120.multikabel.net...
> On Wed, 07 Apr 2010 11:44:20 -0500, Peter Olcott wrote:
>
>> Anyhow it is easy enough to implement both ways so that testing
>> can show which one is superior. This is far far simpler than my
>> original approach, thanks to you and others.
>>
>> (1) Make a separate process that has a lower priority than the
>> high priority process.
>>
>> (2) Make several separate processes such that the processes with
>> lower priority explicitly look to see if they need to yield to a
>> higher priority process, and thus put themselves to sleep. A
>> shared memory location could provide the number of items pending
>> in each priority queue. The lower priority process could look at
>> these memory locations inside every tight loop. It would have to
>> check no more than once every 10 ms, and once every 100 ms may be
>> often enough.
>
> Again: you don't need to sleep. You can block on input, use
> select/poll, or you could even block on msgget().

A 3.5 minute long low priority process could already be executing
when a 50 ms high priority job arrives. The 3.5 minute low priority
process must give up what it is doing (sleep) so that the 50 ms high
priority job has exclusive use of the CPU. If the 50 ms job does not
have exclusive use of the CPU it may become a 500 ms job due to the
lack of cache spatial locality of reference. I am trying to impose a
100 ms real-time limit on the high priority jobs.

>
> Creating your own queuing in SHM, while files, (named) pipes and
> message-queues are available, does not seem wise to me. These
> facilities are there for a reason.
>
> AvK