From: David Schwartz on
On Apr 7, 9:49 am, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:

> I still need to know what is involved in a context switch
> for other reasons. I want a lower priority process to not
> ever run at all while a higher priority process is running.

That's really not what you want. That's what you think you want, and
no matter how many times I explain that this is not what you want, you
don't seem to get it.

Suppose, just hypothetically, that the lower priority process holds a
lock the higher priority process needs to make forward progress. Do
you still want the lower priority process not to run? Of course not.
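
(The name for this hazard is priority inversion. If your processes
ever do share a lock, the usual fix is a priority-inheritance mutex,
so the low-priority holder gets boosted instead of stalling the
high-priority waiter. A rough sketch using the POSIX knobs for this;
the init function name is mine, not anything from your code:

    #include <pthread.h>

    /* Sketch: a process-shared, priority-inheritance mutex. The
       holder inherits the priority of the highest-priority waiter,
       so a low-priority holder cannot stall a high-priority waiter
       indefinitely. */
    int init_pi_mutex(pthread_mutex_t *m)
    {
        pthread_mutexattr_t attr;
        int rc;

        pthread_mutexattr_init(&attr);
        /* holder inherits the highest waiting priority */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        /* needed if the mutex lives in SHM shared across processes */
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        rc = pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }

)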

> If a lower priority process is run for 1 ms every second (a
> 0.1% time slice) it would screw up my 8 MB L3 cache.

You're designing across levels again. You don't have to worry about
that, the operating system's designers know all the details about the
CPU's internals and have designed the scheduler to avoid that problem.
Really.

If you try to act on knowledge like this, performance will tank in the
case where your general assumptions are wrong. The scheduler has tons
of information you don't have about the live state of the system and
the CPU topology. Let it do its job -- just make sure it knows what
you want.
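
(On Linux, "make sure it knows what you want" can be as little as one
setpriority() call in the low-priority worker. A minimal sketch, not
your actual code:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* nice values run -20..19; +10 means: under contention,
           give the CPU to anything more important than me. */
        if (setpriority(PRIO_PROCESS, 0, 10) == -1) {
            perror("setpriority");
            return 1;
        }
        /* ... the long-running low-priority work goes here ... */
        return 0;
    }

)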

DS
From: Moi on
On Wed, 07 Apr 2010 12:46:26 -0500, Peter Olcott wrote:

> "Moi" <root(a)invalid.address.org> wrote in message
> news:2bb77$4bbcbab9$5350c024$23768(a)cache120.multikabel.net...
>> On Wed, 07 Apr 2010 11:44:20 -0500, Peter Olcott wrote:
>>
>>> Anyhow it is easy enough to implement both ways so that testing
>>> can show which one is superior. This is far, far simpler than my
>>> original approach, thanks to you and others.
>>>
>>> (1) Make a separate process that has a lower priority than the high
>>> priority process.
>>>
>>> (2) Make several separate processes such that the processes with
>>> lower priority explicitly look to see if they need to yield to a
>>> higher priority process, and thus put themselves to sleep. A shared
>>> memory location could provide the number of items pending in each
>>> priority queue. The lower priority process could look at these
>>> memory locations inside every tight loop. It would have to check no
>>> more than once every 10 ms, and once every 100 ms may be often
>>> enough.
>>
>> Again: you don't need to sleep. You can block on input, use
>> select/poll, or you could even block on msgget().
>
> A 3.5 minute long low priority process could be already executing
> when a 50 ms high priority job arrives. The 3.5 minute long low
> priority process must give up what it is doing (sleep) so that the
> 50 ms high priority process has exclusive use of the CPU. If the
> 50 ms job does not have exclusive use of the CPU it may become a
> 500 ms job due to the lack of cache spatial locality of reference.
> I am trying to impose a 100 ms real-time limit on the high priority
> jobs.

No.

If the processes are real processes in the Unix sense,
in your example the following will happen:

1) The low-priority process is running and making progress;
   *the high-priority process is blocked (on input)*.
2) A high-priority task arrives
   (through a pipe/socket/message queue/whatever IPC).
3) The high-priority process is unblocked by the kernel/scheduler
   (and the low-priority task is preempted back to merely "runnable",
   if there is only one CPU available).
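
A minimal sketch of that blocking pattern (the FIFO path /tmp/hp_jobs
and the fixed job size are made up for the example):

    #include <fcntl.h>
    #include <unistd.h>

    #define JOB_SIZE 512

    int main(void)
    {
        char job[JOB_SIZE];
        /* open blocks until a writer appears */
        int fd = open("/tmp/hp_jobs", O_RDONLY);

        if (fd == -1)
            return 1;
        for (;;) {
            /* read blocks too: no sleeping, no polling, and zero
               CPU used while the queue is empty */
            ssize_t n = read(fd, job, sizeof job);
            if (n <= 0)
                break;   /* all writers closed the pipe */
            /* handle_job(job, n); -- the 50 ms work goes here */
        }
        close(fd);
        return 0;
    }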



>> Creating your own queuing in SHM, while files, (named) pipes and
>> message-queues are available, does not seem wise to me. These
>> facilities are there for a reason.
>>
>> AvK

AvK
From: David Schwartz on
On Apr 7, 4:43 am, Jasen Betts <ja...(a)xnet.co.nz> wrote:

> select() prefers the lowest numbered file descriptor it's asked to
> test/watch so that should be easy to arrange,

Huh?!

DS
From: Peter Olcott on
If I am wrong (and I really don't think that I am; I have
benchmarking to support my hypotheses) I can always use the simple
mechanism that you proposed. I would benchmark these against each
other. Since I will make sure that there are no locks between
processes, all of these complications are moot. In any case, if a
lock were needed, the operation it protects would not be interrupted.

Either approach may produce acceptable performance; one might be
better than the other. It is like compression algorithms: the generic
ones cannot perform as well as the specialized ones, because the
specialized ones know more of the underlying details. Likewise with
my scheduling of my processes as compared to the OS scheduling them
for me.

Is there any way to tell the hardware cache to load specific
data?
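
(Something like gcc's __builtin_prefetch may be what I am asking
about -- a hint to pull data into cache ahead of use, though as I
understand it the hardware is free to ignore it. A sketch of the sort
of thing I mean, with a made-up array walk:

    /* Walk an array while hinting the element 16 slots ahead into
       cache. 0 = prefetch for read, 3 = keep in all cache levels. */
    static long sum_with_prefetch(const long *data, long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++) {
            __builtin_prefetch(&data[i + 16], 0, 3);
            sum += data[i];
        }
        return sum;
    }

)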

"David Schwartz" <davids(a)webmaster.com> wrote in message
news:6f6bc480-f0af-4153-a003-06e68605c078(a)g10g2000yqh.googlegroups.com...
[snip]
From: Peter Olcott on

"Moi" <root(a)invalid.address.org> wrote in message
news:9b03e$4bbcccf5$5350c024$25709(a)cache110.multikabel.net...
> On Wed, 07 Apr 2010 12:46:26 -0500, Peter Olcott wrote:
>
>> "Moi" <root(a)invalid.address.org> wrote in message
>> news:2bb77$4bbcbab9$5350c024$23768(a)cache120.multikabel.net...
>>> On Wed, 07 Apr 2010 11:44:20 -0500, Peter Olcott wrote:
>>>
>>>> Anyhow it is easy enough to implement both ways so that
>>>> testing can
>>>> show
>>>> which one is superior. This is far far simpler than my
>>>> original
>>>> approach, thanks to you and others.
>>>>
>>>> (1) Make a separate process that has a lower priority
>>>> than the high
>>>> priority process.
>>>>
>>>> (2) Make several separate processes such that the
>>>> processes with lower
>>>> priority explicitly look to see if they need to yield
>>>> to a higher
>>>> priority process, and thus put themselves to sleep. A
>>>> shared memory
>>>> location could provide the number of items pending in
>>>> each priority
>>>> queue. The lower priority process could look, at these
>>>> memory
>>>> locations
>>>> inside of every tight loop. It would have to check no
>>>> more than once
>>>> every 10 ms, and once every 100 ms may be often enough.
>>>
>>> Again: you don't need to sleep. You can block on input,
>>> use
>>> select/poll,
>>> or you could even block on msgget().
>>
>> A 3.5 minute long low priority process could be already executing
>> when a 50 ms high priority job arrives. The 3.5 minute long low
>> priority process must give up what it is doing (sleep) so that the
>> 50 ms high priority process has exclusive use of the CPU. If the
>> 50 ms job does not have exclusive use of the CPU it may become a
>> 500 ms job due to the lack of cache spatial locality of reference.
>> I am trying to impose a 100 ms real-time limit on the high priority
>> jobs.
>
> No.
>
> If the processes are real processes in the Unix sense,
> in your example the following will happen:
>
> 1) The low-priority process is running and making progress;
>    *the high-priority process is blocked (on input)*.
> 2) A high-priority task arrives
>    (through a pipe/socket/message queue/whatever IPC).
> 3) The high-priority process is unblocked by the kernel/scheduler
>    (and the low-priority task is preempted back to merely
>    "runnable", if there is only one CPU available).

So if the high priority job takes 100% of the CPU for ten
minutes, then the low priority job must wait ten minutes? If
I even give the low priority job a tiny slice of the time,
then the first thing that it will do is utterly screw up my
cache.
