From: Peter Olcott on

"Jens Thoms Toerring" <jt(a)toerring.de> wrote in message
news:823jm7F41dU1(a)mid.uni-berlin.de...
> In comp.unix.programmer Peter Olcott
> <NoSpam(a)ocr4screen.com> wrote:
>> David has mostly convinced me that my 3.5 minute job is
>> best off as its own process. The only issue is that the
>> minimum sleep period seems to be one second; I could
>> really use it to be 100 ms. I might have to build my own
>> sleep system for this process.
>
> For the sleep() function that's correct. For shorter sleep
> periods use nanosleep(), that should give you a resolution
> of at least 10 ms.
> Regards, Jens

Yes I forgot about that.

>
> PS: Please stop crossposting this to
> linux.development.system,
> that group is about "Linux kernels, device drivers,
> modules"
> and thus the whole stuff discussed here is off-topic
> over
> there.

People define systems programming in different ways. I need
to speak with people who do the kind of programming you
referred to, to make sure that my designs work the way I
expect them to under the covers, as implemented in the
kernel.

Here is a specific concrete example:
I am assuming that all pwrite(), pread(), and append (i.e.,
write() on a descriptor opened with O_APPEND) operations are
atomic, specifically because the kernel forces them to
execute sequentially. Is this the mechanism the kernel uses
to make these operations atomic?
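[On the atomicity question: POSIX does require that a write() on an O_APPEND descriptor perform the seek-to-end and the write as a single atomic step, and that pwrite()/pread() work at an explicit offset without touching the shared file offset. Whether a given kernel achieves this with an inode lock that serializes the writers is an implementation detail. A sketch of the pattern; `append_record`/`read_record` are illustrative names:]

```c
#include <fcntl.h>
#include <unistd.h>

/* Append one record.  With O_APPEND the kernel performs the
 * seek-to-end and the write as one atomic step, so concurrent
 * appenders never overwrite each other's records. */
ssize_t append_record(int fd, const char *rec, size_t len)
{
    return write(fd, rec, len);
}

/* pread() reads at an explicit offset and never modifies the shared
 * file offset, so readers need no locking against each other. */
ssize_t read_record(int fd, char *buf, size_t len, off_t off)
{
    return pread(fd, buf, len, off);
}
```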


> --
> \ Jens Thoms Toerring ___ jt(a)toerring.de
> \__________________________ http://toerring.de


From: David Schwartz on
On Apr 7, 7:01 am, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:

> If it runs at all it ruins my cache spatial locality of
> reference and makes the other process at least ten-fold
> slower, that is why I want to make these jobs sequential.

You're wrong for way too many reasons for me to point out. Suffice it
to say, operating systems are specifically designed so that this is
not a problem. (To put the most obvious reason in simplest terms -- a
lower-priority process is not going to pre-empt a higher-priority
process. So the context switch rate will be limited to the scheduler's
slice time. The slice time is specifically set large enough so a full
cache refill per slice time is lost in the noise.)

DS
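[The priority mechanism David describes is directly usable: a background job can raise its own nice value so the scheduler never lets it preempt interactive work. A minimal sketch; `demote_self` is an illustrative name:]

```c
#include <sys/resource.h>

/* Lower this process's scheduling priority.  Nice values run from -20
 * (most favorable) to 19 (least favorable); raising the value, i.e.
 * making the process "nicer", requires no special privileges. */
int demote_self(int nice_value)
{
    return setpriority(PRIO_PROCESS, 0, nice_value);
}
```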
From: David Schwartz on
On Apr 7, 9:22 am, David Schwartz <dav...(a)webmaster.com> wrote:
> On Apr 7, 7:01 am, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:
>
> > If it runs at all it ruins my cache spatial locality of
> > reference and makes the other process at least ten-fold
> > slower, that is why I want to make these jobs sequential.

Actually, I should point out that there is one exception. If you have
CPUs where cores share an L3 cache, code running in the other core can
use L3 cache. But to avoid that, you'd need a dummy thread/process at
a very high priority whose sole purpose is to use that core without
using cache. It would be very tricky to control that dummy thread/
process, and it would tank system performance if you didn't make sure
it wound up on the right core and ran only when needed. I would only
resort to such a thing after all other possibilities had been tried
and proven unworkable.

DS
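[The core pinning David mentions can be done on Linux with sched_setaffinity(). A sketch, Linux-specific; `pin_to_core` is an illustrative name:]

```c
#define _GNU_SOURCE
#include <sched.h>

/* Restrict the calling process to a single CPU core so it cannot
 * migrate and disturb caches elsewhere (Linux-specific call). */
int pin_to_core(int core)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return sched_setaffinity(0, sizeof set, &set);
}
```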

From: Peter Olcott on
My 8 MB of L3 cache can be refilled in less time than the
context switch takes?
Exactly what is involved with a context switch besides
saving and initializing the machine registers?

"David Schwartz" <davids(a)webmaster.com> wrote in message
news:9d77b938-13e8-4ed3-b16b-981bf7daa578(a)8g2000yqz.googlegroups.com...
> On Apr 7, 7:01 am, "Peter Olcott" <NoS...(a)OCR4Screen.com>
> wrote:
>
> > If it runs at all it ruins my cache spatial locality of
> > reference and makes the other process at least ten-fold
> > slower, that is why I want to make these jobs sequential.
>
> You're wrong for way too many reasons for me to point out.
> Suffice it to say, operating systems are specifically
> designed so that this is not a problem. (To put the most
> obvious reason in simplest terms -- a lower-priority
> process is not going to pre-empt a higher-priority process.
> So the context switch rate will be limited to the
> scheduler's slice time. The slice time is specifically set
> large enough so a full cache refill per slice time is lost
> in the noise.)
>
> DS


From: David Schwartz on
On Apr 7, 9:33 am, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:

> My 8 MB of L3 cache can be refilled in less time than the
> context switch?

No, that's not what I said.

> Exactly what is involved with a context switch besides
> saving and initializing the machine registers?

You missed my point completely. Please read every word I wrote one
more time. Here it is again:

"[A] lower-priority process is not going to pre-empt a higher-priority
process. So the context switch rate will be limited to the scheduler's
slice time. The slice time is specifically set large enough so a full
cache refill per slice time is lost in the noise."

This has nothing whatsoever to do with how long it takes to perform a
context switch and everything to do with how *often* you perform
context switches.
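[A back-of-envelope check of this ratio. The bandwidth and slice-time figures below are illustrative assumptions, not numbers from the thread:]

```c
/* Fraction of a scheduler slice spent refilling the cache once per
 * slice: (cache size / memory bandwidth) / slice length. */
double refill_overhead(double cache_bytes, double bw_bytes_per_s,
                       double slice_s)
{
    return (cache_bytes / bw_bytes_per_s) / slice_s;
}
```

With an 8 MB cache, an assumed ~10 GB/s of memory bandwidth, and an assumed 100 ms slice, refill_overhead(8e6, 10e9, 0.1) comes out under 1% of the slice, which is the sense in which one refill per slice is "lost in the noise".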

DS