From: Ersek, Laszlo on
On Wed, 7 Apr 2010, Peter Olcott wrote:

> So if the high priority job takes 100% of the CPU for ten minutes, then
> the low priority job must wait ten minutes?

That's doable.

http://packages.debian.org/lenny/schedtool
http://www.gnu.org/software/libc/manual/html_node/Absolute-Priority.html
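
For instance, a minimal sketch of giving the long job an absolute
(realtime) priority via SCHED_FIFO, assuming Linux and sufficient
privileges (root or CAP_SYS_NICE); untested, details as in the glibc
manual page linked above:

    /* Put the calling process under SCHED_FIFO so that it is only
     * preempted by realtime processes of higher priority.
     * Requires appropriate privileges. */
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 10 };

        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... run the ten-minute high-priority job here ... */
        return 0;
    }

schedtool can set the same policy from the command line, without
touching the program.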

lacos
From: Phil Carmody on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
> "Phil Carmody" <thefatphil_demunged(a)yahoo.co.uk> wrote:
>> "Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
....
>>> So the OS will put a thread to sleep, saving its
>>> multi-megabyte state upon a process triggered event?
>>
>> What, *precisely*, do you mean by "saving" the state,
>> and why are you so worried by that?

Complete absence of a response noted.

>>> I would save the state with a simple change of the
>>> integer
>>> subscript into an array of each threads data. Could the
>>> OS
>>> be told that this is what is needed?
>>
>> In which case you have to simultaneously maintain the multi-megabyte
>> state of _all_ of the "thread data".
>
> David has mostly convinced me that my 3.5 minute job is best
> off as its own process.

Was that supposed to be a follow-on from my point? Your ability
to generate or follow a logical argument is completely hatstand.

> The only issue is that the minimum
> sleep period seems to be one second, I could really use it
> to be 100 ms. I might have to build my own sleep system for
> this process.

So you're attempting to optimise the system and reinvent the
scheduler, and yet you've never heard of poll(2)? Why do I feel
that's an unsurprising combination?
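
For what it's worth, a 100 ms sleep needs nothing homegrown; a sketch,
assuming a system where poll() accepts a null fds array with nfds == 0
(Linux does); nanosleep() is the fully portable alternative:

    /* Sleep for roughly 100 ms by calling poll(2) with no descriptors
     * and a millisecond timeout; sleep(3) has only one-second
     * granularity, poll(2) and nanosleep(2) do not. */
    #include <poll.h>

    static void sleep_100ms(void)
    {
        (void) poll(NULL, 0, 100);   /* timeout in milliseconds */
    }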

Phil
--
I find the easiest thing to do is to k/f myself and just troll away
-- David Melville on r.a.s.f1
From: Peter Olcott on
The one thing that I know is that my fundamental process is about
as memory-intensive as a process can get. I also know the memory
access patterns in advance, much better than any cache algorithm
could possibly infer them. Even if a cache algorithm could infer
them reasonably well, it would necessarily waste a lot of time
gaining that information.

All that really needs to happen is for my DFA recognizer to be
loaded directly into cache and not evicted until recognition is
complete. This occurs naturally (reasonably well) if my process is
the only one running. If another memory-intensive process is
time-sliced in, it screws up the cache, resulting in a tenfold
degradation in performance.

This analysis of the actual conditions would seem to
indicate that the optimization that I propose might be
better than the optimization that you propose.
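
As an aside, on commodity hardware the closest thing to "telling the
cache" is a prefetch hint, which the CPU is free to ignore; there is
no portable way to pin data in the cache. A purely illustrative
sketch using the GCC/Clang extension __builtin_prefetch, with
invented names:

    /* Hint that the transition-table row for the next DFA state will
     * be read soon.  This is only a hint: the hardware may ignore it,
     * and nothing keeps the line in cache afterwards. */
    static inline void prefetch_state(const int *transition_row)
    {
        __builtin_prefetch(transition_row, 0 /* read */, 3 /* high locality */);
    }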

"David Schwartz" <davids(a)webmaster.com> wrote in message
news:71a9eeb3-eec0-4bb0-91f7-8cbf09f91bca(a)w42g2000yqm.googlegroups.com...
On Apr 7, 11:32 am, "Peter Olcott" <NoS...(a)OCR4Screen.com>
wrote:

You can't use benchmarks that way. Your naive assumptions may be
right 99% of the time, but in the 1% where they're wrong, you can
deadlock completely. Benchmarks won't detect that.

If your goal is to write an application that happens to perform
acceptably under the conditions you test it under, then fine. But
if your goal is to design an application that will reliably meet
your requirements, you are going about it all wrong.

> Is there any way to tell the hardware cache to load specific
> data?

That's yet another step in the wrong direction. The hardware cache
has a lot more information than you do -- trying to override its
decision is likely a huge mistake.

DS


From: Peter Olcott on

"Phil Carmody" <thefatphil_demunged(a)yahoo.co.uk> wrote in
message news:8739z7gdb6.fsf(a)kilospaz.fatphil.org...
> "Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
>> "Phil Carmody" <thefatphil_demunged(a)yahoo.co.uk> wrote:
>>> "Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
> ...
>> The only issue is that the minimum sleep period seems to be one
>> second, I could really use it to be 100 ms. I might have to
>> build my own sleep system for this process.
>
> So you're attempting to optimise the system and reinvent the
> scheduler, and yet you've never heard of poll(2)? Why do I feel
> that's an unsurprising combination?

I am working on providing a preemptive scheduling system
whereby a process puts itself to sleep when another higher
priority job becomes available.
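
A rough sketch of that idea, assuming the launcher and the
long-running job share a pipe and a one-byte protocol ('P' = pause,
'R' = resume); every name here is invented for illustration:

    /* Called by the long-running job between work units.  If the
     * controller has written 'P' on the pipe, block until it writes
     * 'R'.  wake_fd is the read end of the shared pipe, left in
     * blocking mode. */
    #include <poll.h>
    #include <unistd.h>

    static void maybe_yield(int wake_fd)
    {
        struct pollfd pfd = { .fd = wake_fd, .events = POLLIN };
        char cmd;

        if (poll(&pfd, 1, 0) > 0 &&
            read(wake_fd, &cmd, 1) == 1 && cmd == 'P') {
            /* Pause requested: block in read() until resumed. */
            while (read(wake_fd, &cmd, 1) == 1 && cmd != 'R')
                ;
        }
    }

Whether this is worth doing instead of simply letting the kernel's
priorities arbitrate is a separate question.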

>
> Phil
> --
> I find the easiest thing to do is to k/f myself and just
> troll away
> -- David Melville on r.a.s.f1


From: Ersek, Laszlo on
On Wed, 7 Apr 2010, Scott Lurndal wrote:

> jt(a)toerring.de (Jens Thoms Toerring) writes:
>> In comp.unix.programmer Jasen Betts <jasen(a)xnet.co.nz> wrote:
>>> select() prefers the lowest numbered file descriptor it's asked to
>>> test/watch so that should be easy to arrange,
>>
>> Just curious: in what sense does select() "prefer" lower numbered
>> file descriptors?
>
> for (i = 0; i < num_file_descriptors; i++) {
>     if (pending_select[i].is_ready) {
>         return i;
>     }
> }

Where/when does code like this run? From your other post ("on the call,
but not subsequent to the wait", Message-ID:
<jj6vn.1068$MH1.304(a)news.usenetserver.com>) I guess you may be suggesting
this code runs when some socket(s) are already readable/writable when the
application calls select(). In that case, wouldn't it be reasonable for
the kernel to examine/collect all specified sockets, so it can set all
corresponding bits in the fd_set's? That would batch "socket status
reports" and could minimize the number of switches between user-space and
kernel-space while transferring (roughly) the same amount of information.

I would reformulate Jasen's original statement (probably changing
its meaning) like this: the higher the /nfds/ argument is, the more
time select() takes, independently of the number of bits set in the
fd_set arguments. /nfds/ is not even a functionally necessary
parameter; its only purpose is to allow for an early stop.
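
To illustrate the batching point, here is a sketch of the usual
calling pattern, assuming two already-connected sockets (invented
names sock_a, sock_b): a single select() call reports every ready
descriptor at once, and /nfds/ merely tells the kernel where it may
stop scanning.

    #include <stdio.h>
    #include <sys/select.h>

    /* One select() call can mark both descriptors ready in the same
     * fd_set; nfds = highest descriptor + 1 only bounds the scan. */
    static void check_both(int sock_a, int sock_b)
    {
        fd_set rfds;
        int nfds = (sock_a > sock_b ? sock_a : sock_b) + 1;

        FD_ZERO(&rfds);
        FD_SET(sock_a, &rfds);
        FD_SET(sock_b, &rfds);

        if (select(nfds, &rfds, NULL, NULL, NULL) > 0) {
            if (FD_ISSET(sock_a, &rfds))
                printf("sock_a readable\n");
            if (FD_ISSET(sock_b, &rfds))
                printf("sock_b readable\n");
        }
    }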

lacos