From: Scott Lurndal on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
>
>"Ian Collins" <ian-news(a)hotmail.com> wrote in message
>news:81vfqpFa9rU3(a)mid.individual.net...
>> On 04/ 6/10 09:27 AM, Peter Olcott wrote:
>>
>> I know you are using an awful client, but please fix your
>> quoting!
>>
>>> I was going to do the thread priority thing very simply.
>>> Only one thread can run at a time, and the purpose of the
>>> multiple threads was to make fast context switching using
>>> thread local data.

Maintain a single, priority-ordered queue of pending work
items. Create one logical thread per physical thread or core
and have each grab the next element from the top of the queue.

New requests are inserted in priority order into the queue.

Note that this can result in starvation of lower-priority requests
if higher-priority requests arrive with sufficient frequency,
but it avoids the extremely silly idea of using thread priorities.
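The scheme above can be sketched in a few dozen lines of C++11. Everything here (WorkItem, PriorityWorkQueue, the larger-is-more-urgent convention) is invented for illustration, not taken from anyone's actual code:

```cpp
// A minimal sketch: one priority-ordered queue of pending work items,
// shared by one worker per hardware thread.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <vector>

struct WorkItem {
    int priority;                 // larger value = more urgent
    std::function<void()> run;
};

struct ByPriority {
    bool operator()(const WorkItem& a, const WorkItem& b) const {
        return a.priority < b.priority;   // max-heap: highest first
    }
};

class PriorityWorkQueue {
public:
    // New requests are inserted in priority order (the heap does it).
    void submit(WorkItem item) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(item));
        }
        cv_.notify_one();
    }

    // Each worker blocks here; returns false only after shutdown()
    // once the queue has drained.
    bool take(WorkItem& out) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return done_ || !q_.empty(); });
        if (q_.empty())
            return false;
        out = q_.top();
        q_.pop();
        return true;
    }

    void shutdown() {
        {
            std::lock_guard<std::mutex> lk(m_);
            done_ = true;
        }
        cv_.notify_all();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::priority_queue<WorkItem, std::vector<WorkItem>, ByPriority> q_;
    bool done_ = false;
};
```

Each worker would then be a std::thread looping `WorkItem w; while (q.take(w)) w.run();`, with one such thread created per hardware thread (std::thread::hardware_concurrency()).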

scott
From: Peter Olcott on

"Scott Lurndal" <scott(a)slp53.sl.home> wrote in message
news:D1Kun.92145$DU3.83045(a)news.usenetserver.com...
> Maintain a single, priority-ordered queue of pending work
> items. Create one logical thread per physical thread or core
> and have each grab the next element from the top of the queue.
>
> New requests are inserted in priority order into the queue.
>
> Note that this can result in starvation of lower-priority requests
> if higher-priority requests arrive with sufficient frequency,
> but it avoids the extremely silly idea of using thread priorities.
>
> scott

That won't work, because this design could start on the 3.5-minute
job and not even be aware of the 50 ms job until it completes. I
think that I will have a simple FIFO queue at first. Later on I
will have at least two FIFO queues, where one preempts the other.
There won't be very many 3.5-minute jobs, and all of the other
jobs will be fast enough to start with.
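The two-FIFO-queue plan could look roughly like this (all names invented for the example). One caveat worth noting: pop() only lets short jobs win *between* jobs; a running 3.5-minute job would still have to poll short_pending() at its own checkpoints if it is to be preempted mid-run.

```cpp
// A rough sketch of two FIFO queues where the short-job queue takes
// precedence over the long-job queue.
#include <deque>
#include <functional>
#include <mutex>

using Job = std::function<void()>;

class TwoLevelQueue {
public:
    void push_short(Job j) {
        std::lock_guard<std::mutex> lk(m_);
        short_.push_back(std::move(j));
    }
    void push_long(Job j) {
        std::lock_guard<std::mutex> lk(m_);
        long_.push_back(std::move(j));
    }

    // Short jobs always win; long jobs run only when no short work waits.
    bool pop(Job& out) {
        std::lock_guard<std::mutex> lk(m_);
        std::deque<Job>& q = !short_.empty() ? short_ : long_;
        if (q.empty())
            return false;
        out = std::move(q.front());
        q.pop_front();
        return true;
    }

    // A long-running job can call this at checkpoints and yield if true.
    bool short_pending() {
        std::lock_guard<std::mutex> lk(m_);
        return !short_.empty();
    }

private:
    std::mutex m_;
    std::deque<Job> short_, long_;
};
```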


From: Scott Lurndal on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
>
>That won't work, because this design could start on the 3.5-minute
>job and not even be aware of the 50 ms job until it completes. I
>think that I will have a simple FIFO queue at first. Later on I
>will have at least two FIFO queues, where one preempts the other.
>There won't be very many 3.5-minute jobs, and all of the other
>jobs will be fast enough to start with.

Given that all modern processors have at least two hardware
threads/cores, for an application like this you'll probably
want a 4-, 6- or 12-core processor (Shanghai, Istanbul or
Magny-Cours).

With 12 hardware threads, you'd need twelve 3.5-minute jobs all
running at the same time to starve the 50 ms job. In any case,
have two queues then, one for short jobs and one for long jobs,
and partition the hardware threads appropriately amongst the
queues.

scott
From: Peter Olcott on

"Scott Lurndal" <scott(a)slp53.sl.home> wrote in message
news:1FKun.92148$DU3.33231(a)news.usenetserver.com...
> Given that all modern processors have at least two hardware
> threads/cores, for an application like this you'll probably
> want a 4-, 6- or 12-core processor (Shanghai, Istanbul or
> Magny-Cours).
>
> With 12 hardware threads, you'd need twelve 3.5-minute jobs all
> running at the same time to starve the 50 ms job. In any case,
> have two queues then, one for short jobs and one for long jobs,
> and partition the hardware threads appropriately amongst the
> queues.
>
> scott

I am only focusing on the design of the preemptive scheduling
for now. Also, the initial hardware will only have a single core
with hyperthreading.


From: Scott Lurndal on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> writes:
>
>I am only focusing on the design of the preemptive
>scheduling now. Also the initial hardware will only have a
>single core with hyperthreading.
>

You are making this far more complicated than necessary. Have fun.

scott