From: Peter Olcott on
I have to have the 3.5 minute thread run. I have to have the
50 ms thread have absolute priority over the 3.5 minute
thread. Ideally I would like the 50 ms thread to preempt the
3.5 minute thread and have the 3.5 minute thread pick up
exactly where it left off the next time it gets scheduled.

I want the design to be as simple as possible. I want the
design to be as efficient as possible (possibly a carefully
balanced tradeoff between the two).

This only seems to leave one category of solution, when it
is also known that there is plenty of memory to keep all
four levels of priority resident in memory. Do you have any
other ideas that meet these specs?

"David Schwartz" <davids(a)webmaster.com> wrote in message
news:9a03c50d-7d4e-4592-a0f4-88860dfbf3a5(a)u22g2000yqf.googlegroups.com...
On Apr 5, 6:49 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com>
wrote:

> David's proposal will not work because the 3.5 minute thread
> would have to finish before the 50 ms thread could begin.
> This would provide intolerably long response time for the
> 50 ms thread.

Only if your design is such that the 3.5 minute thread would
have to finish before the 50 ms thread began. Forget about
which threads run -- make sure threads are only doing the
work *you* want done (or, at worst, don't do work that's
harmful to the forward progress you need to make). Then it
won't matter much what the scheduler does.

DS


From: David Schwartz on
On Apr 5, 8:44 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:

> I have to have the 3.5 minute thread run. I have to have the
> 50 ms thread have absolute priority over the 3.5 minute
> thread. Ideally I would like the 50 ms thread to preempt the
> 3.5 minute thread and have the 3.5 minute thread pick up
> exactly where it left off the next time it gets scheduled.

I don't think that's what you want. As I explained, that can lead to
priority tasks getting massively delayed when they need to acquire
locks that lower-priority threads held when they were pre-empted.
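
To make the hazard concrete, here is a tiny illustrative
sketch (my names and numbers are made up; the sleep stands in
for the low-priority thread being pre-empted while it holds
the lock):

    // Illustrative sketch only: the sleep stands in for a low-priority
    // thread that has been pre-empted while holding a lock. The
    // high-priority thread is stalled for that whole time, no matter
    // what its scheduling priority is.
    #include <chrono>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex shared_lock;   // e.g. the lock on the account balance

    void low_priority_job() {
        std::lock_guard<std::mutex> hold(shared_lock);
        std::this_thread::sleep_for(std::chrono::seconds(2)); // "pre-empted" here
    }

    void high_priority_job() {
        auto start = std::chrono::steady_clock::now();
        std::lock_guard<std::mutex> wait(shared_lock);  // blocks behind the holder
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::cout << "high-priority job waited " << ms << " ms for the lock\n";
    }

    int main() {
        std::thread low(low_priority_job);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::thread high(high_priority_job);
        low.join();
        high.join();
        return 0;
    }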

> This only seems to leave one category of solution, when it
> is also known that there is plenty of memory to keep all
> four levels of priority resident in memory.

How will that help?

> Do you have any
> other ideas that meet these specs?

Yes, stop fighting yourself. Just code what you want.

If you don't want a thread to be doing the 3.5 minute job, because
there's something more important to do, for the love of god DON'T CODE
IT TO DO THAT JOB.

What you're trying to do is code the thread to do one thing and then
use some clever manipulation to get it to do something else. Just code
the thread to do the work you want done and you won't have to find
some way to pre-empt it or otherwise "trick" it.
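
Here is a minimal sketch of what I mean (the queue and slice
names are made up, but the shape is the point: urgent work is
simply taken first, so nothing ever needs to be pre-empted):

    // Minimal sketch: the worker always drains the urgent (50 ms) queue
    // before advancing the long job by one small slice, so the urgent
    // work never waits behind the 3.5 minute job.
    #include <deque>
    #include <functional>
    #include <mutex>

    std::deque<std::function<void()>> urgent_jobs;  // the 50 ms requests
    std::mutex queue_mutex;

    bool pop_urgent(std::function<void()> &job) {
        std::lock_guard<std::mutex> guard(queue_mutex);
        if (urgent_jobs.empty())
            return false;
        job = std::move(urgent_jobs.front());
        urgent_jobs.pop_front();
        return true;
    }

    // long_job_slice() does a small piece of the 3.5 minute job and
    // returns false once that job is finished.
    void worker(std::function<bool()> long_job_slice) {
        bool long_job_remaining = true;
        while (long_job_remaining) {
            std::function<void()> job;
            while (pop_urgent(job))   // urgent work always goes first
                job();
            long_job_remaining = long_job_slice();
        }
    }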

DS
From: David Schwartz on
On Apr 5, 12:43 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:

> I am trying to find an algorithm that makes cache as
> ineffective as possible. I want to get an accurate measure
> of the worst case performance of my deterministic finite
> automaton.  The only thing that I can think of is to make
> sure that each memory access is more than max cache size
> away from the prior one. This should eliminate spatial
> locality of reference.

You have some fundamental misunderstandings about how cache works. If
you have, say, a 512KB cache, there is no spatial locality difference
between two accesses 511KB apart and two accesses 513KB apart. A 512KB
cache will treat those accesses precisely the same.
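
If it helps, here is a toy model of a direct-mapped 512KB
cache (a deliberate simplification; real caches are
set-associative, but the mapping idea is the same):

    // Toy model of a direct-mapped 512KB cache with 64-byte lines: the
    // line an address maps to depends only on (address / 64) % 8192,
    // not on how far away the previous access was.
    #include <cstdint>
    #include <cstdio>

    const std::size_t kLineSize  = 64;
    const std::size_t kCacheSize = 512 * 1024;
    const std::size_t kNumLines  = kCacheSize / kLineSize;   // 8192

    std::size_t line_index(std::uintptr_t addr) {
        return (addr / kLineSize) % kNumLines;
    }

    int main() {
        std::uintptr_t base = 0x100000;
        std::printf("base    -> line %zu\n", line_index(base));
        std::printf("+511KB  -> line %zu\n", line_index(base + 511 * 1024));
        std::printf("+513KB  -> line %zu\n", line_index(base + 513 * 1024));
        // Neither offset lands in the base address's line; nothing
        // special happens at exactly one cache size apart.
        return 0;
    }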

DS
From: Peter Olcott on

"David Schwartz" <davids(a)webmaster.com> wrote in message
news:2bbaa36f-fb22-492f-8ae1-15ca75943974(a)y17g2000yqd.googlegroups.com...
On Apr 5, 8:44 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com>
wrote:

> I have to have the 3.5 minute thread run. I have to have the
> 50 ms thread have absolute priority over the 3.5 minute
> thread. Ideally I would like the 50 ms thread to preempt the
> 3.5 minute thread and have the 3.5 minute thread pick up
> exactly where it left off the next time it gets scheduled.

--I don't think that's what you want. As I explained, that
--can lead to priority tasks getting massively delayed when
--they need to acquire locks that lower-priority threads held
--when they were pre-empted.

The only lock that I will need is when the customer's
account balance is updated; this could be an exception to
the preemption.

> This only seems to leave one category of solution, when it
> is also known that there is plenty of memory to keep all
> four levels of priority resident in memory.

--How will that help?

The 50 ms task will tend to achieve its 500 ms goal much
more frequently.
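
One way I could make sure those pages actually stay resident
on Linux would be mlockall() (just a sketch of the idea, not
finished code):

    // Sketch only: lock every current and future page of the process
    // into RAM so the 50 ms jobs never stall on a page fault. Needs
    // enough physical memory and, on most systems, suitable privileges
    // or a raised RLIMIT_MEMLOCK.
    #include <sys/mman.h>
    #include <cstdio>

    int main() {
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            std::perror("mlockall");
            return 1;
        }
        // ... start the four priority levels of work here ...
        return 0;
    }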

> Do you have any
> other ideas that meet these specs?

--Yes, stop fighting yourself. Just code what you want.

I never ever begin the slightest trace of coding until the
design is complete.

--If you don't want a thread to be doing the 3.5 minute job,
--because there's something more important to do, for the love
--of god DON'T CODE IT TO DO THAT JOB.

I must do the 3.5 minute job, but every 50 ms job can
preempt it. It is more important that the 50 ms job gets
done (as best as can be accomplished) within its 500 ms
response time than it is that the 3.5 minute job finishes
within its 12 hour response time.

--What you're trying to do is code the thread to do one thing
--and then use some clever manipulation to get it to do
--something else. Just code the thread to do the work you want
--done and you won't have to find some way to pre-empt it or
--otherwise "trick" it.
--
--DS

I always want the 50 ms jobs to have all of the CPU
resources of one whole CPU core to themselves.
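
Something like this sketch is what I have in mind for
dedicating a core on Linux (assuming glibc; the core number
is only an example):

    // Sketch: pin the calling thread (the one that services the 50 ms
    // jobs) to a single core. The 3.5 minute worker would be pinned to
    // the other cores so it never competes for this one.
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    bool pin_current_thread_to_core(int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (rc != 0)
            std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
        return rc == 0;
    }

    int main() {
        pin_current_thread_to_core(1);   // core 1 is only an example
        // ... run the 50 ms job loop on this thread ...
        return 0;
    }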


From: Peter Olcott on
If I have 8 MB of cache, then random access to a 2 GB memory
space will benefit very little from that cache. I know this
from empirical testing and analysis. Do you disagree?
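
The kind of measurement I mean is roughly the following (a
rough sketch, not my actual test code; it assumes about 2 GB
of free RAM and contrasts a sequential walk with a
large-stride walk):

    // Rough sketch of the measurement: walk 2 GB sequentially, then
    // with a large odd stride so successive accesses never share a
    // cache line and no line is revisited before eviction. With an
    // 8 MB cache the strided walk should be far slower per access.
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t n = (2ULL << 30) / sizeof(std::uint32_t); // 2 GB
        std::vector<std::uint32_t> data(n, 1);

        for (int strided = 0; strided <= 1; ++strided) {
            std::uint64_t sum = 0;
            std::size_t idx = 0;
            const std::size_t step = strided ? 104729 : 1;  // ~400 KB jumps vs. sequential
            auto start = std::chrono::steady_clock::now();
            for (std::size_t i = 0; i < n; ++i) {
                sum += data[idx];
                idx = (idx + step) % n;
            }
            std::chrono::duration<double> secs =
                std::chrono::steady_clock::now() - start;
            std::printf("%s: %.2f ns per access (checksum %llu)\n",
                        strided ? "strided   " : "sequential",
                        secs.count() * 1e9 / (double)n,
                        (unsigned long long)sum);
        }
        return 0;
    }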

"David Schwartz" <davids(a)webmaster.com> wrote in message
news:61b0d032-d1ad-43d5-94ac-ff02efd2eca7(a)g30g2000yqc.googlegroups.com...
On Apr 5, 12:43 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com>
wrote:

> I am trying to find an algorithm that makes cache as
> ineffective as possible. I want to get an accurate measure
> of the worst case performance of my deterministic finite
> automaton. The only thing that I can think of is to make
> sure that each memory access is more than max cache size
> away from the prior one. This should eliminate spatial
> locality of reference.

You have some fundamental misunderstandings about how cache
works. If you have, say, a 512KB cache, there is no spatial
locality difference between two accesses 511KB apart and two
accesses 513KB apart. A 512KB cache will treat those accesses
precisely the same.

DS