From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:jca9s5paj2239743msi6ils7rq6ijhems4(a)4ax.com...
> See below...
> On Mon, 12 Apr 2010 19:53:40 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>If jobs come in at exactly the same rate at which they can
>>be processed, including every little nuance of processing
>>overhead, then the queue grows to infinite length? I don't
>>see how this can occur. Could you explain it, or at least
>>point me to a link that explains it?
> ****
> It is one of the fundamental results in queueing theory
> that we proved, mathematically,
> using "sound reasoning" (because queueing models have
> closed-form analytic solutions) in
> the first week of the queueing theory section of the O.R.
> course. Go get a book on
> elementary queueing theory. This is considered one of the
> important results because it is
> so counterintuitive, which just proves how "intuitive
> reasoning" is not "sound reasoning".
> It has been more than 40 years since I proved that
> theorem, and I no longer recall the
> details of the proof, but the result was easy to remember.
> (Actually, the exercise goes
> something like "compute the maximum queue length if the
> interarrival time exactly equals
> the processing time" and it has a singularity that means
> it goes infinite; furthermore, if
> you build a discrete-event simulation model, the program
> can graph the queue size, and
> before it runs out of memory, it will show the size
> climbing to infinity. In the simplest
> form, you just use a counter, and therefore, at least in
> those days when we submitted the
> programs on punched cards, the simulation ran out of time
> and was kicked off the machine
> with the curve still climbing. We had to do this to show
> that the singularity was not
> just a mathematical artifact that didn't actually work out
> in practice!)
>
> joe

I will take your word for it. The reasoning that you provided
above seemed sufficiently sound. I am estimating that this
would only require my arrival rate to be the slightest trace
less than capacity, or does the margin have to be measurably
larger than that?
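
For the record, the textbook result Joe is pointing at is that for
the simplest single-server queue the expected number of jobs in the
system is rho / (1 - rho), where rho is the arrival rate divided by
the service rate, so the expression blows up as rho approaches 1 (at
99.9% load it is already about 1000 jobs). A minimal discrete-event
simulation along the lines he describes, written as a C++ sketch with
exponential interarrival and service times (my own assumption, not
his original exercise), shows the same behavior:

// Sketch of a single-server (M/M/1) discrete-event simulation: watch
// the queue length climb when the arrival rate equals the service
// rate (rho = 1). Illustrative only, not the original classroom
// exercise.
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 gen(42);
    const double arrival_rate = 1.0;   // jobs per second
    const double service_rate = 1.0;   // jobs per second -> rho = 1.0
    std::exponential_distribution<double> interarrival(arrival_rate);
    std::exponential_distribution<double> service(service_rate);

    double next_arrival   = interarrival(gen);
    double next_departure = 1e300;     // "infinity": server is idle
    long   queue          = 0;         // jobs in the system

    for (long events = 0; events < 10000000; ++events) {
        if (next_arrival < next_departure) {     // arrival comes first
            double now = next_arrival;
            if (queue == 0)
                next_departure = now + service(gen);
            ++queue;
            next_arrival = now + interarrival(gen);
        } else {                                 // departure comes first
            double now = next_departure;
            --queue;
            next_departure = (queue > 0) ? now + service(gen) : 1e300;
        }
        if (events % 1000000 == 0)
            printf("event %ld  queue length %ld\n", events, queue);
    }
    return 0;
}

Bumping service_rate to 1.25 (about 80% load) keeps the printed queue
length small; at exactly 1.0 it keeps wandering upward, which matches
the "curve still climbing" behavior described above.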

> ****
>>
>>> ****
>>> Joseph M. Newcomer [MVP]
>>> email: newcomer(a)flounder.com
>>> Web: http://www.flounder.com
>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:vta9s55ldp93l32ca2imcpf6597gpjub7s(a)4ax.com...
> See below...
> On Mon, 12 Apr 2010 19:46:38 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>So Linux thread time slicing is infinitely superior to
>>Linux
>>process time slicing?
> ****
> I do not distinguish between threads in a single process
> and threads which are in
> different processes. This distinction apparently exists
> only in your own mind, probably
> caused by the fact that you have confused the
> pseudo-threads library with real threads.
> *****
>>
>>One of my two options for implementing priority scheduling
>>was to simply have the OS do it by using Nice to set the
>>process priority of the process that does the high
>>priority
>>jobs to a number higher than that of the lower priority
>>jobs.
> ****
> This has system-wide implications, and can interfere with
> the correct behavior of every
> other task the system is managing. This includes your Web
> server, and any other process
> the system is running. You have to be EXTREMELY careful
> how you muck around with thread
> priorities.
> joe

OK. I was considering one of two alternatives: either raising
the priority of the web server and of one OCR process by a
single nice step (more negative values mean higher priority),
or lowering the priority of the other three processes. I will
probably test both and see whether the more conservative one
gives me enough of what I need.
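
If I do the in-process variant, the sketch below is roughly what I
have in mind; the single-step change is only an example, and lowering
the nice value below the default needs root or CAP_SYS_NICE:

// Sketch: lower this process's nice value by one step (i.e. raise
// its priority). The one-step change is illustrative, not a
// recommendation.
#include <sys/resource.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main()
{
    errno = 0;
    int current = getpriority(PRIO_PROCESS, 0);   // 0 = this process
    if (current == -1 && errno != 0) {
        perror("getpriority");
        return 1;
    }
    if (setpriority(PRIO_PROCESS, 0, current - 1) == -1) {
        fprintf(stderr, "setpriority: %s\n", strerror(errno));
        return 1;
    }
    printf("nice value changed from %d to %d\n", current, current - 1);
    return 0;
}

Roughly the same thing can be done from the shell with
sudo renice -1 -p <pid>, which sets a process's nice value to -1.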

> ****
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:d5b9s5lhtm18kffjki39ks9bilrbdogio1(a)4ax.com...
> See below....
> On Mon, 12 Apr 2010 23:22:21 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
>>news:MPG.262d7a6da11542c4989872(a)news.sunsite.dk...
>>> In article
>>> <pYidndO7AuRyI17WnZ2dnUVZ_rednZ2d(a)giganews.com>,
>>> NoSpam(a)OCR4Screen.com says...
>>>
>>> [ ... ]
>>>
>>>> So Linux thread time slicing is infinitely superior to
>>>> Linux
>>>> process time slicing?
>>>
>>> Yes, from the viewpoint that something that exists and
>>> works (even
>>> poorly) is infinitely superior to something that simply
>>> doesn't exist
>>> at all.
>>
>>David Schwartz from the Linux/Unix groups seems to
>>disagree. I can't post a Google Groups link because the
>>post hasn't shown up there yet. Here is the conversation.
> ****
> I don't see anything here that matters; he explains that
> there is a very complex
> scheduling mechanism (which, apparently, you can
> completely predict the behavior of), and
> he does not mention the effects that playing with
> priorities would have on the rest of the
> system (and if you have a closed-form analytic model that
> lets you predict this perfectly,
> this is another reason you should enroll in a PhD program,
> because nobody else has such a
> methodology available, so you should get a PhD for being
> able to show this). If you do
> not have such a closed-form solution, you have no "sound
> reasoning" to base your decisions
> on.
>
> What I see below is a detailed handwave on how the linux
> scheduler works. But nothing
> that is really useful to tell you what priorities to set.
> Or what will happen if you set
> them.

OK, so this requires further study and testing.
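
The kind of test I have in mind is sketched below: run the same
CPU-bound worker at two nice values and compare how much work each
gets done in a fixed window. The +5 increment and the 10-second
window are arbitrary placeholders, and the comparison is only
meaningful when both workers contend for the same core (e.g. run
under taskset -c 0):

// Rough test: two CPU-bound workers, one at the default nice value
// and one reniced upward, each reporting how much work it finished.
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <ctime>

static void worker(int nice_increment)
{
    if (nice_increment != 0)
        nice(nice_increment);            // raising nice needs no privilege
    time_t stop = time(NULL) + 10;       // spin for ~10 seconds
    unsigned long long count = 0;
    volatile double x = 1.0;
    while (time(NULL) < stop) {
        for (int k = 0; k < 100000; ++k)
            x = x * 1.000001 + 0.000001; // simulated CPU-bound work
        count += 100000;
    }
    printf("nice +%d finished %llu iterations\n", nice_increment, count);
    _exit(0);
}

int main()
{
    if (fork() == 0) worker(0);          // normal-priority worker
    if (fork() == 0) worker(5);          // lower-priority worker
    wait(NULL);
    wait(NULL);
    return 0;
}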


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:ldb9s5tc2b5ravig9v8p6ql7qn45jmgr4i(a)4ax.com...
> See below...
> On Mon, 12 Apr 2010 19:19:18 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>I am very happy to pay attention to your sound reasoning.
>>Every time that I see complete sound reasoning that
>>refutes
>>my position, I immediately change my position.
>>
> ****
> Has it occurred to you to try to do sound reasoning on
> your own?
> joe
> ****

I am doing the best that I can with the limited (but
growing) knowledge that I have.

>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:cfb9s5dn105a5i8j0ori6us9od2k8r46ht(a)4ax.com...
> See below...
> On Mon, 12 Apr 2010 19:39:54 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:acs6s59011mhn54fbp4sbbttiegs2t6o4f(a)4ax.com...
>>> See below...
>>> On Mon, 12 Apr 2010 09:47:29 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>
>>> How is a single-core 2-hyperthreaded CPU different
>>> logically than a 2-core
>>> non-hyperthreaded system (Hint: the hyperthreaded
>>> machine
>>> has about 1.3x the performance
>>> of a single-core machine but the dual-processor system
>>> has
>>> about 1.8x the performance).
>>> But logically, they are identical! The reduction in
>>> performance is largely due to
>>> cache/TLB issues
>>
>>There you go, sound reasoning. I didn't know that, but
>>the reasoning makes sense.
> ****
> You could have found all this out on your own. It has
> been known for years, since the
> first hyperthreaded machines came out.
> joe
> ****

I am estimating that the 1.8x factor might improve for
processes that are very CPU-bound yet need little memory.
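
Rather than guess, I can measure it: fork N copies of a
small-footprint, CPU-bound worker and see how far the per-worker
throughput falls when going from one copy to two. The busy-work loop
below is a stand-in for the real OCR inner loop, not my actual code:

// Rough scaling test: fork N CPU-bound workers with a tiny working
// set, let each run for a fixed time, and compare the per-worker
// counts against the single-worker case.
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>
#include <ctime>

static void worker(int id)
{
    time_t stop = time(NULL) + 10;       // run for ~10 seconds
    unsigned long long count = 0;
    volatile double x = 1.0;
    while (time(NULL) < stop) {
        for (int k = 0; k < 100000; ++k)
            x = x * 1.000001 + 0.000001; // tiny working set, pure CPU
        count += 100000;
    }
    printf("worker %d: %llu iterations\n", id, count);
    _exit(0);
}

int main(int argc, char** argv)
{
    int workers = (argc > 1) ? atoi(argv[1]) : 1;
    for (int i = 0; i < workers; ++i)
        if (fork() == 0)
            worker(i);
    while (wait(NULL) > 0)
        ;                                // reap all the children
    return 0;
}

Comparing the one-worker and two-worker totals on a hyperthreaded
single core versus a true dual core should show roughly the 1.3x
versus 1.8x gap Joe describes.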

>>
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm