From: Hector Santos on
Jerry Coffin wrote:

> In article <U6qdndrIQtU5Q1jWnZ2dnUVZ_judnZ2d(a)giganews.com>,
> NoSpam(a)OCR4Screen.com says...
>
> [ ... ]
>
>> That makes complete sense, but, the technical author
>> explicitly used the term (on slide 10) "mean job arrival
>> rate", (Lambda) and "mean service rate" (Mu).
>
> There are two different "means" in play here. The mean he's using is
> the mean of the processing rates for different kinds of jobs. For
> example, let's assume your processing rate is 10 ms per page. Let's
> also assume that your job size averages out to 1.5 pages. That gives
> a mean processing time of 15 ms, and therefore your mu is ~66.7 jobs
> per second (i.e. 1/0.015).
>
> Despite that, when your processor doesn't have a job to do, it can't
> do anything -- and therefore, the fact that it _could_ process ~66.7
> jobs per second doesn't change the fact that for that duration, it IS
> processing exactly 0 jobs per second.
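The arithmetic quoted above is easy to check. A minimal Python sketch using Jerry's illustrative figures (10 ms per page, jobs averaging 1.5 pages):

```python
# Mean service rate (mu) from the figures in Jerry's example.
per_page_seconds = 0.010   # 10 ms of processing per page
mean_pages_per_job = 1.5   # jobs average 1.5 pages

mean_service_time = per_page_seconds * mean_pages_per_job  # 0.015 s per job
mu = 1.0 / mean_service_time                               # jobs per second

print(round(mu, 1))  # prints 66.7
```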


Jerry, I don't wish to act or play moderator, and by no means do I wish to
show any disrespect here. He got his answers to a wide degree, but no
amount of insight from scientists, engineers and experts in the field is
good enough. It's really time to ignore this troll.

--
HLS
From: Joseph M. Newcomer on
See below
On Tue, 13 Apr 2010 21:14:16 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:ad6as5poekc7o1ldfccdtv2v8684rt1f4u(a)4ax.com...
>> See below....
>> On Tue, 13 Apr 2010 00:08:10 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>Yes and if you don't still remember the details of this
>>>there is no way to explain these details. Could you maybe
>>>try to find me a good link, I don't know enough about this
>>>stuff to know a good link from a bad one.
>> ****
>> I don't need to remember the details; I only have to
>> remember the results. Sometimes you
>> need to do massive amounts of computation to derive a
>> single bit, but once that bit is
>> derived, you can immediately map a set of input conditions
>> to that bit without having to
>> work through the entire derivation. It is in this way, we
>> derive patterns we know work,
>> and reject patterns we know don't work. Go get a book on
>> queueing theory and study it. I
>> did, many years ago, and
>
>> I have a set of known-good patterns and known-bad
>> patterns, and
>> all I need to know are the patterns.
>
>Not whether or not these patterns still apply after decades
>of technological advances, or whether or not they apply in a
>particular situation? That might even work OK as much as
>most of the time, it is highly doubtful that this would
>always work well.
>
****
A model that is derived from a set of axioms which are stable across technologies will
remain valid. But hey, you were the one who insisted on "sound reasoning" and "proofs"
like in geometry.

So if I have a pattern I know is mathematically sound, I don't need to worry about it
being made obsolete by technological changes.

You are flailing about for justification for your bad decisions by trying to figure out
some reason my decisions are wrong.

In fact, SQMS works even BETTER in modern architectures than it did decades ago!
joe
****
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Wed, 14 Apr 2010 08:25:38 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
>news:MPG.262f20ea7e576d9a98987b(a)news.sunsite.dk...
>> In article
>> <LcqdnfKth5NkuVjWnZ2dnUVZ_jidnZ2d(a)giganews.com>,
>> NoSpam(a)OCR4Screen.com says...
>>
>> [ ... ]
>>
>>> Not whether or not these patterns still apply after
>>> decades
>>> of technological advances, or whether or not they apply
>>> in a
>>> particular situation? That might even work OK as much as
>>> most of the time, it is highly doubtful that this would
>>> always work well.
>>
>> You've already been given the term to Google for
>> ("queueing theory").
>> Do us the courtesy of spending a *Little* (sorry Joe, I
>> just couldn't
>> resist the pun) time reading up on at least the most
>> minimal basics
>> of what the theory covers.
>>
>> The bottom line is that while technological advances can
>> (and
>> constantly do) change the values of some of the variables
>> you plug
>> into the equations, they have precisely _zero_ effect on
>> the theory or
>> equations themselves.
>>
>> Since you're too lazy to look for yourself, here's one
>> page that
>> covers the basics reasonably well:
>
>I did look yesterday. I spent at least two hours looking.
>I found this:
>
>http://docs.google.com/viewer?a=v&q=cache:Hb_P22Cj9OAJ:citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.93.429%26rep%3Drep1%26type%3Dpdf+multi+queue+multi+server&hl=en&gl=us&pid=bl&srcid=ADGEESh1kerH3RGqAvIEul4ryHpwxxU5HdWzS3edrtXW764CJUPudOBFnvTmUvl7W3uBXqe046N1tNkirmGqVOkUlmWQWTZQgLLwQHf5LolcXX43mvOEc3k0wR55vXqYAklq8Fu2-qgL&sig=AHIEtbSrBAf6HW8XDtNinTOdsNx5lf9tNQ
>
>>
>> http://users.crhc.illinois.edu/nicol/ece541/slides/queueing.pdf
>>
>> Slide 19 has the equation that leads to one of the
>> conclusions Joe
>> posted (stated there as: "...the response time approaches
>> infinity as
>> lambda approaches mu" (Slide 10 contains the definitions
>> of lambda
>> and mu -- the mean arrival rate and mean service rate
>> respectively).
>
>Joe said that this result is counter-intuitive.
>Bill can work on ten cars an hour, how long will it take
>Bill to finish his work if ten cars arrive per hour for four
>hours?
>
****
Plug the values into the equation. You seem to be claiming that "sound reasoning" trumps
solid mathematical proof techniques, yet you were the one insisting we give you solid
proofs. But when the solid proof contradicts your intuition, you claim your flawed
reasoning must be correct? What did I miss here?
****
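Plugging the values in is a one-liner. For an M/M/1 queue the mean time in system is W = 1/(mu - lambda); a minimal sketch using Bill's service rate (mu = 10 cars per hour) shows why the response time grows without bound as lambda approaches mu:

```python
def mm1_response_time(lam, mu):
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda).
    Only meaningful for lam < mu (a stable queue)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (mu - lam)

mu = 10.0  # Bill can service 10 cars per hour
for lam in (5.0, 9.0, 9.9, 9.99):
    print(lam, round(mm1_response_time(lam, mu), 2))
# prints (hours in system):
# 5.0 0.2
# 9.0 1.0
# 9.9 10.0
# 9.99 100.0
```

At lambda = mu = 10 (Peter's "ten cars arrive per hour" case) the denominator is zero: the steady-state response time is unbounded, which is exactly the slide's conclusion.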
>(Six and one half hours because Bill gets tired quickly).
>Note there must be a [because] somewhere, otherwise it must
>be four hours. I never did get to this [because] other than
>because of math magic.
****
Note that by putting an upper bound on the number of arrivals (10 cars an hour for 4
hours) you violate the basic premise, which is that there is a continuous arrival, and you
toss in some weird concept about "tiredness" (do your server processes get "tired"?).
****
>
>>
>> Slides 31 and 33 show graphs comparing response time with
>> a single
>> queue versus response time with multiple queues (the red
>> line is the
>> response time for multiple queues, the green line for a
>> single
>> queue). By some stroke of amazing luck, slide 31 fits your
>> scenario
>> _exactly_, down to even using exactly the number of server
>> processes
>> that you've described (4), so the graph applies
>> _precisely_ to your
>> situation, and shows exactly how much worse of response
>> time you can
>> expect using one queue for each of your four server
>> processes, versus
>> one queue feeding all four server processes.
>
>On many of the links that I did find "M" mean Markov, and
>not Multiple. Here is what I did find, it seems to disagree
>with your link and Joe's idea:
>
>http://docs.google.com/viewer?a=v&q=cache:Hb_P22Cj9OAJ:citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.93.429%26rep%3Drep1%26type%3Dpdf+multi+queue+multi+server&hl=en&gl=us&pid=bl&srcid=ADGEESh1kerH3RGqAvIEul4ryHpwxxU5HdWzS3edrtXW764CJUPudOBFnvTmUvl7W3uBXqe046N1tNkirmGqVOkUlmWQWTZQgLLwQHf5LolcXX43mvOEc3k0wR55vXqYAklq8Fu2-qgL&sig=AHIEtbSrBAf6HW8XDtNinTOdsNx5lf9tNQ
>
***
In describing various architectures, we use "S" for "Single" and "M" for "Multiple", such
as SIMD, MIMD, etc. for hardware architectures. So "M" can mean "Markov" in one notation
and "Multiple" in another. I already made it clear by defining the terms that "M"
meant multiple in the notation I was using, so I guess you missed THAT message as well.

I am not responsible for your inability to find, read and/or comprehend information.
joe
*****
>>
>> --
>> Later,
>> Jerry.
>
>I will study your link and see if I can understand it. It
>does show a huge difference between the two models.
>Ultimately there has to be a reason for this that can be
>explained as something other than math magic.
****
Ohh, so the new rule is "You must prove everything you tell me, with solid mathematical
proofs, but if your proofs violate my intuition, then your proofs don't count"

I love the Magic Morphing Validity Criteria. Why don't we just state your basic
philosophy as

"I am always right, and everyone else is always wrong"

and let it go at that?
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Wed, 14 Apr 2010 09:16:01 -0600, Jerry Coffin <jerryvcoffin(a)yahoo.com> wrote:

>In article <MPG.262f871d326d2f3998987d(a)news.sunsite.dk>,
>jerryvcoffin(a)yahoo.com says...
>
>[ ... ]
>
>> As it happens, you can have
>> "M" in a couple of different spots -- one of them does mean "Markov",
>> but (amazingly enough) the other does not.
>
>I should probably correct this on two points: as I pointed out later
>in the same post, upper-case "M" can really occur in either of the
>first two spots, and in either of those cases, it does refer to
>"Markov". A lower-case "m" can occur in the third spot, and when it
>does, it does not refer to "Markov".
***
I had defined shorthand acronyms and given precise definitions of what *I* meant by the
acronyms, but he thinks there is an Absolute God Of Notation, and we may not violate these
absolute rules about notation without special religious dispensation, which apparently I
did not get. Never mind that I am free to define any acronym I want, as long as I define
it and use it properly relative to my definition!
joe

Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Jerry Coffin on
In article <r6CdnXmDjpjiRVjWnZ2dnUVZ_s2dnZ2d(a)giganews.com>,
NoSpam(a)OCR4Screen.com says...
> I studied it for at least one hour. One huge false
> assumption with this as applied to my problem is found on
> slide 30:
>
> The service rate Mu is fixed. In my case the service rate is
> not a fixed constant but, proportionally increases as fewer
> processes are running.
>
> It does this because some of these processes have already
> completed all of the jobs in their separate queues, thus
> providing all of the CPU time that they were using to the
> remaining processes.

You're mis-interpreting the situation. Your overall total *is*
essentially fixed -- from what you've said, something like 10 ms per
page. Devoting that entirely to one OCR engine or splitting it
between four makes no difference to the overall rate.

Yes, if you want to get _really_ technical, your overall rate isn't
absolutely fixed -- you might save a whole microsecond on context
switches by ONLY having one OCR engine active versus having four. If,
as you've said, the static data involved has been reduced
substantially, and most (if not all) can be shared between the OCR
engines, an initial guess would be that this will affect speed by
less than 1%. It's impossible to say without measuring, of course,
but it's _unlikely_ to have a material effect.

> Another possibly very significant false assumption is that
> the arrival rate is anything at all like Lambda / m, where m
> is the number of queues. The actual arrival rate at any one
> queue is completely independent of all of the other queues.
> There are four completely separate and distinctly different
> arrival rates that have nothing at all to do with each
> other.

You've clearly misread what's there. Keep in mind that what they're
talking about is a *mean*. So where it talks about the arrival rate
being lambda for the single queue model, and lambda/m for the multi-
queue model, it's NOT saying every queue in the multi-queue model
gets exactly the same number of jobs -- it's just saying that the
overall average comes out to that. To put it differently, it's saying
that they're assuming the total number of jobs arriving is the same
for both models.

What they're ruling out is, for one example, what Starbucks has found
applies to their queueing -- many people won't get in line if they
see more than about a half dozen (or so) people already in line. In
this case, the total arrival rate changes depending on the current
queue length.
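The gap between the two models can be checked numerically. A sketch comparing a single shared queue feeding c servers (using the standard Erlang C formula for M/M/c) against the same total traffic split evenly over c independent M/M/1 queues; the rates below are illustrative, not taken from the slides:

```python
from math import factorial

def mmc_response_time(lam, mu, c):
    """Mean time in system for an M/M/c queue (Erlang C formula)."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / c                        # per-server utilization; need rho < 1
    if rho >= 1:
        raise ValueError("unstable queue")
    partial_sum = sum(a**k / factorial(k) for k in range(c))
    last = a**c / factorial(c)
    p_wait = last / ((1 - rho) * partial_sum + last)  # prob. a job must queue
    wq = p_wait / (c * mu - lam)       # mean wait in queue
    return wq + 1.0 / mu               # plus mean service time

def split_queue_response_time(lam, mu, c):
    """Mean time in system when arrivals are split evenly over c
    independent M/M/1 queues (no jockeying between queues)."""
    return 1.0 / (mu - lam / c)

lam, mu, c = 300.0, 100.0, 4   # 300 jobs/s total, 100 jobs/s per server
print(mmc_response_time(lam, mu, c))          # single shared queue
print(split_queue_response_time(lam, mu, c))  # four separate queues
# the shared queue's mean response time is markedly lower (~15 ms vs 40 ms here)
```

The shared queue wins because an idle server can always take the next waiting job, whereas with separate queues a server can sit idle while jobs wait in a neighboring queue.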

--
Later,
Jerry.