From: Joseph M. Newcomer on
See below...
On Wed, 14 Apr 2010 10:00:13 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
>news:MPG.262f20ea7e576d9a98987b(a)news.sunsite.dk...
>> In article
>> <LcqdnfKth5NkuVjWnZ2dnUVZ_jidnZ2d(a)giganews.com>,
>> NoSpam(a)OCR4Screen.com says...
>> http://users.crhc.illinois.edu/nicol/ece541/slides/queueing.pdf
>>
>> Slide 19 has the equation that leads to one of the conclusions Joe
>> posted (stated there as: "...the response time approaches infinity
>> as lambda approaches mu"). Slide 10 contains the definitions of
>> lambda and mu -- the mean arrival rate and mean service rate,
>> respectively.
>
>According to the graph on slide 21, at somewhere between 80% and 95%
>of capacity the queue length is between about 5 and 20.
>
>> Slides 31 and 33 show graphs comparing response time with a single
>> queue versus response time with multiple queues (the red line is
>> the response time for multiple queues, the green line for a single
>> queue). By some stroke of amazing luck, slide 31 fits your scenario
>> _exactly_, down to even using exactly the number of server
>> processes that you've described (4), so the graph applies
>> _precisely_ to your situation, and shows exactly how much worse a
>> response time you can expect using one queue for each of your four
>> server processes, versus one queue feeding all four server
>> processes.
>>
>> --
>> Later,
>> Jerry.
>
>I studied it for at least one hour. One huge false
>assumption with this as applied to my problem is found on
>slide 30:
>
>The service rate Mu is fixed. In my case the service rate is
>not a fixed constant but proportionally increases as fewer
>processes are running.
****
OK, look at it this way: if you get 3 10ms jobs and 1 3-minute job, the total processing
time is 3 x 0.010s + 180s = 180.030s, for a mean of 45.0075 sec/job. You do remember how
to compute a mean, don't you? Add up all the values and divide by the quantity of values.
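
(Written out as a formula, with three 10 ms jobs and one 180 s job:)

  \bar{S} = \frac{3 \times 0.010\,\text{s} + 180\,\text{s}}{4}
          = \frac{180.030\,\text{s}}{4}
          \approx 45.0075\ \text{s/job}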

This is why SQSS won't work well. But SQMS can work if you apply an anti-starvation
algorithm.
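
(A minimal sketch of that idea, assuming a thread-per-server design and an
aging rule invented purely for illustration -- this is not Peter's
architecture or anything Joe specified. One queue feeds four workers; a
job's effective priority rises the longer it waits, so cheap high-priority
work cannot starve a long job forever.)

#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

struct Job {
    int base_priority;                                // higher = more urgent
    std::chrono::steady_clock::time_point enqueued;   // when the job was queued
    std::function<void()> work;                       // empty == shutdown signal
};

class SqmsQueue {                                     // one queue, many servers
public:
    void push(Job job) {
        { std::lock_guard<std::mutex> lock(m_); jobs_.push_back(std::move(job)); }
        cv_.notify_one();
    }
    Job pop() {                                       // blocks until a job exists
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !jobs_.empty(); });
        const auto now = std::chrono::steady_clock::now();
        // Anti-starvation by aging: each second spent waiting adds one point
        // of effective priority, so old low-priority jobs win eventually.
        auto effective = [&](const Job& j) {
            return j.base_priority +
                   std::chrono::duration<double>(now - j.enqueued).count();
        };
        auto best = jobs_.begin();
        for (auto it = jobs_.begin(); it != jobs_.end(); ++it)
            if (effective(*it) > effective(*best)) best = it;
        Job job = std::move(*best);
        jobs_.erase(best);
        return job;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::vector<Job> jobs_;
};

int main() {
    SqmsQueue queue;
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)                       // four servers, one shared queue
        workers.emplace_back([&queue] {
            for (;;) {
                Job job = queue.pop();
                if (!job.work) return;                // shutdown signal
                job.work();
            }
        });
    const auto now = std::chrono::steady_clock::now();
    const int priorities[] = {1, 10, 1, 10};          // mixed cheap/urgent work
    for (int p : priorities)
        queue.push({p, now, [p] { std::printf("ran a priority-%d job\n", p); }});
    for (int i = 0; i < 4; ++i)
        queue.push({0, now, nullptr});                // one shutdown token per worker
    for (auto& w : workers) w.join();
    return 0;
}

Because all four workers drain the same queue, no worker sits idle while
another worker's private queue is backed up behind a 3-minute job; the
aging term is what keeps the 3-minute job from being postponed forever.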

So the mean processing time increases, and this tends to back up the prediction of
throughput under that model. And you still don't understand the basic meanings of lambda
and mu.
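
(For reference, in the standard M/M/1 model the slides appear to use, with
lambda the mean arrival rate and mu the mean service rate, the steady-state
mean response time is

  W = \frac{1}{\mu - \lambda}, \qquad \rho = \frac{\lambda}{\mu} < 1,

which stays finite only while lambda is strictly below mu and grows without
bound as lambda approaches mu.)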
****
>
>It does this because some of these processes have already
>completed all of the jobs in their separate queues, thus
>providing all of the CPU time that they were using to the
>remaining processes.
>
>Another possibly very significant false assumption is that
>the arrival rate is anything at all like Lambda / m, where m
>is the number of queues. The actual arrival rate at any one
>queue is completely independent of all of the other queues.
>There are four completely separate and distinctly different
>arrival rates that have nothing at all to do with each
>other.
****
You have false-assumption-fixation. Is this part of your "refute" mode? I pointed out a
fundamental theorem, and you say "It doesn't apply" without any evidence to the contrary,
because you have no actual running system providing this service. So you don't actually
KNOW what your average arrival rate is!
****
>
>I am sure that correcting for these two false assumptions
>would change the results substantially. I can not accurately
>quantify the degree that it would change these results
>without further study.
****
We call it "building an easily tunable architecture, testing it, and adjusting it to
provide optimum performance". But you are so convinced that your architecture is perfect
that you don't really want to hear that you should build one that does not require complex
changes to tune it.
joe
****
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
news:MPG.262fb9ae1969571c989882(a)news.sunsite.dk...
> In article
> <r6CdnXmDjpjiRVjWnZ2dnUVZ_s2dnZ2d(a)giganews.com>,
> NoSpam(a)OCR4Screen.com says...

>> Another possibly very significant false assumption is that the
>> arrival rate is anything at all like Lambda / m, where m is the
>> number of queues. The actual arrival rate at any one queue is
>> completely independent of all of the other queues. There are four
>> completely separate and distinctly different arrival rates that
>> have nothing at all to do with each other.
>
> You've clearly misread what's there. Keep in mind that what they're
> talking about is a *mean*. So where it talks about the arrival rate
> being lambda for the single queue model, and lambda/m for the multi-
> queue model, it's NOT saying every queue in the multi-queue model
> gets exactly the same number of jobs -- it's just saying that the

Actually it is literally saying exactly that;
that is exactly what Lambda / 4 means.

"/" is the division symbol, so Lambda / 4 means
Lambda divided by 4.

The actual mean value for each queue will be more like:
98% of Lambda for the free job queue,
1.99% of Lambda for both categories of paying job queue, and
0.01% of Lambda for the build-a-new-DFA job queue.
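
(For what it's worth, the two readings are compatible in one narrow sense:
however unevenly the traffic splits across the four queues, the average of
the four per-queue rates is still Lambda / 4, which is all the slides' mean
arrival rate per queue claims:

  \frac{\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4}{4} = \frac{\lambda}{4}
  \qquad\text{when}\qquad
  \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = \lambda.)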


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:fu1cs59pa2dnv9l7j8bvbc8prui3akm9ec(a)4ax.com...
> See below
> On Tue, 13 Apr 2010 21:14:16 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:ad6as5poekc7o1ldfccdtv2v8684rt1f4u(a)4ax.com...
>>> See below....
>>> On Tue, 13 Apr 2010 00:08:10 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>>>Yes and if you don't still remember the details of this there is
>>>>no way to explain these details. Could you maybe try to find me a
>>>>good link? I don't know enough about this stuff to know a good
>>>>link from a bad one.
>>> ****
>>> I don't need to remember the details; I only have to remember the
>>> results. Sometimes you need to do massive amounts of computation
>>> to derive a single bit, but once that bit is derived, you can
>>> immediately map a set of input conditions to that bit without
>>> having to work through the entire derivation. It is in this way we
>>> derive patterns we know work, and reject patterns we know don't
>>> work. Go get a book on queueing theory and study it. I did, many
>>> years ago, and
>>
>>> I have a set of known-good patterns and known-bad patterns, and
>>> all I need to know are the patterns.
>>
>>Not whether or not these patterns still apply after decades of
>>technological advances, or whether or not they apply in a particular
>>situation? That might even work OK most of the time, but it is
>>highly doubtful that this would always work well.
>>
> ****
> A model that is derived from a set of axioms which are
> stable across technologies will
> remain valid. But hey, you were the one that insisted on
> "sound reasoning" and "proofs"
> like in geometry.
>
> So if I have a pattern I know is mathematically sound, I
> don't need to worry about it
> being made obsolete by technological changes.
>
> You are flailing about for justification for your bad
> decisions by trying to figure out
> some reason my decisions are wrong.
>
> In fact, SQMS works even BETTER in modern architectures
> than it did decades ago!
> joe

When you referred to a pattern, I was envisioning something along the
lines of the Gang of Four, not a mathematical basis. Even when a
mathematical model is used, care must be taken to ensure that the
simplifying assumptions apply to the case at hand.

This excellent link that Jerry provided:
http://users.crhc.illinois.edu/nicol/ece541/slides/queueing.pdf

Its discussion of the benefits of SQMS over MQMS shows drastic
differences in relative performance, yet makes simplifying
assumptions that clearly do not apply to my case.

> ****
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
news:MPG.262f871d326d2f3998987d(a)news.sunsite.dk...
> In article
> <KZCdnZAzg6rOX1jWnZ2dnUVZ_rqdnZ2d(a)giganews.com>,
> NoSpam(a)OCR4Screen.com says...
>
> As to why the service time goes to infinity when the arrival rate
> approaches the service rate: it comes down to this: you're comparing
> the _peak_ processing rate to the _average_ arrival rate.
>
> If the system is ever idle, even for a moment, that means the
> _average_ processing rate has dropped (to zero for the duration of
> the idle period) -- but since the average arrival rate has not
> dropped, the processor is really getting behind.
>
> In a practical system, the average processing rate will always be at
> least a little lower than the peak processing rate -- which means
> that over time, the latency for each job (i.e. the time from arrival
> to result) will rise to infinity.
>
> --
> Later,
> Jerry.

I see that you are right about this now. When I had to carefully
think this through again while forming my reply to Joe, it suddenly
gelled in my mind and made perfect sense.

What was screwing me up was my temporary inability to sufficiently
conceptualize the stochastic nature of the arrival rate. You could
have an average arrival rate of five cars per hour that manifests as
no cars at all for the first four hours and ten cars per hour during
the last four hours.
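
(One way to see the effect Jerry describes: the sketch below simulates a
single queue whose interarrival and service times are exponentially
distributed -- the M/M/1 assumptions from the slides -- using the Lindley
recursion. The service rate of 10 jobs/sec and the arrival rates are
made-up numbers, chosen only to show the mean wait blowing up as lambda
approaches mu.)

// Minimal M/M/1 sketch (nobody's production code): the mean waiting time
// grows without bound as the arrival rate lambda approaches the service
// rate mu. All rates below are hypothetical, chosen just for illustration.
#include <algorithm>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    const double mu = 10.0;                            // mean service rate (jobs/sec)
    const double lambdas[] = {5.0, 8.0, 9.0, 9.9};     // arrival rates approaching mu
    for (double lambda : lambdas) {
        std::exponential_distribution<double> interarrival(lambda), service(mu);
        double wait = 0.0, total = 0.0;
        const int jobs = 200000;
        for (int i = 0; i < jobs; ++i) {
            // Lindley recursion: the next job waits for whatever backlog is
            // left over after the gap until it arrives.
            wait = std::max(0.0, wait + service(rng) - interarrival(rng));
            total += wait;
        }
        std::printf("lambda=%.1f  simulated mean wait %.3f s  (M/M/1 theory %.3f s)\n",
                    lambda, total / jobs, lambda / (mu * (mu - lambda)));
    }
    return 0;
}

With the numbers above, theory gives mean waits of 0.1 s, 0.4 s, 0.9 s and
9.9 s respectively; the simulated values approach those, though the
lambda = 9.9 case converges slowly because the queue spends long stretches
deeply backed up -- the "latency rises toward infinity" behaviour in
miniature.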


From: Jerry Coffin on
In article <e4S4Gt$2KHA.4540(a)TK2MSFTNGP04.phx.gbl>, sant9442@nospam.gmail.com says...

[ ... ]

> Jerry, I don't wish to act or play moderator, by no means do I wish to
> show any disrespect here. He got his answers to a wide degree, but no
> amount of insight by scientists, engineers and experts in the field is
> good enough. It's really time to ignore this troll.

As much as I prefer to give people the benefit of the doubt, I'm
quickly realizing that you're probably right.

--
Later,
Jerry.