From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:p62cs516tucg20sul4cfiqqqci00skavuc(a)4ax.com...
> See below...
> On Wed, 14 Apr 2010 08:25:38 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>Joe said that this result is counter-intuitive.
>>Bill can work on ten cars an hour, how long will it take
>>Bill to finish his work if ten cars arrive per hour for
>>four hours?
>>
> ****
> Plug the values into the equation. You seem to be
> claiming that "sound reasoning" trumps
> solid mathematical proof techniques, yet you were
> insisting we give you solid proofs. But
> when the solid proof contradicts your flawed reasoning,
> then you feel you must claim your
> flawed reasoning must be correct? What did I miss here?
> ****
>>(Six and one half hours because Bill gets tired quickly).
>>Note there must be a [because] somewhere, otherwise it
>>must be four hours. I never did get to this [because]
>>other than because of math magic.
> ****

Now I completely get this whole aspect. The reason (and
there really was a reason; as I said, there must always be
one) that queue length approaches infinity as the arrival
rate approaches the service rate is (merely a restatement
of Jerry's words):

The stochastic nature of the arrival rate (Lambda) tends to
cause the server to become idle at times, thus reducing the
actual service rate below its theoretical maximum rate (Mu).

It is the specific assumption of the exponential
distribution that provides the specifically quantified
numerical values for queue length relative to differing
values of Lambda / Mu.
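The blow-up described above can be sketched numerically. For
the standard M/M/1 model (the exponential-distribution
assumption just mentioned), the mean number of jobs in the
system is L = rho / (1 - rho) where rho = Lambda / Mu; this
is textbook queueing theory, not something stated in the
thread, so treat it as a hedged illustration:

```python
# Sketch: mean queue length for an M/M/1 queue, L = rho / (1 - rho),
# where rho = lam / mu (arrival rate over service rate).  As lam
# approaches mu, rho -> 1 and L diverges to infinity.
def mean_queue_length(lam: float, mu: float) -> float:
    """Mean number of jobs in an M/M/1 system; unstable when lam >= mu."""
    if lam >= mu:
        raise ValueError("queue is unstable when arrival rate >= service rate")
    rho = lam / mu
    return rho / (1.0 - rho)

# Bill's shop: mu = 10 cars/hour; watch L explode as lam nears mu.
for lam in (5.0, 9.0, 9.9, 9.99):
    print(f"lam={lam:5.2f}  L={mean_queue_length(lam, 10.0):8.2f}")
```

At lam = 5 the mean backlog is just 1 car; at lam = 9.99 it is
already 999, which is the "math magic" made concrete.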


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:643cs51dhpgpcn93rmec0avcpsk763sc1q(a)4ax.com...
> See below...
> On Wed, 14 Apr 2010 10:00:13 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>The service rate Mu is fixed. In my case the service rate
>>is not a fixed constant, but increases proportionally as
>>fewer processes are running.
> ****
> OK, look at it this way: if you get 3 10ms jobs and 1
> 3-minute job, the total processing time is 3 x 0.010 s +
> 180 s = 180.030 s, for a mean of 45.0075 sec/job. You do
> remember how to compute a mean, don't you? Add up all the
> values and divide by the quantity of values.
>
> This is why SQSS won't work well. But SQMS can work if
> you apply an anti-starvation
> algorithm.
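Joe's arithmetic can be checked directly; this is just his
example job mix restated as code, with no values beyond the
ones he gives:

```python
# Joe's example job mix: three 10 ms jobs and one 3-minute job.
jobs_s = [0.010, 0.010, 0.010, 180.0]  # service times in seconds

total = sum(jobs_s)            # total processing time: 180.030 s
mean = total / len(jobs_s)     # mean service time: 45.0075 s/job
print(f"total={total:.3f} s  mean={mean:.4f} s/job")
```

One long job dominates the mean, which is exactly why a
single-queue single-server (SQSS) design stalls the short
jobs behind it.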

Simpler: provide one means or another, whatever works best,
to give the 10 ms jobs absolute priority over the other
jobs. The large paying jobs may have absolute priority over
all jobs besides the high-priority jobs, on down to the free
jobs, which get done whenever time is available, if any ever
becomes available.

> So the mean processing time increases, and this tends to
> back up the prediction of
> throughput under that model. And you still don't
> understand the basic meanings of lambda
> and mu.
> ****

I do, yet there are going to be four Lambdas, not one:
98% of jobs are free jobs.
1.89% of jobs are small paying jobs.
0.10% of jobs are large paying jobs.
0.01% of jobs are build-a-new-DFA jobs.

The service rate for each of these job types will vary
greatly depending upon whether or not other job types are
running. For example, the free jobs could take anywhere
from 10 ms to several hours.
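The four-Lambda split above can be sketched as a partition
of one aggregate arrival stream. Only the percentages come
from the post; the absolute rate of 100 jobs/hour is a
hypothetical stand-in, since no real rate is given:

```python
# Sketch: splitting one aggregate arrival stream into four job classes
# by the stated mix.  total_lambda is a HYPOTHETICAL number; only the
# percentage shares come from the text.
total_lambda = 100.0  # assumed aggregate arrivals per hour
mix = {
    "free":         0.9800,   # 98%    free jobs
    "small_paying": 0.0189,   # 1.89%  small paying jobs
    "large_paying": 0.0010,   # 0.10%  large paying jobs
    "build_dfa":    0.0001,   # 0.01%  build-a-new-DFA jobs
}

# Per-class arrival rates (each class gets its own Lambda).
per_class_lambda = {name: share * total_lambda for name, share in mix.items()}
print(per_class_lambda)
```

A useful property of Poisson arrivals is that a random split
like this yields four independent Poisson streams, one
Lambda per class, which matches the "four Lambdas" framing.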

> ****
> We call it "building an easily tunable architecture,
> testing it, and adjusting it to provide optimum
> performance". But you are so convinced that your
> architecture is perfect that you don't really want to
> hear that you should build one that does not require
> complex changes to tune it.
> joe

I want to do the best that I can against multiple and
somewhat competing criteria.


From: Peter Olcott on

"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
news:MPG.262fc7cf6d6769b989883(a)news.sunsite.dk...
> In article <e4S4Gt$2KHA.4540(a)TK2MSFTNGP04.phx.gbl>,
> sant9442
> @nospam.gmail.com says...
>
> [ ... ]
>
>> Jerry, I don't wish to act or play moderator; by no
>> means do I wish to show any disrespect here. He got his
>> answers to a wide degree, but no amount of insight by
>> scientists, engineers and experts in the field is good
>> enough. It's really time to ignore this troll.
>
> As much as I prefer to give people the benefit of the
> doubt, I'm
> quickly realizing that you're probably right.
>
> --
> Later,
> Jerry.

Make sure that you first read my post where I completely
agree with you.


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:643cs51dhpgpcn93rmec0avcpsk763sc1q(a)4ax.com...
> See below...
> On Wed, 14 Apr 2010 10:00:13 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
> We call it "building an easily tunable architecture,
> testing it, and adjusting it to provide optimum
> performance". But you are so convinced that your
> architecture is perfect that you don't really want to
> hear that you should build one that does not require
> complex changes to tune it.
> joe
> ****

(1) None of my proposals would require complex changes to
tune.
(2) It is only cost effective to begin building after the
basic design is right. There are several large issues
remaining to be resolved:

Jerry said that turning off the hard drive cache is a bad
idea, and he has me mostly convinced. Five other experts say
that I should turn off the drive cache. This also directly
impacts the speed of transactions by something like 100-fold
or more. The disk-access-time limit on TPS that I cited only
applies if every bit of every form of cache is completely
turned off.

Is MQMS really a bad idea, or have I changed so many of the
basic assumptions that the analysis showing it is a bad idea
no longer applies?

This can be determined by identifying exactly what is bad
about MQMS relative to SQMS, in the same way that the
question of why Lambda approaching Mu causes infinitely
long queues was answered (basically by Jerry's answer).

If MQMS provides much worse performance than SQMS, then
there is something specific that causes that worse
performance; as soon as I know exactly what that something
is, I will understand.
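A toy simulation can make the MQMS-vs-SQMS difference
concrete. This is only a sketch under illustrative
assumptions (four servers, exponential arrivals and service
times, jobs assigned to a random queue on arrival for MQMS);
none of the parameter values come from the thread:

```python
# Sketch: SQMS (one shared queue, m servers) vs MQMS (each job is
# assigned to one of m per-server queues on arrival).  Exponential
# interarrival and service times; all parameters are illustrative.
import heapq
import random

def simulate(num_servers, shared_queue, lam, mu, n_jobs, seed=1):
    """Return the mean time jobs spend waiting before service starts."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)      # Poisson arrivals at rate lam
        arrivals.append(t)
    total_wait = 0.0
    if shared_queue:
        free = [0.0] * num_servers      # next-free time of each server
        heapq.heapify(free)
        for a in arrivals:              # FIFO: take the earliest-free server
            start = max(a, heapq.heappop(free))
            total_wait += start - a
            heapq.heappush(free, start + rng.expovariate(mu))
    else:
        free = [0.0] * num_servers
        for a in arrivals:              # each job committed to one queue
            q = rng.randrange(num_servers)
            start = max(a, free[q])
            total_wait += start - a
            free[q] = start + rng.expovariate(mu)
    return total_wait / n_jobs

sq = simulate(4, True,  lam=3.0, mu=1.0, n_jobs=20000)
mq = simulate(4, False, lam=3.0, mu=1.0, n_jobs=20000)
print(f"SQMS mean wait: {sq:.3f} s   MQMS mean wait: {mq:.3f} s")
```

The specific cause shows up directly: under MQMS a job can
wait in its own backlogged queue while another server sits
idle, which never happens with the shared queue.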


>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:643cs51dhpgpcn93rmec0avcpsk763sc1q(a)4ax.com...
> See below...
> On Wed, 14 Apr 2010 10:00:13 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:

> So the mean processing time increases, and this tends to
> back up the prediction of
> throughput under that model. And you still don't
> understand the basic meanings of lambda
> and mu.
> ****
>>
>>It does this because some of these processes have already
>>completed all of the jobs in their separate queues, thus
>>providing all of the CPU time that they were using to the
>>remaining processes.
>>
>>Another possibly very significant false assumption is
>>that the arrival rate is anything at all like Lambda / m,
>>where m is the number of queues. The actual arrival rate
>>at any one queue is completely independent of all of the
>>other queues. There are four completely separate and
>>distinctly different arrival rates that have nothing at
>>all to do with each other.
> ****
> You have false-assumption-fixation. Is this part of your
> "refute" mode? I pointed out a fundamental theorem, and
> you say "It doesn't apply" without any evidence to the
> contrary, because you have no actual running system
> providing this service. So you don't actually KNOW what
> your average arrival rate is!
> ****

I think that I now see exactly why MQMS is substantially
inferior to SQMS when there are multiple physical
processors: idle time tends to build up in each processor
while it waits for its next job, and with a single queue
the distribution of jobs across processors is more uniform
than with multiple queues, thus reducing this idle time.

The reason that I don't think this difference applies to
MQMS on a single-core CPU is that any idle time in any
process immediately and directly translates into more CPU
time for the remaining processes, so there really isn't any
idle time at all.
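That single-core claim can be sketched too. For one CPU
under a work-conserving scheduler, the idle time depends
only on the combined arrival and service pattern; the
function below is an illustrative sketch in which the
queue assignment of each job never even appears:

```python
# Sketch: idle time of ONE work-conserving CPU over a set of jobs.
# Note that which of m queues each job was assigned to never enters
# the computation -- the CPU idles only when the total backlog is
# empty, exactly as with a single queue.
def idle_time(arrivals, services):
    """Total idle seconds, given sorted arrival times and CPU demands."""
    next_free = 0.0   # time at which all queued work is finished
    idle = 0.0
    for a, s in zip(arrivals, services):
        if a > next_free:
            idle += a - next_free          # CPU was idle: no backlog anywhere
        next_free = max(next_free, a) + s  # append this job's demand
    return idle

# Jobs at t=0, 1, 5 s needing 2, 1, 1 s of CPU: the only gap is t=3..5.
print(idle_time([0.0, 1.0, 5.0], [2.0, 1.0, 1.0]))  # prints 2.0
```

So on a single core the MQMS penalty (a server starving
while work waits elsewhere) cannot occur, which is the
point of the paragraph above.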