From: Jerry Coffin on
In article <KZCdnZAzg6rOX1jWnZ2dnUVZ_rqdnZ2d(a)giganews.com>,
NoSpam(a)OCR4Screen.com says...

[ ... ]

> On many of the links that I did find "M" mean Markov, and
> not Multiple.

Slide 7 of the same presentation explains the notation. It's a
positional notation, so it's impossible to say what an "M" means
without looking at which position it's in. There are six basic
parameters for the queue, and each gets its own set of letters in its
own spot, though some have default values, so you routinely see
specifications with only 3 or 4 of them filled in. As it happens, you
can have "M" in a couple of different spots -- one of them does mean
"Markov", but (amazingly enough) the other does not.

In the example I pointed out, it specifically says it's comparing
"m M/M/1" queues to a "1 M/M/m" queue. In each of those, the first and
second "M's" mean "Markov" (i.e., a Markov distribution of the arrival
rate and of the processing time, respectively). The _third_ position
is the one telling you the number of servers (being fed by a
particular queue). As I'm sure even you can guess, a "1" means "one
server". An "m" (lower case, NOT upper case) means "multiple". BTW,
the reference I gave you explains all of this -- if you'd spend half
as much time _reading_ the reference as you do blathering nonsense,
you might actually learn something!
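
For what it's worth, here's a quick back-of-the-envelope comparison of
the two configurations using the standard textbook formulas. The
numbers (four servers, ~66.7 jobs per second each, 80% overall load)
are made-up assumptions purely for illustration -- this is a sketch,
not anything taken from the paper or the slides:

#include <cstdio>

// Erlang C: probability that an arriving job has to wait, given c
// servers and an offered load of a = lambda/mu Erlangs (needs a < c).
static double erlang_c(int c, double a)
{
    double sum = 0.0, term = 1.0;       // term walks through a^k / k!
    for (int k = 0; k < c; ++k) {
        sum += term;
        term *= a / (k + 1);
    }
    double tail = term / (1.0 - a / c); // a^c / (c! * (1 - a/c))
    return tail / (sum + tail);
}

int main()
{
    const int    m      = 4;            // servers (assumed)
    const double mu     = 66.7;         // jobs/sec per server (assumed)
    const double lambda = 0.8 * m * mu; // total arrivals, 80% utilization

    // "m M/M/1": m private queues, arrivals split evenly among them.
    double w_private = 1.0 / (mu - lambda / m);

    // "1 M/M/m": one shared queue feeding all m servers.
    double a        = lambda / mu;
    double w_shared = erlang_c(m, a) / (m * mu - lambda) + 1.0 / mu;

    printf("m M/M/1 (private queues): %.1f ms mean time in system\n",
           w_private * 1000.0);
    printf("1 M/M/m (shared queue)  : %.1f ms mean time in system\n",
           w_shared * 1000.0);
}

With those assumed numbers the shared queue comes out well ahead on
mean time in system, which is the usual textbook result for this
comparison.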

> Here is what I did find, it seems to disagree
> with your link and Joe's idea:

If you'd bother to actually _read_ the paper, you'd realize that's
not the case at all. About all the paper does is prove that it's
possible for a multi-queue/multi-processor system to be *stable*
under certain conditions.

It does *not* (even attempt to) prove that it'll be superior to a
single-queue system under any circumstances at all. Just from
glancing at it, their result means precisely *nothing* with respect
to your system -- at least as I read it, one of their preconditions
is that no processor has any effect on any other processor, which is
most assuredly _not_ the case when you're talking about four
processing tasks all running on the same physical processor.

As to why the service time goes to infinity when the arrival rate
approaches the service rate, it comes down to this: you're comparing
the _peak_ processing rate to the _average_ arrival rate.

If the system is ever idle, even for a moment, that means the
_average_ processing rate has dropped (to zero for the duration of
the idle period) -- but since the average arrival rate has not
dropped, the processor is really getting behind.

In a practical system, the average processing rate will always be at
least a little lower than the peak processing rate -- which means that
if jobs keep arriving at that peak rate, the latency for each job
(i.e. the time from arrival to result) will rise toward infinity over
time.
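
A minimal illustration, assuming a made-up server that can handle 100
jobs per second and applying the standard M/M/1 result for mean time
in system, 1/(mu - lambda):

#include <cstdio>

int main()
{
    const double mu = 100.0;  // peak service rate, jobs/sec (assumed)
    const double load[] = { 0.50, 0.80, 0.90, 0.99, 0.999 };
    const int    n      = sizeof load / sizeof load[0];

    // Mean time in system for an M/M/1 queue is 1 / (mu - lambda),
    // which blows up as the arrival rate creeps toward the service
    // rate.
    for (int i = 0; i < n; ++i) {
        double lambda = load[i] * mu;
        printf("%5.1f%% loaded -> mean latency %8.1f ms\n",
               load[i] * 100.0, 1000.0 / (mu - lambda));
    }
}

At 50% load the mean latency is 20 ms; at 99.9% it is already ten full
seconds, and at 100% the server simply never catches up.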

--
Later,
Jerry.
From: Jerry Coffin on
In article <MPG.262f871d326d2f3998987d(a)news.sunsite.dk>,
jerryvcoffin(a)yahoo.com says...

[ ... ]

> As it happens, you can have
> "M" in a couple of different spots -- one of them does mean "Markov",
> but (amazingly enough) the other does not.

I should probably correct this on two points: as I pointed out later
in the same post, upper-case "M" can really occur in either of the
first two spots, and in either of those cases, it does refer to
"Markov". A lower-case "m" can occur in the third spot, and when it
does, it does not refer to "Markov".

--
Later,
Jerry.
From: Peter Olcott on

"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
news:MPG.262f871d326d2f3998987d(a)news.sunsite.dk...
> In article
> <KZCdnZAzg6rOX1jWnZ2dnUVZ_rqdnZ2d(a)giganews.com>,
> NoSpam(a)OCR4Screen.com says...
>

> In the example I pointed out, it specifically says it's comparing
> "m M/M/1" queues to a "1 M/M/m" queue. In each of those, the first
> and second "M's" mean "Markov" (i.e., a Markov distribution of the
> arrival rate and of the processing time, respectively). The _third_
> position is the one telling you the number of servers (being fed by
> a particular queue). As I'm sure even you can guess, a "1" means
> "one server". An "m" (lower case, NOT upper case) means "multiple".
> BTW, the reference I gave you explains all of this -- if you'd spend
> half as much time _reading_ the reference as you do blathering
> nonsense, you might actually learn something!

Using the letter M to mean several different things on the same line
seems far more confusing than necessary.

> As to why the service time goes to infinity when the arrival rate
> approaches the service rate, it comes down to this: you're comparing
> the _peak_ processing rate to the _average_ arrival rate.
>
> If the system is ever idle, even for a moment, that means the
> _average_ processing rate has dropped (to zero for the duration of
> the idle period) -- but since the average arrival rate has not
> dropped, the processor is really getting behind.
>
> In a practical system, the average processing rate will always be at
> least a little lower than the peak processing rate -- which means
> that if jobs keep arriving at that peak rate, the latency for each
> job (i.e. the time from arrival to result) will rise toward infinity
> over time.
>
> --
> Later,
> Jerry.

That makes complete sense, but the technical author explicitly used
the terms (on slide 10) "mean job arrival rate" (Lambda) and "mean
service rate" (Mu).

This would directly contradict your use of "peak processing rate",
because "mean service rate" explicitly means [average processing
rate], and thus not [peak processing rate].

Joe said that the result is counter-intuitive; maybe it's also
inexplicable. In any case, the results show that 80% of capacity
works very well with a queue length of about 5.

I sure would like to know the reason why this is so. If the
ultimate reason is [math magic] then I can see why Joe and
Hector got so frustrated with me insisting on knowing the
reason.
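
For whatever it's worth, plugging 80% into the standard M/M/1 formulas
(nothing here comes from the slides, it's just the textbook
expressions) lands in the same neighborhood:

#include <cstdio>

int main()
{
    // Utilization: mean arrival rate divided by mean service rate.
    double rho = 0.80;

    // Standard M/M/1 results at utilization rho.
    printf("mean jobs in the system: %.1f\n", rho / (1.0 - rho));
    printf("mean jobs waiting      : %.1f\n", rho * rho / (1.0 - rho));
}

That works out to roughly four jobs in the system on average at 80% of
capacity, which is the same ballpark as the queue length of about 5
mentioned above.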


From: Jerry Coffin on
In article <U6qdndrIQtU5Q1jWnZ2dnUVZ_judnZ2d(a)giganews.com>,
NoSpam(a)OCR4Screen.com says...

[ ... ]

> That makes complete sense, but the technical author explicitly used
> the terms (on slide 10) "mean job arrival rate" (Lambda) and "mean
> service rate" (Mu).

There are two different "means" in play here. The mean he's using is
the mean over the processing times of the different kinds of jobs. For
example, let's assume your processing time is 10 ms per page. Let's
also assume that your job size averages out to 1.5 pages. That gives a
mean processing time of 15 ms, and therefore your mu is ~66.7 jobs per
second (i.e. 1/0.015).

Despite that, when your processor doesn't have a job to do, it can't
do anything -- and therefore, the fact that it _could_ process ~66.7
jobs per second doesn't change the fact that for that duration, it IS
processing exactly 0 jobs per second.
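
To put rough numbers on that (the 10 ms per page and 1.5 pages per job
are just the assumed figures from above, and the 20% idle time is
equally made up):

#include <cstdio>

int main()
{
    const double ms_per_page   = 10.0; // assumed processing time per page
    const double pages_per_job = 1.5;  // assumed mean job size

    double mean_service_ms = ms_per_page * pages_per_job; // 15 ms per job
    double mu = 1000.0 / mean_service_ms;                 // ~66.7 jobs/sec

    // If the server sits idle for, say, 20% of the time, its average
    // rate over the whole interval is only 80% of that peak figure.
    double idle_fraction = 0.20;
    printf("peak rate    : %.1f jobs/sec\n", mu);
    printf("average rate : %.1f jobs/sec\n", (1.0 - idle_fraction) * mu);
}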

--
Later,
Jerry.
From: Peter Olcott on

"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
news:MPG.262fa231e10317c7989881(a)news.sunsite.dk...
> In article
> <U6qdndrIQtU5Q1jWnZ2dnUVZ_judnZ2d(a)giganews.com>,
> NoSpam(a)OCR4Screen.com says...
>
> [ ... ]
>
>> That makes complete sense, but the technical author explicitly used
>> the terms (on slide 10) "mean job arrival rate" (Lambda) and "mean
>> service rate" (Mu).
>
> There are two different "means" in play here. The mean he's using is
> the mean over the processing times of the different kinds of jobs.
> For example, let's assume your processing time is 10 ms per page.
> Let's also assume that your job size averages out to 1.5 pages. That
> gives a mean processing time of 15 ms, and therefore your mu is
> ~66.7 jobs per second (i.e. 1/0.015).
>
> Despite that, when your processor doesn't have a job to do, it can't
> do anything -- and therefore, the fact that it _could_ process ~66.7
> jobs per second doesn't change the fact that for that duration, it
> IS processing exactly 0 jobs per second.
>
> --
> Later,
> Jerry.

Are you going to get to my post regarding the false
assumptions that the model makes pertaining to my process?