From: Jerry Coffin on
In article <LcqdnfKth5NkuVjWnZ2dnUVZ_jidnZ2d(a)giganews.com>,
NoSpam(a)OCR4Screen.com says...

[ ... ]

> Not whether or not these patterns still apply after decades
> of technological advances, or whether or not they apply in a
> particular situation? That might even work OK much of the
> time, but it is highly doubtful that this would always work
> well.

You've already been given the term to Google for ("queueing theory").
Do us the courtesy of spending a *Little* (sorry Joe, I just couldn't
resist the pun) time reading up on at least the most minimal basics
of what the theory covers.

The bottom line is that while technological advances can (and
constantly do) change the values of some of the variables you plug
into the equations, they have precisely _zero_ effect on the theory
or equations themselves.

Since you're too lazy to look for yourself, here's one page that
covers the basics reasonably well:

http://users.crhc.illinois.edu/nicol/ece541/slides/queueing.pdf

Slide 19 has the equation that leads to one of the conclusions Joe
posted (stated there as: "...the response time approaches infinity as
lambda approaches mu"). Slide 10 contains the definitions of lambda
and mu -- the mean arrival rate and mean service rate, respectively.

Slides 31 and 33 show graphs comparing response time with a single
queue versus response time with multiple queues (the red line is the
response time for multiple queues, the green line for a single
queue). By some stroke of amazing luck, slide 31 fits your scenario
_exactly_, down to using exactly the number of server processes
that you've described (4), so the graph applies _precisely_ to your
situation, and shows exactly how much worse a response time you can
expect using one queue for each of your four server processes versus
one queue feeding all four server processes.

--
Later,
Jerry.
From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:uoQRoC02KHA.5420(a)TK2MSFTNGP05.phx.gbl...
> Joseph M. Newcomer wrote:
>
>>
>> So how is it that your unsubstantiated opinions are so reliable and
>> nobody else's opinions can be trusted?
>> joe
>
> Joe, he has had the same battles and issues, I mean the
> SAME, with the Linux group. Read the thread here:
>
> http://www.groupsrv.com/linux/post-923554.html
>
> Same misunderstandings, same answers, even the "Great"
> David Schwartz and practically everyone else have told him
> the same things. And all of them are recognizing how
> "uneducable" he is. Read all six pages of it and see
> the Deja Vu. I loved David's statement:

Yes, there were a small number of cases in those groups where
my communication style proved to be less than ideal. None of
these people were rude, though.

It would probably have been much more effective if I had
started off with [What is the best way to achieve this
functional objective?]

I tried that here and Joe completely ignored this correct
[functional objective] statement and went on and on about
how [functional objective] statements should never have
implementation details, even though this one had no
implementation details.

He said that he was not talking about what I just said (the
correct functional objective statement immediately above his
comments) but about some of the other times when I did not
form correct functional objective statements and did include
too many implementation details.

He completely ignored anything at all that I said that was
correct, and even resorted to deceitful means if that is
what it took to continue to be hypercritical of anything
that I said.

At this point I concluded that Joe's intended purpose was
not to be helpful, but hurtful.


From: Peter Olcott on

"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
news:MPG.262f14c976a813a1989878(a)news.sunsite.dk...
> In article
> <XIqdne3OYaoRX1nWnZ2dnUVZ_qOdnZ2d(a)giganews.com>,
> NoSpam(a)OCR4Screen.com says...
>>
>> "Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
>> news:MPG.262e5c39c3902417989876(a)news.sunsite.dk...
>
> [ ... The buffer built into the hard disc: ]
>
>> Because of required fault tolerance they must be immediately flushed
>> to the actual platters.
>>
>> > Though it depends somewhat on the disk, most drives store enough
>> > power on board (in a capacitor) that if the power dies, they can
>> > still write the data in the buffer out to the platter. As such, you
>> > generally don't have to worry about bypassing it to assure your
>> > data gets written.
>>
>> When you are dealing with someone else's money (transactions are
>> dollars) this is not recommended.
>
> [ ... ]
>
>> Buffer must be shut off; that is exactly and precisely what I meant
>> by [all writes are forced to disk immediately].
>
> Quite the contrary. Disabling the buffer will *hurt* the system's
> dependability -- a lot. The buffer allows the disc to use an elevator
> seeking algorithm, which minimizes head movement. The voice coil that
> drives the head is relatively fragile, so minimizing movement
> translates directly to reduced wear and better dependability.
>
> Disabling the buffer will lead almost directly to data corruption.
> None of the banks, insurance companies, etc., that I've worked with
> would even *consider* doing what you think is necessary for financial
> transactions.

Several experts disagree with this statement, including, from
what I recall, Joe (he may have been intentionally leading me
astray, though). One expert stated that many of the
high-quality drives come with buffering turned off by default.
I already know that truth is not a democracy, and these four
or five people could all be wrong.

I do remember another line of reasoning that may be the one
that you are referring to. This only works if the source of
the problem is a power loss. There are some RAID controllers
that have a battery backup directly on the controller. When
the controller goes into battery-backup mode, it immediately
flushes its buffers to disk. What happens if the problem has
some other cause, such as the program or the operating system
locking up? Is this simply a case where the bank gets the
transaction wrong?

>> I think that the figure that I quoted may have already included
>> that; it might really be access time rather than seek time. I am so
>> unused to the C/C++ library lseek and fseek meaning that, that I may
>> have related the incorrect term.
>
> I doubt it -- for the sake of "nice" numbers, most drive
> manufacturers like to quote the fastest sounding things they can. In
> fact, they'll often quote the time only for the actual head movement,
> even leaving out the time for the controller hardware to translate
> the incoming command into signals sent to the servo (which makes no
> real sense at all, since there's no way for you to actually bypass
> the controller).

Are you playing with me here? One of these is in
milliseconds and the other one is probably in nanoseconds.

>
> [ ... ]
>
>> In any case access time still looks like it is the binding
>> constraint on my TPS.
>
> Perhaps -- and perhaps not. Right now, the "binding constraint" isn't
> access time; it's a complete lack of basis for even making informed
> guesses. I'll say it again: for your guesses to mean anything at all,
> you need to put together at least a really minimal system and do some
> measurements on it. Without that, it's just hot air.
>
> --
> Later,
> Jerry.

I like to stick with analysis until almost everything is
known. The analytical answer that would determine how long
(the disk-access aspect of) a transaction takes would seem
to depend most crucially on whether you or the five other
experts are correct on the issue of turning off the hard
drive cache.

Since no amount of testing can possibly provide this answer,
and the test results completely depend upon this answer, I
must await this answer before testing becomes feasible.

Perhaps you can provide complete and sound reasoning (even a
good link or two would do) showing that these other five
experts are wrong. You have already made a good start of it
by referring to the fragility of the read/write head. Joe
also mentioned this.

The biggest missing piece is how a transaction can be
reversed even when the application or operating system
crashes. I might have figured that one out. An application
crash would not matter if the database manager is in a
separate process. I still don't know what could be done
about an OS crash in the middle of a transaction besides
turning off hard disk write caching.
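
For concreteness, here is a minimal sketch of the usual write-ahead-log
idea on a POSIX system (the file name, record layout, and amounts are
made up for illustration). Each record is appended to a log and
fsync()'d before the transaction is acknowledged; if the process or the
OS dies between the pending record and the committed record, recovery
finds the unmatched pending record and undoes it. Whether fsync() also
empties the drive's on-board cache depends on the drive and on the
kernel's write-barrier settings, which is exactly the point in dispute
here.

// Minimal write-ahead-log sketch (POSIX, illustration only).
// A record is appended and fsync()'d *before* the transaction is
// acknowledged; recovery later undoes any 'P' record that has no
// matching 'C' record.
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

// Hypothetical fixed-size transaction record.
struct LogRecord {
    long   transaction_id;
    double amount;
    char   status;          // 'P' = pending, 'C' = committed
};

bool append_durably(int log_fd, const LogRecord &rec) {
    if (write(log_fd, &rec, sizeof rec) != (ssize_t)sizeof rec)
        return false;
    // Block until the OS reports the record is on stable storage.
    return fsync(log_fd) == 0;
}

int main() {
    int fd = open("transactions.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    LogRecord rec = { 1001, 3.99, 'P' };
    if (!append_durably(fd, rec)) { perror("log"); return 1; }

    // ... apply the transaction to the real data here ...

    rec.status = 'C';                       // mark it committed
    if (!append_durably(fd, rec)) { perror("log"); return 1; }

    close(fd);
    return 0;
}

An OS crash before fsync() returns simply means the transaction was
never acknowledged, so there is nothing to reverse; a crash after it
returns leaves a log record for recovery to act on. Power loss is the
one case where the drive's own cache behaviour (or a battery-backed
controller) still matters.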


From: Peter Olcott on

"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
news:MPG.262f20ea7e576d9a98987b(a)news.sunsite.dk...
> In article
> <LcqdnfKth5NkuVjWnZ2dnUVZ_jidnZ2d(a)giganews.com>,
> NoSpam(a)OCR4Screen.com says...
>
> [ ... ]
>
>> Not whether or not these patterns still apply after decades of
>> technological advances, or whether or not they apply in a particular
>> situation? That might even work OK much of the time, but it is highly
>> doubtful that this would always work well.
>
> You've already been given the term to Google for ("queueing theory").
> Do us the courtesy of spending a *Little* (sorry Joe, I just couldn't
> resist the pun) time reading up on at least the most minimal basics
> of what the theory covers.
>
> The bottom line is that while technological advances can (and
> constantly do) change the values of some of the variables you plug
> into the equations, they have precisely _zero_ effect on the theory
> or equations themselves.
>
> Since you're too lazy to look for yourself, here's one page that
> covers the basics reasonably well:

I did look yesterday. I spent at least two hours looking.
I found this:

http://docs.google.com/viewer?a=v&q=cache:Hb_P22Cj9OAJ:citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.93.429%26rep%3Drep1%26type%3Dpdf+multi+queue+multi+server&hl=en&gl=us&pid=bl&srcid=ADGEESh1kerH3RGqAvIEul4ryHpwxxU5HdWzS3edrtXW764CJUPudOBFnvTmUvl7W3uBXqe046N1tNkirmGqVOkUlmWQWTZQgLLwQHf5LolcXX43mvOEc3k0wR55vXqYAklq8Fu2-qgL&sig=AHIEtbSrBAf6HW8XDtNinTOdsNx5lf9tNQ

>
> http://users.crhc.illinois.edu/nicol/ece541/slides/queueing.pdf
>
> Slide 19 has the equation that leads to one of the conclusions Joe
> posted (stated there as: "...the response time approaches infinity as
> lambda approaches mu"). Slide 10 contains the definitions of lambda
> and mu -- the mean arrival rate and mean service rate, respectively.

Joe said that this result is counter-intuitive.
Bill can work on ten cars an hour; how long will it take
Bill to finish his work if ten cars arrive per hour for four
hours?

(Six and one half hours, because Bill gets tired quickly.)
Note that there must be a [because] somewhere; otherwise it
must be four hours. I never did get this [because] explained
as anything other than math magic.
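
That [because] can be seen without any queueing formulas by simulating
Bill directly. The sketch below uses made-up numbers matching the
example (a fixed 6-minute service time, cars arriving for 4 hours at an
average of 10 per hour); the only difference between its two runs is
whether the cars arrive exactly 6 minutes apart or at random moments.

// One mechanic, fixed 6-minute jobs, 10 cars/hour on average.
// Run 1: cars arrive exactly 6 minutes apart.
// Run 2: cars arrive at random (exponential gaps, same average rate).
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    // Random gaps averaging 6 minutes (10 cars/hour), used in run 2.
    std::exponential_distribution<double> gap(1.0 / 6.0);

    for (int run = 0; run < 2; ++run) {
        const bool random_arrivals = (run == 1);
        const double service = 6.0;   // minutes Bill spends per car, fixed
        double arrival = 0.0;         // when the next car shows up
        double free_at = 0.0;         // when Bill finishes his current car
        double total_wait = 0.0;
        int cars = 0;

        // Cars keep arriving for 4 hours (240 minutes); extend this window
        // and watch the random-arrival wait keep growing.
        while (arrival < 240.0) {
            double start = (arrival > free_at) ? arrival : free_at;
            total_wait += start - arrival;        // time spent waiting in line
            free_at = start + service;
            ++cars;
            arrival += random_arrivals ? gap(rng) : 6.0;
        }
        std::printf("%s arrivals: %d cars, average wait %.1f min, "
                    "last car done at %.0f min\n",
                    random_arrivals ? "Randomly spaced" : "Evenly spaced",
                    cars, total_wait / cars, free_at);
    }
    return 0;
}

With evenly spaced arrivals there is no waiting at all and Bill
finishes right on schedule; with randomly spaced arrivals the cars
occasionally bunch up, a backlog forms while Bill is already working
flat out, and the average wait keeps growing the longer the arrival
stream continues. The randomness of the spacing is the [because]; the
lambda-approaches-mu result in the slides is that same effect written
as an equation.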

>
> Slides 31 and 33 show graphs comparing response time with a single
> queue versus response time with multiple queues (the red line is the
> response time for multiple queues, the green line for a single
> queue). By some stroke of amazing luck, slide 31 fits your scenario
> _exactly_, down to using exactly the number of server processes
> that you've described (4), so the graph applies _precisely_ to your
> situation, and shows exactly how much worse a response time you can
> expect using one queue for each of your four server processes versus
> one queue feeding all four server processes.

On many of the links that I did find, "M" means Markov, not
Multiple. Here is what I did find; it seems to disagree with
your link and Joe's idea:

http://docs.google.com/viewer?a=v&q=cache:Hb_P22Cj9OAJ:citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.93.429%26rep%3Drep1%26type%3Dpdf+multi+queue+multi+server&hl=en&gl=us&pid=bl&srcid=ADGEESh1kerH3RGqAvIEul4ryHpwxxU5HdWzS3edrtXW764CJUPudOBFnvTmUvl7W3uBXqe046N1tNkirmGqVOkUlmWQWTZQgLLwQHf5LolcXX43mvOEc3k0wR55vXqYAklq8Fu2-qgL&sig=AHIEtbSrBAf6HW8XDtNinTOdsNx5lf9tNQ

>
> --
> Later,
> Jerry.

I will study your link and see if I can understand it. It
does show a huge difference between the two models.
Ultimately there has to be a reason for this that can be
explained as something other than math magic.
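
For what it is worth, the reason can be stated without any math magic:
with one queue per server, a job can sit waiting in its own line while
a neighbouring server is idle; with one shared queue that never
happens, because any idle server immediately takes the next waiting
job. The standard M/M/1 and M/M/c (Erlang C) formulas just put numbers
on that. Here is a small sketch that evaluates the two formulas side by
side; the service rate and the utilization values are made-up
illustrative numbers.

#include <cstdio>

// Erlang C: probability that an arriving job must wait in an M/M/c queue.
// a = lambda / mu is the offered load; requires a < c.
double erlang_c(int c, double a) {
    double term = 1.0, sum = 1.0;            // running a^k/k!, starting at k = 0
    for (int k = 1; k < c; ++k) {
        term *= a / k;
        sum += term;
    }
    double last = term * a / c;              // a^c / c!
    double top = last / (1.0 - a / c);
    return top / (sum + top);
}

int main() {
    const double mu = 1.0;                   // one job per second per server (illustrative)
    const int c = 4;                         // four server processes
    for (int pct = 50; pct <= 90; pct += 10) {
        double rho = pct / 100.0;            // per-server utilization
        double lambda = rho * c * mu;        // total arrival rate
        // (a) four separate queues, each an M/M/1 fed lambda/4:
        double w_separate = 1.0 / (mu - lambda / c);
        // (b) one shared queue feeding all four servers (M/M/4):
        double w_shared = erlang_c(c, lambda / mu) / (c * mu - lambda) + 1.0 / mu;
        std::printf("utilization %2d%%: separate queues %.2f s, shared queue %.2f s\n",
                    pct, w_separate, w_shared);
    }
    return 0;
}

Under these assumptions the gap between the two columns widens as the
utilization climbs, which appears to be what the red and green curves
on slides 31 and 33 are plotting.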


From: Peter Olcott on

"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
news:MPG.262f20ea7e576d9a98987b(a)news.sunsite.dk...
> In article
> <LcqdnfKth5NkuVjWnZ2dnUVZ_jidnZ2d(a)giganews.com>,
> NoSpam(a)OCR4Screen.com says...
> http://users.crhc.illinois.edu/nicol/ece541/slides/queueing.pdf
>
> Slide 19 has the equation that leads to one of the conclusions Joe
> posted (stated there as: "...the response time approaches infinity as
> lambda approaches mu"). Slide 10 contains the definitions of lambda
> and mu -- the mean arrival rate and mean service rate, respectively.

According to the graph on slide 21, somewhere between 80% and
95% of capacity the queue length is between about 5 and 20.
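
If that graph is the standard M/M/1 mean-number-in-system curve, the
reading can be checked directly from L = rho / (1 - rho):

// Mean number of jobs in an M/M/1 system, L = rho / (1 - rho),
// evaluated at the utilizations mentioned above (this assumes slide 21
// plots this standard curve).
#include <cstdio>

int main() {
    const double utilizations[] = { 0.80, 0.85, 0.90, 0.95 };
    for (double rho : utilizations)
        std::printf("rho = %.2f  ->  mean queue length %.1f\n",
                    rho, rho / (1.0 - rho));
    // Prints 4.0, 5.7, 9.0, and 19.0 respectively.
    return 0;
}

That gives roughly 4 through 19 jobs over the 80-95% range, which is
consistent with the 5-to-20 reading above.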

> Slides 31 and 33 show graphs comparing response time with a single
> queue versus response time with multiple queues (the red line is the
> response time for multiple queues, the green line for a single
> queue). By some stroke of amazing luck, slide 31 fits your scenario
> _exactly_, down to using exactly the number of server processes
> that you've described (4), so the graph applies _precisely_ to your
> situation, and shows exactly how much worse a response time you can
> expect using one queue for each of your four server processes versus
> one queue feeding all four server processes.
>
> --
> Later,
> Jerry.

I studied it for at least one hour. One huge false
assumption with this as applied to my problem is found on
slide 30:

The service rate Mu is fixed. In my case the service rate is
not a fixed constant but increases proportionally as fewer
processes are running.

It does this because some of these processes have already
completed all of the jobs in their separate queues, thus
providing all of the CPU time that they were using to the
remaining processes.

Another possibly very significant false assumption is that
the arrival rate is anything at all like Lambda / m, where m
is the number of queues. The actual arrival rate at any one
queue is completely independent of all of the other queues.
There are four completely separate and distinctly different
arrival rates that have nothing at all to do with each
other.

I am sure that correcting for these two false assumptions
would change the results substantially. I cannot accurately
quantify the degree to which it would change these results
without further study.
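
Both objections can be checked rather than argued about. The sketch
below simulates exactly the model described above: the machine's total
capacity is shared equally among whichever processes are currently busy
(so the effective service rate rises as the other queues empty out),
and the four arrival streams have four unrelated rates. It compares
one-queue-per-process against one shared queue under exactly those
assumptions; every rate and job size in it is a made-up placeholder, so
the output only means something after substituting numbers that match
the real workload.

// Time-stepped sketch of four server processes on one box.
// Config 1: one queue per process.  Config 2: one shared queue.
// Busy processes split the machine's capacity equally, so a process
// speeds up when the others have nothing to do.
#include <cstdio>
#include <deque>
#include <random>
#include <vector>

struct Job { double arrived; double work_left; };

int main() {
    const int    kProcs    = 4;
    const double kCapacity = 1.0;                       // total work/second for the whole box
    const double kMeanWork = 0.05;                      // seconds of dedicated CPU per job
    const double kRates[kProcs] = {4.0, 4.5, 2.5, 4.5}; // jobs/second, deliberately unequal
    const double kDt = 0.0005, kRunTime = 2000.0;       // step size and length of the run

    for (int shared = 0; shared < 2; ++shared) {
        std::mt19937 rng(1234);
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        std::exponential_distribution<double> work(1.0 / kMeanWork);

        std::vector<std::deque<Job>> queues(shared ? 1 : kProcs);
        std::vector<Job>  in_service(kProcs, Job{0.0, 0.0});
        std::vector<bool> busy(kProcs, false);
        double total_response = 0.0;
        long   done = 0;

        for (double t = 0.0; t < kRunTime; t += kDt) {
            // Poisson arrivals, approximated as at most one per stream per step.
            for (int i = 0; i < kProcs; ++i)
                if (uni(rng) < kRates[i] * kDt)
                    queues[shared ? 0 : i].push_back(Job{t, work(rng)});

            // Idle processes pick up their next job from their own queue
            // (or from the one shared queue).
            for (int i = 0; i < kProcs; ++i) {
                std::deque<Job> &q = queues[shared ? 0 : i];
                if (!busy[i] && !q.empty()) {
                    in_service[i] = q.front();
                    q.pop_front();
                    busy[i] = true;
                }
            }

            // Busy processes split the capacity equally for this step.
            int n_busy = 0;
            for (int i = 0; i < kProcs; ++i) n_busy += busy[i] ? 1 : 0;
            if (n_busy == 0) continue;
            double rate = kCapacity / n_busy;
            for (int i = 0; i < kProcs; ++i) {
                if (!busy[i]) continue;
                in_service[i].work_left -= rate * kDt;
                if (in_service[i].work_left <= 0.0) {
                    total_response += t - in_service[i].arrived;
                    ++done;
                    busy[i] = false;
                }
            }
        }
        std::printf("%s: %ld jobs, mean response %.3f s\n",
                    shared ? "one shared queue" : "one queue per process",
                    done, total_response / done);
    }
    return 0;
}

Varying kRates, kMeanWork, and kCapacity shows how sensitive the
single-queue-versus-four-queues gap is to the two assumptions in
question.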