From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:6fe7s55u4fnmhjnq2qj3ctroi81iomsnk2(a)4ax.com...
> See below...
> On Mon, 12 Apr 2010 15:53:23 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
> Actually, the priority queue works better on a single core
> machine, but I've tried to
> explain why this works, and you have ignored the
> explanations.

Yeah, it works because of time slicing, but how is this
better than either of my two original proposals, which also
use time slicing?

>>How else do you block a DoS attack?
> ****
> YOU don't. You let your ISP block it for you. Note that
> your belief that the IP address
> is going to be a reasonable approach shows how little you
> understand about Internet
> protocols or (as I have explained in the past) how D-O-S
> attacks work (each packet has a
> forged IP address which is fictional, PRECISELY so you
> can't use the IP address to detect
> and block the attack!)
> joe

Then what other basis would the ISP have for blocking the
attack?


From: Jerry Coffin on
In article <pYidndO7AuRyI17WnZ2dnUVZ_rednZ2d(a)giganews.com>,
NoSpam(a)OCR4Screen.com says...

[ ... ]

> So Linux thread time slicing is infinitely superior to Linux
> process time slicing?

Yes, from the viewpoint that something that exists and works (even
poorly) is infinitely superior to something that simply doesn't exist
at all.

From the viewpoint of the OS, a "process" is a data structure holding
a memory mapping and the suspension state for some positive number of
threads. There are often a few other things attached, but those are
pretty much the essentials.

The scheduler doesn't schedule processes -- it schedules threads.
User threads are always associated with some process, but what's
scheduled is the thread, not the process.
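
A minimal sketch of that distinction, assuming Linux and a glibc new
enough to have gettid() (2.30+): two threads in the same process can
carry different nice values, precisely because the kernel's
schedulable unit is the thread, not the process.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

/* Each thread renices only itself: on Linux, setpriority() with
   PRIO_PROCESS and a kernel thread id affects just that thread. */
static void *worker(void *arg)
{
    int nice_val = *(int *)arg;
    if (setpriority(PRIO_PROCESS, gettid(), nice_val) != 0)
        perror("setpriority");
    printf("tid %d running at nice %d\n", (int)gettid(), nice_val);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    int hi = 0, lo = 10;    /* two nice levels inside one process */
    pthread_create(&a, NULL, worker, &hi);
    pthread_create(&b, NULL, worker, &lo);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}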

> One of my two options for implementing priority scheduling
> was to simply have the OS do it by using Nice to set the
> process priority of the process that does the high priority
> jobs to a number higher than that of the lower priority
> jobs.

Which means (among other things) that you need yet another process to
actually do that priority adjustment -- and for it to be able to
reduce the priority of one of your high-priority tasks, it must have
a higher priority than any of the other threads. Since it's going to
have extremely high priority, it needs to be coded *extremely*
carefully to ensure it doesn't start to consume most of the CPU time
itself (and if it does, killing it may be difficult).

Don't get me wrong: it's *possible* for something like this to work
-- but it's neither simple nor straightforward.
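
For concreteness, a hedged sketch of the kind of supervisor that
implies; the pid argument and the once-a-second cadence are
illustrative assumptions, not a recommendation. The only reason it
stays cheap is that it sleeps almost all the time.

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid-of-high-priority-job>\n", argv[0]);
        return 1;
    }
    pid_t worker = (pid_t)atoi(argv[1]);

    /* This loop must itself run at a higher priority (lower nice)
       than anything it manages, or it cannot demote a runaway job.
       The demote-to-nice-10 policy here is a placeholder. */
    for (;;) {
        if (setpriority(PRIO_PROCESS, worker, 10) != 0)
            perror("setpriority");
        sleep(1);   /* sleeping is what keeps the supervisor cheap */
    }
}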

--
Later,
Jerry.
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:jpe7s55a5462g4436crrm86c2nva939loa(a)4ax.com...
> See below...
> On Mon, 12 Apr 2010 15:05:51 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>Of the 40 different priority levels available on Linux, a
>>process with priority of 0 would starve a process with
>>priority of 1? That sure sounds screwy to me. Can you
>>prove this?

> ****
> Well, I believe I gave you a citation to the explanation
> of the linux scheduler and its
> anti-starvation algorithm, and even gave you the google
> search phrase
> linux scheduler starvation
> by which you could find it. And fundamentally, the answer
> is YES, highest priority thread
> wins, period. That's how it has worked for decades. The
> reason is that back in the days

http://oreilly.com/catalog/linuxkernel/chapter/ch10.html
This may be only one of several options now. I am not sure that I
fully understand this material yet, and since it is outdated
I am waiting for my updated copy.

> when we were inventing timesharing, the schedulers tried
> really hard to give CPU
> percentage guarantees, and when we measured performance
> bottlenecks, we found that on
> multimillion-dollar mainframes with the computing
> horsepower of a 286, 37% of the CPU time
> was being spent in the scheduler. So in later systems
> (1970 and beyond) we opted for
> lean, mean schedulers that had trivial algorithms
> (highest-priority thread wins) and moved
> "policy" to other parts of the system (e.g., the Balance
> Set Manager in Windows, working
> set trimmers, etc.) because this refactoring reduced OS
> overheads and essentially
> guaranteed more CPU cycles to the apps, instead of to "OS
> maintenance". And it worked,
> and all modern systems use these patterns.

I hope that you are right.
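
A hedged sketch of what "highest-priority thread wins" means in
practice: under Linux's real-time SCHED_FIFO policy (root or
CAP_SYS_NICE required), a spinning priority-2 thread keeps a
priority-1 thread off a single core indefinitely.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void *spin(void *arg)
{
    volatile unsigned long n = 0;
    for (;;) n++;                 /* never blocks, never yields */
}

static void *starved(void *arg)
{
    printf("lower-priority thread ran\n");  /* never printed while spin runs */
    return NULL;
}

static pthread_t start(void *(*fn)(void *), int prio)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };
    pthread_t t;
    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    if (pthread_create(&t, &attr, fn, NULL) != 0) {
        fprintf(stderr, "pthread_create failed (not root?)\n");
        exit(1);
    }
    return t;
}

int main(void)
{
    cpu_set_t one;
    CPU_ZERO(&one);
    CPU_SET(0, &one);
    sched_setaffinity(0, sizeof one, &one);  /* pin to a single core */
    pthread_t hi = start(spin, 2);
    pthread_t lo = start(starved, 1);
    pthread_join(lo, NULL);     /* blocks forever: lo is starved */
    pthread_join(hi, NULL);
    return 0;
}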

> You're big on patterns. Recognize that the best patterns
> we know are not necessarily the
> patterns used to design PFOS.
>
> I can prove it for Windows just by pointing to Solomon &
> Russinovich's book; and you might
> try the google phrase I gave above, which details why
> linux needs and how it has an
> antistarvation algorithm.
> joe

>>What I am saying is that telling me something is bad without
>>telling me what is bad about it is far worse than useless.
>>In more than half of the cases so far, what was bad about my
>>design was not the design itself but a misconception of it.
>>Saying only that it is bad, without explaining why you think
>>so, is harassment rather than help.
> ****
> I thought I had pointed out conclusively why MQMS
> architectures have problems and SQMS
> architectures work better.

Yes, and repeating that even an infinite number of times will
never count as sufficient reasoning.

> And the paper you gave us to read about linux throughput
> emphasized, time and again, how they were using SQMS
> architectures to improve performance,
> and you continue your lengthy diatribes about how you are
> going to build magical
> mechanisms to stop low-priority threads from running,
> mechanisms which do not need to
> exist because they are solving nonexistent problems.

Now that I understand how the Linux scheduler probably works,
that point is finally made.
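
For the record, a minimal sketch of the SQMS shape being argued for:
one mutex-protected queue feeding N identical workers, so no worker
can sit idle while another worker's private queue backs up. The queue
size and worker count are arbitrary; overflow checking is omitted.

#include <pthread.h>
#include <stdio.h>

#define QSIZE    64
#define NWORKERS  4

static int queue[QSIZE];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

static void put(int job)            /* producer side */
{
    pthread_mutex_lock(&lock);
    queue[tail] = job;
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

static int get(void)                /* every worker pulls from here */
{
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&nonempty, &lock);
    int job = queue[head];
    head = (head + 1) % QSIZE;
    count--;
    pthread_mutex_unlock(&lock);
    return job;
}

static void *worker(void *arg)
{
    for (;;)
        printf("worker %ld got job %d\n", (long)arg, get());
}

int main(void)
{
    pthread_t t[NWORKERS];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int j = 0; j < 32; j++)
        put(j);
    pthread_join(t[0], NULL);   /* demo runs until killed */
    return 0;
}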



From: Joseph M. Newcomer on
See below....
On Mon, 12 Apr 2010 18:10:04 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:43p6s59eci43p0rhj5ms1hseo67a60eaek(a)4ax.com...
>> See below...
>> On Mon, 12 Apr 2010 01:19:15 -0400, Hector Santos
>> <sant9442(a)nospam.gmail.com> wrote:
>>
>>>But you have not done that.
>> ****
>> I guess I don't see the false assumption anywhere; SQMS
>> will run circles around MQMS any
>> day of the week for optimizing throughput.
>
>OK, there is the dogma; now where is the supporting
>reasoning?
>
****
You gave us the citation to the article that proves it, and anyone who has ever studied
elementary queueing theory knows this. I suggest an introductory book on queueing theory.
But then, I once studied this, and I don't need the details again: I examined the problem
nearly forty years ago and know what the answer is. I don't need to re-derive the
equations; I did that back in 1969 or 1970, and the only thing I needed to remember was
that SQMS is a superior architecture.
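
For anyone who wants the arithmetic rather than the citation, here is
a minimal worked example under textbook assumptions: Poisson arrivals,
exponential service, $\lambda = 1.8$ jobs/s arriving, $\mu = 1$ job/s
per server, two servers. Split into two private queues (MQMS), each
server is an M/M/1 seeing $\lambda/2$, so the mean time in system is
\[
W_{\mathrm{MQMS}} = \frac{1}{\mu - \lambda/2} = \frac{1}{1 - 0.9} = 10\ \text{s}.
\]
One shared queue feeding both servers (SQMS) is an M/M/2 with offered
load $a = \lambda/\mu = 1.8$; the Erlang-C delay probability is
\[
C(2,a) = \frac{\frac{a^2}{2}\cdot\frac{2}{2-a}}
              {1 + a + \frac{a^2}{2}\cdot\frac{2}{2-a}}
       = \frac{16.2}{19} \approx 0.853,
\]
so the mean time in system is
\[
W_{\mathrm{SQMS}} = \frac{C(2,a)}{2\mu - \lambda} + \frac{1}{\mu}
                  \approx \frac{0.853}{0.2} + 1 \approx 5.3\ \text{s}.
\]
Same hardware, same load: the single queue roughly halves the mean
time in system, essentially because it never leaves a server idle
while jobs wait in the other line.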

The book on queueing theory that I have has been out of print since the late 1970s, so
there's no point in giving the citation.

If you are such an expert, why aren't you proving that MQMS is superior?
*****
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Mon, 12 Apr 2010 16:39:50 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:jfo6s55kuakr6gadh3uf62ech1q5povpmu(a)4ax.com...
>> See below...
>> On Sun, 11 Apr 2010 23:29:51 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>Four processes four queues each process reading only from
>>>its own queue. One process having much more process
>>>priority
>>>than the rest. Depending upon the frequency and size of
>>>the
>>>time slices this could work well on the required single
>>>core
>>>processor.
>> ****
>> As I pointed out, this still guarantees worst-case
>> performance, even on a single-core CPU!
>
>Yup, you've pointed that out numerous times. What you have
>not yet pointed out is why you think this is so.
***
I gave you a detailed analysis, which you apparently didn't read. I'm not responsible for
your failure to read the explanations, and even less responsible for your failure to
understand them.

I gave you the algorithm, including the priority-inversion-prevention feature, that
guarantees maximum throughput and minimum delays, while allowing for high-priority
turnaround.
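
For what it's worth, the standard POSIX tool for that kind of
prevention (not necessarily the exact feature Joe means) is a
priority-inheritance mutex; a minimal sketch:

#include <pthread.h>

pthread_mutex_t job_lock;

/* While a low-priority thread holds job_lock, it is temporarily
   boosted to the priority of the highest-priority waiter, so a
   medium-priority thread cannot keep the holder off the CPU
   (the classic priority-inversion scenario). */
int init_job_lock(void)
{
    pthread_mutexattr_t attr;
    int rc;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    rc = pthread_mutex_init(&job_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}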

So if you couldn't follow that, I'm sorry. It isn't my fault; I explained it in detail.
****
>
>>>I knew this before you said it the first time. If it
>>>doesn't require a separate index to do this, then the
>>>record number maps to a byte offset. Since record numbers
>>>can be sequenced out-of-order, in at least this instance
>>>it must have something telling it where to go, probably an
>>>index. Hopefully it does not always make an index just in
>>>case someone decides to insert records out-of-sequence.
>> *****
>> Sadly, this keeps presuming details of implementation that
>> have not been substantiated by
>> any actual measurements.
>
>Yes, it is much faster to do it this way. I don't build a
>system and then test it to see if I built it correctly. I go
>through many iterations of an optimal design before I begin
>building. I only resort to testing if the answer cannot be
>otherwise obtained.
****
But you are making implementation decisions based on performance projections derived from
Tarot cards, the I Ching, or possibly a Ouija board. If you build a system based on
parameters you do not understand, and those parameters are critical for the correct
performance, then you are not likely to achieve your goals. The reason you have to build
and measure is that you cannot predict a priori what the behavior will be, because we have
no closed-form analytic solutions for these complex problems. That's why we HAVE to do
measurements. Your claimed ability to predict complex behavior by sitting and thinking
about it comes as a surprise to the rest of us, who know that this is not possible.
****
>
>This answer might be testable with minimal effort. In any
>case I want to know in advance what my options are and the
>boundaries of those options. It does look like disk seek
>time might prove to be the binding constraint on my TPS:
>each transaction takes at least one disk seek, and at 9 ms
>per seek that is only about 111 TPS.
****
Why do you have this buzzword-lock on seek time? It is just the latest buzzword-lock,
like the insistence that memory had to be physically contiguous. You have fastened on an
issue that is not under your control and not something you can predict. So how is it you
know that there is only ONE seek per transaction, and why do you know it is 9ms? Oh, I
keep forgetting, you are using PFOS, where all of the OS behaves according to your
fantasies.

Please show the closed-form analytic solution for an ISAM file that predicts to the
millisecond what the seek time is to an arbitrary record. That is the only way you can
predict the behavior without building the system and measuring it. And if you have it,
you would probably do well to apply to someone's PhD program and present it as a
dissertation, since no one else in the world knows how to derive this formula.
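
One concrete way to replace the 9 ms assumption with data is to time
random fixed-length-record reads; a hedged sketch follows. The file
name, 512-byte record size, and trial count are hypothetical, and
page-cache hits will make the numbers optimistic unless the cache is
bypassed (e.g., with O_DIRECT).

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define RECSIZE 512
#define TRIALS  1000

int main(void)
{
    int fd = open("records.dat", O_RDONLY);   /* hypothetical ISAM file */
    if (fd < 0) { perror("open"); return 1; }
    off_t nrec = lseek(fd, 0, SEEK_END) / RECSIZE;
    if (nrec <= 0) { fprintf(stderr, "file too small\n"); return 1; }

    char buf[RECSIZE];
    struct timespec t0, t1;
    double total_ms = 0;

    srand((unsigned)time(NULL));
    for (int i = 0; i < TRIALS; i++) {
        off_t rec = rand() % nrec;              /* random record number */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pread(fd, buf, RECSIZE, rec * RECSIZE); /* offset = rec * size */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        total_ms += (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }
    printf("mean random-read latency: %.3f ms over %d trials\n",
           total_ms / TRIALS, TRIALS);
    close(fd);
    return 0;
}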
joe
*****
>
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm