From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:acs6s59011mhn54fbp4sbbttiegs2t6o4f(a)4ax.com...
> See below...
> On Mon, 12 Apr 2010 09:47:29 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>

> How is a single-core 2-hyperthreaded CPU different
> logically than a 2-core
> non-hyperthreaded system (Hint: the hyperthreaded machine
> has about 1.3x the performance
> of a single-core machine but the dual-processor system has
> about 1.8x the performance).
> But logically, they are identical! The reduction in
> performance is largely due to
> cache/TLB issues

There you go sounding reasonable. I didn't know that, but
the reasoning makes sense.



From: Joseph M. Newcomer on
See below...
On Mon, 12 Apr 2010 15:05:51 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
>news:MPG.262d1770adba771989867(a)news.sunsite.dk...
>> In article
>> <jdSdnYVeeN8mq17WnZ2dnUVZ_qidnZ2d(a)giganews.com>,
>> NoSpam(a)OCR4Screen.com says...
>>
>> The scheduling algorithm does NOT boil down to essentially
>> (or even
>> remotely) the frequency and/or duration of time slices. It
>> happens as
>> I already described: the highest priority tasks get
>> (essentially) all
>> the processor time. Like Windows, Linux does have a
>> starvation
>> prevention mechanism, but 1) it basically works in
>> opposition to the
>> priority mechanism, and 2) it only redistributes a small
>> percentage
>> of processor time, not anywhere close to the 20% you're
>> looking for.
>
>That sure sounds screwy to me. Of the 40 different priority
>levels available on Linux, a process with priority of 0
>would starve a process with priority of 1? That sure sounds
>screwy to me. Can you prove this?
****
Well, I believe I gave you a citation to the explanation of the linux scheduler and its
anti-starvation algorithm, and even gave you the google search phrase
linux scheduler starvation
by which you could find it. And fundamentally, the answer is YES, highest priority thread
wins, period. That's how it has worked for decades. The reason is that back in the days
when we were inventing timesharing, the schedulers tried really hard to give CPU
percentage guarantees, and when we measured performance bottlenecks, we found that on
multimillion-dollar mainframes with the computing horsepower of a 286, 37% of the CPU time
was being spent in the scheduler. So in later systems (1970 and beyond) we opted for
lean, mean schedulers that had trivial algorithms (highest-priority thread wins) and moved
"policy" to other parts of the system (e.g., the Balance Set Manager in Windows, working
set trimmers, etc.) because this refactoring reduced OS overhead and essentially
guaranteed more CPU cycles to the apps, instead of to "OS maintenance". And it worked,
and all modern systems use these patterns.
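[The "highest-priority thread wins, period" rule described above, and the starvation it implies, can be sketched in a few lines. This toy scheduler is illustrative only; the task names and priority values are made up, with a lower number meaning higher priority as on Linux.]

```python
# Toy strict-priority scheduler: on every tick the highest-priority
# runnable task wins, period. A priority-0 task that is always runnable
# therefore starves a priority-1 task completely -- which is why real
# kernels bolt an anti-starvation mechanism on elsewhere.
def run(tasks, ticks):
    """tasks: {name: priority} (lower value = higher priority).
    Returns the number of ticks each task received."""
    got = {name: 0 for name in tasks}
    for _ in range(ticks):
        winner = min(tasks, key=tasks.get)  # highest priority wins
        got[winner] += 1
    return got

print(run({"paid": 0, "free": 1}, 1000))  # -> {'paid': 1000, 'free': 0}
```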

You're big on patterns. Recognize that the best patterns we know are not necessarily the
patterns used to design PFOS.

I can prove it for Windows just by pointing to Solomon & Russinovich's book; and you might
try the google phrase I gave above, which turns up the details of why linux needs an
anti-starvation algorithm and how it implements one.
joe
*****
>
>>> I see no other way to provide absolute priority to the
>>> high
>>> priority jobs (paying customers) over the low priority
>>> jobs
>>> (free users). Also I see no way that this would not work
>>> well. If I get enough high priority jobs that the lower
>>> priority jobs never ever get a chance to run that would
>>> be
>>> fantastic. The whole purpose of the free jobs is to get
>>> more
>>> paying jobs.
>>
>> I see Joe has already commented on the technical aspects
>> of this, so
>> I won't bother. I'll just add that if you think delaying a
>> free job
>> indefinitely is going to convince somebody to pay for your
>> service,
>> your understanding of psychology is even more flawed than
>> your
>> understanding of operating systems.
>
>The ONLY purpose of the free jobs is to get paying jobs. The
>only way that a free job would never get done is if my
>website is earning $10 per second 24/7/365. $864,000 per
>day. Long before that ever happens I will set up a cluster
>of servers just for the free jobs.
****
So in reality, we will expect that all free jobs will have <10ms turnaround.
****
>
>>
>>> If you see something specifically wrong with this
>>> approach
>>> please point out the specific dysfunctional aspect. I see
>>> no
>>> possible dysfunctional aspects with this design.
>>
>> Perfect designs are sufficiently rare that if you see no
>> possible
>> dysfunctional aspects to a design, it's essentially proof
>> positive
>> that you don't understand the design.
>
>What I am saying is that telling me that it is bad without
>telling me what is bad about it is far worse than useless.
>In more than half of the cases now what was bad about my
>design was not the design itself but the misconception of
>it. Without explaining why you think it is bad, and only
>saying that it is bad is really harassment and not helpful.
****
I thought I had pointed out conclusively why MQMS architectures have problems and SQMS
architectures work better. And the paper you gave us to read about linux throughput
emphasized, time and again, how they were using SQMS architectures to improve performance,
and you continue your lengthy diatribes about how you are going to build magical
mechanisms to stop low-priority threads from running, mechanisms which do not need to
exist because they are solving nonexistent problems. I outlined how to build
anti-starvation into a SQMS algorithm. You accuse us of being unhelpful, but we keep
telling you (a) better designs are possible and (b) giving you the details of those better
designs. But you choose to ignore us because you have fallen in love with your fantasy
design.
****
>
>>> Block IP long before that.
>>
>> That has (at least) two serious problems. First of all,
>> for a DoS
>> attack, the sender doesn't care about receiving replies
>> (in fact,
>> doesn't *want* to receive replies) so he'll normally
>> generate each
>> packet with a unique IP address in the "From" field.
>>
>> Second, there are distributed denial of service attacks
>> that
>> (typically) use "botnets" of machines that have been
>> infected with
>> malware that allows the botnet operator to control them.
>> The Mariposa
>> botnet (recently shut down, at least partially, when
>> Spanish law
>> enforcement arrested three operators) controlled machines
>> using over
>> 11 million unique IP addresses.
>>
>> --
>> Later,
>> Jerry.
>
>So what else can be done, nothing?
****
That pretty much sums it up. Welcome To Reality.
****
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:4ot6s5lt9a53uocku5ga06pjc5sq2rc4ht(a)4ax.com...
> See below...
> On Sun, 11 Apr 2010 19:50:43 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:

>>> Watch carefully: 1 + 1 = 2. 2 + 2 = 4; 1 / 4 = 0.25.
>>> Read the third-grade arithmetic
>>> that I used to demonstrate that a SQMS architecture
>>> scales
>>> up quite well, and maximizes
>>
>>I don't think that it makes sense on a single core machine
>>does it? It is reasonable to postulate that a quad core
>>machine might benefit from an adapted design. I will not
>>have a quad core machine. If I had a quad-core machine it
>>might be likely that your SQMS would make sense.
> ****
> But it works better on a single-core machine because of
> (and again, I'm going to violate
> my Sacred Vows of Secrecy) "time slicing".

So Linux thread time slicing is infinitely superior to Linux
process time slicing?

One of my two options for implementing priority scheduling
was to simply have the OS do it by using Nice to set the
process priority of the process that does the high priority
jobs to a number higher than that of the lower priority
jobs.
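[The nice-based option described here is straightforward to sketch. The function name and the increment of 10 below are illustrative, not from the thread; on Linux a HIGHER nice value means LOWER scheduling priority, and an unprivileged process can raise its own nice value but never lower it back.]

```python
import os

# Sketch: the worker process that handles free (low-priority) jobs
# raises its own nice value, so the kernel scheduler favors the
# paid-job worker whenever both are runnable.
def deprioritize(increment=10):
    """Raise this process's nice value; returns the new value."""
    return os.nice(increment)

# Typical use in the free-jobs worker, right after fork():
#   deprioritize(10)  # paid-job worker now wins contended timeslices
```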


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:t1u6s5908ajiqes3830sqmn8h0f6fnucm9(a)4ax.com...
> See below...
> On Sun, 11 Apr 2010 21:11:40 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>The only way this site is going to ever get too long of a
>>queue is if too many free jobs are submitted. Do you
>>really
>>think that this site is ever going to be making $10.00 per
>>second? If not then I really don't have to worry about
>>queue
>>length. In any case I will keep track of the average and
>>peak loads.
>>
> ****
> Sorry, if the interarrival time even EQUALS the expected
> processing time, the queue grows
> to infinite size, no matter what.
> joe

If jobs come in at exactly the same rate at which they can
be processed, including every little nuance of process
overhead, then the queue grows to infinite length? I don't
see how this can occur. Could you explain it, or at least
point me to a link that explains it?
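[The explanation is standard queueing theory: with any randomness in arrivals or service times, the queue length at 100% utilization performs an unbiased random walk with no finite steady state, so it grows without bound. A short simulation of a single-server queue with exponential interarrival and service times shows the average wait exploding as utilization approaches 1; the parameter values are illustrative.]

```python
import random

random.seed(42)

def simulate(rho, n_jobs=100_000):
    """Single-server queue with exponential interarrival (mean 1) and
    service times (mean rho). rho is the utilization; returns the
    average time a job spends waiting before service starts."""
    t_arrive = 0.0
    server_free = 0.0
    total_wait = 0.0
    for _ in range(n_jobs):
        t_arrive += random.expovariate(1.0)            # next arrival
        start = max(t_arrive, server_free)              # wait if busy
        total_wait += start - t_arrive
        server_free = start + random.expovariate(1.0 / rho)
    return total_wait / n_jobs

for rho in (0.5, 0.9, 0.99):
    print(rho, round(simulate(rho), 1))  # wait grows like rho^2/(1-rho)
```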

> ****
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:q6u6s5pl2aueul9l9bor6olbcqvhptcani(a)4ax.com...
> See below...
> On Mon, 12 Apr 2010 09:31:48 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>

>>A single queue with two handlers will not by itself
>>provide
>>the prioritization that I need.
> ****
> So add more handlers. You seem to have missed the idea
> that "time slicing" allows you to
> have an arbitrary number of handlers!

Four different types of jobs; one of these is to have (as
much as possible) absolute priority over all the others,
and every job must be processed in strict FIFO order within
its priority. The whole system should be as efficient as
possible.

I don't think that SQMS using threads can do that as well as
MQMS using processes because Linux threads are reported to
not work as well as Linux processes. I don't know a good way
to make SQMS work well with multiple processes. The whole
purpose of the MQ is to make communicating with multiple
processes simple.
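[For what it's worth, the ordering constraint stated above (strict priority between classes, strict FIFO within a class) is exactly what a single queue keyed on (priority, arrival sequence) provides, regardless of how many handlers drain it. A sketch, with made-up job names:]

```python
import heapq
import itertools

# One shared queue for all four job classes. Keying each entry on
# (priority, arrival sequence number) means a lower-numbered class
# always drains first, and ties within a class pop in FIFO order.
_seq = itertools.count()

def push(queue, priority, job):
    heapq.heappush(queue, (priority, next(_seq), job))

def pop(queue):
    _priority, _n, job = heapq.heappop(queue)
    return job

q = []
push(q, 3, "free-A")
push(q, 0, "paid-A")
push(q, 3, "free-B")
push(q, 0, "paid-B")
print([pop(q) for _ in range(4)])
# -> ['paid-A', 'paid-B', 'free-A', 'free-B']
```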

>>I still see four different queues as a better solution for
>>a
>>single core processor. It is both simpler and more
>>efficient. One of the types of jobs will take 210,000 ms
>>and
>>this job absolutely positively can not screw up my maximum
>>100 ms real time threshold for my high priority jobs.
>>Joe's
>>solution is simply broken in this case.
> ****
> Try to find a bright 10-year-old to help you with the
> complex arithmetic involved here.

This is the ruse that deceitful people use in an attempt to
conceal their deceitfulness.

> How is it that having your 210,000 ms job lose a timeslice
> to your 10ms job "screws up"
> anything? Duh! But I guess you never heard of "time
> slicing" so you can be forgiven.

The cache spatial locality of reference will likely be
ruined.

Depending upon the duration and frequency of the time slices
this may not make much of a difference.