From: Joseph M. Newcomer on
See below...
On Sun, 11 Apr 2010 19:50:43 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:m8m4s55k6vsqr36lrkfo5b70al9sa86hos(a)4ax.com...
>> See below...
>> On Sat, 10 Apr 2010 09:22:28 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>> The reason you don't get priority inversion is that you
>> can't block the processing thread
>> by a slow job when there is one demanding immediate
>> attention. But in a 4-CPU system, you
>> give up most of your CPU power to accomplish this, which
>> is another reason the design
>> sucks. You have a design that will NOT use concurrency
>> when it is available, and will not
>> scale up to larger multiprocessors. It works, by
>> accident, on a single-core CPU, and from
>> this you generalize that it is an acceptable design.
>
>It works by design. I don't ever envision scaling up to
>quad-core when the fees are tenfold higher. I don't ever
>envision having more than an average of 50 transactions per
>second. One more thing: it really would be very easy to
>scale up to quad-core processors, but the design would have
>to be slightly adapted. In this case the four processes
>that already exist would all have to be able to process
>high-priority jobs.
****
So you are saying performance doesn't matter because you will be running well under the
performance threshold. If so, why do you fasten on the most trivial and irrelevant
details to claim they are going to matter in performance?
****
>
>>
>> ROTFL!
>> ****
>>>
>>>Also I don't see that there is much difference between
>>>four queues, one for each priority level, and a single
>>>priority-ordered queue. The priority-ordered queue would
>>>have a more complex insert, but looking for a
>>>higher-priority job would be easier: only look at the head
>>>of the queue. Named pipes cannot handle this.
>> ****
>> Watch carefully: 1 + 1 = 2. 2 + 2 = 4; 1 / 4 = 0.25.
>> Read the third-grade arithmetic
>> that I used to demonstrate that a SQMS architecture scales
>> up quite well, and maximizes
>
>I don't think that it makes sense on a single-core machine,
>does it? It is reasonable to postulate that a quad-core
>machine might benefit from an adapted design. I will not
>have a quad-core machine. If I had a quad-core machine, it
>may well be that your SQMS would make sense.
****
But it works better on a single-core machine because of (and again, I'm going to violate
my Sacred Vows of Secrecy) "time slicing".
****
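To make the arithmetic concrete, here is a minimal sketch of the SQMS
(single queue, multiple servers) shape in C++. The Job type, the
priority scheme, and the worker loop are all illustrative, not Peter's
actual code. One priority-ordered queue feeds N worker threads: on a
single core the workers simply time-slice; on a quad-core the identical
code runs four jobs at once.

    // sqms.cpp -- single priority-ordered queue, multiple worker threads.
    // Illustrative sketch only. Build: g++ -O2 -pthread sqms.cpp
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct Job {
        int priority;                        // higher value = more urgent
        std::function<void()> work;
    };

    struct ByPriority {                      // max-heap ordering for the queue
        bool operator()(const Job& a, const Job& b) const {
            return a.priority < b.priority;
        }
    };

    class JobQueue {
        std::priority_queue<Job, std::vector<Job>, ByPriority> q_;
        std::mutex m_;
        std::condition_variable cv_;
    public:
        void submit(Job j) {                 // O(log n) insert
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(j)); }
            cv_.notify_one();
        }
        Job take() {                         // O(1) peek at the head
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this]{ return !q_.empty(); });
            Job j = q_.top();
            q_.pop();
            return j;
        }
    };

    int main() {
        JobQueue queue;
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 1;                   // one worker per core
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([&queue] {
                for (;;) queue.take().work();  // highest priority always first
            });
        for (auto& w : workers) w.join();    // sketch runs forever; no shutdown
        return 0;
    }

Going from one core to four changes n, not the design; that is the
scaling the third-grade arithmetic above is describing. And the "more
complex insert" is just a heap push, O(log n).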
>
>It is more likely that I will have ten single-core
>machines, geographically dispersed, than a single quad-core
>machine, because the ten machines cost the same as the one.
****
And is there some reason you are trying to minimize performance on these machines, while
creating gratuitous complexity in your code to achieve it?
joe
****
>
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Sun, 11 Apr 2010 21:17:17 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:

>
>Peter Olcott wrote:
>
>> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>> message news:m8m4s55k6vsqr36lrkfo5b70al9sa86hos(a)4ax.com...
>>> See below...
>>> On Sat, 10 Apr 2010 09:22:28 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>> The reason you don't get priority inversion is that you
>>> can't block the processing thread
>>> by a slow job when there is one demanding immediate
>>> attention. But in a 4-CPU system, you
>>> give up most of your CPU power to accomplish this, which
>>> is another reason the design
>>> sucks. You have a design that will NOT use concurrency
>>> when it is available, and will not
>>> scale up to larger multiprocessors. It works, by
>>> accident, on a single-core CPU, and from
>>> this you generalize that it is an acceptable design.
>>
>> It works by design. I don't ever envision scaling up to
>> quad-core when the fees are tenfold higher. I don't ever
>> envision having more than an average of 50 transactions per
>> second. One more thing: it really would be very easy to
>> scale up to quad-core processors, but the design would have
>> to be slightly adapted. In this case the four processes
>> that already exist would all have to be able to process
>> high-priority jobs.
>
>
>
>Joe, at least so far we got him to:
>
> - Admit to a lack of understanding of memory; he himself reduced
>    the loading requirement rather than code any large-memory
>    efficiency methods.
>
> - Admit that his 100 TPS was unrealistic for a 10 ms throughput,
>    since it lacked consideration for the interface processing time
>    outside the vaporware OCR processor. So he added another
>    10 ms and reduced the TPS to 50.
>
>Joe, I don't know about you, but I still have a few teeth left to be
>pulled! :)
***
I blame it on OCD. Some people with OCD keep washing their hands to get them clean; I
keep returning here to see if we can educate Peter. I probably need therapy to stop me
from trying to help him, since he clearly has all the answers and I'm wasting my time.
joe
****
>
>Next he needs to realize the requests do not come in an evenly
>spaced, laid-out fashion!
>
>    20 ms, 20 ms, 20 ms, ..., 20 ms = 50 TPS!
>
>How does he plan to scale bursts of requests?
>
>How does he plan to delegate the HTTP POSTing and control
>cross-domain wasted time?
>
>He basically does not see the queue accumulation!
****
Elementary queueing theory is not one of his strong points.
joe
****
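Hector's point about bursts fits in a few lines of simulation. This is
an illustrative sketch, not Peter's code: Poisson arrivals averaging
50/sec against a fixed 20 ms service time keep one server at 100%
utilization, and the worst-case wait climbs without bound.

    // queue_burst.cpp -- single-server FIFO fed by Poisson arrivals.
    // All numbers are illustrative. Build: g++ -O2 queue_burst.cpp
    #include <algorithm>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        // Mean interarrival time is 1/50 sec = 20 ms, but exponentially
        // distributed, so requests clump instead of arriving every 20 ms.
        std::exponential_distribution<double> interarrival(50.0);
        const double service = 0.020;           // fixed 20 ms per job
        double arrival = 0.0, serverFreeAt = 0.0, worstWait = 0.0;
        for (long n = 1; n <= 1000000; ++n) {
            arrival += interarrival(rng);       // when the next request lands
            double start = std::max(arrival, serverFreeAt);
            worstWait = std::max(worstWait, start - arrival);
            serverFreeAt = start + service;     // FIFO, one server
            if (n % 250000 == 0)
                std::printf("after %7ld jobs: worst wait so far = %.1f sec\n",
                            n, worstWait);
        }
        return 0;
    }

At 100% utilization the backlog performs a random walk with no
restoring force; the 50 TPS *average* doesn't save you.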
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Sun, 11 Apr 2010 21:11:40 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
>news:O2mJq3d2KHA.2284(a)TK2MSFTNGP06.phx.gbl...
>>
>> Peter Olcott wrote:
>>
>> Joe, at least so far we got him to:
>>
>>  - Admit to a lack of understanding of memory; he himself
>>    reduced the loading requirement rather than code any
>>    large-memory efficiency methods.
>
>No. Joe was, and continues to be, wrong in claiming that a
>machine with plenty of extra RAM ever needs to page out
>either a process or its data.
***
I accepted the data of your experiment, so how is it I "continue" to be wrong? In fact, I
explicitly said that I didn't know the right answer but needed actual measured data to
determine it, and you provided the actual experiment, which I acknowledged!
****
>
>>
>>  - Admit that his 100 TPS was unrealistic for a 10 ms throughput,
>>    since it lacked consideration for the interface processing time
>>    outside the vaporware OCR processor. So he added another
>>    10 ms and reduced the TPS to 50.
>
>No, the latest analysis indicates that I am back up to 100
>TPS, because the webserver and the OCR execute in parallel.
>
>>
>> Joe, I don't know about you, but I still have a few teeth left
>> to be pulled! :)
>>
>> Next he needs to realize the requests do not come in an evenly
>> spaced, laid-out fashion!
>>
>>     20 ms, 20 ms, 20 ms, ..., 20 ms = 50 TPS!
>>
>> How does he plan to scale bursts of requests?
>>
>> How does he plan to delegate the HTTP POSTing and control
>> cross-domain wasted time?
>>
>> He basically does not see the queue accumulation!
>> --
>> HLS
>
>The only way this site is ever going to get too long a
>queue is if too many free jobs are submitted. Do you really
>think that this site is ever going to be making $10.00 per
>second? If not, then I really don't have to worry about
>queue length. In any case I will keep track of the average
>and peak loads.
>
****
Sorry, if the interarrival time even EQUALS the expected processing time, the queue grows
to infinite size, no matter what.
joe
****
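For anyone following along, that is the standard M/M/1 queueing result;
the rates below are illustrative. With arrival rate lambda, service
rate mu, and any randomness in arrivals or service times:

    utilization:        rho = lambda / mu
    mean queue length:  L   = rho / (1 - rho)
    mean waiting time:  W   = L / lambda        (Little's law)

At 25 req/sec against a 50/sec capacity, rho = 0.5 and L = 1 job. At
45 req/sec, rho = 0.9 and L = 9 jobs. At 50 req/sec, rho = 1 and L
diverges: every burst adds backlog that is never worked off.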
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Jerry Coffin on
In article <jdSdnYVeeN8mq17WnZ2dnUVZ_qidnZ2d(a)giganews.com>,
NoSpam(a)OCR4Screen.com says...

[ ... ]

> Here it is:
> http://en.wikipedia.org/wiki/Nice_(Unix)
> I will have to run my own tests to see how the process
> priority numbers map to the relative process priorities that
> I provided above. Ultimately the scheduling algorithm boils
> down to essentially the frequency and duration of a time
> slice. There is no need to map to the exact percentage
> numbers that I provided.

The scheduling algorithm does NOT boil down to essentially (or even
remotely) the frequency and/or duration of time slices. It happens as
I already described: the highest priority tasks get (essentially) all
the processor time. Like Windows, Linux does have a starvation
prevention mechanism, but 1) it basically works in opposition to the
priority mechanism, and 2) it only redistributes a small percentage
of processor time, not anywhere close to the 20% you're looking for.
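
For what it's worth, here is the whole extent of the control nice gives
you. This is an illustrative sketch using the POSIX getpriority() and
setpriority() calls that underlie the nice command:

    // nicedemo.cpp -- reading and lowering this process's nice value (Linux).
    // Illustrative sketch. Build: g++ -O2 nicedemo.cpp
    #include <cerrno>
    #include <cstdio>
    #include <sys/resource.h>

    int main() {
        errno = 0;                                  // -1 is a legal return,
        int before = getpriority(PRIO_PROCESS, 0);  // so check errno instead
        if (before == -1 && errno != 0) {
            std::perror("getpriority");
            return 1;
        }
        // Nice values run from -20 (most favored) to +19 ("nicest").
        // This only biases the scheduler's choices among runnable tasks;
        // it does NOT reserve a fixed CPU share such as an 80/20 split.
        if (setpriority(PRIO_PROCESS, 0, 19) != 0) {
            std::perror("setpriority");
            return 1;
        }
        std::printf("nice value: %d -> %d\n",
                    before, getpriority(PRIO_PROCESS, 0));
        return 0;
    }

If he actually wants guaranteed proportions, that takes a different
mechanism entirely (something like cgroup CPU shares), not nice.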

[ ... ]

> I see no other way to provide absolute priority for the high
> priority jobs (paying customers) over the low priority jobs
> (free users). Also, I see no way that this would not work
> well. If I get enough high priority jobs that the lower
> priority jobs never get a chance to run, that would be
> fantastic. The whole purpose of the free jobs is to get more
> paying jobs.

I see Joe has already commented on the technical aspects of this, so
I won't bother. I'll just add that if you think delaying a free job
indefinitely is going to convince somebody to pay for your service,
your understanding of psychology is even more flawed than your
understanding of operating systems.

> If you see something specifically wrong with this approach
> please point out the specific dysfunctional aspect. I see no
> possible dysfunctional aspects with this design.

Perfect designs are sufficiently rare that if you see no possible
dysfunctional aspects to a design, it's essentially proof positive
that you don't understand the design.

[ ... ]

> I see no possible sequence of events where this would ever
> occur, if you do please point it out detail by detail.

I already did -- you apparently either didn't read or didn't
understand it.

> > Not even close, and you clearly don't understand the
> > problem at all yet. The problem is that to authenticate the user
> > you've *already* created a thread for his connection. The fact
> > that you eventually decide not to do the OCR for him doesn't
> > change the fact that you've already spawned a thread. If he makes
> > a zillion attempts at connecting, even if you eventually reject
> > them all, he's still gotten you to create a zillion threads to
> > carry out the attempted authentication for each, and then reject
> > it.
>
> Block IP long before that.

That has (at least) two serious problems. First of all, for a DoS
attack, the sender doesn't care about receiving replies (in fact,
doesn't *want* to receive replies), so he'll normally generate each
packet with a unique spoofed IP address in the "From" field.

Second, there are distributed denial of service attacks that
(typically) use "botnets" of machines that have been infected with
malware that allows the botnet operator to control them. The Mariposa
botnet (recently shut down, at least partially, when Spanish law
enforcement arrested three operators) controlled machines using over
11 million unique IP addresses.

--
Later,
Jerry.
From: Jerry Coffin on
In article <mfj6s5lj3bqji65f0cnbreq98utl7m11oc(a)4ax.com>,
newcomer(a)flounder.com says...

[ ... ]

> >Though the specific details differ, Linux works reasonably
> >similarly.
> ****
> There is an article I cited in this thread on how Linux anti-starvation works, and it is a
> really sad design. "Kludge" comes to mind.
> ****

Well yes -- in fact I've said pretty much the same thing. I didn't
intend to comment on its quality though -- only the fact that in
either case, lower priority threads get something like a fraction
of a percent of the CPU time, not anywhere close to the 20% he's
looking for.

[ ... ]

> >Bottom line: you're ignoring virtually everything the world has
> >learned about process scheduling over the last 50 years or so. You're
> >trying to start over from the beginning on a task that happens to be
> >quite difficult.
> ****
> Actually, we were doing better than this in 1967. But then, we were a bunch of people who
> were in touch with reality (read the Multics literature of the era). Nobody was trying to
> build fantasy operating systems.

In fairness to Peter, a fair amount of what was known in 1967 and
embodied in the design of Multics is still being ignored (e.g., in
Unix) today.

[ ... ]

> ****
> The end result being a successful Denial-of-Service (DoS) attack
> on his site!
> ****

Yes, but since DoS attacks are always carried out by completely
*honest* criminals, we don't have to worry, because all the IP
packets will have the "from" field filled out with the attacker's own
IP address, so they'll be easy to filter out.

How could anybody expect less than perfect honesty from criminals?

--
Later,
Jerry.