From: Hector Santos on
Peter Olcott wrote:

> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
> news:%23%23iEA1c2KHA.4332(a)TK2MSFTNGP02.phx.gbl...
>> Peter Olcott wrote:
>>
>>>> So how do your HTTP requests get delegated? Four
>>>> separate IP addresses, subdomains?
>>>>
>>>> free.peter.com
>>>> 1penny.peter.com
>>>> nickle.peter.com
>>>> peso.peter.com
>>>>
>>>> What happens when cross-domain attempts occur?
>>> I would not be using the complex design that you are
>>> referring to. One domain, one web server, four OCR
>>> processes.
>>
>> So you're back to a many-to-one FIFO queue. And what happens
>> with the HTTP responses?
>
> They have another FIFO queue in the opposite direction.


So it's a synchronized serialization? Or two threads? Do you need
any reader/writer locks?


>> But the web server needs to do a +1 and one of the OCR
>> processes has to do a -1.
>
> No, not on this memory location. This memory location is a
> copy of another memory location that does these things to
> the original value. I don't want to slow down the read of
> this value by using a lock because it will be read in a very
> tight loop.


This is such a classic SYNC 101 problem - see Bains, maybe the 1st
or 2nd chapter.
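
For the record, the textbook failure mode looks like this (a minimal
sketch, not Peter's code; assumes POSIX threads):

#include <pthread.h>
#include <stdio.h>

/* Two threads doing an unsynchronized read-modify-write on a shared
   counter - the classic lost-update race. */
static int counter = 0;            /* no lock, no interlocked ops */

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                 /* load, add, store can interleave */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", counter);  /* expected 2000000; often less */
    return 0;
}

Run it a few times and watch the total come up short. That is what
the locking is for.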

> In other words the original memory location may be read or
> written to as often as once every 10 ms. It has to be
> locked to make sure that it is updated correctly.


But you said there is no lock in the paragraph above.

> This copy
> of the memory location could easily be read a million times
> a second or more; I don't want to slow this down with a lock.


Oy vey!

>> No conflicts, no reader/writer locks? No interlocked
>> increments and decrements?
>
> As long as a simultaneous read and write cannot garble each
> other, there is no need for any of these things, on this copy
> of the original memory location.


But do you have the potential for a write and a read to occur at the
same time, or priority inversion due to task switching?

--
HLS
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:m8m4s55k6vsqr36lrkfo5b70al9sa86hos(a)4ax.com...
> See below...
> On Sat, 10 Apr 2010 09:22:28 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
> The reason you don't get priority inversion is that you
> can't block the processing thread
> by a slow job when there is one demanding immediate
> attention. But in a 4-CPU system, you
> give up most of your CPU power to accomplish this, which
> is another reason the design
> sucks. You have a design that will NOT use concurrency
> when it is available, and will not
> scale up to larger multiprocessors. It works, by
> accident, on a single-core CPU, and from
> this you generalize that it is an acceptable design.

It works by design. I don't ever envision scaling up to
quad-core when the fees are tenfold higher. I don't ever
envision having more than an average of 50 transactions per
second. One more thing: it really would be very easy to scale
up to quad-core processors, but the design would have to be
slightly adapted. In this case the four processes that
already exist would all have to be able to process
high-priority jobs.

>
> ROTFL!
> ****
>>
>>Also I don't see that there is much difference between four
>>queues, one for each priority level, and a single
>>priority-ordered queue. The priority-ordered queue would have
>>a more complex insert. Looking for a higher-priority job would
>>be easier: only look at the head of the queue. Named pipes
>>cannot handle this.
> ****
> Watch carefully: 1 + 1 = 2. 2 + 2 = 4; 1 / 4 = 0.25.
> Read the third-grade arithmetic
> that I used to demonstrate that a SQMS architecture scales
> up quite well, and maximizes

I don't think that it makes sense on a single-core machine,
does it? It is reasonable to postulate that a quad-core
machine might benefit from an adapted design. I will not
have a quad-core machine. If I had one, it might well be
that your SQMS would make sense.

It is more likely that I will have ten geographically
dispersed single-core machines than a single quad-core
machine, because the ten machines cost the same as the one.
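
To make the arithmetic concrete (my own rough numbers): on one core,
four 10 ms jobs cost about 40 ms of CPU no matter how they are
queued, so SQMS buys nothing there. On four cores with a single
shared queue, all four jobs can run at once and finish in about
10 ms. That quad-core case is where SQMS would pay off, and it is
the case I don't have.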

> concurrency while minimizing delay, whereas your MQMS
> architecture is guaranteed to
> maximize delays because, basically, it allows no
> concurrency. If you can't follow the
> simple arithmetic, try to find a reasonably bright child
> to explain addition, subtraction,
> multiplication and division to you. You demanded "sound
> reasoning" when I thought you
> could do simple arithmetic all on your own, without me to
> lead you every step of the way.
>>
>>> ****
>>>>
>>>>A possibly better way to handle this would be to have the
>>>>3.5 minute job completely yield to the high-priority jobs,
>>>>and then pick up exactly where it left off.
>>> ****
>>> No, the better way is to use
>>> priority-inversion-prevention
>>> algorithms as documented in the
>>> realtime literature. I went to a lecture on these last
>>> year some time, and learned in 40
>>> minutes what progress had been made in
>>> priority-inversion-prevention. Clever stuff.
>>> ****
>>
>>From what I have been told, the only way that priority
>>inversion can possibly occur is if there is some sort of
>>dependency on a shared resource. I don't see any shared
>>resource in my new design with four independent processes.
> ****
> You don't have the faintest clue what you are talking
> about, do you? You have solved the
> priority inversion problem by creating a design that
> minimizes utilization of computing
> resources and which, by its very nature, guarantees
> absolutely worst-case response under
> load! According to my training, we call this "bad
> design".
> ****
>>
>>>>
>>>>>>Just the pipe name itself is part of the disk; nothing
>>>>>>else hits the disk. There are many messages about this on
>>>>>>the Unix/Linux groups; I started a whole thread on this:
>>>>> ****
>>>>> And the pipe grows until what point? It runs out of
>>>>> memory? Ohh, this is a new interpretation of "unlimited
>>>>> pipe growth" of which I have been previously unaware!
>>>>
>>>>It does not ever simply discard input at some arbitrary
>>>>queue length such as five items. One of the FIFO models
>>>>did just that.
>>> ****
>>> Well, that is not exactly "robust" or "reliable" now, is
>>> it? So it either blocks, or discards data. If it blocks,
>>> it doesn't matter whether it grows or not. If it discards
>>> data, it is not a very good design.
>>> joe
>>
>>The fundamental OS IPC mechanisms are what discard the
>>data. I think that this pertained to TCP sockets discarding
>>data because their buffer of very few elements had been
>>exceeded.
> ****
> TCP sockets will NEVER discard data; obviously, you cannot
> tell the difference between
> TCP/IP and UDP. TCP uses a positive-acknowledgement
> protocol and essentially what we
> might call a "distributed semaphore" model of queue
> management, which if you had ever read
> anything about it, you would immediately have known. I
> have no idea why you create
> imaginary problems due to your lack of understanding and
> solve them with fantasy solutions
> whose reliability is problematic at best.
>
> TCP/IP is a reliable protocol in that you will receive
> EVERY byte the sender sends, IN
> ORDER, NO DUPLICATES, NO LOST DATA, EVER! or both the
> sender and the receiver will receive
> a notification of failure (usually a lost connection because
> of failure of the sender or
> receiver). Where, in this specification, is there
> anything about it being allowed to
> whimsically throw away data? (Hint: the correct answer is
> "nowhere")
>
> Again, you have rejected a reasonable design alternative
> because of complete ignorance of
> the details!
>
> And how do you detect/recover from ANY IPC mechanism that
> can throw data away? Or does
> reliability no longer matter?
> joe
> ****
>>
>>> ****
>>>>
>>> Joseph M. Newcomer [MVP]
>>> email: newcomer(a)flounder.com
>>> Web: http://www.flounder.com
>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:enbDMkd2KHA.3568(a)TK2MSFTNGP04.phx.gbl...
> Peter Olcott wrote:
>
>> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>> message news:%23%23iEA1c2KHA.4332(a)TK2MSFTNGP02.phx.gbl...
>>> Peter Olcott wrote:
>>>
>>>>> So how do your HTTP requests get delegated? Four
>>>>> separate IP addresses, subdomains?
>>>>>
>>>>> free.peter.com
>>>>> 1penny.peter.com
>>>>> nickle.peter.com
>>>>> peso.peter.com
>>>>>
>>>>> What happens when cross-domain attempts occur?
>>>> I would not be using the complex design that you are
>>>> referring to. One domain, one web server, four OCR
>>>> processes.
>>>
>>> So you're back to a many-to-one FIFO queue. And what
>>> happens with the HTTP responses?
>>
>> They have another FIFO queue in the opposite direction.
>
>
> So it's a synchronized serialization? Or two threads? Do
> you need any reader/writer locks?

The FIFO in the other direction will have four OCR processes
that are writers and one web server that is the reader.
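
Something like this minimal sketch is what I have in mind (assuming
a POSIX FIFO on Linux; names are illustrative). Each OCR process
opens the same FIFO for writing; POSIX guarantees that a write() of
at most PIPE_BUF bytes to a FIFO is atomic, so the four writers
cannot interleave within one message:

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* One OCR process sends a completed job back to the web server. */
int send_response(const char *fifo_path, const char *msg)
{
    char buf[PIPE_BUF];
    int len = snprintf(buf, sizeof buf, "%s\n", msg);
    if (len < 0 || (size_t)len >= sizeof buf)
        return -1;                  /* too large to write atomically */

    int fd = open(fifo_path, O_WRONLY);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, buf, (size_t)len);
    close(fd);
    return n == len ? 0 : -1;
}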

>
>
>>> But the web server needs to do a +1 and one of the OCR
>>> processes has to do a -1.
>>
>> No, not on this memory location. This memory location is a
>> copy of another memory location that does these things to
>> the original value. I don't want to slow down the read of
>> this value by using a lock because it will be read in a
>> very tight loop.
>
>
> This is such a classic SYNC 101 problem - see Bains, maybe
> the 1st or 2nd chapter.

Updating the original value is a little tricky; I have books
on that. Updating the copy only requires that the copy be
made immediately following an update of the original.
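
In outline (a sketch of the intent only; shown with pthreads for
brevity, where the real design would put both values in memory
shared across the processes):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int original = 0;       /* updated ~100 times/sec, under lock */
static volatile int copy = 0;  /* read lock-free in the tight loops */

/* The original is updated correctly under the mutex, then the copy
   is refreshed immediately afterward. Readers of `copy` never take
   the lock; this assumes an aligned int store/load cannot be torn
   on the target CPU. */
void adjust_pending_jobs(int delta)
{
    pthread_mutex_lock(&lock);
    original += delta;
    copy = original;           /* publish the new value */
    pthread_mutex_unlock(&lock);
}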

>
>> In other words the original memory location may be read
>> or written to as often as once every 10 ms. It has to be
>> locked to make sure that it is updated correctly.
>
>
> But you said there is no lock in the paragraph above.
>
>> This copy of the memory location could easily be read a
>> million times a second or more; I don't want to slow this
>> down with a lock.
>
>
> Oy vey!
>
>>> No conflicts, no reader/writer locks? No interlocked
>>> increments and decrements?
>>
>> As long as a simultaneous read and write cannot garble
>> each other, there is no need for any of these things, on
>> this copy of the original memory location.
>
>
> But do you have the potential for a write and a read to
> occur at the same time, or priority inversion due to task
> switching?

No priority inversion is possible with this design because
every low-priority job checks at least 100 times a second
whether it needs to put itself to sleep.

There is no memory lock on the copy of the original
NumberOfHighPriorityJobsPending; since this is the only
thing that is shared across processes, there can be no
possible lock contention, and thus no priority inversion.
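
The low-priority side is then just this (a sketch; the 10 ms figure
matches checking 100 times per second):

#include <time.h>

extern volatile int NumberOfHighPriorityJobsPending;  /* the copy */

/* Between slices of work, a low-priority OCR process checks the
   copied counter and yields while any high-priority job is pending. */
void yield_to_high_priority(void)
{
    struct timespec ts = { 0, 10 * 1000 * 1000 };     /* 10 ms */
    while (NumberOfHighPriorityJobsPending != 0)
        nanosleep(&ts, NULL);
}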

>
> --
> HLS


From: Hector Santos on
Peter Olcott wrote:

> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
> message news:m8m4s55k6vsqr36lrkfo5b70al9sa86hos(a)4ax.com...
>> See below...
>> On Sat, 10 Apr 2010 09:22:28 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>> (1) Four queues, each with its own OCR process; one of
>>> these processes has much higher process priority than
>>> the rest.
>> ****
>> Did we not explain that messing with thread priorities
>> gets you in trouble?
>>
>> And you seem to have this totally weird idea that
>> "process" and "thread" have meaning. Get
>
> Oh, like the complete fiction of a separate address space for
> processes and not for threads?
> This group really needs to be moderated.


The funny thing is you seriously think you are normal! I
realize we have gone beyond the call of duty to help you, but
YOU really think you are of sound mind. You should be so lucky
that these public groups are not moderated - you would be the
#1 person locked out. Maybe that is what happened in the Linux
forums - people told you to go away - "go to the WINDOWS FORUMS
and cure them!"

>> ****
>> How? You are postulating mechanisms that do not exist in
>> any operating system I am aware
>
> // shared memory location
> if (NumberOfHighPriorityJobsPending !=0)
> nanosleep(20);
>
It seems like every other message you switch into jerk mode.


And everything you post seems to be greater evidence of your
incompetence. Everyone knows that using time to synchronize is
the #1 beginner's mistake in any sort of thread or process
synchronization design.
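
The grown-up version of his check is to block on a condition
variable and have the other side signal it - no polling, no sleep
tuning (a sketch, assuming pthreads; the names are mine):

#include <pthread.h>

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static int pending = 0;        /* high-priority jobs outstanding */

void high_priority_arrived(void)
{
    pthread_mutex_lock(&m);
    pending++;
    pthread_mutex_unlock(&m);
}

void high_priority_done(void)
{
    pthread_mutex_lock(&m);
    if (--pending == 0)
        pthread_cond_broadcast(&cv);   /* wake the waiting workers */
    pthread_mutex_unlock(&m);
}

/* A low-priority worker sleeps in the kernel until the backlog is
   clear, instead of waking 100 times a second to look at a flag. */
void wait_until_no_high_priority(void)
{
    pthread_mutex_lock(&m);
    while (pending != 0)
        pthread_cond_wait(&cv, &m);
    pthread_mutex_unlock(&m);
}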

--
HLS
From: Hector Santos on
Peter Olcott wrote:

> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
> message news:m8m4s55k6vsqr36lrkfo5b70al9sa86hos(a)4ax.com...
>> See below...
>> On Sat, 10 Apr 2010 09:22:28 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>> The reason you don't get priority inversion is that you
>> can't block the processing thread
>> by a slow job when there is one demanding immediate
>> attention. But in a 4-CPU system, you
>> give up most of your CPU power to accomplish this, which
>> is another reason the design
>> sucks. You have a design that will NOT use concurrency
>> when it is available, and will not
>> scale up to larger multiprocessors. It works, by
>> accident, on a single-core CPU, and from
>> this you generalize that it is an acceptable design.
>
> It works by design. I don't ever envision scaling up to
> quad-core when the fees are tenfold higher. I don't ever
> envision having more than an average of 50 transactions per
> second. One more thing: it really would be very easy to scale
> up to quad-core processors, but the design would have to be
> slightly adapted. In this case the four processes that
> already exist would all have to be able to process
> high-priority jobs.



Joe, at least so far we got him to:

 - Admit to a lack of understanding of memory; he himself
   reduced the loading requirement rather than code any
   large-memory efficiency methods.

 - Admit that his 100 TPS was unrealistic for a 10 ms throughput
   that lacked consideration for the interfacing processing time
   outside the vaporware OCR processor. So he added another
   10 ms and reduced the TPS to 50.

Joe, I don't know about you, but I still have a few teeth left to be
pulled! :)

Next he needs to realize the requests do not come in an evenly
spaced fashion!

20 ms + 20 ms + 20 ms + ... + 20 ms = 50 TPS!

How does he plan to scale bursts of requests?

How does he plan to delegate the HTTP POSTing and keep
cross-domain attempts from wasting time?

He basically does not see the queue accumulation!
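
Work it out: at 20 ms a job he drains at most 50 jobs a second. If
a burst of 200 requests lands inside one second, 150 of them are
still queued when that second ends, and the last one waits about
200 x 20 ms = 4 seconds. Sustained arrivals above 50/s grow the
queue without bound.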
--
HLS