From: Hector Santos on
Joseph M. Newcomer wrote:

>>>
>>>> Alternative (a) There are four processes with four queues
>>>> one for each process. These processes only care about
>>>> executing the jobs from their own queue. They don't care
>>>> about the jobs in any other queue. The high priority
>>>> process
>>>> is given a relative process priority that equates to 80%
>>>> of
>>>> the CPU time of these four processes. The remaining three
>>>> processes get about 7% each. This might degrade the
>>>> performance of the high priority jobs more than the next
>>>> alternative.
>>> There is no such thing with any OS of which I'm aware. At
>>> least with
>>> a typical OS, the highest priority task is the *only* one
>>> that will
>>> run at any given time. Windows (for one example) does
>>> attempt to
>>> prevent starvation of lower priority threads by waking one
>>> lower
>>> priority thread every four seconds.
>> The alternative that you show quoted above is called time
>> slicing and has been available for many decades.
> ****
> And therefore timeslicing takes ZERO overhead, right? And you are still thinking that
> mucking with thread priorities is going to result in a flawless design (and if you start
> ranting about how "thread" and "processes" are different, you will only confirm that you
> are completely and utterly clueless)


I'll tell ya Joe, this guy needs a 12-step program!

>> My scheduler will signal all of the low priority jobs that
>> they need to sleep now. When the high priority queue is
>> empty, and all of the high priority jobs are completed the
>> low priority jobs get a signal to wake up now.
> ****
> Oh, a whole NEW mechanism! Has anyone said "this is without a doubt the dumbest design I
> have ever seen"? Well, it is. You really don't understand operating systems, because the
> ideas you state here show a fundamental confusion about the right way to build
> applications.

>

> I presume you mean the linux 'signal' operation. Have you really studied the problems
> involved in using it?
> ****


I think he really wants a

HttpThread()
{
WaitForSingleObject(hHighPriorityProcessInProgress, INFINITE);
....
}

or

WebServer()
{
for each thread handle, suspend all low priority
threads if new request is high priority.

start HttpThread() thread.
}

>> Block IP long before that.
> ****
> What IP? You clearly have some bizarre notion that the IP is unforgeable. If I wanted to
> do a D-O-S on your site, I'd change the IP with every transmission. You have some awfully
> naive views about the goodness of the world. Or are we talking about Peter's Fantasy
> Internet, where everyone is well-behaved, there are no D-O-S attacks, and nobody EVER
> emails a virus?
> ****

No Joe, he will require every customer to have a verifiable static IP
before signing up. That IP will be part of the user record and will
never change. :)

--
HLS
From: Joseph M. Newcomer on
See below...
On Sun, 11 Apr 2010 23:29:51 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
>news:O%23HmXKf2KHA.5212(a)TK2MSFTNGP04.phx.gbl...
>> Peter Olcott wrote:
>>
>>> http://en.wikipedia.org/wiki/Priority_inversion
>>> If there are no shared resources then there is no
>>> priority inversion.
>>> Try and provide a valid counter-example of priority
>>> inversion without shared resources.
>>
>>
>> You don't have to have a deadlock to reveal problems. You can
>> get Race Conditions with classic SYNC 101 mistakes like
>> this one, which depends on timing:
>>
>> if (NumberOfHighPriorityJobsPending !=0)
>> nanosleep(20);
>
> 20 milliseconds
****
It doesn't matter if it is 20ns, 20us, 20ms or 20s. The code has the same failure mode.
Pay attention to those of us who have had to deal with these problems!

Hector is right in all regards. Something about a "sound technical basis" comes to mind
here...
joe
****
>
>>
>> Since you like wikipedia, read:
>>
>> http://en.wikipedia.org/wiki/Race_condition
>>
>> What's the point of the above? Are you expecting that the
>> value will turn 0 during the nanosleep(20), which is wrong
>> anyway? Is that 20 seconds or 20 nanoseconds? Did you
>> really mean:
>>
>> if (NumberOfHighPriorityJobsPending !=0)
>> usleep(20);
>>
>> In either case, you are in for a RUDE awakening with
>> that.
>>
>> You probably mean:
>>
>> while (NumberOfHighPriorityJobsPending !=0)
>> usleep(20);
>>
>> which COULD be fine, but you should use an optimized
>> kernel object here to wait on.
>>
>> if (WaitForSingleObject(hPriorityEvent, INFINITE) ==
>> WAIT_OBJECT_0) {
>> /// do whatever
>> } else {
>> /// Not what I expected
>> }
>>
>> When you wait on a kernel object, you won't be spinning
>> your thread like you do above.
>
>Event driven is better. I would prefer that the high
>priority jobs have absolute priority over the lower priority
>jobs. Even better would be if this could be done
>efficiently. I think that process priority would work well
>enough. That would depend on how the kernel scheduler works,
>and on the frequency and duration of the time slices.
****
But given that a high-priority thread will preempt a lower-priority thread immediately, and the
high-priority task will finish in less than one scheduler quantum (~30ms), why do there have
to be special mechanisms that implement what the kernel already does? The whole design
adds complexity that is not required.
****
>
>>> You are not explaining with much of any reasoning why you
>>> think that one alternative is better than another, and
>>> when I finally do get you to explain, it is only that
>>> your alternative is better than your misconception of my
>>> design, not the design itself.
>>
>>
>> No, your problem is that you are stuck with a framework
>
>One design constraint that won't be changed until system
>load requires it is that we must assume a single core
>processor with hyperthreading.
>
>> Many Threads to 1 FIFO/OCR process
>>
>> and everyone is telling you it's flawed and why. I've tried
>> different
>
>When they finally get to the why part I point out their
>false assumption. A priority queue may be a great idea with
>multiple cores, but I will not have those.
****
Actually, for reasons I have already pointed out, it works better even with a single core
CPU from the Computer Museum.
****
>
>> ways using your WORK LOAD which you accepted and began to
>> change your TPS.
>>
>> But you still going to overflow your Many to 1 design,
>> especially if you expect to use TIME to synchronize
>> everything.
>
>This is not a given, but using time to synchronize is not
>the best idea. It could possibly waste a lot of CPU. So then:
>four processes, with one getting an 80% relative share
>and the other three getting about 7% each.
*****
Dependence on time to synchronize is one of the hallmarks of amateur designers. Using
thread priorities to simulate synchronization is another.
****
>
>>> Exactly what are these ways, and precisely what have I
>>> failed to account for?
>>
>>
>> You've been told in a dozen ways why it will fail! You are
>> OFF in your timing of everything for the most part. You
>> think you can achieve what you want with a Many Thread to
>> 1 OCR process design at the TPS rates and work load you
>> think you can get.
>>
>> You can't!
>
>Four processes, four queues, each process reading only from
>its own queue, one process having much higher process priority
>than the rest. Depending upon the frequency and size of the
>time slices this could work well on the required single-core
>processor.
****
As I pointed out, this still guarantees worst-case performance, even on a single-core CPU!
****
>
>On a quad-core it would have to be adapted possibly using a
>single priority queue so that the high priority jobs could
>possibly be running four instances at once.
***
And your point is...?
****
>
>>> I know full well that the biggest overhead of the
>>> process is going to be disk access. I also know full
>>> well that tripling the number of disk accesses would likely
>>> triple overhead. I am not sure that SQLite is not smart
>>> enough to do a record number based seek without requiring
>>> an index. Even if SQLite is not smart enough to do a
>>> record seek without an index, it might still be fast
>>> enough.
>>
>>
>> This is what I am saying, WE TOLD YOU WHAT THE LIMITS OF
>> SQLITE are and you are not listening. You can do a ROW
>> lookup, but you can't do a low level FILE RECORD POSITION
>> AND BYTE OFFSET like you think you need, but really don't.
>
>As long as the ROW lookup maps to the file byte offset we
>are good. If the ROW lookup must read and maintain an index
>just to be able to get to the rows in sequential order, this
>may not be acceptable.
>
>> I also told you that while you UPDATE an SQLITE database,
>> all your READS are locked!
>>
>> You refuse to comprehend that.
>
>I knew this before you said it the first time. The practical
>implication of this is that SQLite can't handle nearly as
>many simultaneous updates as other row-locking systems.
>Their docs said 500 transactions per second.
>
>> Again, you can SELECT a row in your table using the proper
>> query, but it isn't a direct FILE ACCESS with BYTE OFFSET
>> idea and again, SQLITE3 will lock your database during
>> updates so all your REQUEST SERVER will be locked in
>> reading/writing any table while it is being updated by
>> ANYONE.
>
>If it doesn't require a separate index to do this, then the
>record number maps to a byte offset. Since record numbers
>can be sequenced out-of-order, in at least this instance it
>must have something telling it where to go, probably an
>index. Hopefully it does not always make an index just in
>case someone decides to insert records out-of-sequence.
*****
Sadly, this keeps presuming details of implementation that have not been substantiated by
any actual measurements.
>
>>> You (and Hector) are definitely right on some things.
>>
>>
>> We are right on EVERYTHING discussed here. There has been
>> nothing you stated or posted that indicates any error in
>> all suggestions to you.
>
>You and Joe are most often wrong by making false assumptions
>about the details of my design and its requirements.
>
>> IDEAL: Many Threads to Many Threads
>> WORST: Many Threads to 1 thread
>
>I guess that I am currently back to alternative two, which is
>many threads or a web server feeding four OCR processes via four
>FIFOs on a single-core machine, one process having much higher
>process priority than the others.
****
So how is MQMS (multiple queues, multiple servers) different from WORST? SQMS (a single
queue feeding multiple servers) comes closer to IDEAL than any other design.
****
>
>A multi-core processor would probably involve the same thing
>except have a single priority queue in-between.
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Hector Santos on
Jerry Coffin wrote:

> The difference from what he's doing right now is that instead of
> being restricted to running on one single-core processor,


He's got a Quad with 8GB monster machine. :)

--
HLS
From: Pete Delgado on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:uhFwgp11KHA.140(a)TK2MSFTNGP05.phx.gbl...
> Live and learn. Which leads to the questions, if you are going to design
> for Linux, then;
>
> Why are you trolling in a WINDOWS development forum?
>
> Why are you here asking/stating design methods that defy logic
> under Windows when YOU think this logic is sound under UNIX?
>
> If you are going to design for Windows, then you better learn how to
> follow WINDOWS technology and deal with its OS and CPU design guidelines.


Hector,
You haven't yet figured out the riddle of Peter Olcott despite the repeated
clues? When you look back at the posts after I tell you his little secret,
it should become obvious and you should have one of those "Ah-ha!" moments.

The truth of the matter is that there is no OCR technology at play here at
all but rather AI technology. The secret is that Peter Olcott is really an
AI program that is being entered to win the Loebner Prize.
http://loebner.net/Prizef/2010_Contest/Loebner_Prize_Rules_2010.html

Let's look at the evidence again, shall we? Peter originally posed a
question to the group. From each of the answers he received, his follow-up
questions contained an amalgam of the original question and the resulting
answer. In each case, the mixture could be made and perceived by humans to
be reasonably logical because the original respondent had already considered
the answer in the context of the original question. This is pretty common
with many Turing Test style programs (mixture of question and response). I
recall some of the games that I had back in the 80's that used this
technique to appear intelligent.

This also explains the magical morphing requirements and the circular
reasoning being used quite nicely. Each time a post was made by the Peter
Olcott program, it would incorporate the suggestions from previous posts by
members of this group. The interesting thing about this particular Turing
Test program is that if the group reached a consensus on a particular
approach, the program would respond *against* the suggestion even after many
attempts were made to justify the suggestion, thus generating even more posts
to the affirmative that the program could respond to. The architecture of
this part of the "personality" was sheer genius because it simulates the
average clueless programmer who has no motivation and below average
intelligence.

Another clue must be the way the Turing Test program (Peter Olcott) fishes
for additional posts by always responding to *every* post on *every* branch
of a thread. The Turing Test program must make sure that its posts are the
leaf on every branch in order to ensure that *someone*, *somewhere* will
respond to it. Without responses, the machine is simply in wait state which,
of course, means that the program has failed to convince humans that there
is a human intelligence behind the posts.

I had originally thought that the real "programmer" would come forth on
April 1st and identify him/herself, but apparently the deception has gone so
swimmingly well that testing will continue so long as you and Joe post to
the threads. ;-)

In an effort to find out who the real programmer was behind the Peter Olcott
Turing Test machine, I consulted with the internet anagram server at
http://wordsmith.org/anagram/ and typed in the name Peter Olcott in the
hopes that the real culprit had simply tried to mask his identity. The
responses included:

Elect Pro Tot
Creep Lot Tot
Crop Let Tote
Cop Letter To

The bottom line is that while we may not know the true identity of the
programmers behind the Peter Olcott hoax, it is possible that the internet
anagram program may have come up with an appropriate response to his
spamming of a Windows newsgroup with questions that are to be implemented on
Linux... send a letter to a cop!

-Pete

PS: Can we all get back to real MFC programs and real MFC programmers now???
;-)


From: Peter Olcott on

"Hector Santos" <sant9442(a)gmail.com> wrote in message
news:f357657c-51c6-4112-bf7a-4dccf1699aff(a)35g2000yqm.googlegroups.com...
> On Apr 12, 1:18 am, Jerry Coffin <jerryvcof...(a)yahoo.com>
> wrote:
>> In article
>> <LuidnT3tuaC7p1_WnZ2dnUVZ_gCdn...(a)giganews.com>,
>> NoS...(a)OCR4Screen.com says...
>>
> I just don't think he needs four different EXEs for this.

Under Linux there is supposed to be a much greater chance of
priority inversion with threads than with processes, because the
OS locks all kinds of things on behalf of the process when
using threads. At least that is what David Schwartz said.