From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:6j3qq5hbdgrtotu8k91pm5mb3db4dgd228(a)4ax.com...
> See below...
> On Thu, 25 Mar 2010 19:20:38 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:64onq5dabohqabp4htku20ajb8q96r06s1(a)4ax.com...
>>> See below...
>>> On Thu, 25 Mar 2010 10:12:56 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>>>
>>>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>>>message
>>>>news:00rmq5hctllab7ursv8q64pq5eiv8s82ad(a)4ax.com...
>>>>> See below...
>>>>> On Thu, 25 Mar 2010 00:01:37 -0500, "Peter Olcott"
>>>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>>>
>>>>>>
>>>>>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>>>>>message
>>>>>>news:rdqlq5dv2u8bh308se0td53rk7lqmv0bki(a)4ax.com...
>>>>>>> Make sure the addresses are completely independent of
>>>>>>> where the vector appears in memory.
>>>>>>>
>>>>>>> Given you have re-implemented std::vector (presumably
>>>>>>> as peter::vector) and you have done all the good
>>>>>>> engineering you claim, this shouldn't take very much
>>>>>>> time at all. Then you can use memory-mapped files, and
>>>>>>> share this massive footprint across multiple processes,
>>>>>>> so although you might have 1.5GB in each process, it is
>>>>>>> the SAME 1.5GB because every process SHARES that same
>>>>>>> data with every other process.
>>>>>>>
>>>>>>> Seriously, this is one of the exercises in my Systems
>>>>>>> Programming course; we do it Thursday afternoon.
>>>>>>> joe
>>>>>>
>>>>>>But all that this does is make page faults quicker,
>>>>>>right? Any page faults at all can only degrade my
>>>>>>performance.
>>>>> ***
>>>>> Denser than depleted uranium. Fewer page faults, quicker.
>>>>> For an essay, please explain in 500 words or less why I
>>>>> am right (it only requires THINKING about the problem)
>>>>> and why these page faults happen only ONCE even in a
>>>>> multiprocess usage! Compare to the ReadFile solution.
>>>>> Compare and contrast the two approaches. Talk about
>>>>> storage allocation bottlenecks.
>>>>>
>>>>> I'm sorry, but you keep missing the point. Did you think
>>>>> your approach has ZERO page faults? You even told us it
>>>>> doesn't!
>>>>
>>>>I was making a conservative estimate; actual measurement
>>>>indicated zero page faults after all data was loaded, even
>>>>after waiting 12 hours.
>>> ***
>>> And a memory-mapped file would not show the same
>>> performance? You know this HOW?
>>> ****
>>>>
>>>>> Why do you think a memory-mapped file is going to be
>>>>> different? Oh, I forgot, you don't WANT to understand
>>>>> how they work, or how paging works!
>>>>
>>>>Not if testing continues to show that paging is not
>>>>occurring.
>>> ****
>>> SCALABILITY! SCALABILITY! MAXIMIZE THROUGHPUT! MEET
>>> 500MS PERFORMANCE GOALS!
>>> SCALABILITY!
>>>
>>
>>If there are no page faults without a memory-mapped file and
>>there are no page faults with a memory-mapped file, then
>>exactly and precisely what is the specific incremental
>>benefit of a memory-mapped file? (The performance and memory
>>usage would have to be the same in both cases.)
> *****
> Did you say there were no page faults? I thought you said
> there were 27,000 while the
> data is being loaded! And is it not screamingly obvious
> that if you have the same file
> mapped into 20 processes, that you did not take 27,000
> page faults in EACH of them?
> Instead, you took a flurry of page faults when you mapped
> the file in and touched the
> pages, and thereafter there are no page faults in ANY
> process that is sharing that
> segment! ZERO! In contrast, in your single-thread
> multiple-process model, each process
> has to undergo 27,000 page faults as it starts up.
>
> Have I not repeatedly said "amortized over ALL processes"?
> ****
>>
>>I don't want a really quick way to load more data when
>>needed, because there is no possible way that this could be
>>nearly as fast as having the data already loaded.
> ****
> I have no idea why you thought I said anything like this.
> Instead, I gave you a
> ZERO-page-fault way to load the data into subsequent
> processes.

Yet since that situation is almost never going to occur, it
helps almost not at all.

One process, which will probably have only a single thread
for years, will load the data exactly once and continue to
run indefinitely unless something goes wrong. This single
process is to have, as much as possible on Linux or Windows,
real-time priority.
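
For reference, here is a minimal sketch of the shared-mapping
approach Joe describes (so that, e.g., 20 processes take the
loading faults once between them rather than once each). It is
illustrative only; the file name and error handling are
assumptions, not anything from the actual design:

// Sketch: map a read-only data file so that every process that maps
// the same file shares the same physical pages ("ocr_data.bin" is a
// hypothetical name).
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE hFile = CreateFileA("ocr_data.bin", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) { std::printf("open failed\n"); return 1; }

    // Create a mapping object backed by the file (sizes 0,0 = whole file).
    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
    if (hMap == NULL) { std::printf("CreateFileMapping failed\n"); return 1; }

    // Map the whole file into this process's address space.
    const unsigned char* data = static_cast<const unsigned char*>(
        MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0));
    if (data == NULL) { std::printf("MapViewOfFile failed\n"); return 1; }

    // ... treat 'data' as the base address of the large lookup structure ...

    UnmapViewOfFile(data);
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}

The first process to touch the pages takes the hard faults; any
other process mapping the same file mostly takes cheap "soft"
faults that resolve to pages already in memory.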

>
> Note also that the page faults might be what are called
> "soft" faults, that is, when the
> OS goes to look for the page, it discovers it is already
> in memory, and doesn't waste time
> bringing in another copy. So even if the page faults
> occur, they are very CHEAP page
> faults.
> ****
>>
>>> Your method does not scale; our suggestions give you
>>> scalability, maximize throughput, and probably make it
>>> possible to meet your wonderful 500ms goal consistently.
>>> joe
>>
>>Yet you have consistently, time after time, again and again,
>>never backed this up with reasoning. As soon as you back
>>this up with verifiably correct reasoning, then (then and
>>only then) will I agree. I absolutely and positively reject
>>dogma as a source of truth.
> ****
> Hmm.. "Verifiable correct reasoning"....how about ZERO
> PAGE FAULTS ARE BETTER THAN LOTS OF
> PAGE FAULTS? Doesn't that sound good? As to the rest of
> the reasoning, it is in the

I want zero page faults from the time that my application has
loaded all of its data onward. I don't care about the three
minutes a month of start-up time.
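
As an aside, here is a minimal sketch of how the page-fault
count being discussed can be read on Windows; this is
illustrative only, and not necessarily the measurement
procedure actually used here:

// Sketch: read the cumulative page-fault count for the current process
// through PSAPI (link with psapi.lib).
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main()
{
    PROCESS_MEMORY_COUNTERS pmc = {};
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        std::printf("Page faults so far: %lu\n", pmc.PageFaultCount);
    else
        std::printf("GetProcessMemoryInfo failed: %lu\n", GetLastError());
    return 0;
}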

> MSDN, which I have read, which I have told you that YOU
> should read, and I'm not going to

You have not given me even enough of an outline of your
reasoning to see if it could possibly make sense. Most of
the reasoning that you have provided about this specific
matter has been based on false assumptions. Most of your
advice has been superb. On this one point we fail to agree.

> retype it all from memory here. There is no "dogma" here.
> Just obvious conclusions
> anyone who bothered to understand the technology could
> arrive at, just from reading the
> documentation that is available!
> joe
> ****
>>
>>>
>>> ****
>>>>
>>>>> joe
>>>>> ****
>>>>>
>>>>> Joseph M. Newcomer [MVP]
>>>>> email: newcomer(a)flounder.com
>>>>> Web: http://www.flounder.com
>>>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>>>>
>>> Joseph M. Newcomer [MVP]
>>> email: newcomer(a)flounder.com
>>> Web: http://www.flounder.com
>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:%23rQAjVSzKHA.1796(a)TK2MSFTNGP02.phx.gbl...
> Peter Olcott wrote:
>
>>> First of all, you never mentioned you were running this
>>> vapor process of yours in real-time priority status.
>>
>> I kept saying that it had to be as fast as possible. I
>> mentioned a 500 ms maximum response time. How does this
>> not add up to real time?
>
>
> First, 500ms is probably too HIGH for a process running
> under a real time classification. It has to be faster. I
> won't waste time explaining the WHY - go read about it in
> all the books, googleland, etc.
>
> What you don't understand AGAIN, AGAIN, AGAIN and AGAIN is
> what a preemptive operating system is and how it works. To
> say that a process must be active for at least 500ms, well,
> you really are going to put a hurting on the system, which
> will now have to halt all threads from running at all until
> Mister Real Time OCR is finished!
>
> You really have no feel for any of this, and since you won't
> believe people with massive experience in the area, well, you
> will have to just do it yourself and maybe in 10 years you
> will finally figure it out.

It is not that I don't believe or fully comprehend your point
of view; it is that you continue to fail to see the subtle
nuances indicating that your vast wealth of knowledge does not
perfectly apply in this given situation.

>
> --
> HLS
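
For context, a minimal sketch of what "real-time priority
status" amounts to in Win32 terms; whether doing this is wise
is exactly what is being argued above, and the calls shown are
illustrative, not the actual design:

// Sketch: raise the current process to the real-time priority class.
// A compute-bound thread at this level can starve everything else,
// which is the risk being described.
#include <windows.h>
#include <cstdio>

int main()
{
    if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
        std::printf("SetPriorityClass failed: %lu\n", GetLastError());

    // Individual threads can be boosted further within the class.
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL))
        std::printf("SetThreadPriority failed: %lu\n", GetLastError());

    // ... time-critical work here ...
    return 0;
}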


From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:uEffiWSzKHA.1796(a)TK2MSFTNGP02.phx.gbl...
> Peter Olcott wrote:
>
>> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>> message news:ebsQrxRzKHA.2436(a)TK2MSFTNGP04.phx.gbl...
>>> Peter Olcott wrote:
>>>
>>>>> Oh brother. A patent troll then, a patent troll today!
>>>>> It's all vapor!
>>>> You continue to use the term "patent troll" cluelessly.
>>> You don't have a product - you are a PATENT TROLL.
>>>
>>> --
>>> HLS
>>
>> Putting it in caps does not make it any more clueless. Joe
>> already corrected you on this, and Joe has three patents
>> himself.
>
> Yeah, but JOE, like NORMAL PEOPLE, actually produced
> something, did the research to provide theories, etc.
>
> You are a PATENT TROLL.
>
> --
> HLS

The fact that I am here developing detailed plans to
commercialize this invention, specifically as a web server,
directly contradicts the full range of meanings of the term
"patent troll".


From: Liviu on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote...
> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote...
>> Peter Olcott wrote:
>>
>>>> Oh brother. A patent troll then, a patent troll today! It's all
>>>> vapor!
>>>
>>> You continue to use the term "patent troll" cluelessly.
>>
>> You don't have a product - you are a PATENT TROLL.
>
> Putting it in caps does not make it any more clueless.

May I just point out that "patent" is an adjective, too ;-)



From: Liviu on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote...
> "Liviu" <lab2k1(a)gmail.c0m> wrote...
>> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote...
>>> "Liviu" <lab2k1(a)gmail.c0m> wrote...
>>>> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote...
>>>>>
>>>>> You're not going to get anything done because you don't have
>>>>> the capacity to do so. You haven't yet, in what, 2-3 years?
>>>>
>>>> "I filed a provisional patent last August"
>>>> - Peter Olcott, 12/14/2001
>>>>
>>>> (message #584 in thread of 881 at
>>>> http://groups.google.com/group/comp.lang.c++/msg/f8161ee71a584326?hl=en)
>>>
>>> This patent issued in 2005. The task that I am undertaking is very
>>> large.
>>
>> I am not even trying to argue that now. But you are talking _years_
>> in the works, yet demonstrated deep confusion over elementary matters
>> and ignored most of the sound advice volunteered here. Then you say
>> "I do not have the time to learn inessential new things". Nothing
>> personal, of course, and don't know that you even realize it, but
>> that paints you somewhere between utterly arrogant and a complete
>> kook.
>
> Ad hominem? How professional!

Ad hominem? Not at all, I was just stating the obvious. And, believe it
or not, my comment was meant to be helpful. Could have even been,
had you only paid attention. Assuming you had a cause at all, you are
not helping it, nor doing yourself any favors by abusing people who took
your questions at face value, and belittling their advice. But, instead,
you chose to parade the whole "I know better, it's below me to try that,
read that, or debug that, and I am too busy with this uniquely grandiose
patent to bother with such lowlife details as the rest of you must do".

> The confusion is not mine when experts agree that a real-time process
> does not need to be memory resident.

I start to get a feeling that, years from now, you'll be remembering
this thread as "Joe Newcomer vetted my design", much like you say now
"I learned this from an email from Ward Cunningham" or "I spoke to Phil
Zimmerman about his stuff once" when attempting to throw your imagined
weight around.

Bye now,
Liviu