From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:agqmq5hvh7d7e99ekhbrjp1snta9hm630p(a)4ax.com...
> See below...
> On Thu, 25 Mar 2010 00:07:00 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:sjolq5dp8kkg42mubvr9kroebcimut3blo(a)4ax.com...
>>> See below...
>>> On Tue, 23 Mar 2010 15:53:36 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>>
>>>>> Run a 2nd instance and you begin to see faults. You
>>>>> saw that. You proved that. You told us that. It is
>>>>> why this thread got started.
>>>>
>>>>Four instances of 1.5 GB RAM and zero page faults after
>>>>the data is loaded.
>>>>
>>>>You never know: a man with a billion dollars in the
>>>>bank just might panic and sell all of his furniture,
>>>>just in case he loses the billion dollars and won't be
>>>>able to afford to pay his electric bill.
>>> ****
>>> There are people who behave this way. Custodial care
>>> and psychoactive drugs (like lithium-based drugs)
>>> usually help them. SSRIs (selective serotonin reuptake
>>> inhibitors) sometimes help. I don't know what the SSRI
>>> or lithium equivalent is for an app that becomes
>>> depressed.
>>
>>Ah, so then paging out a process or its data when loads
>>of RAM are still available is crazy, right?
> ****
> No, lots of operating systems do it. Or did you miss that
> part of my explanation of the

It has never occurred with my process.

> two-timer linux page-marking method?
>
> You still persist in believing your fantasies.
>
> Essentially, what the OS is doing is the equivalent of
> putting its money into an interest-bearing account! It is
> doing this while maximizing the liquidity of its assets.
> That isn't crazy. NOT doing it is crazy! But as
> operating systems programmers, we

If the most RAM it can possibly need is 1 GB, and it has 4
GB, then it seems crazy to page anything out. How is this
not crazy?

> learned this in the 1970s. We even wrote papers about it.
> And books. I not only read those papers and books, I
> helped write some of them. You will find me acknowledged
> in some of them.

Sure, and back then 64K was loads of RAM. I worked on an
application that calculated the water bills for the City of
Council Bluffs, IA, on a machine with 4K of RAM.

>
> Sadly, you persist in believing what you want to believe
> instead of understanding how real
> systems work.
> joe
>
> ****
>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm
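
The "two-timer linux page-marking method" Joe refers to is in the
family of clock (second-chance) page-replacement algorithms. The
following is only a schematic sketch of that basic idea, not Linux's
actual implementation; all names here are hypothetical:

    // Schematic sketch of clock (second-chance) page replacement,
    // the family of algorithms behind the page-marking method Joe
    // describes.  Illustrative only; all names are hypothetical.
    #include <cstddef>
    #include <vector>

    struct Frame {
        bool referenced;  // set when the page is touched, like a
                          // hardware reference bit
        int  page;        // which page occupies this frame
    };

    // Sweep the clock hand: a frame whose referenced bit is set gets
    // a second chance (the bit is cleared); the first unreferenced
    // frame found is the eviction victim.
    int pick_victim(std::vector<Frame>& frames, std::size_t& hand) {
        for (;;) {
            Frame& f = frames[hand];
            hand = (hand + 1) % frames.size();
            if (f.referenced)
                f.referenced = false;   // second chance
            else
                return f.page;          // evict this page
        }
    }

A page that is touched regularly keeps getting its bit re-set and is
never chosen, which is how a busy working set stays resident while
idle pages get paged out even when free RAM remains.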


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:00rmq5hctllab7ursv8q64pq5eiv8s82ad(a)4ax.com...
> See below...
> On Thu, 25 Mar 2010 00:01:37 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:rdqlq5dv2u8bh308se0td53rk7lqmv0bki(a)4ax.com...
>>> Make sure the addresses are completely independent of
>>> where the vector appears in memory.
>>>
>>> Given you have re-implemented std::vector (presumably as
>>> peter::vector) and you have done
>>> all the good engineering you claim, this shouldn't take
>>> very much time at all. Then you
>>> can use memory-mapped files, and share this massive
>>> footprint across multiple processes,
>>> so although you might have 1.5GB in each process, it is
>>> the SAME 1.5GB because every
>>> process SHARES that same data with every other process.
>>>
>>> Seriously, this is one of the exercises in my Systems
>>> Programming course; we do it
>>> Thursday afternoon.
>>> joe
>>
>>But all that this does is make page faults quicker,
>>right? Any page faults at all can only degrade my
>>performance.
> ***
> Denser than depleted uranium. Fewer page faults, quicker.
> For an essay, please explain in 500 words or less why I
> am right (it only requires THINKING about the problem)
> and why these page faults happen only ONCE even in
> multiprocess usage! Compare to the ReadFile solution.
> Compare and contrast the two approaches. Talk about
> storage allocation bottlenecks.
>
> I'm sorry, but you keep missing the point. Did you think
> your approach has ZERO page faults? You even told us it
> doesn't!

I was making a conservative estimate; actual measurement
indicated zero page faults after all data was loaded, even
after waiting 12 hours.
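
For what it's worth, that count can be read directly: Windows exposes
a cumulative per-process page-fault counter through the documented
GetProcessMemoryInfo() call. A minimal sketch (note the counter
includes soft faults, not just faults that go to disk):

    // Minimal sketch: print the cumulative page-fault count of the
    // current process.  Link with psapi.lib.
    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    int main() {
        PROCESS_MEMORY_COUNTERS pmc;
        pmc.cb = sizeof(pmc);
        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
            printf("Page faults so far: %lu\n", pmc.PageFaultCount);
        return 0;
    }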

> Why do you think a memory-mapped file is going to be
> different? Oh, I forgot, you don't WANT to understand how
> they work, or how paging works!

Not if testing continues to show that paging is not
occurring.

> joe
> ****
>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm
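
In outline, what Joe is proposing: store the data
position-independently (offsets, not pointers), put it in one file,
and let every process map that file so they all share the same
physical pages. A minimal Win32 sketch of the mapping side (the file
name is hypothetical and error handling is omitted):

    // Minimal sketch: map a data file read-only and address it by
    // offset rather than by absolute pointer, so every process that
    // maps the same file shares the same physical pages.
    // "ocr_data.bin" is a hypothetical name; error handling omitted.
    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE file = CreateFileA("ocr_data.bin", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, NULL);
        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY,
                                            0, 0, NULL);
        const unsigned* data = (const unsigned*)
            MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

        // Index by offset: data[i] is valid no matter where the view
        // happens to land, which is why the stored values must not be
        // absolute addresses.
        printf("first element: %u\n", data[0]);

        UnmapViewOfFile(data);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }

The pages are read from disk once, and every process that maps the
file shares them, so four processes cost one copy of the data instead
of four.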


From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:%233s63XCzKHA.928(a)TK2MSFTNGP05.phx.gbl...
> Peter Olcott wrote:
>
>> It continues to work (in practice) the way that I need
>> it to work, and I have never seen it work according to
>> Joe's theories. Whenever there is plenty of excess RAM
>> (such as 4 GB more than anything needs) there are no
>> page faults in my process. I even stress-tested this a
>> lot, with four processes taking 1.5 GB each (of my 8 GB),
>> and still saw zero page faults in any of the four
>> processes.
>
>
> Geez, ok, fine, you got no faults. What we are telling
> you is that it's OK to get faults, so that you can scale
> to even more instances.
>
> But it's really a moot point now, since you can't have
> more than one socket server bound to the same port.
>
> You have no choice but to use a multi-threaded design.
>
> If you don't want to do threads, and you want to keep four
> OCR processors running, then at the very least, as I said
> very EARLY in

How do you manage to make such absurdly false assumptions?

> the post, you need to sit down and get your OCR
> (interface) protocol worked out.
>
> Now your design could be:
>
> WEB SERVER <--> OCR.PROTOCOL.PROXY.DLL <--> X number of
> OCR.EXE
>
> The OCR protocol proxy DLL is now your FIFO-based load
> balancer.
>
> That's the best I see you can do with your Peter-made
> design limitations and Peter-based programming
> limitations.
>
> --
> HLS
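
Hector's port point is easy to demonstrate: the second socket that
tries to bind the same address fails with WSAEADDRINUSE (10048). A
minimal Winsock sketch:

    // Minimal sketch: two sockets cannot bind the same port.  The
    // second bind() fails with WSAEADDRINUSE (10048), which is why a
    // second web server cannot also listen on port 80.
    // Link with ws2_32.lib.
    #include <winsock2.h>
    #include <cstdio>

    int main() {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        sockaddr_in addr = {};
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(80);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        SOCKET first  = socket(AF_INET, SOCK_STREAM, 0);
        SOCKET second = socket(AF_INET, SOCK_STREAM, 0);

        bind(first, (sockaddr*)&addr, sizeof(addr));   // succeeds
        if (bind(second, (sockaddr*)&addr, sizeof(addr)) != 0)
            printf("second bind failed: %d\n", WSAGetLastError());

        closesocket(second);
        closesocket(first);
        WSACleanup();
        return 0;
    }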


From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:OjYZxbCzKHA.2436(a)TK2MSFTNGP04.phx.gbl...
> Peter Olcott wrote:
>
>>> Will MySQL keep a FIFO queue resident?
>>>
>>> WOW! This is unbelievable.
>>>
>>> Do you know what MySQL is? Or even a FIFO queue?
>>
>> Do you know what file caching is?
>
>
> Do you want to compete with me?
>
>> I know that a SQL provider would not be required to
>> always hit disk for a 100K table when multiple GB of RAM
>> are available.
>
>
> During idle times there is a penalty for MySQL to
> re-awaken; your connections will be slow to start at
> first.
>
> For a 100K file? Just use a plain old text file for that,
> or keep it in memory, for God's sake.
>
> If you want SQL, MySQL is overkill for your Peter-made
> design limitations and primitive application. You would
> be better off with single-accessor SQLite3, which by its
> very design is meant for SINGLE access only - that would
> probably fit your crazy FIFO ideas, for which I see you
> totally IGNORED the example pseudo code that you have no
> choice but to use.
>
> --
> HLS

The whole process has to be as fault-tolerant as possible,
and fault tolerance requires some sort of persistent
storage.
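
One way to get that persistence via the SQLite3 route Hector mentions:
keep the FIFO as a table and let SQLite's journal provide the crash
recovery. A minimal sketch using the public sqlite3 C API (the schema
and file name are hypothetical, and error checking is omitted):

    // Minimal sketch: a persistent FIFO as a SQLite3 table.  Rows are
    // dequeued in rowid order inside a transaction, so a crash cannot
    // lose or duplicate a request.  Schema and file name hypothetical.
    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        sqlite3* db = NULL;
        sqlite3_open("requests.db", &db);

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS fifo ("
            "  id      INTEGER PRIMARY KEY AUTOINCREMENT,"
            "  payload TEXT NOT NULL);", NULL, NULL, NULL);

        // Enqueue: append a row; rowid order is FIFO order.
        sqlite3_exec(db, "INSERT INTO fifo (payload) VALUES ('job-1');",
                     NULL, NULL, NULL);

        // Dequeue: read and delete the oldest row in one transaction.
        sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);
        sqlite3_stmt* stmt = NULL;
        sqlite3_prepare_v2(db, "SELECT id, payload FROM fifo "
                               "ORDER BY id LIMIT 1;", -1, &stmt, NULL);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("dequeued: %s\n",
                   (const char*)sqlite3_column_text(stmt, 1));
        sqlite3_finalize(stmt);
        sqlite3_exec(db, "DELETE FROM fifo WHERE id = "
                         "(SELECT MIN(id) FROM fifo);", NULL, NULL, NULL);
        sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);

        sqlite3_close(db);
        return 0;
    }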


From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:ez50qkCzKHA.5040(a)TK2MSFTNGP02.phx.gbl...
> Peter Olcott wrote:
>
>> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>> message
>
>>> It's a moot point now. You can't run multiple socket
>>> servers on the same port 80, Peter. Oh, they will start
>>> up, but you won't get any secondary processes to bind
>>> on port 80 and listen on it. That is still the #1
>>> support question when a new customer purchases our
>>> service products, installs it, runs it, and tests it
>>> with his browser, and it doesn't connect - he has IIS
>>> running in the background. So either he turns it off or
>>> he uses a different port.
>>>
>>> You have no choice but to follow the proper multi-thread
>>> design server models. :)
>
>> That should not be any issue because data processing time
>> will always be a large multiple of HTTP processing time.
>
>
> Huh? What does that have to do with the fact that you
> can't have multiple EXEs running with their own web
> servers?
>
>> Web server listens on port 80, and creates a separate
>> thread (possibly from a thread pool) to handle each HTTP
>> request. It places each request in some sort of FIFO
>> queue. The OCR process has one or more threads that pull
>> requests from this FIFO queue.
>
> But that's a SINGLE PROCESS with 1 OCR.
>
> You cannot run this multiple times, because the OS will
> not allow a 2nd (or later) web server to bind on the same
> port.
>
> You need to go back to separating the WEB SERVER from the
> OCR.EXE process, and now we go back to my earlier post
> where you need to work out the MIDDLEWARE, the OCR
> protocol interface logic.
>
> WEB <--> INTERFACE <--> 4 instances of OCR.EXE
>
> --
> HLS

I did not explicitly state it (in this post), but I have
been planning to divide the WebServer code from the OCR
code for several days now.

Also, I will not have four instances of the OCR code. I
will have a number of OCR threads that scales with the
number of CPU cores.
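
A minimal sketch of that arrangement in standard C++: HTTP threads
push requests into one FIFO queue, and one OCR worker per core pulls
from it. process_ocr() here is a hypothetical stand-in for the real
OCR work:

    // Minimal sketch: a FIFO work queue feeding one OCR worker thread
    // per CPU core.  process_ocr() is a hypothetical stand-in.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    std::queue<std::string> fifo;
    std::mutex              fifo_lock;
    std::condition_variable fifo_ready;

    void enqueue_request(std::string req) {  // called by HTTP threads
        {
            std::lock_guard<std::mutex> g(fifo_lock);
            fifo.push(std::move(req));
        }
        fifo_ready.notify_one();
    }

    void process_ocr(const std::string& req) {  // hypothetical OCR work
        printf("processing %s\n", req.c_str());
    }

    void ocr_worker() {
        for (;;) {
            std::unique_lock<std::mutex> g(fifo_lock);
            fifo_ready.wait(g, [] { return !fifo.empty(); });
            std::string req = std::move(fifo.front());
            fifo.pop();
            g.unlock();
            process_ocr(req);  // OCR runs outside the lock
        }
    }

    int main() {
        unsigned cores = std::thread::hardware_concurrency();
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < (cores ? cores : 1u); ++i)
            workers.emplace_back(ocr_worker);
        enqueue_request("job-1");
        for (auto& w : workers)
            w.join();  // workers run until the process is killed
        return 0;
    }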