From: Joseph M. Newcomer on
See below...
On Thu, 25 Mar 2010 00:07:00 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:sjolq5dp8kkg42mubvr9kroebcimut3blo(a)4ax.com...
>> See below...
>> On Tue, 23 Mar 2010 15:53:36 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>
>>>> Run a 2nd instance and you begin to see faults. You saw
>>>> that. You proved that. You told us that. It is why this
>>>> thread got started.
>>>
>>>Four instances of 1.5 GB RAM and zero page faults after
>>>the
>>>data is loaded.
>>>
>>>You never know, a man with a billion dollars in the bank
>>>just
>>>might panic and sell all of his furniture just in case he
>>>loses the billion dollars and won't be able to afford to
>>>pay
>>>his electric bill.
>> ****
>> There are people who behave this way. Custodial care and
>> psychoactive drugs (like
>> lithium-based drugs) usually help them. SSRIs sometimes
>> help (selective serotonin
>> reuptake inhibitors). I don't know what an SSRI or
>> lithium equivalent is for an app that
>> becomes depressed.
>
>Ah, so then paging out a process or its data when loads of
>RAM is still available is crazy, right?
****
No, lots of operating systems do it. Or did you miss that part of my explanation of the
two-timer Linux page-marking method?
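
The two-timer idea can be sketched as a second-chance ("clock") scan over per-page
reference bits: a page that has not been touched for a full pass becomes a candidate for
eviction even though free RAM remains. What follows is a toy C++ illustration of that
general technique, not the actual Linux code; the Page structure and the names are
invented for the sketch.

#include <cstddef>
#include <vector>

// Toy model of a resident page: hardware sets "referenced" when the page
// is touched; the scanner clears it and evicts pages that stay cold.
struct Page {
    bool referenced = false;   // set by (simulated) hardware on access
    bool resident   = true;    // still backed by a physical frame?
};

// One sweep of a second-chance ("clock") scanner.  A page touched since
// the last sweep gets its bit cleared and survives; a page that stayed
// untouched for a whole sweep is reclaimed, even if free RAM still exists.
std::size_t sweep(std::vector<Page>& pages)
{
    std::size_t evicted = 0;
    for (Page& p : pages) {
        if (!p.resident)
            continue;
        if (p.referenced) {
            p.referenced = false;   // second chance: survive this pass
        } else {
            p.resident = false;     // cold for a full pass: evict
            ++evicted;
        }
    }
    return evicted;
}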

You still persist in believing your fantasies.

Essentially, what the OS is doing is the equivalent of putting its money into an
interest-bearing account! It is doing this while maximizing the liquidity of its assets.
That isn't crazy. NOT doing it is crazy! But as operating systems programmers, we
learned this in the 1970s. We even wrote papers about it. And books. I not only read
those papers and books, I helped write some of them. You will find me acknowledged in
some of them.

Sadly, you persist in believing what you want to believe instead of understanding how real
systems work.
joe

****

Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Oliver Regenfelder" <oliver.regenfelder(a)gmx.at> wrote in
message news:3b7af$4bab373b$547743c7$31073(a)news.inode.at...
> Hello,
>
> Peter Olcott wrote:
>>> Then you
>>> can use memory-mapped files, and share this massive
>>> footprint across multiple processes,
>>> so although you might have 1.5GB in each process, it is
>>> the SAME 1.5GB because every
>>> process SHARES that same data with every other process.
>>>
>>> Seriously, this is one of the exercises in my Systems
>>> Programming course; we do it
>>> Thursday afternoon.
>>> joe
>>
>> But all that this does is make page faults quicker, right?
>> Any page faults at all can only degrade my performance.
>
> It also reduces overall memory usage as stated earlier.

I don't care about that at all. The machine will be a
dedicated server that has the single purpose of running my
process.

>
> Say you have 4 processes (not threads!), then each of the 4
> processes has its own address space. So if you need the
> same
> data in each process the simple thing is
> 1) Generate an array or something
> 2) Load precalculated data from file
>
> The problem is that, this way, each process has its
> independent
> copy of the data and you would use 4*data_size memory.
>
> Now if you use memory mapped files, then each process
> would do
> the following:
> 1) Memory map the file[1].

Why not just have a single read-only std::vector with
multiple threads reading it?

>
> If you do this in each process then the OS will do its
> magic and
> recognize that the same file is mapped into memory 4 times
> and only
> keep one physical copy of the file in memory. Thus you
> only use
> 1*data_size of memory.
>
> [1]: You somewhere mentioned something about OS
> independence so
> have a look at www.boost.org. The interprocess library
> contains
> the memory mapping code. You might also want to consider
> it for
> your threading in case the software will not run only on
> Windows.

If I can eliminate page faults, then a thing that merely
makes page faults quicker is moot, and not worth any
learning curve.

>
>
> Best regards,
>
> Oliver
>


From: Joseph M. Newcomer on
See below...
On Thu, 25 Mar 2010 00:01:37 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:rdqlq5dv2u8bh308se0td53rk7lqmv0bki(a)4ax.com...
>> Make sure the addresses are completely independent of
>> where the vector appears in memory.
>>
>> Given you have re-implemented std::vector (presumably as
>> peter::vector) and you have done
>> all the good engineering you claim, this shouldn't take
>> very much time at all. Then you
>> can use memory-mapped files, and share this massive
>> footprint across multiple processes,
>> so although you might have 1.5GB in each process, it is
>> the SAME 1.5GB because every
>> process SHARES that same data with every other process.
>>
>> Seriously, this is one of the exercises in my Systems
>> Programming course; we do it
>> Thursday afternoon.
>> joe
>
>But all that this does is make page faults quicker, right?
>Any page faults at all can only degrade my performance.
***
Denser than depleted uranium. Fewer page faults, quicker. For an essay, please explain
in 500 words or less why I am right (it only requires THINKING about the problem) and why
these page faults happen only ONCE even in multiprocess usage! Compare to the ReadFile
solution. Compare and contrast the two approaches. Talk about storage allocation
bottlenecks.
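
In rough Win32 terms, the contrast is this: ReadFile copies the file into a private
buffer in every process that calls it, so four processes hold four copies and read the
file four times, while a file mapping lets every process view the same physical pages,
so each page is faulted in from the file at most once system-wide. A minimal sketch of
the mapping side, with error handling trimmed and the file name assumed:

#include <windows.h>
#include <cstddef>

// Map a data file read-only and share its pages with any other process
// that maps the same file.  "data.bin" is a placeholder file name.
const void* MapSharedData(std::size_t& size)
{
    HANDLE file = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING,
                              FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE)
        return nullptr;

    LARGE_INTEGER li = {};
    GetFileSizeEx(file, &li);
    size = static_cast<std::size_t>(li.QuadPart);

    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY,
                                        0, 0, nullptr);
    CloseHandle(file);            // the mapping keeps the file open
    if (mapping == nullptr)
        return nullptr;

    // Pages are faulted in from the file on first touch and then shared;
    // another process mapping the same file reuses the same physical pages.
    const void* view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    CloseHandle(mapping);         // the view keeps the mapping alive
    return view;
}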

I'm sorry, but you keep missing the point. Did you think your approach has ZERO page
faults? You even told us it doesn't! Why do you think a memory-mapped file is going to
be different? Oh, I forgot, you don't WANT to understand how they work, or how paging
works!
joe
****

Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
Gee, you guys are taking all the fun out of this by giving him the answers!

I had hoped he might actually THINK about the problem and discover all these facts on his
own!

But then, he has explicitly stated he wishes to remain ignorant of reality, so I guess it
is OK that you are explaining it to him using short words and simple sentences. Maybe he'll
get it then.
joe

On Thu, 25 Mar 2010 11:13:07 +0100, Oliver Regenfelder <oliver.regenfelder(a)gmx.at> wrote:

>Hello,
>
>Peter Olcott wrote:
>>> Then you
>>> can use memory-mapped files, and share this massive
>>> footprint across multiple processes,
>>> so although you might have 1.5GB in each process, it is
>>> the SAME 1.5GB because every
>>> process SHARES that same data with every other process.
>>>
>>> Seriously, this is one of the exercises in my Systems
>>> Programming course; we do it
>>> Thursday afternoon.
>>> joe
>>
>> But all that this does is make page faults quicker, right?
>> Any page faults at all can only degrade my performance.
>
>It also reduces overall memory usage as stated earlier.
>
>Say you have 4 processes (not threads!), then each of the 4
>processes has its own address space. So if you need the same
>data in each process the simple thing is
>1) Generate an array or something
>2) Load precalculated data from file
>
>The problem is that, this way, each process has its independent
>copy of the data and you would use 4*data_size memory.
>
>Now if you use memory mapped files, then each process would do
>the following:
>1) Memory map the file[1].
>
>If you do this in each process then the OS will do its magic and
>recognize that the same file is mapped into memory 4 times and only
>keep one physical copy of the file in memory. Thus you only use
>1*data_size of memory.
>
>[1]: You somewhere mentioned something about OS independence so
>have a look at www.boost.org. The interprocess library contains
>the memory mapping code. You might also want to consider it for
>your threading in case the software will not run only on Windows.
>
>
>Best regards,
>
>Oliver
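
A minimal sketch of the approach Oliver describes, using the Boost.Interprocess file
mapping his footnote points to. The file name and element type here are assumptions;
every process that runs this maps the same file and therefore shares one physical copy
of the data:

#include <boost/interprocess/file_mapping.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstddef>

namespace bip = boost::interprocess;

int main()
{
    // "data.bin" is a placeholder for the precalculated data file.
    bip::file_mapping  file("data.bin", bip::read_only);
    bip::mapped_region region(file, bip::read_only);

    // Every process that maps the same file shares one physical copy;
    // the pages are faulted in from the file on first touch only.
    const double*     data  = static_cast<const double*>(region.get_address());
    const std::size_t count = region.get_size() / sizeof(double);

    // ... read-only lookups into data[0..count) go here ...
    (void)data;
    (void)count;
    return 0;
}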
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:OCPux1AzKHA.3884(a)TK2MSFTNGP06.phx.gbl...
> Peter Olcott wrote:
>
>>>> Ah so this is the code that you were suggesting?
>>>> I won't be able to implement multi-threading until
>>>> volume
>>>> grows out of what a single core processor can
>>>> accomplish.
>>>> I was simply going to use MySQL for the inter-process
>>>> communication, building and maintaining my FIFO queue.
>>> ****
>>> Well, I can think of worse ways. For example, writing
>>> the data to a floppy disk. Or
>>> punching it to paper tape and asking the user to
>>> re-insert the paper tape. MySQL for
>>> interprocess communication? Get serious!
>>
>> Can you think of any other portable way that this can be
>> done?
>> I would estimate that MySQL would actually keep the FIFO
>> queue resident in RAM cache.
>
> Huh?
>
> Will MySQL keep a FIFO queue resident?
>
> WOW! This is unbelievable.
>
> Do you know what MySQL is? Or even a FIFO queue?

Do you know what file caching is? I know that a SQL provider
would not be required to always hit disk for a 100K table
when multiple GB of RAM are available.

>
> Honestly, do you really understand what MySQL or any SQL
> engine is?
>
> And you are worried about performance?

Not so much on the 1% share of the response time total. In
this case I want a solution that can be ported across any
hardware platform with minimal or zero changes. It also has
to be fault tolerant and have complete error recovery. (Pull
the plug in the middle of processing, and when the machine
is restarted it picks up where it left off with zero or
minimal errors).

>
> Here is you functions for a FIFO:
>
> AddFifo()
> {
> 1 - connect to SQL server
> 2 - prepare sql statement
>
> sql = "insert into table values (that, this, that,
> ...)"
>
> 3 - execute sql
> 4 - close
> }
>
> GetFifo()
> {
> 1 - connect to SQL server
>
> 2 - prepare sql statement, get last one
>
> sql = "select * from table limit 1, -1"
>
> 3 - execute sql
>
> 4 - Fetch the Record, if any
>
> 5 - get id for record
> 6 - prepare DELETE sql statement
>
> sql = "delete from table where id = whatever"
>
> 7 - execute sql
>
> 8 - close
> }
>
> You are crazy out of this world and nuts if you expect
> this to be an efficient "FIFO" concept.
>
>
> --
> HLS
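
For concreteness, Hector's AddFifo/GetFifo outline maps onto the MySQL C API roughly as
follows. The connection parameters, table name, and columns are invented, and the sketch
is meant to show how much machinery each queue operation drags in, not to recommend the
design:

#include <mysql/mysql.h>
#include <cstddef>
#include <cstdio>

// Append one job to the queue table.  "fifo" and its columns are invented
// names; every call pays for a connection, a parse, and a round trip.
// (Real code would escape 'job' before splicing it into the SQL text.)
bool AddFifo(const char* job)
{
    MYSQL* db = mysql_init(nullptr);
    if (!db || !mysql_real_connect(db, "localhost", "user", "password",
                                   "ocr", 0, nullptr, 0))
        return false;

    char sql[512];
    std::snprintf(sql, sizeof sql,
                  "INSERT INTO fifo (job) VALUES ('%s')", job);
    bool ok = (mysql_query(db, sql) == 0);
    mysql_close(db);
    return ok;
}

// Pop the oldest job: select the row with the lowest id, then delete it.
bool GetFifo(char* job, std::size_t job_len)
{
    MYSQL* db = mysql_init(nullptr);
    if (!db || !mysql_real_connect(db, "localhost", "user", "password",
                                   "ocr", 0, nullptr, 0))
        return false;

    bool ok = false;
    if (mysql_query(db, "SELECT id, job FROM fifo ORDER BY id LIMIT 1") == 0) {
        MYSQL_RES* res = mysql_store_result(db);
        MYSQL_ROW  row = res ? mysql_fetch_row(res) : nullptr;
        if (row) {
            std::snprintf(job, job_len, "%s", row[1]);
            char sql[128];
            std::snprintf(sql, sizeof sql,
                          "DELETE FROM fifo WHERE id = %s", row[0]);
            ok = (mysql_query(db, sql) == 0);
        }
        if (res)
            mysql_free_result(res);
    }
    mysql_close(db);
    return ok;
}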