From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:r3pmq51rukj4j28ed9es0ob84ejblp7bpb(a)4ax.com...
> See below...
> On Wed, 24 Mar 2010 23:10:47 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:dlnlq5pok9nbsc35uaedbot0m18btno5ti(a)4ax.com...
>>>A multithreaded FIFO queue would make more sense; less
>>>chance of priority inversion
>>> effects. An I/O Completion Port makes a wonderful
>>> multithreaded FIFO queue with
>>> practically no effort!
>>> joe
>>
>>How ?
> As I told you a few days ago, read my essay on the use of
> I/O Completion Ports on my MVP
> Tips site.

No you didn't. As you can see from what you said immediately
above, you did not tell me where to find them.

>
> Of course, if I told you that GetQueuedCompletionStatus is
> the dequeue operation,
> PostQueuedCompletionStatus the enqueue operation, it
> should be completely obvious how to
> do it. Or you could have gone to I/O Completion Ports in
> the MSDN documentation and read
> about the API calls and derived this information because
> it is so trivial to see it.
> Here's an enqueue operation, and a dequeue operation.
> What more do you need to see?
> joe

Fault tolerance.

> ****
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm
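
For reference, a minimal sketch of the enqueue/dequeue pattern Joe
describes above, using an I/O completion port as a thread-safe FIFO
queue. This is plain Win32 C++; the WorkItem struct and the job number
are made-up illustrations, not anything from the thread:

    #include <windows.h>
    #include <cstdio>

    struct WorkItem {        // hypothetical payload carried through the queue
        int jobId;
    };

    int main()
    {
        // A completion port created without a file handle is simply a
        // thread-safe FIFO queue.
        HANDLE queue = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
        if (queue == NULL) return 1;

        // Enqueue: smuggle a pointer to the work item through the
        // OVERLAPPED* parameter.
        WorkItem *item = new WorkItem;
        item->jobId = 42;
        PostQueuedCompletionStatus(queue, 0, 0,
                                   reinterpret_cast<OVERLAPPED*>(item));

        // Dequeue: blocks until something has been posted.  In a server
        // this loop would run in each worker thread of a pool.
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED *ov = NULL;
        if (GetQueuedCompletionStatus(queue, &bytes, &key, &ov, INFINITE)) {
            WorkItem *work = reinterpret_cast<WorkItem*>(ov);
            printf("dequeued job %d\n", work->jobId);
            delete work;
        }

        CloseHandle(queue);
        return 0;
    }

Running the GetQueuedCompletionStatus loop in several worker threads is
what makes the port behave as a multithreaded FIFO queue.
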


From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:%232%23$yCCzKHA.3264(a)TK2MSFTNGP06.phx.gbl...
> Joseph M. Newcomer wrote:
>
>
>> Or they were testing the limits of your credulity.
>> Reminds me of the Calvin & Hobbes
>> cartoon: The family is in the car. Calvin: "Dad, how do
>> they determine the weight limit
>> of a bridge?" Dad: "They run bigger and bigger trucks
>> over it until it collapses, then
>> they rebuild it exactly and post the weight limit."
>
>
> I like that one. :)
>
>> Can you explain why you would accept, without question,
>> such a patently absurd suggestion
>> from one newsgroup while ignoring all the good advice
>> you've been getting in this one?
>
> Well, he probably didn't ask the right question or they
> haven't had the time yet to pry it out of him.
>
> --
> HLS

How else can fault tolerance be provided without persistent
storage?


From: Joseph M. Newcomer on
See below...
On Thu, 25 Mar 2010 08:57:59 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Ismo Salonen" <ismo(a)codeit.fi> wrote in message
>news:OjTbBi$yKHA.5332(a)TK2MSFTNGP02.phx.gbl...
>> Peter Olcott wrote:
>>> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>> message
>>> news:rdqlq5dv2u8bh308se0td53rk7lqmv0bki(a)4ax.com...
>>>> Make sure the addresses are completely independent of
>>>> where the vector appears in memory.
>>>>
>>>> Given you have re-implemented std::vector (presumably as
>>>> peter::vector) and you have done
>>>> all the good engineering you claim, this shouldn't take
>>>> very much time at all. Then you
>>>> can use memory-mapped files, and share this massive
>>>> footprint across multiple processes,
>>>> so although you might have 1.5GB in each process, it is
>>>> the SAME 1.5GB because every
>>>> process SHARES that same data with every other process.
>>>>
>>>> Seriously, this is one of the exercises in my Systems
>>>> Programming course; we do it
>>>> Thursday afternoon.
>>>> joe
>>>
>>> But all that this does is make page faults quicker, right?
>>> Any page faults at all can only degrade my performance.
>>>
>>
>> just my two cents :
>>
>> Memory-mapped files are paged into process memory when
>> a page is referenced (read or written) for the first time.
>> This is the page-fault mechanism, and it is the most
>> efficient way of accessing data. Reading the data with
>> ReadFile() or others in that family is slower (well,
>> ReadFile directly into your own buffers would give quite
>> similar performance, because it uses the same paging
>> mechanism). The page-fault mechanism must be quite
>> optimized, as it is the basis for modern operating systems
>> (Windows, Linux, etc.). The inner workings are quite
>> delicate and I highly suspect you can in no way outperform
>> it.
>>
>> It seems that you have a plausible product idea but you
>> lack the knowledge of how the operating system works. Have
>> you read the articles others have
>
>One thing that I know is that a process with page faults is
>slower than a process without page faults, everything else
>being equal.
****
Rubbish. You told us you had something like 27,000 page faults while you were loading
your data! And you have completely missed the point here, which is that the page faults are
AMORTIZED over ALL processes! So if you have 8 processes, it is like each page fault
counts as 1/8 of a page fault. And they only happen during the loading phase, which
doesn't change anything!
****
>
>It continues to work (in practice) the way that I need it to
>work, and I have never seen it work according to Joe's
>theories. Whenever there is plenty of excess RAM (such as 4
>GB more than anything needs) there are no page-faults in my
>process. I even stressed this out a lot and had four
>processes taking 1.5 GB each (of my 8 GB) and still zero
>page faults in any of the four processes.
****
I don't have theories. I am talking about practice. You talk about having "no" page
faults, but do you know if those pages have been written to the paging file? No, you
don't. And with an MMF, you will quickly converge on the same zero page faults; by what
mystical method do you infer that these page faults are repeated each time the page is
accessed, or are duplicated in each process? So how does this differ from your current
implementation? The timing of the page faults is different, but you will have at most
as many, and more likely fewer, using an MMF. But that's assuming you are willing to learn
something new, which you've stated explicitly is not on your agenda. "Make everything
better, but don't make me learn something new" I think summarizes pretty accurately what
you've told us.
joe

****
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
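
Whether a process really takes zero page faults is something that can be
measured rather than argued about. A minimal sketch, assuming the Win32
psapi library is available, of reading the current process's fault count
(the printf labels are illustrative):

    #include <windows.h>
    #include <psapi.h>           // link with psapi.lib
    #include <cstdio>

    int main()
    {
        PROCESS_MEMORY_COUNTERS pmc;
        pmc.cb = sizeof(pmc);
        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
            // PageFaultCount includes soft faults (page already in RAM)
            // as well as hard faults that actually go to disk.
            printf("page faults so far: %lu\n", pmc.PageFaultCount);
            printf("working set:        %lu KB\n",
                   (unsigned long)(pmc.WorkingSetSize / 1024));
        }
        return 0;
    }

Comparing this counter before and after the loading phase, and again
during steady-state processing, would show where the faults actually
occur.
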
From: Joseph M. Newcomer on
See below...
On Thu, 25 Mar 2010 09:02:19 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Oliver Regenfelder" <oliver.regenfelder(a)gmx.at> wrote in
>message news:3b7af$4bab373b$547743c7$31073(a)news.inode.at...
>> Hello,
>>
>> Peter Olcott wrote:
>>>> Then you
>>>> can use memory-mapped files, and share this massive
>>>> footprint across multiple processes,
>>>> so although you might have 1.5GB in each process, it is
>>>> the SAME 1.5GB because every
>>>> process SHARES that same data with every other process.
>>>>
>>>> Seriously, this is one of the exercises in my Systems
>>>> Programming course; we do it
>>>> Thursday afternoon.
>>>> joe
>>>
>>> But all that this does is make page faults quicker, right?
>>> Any page faults at all can only degrade my performance.
>>
>> It also reduces overall memory usage as stated earlier.
>
>I don't care about that at all. The machine will be a
>dedicated server that has the single purpose of running my
>process.
****
And what did you miss about "scalability"? Oh, that's right, you will just throw more
hardware at it. And rely on your ISP to provide load-balancing. Have you talked to them
about how they do load-balancing when you have multiple servers?
joe

****
>
>>
>> Say you have 4 processes (not threads!) then each of the 4
>> processes has its own address space. So if you need the
>> same
>> data in each process the simple thing is
>> 1) Generate an array or something
>> 2) Load precalculated data from file
>>
>> The problem is that this way, each process has its own
>> independent
>> copy of the data and you would use 4*data_size memory.
>>
>> Now if you use memory mapped files, then each process
>> would do
>> the following:
>> 1) Memory map the file[1].
>
>Why not just have a single read-only std::vector with
>multiple threads reading it?
>
>>
>> If you do this in each process then the OS will do its
>> magic and
>> recognize that the same file is mapped into memory 4 times
>> and only
>> keep one physical copy of the file in memory. Thus you
>> only use
>> 1*data_size of memory.
>>
>> [1]: You somewhere mentioned something about OS
>> independence so
>> have a look at www.boost.org. The interprocess library
>> contains
>> the memory mapping code. You might also want to consider
>> it for
>> your threading in case the software is not meant to run
>> only on Windows.
>
>If I can eliminate page faults then a thing that makes page
>faults quick is still moot, and not worth any learning
>curve.
>
>>
>>
>> Best regards,
>>
>> Oliver
>>
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
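
A minimal sketch of the sharing Oliver describes above: each process maps
the same precomputed data file read-only, and the OS keeps a single
physical copy of its pages. This is plain Win32 C++; the file name
"dfa.bin" is only an illustration:

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // "dfa.bin" stands in for whatever file holds the precomputed data.
        HANDLE file = CreateFileA("dfa.bin", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                                  NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0,
                                            NULL);
        if (mapping == NULL) return 1;

        // Every process that maps the same file this way shares the same
        // physical pages; the file itself is the backing store, so the
        // data never has to be written to the paging file.
        const unsigned char *data = (const unsigned char *)
            MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (data == NULL) return 1;

        printf("first byte of the shared data: %u\n", data[0]);

        UnmapViewOfFile(data);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }

boost::interprocess's file_mapping and mapped_region classes wrap the
same idea portably, which is the OS-independence point made in the
footnote above.
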
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:t1qmq51qekgajv3is9tod0vl42ggrjp9r8(a)4ax.com...
> See below...
> On Wed, 24 Mar 2010 23:59:32 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:1bqlq5lss7pnt4lo669n2hg6peeuj3qlfm(a)4ax.com...
>>> OK if you have designed your software "correctly", put
>>> that DFA into a binary file which
>>> can be mapped into any process at any arbitrary address
>>> and your software continues to
>>> work correctly. If you can't make this trivial change,
>>> then you have some serious design
>>> issues to address.
>>> joe
>>>
>>
>>It is the unnecessary learning curve
> ***
> "I'm clueless, and I want to stay that way!" That has
> become evident in your responses.
> ***
>>that I don't want to
>>deal with. If one does not carefully pare the unnecessary
>>learning curves from the tree of possibilities, one would
>>spend all of one's time learning about doing things and
>>never actually get around to doing them.
> ***
> Except, as I keep pointing out, it isn't an UNNECESSARY
> learning curve; it is essential to
> learn to do these things to get the performance you want!
> "I don't need to learn to read,
> my grandpa, he was a successful businessman and he was
> functionally illiterate all his
> life, and I should be able to succeed without needing to
> learn to read!". (I actually
> know of someone, a family member, who ran a business but
> couldn't read, but it was two
> generations back. It doesn't work today).
>
> You think all this doesn't matter. You want 500ms response
> time, there is a PRICE TO PAY
> to get that, and it is in learning how to exploit all the
> features of the OS that can
> reduce your overheads to manageable levels. You want the
> performance, you pay the price!
> TANSTAAFL (There Ain't No Such Thing As A Free Lunch).
> ****
>>
>>That said HTTP the Definitive guide is providing most of
>>the
>>info that I need. Things such as pipelined connections
>>will
>>help me to get as close as possible to my 500 ms goal.
> ****
> OH. I thought we told you some of this weeks ago, but now
> that you are doing something
> about the learning curve and found something that confirms
> everything we told you, you
> suddenly tell us we were right all along! (And didn't you
> know that ALL TCP/IP
> connections are pipelined? Didn't I tell you about
> Sliding Window Protocol? I'm SURE I

This book seems to say that there are also HTTP aspects to
this that must be addressed.
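
One such HTTP aspect is persistent connections with pipelining: the
client sends several requests back to back on one TCP connection without
waiting for each response, and the server returns the responses in the
same order. A hypothetical exchange (the host and paths are made up):

    GET /ocr/job/1 HTTP/1.1
    Host: www.example.com

    GET /ocr/job/2 HTTP/1.1
    Host: www.example.com
    Connection: close

Reusing the connection avoids a TCP handshake per request, and pipelining
avoids waiting a full round trip between requests, both of which matter
when chasing a 500 ms budget.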

Maybe it would help if I stated my ultimate goal a little
better. I am trying to derive a new business model for
commercializing software. I want to make it so that people
can rent software for a tiny cost per use. I want the
software to execute, as much as possible, with the
characteristics of software that is installed on the user's
machine. This tends to require a much faster response time
than is typical of web applications.

I knew long before you told me that there will be factors
beyond my control that will affect this. Within these
constraints I want to provide the best response time
possible. This business model may eventually require a web
server in every major city. I also know that even this is
not enough to make response time completely predictable.

> said that somewhere....and if you'd followed up and read
> about it, you would have known it
> was a pipelining protocol!)
> joe
> ****
>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm