From: Peter Olcott on

"Pete Delgado" <Peter.Delgado(a)> wrote in message
> "Peter Olcott" <NoSpam(a)> wrote in message
> news:9ZSdnc_zBLvdfzbWnZ2dnUVZ_qadnZ2d(a)
>> But I have said many times now that I will not scale by
>> processes I will scale by threads, and these threads all
>> share that same data so the benefit that you keep pushing
>> about memory mapped files continues to be moot. I may
>> actually scale by servers instead of processes or
>> threads, because five single core servers cost half as
>> much as one quad core server.
> The laughable part of all this is that you are completely
> serious! So, given your obvious naivete about development
> you now suggest that you can implement your system using
> multiple servers all the while meeting or exceeding your
> design and performance goals?
> All I can say is good luck...
> -Pete
It is not naiveté. I know that the greater the physical
proximity of a server to a customer the fewer the hops that
the customer's request will make to this server. Is this not
correct? If I geographically disperse the servers such that
at least one server is in much closer physical proximity to
a specific set of customers, then these customers will most
likely enjoy faster response time, right?

From: Peter Olcott on

"Hector Santos" <sant9442(a)> wrote in message
> Peter Olcott wrote:
>> You and Joe did give me some excellent help, and I really
>> appreciate that. The idea to base my web application on
>> HTTP was the best. I do not appreciate the rudeness, and
>> denigration.
> We don't appreciate you telling us to prove something that
> is pretty much common knowledge about Windows programming,
> and furthermore, we don't appreciate when you still don't
> believe us and we advise you explore all yourself even to
> the extent of providing simulation code and you still
> hassle us about it without even exploring it. When you
> finally did some partial testing, you had kiddie BUGS
> and still came back to us to help you figure it out.
> Then you tried to front us with some fictitious Specialty
> Group that has all the answers, and LIED about them
> agreeing with you. When asked to tell us what group was
> this, silence.

The group is comp.programming.threads
along with two linux groups and one unix group.

Outlook Express is losing some of the postings. I had to
reply to a reply to Pete's message yesterday because Pete's
original message never made it to Outlook Express.

> And even if you still didn't believe us, it isn't like the
> world is void of this information. This is all out there
> in googleland and you were given countless links, all
> ignored. But its all there, yet you still refuse to
> believe anything.

I know for a fact that belief and disbelief are both errors
of reasoning known as fallacies. Only comprehension of
reasoning is a reliable means of discerning truth from
falsehood. I apologize for not showing enough deference for
the excellent free advice that you are providing. The advice
that I could verify with reasoning was verifiably superb.

> And one final thing: in the end you said you did know
> something about all this, but forgot because you never
> studied the 2nd half of some book for a canceled exam on
> operating systems. Talk about Virtual Memory!
> Rude? Your behavior is nothing short of being rude.
> --

From: Peter Olcott on

"Oliver Regenfelder" <oliver.regenfelder(a)> wrote in
message news:cf555$4bac7407$547743c7$23143(a)
> Hello,
> Peter Olcott wrote:
>> I am trying to derive a new business model for
>> commercializing software. I want to make it so that
>> people can rent software for a tiny cost per use.
> I wouldn't call that a _new_ business model in itself.
> There is all that Google stuff that comes for free, there
> is online Photoshop, and I think some time ago there were
> rumors about an online office suite from Microsoft.
> But maybe your approach is different.
> Best regards,
> Oliver

The big difference that I am attempting to provide is an
average response time in the ballpark of what one would get
from an app that is installed directly on the local machine.
It looks like this is reasonably feasible in some cases.

From: Peter Olcott on

"Oliver Regenfelder" <oliver.regenfelder(a)> wrote in
message news:c91fc$4bac75a0$547743c7$23272(a)
> Hello,
> Peter Olcott wrote:
>> If the most RAM it can possibly need is 1 GB, and it has
>> 4 GB then it seems crazy to page anything out. How is
>> this not crazy?
> 1) The OS never knows how much RAM an application will
> possibly need.
> 2) It may page out the data and still keep the pages in
> RAM. This way, when the moment comes that the pages have
> to be paged out, they already are. Essentially you are
> doing page-outs in your idle time so that you don't have
> to do them later. And when you are using good heuristics,
> this saves time.
> Best regards,
> Oliver

As several people have now confirmed, there is a way to lock
pages into memory so that they won't be swapped out.

From: Peter Olcott on

"Oliver Regenfelder" <oliver.regenfelder(a)> wrote in
message news:e148c$4bac7685$547743c7$23272(a)
> Hello,
> Peter Olcott wrote:
>> I don't know. It does not yet seem worth the learning
>> curve cost. The process is intended to be always running
>> and loaded with data.
> I would say using memory-mapped files with e.g. Boost is
> not that steep a learning curve.
> Best regards,
> Oliver

If we are talking on the order of one day to become
proficient with this, and it substantially reduces my data
load times, it may be worth looking into. I remain convinced
that it is useless for optimizing page faults in my app,
because preventing page faults is a far better idea than
making them fast. Locking pages into memory (or whatever the
precise terminology is) will be my approach.