From: Hector Santos on
Joseph M. Newcomer wrote:

> My servers are all on UPS units and get notification if power is going to fail in the near
> future; robust code handles WM_POWERBROADCAST messages.

Exactly Joe, but he has to program for that! :)

So unless he finds a utility that captures the signal and
sends keystrokes to close his vaporware application, it
ain't going to happen.

From: Hector Santos on
Pete Delgado wrote:

> "Peter Olcott" <NoSpam(a)> wrote in message

>> It's all far far less convoluted in the OS that I will be using.
> I have to ask this... I really do...
> If you plan on developing on some magical OS where none of this is a problem
> (or is "far less convoluted"), why are you posting your questions in a
> Microsoft newsgroup?

Because the *nix wienies already blew him off? :)

From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)> wrote in
message news:ccqvq5tdav2raahjk72sgiab0a4ttbrjjh(a)
> See below...
> On Thu, 25 Mar 2010 09:18:32 -0500, "Peter Olcott"
> <NoSpam(a)> wrote:
>>"Hector Santos" <sant9442(a)> wrote in
>>> Peter Olcott wrote:
>>>> "Joseph M. Newcomer" <newcomer(a)> wrote in
>>>>> A multithreaded FIFO queue would make more sense; less
>>>>> chance of priority inversion
>>>>> effects. An I/O Completion Port makes a wonderful
>>>>> multithreaded FIFO queue with
>>>>> practically no effort!
>>>>> joe
>>>> How ?
>>> Well, there will be programming effort with IOCP. His
>>> main point is that you don't understand enough
>>> programming ideas under Windows; you are behaving here
>>> like a new kid who just discovered the toaster oven and
>>> won't follow other cooks or read a cookbook on the many
>>> ways to use the toaster oven.
>>> Look, your SOCKET is a FIFO. It's inherent. So when
>>> connects come in, they are queued, designed for
>>> multi-threaded worker pool concepts to handle the
>>> incoming requests.
>>> So what do you do?
>>> You don't use threads, and you slow it down by adding
>>> ANOTHER FIFO queue to run the OCR per request one at a
>>> time.
>>> But now you think: I got a MONSTER machine. It's easier
>>> for me to create an EXE with MONGOOSE+OCR and run it 4
>>> times, assuming it will be 1 EXE per core of the
>>> multi-core processor.
>>No, you have this wrong. For proving that multiple threads
>>would work, multiple processes sufficed. In practice I will
>>use multiple threads.
> ****
> No, it doesn't. And if it is not obvious to you why this
> is true, you are being more
> obtuse than any human being has the right to be. Or, as
> I've said before, denser than
> depleted uranium. Perhaps you do not understand the
> difference between thread scheduling
> within a process and scheduling threads when the process
> context changes. Perhaps you do
> not understand how computers work, and that's why you are
> asking questions, and those of
> us who KNOW what is going on are desperately trying to
> explain to you that your experiment is
> so deeply flawed as to be irrelevant. WHAT PART OF this
> don't you understand: you are comparing incomparable
> experiments, and your resistance to running the correct
> experiment is truly impressive for someone who has already
> proven himself to be totally
> clueless. I even tried to explain that the experiments
> are incomparable, but gee, that's
> based on my understanding of the operating system and the
> x86 chip, and what the context
> swap code is known to do when swapping process contexts
> (do you know what the implications
> of "TLB Flush" are on your performance?) Maybe my
> tendency to read documentation biases
> me, and you have no such biases.
> joe

I know the difference between threads and a process, and I
see no reason why threads would not work if processes do
work, the converse not necessarily being true.

What difference is there between threads and processes that
is the basis for even the possibility that threads may not
work where processes do work?
Please do not cite a laundry list of the differences between
threads and processes; please cite at least one specific
difference, along with reasoning to explain why threads
might fail where processes worked correctly.

Here are two crucial assumptions behind why I think that
threads must work if processes do work:
(1) Threads can share memory with each other, most likely
with less overhead than processes.
(2) Threads can be scheduled on multiple processor cores,
just like processes.

> ****
>>> Now you screwed up MONGOOSE: now you have 4 competing
>>> SOCKET servers on PORT 80, and that's not possible on a
>>> single machine. You can only have 1 HTTP server on PORT 80.
>>> So you automatically BLOCKED three of the EXEs running in
>>> memory; they will NEVER see a socket connection.
>>> But you say ok, "then I have no choice but to use FOUR
>>> machines or even TWO for now."
>>> Now you need a LOAD BALANCER or just use simple round
>>> robin logic in DNS records by adding two A records into
>>> your ZONE for the same domain.
>>> // machine 1
>>> // machine 2
>>> Under DNS, it will round-robin the IPs to connect to. No
>>> real concept of load balancing.
>>> The only load balancing concept you have is a FIFO at
>>> the
>>> OCR part.
>>> So it all goes back to the Toaster Oven. You don't know
>>> how to use the simple toaster oven and you don't want to
>>> listen to anyone telling you that what YOU are doing is
>>> freaking crazy, and you don't even realize that YOU can't
>>> do what you want anyway with an EXE per request when a
>>> socket is involved, without going into the need to add
>>> LOAD BALANCING for multiple machines.
>>> Really Peter, it skipped my mind that YOU can't run
>>> multiple EXEs when you have a socket service embedded
>>> into them. But I have said it in so many other ways: you
>>> are trying to add a multi-threaded web server into 1
>>> single-threaded OCR process, when you should be adding a
>>> multi-threaded OCR process into a multi-threaded service.
>>> It just dawned on me that YOU cannot run multiple
>>> MONGOOSE+OCR EXE processes - mongoose is restricted to a
>>> SINGLE PORT and you cannot have multiple services on
>>> the same port on the same machine.
>>> --
>>> HLS
>>I will have some sort of web server that communicates with
>>my OCR process using some sort of inter-process
>>communication. The web server will have multiple threads
>>(probably pthreads) and the OCR process will have one
>>thread per CPU core, probably also pthreads. These threads
>>will share a single block of memory, one way or another.
> ****
> Ah, the "handwave" school of design. "We can solve any
> problem by postulating the
> existence of some magical mechanism that solves that
> problem". Sorry, I've been a
> software manager, and if one of my people came to me with
> a proposal as p-baked as yours
> (for, I'm beginning to realize, p << 0.05, so more than an
> order of magnitude below
> "half-baked") I would send them back with instructions to
> design. You just plan to have
> "some sort" of web server with "some sort" of interprocess
> communication. In what fantasy
> world does this constitute "design"? It isn't even CLOSE
> to what is required in this
> profession. And I've already told you that your concept
> of a single-threaded OCR process
> is a losing idea, but you refuse to make any effort to
> gather any data to either support
> or refute this idea, just insist that it must be correct.
> I suppose it constitutes "some
> sort" of performance measurement. I have NO IDEA if a
> multithreaded OCR process will work
> better, but the difference is that I WOULD GET THE DATA TO
> SUPPORT THE PROPOSAL, and you plan to just blunder ahead without any
> data and build something, using
> "some sort" of design methodology.
> Maybe it's because I've already done this with my people
> that I know what the standards are. I
> had to go to senior management, and there was NO WAY I
> would have done this if someone had said
> "we're going to have 'some sort' of solution". Ultimately
> I had to do the performance
> analysis myself because the people who worked for me were
> not qualified to do it, and kept
> coming up with nonsense numbers that had no credibility (I
> ultimately proved all the
> experiments flawed, and why, because they kept insisting I
> was asking for something that
> could not be done; I did it myself in under four hours,
> proving that (a) it could be done
> and (b) wasn't even very hard, which is what I'd been
> saying for a week. And to do it in 4
> hours, I was doing it on an operating system I'd never
> programmed before, using APIs I
> only suspected existed, in a language I had not used, and
> a development environment I was
> unfamiliar with)
> You are going to have to come up with a better proposal
> than one that uses the words "some
> sort" in it.
> joe

All that I was saying is that my mind is still open to
alternatives to the ones that I originally suggested.

> Joseph M. Newcomer [MVP]
> email: newcomer(a)
> Web:
> MVP Tips:

From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)> wrote in
message news:oesvq55safqsrg8jih8peiaah4uiqt0qi3(a)
> Well, I know the answer, and I think you are behaving in
> yet another clueless fashion. And
> in my earlier reply I told you why. You want "fault
> tolerance" without even understanding
> what that means, and choosing an implementation whose
> fundamental approach to fault

The only fault tolerance that I want or need can be provided
very simply. The original specification of fault tolerance
that I provided was much more fault tolerance than would be
cost-effective. If I really still wanted this level of fault
tolerance then many of your comments on this subject would
not be moot. Since this degree of fault tolerance has been
determined never to be cost-effective, any details of
providing this level of fault tolerance become moot.

The other Pete had greater insights into my own needs than I
did myself. I will paraphrase what he said. I only need to
avoid losing transactions. When a client makes a request, I
only need to avoid losing this request until it is
completed. Any faults in-between can be restarted from the

The key (not all the details, just the essential basis for
making it work) to providing this level of fault tolerance
is to have the webserver acknowledge a web request only
after it has been committed to persistent storage.

The only remaining essential element (not every little
detail just the essence) is providing a way to keep track of
web requests to make sure that they make it to completed
status in a reasonable amount of time. A timeout threshold
and a generated exception report can provide feedback here.

Please make any responses to the above statement within the
context of the newly defined, much narrower scope of fault
tolerance.
> tolerance (synchronous commit to disk) contradicts your
> requirement that there be no disk
> access. in fact, you explicitly said you could not
> imagine why MySQL could require
> hitting the disk if the database were small. Well, I am
> not responsible for your lack of
> imagination, nor for your failure to understand reality.
> I cannot figure out how you get
> a concept like "disk access is unacceptable" to coexist
> with "fault tolerance is
> essential" and "MySQL is the only possible interprocess
> communication system". Those of
> us who have worked with databases know that the fault
> tolerance comes at a cost.
> joe
> On Thu, 25 Mar 2010 14:22:30 -0500, "Peter Olcott"
> <NoSpam(a)> wrote:
>>"Hector Santos" <sant9442(a)> wrote in
>>> Peter Olcott wrote:
>>>> The whole process has to be as fault tolerant as
>>>> possible, and fault tolerance requires some sort of
>>>> persistent storage.
>>> There you go again: you read a new buzz word and now you
>>> are fixated on it, further adding to you NEVER finishing
>>> this vaporware product and project anyway.
>>> --
>>> HLS
>>That is the sort of response that I would expect from
>>someone who did not know the answer.

From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)> wrote in
message news:4ftvq5l0qdfmrfmd4a351cc0lt98er8p56(a)
> See below...
> On Fri, 26 Mar 2010 09:55:54 -0500, "Peter Olcott"
> <NoSpam(a)> wrote:
>>"Oliver Regenfelder" <oliver.regenfelder(a)> wrote in
>>> Hello,
>>> Peter Olcott wrote:
>>>> I don't know. It does not yet seem worth the learning
>>>> curve cost. The process is intended to be always
>>>> running
>>>> and loaded with data.
>>> I would say using memory mapped files with e.g. boost is
>>> not
>>> that steep a learning curve.
>>> Best regards,
>>> Oliver
>>If we are talking on the order of one day to become an
>>expert on this, and it substantially reduces my data load
>>times, it may be worth looking into. I am still convinced
>>that it is totally useless for optimizing page faults for
>>my app because I am still totally convinced that preventing
>>faults is a far better idea than making them fast. Locking
>>pages into memory will be my approach, or whatever the
>>precise terminology is.
> ***
> And your ****opinion*** about what is going to happen to
> your page fault budget is
> meaningless noise because you have NO DATA to tell you
> ANYTHING! You have ASSUMED that a
> MMF is going to "increase" your page faults, with NO
> EVIDENCE to support this ridiculous

I did not say anything at all like this. Here is what I
said:
(1) I need zero page faults
(2) MMF does not provide zero page faults
(3) Locking memory does provide zero page faults
(4) Therefore I need memory locking and not MMF

> position. At the very worst, you will get THE SAME NUMBER
> of page faults, and in the best
> case you will have FEWER page faults. But go ahead,
> assume your fantasy is correct, and
> lose out on optimizations you could have. You are fixated
> on erroneous premises, which
> the rest of us know are erroneous, and you refuse to learn
> anything that might prove your
> fantasy is flawed.
> joe
> ****