From: Peter Olcott on

"Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote in message
news:uB4xUTEzKHA.5936(a)TK2MSFTNGP04.phx.gbl...
>
> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
> message news:dlnlq5pok9nbsc35uaedbot0m18btno5ti(a)4ax.com...
>>A multithreaded FIFO queue would make more sense; less
>>chance of priority inversion
>> effects. An I/O Completion Port makes a wonderful
>> multithreaded FIFO queue with
>> practically no effort!
>> joe
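
Joe's IOCP-as-queue idea can be sketched portably. The real Win32 calls are CreateIoCompletionPort, PostQueuedCompletionStatus, and GetQueuedCompletionStatus; the sketch below is only an analogy, with Python's thread-safe queue standing in for the completion port and the work items and thread count chosen for illustration:

```python
import queue
import threading

# Portable analogy for the IOCP pattern: worker threads block on a
# shared FIFO queue, and each work item is handed to whichever worker
# wakes up first -- no explicit scheduling needed.
work = queue.Queue()          # plays the role of the completion port
results = queue.Queue()

def worker():
    while True:
        item = work.get()     # like GetQueuedCompletionStatus()
        if item is None:      # sentinel: shut this worker down
            break
        results.put(item * 2) # stand-in for "process the request"

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for i in range(10):
    work.put(i)               # like PostQueuedCompletionStatus()
for _ in threads:
    work.put(None)            # one sentinel per worker
for t in threads:
    t.join()

collected = sorted(results.get() for _ in range(10))
print(collected)              # -> [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

The FIFO ordering plus a fixed worker pool is what limits the priority-inversion effects Joe mentions: no request can be starved behind a lower-priority one, because everything drains in arrival order.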
>
> But Joe, that would require him to read a 1000+ page book
> like Jeffrey Richter's "Programming Server-Side
> Applications for Windows 2000" to understand what an IOCP
> is and how to use one effectively. Mr. Olcott has already
> stated that he doesn't have the time to do that!!! ;-)
>
> BTW: This whole thread reminds me of another couple of
> great books: Code Complete and Rapid Development by
> Steve McConnell. Based upon his writings in this thread,
> Mr. Olcott is making some classic mistakes. Of course,
> again, both books I believe are 1,000+ pages!
>
> -Pete
>

I always design completely before writing any code. For
systems-level software (where the specification of what is
to be achieved can be complete in advance) this works very
well.

For applications-level software, where the user changes
their mind thousands of times throughout the process, this
would not work.


From: Hector Santos on
Peter Olcott wrote:

>
> When I speak of fault tolerance I am talking about yanking
> the power cord at any point during execution.
>
> I don't see how Clustering or redundancy could recover from
> this.


Of course you don't, just like everything else you don't see.

> There are many other cases where clustering and
> redundancy would make a system more fault tolerant, but not
> on a transaction-by-transaction basis.


So if you know the answer why are you asking here?

I told you the best answer, given your lack of computer understanding,
lack of programming capabilities, lack of funding, and lack of tenacity
to figure things out or of ability to trust experts:

 - Reduce your caching and buffered I/O, or flush more
   frequently, so MOST of your data is on disk MOST of the
   time, to survive ungraceful process aborts or
   machine power outages.
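
Hector's advice boils down to flushing after each unit of work. A minimal sketch (the log path and record format are illustrative): flush() drains the user-space buffer, and os.fsync() asks the kernel to put the bytes on stable storage, so an abrupt power loss costs at most the record in flight.

```python
import os
import tempfile

# "Most of your data on disk most of the time": flush and fsync after
# every record instead of letting it sit in write-back buffers.
path = os.path.join(tempfile.gettempdir(), "journal.log")  # illustrative

with open(path, "a", encoding="utf-8") as log:
    for record in ["job-1 done", "job-2 done"]:
        log.write(record + "\n")
        log.flush()              # drain the C library buffer
        os.fsync(log.fileno())   # ask the OS to reach stable storage
```

The trade-off is exactly the one implied above: each fsync costs a disk round-trip, so per-transaction durability and peak throughput pull in opposite directions.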

That's your MILLION DOLLAR ANSWER - PAY UP!

--
HLS
From: Hector Santos on
Peter Olcott wrote:

> "Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote in message
> news:ORrRkLEzKHA.928(a)TK2MSFTNGP05.phx.gbl...
>>
>> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
>> news:EqmdnZkWypVi7DbWnZ2dnUVZ_rednZ2d(a)giganews.com...
>>> (By fault tolerance I mean yank the power plug from the
>>> wall and when the machine is re-started it (as much as
>>> possible) picks up right where it left off)
>> ...take a look at transactional NTFS.
>>
>> http://msdn.microsoft.com/en-us/library/aa365738(VS.85).aspx
>>
>> -Pete
>>
>
> Which I bet requires some sort of persistent storage, yup it
> does.


which means:

Reduce your cache and buffer I/O!

> How could I have very fast inter process communication that
> is also fault tolerant, or are these two mutually exclusive?


We told you already!

What is mutually exclusive is your desire to use RAW MEMORY with ERROR
RECOVERY!

There is one way only to achieve this level of non-destruction:

USE FLASH MEMORY!

Now STOP IT already with these ridiculous questions of yours!


--
HLS
From: Pete Delgado on

"Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
news:E46dnUbV0LzZJzbWnZ2dnUVZ_judnZ2d(a)giganews.com...
>
> "Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote in message
> news:ORrRkLEzKHA.928(a)TK2MSFTNGP05.phx.gbl...
>>
>>
>> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
>> news:EqmdnZkWypVi7DbWnZ2dnUVZ_rednZ2d(a)giganews.com...
>>> (By fault tolerance I mean yank the power plug from the wall and when
>>> the machine is re-started it (as much as possible) picks up right where
>>> it left off)
>>
>> ...take a look at transactional NTFS.
>>
>> http://msdn.microsoft.com/en-us/library/aa365738(VS.85).aspx
>>
>> -Pete
>>
>
> Which I bet requires some sort of persistent storage, yup it does.

Peter,
You did not, in this particular portion of a very convoluted thread, mention
that you wanted fault tolerance without persistent storage. I apologize if I
have read your mind incorrectly, but you simply asked for fault tolerance
and I gave you a method that you could actually use with *very little
additional knowledge required*! Additionally, I believe that your
interpretation of "fault tolerance" is that a catastrophic event could
happen to your system and your application would not lose *any* data. Is this
the definition that you are using?
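
The transactional-NTFS suggestion above has a portable approximation: write to a temporary file, fsync it, then atomically rename it over the original. After a power cut the file holds either the old contents or the new contents, never a torn mixture. A minimal sketch, with the state-file name and record format invented for illustration:

```python
import os
import tempfile

def atomic_write(path, data):
    """Crash-safe file update: the rename either happens or it doesn't,
    so readers never observe a partially written file. This is a rough,
    single-file analogue of what transactional NTFS offers on Windows."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # new contents on stable storage first
        os.replace(tmp, path)     # then the atomic rename commits them
    except BaseException:
        os.unlink(tmp)            # roll back: discard the half-written temp
        raise

state = os.path.join(tempfile.gettempdir(), "app_state.txt")  # illustrative
atomic_write(state, "last-completed-job: 42\n")
```

The temp file must live in the same directory as the target, because os.replace is only atomic within one filesystem.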


-Pete


From: Pete Delgado on

"Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
news:SOqdnbRxhpnk4DbWnZ2dnUVZ_radnZ2d(a)giganews.com...
>
> I was making a conservative estimate, actual measurement indicated zero
> page faults after all data was loaded, even after waiting 12 hours.

Are you:

a) Running under the debugger by any chance?
b) Allowing the system to hibernate during your 12 hour run?
c) Doing anything special to lock the data in memory?

I would expect the SuperFetch service as well as the working set
manager to work against keeping your large data file in memory if you are
simply using standard malloc or new to reserve space for your data
within your program. In fact, I built a test system with Windows 7 x64,
8GB of memory, and a quad-core processor to test your assertion, but I noticed
immediately that even with the amount of memory installed on my system and NO
additional programs beyond the OS, I still have a non-zero page-fault delta
when I simply switch between applications. While that certainly is not an
exhaustive test, it indicates the type of behavior that I expected.
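
The page-fault delta Pete describes can also be observed from inside the process itself. A rough sketch (Unix-only, via the stdlib resource module; Pete's Windows test would read the same kind of counter from Task Manager or GetProcessMemoryInfo, and the 50 MB buffer size is arbitrary):

```python
import resource

# Snapshot the process's page-fault counters, touch a chunk of memory,
# then snapshot again: the delta shows faults caused by the work itself.
before = resource.getrusage(resource.RUSAGE_SELF)

data = bytearray(50 * 1024 * 1024)    # commit ~50 MB
for i in range(0, len(data), 4096):   # touch one byte per page
    data[i] = 1

after = resource.getrusage(resource.RUSAGE_SELF)
print("minor faults:", after.ru_minflt - before.ru_minflt)
print("major faults:", after.ru_majflt - before.ru_majflt)
```

A zero *major*-fault delta only means nothing was read back from disk during the window; the working-set trimming Pete mentions would show up as major faults on the next touch after pages are evicted.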

If you would like to supply me with your test executable and data file, I'd
be happy to test on my system to see if I get the same results as you do as
I do not mind reimaging the machine.

-Pete