From: Hector Santos on 25 Mar 2010 11:26
Peter Olcott wrote:
> How else can fault tolerance be provided without persistent storage?
About 1 million dollars!
From: Joseph M. Newcomer on 25 Mar 2010 11:29
On Thu, 25 Mar 2010 09:08:25 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:
>"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
>> Peter Olcott wrote:
>>>>> Ah so this is the code that you were suggesting?
>>>>> I won't be able to implement multi-threading until
>>>>> grows out of what a single core processor can
>>>>> I was simply going to use MySQL for the inter-process
>>>>> communication, building and maintaining my FIFO queue.
>>>> Well, I can think of worse ways. For example, writing
>>>> the data to a floppy disk. Or
>>>> punching it to paper tape and asking the user to
>>>> re-insert the paper tape. MySQL for
>>>> interprocess communication? Get serious!
>>> Can you think of any other portable way that this can be done?
>>> I would estimate that MySQL would actually keep the FIFO
>>> queue resident in RAM cache.
>> Will MySQL will keep a FIFO queue resident?
>> WOW! This is unbelievable.
>> Do you know what MySQL is? Or even a FIFO queue?
>Do you know what file caching is? I know that a SQL provider
>would not be required to always hit disk for a 100K table
>when multiple GB of RAM are available.
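[For reference, the scheme being proposed above (a FIFO work queue kept in a SQL table) would look roughly like the sketch below. SQLite is used only to keep the example self-contained; the table and column names are illustrative, not from the thread.]

```python
import sqlite3

# In-memory SQLite stands in for MySQL here; the table and column
# names (job_queue, id, payload) are made up for illustration.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE job_queue "
    "(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)"
)

def enqueue(payload):
    with db:  # each enqueue is its own transaction
        db.execute("INSERT INTO job_queue (payload) VALUES (?)", (payload,))

def dequeue():
    # FIFO order = lowest id first; select and delete in one transaction
    with db:
        row = db.execute(
            "SELECT id, payload FROM job_queue ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        db.execute("DELETE FROM job_queue WHERE id = ?", (row[0],))
        return row[1]

enqueue("page-1")
enqueue("page-2")
assert dequeue() == "page-1"  # oldest job comes out first
```

[Whether such a table stays RAM-resident is of course exactly the point under dispute below.]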
Do you know what a "committed transaction" is? And why it REQUIRES hitting the disk? Note
that most table accesses in databases are inherently treated as transactions to guarantee
database integrity even at the single-table level. And could you cite a reference that a
SQL provider is not required to hit the disk for small tables? I have a couple MySQL
books which I studied a while ago and nothing ever hinted at this. And it is inconsistent
with basic principles of database behavior. Maybe I'm spoiled by listening to all those
talks by database experts we brought to the SEI back in the mid-1980s, such as the
inventors of DB2 and Ingres, and high-level technical people from Oracle, which was just a
little startup company way back then. All of whom talked in great detail about how to make
databases robust under power failure and OS crash scenarios. Or maybe it was just my
experience with the IBM AIX transacted file system that biases me. But hey, what do I
know? You're the expert here...
Did you know that some hard drives have onboard RAM caches and report "successful
commitment" if the data gets to the RAM cache? At a trade show, I asked one technical guy
"do you have a way for me to force it to the magnetic surface?" and he assured me that
this was an absurd request. "So what do you do if my machine loses power and the database
is corrupted because the data never actually got committed" and his answer was immediate:
"We just blame Microsoft". I feel that this ridiculous attitude needs to be made public,
and since I wasn't under NDA when I learned this, I feel I have to tell it as often as I
can. Particularly to database people.
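[The point about committed transactions can be illustrated in miniature: a write is not durable until it has been pushed past the OS file cache, which is what fsync() requests; and, as noted above, even that may only reach the drive's onboard RAM cache unless the drive honors the flush. A minimal sketch, with a made-up helper name:]

```python
import os
import tempfile

# Append a record and force it past the OS file cache. os.fsync asks the
# kernel to push buffered data to the device; whether the drive's own
# RAM cache then honors the flush is exactly the problem described above.
def durable_append(path, record):
    with open(path, "a", encoding="utf-8") as f:
        f.write(record + "\n")
        f.flush()             # flush Python's userspace buffer
        os.fsync(f.fileno())  # flush the OS cache to the device

path = os.path.join(tempfile.mkdtemp(), "journal.log")
durable_append(path, "txn-1 committed")
with open(path, encoding="utf-8") as f:
    assert f.read() == "txn-1 committed\n"
```

[A database that served reads from RAM without this step on writes could not honestly report a transaction as committed.]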
Why do you think file caching is going to work here? Other than your normal
wishful-thinking approach, that is?
As far as knowing what file caching is, I was probably writing code to do it before you
were born; providing you are under age 45. If you're under 50, I was writing it before
you got to first grade. I was writing file system caching code in 1966. Now let me ask
you: do you know how Windows file caching actually works? I do. I've read the File
System Driver book. I lurk on the OSR newsgroup, and listen to people like Tony Mason give
talks about Windows file systems at Driver DevCon. (Tony is perhaps the world's best
expert on Windows file systems outside Microsoft right now.) And friends of mine invented
the concept of transacted file systems at Xerox PARC in the 1970s, and came to CMU and
gave talks on what they did, and we went out for pizza afterwards and we all learned more.
Before you try to be condescending, make sure you have established yourself on the
technical knowledge high ground. So far, you've been demonstrating you are in the
technical knowledge Marianas Trench.
>> Honestly, do you really understand what MySQL or any SQL
>> engine is?
>> And you are worry about performing?
>Not so much on the 1% share of the response time total. In
>this case I want a solution that can be ported across any
>hardware platform with minimal or zero changes. It also has
>to be fault tolerant and have complete error recovery. (Pull
>the plug in the middle of processing, and when the machine
>is restarted it picks up where it left off with zero or
Depends on how much code you are willing to write. For example, two or three lines in
Windows: SendMessageTimeout(WM_COPYDATA, ...) will handle interprocess communication in
most Windows apps, at a lot less overhead than learning MySQL (CAREFUL: Use of MySQL
REQUIRES SOME LEARNING!) or using MySQL in the communication path. What you are saying is
that in the two places you do any interprocess communication, you have to have identical
source on all platforms, whereas those of us who have actually WRITTEN real, actual
portable code know that in the two places we do this, we call a platform-specific
subroutine that has different code sequences on each platform (I've done a
Windows/Unix/VMS/Mac port using these techniques). But sure, go ahead, add lots of
overhead. That non-negotiable 500ms limit seems to be less of a concern to you these days
than it was when you insisted it was the ONLY goal you cared about.
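[The portability technique described above (identical call sites everywhere, with one platform-specific subroutine per platform) might be sketched like this; the function name and the Windows branch are illustrative only, and the POSIX branch uses a plain pipe purely as a stand-in transport:]

```python
import os
import sys

# One call site everywhere; the platform-specific body lives in one place.
# send_ipc is a made-up name. On Windows this is where a
# SendMessageTimeout(WM_COPYDATA, ...) call would go; elsewhere a POSIX
# mechanism (here, just an os.pipe for demonstration) is used.
if sys.platform == "win32":
    def send_ipc(msg: bytes) -> bytes:
        # Placeholder for the Windows-specific path (e.g. WM_COPYDATA).
        raise NotImplementedError("wire up SendMessageTimeout here")
else:
    _r, _w = os.pipe()  # stand-in transport for the sketch

    def send_ipc(msg: bytes) -> bytes:
        os.write(_w, msg)
        return os.read(_r, len(msg))

# Portable code calls send_ipc() and never mentions the platform.
assert send_ipc(b"job 42") == b"job 42"
```

[The point is that only this one subroutine differs per platform; the rest of the source is identical everywhere, with no SQL engine in the communication path.]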
And you KNOW that there is only a 1% degradation of the response time, how? Oh, I forgot,
the Tarot cards. Or is it the I Ching? One of those, I'm sure. As an engineer, I need
to MEASURE these things, not use wishful thinking to predict them.
Joseph M. Newcomer [MVP]
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Hector Santos on 25 Mar 2010 11:35
Peter Olcott wrote:
> The whole process has to be as fault tolerant as possible,
> and fault tolerance requires some sort of persistent storage.
There you go again: you read a new buzzword and now you are fixated
on it, which further adds to your NEVER finishing this vaporware product
and project anyway.
From: Pete Delgado on 25 Mar 2010 13:23
"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
> Joseph M. Newcomer wrote:
>> Or they were testing the limits of your credulity. Reminds me of the
>> Calvin & Hobbes
>> cartoon: The family is in the car. Calvin: "Dad, how do they determine
>> the weight limit
>> of a bridge?" Dad: "They run bigger and bigger trucks over it until it
>> collapses, then
>> they rebuild it exactly and post the weight limit"
> I like that one. :)
That was the thing about Watterson, his insights were pretty universal and
not strictly aimed at children. :-)
From: Pete Delgado on 25 Mar 2010 13:40
"Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
> Can you provide any other basis for maximizing fault tolerance that does
> not require some sort of persistent storage?
Clustering or redundancy...