From: robertwessel2 on
On Apr 21, 6:57 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:
> "David Schwartz" <dav...(a)webmaster.com> wrote in message
>
> news:1a5b8da0-de50-4e88-87ce-b0f1900d570b(a)u9g2000prm.googlegroups.com...
> On Apr 21, 3:19 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com>
> wrote:
>
> > I would not need a high end database that can run in
> > distributed mode, I would only need a web application that
> > can append a few bytes to a file with these bytes coming
> > through HTTP.
>
> --Yep. Just make sure your web server is designed not to send an
> --acknowledgment unless it is sure it has the transaction information.
> --And do not allow the computer providing the service to continue until
> --it has received and validated that acknowledgment.
> --
> --DS
>
> Yes, those are the two most crucial keys.


It's not quite that simple - a simple protocol can leave your primary
and backup/secondary servers in an inconsistent state. Suppose a
transaction is run on the primary, but not yet committed, then is
mirrored to the secondary, and the secondary acknowledges storing
it. Now the primary fails before it can receive the acknowledgement
and commit (and thus when the primary is recovered, it'll back out
the uncommitted transaction and will then be inconsistent with the
secondary). Or, if the primary commits before the mirror operation,
you have the opposite problem - an ill-timed failure of the primary
will prevent the mirror operation from happening (or being committed
at the secondary), and again you end up with the primary and backup
servers in an inconsistent state.

The usual answer to that is some variation of a two-phase commit.
While you *can* do that yourself, getting it right is pretty tricky.
There is more than a bit of attraction to leaving that particular
bit of nastiness to IBM or Oracle, or...
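
A minimal sketch of that ordering in C, with stub functions standing
in for the real network and log (the stubs and their names are
hypothetical, and recovery of in-doubt transactions - the genuinely
tricky part - is omitted entirely):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical transport/log helpers; a real system would do durable
   network and disk I/O here. */
static bool send_prepare(const char *txn)
{
    printf("prepare -> secondary: %s\n", txn);
    return true;
}

static bool recv_prepare_ack(void)
{
    return true;  /* secondary durably stored the record and voted yes */
}

static bool log_commit_locally(const char *txn)
{
    printf("commit on primary: %s\n", txn);
    return true;
}

static void send_commit(const char *txn)
{
    printf("commit -> secondary: %s\n", txn);
}

/* Phase 1: the secondary durably records the transaction and votes.
   Phase 2: only after a yes vote does the primary commit, then tell
   the secondary to finalize. A crash between the phases leaves the
   transaction "in doubt"; resolving that during recovery is where
   most of the trickiness lives. */
static bool two_phase_commit(const char *txn)
{
    if (!send_prepare(txn) || !recv_prepare_ack())
        return false;         /* abort: neither side committed */
    if (!log_commit_locally(txn))
        return false;         /* secondary merely prepared; recoverable */
    send_commit(txn);
    return true;
}

int main(void)
{
    return two_phase_commit("append: customer 42, +5 cents") ? 0 : 1;
}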
From: Peter Olcott on
For my purposes it is that simple. The server does not commit the
transaction or send the transaction to the backup server until the
customer has already received the data that they paid for. Because of
this, if either server fails to have the transaction, then that
server is wrong.

From: David Schwartz on
On Apr 22, 11:08 am, "robertwess...(a)yahoo.com"
<robertwess...(a)yahoo.com> wrote:

> It's not quite that simple - a simple protocol can leave your primary
> and backup/secondary servers in an inconsistent state. Suppose a
> transaction is run on the primary, but not yet committed, then is
> mirrored to the secondary, and the secondary acknowledges storing
> it. Now the primary fails before it can receive the acknowledgement
> and commit (and thus when the primary is recovered, it'll back out
> the uncommitted transaction and will then be inconsistent with the
> secondary).

He's not using rollbacks.

> Or, if the primary commits before the mirror operation,
> you have the opposite problem - an ill-timed failure of the primary
> will prevent the mirror operation from happening (or being committed
> at the secondary), and again you end up with the primary and backup
> servers in an inconsistent state.

He will not commit in the primary until the secondary acknowledges.

> The usual answer to that is some variation of a two-phase commit.
> While you *can* do that yourself, getting it right is pretty tricky.
> There is more than a bit of attraction to leaving that particular
> bit of nastiness to IBM or Oracle, or...

I don't think he has any issues given that his underlying problem is
really simple. His underlying problem is "primary must not do X unless
secondary knows primary may have done X". The solution is simple --
primary gets acknowledgment from secondary before it ever does X.
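
A sketch of that single rule in C - the helpers here are hypothetical
stand-ins for the real transport and for the action X (delivering the
data the customer paid for):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins; a real system would mirror over the network
   and wait for a durable acknowledgment. */
static bool tell_secondary(const char *event)
{
    printf("mirror: %s\n", event);
    return true;
}

static bool secondary_acked(void)
{
    return true;
}

static void do_X(const char *event)
{
    printf("doing: %s\n", event);
}

/* The invariant: X never happens unless the secondary already holds a
   record that X *may* have happened. The worst post-crash state is a
   record for an action that never ran - never the reverse. */
static bool record_then_act(const char *event)
{
    if (!tell_secondary(event) || !secondary_acked())
        return false;   /* refuse to act; nothing to reconcile later */
    do_X(event);
    return true;
}

int main(void)
{
    return record_then_act("deliver job 7") ? 0 : 1;
}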

DS
From: robertwessel2 on
On Apr 22, 1:42 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:
> For my purposes it is that simple. The server does not
> commit the transaction or send the transaction to the backup
> server until the customer has already received the data that
> they paid for. Because of this, if either server fails to
> have the transaction, then that server is wrong.


So the case where you've delivered product to the customer, and then
your server fails and doesn't record that fact, is acceptable to your
application? I'm not judging, just asking - that can be perfectly
valid. And is it OK too that the remaining server is the one
*without* the record, until eventually the other one (*with* the
record) comes back online, some sort of synchronization procedure
establishes that the transaction *has* in fact occurred, the
out-of-date server is updated, and the customer's state changes from
"not-delivered" to "delivered"? Again, not judging, just asking.

You started this thread with "I want to be able to yank the power
cord at any moment and not get corrupted data other than the most
recent single transaction." Loss of a transaction generally falls
under the heading of corruption. If you actually have less severe
requirements (for example, a negative state must be recorded
reliably, but a positive state need not be - both FSVO "reliable"),
then you may well be able to simplify things.
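
One possible shape for that synchronization procedure, sketched in C:
treat each server's log as a set of transaction IDs and copy over
whatever the out-of-date side is missing. This is a simplification -
real logs carry payloads, ordering, and durable writes - and every
name here is made up for illustration:

#include <stdbool.h>
#include <stdio.h>

static bool contains(const int *log, int n, int id)
{
    for (int i = 0; i < n; i++)
        if (log[i] == id)
            return true;
    return false;
}

/* Copy into dst every transaction that src knows about and dst is
   missing; returns the new length of dst (dst must have room). A
   record's arrival here is the moment a customer flips from
   "not-delivered" to "delivered". */
static int reconcile(int *dst, int ndst, const int *src, int nsrc)
{
    for (int i = 0; i < nsrc; i++)
        if (!contains(dst, ndst, src[i]))
            dst[ndst++] = src[i];
    return ndst;
}

int main(void)
{
    int survivor[8]  = { 101, 102 };      /* lacks the last sale  */
    int recovered[]  = { 101, 102, 103 }; /* holds the record     */
    int n = reconcile(survivor, 2, recovered, 3);
    printf("survivor now has %d transactions\n", n);  /* prints 3 */
    return 0;
}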
From: Peter Olcott on
The problem with my original goal is that the hardware that I will be
getting has no way to force a flush of its buffers. Without that
capability, most of the conventional reliability measures fail. It
will have both a UPS and backup generators.
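
For reference, the conventional measure being ruled out looks roughly
like this C sketch: append the record and fsync() before
acknowledging anything, so the bytes reach stable storage first. It
assumes the drive actually honors the flush - exactly the guarantee
in question here - and the file name is made up:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Append one transaction record and force it to stable storage before
   the caller acknowledges anything. Returns 0 on success. */
static int append_durably(const char *path, const char *rec)
{
    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    ssize_t len = (ssize_t)strlen(rec);
    if (write(fd, rec, len) != len || fsync(fd) != 0) {
        close(fd);
        return -1;   /* do NOT acknowledge the transaction */
    }
    return close(fd);
}

int main(void)
{
    return append_durably("transactions.log", "job 7: +5 cents\n");
}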

The biggest mistake that I must avoid is losing the
customer's money. I must also never charge a customer for
services not received. A secondary priority is to avoid not
charging for services that were provided. Failing to charge
a customer once in a great while will not hurt my business.
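
A sketch, in C, of an ordering that fails only in the direction
described as acceptable: deliver first, record the charge second, so
a crash in between can lose one billable event but can never charge
for an undelivered job. Both helpers are hypothetical stand-ins:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real delivery and billing steps. */
static bool deliver_to_customer(int job)
{
    printf("delivering job %d\n", job);
    return true;
}

static bool record_charge(int job)
{
    printf("charging for job %d\n", job);
    return true;
}

static bool process_job(int job)
{
    if (!deliver_to_customer(job))
        return false;            /* nothing delivered, nothing charged */
    /* A crash right here loses one billable event - acceptable per the
       stated priorities - but never charges for undelivered work. */
    return record_charge(job);
}

int main(void)
{
    return process_job(7) ? 0 : 1;
}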
