From: Ian Collins on
On 04/21/10 01:00 PM, Peter Olcott wrote:
> "Ian Collins"<ian-news(a)hotmail.com> wrote in message
> news:8370g2Fvj5U6(a)mid.individual.net...
>> On 04/21/10 12:23 PM, Peter Olcott wrote:
>>> "Ian Collins"<ian-news(a)hotmail.com> wrote in message
>>> news:836u94Fvj5U5(a)mid.individual.net...
>>>> On 04/21/10 06:06 AM, Peter Olcott wrote:
>>>>>
>>>>> SSDs have a limited life that is generally not compatible
>>>>> with extremely high numbers of transactions.
>>>>>
>>>> Not any more.
>>>>
>>>> They are used in the most transaction intensive (cache and
>>>> logs) roles in many ZFS storage configurations. They are
>>>> used where a very high number of IOPS are required.
>>>
>>> 100,000 writes per cell and the best ones are fried.
>>> http://en.wikipedia.org:80/wiki/Solid-state_drive
>>
>> That's why they have wear-levelling.
>>
>> Believe me, they are used in very I/O intensive workloads.
>> The article you cite even mentions ZFS as a use case.
>
> 5,000 transactions per minute would wear it out pretty
> quick.

Bullshit.

It would take about 30 minutes to fill a 32GB SATA SSD, and 50,000 hours
to repeat that 100,000 times.

Please, get in touch with the real world. In a busy server, they are
doing 3,000 or more write IOPS all day, every day.

--
Ian Collins
From: Ian Collins on
On 04/21/10 01:09 PM, Peter Olcott wrote:
>
> 5,000 transactions per minute would wear it out pretty
> quick.
>
> With a 512 byte transaction size and 8 hours per day five
> days per week a 300 GB drive would be worn out in a single
> year, even with load leveling.

At that rate, it would take 48 weeks to fill the drive once. Then you
have to repeat 99,999 times...
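The arithmetic behind that figure can be checked directly. A quick back-of-the-envelope in Python, using only the numbers quoted in the thread (5,000 transactions/minute, 512 bytes each, 8 hours/day, 5 days/week, a 300 GB drive, roughly 100,000 program/erase cycles per cell, and the idealizing assumption of perfect wear-levelling):

```python
# Back-of-the-envelope check of the endurance figures quoted above.
TX_PER_MIN = 5_000
TX_SIZE = 512                    # bytes per transaction
MINUTES_PER_WEEK = 8 * 60 * 5    # 8 h/day, 5 days/week
DRIVE_BYTES = 300e9              # 300 GB drive
PE_CYCLES = 100_000              # erase cycles per cell (assumed)

bytes_per_week = TX_PER_MIN * TX_SIZE * MINUTES_PER_WEEK
weeks_to_fill_once = DRIVE_BYTES / bytes_per_week
years_to_wear_out = weeks_to_fill_once * PE_CYCLES / 52

print(f"weeks to fill the drive once: {weeks_to_fill_once:.1f}")
print(f"years to exhaust the cells:   {years_to_wear_out:,.0f}")
```

With perfect wear-levelling the quoted workload takes roughly 49 weeks per full-drive pass, so exhausting 100,000 cycles works out to tens of thousands of years, which is the point being made.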

--
Ian Collins
From: David Schwartz on
On Apr 20, 4:16 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com> wrote:

> It looks like the most cost effective solution is some sort
> of transaction by transaction offsite backup. I might simply
> have the system email each transaction to me.

If the transaction volume is high, something cheaper than an email
would be a good idea. But if your transaction volume is not more than
a few thousand a day, an email shouldn't be a problem.

The tricky part is confirming that the email has been sent such that
the email will be delivered even if the computer is lost. You *will*
need to test this. One way that should work on every email server I
know of is to issue some command, *any* command, after the email is
accepted for delivery. If you receive an acknowledgement from the mail
server, that will do. So after you finish the email, you can just
send, say, "DATA" and receive the 503 error. That should be sufficient
to deduce that the mail server has "really accepted" the email.

Sadly, some email servers have not really accepted the email even
though you got the "accepted for delivery" response. They may still
fail to deliver the message if the TCP connection aborts, which could
happen if the computer crashes.

Sadly, you will need to test this too.
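A sketch of that extra-round-trip trick, using Python's smtplib. The host and addresses are placeholders, and NOOP is used here instead of DATA because it never changes server state; the point is the same: one more acknowledged command after the message is accepted.

```python
import smtplib
from email.message import EmailMessage

def build_backup_message(record: str) -> EmailMessage:
    """Wrap one transaction record in a mail message.

    Addresses and subject are illustrative placeholders.
    """
    msg = EmailMessage()
    msg["From"] = "backup@example.com"
    msg["To"] = "me@example.com"
    msg["Subject"] = "transaction backup"
    msg.set_content(record)
    return msg

def mail_transaction(record: str, host: str = "mail.example.com") -> bool:
    """Send one record and confirm the server answered a follow-up command."""
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(build_backup_message(record))  # raises if refused
        code, _ = smtp.noop()   # the extra round trip described above
        return code == 250
```

As the post warns, even a clean reply here is not an absolute delivery guarantee on every server, so this still has to be tested against the actual mail server in use.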

Of course, if you use your own protocol to do the transaction backup,
you can make sure of this in the design. Do not allow the backup
server to send a confirmation until it has committed the transaction.
Even if something goes wrong in sending the confirmation, it must
still retain the backup information as the other side may have
received the confirmation even if it appears to have failed to send.
(See the many papers on the 'two generals' problem.)
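The commit-before-confirm rule might look like this on the backup server's side, under the assumption that each record goes to an append-only log file (the names and framing are invented for the sketch):

```python
import os

ACK = b"OK\n"

def handle_backup(record: bytes, log_path: str) -> bytes:
    """Durably commit one record, then (and only then) produce the ack."""
    with open(log_path, "ab") as log:
        # Length-prefixed framing so records can be replayed later.
        log.write(len(record).to_bytes(4, "big") + record)
        log.flush()
        os.fsync(log.fileno())   # on disk before we ever say "OK"
    # Even if sending this ack later fails, the record stays committed:
    # the client may have received the ack although our send appeared
    # to fail (the two-generals point above).
    return ACK
```

The caller sends the returned ack over the wire only after this function returns, so a crash at any point leaves either a committed record or no ack, never an ack without a record.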

DS
From: Peter Olcott on

"Ian Collins" <ian-news(a)hotmail.com> wrote in message
news:83722jFvj5U8(a)mid.individual.net...
> On 04/21/10 01:09 PM, Peter Olcott wrote:
>>
>> 5,000 transactions per minute would wear it out pretty
>> quick.
>>
>> With a 512 byte transaction size and 8 hours per day five
>> days per week a 300 GB drive would be worn out in a single
>> year, even with load leveling.
>
> At that rate, it would take 48 weeks to fill the drive
> once. Then you have to repeat 99,999 times...
>
> --
> Ian Collins

Yeah, I forgot that part. That might even be cost-effective
for my 100K transactions, or I could offload the temp data
to another drive.


From: Peter Olcott on

"David Schwartz" <davids(a)webmaster.com> wrote in message
news:c3516bb2-7fcf-4eb1-a31f-01adfdf8ad92(a)n20g2000prh.googlegroups.com...
On Apr 20, 4:16 pm, "Peter Olcott" <NoS...(a)OCR4Screen.com>
wrote:

> It looks like the most cost effective solution is some sort
> of transaction by transaction offsite backup. I might simply
> have the system email each transaction to me.


--Of course, if you use your own protocol to do the transaction backup,
--you can make sure of this in the design. Do not allow the backup
--server to send a confirmation until it has committed the transaction.
--Even if something goes wrong in sending the confirmation, it must
--still retain the backup information as the other side may have
--received the confirmation even if it appears to have failed to send.
--(See the many papers on the 'two generals' problem.)
--
--DS

This is the sort of thing that I have in mind. Simply
another HTTP server that accepts remote transactions for the
first server.
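One way such a second HTTP server could be sketched, using only the standard library: each POSTed transaction is appended to a local log and fsynced before the 200 response goes back, following the commit-before-confirm rule from the thread. The paths and handler names are invented for illustration.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG_PATH = "transactions.log"   # placeholder location for the backup log

class BackupHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        record = self.rfile.read(length)
        with open(LOG_PATH, "ab") as log:   # commit first...
            log.write(record + b"\n")
            log.flush()
            os.fsync(log.fileno())
        self.send_response(200)             # ...confirm second
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, *args):
        pass                                # silence per-request logging

def make_backup_server(port: int = 8080) -> HTTPServer:
    return HTTPServer(("127.0.0.1", port), BackupHandler)
```

Running `make_backup_server().serve_forever()` gives the first server a remote endpoint to POST each transaction to, and it only sees the 200 after the record is on the backup machine's disk.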