From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:cnfcr5t8ht5v51r2b80un0jjcas396bmcs(a)4ax.com...
> See below...
> On Fri, 2 Apr 2010 12:08:21 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>I will sent the results waiting for an HTTP
>>acknowledgement,
>>and only deduct the dime from the account balance if I get
>>an HTTP acknowledgement. These same results will be
>>available for at least thirty days whenever a client logs
>>in. The client request optional email delivery too. This
>>seems sufficient to me, did I miss anything?
> ****
> I think you have introduced at lot of gratuitous
> complexity here; instead, discard the
> whole transaction, credit the end user, and let them
> resubmit if they need to. Simple.

Instead of crediting the transaction if the operation fails,
(failure for a multiplicity of reasons) it is easier to
simply not debit the transaction until the operation is
known to have fully succeeded.

The operation is not known to have fully succeeded until I
get an HTTP acknowledgement that the results have been
received. If my server dies before I get a chance to charge
the customer, I am erring on the side of the customer.

> ****
>>
>>>
>>> Bottom line: you can't use your excuse on real
>>> transactions. The excuses don't have any
>>> legal basis. Have fun!
>>> ****
>>>>
>>>>>> I have no idea what you mean by "precisely measurable
>>>>>> reliability"
>>>>>
>>>>> Is there any sort of hand-shaking protocol such that
>>>>> all
>>>>> emails are explicitly acknowledged as received by the
>>>>> receiving ISP? Even if email itself is very
>>>>> unreliable,
>>>>> as
>>>>> long as it always reports its own failure to deliver,
>>>>> then
>>>>> we have one aspect of precisely measurable
>>>>> reliability.
>>>>
>>>>
>>>>No such thing - PATENT IT!
>>> ****
>>> Sorry, My ISP acknowledges receipt, and their anti-spam
>>> throws the message away and they
>>> don't send it to me, I have not received it. End of
>>> story. You seem to have missed this
>>> point. In networking terms, "reliable" has a different
>>> meaning than the intuitive one. A
>>
>>That always screws me up.
>>
>>> mechanism that reports errors IS considered "100%
>>> reliable" (e.g., TCP/IP) whereas one
>>> that does not report errors is considered UNRELIABLE
>>> (e.g., UDP). So when my ISP accepts
>>> the message, they "reliably" acknowledge that they have
>>> received it. Or, if they
>>> determine it is spam, they fail to acknowledge that they
>>> have received it. But the
>>> antivirus algorithm runs AFTER receipt is acknowledged
>>> and
>>> may yet throw it away.
>>>
>>> You have failed to understand what is meant by
>>> "reliability" and you have made a naive
>>> assumption that ACK==DELIVERY. In Real Life, the
>>> systems
>>> do not work according to your
>>> fantasies, and therefore predicating a business plan on
>>> fantasies boils down to, in its
>>> essence, failure.
>>
>>So what about FTP?
> ****
> Gratuitous complexiyt.
> ****
>
>>
>>>
>>> Also, look into ORBIS (I think that's how the acronym is
>>> spelled); my ISP is an ORBIS
>>> member, and one of my correspondents goes through a
>>> non-ORBIS carrier, and I get about 1
>>> out of 10 messages he sends me. If he sends from
>>> another
>>> of his email accounts, I get 10
>>> of 10 from that account, because it routes through ORBIS
>>> members. ORBIS is an informal
>>> anti-spam consortium who simply drop any email that
>>> comes
>>> from known spam routers.
>>> Apparently the way this is handled to is "acknowledge"
>>> it
>>> and throw it away.
>>>
>>> But a belief that there is a "100% verifiable" mechanism
>>> relies on definitions of the word
>>> "verifiable" that are, at best, creative, and an
>>> interpretation of probability that goes
>>> beyond the formal definition of "Expected value" that
>>> every statistics book takes great
>>> care to explain. So I'm not sure what you are asking
>>> for,
>>> but I'm pretty sure your
>>> expectations are fantasy.
>>>
>>> ****
>>>>
>>>>>> To me, the only reliable mechanism is to drop the
>>>>>> transaction, but credit the account. The
>>>>>> belief that there is a reliable "callback" mechanism
>>>>>> is
>>>>>> ill-founded and I would not use
>>>>>> anything that was not completely guaranteed to be
>>>>>> correct.
>>>>>> Crediting the account can be
>>>>>> made completely reliable, as long as you are using a
>>>>>> transacted database to record the
>>>>>> transactions you are charging for (this is why
>>>>>> transacted
>>>>>> databases were created!)
>>>>>> ****
>>>>
>>>>
>>>>> I don't know maybe. Since all users will also have the
>>>>> results of their transaction posted to their user
>>>>> account,
>>>>> and I won't know that the connection is dropped until
>>>>> the
>>>>> most expensive part of the processing is completed,
>>>>
>>>>
>>>>HA! No drop connection detection
>>> ****
>>> But TCP/IP ALWAYS has a dropped-connection detection!
>>> That's one of its design
>>> requirementments...of course, you actually have to LOOK
>>> for the event!
>>> joe
>>
>>So great then this will be my primary basis. Some
>>customers
>>might prefer some sort of batch oriented automated
>>delivery.
>>Email seemed to be the obvious choice. I have never had
>>the
>>degree of difficulty that you have stated as possible. I
>>even bought several software packages where the means of
>>delivery was an emailed link.
> ****
> Simple, obvious and wrong...
>
> Note that if I don't receive the email, I get to contact
> them again.
>
> Expect this will happen to you.
> ***
>>
>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Joseph M. Newcomer on
See below....
On Fri, 2 Apr 2010 14:44:45 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:1lbcr5lo46hg8ha34t4rmeb5v731i7f14l(a)4ax.com...
>> See below...
>> On Fri, 2 Apr 2010 11:44:43 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>>OK I think that I have it. If the customer account
>>>balances
>>>are stored in a file (binary data, fixed length records,
>>>thus easy to seek to), then we would probably need a
>>>record
>>>lock on the client's record that remains in effect between
>>>reading the current balance, and writing the updated
>>>balance.
>> ****
>> You are violating your rule of dropping to the physic'al
>> level without knowing that the
>> abstract requirements are being met.
>> For example, the presumption that "atomic append"
>> meets the requirements or that a binary file even matters
>> in this discussion seem both
>> unjustified assumptions.
>
>Because the record lock idea would ensure one significant
>aspect of transactional integrity, and your mind is stuck in
>"refute mode" you had to square peg in round hole with
>hammer another aspect of what I said.

***
Well, since I *know* that Unix/linux does not have "record locking" (it is what is called
"cooperative locking" and locks are not enforced by the operating system, as they are in
Windows), I would not have considered record locking to be a big deal, mostly because when
and if it is used, the DBMS uses it at a level far lower than you, as a client of the
DBMS, will ever see. Also, record locking does NOT guarantee transactional integrity,
because transactional integrity requires the concept of positive-commitment and the
unix/linux file systems (and the Windows file systems) do not support positive-commitment
protocols. The recent Windows operating system (I think starting with Vista) now support
positive commitment, which means it no longer had to be done by using unbuffered I/O in
the application.

Yes, I'm stuck in "refute mode", especially, when I hear nonsense. Note that record
locking prevents collisions on the same section of the file (when it is used, which
unix/linux do NOT guarantee or enforce), they do NOT guarantee those updates are
transactional updates. Do you really understand the difference? Just because a byte
range is locked does NOT guarantee transactional integrity on updates. The range is NOT
committed when it is unlocked! And pwrite does NOT guarantee, with or without locks, that
the data is committed to the file! It only guarantees that the data is written in the
correct place in the file on append.

You are constantly working with fundamental misconceptions of what is going on in the
operating system, and complain when we tell you that you are clueless. Alas, this may
satisfy you to tell us we are yelling at you, but it does not solve your cluelessness.
joe
****
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Fri, 2 Apr 2010 14:32:07 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:cnfcr5t8ht5v51r2b80un0jjcas396bmcs(a)4ax.com...
>> See below...
>> On Fri, 2 Apr 2010 12:08:21 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>
>>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>>message news:fg5cr5durc5vv2mk649328v1qs9i0inuqu(a)4ax.com...
>>>> See below...
>>>> On Thu, 01 Apr 2010 15:04:12 -0400, Hector Santos
>>>> <sant9442(a)nospam.gmail.com> wrote:
>>>>
>>>>>
>>>>>Peter Olcott wrote:
>>>>>
>>>>>>> And a belieft that email is reliable (in spite of the
>>>>>>> overwhelming evidence that it is
>>>>>>> not) solves this how?
>>>>>>
>>>>>> I asked if email at least had verifiable reliability
>>>>>> and
>>>>>> no
>>>>>> one answered.
>>>>>
>>>>>
>>>>>Thats because you were not listening again.
>>>> ****
>>>> Actually, nobody answered because the question was
>>>> essentially silly. Anyone who receives
>>>> email knows that it isn't very reliable.
>>>
>>>What about FTP? I could do my on-the-fly backup using FTP.
>> ***
>> OK, and you have now introduced another complex mechanism
>> into the validation of the state
>> diagram. How is it that FTP increases the reliability
>> without introducing what I might
>> call "gratuitous complexity". Note that your state
>> diagram now has to include the entire
>> FTP state diagram as a subcomponent. Have you looked at
>> the FTP state diagram? Especially
>> for the new, restartable, FTP protocols?
>> joe
>
>Yeah right the whole thing is so complex that no one could
>every possibly accomplish it so I might as well give up,
>right?
****
No, it is doable, and vastly more complex things are done every day. But they are done by
people who spend the time to LEARN what they are doing, and don't design by throwing darts
at a wall full of buzzwords.

What is flawed is your design process, in which you identify a problem, and then propose
some completely-off-the-wall solution which may or may not be appropriate, and in the case
of FTP, is probably the WRONG approach, because it adds complexity without improving
reliability or robustness.

I and others have design systems that had to keep running after crashes, and I did it by
using a transacted database to keep track of the pending operations. And I spent WEEKS
testing every possible failure mode before I released the software to my client (who has
yet to find a problem in my design, which has now been selling for ten years). I did NOT
toss wild concepts like "pwrite" and "FTP" around as if they would solve the problem;
instead, I analyzed what needed to be handled, and built mechanisms that solved those
problems, based on carefully-analyzed state diagrams (I described them to you) and
fundamentally reliable mechanisms like a transacted database system at the core of the
solution.
>
>I ALWAYS determine feasibility BEFORE proceeding with any
>further analysis.
****
No, you have been tossing buzzwords around as if they are presenting feasible solutions,
without justifying why you think they actually solve the problem!

I use a well-known and well-understood concept, "atomic transaction", you see the word
"atomic" used in a completely different context, and latch onto the idea that the use you
saw corresponds to the use I had, which is simply not true. An atomic file operation does
NOT guarantee transactional integrity. File locks provide a level of atomicity with
respect to record updates, but they do not in and of themselves guarantee transactional
integrity. THe fundamental issue here is integrity of the file image (which might be in
the file system cache) and integrity of the file itself (what you see after a crash, when
the file system cache may NOT have been flushed to disk!)
****
>
>If FTP is not reliable, then I am done considering FTP. If
>FTP is reliable then it might possibly form a way for
>transaction by transaction offsite backup.
****
FTP works, but that is not the issue. The REAL issue is, will adding the complexity of an
FTP protocol contribute to the reliability of your system in a meaningful fashion, or will
it just introduce a larger number of states such that you have more cut-points in the
state transition diagram that have to be evaluated for reliability? And, will the result
be a more effective recovery-from-cut-point or just more complexity? You have failed to
take the correct approach to doing the design, so the output of the design process is
going to be flawed.

Build a state machine of the transactions. At each state transition, assume that the
machine will fail *while executing that arc* of the graph. Then show how you can analyze
the resulting intermediate state to determine the correct recovery procedure. If you do
this, concepts like "FTP" beome demonstrably inappropriate, because FTP adds dozens of cut
points to the state transition diagram, making the recovery that much more complex.
*****
>
>It seems that you investigate all of the little nuances of
>every detail before even considering feasibly. That is not
>very efficient is it?
****
But it DOES produce systems that actually WORK and RECOVER from failures.

To give you an idea, one of the projects I worked on had an MTBF of 45 minutes, had no
recovery, and failure was indeed catastrophic. A year later, it ran for one of (a) six
weeks without failing in a way that impacted users (b) failed once a day but recovered so
quickly and thoroughly nobody noticed. Actually, (b) was the real situation; I just
examined all the cut-points (there were HUNDREDS) and made sure that it could recover. My
fallback was that there was an exception, "throw CATASTROPHIC_RESTART_REQUEST" (no, it
wasn't C++, it was Bliss-11) and when things got really messed up, I'd throw that, and
this would engage in a five-second restart sequence that in effect restarted the app from
scratch, and I rebuilt the state from transactionally stored state. The program listing I
got was 3/4" thick; a year later, it was 4" thick. THAT's what it takes to make a program
with hundreds of cut-points work reliably. This didn't count the 1/2" of kernel mode code
I had to write to guarantee a transactional persistent storage. That exception got thrown
about once a day, for reasons we could never discover (it appeared to be memory bus
failure returning a NULL pointer, but I could recover even from this!)

Those "little nuances" are the ONLY things that make the difference between a design that
"runs" and one that is "bulletproof".

The server managerment system I did a decade ago has a massive amount of code in it that
essentially never executes, except in very rare and exotic error recovery circumstances,
which almost never happen. Its startup code is thousands of lines that try to figure out
where the cut-point was, and once this has been determined, provides the necessary
recovery.

So don't talk to me about how to write bulletproof software. I've done it. It is
expensive to build. And I know that your current design approach is doing more to
generate buzzword solutions than to produce an actual robust implementation.
joe
****
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:8vdfr5l11iu9e6k3fbp5im74r2hneqc5gb(a)4ax.com...
> See below...
> On Fri, 2 Apr 2010 14:32:07 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>>>What about FTP? I could do my on-the-fly backup using
>>>>FTP.
>>> ***
>>> OK, and you have now introduced another complex
>>> mechanism
>>> into the validation of the state
>>> diagram. How is it that FTP increases the reliability
>>> without introducing what I might
>>> call "gratuitous complexity". Note that your state
>>> diagram now has to include the entire
>>> FTP state diagram as a subcomponent. Have you looked at
>>> the FTP state diagram? Especially
>>> for the new, restartable, FTP protocols?
>>> joe
>>
>>Yeah right the whole thing is so complex that no one could
>>every possibly accomplish it so I might as well give up,
>>right?
> ****
> No, it is doable, and vastly more complex things are done
> every day. But they are done by
> people who spend the time to LEARN what they are doing,
> and don't design by throwing darts
> at a wall full of buzzwords.
>
> What is flawed is your design process, in which you
> identify a problem, and then propose
> some completely-off-the-wall solution which may or may not
> be appropriate, and in the case
> of FTP, is probably the WRONG approach, because it adds
> complexity without improving
> reliability or robustness.

This is my most efficient learning mode, and it does
eventually result in some very excellent designs when
completed.

>
> I and others have design systems that had to keep running
> after crashes, and I did it by
> using a transacted database to keep track of the pending
> operations. And I spent WEEKS
> testing every possible failure mode before I released the
> software to my client (who has
> yet to find a problem in my design, which has now been
> selling for ten years). I did NOT
> toss wild concepts like "pwrite" and "FTP" around as if
> they would solve the problem;

Heh, but, this was not the very first time that you ever
designed such a system was it? How would you have approached
this design if you instead only had some rough ideas about
how things worked?

> instead, I analyzed what needed to be handled, and built
> mechanisms that solved those
> problems, based on carefully-analyzed state diagrams (I
> described them to you) and
> fundamentally reliable mechanisms like a transacted
> database system at the core of the
> solution.

I like to fully understand the underlying infrastructure
before I am fully confident of a design. For example, I now
know the underlying details of exactly how SQLite can fully
recover from a power loss. Pretty simple stuff really.

>>
>>I ALWAYS determine feasibility BEFORE proceeding with any
>>further analysis.
> ****
> No, you have been tossing buzzwords around as if they are
> presenting feasible solutions,
> without justifying why you think they actually solve the
> problem!

On-the-fly transaction by transaction offsite backups may
still be a good idea, even if it does not fit any
pre-existing notions of conventional wisdom. I start with
the most often false premise that all convention wisdom is
pure hooey. As this conventional wisdom proves itself item
by item point by point, I accept the validity of this
conventional wisdom only on those items and points that it
specifically proved itself. This process makes those aspects
of conventional wisdom that have room for improvement very
explicit.

>
> I use a well-known and well-understood concept, "atomic
> transaction", you see the word
> "atomic" used in a completely different context, and latch
> onto the idea that the use you
> saw corresponds to the use I had, which is simply not
> true. An atomic file operation does

I understood both well. My mind was not fresh on the
atomicity of transaction until I thought about it again for
a few minutes.

> NOT guarantee transactional integrity. File locks provide
> a level of atomicity with
> respect to record updates, but they do not in and of
> themselves guarantee transactional
> integrity. THe fundamental issue here is integrity of
> the file image (which might be in

They do provide one key aspect of exactly how SQLite
provides transactional integrity.

> the file system cache) and integrity of the file itself
> (what you see after a crash, when
> the file system cache may NOT have been flushed to disk!)
> ****

There are simple way to force this in Unix/Linux, I don't
bother cluttering my head with their names, I will look them
up again when the time comes There are even ways to flush
the hard drives on-board buffer.

>>
>>If FTP is not reliable, then I am done considering FTP. If
>>FTP is reliable then it might possibly form a way for
>>transaction by transaction offsite backup.
> ****
> FTP works, but that is not the issue. The REAL issue is,
> will adding the complexity of an
> FTP protocol contribute to the reliability of your system
> in a meaningful fashion, or will
> it just introduce a larger number of states such that you
> have more cut-points in the
> state transition diagram that have to be evaluated for
> reliability? And, will the result

Since I can not count on something not screwing up it seems
that at least the financial transactions must have off-site
backup. I would prefer this to be on a transaction by
transaction basis, rather than once a period-of-time.

> be a more effective recovery-from-cut-point or just more
> complexity? You have failed to
> take the correct approach to doing the design, so the
> output of the design process is
> going to be flawed.
>

Not at all. I have identified a functional requirement and
provided a first guess solution. The propose solution is
mutable, the requirement is not so mutable.

> Build a state machine of the transactions. At each state
> transition, assume that the
> machine will fail *while executing that arc* of the graph.
> Then show how you can analyze
> the resulting intermediate state to determine the correct
> recovery procedure. If you do
> this, concepts like "FTP" beome demonstrably
> inappropriate, because FTP adds dozens of cut
> points to the state transition diagram, making the
> recovery that much more complex.
> *****

More like this:
(1) I wait until the client gets their final result data.
(2) Then deduct the dime from their account balance as a
single atomic transaction.
(3) Then I send a copy of this transaction to offsite
backup.

>>
>>It seems that you investigate all of the little nuances of
>>every detail before even considering feasibly. That is not
>>very efficient is it?
> ****
> But it DOES produce systems that actually WORK and RECOVER
> from failures.

If instead you would look at this using categories instead
of details you could get to the same place much quicker, by
elimination multitudes of details in one fell swoop. I
admit that I may not even yet have the categories quite
right, but, then this level of detailed design on this
subject is all new to me.

>
> To give you an idea, one of the projects I worked on had
> an MTBF of 45 minutes, had no
> recovery, and failure was indeed catastrophic. A year
> later, it ran for one of (a) six
> weeks without failing in a way that impacted users (b)
> failed once a day but recovered so
> quickly and thoroughly nobody noticed. Actually, (b) was
> the real situation; I just
> examined all the cut-points (there were HUNDREDS) and made
> sure that it could recover. My
> fallback was that there was an exception, "throw
> CATASTROPHIC_RESTART_REQUEST" (no, it
> wasn't C++, it was Bliss-11) and when things got really
> messed up, I'd throw that, and
> this would engage in a five-second restart sequence that
> in effect restarted the app from
> scratch, and I rebuilt the state from transactionally
> stored state. The program listing I
> got was 3/4" thick; a year later, it was 4" thick. THAT's
> what it takes to make a program
> with hundreds of cut-points work reliably. This didn't
> count the 1/2" of kernel mode code
> I had to write to guarantee a transactional persistent
> storage. That exception got thrown
> about once a day, for reasons we could never discover (it
> appeared to be memory bus
> failure returning a NULL pointer, but I could recover even
> from this!)

My above design seems to have minimal complexity. By waiting
until everything has completely succeeded before charging
the customer all the complex transaction roll backs prior to
this point become unnecessary.

>
> Those "little nuances" are the ONLY things that make the
> difference between a design that
> "runs" and one that is "bulletproof".
>

Some of the nuances are required, some can be made moot
through using a simpler design.

> The server managerment system I did a decade ago has a
> massive amount of code in it that
> essentially never executes, except in very rare and exotic
> error recovery circumstances,
> which almost never happen. Its startup code is thousands
> of lines that try to figure out
> where the cut-point was, and once this has been
> determined, provides the necessary
> recovery.

I could simply start all over from scratch, as long as I
could count on the original client request's validity.

>
> So don't talk to me about how to write bulletproof
> software. I've done it. It is
> expensive to build. And I know that your current design
> approach is doing more to
> generate buzzword solutions than to produce an actual
> robust implementation.
> joe
> ****

What I am trying to accomplish is inherently much simpler
than the examples that you provided from your experience. I
can leverage this greatly reduced inherent simplicity to
derive a design with very high degrees of fault tolerance
with a much simpler implementation strategy.

>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:mddfr59859omp27giql6333aah4217qfm1(a)4ax.com...
> See below....
> On Fri, 2 Apr 2010 14:44:45 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:1lbcr5lo46hg8ha34t4rmeb5v731i7f14l(a)4ax.com...
>>> See below...
>>> On Fri, 2 Apr 2010 11:44:43 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>>OK I think that I have it. If the customer account
>>>>balances
>>>>are stored in a file (binary data, fixed length records,
>>>>thus easy to seek to), then we would probably need a
>>>>record
>>>>lock on the client's record that remains in effect
>>>>between
>>>>reading the current balance, and writing the updated
>>>>balance.
>>> ****
>>> You are violating your rule of dropping to the physic'al
>>> level without knowing that the
>>> abstract requirements are being met.
>>> For example, the presumption that "atomic append"
>>> meets the requirements or that a binary file even
>>> matters
>>> in this discussion seem both
>>> unjustified assumptions.
>>
>>Because the record lock idea would ensure one significant
>>aspect of transactional integrity, and your mind is stuck
>>in
>>"refute mode" you had to square peg in round hole with
>>hammer another aspect of what I said.
>
> ***
> Well, since I *know* that Unix/linux does not have "record
> locking" (it is what is called
> "cooperative locking" and locks are not enforced by the
> operating system, as they are in
> Windows), I would not have considered record locking to be
> a big deal, mostly because when
> and if it is used, the DBMS uses it at a level far lower
> than you, as a client of the
> DBMS, will ever see. Also, record locking does NOT
> guarantee transactional integrity,
> because transactional integrity requires the concept of
> positive-commitment and the
> unix/linux file systems (and the Windows file systems) do
> not support positive-commitment
> protocols. The recent Windows operating system (I think
> starting with Vista) now support
> positive commitment, which means it no longer had to be
> done by using unbuffered I/O in
> the application.
>
> Yes, I'm stuck in "refute mode", especially, when I hear
> nonsense. Note that record
> locking prevents collisions on the same section of the
> file (when it is used, which
> unix/linux do NOT guarantee or enforce), they do NOT
> guarantee those updates are

I already knew that, but already knowing that it also knew
that it is not really all that to difficult to derive these
things from scratch. As it turns out many relational DBMS
already have done this work for me. Also another very simple
solution is to simply serialize all transactions to a single
thread.

> transactional updates. Do you really understand the
> difference? Just because a byte
> range is locked does NOT guarantee transactional integrity
> on updates. The range is NOT
> committed when it is unlocked! And pwrite does NOT
> guarantee, with or without locks, that
> the data is committed to the file! It only guarantees
> that the data is written in the
> correct place in the file on append.

The somewhat more difficult (although still not all that
difficult) job would be to guarantee transactional integrity
even if someone pulls the power cord from the wall in the
middle of the transaction. SQLite explains exactly how they
do this, its not really all that hard.

>
> You are constantly working with fundamental misconceptions
> of what is going on in the
> operating system, and complain when we tell you that you
> are clueless. Alas, this may

No it just seems that way from my short-hand way of talking.
I make it a specific point to make sure that I never learn
anything by memorization, only by comprehension. "Correct"
terminology can only be learned by memorization. Words are
merely labels that are attached to concepts. Having the idea
right but the words wrong is far better than the converse.

> satisfy you to tell us we are yelling at you, but it does
> not solve your cluelessness.
> joe

I definitely do have some cluelessness, but, far less than
you are perceiving.

> ****
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm