From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:25hkr5pe6icu1mn8cs67ufbllme6p3tu28(a)4ax.com...
> See below...
> On Fri, 2 Apr 2010 10:39:45 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:923cr5t67b0tv6mjc0lpi8mnoo6e89mv69(a)4ax.com...
>>> See below...
>>> On Thu, 1 Apr 2010 14:16:30 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>>> || The term "Sparse Matrix" is taken to have the common meaning
>>>>> || of the term "Sparse" combined with the common computer science
>>>>> || meaning of the term "Matrix", a two dimensional array of elements.
>>>>
>>>> http://en.wikipedia.org/wiki/Sparse_matrix
>>>>To be unambiguously distinguished from:
>>>>
>>In the subfield of numerical analysis, a sparse matrix is a
>>matrix populated primarily with zeros (Stoer & Bulirsch 2002,
>>p. 619). The term itself was coined by Harry M. Markowitz.
>>> ****
>>> Actually, a sparse (2-D, to simplify the discussion)
>>> matrix is a matrix of dimension [m*n]
>>> whose value at some coordinates [i,j] is not defined.
>>> 0.0 is a defined value, and
>>> therefore a matrix with 0 values is not "sparse".
>>> Essentially, a sparse matrix is a
>>> matrix for which the mapping [i x j] -> value is not a
>>> total mapping.
>>>
>>> A numerical analyst is free to redefine this in some other terms,
>>> but the essence of a sparse matrix is the failure to have a total
>>> mapping. The trick on "comb vectors" is to create an alternate
>>> representation that maintains the partial mapping of the original
>>> while implementing it as a vector such that the mapping [i]->value
>>> is nearly total (the difference represents the ability to do
>>> perfect compaction). So in the abstract world, a sparse 2-D matrix
>>> is characterized by a partial mapping (forall)i (forall)j [i X j].
>>> I could try to write out the fully general characterization, but
>>> without using Microsoft Equation Editor it is clumsy. To interpret
>>> this any other way is to misinterpret the definition, or, as the
>>> numerical analysts have apparently redefined it, to change the
>>> definition to something else.
>>> joe
>>
>>At the time that I wrote the above patent it seemed that the
>>computer science definition of the term was comparable to the
>>numerical analysis definition. This same definition seems to
>>predominate when the term [sparse matrix] is searched on google.
>>
>>I hung out in the misc.int.property group while I was prosecuting
>>my patent. One of the things that I found there was that a patent
>>has to be so clear that even a great effort to intentionally
>>misconstrue its meaning will fail. I tried to let this standard
>>be my guide.
> *****
> Yes, and the point is...? This is a requirement of a
> patent, that between the set of
> claims and the descriptions of the preferred embodiments,
> anyone who is experienced in the
> field can reproduce your work. Failure to disclose would
> invalidate a patent. So what is
> new about the fact that you had to produce full
> disclosure?

This requirement is actually much more stringent when it must be
proven in court: even an extreme effort to intentionally misconstrue
the meaning of the specification must fail. I felt it necessary to
make sure that no one could intentionally misconstrue my use of the
term "sparse matrix" as having anything at all to do with the
meanings that google still provides for the term [sparse matrix].

>
> All you are saying here is that you have tried to write a
> patent according to the rules for writing a patent.

You keep (intentionally?) ignoring the key
terms--->"intentionally misconstrue"

>
> It has nothing to do with sparse matrices, no matter what
> you think they mean.
>
> Consider the state transition matrix for a parser.
> Essentially the columns might
> represent terminal symbols (including classes of
> terminals) and the rows represent
> nonterminals (for an LR(1) grammar such as is accepted by
> YACC, this is how the parse
> table is represented). Note in the case of parser
> generators, they might accept an input
> specification more general than LR(1) and reduce it to
> LR(1) internally; a failure to
> reduce it to LR(1) is flagged as an error in the grammar
> and must be fixed.
>
> For a large set of states, being in a particular
> production and finding a terminal which
> does not match it is an error. We represent these by, for
> example, putting a 0 in the parse
> table indicating "not a valid state". Thus, for N a
> nonterminal and t a terminal, the
> index Nxt is a partial mapping which has (many, typically
> mostly) invalid mappings. An
> invalid mapping is a syntax error in the string being
> parsed. So we have a sparse matrix,
> one with incomplete mappings. Because of the limitations
> of representation, we encode
> those invalid mappings in a total matrix as 0 values. It
> does not change the fact that the
> mapping is partial; all it does is change the mapping so
> we are working in a total mapping
> because that is what a rectangular array representation
> requires. You MUST have a total
> mapping for a simple rectangular array, or your code
> doesn't work. Then, given, that
> total mapping, you designate some particular value (e.g.,
> 0) to mean "incomplete mapping
> indicator" and as a metaconcept imposed on the total
> mapping you use that value to
> indicate the partial mapping. Now, if you want to
> properly represent a sparse array in
> minimal storage, you encode the detection of invalid state
> in other ways, so that you end
> up with a more compact implementation of the partial
> mapping. That's what "comb vectors"
> were all about: a new encoding of partial matrices.
> Because the machine does not have
> any way to encode a partial mapping on a conventional
> [i,j]=>value map, we need to
> implement an [i,j]=>undefined mapping on top of the
> rectangular mapping. A numerical
> analyst might say the values are "mostly 0" but that is an
> interpretation of a sparse
> matrix according to one encoding in one problem domain.
> Note that in a system in which
> all values are valid, any encoding [i,j]=>value should
> produce a valid value, so strictly
> speaking in a numerical analysis problem there is no way
> to distinguish a valid 0 from an
> error 0. Or maybe it doesn't matter. In parse tables,
> the 0 value is the "invalid next
> state" value, so whether we encode it directly in the
> mapping or derive it from a more
> complex encoding doesn't matter to the upper level. Note
> that a matrix that encoded the
> value 1 most everywhere could ALSO be encoded as a "sparse
> matrix" where 1 (let's call it
> the identify mapping) is the most common value.
>
> The ideal sparse matrix representation would be a Huffman
> encoding of the entire
> matrix-as-sequential-addresses, but the mapping is now
> incredibly expensive; to determine
> the mapping of [i,j] the matrix compression must be fully
> decoded each time, but only the
> value of the [i,j] has to be kept, so as the decompression
> proceeds, all values other than
> [i,j] are discarded and you don't need tons of extra
> space. Or you can Huffman-encode the
> individual rows. Or use a 2-D mapping much like G4
> faxes use for encoding, but they are
> easy because the values are binary (black or white). So
> don't confuse an implementation
> or a representation with a concept.
>
> What was interesting about the comb-vector encoding was
> that it achieved 80% of the
> theoretical optimum encoding, had constant-time access for
> any [i,j] pair, and could be
> easily computed from the original matrix and from an mxn
> matrix had a space requirement
> which was O(m+n+f(m,n)) when f(m,n) represented the packed
> result and was always < m*n.
> I used a "first fit" aproach which was computationally
> fast and gave us between 80% and
> 90% of the predicted optimum; essentially, I allowed for
> (by simply recoding one
> subroutine) best-fit (which was O(m*n)) and optimum
> (which was O((m*n)**p) for p >= 2;
> essentially, in the extreme case it is the bin-packing
> problem, which is NP-hard). But
> the first-fit was O(m+n) in complexity, good enough for a
> production piece of code, and
> typically reduced the matrix to 10% of its original size,
> I think 6% in the best case
> (that is, using the size compression, I was able to get
> O(m + n + 0.1(m * n) == 1.1m +
> 1.1n space out of m*n). And that was Good Enough.
> joe
> ****
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm
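
For concreteness, here is a minimal sketch of the row-displacement ("comb vector")
packing described above, using a first-fit search for each row's displacement. The
table sizes, the 0-means-undefined convention, and all names are assumptions made up
for the illustration, not anyone's production code:

#include <stdio.h>

#define ROWS 6                      /* nonterminals */
#define COLS 8                      /* terminals    */
#define PACKED_MAX (2 * ROWS * COLS)

/* Dense parse table: 0 means "no valid transition" (the partial mapping). */
static int dense[ROWS][COLS];

/* Packed form: value[] holds entries, check[] records which row owns a slot,
   base[] is the displacement chosen for each row. */
static int value[PACKED_MAX];
static int check[PACKED_MAX];
static int base[ROWS];

static void pack_first_fit(void)
{
    for (int k = 0; k < PACKED_MAX; k++)
        check[k] = -1;                                   /* all slots free */

    for (int i = 0; i < ROWS; i++) {
        for (int d = 0; d + COLS <= PACKED_MAX; d++) {   /* first fit */
            int ok = 1;
            for (int j = 0; j < COLS && ok; j++)
                if (dense[i][j] != 0 && check[d + j] != -1)
                    ok = 0;                              /* collides with an occupied slot */
            if (ok) {
                base[i] = d;
                for (int j = 0; j < COLS; j++)
                    if (dense[i][j] != 0) {
                        value[d + j] = dense[i][j];
                        check[d + j] = i;                /* slot now owned by row i */
                    }
                break;
            }
        }
    }
}

/* Constant-time lookup: 0 ("undefined") unless row i really defines column j. */
static int lookup(int i, int j)
{
    int k = base[i] + j;
    return (check[k] == i) ? value[k] : 0;
}

int main(void)
{
    dense[1][3] = 42;                                    /* a few defined entries */
    dense[4][3] = 7;
    pack_first_fit();
    printf("%d %d %d\n", lookup(1, 3), lookup(4, 3), lookup(1, 4));   /* 42 7 0 */
    return 0;
}

The check[] array is what preserves the partial mapping: a slot answers for row i
only if row i actually owns it, so lookups stay constant time while the undefined
entries cost nothing beyond the per-row displacement.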


From: Joseph M. Newcomer on
See below...
On Sat, 3 Apr 2010 17:35:14 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:sncfr5llbe96pplqj5hnq751r92t2iaoal(a)4ax.com...
>> On Fri, 2 Apr 2010 14:24:01 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>It seems that you are beginning to do everything that you
>>>can to disagree with me without much of a trace left of a
>>>semblance of helpfulness.
>>
>> ***
>> Not sure where I "disagreed" with you; I certainly
>> corrected some nonsensical concepts, for
>> example, that such a thing as a "binary file with fixed
>> length records" is something that
>> linux can have (it can't; any notion of being a binary
>> file with fixed-length records is
>> not a property of linux but of the application) and that
>> windows can't (both have
>> stream-oriented file systems and are nearly identical). I
>> also corrected some erroneous
>> ideas (such that "binary files" are meaningfully more
>> efficient than "text files" in any
>> context). The citation was to prove that I actually
>> understand something about the
>
>You are flat out wrong here because of your false
>assumptions. Out of every way that this can be done there
>are possible scenarios in which the difference between pure
>binary data and text data makes a very substantial
>difference in performance.
****
Try explaining something I didn't already deeply understand in 1978. That's when we wrote
the IDL binary reader/writer because the performance of the text reader/writer was
unacceptable. And what made it work was the ability to map a file instantly into the
address space. It also meant we did not have to call the storage allocator once per node,
which was the real performance bottleneck, not the text-to-binary conversion. I know,
because I MEASURED it, as opposed to using a tarot deck to decide where the performance
problem was!
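
(For illustration only: on Linux the same file-mapping effect can be sketched with
mmap(2); the file name and the bare-bones error handling below are assumptions for
the example, not the IDL reader itself.)

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "nodes.bin";              /* hypothetical binary node file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file at once; no per-node read() calls, no per-node allocation. */
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* The nodes are now addressable directly through p... */
    printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}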

Note that I said "meaningful". By the time you have dealt with the overhead of obtaining
a record from the database, the cost of doing the text-to-binary conversion is such a tiny
percentage (tenths of hundredths of a percent) of the total time it is not meaningful.
This is why a large number of popular DBMS systems don't store integers or decimal numbers
in binary (but as fixed-point text!). I used these systems, I KNOW how they stored
numbers!

So I'm wrong because I already know how real systems work? Or I'm wrong because I
disagree with your preconceived notion of how real systems work?
*****
>
>I may make many huge mistakes because of my current lack of
>knowledge and experience in this area. You continue to make
>mistakes based on false assumptions.
****
You do make huge mistakes. But you won't listen to people who tell you that you are
making huge mistakes.

What "false assumption" did I make? You can't just wave your hands and say "false
assumption". As I said, what made the IDL text reader (and most XML text file readers)
slow was the storage allocator, not the fact that numbers were represented as text. You
assume, for some reason, that this must be the source of performance problems. Those of
us who have built real systems know this is rarely, if ever, the problem. Measure the
atoi or atof functions, and figure out how many thousands of numbers you have to convert
from text to binary before you have made an impact on your total performance! The way you
do this is measure the number of calls to atoi/atof (_ttoi/_ttof), compute the total time
spent, then subtract this from the total time spent in the program. Compare the two
times. Only in very rare circumstances will it make a difference that is not lost in the
"noise" (by "noise", I mean that any meaningful performance measure involves running the
experiment many, many times, 30 to 50, and computing the mean and standard deviation of
the results, to determine the expected value and its tolerance, e.g., 411±2 seconds. If
your numeric conversion is under 2 seconds, it is essentially lost in the noise. If it is
under 4 seconds, it contributes < 1% to the total time, and half of that is suspect. DO
it. Real performance numbers mean seriously interesting changes in design...for example,
when I use XML databases, I never convert integers to binary; in fact, I usually just
continue to do property lookup in the nodes rather than build structs. The code is more
robust, and doesn't run significantly slower. I know this, I've measured it).
*****
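
A crude sketch of that kind of measurement, in C; the record count, the stand-in
conversion, and the timing granularity are assumptions for illustration (and the
per-call timer overhead is itself something a real measurement would account for):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double t_start = now_sec();
    double t_convert = 0.0;
    long long sum = 0;

    for (int i = 0; i < 1000000; i++) {
        char text[32];
        snprintf(text, sizeof text, "%d", i);    /* stand-in for a field read from a text record */

        double t0 = now_sec();
        sum += atoi(text);                       /* the text-to-binary conversion being measured */
        t_convert += now_sec() - t0;

        /* ...the rest of the per-record work (parsing, allocation, I/O) would go here... */
    }

    double t_total = now_sec() - t_start;
    printf("total %.3f s, conversion %.3f s (%.1f%%), sum %lld\n",
           t_total, t_convert, 100.0 * t_convert / t_total, sum);
    return 0;
}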
>
>> difference between text and binary I/O (the paper was
>> published in 1987, capturing work I
>> had done in 1979-1980, and the implementation was done by
>> David Dill [of Verified Voting
>> fame] and myself.
>>
>> Oh, the way we got REAL performance was to avoid the
>> equivalent of ReadFile and simply did
>> the equivalent of mapping the file into memory in that
>> operating system.
>>
>> I would think that learning to use the technical language
>> correctly so you don't sound
>> like a complete fool would be "helpful".
>>
>> I also corrected a misconception that having pwrite be
>> "atomic" would guarantee the
>> integrity of a database; in fact, it does not and cannot
>> in and of itself make such a
>> guarantee, and belief that it would could certainly result
>> in a system that was prone to
>> catastrophic failure; so how is telling you not to believe an
>> erroneous concept not "helpful"?
>
>In the case where pread() and pwrite() must work together to
>provide a single atomic operation, such as the case of
>deducting ten cents from a customer's account balance you
>are correct. A simpler way to provide this atomicity for my
>purposes might be to only have a single thread that updates
>the client's balances. In this case there is no need of
>contriving a record lock on a system that lacks this
>inherent capability.
****
Actually, they do not provide a single atomic operation, and this should be obvious. The
CORRECT way to provide the atomicity is use a transacted database. Nothing else is going
to be reliable, unless, of course, you re-implement transacted databases on your own.

If you think otherwise, you do not have a clue about what a transacted database does, why
they are important, and why there are no alternatives!
****
>
>In other less complex cases such as updating a status flag
>from [InProcessing] to [Completed] it would seem that the
>single guarantee that the pwrite() operation itself is
>atomic would provide sufficient atomicity for the current
>design (only one writer per record).
****
pwrite() does not guarantee an atomic transaction. Your failure to comprehend this after
I have explained it several times is why I keep being forced to apply the term "stupid".
THERE IS NO GUARANTEE! You did not find one in the documentation, and you will not find
one in the implementation. You have confused a concept (atomic modification to a file)
with a much more important concept (a transacted database that DOES NOT LOSE INFORMATION AS
A CONSEQUENCE OF ANY ABNORMAL TERMINATION OF THE APP OR SYSTEM!)

Please explain how pwrite guarantees transactional integrity. You obviously have some
information which is not in the documentation and cannot be derived by reading the linux
source code.

Consider the following

Let f be a file handle
Let f be an empty file when this code starts

char buf = '*';
for (int i = 0; i < SOME_SIZE; i++)
    pwrite(f, &buf, 1, i);

Now, if the system fails for ANY REASON in the loop, you will, upon restart, find in the
file represented by handle f:

(1) SOME_SIZE instances of '*'
(2) 0 <= i <= SOME_SIZE instances of '*', depending on the exact value of i when the crash
occurred
(3) 0 instances of '*'
(4) Some indeterminate number of instances, m, of '*', 0 <= m <= i where i is the value
achieved when the system crashed, where m is unrelated to the value of i when the system
crashed except that it must be less than or equal to it

Hint: the correct answer is 4.

Question: suppose the loop completes, and you are well into some other, much later,
computation when the crash occurs.

Hint: the correct answer is 4.

Now, consider the following, where I hypothesize the existence of two primitives:
BEGIN_TRANSACTION starts a transaction on the file handle
END_TRANSACTION ends the transaction and commits the changes to the file

BEGIN_TRANSACTION(f);                   // [1]
for (int i = 0; i < SOME_SIZE; i++)     // [2]
    pwrite(...as above...);             // [3]
END_TRANSACTION(f);                     // [4]

If there is an abnormal termination before line [4] executes completely, the number of
instances of * found will be
(choose one of the above answers)

Hint: the correct answer is 3

In fact, there are points in the implementation of line 4 where either the transaction has
not been committed, and the correct answer is 3, or the transaction is committed and the
correct answer is 1, but there is NEVER any point in the code where any other answer is
possible! And that's why transacted systems are HARD to write!

Now consider the following:

for (int i = 0; i < SOME_SIZE; i++)     // [1]
{
    BEGIN_TRANSACTION(f);               // [2]
    pwrite(...as above...);             // [3]
    END_TRANSACTION(f);                 // [4]
}                                       // [5]

Choose one of the answers.

Hint: the correct answer is 2, and the exact value depends on whether or not the crash
took place on lines 1, 2, 3, 4, or 5. If it took place on any line but 5, then the number
of *s will be i - 1. If it took place on line 5, the number of *s will be i.

If you do not understand the reasons why wrapping the file operation in a hypothetical
transaction makes a difference, go back and understand what a transacted database does. In
the case of a transacted database, the "add new record" operation or "update record"
operation replaces pwrite.
****
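
For comparison, a real transacted store supplies exactly the bracketing hypothesized
above. A minimal sketch using the SQLite C API (the file name, table, and schema are
made up for the example):

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("example.db", &db) != SQLITE_OK) return 1;

    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS marks(i INTEGER, c TEXT);", NULL, NULL, &err);

    /* Everything between BEGIN and COMMIT is applied atomically: after a crash,
       the file shows either none of these rows or all of them, never a prefix. */
    sqlite3_exec(db, "BEGIN TRANSACTION;", NULL, NULL, &err);
    for (int i = 0; i < 100; i++) {
        char sql[128];
        snprintf(sql, sizeof sql, "INSERT INTO marks VALUES(%d, '*');", i);
        sqlite3_exec(db, sql, NULL, NULL, &err);
    }
    sqlite3_exec(db, "COMMIT;", NULL, NULL, &err);

    sqlite3_close(db);
    return 0;
}

This corresponds to the single BEGIN/END case above: answer 3 before the commit
completes, answer 1 after it.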
>
>>
>> If you are writing specifications which are based on
>> incorrect usage and erroneous
>> assumptions, you are likely to get an implementation that
>> conforms to the specs, but is
>> not correct. So other than "disagreeing" with your flawed
>> ideas, how have I failed to
>> help?
>>
>
>I am still at the early investigative phase of these aspects
>of this design. Many of the ideas that I am proposing are for
>the primary purpose of discerning the specific areas where
>I need to attain greater fundamental understanding. I think
>that I know the basic ideas of many of the fundamental
>principles pretty well, even though I may lack the
>appropriate terminology.
****
You are proposing implementation strategies that cannot possibly work.

And if you are proposing ridiculous ideas, don't take exception when we refute them.

And when we try to explain proper terminology to you, we are accused of focusing on
minutiae, and are accused of being negative and unhelpful. Sadly, even after being
corrected, you persist in using the same incorrect terminology, which is annoying.
joe
*****

>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:r7jkr55trmr5langp6gukohsb6d7dq9lu3(a)4ax.com...
>>In the case where pread() and pwrite() must work together
>>to
>>provide a single atomic operation, such as the case of
>>deducting ten cents from a customer's account balance you
>>are correct. A simpler way to provide this atomicity for
>>my
>>purposes might be to only have a single thread that
>>updates
>>the client's balances. In this case there is no need of
>>contriving a record lock on a system that lacks this
>>inherent capability.
> ****
> Actually, they do not provide a single atomic operation,
> and this should be obvious. The
> CORRECT way to provide the atomicity is use a transacted
> database. Nothing else is going
> to be reliable, unless, of course, you re-implement
> transacted databases on your own.

When one is referring to transactions one is referring to at
least one read operation combined with at least one write
operation forming the single atomic transaction unit. This
is useful when one may be deducting a dime from the
customer's current account balance.

There are other less inclusive forms of atomicity that can
also be relevant. If one thread needs to write to a record
in a file (specific byte offset within the file), and
another thread needs to append a record (fixed length set of
bytes) to a file it is useful to know that these operations
can complete without interfering with each other. This level
of atomicity is sufficient for updating my transaction log,
yet insufficient for updating the customer's account
balance.
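
As a sketch of what that fixed-length append might look like (the record layout and
file name are assumptions; each record is issued as one write() on an O_APPEND
descriptor, which keeps concurrent appends from landing at the same offset, but says
nothing about what survives a crash):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical fixed-length log record. */
struct log_rec {
    uint32_t client_id;
    uint32_t job_id;
    uint32_t status;
    char     pad[20];      /* keep the record a fixed 32 bytes */
};

int append_log(int fd, const struct log_rec *rec)
{
    /* One write() of the whole record; with O_APPEND the kernel positions the
       write at the current end of file.  A short write is treated as failure. */
    ssize_t n = write(fd, rec, sizeof *rec);
    return (n == (ssize_t)sizeof *rec) ? 0 : -1;
}

int main(void)
{
    int fd = open("transactions.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct log_rec rec;
    memset(&rec, 0, sizeof rec);
    rec.client_id = 42;
    rec.job_id = 7;
    rec.status = 1;        /* e.g., "InProcessing" */

    if (append_log(fd, &rec) != 0) perror("append");
    close(fd);
    return 0;
}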

>
> If you think otherwise, you do not have a clue about what
> a transacted database does, why
> they are important, and why there are no alternatives!
> ****
>>
>>In other less complex cases such as updating a status flag
>>from [InProcessing] to [Completed] it would seem that the
>>single guarantee that the pwrite() operation itself is
>>atomic would provide sufficient atomicity for the current
>>design (only one writer per record).
> ****
> pwrite() does not guarantee an atomic transaction. Your
> failure to comprehend this after

It doesn't have to, there are no transactions involved.

> I have explained it several times is why I keep being
> forced to apply the term "stupid".

Because you are not paying enough attention to what I am
saying.

> THERE IS NO GUARANTEE! You did not find one in the
> documentation, and you will not find

There is a guarantee that two pwrites() will not collide
with each other resulting in garbage. This is sufficient for
the transaction log, yet insufficient for updating the
customer's account balance.

> one in the implementation. You have confused a concept
> (atomic modification to a file)
> with a much more important concept (a transacted database
> that DOES NOT LOSE INFORMATION AS
> A CONSEQUENCE OF ANY ABNORMAL TERMINATION OF THE APP OR
> SYSTEM!)
>

Which as I have already said, is easily prevented by using
the design pattern in SQLite.

> Please explain how pwrite guarantees transactional
> integrity. You obviously have some
> information which is not in the documentation and cannot
> be derived by reading the linux
> source code.

The transaction log does not need transactional integrity.
It does need crash fault tolerance, which might be construed
as a form of transactional integrity. I could make all of
this moot by simply having a single process with a single
thread that handles all disk writes to a specific resource.
This would inherently have transaction integrity, and crash
proofing would be as easy as using the SQLite design
pattern.
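
A minimal sketch of that single-writer idea, assuming a pthread queue; the balance
store is just an in-memory array here to keep the example short, and none of this is
crash proofing by itself:

#include <pthread.h>
#include <stdio.h>

#define NUM_ACCOUNTS 16
#define QUEUE_SIZE   256

/* A request asks the writer thread to add delta_cents to one account. */
struct update { int account; int delta_cents; };

static struct update queue[QUEUE_SIZE];
static int q_head, q_tail;
static int done;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

static int balance_cents[NUM_ACCOUNTS];   /* stand-in for the on-disk balance records */

/* Any thread may enqueue; only the writer thread ever touches balance_cents. */
static void submit(int account, int delta_cents)
{
    pthread_mutex_lock(&q_lock);
    queue[q_tail % QUEUE_SIZE] = (struct update){ account, delta_cents };
    q_tail++;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

static void *writer_thread(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_head == q_tail && !done)
            pthread_cond_wait(&q_cond, &q_lock);
        if (q_head == q_tail && done) { pthread_mutex_unlock(&q_lock); break; }
        struct update u = queue[q_head % QUEUE_SIZE];
        q_head++;
        pthread_mutex_unlock(&q_lock);

        /* Read-modify-write with no interleaving: this thread is the only writer. */
        balance_cents[u.account] += u.delta_cents;
    }
    return NULL;
}

int main(void)
{
    pthread_t w;
    balance_cents[3] = 1000;                 /* $10.00 */
    pthread_create(&w, NULL, writer_thread, NULL);

    submit(3, -10);                          /* deduct ten cents */
    submit(3, -10);

    pthread_mutex_lock(&q_lock);
    done = 1;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
    pthread_join(w, NULL);

    printf("account 3 balance: %d cents\n", balance_cents[3]);   /* 980 */
    return 0;
}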

>
> Consider the following
>
> Let f be a file handle
> Let f be an empty file when this code starts
>
> char buf = '*';
> for(int i = 0; i < SOME_SIZE; i++)
> pwrite(f,&buf, 1, i);
>
> Now, if the system fails for ANY REASON in the loop, you
> will, upon restart, find in the
> file represented by handle f:
>
> (1) SOME_SIZE instances of '*'
> (2) 0 <= i <= SOME_SIZE instances of '*', depending on the
> exact value of i when the crash
> occurred
> (3) 0 instances of '*'
> (4) Some indeterminate number of instances, m, of '*', 0
> <= m <= i where i is the value
> achieved when the system crashed, where m is unrelated to
> the value of i when the system
> crashed except that it must be less than or equal to it
>
> Hint: the correct answer is 4.

Which is very easy to handle using the SQLite design
pattern.

>
> Question: suppose the loop completes, and you are well
> into some other, much later,
> computation when the crash occurs.
>
> Hint: the correct answer is 4.
>
> Now, consider the following, where I hypothesize the
> existence of two primitives:
> BEGIN_TRANSACTION starts a transaction on the file handle
> END_TRANSACTION ends the transaction and commits the
> changes to the file
>
> BEGIN_TRANSACTION(f); // [1]
> for(int i = 0; i < SOME_SIZE; i++) // [2]
>     pwrite(...as above...); // [3]
> END_TRANSACTION(f); // [4]
>
> If there is an abnormal termination before line [4]
> executes completely, the number of
> instances of * found will be
> (choose one of the above answers)
>
> Hint: the correct answer is 3

I already know what transaction processing is.

>
> In fact, there are points in the implementation of line 4
> where either the transaction has
> not been committed, and the correct answer is 3, or the
> transaction is committed and the
> correct answer is 1, but there is NEVER any point in the
> code where any other answer is
> possible! And that's why transacted systems are HARD to
> write!
>
> Now consider the following:
>
> for(int i = 0; i < SOME_SIZE; i++) // [1]
> {
> BEGIN_TRANSACTION(f); // [2]
> pwrite(...as above...); // [3]
> END_TRANSACTION(f); // [4]
> }
> // [5]
>
> Choose one of the answers.
>
> Hint: the correct answer is 2, and the exact value depends
> on whether or not the crash
> took place on lines 1, 2, 3, 4, or 5. If it took place on
> any line but 5, then the number
> of *s will be i - 1. If it took place on line 5, the
> number of *s will be i.
>
> If you do not understand the reasons why wrapping the file
> operation in a hypothetical
> transaction makes a difference, go back and understand what
> a transacted database does. In
> the case of a transacted database, the "add new record"
> operation or "update record"
> operation replaces pwrite.
> ****

That is exactly what the SQLite design pattern does, and it
allows clean restarts even after power failures.



From: Joseph M. Newcomer on
See below...
On Sat, 3 Apr 2010 18:42:52 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:mddfr59859omp27giql6333aah4217qfm1(a)4ax.com...
>> See below....
>> On Fri, 2 Apr 2010 14:44:45 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>
>>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>>message news:1lbcr5lo46hg8ha34t4rmeb5v731i7f14l(a)4ax.com...
>>>> See below...
>>>> On Fri, 2 Apr 2010 11:44:43 -0500, "Peter Olcott"
>>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>>>OK I think that I have it. If the customer account
>>>>>balances
>>>>>are stored in a file (binary data, fixed length records,
>>>>>thus easy to seek to), then we would probably need a
>>>>>record
>>>>>lock on the client's record that remains in effect
>>>>>between
>>>>>reading the current balance, and writing the updated
>>>>>balance.
>>>> ****
>>>> You are violating your rule of dropping to the physical
>>>> level without knowing that the
>>>> abstract requirements are being met.
>>>> For example, the presumption that "atomic append"
>>>> meets the requirements or that a binary file even
>>>> matters
>>>> in this discussion seem both
>>>> unjustified assumptions.
>>>
>>>Because the record lock idea would ensure one significant
>>>aspect of transactional integrity, and your mind is stuck
>>>in
>>>"refute mode" you had to square peg in round hole with
>>>hammer another aspect of what I said.
>>
>> ***
>> Well, since I *know* that Unix/linux does not have "record
>> locking" (it is what is called
>> "cooperative locking" and locks are not enforced by the
>> operating system, as they are in
>> Windows), I would not have considered record locking to be
>> a big deal, mostly because when
>> and if it is used, the DBMS uses it at a level far lower
>> than you, as a client of the
>> DBMS, will ever see. Also, record locking does NOT
>> guarantee transactional integrity,
>> because transactional integrity requires the concept of
>> positive-commitment and the
>> unix/linux file systems (and the Windows file systems) do
>> not support positive-commitment
>> protocols. The recent Windows operating system (I think
>> starting with Vista) now support
>> positive commitment, which means it no longer had to be
>> done by using unbuffered I/O in
>> the application.
>>
>> Yes, I'm stuck in "refute mode", especially, when I hear
>> nonsense. Note that record
>> locking prevents collisions on the same section of the
>> file (when it is used, which
>> unix/linux do NOT guarantee or enforce), they do NOT
>> guarantee those updates are
>
>I already knew that, but already knowing that, I also knew
>that it is not really all that difficult to derive these
>things from scratch. As it turns out many relational DBMS
>already have done this work for me. Also another very simple
>solution is to simply serialize all transactions to a single
>thread.
****
Serializing these operations to a single thread does not guarantee transactional integrity.
See my previous discussion of the use of a hypothetical BEGIN_TRANSACTION primitive.

If you believe a single thread guarantees transactional integrity, you are wrong.
joe
****
>
>> transactional updates. Do you really understand the
>> difference? Just because a byte
>> range is locked does NOT guarantee transactional integrity
>> on updates. The range is NOT
>> committed when it is unlocked! And pwrite does NOT
>> guarantee, with or without locks, that
>> the data is committed to the file! It only guarantees
>> that the data is written in the
>> correct place in the file on append.
>
>The somewhat more difficult (although still not all that
>difficult) job would be to guarantee transactional integrity
>even if someone pulls the power cord from the wall in the
>middle of the transaction. SQLite explains exactly how they
>do this, its not really all that hard.
*****
Actually, at the detail level, it IS hard. The assumption is that for some pair of
consecutive instructions [I1][I2] that at [I1] the database fails to meet the integrity
requirements and is therefore invalid, and at [I2], it meets the requirements. You must
prove this for every pair of instructions in the DBMS and in the kernel. Furthermore,
there is a strong assumption there is a predicate P(f) which when applied to a file (f) is
TRUE if the integrity of the file is consistent and FALSE if the integrity is compromised,
and that upon restart, P can effectively be applied to the file, and there is a recovery
mechanism (re-commit, rollback, etc.) such that upon completion of that operation P(f) is
TRUE.

Fortunately, there are very few places where you actually have to examine instruction
pairs. The first part of the exercise is discovering where those places are. The next
part of the exercise is actually reading the code. Outside a few critical places, it
doesn't really matter.

I've had friends read generated kernel assembly code to prove transactional integrity in
file systems, and as a consequence of reading this code, find bugs in the kernel that
resulted in P(f) returning TRUE when it should have returned FALSE. Note that it is
ALWAYS valid to have P(f) return FALSE when in fact P(f) is TRUE; it results in a recovery
on the empty set, which can be optimized to do nothing at all. So when doing the commits,
the order in which things happen is utterly essential (we called it "the transactional
dance"; I was not the implementor of the kernel object manager, where this happened, but
Fred Pollack, now retired from Intel, was, and he spent a very long time making sure that
the commits happened in EXACTLY the right order! Of course, he also controlled the disk,
and knew when objects were committed to the magnetic media...the drives had no onboard
caches). Fred went on to be the instruction set architect of the P2 chip series and many
others, and eventually an Intel Fellow. For more detail see
http://seekingalpha.com/author/fred-pollack. I believe his reference to chief architect
of an object-oriented operating system was the Hydra system I worked on at CMU. Fred
guaranteed object integrity of the file system. He is one of several people who I know
who read generated machine code to determine file system integrity, and once
hand-generated code to deal with a compiler bug that could compromise the file system).

The explanations of how database systems manage transaction implementations are
necessarily simplified to gloss over the really ugly details required to make them work
correctly. Makes for nice explanations and is usually far removed from the implementation
details. I know this because I've watched people worry about the implementation details.
joe
****
>
>>
>> You are constantly working with fundamental misconceptions
>> of what is going on in the
>> operating system, and complain when we tell you that you
>> are clueless. Alas, this may
>
>No it just seems that way from my short-hand way of talking.
>I make it a specific point to make sure that I never learn
>anything by memorization, only by comprehension. "Correct"
>terminology can only be learned by memorization. Words are
>merely labels that are attached to concepts. Having the idea
>right but the words wrong is far better than the converse.
>
>> satisfy you to tell us we are yelling at you, but it does
>> not solve your cluelessness.
>> joe
>
>I definitely do have some cluelessness, but, far less than
>you are perceiving.
>
>> ****
>>>
>> Joseph M. Newcomer [MVP]
>> email: newcomer(a)flounder.com
>> Web: http://www.flounder.com
>> MVP Tips: http://www.flounder.com/mvp_tips.htm
>
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:gemkr5detjlu36cm7thh6n6p84g5ki4bt5(a)4ax.com...
> See below...
> On Sat, 3 Apr 2010 18:42:52 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>I already knew that, but already knowing that, I also knew
>>that it is not really all that difficult to derive these
>>things from scratch. As it turns out many relational DBMS
>>already have done this work for me. Also another very
>>simple
>>solution is to simply serialize all transactions to a
>>single
>>thread.
> ****
> Serializing these operations to a single thread does not
> guarantee transactional integrity.
> See my previous discussion of the use of a hypothetical
> BEGIN_TRANSACTION primitive.
>
> If you believe a single thread guarantees transactional
> integrity, you are wrong.
> joe

It does guarantee that:
(1) Read balance
(2) Deduct 10 cents from balance
(3) Write updated balance
don't have any intervening updates between steps.
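
With a fixed-length record file, those three steps might look like the sketch below
(record layout and file name are assumptions). Run from exactly one thread there is no
interleaving between (1) and (3); it still says nothing, by itself, about what is on
disk if a crash lands between them:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical fixed-length account record, addressed by account number. */
struct account { uint64_t id; int64_t balance_cents; };

int deduct(int fd, uint64_t id, int64_t cents)
{
    struct account acct;
    off_t off = (off_t)(id * sizeof acct);

    /* (1) Read the current balance. */
    if (pread(fd, &acct, sizeof acct, off) != (ssize_t)sizeof acct) return -1;

    /* (2) Deduct, e.g., ten cents. */
    if (acct.balance_cents < cents) return -1;     /* insufficient funds */
    acct.balance_cents -= cents;

    /* (3) Write the updated balance back to the same offset. */
    if (pwrite(fd, &acct, sizeof acct, off) != (ssize_t)sizeof acct) return -1;
    return 0;
}

int main(void)
{
    int fd = open("balances.dat", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }
    if (deduct(fd, 3, 10) != 0) fprintf(stderr, "deduct failed\n");
    close(fd);
    return 0;
}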

> ****
>>
>>> transactional updates. Do you really understand the
>>> difference? Just because a byte
>>> range is locked does NOT guarantee transactional
>>> integrity
>>> on updates. The range is NOT
>>> committed when it is unlocked! And pwrite does NOT
>>> guarantee, with or without locks, that
>>> the data is committed to the file! It only guarantees
>>> that the data is written in the
>>> correct place in the file on append.
>>
>>The somewhat more difficult (although still not all that
>>difficult) job would be to guarantee transactional
>>integrity
>>even if someone pulls the power cord from the wall in the
>>middle of the transaction. SQLite explains exactly how
>>they
>>do this, its not really all that hard.
> *****
> Actually, at the detail level, it IS hard. The assumption
> is that for some pair of

I don't see what is hard about using this design pattern,
and I don't see how this could fail:
http://sqlite.org/atomiccommit.html
You have a complete audit trail of every detail up to the
point of failure.
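
A drastically simplified sketch of the rollback-journal step that page describes: save
the old content, sync the journal, overwrite, sync, then delete the journal, with an
existing journal on restart meaning "roll back". The file names, record size, and
single-record scope are assumptions; SQLite's real implementation works on pages and
closes many more failure windows than this does:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define REC_SIZE 32

/* Update one fixed-length record against crashes via a rollback journal. */
int journaled_update(const char *dbfile, off_t off, const void *newrec)
{
    char oldrec[REC_SIZE];
    int db = open(dbfile, O_RDWR);
    if (db < 0) return -1;

    /* 1. Save the original content of the record in a journal file and fsync it. */
    if (pread(db, oldrec, REC_SIZE, off) != REC_SIZE) { close(db); return -1; }
    int jr = open("rollback.journal", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (jr < 0) { close(db); return -1; }
    write(jr, &off, sizeof off);
    write(jr, oldrec, REC_SIZE);
    fsync(jr);
    close(jr);

    /* 2. Overwrite the record in the database file and fsync it. */
    pwrite(db, newrec, REC_SIZE, off);
    fsync(db);
    close(db);

    /* 3. Commit point: deleting the journal makes the new content permanent.
          On restart, if rollback.journal exists, copy its contents back first. */
    unlink("rollback.journal");
    return 0;
}

int main(void)
{
    char rec[REC_SIZE];
    memset(rec, 0, sizeof rec);
    strcpy(rec, "Completed");
    return journaled_update("balances.dat", 3 * REC_SIZE, rec) == 0 ? 0 : 1;
}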

> consecutive instructions [I1][I2] that at [I1] the
> database fails to meet the integrity
> requirements and is therefore invalid, and at [I2], it
> meets the requirements. You must
> prove this for every pair of instructions in the DBMS and
> in the kernel. Furthermore,
> there is a strong assumption there is a predicate P(f)
> which when applied to a file (f) is
> TRUE if the integrity of the file is consistent and FALSE
> if the integrity is compromised,
> and that upon restart, P can effectively be applied to the
> file, and there is a recovery
> mechanism (re-commit, rollback, etc.) such that upon
> completion of that operation P(f) is
> TRUE.
>
> Fortunately, there are very few places where you actually
> have to examine instruction
> pairs. The first part of the exercise is discovering
> where those places are. The next
> part of the exercise is actually reading the code.
> Outside a few critical places, it
> doesn't really matter.
>
> I've had friends read generated kernel assembly code to
> prove transactional integrity in
> file systems, and as a consequence of reading this code,
> find bugs in the kernel that
> resulted in P(f) returning TRUE when it should have
> returned FALSE. Note that it is
> ALWAYS valid to have P(f) return FALSE when in fact P(f)
> is TRUE; it results in a recovery
> on the empty set, which can be optimized to do nothing at
> all. So when doing the commits,
> the order in which things happen is utterly essential (we
> called it "the transactional
> dance"; I was not the implementor of the kernel object
> manager, where this happened, but
> Fred Pollack, now retired from Intel, was, and he spent a
> very long time making sure that
> the commits happened in EXACTLY the right order! Of
> course, he also controlled the disk,
> and knew when objects were committed to the magnetic
> media...the drives had no onboard
> caches). Fred went on to be the instruction set architect
> of the P2 chip series and many
> others, and eventually an Intel Fellow. For more detail
> see
> http://seekingalpha.com/author/fred-pollack. I believe
> his reference to chief architect
> of an object-oriented operating system was the Hydra
> system I worked on at CMU. Fred
> guaranteed object integrity of the file system. He is one
> of several people who I know
> who read generated machine code to determine file system
> integrity, and once
> hand-generated code to deal with a compiler bug that could
> compromise the file system).
>
> The explanations of how database systems manage
> transaction implementations are
> necessarily simplified to gloss over the really ugly
> details required to make them work
> correctly. Makes for nice explanations and is usually far
> removed from the implementation
> details. I know this because I've watched people worry
> about the implementation details.
> joe
> ****