From: Erland Sommarskog on
JG (jg(a)nospam.com) writes:
> Just want to add that we found a rogue app which is using a different
> methodology to generate the ID for new record INSERTS. It's simply doing
> a SELECT MAX from the base data table, rather than using the
> transactional ID pump. Obviously, this 'naive' approach is very unsafe.

Yup. Just about all roll-your-own ID schemes are built on the assumption
that everyone goes the same route.

> With this in mind, the round-up is that we no longer have any reason to
> suspect a breach of transactionality inherent in the transactional ID
> pump. Perhaps this proc could be streamlined in a couple of ways (as per
> suggestions above), however the upshot of our analysis is that it should
> be sound as currently constructed. Once the base UPDATE (which is the
> first statement in the transaction batch) executes, then all other
> UPDATES to this particular row should block until this UPDATE either
> commits or rolls back. Thus, as long as all clients use this ID pump,
> pump executions should be serialized and every client should get a
> unique ID number. Is this not the case?

Yes, as long as the float-to-string conversion is not tripping you.
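
For anyone following along, this is a minimal sketch of the kind of
UPDATE-first ID pump being discussed; the table, column, and procedure names
are invented for illustration, not taken from the OP's actual schema:

CREATE TABLE idpump (seqname sysname NOT NULL PRIMARY KEY,
                     lastid  bigint  NOT NULL)
INSERT idpump (seqname, lastid) VALUES ('orders', 0)
go
CREATE PROCEDURE get_next_id @seqname sysname, @nextid bigint OUTPUT AS
BEGIN TRANSACTION
   -- The UPDATE is the first statement, so its exclusive lock on the counter
   -- row serializes all callers until this transaction commits or rolls back.
   UPDATE idpump
   SET    @nextid = lastid = lastid + 1
   WHERE  seqname = @seqname
COMMIT TRANSACTION
go

Contrast this with the rogue app's SELECT MAX approach, where two concurrent
clients can read the same MAX value before either of them inserts.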


--
Erland Sommarskog, SQL Server MVP, esquel(a)sommarskog.se

Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx

From: m on
Well, I don't have any trouble demonstrating the behaviour I describe using
your example; but perhaps I am doing something wrong.

Using your example, I can make it behave differently (block or not) by
changing any of the following (see the sketch after this list):
- transaction isolation (for both the implicit and explicit transactions)
- lock hints (on both the update and the select / update / insert / delete on
the other spid)
- the PK and indices of the table
- the distribution of data (causing an alternate query plan)
- the database defaults
- batch versus compiled execution of statements
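
For instance, with the first spid holding its uncommitted UPDATE from the
repro quoted below, the second spid behaves quite differently depending on
the hint and the database setting (just a sketch):

-- Blocks under the default READ COMMITTED isolation level
SELECT b FROM alfons WHERE a = 3

-- Returns immediately with the dirty, uncommitted value
SELECT b FROM alfons WITH (NOLOCK) WHERE a = 3

-- Returns immediately with the last committed value when the database has
-- READ_COMMITTED_SNAPSHOT ON (one of the "database defaults" above)
SELECT b FROM alfons WHERE a = 3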

Each of these affects (directly or indirectly) the execution pattern of the
operations and can cause them to acquire row, page, or table locks that are
variously exclusive, shared, or intent. You can observe this behaviour best
by tracing the Lock:Acquired and Lock:Released events in SQL Profiler (and
correlating the hobt_id values to object_id through the sys.partitions view),
as this will show the actual locks taken during an execution. Also, bear in
mind that a batch execution differs from a compiled execution in several key
ways, and some of those differences specifically affect locking and
transactions.
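
Something along these lines will map a hobt_id captured in the trace back to
a table and index (again just a sketch; the hobt_id value below is only a
placeholder for one taken from a Lock:Acquired event):

DECLARE @hobt_id bigint
SET @hobt_id = 72057594038845440   -- value copied from the trace

SELECT o.name AS table_name,
       i.name AS index_name        -- NULL for a heap
FROM   sys.partitions p
JOIN   sys.objects    o ON o.object_id = p.object_id
JOIN   sys.indexes    i ON i.object_id = p.object_id
                       AND i.index_id  = p.index_id
WHERE  p.hobt_id = @hobt_id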

Regardless of whether I am right, or merely belligerent, the OP has at the
very least a non-performant design that could be improved by a redesign.


"Erland Sommarskog" <esquel(a)sommarskog.se> wrote in message
news:Xns9DBE9CC939233Yazorman(a)127.0.0.1...
> m (m(a)b.c) writes:
>> That is only the case if the transaction isolation is repeatable read or
>> serializable. It is explained opaquely by Microsoft here:
>
> No. Do this as a setup:
>
> CREATE TABLE alfons (a int NOT NULL PRIMARY KEY NONCLUSTERED,
> b sysname)
> INSERT alfons(a,b)
> SELECT object_id, name FROM sys.objects
>
>
> Then do this in one query window:
>
> BEGIN TRANSACTION
>
> UPDATE alfons SET b = upper(b) WHERE a = 3
>
> Then try to modify or read alfons.b from a different query window.
>
> What you are saying is correct if the first statement of the transaction
> was a SELECT statement, but it isn't.
>
>
> --
> Erland Sommarskog, SQL Server MVP, esquel(a)sommarskog.se
>
> Links for SQL Server Books Online:
> SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
> SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
> SQL 2000:
> http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
>
From: Erland Sommarskog on
m (m(a)b.c) writes:
> Well, I don't have any trouble demonstrating the behaviour I describe
> using your example; but perhaps I am doing something wrong.
>
> Using your example, I can make it behave differently (block or not) by
> changing:
> - transaction isolation (for both the implicit and explicit transactions)
> - lock hints (on both the update and select / update / insert / delete on
> the other spid)
> - the PK and indices of the table
> - the distribution of data (cause an alternate query plan)
> - the database defaults
> - batch versus compiled execution of statements

OK, so produce a repro where a process under READ COMMITTED inside a
transaction updates a row and leaves the transaction uncommitted,
and where another process is able to modify this row anew, or is
even able to read the updated value (without using READ UNCOMMITTED).

If you are able to, it would be a very serious bug in SQL Server.
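
For reference, this is what I expect to happen with my repro, assuming the
alfons table from the earlier post and default READ COMMITTED everywhere (a
sketch, not a proof):

-- Session 2: this should sit and wait until session 1 commits or rolls back
UPDATE alfons SET b = lower(b) WHERE a = 3

-- Session 3: confirm that session 2 is blocked by session 1
SELECT session_id, blocking_session_id, wait_type, wait_resource
FROM   sys.dm_exec_requests
WHERE  blocking_session_id <> 0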

--
Erland Sommarskog, SQL Server MVP, esquel(a)sommarskog.se

Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx

From: JG on
Hi Erland,

> Yes, as long as the float-to-string conversion is not tripping you.

We're nowhere near that particular issue at this point. (And hopefully we'll
have this codebase corrected long before we get there!) We'd only begin to
encounter it if our whole-number values were to exceed the largest value that
can be expressed by the significand portion of the floating-point format. We
don't think we'll ever get close to 15 digits, and since we're storing whole
numbers of fewer than 15 digits we should not run into any loss-of-precision
issues.
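
Just to put a number on it (a quick sketch, nothing to do with our actual
schema): float has a 53-bit significand, so every whole number up to 2^53
(a shade over 9.0E15) is stored exactly, and the rounding only starts beyond
that:

DECLARE @exact float, @beyond float
SET @exact  = 9007199254740992    -- 2^53, still represented exactly
SET @beyond = 9007199254740993    -- 2^53 + 1, has no exact float form

SELECT CAST(@exact AS bigint)  AS exact_value,
       CAST(@beyond AS bigint) AS rounded_value
-- Both columns come back as 9007199254740992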

I did note your comment above about 'worst practices' with a bit of a wry
grin ;-). We're working with an inherited codebase; the guilty parties have
long since abandoned us to pursue their sadistic software development
tendencies elsewhere...

;-)

JG

"Erland Sommarskog" <esquel(a)sommarskog.se> wrote in message
news:Xns9DBE9D0F4A060Yazorman(a)127.0.0.1...
> JG (jg(a)nospam.com) writes:
>> Just want to add that we found a rogue app which is using a different
>> methodology to generate the ID for new record INSERTS. It's simply doing
>> a SELECT MAX from the base data table, rather than using the
>> transactional ID pump. Obviously, this 'naive' approach is very unsafe.
>
> Yup. Just about all roll-your-own ID schemes are built on the assumption
> that everyone goes the same route.
>
>> With this in mind, the round-up is that we no longer have any reason to
>> suspect a breach of transactionality inherent in the transactional ID
>> pump. Perhaps this proc could be streamlined in a couple of ways (as per
>> suggestions above), however the upshot of our analysis is that it should
>> be sound as currently constructed. Once the base UPDATE (which is the
>> first statement in the transaction batch) executes, then all other
>> UPDATES to this particular row should block until this UPDATE either
>> commits or rolls back. Thus, as long as all clients use this ID pump,
>> pump executions should be serialized and every client should get a
>> unique ID number. Is this not the case?
>
> Yes, as long as the float-to-string conversion is not tripping you.
>
>
> --
> Erland Sommarskog, SQL Server MVP, esquel(a)sommarskog.se
>
> Links for SQL Server Books Online:
> SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
> SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
> SQL 2000:
> http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
>