From: Jack W. on
> SQL Server can indeed handle multiple concurrent transactions, but a
> presumption is that you have a proper design.

> I would suggest that you post the code you have problems with as a start.
> Most likely we will ask to see table and index definitions as well.

I'm now researching blocking issues in detail and conducting experiments to
get a handle on things (playing with isolation levels, the NOLOCK keyword,
etc.). Posting the code will take some time since I need to extricate it
from the app. Honestly though, it's not likely to help without posting the
DB itself, but the situation seems very simple on the surface. That is, I've
since discovered that if client A starts a ("ReadCommitted") transaction and
inserts a single record into a table, then client B can't SELECT from that
table until client A ends the transaction. Moreover, the data client B is
SELECTing *never* targets the data that's being inserted by client A, though
SQL Server is of course still dealing with the indexes on that table. The
indexes, however, are very straightforward on the surface, though I know
you're a better judge of that than I am :)

Note that if I set client A's transaction isolation level to
"ReadUncommitted" (it normally defaults to "ReadCommitted") and then have
client B perform its SELECT statement in its own "ReadUncommitted"
transaction, no blocking occurs. Client B shouldn't have to start a
transaction just to issue its SELECT statement, but so far it's the only way
I've been able to find to circumvent the problem. I'm still researching all
this.
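
For what it's worth, here's roughly the T-SQL equivalent of the workaround
I'm using from the client (table and column names are just placeholders for
my real schema):

-- Session A (normally runs at the default READ COMMITTED level):
BEGIN TRAN
INSERT INTO MyTable (SomeCol) VALUES (123)
-- ... transaction stays open while other work is done ...

-- Session B: the only thing I've found so far that avoids being blocked
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM MyTable WHERE SomeCol <> 123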

I'm not a SQL Server expert so you obviously know far more about the subject
than I do. I'm a very experienced developer however (25+ years on MSFT
platforms) and my instincts tell me the solution is probably simple (since
the scenario itself is very simple). I could be wrong of course, so I need to
do more research to understand things better first. If I can't resolve it
then I'll certainly post something here (short and concise), but if you have
any further advice to get me started I'd appreciate it. Maybe you can just
confirm whether this situation is even normal, i.e., if a client inserts a
single record into a table inside a "ReadCommitted" transaction, should that
actually block other clients from SELECTing different records in the same
table until the transaction is committed? If so, then why? (Is the index
locked, for instance?)

Thanks again.

From: Tibor Karaszi on
Here's some pseudo code describing your situation:

A:
BEGIN TRAN
INSERT INTO t (c1) VALUES(7)
....

B:
SELECT * FROM t WHERE c1 > 10


Now, put yourself in transaction/session B's situation. How can you possibly
know whether the row inserted by A lies within the boundaries of your SELECT
statement (WHERE c1 > 10)? Without looking at the row, you can't! That row
*might* be inside the boundaries of your WHERE clause. But if there's an
index on the c1 column, then SQL Server won't block B, since it doesn't even
need to look at the row that A inserted. That (I think) is an example of
what Erland meant by properly designed and indexed.
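
To make that concrete, here is a minimal sketch (the table and index
definitions are made up for illustration):

CREATE TABLE t (c1 int NOT NULL, c2 varchar(50) NULL)
CREATE INDEX ix_c1 ON t (c1)

A:
BEGIN TRAN
INSERT INTO t (c1, c2) VALUES(7, 'x')

B:
-- assuming the optimizer seeks on ix_c1 rather than scanning the table,
-- the seek range (c1 > 10) never touches the key of A's uncommitted row
-- (c1 = 7), so B is not blocked
SELECT * FROM t WHERE c1 > 10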

Now, there are myriads of ways to handle these situations. NOLOCK (READ
UNCOMMITTED) to do dirty reads is one. Using any of the new snapshot
isolation levels to get the prior version of the row is another.
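
For instance, a dirty read from B would look something like this (only if
you can live with possibly seeing A's uncommitted row, of course):

SELECT * FROM t WITH (NOLOCK) WHERE c1 > 10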

--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi



"Jack W." <_no_spam@_no_spam.com> wrote in message
news:#G#JqZUtKHA.3536(a)TK2MSFTNGP06.phx.gbl...
>> SQL Server can indeed handle multiple concurrent transactions, but a
>> presumption is that you have a proper design.
>
>> I would suggest that you post the code you have problem with as a start.
>> Most likely we will ask to see table and index definitions as well.
>
> I'm now researching blocking issues in detail and conducting experiments
> to get a handle on things (playing with isolation levels, NOLOCK keyword,
> etc.). Posting the code will take some time since I need to extricate it
> from the app. Honestly though, it's not likely to help without posting the
> DB itself but the situation seems very simple on the surface. That is,
> I've since discovered that if client A starts a ("ReadCommitted")
> transaction and inserts a single record into a table, then client B can't
> SELECT from that table until client A ends the transaction. Moreover, the
> data client B is SELECTing *never* targets the data that's being inserted
> by client A though SQL Server is still dealing with the indexes on that
> table of course. The indexes however are very straight-forward on the
> surface though I know you're a better judge of that than myself :)
>
> Note that if I set client A's transaction level to "ReadUncommitted"
> however (it normally defaults to "ReadCommitted"), and I then have client
> B perform its SELECT statement in its own "ReadUncommitted" transaction,
> then no blocking occurs. Client B shouldn't have to start a transaction
> just to issue its SELECT statement however but so far it's the only way
> I've been able to find to circumvent the problem. I'm still researching
> all this however.
>
> I'm not a SQL Server expert so you obviously know far more about the
> subject than myself. I'm a very experienced developer however (25+ years
> on MSFT platforms) and my instincts tell me the solution is probably
> simple (since the scenario itself is very simple). I could be wrong of
> course so I need to perform more research to understand things better
> first. If I can't resolve it then I'll certainly post something here
> (short and concise) but if you have any further advice to get me started
> then I'd appreciate it. Maybe you can just confirm whether this situation
> is even normal, i.e., if a client inserts a single record into a table
> inside a "ReadCommitted" transaction, should that actually block other
> clients from SELECTing different records in the same table until the
> transaction is committed? If so then why? (is the index locked for
> instance).
>
> Thanks again.

From: Jack W. on
> Here's some pseudo code describing your situation:
>
> A:
> BEGIN TRAN
> INSERT INTO t (c1) VALUES(7)
> ...
>
> B:
> SELECT * FROM t WHERE c1 > 10
>
>
> Now, put yourself in transaction/session B's situation. How can you
> possibly know whether the row inserted by A lies within the boundaries of
> your SELECT statement (WHERE c1 > 10)? Without looking at the row. You
> can't! That row *might* be inside the boundaries of your WHERE clause. But
> if there's an index on the c1 column, then SQL Server won't block B since
> it doesn't need to even look at the row that A inserted. That (I think) is
> an example of what Erland meant by properly designed and indexed.

That's very helpful, thank you. I'll start looking at my indexes in
particular. B's queries never target the data that A is inserting which is
why I was puzzled about the blocking. Moreover, I was originally operating
under the assumption that B couldn't even see this newly inserted data until
the transaction is committed. That's what would normally make sense for
most users IMO (by default). In your above example for instance, I wouldn't
expect that 7 could be seen by client B until A commits. The SELECT
statement should therefore proceed without issue (oblivious to the pending
insertion of 7). Apparently this assumption is incorrect unless I muck
around with isolation levels, NOLOCK, etc. In any case, you've now confirmed
that no blocking should occur given suitable indexes (for my particular data
anyway). I'll run with this and see if I can solve the problem.


> Now, there are myriads of ways to handle these situations. NOLOCK (READ
> UNCOMMITTED) to do dirty reads is one. Using any of the new snapshot
> isolation levels to get the prior version of the row is another.

I've since discovered that NOLOCK does eliminate the problem but I'd rather
not use that. Newly inserted data should remain hidden from clients until
the transaction is committed. I'll focus on the indexes and do more research
in general to see if I can prevent the blocking in the first place (without
using any extraneous keywords). Will follow up here if I can't. Thanks again
for your help!

From: Uri Dimant on
Tibor,
I think that if you have more than one column in table t and those columns
are not part of the index, you will be blocked anyway.

The query below won't be blocked:

B:
SELECT c1 FROM t WHERE c1 > 10

Just my two cents
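
If more columns than c1 are needed in the SELECT, a covering index is one
option (just a sketch; the index name and the c2 column are made up, and SQL
2005 or later is assumed for INCLUDE):

CREATE INDEX ix_c1_covering ON t (c1) INCLUDE (c2)

-- assuming the optimizer seeks on this index, the query is answered from
-- the index alone and never touches the base table row that A inserted
SELECT c1, c2 FROM t WHERE c1 > 10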




"Tibor Karaszi" <tibor_please.no.email_karaszi(a)hotmail.nomail.com> wrote in
message news:u3DQEqVtKHA.5036(a)TK2MSFTNGP02.phx.gbl...
> Here's some pseudo code describing your situation:
>
> A:
> BEGIN TRAN
> INSERT INTO t (c1) VALUES(7)
> ...
>
> B:
> SELECT * FROM t WHERE c1 > 10
>
>
> Now, put yourself in transaction/session B's situation. How can you
> possibly know whether the row inserted by A lies within the boundaries of
> your SELECT statement (WHERE c1 > 10)? Without looking at the row. You
> can't! That row *might* be inside the boundaries of your WHERE clause. But
> if there's an index on the c1 column, then SQL Server won't block B since
> it doesn't need to even look at the row that A inserted. That (I think) is
> an example of what Erland meant by properly designed and indexed.
>
> Now, there are myriads of ways to handle these situations. NOLOCK (READ
> ONCIMMITTED) to do dirty reads is one. Using any of the new snapshot
> isolation levels to get prior version of the row is another.
>
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://sqlblog.com/blogs/tibor_karaszi
>
>
>
> "Jack W." <_no_spam@_no_spam.com> wrote in message
> news:#G#JqZUtKHA.3536(a)TK2MSFTNGP06.phx.gbl...
>>> SQL Server can indeed handle multiple concurrent transactions, but a
>>> presumption is that you have a proper design.
>>
>>> I would suggest that you post the code you have problem with as a start.
>>> Most likely we will ask to see table and index definitions as well.
>>
>> I'm now researching blocking issues in detail and conducting experiments
>> to get a handle on things (playing with isolation levels, NOLOCK keyword,
>> etc.). Posting the code will take some time since I need to extricate it
>> from the app. Honestly though, it's not likely to help without posting
>> the DB itself but the situation seems very simple on the surface. That
>> is, I've since discovered that if client A starts a ("ReadCommitted")
>> transaction and inserts a single record into a table, then client B can't
>> SELECT from that table until client A ends the transaction. Moreover, the
>> data client B is SELECTing *never* targets the data that's being inserted
>> by client A though SQL Server is still dealing with the indexes on that
>> table of course. The indexes however are very straight-forward on the
>> surface though I know you're a better judge of that than myself :)
>>
>> Note that if I set client A's transaction level to "ReadUncommitted"
>> however (it normally defaults to "ReadCommitted"), and I then have client
>> B perform its SELECT statement in its own "ReadUncommitted" transaction,
>> then no blocking occurs. Client B shouldn't have to start a transaction
>> just to issue its SELECT statement however but so far it's the only way
>> I've been able to find to circumvent the problem. I'm still researching
>> all this however.
>>
>> I'm not a SQL Server expert so you obviously know far more about the
>> subject than myself. I'm a very experienced developer however (25+ years
>> on MSFT platforms) and my instincts tell me the solution is probably
>> simple (since the scenario itself is very simple). I could be wrong of
>> course so I need to perform more research to understand things better
>> first. If I can't resolve it then I'll certainly post something here
>> (short and concise) but if you have any further advice to get me started
>> then I'd appreciate it. Maybe you can just confirm whether this situation
>> is even normal, i.e., if a client inserts a single record into a table
>> inside a "ReadCommitted" transaction, should that actually block other
>> clients from SELECTing different records in the same table until the
>> transaction is committed? If so then why? (is the index locked for
>> instance).
>>
>> Thanks again.
>


From: Erland Sommarskog on
Jack W. (_no_spam@_no_spam.com) writes:
> That's very helpful, thank you. I'll start looking at my indexes in
> particular. B's queries never target the data that A is inserting which
> is why I was puzzled about the blocking. Moreover, I was originally
> operating under the assumption that B couldn't even see this newly
> inserted data until the transaction is committed. That's what would
> normally makes sense for most users IMO (by default).

Well, that depends on which DB engine you are using. I believe what you
say is true for Oracle.

From your original post, I got the impression that you only had insertions,
but I was very tired last night, so I might have misunderstood.

Anyway, say that one client inserts a row with a value of col = 45 and leaves
the transaction uncommitted. Another client now tries to read rows where
col = 99. If there is an index on col, this should work fine. If there is
not, SQL Server has to scan the table, and will be blocked on the
uncommitted row.
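
In T-SQL terms (names made up), the two cases look like this:

-- Session 1:
BEGIN TRAN
INSERT INTO tbl (col) VALUES (45)
-- transaction left open

-- Session 2:
SELECT * FROM tbl WHERE col = 99
-- with no index on col this is a table scan, which stops at the locked,
-- uncommitted row; with an index on col (CREATE INDEX ix_col ON tbl (col))
-- and a plan that seeks on it, the uncommitted row is never touched and
-- the query returns immediately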

The main reason to add indexes is not to avoid blocking, but to speed up
performance. If your table is small, you may not need the index for
performance. Then again, if you are only testing now and your production
environment will have more rows, maybe you have an issue after all.

In any case, there is a way to do this without indexes, and without NOLOCK
(or READPAST): use snapshot isolation in one of its two forms (both of which
require SQL 2005 or later).

In its strictest form, you say

SET TRANSACTION ISOLATION LEVEL SNAPSHOT

Now when you start a transaction, you will see data as it was at the point
when the transaction started. (Note that you do not need an explicit BEGIN
TRAN, as each statement is a transaction of its own.) To be able to use
this, you first need to say ALTER DATABASE db SET ALLOW_SNAPSHOT_ISOLATION ON.
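
Spelled out (with db and t as placeholder names), that is:

ALTER DATABASE db SET ALLOW_SNAPSHOT_ISOLATION ON

-- in the reading session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
-- sees the data as it was when the transaction started; not blocked by
-- writers holding uncommitted changes
SELECT * FROM t WHERE c1 > 10
COMMIT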

The other form of snapshot is a little more lax, and all that is required
is that you say:

ALTER DATABASE db SET READ_COMMITTED_SNAPSHOT ON

With this setting, READ COMMITTED is implemented using the snapshot. The
difference from true snapshot isolation is that if you start reading at time
T and at time T1 another process commits an update, and you read that row
at time T2, you will see that update. With true snapshot, you would not.
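
As a sketch (column and table names are placeholders; the comments refer to
the times T, T1 and T2 above):

ALTER DATABASE db SET READ_COMMITTED_SNAPSHOT ON

-- reading session, plain READ COMMITTED:
BEGIN TRAN
SELECT col FROM t WHERE id = 1   -- time T: last committed value
-- at time T1 another session commits an update to this row
SELECT col FROM t WHERE id = 1   -- time T2: sees the T1 update here;
                                 -- under true SNAPSHOT isolation it would
                                 -- still see the value from time T
COMMIT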



--
Erland Sommarskog, SQL Server MVP, esquel(a)sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx