From: Ben Hopkins on
We have encountered the following error message when hitting a particular
trigger. This trigger hasn't been altered for nearly two years.

"Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should
be discarded."

The SQL Server in question is version 9.00.4035.00 SP3 Standard Edition
(64-bit).

I have taken a backup of the DB and restored it to my local machine, which
has SQL Server 9.00.4285.00 SP3 Standard Edition (32-bit), but the error
persists.

The exception.log files for both installs contain lines similar to the
following:
03/22/10 14:32:04 spid 55 Exception 0xc0000005 EXCEPTION_ACCESS_VIOLATION
reading address 00000000 at 0x019D89D4
03/22/10 14:34:12 spid 56 Exception 0xc0000005 EXCEPTION_ACCESS_VIOLATION
reading address 00000000 at 0x019D89D4
03/22/10 14:48:23 spid 55 Exception 0xc0000005 EXCEPTION_ACCESS_VIOLATION
reading address 00000000 at 0x019D89D4

We have also restored the same database to a SQL Express install - version
9.00.4053.00 and do not see the error at all. We then upgraded this instance
to version 9.00.4285.00 and still no error.



From: Erland Sommarskog on
Ben Hopkins (BenHopkins(a)discussions.microsoft.com) writes:

> We have encountered the following error message when hitting a particular
> trigger. This trigger hasnt been altered for nearly 2 years.
>
> "Msg 0, Level 11, State 0, Line 0
> A severe error occurred on the current command. The results, if any,
> should be discarded."

This message means that SqlClient encountered an error. Usually, when the
level is 11, the error occurred in SqlClient itself; level 20 indicates
that the server terminated the connection.

Then again, that's academic:

> The exception.log files for both installs contain lines similar to the
> following:
> 03/22/10 14:32:04 spid 55 Exception 0xc0000005 EXCEPTION_ACCESS_VIOLATION
> reading address 00000000 at 0x019D89D4
> 03/22/10 14:34:12 spid 56 Exception 0xc0000005 EXCEPTION_ACCESS_VIOLATION
> reading address 00000000 at 0x019D89D4
> 03/22/10 14:48:23 spid 55 Exception 0xc0000005 EXCEPTION_ACCESS_VIOLATION
> reading address 00000000 at 0x019D89D4

Makes it quite clear that there is a problem in SQL Server itself.

> We have also restored the same database to a SQL Express install -
> version 9.00.4053.00 and do not see the error at all. We then upgraded
> this instance to version 9.00.4285.00 and still no error.

This may be due to differences in query plans. If memory serves, Express
uses only one CPU, which means that you don't get parallelism.

This could be a clue. Run a Profiler trace with SP:StmtStarting and
SP:StmtCompleted to determine the statement that fails, and then add
OPTION (MAXDOP 1) and see if this helps.
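
For example (table and column names below are placeholders, not your
actual statement):

-- Illustration only: forcing a serial plan for a single statement.
update dbo.some_table
set some_column = null
where some_key = 42
option (maxdop 1);

If the access violation goes away with a serial plan, that points to the
parallel plan rather than to your code.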

There is also reason to run DBCC CHECKDB on the database, as this error
could indicate corruption. The reason could also be a bug in SQL Server.
In any case, there is no problem with your code as such. (Unless the
trigger invokes unsafe code like an extended stored procedure or similar.)
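
For the CHECKDB run, something like this (database name is a placeholder):

-- Check the restored copy for corruption.
dbcc checkdb ('YourDatabase') with no_infomsgs, all_errormsgs;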
--
Erland Sommarskog, SQL Server MVP, esquel(a)sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
From: Ben Hopkins on
Thanks Erland. Roughly following your advice, we have managed to eradicate
the error. We isolated the particular statement that was causing it, an
update using a subquery, and replaced it. Although this has solved the
problem for now, it does not explain why it occurred, as the same statement
has executed absolutely fine for years. We think some data change must have
altered the way the optimizer plans the query, but it certainly looks like
a SQL Server bug rather than a problem with the database.

Leaving the thread open as we would like to get to the bottom of this to be
sure that we won't have the same problem again.
From: Ben Hopkins on
The offending query was:

update job
set invoice_id = null
where invoice_id = (select invoice_id
                    from inserted
                    where update(invoice_status)
                      and invoice_status = 'X')

"Ben Hopkins" wrote:

> Thanks Erland. Roughly following your advice we have managed to eradicate
> the error. We isolated the particular statement which was causing the error,
> which was an update using a subquery, and replaced this. Although this has
> solved the problem for now it does not explain why this occurred, as the same
> statement has executed absolutely fine for years. We think that some data
> change must have changed the way the optimizer plans the query, but it
> certainly looks like a sql bug rather than a problem with the db.
>
> Leaving the thread open as we would like to get to the bottom of this to be
> sure that we won't have the same problem again.
From: Iain Sharp on

Then the problem showed up because an update or insert set more than one
invoice to status 'X'.

The sub-select therefore returned more than one row, and the scalar =
comparison couldn't cope.
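
Under normal circumstances that shows up as Msg 512 rather than an access
violation. A quick untested sketch of the failure mode (throwaway temp
table, not your schema):

create table #t (invoice_id int);
insert into #t (invoice_id) values (1);
insert into #t (invoice_id) values (2);
go
-- Fails with Msg 512: subquery returned more than 1 value.
select invoice_id
from #t
where invoice_id = (select invoice_id from #t);
go
drop table #t;

Getting an access violation instead of Msg 512 suggests the plan tripped
over the multi-row case, consistent with a SQL Server bug.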

A more robust version of the query would be:


update job
set job.invoice_id = null
from job
  inner join (select invoice_id
              from inserted
              where update(invoice_status)
                and invoice_status = 'X') as dead
    on job.invoice_id = dead.invoice_id
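
The join version handles zero, one, or several rows from inserted, so the
statement no longer depends on the subquery returning exactly one value.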

