From: SnapDive on
Well, can you recommend a tight/foolproof/best-practice way to
do the MERGE statement in n-row sets (each with its own
transaction?) so I can isolate the bad row? At the expense of
a probable speed penalty, if I could at least know the error occurred
within 20 rows, a human could likely spot where the problem
might be.
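
For reference, the kind of batching I have in mind would look
something like this (just a sketch; the table and column names
dbo.Staging, dbo.Target, id and val are made up, and the batch size
is arbitrary):

```sql
-- Number the source rows, then MERGE one slice per transaction, so a
-- failure at least narrows the bad row down to a single batch.
DECLARE @batchsize int = 20, @first int = 1, @maxrow int;

SELECT @maxrow = COUNT(*) FROM dbo.Staging;

WHILE @first <= @maxrow
BEGIN
   BEGIN TRY
      BEGIN TRANSACTION;

      WITH numbered AS (
         SELECT id, val, row_number() OVER (ORDER BY id) AS rowno
         FROM   dbo.Staging
      )
      MERGE dbo.Target AS t
      USING (SELECT id, val FROM numbered
             WHERE  rowno BETWEEN @first AND @first + @batchsize - 1) AS s
         ON t.id = s.id
      WHEN MATCHED THEN
         UPDATE SET t.val = s.val
      WHEN NOT MATCHED THEN
         INSERT (id, val) VALUES (s.id, s.val);

      COMMIT TRANSACTION;
   END TRY
   BEGIN CATCH
      IF @@trancount > 0 ROLLBACK TRANSACTION;
      -- Report which 20-row window blew up, then move on.
      PRINT 'Batch starting at row ' + convert(varchar(10), @first) +
            ' failed: ' + error_message();
   END CATCH;

   SET @first = @first + @batchsize;
END;
```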

Thanks.



On Tue, 08 Jun 2010 23:43:59 +0200, Erland Sommarskog
<esquel(a)sommarskog.se> wrote:

>SnapDive (SnapDive(a)community.nospam) writes:
>> Since SQL Server failed the action, it has to know what and where
>> things failed.
>
>Yeah, but that does not mean that it will tell us!
>
>> I was thinking that something akin to an OUTPUT clause
>> would tell me what blew up.
>
>Alas, SQL Server is a bit pedantic, so all rows inserted so far
>in a failing INSERT statement will be rolled back. The one exception
>is with INSERT EXEC, but that is only if the called procedure produces
>several result sets.

From: Erland Sommarskog on
SnapDive (SnapDive(a)community.nospam) writes:
> Well, can you recommend a tight/foolproof/best-practice way to
> do the MERGE statement in n-row sets (each with its own
> transaction?) so I can isolate the bad row? At the expense of
> a probable speed penalty, if I could at least know the error occurred
> within 20 rows, a human could likely spot where the problem
> might be.

I know that when I faced this situation once, my source was a file
which I tried to load with BCP. I think I used a text editor to divide
the file in half, then I took the half that did not load and so on
until I found the date of 920230.

But with a mere 50000 rows, and today's hardware, I would try to write
a simple cursor which inserts the rows one by one inside TRY CATCH,
and in the CATCH block insert into an error table. One reason for
this choice is that my experience tells me that there may be more than
one bad row.
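
Something along these lines (a sketch only; dbo.Staging, dbo.Target,
dbo.ErrorRows and the id/val columns are placeholder names you would
replace with your own):

```sql
DECLARE @id int, @val varchar(50);

DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
   SELECT id, val FROM dbo.Staging;

OPEN cur;
FETCH NEXT FROM cur INTO @id, @val;
WHILE @@FETCH_STATUS = 0
BEGIN
   BEGIN TRY
      INSERT dbo.Target (id, val) VALUES (@id, @val);
   END TRY
   BEGIN CATCH
      -- Log the offending row together with the error message,
      -- then keep going; there may be more than one bad row.
      INSERT dbo.ErrorRows (id, val, errmsg)
      VALUES (@id, @val, error_message());
   END CATCH;
   FETCH NEXT FROM cur INTO @id, @val;
END;
CLOSE cur;
DEALLOCATE cur;
```

When the load completes, dbo.ErrorRows holds every row that failed
and why, rather than just the first one.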

Note that if you want to put this in a stored procedure, you can have
TRY CATCH around your MERGE, and then in the CATCH block call the
procedure that runs the cursor as a fallback.
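
That is, something like this (again a sketch; dbo.LoadTarget_RowByRow
is a hypothetical procedure wrapping the cursor above, and the MERGE
is abbreviated):

```sql
BEGIN TRY
   MERGE dbo.Target AS t
   USING dbo.Staging AS s ON t.id = s.id
   WHEN MATCHED THEN
      UPDATE SET t.val = s.val
   WHEN NOT MATCHED THEN
      INSERT (id, val) VALUES (s.id, s.val);
END TRY
BEGIN CATCH
   -- The fast set-based path failed somewhere; fall back to the
   -- row-by-row procedure to find and log the bad rows.
   EXEC dbo.LoadTarget_RowByRow;
END CATCH;
```

This way you keep the speed of the single MERGE in the normal case,
and only pay the row-by-row price when something actually fails.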


--
Erland Sommarskog, SQL Server MVP, esquel(a)sommarskog.se

Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx

From: SnapDive on
Thanks for the input (and Bob Barrows also)!

On Wed, 09 Jun 2010 23:25:42 +0200, Erland Sommarskog
<esquel(a)sommarskog.se> wrote:

>SnapDive (SnapDive(a)community.nospam) writes:
>> Well, can you recommend a tight/foolproof/best-practice way to
>> do the MERGE statement in n-row sets (each with its own
>> transaction?) so I can isolate the bad row? At the expense of
>> a probable speed penalty, if I could at least know the error occurred
>> within 20 rows, a human could likely spot where the problem
>> might be.
>
>I know that when I faced this situation once, my source was a file
>which I tried to load with BCP. I think I used a text editor to divide
>the file in half, then I took the half that did not load and so on
>until I found the date of 920230.
>
>But with a mere 50000 rows, and today's hardware, I would try to write
>a simple cursor which inserts the rows one by one in TRY CATCH, and
>in the CATCH block, I would insert into an error table. One reason for
>this choice is that my experience tells me that there may be more than
>one bad row.
>
>Note that if you want to put this in a stored procedure, you can have
>TRY CATCH around your MERGE, and then in the CATCH block call the
>procedure that runs the cursor as a fallback.