From: Tony Rogerson on


"--CELKO--" <jcelko212(a)earthlink.net> wrote in message
news:93b0c487-b480-4f94-9a59-826e78137e29(a)q22g2000yqm.googlegroups.com...
> This is an old DB2 trick; I had not seen it done with MySQL. Since the
> data is getting pushed on a parameter stack, it should be fast in most
> SQLs. SQL Server can take 2k parameters, but DB2 takes 32k and Oracle
> takes 64K.
>
> I have an article on it I have to send off this week; I will let you
> know when it comes out.

It used to surprise me that you push best practice from other products onto
SQL Server when, in this case and in others, it is the worst thing you can
do.

Many SQL Server experts have had this discussion with you about using many
hundreds of parameters on a stored procedure instead of a table variable or
a single comma-separated value (CSV) string parameter.

Erland has done the research and has published comparisons of the different
methods.

The hundreds-of-parameters approach suffers from a number of problems...

1.  It breaks 1NF: the parameters are repeating groups and the domain
being modelled is the same for every one of them. Granted, the same is
true of a CSV string, which is why, by definition, you must use a TABLE
valued parameter to fulfil the relational theory requirements.

2.  It's un-maintainable - the majority of developers working against SQL
Server use Management Studio, and Management Studio has no macro language;
to handle hundreds of parameters some automated code generator has to be
used to keep typos and errors from creeping in, which would require the
developer to learn and buy yet another language.

3.  Have you ever tried to read and debug a stored procedure with a few
hundred parameters? One such example is the Sudoku proc, which, whilst it
does what it says on the tin, is un-maintainable.

4.  Domain protection is NOT done. For instance, in the Sudoku proc the
data type does not prevent you from passing a number outside the range
0 - 9; to enforce that you need either a load of IF statements or to
insert all the values into a table, which defeats the point of using
parameters in the first place - you should have just used a table valued
parameter (sketched after this list).
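
For reference, here is a minimal sketch of the table valued parameter shape
I mean (SQL Server 2008 syntax; the type, procedure and column names are
only illustrative, not taken from any existing code). The CHECK constraints
on the table type give you the point 4 domain protection declaratively, and
the whole grid travels as a single, 1NF-respecting parameter:

    -- Illustrative names only; table types need SQL Server 2008 or later.
    CREATE TYPE dbo.SudokuCellList AS TABLE
    (
        row_no   tinyint  NOT NULL CHECK (row_no   BETWEEN 1 AND 9),
        col_no   tinyint  NOT NULL CHECK (col_no   BETWEEN 1 AND 9),
        cell_val smallint NOT NULL CHECK (cell_val BETWEEN 0 AND 9),
        PRIMARY KEY (row_no, col_no)
    );
    GO

    CREATE PROCEDURE dbo.SolveSudoku
        @cells dbo.SudokuCellList READONLY   -- one parameter, not hundreds
    AS
    BEGIN
        SET NOCOUNT ON;
        -- the real solver logic works against @cells as an ordinary table
        SELECT row_no, col_no, cell_val FROM @cells;
    END;
    GO

    -- Calling it from T-SQL: declare the variable, fill it, pass it.
    DECLARE @g dbo.SudokuCellList;
    INSERT INTO @g (row_no, col_no, cell_val) VALUES (1, 1, 5), (1, 2, 3);
    EXEC dbo.SolveSudoku @cells = @g;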

I do look forward to seeing your article and will also critically point out
its failings if you propose to use the many parameters method.

--ROGGIE--


From: Erland Sommarskog on
--CELKO-- (jcelko212(a)earthlink.net) writes:
> This is an old DB2 trick; I had not seen it done with MySQL. Since the
> data is getting pushed on a parameter stack, it should be fast in most
> SQLs. SQL Server can take 2k parameters, but DB2 takes 32k and Oracle
> takes 64K.

I don't know about the other products, but when I tested various methods
for searching for a list of values, the idea of sending many parameters
fared very poorly.

As long as I measured performance server-side only, the test procedure
behaved very well, and even competed for the top. But when I measured the
client-side time and thus added the call overhead, the picture was
completely different. Passing 2000 parameters took 500 ms - then it doesn't
matter if the procedure itself completes in 20 ms.


You can find the data at
http://www.sommarskog.se/arrays-in-sql-perftest-2009.html
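
For anyone who has not read the article: one of the alternatives it compares
is sending the whole list as a single delimited string and unpacking it on
the server. A minimal sketch of that idea (not the actual test code; the
table and column names in the usage comment are illustrative):

    CREATE FUNCTION dbo.SplitIntList (@list nvarchar(max))
    RETURNS @result TABLE (val int NOT NULL)
    AS
    BEGIN
        -- Walk the string, cutting out one comma-separated value at a time.
        DECLARE @pos int;
        SET @pos = CHARINDEX(',', @list);
        WHILE @pos > 0
        BEGIN
            INSERT INTO @result (val)
            VALUES (CAST(LEFT(@list, @pos - 1) AS int));
            SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
            SET @pos = CHARINDEX(',', @list);
        END;
        IF LEN(@list) > 0
            INSERT INTO @result (val) VALUES (CAST(@list AS int));
        RETURN;
    END;
    GO

    -- Usage: one call, one parameter, however many values are in the list.
    -- SELECT o.* FROM dbo.Orders AS o
    -- JOIN dbo.SplitIntList(N'11,42,4711') AS l ON l.val = o.OrderID;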

--
Erland Sommarskog, SQL Server MVP, esquel(a)sommarskog.se

Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx

From: --CELKO-- on
>> It breaks 1NF: the parameters are repeating groups and the domain being modelled is the same for every one of them. Granted, the same is true of a CSV string, which is why, by definition, you must use a TABLE valued parameter to fulfil the relational theory requirements. <<

Parameter lists are not tables. I happen to like table valued
parameters and think they have some advantages. I would prefer that
they be declared in the parameter list instead of outside of it, but
that is probably coming.

>> It's UN-maintainable - the majority of developers working against SQL Server use Management Studio, and Management Studio has no macro language; to handle hundreds of parameters some automated code generator has to be used to keep typos and errors from creeping in, which would require the developer to learn and buy yet another language. <<

Why not use the macro language in your C compiler? Or your word
processor? I need to move some of my old WORD stuff over to Open
Office soon, which is free.

I find a lot of people dislike Management Studio for that and other
reasons. Pretty printers, macros, source code control and a lot of
other features are missing. They tend toward Toad and DB-Viz or other
tools instead. The argument that Management Studio makes code
UN-maintainable is not a very good one. No law requires you to use it
exclusively.

>> Have you ever tried to read and debug a stored procedure with a few hundred parameters? One such example is the Sudoku procedure, which, whilst it does what it says on the tin, is UN-maintainable. <<

Yes, and I have no problem; I use a regular expression to do any edits
to the set of parameters rather than one parameter at a time. The code
is so simple and obvious that most of the time you never really look at
it again. But if you do, it is to extend the pattern rather than change
the processing -- this is just a parameter list.

>> Domain protection is NOT done. For instance, in the Sudoku procedure the data type does not prevent you from passing a number outside the range 0 - 9; to enforce that you need either a load of IF statements or to insert all the values into a table, which defeats the point of using parameters in the first place - you should have just used a table valued parameter. <<

Gee, I used the equivalent of this for my macro. Why use a procedural
IF when I have declarative CHECK()?

FOR %1 IN [0-9] DO
   FOR %2 IN [0-9] DO
      "cell_%1%2 SMALLINT NOT NULL CHECK (cell_%1%2 BETWEEN 0 AND 9)"

Then there were similar ones for the column, row and square checking
rules.
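
To make that concrete: expanded, the generated text is ordinary column DDL,
something along these lines inside a table (or table type) definition (only
the first two cells shown; the table name is purely illustrative):

    CREATE TABLE dbo.SudokuGrid
    (
        cell_00 SMALLINT NOT NULL CHECK (cell_00 BETWEEN 0 AND 9),
        cell_01 SMALLINT NOT NULL CHECK (cell_01 BETWEEN 0 AND 9)
        -- ... one such column per cell_%1%2 combination, plus the row,
        -- column and square constraints mentioned above.
    );
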
From: --CELKO-- on
>> As long as I measured performance server-side only, the test procedure behaved very well, and even competed for the top. But when I measured the client-side time and thus added the call overhead, the picture was completely different. Passing 2000 parameters took 500 ms - then it doesn't matter if the procedure itself completes in 20 ms. <<

I saw that and I cannot figure out what Microsoft is doing in the
client. I expect the database to do very well, since we are using a
calling stack in most SQLs -- load right into main storage, etc. But
I do not know what the client could be doing. Let's ask Kalen ...
From: Tony Rogerson on


"--CELKO--" <jcelko212(a)earthlink.net> wrote in message
news:d6279d9a-7cfd-43d8-b3b3-8a69a7115db5(a)c10g2000yqi.googlegroups.com...
>>> It breaks 1NF: the parameters are repeating groups and the domain
>>> being modelled is the same for every one of them. Granted, the same is
>>> true of a CSV string, which is why, by definition, you must use a
>>> TABLE valued parameter to fulfil the relational theory requirements. <<
>
> Parameter lists are not tables. I happen to like table valued
> parameters and think they have some advantages. I would prefer that
> they be declared in the parameter list instead of outside of it, but
> that is probably coming.
>

I knew you'd gloss over your violation of 1NF; you are breaking the set of
values (from the app) up into parameters just to force it into the
procedure - very poor.

>>> It's UN-maintainable - the majority of developers working against SQL
>>> Server use Management Studio, and Management Studio has no macro
>>> language; to handle hundreds of parameters some automated code
>>> generator has to be used to keep typos and errors from creeping in,
>>> which would require the developer to learn and buy yet another
>>> language. <<
>
> Why not use the macro language in your C compiler? Or your word
> processor? I need to move some of my old WORD stuff over to Open
> Office soon, which is free.
>
> I find a lot of people dislike Management Studio for that and other
> reasons. Pretty printers, macros, source code control and a lot of
> other features are missing. They tend toward Toad and DB-Viz or other
> tools instead. The argument that Management Studio makes code
> UN-maintainable is not a very good one. No law requires you to use it
> exclusively.
>

In order to write SQL you now need a C compiler? Laughable.

More people like SSMS than dislike it - if there were such a movement
disliking SSMS then there would be a significant user base of TOAD in the
SQL Server space - there isn't.

>>> Have you ever tried to read and debug a stored procedure with a few
>>> hundred parameters? One such example is the Sudoku procedure, which,
>>> whilst it does what it says on the tin, is UN-maintainable. <<
>
> Yes, and I have no problem; I use a regular expression to do any edits
> to the set of parameters rather than one parameter at a time. The code
> is so simple and obvious that most of the time you never really look
> at it again. But if you do, it is to extend the pattern rather than
> change the processing -- this is just a parameter list.
>

So, as well as having to learn a macro language, the developer now needs
to learn regular expressions.

To summarise, so far in order to do this development you would need a) a C
compiler, b) familiarity with the particular macro language that C compiler
uses (are macro languages standard? no), and c) an understanding of the
complexities and vagaries of regular expressions.

>>> Domain protection is NOT done. For instance, in the Sudoku procedure
>>> the data type does not prevent you from passing a number outside the
>>> range 0 - 9; to enforce that you need either a load of IF statements
>>> or to insert all the values into a table, which defeats the point of
>>> using parameters in the first place - you should have just used a
>>> table valued parameter. <<
>
> Gee, I used the equivalent of this for my macro. Why use a procedural
> IF when I have declarative CHECK()?
>
> FOR %1 IN [0-9] DO
>    FOR %2 IN [0-9] DO
>       "cell_%1%2 SMALLINT NOT NULL CHECK (cell_%1%2 BETWEEN 0 AND 9)"
>
> Then there were similar ones for the column, row and square checking
> rules.

Like I said - in order to get your domain checking you must insert the
variables into a table, and that means one row constructor per parameter;
with hundreds of parameters that is a significant amount of code.

All this before you've even got to the real logic of the procedure -
approximately 1,000 lines of code just to handle using individual
parameters rather than a CSV string or a table valued parameter.
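
To put the two shapes side by side (all names illustrative, SQL Server 2008
syntax):

    -- Shape A: scalar parameters have to be staged into a table before any
    -- set-based domain check or join can run - one value per row
    -- constructor, repeated for every cell (only two of the hundreds shown):
    DECLARE @cell_11 smallint = 5, @cell_12 smallint = 3;     -- ...and so on
    DECLARE @staging TABLE
    (
        row_no   tinyint  NOT NULL,
        col_no   tinyint  NOT NULL,
        cell_val smallint NOT NULL CHECK (cell_val BETWEEN 0 AND 9)
    );
    INSERT INTO @staging (row_no, col_no, cell_val)
    VALUES (1, 1, @cell_11), (1, 2, @cell_12);                -- ...and so on

    -- Shape B: a table valued parameter arrives as a table already, so the
    -- procedure body starts at the real logic (see the type sketched above):
    -- CREATE PROCEDURE dbo.SolveSudoku
    --     @cells dbo.SudokuCellList READONLY AS ...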

Like I said - totally unmaintainable, and you'll not find a single
Microsoft example in the product or in Books Online that uses your
proposed method.

--ROGGIE--