From: Dan Guzman on
> If you are doing paging you need to pass back the total number of rows of
> the query.

Why? You can also do pagination using only First, Next, Previous and Last
functionality. Pagination using keys rather than row numbers will perform
better, assuming appropriate indexing. If you want to display the total
number of rows, you can run a separate COUNT query when the first page is
displayed.
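
For example, a keyset-driven "next page" query can look something like this
(table, column, and parameter names here are made up for illustration):

/*
Assumes a unique index on dbo.Orders(OrderID). @LastOrderID holds the key of
the last row on the page currently displayed (use 0 for the first page).
*/
DECLARE @LastOrderID int = 0;

SELECT TOP (20) OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE OrderID > @LastOrderID
ORDER BY OrderID;

-- Run once, when the first page is displayed, if you want a total:
SELECT COUNT(*) AS TotalRows FROM dbo.Orders;

"Previous" is the mirror image (WHERE OrderID < @FirstOrderID ORDER BY
OrderID DESC, then reverse the rows for display), and "Last" is the DESC
query without the WHERE clause.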

--
Hope this helps.

Dan Guzman
SQL Server MVP
http://weblogs.sqlteam.com/dang/

"tshad" <t(a)dslextreme.com> wrote in message
news:OCPlq2TzKHA.1236(a)TK2MSFTNGP06.phx.gbl...
> If you are doing paging you need to pass back the total number of rows of
> the query.
>
> What is the best way to do that?
>
> What one person does is do the SELECT/INTO a temp table to get the total
> number of rows.
>
> Then he does a "SET ROWCOUNT @RowNumber", then does a SELECT * ...
>
> The problem here is that he is doing an insert into a temp table with
> 2 selects (one to insert and one to pass back the results).
>
> What I do is have 2 functions (could do it in one with a parameter) that
> have identical queries except
> 1) in one I have all the columns I want to return and in the other I have
> a SELECT using the ROW_NUMBER() function. I have to do it this way
> because @@ROWCOUNT will be equal to the number of rows from the
> "WHERE ResultSetNumber BETWEEN @StartRow and @EndRow" clause, not the
> number of rows in the total result set.
> 2) in the other I have a "SELECT Count(*)" with the rest of the query the
> same as #1.
>
> Is there a better way to do this?
>
> Thanks,
>
> Tom
>
From: tshad on
According to the article, it seems it would be hard to tell whether you are
actually getting better results. You can't really rely on duration or CPU, as
they seem to change from run to run. I can get consistent Reads results,
however. But if you can't trust those either, then it is difficult to tell
whether you are really helping anything.

As I mentioned in an earlier post, I got 5 index suggestions which seemed to
lower the reads by quite a bit, except for one, which when I removed it took
the Reads from 1696 to 1296. I assume these are all logical reads. But
according to the article I can't really trust the numbers.
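
For what it's worth, I am pulling the Reads numbers from Profiler, but the
same counts show up per query with STATISTICS IO, something like this:

USE AdventureWorks;
GO
SET STATISTICS IO ON;   -- logical/physical reads per table
SET STATISTICS TIME ON; -- CPU and elapsed time
GO
-- the query being tuned goes here, e.g.:
SELECT TerritoryID, CustomerType
FROM Sales.Customer;
GO
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;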

In the article, he goes from 800 reads to 4000 reads and the second plan was
better. I understand that the second was doing a Hash Match and 2 index
scans, while the other did 2 Index Seeks and a Key Lookup. But it did 3200
more reads to get the information, and I am not sure why that is better.

Also, he talks about a missing index option that I can't seem to find in my
2008 MS. I do get the same set of numbers in my profiler but I can't see
where he gets:
******************************************************
The "missing indexes" option in management studio suggested that I add the
following index:
/*

Missing Index Details from logical_reads.sql - AMI-PC.AdventureWorks
(DBSOPHIC\Ami (52))
The Query Processor estimates that implementing the following index could
improve the query cost by 13.1751%.
*/
/*
USE [AdventureWorks]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [Sales].[Customer] ([TerritoryID],[CustomerType])
INCLUDE ([CustomerID])
GO
*/
***********************************************************

Is this not in 2008?

He also writes:
******************************************************
It is also important to remember that the hash table was probed 31,465
times - once for each key from the SalesOrderHeader table. These probes,
which do consume resources, do not constitute a logical read and are not
available as separate counters in the SQL Trace and in STATISTICS IO.
******************************************************

Where does he get the 31,465???

Thanks,

Tom

"Plamen Ratchev" <Plamen(a)SQLStudio.com> wrote in message
news:hYGdndg2a5rr8TDWnZ2dnUVZ_gIAAAAA(a)speakeasy.net...
> COUNT() OVER() was added in SQL Server 2005, together with the ranking
> functions. The OVER clause supports the other aggregate functions too
> (SUM, MIN, MAX, AVG).
>
> If you are talking about logical reads then it may not be an issue. See
> more here:
> http://www.sql-server-performance.com/articles/per/logical_reads_p1.aspx
>
> --
> Plamen Ratchev
> http://www.SQLStudio.com


From: Plamen Ratchev on
tshad wrote:
> Also, he talks about a missing index option that I can't seem to find in my
> 2008 MS. I do get the same set of numbers in my profiler but I can't see
> where he gets:
> ******************************************************
> The "missing indexes" option in management studio suggested that I add the
> following index:
> /*
>
> Missing Index Details from logical_reads.sql - AMI-PC.AdventureWorks
> (DBSOPHIC\Ami (52))
> The Query Processor estimates that implementing the following index could
> improve the query cost by 13.1751%.
> */
> /*
> USE [AdventureWorks]
> GO
> CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
> ON [Sales].[Customer] ([TerritoryID],[CustomerType])
> INCLUDE ([CustomerID])
> GO
> */
> ***********************************************************
>
> Is this not in 2008?
>

Look at the actual execution plan in SSMS. You will see the missing index info on top of the execution plan.
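
If you prefer a query over the graphical plan, the same suggestions are also
exposed through the missing index DMVs, along these lines:

SELECT mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact DESC;

Keep in mind the DMV contents are cleared when the instance restarts.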


> It is also important to remember that the hash table was probed 31,465
> times - once for each key from the SalesOrderHeader table. These probes,
> which do consume resources, do not constitute a logical read and are not
> available as separate counters in the SQL Trace and in STATISTICS IO.
>
> Where does he get the 31,465???
>

This is the number of rows in the SalesOrderHeader table.
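
You can verify it yourself:

SELECT COUNT(*) FROM Sales.SalesOrderHeader;
-- returns 31465 in AdventureWorks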

--
Plamen Ratchev
http://www.SQLStudio.com
From: tshad on

"Plamen Ratchev" <Plamen(a)SQLStudio.com> wrote in message
news:roydnTizzYpBiC3WnZ2dnUVZ_qCdnZ2d(a)speakeasy.net...
> tshad wrote:
>> Also, he talks about a missing index option that I can't seem to find in
>> my 2008 MS. I do get the same set of numbers in my profiler but I can't
>> see where he gets:
>> ******************************************************
>> The "missing indexes" option in management studio suggested that I add
>> the following index:
>> /*
>>
>> Missing Index Details from logical_reads.sql - AMI-PC.AdventureWorks
>> (DBSOPHIC\Ami (52))
>> The Query Processor estimates that implementing the following index could
>> improve the query cost by 13.1751%.
>> */
>> /*
>> USE [AdventureWorks]
>> GO
>> CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
>> ON [Sales].[Customer] ([TerritoryID],[CustomerType])
>> INCLUDE ([CustomerID])
>> GO
>> */
>> ***********************************************************
>>
>> Is this not in 2008?
>>
>
> Look at the actual execution plan in SSMS. You will see the missing index
> info on top of the execution plan.
>
Yup, there it was.
>
>> It is also important to remember that the hash table was probed 31,465
>> times - once for each key from the SalesOrderHeader table. These probes,
>> which do consume resources, do not constitute a logical read and are not
>> available as separate counters in the SQL Trace and in STATISTICS IO.
>>
>> Where does he get the 31,465???
>>
>
> This is the number of rows in the SalesOrderHeader table.

I see.

It was confusing because he says to "remember that the hash table was probed
31,465 times", but I didn't see where that number came from.

Thanks,

tom
>
> --
> Plamen Ratchev
> http://www.SQLStudio.com


From: tshad on

"Erland Sommarskog" <esquel(a)sommarskog.se> wrote in message
news:Xns9D487348CFD46Yazorman(a)127.0.0.1...
> tshad (tfs(a)dslextreme.com) writes:
>> I had never seen the COUNT(*) OVER() before. Is it new?
>>
>> But it does add some overhead.
>>
>> I am trying to optimize a query that was doing a FULL JOIN into a temp
>> table (to get the Total Count) then does a SET ROWCOUNT followed by a
>> SELECT *... from the temporary table.
>>
>> Doing that the profiler was showing 340400 Reads.
>>
>> I got rid of the Temporary table and got the number of reads down to
>> 339110 reads (about 10,000 reads less).
>>

Actually, it was about 1,300, not 10,000 (340,400 down to 339,110 reads).

>> But when I added the COUNT(*) OVER() to the SELECT - it went up to
>> 339,900
>> (not sure why that is).
>
> I would say that 340400, 339900 and 339110 are all the same to me.
>
???
They aren't the same. Not sure what you mean here.

> And what foremost matters to performance is wallclock time.
>

Right.

But, I was just curious why adding Count(*) OVER() would add about 800
reads. I wasn't saying that it was a problem.
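
Just to be concrete, the query I am testing is shaped roughly like this
(simplified, with the ordering column picked arbitrarily):

DECLARE @StartRow int = 1, @EndRow int = 20;

WITH Paged AS
(
    SELECT CustomerID,
           TerritoryID,
           ROW_NUMBER() OVER (ORDER BY CustomerID) AS ResultSetNumber,
           COUNT(*) OVER () AS TotalRows
    FROM Sales.Customer
)
SELECT CustomerID, TerritoryID, ResultSetNumber, TotalRows
FROM Paged
WHERE ResultSetNumber BETWEEN @StartRow AND @EndRow
ORDER BY ResultSetNumber;

The COUNT(*) OVER () is evaluated against the whole result set before the
BETWEEN filter is applied, which is what gives the total row count on every
page.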

> If you are going to do paging, saving the result set in a session-keyed table
> once, and then page from that table might be the best option. Or if the
> number of rows is not excessive, just a few thousand rows, get all rows
> into the client and page there.
>
> --
> Erland Sommarskog, SQL Server MVP, esquel(a)sommarskog.se
>
> Links for SQL Server Books Online:
> SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
> SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
> SQL 2000:
> http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
>

