From: Linchi Shea on
The SAN setup is not necessarily a problem. Spreading the I/Os over as many
spindles as possible is in general a good approach.

There are three possible causes: (1) your SQL query is not efficient
(e.g. a bad plan), (2) you are I/O throughput limited, and (3) the
performance is killed by I/O latency. Point 3 is unlikely to be as
relevant as Point 2 in this case because your query appears to be
dealing with a large data set.

I'd first check whether an inefficient parallel plan is being used.
Assuming your I/O subsystem can do ~200MB/sec, it should be able to pull
in over a terabyte of data in 2 hours (200MB/sec x 7,200 seconds is
roughly 1.4TB), unless the query plan is such that it results in smaller
I/Os or doesn't stress the I/O subsystem at all. A bad parallel plan can
do that to you.
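
If you want a quick look at what the query is actually doing while it
runs, something along these lines should work on SQL Server 2005 (the
spid 72 is taken from your sysprocesses output below; substitute your
own). It's only a sketch, but it pulls the statement text, the XML plan,
and a rough count of how many parallel tasks the session has fanned out
to:

-- Statement text and live plan for the running request (spid 72 here)
SELECT r.session_id,
       r.wait_type,
       r.wait_time,
       t.text       AS running_statement,
       p.query_plan AS plan_xml
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS p
WHERE r.session_id = 72;

-- Rough feel for the actual degree of parallelism in use
SELECT session_id, COUNT(*) AS task_count
FROM sys.dm_os_tasks
WHERE session_id = 72
GROUP BY session_id;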

You can check several perfmon counters to get a feel for whether the
I/O subsystem is saturated, in particular: Disk Read Bytes/sec, Avg.
Disk Bytes/Read, and Avg. Disk sec/Read. The first tells you how much
throughput you are getting, the second how large each read is, and the
third the read latency. To efficiently process reporting-type queries,
the reads should be relatively large; otherwise the throughput will not
be good. Ideally, Avg. Disk Bytes/Read should be larger than 64K and
Disk Read Bytes/sec should be close to 200MB/sec.
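
For reference, you can get roughly the same numbers from the DMV you are
already querying below. A rough sketch (database id 6 is taken from your
queries, NULL means all files, and the averages are cumulative since the
instance last started rather than for any one query):

-- Average read size and average read latency per data file
SELECT database_id,
       file_id,
       num_of_bytes_read / NULLIF(num_of_reads, 0) AS avg_bytes_per_read,
       io_stall_read_ms  / NULLIF(num_of_reads, 0) AS avg_read_latency_ms
FROM sys.dm_io_virtual_file_stats(6, NULL);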

To be sure about what kind of throughput you can get from the drive
presented by your SAN, you should run some tests. If you have a chance,
try a simple table scan on a large and wide table to see how many
MB/sec you achieve.
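
If it helps, here is one rough way to turn that scan into an MB/sec
figure using the same file-stats DMV: snapshot the cumulative bytes
read, run the scan, then work the throughput out from the delta and the
elapsed time. dbo.BigWideTable is just a placeholder for whatever large
table you pick, and DBCC DROPCLEANBUFFERS empties the buffer pool, so
only run this on a test box:

-- Snapshot cumulative bytes read for the current database
SELECT GETDATE() AS start_time,
       SUM(num_of_bytes_read) AS bytes_read_before
INTO #before
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);

DBCC DROPCLEANBUFFERS;  -- make the scan come from disk (test box only)

SELECT COUNT_BIG(*)
FROM dbo.BigWideTable WITH (INDEX(0));  -- INDEX(0) forces a full scan

-- Throughput over the interval, in MB/sec
SELECT (a.bytes_read_after - b.bytes_read_before) / 1048576.0
         / NULLIF(DATEDIFF(second, b.start_time, GETDATE()), 0) AS mb_per_sec
FROM #before AS b
CROSS JOIN (SELECT SUM(num_of_bytes_read) AS bytes_read_after
            FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL)) AS a;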

BTW, are you using a single 2Gb card or two load-balanced 1Gb cards?

Linchi

"ianwr" wrote:

> Hi,
>
> I wondered if anyone can help with the following problem that I'm
> experiencing. I'll try to provide as much info as possible and any
> suggestions would be appreciated.
>
> I have just started at an organisation and there seems to be slow
> performance, maybe on the SAN, on a 64bit Itanium dual core machine. 4
> CPUs are being shown to SQL Server, and it also has 16gb of RAM. I'll
> start with the configuration of the SAN.
>
> After speaking to the SAN guy, rather than carve the SAN up into
> different areas for Logs/Data etc. they have gone for the approach
> of spreading a Vdisk across as many spindles as possible (all 145 of
> them). So the area that is presented to SQL Server, according to the
> SAN guys, is a vraid 5 stripe made up of all 145 disks, which are
> all 72gb fibre-channel disks.
>
> This storage is not just made available to SQL Server but also to
> other apps that need storage. Having read the manufacturer's best
> practice on setting this up, there is a valid argument for doing
> this.
>
> The bandwidth from the SAN is 2Gb fibre, with each computer that uses
> the SAN having 2Gb fibre cards.
> Clearly, that could act as a bottleneck, but there's nothing that
> can be done about it according to the SAN guy.
>
> Needless to say, any changes on the SAN are pretty much going to be
> out of the question as far as he's concerned, but I think performance
> isn't that good for the type of box they have and the SAN it's
> attached to.
>
> The 2nd thing I'll explain is the setup of the database in question.
> Whoever set it up split the database into 16 different files across 4
> filegroups, so the table that I'm selecting into is in one filegroup
> split over 4 files and the table I'm selecting from is in another
> filegroup made up of another 4 files. These are placed on the same
> physical disk made up of the SAN LUN with 145 spindles.
>
> Anyway, when I do a select from a sales table which has various group
> bys and then insert the results into a blank table with no indexes, it
> can take over 2 hours for 200k rows, which I find very slow.
>
> When I look at the sysprocesses table I am getting various waits as
> follows :-
>
> 72 4272 0 0x0042 900 PAGEIOLATCH_SH 6:9:2192094
> 72 4272 0 0x0069 0 SLEEP_TASK
> 72 4272 0 0x0000 0 SOS_SCHEDULER_YIELD
>
> The process seems to be going between a PAGEIOLATCH and
> SOS_SCHEDULER_YIELD a few times per second.
>
> Running the following to get io stalls gives the following :-
>
> Select * from sys.dm_io_virtual_file_stats (6,7)
> Select * from sys.dm_io_virtual_file_stats (6,8)
> Select * from sys.dm_io_virtual_file_stats (6,9)
> Select * from sys.dm_io_virtual_file_stats (6,10)
>
> gives results like :-
>
> database_id 6, file_id 7, sample_ms 1708539850, num_of_reads 1562421,
> num_of_bytes_read 82465128448, io_stall_read_ms 294572225,
> num_of_writes 26431, num_of_bytes_written 2455404544,
> io_stall_write_ms 12438340, io_stall 307010565,
> size_on_disk_bytes 44907495424, file_handle 0x0000000000000954
>
> It worries me that when the process is on the PAGEIOLATCH the wait
> can be over 1000. Is it normal for the wait to be this long, and what
> would be the best way to prove one way or another whether the
> configuration of the SAN is causing this kind of performance?
>
> Thanks for any suggestions in advance
>
> Ian.
>
From: ianwr on
I think the SAN guy said it was a single 2Gb card on the SAN and that
all servers that access the same SAN also use 2Gb cards, so he knows
this could be a bottleneck. The entire company uses this SAN for just
about all servers ... I would estimate there must be about 6 MIS-type
servers using this SAN (3 live and 3 test) plus a number of smaller
TP-type systems.

I'll run a few tests on the throughput and let you know

Ian
From: ianwr on
Ok, just done a quick table scan on a large table and the stats were
as follows :-

Avg. Disk Bytes/Read approx 76,000, which is only 76K, not exactly the
200MB a sec you had said.
Avg. Disk sec/Read 0.6

I take it these figures aren't too good. Any idea what I can check to
see why they look so bad?

Ian.

From: ianwr on
stats on the table scan are as follows :-

Avg. Disk Bytes/Read 76,000 on average, which = 76K, which isn't
anything near your 200MB/sec
Avg. Disk sec/Read = 0.6

Any ideas how I can narrow down exactly why the throughput is so bad?

Thanks

Ian.
From: ianwr on
Guys,

Just a quick update on the performance. Here are the performance
counters I took this morning running the table scan again :-

Disk Read Bytes/sec = between 11MB and 33MB, very choppy, so I guess
this is about 10 times too slow as it's on average about 20MB/sec
rather than the 200MB/sec you were expecting.

Avg. Disk Bytes/Read was around the 77K mark

Avg. Disk sec/Read was around the 0.7 mark but at times went up to
1.2 ... pretty shocking. Going to get sqlio on the job and see if we
can get some timings to go back to the SAN guys with.

Ian.