From: John Fullbright on
Ok, you left out the pessimism. I actually do storage design for Exchange
for a storage vendor, and I enjoy it when competitors string the customer
along like that. When the customer gets the sticker shock 12 to 18 months
down the line, the competitive takeout is a no-brainer.

The only thing I don't see is the calculation for the RAID 1 write penalty.
If you have a base of 320 IOPS read performance and 160 IOPS write
performance (RAID 5 would be 160 read and 40 write with a 3-drive set) and
your read/write ratio is 3:1, we say 320*.75 + 160*.25 = 280 IOPS. Assuming
the MS estimate for a heavy user is accurate (don't assume; measure it),
.75 * 400 = 300. You're just a tad under. In reality, use 85 IOPS per
spindle for 10K spindles and 110 IOPS for 15K spindles. So you're just
about there with 10K spindles and have some breathing room with 15K
spindles. Many vendors quote higher IOPS numbers per spindle, but there is
no access time associated with that higher number. The figures I give are
tested for a target 20ms IO. The more you load each spindle, the longer
each IO takes. If I load a 10K spindle to 100 IOPS, then we're talking
about 50-60ms IOs.
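The blended-throughput arithmetic above can be sketched as a small helper.
The spindle counts, per-spindle IOPS, and write penalties are the
assumptions used in this thread, not vendor specs:

```python
# Effective IOPS for an array, blending read and write capacity by workload
# mix. Assumptions from the thread: 80 IOPS/spindle, RAID 10 write penalty
# of 2, RAID 5 write penalty of 4, and a 3:1 read/write ratio.

def effective_iops(spindles, iops_per_spindle, write_penalty, read_fraction):
    read_capacity = spindles * iops_per_spindle
    write_capacity = read_capacity / write_penalty
    return read_capacity * read_fraction + write_capacity * (1 - read_fraction)

# 4-spindle RAID 10 at 80 IOPS/spindle, 3:1 read/write mix:
capacity = effective_iops(4, 80, write_penalty=2, read_fraction=0.75)
demand = 400 * 0.75   # 400 heavy users at the MS estimate of .75 IOPS each
print(capacity, demand)  # 280.0 vs. 300.0 -- "just a tad under"
```

Swapping in `write_penalty=4` shows why the same spindles deliver less under
RAID 5, which is the comparison made in the parenthetical above.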

In the field, I typically see a range of 1.0 to 3.5 IOPS per user. It's
driven by:

1. Mailbox size
2. Average message size
3. MAPI applications (server side; BlackBerry, for example, typically adds
.5 IOPS per user by opening a second uncached session)
4. MAPI applications (client side; Lookout or other search add-ins when the
client is not in cached mode are a good example)
5. Server side antivirus scanning

In the referenced paper, look at the section titled "How to calculate your
disk I/O requirements using environmental data". This method is far more
accurate than using the assumptions.

Good luck.






"Paul Hutchings" <paul(a)spamcop.net> wrote in message
news:paul-963A17.23412213012006(a)msnews.microsoft.com...
> In article <#Txa74HGGHA.608(a)TK2MSFTNGP14.phx.gbl>,
> "John Fullbright" <Fullbrij(a)comcast.com> wrote:
>
>> Wow, a thousand. Let's see... the assumption for a heavy user, which in
>> over 300 deployments in my experience has never been accurate, is .75
>> IOPS. Let's be generous and assume everyone is using cached mode and we
>> have a 2:1 read/write ratio. Your limit would still be just under 200
>> users using the math in "Optimizing Storage for Exchange 2003".
>
>
> John, thanks for all the info.
>
> I've been reading the "Calculate Your Server Size" document at Microsoft.
>
> Now if I'm understanding all this correctly, at its most basic I should
> assume around 80 IOPS per spindle on a RAID 10 disk subsystem.
>
> This gives approx 320 IOPS on a 4-spindle RAID 10.
>
> If I assume that every single user (physical member of staff, not
> mailboxes) who will be using this server is a "heavy user", I have around
> 400 staff at 0.75 IOPS per user, so 400 x 0.75 = 300 IOPS.
>
> That's if *everyone* is a heavy user. I guess it also assumes they're all
> using the server at the same time (they don't), and it also assumes that
> Microsoft's average of 0.75 for a heavy user is accurate.
>
> It seems to suggest that I'm speccing about right?
>
> Of those 400 staff I'd break it down as approx 200/100/100 if I were to
> try and say what the light/medium/heavy user split is (we have a lot of
> technicians and users who don't work at a desk and use mail all day).
>
> I'll do some digging and see how they suggest measuring/converting the
> load on the current 5.5 box to IOPS per mailbox so I can get a
> measurement rather than take an educated guess.
>
> Thanks again,
> Paul
> --
> paul(a)spamcop.net


From: Hank Arnold on
There is a kit that will split the 6-drive-bay backplane into 2- and
4-drive arrays. Add the kit that converts the 2 external drive bays into a
2-drive array and you have 8 drives set up as 2/2/4 drive arrays. You can
add two RAID adapters, one with 2 internal channels and the other with one.
This will allow RAID 1/RAID 1/RAID 10 (or RAID 5 + 1 hot spare).

RAID 10 requires a minimum of 4 drives. RAID 5 requires a minimum of 3
drives and RAID 1 (or RAID 0) requires 2 drives.
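Those minimums can be captured in a quick sanity-check helper. The level
names and drive counts are just the ones stated above; the even-count rule
for RAID 10 follows from it striping mirrored pairs:

```python
# Minimum drive counts per RAID level, as stated in the thread.
RAID_MIN_DRIVES = {"0": 2, "1": 2, "5": 3, "10": 4}

def can_build(level, drives):
    """Return True if `drives` disks are enough for the given RAID level."""
    need = RAID_MIN_DRIVES[level]
    if level == "10":
        # RAID 10 stripes mirrored pairs, so it also needs an even count.
        return drives >= need and drives % 2 == 0
    return drives >= need

print(can_build("10", 4), can_build("5", 2))  # True False
```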

--
Regards,
Hank Arnold

"Mark Arnold [MVP]" <mark(a)mvps.org> wrote in message
news:99bfs1hpgehd341ieia37thlhac1aphdf9(a)4ax.com...
> On Fri, 13 Jan 2006 04:50:31 -0500, "Hank Arnold" <rasilon(a)aol.com>
> wrote:
>
>>Sure you can.... There is a cage that can be added that will convert the 2
>>external drive bays into a 3rd RAID array. It will require that the bays be
>>empty and that at least one more RAID adapter be added. We configured this
>>for our 2 ML370 servers until we decided to go with new hardware.....
>
> Hank, have they changed the internal architecture? Last time I had a
> 370 in my hands, which was a long time ago now, the stack of 6 was
> connected via a backplane to a single controller cable.
> How could you do RAID 10 on any more than 2 disks?


From: Hank Arnold on
I'm not aware of a RAID adapter that has more than 2 *internal* channels. We
ended up with 2 RAID adapters to get three separate channels...

--
Regards,
Hank Arnold

"Paul Hutchings" <paul(a)spamcop.net> wrote in message
news:paul-A36CC9.17375613012006(a)msnews.microsoft.com...
>
> I guess if you needed it you could combine the two with a four channel
> RAID controller and have three channels.
>

From: Hank Arnold on
That wasn't our experience at all. We talked with the Dell sales folks and
their tech support group. We were asking for an estimate on two server
cluster setups, with two servers in each cluster. They in fact came up with
a significantly less expensive solution, proposing a network-attached
storage box with tons of space and two SCSI/RAID arrays, which reduced the
total cost of the whole configuration and allows for significant growth in
the future...
--
Regards,
Hank Arnold

"John Fullbright" <Fullbrij(a)comcast.com> wrote in message
news:%23Txa74HGGHA.608(a)TK2MSFTNGP14.phx.gbl...
>
> <PESSIMISM>
> Consider the source: Dell is trying to sell you hardware. Microsoft is
> trying to prevent support cases. Which would you trust? Sounds to me
> like an undersell, and Dell will go after the SAN sale in 12 months with
> a similarly undersized Clariion. Six months later, they'll follow up by
> recommending a bunch of additional shelves costing more than the original
> SAN purchase.
> </PESSIMISM>


From: Paul Hutchings on
In article <#pIEICKGGHA.1192(a)TK2MSFTNGP11.phx.gbl>,
"John Fullbright" <Fullbrij(a)comcast.com> wrote:

> Ok, you left out the pessimism. I actually do storage design for Exchange
> for a storage vendor and enjoy it when competitors string the customer along
> like that. When the customer gets the sticker shock 12 to 18 months down
> the line, the competitive takeout is a no-brainer.

The HP sizer was a little more realistic, I think - it suggested more or
less what I had in mind (it said less memory and slower CPUs, which I
shall ignore), the main difference being an external enclosure, because of
the number of spindles, with dedicated disks for the transaction logs, and
spares.

I could have a dedicated mirror for the logs if I sacrificed the hot spare
and the spindle for the NT backup, but it would be on the same RAID
controller channel as the databases, so I'm not sure I'd be any better off
than having the OS/logs on the same spindles but on a different RAID
controller channel.

So far as measuring what level of disk performance I need, from what
I've read this seems useful (from
http://blogs.technet.com/exchange/archive/0001/01/01/240868.aspx)

Measure PhysicalDisk\Disk Transfers/sec for all databases for 20 minutes
to 2 hours during your most active time (for example, this is from 9-11 AM
on a Monday here at Microsoft). During this time, also measure the number
of active users (MSExchangeIS\Active User Count). Take an average of these
counters. Sum the disk transfers/sec for each database, divide the first
number by the second, and... voila! You have just calculated the number of
IOPS per user.
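As a sketch, the arithmetic in that excerpt reduces to the following; the
sample values are invented for illustration, not real perfmon data:

```python
# IOPS per user from perfmon samples, per the measurement method quoted
# above. Each sample pairs the summed "PhysicalDisk\Disk Transfers/sec"
# across all database volumes with "MSExchangeIS\Active User Count" taken
# at the same moment. The numbers below are made up for illustration.

samples = [
    (410.0, 180),  # (total disk transfers/sec, active users)
    (455.0, 205),
    (390.0, 170),
]

avg_transfers = sum(t for t, _ in samples) / len(samples)
avg_users = sum(u for _, u in samples) / len(samples)
iops_per_user = avg_transfers / avg_users

print(round(iops_per_user, 2))
```

Feeding the result into the capacity math earlier in the thread (users x
measured IOPS vs. array IOPS) gives a sizing based on your own workload
rather than the Microsoft heavy-user assumption.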

Paul
--
paul(a)spamcop.net