From: gg on
I am currently using a RAID 5 volume built from three 500GB Seagate SATA
drives, and it is about to run out of space.

Is it wise to add a brand-new 500GB Seagate SATA drive to the volume? The
volume is the system boot volume and is dual-boot; both Windows OS
partitions are on the same volume.

Can someone share any usage experience with RAID volumes of four or more
drives?

Or would it make for a more reliable and responsive system to break the
OS boot partitions out onto a new volume on a single SATA drive? Or a new
volume striped across two small drives, say 80GB each? All the drives I am
talking about are 7200 RPM.

The current system has five drives in total. The spec allows a maximum of
six internal drives on the primary controller, plus two more on the
Silicon Image controller, as well as one "eSATA on the go" port.



From: Sudsy on
On Apr 24, 3:09 pm, "gg" <g...(a)Edm.noMail.net> wrote:
> I am currently using a RAID 5 volume built from three 500GB Seagate SATA
> drives, and it is about to run out of space.
>
> Is it wise to add a brand-new 500GB Seagate SATA drive to the volume? The
> volume is the system boot volume and is dual-boot; both Windows OS
> partitions are on the same volume.
>
> Can someone share any usage experience with RAID volumes of four or more
> drives?

Contemporary wisdom is that you needn't RAID your OS. In fact, it's a lot
easier to recover from a failure when you can simply restore an image of
your OS. Just make sure that you've got a reliable backup. I've seen
setups where a complete backup drive can simply be plugged in and the
system is operational again in minutes.

It's the data, which can grow in size over time, that needs to be
protected, and RAID5 is a great solution there. I typically provision it
on 5 hot-pluggable drives. With the right hardware or software, failure of
a single drive doesn't interrupt the continuing operation of the system.
I've even proven this by physically removing one of the drives while the
system is running! Plug it back in, the storage subsystem resynchronizes,
and you're good to go. Highly recommended for mission-critical scenarios.

As an aside, transaction logs for an RDBMS are best served by a separate
RAID1 (two-drive) array. Such an approach gives better overall performance
without compromising the ability to recover.
From: shaw news on
Thanks. Have you used a 5-drive RAID 5 on a P5Q-E, or are you talking
about a fancy, serious RAID controller with enterprise-class drives?

I am only using regular Seagate desktop SATA II NCQ 7200 RPM drives,
nothing like the nearline or ES stuff.

I have set Vista 64 to the performance power plan, but the P5Q-E mainboard
is only transferring at a disappointing 40MB/s from the corrupted volume
to an external drive (a 2TB WD Green, 5400 RPM). The eSATA connection is
supposedly good for up to 3Gb/s with the built-in controller and the
Thermaltake BlacX eSATA dock.


There are some cheap $60 ST Lab RAID 5 controllers for four drives.


"Sudsy" <sudsy2222(a)yahoo.com> wrote in message
news:1f3868af-7a03-4a95-a940-44cd3275940e(a)k36g2000yqn.googlegroups.com...

Contemporary wisdom is that you needn't RAID your OS. In fact, it's a
lot
easier to recover from a failure when you can simply restore an image
of your OS. Just make sure that you've got a reliable backup. I've
seen
situations where there's a complete backup drive which can be simply
plugged-in and the system can be operational in minutes.
It's the data, which can grow in size over time, which needs to be
protected
and RAID5 is a great solution. I typically provision it on 5 hot-
pluggable
drives. With the right hardware or software, failure of a single drive
doesn't
impact the continuing operation of the system. I've even proven this
by
physically removing one of the drives while the system is running!
Plug it
back in, the storage subsystem resynchronizes and you're good to go.
Highly recommended for mission-critical scenarios.
As an aside, transaction logs for a RDBMS are best served by a
separate
RAID1 (two drive) array. Such an approach provides better overall
performance while not impacting the ability to recover.


From: Paul on
shaw news wrote:
> Thanks. Have you used a 5-drive RAID 5 on a P5Q-E, or are you talking
> about a fancy, serious RAID controller with enterprise-class drives?
>
> I am only using regular Seagate desktop SATA II NCQ 7200 RPM drives,
> nothing like the nearline or ES stuff.
>
> I have set Vista 64 to the performance power plan, but the P5Q-E
> mainboard is only transferring at a disappointing 40MB/s from the
> corrupted volume to an external drive (a 2TB WD Green, 5400 RPM). The
> eSATA connection is supposedly good for up to 3Gb/s with the built-in
> controller and the Thermaltake BlacX eSATA dock.
>
> There are some cheap $60 ST Lab RAID 5 controllers for four drives.

http://blogs.zdnet.com/Ou/?p=484&page=1

They got 218MB/sec read and 193MB/sec write with their software RAID5.
The claim is that the benchmark is doing sequential access.

On ordinary disks, you can use the free version of HDTune to test
sequential read performance. It doesn't necessarily work with every kind
of storage device you can think of, but I like it for testing a new disk
when I buy one.

http://www.hdtune.com/files/hdtune_255.exe

You can also test sequential performance by creating a single large file
and timing how long it takes to transfer it.

How to make a large file for testing: I use the Windows port of "dd" and
make a large file of zeros. FSUtil also knows how to make large files, but
it cheats: on an NTFS file system it produces "sparse" files, which you
don't want for timing. "dd" doesn't cheat; it writes real zeros to every
sector of the file.
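
For reference, the FSUtil invocation is something like this (the size
argument is in bytes):

fsutil file createnew C:\my_big_file.dd 809041920

Since the file it creates isn't backed by real writes, it's useless for
timing a transfer.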

http://www.chrysocome.net/dd

dd if=/dev/zero of=C:\my_big_file.dd bs=65536 count=12345

In that example, the resulting file is 65536 * 12345 = 809,041,920 bytes total.
The block size (bs) should stay within the maximum command size that a storage
device supports. You can make the count number larger if you want.
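
If you want something closer to 4GB, for example, keep the same 64KB block
size and raise the count:

dd if=/dev/zero of=C:\my_big_file.dd bs=65536 count=65536

which works out to 65536 * 65536 = 4,294,967,296 bytes.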

Once the big file is available, use a stopwatch to time the transfer.
Or if you need a utility to test with, I like "robocopy" from Microsoft
as a means of doing copies. It harvests all available speed. You can use
Performance Monitor in Windows to watch as the transfer occurs (add a
counter such as "Disk Write Bytes/sec").

http://technet.microsoft.com/en-us/magazine/2006.11.utilityspotlight.aspx

That is supposed to include a copy of version XP026 of Robocopy. These are
the typical commands I use for copying from one disk to another. In these
two examples, I copy the contents of Y: onto F:, reformat Y:, and later
copy the contents of F: back to Y:. As far as I know, the options here
copy the NTFS file attributes as well. Note that /mir mirrors the source
onto the destination, deleting anything in the destination that isn't in
the source, so double-check the drive letters. The log file is useful for
checking later what happened. (I've never tried the GUI.)

robocopy Y:\ F:\ /mir /copy:datso /dcopy:t /r:3 /w:2 /zb /np /tee /v /log:y_to_f.log

robocopy F:\ Y:\ /mir /copy:datso /dcopy:t /r:3 /w:2 /zb /np /tee /v /log:f_to_y.log
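
If you'd rather not fumble with a stopwatch, PowerShell's Measure-Command
can time a single-file copy. A minimal sketch, reusing the dd file from
earlier (the drive letters are just placeholders):

Measure-Command { robocopy C:\ F:\ my_big_file.dd /np }

Divide the file size by the TotalSeconds of the result to get bytes per
second; 809,041,920 bytes in 20 seconds, say, works out to roughly
40MB/sec.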

Hard drives have "seek time", which is the time to reposition the heads.
No work is done while those heads are being moved. If all you copy is tiny
1KB files, your RAID array will look terrible. If you want impressive
numbers, test with one large file.
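
To put rough numbers on it: a 7200 RPM desktop drive needs something like
13ms per random access, so it can manage at most about 75 tiny files per
second. At 1KB each, that is under 0.1MB/sec, versus something like
100MB/sec for a sequential read on the same drive.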

If you want a cure for "seek time", buy a set of SATA SSDs. They have
access times around the 0.1 millisecond mark, which eliminates at least
some of the seek effect. But SSDs also have a natural 128KB page size, and
writing 1KB files is not something they really appreciate (it means a
read-modify-write: preserve the 127KB that isn't changing, and fold in the
1KB change). If you were building a RAID5, I suppose the stripe size of
the RAID could be arranged to align with the preferences of the SSDs.
Writing a 1KB file still stinks, but at least the RAID is nicely aligned
with the natural size the SSD wants. For some background, Anandtech has
had several articles on SSDs, uncovering some of the issues with using
them.
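
Taking that 128KB figure at face value, the arithmetic is ugly: a 1KB
write makes the drive shuffle 128KB internally, a write amplification of
128x, so small-file writes can land at well under 1% of the drive's
sequential rating.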

One benefit a hardware RAID card may offer is a cache DIMM plugged into
the card. But I wonder if that is really cost effective, considering the
price of some of those cards. The Areca with a ton of ports on it costs
around $1000. That is a lot of money to speed up a few file transfers.
Some cheaper cards may have a smaller cache, or no cache at all, and all
they might provide is an XOR chip. In some cases, the cheap ones are
little better than an Intel soft RAID, in terms of what you're getting.
You definitely want to research stuff like this before parting with
hard-earned money. Cards like this one even have an option to support
arrays larger than 2.2TB.

http://images17.newegg.com/is/image/newegg/16-151-035-S01?$S640W$

(When your array is really large, you'll want to read this.)

http://en.wikipedia.org/wiki/GUID_Partition_Table

HTH,
Paul


> "Sudsy" <sudsy2222(a)yahoo.com> wrote in message
> news:1f3868af-7a03-4a95-a940-44cd3275940e(a)k36g2000yqn.googlegroups.com...
>
> Contemporary wisdom is that you needn't RAID your OS. In fact, it's a
> lot
> easier to recover from a failure when you can simply restore an image
> of your OS. Just make sure that you've got a reliable backup. I've
> seen
> situations where there's a complete backup drive which can be simply
> plugged-in and the system can be operational in minutes.
> It's the data, which can grow in size over time, which needs to be
> protected
> and RAID5 is a great solution. I typically provision it on 5 hot-
> pluggable
> drives. With the right hardware or software, failure of a single drive
> doesn't
> impact the continuing operation of the system. I've even proven this
> by
> physically removing one of the drives while the system is running!
> Plug it
> back in, the storage subsystem resynchronizes and you're good to go.
> Highly recommended for mission-critical scenarios.
> As an aside, transaction logs for a RDBMS are best served by a
> separate
> RAID1 (two drive) array. Such an approach provides better overall
> performance while not impacting the ability to recover.
>
>
From: Sudsy on
Paul is always a font of wisdom on these points! I'm thrilled that he
shares his knowledge on these lists.
In answer to your question, my RAID5 experience was in fact on an HP-9000
server. I'd go with a hardware controller if available, but modern
motherboards (like the Asus M4A786-M I installed this weekend) incorporate
RAID5 support in the BIOS.
I'm using it to host both Windows Server 2003 and XP as VMs. Just make
sure that you enable hardware virtualization in the BIOS before installing
Xen.
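As a quick sanity check before the Xen install, you can confirm the
virtualization flag is actually exposed to the OS (from any Linux shell,
e.g. a live CD; this is a generic check, not specific to that board):

grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u

If that prints nothing, the BIOS option is still disabled (or the CPU
doesn't have it).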