From: David Brown on
On 12/05/2010 01:07, me wrote:
> A question on the performance impact of Raid 1. I realize up front
> that I might have too little info for a perfect decision... so I will
> take whatever input I can get on performance and configuration.
>
> I have an Asus P4P800SE circa 2004 mobo with a 3.0G CPU which runs
> win2003 Server as a SOHO file server. The load on it is light with
> just one or two users and not a lot of concurrent access or large
> files. It has an ICH5R southbridge controller supporting RAID 0/1 on
> SATA 150 drives. The two hard drives are matching SATA 5400 RPM
> 80GB drives with 13ms/14ms read/write seek times.
>
> Questions:
>
> 1. How much of a performance impact will it be to have RAID 1 setup
> and then writing to both drives?
>

That depends on your usage patterns. If you think about what the system
actually has to do for raid 1, you will get a better idea of how it
will affect performance. Writing involves putting two copies of
everything on disk, but these writes should happen in parallel on the
two disks, so they take much the same time as writing to a single disk.
When reading, the system can get the data from either drive - reads
should therefore be faster, especially if they are concurrent. How much
faster will depend on your load, and on how smart the fakeraid and
Windows combination is.
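
If you ever want to sanity-check that, here is a rough sketch from a
Linux live CD against a software (md) mirror - the device names are
placeholders, not your setup:

    # one sequential reader against the mirror
    dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct
    # two concurrent readers - raid 1 can serve each from a different
    # disk, so together they should beat a single drive
    dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct &
    dd if=/dev/md0 of=/dev/null bs=1M count=2048 skip=2048 iflag=direct &
    wait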

> 2. The mobo instructions take me through using the BIOS to write RAID
> info to the drives. They also discuss removing RAID info;
>
> a. Does this mean that in the event the mobo crashes that I would
> need to have another mobo with the same RAID controller on it to use
> the drives or remove the RAID info? Would I be able to access the
> drive in another non-RAID machine as pure SATA in the event of a
> mobo failure?
>

It is sometimes possible to recover data from fakeraid drives using a
different motherboard, or using something like linux mdadm, but it is
not an easy job. If you choose to use fakeraid, you should assume that
losing the motherboard means losing the disks.
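
For what it's worth, the usual Linux tool for this is "dmraid", which
understands several fakeraid metadata formats, including Intel's. A
rough sketch from a live CD - the /dev/mapper name below is made up,
dmraid generates its own:

    dmraid -r        # list the disks that carry fakeraid metadata
    dmraid -ay       # activate whatever raid sets it finds
    ls /dev/mapper   # the assembled set(s) show up here
    # mount read-only while experimenting, just in case
    mount -o ro /dev/mapper/isw_xxxx_Volume0 /mnt

Treat that as a last resort, though, not as a plan.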

If you want to have portable raid that can be recovered on a different
machine, you either have to have real hardware raid, or real software
raid - not the hybrid worst-of-both-worlds fakeraid. With hardware
raid, you have a separate raid card - if it fails (and that's very
unlikely), you will have an easier time getting a compatible
replacement. With software raid, any other system running the same OS
will be able to use the disks.
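
With Linux mdadm, for example, moving the disks to a new machine is
normally no more than this sketch:

    # scan all disks for md superblocks and assemble any arrays found
    mdadm --assemble --scan
    cat /proc/mdstat   # check that the array came up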

I have used Linux software raid, but I've had no experience with Windows
software raid. It would certainly be possible to put data on a raid
partition, but I don't know if it is possible to install an entire
Windows server system on software raid - Windows experts will have to
answer that one.

But in general, software raid is going to be at least as fast as
fakeraid, and possibly faster. In some circumstances it will be faster
than all but the most expensive hardware raid cards. Software raid is
also at least as reliable as fakeraid, though not as reliable as a solid
hardware raid card.

> b. Does the process of writing or removing the RAID info affect the
> data and OS already on the drive? That is, if I write this info to a
> drive already in use on the machine, do I have to start fresh or will
> all the data be preserved?
>

Start from scratch. It's possible that you could use some sort of disk
imaging software to do the transfer, but it is likely to be more time
and effort than a re-install. Again, others here have more practice
with such software, and may give other advice.

> 3. The mobo instructions mention setting up a RAID driver on a floppy
> for Windows to use when starting XP or Win2000. Will Win 2003 have the
> controller built in or do I still need an external driver?
>
> 4. Anything else I should know about RAID before heading in this
> direction? I'd rather not be saying "Doh!" after I finish :-)
>

Remember, raid is not about keeping your data safe - that's what backups
are for. raid is about speed, and uptime - redundant raid means you
won't have to restore data from backups or re-install your OS just
because a hard disk died.

From: Roger Blake on
On 2010-05-12, me <noemail(a)nothere.com> wrote:
> Thanks for all the info. What makes the Intel mobo implementation
> "fakeraid" instead of hardware raid?

It's a BIOS-assisted software RAID implementation, there's no hardware
RAID controller. (At least not unless you have a server-class motherboard
that specifically includes one.)

My experience has been that these motherboard fakeRAID systems tend to
be less reliable than either real hardware RAID or the operating
system's native software RAID.

From: David Brown on
me wrote:
> On Wed, 12 May 2010 10:13:33 +0200, David Brown
> <david(a)westcontrol.removethisbit.com> wrote:
>
> <trimmed>
>> Remember, raid is not about keeping your data safe - that's what backups
>> are for. raid is about speed, and uptime - redundant raid means you
>> won't have to restore data from backups or re-install your OS just
>> because a hard disk died.
>
> David:
>
> Thanks for all the info. What makes the Intel mobo implementation
> "fakeraid" instead of hardware raid?
>

"Fakeraid" is a generic term for motherboard-based raid that has become
common in the last few years, which is not really anything more than a
limited form of software raid supported by the bios.

In proper software raid, the OS's low-level layers access the disks as
individual SATA (or IDE, SCSI, whatever) disks. The OS raid layer
handles the combining of these, and the file systems and user-level
software see the raid setup as a single disk. This gives a lot of
flexibility - the OS can support many types of raid setups, it can
combine partitions into raid sets rather than whole disks, it can take
advantage of greater knowledge of the file access patterns to improve
performance, and it can provide features that you don't get with
hardware raid systems (or at least, not without paying a great deal of
money). For example, I believe Linux mdadm raid is the only system that
will let you have raid 1+0 on any number of disks (greater than 1,
obviously) - it will happily let you have striped and mirrored raid on 2
or 3 disks, while hardware solutions typically require an even number of
at least 4 disks.
And if the OS or the hardware dies, you can put the same disk set in
another computer with the same OS, and access your drives.
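
As a concrete sketch of that flexibility - mdadm calls this level
"raid10", and the device names here are examples only:

    # striped and mirrored across three whole disks - odd counts are fine
    mdadm --create /dev/md0 --level=10 --raid-devices=3 \
        /dev/sda /dev/sdb /dev/sdc
    cat /proc/mdstat   # watch the initial sync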

The disadvantage of software raid is that if the OS dies, or the power
to the motherboard fails, you could be in big trouble and get your disks
out of sync. You are also using the main processor to do the work, but
that's seldom an issue these days unless your processor is already
heavily loaded. Raid 0 and 1 levels are particularly light on processor
use.

With proper hardware raid, the controller card is separate, so the OS
only ever sees a single large disk. All processing such as parity
generation, syncing, checking, etc., is handled by the card without
taking host processor cycles. And the controller card will typically
have a battery backup (or non-volatile memory) to provide consistency
and reliability even if the power fails. It won't protect you from
logical faults if the OS dies, but it will protect you from raid set
inconsistencies. For very large setups and expensive hardware, hardware
raid can perform better than software raid even with just raid 0 or 1.
If the hardware controller dies, you will typically need to get the same
model or a similar model to replace it, since manufacturers use
different arrangements for their raid setups.

Fakeraid gives you the worst of both worlds. There is no separate
processor, so everything is handled by the host - either by BIOS
routines, or drivers loaded by the OS. You don't get the full
flexibility of proper software raid, but only the limited functionality
provided by the fakeraid. And access to the disk is limited to the
chipset used in the motherboard - if the motherboard dies (more likely
than a hardware raid controller dying), you could lose access to your disks.

Where fakeraid wins is if you are using a limited OS like Windows, and
want to install the whole system on a raid drive but don't want to pay
for a hardware raid card. As far as I know, you can't install Windows
on a Windows software raid drive - you can only use such drives for
non-system partitions.

From: Arno on
Roger Blake <rogblake(a)iname.invalid> wrote:
> On 2010-05-12, me <noemail(a)nothere.com> wrote:
>> Thanks for all the info. What makes the Intel mobo implementation
>> "fakeraid" instead of hardware raid?

> It's a BIOS-assisted software RAID implementation, there's no hardware
> RAID controller. (At least not unless you have a server-class motherboard
> that specifically includes one.)

> My experience has been that these motherboard fakeRAID systems tend to
> be less reliable than either real hardware RAID or the operating
> system's native software RAID.

And in addition, they are often also OS-specific (meaning, tools
are only available under Windows) and are therefore inferior
to an OS-integrated solution. Hence the name 'fakeRAID'. BTW,
a lot of these work with the Linux 'dmraid' driver/tool, which
offers far superior management and recovery. Sometimes these
fakeRAIDs will not even boot if a disk is missing, and you
have to attach a replacement disk and sit through a lengthy
resync before you can access your data. Also, monitoring and
alerting often suck badly.
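
mdadm at least gets the monitoring part right - a sketch:

    # watch all arrays, mail root on failure events, run in the background
    mdadm --monitor --scan --mail=root --daemonise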

General advice for RAID: Test that you can recover your data
by unplugging a drive, before depending on it. The risk of
messing up a recovery and losing all data is too large if
you have to find out how to recover in an emergency later.
Do it beforehand and document (on paper!) how it is done.
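
With Linux mdadm the drill can even be rehearsed in software first - a
sketch, device names are examples:

    mdadm /dev/md0 --fail /dev/sdb1     # pretend the disk died
    mdadm /dev/md0 --remove /dev/sdb1
    # ...check that the filesystem still mounts and the data is there...
    mdadm /dev/md0 --add /dev/sdb1      # put the disk back
    cat /proc/mdstat                    # and watch the resync run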

Arno
From: Rod Speed on
David Brown wrote
> me wrote
>> David Brown <david(a)westcontrol.removethisbit.com> wrote

>>> Remember, raid is not about keeping your data safe - that's what
>>> backups are for. raid is about speed, and uptime - redundant raid
>>> means you won't have to restore data from backups or re-install
>>> your OS just because a hard disk died.

>> Thanks for all the info. What makes the Intel mobo implementation
>> "fakeraid" instead of hardware raid?

> "Fakeraid" is a generic term for motherboard-based raid that has
> become common in the last few years, which is not really anything
> more than a limited form of software raid supported by the bios.

> In proper software raid, the OS's low-level layers access the disks as individual SATA (or IDE, SCSI, whatever) disks.
> The OS raid layer
> handles the combining of these, and the file systems and user-level
> software see the raid setup as a single disk. This gives a lot of
> flexibility - the OS can support many types of raid setups, it can
> combine partitions into raid sets rather than whole disks, it can take
> advantage of greater knowledge of the file access patterns to improve
> performance, and it can provide features that you don't get with
> hardware raid systems (or at least, not without paying a great deal of
> money). For example, I believe Linux mdadm raid is the only system
> that will let you have raid 1+0 on any number of disks (greater than
> 1, obviously) - it will happily let you have striped and mirrored raid on 2 or 3 disks, while hardware solutions
> typically require an even number of at least 4
> disks. And if the OS or the hardware dies, you can put the same disk
> set in another computer with the same OS, and access your drives.

> The disadvantage of software raid is that if the OS dies, or the power
> to the motherboard fails, you could be in big trouble and get your
> disks out of sync. You are also using the main processor to do the
> work, but that's seldom an issue these days unless your processor is
> already heavily loaded. Raid 0 and 1 levels are particularly light on
> processor use.

> With proper hardware raid, the controller card is separate, so the OS
> only ever sees a single large disk. All processing such as parity
> generation, syncing, checking, etc., is handled by the card without
> taking host processor cycles. And the controller card will typically
> have a battery backup (or non-volatile memory) to provide consistency
> and reliability even if the power fails. It won't protect you from logical faults if the OS dies, but it will
> protect you from raid set inconsistencies. For very large setups and expensive hardware,
> hardware raid can perform better than software raid even with just raid 0 or 1.

But as you say, that's not usually a problem with modern systems.

> If the hardware controller dies, you will typically need to get the
> same model or a similar model to replace it, since manufacturers use different arrangements for their raid setups.

And you really need to keep a spare of what is already the most expensive approach, too.

> Fakeraid gives you the worst of both worlds.

That's overstating it, particularly with price.

> There is no separate processor, so everything is handled by the host - either by BIOS routines, or drivers loaded by
> the OS.

But as you say, that's not a problem with modern systems and the simpler forms of RAID.

> You don't get the full flexibility of proper software raid, but only the limited functionality provided by the
> fakeraid. And access to the disk is limited to the chipset used in the motherboard - if the motherboard dies (more
> likely than a hardware raid controller dying),

That last is very arguable.

> you could lose access to your disks.

Not necessarily forever, and a spare motherboard is going to
cost less than a spare controller for fancy hardware raid, too.

> Where fakeraid wins is if you are using a limited OS like Windows, and want to install the whole system on a raid
> drive but don't want to pay for a hardware raid card.

So it isn't actually the worst of both worlds.

> As far as I know, you can't install Windows on a Windows software raid drive

Yes you can.

> - you can only use such drives for non-system partitions.

That's just plain wrong.