From: Rahul on
"Trevor Hemsley" <Trevor.Hemsley(a)mytrousers.ntlworld.com> wrote in
news:gjxI70UYBlcC-pn2-rfdtcfGrhWKB(a)trevor2.dsl.pipex.com:

> That sounds like a standard RHEL5/CentOS 5 kernel (and quite an old one
> too, since the latest is 2.6.18-194.8.1.el5). If so then it won't be a
> preemptible kernel.
>

It is an off-the-shelf CentOS kernel. We put the systems into production
about a year ago, so it's whatever kernel was standard for CentOS 5.4 at
that point in time.

Now that CentOS 5.5 is out, I can always upgrade to that. But I'm not sure
whether it advances the kernel version much.

--
Rahul
From: Robert Heller on
At Fri, 6 Aug 2010 12:24:17 +0000 (UTC) Rahul <nospam(a)invalid.invalid> wrote:

>
> "Trevor Hemsley" <Trevor.Hemsley(a)mytrousers.ntlworld.com> wrote in
> news:gjxI70UYBlcC-pn2-rfdtcfGrhWKB(a)trevor2.dsl.pipex.com:
>
> > That sounds like a standard RHEL5/CentOS 5 kernel (and quite an old one
> > too, since the latest is 2.6.18-194.8.1.el5). If so then it won't be a
> > preemptible kernel.
> >
>
> It is an off-the-shelf CentOS kernel. We put the systems into production
> about a year ago, so it's whatever kernel was standard for CentOS 5.4 at
> that point in time.
>
> Now that CentOS 5.5 is out, I can always upgrade to that. But I'm not sure
> whether it advances the kernel version much.

Check the CentOSPlus repo...
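On CentOS 5 the centosplus repo is already defined in
/etc/yum.repos.d/CentOS-Base.repo, just disabled by default. Something along
these lines should show what kernels it carries (a sketch, not tested here):

  # list kernels available from centosplus without enabling it permanently
  yum --enablerepo=centosplus list available | grep -i kernel
  # and to pull one in:
  yum --enablerepo=centosplus install kernel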

--
Robert Heller -- Get the Deepwoods Software FireFox Toolbar!
Deepwoods Software -- Linux Installation and Administration
http://www.deepsoft.com/ -- Web Hosting, with CGI and Database
heller(a)deepsoft.com -- Contract Programming: C/C++, Tcl/Tk


From: Rahul on
David Brown <david(a)westcontrol.removethisbit.com> wrote in
news:4c5b3f72$0$12224$8404b019(a)news.wineasy.se:

> Raid10 is easy, and thus fast - it's just stripes of mirrors and there
> are no calculations needed. It's fast with either hardware or
> software raid.

Thanks for all the great tips everyone! I'm convinced I should change my
strategy.

I'm going with RAID10. I decided to buy nine 2-terabyte drives instead:
eight will go into the RAID10 and one will be a hot spare. These days the
cost difference between 1 TB and 2 TB disks is not a whole lot, and since
I don't need high performance (IOPS), the number of spindles is not that
big a deal.

That gets me 8 TB of usable space, fairly good reads and writes, no parity
calculation, no expensive rebuilds, and the array can survive 2-5 disk
failures depending on how unlucky I am. Not bad for under $5000! I went
with a standard LSI HBA and will use mdadm for the RAID10.
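For the record, the mdadm invocation I have in mind looks roughly like this
(the /dev/sd[b-j] names are placeholders for whatever the HBA enumerates,
and I haven't actually run it yet):

  # 8-disk RAID10 plus one hot spare (9 devices total)
  mdadm --create /dev/md0 --level=10 --raid-devices=8 \
        --spare-devices=1 /dev/sd[b-j]
  # sanity checks
  cat /proc/mdstat
  mdadm --detail /dev/md0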

> host IO is double for software raid10 (since it needs to write
> everything twice explicitly), while with hardware raid10 the host IO
> is single and the raid card doubles it up. But if you have plenty of
> ram the writes will cache and you will see little difference.

I have 16 GB of RAM and an 8-core Intel Xeon (Nehalem). Not sure if that's
enough or not. But at least since no parity is calculated with RAID10, I
don't have to worry about the "write hole". So no need for hardware RAID
with its battery-backed memory.

> With software mdadm raid10 you can do things like different layouts
> (such as -p f2 for "far" layout) that can speed up many types of
> access.

Ah! That's new to me. I need to read up on that. I never knew about
"layout" parameters.

> single disk. You can always triple-mirror if you want extra
> redundancy.

Nah. I think I'll use standard RAID10, just add another hot spare, and
take my chances.

>
> Performance differences will vary according to the load. Small writes
> are terrible with raid6, but no problem with raid10. Large writes
> will be better with raid6 compared to standard layout raid10 ("near"
> layout in mdadm terms, or hardware raid10), but there will be less
> difference if you use "far" layout mdadm raid10. Large reads are
> similar, except that mdadm "far" raid10 should be faster than raid6.

The system should be mostly writes. Not sure whether small or large. It is
backup storage for rsync, so probably small writes.
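I suppose I can just measure it once the box is taking backups. Something
like this should show the average request size on the array (assuming the
sysstat package is installed; avgrq-sz is in 512-byte sectors):

  # extended stats for the md device, refreshed every 5 seconds
  iostat -x /dev/md0 5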



--
Rahul
From: Trevor Hemsley on
On Fri, 6 Aug 2010 19:09:00 UTC in comp.os.linux.hardware, Rahul
<nospam(a)nospam.invalid> wrote:

> The bug still persists. Damn my distro choice! What's a
> good solution in such a fix? Apply a patch? Upgrade my kernel? Change my
> distro [sic]?!

Depends on your level of comfort with 'unsupported' systems. I run my CentOS 5.5
with a 2.6.3x kernel RPM which I build myself. I started with the .config file
from the CentOS-supplied kernel, enabled various other options that I
wanted/needed, then ran `make rpm`. There is at least one option that needs to
be enabled in current kernel configs that is not present in the CentOS-supplied
config - something about legacy support for sysfs, I think. It'll be fixed by
Red Hat when they release RHEL6, as that has a 2.6.32.x kernel :-) If you have a
support contract with them then you could raise an issue about it not working
and see if you can get them to fix it in RHEL5.
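Roughly the sequence, from memory (the source path and the exact sysfs option
name are assumptions here, so double-check both):

  # start from the distro config for the running kernel
  cd /usr/src/linux-2.6.3x           # wherever you unpacked the new source
  cp /boot/config-$(uname -r) .config
  make oldconfig                     # answer the prompts for the new options
  # enable the legacy sysfs support (possibly CONFIG_SYSFS_DEPRECATED_V2,
  # which CentOS 5's old udev needs - verify the name in menuconfig)
  make menuconfig
  make rpm                           # builds a kernel RPM (under
                                     # /usr/src/redhat/RPMS on CentOS 5)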

Applying a patch sounds like more work than building your own RPM, since you
have to do all of that and apply the patch too.

--
Trevor Hemsley, Brighton, UK
Trevor dot Hemsley at ntlworld dot com
From: David Brown on
Rahul wrote:
> David Brown <david(a)westcontrol.removethisbit.com> wrote in
> news:4c5b3f72$0$12224$8404b019(a)news.wineasy.se:
>
>> Raid10 is easy, and thus fast - it's just stripes of mirrors and there
>> are no calculations needed. It's fast with either hardware or
>> software raid.
>
> Thanks for all the great tips everyone! I'm convinced I should change my
> strategy.
>
> I'm going with RAID10. I decided to buy nine 2-terabyte drives instead:
> eight will go into the RAID10 and one will be a hot spare. These days the
> cost difference between 1 TB and 2 TB disks is not a whole lot, and since
> I don't need high performance (IOPS), the number of spindles is not that
> big a deal.
>
> That gets me 8 TB of usable space, fairly good reads and writes, no parity
> calculation, no expensive rebuilds, and the array can survive 2-5 disk
> failures depending on how unlucky I am. Not bad for under $5000! I went
> with a standard LSI HBA and will use mdadm for the RAID10.
>
>> host IO is double for software raid10 (since it needs to write
>> everything twice explicitly), while with hardware raid10 the host IO
>> is single and the raid card doubles it up. But if you have plenty of
>> ram the writes will cache and you will see little difference.
>
> I have 16 GB of RAM and an 8-core Intel Xeon (Nehalem). Not sure if that's
> enough or not. But at least since no parity is calculated with RAID10, I
> don't have to worry about the "write hole". So no need for hardware RAID
> with its battery-backed memory.
>
>> With software mdadm raid10 you can do things like different layouts
>> (such as -p f2 for "far" layout) that can speed up many types of
>> access.
>
> Ah! That's new to me. I need to read up on that. I never knew about
> "layout" parameters.
>

There doesn't seem to be a huge amount of information about the effect
of the different layouts for raid10, but you can find a few useful
pages. Wikipedia has some comments:

<http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10>

Of course, the man pages for mdadm and md are important:

<http://linux.die.net/man/4/md>
<http://linux.die.net/man/8/mdadm>

For a backup system, you are probably best off with the -p o2 ("offset")
layout - reads will be reasonably close to raid0 speed if you have large
reads, while writes will be close to standard raid1 speeds (i.e., half of
raid0, since you need to write everything twice). The -p n2 ("near") layout,
which is standard raid10, will often only reach half of raid0 speed for
reads. The -p f2 ("far") layout is typically slightly faster for streamed
reads (pretty much straight raid0 speed), but slower on writes than -p o2
due to longer seeks for the mirrored writes.

But this is all quite dependent on the load - if you need the best speed
out of the system (you probably don't for backup applications), you'd
have to test it yourself. Whatever the layout, you still get the
benefits of raid10 reliability and fast recovery.
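For concreteness, the layout is just the -p/--layout option at creation
time. A sketch with placeholder device names - pick one of these depending
on the layout you want:

  # near layout (the default; equivalent to classic raid10)
  mdadm --create /dev/md0 --level=10 -p n2 --raid-devices=8 /dev/sd[b-i]
  # far layout: near-raid0 streamed reads, longer seeks on writes
  mdadm --create /dev/md0 --level=10 -p f2 --raid-devices=8 /dev/sd[b-i]
  # offset layout: a compromise that suits backup-type loads
  mdadm --create /dev/md0 --level=10 -p o2 --raid-devices=8 /dev/sd[b-i]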

Another option is to manually create a set of raid1 pairs, then create a
stripe across them all. That way you know exactly which copies are on which
drives - useful if you want to be able to remove more than one drive at a
time, or to ensure that the two copies are attached to different disk
controllers. The disadvantage of this setup is that your hot spare won't be
automatic, since a hot spare can only be attached to a single raid set at a
time. The trick then is a cron job that spots degraded sets and adds the
spare drive to whichever set needs it.
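A sketch of that manual raid1+0 arrangement, with placeholder device names
(here each pair spans two controllers, sd[b-e] on one and sd[f-i] on the
other, with sdj as the shared spare):

  # four mirrored pairs, one disk from each controller per pair
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdf
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdg
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdd /dev/sdh
  mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sde /dev/sdi
  # stripe across the pairs
  mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/md1 /dev/md2 /dev/md3 /dev/md4
  # the cron job's action when it finds a degraded pair is then just, e.g.:
  mdadm /dev/md2 --add /dev/sdj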


>> single disk. You can always triple-mirror if you want extra
>> redundancy.
>
> Nah. I think I'll use standard RAID10, just add another hot spare, and
> take my chances.
>
>> Performance differences will vary according to the load. Small writes
>> are terrible with raid6, but no problem with raid10. Large writes
>> will be better with raid6 compared to standard layout raid10 ("near"
>> layout in mdadm terms, or hardware raid10), but there will be less
>> difference if you use "far" layout mdadm raid10. Large reads are
>> similar, except that mdadm "far" raid10 should be faster than raid6.
>
> The system should be mostly writes. Not sure whether small or large. It is
> backup storage for rsync, so probably small writes.