From: Rahul on
Aragorn <aragorn(a)chatfactory.invalid> wrote in news:i3fbr4$dvf$2
@news.eternal-september.org:

> 1 GHz is impossible. :-) It's 1 MHz - based upon what you've pasted -
> but that applies for each CPU in your system.

Stupid me. Of course, you are right. :)

gcc timer.c
./a.out
>>>kernel timer interrupt frequency is approx. 1001 Hz<<<
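
A rough cross-check without the test program (a sketch assuming an x86 SMP
box, where the "LOC" line in /proc/interrupts counts the per-CPU local
timer interrupts):

grep ^LOC: /proc/interrupts; sleep 10; grep ^LOC: /proc/interrupts
# divide the per-CPU increase by 10 to get the tick rate in Hz;
# it should come out near 1000 on a HZ=1000 kernel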

--
Rahul
From: Pascal Hambourg on
Hello,

Rahul wrote:
> Aragorn <aragorn(a)chatfactory.invalid> wrote in news:i3fbr4$dvf$2
> @news.eternal-september.org:
>
>> 1 GHz is impossible. :-) It's 1 MHz - based upon what you've pasted -
>> but that applies for each CPU in your system.
>
> Stupid me. Of course, you are right. :)
>
> gcc timer.c
> ./a.out
>>>> kernel timer interrupt frequency is approx. 1001 Hz<<<

That's 1 kHz. Even 1 MHz (1000000 Hz) would be very high.
From: Pascal Hambourg on
Rahul wrote:
>
> I am running a 2.6 kernel (2.6.18-164.el51). This is supposed to have
> preemption. But is there a way to find out if this feature was compiled
> into my current kernel?

grep PREEMPT /path/to/config
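
For example (paths vary by distro; Red Hat kernels install their config
under /boot, and some kernels also export it at /proc/config.gz if
CONFIG_IKCONFIG_PROC is enabled):

grep PREEMPT /boot/config-$(uname -r)
zgrep PREEMPT /proc/config.gz

CONFIG_PREEMPT=y means full kernel preemption was compiled in;
CONFIG_PREEMPT_VOLUNTARY=y means only voluntary preemption points.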
From: David Brown on
On 05/08/2010 23:22, Rahul wrote:
> Aragorn<aragorn(a)chatfactory.invalid> wrote in
> news:i3f31g$jum$1(a)news.eternal-september.org:
>
>> On Thursday 05 August 2010 18:49 in comp.os.linux.hardware, somebody
>> identifying as Rahul wrote...

>>> The reason I'm switching over from RAID5 to RAID6 is that this time I
>>> am using eight 1-Terabyte drives in the RAID. And at this capacity I
>>> am scared by the horror stories about "unrecoverable read errors"
>>
>> Why not use RAID 10? It's reliable and fast, and depending on what
>> disk fails, you may be able to sustain a second disk failure in the
>> same array before the first faulty disk was replaced. And with an
>> array of eight disks, you definitely want to be having a couple of
>> spares as well.
>
> RAID10 could work as well. But is that going to be faster on mdadm? I
> have heard reports that RAID10, RAID5 and RAID6 are where the HWRAID
> really wins over SWRAID. So not sure how to decide between RAID10 and
> RAID6.
>
> This is the pipeline I aim to use:
>
> [ primary server ] -> rsync -> ethernet -> [ storage server ] ->
> rsnapshot -> LVM -> mdadm -> SATA disk
>

Raid10 is easy, and thus fast - it's just stripes of mirrors and there
are no calculations needed. It's fast with either hardware or software
raid.

It /may/ be faster with mdadm raid10 than hardware raid10 - this depends
on the particular hardware in question. mdadm will often be faster
since you are cutting out a layer of indirection, and there are no
calculations to offload. However, if you are doing a lot of writes, the
host IO is doubled for software raid10 (the host must explicitly write
everything twice), while with hardware raid10 the host writes the data
once and the raid card duplicates it. But if you have plenty of RAM,
the writes will be cached and you will see little difference.

With software mdadm raid10 you can do things like different layouts
(such as -p f2 for "far" layout) that can speed up many types of access.
You can also do weird things if you want, such as making a three-way
mirror covering all your 8 disks.
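
For example (a sketch only - device names are placeholders):

# 8-disk raid10 with the "far 2" layout
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=8 /dev/sd[b-i]1
# or three copies of everything spread across the same 8 disks
mdadm --create /dev/md0 --level=10 --layout=n3 --raid-devices=8 /dev/sd[b-i]1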


There are a number of differences between raid6 and raid10.

First, with 8 disks in use, raid6 gives you 6 TB while raid10 gives you
4 TB.

Theoretically, raid6 can survive two dead disks. But with even a single
dead disk, you've got a very long re-build time which involves reading
all the data from all the other disks - performance is trashed and you
risk provoking another disk failure. With raid10, you can survive
between 1 and 4 dead disks, as long as only one disk per pair goes bad.
Rebuilds are simple copies, which are fast and only need to read the
surviving half of one mirror pair. You can always triple-mirror if you
want extra redundancy.
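
To make that concrete (again only a sketch, with placeholder device
names), an 8-disk raid6 and the commands to watch a rebuild after
swapping a failed member:

# 8-disk raid6: two parity blocks per stripe, ~6 TB usable from 1 TB drives
mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[b-i]1
# after replacing a failed disk, add it back and watch the resync
mdadm /dev/md1 --add /dev/sdb1
cat /proc/mdstat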

Performance differences will vary according to the load. Small writes
are terrible with raid6, but no problem with raid10. Large writes will
be better with raid6 compared to standard layout raid10 ("near" layout
in mdadm terms, or hardware raid10), but there will be less difference
if you use "far" layout mdadm raid10. Large reads are similar, except
that mdadm "far" raid10 should be faster than raid6.
From: Chris Cox on
On Thu, 2010-08-05 at 16:49 +0000, Rahul wrote:
> Are there any users using mdadm for RAID6? I wanted to get a performance
> estimate. Usually for RAID0,1,5 etc. I prefer well-tested, open-source
> mdadm to some sort of hardware implemented proprietary RAID card.
>

Have not tried it... but it should perform OK. HW raid solutions
usually cannot afford expensive general-purpose CPUs... that's the
primary difference: you often get a very low-end CPU on the card, so
sometimes HW raid is not all that great. Not that it takes much to
generate the parity info...

> But I've read that the checksum computation for RAID6 is computationally
> expensive. Hence I was wondering if it would be foolhardy to attempt a
> RAID6 via software-RAID. Or maybe not? It's going to be a disk-to-disk
> backup storage so performance is not super critical but I don't want to end
> up with something that crawls either.

The performance issue is strictly on writes; reads should be like RAID
5. Even so, the HW raid subsystems I use usually have enough cache that
I do not feel any difference at all.
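
One quick way to gauge the raid6 parity overhead on a given box (assuming
the raid6 module is loaded and the boot messages are still in the buffer):

dmesg | grep -i raid6
# at load time the kernel benchmarks its raid6 syndrome routines and
# prints something like "raid6: using algorithm sse2x4 (NNNN MB/s)"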

>
> In case I am forced to go with a hardware solution, though, are there
> recommendations for cards that play well with Linux? I'm thinking LSI is
> making some good cards but if there are any caveats I'd love to hear them.

I USED to use HW RAID cards with Linux until I discovered that HW
manufacturers would let their support lapse, etc.... that leaves you
with one expensive (depends) boat anchor of a card.

So, IMHO, there are really only two choices:

1. md SW RAID (which is quite good if configured with hotswap, etc.)

2. RAID subsystems (usually external).

Personally, I'd avoid HW RAID cards... I'm sick of them. And I've owned
plenty of the "best" and all were slower than using SW raid.
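
One way to get the hotswap/spare setup from option 1 (md SW RAID) above
(a sketch; device names and the mail address are placeholders):

# create the array with one hot spare that takes over automatically
mdadm --create /dev/md0 --level=6 --raid-devices=8 --spare-devices=1 /dev/sd[b-j]1
# have mdadm watch all arrays and mail on failure events
mdadm --monitor --scan --daemonise --mail=root@localhost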


>
> The reason I'm switching over from RAID5 to RAID6 is that this time I am
> using eight 1-Terabyte drives in the RAID. And at this capacity I am scared
> by the horror stories about "unrecoverable read errors"
>
>

:-) Well... the issue is that the time to rebuild a large drive in RAID
5 is very long, and during that window the odds of losing another drive
are greater.

With that said, all of my newer external RAID subsystems are RAID 6, not
that I've had any issue with the older RAID 5 ones (even those with 400G
drives).

Why external subsystems? More flexibility, SAN, etc. (SAN NOT meaning
iSCSI... you can certainly turn your md soln into an iSCSI target using
Linux).
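
For that last bit, one way to do it (a sketch assuming the
tgt/scsi-target-utils userspace target; the IQN is made up):

# export an md array as an iSCSI LUN with tgtadm
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2010-08.example:backup.md0
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/md0
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL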