From: n17ikh on
I realize this is fairly specific, which is why google has failed me
and I come to Usenet. I'm looking for an (obviously Linux-
compatible) ATA hard drive controller (RAID is OK, but won't be
used; I use MD RAID). It needs to be PCI-X or PCI-64/66 and have
either 4 channels with the capability for two drives per channel or,
more preferably, 8 channels with single-drive-per-channel
capability. I've found one in the 3ware 7506-8, but all those have
mysteriously vanished off the face of ebay. If someone knows where
I can find one of those used or the model number/manufacturer of
something else I could use, I'd be eternally grateful. Preferably,
I'd also like to find something under 200 USD, or even 150 USD - that
would have been doable with the 7506-8s, but unfortunately they're
gone, apparently for good.
Anyways, if anyone knows a good alternative, I will love you
forever. In a good way.

TIA
-n17ikh
From: Frantisek.Rysanek on
Why ATA, why not SATA? Do you have a hoard of parallel ATA drives?
If you're planning to buy new drives, you should definitely try SATA.
Parallel ATA is a pain - the cables are wide, PATA hardware eats more
power, the parallel bus with its myriad contacts is more prone to
errors... Master/slave attachment is a problem when used in a RAID.

The 3ware you mentioned was a true hardware-based RAID controller,
rather than a bare ATA controller. I don't recall ever coming across an
eight-channel bare ATA controller. Perhaps Promise and 3ware used to
make something, but those boards were always marketed as RAID
controllers. Consider buying maybe four Promise dual-port bare PATA
controllers instead.

As for octal bare SATA HBA's, I'd recommend the following:
- anything with Marvell 88SX6081 - usually an onboard chip
- anything with Adaptec AIC9410 (SAS) - usually an onboard chip
(see e.g. http://www.supermicro.com/products/motherboard/matrix/)
- the Promise SATAII150 SX8 - an open-source Linux driver is available on
the Promise website

None of the above is exactly seamless to get going in Linux, but not
impossible either.
There are also a number of quad SATA HBA's that are supported by
vanilla Linux kernels.

Frank Rysanek

From: n17ikh on
Yes actually, I *do* have a hoard of PATA drives (got a good deal).
The reason I'm looking to upgrade to a PCI-X solution is that right
now I'm using two 4-drive Highpoint cards that are plain PCI, and with
that many drives in RAID-6 the bottleneck in both reading and writing
is the PCI bus itself: a write has to go to all 9 drives (one is on
the onboard controller) at once, and since PCI has a bandwidth of
133 MB/sec, I get 133/9 MB/sec, or
around 14 MB/sec. In practice it's even less, because usually that
data comes from the network, which also uses the PCI bus, so I get a
half or a third of that. Practically unacceptable, but when you're
on a budget, what can you do, eh? Using the PCI-X slots on my board
should solve or at least alleviate that problem.
-n17ikh
From: Frantisek.Rysanek on
14 Megs per second? That seems too slow, even for plain old PCI
(32 bits @ 33 MHz).

My experience is that this plain PCI bus on reasonably modern chipsets
(i845 and above) throttles at about 100 MBps. I've measured that with
sequential transfers to various external RAID units, capable of 150-220
MBps, via a U320 HBA.

If you have 9 drives on a single PCI bus, their total bandwidth should
still be around 100 MBps. In RAID 6, two drives are parity overhead.
That's some 20 per cent off your total bandwidth. Thus, I'd expect
about 70-80 MBps under easy sequential load, such as
cp /dev/zero /dev/md0
or
cp /dev/md0 /dev/null
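
If you prefer a single command that reports its own rate, a rough
equivalent using dd (the bs/count values below are just examples; a
reasonably recent GNU dd prints a throughput summary when it finishes,
older versions only print record counts, in which case keep iostat
running). Mind that the write test is destructive - only run it on an
array holding no data you care about:
dd if=/dev/md0 of=/dev/null bs=1M count=4096     # sequential read, ~4 GB
dd if=/dev/zero of=/dev/md0 bs=1M count=4096     # sequential write, ~4 GB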

You're right that if this RAID machine serves files via a Gb Eth NIC on
the same PCI bus, you'll get half of the bandwidth eaten by the NIC.

What CPU do you have? You seem to say that you can choose between PCI
and PCI-X in your system - that doesn't look like completely feeble
hardware. Makes me think that your PC has a Pentium 4 in an entry-level
server chipset, such as the i875 or i7210, combined with a 6300ESB (PCI
and PCI-X). Or it could be an older Pentium 3 server with a ServerWorks
chipset... Either way, you shouldn't be starved of CPU horsepower for
the RAID operations (XOR, Reed-Solomon). How much estimated throughput
does "md" report at boot for the algorithm it selects? (See dmesg.)
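
For example (a rough sketch - the exact wording of the md benchmark
messages varies between kernel versions, so adjust the pattern as needed):
dmesg | grep -iE 'raid[56]|xor'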

Have you tried putting one of your PCI IDE controllers into a PCI-X
slot? You *can* put a 32bit PCI board into a 64bit slot and it should
work, provided that the 5V/3.3V keying of the slot and of the board
is mutually compatible... (You can even
put a 64bit board into a 32bit slot, for that matter - not your case,
though.)

If you can put each IDE controller into a separate PCI segment, that
should double your bandwidth to the disk drives. Or you could keep the
two IDE controllers together on one bus, and use the other bus for the
NIC, if you don't have an Ethernet PHY integrated in your south-bridge,
or attached via some proprietary Ethernet-only link from the
south-bridge...
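
One way to verify which PCI bus each card actually sits on is lspci,
from the pciutils package - for instance:
lspci -tv              # shows the PCI topology as a tree
lspci | grep -i ide    # bus:device.function of the IDE controllers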

Also note that if you have a 6300ESB, the "Hub Link" between the south
bridge and the north bridge is only capable of 266 MBps; I don't
remember whether that's full or half duplex. So the PCI64@66 segment
off the south bridge, nominally capable of 533 MBps (half duplex), can
be throttled by the Hub Link.

Back to your weird symptoms. 14 MBps is *really* slow, regardless of
your chipset and CPU, unless it's a Pentium-class machine.

What sort of load do you have? Large files? Myriad small files? A
database? Sequential transfers? Small random rewrites? Note that RAID 5
and RAID 6 behave as if poisoned when asked to do a lot of tiny write
transactions that trash the cache. On every such write, the RAID first
has to read the whole corresponding stripe set, calculate the parity
stripes (two of them for RAID 6) and finally write the payload stripe
and the parity stripes.
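
A crude way to see this effect (a sketch only - the offsets and sizes
are arbitrary examples, it assumes a GNU dd that understands
oflag=direct, and it overwrites data on the array): issue a few small
writes at scattered offsets while iostat is running, and watch the
member drives do reads even though you're only writing:
for off in 1000 250000 500000 750000; do
  dd if=/dev/zero of=/dev/md0 bs=4k count=1 seek=$off oflag=direct
done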

None of this happens when reading (from a healthy array). No parity
calculation. Close to RAID0 performance.

Let me suggest an easy test of sequential reading performance:
cp /dev/md0 /dev/null
and, on another console,
iostat 2
The iostat utility is part of the "sysstat" package and shows transfers
in units of 512-byte sectors, i.e. divide the "blocks per second" figure
by two and you get a transfer rate in kBps...
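
Depending on the sysstat version, iostat may also accept -k to report
kilobytes per second directly, which saves the division:
iostat -k 2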

Do your disk drives run in UDMA mode? You should be able to find out
with
hdparm -I /dev/drive
Some IDE drivers also report that on boot (see dmesg).
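
For instance, to see just the DMA mode lines (hdparm marks the currently
selected mode with an asterisk):
hdparm -I /dev/hda | grep -i dma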

Are all your disk drives healthy? There are several possible clues.

Firstly, if you try a sequential reading test (see above) on a single
disk drive, with iostat measuring your throughput, does the transfer
rate fluctuate? It should not; the drive should read at a fairly stable
pace. If the transfer rate drops to almost zero now and then, and then
goes back to normal, that's a sign that your drive has weak areas, or
that it has "transparently remapped" sectors and has to seek during
linear reading to reach them. Try holding a suspicious disk drive
in your hand - if you feel sudden seeking now and then (while the
transfer rate drops), you know what the problem is.
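
For instance, to test one member drive at a time (the device name below
is just a placeholder - substitute each of your drives in turn), run the
read in one console and iostat in another:
dd if=/dev/hde of=/dev/null bs=1M count=2048    # example device name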

Secondly, download and compile smartmontools. You need 'smartctl'. Try
smartctl -a /dev/hda
Either you get some three pages of data, or the drive complains about an
unknown command (DriveReady SeekComplete ...), which probably means that
S.M.A.R.T. is turned off. Turn it on and try again:
smartctl -s on /dev/hda
smartctl -a /dev/hda
Focus on the SMART error log. If smartctl says that there are no errors
in the SMART error log, that may or may not mean that the drive is OK.
I believe the SMART error log 'sector' is stored on the disk itself,
and some drives seem to be flawed to the point that they can't even log
their own errors... If there *are* some errors in the SMART log, that
should be enough evidence for an RMA of the disk drive.
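
Apart from the error log, a couple of the SMART attributes are worth a
glance as well, e.g. (attribute names differ slightly between drive
vendors):
smartctl -a /dev/hda | grep -i -e reallocated -e pending -e uncorrect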

Do your IDE controllers share IRQs, either with each other or with other
devices? If your machine has an APIC, is it enabled? (It provides
additional IRQ lines and reduces the amount of IRQ sharing.)
If unsure, post a listing of
cat /proc/interrupts

It's also theoretically possible that your quad IDE controllers have a
design flaw that cripples their performance, but I've never actually
come across such hardware... The driver would have to use polled I/O to
achieve transfer rates this low :-)

Frank Rysanek



From: Frantisek.Rysanek on
Ah, sorry, I forgot to ask: did you mean 14 Megs per second *per
drive*?

Frank Rysanek
