From: John A. Sullivan III on
On Fri, 2010-08-06 at 14:33 -0700, Justin The Cynical wrote:
> On 8/5/10 11:18 PM, Phillipus Gunawan wrote:
> > Hi There,
> >
> > Sorry if this is a bit of an off-topic discussion. I am planning to get a
> > file server dedicated to iSCSI.
> >
> > My options are to grab a Thecus N4200 or to build an OpenFiler box with any
> > dual-core CPU, >1 GB RAM, a RocketRAID 644 controller, and other basic PC stuff.
> >
> > I have been googling to compare OpenFiler vs FreeNAS (winner for me: OpenFiler)
> > and cheap iSCSI-capable barebones NAS boxes from Thecus vs QNAP (winner: Thecus N4200).
> >
> > When it comes to comparing OpenFiler vs the Thecus N4200, I cannot find anything
> > useful on Google. I have used OpenFiler for a bit, creating 2 or 3 iSCSI targets and
> > shared folders, but only for a short time, just for fun.
> >
> > My concerns:
> > - HDD reliability checking (live error checks, reporting when one of the HDDs
> >   is faulty, has bad sectors, etc.)
> > - RAID/iSCSI re-sizing when I add another HDD
> > - Ease of maintenance
> >
> > So, if any experts can shed some light on this, it would be much appreciated. For a
> > start, I will only go with 3x 2 TB HDDs for either option, to be configured as RAID 5.
>
> Have you looked at NexentaStor? The free community edition is good for up
> to 12 TB. It's based on OpenSolaris, not Linux, but it uses apt-get for
> packages and upgrades.
>
> CIFS: Check
> NFS: Check
> iSCSI: Check
> ZFS: Check
>
> ZFS at this time isn't all that good at dynamic resizing, but I believe
> Sun/Oracle is working on that functionality.
>
> I looked at FreeNAS myself, but the ZFS version it supports is very
> experimental. OpenFiler, while I like the idea behind it, is based on
> CentOS (which is a clone of that bloated thing called Red Hat. Ugh).
>
> Not to start a flame war or argument, but I'd skip the hardware RAID
> card and go software. Most CPUs available have plenty of power to
> handle the calculations needed for RAID. Also, you are not locked into
> a single vendor, nor do you have to worry about replacement when the
> RAID card dies (and it will eventually).
>
> I'm running three VMs on an ESXi 4 machine with an iSCSI datastore backed by
> NexentaStor before moving my production VMs over to it, and so
> far, I'm liking what I see overall. It seems to be a bit on the hungry
> side, like OpenFiler would be, but it's quite usable with the 4 GB of RAM
> and the C2D 7500 CPU I have in it.
>
>
We are using the commercial version of Nexenta and have had mixed
results. Portions of it are outstanding while others have been a
disaster. The founders certainly know their technology and were, I
believe, the authors of the Linux iSCSI implementation.

Our first problem was abysmal performance. To their credit, Nexenta
support worked tirelessly on the problem, but the ultimate assistance
came from the dm-multipath mailing list. The fault was not entirely
Nexenta's: we were using iSCSI to back Linux file services. Because
Linux memory pages are limited to 4 KB, the maximum file system block
size is also 4 KB. At that small size, no matter how fat the pipe, the
data transfer rate is bound by latency (the number of I/Os per second)
rather than bandwidth. We had difficulty pushing our multi-gigabit
links beyond 40 Mbps with 4 KB block sizes, whereas we could saturate
them at larger block sizes.
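
To see why, here is a rough back-of-the-envelope sketch in Python. The
~800 us round trip per I/O is an illustrative assumption rather than a
measured figure from our setup, but it shows how a 4 KB block size caps
throughput no matter how much bandwidth is available:

# Minimal model: strictly serialized synchronous I/O, one outstanding
# request at a time, so each block costs one full round trip and
# throughput = block_size / round_trip_time, regardless of link speed.

def throughput_mbit(block_bytes, rtt_seconds):
    return block_bytes / rtt_seconds * 8 / 1e6

for size_kib in (4, 64, 128):
    mbit = throughput_mbit(size_kib * 1024, 800e-6)  # assumed ~800 us RTT
    print(f"{size_kib:>3} KiB blocks -> ~{mbit:.0f} Mbit/s")

#   4 KiB blocks -> ~41 Mbit/s    (right around the ~40 Mbps we saw)
#  64 KiB blocks -> ~655 Mbit/s
# 128 KiB blocks -> ~1311 Mbit/s  (enough to fill a gigabit link)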

Nexenta's contribution to the problem is that the OpenSolaris network
stack adds a huge amount of latency - well over an order of magnitude
more than Linux (200 us to 700 us pinging its own interface!). We have
not yet upgraded to Nexenta 3.x, which I gather is a major OS upgrade,
so it is quite possible that latency has improved there. I have just
tested the version we are currently running and it is averaging 40 us
to 80 us, a considerable improvement. Our Linux systems seem to average
around 20 us.
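
To put those ping numbers in context, the same toy model shows how much
the stack latency matters at a fixed 4 KB block size. The ~100 us of
initiator and wire overhead below is an assumption for illustration
only:

# Same serialized-I/O model, holding the block size at 4 KiB and varying
# only the target-stack latency; WIRE_US is an assumed per-round-trip
# overhead for the initiator and the network.

WIRE_US = 100
BLOCK = 4 * 1024  # 4 KiB filesystem blocks

for label, stack_us in (("old stack ~700 us", 700),
                        ("current    ~60 us", 60),
                        ("Linux      ~20 us", 20)):
    rtt = (stack_us + WIRE_US) * 1e-6
    print(f"{label}: ~{BLOCK / rtt * 8 / 1e6:.0f} Mbit/s with 4 KiB blocks")

# old stack ~700 us: ~41 Mbit/s
# current    ~60 us: ~205 Mbit/s
# Linux      ~20 us: ~273 Mbit/s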

We then tried to implement LDAP and it was pathetic, e.g., no
sub-queries, limited options, finicky syntax. It was absolutely useless
to us. Again, we have not tested this on the latest software.

Quality control has been an issue. The company tends to think like
developers who don't mind trashing and rebooting systems rather than
like enterprise sysadmins who need to keep systems running 24x7. As an
extreme example, an undocumented change between upgrades resulted in
our corrupting our RAID arrays and a catastrophic data loss the day
BEFORE we implemented our disaster recovery backups (shame on us).
This was probably very specific to our environment as a pre-release
adopter of COMSTAR but was an absolute disaster nonetheless.

We then tried to implement SNMP. Despite following the instructions
carefully, the entire system hung, knocking the disk subsystems out
from under roughly 30 servers. Again, this was pre-3.x.

The day before we were supposed to put the system into production, it
decided that one of the disks was bad and then decided it was not.
That's not so bad in itself, but it did all the processing as a
foreground process consuming all CPU. The result: it knocked the disk
subsystems out from under 30 servers and provoked a 30-hour-plus outage
while it slogged its way through (followed by ages of fsck).

The week we went into production, it decided the disk was bad after
all and swapped it out with a spare. Again, resilvering became a
foreground process consuming all CPU. Thankfully, it only provoked a
five-minute outage on the disk subsystems and the servers were able to
recover.

To Nexenta's credit, this bug has been fixed in 3.x. To their great
discredit, they released as production-ready a system with this kind of
bug.

On the other hand, ZFS is incredible. As much as we've wanted to dump
Nexenta because of the above problems, we haven't found anything that
quite compares. Until someone else releases a ZFS-based system or BTRFS
matures (or Nexenta makes some other catastrophic quality-control
blunder), we're going to try to stick with them. This is largely due to
our trust in Pogo Linux, who sold us the original system and with whom
we've been pleased.

So Nexenta is certainly a system worth (cautiously) evaluating - John

