From: Rahul on
I have a master server that must accept NFS mounts from ~300 other
machines. Each of the slave machines has a bonded 2-gigabit port.

In anticipation of the high network loads on the master node, would it be
worthwhile to set it up with a 10-gigabit Ethernet card? Anything else
that could be done? How about on the switch side? Can I get a switch with
one or a few high-speed ports, so that the master node plugs into a
high-speed pipe?

The server is going to be an Intel Nehalem E5520 with about 16 GB of RAM.
Sound reasonable? Or should I be ramping up the RAM, etc.?

Finally, I have heard rumors that NFS is pretty passé, especially for
large installations. Any good alternatives?

--
Rahul
From: Rahul on
The Natural Philosopher <tnp(a)invalid.invalid> wrote in news:h7o5tj$fro$2
@news.albasani.net:

> NFS over UDP or TCP?
>

Either way is OK with me; I am not really sure which one I have been
using. This is a closed environment with its own switches, address space,
and LAN, so I guess whichever option gives the better performance, TCP or
UDP.


--
Rahul
From: Rahul on
Chris Cox <chrisncoxn(a)endlessnow.com> wrote in
news:1252002208.6805.128.camel(a)geeko:

Thanks Chris! Sorry, I never noticed your very useful reply.

> If you have any really old NFS clients out there, don't do this.

None. All new machines. So then I ought to do NFS over UDP? Where exactly
is this specified, UDP versus TCP?
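
(My guess is that it's a client-side mount option, something like the
fstab line below, with the server name and paths obviously made up;
please correct me if that's wrong:)

  # hypothetical /etc/fstab entry on a client; "master" and the paths are placeholders
  master:/export/home  /home  nfs  proto=tcp,rsize=32768,wsize=32768,hard,intr  0 0
  # or proto=udp to force UDP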

> That's ok. Reasonable (on the extreme side). I use a LOT less and
> serve up well over 100 clients without issue.. just 1Gbit network.

These are HPC nodes though, notorious for doing a lot of I/O and doing it
24/7.


> They usually get 30+MB/sec or so.... so, not terribly shabby. Single
> client benchmark (last one I did before going live on the 1Gbit
> network) showed 92MB/sec on seq. read and 60MB/sec on seq. write
> (random io was good in the 40-50MB/sec range)...

I just posted bonnie++ output from my similar (but much smaller) NFS
setup.

http://dl.getdropbox.com/u/118481/io_benchmarks/bonnie_op_node25.html
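
(In case it helps with the comparison: a bonnie++ run against an
NFS-mounted directory looks roughly like the command below. The mount
point is just a placeholder, and the -s size, given in MB, should be a
couple of times the client's RAM so the page cache doesn't hide the
network.)

  # hypothetical paths; -s 32768 means 32 GB of test data, -n 0 skips the small-file phase
  bonnie++ -d /mnt/nfs/scratch -s 32768 -n 0 -u nobody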

I'm still trying to figure out which of your numbers to compare with
which of my corresponding numbers! :)

> ... which might not be
> picture perfect, but good enough (same network as normal traffic, no
> jumbo frames).

Should I use jumbo frames? I mean, there are no compatibility issues for
me; all of this is my private network end-to-end.


> Our NAS is split into two servers each serving up about 2TB max. Each
> is a DL380G5 2x5130 with 8G ram with 8 nfsd's each. Backend storage
> comes off a SAN. Both are running SLES10SP1 currently. Just checked,
> one is serving to about 150 client hosts and the other about 110.
> TONS of free memory. No evidence of them EVER swapping. So I still
> think 16G is overkill.
>
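
(Side note, mostly for the archive: I assume the "8 nfsd's" you mention
is just the number of kernel server threads. If I'm reading things right
it can be checked and bumped roughly like this, though the sysconfig
location for making it permanent varies by distro:)

  # current thread count (needs the nfsd filesystem mounted, as on most distros)
  cat /proc/fs/nfsd/threads
  # set it to 8 threads on the fly
  rpc.nfsd 8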

Any way to check what's the RAM utilization of my current NFS server
setup? I tried nfsstat but it doesn't show me anything useful.



--
Rahul
From: Jerry McBride on
Chris Cox wrote:

>
>
> On Fri, 2009-09-25 at 01:51 +0000, Rahul wrote:
>> Chris Cox <chrisncoxn(a)endlessnow.com> wrote in
>> news:1252002208.6805.128.camel(a)geeko:
>>
>> Thanks Chris! Sorry, I never noticed your very useful reply.
>>
>> > If you have any really old NFS clients out there, don't do this.
>>
>> None. All new machines. So then I ought to do NFS over UDP? Where
>> exactly is this specified, UDP versus TCP?
>
> NO. No you do NOT want to use UDP. It's just really old systems that
> had this restriction.
>
> NFS UDP over "high speed" networks (gigabit) will result in corruption.
>
>>
>> > That's ok. Reasonable (on the extreme side). I use a LOT less and
>> > serve up well over 100 clients without issue.. just 1Gbit network.
>>
>> These are HPC nodes though, notorious for doing a lot of I/O and doing
>> it 24/7.
>>
>>
>> > They usually get 30+MB/sec or so.... so, not terribly shabby. Single
>> > client benchmark (last one I did before going live on the 1Gbit
>> > network) showed 92MB/sec on seq. read and 60MB/sec on seq. write
>> > (random io was good in the 40-50MB/sec range)...
>>
>> I just posted bonnie++ output from my similar (but much smaller) NFS
>> setup.
>>
>> http://dl.getdropbox.com/u/118481/io_benchmarks/bonnie_op_node25.html
>
> Not great. This was an NFS test across gigabit?? Reads look bad.
>
> With that said, there are good versions of bonnie++ and bad versions.
> What version did you use?
>
> But still, I'm not aware of a version of bonnie++ that had a problem
> with block reads.
>
>>
>> I'm still trying to figure out which of your numbers to compare with
>> which of my corresponding numbers! :)
>>
>> > ... which might not be
>> > picture perfect, but good enough (same network as normal traffic, no
>> > jumbo frames).
>>
>> Should I use jumbo frames? I mean, there are no compatibility issues
>> for me; all of this is my private network end-to-end.
>
> Probably NOT. You can convert to jumbo frames IF ALL NICS are running
> Jumbo frames (whole network NO EXCEPTIONS). If you don't, you'll get
> frame errors all over the place.
>

Don't forget... along with jumbo-frame-compatible NICs, you'll need the
same capability in your switch boxes...
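
If you do go that route, the MTU itself is set per interface, roughly
like this (eth0 and the host name are just examples, and every NIC, bond
and switch port in the path has to be set to match):

  # raise the MTU on one interface; the switch must also allow ~9000-byte frames
  ip link set dev eth0 mtu 9000
  # sanity check: force a full-size, unfragmented packet across the wire
  ping -M do -s 8972 some-other-host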


>>
>>
>> > Our NAS is split into two servers each serving up about 2TB max. Each
>> > is a DL380G5 2x5130 with 8G ram with 8 nfsd's each. Backend storage
>> > comes off a SAN. Both are running SLES10SP1 currently. Just checked,
>> > one is serving to about 150 client hosts and the other about 110.
>> > TONS of free memory. No evidence of them EVER swapping. So I still
>> > think 16G is overkill.
>> >
>>
>> Any way to check what's the RAM utilization of my current NFS server
>> setup? I tried nfsstat but it doesn't show me anything useful.
>
> free?
>
> I've got a pretty heavily hit setup... we just have 8G of ram... and I
> doubt we ever use it all.
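
For what it's worth, plain old free is usually all it takes here; on a
2.6 kernel the "-/+ buffers/cache" line is the real usage and the rest is
just page cache. Something like:

  # memory use in MB; look at the "-/+ buffers/cache" row, not the top one
  free -m
  # one-shot view of kernel slab caches (dentries, inodes, NFS bits show up here)
  slabtop -o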

--

*****************************************************************************

From the desk of:
Jerome D. McBride

10:03:35 up 1 day, 15:43, 4 users, load average: 0.10, 0.13, 0.15

*****************************************************************************