[Ma-linux] alternatives to NFS
ejb at ql.org
Fri Jan 11 22:33:45 EST 2008
Thanks for the many interesting and useful responses. I'll reply to a
few at once.
One quick piece of background: basically, I've just transferred within
the company and have started helping address some stability and
performance problems within the development network. I'm just trying
to informally survey what others are doing to make sure that there
isn't anything obvious that I'm overlooking and to see whether others'
experiences are similar to mine. It looks like others have had pretty
good luck with NFS in some environments, which is encouraging. I'll
answer a few things people asked.
Serge Wroclawski <serge at wroclawski.org> wrote:
> On Thu, Jan 10, 2008 at 05:40:08PM -0500, Jay Berkenbilt wrote:
>> I'm curious to find out what others are doing to support network file
>> sharing in a medium to large scale Linux/UNIX environment.
>> The solution does not have to be low cost or use free software.
>> Though I always prefer such solutions, in this instance,
>> performance concerns trump those concerns.
> What's your budget?
I'm not exactly sure, but I don't think the company would blink at
$100K. They would probably balk at $500K. So we're most likely
somewhere in between. It would be useful to me to have some kind of
sense of what I could do for various amounts...say $10K, $100K, $1M.
Even though I know I could easily get $10K just to try things out and
that $1M would be more than I think would be approved, it's nice to
have some points of comparison.
>> Even with optimal performance (one client, one server, no other
>> network traffic), the fastest networks are not as fast as local
>> disk.
> They can be faster. I've not seen hard drives that can give me 100
> megabytes per second writes. I've seen that on some network storage
> devices.
Any in the $100K to $500K range? We're talking about probably on the
order of 10s of terabytes of space. Maybe close to 100 TB, but not
more than that for the foreseeable future. We're supporting about 500
users at the moment. I know our network won't support 100 MB/sec
writes from the desktop, but it's not inconceivable that we could get
this from some servers or build systems if the underlying system
could support it.
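For context, the gap between our desktop links and the 100 MB/sec
figure is easy to put in rough numbers (a back-of-envelope sketch
using the bandwidths mentioned in this thread; raw line rates, before
protocol overhead):

```shell
# 100 Mb/s desktop link: divide by 8 to get bytes per second.
desktop_mbps=100
echo "desktop ceiling: $((desktop_mbps / 8)) MB/s"

# 1 Gb/s link: ~125 MB/s raw, which is where a 100 MB/s NFS write
# figure starts to be plausible on a single client.
gige_mbps=1000
echo "gigabit ceiling: $((gige_mbps / 8)) MB/s"
```

So even a perfect network file system is capped at roughly 12 MB/s
per desktop until the 1 Gb/s upgrade lands.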
Jason <ma-linux at jasons.us> wrote:
> There are a couple of other global namespace filesystems as well,
> such as the former Sistina, which RedHat bought and now calls GFS.
> If you want payware you could look at Isilon, Panasas, IBM GPFS,
> BlueArc, IBRIX and, of course, NetApp. Before NetApp I spent four
> years working for one of the other companies mentioned here so
> having spent five years doing this stuff I'll echo Serge's comments
> that it really does depend on your budget and requirements. NFS v3,
> especially over TCP, is vastly more reliable and faster than its
> reputation. I have customers with compute clusters numbering in the
> thousands of cores using NFS to talk to hundreds of TBs of storage
> and it just works. NFS v4 adds tighter security, as has been
> mentioned, and NFS v4.1, due out in a year or two, will add pNFS,
> making it that much faster without requiring client-side drivers.
This is particularly useful information. At the moment, our NFS
servers are mostly Solaris 8, though some are Linux. We all believe
this is a large part of the problem but have not been able to move
away from it for historical reasons; everyone is convinced that
moving off Solaris 8 is critical. Getting a NetApp box of
some sort is on the table. I don't know whether the box is going to
implement its own networking or be connected with fiber to a Solaris
10 box or something else.
The vast majority of clients are Linux. To my knowledge, no special
tuning has been done. We also have only 100 Mb/s to the desktop,
which means that *any* network file system will be seriously
constrained. This should go to 1 Gb/s within about 18 months. Having
some specific systems connected with faster connections may be good
enough even if most desktops have slower access. We'll see.
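As an example of the client-side tuning we haven't done yet, an
NFSv3-over-TCP mount on a Linux client might look like this in
/etc/fstab (the server name, export path, and the rsize/wsize values
below are illustrative starting points, not measured recommendations):

```
# Hypothetical /etc/fstab entry; tune rsize/wsize against your
# network.  proto=tcp selects NFS over TCP; hard avoids silent data
# corruption on server outages, and intr lets blocked I/O be
# interrupted.
nfsserver:/export/home  /home  nfs  proto=tcp,vers=3,rsize=32768,wsize=32768,hard,intr  0 0
```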
"Lee R. Burton" <lburton at mrow.org> wrote:
> I am using AFS in two production environments (TJHSST and a server
> group of mine). MIT and CMU are as well. AFS performance is pretty
> much on par with NFS for most operations. I haven't done very many
> benchmarks; however, AFS scales very well compared to NFSv3 (haven't
> tried v4 yet) and has some nice features that are just showing up
> now in NFSv4. AFS cache size will not help your write speeds, but
> it can dramatically affect your reads of data, especially if you're
> using AFS from a remote location/over the internet (which it
> supports without a VPN/etc. through its "global namespace"). The
> other nice thing about AFS over NFSv4 is platform support: it runs
> well on Linux, Windows, Solaris, some BSDs, and OS X.
This is consistent with my experience. Although we're going to do
some measurements on AFS, I think it's unlikely that we will choose
it. There are many ways in which I think it would work very well for
us, but I think I can achieve the same effect in most of those cases
by using rsync to simulate cache and read-only replication.
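A minimal sketch of that rsync idea, using local temporary
directories to stand in for a master and a read-only replica (real
hosts would be reached over ssh, e.g.
rsync -a --delete master:/export/tree/ replica:/export/tree/):

```shell
#!/bin/sh
# Simulate read-only replication: mirror a "master" tree to a replica.
master=$(mktemp -d)
replica=$(mktemp -d)

echo "shared data" > "$master/file.txt"

# -a preserves permissions and timestamps; --delete keeps the replica
# an exact mirror, propagating removals as well as additions.
rsync -a --delete "$master/" "$replica/"

cat "$replica/file.txt"

rm -rf "$master" "$replica"
```

Clients would mount or read the replica side read-only; rerunning the
rsync from cron approximates AFS's read-only volume releases, without
giving us AFS's callback-based cache consistency.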
Thanks for all the replies!
Jay Berkenbilt <ejb at ql.org>