On Nov 19, 2008, at 12:15 AM, Harry M. Greenblatt wrote:
> There are, of course, alternatives these days in the Unix world to
> NFS, in the form of other distributed file systems,
Let us not forget the most sane and far superior alternative to a
distributed file system: a local file system. I've been running a
local file system for several years now and I find that it is faster
and more reliable than distributed alternatives like NFS.
NFS was invented to solve the specific problems of computing and
storage resource limitations in a shared computing environment. Its
buggy implementation, poor performance, and complete lack of security
are symptoms of its anachronistic status. In the world of $400
core2duos, 4GB addressing, and 1TB hard drives, do we really need the
overhead of distributed file systems?
To emulate some of the functions of a distributed file system like
data backup, consistent system configuration, facilitation of shared
projects, and one-to-many user-to-computer relationships, I suggest a
combination of SVN and scheduled backups using something like rsync or
unison.
My advice to those running a distributed file system is to consider
deeply whether they really need it. If one has a core facility of four
or five computers with a homogeneous set of programs, then, yes, NFS
is probably a good idea. Beyond that, one is basically begging for
unrelenting network problems no matter which server one runs.
--
James Stroud
UCLA-DOE Institute for Genomics and Proteomics
Box 951570
Los Angeles, CA 90095
http://www.jamesstroud.com