All other things being equal, I think you're much better off with more RAM
than with a RAID. Even with a RAID, if you don't have enough RAM, the
machine will have to swap memory pages back and forth with the disk, which
really slows things down.
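A quick sanity check before buying either (a sketch for Linux, since that's what's being asked about; on our Suns you'd watch the si/so columns of vmstat instead):

```shell
# If swap is already heavily used, more RAM will buy you far more
# than faster disks ever will.
grep -E 'MemTotal|SwapTotal|SwapFree' /proc/meminfo
# Under a real analysis load, watch the si/so (swap-in/swap-out) columns;
# anything persistently nonzero means the machine is paging:
#   vmstat 5
```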
Other writers have added some caveats about RAIDs. I'll add my own.
Disclaimer: our group had a very bad experience with a RAID.
Caveat: I'm implicitly comparing to our systems, where we have Suns
running SCSI disks. I don't know about Linux, where there's a greater
chance you don't have SCSI disks (which might make it harder to add more
disk space if you pursue the non-RAID route).
1. I wouldn't get one unless you have very competent system administration
resources available.
2. We never really did get ours running. Perhaps it depends on the vendor,
but ours admitted (after we must have cost his company oodles of $$ in
support time) that installation was "rocket science." (Maybe if 1. above
isn't a problem, this won't matter.)
3. At least 1 or 2 other groups here at NIH (which *do* satisfy 1. above)
say they have sufficient trouble getting the thing back on line after a
power outage that they have to call the vendor.
4. I'm not convinced by the argument about data preservation. If you're
sufficiently steeped in UNIX culture, you know that everything should be
backed up on tape anyway. (Sure, if you don't do it every day, you'll lose
some stuff, but how often do disk crashes occur? And "raw" data should be
treated specially anyway.) I can imagine some kind of calamity (electrical
storm?) where the whole RAID crashes. (I'm not an expert, so I don't know
if RAIDs are routinely backed up on tape, but if it were my data, *I*
would.) RAIDs are crucial to e.g. an e-business site or airline ticket
processing center, where they don't have *time* to restore from tape. (Hence
"hot-swappable disks".) My calculation is that you'll spend much more time
setting up a RAID (though again, maybe this depends on the vendor and your
sysop) than you ever will restoring after disk crashes.
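For what it's worth, the nightly-backup habit I mean is nothing fancy; a sketch (the paths here are made up, and in practice the target would be a tape device such as /dev/st0 rather than a file):

```shell
set -e
# Stand-in data directory; substitute your real analysis directory.
mkdir -p /tmp/backupdemo/data
echo scan > /tmp/backupdemo/data/run1.img
# Archive it; with a tape drive the 'f' argument would be /dev/st0.
tar cf /tmp/backupdemo/nightly.tar -C /tmp/backupdemo data
# Verify the archive lists what we put in.
tar tf /tmp/backupdemo/nightly.tar
```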
5. RAIDs also purport to be faster at transferring data (although see
caveat from another poster to this thread). When I asked my vendor about
this, he said at the outside 20% faster, but typically 5-10%. Nothing to
brag about (though I don't have authoritative figures on this).
6. One definite advantage of a RAID is that it allows you to pretend all
the disks belong to one filesystem. That is, if you have a bunch of 10GB
disks which aren't in a RAID configuration, and you "mv" files over to your
colleague's disk, timestamps etc. will get messed up (though there are ways
around that). On the other hand, you can get 50 GB SCSI disks for about
$850 street, so this issue is largely moot.
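One of the "ways around that": copy with attributes preserved instead of a bare "mv". A sketch with made-up paths (mktemp stands in for the two disks):

```shell
set -e
src=$(mktemp -d)    # stands in for your disk
dst=$(mktemp -d)    # stands in for your colleague's disk
echo data > "$src/scan.img"
touch -t 200005220535 "$src/scan.img"     # give it a known old timestamp
cp -p "$src/scan.img" "$dst/scan.img"     # -p preserves mode and timestamps
# For whole trees, a tar pipe does the same job:
#   (cd "$src" && tar cf - .) | (cd "$dst" && tar xpf -)
ls -l "$dst/scan.img"
```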
Overall, if you're a large group with data coming out of your ears (and you
have a competent sysop), it might be a good idea. Otherwise...
Best,
Stephen Fromm, PhD
NIDCD/NIH
----- Original Message -----
From: "Christine Preibisch" <[log in to unmask]>
To: "spm mailing-list" <[log in to unmask]>
Sent: Monday, May 22, 2000 5:35 AM
Subject: RAID arrays under Linux
> Dear SPM'ers,
>
> A question concerning computer hardware once again.
>
> Reviewing previous emails regarding this issue I found a mail from Joe
> Devlin (14.02.00) where he recommended RAID arrays for further acceleration
> of analysis. Is it worthwhile using them if the RAM is big enough already
> (e.g. 1GB)? Can anybody comment on how much faster these systems really
> are, and which RAID level should be used?
>
> Does anybody have experience using a RAID array (of level 5) under (Suse)
> Linux? Are any RAID arrays on Intel-based PCs properly supported by any
> Linux species?
>
> Any comment would be highly appreciated.
>
> Thank you,
>
> Christine Preibisch
>
> Universität Frankfurt
> ZRAD - Institut für Neuroradiologie
> Schleusenweg 2-16
> 60 528 Frankfurt
>
> Tel: ++49 69 6301 4651
> Fax: ++49 69 6301 5989
>
> email: [log in to unmask]
>
>
>