Hi John,

Our 3ware and MegaRAID cards were all bought post-acquisition, so I'm reasonably happy with the LSI MegaRAID cards we have now.

Cheers
David

On 4 Sep 2014, at 11:22, John Hill <[log in to unmask]> wrote:

> LSI bought 3ware a while ago - not sure whether that makes 3ware cards more trustworthy or LSI cards more risky.
> 
> John
> 
> On 04/09/2014 11:10, David Crooks wrote:
>> Hi Dan,
>> 
>> We have 3ware 9750-8i cards in our four-year-old disk servers; these have been consistently finicky, so at least in the near future we're a little wary of other 3ware cards. The MegaRAID cards we have (including the ones Dell has used for its PERC cards) have been very solid. We have some older Areca cards which have similarly behaved well, but we don't have as much experience with more recent iterations of these.
>> 
>> We have similar stipulations to Ewan in our tenders.
>> 
>> Cheers,
>> David
>> 
>> On 4 Sep 2014, at 10:55, Ewan MacMahon <[log in to unmask]> wrote:
>> 
>>>> -----Original Message-----
>>>> From: Testbed Support for GridPP member institutes [mailto:TB-
>>>> [log in to unmask]] On Behalf Of Daniel Traynor
>>>> 
>>>> Has anybody got a preference for raid controllers when buying new kit, or
>>>> at least cards / brands to avoid?
>>>> 
>>> Not terribly much, but having a mixture of 3ware, Areca, Adaptec and LSI (and PERC), I now have a marginal preference for 'not weird', so I'd be more inclined to favour kit with Adaptec or LSI controllers rather than anything else. The 3wares were fine at the time they were bought (a long time ago), and the Arecas were mostly OK, but with a definite helping of 'interesting', and it was much harder to find useful things by Googling since so few people have/had them.
>>> 
>>>> Does anybody make specific requirements on the raid card abilities?
>>>> 
>>> Fairly minimal, basically 'must do the obvious'. In particular, we don't ask for anything relating to speed. For the sake of completeness I've copied the disk server section from our last grid tender request below, but it's pretty generic. About the only potentially interesting bit is that we do require the RAID cards (and, indeed, everything else) to be supported by standard issue SL - life's too short for faffing around with add-on drivers (hello Chelsio T5 owners :-P).
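>>> 
>>> For what it's worth, a quick way to sanity-check that on a stock install is to see whether the running kernel has claimed a driver for every PCI device. A rough Python sketch, illustrative only (lspci -k tells you much the same thing):
>>> 
>>>     import os
>>> 
>>>     # Each PCI device with a bound driver has a 'driver' symlink in sysfs.
>>>     PCI_ROOT = "/sys/bus/pci/devices"
>>> 
>>>     for dev in sorted(os.listdir(PCI_ROOT)):
>>>         link = os.path.join(PCI_ROOT, dev, "driver")
>>>         if os.path.islink(link):
>>>             print("%s: %s" % (dev, os.path.basename(os.readlink(link))))
>>>         else:
>>>             print("%s: no driver bound - would need an add-on" % dev)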
>>> 
>>> 
>>> Ewan
>>> 
>>> 
>>> 
>>> ------------
>>> 
>>> Detailed below is the specification required by the University. Suppliers are encouraged to submit products which either meet this specification exactly or which they feel are directly comparable, and should give full details of their system specification. These servers are intended to run bespoke storage software (currently Disk Pool Manager). Scientific Linux 6 will be the operating system used, and we require that all hardware is fully supported by the drivers included in the standard SL distribution.
>>> 
>>> Storage Nodes
>>> 
>>> 1. A high performance storage subsystem featuring RAID controller(s). Disk controllers should implement RAID level 6 with a battery-backed or NVRAM cache. The controller(s) should be capable of allowing large RAID volumes, ideally of at least 30TB. The disks should be qualified for the particular RAID controller and should be of a type intended for RAID applications. The server must contain at least one quad-core x86 64-bit compatible processor, i.e. Intel or AMD.
>>> 2. The servers should be fitted with dual redundant PSUs.
>>> 3. Disks should be attached to the RAID controller(s) via SATA or SAS interfaces.
>>> 4. The disks should be mounted in a modular way and hot swapping must be supported.
>>> 5. The disks and controllers will be mounted in the rack(s) described above.
>>> 6. The storage should offer the best value in TB per £ available at the time of installation.
>>> 7. The system should contain approximately 1GB of RAM per usable TB of data storage, rounded up to a practical memory configuration. For example, a 30TB server would be fitted with 32GB of RAM (see the sketch after this list).
>>> 8. The servers should include a 10Gbit SFP+ NIC, which will be the PXE-bootable interface.
>>> 9. The system will use part of the RAID array, set up as a logical volume for the operating system. This configuration should therefore be bootable via PXE as stated below.
>>> 10. The servers should run Scientific Linux 6 (a Red Hat Enterprise Linux clone; see https://www.scientificlinux.org/ ). We require that the systems can be installed over the network using PXE boot and kickstart without any additional drivers.
>>> 11. Servers should include IPMI support for remote monitoring and be accessible via KVM over LAN. This should provide access to the BIOS and allow remote power control.
>>> 12. The costs for maintenance must be shown separately and should include a 5-year warranty covering parts and labour, with next-business-day response.
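>>> 
>>> A footnote on item 7, from me rather than the tender text: the 30TB -> 32GB example just reflects rounding the 1GB-per-TB rule up to the next realistic memory configuration. A toy Python sketch of that arithmetic (the list of sizes is an assumption for illustration, not a requirement):
>>> 
>>>     def ram_for_storage(usable_tb):
>>>         # ~1GB of RAM per usable TB, rounded up to a common module total.
>>>         common_sizes_gb = [8, 16, 24, 32, 48, 64, 96, 128]  # assumed, not from the tender
>>>         for size in common_sizes_gb:
>>>             if size >= usable_tb:
>>>                 return size
>>>         return common_sizes_gb[-1]
>>> 
>>>     print(ram_for_storage(30))  # -> 32, matching the example in item 7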