The same limit also applies to other vendors (I think our latest system 
with a Supermicro X9DRG-QF board had this). The PCIe subsystem is 
integrated into the newer CPUs and each CPU only supports so many 
slots/lanes.

John

On 04/09/2014 11:10, jeremy maris wrote:
> Slightly off topic re RAID controllers but re Ewan's storage spec:
>
> Be aware that if you buy a Dell R720XD with only one processor, you only have two PCIe slots available.
>
> The three half-height card slots and the third full-height slot are only available if you have two processors.
>
> The requirement for a second processor is in small print on page 35 of the R720 XD technical guide and not mentioned at all in the spec sheet…
>
> Jeremy M
>
>
> On 4 Sep 2014, at 10:55, Ewan MacMahon wrote:
>
>>> -----Original Message-----
>>> From: Testbed Support for GridPP member institutes [mailto:TB-
>>> [log in to unmask]] On Behalf Of Daniel Traynor
>>>
>>> Has anybody got a preference for raid controllers when buying new kit, or
>>> at least cards / brands to avoid?
>>>
>> Not terribly much, but having a mixture of 3ware, Areca, Adaptec and LSI (and PERC), I now have a marginal preference for 'not weird', so I'd be more inclined to favour kit with Adaptec or LSI controllers rather than anything else. The 3wares were fine at the time they were bought (a long time ago), and the Arecas were mostly OK, but with a definite helping of 'interesting', and it was much harder to find useful things by Googling since so few people have/had them.
>>
>>> Does anybody make specific requirements on the raid card abilities?
>>>
>> Fairly minimal, basically 'must do the obvious'. In particular, we don't ask for anything relating to speed. For sake of completeness I've copied the disk server section from our last grid tender request below, but it's pretty generic. About the only potentially interesting bit is that we do require the RAID cards (and, indeed, everything else) to be supported by standard issue SL - life's too short for faffing around with add-on drivers (hello Chelsio T5 owners :-P).
>>
>>
>> Ewan
>>
>>
>>
>> ------------
>>
>> Detailed below is the specification required by the University, and suppliers are encouraged to submit products which either meet this specification exactly or which they feel are directly comparable. Suppliers should give full details of their system specification. These servers are intended to run bespoke storage software (currently Disk Pool Manager). Scientific Linux 6 will be the operating system used, and we require that all hardware is fully supported by the drivers included in the standard SL distribution.
>>
>> Storage Nodes
>>
>> 1. A high performance storage subsystem featuring RAID controller(s). Disk controllers should implement RAID level 6 with battery-backed or NVRAM cache. The controller(s) should be capable of allowing large RAID volumes, ideally of at least 30TB. The disks should be qualified for the particular RAID controller and should be of a type intended for RAID applications. The server must contain at least one quad core x86 64-bit compatible processor, i.e. Intel or AMD.
>> 2. The servers should be fitted with dual redundant PSUs.
>> 3. Disks should be attached to the RAID controller(s) via SATA or SAS interfaces.
>> 4. The disks should be mounted in a modular way and hot swapping must be supported.
>> 5. The disks and controllers will be mounted in rack(s) described above.
>> 6. The storage should offer the highest value TB per £ to be available at the time of installation.
>> 7. The system should contain approximately 1GB of RAM per usable TB of data storage. For example, a 30TB server would be fitted with 32GB of RAM.
>> 8. The servers should include a 10Gbit SFP+ NIC which is the PXE bootable interface.
>> 9. The system will use part of the RAID array, set up as a logical volume for the operating system. This configuration should therefore be bootable via PXE as stated below.
>> 10. The servers should run Scientific Linux 6 (a RedHat Enterprise Linux clone; see https://www.scientificlinux.org/ ). We require that the systems can be installed over the network using PXE boot and kickstart without any additional drivers.
>> 11. Servers should contain IPMI support for remote monitoring and be accessible via KVM over LAN. This should provide access to the BIOS and allow remote power control.
>> 12. The costs for maintenance must be shown separately and should include a 5-year warranty covering parts and labour, with next-business-day response.
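For what it's worth, a quick way to sanity-check items 10 and 11 above on delivered kit is a couple of one-liners — the BMC hostname and username below are placeholders, substitute your own:

```shell
#!/bin/sh
# Placeholder BMC details -- replace with the real management interface.
BMC_HOST=bmc.example.org
BMC_USER=admin

# Item 10: on a stock SL install, every device should be bound to an
# in-tree module. The "Kernel driver in use:" lines for the RAID
# controller and the 10Gbit NIC should name a standard driver
# (e.g. megaraid_sas, aacraid, ixgbe) rather than nothing at all.
if command -v lspci >/dev/null 2>&1; then
    lspci -nnk | grep -iA3 -E 'raid|ethernet'
fi

# Item 11: remote power control and console over IPMI LAN, e.g. with
# ipmitool (run from a machine that can reach the BMC):
#   ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" chassis power status
#   ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" chassis power cycle
#   ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" sol activate
echo "checks for ${BMC_HOST} sketched"
```

A card whose driver only ships as a vendor add-on DKMS package will show up immediately in the `lspci -nnk` output, which is exactly the faff the spec is trying to avoid.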


-- 
John Bland                       [log in to unmask]
Research Fellow                  office: 220
High Energy Physics Division     tel (int): 42911
Oliver Lodge Laboratory          tel (ext): +44 (0)151 794 2911
University of Liverpool          http://www.liv.ac.uk/physics/hep/
"I canna change the laws of physics, Captain!"