Also, from my call to survey other DPM sites, we currently have the
following from the three sites that replied...
"
UKI-Scotgrid-Glasgow has 9 servers, each with 22 x 500G drives attached
to Areca RAID cards, thus giving us ~9.5T per server
"
"
Hi,
at Cyfronet, Krakow, PL, we have ~60TB split across two pool nodes and
one head node.
Each of them is an eight-core machine with 16GB RAM and redundant 4Gb FC
to the actual storage.
They are interconnected via 1Gb Ethernet, with 10Gb Ethernet awaiting deployment.
We think we can reasonably scale to 200TB with 4 pool nodes equipped with
FC and 10Gb Ethernet.
Cheers
"
"
For example, at praguelcg2 we have one DPM head node (golias100, which
also works as a disk node) with 3 additional disk nodes.
The biggest array connected to a single disk node is a 13T disk array.
We haven't had any performance problems yet, but the maximum
number of jobs using DPM at the same time is about 150.
[root@golias100 koubat]# ./dpm-qryconf2 --pooldata NONE --fsdata SERVER,FS,CAPACITY --groups --sed --fsheader | sort -r
SERVER                      FS               CAPACITY
se4.farm.particle.cz        /mnt/hep_fs1     2.00T
se4.farm.particle.cz        /mnt/gen_fs1     750.38G
se4.farm.particle.cz        /mnt/auger_fs2   2.00T
se4.farm.particle.cz        /mnt/auger_fs1   4.00T
se4.farm.particle.cz        /mnt/atlas_fs1   4.00T
goliasx98.farm.particle.cz  /mnt/array6      1.95T
goliasx98.farm.particle.cz  /mnt/array5      1.95T
goliasx98.farm.particle.cz  /mnt/array4      7.10T
goliasx98.farm.particle.cz  /mnt/array3      1.95T
goliasx98.farm.particle.cz  /mnt/array2      999.87G
goliasx98.farm.particle.cz  /mnt/array1      1.95T
golias100.farm.particle.cz  /nbd_1           984.30G
golias100.farm.particle.cz  /mnt/star_test/  96.83M
golias100.farm.particle.cz  /mnt/array3      642.61G
golias100.farm.particle.cz  /mnt/array2/     916.71G
golias100.farm.particle.cz  /mnt/array1/     1.97T
cl5.ujf.cas.cz              /raidRB          1.61T
"
-----Original Message-----
From: GRIDPP2: Deployment and support of SRM and local storage
management [mailto:[log in to unmask]] On Behalf Of Davies,
BGE (Brian)
Sent: 18 June 2008 13:36
To: [log in to unmask]
Subject: Re: Minutes for today
FYI, Martin Bly is giving a talk on the Friday of hepsysman regarding
storage procurement.
-----Original Message-----
From: GRIDPP2: Deployment and support of SRM and local storage
management [mailto:[log in to unmask]] On Behalf Of Simon
George
Sent: 18 June 2008 11:32
To: [log in to unmask]
Subject: Re: Minutes for today
The new RHUL cluster (installed by Clustervision in December '07) has 15
storage nodes providing 330 TB in total.
Each node has 24 x 1TB disks hooked up via a 3ware 9650SE-24M8
controller in a RAID6 configuration, so 22 TB usable space.
Networking is 4 bonded gigabit ethernet controllers connected to Nortel
switches.
I can dig out detailed specs for anyone who asks.
Cheers,
Simon
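A minimal sketch of the RAID6 arithmetic behind the RHUL figures above (dual parity
costs two drives' worth of space; filesystem overhead is ignored here):

    def raid6_usable(n_drives, drive_tb):
        # RAID6 keeps two drives' worth of parity
        return (n_drives - 2) * drive_tb

    per_node = raid6_usable(24, 1.0)  # 22 TB per RHUL storage node
    total = 15 * per_node             # 330 TB across the 15 nodes
    print(per_node, total)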
Peter Love wrote:
> I think Glasgow said they only had 9TB per server, not 20. LYON T1 had
> 20TB and is still not hitting any load issues. Will be keen to hear about
> Glasgow's tender response, in terms of architecture.
>
> A couple of solutions we have for ~200TB are:
>
> 13 IO servers, 1 x 1Gbit connection each, 16TB backend SCSI attached
> 2 IO servers with 4 x 1Gbit connections each, 4 x 48TB Fibre Channel attached
>
> There is a factor of 2 in cost between them.
>
> Jensen, J (Jens) ([log in to unmask]) wrote:
>> For some reason wireless is working in my conference room, in a
>> dramatic break with tradition.
>> Tappety-tappety: minutes, now uploaded.
>>
>> http://indico.cern.ch/conferenceDisplay.py?confId=36157
>>
>> Thanks
>> --jens
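A rough comparison of the two ~200TB options Peter lists above (a sketch only; it
assumes the quoted capacities are totals per option and that the bonded NICs deliver
their full aggregate bandwidth, neither of which the message states explicitly):

    options = {
        "13 IO servers, 1 Gbit each, 16 TB SCSI each":   (13 * 16, 13 * 1),
        "2 IO servers, 4 x 1 Gbit each, 4 x 48 TB FC":   (4 * 48, 2 * 4),
    }
    for name, (capacity_tb, bandwidth_gbit) in options.items():
        # network bandwidth per TB of disk is one way to compare the balance
        print("%s: %d TB, %d Gbit/s, %.3f Gbit/s per TB"
              % (name, capacity_tb, bandwidth_gbit, bandwidth_gbit / capacity_tb))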