For many years the major LHC experiments have requested 2GB/core (i.e. per job).
Some sites have started provisioning more than this, but I would think it comes down to how many cores per chip they chose for a particular procurement and what was a sensible multiple. Recent CPUs with, for example, 10 cores per chip, which equates to 40 HT cores per system, would suggest 80GB of RAM; that is not normally an option, so sites may have chosen 64GB, 96GB or maybe 128GB in such a situation.
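
A minimal sketch of that arithmetic in Python (the dual-socket layout and the RAM options are assumptions for illustration, not a recommendation):

    # Rough RAM sizing for a worker node at 2GB per job slot.
    # Example figures only: 2 sockets x 10 physical cores, hyper-threading on.
    sockets = 2
    cores_per_chip = 10
    ht_factor = 2
    gb_per_slot = 2

    slots = sockets * cores_per_chip * ht_factor    # 40 HT cores
    suggested_ram = slots * gb_per_slot             # 80GB

    # 80GB is rarely an orderable configuration, so a site would round
    # to a common total such as one of these.
    options = [64, 96, 128]
    print(f"suggested: {suggested_ram}GB; typical purchases: {options}")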

We could do with updated advice from the major experiments as to what they would like/require at Tier-2 sites.

The Tier-1 has, I think, been provisioning twice this level for some years.

Pete

--
----------------------------------------------------------------------
Peter Gronbech  GridPP Project Manager          Tel No. : 01865 273389

Department of Particle Physics,
University of Oxford,
Keble Road, Oxford  OX1 3RH, UK  E-mail : [log in to unmask]
----------------------------------------------------------------------

From: Testbed Support for GridPP member institutes [mailto:[log in to unmask]] On Behalf Of Daniela Bauer
Sent: 17 July 2017 13:37
To: [log in to unmask]
Subject: Re: Memory per core at https://www.gridpp.ac.uk/wiki/HEPSPEC06

Hi Steve,
I was just looking for an approximate measure. Someone asked how much memory was 'typically' available in a GridPP cluster, and I can't think of any source to even get a ballpark figure from (especially on mixed clusters -- my own run from 1.5 GB to 4 GB).
Cheers,
Daniela
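
For what it's worth, one way to turn a mixed cluster into a single ballpark figure is a slot-weighted average, as in this sketch (the node counts and per-slot figures are invented purely for illustration):

    # Slot-weighted average memory per job slot across a mixed cluster.
    # Each tuple: (slots of this node type, GB per slot) -- example values only.
    node_types = [(500, 1.5), (300, 2.0), (200, 4.0)]
    total_slots = sum(slots for slots, _ in node_types)
    typical = sum(slots * gb for slots, gb in node_types) / total_slots
    print(f"typical memory per slot: {typical:.2f} GB")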

On 17 July 2017 at 13:31, Stephen Jones <[log in to unmask]> wrote:
Hi Daniela,

Re: Would it be possible to use memory/thread (i.e. available per job slot) ?

I hope you don't mind me saying so, but this might be a bother. You see, some sites give various HEPSPEC06 readings for the exact same hardware.

They select a number of job slots at various values between (say) cores and cores*2 (HTs), perhaps in steps of 2. This is useful, because the maximum HEPSPEC06 often does not coincide with slots == HTs.

(e.g. see Liverpool's readings for the E5-2630 v2, where readings using 22 slots and 24 slots are provided; similarly for the E5620.)

Also, nodes used for VAC are configured with fewer slots and more memory per slot than nodes used for Condor, because VMs have more overhead. It would be superfluous and error-prone to give the arithmetic for both/all sets.

Why not give the total memory, and the number of slots the figures were calibrated for? The user can then derive all the information he/she needs in (say) a spreadsheet or a script.
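
For instance, a minimal script along those lines (the node names, memory totals and slot counts below are made-up examples, not figures from the wiki table):

    # Derive memory per job slot from total memory and the calibrated slot count.
    # Each tuple: (node type, total RAM in GB, slots the HEPSPEC06 run used).
    nodes = [
        ("node-type-A", 64, 22),
        ("node-type-B", 48, 16),
    ]
    for name, total_gb, slots in nodes:
        print(f"{name}: {total_gb / slots:.2f} GB per slot")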

BTW: There is now no standard set of fields in that table. Perhaps there should be?

Cheers,

Ste

--
Steve Jones                             [log in to unmask]
Grid System Administrator               office: 220
High Energy Physics Division            tel (int): 43396
Oliver Lodge Laboratory                 tel (ext): +44 (0)151 794 3396
University of Liverpool                 http://www.liv.ac.uk/physics/hep/



--
Sent from the pit of despair

-----------------------------------------------------------
[log in to unmask]
HEP Group/Physics Dep
Imperial College
London, SW7 2BW
Tel: +44-(0)20-75947810
http://www.hep.ph.ic.ac.uk/~dbauer/