Hi,

We have an upcoming hardware procurement for the next phase of our 
compute facility, and we are currently discussing the resource 
requirements for the jobs that will run on the new equipment.

At the moment the current thinking is to go with 2GB of memory per 
core, where one core maps to one job slot. Does anyone know if this is 
an acceptable (and reasonably comfortable) level of memory allocation 
for LHC experiment software? Or would it be sensible to invest in more 
memory?
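
(For concreteness, the back-of-the-envelope node sizing I have in mind 
looks roughly like the Python sketch below; the core count and OS 
headroom figures are placeholders rather than decisions.)

cores_per_node = 16     # placeholder core count for a worker node
mem_per_slot_gb = 2     # the proposed allocation, one job slot per core
os_headroom_gb = 2      # placeholder allowance for OS and services

node_memory_gb = cores_per_node * mem_per_slot_gb + os_headroom_gb
print("Provision at least %d GB per node" % node_memory_gb)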

I am particularly interested in profiling how scaling memory improves 
execution speed and overall job efficiency. If anyone has resources I 
can reference on this subject, that would be great. On a related note, 
does anyone have any experience with optimising the NUMA configuration 
on multi-core boxes in a batch system environment?
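
(To make the NUMA question concrete, something along the lines of the 
sketch below is what I have in mind, assuming a Linux worker node that 
exposes its topology under /sys and has the numactl utility installed:

import glob
import os

# Enumerate the NUMA nodes the kernel exposes under sysfs and report
# which CPUs and how much memory are local to each node.
for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_dir)
    # cpulist holds the CPUs local to this node, e.g. "0-7".
    with open(os.path.join(node_dir, "cpulist")) as f:
        cpus = f.read().strip()
    # The first line of meminfo reports the node-local MemTotal.
    with open(os.path.join(node_dir, "meminfo")) as f:
        mem = f.readline().split()[-2:]
    print("%s: cpus=%s memtotal=%s" % (node, cpus, " ".join(mem)))

# A job slot could then be pinned to a single node, for example:
#   numactl --cpunodebind=0 --membind=0 <job command>

I would be interested to hear whether pinning slots like this has made 
a measurable difference for anyone in practice.)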

Thanks for your help,
Andy.


-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.