I was wondering how people implement this. We have a mix of per-machine
memory sizes, job memory requirements, and MPI/non-MPI jobs too! Here's
what we do and why:
* set the Torque default pvmem for most queues to 512MB (this allows all 8
cores to be used even on our lightweight 4GB machines)
* set the Torque default pvmem for the "himem" queue used by ATLAS and LHCb
to 2500MB
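
For anyone setting this up, the qmgr side looks roughly like this (a sketch only; the queue names here are examples, not our exact configuration):

```shell
# Default per-process virtual memory limit for ordinary queues
# (jobs that don't request memory explicitly pick this up)
qmgr -c "set queue lcgdefault resources_default.pvmem = 512mb"

# Higher default for the high-memory queue used by the big VOs
qmgr -c "set queue himem resources_default.pvmem = 2500mb"
```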
Incoming jobs pick up these values and are then (I hope) scheduled by
Maui appropriately to fill the mix of nodes (some with 16GB, some with
4GB). We use vmem because Linux doesn't enforce Torque's mem option (but
maybe that's actually what we want, Graeme?). We use *p*vmem because it
applies per-process, which is what you want for MPI jobs.
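
To illustrate the per-process point, here's what a job submission might look like (a sketch; the node counts and application name are made up):

```shell
#!/bin/sh
# pvmem applies to EACH process, so with 16 MPI ranks this job can use
# up to 16 x 512MB in total, while vmem would cap the job as a whole.
#PBS -l nodes=2:ppn=8,pvmem=512mb
#PBS -l walltime=12:00:00

# my_mpi_app is a placeholder for the actual MPI binary
mpirun -np 16 ./my_mpi_app
```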
We'll also probably move from VO-based queues to memory- and
walltime-based queues, as this would help in actually setting the memory
requirements (until the CREAM CE is widely deployed, at least ...)
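
If we do go that way, the queues would presumably be defined by resource limits rather than by VO ACLs; something roughly like this (hypothetical names and limits, just to show the shape):

```shell
# A "short, small-memory" queue: jobs exceeding these limits are rejected
# at submission rather than landing on an unsuitable node
qmgr -c "create queue short queue_type = Execution"
qmgr -c "set queue short resources_max.walltime = 06:00:00"
qmgr -c "set queue short resources_max.pvmem = 1024mb"
qmgr -c "set queue short enabled = True"
qmgr -c "set queue short started = True"
```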
Stephen
--
Dr. Stephen Childs,
Research Fellow, EGEE Project, phone: +353-1-8961797
Computer Architecture Group, email: Stephen.Childs @ cs.tcd.ie
Trinity College Dublin, Ireland web: http://www.cs.tcd.ie/Stephen.Childs