Hmm,
and after adding something like this to maui.cfg, what should be done with the
max_running values for the queues that were set by qmgr? Should they be set to 0
(which causes errors in GStat, since the FreeCPU count, and all the others, is
then 0), or to the number of logical CPUs?
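(For reference, a minimal sketch of inspecting and changing a queue's run limit with qmgr; the queue name "atlas" and the value 8 are just placeholders, not taken from any site's actual setup:)

```shell
# Show the current settings of a queue (name is an assumption):
qmgr -c "print queue atlas"

# Set the run limit, e.g. to the number of logical CPUs on the node:
qmgr -c "set queue atlas max_running = 8"
```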
Thanks, Antun
-----
E-mail: [log in to unmask]
Web: http://scl.phy.bg.ac.yu/
Phone: +381 11 3160260, Ext. 152
Fax: +381 11 3162190
Scientific Computing Laboratory
Institute of Physics, Belgrade
Serbia and Montenegro
-----
---------- Original Message -----------
From: Jeff Templon <[log in to unmask]>
To: [log in to unmask]
Sent: Fri, 9 Sep 2005 16:35:45 +0200
Subject: Re: [LCG-ROLLOUT] Atlas with atlas, dteam with dteam, Kodak with
Kodak, etc.
> yo,
>
> we use process caps. here is an abbreviated example:
>
> GROUPCFG[dteam] FSTARGET=2 PRIORITY=5000 MAXPROC=32
> GROUPCFG[alice] FSTARGET=15 PRIORITY=100 MAXPROC=100 ADEF=lhc
> GROUPCFG[atlas] FSTARGET=50 PRIORITY=100 MAXPROC=160 ADEF=lhc
> GROUPCFG[atlsgm] FSTARGET=50 PRIORITY=100 MAXPROC=160 ADEF=lhc
> GROUPCFG[lhcb] FSTARGET=35 PRIORITY=100 MAXPROC=230 ADEF=lhc
> GROUPCFG[lhcbsgm] FSTARGET=35 PRIORITY=100 MAXPROC=230 ADEF=lhc
> GROUPCFG[cms] FSTARGET=1- PRIORITY=1 MAXPROC=10 ADEF=lhc
>
> GROUPCFG[esr] FSTARGET=5 PRIORITY=50 MAXPROC=32 ADEF=nlgrid
> GROUPCFG[ncf] FSTARGET=40 PRIORITY=100 MAXPROC=120 ADEF=nlgrid
> GROUPCFG[asci] FSTARGET=40 PRIORITY=100 MAXPROC=120 ADEF=nlgrid
> GROUPCFG[pvier] FSTARGET=5 PRIORITY=100 MAXPROC=12 ADEF=nlgrid
>
>
> ACCOUNTCFG[lhc] FSTARGET=50 MAXPROC=230
> ACCOUNTCFG[nlgrid] FSTARGET=50 MAXPROC=110
>
> Note that we give dteam a very high priority but a very low fair
> share and a rather severe process cap. On the other hand, the LHC
> groups all have a rather high fair share, and are limited to 230
> processes in total. Right now we have 246 CPUs in the farm, so it
> is impossible for the LHC groups alone to take all our CPUs. Sometimes
> the farm is completely full, but only during times when we have e.g. 180
> LHC jobs running, 50 from biomed, and 16 from dzero. In most cases we
> are not full, so dteam jobs run immediately.
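(The headroom arithmetic above can be sketched as follows; a minimal illustration using the figures quoted in the mail, with variable names of my own choosing:)

```python
# Figures quoted above: 246 CPUs in the farm, ACCOUNTCFG[lhc] MAXPROC=230.
total_cpus = 246
lhc_account_cap = 230   # ACCOUNTCFG[lhc] MAXPROC
dteam_cap = 32          # GROUPCFG[dteam] MAXPROC

# Even if the LHC groups fill their account cap, this many CPUs remain
# for other groups (dteam, biomed, dzero, ...):
headroom = total_cpus - lhc_account_cap
print(headroom)
```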
>
> Even when we are full it's not a problem. For a big site, being
> full isn't so bad because with lots of jobs, you have a relatively
> large number of jobs ending during any given time period.
>
> JT
>
> Mario David wrote:
> > Hi Dan
> > how do you set a WN to dteam only with pbs/maui?
> >
> > We are having problems because all nodes are full of atlas and cms jobs,
> > so the dteam SFT jobs don't get in, despite the fairshares in maui.cfg.
> > In the past I tried to assign specific nodes to specific groups
> > in qmgr, but was not successful.
> >
> > cheers
> >
> > Mario
> >
> > Quoting Dan Schrager <[log in to unmask]>:
> >
> >
> >>Dear Christine,
> >>
> >>I have deleted your simulation(?) job run as user dteam at my site
> >>because it was blocking the unique WN reserved for short dteam (SFT
> >>kind) jobs.
> >>Use in the future an atlas certificate for such purposes.
> >>
> >>Regards,
> >>Dan
> >>
> >>
------- End of Original Message -------