Hi Jeff,

Thanks for your explanation, but I'm not completely sure yet. I was experimenting with this a bit yesterday.

So first I used
	[Scheduler]
		vo_max_jobs_cmd : cat /opt/glite/etc/vomaxjobs


Nov 05 14:29 [root@ce01:etc]# cat vomaxjobs 
{
'atlas': 1036,
'cms': 1036,
'lhcb': 960,
'dech': 200,
'dteam': 200,
'hone': 200,
'ops': 96,
'vo.gear.cern.ch': 200
}

Here the max jobs values for dech, dteam, hone and vo.gear.cern.ch are group-based caps in Moab, while the values for
atlas, cms and lhcb are the maximum number of job slots that could be available to those VOs; ops will always have
exactly 96 job slots available.
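
I assume the dynamic scheduler just needs a VO -> cap mapping from whatever vo_max_jobs_cmd prints, so consuming output in that dict format could be as simple as the following sketch (not the actual plugin code, and it may parse the output differently):

	# Minimal sketch, not the actual plugin: read the vo_max_jobs_cmd
	# output and turn it into a {vo: max_jobs} mapping. Assumes the
	# command prints a Python dict literal as in the vomaxjobs file above.
	import ast
	import subprocess
	
	def read_vo_max_jobs(cmd="cat /opt/glite/etc/vomaxjobs"):
	    out = subprocess.check_output(cmd, shell=True)
	    caps = ast.literal_eval(out.decode())   # {'atlas': 1036, 'cms': 1036, ...}
	    return {vo: int(n) for vo, n in caps.items()}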


For lhcb, this configuration would print the following:

dn: GlueVOViewLocalID=lhcb,GlueCEUniqueID=ce01.lcg.cscs.ch:2119/jobmanager-pbs-lhcb,mds-vo-name=resource,o=grid
GlueVOViewLocalID: lhcb
GlueCEStateRunningJobs: 7
GlueCEStateWaitingJobs: 0
GlueCEStateTotalJobs: 7
GlueCEStateFreeJobSlots: 400
GlueCEStateEstimatedResponseTime: 37
GlueCEStateWorstResponseTime: 75

This is not correct, because at that moment we had ~1120 jobs running, while we can run a maximum of 1152 production jobs (plus 96 slots reserved
for ops and another 288 reserved internally, because we do not have enough memory to run more production jobs).


So I wrote a script
	/opt/lcg/libexec/queuemaxjobs-moab

which returns the maximum number of jobs each VO can run at that moment, taking standing reservations into account:

Nov 05 14:51 [root@ce01:etc]# /opt/lcg/libexec/queuemaxjobs-moab -c /opt/glite/etc/lcg-info-dynamic-scheduler.conf
{'hone': 6, 'dech': 6, 'ops': 95, 'lhcb': 24, 'atlas': 6, 'dteam': 6, 'cms': 6, 'vo.gear.cern.ch': 6}
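
Roughly, the idea is something like the simplified sketch below (the real script does more: it reads the queuemap from the config file and handles errors); it relies on "showbf -c <class>" printing an "ALL" line with the free task count in the second column, as in the showbf example quoted at the bottom of this mail:

	# Simplified sketch of the per-queue idea, not the real
	# queuemaxjobs-moab: ask Moab how many tasks are free per queue via
	# showbf and report that number for every VO mapped to that queue.
	import subprocess
	
	QUEUEMAP = {'atlas': 'atlas', 'cms': 'cms', 'lhcb': 'lhcb', 'ops': 'ops',
	            'dech': 'other', 'dteam': 'other', 'hone': 'other',
	            'vo.gear.cern.ch': 'other'}
	
	def free_tasks(queue):
	    out = subprocess.check_output(['showbf', '-c', queue]).decode()
	    for line in out.splitlines():
	        fields = line.split()
	        if fields and fields[0] == 'ALL':
	            return int(fields[1])      # Tasks column of the ALL line
	    return 0
	
	print({vo: free_tasks(queue) for vo, queue in QUEUEMAP.items()})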

and the output of
	Nov 05 14:52 [root@ce01:etc]# /opt/glite/etc/gip/plugin/glite-info-dynamic-scheduler-wrapper
is the following (only showing GlueVOViewLocalIDs):


dn: GlueVOViewLocalID=atlas,GlueCEUniqueID=ce01.lcg.cscs.ch:2119/jobmanager-pbs-atlas,mds-vo-name=resource,o=grid
GlueVOViewLocalID: atlas
GlueCEStateRunningJobs: 861
GlueCEStateWaitingJobs: 375
GlueCEStateTotalJobs: 1236
GlueCEStateFreeJobSlots: 0
GlueCEStateEstimatedResponseTime: 3975
GlueCEStateWorstResponseTime: 56700000

dn: GlueVOViewLocalID=cms,GlueCEUniqueID=ce01.lcg.cscs.ch:2119/jobmanager-pbs-cms,mds-vo-name=resource,o=grid
GlueVOViewLocalID: cms
GlueCEStateRunningJobs: 254
GlueCEStateWaitingJobs: 164
GlueCEStateTotalJobs: 418
GlueCEStateFreeJobSlots: 0
GlueCEStateEstimatedResponseTime: 923
GlueCEStateWorstResponseTime: 42508800

dn: GlueVOViewLocalID=dech,GlueCEUniqueID=ce01.lcg.cscs.ch:2119/jobmanager-pbs-other,mds-vo-name=resource,o=grid
GlueVOViewLocalID: dech
GlueCEStateRunningJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateTotalJobs: 0
GlueCEStateFreeJobSlots: 6
GlueCEStateEstimatedResponseTime: 37
GlueCEStateWorstResponseTime: 75

dn: GlueVOViewLocalID=dteam,GlueCEUniqueID=ce01.lcg.cscs.ch:2119/jobmanager-pbs-other,mds-vo-name=resource,o=grid
GlueVOViewLocalID: dteam
GlueCEStateRunningJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateTotalJobs: 0
GlueCEStateFreeJobSlots: 6
GlueCEStateEstimatedResponseTime: 37
GlueCEStateWorstResponseTime: 75

dn: GlueVOViewLocalID=hone,GlueCEUniqueID=ce01.lcg.cscs.ch:2119/jobmanager-pbs-other,mds-vo-name=resource,o=grid
GlueVOViewLocalID: hone
GlueCEStateRunningJobs: 0
GlueCEStateWaitingJobs: 3
GlueCEStateTotalJobs: 3
GlueCEStateFreeJobSlots: 0
GlueCEStateEstimatedResponseTime: 1096
GlueCEStateWorstResponseTime: 388800

dn: GlueVOViewLocalID=lhcb,GlueCEUniqueID=ce01.lcg.cscs.ch:2119/jobmanager-pbs-lhcb,mds-vo-name=resource,o=grid
GlueVOViewLocalID: lhcb
GlueCEStateRunningJobs: 1
GlueCEStateWaitingJobs: 0
GlueCEStateTotalJobs: 1
GlueCEStateFreeJobSlots: 23
GlueCEStateEstimatedResponseTime: 37
GlueCEStateWorstResponseTime: 75

dn: GlueVOViewLocalID=ops,GlueCEUniqueID=ce01.lcg.cscs.ch:2119/jobmanager-pbs-ops,mds-vo-name=resource,o=grid
GlueVOViewLocalID: ops
GlueCEStateRunningJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateTotalJobs: 0
GlueCEStateFreeJobSlots: 95
GlueCEStateEstimatedResponseTime: 37
GlueCEStateWorstResponseTime: 75

dn: GlueVOViewLocalID=vo.gear.cern.ch,GlueCEUniqueID=ce01.lcg.cscs.ch:2119/jobmanager-pbs-other,mds-vo-name=resource,o=grid
GlueVOViewLocalID: vo.gear.cern.ch
GlueCEStateRunningJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateTotalJobs: 0
GlueCEStateFreeJobSlots: 6
GlueCEStateEstimatedResponseTime: 37
GlueCEStateWorstResponseTime: 75


Here you can see that FreeJobSlots is 0 for VOs that have WaitingJobs; LHCb has 23 FreeJobSlots because they have access to standing
reservations (as does ops, with 95), and the other VOs without a standing reservation have a few free slots (6 in this case) that the
scheduler has not yet filled with waiting jobs.


So my impression is that the second approach (taking reservations into account) is working and, for us, gives a better picture of the real
status of the cluster. Of course, what is not tested here, because the cluster was always full, is what happens if FreeJobSlots < RunningJobs.
Maybe an adjustment of the dynamic scheduler program would be needed here.
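
If the numbers can indeed go negative in that situation, a simple safeguard (just a sketch, not a patch against the actual dynamic scheduler code) would be to clamp the published value at zero:

	# Hypothetical safeguard, not taken from the real dynamic scheduler:
	# never publish a negative number of free slots when the per-VO
	# maximum drops below the number of currently running jobs.
	def free_job_slots(vo_max_jobs, running_jobs):
	    return max(0, vo_max_jobs - running_jobs)
	
	# e.g. free_job_slots(6, 861) -> 0 instead of -855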

Because we have per-queue reservations, my script needs to know the VO-to-queue mapping, which I put into /opt/glite/etc/lcg-info-dynamic-scheduler.conf (a rough sketch of reading it back follows the mapping):

queuemap :
  atlas:atlas
  cms:cms
  lhcb:lhcb
  ops:ops
  dech:other
  dteam:other
  hone:other
  vo.gear.cern.ch:other
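
Reading that mapping back is simple enough; roughly like this, assuming the conf file follows the usual ConfigParser layout and the option sits in a section such as [Scheduler] (the actual section name may differ):

	# Sketch only: parse the queuemap above from the conf file. Assumes a
	# ConfigParser-style file with the option in a [Scheduler] section;
	# adjust the section name to match the real file.
	import configparser
	
	def read_queuemap(path='/opt/glite/etc/lcg-info-dynamic-scheduler.conf'):
	    cp = configparser.ConfigParser()
	    cp.read(path)
	    raw = cp.get('Scheduler', 'queuemap')   # multi-line "vo:queue" entries
	    pairs = (entry.split(':', 1) for entry in raw.split() if ':' in entry)
	    return {vo: queue for vo, queue in pairs}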


So, what do you think about the approach of taking reservations into account?

Cheers,

  Peter

-- 
Ing. Peter Oettl | CSCS Swiss National Supercomputing Centre
Systems Engineer | HPC Co-Location Services
Via Cantonale, Galleria 2 | CH-6928 Manno
[log in to unmask] | www.cscs.ch | Phone +41 91 610 82 34

On Nov 5, 2010, at 11:05 AM, Jeff Templon wrote:

> Hi Peter,
> 
>   The script should return the maximum number of jobs that a VO can run.  It's ok if this number changes, but it should not be adjusted to take into account the number of currently running jobs *of that VO*.  That statement is probably ambiguous so let me illustrate.
> 
> At Nikhef we use Maui process caps on unix groups, so for example we may have group "patlas" (mapped to generic atlas proxies) limited to 1500 jobs.  What vomaxjobs should then print is 1500.  This is regardless of whether we have zero running ATLAS jobs, 450, 1500, or 1600.  The dynamic scheduler program prints "available slots" by taking the number printed by vomaxjobs and subtracting from this the number of running jobs belonging to this unix group.  It also accounts for the total number of slots free in the LRMS (not taking into account standing reservations!).  So for VO "atlas", if we have
> 
> max processes : 1500
> running atlas jobs : 623
> 
> then vomaxjobs should print 1500 for atlas, running jobs printed by the dynamic scheduler will be 623, and available slots printed will be 877.
> 
> It may be that because of some standing reservation, you realize that atlas will not be able to run 1500 jobs but only 700 in total, it's then fine to print that number.  It's helpful to remember that in normal circumstances, the number printed by vomaxjobs should be greater than or equal to the number of running jobs for that VO.  It can be lower but this is because of exceptional circumstances, like
> 
> - you may have done "qrun" to manually run jobs that the scheduler refused to schedule due to process caps
> - you have just recently lowered the process cap
> 
> One more thing : you won't be able to get it completely right, if you support per-queue caps AND you have queues that support more than one VO.  In that case, you are interested in the following feature request
> 
>   https://savannah.cern.ch/bugs/?23586
> 
> and you could make a comment there "me too" to help increase the priority, if you are really interested.  The request obviously does not have incredibly high priority, it's three years old :-)
> 
> 										JT
> 
> On Nov 4, 2010, at 15:02 , Öttl Peter wrote:
> 
>> Hi Jeff,
>> 
>> here at CSCS we are using Moab instead of Maui and have limits on queues rather than groups.
>> Therefore I wanted to write our own vo_max_jobs_cmd but I'm not fully sure what output is expected.
>> 
>> Should it return the maximum number of jobs that a VO can run at any time or at the very moment the script runs?
>> 
>> The latter would be better for us because we could use Moab's showbf command to determine the number
>> of job slots available at the moment:
>> 
>> Nov 04 14:47 [root@lrms01:~]# showbf -c atlas
>> Partition     Tasks  Nodes      Duration   StartOffset       StartDate
>> ---------     -----  -----  ------------  ------------  --------------
>> ALL               6      2      INFINITY      00:00:00  14:56:22_11/04
>> 
>> This would also take into account standing reservations and nodes that are down or offline.
>> 
>> Cheers,
>> 
>> Peter
>> 
>> -- 
>> Ing. Peter Oettl | CSCS Swiss National Supercomputing Centre
>> Systems Engineer | HPC Co-Location Services
>> Via Cantonale, Galleria 2 | CH-6928 Manno
>> [log in to unmask] | www.cscs.ch | Phone +41 91 610 82 34
>>