On 30-05-13 15:26, Stephen Burke wrote:
> LHC Computer Grid - Rollout [mailto:[log in to unmask]]
>> On Behalf Of Fokke Dijkstra said:
>> How can this ever work for the end user? I've just got a question from
>> someone who is trying to request a queue for his job with enough
>> wallclock time available. He ran into the issue that RUG-CIT and
>> SARA-MATRIX publish wallclock time in hours, where NIKHEF-ELPROD
>> publishes wallclock time in minutes. I'm quite sure that in the recent
>> past we published minutes as well at RUG-CIT. Adding seconds into the
>> mix makes the problem even worse.
> The change is only for GLUE 2, GLUE 1 stays in minutes. The latest WMS does
> support matching against GLUE 2 attributes but I don't think anyone is using
> it yet, so hopefully people will have upgraded their CREAMs by the time it
> becomes relevant. Anyway publishing hours is definitely wrong - do they have
> some kind of customised info provider?
I finally had some time to look into this. Our site runs the standard
scripts from the package:
info-dynamic-pbs-3.0.1-1.sl6.noarch
The script involved is /usr/libexec/info-dynamic-pbs. It contains a
division by 60 for the resources_max.walltime parameter, which is not
present for the resources_max.cput and resources_max.pcput parameters.
I've looked at the changes between versions 3.0.0-1 and 3.0.1-1. The
problem seems to be that the division by 60 was removed from the parsing
of the Torque output for CPU time, but not for walltime. For all these
parameters the division by 60 was moved to the printf statements that
generate the LDIF file. As a result, the walltime limits are now divided
by 60 twice.
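To make the effect concrete, here is a minimal sketch (in Python, with
hypothetical function and attribute names; the real info provider is a
different script) of dividing by 60 both while parsing and while printing.
Assuming Torque reports the limit as HH:MM:SS and the published attribute
should be in minutes, a 72-hour walltime ends up published as 72 "minutes":

```python
def hhmmss_to_seconds(value):
    """Convert a Torque 'HH:MM:SS' limit string to seconds."""
    h, m, s = (int(part) for part in value.split(":"))
    return h * 3600 + m * 60 + s

def parse_walltime_buggy(value):
    # Leftover from the old parsing code: divide by 60 while parsing...
    return hhmmss_to_seconds(value) // 60

def publish_ldif(value):
    # ...and the new printf-style output divides by 60 again.
    # (Illustrative attribute name, not necessarily the real one.)
    return "GlueCEPolicyMaxWallClockTime: %d" % (value // 60)

# 72:00:00 is 259200 s = 4320 min, but the double division publishes 72,
# i.e. the value effectively comes out in hours instead of minutes.
print(publish_ldif(parse_walltime_buggy("72:00:00")))
```

That is consistent with what was observed: sites running this version appear
to publish walltime in hours rather than minutes.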
I will file a ticket with GGUS for this issue.
Kind regards,
Fokke
--
Fokke Dijkstra <[log in to unmask]>
High Performance Computing & Visualisation
Donald Smits Center for Information Technology, University of Groningen
Postbus 11044, 9700 CA Groningen, The Netherlands
+31-50-363 9243