On 14/06/12 14:21, Kashif Mohammad wrote:
> Hi Chris
>

Thanks Kashif, that's really helpful. I think there are two potential 
problems - see below for details.

> I haven't opened a ticket separately but ticket opened by Stephan explain
> it in detail (https://ggus.eu/tech/ticket_show.php?ticket=72506 )
>

That ticket explains the following problem:

> It is not a proxy issue as such, but a bug in CREAM: it does not handle
> jobs properly whose proxy expires while they are still in the queue.

I can see that this is clearly a bug.

> As far as I understand, if you upload a long-lived proxy to the myproxy
> server and then submit a job to the CE with the default 12-hour proxy,
> and the job gets a chance to run within 12 hours, then the CE will keep
> updating the proxy until the job finishes.

OK - though I actually believe the WMS updates the proxy and pushes it 
to the CE.


> But if the job doesn't get a chance to run within 12 hours, then there
> is no mechanism to update the proxy.

However, I see no evidence in that bug report for this latter statement. 
If, as I understand it, the WMS keeps pushing an updated proxy to the CE, 
the proxy on the CE will be renewed continually, and the job will work 
when it eventually reaches the front of the queue.
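
As I understand it, the usual recipe is something like the sketch below. 
It's only an outline - the myproxy hostname, VO name and JDL filename are 
placeholders rather than anything configured at a real site:

# 1. Register a long-lived credential with a myproxy server, renewable
#    by trusted services such as the WMS (hostname is a placeholder).
myproxy-init -s myproxy.example.org -d -n

# 2. In the JDL, tell the WMS which myproxy server holds the credential:
#    MyProxyServer = "myproxy.example.org";

# 3. Create a normal short-lived proxy and submit through the WMS; the
#    WMS should then keep pushing renewed proxies to the CE while the
#    job sits in the queue.
voms-proxy-init --voms somevo
glite-wms-job-submit -a job.jdl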

Of course, if a job doesn't use myproxy correctly (or at all), or the 
credential uploaded to myproxy itself expires, then you'll end up with an 
invalid proxy in the job - and in that situation you'd expect to hit the 
behaviour described in the ticket.
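
A quick way to check how long a given proxy file has left (the path below 
is just a placeholder - on a CREAM CE the delegated proxies live under the 
site-specific sandbox area) is something like:

# Prints the remaining lifetime in seconds; 0 means it has expired.
voms-proxy-info -file /path/to/delegated/proxy -timeleft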

> Ideally CREAM should handle this and cancel the job. But somehow CREAM
> deletes the expired delegation without checking that there is a job in
> the queue. There is a Savannah ticket for this: https://savannah.cern.ch/bugs/?86700.
>
> I am just going to delete 70 esr jobs because they are in this state. It
> seems that an esr user submitted a bunch of jobs to Oxford; the first
> batch of jobs ran successfully, but once they crossed their fair share
> the rest of the jobs got lower priority and ended up in this weird state.


Chris

>
> Cheers
> Kashif
>
>
>
>
> -----Original Message-----
> From: Christopher J. Walker [mailto:[log in to unmask]]
> Sent: 13 June 2012 22:28
> To: Testbed Support for GridPP member institutes
> Cc: Kashif Mohammad
> Subject: Re: Cleaning up the PBS/Torque queues
>
> On 13/06/12 17:30, Kashif Mohammad wrote:
>> Hi John
>>
>> It is a longstanding issue with CREAM/PBS and Stephen opened a
>> detailed ticket (https://ggus.eu/tech/ticket_show.php?ticket=72506 )
>> but it is not fixed yet. At Oxford we regularly kill jobs which
>> are either in W state or in Q state but assigned to a WN.
>>
>> for job in $(qstat | grep " Q " | cut -d. -f1) ; do if qstat -f ${job} | grep -q exec ; then qdel -p ${job} ; fi ; done
>>
>> It will kill any job which is in Q state but assigned to a WN.
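>>
>> For readability, the same loop spread over several lines with comments
>> (this assumes, as the one-liner does, that "qstat -f" only reports an
>> exec_host line once a job has actually been assigned to a node):
>>
>> for job in $(qstat | grep " Q " | cut -d. -f1) ; do  # every job still in Q state
>>     if qstat -f ${job} | grep -q exec ; then         # ...but already assigned to a WN
>>         qdel -p ${job}                                # purge it from the batch server
>>     fi
>> done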
>>
>> One of the issues we have noticed is that sometimes jobs from
>> lower-priority VOs/users have to stay in the queue long enough for their
>> proxy to expire, and CREAM doesn't handle this situation properly.
>
> Not more proxy problems :-(.
>
> I know you raised the possibility of this happening at a previous
> operations meeting - but I wasn't aware anyone had done tests to
> demonstrate this as a problem in addition to the WMS not renewing proxies.
>
> Can I have a GGUS ticket number, please?
>
> Thanks,
>
> Chris
>
>>
>> Cheers
>> Kashif
>>
>>
>> -----Original Message-----
>> From: Testbed Support for GridPP member institutes [mailto:[log in to unmask]] On Behalf Of John Hill
>> Sent: 13 June 2012 16:37
>> To: [log in to unmask]
>> Subject: Cleaning up the PBS/Torque queues
>>
>> While investigating the recent supposed CVMFS and analysis job issues at
>> Cambridge, I came across PBS errors in /var/log/messages on the WNs
>> which reported copy errors when getting files from the CREAM Sandbox
>> area. Further digging has identified these as old pilot jobs (some from
>> August last year!) which are still lurking in the PBS queue and are
>> being periodically restarted. "showq" indicates that we have about 3500
>> of these relic jobs. I was wondering whether there was a
>> recommended way to tidy up the queue?
>>
>> John
>