Any tips for ICE jobs in a similar state on WMS?

Catalin



> -----Original Message-----
> From: LHC Computer Grid - Rollout [mailto:[log in to unmask]]
> On Behalf Of Maarten Litmaath
> Sent: 24 November 2010 17:40
> To: [log in to unmask]
> Subject: Re: [LCG-ROLLOUT] condor jobs in WMS
> 
> Hola Arnau,
> 
> > We have many jobs in H status in our WMS. Most of them are really old:
> >
> > 184308.0   glite           4/27 21:26   0+00:00:00 H  0   9.8 JobWrapper.https_3
> > 197405.0   glite           5/11 22:18   1+17:25:59 H  0   9.8 JobWrapper.https_3
> > 198019.0   glite           5/12 15:37   0+20:36:19 H  0   9.8 JobWrapper.https_3
> > 198058.0   glite           5/12 15:42   1+03:51:48 H  0   9.8 JobWrapper.https_3
> > 198190.0   glite           5/12 15:55   1+10:48:08 H  0   9.8 JobWrapper.https_3
> > 198536.0   glite           5/12 16:33   0+23:34:50 H  0   9.8 JobWrapper.https_3
> > 198598.0   glite           5/12 16:36   0+18:43:36 H  0   9.8 JobWrapper.https_3
> > 198770.0   glite           5/12 16:55   0+21:51:36 H  0   9.8 JobWrapper.https_3
> > 200168.0   glite           5/13 08:44   0+00:00:00 H  0   9.8 JobWrapper.https_3
> > 214206.0   glite           5/27 00:59   0+00:00:00 H  0   9.8 JobWrapper.https_3
> > 214507.0   glite           5/27 10:48   0+00:00:00 H  0   9.8 JobWrapper.https_3
> > [...]
> >
> >
> > Shouldn't they be deleted by some WMS process? If not, may I delete them by hand?
> > # condor_q | grep -c H
> > 997
> 
> That is nothing; some of our WMS instances had >15k held jobs before we did some cleanup.
> A solution is provided in this bug report:
> 
>     https://savannah.cern.ch/bugs/index.php?70401
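
For manual cleanup, a minimal sketch (assuming the held jobs are indeed stale gLite JobWrapper jobs like the ones listed above, and that the procedure from the bug report is preferred where it applies) is to target held jobs with condor_rm; in Condor, JobStatus == 5 corresponds to the H state:

    # list the held jobs first, to check they are the ones you expect
    condor_q -constraint 'JobStatus == 5'

    # then remove the held jobs owned by the glite user
    condor_rm -constraint 'JobStatus == 5 && Owner == "glite"'

The Owner == "glite" condition is only an illustrative guard to avoid touching other users' jobs; adjust or drop it to match your setup.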