LCG-ROLLOUT Archives
LCG-ROLLOUT@JISCMAIL.AC.UK
February 2009

Subject: Re: Cream CE not parsing PBS logs correctly (Possibly)
From: Massimo Sgaravatto - INFN Padova <[log in to unmask]>
Reply-To: LHC Computer Grid - Rollout <[log in to unmask]>
Date: Fri, 20 Feb 2009 18:27:59 +0100
Content-Type: TEXT/PLAIN (351 lines)

I am not an expert on Torque, but as far as I understand (I might be
completely wrong), if a job is not running but you still see something in
the "exec_host" field, it is because the job tried to run there.

What does

checkjob 2409813

report? It should say why the job doesn't want to run.
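
A minimal way to gather that, as a sketch (checkjob is Maui's client, and
the qstat attributes are the standard Torque ones quoted later in this
thread):

   # ask Maui why the job is not being started (policies, reservations, node state)
   checkjob -v 2409813

   # cross-check the server's view: state, any deferred execution time, scheduler comment
   qstat -f 2409813 | grep -E 'job_state|Execution_Time|comment'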

					Cheers, Massimo


On Fri, 20 Feb 2009, Douglas McNab wrote:

> Hi,
> 
> I said this because a qstat -f of the job id *2409813* for the cream job in
> torque shows an exec_host:
> 
> svr016:/var/spool/pbs/server_logs# *qstat -f 2409813*
> Job Id: *2409813*.svr016.gla.scotgrid.ac.uk
>     Job_Name = cream_520507277
>     Job_Owner = [log in to unmask]
>     job_state = W
>     queue = q30m
>     server = svr016.gla.scotgrid.ac.uk
>     Checkpoint = u
>     ctime = Fri Feb 20 14:36:34 2009
>     Error_Path = dev011.gla.scotgrid.ac.uk:/dev/null
> *    exec_host = node309/3*
>     Execution_Time = Fri Feb 20 17:22:52 2009
>     Hold_Types = n
>     Join_Path = n
> 
> But if you look at pbsnodes for that node, *node309*, and grep for the
> cream job id, *2409813*, it returns nothing.
> 
> svr016:~# *pbsnodes node309 | grep 2409813*
> 
> Then, looking closer at node309, you see this job,
> 3/*2410148*.svr016.gla.scotgrid.ac.uk, which is not the cream job:
> 
> svr016:~# pbsnodes node309
> node309
>      state = free
>      np = 8
>      properties = lcgpro
>      ntype = cluster
>      jobs = 0/2408249.svr016.gla.scotgrid.ac.uk, 1/2376195.svr016.gla.scotgrid.ac.uk,
>             2/2408647.svr016.gla.scotgrid.ac.uk, 3/*2410148*.svr016.gla.scotgrid.ac.uk,
>             4/2408285.svr016.gla.scotgrid.ac.uk, 5/2371288.svr016.gla.scotgrid.ac.uk,
>             6/2408028.svr016.gla.scotgrid.ac.uk
>      status = opsys=linux,uname=Linux node309.beowulf.cluster
> 2.6.9-78.0.1.ELsmp #1 SMP Tue Aug 5 13:53:03 CDT 2008 x86_64,
> sessions=32057 2151 2589 24011 19802 29533 27211,nsessions=7,nusers=4,
> idletime=955188,totmem=5952308kb,availmem=5594108kb,physmem=16438780kb,
> ncpus=8,loadave=7.11,netload=4294967294,state=free,jobs=
> 2371288.svr016.gla.scotgrid.ac.uk 2376195.svr016.gla.scotgrid.ac.uk
> 2408028.svr016.gla.scotgrid.ac.uk 2408249.svr016.gla.scotgrid.ac.uk
> 2408647.svr016.gla.scotgrid.ac.uk 2408285.svr016.gla.scotgrid.ac.uk
> 2410148.svr016.gla.scotgrid.ac.uk,rectime=1235149229
> 
> Then a quick qstat of *2410148* shows that it's actually a totally
> different job.
> 
> svr016:~# qstat -f *2410148*
> Job Id: 2410148.svr016.gla.scotgrid.ac.uk
>     Job_Name = STDIN
>     Job_Owner = [log in to unmask]
>     resources_used.cput = 00:04:14
>     resources_used.mem = 468532kb
>     resources_used.vmem = 2307700kb
>     resources_used.walltime = 00:05:51
>     job_state = R
>     queue = q30m
>     server = svr016.gla.scotgrid.ac.uk
>     Checkpoint = u
> 
> Perhaps the wait status in Torque means it is waiting for 2410148 to
> finish before running the cream job - not sure.
> It just seems strange.  The cream job has now been on the cluster for 2.5
> hours and has been moved through various exec_hosts, but nothing is
> happening in terms of actually running.
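> 
> A quick way to repeat this cross-check in one go (a sketch: it only uses
> qstat, sed and pbsnodes, and assumes the exec_host line is not wrapped in
> the qstat output):
> 
>   JOBID=2409813
>   # node name the server claims, i.e. "node309" out of "node309/3"
>   NODE=$(qstat -f $JOBID | sed -n 's|.*exec_host = \([^/]*\)/.*|\1|p')
>   echo "server claims exec_host: $NODE"
>   # 0 matches here means the node itself knows nothing about the job
>   pbsnodes $NODE | grep -c $JOBID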
> 
> Cheers,
> 
> Dug
> 
> 
> 2009/2/20 Massimo Sgaravatto - INFN Padova <[log in to unmask]>
> 
> > If a job has been submitted to torque but it is not running, it is
> > correct that CREAM reports that the job is in IDLE status.
> > So why did you say "On deeper investigation it looks to me like torque
> > or cream thinks the job is actually running on a job slot"?
> >
> >
> > Then we have to understand why the job doesn't want to run ...
> >
> >
> >                                Cheers, Massimo
> >
> > On Fri, 20 Feb 2009, Douglas McNab wrote:
> >
> > > Hi Massimo,
> > >
> > > What I have seen is that the job looks to have been submitted to torque
> > > and moves from queued to a waiting state:
> > >
> > > svr016:/var/spool/pbs/server_logs# qstat | grep dteam083
> > > 2409813.svr016            cream_520507277  dteam083               0 Q q30m
> > >
> > > svr016:/var/spool/pbs/server_logs# qstat | grep dteam083
> > > 2409813.svr016            cream_520507277  dteam083               0 W q30m
> > >
> > > and then sits there.  It looks, from qstat'ing the job, as if it tries
> > > to schedule it to a worker node and sets the exec_host, but the job
> > > never runs.
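> > >
> > > For what it's worth, in Torque job_state W means the job is waiting
> > > for its Execution_Time to be reached, so a W job with a future
> > > Execution_Time looks like a run attempt that was deferred and will be
> > > retried.  A simple way to watch it (a sketch):
> > >
> > >   # poll the state and the deferred start time every 30 seconds
> > >   while true; do
> > >     qstat -f 2409813 | grep -E 'job_state|Execution_Time'
> > >     sleep 30
> > >   done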
> > >
> > > In terms of the cream ce:
> > >
> > > -bash-3.00$ glite-ce-job-status
> > > https://dev011.gla.scotgrid.ac.uk:8443/CREAM520507277
> > > 2009-02-20 16:27:01,545 WARN - No configuration file suitable for
> > > loading. Using built-in configuration
> > >
> > > ******  JobID=[https://dev011.gla.scotgrid.ac.uk:8443/CREAM520507277]
> > >     Status        = [IDLE]
> > >
> > > It also looks from tracejob that it keeps rescheduling the job for some
> > > reason.
> > >
> > > 02/20/2009 15:19:37  S    Job Modified at request of
> > > [log in to unmask]
> > > 02/20/2009 15:19:37  S    Job Run at request of
> > > [log in to unmask]
> > > 02/20/2009 15:50:24  S    Job Modified at request of
> > > [log in to unmask]
> > > 02/20/2009 15:50:24  S    Job Run at request of
> > > [log in to unmask]
> > > 02/20/2009 16:21:55  S    Job Modified at request of
> > > [log in to unmask]
> > > 02/20/2009 16:21:55  S    Job Run at request of
> > > [log in to unmask]
> > >
> > > There is no mention of "unable to run job" in the logs.  It looks like it
> > > never actually gets that far.
> > > What does the BLParser actually do?
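> > > As far as I understand it (and I may be wrong), the BLParser tails the
> > > batch system log files - here the NFS-mounted Torque logs - and feeds
> > > job status changes back to CREAM, so if it cannot read or parse those
> > > logs, CREAM's view of the job goes stale.  A quick sanity check on the
> > > CE (a sketch, assuming the logs are mounted at the same path as on the
> > > server):
> > >
> > >   # is a blparser process running on the CE?
> > >   ps aux | grep -i blparser | grep -v grep
> > >   # can it actually read today's server log over NFS?
> > >   tail -n 5 /var/spool/pbs/server_logs/20090220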
> > >
> > > Cheers,
> > >
> > > Dug
> > >
> > > 2009/2/20 Massimo Sgaravatto - INFN Padova <[log in to unmask]>
> > >
> > > > Hi Douglas
> > > >
> > > > Are you saying that the job is submitted correctly on torque, but it
> > > > doesn't run (for whatever reason), and CREAM instead reports that the
> > > > job is running?
> > > >
> > > > This looks like bug
> > > >
> > > > https://savannah.cern.ch/bugs/index.php?45717
> > > >
> > > >
> > > > Can you check whether in the pbs log files there is something like
> > > > "unable to run job" for that job?
> > > >
> > > >                                Cheers, Massimo
> > > >
> > > >
> > > > On Fri, 20 Feb 2009, Douglas McNab wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I am testing a cream ce that I have set up on a scotgrid dev machine.
> > > > > The current setup is cream ce and torque/maui on different hosts,
> > > > > with the logs mounted via NFS on the CE.
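> > > > >
> > > > > With this split setup, one basic thing to verify is that the CE can
> > > > > actually see the logs (a sketch, assuming the Torque logs are
> > > > > mounted at the same path on the CE as on the server):
> > > > >
> > > > >   # on the cream ce: is the NFS mount there and readable?
> > > > >   mount | grep pbs
> > > > >   ls /var/spool/pbs/server_logs/ | tail -n 3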
> > > > >
> > > > > The job gets submitted successfully and is traceable in torque.
> > > > > However, it moves from Q to W and waits forever.  On deeper
> > > > > investigation it looks to me like torque or cream thinks the job is
> > > > > actually running on a job slot where another totally different job
> > > > > is running.  When running pbsnodes for the node id and grepping for
> > > > > the cream job id, nothing is returned.  Then, tailing the torque
> > > > > logs, there is another dteam job with the same exec_host as the
> > > > > cream job.  This leads me to think that I may not have set up the
> > > > > log parsing correctly on the ce, or that something is getting
> > > > > thoroughly confused.
> > > > >
> > > > > The outputs from the various commands behind this conclusion are
> > > > > listed below.  Any thoughts on this would be greatly appreciated.
> > > > >
> > > > > svr016:/var/spool/pbs/server_logs# *tracejob 2409813*
> > > > > /var/spool/pbs/mom_logs/20090220: No such file or directory
> > > > > /var/spool/pbs/sched_logs/20090220: No such file or directory
> > > > >
> > > > > Job: 2409813.svr016.gla.scotgrid.ac.uk
> > > > >
> > > > > 02/20/2009 14:36:34  S    enqueuing into q30m, state 1 hop 1
> > > > > 02/20/2009 14:36:34  S    Job Queued at request of
> > > > > [log in to unmask], owner =
> > > > > [log in to unmask], job name =
> > > > >                           cream_520507277, queue = q30m
> > > > > 02/20/2009 14:36:34  A    queue=q30m
> > > > > 02/20/2009 15:19:37  S    Job Modified at request of
> > > > > [log in to unmask]
> > > > > 02/20/2009 15:19:37  S    Job Run at request of
> > > > > [log in to unmask]
> > > > >
> > > > >
> > > > > svr016:/var/spool/pbs/server_logs# *qstat -f 2409813*
> > > > > Job Id: *2409813.svr016.gla.scotgrid.ac.uk*
> > > > >     Job_Name = cream_520507277
> > > > >     Job_Owner = [log in to unmask]
> > > > >     job_state = W
> > > > >     queue = q30m
> > > > >     server = svr016.gla.scotgrid.ac.uk
> > > > >     Checkpoint = u
> > > > >     ctime = Fri Feb 20 14:36:34 2009
> > > > >     Error_Path = dev011.gla.scotgrid.ac.uk:/dev/null
> > > > >     *exec_host = node182/2*
> > > > >     Execution_Time = Fri Feb 20 15:49:41 2009
> > > > >     ......
> > > > >
> > > > > svr016:~# *pbsnodes node182 | grep 2409813*
> > > > >
> > > > > svr016:~# *pbsnodes node182*
> > > > > node182
> > > > >      state = job-exclusive
> > > > >      np = 8
> > > > >      properties = lcgpro
> > > > >      ntype = cluster
> > > > >      jobs = 0/2406819.svr016.gla.scotgrid.ac.uk,
> > > > >             1/2409176.svr016.gla.scotgrid.ac.uk,
> > > > >             2/2409262.svr016.gla.scotgrid.ac.uk,
> > > > >             3/2340154.svr016.gla.scotgrid.ac.uk,
> > > > >             4/2354251.svr016.gla.scotgrid.ac.uk,
> > > > >             5/2407443.svr016.gla.scotgrid.ac.uk,
> > > > >             6/2408238.svr016.gla.scotgrid.ac.uk,
> > > > >             7/2407591.svr016.gla.scotgrid.ac.uk
> > > > >      status = opsys=linux,uname=Linux node182.beowulf.cluster
> > > > > 2.6.9-78.0.1.ELsmp #1 SMP Tue Aug 5 13:53:03 CDT 2008 x86_64,
> > > > > sessions=30091 1175 1856 9228 11111 20888 22885 30853,nsessions=8,
> > > > > nusers=4,idletime=3709813,totmem=5952308kb,availmem=2123760kb,
> > > > > physmem=16438780kb,ncpus=8,loadave=8.04,netload=4294967294,state=free,
> > > > > jobs=2340154.svr016.gla.scotgrid.ac.uk 2354251.svr016.gla.scotgrid.ac.uk
> > > > > 2406819.svr016.gla.scotgrid.ac.uk 2407443.svr016.gla.scotgrid.ac.uk
> > > > > 2407591.svr016.gla.scotgrid.ac.uk 2408238.svr016.gla.scotgrid.ac.uk
> > > > > 2409176.svr016.gla.scotgrid.ac.uk *2409262.svr016.gla.scotgrid.ac.uk,
> > > > > rectime=1235143422*
> > > > >
> > > > > 02/20/2009 15:20:23;S;*2409262.svr016.gla.scotgrid.ac.uk*;
> > > > > user=dteam166 group=dteam jobname=STDIN queue=q3d ctime=1235130339
> > > > > qtime=1235130339 etime=1235130339 start=1235143223
> > > > > [log in to unmask] *exec_host=node182/2*
> > > > > Resource_List.cput=72:00:00 Resource_List.neednodes=1
> > > > > Resource_List.nodect=1 Resource_List.nodes=1
> > > > > Resource_List.walltime=72:00:00
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Dug
> > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > >
> >
> >
> 
> 
> 
> 

-- 
              \\\|///
            \\ ~ ~ //
            (/ @ @ /)
   -------oOOo-(_)-oOOo----------------------------------
                         Massimo Sgaravatto
                         INFN Sezione di Padova
                         Via Marzolo, 8
                         35131 Padova - Italy  
                         Tel: ++39 0498277047   Fax: ++39 0498277102
          oooO           E-mail: massimo.sgaravatto [at] pd.infn.it
          (   )   Oooo   Home page: http://www.pd.infn.it/~sgaravat
   --------\ (----(   )----------------------------------
            \_)    ) /
                  (_/
