Hi, can you give your site's coordinates (I mean the site name), so we
can see which batch services you publish via the grid?
I mean, if you don't publish that Condor is available, then this is
indeed rogue activity, even if it is regular ATLAS (yet still rogue ;) )
activity.
Another question is:
Your WNs seem to run condor_schedd, which is Condor's submission
daemon. If you had it only on the CE, then a process local to the WN
wouldn't be able to submit any Condor job.
You control which daemons run on a WN in its Condor configuration
file, which can be local or global; there is a variable named
DAEMON_LIST that you should look up at
www.cs.wisc.edu/condor -> condor manual.
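For example, a condor_config.local on the WN along these lines would
keep the schedd off the worker nodes (the file location and values
here are only an illustrative sketch, check against your own setup):

    # WN-side Condor config (example only, adapt to your installation)
    # run only the master and the execute-side daemon, no schedd:
    DAEMON_LIST = MASTER, STARTD

    # whereas a submit host (e.g. the CE) would typically have:
    # DAEMON_LIST = MASTER, SCHEDD
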
Regards,
Max.
On Thu, 2006-10-26 at 10:36 +0200, Jeff Templon wrote:
> Hey
>
> no, they are essentially pilot jobs being run by ATLAS - they come in
> via normal channels, but the user payload is a condor glide-in, which
> connects back to the main condor and grabs "real payload". The jobs
> appear to be completely kosher, but at some point the real payload is
> run via something like
>
> /bin/sh --login <script_name> <args>
>
> if you do 'ps' you can see that this process tree is racking up CPU
> time, and Rod claims that the process parentage is all kosher, but for
> some reason Torque doesn't register the CPU time used by the processes
> under the /bin/sh. The symptom you then see is as Martin said:
>
> - processes on your WNs using up all the CPU
> - lots of apparently stalled jobs on the WNs with zero CPU
> - nothing new getting scheduled.
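
(Side note: to spot these trees on a WN, something like the ps call
below should show them together with the CPU they have racked up; the
account name and exact flags may differ at your site, so treat it as a
sketch:

    # show everything the atlassgm account runs, as a process tree,
    # with parent pid and accumulated CPU time per process
    ps -u atlassgm -o pid,ppid,etime,time,args --forest
)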
>
> JT
>
> Steve Traylen wrote:
> >
> > Hi Jeff,
> >
> > What are you waiting on from me? I must have missed that one.
> >
> > I think from Martin's description these jobs at RAL are just processes
> > that are left over after the batch job has finished in Torque? I think
> > David G wrote some scripts for killing off these rogue processes at the
> > end of jobs.
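
(I haven't seen David G's actual scripts, but the general idea would be
a Torque epilogue along these lines, cleaning up whatever the job's
owner left behind on the node; argument positions follow the standard
pbs_mom epilogue convention, and it is deliberately blunt, so take it
only as a sketch:

    #!/bin/sh
    # Torque epilogue sketch (illustration only, not the RAL script).
    # pbs_mom passes the job id as $1 and the job owner as $2.
    JOBUSER=$2
    # kill anything this user still has running on the node; too blunt
    # if the same user can have several jobs on one node.
    pkill -9 -u "$JOBUSER"
)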
> >
> > Steve
> >
> > On Oct 26, 2006, at 10:23 AM, Jeff Templon wrote:
> >
> >> Hi,
> >>
> >> we have seen them, but they are associated with proper ATLAS jobs so
> >> are not draining our farm. what may be fooling you is the metric you
> >> use. indeed, if you use CPU time as the primary metric, these jobs
> >> will appear to have drained your entire farm. for some reason,
> >> the CPU time used by these jobs does not get properly accounted for by
> >> Torque.
> >>
> >> On the other hand, the wall time *does* get accounted for. This is
> >> one reason why I keep pleading for wall time being the primary
> >> accounting metric.
> >>
> >> I asked the Traylenator to look into why the CPU time isn't getting
> >> caught by Torque, haven't heard back from him yet. Other volunteers
> >> are welcome. I 'spect Mr. Walker will pipe up and say
> >> something soon.
> >>
> >> My take: we need to figure out why torque doesn't catch the CPU time,
> >> and we need to account for wall time; otherwise I think these jobs are fine.
> >>
> >> JT
> >>
> >> Bly, MJ (Martin) wrote:
> >>> Hi all,
> >>> We have some WNs here that appear to be running agents for the Condor
> >>> system, trying to do work on our WNs in opposition to the Torque/Maui
> >>> batch/scheduling system and unknown to it:
> >>>
> >>> atlassgm 16125     1  0 Oct20 ?  00:01:08 condor_master -f
> >>> atlassgm  2652 16125  0 Oct22 ?  00:04:25 condor_startd -f
> >>> atlassgm 20839  2652  0 Oct25 ?  00:00:48 condor_starter -f higgs05.cs.wisc.edu
> >>> atlassgm 20845 20839  0 Oct25 ?  00:00:00 /bin/sh --login /pool/4006441.csflnx353.rl.ac.uk/execute.130.246.180.112-16125/dir_20839/condor_exec.ex
> >>> atlassgm 21442 20845 92 Oct25 ?  22:09:41 ./2Qgen
> >>>
> >>> In the above case, jobid 4006441 has been and gone in the batch system.
> >>> The big problem appears to be that this is causing grief to Maui which
> >>> is refusing to schedule any legitimate work, thus draining the whole
> >>> farm.
> >>> Anyone else seen this?
> >>> This is causing a big hassle: we are terminating all such processing in
> >>> order to get our capacity back online.
> >>> Martin
> >>> Tier1 Systems.
> >
> > --Steve Traylen
> > [log in to unmask]
> > CERN, IT-GD-OPS.
> >
> >
> >
--
Maxim Kovgan
TECHNION-LCG2 Site Admin
+972-4829-3864