Yo *,
So Ronald and I just spent a thoroughly enjoyable hour trying to figure
out why the load on the CE has been higher than we feel comfortable
with, the last little while ... and at some point, while tracing lots of perl
processes that seemed to be taking a lot of time, we descended into the
gram_job_state directory ... where we found 28,800 files lying around,
despite the fact that we only have a few hundred active jobs. Throwing
away the bulk of these files resulted in a factor of three decrease in
the load on the CE machine.
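In case you want to do the same check on your own CE, the quick and dirty
comparison boils down to something like this (the path is the default Globus
one on our install, adjust to taste; qstat here is the PBS one):

  ls /opt/globus/tmp/gram_job_state | wc -l   # state files lying around
  qstat | wc -l                               # jobs PBS actually knows about (minus a couple of header lines)

If the first number is wildly bigger than the second, you likely have the
same problem.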
I got a strange sense of deja vu while doing all this, and indeed, it's
not the first time. I reproduce for you below, verbatim, a message from
almost precisely three years ago, containing an analysis of the problem.
Is there any new collective wisdom on why this problem happens? Why
is it still happening??
J "time for lunch" T
====
From [log in to unmask] Thu Sep 2 20:20:20 2004 +0200
Subject: qstat, jobmanagers, PERL, denial of service, and drane bamage
From: Jeff Templon <[log in to unmask]>
To: LHC Computer Grid - Rollout <[log in to unmask]>
So,
I have just spent a couple enjoyable hours trying to figure
out what is going on with this silly qstat business.
Firstly, I am on the verge of banning the following user
/C=UK/O=eScience/OU=QueenMaryLondon/L=Physics/CN=dave kant
since he seems to be responsible for something like 25% of
the load on our CE, looping again and again over many jobs.
Then I saw something really strange: most of the jobs being
provided to qstat -f did NOT EVEN EXIST on the system.
Furthermore, the output of qstat -f was being piped to /dev/null
so whatever this silly program is doing, it's not learning
from the mistake ... imagine someone who called you once
every fifteen minutes and asked "can I speak to Rod, please".
You answer "Rod no longer lives here". Fifteen minutes later,
...
So then I tried to inspect the program: you guessed it,
Larry Wall Code, write once, read never. The processes show up with
names like:
perl /tmp/grid_manager_monitor_agent.atlas004.28318.1000 --delete-sel
After even more inspection, I see that it's not only dteam
doing this silly asking for jobs that no longer exist;
most, if not ALL, of the qstats are doing this. From what little of
the code (in this case a good name) I can understand, it
seems to be looking for state files, and I think it means
the files in
/opt/globus/tmp/gram_job_state
of which there are over 10,000 on tbn18. I get the feeling
that this code, when run as dteam001, is looking
at all the files in this directory, finding out which
are owned by dteam001, extracting the PBS jobid from each, and doing
a great big loop over all the jobids so gathered.
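In shell terms, my reading of what the agent is doing amounts to something
like the following (the real thing is Perl, and how it digs the PBS jobid out
of a state file is a guess on my part, hence the made-up helper name):

  for f in /opt/globus/tmp/gram_job_state/*; do
      [ -O "$f" ] || continue              # only look at files owned by me (dteam001, say)
      jobid=$(extract_pbs_jobid "$f")      # hypothetical helper; the real agent parses the file itself
      qstat -f "$jobid" > /dev/null 2>&1   # one qstat per jobid, output and errors thrown away
  done

One qstat per state file, whether or not the job still exists, which is
exactly the pattern we see in the process table.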
Let's see, I currently have 672 active jobs (R or Q state)
and 10,000 of these state files, so I expect about 7% of
the qstat calls to refer to an actual real job on the system:
tbn18:~> for n in $(seq 30)
do
  qstat $(ps ax | egrep 'sh -c .*qstat.*[0-9]+.tbn18.nikhef.nl' | gawk '{print $9}') >& stat.q.$n
  sleep 2
done
tbn18:~> egrep '^[0-9]+' stat.q.* | wc
10 60 908
tbn18:~> grep Unknown stat.q.* | wc
117 585 6295
10 out of 127 is 7.9%.
So the question is, what do we do? Where do we submit the
bug? Can I just do an rm -f on the directory with all these
stale state files in it? It has the potential to drop
the load quite a bit, getting rid of 90% of the qstat
calls ...
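A less drastic alternative than nuking the whole directory would be to only
throw away state files that haven't been touched in a while, say a week
(whether mtime is a safe enough criterion here is of course part of the
question I'm asking):

  find /opt/globus/tmp/gram_job_state -type f -mtime +7                     # look first
  find /opt/globus/tmp/gram_job_state -type f -mtime +7 -exec rm -f {} \;   # then delete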
J "being polite and not asking to rewrite the program
in a readable language" T