Fotis Georgatos said:
> I haven't tried yet more complex experiments, like having some
> balancing technique among multiple RBs or using multiple UIs.
Did you notice the recent discussion about configuring multiple brokers?
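For reference, the per-VO UI configuration file can list several
brokers; a minimal sketch, assuming the usual edg_wl_ui.conf layout,
hypothetical hostnames and the standard ports (7772 for the NS, 9000
for the LB):

    [
      VirtualOrganisation = "dteam";
      NSAddresses = {"rb1.example.org:7772", "rb2.example.org:7772"};
      LBAddresses = {{"rb1.example.org:9000"}, {"rb2.example.org:9000"}};
    ]

Note that the LB entries don't have to point at the same hosts as the
NS entries, which is what makes the NS/LB split mentioned below
possible.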
> The rate of 25 jobs/minute is not necessarily bad, since the LCG
> is meant for cpu-intensive tasks where the submission time is
> minuscule compared to the run time of the load imposed on the Grid
> worker nodes.
It isn't quite as simple as that, as Rod Walker has pointed out. A
job may run for 12 hours, but if there are tens of thousands of WNs
it's entirely possible that someone might want to keep 1000 or more
jobs running concurrently, which means submitting a job roughly every
43 seconds or less (12 hours / 1000 jobs). And that assumes you
submit continuously, whereas you may well prefer to submit your 1000
jobs in a batch and then go away - at 25/minute that's 40 minutes to
submit 1000, which is quite a while to wait. Also, it isn't just
submission: generally you have to poll a few times with job-status to
see if the job has finished, and then do a get-output at the end, so
you have several interactions with the RB per job, not just one.
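To make the interaction count concrete, a typical per-job cycle looks
something like this (hypothetical file names; a sketch, not a recipe):

    # one RB interaction to submit, collecting the job ID in a file
    edg-job-submit --vo dteam -o my_jobs sleep.jdl

    # several more interactions to poll until the job reaches Done
    edg-job-status -i my_jobs

    # and a final interaction to retrieve the output sandbox
    edg-job-get-output -i my_jobs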
> I find the breaking point between 32-64 parallel jobs more
> important; I believe the developers should give us a hint as to
> whether this behaviour is normal.
As far as I remember, the "breaking point" was just that the
submission was rejected, not that anything crashed, which seems like
good behaviour to me. If you had multiple brokers configured I think
the UI would fall back to one of the others, although that would be a
good thing to test. Also, you can potentially run the RB (i.e. NS+WM)
and the LB on separate machines, which might improve performance.
> The "rate at which the jobs run" is a completely different matter,
> and is dependent on the nature of the jobs, the nodes in question,
> their I/O dependency etc. This is yet another experiment, not
> yet done.
As Jeff said, it isn't just the running time of the job; there is
also a processing time. When you submit the job you interact with the
Network Server process, and that determines how fast job-submit
returns, which I guess is what you are measuring. The jobs are then
put in a queue to be processed by the Workload Manager, which does
the matchmaking to choose a site and then passes the job on to
Condor-G to submit. If the job has complex requirements, that stage
can take quite a bit longer than the NS interaction - in the past it
could be a *lot* longer because it involved querying the GRIS for
every matching CE; now it should just use an internal cache. However,
it may still be slow, especially if there are lots of input files,
since they have to be looked up in the catalogue. Also, it used to be
that the WM was strictly FIFO, so if there was a long queue your jobs
might take a long, and unpredictable, time before they were submitted
to a CE - I don't know if it's still like that, maybe Maarten can
comment?
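As an illustration, a JDL along these lines (hypothetical LFNs and
requirements) makes the WM do both matchmaking and catalogue lookups
before anything reaches Condor-G:

    [
      Executable = "/bin/sleep";
      Arguments = "60";
      // every LFN has to be resolved in the replica catalogue during
      // matchmaking, which adds to the WM processing time
      InputData = {"lfn:/grid/dteam/file1", "lfn:/grid/dteam/file2"};
      DataAccessProtocol = {"gsiftp"};
      // a requirement evaluated against every candidate CE
      Requirements = other.GlueCEStateStatus == "Production";
    ]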
> Well, the situation is that all the tests were done with the
> -r parameter.
OK, but that bypasses the matchmaking, so it will certainly go
through a lot faster than normal job submission. Also, allowing
resubmission makes even less sense when the job is forced to a single
CE!
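For comparison (hypothetical CE ID):

    # forced to a named CE: no matchmaking, and any retry can only go
    # back to the same site
    edg-job-submit -r ce.example.org:2119/jobmanager-lcgpbs-short sleep.jdl

    # normal submission: the WM matchmakes against the information system
    edg-job-submit --vo dteam sleep.jdl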
> What I find less acceptable, though, is that there are queues at
> sites that are experimental or disabled and still advertise
> "Production" status. I believe this is not only poisoning the
> quality of the information provided by the BDII, it is also
> imposing unnecessary RB load, to the point of wasting users' time.
For real production use by experiments the RB usually looks at a
VO-specific BDII which has a filtered list of sites, e.g. to take out
the ones which are failing the monitoring tests. However, we also need
an all-site BDII (known as the test zone for historical reasons) to
allow the tests to be run in the first place. Different VOs may well
have different criteria for considering a site to be good.
"Production" in the CE status just means that the queue is accepting
jobs; it doesn't imply that they will run successfully!
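If you want to see what a CE actually advertises, you can query the
BDII directly; a sketch assuming a hypothetical BDII host and the
usual port and base DN:

    ldapsearch -x -H ldap://bdii.example.org:2170 \
      -b "mds-vo-name=local,o=grid" \
      '(objectClass=GlueCE)' GlueCEUniqueID GlueCEStateStatus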
> "Job RetryCount 3 hit" should only appear when the network connection
> between RB and CE is not available for a "reasonable time period", in
> any other case it should imply either a bug or bad site maintenance...
I don't understand what you mean. A retry happens whenever the RB
detects that a job has failed at a particular site (not all failures
are detected); that may be a network problem or many other things.
Usually the retry will choose a different site, except that in your
case you have forced the job to a particular site. A user can specify
a limit on the number of retries (which can be 0), and that message
just tells you that the limit has been reached. To find out which
errors triggered the retries you have to look at the logging-info
output. Equally, a job which ends with a final status of Done and
looks OK may still have had errors and retries visible in the
logging-info.
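The limit is set with the RetryCount attribute in the JDL (e.g.
"RetryCount = 0;" to disable retries), and the history can be
inspected with something like (hypothetical job ID):

    edg-job-get-logging-info -v 2 "https://rb.example.org:9000/abc123"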
> time cat $1_matches |xargs -n1 -P$HOWMANY --replace \
> edg-job-submit --config-vo myui.$1 --nomsg -r {} sleep.jdl \
> |tee $1_jobs.$$_$HOWMANY.log
Are you aware of the -o option for edg-job-submit to write the job IDs
to a file? That's usually the most useful way to collect them.
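In your script that would look something like this (a sketch based on
your command line):

    time cat $1_matches | xargs -n1 -P$HOWMANY --replace \
      edg-job-submit --config-vo myui.$1 --nomsg -r {} \
      -o $1_jobids sleep.jdl

    # all the collected IDs can then be fed back in one go
    edg-job-status -i $1_jobids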
Stephen