Hi Stephen,
Burke, S (Stephen) wrote:
> Did you notice the recent discussion about configuring multiple brokers?
Nope, can you please point me at it?
>>The rate of 25 jobs/minute is not necessarily bad, since the LCG
>>is meant for cpu-intensive tasks where the submission time is
>>minuscule compared to the run time of the load imposed on the Grid
>>worker nodes.
>
> It isn't quite as simple as that, as Rod Walker has pointed out. A job
> may run for 12 hours, but if there are tens of thousands of WNs it's
> entirely possible that someone might want to have 1000 or more
> concurrent jobs, which means a job every 40 seconds or less. And that
> assumes that you submit continuously, whereas you may well prefer to
> submit your 1000 jobs in a batch and then go away - at 25/minute that's
> 40 minutes to submit 1000, which is quite a while to wait. Also it isn't
> just submission, generally you have to poll a few times with job-status
> to see if the job has finished, and then do a get-output at the end, so
> you have several interactions with the RB per job and not just one.
You are perfectly right; I ran into exactly this during my submissions.
For the time being it is just an uncomfortable quirk of the middleware,
but it can become discouraging for a certain range of high-end computing
problems that would otherwise map well to the grid, e.g. approximation
algorithms, parametric simulation of models, and other heavy heuristic
methods. On the other hand, if you consume that many resources on a
worldwide scale, so quickly, you should be able to rewrite parts of the
RB software yourself, or else...
;-)
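
To make the round-trip count concrete, a single job's life as seen from
the UI looks roughly like this (a minimal sketch; the polling interval
and the parsing of the "Current Status:" line are my own assumptions):

# one interaction with the Network Server
JOBID=$(edg-job-submit --nomsg sleep.jdl)
# N interactions with the LB while we wait for the job to finish
while true; do
    S=$(edg-job-status "$JOBID" | awk '/Current Status:/ {print $3}')
    case "$S" in Done*|Aborted|Cleared) break ;; esac
    sleep 300
done
# one final interaction to fetch the output sandbox
edg-job-get-output --dir . "$JOBID"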
>>I find the breaking point between 32-64 parallel jobs more important;
>>I believe the developers should give us a hint whether this
>>behaviour is normal.
>
> As far as I remember the "breaking point" was just that the submission
> was rejected, not that anything crashed, which seems like a good
> behaviour to me. If you had multiple brokers configured I think it would
It is good behaviour that it doesn't crash, but the messages you get
on the UI or in the RB's logfiles are far too vague to pin down the
true cause.
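
For what it's worth, the rejection point can at least be bracketed
empirically, along the lines of my script further down (a sketch; the
parallelism steps and grepping stderr for "error" are arbitrary choices):

for HOWMANY in 8 16 32 48 64 96; do
    FAILS=$(cat ${1}_matches \
        | xargs -n1 -P$HOWMANY --replace \
              edg-job-submit --config-vo myui.$1 --nomsg -r {} sleep.jdl 2>&1 \
        | grep -ci error)
    echo "parallelism=$HOWMANY failed=$FAILS"
done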
> fall back to one of the others, although it would be a good thing to
> test. Also you can potentially separate the RB (i.e. NS+WM) and the LB
> on different machines, which might improve the performance.
Now we are just sysadmins reaching the point of speculation (...),
and the actual developers of the software should step in with real
profiling, concrete numbers and facts. I find it a better path for
improving the infrastructure to have the people who built the thing
measure it in a scientific way, rather than experimenting in unknown
directions...
> As Jeff said, it isn't just the running time for the job, there is also
> a processing time. When you submit the job you interact with the Network
> Server process, and that determines how fast job-submit returns, which I
> guess is what you are measuring. The jobs are then put in a queue to be
[...]
Indeed, please read in that text "maximum load" instead of "load":
'"Throughput" refers to the innate ability of the RB to accept jobs,
as seen from a nearby UI; it is in practice the load that can
effectively be set by a single user on a single UI and a single RB,
having confirmed that the submission bottleneck is the RB itself.'
As you said, there are plenty of things that can slow down the
submission process anyway. Far too many to start reasoning about them
all...
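
Getting a rough figure for that maximum is easy enough; a sketch that
just brackets the submission pipeline with timestamps:

N=$(wc -l < ${1}_matches)
START=$(date +%s)
# ... run the submission pipeline here (e.g. the script quoted below) ...
ELAPSED=$(( $(date +%s) - START ))
echo "throughput: $(( N * 60 / ELAPSED )) jobs/minute"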
> For real production use by experiments the RB usually looks at a
> VO-specific BDII which has a filtered list of sites, e.g. to take out
> the ones which are failing the monitoring tests. However, we also need
> an all-site BDII (known as the test zone for historical reasons) to
> allow the tests to be run in the first place. Different VOs may well
> have different criteria for considering a site to be good. "Production"
> in the CE status just means that the queue is accepting jobs, it doesn't
> imply that they will run successfully!
I do still expect, though, that a site returned by "job-list-match"
will at least be able to run a plain bash script, even if all I get back
is the usual complaints about missing binaries and such. My concern is
that sites known to be in a weird state should be shown as such.
I don't see how a site can be in OK status, show up in edg-job-list-match,
and still not be able to run a small script through regular submission.
My advice to the CIC-on-duty is to always ask any affected sites to move
to maintenance status, or to set it themselves anyway. I don't mind if
they do this to sites that I am in control of; I think it's a "best
practice".
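
That "plain bash script" expectation is trivial to test; something like
the following probe is what I have in mind (a sketch; the file names and
the use of /bin/hostname as the minimal executable are my own choices):

cat > probe.jdl <<'EOF'
Executable    = "/bin/hostname";
StdOutput     = "probe.out";
StdError      = "probe.err";
OutputSandbox = {"probe.out", "probe.err"};
EOF
edg-job-submit --nomsg -r "$CE" probe.jdl   # $CE from edg-job-list-match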
>>time cat $1_matches |xargs -n1 -P$HOWMANY --replace \
>> edg-job-submit --config-vo myui.$1 --nomsg -r {} sleep.jdl \
>> |tee $1_jobs.$$_$HOWMANY.log
>
> Are you aware of the -o option for edg-job-submit to write the job IDs
> to a file? That's usually the most useful way to collect them.
Ah, yeah.
I used to employ that, and found out about yet another race condition:
the parallel processes each independently open the supplied output file
for writing, and they end up mangling each other's job URLs. :(
It doesn't happen very often, but it surely does happen.
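
An obvious workaround would be to give every parallel submission its own
-o file and merge afterwards (a sketch; it assumes -o appends the job ID
to the named file, and leans on the per-process PID for uniqueness):

rm -f jobids.*                     # start from a clean slate
cat ${1}_matches | xargs -n1 -P$HOWMANY --replace=CE sh -c \
    "edg-job-submit --config-vo myui.$1 --nomsg -o jobids.\$\$ -r CE sleep.jdl"
cat jobids.* > ${1}_jobs.log       # merge once every submission has returned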
--
echo "sysadmin know better bash than english" | sed s/min/mins/ \
| sed 's/better bash/bash better/' # Yelling in a CERN forum