Addressing some of the points on the thread:
1. Tweaking the rfio read-ahead doesn't do anything for panda analysis,
because the stager copies the data to the worker node. For most UK
sites this should be rfcp, but in fact I see that many of our DPM
sites are still using lcg-cp. I'll try and correct this tonight. Panda
has a built-in timeout on the stager of 1800s, so if the load is such
that the file copy time exceeds this, your success rate plummets
(this certainly happened at MAN-HEP2 early last week).
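To put numbers on that timeout, a back-of-the-envelope sketch in
python - the 3GB file size and 1.5MB/s per-stream rate are made-up
illustrative figures, not measurements from anyone's SE:

# Illustrative check of the 1800s panda stager timeout
STAGER_TIMEOUT_S = 1800

def copy_time_s(file_size_gb, per_stream_mb_s):
    # time to stage one input file to the worker node at the given rate
    return (file_size_gb * 1024.0) / per_stream_mb_s

t = copy_time_s(3.0, 1.5)   # e.g. a 3GB merged AOD from a loaded disk server
print("copy time %.0fs vs timeout %ds -> %s" %
      (t, STAGER_TIMEOUT_S, "job fails" if t > STAGER_TIMEOUT_S else "job survives"))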
2. Like Liverpool with file stager, we found the rfcp stager in panda
could make significant headway even when the network was maxed out by
apparently useless i/o and network traffic from jobs accessing data via
rfio (my speculation is that sensible sequential "streaming" access
suits the RAID controller better, so it gets higher throughput even
under load).
3. The WMS analysis runs a mixture of file stager and DQ2_LOCAL (rfio
or dcap) access, all using Johannes' tests. In retrospect this was a
mistake, as it meant that the sites had no easy way to distinguish
between these important differences in access pattern. One thing we
could try is to deliberately ban sites from one form of analysis to
disentangle this - we could stop the rfio access at Liverpool
and see how well the hammercloud AOD analysis performs with file
stager only. We might well try this at Glasgow on Thursday.
(At the moment http://atlas-ganga-storage.cern.ch/test_426/ and
http://atlas-ganga-storage.cern.ch/test_428/ use DQ2_LOCAL and
http://atlas-ganga-storage.cern.ch/test_429/ uses file stager.)
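For anyone wanting to reproduce the split by hand, the access mode is
chosen per job in Ganga. A minimal sketch, run inside a ganga session -
the attribute and value names are as I remember them for the current
DQ2Dataset, so treat them as assumptions and check your Ganga version:

# minimal Ganga sketch - names assumed, not checked against this release
j = Job()
j.application = Athena()
j.inputdata = DQ2Dataset()
j.inputdata.dataset = 'some.merged.AOD.dataset'   # hypothetical dataset name
j.inputdata.type = 'DQ2_LOCAL'     # posix-style rfio/dcap access from the WN
# j.inputdata.type = 'FILE_STAGER' # copy-then-process, like the panda stager
j.backend = LCG()
j.submit()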
4. File stager should be more analogous to the way that panda accesses
the data - copy to the worker node. However, it does not seem to work
much better for Glasgow in the current statistics, but I think this is
probably skewed by the capping, which meant that many jobs ran out of
proxy time before they could run (obviously this doesn't happen to
panda jobs). (Our SE hated file stager with small AOD files though, I
remember; it did something evil we never quite fathomed.)
5. As Sam pointed out, the analysis is now running on very large
merged AOD files - this has completely shifted the load pattern on the
SE and rendered the previous rfio read-ahead tuning useless. We've had
read-ahead at 128MB and tried 0B, but neither really seemed to work
well. Maybe we can tweak this in the dying days of STEP and get some
useful data, but I suspect the parameter space is too large and there
won't be enough time.
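To see why the old 128MB setting stops being a free lunch with this
load, some rough arithmetic - illustrative only, using the per-pool
figures John gives below and assuming the buffer really is held per
open connection:

# rough RAM and bandwidth cost of rfio read-ahead on one pool node (illustrative)
readahead_mb = 128    # per-connection read-ahead buffer (our old setting)
connections  = 100    # concurrent rfio connections per pool node (John's figure)
pool_mb_s    = 350    # bandwidth one pool node can actually deliver

print("buffer RAM at %d connections: ~%.1f GB" %
      (connections, readahead_mb * connections / 1024.0))
print("fair share per connection: ~%.1f MB/s" % (pool_mb_s / float(connections)))
# ~12.5 GB of buffers, but only ~3.5 MB/s each - most of each read-ahead is wasted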
6. I can't help but post one plot from ganglia, a snapshot of the week:
- Giant load spike one week ago when we were running 1250 analysis
jobs; many disk servers in utter panic (load up to 200).
- Job capping then controlled the load, but the network was continually
maxed out (I now think this was the rfio:/// access).
- On Saturday I switched read-ahead to 0 bytes - the network was still
maxed out, the load reduced, with a lot of i/o wait on the disk servers.
- On Tuesday all WMS jobs (but one) were stopped - finally the network
dropped from the 1.4GB/s maximum to ~400MB/s (this was with 200 panda
jobs). The ramp-up of panda analysis shows the network load climbing
again.
Currently we're at 600 panda analysis jobs and running nicely. A
tail+eyeball of the PBS logs hints that the CPU efficiency is up to
~60%.
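For what it's worth, the per-job share of that network bandwidth - a
crude sketch assuming the jobs split it evenly, which of course they
don't exactly:

# per-job network share from the ganglia numbers above (even split assumed)
def per_job_mb_s(total_mb_s, jobs):
    return total_mb_s / float(jobs)

print("WMS peak, 1250 jobs at 1.4GB/s: %.1f MB/s per job" % per_job_mb_s(1400.0, 1250))
print("panda,     200 jobs at 400MB/s: %.1f MB/s per job" % per_job_mb_s(400.0, 200))
# ~1.1 vs ~2.0 MB/s - at least consistent with the better CPU efficiency
# we see from the stager-based panda jobs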
Many thanks for everyone's input. Feel free to be creative for the rest
of the week to try and learn as much as we can.
Cheers
Graeme
On Tue, Jun 9, 2009 at 17:39, John Bland<[log in to unmask]> wrote:
> Sam Skipsey wrote:
>>
>> 2009/6/9 Ewan MacMahon <[log in to unmask]>:
>>>>
>>>> -----Original Message-----
>>>> From: Testbed Support for GridPP member institutes [mailto:TB-
>>>>
>>>> Gentlepersons,
>>>
>>> <huge snip>
>>>>
>>>> Of course, this would be even more useful if other sites (UK for
>>>> starters) could do something similar, so we could compare data across
>>>> storage and cluster implementations too.
>>>>
>>> It sounds like you're having a similar experience to us, but you're a
>>> bit further ahead; I'd expect that we'll be following shortly behind.
>>>
>>> One thing I don't understand is quite what the difference between the
>>> current batch of WMS jobs and those we've seen in previous hammercloud
>>> tests is - we're seeing completely different usage patterns with the
>>> bottleneck being very definitely the DPM disk servers (and their network
>>> links), whereas before we were being limited by the rate of
>>> authorisations
>>> going through the DPM head node. Is this just the result of the recent
>>> packing together of data into fewer larger files, or something else?
>>>
>>
>> Mostly the former. The ratio of transfer time to processing time is
>> much better with the merged AODs.
>
> Unfortunately the ratio of data processing to shifting data around on LAN or
> disk is much worse as files on WNs no longer fit in rfio buffers or node
> page cache and so we're being limited by LAN bandwidth (rfio) or disk IOPS
> rather than RAM latency (file stager).
>
> The main limit we're seeing at Liverpool (at about 100 rfio connections on
> each server for a max of ~700 connections) is just plain bandwidth (we have
> turned down rfio buffers to 32/64MB to keep RAM usage on pools sensible).
>
> The rfio processes are sitting around so much because we've got 100 rfio
> processes and 350MB/s of bandwidth on a pool, that's only a max of 3.5MB/s
> per process. With these big files that's a drop in the ocean (roughly 12
> rfio connections can saturate one of our 3Gb/s pools), hence efficiencies
> are through the floor.
>
> At the same time we've got local user analysis going on. With these same
> saturated pool nodes they're using file stager, and getting far more useful
> work done.[1] If we're reading all of the file why are we using rfio when
> AFAICT file stager is miles more efficient for that work flow with these
> size files (smaller files too, IIRC) and the available bandwidth at sites?
> Are STEP09 tests using/going to use file stager (maybe our usage is skewed
> due to our software install problems)?
>
> John
>
> [1] rfio and file stager run in parallel on same cluster; file stagers had
> finished before rfio had barely started.
>
> --
> Dr John Bland, Systems Administrator
> Room 220, Oliver Lodge
> Particle Physics Group, University of Liverpool
> Mail: [log in to unmask]
> Tel : 0151 794 2911
> "I canna change the laws of physics, Captain!"
>
--
Dr Graeme Stewart http://www.physics.gla.ac.uk/~graeme/
Department of Physics and Astronomy, University of Glasgow, Scotland
DEATH TO MEETINGS!