TB-SUPPORT Archives, June 2009 (TB-SUPPORT@JISCMAIL.AC.UK)
Subject: Re: UKI-SCOTGRID-GLASGOW & STEP09 - testing the limits for Panda and WMS analysis jobs.
From: John Bland <[log in to unmask]>
Reply-To: Testbed Support for GridPP member institutes <[log in to unmask]>
Date: Tue, 9 Jun 2009 23:44:34 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (197 lines)

Graeme Stewart wrote:
> Addressing some of the points on the thread:
> 2. Like Liverpool with file stager, we found the rfcp stager in panda
> could make significant headway even when the network was maxed out by
> apparently useless i/o and network traffic from jobs accessing via
> rfio (my speculation is that sensible "streaming" access is probably
> preferred by the RAID card so it gets a better throughput, even under
> load).

RFIO certainly hits a limit below the theoretical bandwidth maximum on 
the nodes, and file stagers mop up that differential. Our RAID is tuned 
to do relatively big read-aheads which will get good throughput for a 
file stager when it eventually gets through.
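
For anyone wanting to reproduce the effect locally, a rough sketch along 
these lines shows it: plain Python file reads on a scratch file, nothing 
rfio or DPM specific, and all the sizes are placeholders. It just times 
streaming large blocks against scattered small reads.

import os, random, time

PATH = "scratch.dat"              # placeholder scratch file on the array under test
SIZE = 512 * 1024 * 1024          # 512MB test file (placeholder size)
CHUNK = 8 * 1024 * 1024

# create the test file once
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        for _ in range(SIZE // CHUNK):
            f.write(os.urandom(CHUNK))

def sequential_read(block=CHUNK):
    # stream the whole file with large reads, as a stager-style copy would
    start = time.time()
    with open(PATH, "rb") as f:
        while f.read(block):
            pass
    return SIZE / (time.time() - start) / 1e6        # MB/s

def random_read(block=128 * 1024, n=2000):
    # scattered small reads, loosely mimicking rfio access with no read-ahead
    start = time.time()
    with open(PATH, "rb") as f:
        for _ in range(n):
            f.seek(random.randrange(0, SIZE - block))
            f.read(block)
    return n * block / (time.time() - start) / 1e6   # MB/s

print("sequential: %.1f MB/s" % sequential_read())
print("random    : %.1f MB/s" % random_read())

On a real array you'd want a file much bigger than RAM, or to drop the 
page cache between runs, otherwise the second pass just measures cache.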

> 4. File stager should be more analogous to the way that panda accesses
> the data - copy to the worker node. However, it does not seem to work
> much better for Glasgow in the current statistics, but I think this is
> probably skewed by the capping which meant that many jobs ran out of
> proxy time before they could run (obviously this doesn't happen to
> panda jobs). (Our SE hated file stager with small AOD files though, I
> remember, it did something evil we never quite fathomed.)

I don't remember that for Liverpool, other than the usual "srm daemons 
are leathering our headnode" problem.

> 5. As Sam pointed out, the analysis is now running on very large
> merged AOD files - this has completely shifted the load pattern on the
> SE and rendered the previous rfio read ahead tuning useless. We've had
> 128MB, tried 0B but neither really seemed to work well. Maybe we can
> tweak this in the dying days of STEP and get some useful data, but I
> suspect the parameter space is too large and there won't be enough
> time.

When we first tested rfio access it was on large files analogous to the 
merged AODs, and it was on those files that we found BIGGER was better 
(with diminishing returns). It just so happened that when ATLAS started 
using smaller files that conveniently fit in the buffer, efficiency and 
bandwidth usage got even better.

The general findings were:

128MB+ gives the best CPU efficiency
4kB gives the best bandwidth usage (but network latency kills efficiency)
between 128kB and ~16MB things are generally worse than the default (128kB)

I don't think you'll see any of this making much difference on the 
current tests, as everything is so mired in random IO and bandwidth 
saturation that you'll never see it above the noise.

We're sticking at 64MB for now, but I'd say that if you want to use big 
buffers and RAM constraints force you below 32MB, don't bother: anything 
other than the default (128kB) is probably hurting you.
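
To keep those thresholds in one place, here's a trivial Python summary; 
the numbers are just the rough ones quoted in this thread, nothing 
official from DPM or rfio.

def readahead_advice(buf_bytes, ram_limited=False):
    # thresholds are the rough numbers quoted in this thread, nothing official
    kB, MB = 1024, 1024 * 1024
    if ram_limited and buf_bytes < 32 * MB:
        return "RAM-constrained below 32MB: stick with the 128kB default"
    if buf_bytes >= 128 * MB:
        return "best CPU efficiency (watch RAM and page cache pressure)"
    if buf_bytes <= 4 * kB:
        return "best bandwidth usage, but LAN latency kills efficiency"
    if buf_bytes == 128 * kB:
        return "the default"
    if 128 * kB < buf_bytes <= 16 * MB:
        return "generally worse than the 128kB default"
    return "in-between region: test it first (we're running 64MB)"

print(readahead_advice(64 * 1024 * 1024))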

There's a tricky balance between buffer RAM consumption and page cache. 
Bigger buffers are better, but less page cache increases the load on 
the array, and the usage varies with the number of clients. When that 
cache runs out, load skyrockets in an instant as all those processes hit 
bare metal.
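
The arithmetic behind that is simple enough. A back-of-envelope check 
like the one below shows how quickly the page cache disappears; the 16GB 
pool node is an assumption, the ~100 connections and 64MB buffers are 
the numbers mentioned further down the thread.

def pool_memory_budget(total_ram_gb, n_clients, buf_mb, system_gb=2.0):
    # back-of-envelope only: buffer RAM for N clients vs page cache left over
    buffers_gb = n_clients * buf_mb / 1024.0
    page_cache_gb = total_ram_gb - system_gb - buffers_gb
    return buffers_gb, page_cache_gb

# e.g. an assumed 16GB pool node, ~100 rfio clients at a 64MB buffer each
bufs, cache = pool_memory_budget(total_ram_gb=16, n_clients=100, buf_mb=64)
print("buffers: %.1f GB, page cache left: %.1f GB" % (bufs, cache))
if cache <= 0:
    # once the page cache is gone every read hits the array and load spikes
    print("page cache exhausted -> expect a load spike")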

> 6. I can't help but post one plot from ganglia, of the week snapshot:
> 
>   - Giant load spike one week ago when we were running 1250 analysis
> jobs; many disk servers in utter panic (load up to 200).

You were running out of RAM and hitting the disks directly with all that 
random IO; we hit the same thing a few times. Scaling back the buffers 
then would probably have dropped the load way back down again, but job 
efficiency would still have been on the floor due to bandwidth saturation.

>   - Job capping the controlled load, but network continually maxed out
> (I now think this was the rfio:/// access).
>   - Saturday I switched read ahead to 0 bytes - network still maxed
> out, load reduced, a lot of i/o wait on the disk servers.

No buffers means random access hitting the array directly, with lots of 
very small requests (your RAID won't thank you). It drops bandwidth usage 
but hurts efficiency, as you're now dependent on LAN latency for every 
read as well.
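
A crude model makes the latency point: with no read-ahead every request 
pays a LAN round trip, so per-stream throughput is roughly 
request_size / (RTT + request_size/bandwidth). For example, with made-up 
RTT and link figures:

def per_stream_rate(request_bytes, rtt_s, link_mb_s):
    # every request pays one round trip plus the transfer time
    xfer_s = request_bytes / (link_mb_s * 1e6)
    return request_bytes / (rtt_s + xfer_s) / 1e6    # MB/s

# made-up figures: 0.2ms LAN RTT, ~120MB/s (1Gb/s) path to the pool
for size in (4 * 1024, 128 * 1024, 8 * 1024 * 1024):
    print("%8d byte reads -> %5.1f MB/s per stream"
          % (size, per_stream_rate(size, 0.0002, 120)))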

>   - Tuesday all WMS jobs (but 1) were stopped - finally the network
> dropped off the 1.4GB/s maximum to ~400MB/s (this was with 200 panda
> jobs). Ramp up of panda analysis shows the network load climbing up
> again.

My turn:

http://hep.ph.liv.ac.uk/~jbland/dpm-one-week-main-points-liv.png

Any load spikes above ~20 are due to running out of RAM on a few pools. 
2.2GB/s is the maximum we can get from the pools' bonded links. IOWAIT 
comes mostly from pools with diminished amounts of cache. No caps have 
been placed on the number of jobs, although there's a physical limit of 
~760 on the number of worker nodes, and most of the jobs are running on 
slower nodes.

My main conclusion is that rfio can be tweaked, but only to the limits of 
your smallest pool, and at this sort of scale and file size it's all 
pointless, as file stager is just so much more efficient (not that that's 
saying much). If we have file stager for full-file analysis, and rfio or 
similar for single/tagged event reads, we've got something that's 
predictable, efficient and can be tuned accordingly.

Of course, the main reason efficiency is so depressingly low with any 
method so far is that the file access patterns (rfio or local disk) look 
random. If the file can't be cached in RAM (very low latency) in a 
buffer or page cache because it's too big, then it's going to be hitting 
disk/network with random reads (very high latency). Does xrootd handle 
the actual file format and access better? Maybe there's a happy medium 
on file size that's good for both data management and analysis?

John

> Many thanks for everyone's inputs. Feel free to be creative the rest
> of the week to try and learn as much as we can.
> 
> Cheers
> 
> Graeme
> 
> On Tue, Jun 9, 2009 at 17:39, John Bland<[log in to unmask]> wrote:
>> Sam Skipsey wrote:
>>> 2009/6/9 Ewan MacMahon <[log in to unmask]>:
>>>>> -----Original Message-----
>>>>> From: Testbed Support for GridPP member institutes [mailto:TB-
>>>>>
>>>>> Gentlepersons,
>>>> <huge snip>
>>>>> Of course, this would be even more useful if other sites (UK for
>>>>> starters) could do something similar, so we could compare data across
>>>>> storage and cluster implementations too.
>>>>>
>>>> It sounds like you're having a similar experience to us, but you're a
>>>> bit further ahead; I'd expect that we'll be following shortly behind.
>>>>
>>>> One thing I don't understand is quite what the difference between the
>>>> current batch of WMS jobs and those we've seen in previous hammercloud
>>>> tests is - we're seeing completely different usage patterns with the
>>>> bottleneck being very definitely the DPM disk servers (and their network
>>>> links), whereas before we were being limited by the rate of
>>>> authorisations
>>>> going through the DPM head node. Is this just the result of the recent
>>>> packing together of data into fewer larger files, or something else?
>>>>
>>> Mostly the former. The ratio of transfer time to processing time is
>>> much better with the merged AODs.
>> Unfortunately the ratio of data processing to shifting data around on LAN or
>> disk is much worse, as files on WNs no longer fit in rfio buffers or node
>> page cache, and so we're being limited by LAN bandwidth (rfio) or disk IOPS
>> rather than RAM latency (file stager).
>>
>> The main limit we're seeing at Liverpool (at about 100 rfio connections on
>> each server for a max of ~700 connections) is just plain bandwidth (we have
>> turned down rfio buffers to 32/64MB to keep RAM usage on pools sensible).
>>
>> The rfio processes are sitting around so much because we've got 100 rfio
>> processes and 350MB/s of bandwidth on a pool; that's only a max of 3.5MB/s
>> per process. With these big files that's a drop in the ocean (roughly 12
>> rfio connections can saturate one of our 3Gb/s pools), hence efficiencies
>> are through the floor.
>>
>> At the same time we've got local user analysis going on. With these same
>> saturated pool nodes they're using file stager, and getting far more useful
>> work done.[1] If we're reading all of the file why are we using rfio when
>> AFAICT file stager is miles more efficient for that work flow with these
>> size files (smaller files too, IIRC) and the available bandwidth at sites?
>> Are STEP09 tests using/going to use file stager (maybe our usage is skewed
>> due to our software install problems)?
>>
>> John
>>
>> [1] rfio and file stager run in parallel on same cluster; file stagers had
>> finished before rfio had barely started.
>>
>> --
>> Dr John Bland, Systems Administrator
>> Room 220, Oliver Lodge
>> Particle Physics Group, University of Liverpool
>> Mail: [log in to unmask]
>> Tel : 0151 794 2911
>> "I canna change the laws of physics, Captain!"
>>
> 


-- 
Dr John Bland, Systems Administrator
Room 210, Oliver Lodge
Particle Physics Group, University of Liverpool
Mail: [log in to unmask]
Tel : 0151 794 3396
