Hi,
On 14 November 2013 14:38, Alastair Dewhurst <[log in to unmask]> wrote:
> Hi
>
> I was unable to answer the further questions at the ATLAS UK meeting earlier.
>
> I spoke to Roger:
> - The official request is staying at 2GB per job slot; however, this is being
> changed to a minimum requirement. It will not be increased, because some
> sites would then not be able to declare all their capacity, and this would
> cause political problems. A line in the VO Card will be added to say that
> the recommended amount will be either 3 or 4GB per job slot. This hasn't
> been decided yet, although it is leaning towards 4. To be clear, Tier 2s
> will need to run jobs that use up to 4GB of memory (as the Tier 1 does
> currently). I don't think there is a formal roadmap.
>
Okay. So, the obvious and (I like to think) logical response to having
a large amount of legacy resource which no longer matches your complete
job profile is to require that *new* purchases meet a *new* requirement,
while allowing the jobs that still fit in the legacy resources to keep
running there.
It isn't clear to me (and I suspect it's not clear to anyone below the
more exalted positions in ATLAS) why the ATLAS position isn't "please
buy new hardware with 4GB/core", with the 2GB/core estate then used
for the jobs that don't have such a high memory requirement.
That would guarantee the existence of resources that big jobs can run
on, and would not render older sites unusable by guideline. Obviously, a
technical solution to ensure that 4GB jobs only arrive on 4GB resources
would be needed, but I believe this is already possible via the
per-queue resource limits that can be set (a rough sketch of the
matching logic is below).
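To be concrete about the sort of routing I mean (purely an illustrative
Python sketch with made-up queue names and limits, not the configuration
syntax of any real batch system):

    # Hypothetical per-queue memory limits, in MB. In practice these would
    # live in the batch system / CE configuration, not in Python.
    QUEUE_MEM_LIMIT_MB = {
        "atlas_legacy_2gb": 2048,   # existing 2GB/core estate
        "atlas_new_4gb": 4096,      # new purchases at 4GB/core
    }

    def eligible_queues(job_mem_mb):
        """Return the queues whose per-slot limit can accommodate this job."""
        return [queue for queue, limit in QUEUE_MEM_LIMIT_MB.items()
                if job_mem_mb <= limit]

    print(eligible_queues(1800))   # ['atlas_legacy_2gb', 'atlas_new_4gb']
    print(eligible_queues(3500))   # ['atlas_new_4gb']

Small jobs stay eligible everywhere; only the 4GB jobs are confined to the
new kit, which is the point.
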
The woolly, milquetoast position that "4GB/core is recommended" seems
likely, given the general psychology of sites, to result in either everyone
buying 4GB/core [because no-one wants to lose out on jobs], or no-one
buying 4GB/core [because everyone wants as many slots as possible].
It certainly isn't going to produce a predictable response, which I would
have thought was the most important thing for ATLAS to guarantee.
(I should be clear that my concern re: communication and policy is not
with you or Alessandra, Alastair, but with more rarefied locations
within the VO. That we have any idea at all what ATLAS might possibly
believe that it wants is entirely due to you two.)
Sam
> - The reason for the slightly strange requests vs expected amounts is mostly
> political. The requests have to follow data taking, but the calculation has
> been done assuming sites get a constant amount of money each year.
> Therefore the requests stay the same for now as we aren't taking data, and
> then catch up with what ATLAS expects sites to have during the data taking
> runs.
>
> - Roger disagrees with me about the impact of HPC. He doesn't think we will
> get as many resources as others are hoping for. At some point ATLAS will be
> expected to pay!
>
> Alastair
>
>
> On 13 Nov 2013, at 17:34, Alessandra Forti <[log in to unmask]> wrote:
>
> Please act as if I hadn't replied two hours ago.
>
> best
> alessandra
>
> On 13/11/2013 17:21, Mark Mitchell wrote:
>
> Hi Alastair,
> thanks for this. One question I have: should sites (Tier 2s) also allow
> for 4GB job payloads when provisioning RAM?
> What impact would this have on the site job profile, and is there a roadmap
> for increases in RAM requirements within ATLAS?
> regards,
> Mark
>
>
> On 12 Nov 2013, at 16:15, Alastair Dewhurst <[log in to unmask]>
> wrote:
>
> Hi
>
> I am sorry for not being more prepared earlier for the 11am OPs meeting. I
> have given links to the various sources I have used. The ATLAS links are
> unfortunately protected (for legitimate reasons) and I can't really send
> round copies to everyone. However, if you really want to read up, there is
> probably an ATLAS user at your site who you can ask.
>
> On page 15 in [1] there is a plot showing the ratio of storage compared to
> CPU seconds provided to ATLAS for all sites. The ratio is surprisingly stable
> across the vast majority of sites. This ratio is 7 HepSpec06 for each TB of
> storage provided, or approximately 1 job slot per TB of storage.
>
> In [2] Borut goes into more detail about ATLAS future plans:
> "Our resource planning is based upon the physics programme that can be
> accomplished within achievable pledged resources, corresponding to a ‘flat’
> spending budget, while we hope that our centres and funding agencies will
> continue to provide ATLAS with the invaluable resources beyond those pledged
> that will allow us to accomplish an optimal research programme and physics
> productivity"
>
> The model then assumes that, with a flat budget:
> - CPU will increase by a factor of 1.2 per year
> - Disk space will increase by a factor of 1.15 per year
> so over time, the HEPSpec to storage ratio will grow.
>
> What confuses me, and I can only assume this has been done for political
> reasons, is that the official request lags behind this expected growth for
> the next 2 years but then catches up by the time we reach 2017. For
> example, in 2015 the request for Tier 2s appears to be 55PB, which isn't much
> different from now, even though ATLAS are expecting 65PB to be available. By
> 2017, however, the request is 98PB, which is ~1.15^4 times the 2013 figure.
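>
> To put rough numbers on this (a back-of-the-envelope illustration only, in
> Python, taking the ~7 HepSpec06 per TB ratio from [1] as the 2013 baseline):
>
>     ratio_2013 = 7.0                     # HepSpec06 per TB of disk
>     for n in range(1, 5):
>         # flat budget: CPU grows 1.2x per year, disk 1.15x per year
>         print(2013 + n, round(ratio_2013 * (1.2 / 1.15) ** n, 2))
>     # Tier 2 disk: the 98 PB requested for 2017 is just the current ~55 PB
>     # grown at 15% per year, since 98 / 1.15^4 is roughly 56.
>     print(round(98 / 1.15 ** 4, 1))      # 56.0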
>
> On slide 8 in [2] it also mentions that ~15% of CPU resources were spent on
> MC generation and this could be moved to opportunistic HPC resources. Most
> of this 15% would have been done at Tier 2s as it doesn't require any
> particular input files. There is no particular timeline for when this will
> happen, although there was a talk at the ATLAS Weekly meeting today [5], so it
> is a very active area. I would therefore recommend keeping the ratio of
> disk to CPU roughly the same as it is now.
>
> In terms of memory requirements per job, there is still an aim to keep jobs
> to 2GB per slot. However, for every improvement made there are another two
> reasons to increase the memory footprint. I would recommend what Martin
> Bly decided for the Tier 1, which was 4GB per slot. Also, while not wishing
> to comment on LHCb's plans, they do occasionally need to use 6GB per job for
> their problematic workflows at RAL, so for sites planning on hosting LHCb
> data, you have been warned!
>
> I have included two other links. Eric Lancon's talk [3] contains pretty much
> the same as Borut's two talks, but in a more condensed form. If you want
> lots of detail, [4] contains the draft computing model for Run 2 in 153
> pages! I believe this is not restricted to just ATLAS.
>
> I hope this helps. I will ask Roger Jones for his comments as well.
>
> Alastair
>
>
> [1] Borut, WLCG workshop:
> https://indico.cern.ch/getFile.py/access?contribId=12&sessionId=1&resId=0&materialId=slides&confId=251191
>
> [2] Borut, physics coordination:
> https://indico.cern.ch/getFile.py/access?contribId=3&resId=0&materialId=slides&confId=270627
>
> [3] Eric Lancon, computing model for Run 2
> https://indico.cern.ch/getFile.py/access?contribId=46&sessionId=11&resId=0&materialId=slides&confId=250727
>
> [4] WLCG TDR:
> https://indico.cern.ch/getFile.py/access?resId=1&materialId=0&confId=212501
>
> [5] ATLAS Weekly on HPC:
> https://indico.cern.ch/getFile.py/access?contribId=1&resId=1&materialId=slides&confId=282963
>
>
> --------------------------------------------
> Mark Mitchell,
> ScotGrid Technical Co-ordinator,
> Rm 427b,
> Kelvin Building,
> School of Physics and Astronomy,
> University of Glasgow,
> G12 8QQ, UK
> Telephone: +44-141-330 6439
> E Mail: [log in to unmask]
>
>
>
> --
> Facts aren't facts if they come from the wrong people. (Paul Krugman)
>
>