On 09/04/2012 09:46 PM, David Colling wrote:
> Hi,
>
> Some of you will have noticed that the last session of the next GridPP
> meeting contains a discussion session on "the future". The reason for
> this session is that the experiment computing models and practices are
> evolving. Some new middleware components are coming in and some are
> going out. Funding is finishing for many projects and there are
> "Clouds" on the horizon. For the LHC experiments a lot **may** change
> during LS1 and we need to react appropriately.
>
> So the idea of the session is to identify areas where we should be
> proactive and produce the beginnings of a roadmap in each of these.
> The reason for this email is to help identify the areas in which we
> need such maps, and to start the discussion a little beforehand so
> that people have had a chance to think about the areas in question and
> are not coming to it cold. The reason for having this session at this
> meeting is because it is a forum where we will have the sites, the
> people who work on middleware (support) and (crucially) the
> experiments. The different areas are of course related and interdependent.
>
> So the three areas to come to my mind are:
>
> - Storage - many issues here: DPM support, POSIX access, xroot access,
> etc.
>
> - Workload - job submission, clouds, jobs over remote data (over xroot
> or http), etc.
>
> - Networks - not sure what to include here. Any suggestions?
>
> I have already had an additional suggestion of resource deployment, and
> I am very open to others, or to suggestions of what should be included
> in the areas that I have already suggested.
>
> So thoughts and ideas please...
>
> Best,
> david
I apologise in advance for offering this abstract response!
As far as I can see, we have two basic ways of steering the development
programme.
First, we could discuss problems in the current baseline and propose
changes that solve them, trading off the pros and cons of each
proposal, selecting and prioritizing those that survive, and pushing
them forward for further work. That way we incrementally improve the
baseline (watching out for "brain farts", i.e. features that beguile us
yet are eventually not cost-effective overall). But to use that
approach, we need to know what the problems are. What useful features do
the current storage baselines not provide? Where do the current
job submission frameworks fall short? In what ways can new network kit or
technology bring us benefits? The experiments can help us with these
questions.
In parallel, we can examine “new technologies” (clouds?) and perform a
gap analysis to reject any that cannot be fitted into our baseline or
which offer no tangible advantage over current arrangements.
In any case, we need a “planning function” within GridPP with the
primary task of “Systems Engineering”, i.e. building a consensus with
the stakeholders and creating the road map. We can't do that as a "mob".
Actually, it's very difficult to accomplish, as I'm sure you are aware.
Hope that helps, somehow,
Steve
--
Steve Jones [log in to unmask]
System Administrator office: 220
High Energy Physics Division tel (int): 42334
Oliver Lodge Laboratory tel (ext): +44 (0)151 794 2334
University of Liverpool http://www.liv.ac.uk/physics/hep/