Dear All,
I have typed up my notes from the ops meeting discussion(s) on operational effort optimisation (see the emails at the end of this message for a reminder). No doubt I will have made mistakes reading my scribbled notes, and the summary may spur you to new and greater suggestions (e.g. for sharing/reducing specific services). So, please find the list below and let me know of any corrections, additions or other suggestions/thoughts during Monday morning so that I can forward the input to Maria.
Many thanks,
Jeremy
General
- We currently lack a coordinated core middleware initiative for MW production, with dedicated effort for effective maintenance. This has a downstream impact on sites left to run a ‘service’ like ARGUS, which has been left hanging; problems are discovered/reported/fixed by sysadmins in parallel.
- Unifying the software into common repositories would help.
- More consistent packaging and comprehensive documentation from product teams would help s/w deployment.
- Stop knocking holes in the infrastructure – what we have works!
- Reduce the layers of workarounds which are manpower intensive
- Inefficiencies introduced by many layers (in middleware and in communications) can be reduced; e.g. middleware parameter passing through to batch systems would be useful.
- Improve the error handling behaviour of the middleware to make it easier to debug and resolve problems.
- Improve the test frameworks (e.g. HammerCloud made a big improvement).
- Reduce the barriers to collaboration (at present twiki changes and Vidyo setup require CERN accounts… so information ends up spread across systems). Get rid of the information silos and give all site admins a CERN internal account.
- Consolidate information sources (such as meeting agenda categories, to make it easier to know which meetings are happening/minuted, and monitoring), or better structure the access.
- Explore wider community supported products (e.g. for distributed file systems) rather than inventing something new and having the whole infrastructure involved in debugging it.
Cloud
- In theory a simplification, so easier to manage, but…
- Losing what batch systems do for you shifts the complexity to the central pilot service
- Experiments will have to take over some core activities. Debugging support becomes more difficult.
- May introduce new sources of inefficiency (e.g. in VMs)
- Work hard to have common VM images (the experiments often diverge in their approaches and subsequently add to support needs)
- There is a risk that the project will lose the talent on which it depends for evolution.
Coordination
– Ensure effort is not wasted – cf. the deployment campaign for glexec, which is not used because it adds complexity
– Reduce the layers between the experiment users and sites
– Find ways to get sites working together
– Good training and support processes can improve long term productivity
– Some VOs (LHCb, CMS) don’t let you see the source of problems, which leads to extended support discussions
– Good experiment technical representation and communication improves efficiency
– Giving sysadmins access to VO resources would help (e.g. hypernews)
– CERN accounts for all would help with use of Twiki and Vidyo. Get sysadmins more involved. Single sign-on.
– Better align WLCG operations needs with ROD work
– Seek a common infrastructure between experiments
– Consolidate/be consistent with information systems (e.g. naming, BDII vs expt. vs GOCDB vs…)
On 8 Aug 2014, at 15:59, Jeremy Coles <[log in to unmask]> wrote:
> Dear All,
>
> This is something I will put on our ops agenda for next Tuesday. Please have a think about the topic before then.
>
> Many thanks,
> Jeremy
>
>
>
> Begin forwarded message:
>
>> From: Maria Alandes Pradillo <[log in to unmask]>
>> Subject: Collecting feedback for operational effort optimisation
>> Date: 5 August 2014 12:49:37 BST
>> To: Jeremy Coles <[log in to unmask]>, Massimo Sgaravatto <[log in to unmask]>, "[log in to unmask]" <[log in to unmask]>, Jan Erik Sundermann <[log in to unmask]>, Isidro Gonzalez Caballero <[log in to unmask]>
>> Cc: Alessandra Forti <[log in to unmask]>, José Flix <[log in to unmask]>, Andrea Sciaba <[log in to unmask]>
>>
>> Dear all,
>>
>> It seems you are a site/region representative according to the WLCG Operations twiki. The reason I'm contacting you is because in WLCG Operations we are collecting feedback to prepare some slides for the next Management Board in September where we would like to start discussing how we could optimise the current operations effort in WLCG.
>>
>> We don't want to carry out a detailed survey among all sites since this is not intended to be a report. This is just gathering ideas from some of the sites and the experiments on how they are currently running services and where they think they could save some effort, what could be different or improved to achieve this and what alternatives we have if any.
>>
>> I have started to collect feedback in this twiki: https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOperationCosts
>>
>> Would it be possible for you to suggest which sites from your region would like to give some input for the different services in the twiki? I think a couple of them would be enough and if possible we should target both big and small sites, since I guess smaller sites may have less means, tools and knowledge to automate some of the operations tasks, although big sites have more complexity to manage. In both cases, it would be good to have their input.
>>
>> Thanks very much in advance,
>> Maria
>>
>>
>