On 17/01/12 14:33, Ian Collier wrote:
> On 17 Jan 2012, at 14:18, Stephen Burke wrote:
>> Testbed Support for GridPP member institutes [mailto:TB-[log in to unmask]]
>> On Behalf Of Ian Collier said:
>>> My view at this point is that better packaging, documentation and
>>> simplified configuration and a yaim-like tool - along with effort to
>>> support the driving of said yaim-like tool from other config management
>>> systems is probably the best bet.
>> I think one general question of philosophy is whether you want the configuration for different services to be unified. At the moment yaim has a lot of variables in a generic style, and it then writes them into service-specific config files and scripts which vary quite widely in format. If you start de-yaimifying things you could go in two possible directions - either accept that all services are different and configure each one in its own way, or have some global config file replicating (some of) the information currently in yaim variables and get each service to use it. Which of those would be preferred?
> Well, here is your chance to express a view which I'll do my best to represent :)
> I think the other question will be who does the work?
> But a clear statement that, for example,
> 'We want something along the lines of yaim, but better and cleaner and are
> prepared to help' might be a positive way of influencing things.
At QMUL, I am one of the "make all the changes in yaim and keep
rerunning it" crowd, but yes, something better and cleaner would be a
good thing.
One particular issue we see at the moment is distributing the details of
a VO's VOMS servers. Currently we do this via the GridPP wiki, but that
is prone to error and to going out of date - QMUL has certainly had
issues several times, and we aren't the only ones. A better solution
might be a repository providing appropriate RPMs (and .debs, I guess, in
the future), as is done with the CA certificates - that might solve the
problem (though there are no doubt other solutions).
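To make that concrete: a VOMS endpoint is described by a one-line vomses
entry, so packaging would amount to shipping files like the following
(the VO, host, and DN below are made up for illustration):

```
# /etc/vomses/dteam-voms.example.org - hypothetical file shipped by an RPM
# Format: "alias" "host" "port" "server DN" "VO name"
"dteam" "voms.example.org" "15004" "/C=UK/O=Example/CN=voms.example.org" "dteam"
```

A VO changing its VOMS server would then just mean pushing an updated
package, rather than every site editing files by hand from the wiki.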
Also, both CREAM and StoRM have been broken on upgrade (in CREAM's case,
I think it was upgrading Tomcat that broke it) - why should we need to
rerun yaim when we haven't changed any configuration?
A further issue was that I came close to breaking an already-working SGE
installation by running yaim. This isn't really what I expected (there's
now an optional parameter to not reconfigure Grid Engine, but the
default is to reconfigure it).
> (In case it is unclear my view is firmly that the supported method
> should not be puppet, quattor whatever, rather something simpler
> that can be driven by whatever more sophisticated tools sites choose.
Moving towards standard tools is definitely the right way to go. EGI
moving towards standard file locations is a good start. The more
standard the packages can become the better - and the more they use the
standard configuration mechanisms, the better.
If puppet, quattor, or cfengine became the mandated solution, it would
be difficult for sites with a shared cluster (but using a different
configuration management solution) to deploy. That's clearly a bad thing.
> I'd also say that the experience of the Quattor Working Group sharing
> the work of producing configuration templates - they are not just
> 'produced by GRIF' - even if not perfect, provides a good model.)
That makes sense. Encouraging sites to share this sort of thing is a
good thing.
My understanding is that the likes of puppet, Quattor, cfengine etc.
perform three tasks:
1) Deploying config files to client machines.
2) Ensuring that, once the new config has been deployed, it is actually
used (e.g. do you need to restart services?). Ideally a restart should
not result in loss of service.
These can, and probably should, be done by standard config tools - but
as commented elsewhere, we need to know what files should be distributed.
3) Generating the config files, whilst trying to avoid having to
specify the same information in multiple places (otherwise it just gets
out of sync).
That's the grid-specific part of the problem, and one that the
middleware itself needs to help with.
Yaim is moving away from monolithic configuration files. That's good.
IIRC, Debian has found that package A trying to edit the config file of
package B is prone to error. What works much better is package A
dropping a config fragment into a directory owned by package B, and then
getting package B to generate its config from these various fragments
(for example by catting them together and putting the result somewhere
under /var/).
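A minimal sketch of that fragment pattern in shell (the service name and
paths are hypothetical; in real life the regeneration step would live in
package B's postinst or a trigger):

```shell
#!/bin/sh
# Hypothetical example: package B owns /tmp/myservice/conf.d and
# rebuilds its config by concatenating whatever fragments other
# packages have dropped in, in lexical order.
set -e
dir=/tmp/myservice/conf.d
mkdir -p "$dir"
printf 'option_a = 1\n' > "$dir/10-core.conf"   # fragment from package A
printf 'option_b = 2\n' > "$dir/20-addon.conf"  # fragment from package C
# Package B regenerates the combined config from the fragments.
cat "$dir"/*.conf > /tmp/myservice/generated.conf
cat /tmp/myservice/generated.conf
```

The lexical-order naming (10-, 20-, ...) is what lets independent
packages control where their fragment lands in the combined file.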
One use of this is installing applications in the menus of your window
manager - each package drops a config file in an appropriate place,
containing things like an icon, the program name, etc. Each window
manager then contains a script to parse this information and generate
its menus.
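A concrete instance of this is the freedesktop .desktop entry format -
roughly like the following (the application name and paths are made up):

```
# /usr/share/applications/example.desktop - hypothetical package payload
[Desktop Entry]
Type=Application
Name=Example Tool
Exec=example-tool
Icon=example-tool
Categories=Utility;
```

Every desktop environment that follows the spec picks the entry up
without the package knowing anything about any particular menu system.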
Dpkg triggers take this one step further. Note that dpkg triggers are a
configuration-time thing, but you can reconfigure packages at will; I'm
not sure if rpm provides something similar.
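As a rough sketch of the trigger mechanism (the service name and paths
are hypothetical): package B declares interest in its fragment
directory, and dpkg then runs B's postinst whenever any other package
installs a file there.

```
# debian/triggers in package B (hypothetical):
#   interest /etc/myservice/conf.d
#
# Corresponding excerpt from package B's postinst, run both on normal
# configuration and when the trigger fires:
case "$1" in
  configure|triggered)
    # Regenerate the combined config from the fragments.
    cat /etc/myservice/conf.d/*.conf > /var/lib/myservice/generated.conf
    ;;
esac
```

The point is that package A never touches B's config file directly;
it only drops a fragment, and B's own maintainer script does the rest.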
This puts the emphasis on the packages to get the configuration right.
In practice, we already have some of this, as there are yaim modules for
StoRM, CREAM, etc.
The long-term ideal should be that you can have a truck arrive from a
vendor, cable everything up, stick a DVD in the drive of one machine,
and it will pretty much configure itself - and configure monitoring as
well.