Heyup,
I was asked to write up my thoughts on this yesterday, as a long-term
minimalist yaim user. As time is too short for a coherent blog post or
wiki entry, I'm afraid I'm treating you to a (hopefully coherent enough) mail.
>
> Looking at your list: Taking out log files, rpmdb files, vomses and
> grid-security files, that leaves 6 conf files, 7 environment setting
> files, a couple of cron jobs and log rotate jobs, and the fetch-crl
> subsystem. If you look at it like that, it's not too bad.
Steve sums it up well. However, I suspect you ran this on a node with
the users already set up, as I would have expected /etc/passwd &
groups to be in that list. I'm also surprised no batch system configs
were touched (though my eyes could have failed me).
The most useful thing I've found yaim for is setting up users &
groups (which you can get it to do standalone, using yaim's
run-function option), and even then there are ways to copy that (our
shared cluster uses shared passwd & group files, and rolls out home
directories using a home-cooked RPM).
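For anyone wanting to copy that user setup by hand, here's a dry-run sketch. The users.conf field order (UID:LOGIN:GID:GROUP:VO:FLAG:) is from memory, so check it against your yaim docs; the pool account names are made up. The commands are printed rather than executed, so it's safe to paste:

```shell
#!/bin/sh
# Sketch of replicating yaim's config_users by hand.
# users.conf format assumed: UID:LOGIN:GID:GROUP:VO:FLAG:
cat > users.conf <<'EOF'
20001:atlas001:20000:atlas:atlas::
20002:atlas002:20000:atlas:atlas::
EOF
# One groupadd per distinct group...
awk -F: '{ print "groupadd -g " $3 " " $4 }' users.conf | sort -u
# ...and one useradd per pool account (printed, not run).
awk -F: '{ print "useradd -u " $1 " -g " $4 " " $2 }' users.conf
```

Pipe the output through sh once you're happy with it.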
Just about everything else is a flat file that can be copied during
install, or rolled out with a cluster manager system (or if you're
brave, the parallel ssh client of your choice).
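The brave-parallel-ssh route can be as simple as a loop; the node names and target path below are placeholders, and the echo makes it a dry-run (drop it, or switch to pdsh/pssh, when you mean it):

```shell
#!/bin/sh
# Sketch: fan a flat config file out to every node with plain scp.
# NODES is a placeholder list; in practice you'd generate it from
# your batch system's node list.
NODES="wn001 wn002 wn003"
for n in $NODES; do
    echo scp /etc/profile.d/grid-env.sh "root@$n:/etc/profile.d/"
done
```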
>
> PS: re: grid security: VomsSnooper (which is now an RPM) can make your LSC
> files directly from the XML. It also has an (experimental!) feature to
> make the groups.conf file that yaim uses when setting up users.
Also in the coming age of argus it could well mean that you'll be
mounting chunks of /etc/grid-security anyway from a central point.
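(For anyone unfamiliar: an .lsc file under /etc/grid-security/vomsdir/&lt;vo&gt;/ is just two lines, the VOMS server's certificate DN followed by its CA's DN. The DNs below are purely illustrative, not real ones:)

```
/DC=org/DC=example/OU=computers/CN=voms.example.org
/DC=org/DC=example/CN=Example Trusted Certification Authority
```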
To summarise (and somewhat repeat Steve's points & my own), here's a
thorough but handwavey list of what we account for at Lancaster:
- Users & Groups
Run the yaim function on the torque cluster, or our own "novel" solution
on the shared cluster.
- Batch system client
Flat files copied in post-install for the torque cluster (the pbs_server
config, prologue & epilogue scripts). LSF is more complicated (and yaim
doesn't do LSF).
- /etc/grid-security & voms
This is mounted from the tarball server, and looked after (i.e.
fetch-crl, account cleanup) from a single point (the CE, the only node
with write access to the NFS mount). Hopefully one day soon I'll have
the vomsdir populated by VomsSnooper.
- Environment stuff
These are linked from the NFS area into /etc/profile.d/; the originals
were created by yaim long ago. We also have a local lancsenv.sh, but
most of the variables in there are redundant or very site-specific
(GLOBUS_TCP_PORT_RANGE, DPNS_HOST, ATLAS_RECOVERDIR).
The really important (and fiddly) things set here are the VO-specific
variables - such as the SW_DIRs, and the PATH & LD_LIBRARY_PATH
entries. Oh, and don't forget the lists of top-level BDIIs. After the
batch system setup this is, IMO, the most critical part to get jobs
working.
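For illustration, a minimal profile.d fragment covering the above might look like this. The paths and BDII hostnames are site-specific assumptions (lifted partly from our own setup), not canonical values:

```shell
# /etc/profile.d/grid-env.sh -- illustrative values only; SW_DIR paths
# and BDII hosts vary per site/VO, so treat these as placeholders.
export VO_ATLAS_SW_DIR=/opt/exp_soft/atlas
# Comma-separated list of top-level BDIIs (hosts here are examples).
export LCG_GFAL_INFOSYS=topbdii.example.ac.uk:2170,lcg-bdii.example.org:2170
export GLOBUS_TCP_PORT_RANGE="20000,25000"
export PATH=$PATH:/opt/glite/bin
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/glite/lib
```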
I'm not sure what yaim was doing to /etc/sysconfig/globus & edg; it
would be interesting to find out (if anything).
I hope that's somewhat helpful. My advice would be to try yaim on a
completely clean host (well, with the emi-wn RPMs installed, so a
little dirty), with it set to configure the batch system clients as
well.
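That experiment, as a command line - the node types (WN, TORQUE_client) and the yaim path follow EMI-era convention, so verify them against `yaim -h` on your install. It's echoed so it's safe to paste; remove the echo when you mean it:

```shell
#!/bin/sh
# Configure a worker node plus torque client in one yaim pass
# (node types and path are assumptions; check your yaim version).
echo /opt/glite/yaim/bin/yaim -c -s /root/site-info.def \
     -n WN -n TORQUE_client
```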
Cheers,
Matt