Dear all,

I'm happy to announce that the next official version of GridPP's dCache dependency RPMs is now released. The RPMs come with some scripts I've found useful while playing with dCache and a little extra documentation.

The RPMs are *not* meant to be a long-term solution for deploying dCache! The main reason for creating them was to speed up my personal dCache experiments: I was looking for a way to minimise the number of packages (and therefore possible clashes) which *really* need to be installed to get a dCache instance up and running as quickly as possible. I also needed a quick way to wipe dCache from the system and reinstall it without reinstalling the entire machine. Last but not least, I needed a quick way of testing GridPP's yaim dCache configuration patches.

The RPMs are meant
~~~~~~~~~~~~~~~~~~
- to keep the number of RPMs necessary for dCache experiments to a minimum (however contradictory this might sound)
- as a cutting-edge, trimmed-down fork of the yaim dCache configuration with extra configuration features (until they [hopefully] make it into yaim itself)
- to be yaim's dCache friend and dCache friend only (as opposed to being everybody's friend); this means, amongst other things, that the settings from your yaim site-info.def are used
- for testing our patches to the yaim dCache configuration scripts
- to provide some extra pre-installation checks (a stop-gap solution before they are yaimified)
- to provide some extra scripts for:
  + fast wipe-out of dCache from the system
  + simple functionality testing
  + adding VOs
  + recovery from corrupted PNFS tags
  + (future work) data migration, ...
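Since the RPMs take their settings from your yaim site-info.def, here is a rough sketch of what the dCache-related part of such a file might look like. The variable names below are my assumption from memory and vary between yaim releases, so please check the site-info.def template shipped with your yaim version rather than copying these verbatim:

```shell
# Hedged sketch of dCache-related site-info.def settings -- variable
# names here are ASSUMPTIONS and may differ in your yaim release.
DCACHE_ADMIN="dcache-admin.example.ac.uk"   # hypothetical admin node hostname
DCACHE_POOLS="pool1.example.ac.uk:/pool1 pool2.example.ac.uk:/pool2"
RESET_DCACHE_CONFIGURATION="no"             # "yes" regenerates the config from scratch
```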
The RPMs are not meant to
~~~~~~~~~~~~~~~~~~~~~~~~~
- be a yaim replacement (dyaim)
- do anything not stated above; e.g. they don't configure BDII and never will, since they are simply not meant to do that --- please use yaim for this

You should be aware that, while I did my best to make the RPMs as complete and bug-free as possible, they have only been tested here at RAL and may not work at all in your particular environment. If this is the case, you are encouraged to send bug reports directly to me (not to the yaim maintainers!) and I'll try to make the RPMs more generic, even though this is not their main aim. Bug reports for the yaim patches should be sent to the patch author directly.

The yaim patches are not earth-shattering at the moment, as most of my time was spent developing a survivable infrastructure around them. They should improve very soon:

Current yaim patches planned (comments welcome)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Refactor the dCache-related yaim functions in the least intrusive manner possible to make them more robust (reports of current yaim installation errors welcome)
- Improve dCache security (pnfs mounts from pool nodes only / from the pool node network segment? --- suggestions welcome)
- Is it possible to run dCache (or at least the GridFTP server) as a non-root user?
- Support multiple pools on one machine?
- Things I forgot to mention :)

Current RPM improvements planned
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- A script to make use of the GridFTP transfer log information (gnuplot?)
- Data migration scripts (someone at CERN has reportedly already prepared something; if you know more, could I please have the scripts?)

To conclude, I need to reiterate that while you are more than welcome to try our RPMs, the yaim installation method combined with our current yaim patches (should you wish to try them) is *the* recommended method for installing dCache.
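To give an idea of what the planned GridFTP transfer log script might do, here is a minimal sketch that aggregates transferred bytes per hour into columns gnuplot can plot directly. The log format assumed below (timestamp followed by a byte count) is entirely hypothetical --- the real dCache GridFTP log layout differs, so the parser would need adapting:

```python
# Hypothetical sketch: sum transfer volume per hour from a GridFTP-style
# transfer log. The "date time bytes" line format is an ASSUMPTION, not
# the actual dCache log layout -- adapt the parsing to your logs.
from collections import defaultdict

def volume_per_hour(lines):
    """Sum transferred bytes into 'YYYY-MM-DD HH' buckets, sorted by hour."""
    buckets = defaultdict(int)
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        date, time, nbytes = parts[0], parts[1], parts[2]
        buckets["%s %s" % (date, time[:2])] += int(nbytes)
    return sorted(buckets.items())

if __name__ == "__main__":
    sample = [
        "2005-06-01 10:15:02 1048576",
        "2005-06-01 10:47:10 524288",
        "2005-06-01 11:03:55 2097152",
    ]
    for hour, total in volume_per_hour(sample):
        # tab-separated output, suitable for: plot 'volume.dat' using 2
        print("%s\t%d" % (hour, total))
```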
Owen Synge has already contacted the yaim maintainers and we have CVS access to their yaim repository, which should hopefully speed up the uptake of our patches. At this point, I'd like to encourage you to send in your own dCache yaim configuration script patches, in the spirit of ``the proven power of many ;)''.

The yaim patches, together with instructions on how to use my dependency RPMs, can be found at http://storage.esc.rl.ac.uk/ under the dCache menu.

Thanks, good luck and regards.

-- Jiri