> I only updated the voms server on our test DPM
DPM security is different from batch-system security, although the two are
related and share a lot. In the batch system, security now involves a central
ARGUS server, which is not used by storage (it should be, but the
software has not been fixed up). DPM still decides on its own and does
not consult ARGUS.
> there we would need to create LSST as a newly supported VO.
At the risk of totally overloading you, here's the briefest way I know to
add a new VO. I think it's mostly complete, but it was done in a hurry and,
again, there will be site-specific concerns. Good luck, be careful, take
it easy, test it first where you can, and do things slowly. We all
started exactly where you are today, so don't worry.
Cheers,
Steve
-------------------
To start a new VO at a CREAM/TORQUE site, the general steps are as follows,
for all servers needing security (i.e. not the BDII or APEL):
*) add records to /opt/glite/yaim/etc/users.conf using new, unique names
and ID numbers. Typical banks of accounts are set up for vo, prdvo, sgmvo
and pilvo (for example lsst, prdlst, sgmlst, pillst); see the sample
records just below.
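If the users.conf format isn't familiar, each record is
UID:LOGIN:GIDs:GROUPs:VO:FLAG: and the lsst bank would look something like
this (the UIDs and GIDs here are made up -- pick numbers that are free at
your site, and repeat for as many pool accounts as you want):
61001:lsst001:6100:lsst:lsst::
61002:lsst002:6100:lsst:lsst::
61101:prdlst001:6101,6100:lsstprd,lsst:lsst:prd:
61201:sgmlst001:6102,6100:lsstsgm,lsst:lsst:sgm:
61301:pillst001:6103,6100:lsstpil,lsst:lsst:pilot: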
*) add records to /opt/glite/yaim/etc/groups.conf.gen using new, unique
names. Typical records would be, e.g.:
"/lsst/sgm":::sgm:
"/lsst/lcgprod":::prd:
"/lsst/ROLE=lcgadmin":::sgm:
"/lsst/ROLE=production":::prd:
"/lsst/ROLE=pilot":::pilot:
"/lsst"::::
"/lsst/*"::::
*) Add records to site-info.def. This (at Liverpool, which has only one
batch queue called "long") would be:
VOS=" ... atlas ... lsst .... "
and
LONG_GROUP_ENABLE="\
atlas /atlas/ROLE=lcgadmin /atlas/ROLE=production /atlas/ROLE=pilot\
... more VOs ...
lsst /lsst/ROLE=lcgadmin /lsst/ROLE=production /lsst/ROLE=pilot"
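One thing I haven't shown: YAIM also needs the VO's own details (VOMS
servers and so on). In the YAIM versions I've used, those go in a
vo.d/lsst file next to site-info.def, along these lines -- the hostnames,
ports and DNs below are placeholders, take the real values from the VO's
ID card:
VOMS_SERVERS="'vomss://voms.example.org:8443/voms/lsst?/lsst'"
VOMSES="'lsst voms.example.org 15003 /DC=org/DC=example/CN=voms.example.org lsst'"
VOMS_CA_DN="'/DC=org/DC=example/CN=Example CA'"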
*) Add records to the ARGUS policy for the new VO, e.g.
... for each resource (ce, dpm head) ...
resource "http://ph.liv.ac.uk/hepgrid6" {
obligation "http://glite.org/xacml/obligation/local-environment-map" {}
action ".*" {
rule permit { vo = "atlas" }
... more VOs ...
rule permit { vo = "lsst" }
}
}
... for the worker nodes ...
resource "http://authz-interop.org/xacml/resource/resource-type/wn" {
obligation "http://glite.org/xacml/obligation/local-environment-map" {}
action "http://glite.org/xacml/action/execute" {
rule permit {pfqan = "/atlas" }
rule permit {pfqan = "/atlas/Role=lcgadmin" }
rule permit {pfqan = "/atlas/Role=production" }
rule permit {pfqan = "/atlas/Role=pilot" }
... more VOs ...
rule permit {pfqan = "/lsst" }
rule permit {pfqan = "/lsst/Role=lcgadmin" }
rule permit {pfqan = "/lsst/Role=production" }
rule permit {pfqan = "/lsst/Role=pilot" }
}
}
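To load the new policy, I go through pap-admin and then poke the PDP and
PEP daemons; roughly this (double-check against the Argus docs for your
version, and against however you normally maintain your policy file):
pap-admin add-policies-from-file new-vo-policy.txt
pdpctl reloadPolicy
pepdctl clearResponseCache
The reload and cache-clear matter -- without them the old decisions hang
around for a while.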
*) Add records to the qmgr in TORQUE, e.g. start qmgr and type this (or
read it in from a script):
set queue long acl_groups += lsst
set queue long acl_groups += lsstprd
set queue long acl_groups += lsstsgm
set queue long acl_groups += lsstpil
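The same thing works non-interactively from the shell if you're scripting
it, e.g.:
qmgr -c 'set queue long acl_groups += lsst'
and you can check the result afterwards with:
qmgr -c 'print queue long'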
*) Add records to maui.cfg, e.g.
GROUPCFG[lsst] FSTARGET=4+ PRIORITY=2
GROUPCFG[lsstprd] FSTARGET=4+ PRIORITY=2
GROUPCFG[lsstsgm] FSTARGET=1 PRIORITY=500
GROUPCFG[lsstpil] FSTARGET=4+ PRIORITY=2
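Maui only reads maui.cfg at start-up, so restart it after the edit
(assuming the usual init script):
/etc/init.d/maui restart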
Once all that's done, roll it out everywhere. Re-yaim most machines,
reload the ARGUS policy and test.
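For reference, the re-yaim step on, say, the CE is the usual sort of
thing -- the node types depend on what each box runs, so reuse whatever
you configured it with in the first place:
/opt/glite/yaim/bin/yaim -c -s site-info.def -n creamCE -n TORQUE_utils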
I can't remember anything else I did. It might be good to do this on
your test system first, if you have one, and use dteam to send things to it.
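A quick end-to-end check with dteam would be something like this (the CE
endpoint and queue here are only illustrative -- use your own, and any
trivial JDL will do):
voms-proxy-init --voms dteam
glite-ce-job-submit -a -r hepgrid6.ph.liv.ac.uk:8443/cream-pbs-long test.jdl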
Cheers,
Steve
--
Steve Jones                        [log in to unmask]
Grid System Administrator          office: 220
High Energy Physics Division       tel (int): 43396
Oliver Lodge Laboratory            tel (ext): +44 (0)151 794 3396
University of Liverpool            http://www.liv.ac.uk/physics/hep/