Hello,
I have set up an ARC CE 6 with LCMAPS mapping to local user accounts, and have two worker nodes set up under Univa Grid Engine (UGE/SGE) with a gridpp.q queue. I can submit local jobs to gridpp.q on our HPC using the local accounts.
The ARC CE and worker nodes have /cvmfs mounted via autofs.
Would it be possible for you to look over my configuration below for any errors or anything that may be missing, please? Any suggestions are welcome.
In '/etc/lcmaps/lcmaps.db' there is a setting "-gridmapdir /etc/grid-security/gridmapdir", but I do not have this directory, and there was no such directory on our CREAM CE. Is it required?
I am also not sure how to test whether the ARC CE accepts remote GridPP jobs.
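On that last point, a minimal remote-submission check might look like the sketch below. It assumes a UI host with the nordugrid-arc-client tools installed and a valid VOMS proxy; the VO (dteam) and test-job number are placeholders, not values confirmed for this site.

```shell
# Hedged sketch: exercise the CE from a separate UI host.
# Assumes nordugrid-arc-client is installed and the user has a grid
# certificate registered in one of the supported VOs (dteam used here).
arcproxy -S dteam                            # create a VOMS proxy for dteam
arctest -J 2 -c grid-arc-01.hpc.susx.ac.uk   # submit a built-in ARC test job
arcstat -a                                   # poll the status of all jobs
arcget -a                                    # fetch output once jobs finish
```

If `arctest` succeeds end to end, the CE is accepting and running remote jobs for that VO; repeating with proxies for ops and atlas would cover the other mappings.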
My configuration settings are outlined below.
Thank you
Patrick
----------------------------------------------------------------------------------------------------------------------
grid-arc-01.hpc.susx.ac.uk
ARC CE 6 Installed Packages
- nordugrid-arc-arex
- nordugrid-arc-plugins-*
- nordugrid-arc-gridftpd
- nordugrid-arc-infosys-ldap
- bash-completion
- python-argcomplete
- lcmaps
- lcmaps-plugins-basic
- lcmaps-plugins-c-pep
- lcmaps-plugins-tracking-groupid
- lcmaps-plugins-verify-proxy
- lcmaps-plugins-voms
- glexec
----------------------------------------------------------------------------------------------------------------------
ARC CE 6 Firewall Rules:
rule family="ipv4" port port="6445" protocol="tcp" accept
rule family="ipv4" port port="2135" protocol="tcp" accept
rule family="ipv4" port port="2811" protocol="tcp" accept
rule family="ipv4" port port="443" protocol="tcp" accept
rule family="ipv4" port port="9000-9300" protocol="tcp" accept
rule family="ipv4" port port="9000-9300" protocol="udp" accept
----------------------------------------------------------------------------------------------------------------------
ARC CE 6 arc.conf
[common]
hostname = grid-arc-01.hpc.susx.ac.uk
x509_host_key = /etc/grid-security/hostkey.pem
x509_host_cert = /etc/grid-security/hostcert.pem
gridmap = /etc/grid-security/grid-mapfile
# voms = vo_name group role capabilities
[authgroup:dteam]
voms = dteam * * *
[authgroup:ops]
voms = ops * * *
[authgroup:atlas]
voms = atlas * * *
[authgroup:all]
authgroup = dteam ops atlas
[mapping]
# map_with_plugin = authgroup_name timeout plugin [arg1 [arg2 [...]]]
map_with_plugin = all 30 /usr/libexec/arc/arc-lcmaps %D %P liblcmaps.so /usr/lib64 /etc/lcmaps/lcmaps.db withvoms
[lrms]
lrms = sge
sge_root = /cm/shared/apps/sge/current
sge_bin_path = /cm/shared/apps/sge/current/bin
[arex]
#sessiondir=/cm/shared/gridpp/arc/sessiondir
#scratchdir=/var/spool/arc/scratchdir
#shared_filesystem = yes
#norootpower=yes
shared_filesystem = no
loglevel = 5
[arex/jura]
loglevel = INFO
[arex/jura/archiving]
[arex/jura/apel: EGI]
targeturl = https://mq.cro-ngi.hr:6162
topic = /queue/global.accounting.cpu.central
gocdb_name = UKI-SOUTHGRID-SUSX
benchmark_type = HEPSPEC
benchmark_value = 8.74
use_ssl = yes
[arex/ws]
[arex/ws/jobs]
#allowaccess = all
[gridftpd]
loglevel = DEBUG
[gridftpd/jobs]
allowaccess = all
[infosys]
loglevel = INFO
[infosys/ldap]
#bdii_debug_level = INFO
[infosys/nordugrid]
[infosys/glue2]
admindomain_name = UKI-SOUTHGRID-SUSX
[infosys/glue2/ldap]
[infosys/cluster]
advertisedvo = ops
advertisedvo = dteam
advertisedvo = atlas
alias = SouthGrid Susx
hostname = grid-arc-01.hpc.susx.ac.uk
cluster_location = UK-BN19RH
cluster_owner = University_of_Sussex
clustersupport = [log in to unmask]
#nodememory = 6000
#defaultmemory = 2048
nodeaccess = outbound
[queue:gridpp-test]
comment = Queue for GridPP jobs
homogeneity = True
#[queue:atlas]
#comment = Queue for ATLAS jobs
----------------------------------------------------------------------------------------------------------------------
Local user accounts:
atlas_sgm
atlas001 --> atlas050
atlas_prd001 --> atlas_prd050
atlas_pil001 --> atlas_pil050
dteam_sgm
dteam001 --> dteam050
dteam_prd001 --> dteam_prd050
dteam_pil001 --> dteam_pil050
ops_sgm
ops001 --> ops050
ops_prd001 --> ops_prd050
ops_pil001 --> ops_pil050
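(Regarding the gridmapdir question above: when pool-account mapping is used, the directory is conventionally pre-populated with one empty file per pool account; the plugin then hard-links DN-named entries to these files to record leases. A sketch matching the accounts listed here follows — the default path `./gridmapdir` is for illustration only, and in production it would be the path configured in lcmaps.db.)

```shell
# Hypothetical sketch: pre-populate a gridmapdir with one empty lease
# file per pool account, matching the local accounts listed above.
# GRIDMAPDIR defaults to ./gridmapdir purely for illustration.
GRIDMAPDIR=${GRIDMAPDIR:-./gridmapdir}
mkdir -p "$GRIDMAPDIR"
for vo in atlas dteam ops; do
    for i in $(seq -f '%03g' 1 50); do
        touch "$GRIDMAPDIR/${vo}${i}"        # plain pool, e.g. atlas001
        touch "$GRIDMAPDIR/${vo}_prd${i}"    # production pool
        touch "$GRIDMAPDIR/${vo}_pil${i}"    # pilot pool
    done
done
```

The static `*_sgm` accounts are direct (non-pool) mappings, so they need no gridmapdir entries.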
----------------------------------------------------------------------------------------------------------------------
/etc/lcmaps/lcmaps.db:
path = /usr/lib64/lcmaps
verify_proxy = "lcmaps_verify_proxy.mod"
"-certdir /etc/grid-security/certificates"
"--discard_private_key_absence"
"--allow-limited-proxy"
# Only for performing VOMS mappings
# NOTE: NO WHITESPACE ALLOWED AT THE END OF THE LINE!!!
# module definitions
posix_enf = "lcmaps_posix_enf.mod -maxuid 1 -maxpgid 1 -maxsgid 32"
localaccount = "lcmaps_localaccount.mod -gridmapfile /etc/grid-security/grid-mapfile"
poolaccount = "lcmaps_poolaccount.mod -override_inconsistency -gridmapfile /etc/grid-security/grid-mapfile"
"-gridmapdir /etc/grid-security/gridmapdir"
vomslocalgroup = "lcmaps_voms_localgroup.mod -groupmapfile /etc/grid-security/groupmapfile -mapmin 0"
vomslocalaccount = "lcmaps_voms_localaccount.mod -gridmapfile /etc/grid-security/grid-mapfile -use_voms_gid"
vomspoolaccount = "lcmaps_voms_poolaccount.mod -gridmapfile /etc/grid-security/grid-mapfile"
"-gridmapdir /share/gridmapdir -do_not_use_secondary_gids"
proxycheck = "lcmaps_verify_proxy.mod"
"-certdir /etc/grid-security/certificates"
#
# # for gridftp related code
# good = "lcmaps_dummy_good.mod"
#
# policies
withvoms:
proxycheck -> vomslocalgroup
vomslocalgroup -> vomslocalaccount
vomslocalaccount -> posix_enf | vomspoolaccount
vomspoolaccount -> posix_enf
standard:
proxycheck -> localaccount
localaccount -> posix_enf | poolaccount
poolaccount -> posix_enf
----------------------------------------------------------------------------------------------------------------------
/etc/grid-security/grid-mapfile
"/ops/Role=lcgadmin/Capability=NULL" ops_sgm
"/ops/Role=lcgadmin" ops_sgm
"/ops/Role=production/Capability=NULL" .ops_prd
"/ops/Role=production" .ops_prd
"/ops/Role=pilot/Capability=NULL" .ops_pil
"/ops/Role=pilot" .ops_pil
"/ops/Role=NULL/Capability=NULL" .ops
"/ops" .ops
"/dteam/Role=lcgadmin/Capability=NULL" dteam_sgm
"/dteam/Role=lcgadmin" dteam_sgm
"/dteam/Role=production/Capability=NULL" .dteam_prd
"/dteam/Role=production" .dteam_prd
"/dteam/Role=NULL/Capability=NULL" .dteam
"/dteam" .dteam
"/atlas/Role=pilot/Capability=NULL" .atlas_pil
"/atlas/Role=pilot" .atlas_pil
"/atlas/Role=lcgadmin/Capability=NULL" atlas_sgm
"/atlas/Role=lcgadmin" atlas_sgm
"/atlas/Role=production/Capability=NULL" .atlas_prd
"/atlas/Role=production" .atlas_prd
"/atlas/Role=NULL/Capability=NULL" .atlas
"/atlas" .atlas
"/gridpp/Role=NULL/Capability=NULL" .gridpp
"/gridpp" .gridpp
"/gridpp/Role=pilot/Capability=NULL" .gridpp_pil
"/gridpp/Role=pilot" .gridpp_pil
"/gridpp/Role=lcgadmin/Capability=NULL" gridpp_sgm
"/gridpp/Role=lcgadmin" gridpp_sgm
"/gridpp/Role=production/Capability=NULL" .gridpp_prd
"/gridpp/Role=production" .gridpp_prd
----------------------------------------------------------------------------------------------------------------------
/etc/grid-security/groupmapfile
"/ops/Role=lcgadmin/Capability=NULL" ops
"/ops/Role=lcgadmin" ops
"/ops/Role=production/Capability=NULL" ops_prd
"/ops/Role=production" ops_prd
"/ops/Role=pilot/Capability=NULL" ops_pil
"/ops/Role=pilot" ops_pil
"/ops/Role=NULL/Capability=NULL" ops
"/ops" ops
"/dteam/Role=lcgadmin/Capability=NULL" dteam
"/dteam/Role=lcgadmin" dteam
"/dteam/Role=production/Capability=NULL" dteam_prd
"/dteam/Role=production" dteam_prd
"/dteam/Role=NULL/Capability=NULL" dteam
"/dteam" dteam
"/atlas/Role=pilot/Capability=NULL" atlas_pil
"/atlas/Role=pilot" atlas_pil
"/atlas/Role=lcgadmin/Capability=NULL" atlas
"/atlas/Role=lcgadmin" atlas
"/atlas/Role=production/Capability=NULL" atlas_prd
"/atlas/Role=production" atlas_prd
"/atlas/Role=NULL/Capability=NULL" atlas
"/atlas" atlas
"/gridpp/Role=NULL/Capability=NULL" gridpp
"/gridpp" gridpp
"/gridpp/Role=pilot/Capability=NULL" gridpp_pil
"/gridpp/Role=pilot" gridpp_pil
"/gridpp/Role=lcgadmin/Capability=NULL" gridpp
"/gridpp/Role=lcgadmin" gridpp
"/gridpp/Role=production/Capability=NULL" gridpp_prd
"/gridpp/Role=production" gridpp_prd
----------------------------------------------------------------------------------------------------------------------
On UGE HPC:
gridpp.q --> node205 and node206
________________________________________
From: Elena Korolkova [[log in to unmask]]
Sent: 21 November 2019 10:19
To: ATLAS UK Cloud Support ([log in to unmask])
Subject: new ce in Sussex
Morning,
as discussed at the meeting today and for the record
Sussex ce name is
grid-arc-01.hpc.susx.ac.uk
Elena