On Thu, 30 Jun 2011, Hrachya Astsatryan wrote:
> Dear all,
>
> We are trying to install CREAM, BDII_site and APEL in the same virtual server
> by using KVM.
> We have installed CREAM and APEL on the KVM guest OSs (RHEL 5.5), and BDII in
> the host OS (RHEL 6).
Uhhh! I doubt that one can currently run any real/production gLite
services on RHEL 6 and its clones. EPEL does carry a bdii package
for el6, but it is too old a version and it looks like it is
missing the SELinux bits.
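
To see what you would actually get from EPEL on the el6 host, a quick
comparison (assuming the repository id is simply "epel"):

  rpm -q bdii                        # what is installed now, if anything
  yum --enablerepo=epel info bdii    # what EPEL would install; compare with
                                     # the version in the gLite/EMI repos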
> CREAM and APEL have been configured successfully (see below
It seems to me you can safely install APEL and the site BDII on
a single virtual machine.
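
For such a combined node the usual pattern is a single yaim run listing
both node types. This is only a sketch; the node-type names below are from
memory, so please check them against the documentation of your gLite/EMI
release:

  /opt/glite/yaim/bin/yaim -c -s site-info.def -n glite-APEL -n BDII_site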
> site-info.def), but when we try to configure BDII we receive the following
> error:
> INFO: The default location of the grid-env.(c)sh files will be: /opt/glite/etc/profile.d
> INFO: Sourcing the utilities in /opt/glite/yaim/functions/utils
> INFO: Detecting environment
> WARNING: YAIM was not able to detect the distribution name.
> WARNING: The OS will be set to 'unknown'
> INFO: Executing function: config_gip_site_check
> INFO: Executing function: config_gip_bdii_site_check
> INFO: Executing function: config_info_service_bdii_site_check
> INFO: Executing function: config_bdii_5.1_check
> INFO: Executing function: config_gip_site
> SITE_COUNTRY = Armenia
> INFO: Executing function: config_gip_bdii_site
> INFO: Executing function: config_info_service_bdii_site_setenv
> INFO: Executing function: config_info_service_bdii_site
> INFO: Executing function: config_bdii_5.1
> Stopping BDII: BDII already stopped
> Starting BDII slapd: BDII slapd failed to start [FAILED]
> /usr/sbin/slapd -f /etc/bdii/bdii-slapd.conf -h ldap://0.0.0.0:2170 -u ldap -d 256
> @(#) $OpenLDAP: slapd 2.4.19 (Mar 11 2011 08:31:44) $
> [log in to unmask]:/builddir/build/BUILD/openldap-2.4.19/openldap-2.4.19/build-servers/servers/slapd
> daemon: bind(7) failed errno=13 (Permission denied)
> slapd stopped.
> connections_destroy: nothing to destroy.
> ERROR: Error during the execution of function: config_bdii_5.1
> ERROR: Error during the configuration.Exiting. [FAILED]
> ERROR: One of the functions returned with error without specifying its nature !
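
The telling line is "daemon: bind(7) failed errno=13 (Permission denied)":
slapd is refused the listen on port 2170, which on an el6 box is exactly
what you would see if the SELinux policy does not know about the BDII port
(see my remark above about the missing SELinux bits). A minimal
check/workaround, assuming SELinux really is the cause (semanage comes
from policycoreutils-python):

  getenforce                                   # is SELinux enforcing?
  grep slapd /var/log/audit/audit.log | tail   # any AVC denial for slapd?
  # label the BDII port so slapd may bind to it; ldap_port_t is my guess,
  # check with: semanage port -l | grep ldap
  semanage port -a -t ldap_port_t -p tcp 2170
  # or, for a quick test only, go permissive and re-run yaim
  setenforce 0

If slapd starts after that, labelling the port is the cleaner fix than
leaving SELinux permissive.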
>
>
> Could you, please, help us to solve the problem?
>
>
> Thank you in advance,
> Hrach
>
> ##############################################################################
> # Copyright (c) Members of the EGEE Collaboration. 2004.
> # See http://www.eu-egee.org/partners/ for details on the copyright
> # holders.
> #
> # Licensed under the Apache License, Version 2.0 (the "License");
> # you may not use this file except in compliance with the License.
> # You may obtain a copy of the License at
> #
> # http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS
> # OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> ##############################################################################
> #
> # NAME : site-info.def
> #
> # DESCRIPTION : This is the main configuration file needed to execute the
> # yaim command. It contains a list of the variables needed to
> # configure a service.
> #
> # AUTHORS : [log in to unmask]
> #
> # NOTES :      - site-info.def will contain the list of variables common to
> #                multiple node types. Node type specific variables are
> #                distributed by the corresponding module, although a unique
> #                site-info.def can still be used at configuration time.
> #
> #              - Service specific variables are distributed under
> #                /opt/glite/yaim/examples/siteinfo/services/<node_type_name>
> #                Copy this file under your siteinfo/services directory, or
> #                copy the variables manually into site-info.def.
> #                DPM and LFC variables are not yet distributed in their
> #                corresponding YAIM modules. Find the list of relevant variables in:
> #                - https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#DPM
> #                - https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#LFC
> #
> #              - site-info.pre and site-info.post contain default variables.
> #                When sys admins want to set their own values, they can just
> #                define the variable in site-info.def and that will overwrite
> #                the value in site-info.pre/post.
> #
> #              - VO variables for LCG VOs are currently distributed with
> #                example values. For up to date information on any VO please
> #                check the CIC portal VO ID Card information:
> #                http://cic.in2p3.fr/
> #
> # - For more information on YAIM, please check:
> # https://twiki.cern.ch/twiki/bin/view/EGEE/YAIM
> #
> #              - For a detailed description of site-info.def variables, please check:
> #                https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#site_info_def
> #
> # YAIM MODULE: glite-yaim-core
> #
> ##############################################################################
>
> MPI_SHARED_HOME="yes"
> #MPI_MPICH_ENABLE="yes"
> MPI_MPICH2_ENABLE="yes"
> MPI_OPENMPI_ENABLE="yes"
> # MPI_MPICH_PATH="/opt/mpich-1.2.7p1"
> # MPI_MPICH_VERSION="1.2.7p1"
> # MPI_MPICH2_PATH="/opt/mpich2-1.1.1p1"
> # MPI_MPICH2_VERSION="1.1.1p1"
> # MPI_OPENMPI_PATH="/opt/openmpi-1.4.1"
> # MPI_OPENMPI_VERSION="1.4.1"
> # MPI_OPENMPI_MPIEXEC="/opt/openmpi-1.4.1/bin/mpiexec"
> MPI_MPICH_MPIEXEC="/usr/bin/mpiexec"
> #MPI_MPIEXEC_PATH="/opt/mpiexec-0.83"
>
> # Base installation directory
> INSTALL_ROOT=/opt
>
>
> ###################################
> # General configuration variables #
> ###################################
>
> # List of the batch nodes hostnames and optionally the subcluster ID the
> # WN belongs to. An example file is available in
> # ${INSTALL_ROOT}/glite/yaim/examples/wn-list.conf
> # Change the path according to your site settings.
> WN_LIST=${INSTALL_ROOT}/wn-list.conf
>
> # List of unix users to be created in the service nodes.
> # The format is as follows:
> # UID:LOGIN:GID1,GID2,...:GROUP1,GROUP2,...:VO:FLAG:
> # An example file is available in ${INSTALL_ROOT}/glite/yaim/examples/users.conf
> # Change the path according to your site settings.
> # For more information please check ${INSTALL_ROOT}/glite/yaim/examples/users.conf.README
> USERS_CONF=${INSTALL_ROOT}/users.conf
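> # Purely illustrative entry (hypothetical UID/GID values) for a single
> # dteam pool account, following the format above:
> # 60001:dteam001:6000:dteam:dteam::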
>
> # List of the local accounts which a user should be mapped to.
> # The format is as follows:
> # "VOMS_FQAN":GROUP:GID:FLAG:[VO]
> # An example file is available in ${INSTALL_ROOT}/glite/yaim/examples/groups.conf
> # Change the path according to your site settings.
> # For more information please check ${INSTALL_ROOT}/glite/yaim/examples/groups.conf.README
> # NOTE: comment out this variable if you want to specify a groups.conf per VO
> # under the group.d/ directory.
> GROUPS_CONF=${INSTALL_ROOT}/groups.conf
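> # Purely illustrative entry following the format above, mapping any
> # dteam FQAN to the dteam group (GROUP, GID, FLAG and VO left empty):
> # "/dteam"::::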
>
> # Uncomment this variable if you want to specify a local groups.conf
> # It is similar to GROUPS_CONF but used to specify a separate file
> # where local accounts specific to the site are defined.
> # LOCAL_GROUPS_CONF=my_local_groups.conf
>
> # Uncomment this variable if you are installing a mysql server
> # It is the MySQL admin password.
> MYSQL_PASSWORD=***
>
> # Uncomment this variable if you want to explicitly use pool
> # accounts for special users when generating the grid-mapfile.
> # If not defined, YAIM will decide whether to use special
> # pool accounts or not automatically
> # SPECIAL_POOL_ACCOUNTS=yes or no
>
> # Site domain (added by me)
> MY_DOMAIN=ysu-cluster2.grid.am
>
> CRON_DIR=/etc/cron.d
>
> # Set this if you want a scratch directory for jobs
> EDG_WL_SCRATCH="/scratch"
>
>
> # Output storage directory for the jobs
> OUTPUT_STORAGE=/tmp/jobOutput
>
> # Reasonable default value for GLOBUS_TCP_PORT_RANGE
> GLOBUS_TCP_PORT_RANGE="20000,25000"
>
> # Set this if your WNs have a shared directory for temporary storage
> CE_DATADIR=""
>
>
> ################################
> # Site configuration variables #
> ################################
>
> # Human-readable name of your site
> SITE_NAME=AM-04-YERPHI
> SITE_WEB="http://www.grid.am/"
> SITE_LOC="Yerevan, Armenia"
> SITE_DESC="Yerevan Physics Institute"
> SITE_COUNTRY="Armenia"
> # The contact e-mail of your site.
> # A comma separated list of email addresses.
> SITE_EMAIL="[log in to unmask]"
>
> # It is the position of your site north or south of the equator
> # measured from -90. to 90. with positive values going north and
> # negative values going south.
> SITE_LAT=40.1
>
> # It is the position of the site east or west of Greenwich, England
> # measured from -180. to 180. with positive values going east and
> # negative values going west.
> SITE_LONG=44.31
>
> # Uncomment this variable if your site has an http proxy
> # in order to reduce the load on the CA host
> # SITE_HTTP_PROXY="http-proxy.my.domain"
>
> #########################################
> # ARGUS authorisation framework control #
> #########################################
>
> # Set USE_ARGUS to yes to enable the configuration of ARGUS
> USE_ARGUS=no
>
> # In case ARGUS is to be used the following should be set
> # The ARGUS service PEPD endpoints as a space separated list:
> # ARGUS_PEPD_ENDPOINTS="http://pepd.example.org:8154/authz"
>
> # ARGUS resource identities: The resource ID can be set
> # for the cream CE, WMS and other nodes respectively.
> # If a resource ID is left unset the ARGUS configuration
> # will be skipped on the associated node.
> # CREAM_PEPC_RESOURCEID=urn:mysitename.org:resource:ce
> # WMS_PEPC_RESOURCEID=urn:mysitename.org:resource:wms
> # GENERAL_PEPC_RESOURCEID=urn:mysitename.org:resource:other
>
> ################################
> # User configuration variables #
> ################################
>
> # Uncomment the following variables if you want to create system user
> # accounts under a HOME directory different from /home.
> # Note: It is recommended to use /var/lib/user_name as the HOME directory for
> # system users.
> # EDG_HOME_DIR=/var/lib/edguser
> # EDGINFO_HOME_DIR=/var/lib/edginfo
> # BDII_HOME_DIR=/var/lib/edguser
>
> ##############################
> # CE configuration variables #
> ##############################
>
> # Optional variable to define the path of a shared directory
> # available for application data.
> # Typically a POSIX accessible transient disk space shared
> # between the Worker Nodes. It may be used by MPI applications
> # or to store intermediate files that need further processing by
> # local jobs or as a staging area, especially if the Worker Nodes
> # have no internet connectivity
> # CE_DATADIR=/mypath
>
> # Site domain
> MY_DOMAIN=yerphi-cluster.grid.am
>
> # Hostname of the CE
> CE_HOST=ce.$MY_DOMAIN
> TORQUE_SERVER=$CE_HOST
>
> # added by me
> APEL_MYSQL_HOST=apel.$MY_DOMAIN
>
>
> # added by me
> CREAM_DB_USER=root
> CREAM_DB_PASSWORD=***
>
> ############################
> # SubCluster configuration #
> ############################
>
> # Name of the processor model as defined by the vendor
> # for the Worker Nodes in a SubCluster.
> CE_CPU_MODEL=Xeon
>
> # Name of the processor vendor
> # for the Worker Nodes in a SubCluster
> CE_CPU_VENDOR=Intel
>
> # Processor clock speed expressed in MHz
> # for the Worker Nodes in a SubCluster.
> CE_CPU_SPEED=2000
>
> # For the following variables please check:
> # http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_the_OS_name
> #
> # Operating system name used on the Worker Nodes
> # part of the SubCluster.
> CE_OS="Scientific Linux SL"
>
> # Operating system release used on the Worker Nodes
> # part of the SubCluster.
> CE_OS_RELEASE=5.5
>
> # Operating system version used on the Worker Nodes
> # part of the SubCluster.
> CE_OS_VERSION="SL"
>
> # Platform Type of the WN in the SubCluster
> # Check: http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_my_machine_architecture
> CE_OS_ARCH=x86_64
>
> # Total physical memory of a WN in the SubCluster
> # expressed in Megabytes.
> CE_MINPHYSMEM=8192
>
> # Total virtual memory of a WN in the SubCluster
> # expressed in Megabytes.
> CE_MINVIRTMEM=8192
>
> # Total number of real CPUs/physical chips in
> # the SubCluster, including the nodes part of the
> # SubCluster that are temporarily down or offline.
> CE_PHYSCPU=12
>
> # Total number of cores/hyperthreaded CPUs in
> # the SubCluster, including the nodes part of the
> # SubCluster that are temporarily down or offline.
> CE_LOGCPU=48
>
> # Number of Logical CPUs (cores) of the WN in the
> # SubCluster
> CE_SMPSIZE=8
>
> # Performance index of your fabric in SpecInt 2000
> CE_SI00=2400
>
> # Performance index of your fabric in SpecFloat 2000
> CE_SF00=2200
>
> # Set this variable to either TRUE or FALSE to express
> # the permission for direct outbound connectivity
> # for the WNs in the SubCluster
> CE_OUTBOUNDIP=FALSE
>
> # Set this variable to either TRUE or FALSE to express
> # the permission for inbound connectivity
> # for the WNs in the SubCluster
> CE_INBOUNDIP=FALSE
>
> # Space separated list of software tags supported by the site, e.g.
> # CE_RUNTIMEENV="LCG-2 LCG-2_1_0 LCG-2_1_1 LCG-2_2_0 GLITE-3_0_0 GLITE-3_1_0 R-GMA"
> # CE_RUNTIMEENV="tag1 [tag2 [...]]"
> CE_RUNTIMEENV="GLITE-3_0_0 GLITE-3_1_0 GLITE-3_2_0 MPI-START MPICH MPICH2 OPENMPI"
>
> # For the following variables, please check more detailed information in:
> # https://twiki.cern.ch/twiki/bin/view/LCG/Site-info_configuration_variables#site_info_def
> #
> # The following values must be defined by the sys admin:
> # - CPUScalingReferenceSI00=<referenceCPU-SI00>
> # - Share=<vo-name>:<vo-share> (optional, multiple definitions)
> CE_CAPABILITY="CPUScalingReferenceSI00=2400"
>
>
> # The following values must be defined by the sys admin:
> # - Cores=value
> # - value-HEP-SPEC06 (optional), where value is the CPU power computed
> # using the HEP-SPEC06 benchmark
> CE_OTHERDESCR="Cores=4,Benchmark=9.17-HEP-SPEC06"
>
>
> ########################################
> # Batch server configuration variables #
> ########################################
>
> # Hostname of the Batch server
> # Change this if your batch server is not installed
> # in the same host of the CE
> BATCH_SERVER=$CE_HOST
>
> # Jobmanager specific settings. Please, define:
> # lcgpbs, lcglsf, lcgsge or lcgcondor
> JOB_MANAGER=pbs
>
> # torque, lsf, sge or condor
> CE_BATCH_SYS=torque
> BATCH_LOG_DIR=/var/torque/server_priv/accounting
> BATCH_VERSION=2.3.13-1
>
> ################################
> # APEL configuration variables #
> ################################
>
> # Database password for the APEL DB.
> APEL_DB_PASSWORD=**
>
> ##############################
> # RB configuration variables #
> ##############################
>
> # Hostname of the RB
> RB_HOST=null
>
> ###############################
> # WMS configuration variables #
> ###############################
>
> # Hostname of the WMS
> WMS_HOST=wms.grid.am
>
> ###################################
> # myproxy configuration variables #
> ###################################
>
> # Hostname of the PX
> PX_HOST=myproxy.grid.am
>
> ################################
> # RGMA configuration variables #
> ################################
>
> # Hostname of the RGMA server
> MON_HOST=apel.$MY_DOMAIN
>
> ###################################
> # FTS configuration variables #
> ###################################
>
> # FTS endpoint
> # FTS_SERVER_URL="https://fts.${MY_DOMAIN}:8443/path/glite-data-transfer-fts"
>
> ###############################
> # DPM configuration variables #
> ###############################
>
> # Hostname of the DPM head node
> DPM_HOST="se.$MY_DOMAIN"
>
> ########################
> # SE general variables #
> ########################
>
> # Space separated list of SEs hostnames
> SE_LIST="$DPM_HOST"
>
> # Space separated list of SE hosts from SE_LIST containing
> # the export directory from the Storage Element and the
> # mount directory common to the worker nodes that are part
> # of the Computing Element. If any of the SEs in SE_LIST
> # does not support the mount concept, do not define
> # anything for that SE in this variable. If this is the case
> # for all the SEs in SE_LIST then put the value "none"
> # SE_MOUNT_INFO_LIST="[SE1:export_dir1,mount_dir1 [SE2:export_dir2,mount_dir2 [...]]|none]"
> SE_MOUNT_INFO_LIST="none"
>
> # Variable necessary to configure the Gridview service client on the SEs.
> # It sets the location and filename of the gridftp server logfile on
> # different types of SEs. The gridftp logfile needed by gridview is the
> # netlogger file, which contains info for each transfer (created with the
> # -Z/-log-transfer option of globus-gridftp-server).
> # Ex: DATE=20071206082249.108921 HOST=hostname.cern.ch PROG=globus-gridftp-server
> #     NL.EVNT=FTP_INFO START=20071206082248.831173 USER=atlas102 FILE=/storage/atlas/
> #     BUFFER=0 BLOCK=262144 NBYTES=330 VOLUME=/ STREAMS=1 STRIPES=1 DEST=[127.0.0.1]
> #     TYPE=LIST CODE=226
> # Default locations for DPM: /var/log/dpm-gsiftp/dpm-gsiftp.log
> # and SE_classic: /var/log/globus-gridftp.log
> SE_GRIDFTP_LOGFILE=/var/log/dpm-gsiftp/dpm-gsiftp.log
>
>
> ################################
> # BDII configuration variables #
> ################################
>
> # Hostname of the top level BDII
> BDII_HOST=bdii.grid.am
>
> # Hostname of the site BDII
> SITE_BDII_HOST=bdii.$MY_DOMAIN
>
> # Uncomment this variable if you want to define a list of
> # top level BDIIs to support the automatic failover in the GFAL clients
> # BDII_LIST=my-bdii1.$MY_DOMAIN:port1[,my-bdii2.$MY_DOMAIN:port2[...]]
> BDII_HTTP_URL="http://www.grid.am/bdii_armngi/bdii.conf"
> BDII_FCR="http://"
> BDII_REGIONS="CE DPM MON" # list of the services provided by the site
> BDII_CE_URL="ldap://$CE_HOST:2170/mds-vo-name=resource,o=grid"
> BDII_DPM_URL="ldap://$DPM_HOST:2170/mds-vo-name=resource,o=grid"
> BDII_MON_URL="ldap://$MON_HOST:2170/mds-vo-name=resource,o=grid"
> BDII_SITE_TIMEOUT=120
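> # Not part of the standard file: each resource BDII URL above can be
> # probed by hand once the service is up, e.g. for the CE
> # (needs openldap-clients):
> # ldapsearch -x -h ce.yerphi-cluster.grid.am -p 2170 -b mds-vo-name=resource,o=grid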
>
> SITE_SUPPORT=[log in to unmask]
> SITE_SUPPORT_EMAIL=$SITE_SUPPORT
> SITE_SECURITY=[log in to unmask]
> SITE_SECURITY_EMAIL=$SITE_SECURITY
> SITE_OTHER_EGI_NGI="NGI_ARMGRID"
> SITE_OTHER_GRID="NGI_ARMGRID|EGI"
>
> ##############################
> # VO configuration variables #
> ##############################
> # If you are configuring a DNS-like VO, please check the following URL:
> # https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400#vo_d_directory
>
> # Space separated list of VOs supported by your site
> # VOS="vo1 [vo2 [...]]"
> VOS="alice atlas armgrid.grid.am ops dteam"
>
> # Prefix of the experiment software directory in your CE
> # VO_SW_DIR=exp_soft_dir
> VO_SW_DIR=/opt/exp_soft
>
> # Space separated list of queues configured in your CE
> # QUEUES="q1 [q2 [...]]"
> QUEUES="atlas alice armgrid ops dteam"
>
> # For each queue defined in QUEUES, define a _GROUP_ENABLE variable
> # which is a space separated list of VO names and VOMS FQANs:
> # Ex.: MYQUEUE_GROUP_ENABLE="ops atlas cms /cms/Higgs /cms/ROLE=production"
> # In QUEUE names, replace dots and dashes with underscores:
> # Ex.: QUEUES="my.test-queue"
> # MY_TEST_QUEUE_GROUP_ENABLE="ops atlas"
> # <queue-name>_GROUP_ENABLE="fqan1 [fqan2 [...]]"
> ARMGRID_GROUP_ENABLE="armgrid.grid.am"
> OPS_GROUP_ENABLE="ops"
> DTEAM_GROUP_ENABLE="dteam"
> ATLAS_GROUP_ENABLE="atlas"
> ALICE_GROUP_ENABLE="alice"
>
> # Optional variable to define the default SE used by the VO.
> # Define the SE hostname if you want a specific SE to be the default one.
> # If this variable is not defined, the first SE in SE_LIST will be used
> # as the default one.
> # VO_<vo_name>_DEFAULT_SE=vo-default-se
>
>
> # Optional variable to define a list of LBs used by the VO.
> # Define a space separated list of LB hostnames.
> # If this variable is not defined LB_HOST will be used.
> # VO_<vo_name>_LB_HOSTS="vo-lb1 [vo-lb2 [...]]"
>
> # Optional variable to automatically add wildcards per FQAN
> # in the LCMAPS gridmap file and groupmap file. Set it to 'yes'
> # if you want to add the wildcards in your VO. Do not define it
> # or set it to 'no' if you do not want to configure wildcards in your VO.
> # VO_<vo_name>_MAP_WILDCARDS=no
>
> # Optional variable to define the Myproxy server supported by the VO.
> # Define the Myproxy hostname if you want a specific Myproxy server.
> # If this variable is not defined PX_HOST will be used.
> # VO_<vo_name>_PX_HOST=vo-myproxy
>
> # Optional variable to define a list of RBs used by the VO.
> # Define a space separated list of RB hostnames.
> # If this variable is not defined RB_HOST will be used.
> # VO_<vo_name>_RBS="vo-rb1 [vo-rb2 [...]]"
>
> # Area on the WN for the installation of the experiment software.
> # If a predefined shared area has been mounted on the WNs where
> # VO managers can pre-install software, then this variable
> # should point to that area. If instead there is no shared
> # area and each job must install the software, then this variable
> # should contain a dot ( . ). In any case the mounting of shared areas,
> # as well as the local installation of VO software, is not managed
> # by yaim and should be handled locally by Site Administrators.
> # VO_<vo_name>_SW_DIR=wn_exp_soft_dir
>
> SW_DIR=$VO_SW_DIR/armgrid
> DEFAULT_SE=$DPM_HOST
>
> VO_ARMGRID_GRID_AM_VOMS_SERVERS="'vomss://voms.grid.am:8443/voms/armgrid.grid.am?/armgrid.grid.am'"
> VO_ARMGRID_GRID_AM_VOMSES="'armgrid.grid.am voms.grid.am 15000 /C=AM/O=ArmeSFo/O=IIAP NAS RA/OU=HPC Laboratory/CN=voms.grid.am armgrid.grid.am'"
> VO_ARMGRID_GRID_AM_VOMS_CA_DN="'/C=AM/O=ArmeSFo/CN=ArmeSFo CA'"
>
>
> # DEFAULT_SE=$DPM_HOST
> # VOMS_SERVERS="'vomss://voms.irb.hr:8443/voms/seegrid?/seegrid'"
> # VOMSES="'seegrid voms.irb.hr 15010 /C=HR/O=edu/OU=irb/CN=host/voms.irb.hr seegrid' 'seegrid voms.grid.auth.gr 15040 /C=GR/O=HellasGrid/OU=auth.gr/CN=voms.grid.auth.gr
> # VOMS_CA_DN="'/C=HR/O=edu/OU=srce/CN=SRCE CA' '/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006'"
>
> # This variable contains the vomses file parameters needed
> # to contact a VOMS server. Multiple VOMS servers can be given
> # if the parameters are enclosed in single quotes.
> # VO_<vo_name>_VOMSES="'vo_name voms_server_hostname port voms_server_host_cert_dn vo_name' ['...']"
>
> # DN of the CA that signs the VOMS server certificate.
> # Multiple values can be given if enclosed in single quotes.
> # Note that there must be as many entries as in the VO_<vo-name>_VOMSES variable.
> # There is a one to one relationship in the elements of both lists,
> # so the order must be respected
> # VO_<vo_name>_VOMS_CA_DN="'voms_server_ca_dn' ['...']"
>
> # A list of the VOMS servers used to create the DN grid-map file.
> # Multiple values can be given if enclosed in single quotes.
> # VO_<vo_name>_VOMS_SERVERS="'vomss://<host-name>:8443/voms/<vo-name>?/<vo-name>' ['...']"
>
> # Optional variable to define a list of WMSs used by the VO.
> # Define a space separated list of WMS hostnames.
> # If this variable is not defined WMS_HOST will be used.
> # VO_<vo_name>_WMS_HOSTS="vo-wms1 [vo-wms2 [...]]"
>
> # Optional variable to create a grid-mapfile with mappings to ordinary
> # pool accounts, not containing mappings to special users.
> # - UNPRIVILEGED_MKGRIDMAP=no or undefined, will contain
> # special users if defined in groups.conf
> # - UNPRIVILEGED_MKGRIDMAP=yes, will create a grid-mapfile
> # containing only mappings to ordinary pool accounts.
> # VO_<vo_name>_UNPRIVILEGED_MKGRIDMAP=no
>
> # gLite pool account home directory for the user accounts specified in USERS_CONF.
> # Define this variable if you would like to use a directory different than /home.
> # VO_<vo_name>_USER_HOME_PREFIX=/pool_account_home_dir
>
> # Examples for the following VOs are included below:
> #
> # atlas
> # alice
> # lhcb
> # cms
> # dteam
> # biomed
> # ops
> #
> # VOs should check the CIC portal http://cic.in2p3.fr for the VO ID card information
> #
> #
> #########
> # atlas #
> #########
> VO_ATLAS_SW_DIR=$VO_SW_DIR/atlas
> VO_ATLAS_DEFAULT_SE=$SE_HOST
> VO_ATLAS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/atlas
> VO_ATLAS_VOMS_SERVERS='vomss://voms.cern.ch:8443/voms/atlas?/atlas/'
> VO_ATLAS_VOMSES="\
> 'atlas lcg-voms.cern.ch 15001 \
> /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch atlas 24' \
> 'atlas voms.cern.ch 15001 \
> /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch atlas 24' \
> 'atlas vo.racf.bnl.gov 15003 \
> /DC=org/DC=doegrids/OU=Services/CN=vo.racf.bnl.gov atlas 24' \
> "
> VO_ATLAS_VOMS_CA_DN="\
> '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' \
> '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' \
> '/DC=org/DC=DOEGrids/OU=Certificate Authorities/CN=DOEGrids CA 1' \
> "
> #
> #########
> # alice #
> #########
> VO_ALICE_SW_DIR=$VO_SW_DIR/alice
> VO_ALICE_DEFAULT_SE=$SE_HOST
> VO_ALICE_STORAGE_DIR=$CLASSIC_STORAGE_DIR/alice
> VO_ALICE_VOMS_SERVERS='vomss://voms.cern.ch:8443/voms/alice?/alice/'
> VO_ALICE_VOMSES="\
> 'alice lcg-voms.cern.ch 15000 \
> /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch alice 24' \
> 'alice voms.cern.ch 15000 \
> /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch alice 24' \
> "
> VO_ALICE_VOMS_CA_DN="\
> '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' \
> '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' \
> "
> #
> #########
> # dteam #
> #########
> VO_DTEAM_SW_DIR=$VO_SW_DIR/dteam
> VO_DTEAM_DEFAULT_SE=$SE_HOST
> VO_DTEAM_STORAGE_DIR=$CLASSIC_STORAGE_DIR/dteam
> VO_DTEAM_VOMS_SERVERS='vomss://voms.hellasgrid.gr:8443/voms/dteam?/dteam/'
> VO_DTEAM_VOMSES="\
> 'dteam lcg-voms.cern.ch 15004 \
> /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch dteam 24' \
> 'dteam voms.cern.ch 15004 \
> /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch dteam 24' \
> 'dteam voms.hellasgrid.gr 15004 \
> /C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms.hellasgrid.gr dteam 24' \
> 'dteam voms2.hellasgrid.gr 15004 \
> /C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr dteam 24' \
> "
> VO_DTEAM_VOMS_CA_DN="\
> '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' \
> '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' \
> '/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006' \
> '/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006' \
> "
> #
> #######
> # ops #
> #######
> VO_OPS_SW_DIR=$VO_SW_DIR/ops
> VO_OPS_DEFAULT_SE=$SE_HOST
> VO_OPS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/ops
> VO_OPS_VOMS_SERVERS="vomss://voms.cern.ch:8443/voms/ops?/ops/"
> VO_OPS_VOMSES="\
> 'ops lcg-voms.cern.ch 15009 \
> /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch ops 24' \
> 'ops voms.cern.ch 15009 \
> /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch ops 24' \
> "
> VO_OPS_VOMS_CA_DN="\
> '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' \
> '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' \
> "
>
>
--
Best regards,
Valery Mitsyn