Hi,
I am using the latest yaim version (glite-yaim-3.0.0-22), and the
site-info file is *attached* to this email.
Here is the complete configure_node command I run on the RB:
/opt/glite/yaim/scripts/configure_node
/opt/glite/yaim/travail/site-info.def LFC_mysql RB BDII
A temporary workaround was to edit the files:
/opt/lcg/var/gip/lcg-info-static-rb.conf
/opt/lcg/var/gip/ldif/static-file-RB.ldif
and add the correct LDAP information for the lcg-file-catalog and
lfc-dli services.
The LFC server is now being published correctly, but I am still open
to an official (yaim) solution.
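For anyone hitting the same problem, a quick way to sanity-check the edit is to confirm that the provider output actually contains both LFC-related service types. The snippet below is only an illustrative sketch (not part of YAIM); the sample entries are abridged from the lcg-lfc-provider output quoted later in this thread, and it assumes the provider prints plain LDIF to stdout.

```python
# Illustrative sketch: check that a GIP provider's LDIF output publishes
# the two LFC-related service types. The sample data is abridged from the
# output of /opt/lcg/var/gip/provider/lcg-lfc-provider shown in this thread.

SAMPLE_LDIF = """\
dn: GlueServiceUniqueID=http://rb1.egee.fr.cgg.com:8085/,o=grid
objectClass: GlueService
GlueServiceName: CGG-LCG2-lfc-dli
GlueServiceType: data-location-interface

dn: GlueServiceUniqueID=rb1.egee.fr.cgg.com,o=grid
objectClass: GlueService
GlueServiceName: CGG-LCG2-lfc
GlueServiceType: lcg-file-catalog
"""

def service_types(ldif_text):
    """Return the set of GlueServiceType values found in an LDIF dump."""
    types = set()
    for line in ldif_text.splitlines():
        if line.startswith("GlueServiceType:"):
            types.add(line.split(":", 1)[1].strip())
    return types

# Both LFC-related services should be published.
assert {"lcg-file-catalog", "data-location-interface"} <= service_types(SAMPLE_LDIF)
```

In practice you would feed this the live provider output instead of the sample string.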
Regards
Ahmed
Louis Poncet wrote:
> No, not at all; that information is dynamic, and it is fine that way.
> Are you sure you re-ran the configuration on this host with all the
> node types in the list?
> It looks as if the config_gip function of yaim did not go through the
> LFC branch.
>
> Also check your site-info carefully before rerunning yaim.
>
>
> Lp
>
>
> On Sep 8, 2006, at 4:19 PM, [log in to unmask] wrote:
>
>> Hi,
>>
>> Very interesting: it gives exactly what I am trying to add to the static
>> LDIF of the globus-mds running on the RB. Do you think I should edit the
>> file (${INSTALL_ROOT}/lcg/var/gip/lcg-info-static.ldif) manually, or is
>> there a configuration script which can do this in a safer way?
>>
>> Cheers Ahmed
>>
>> PS: The output of the command:
>> [root@rb1 yaim]# /opt/lcg/var/gip/provider/lcg-lfc-provider
>> dn: GlueServiceUniqueID=http://rb1.egee.fr.cgg.com:8085/,o=grid
>> objectClass: GlueTop
>> objectClass: GlueService
>> GlueServiceName: CGG-LCG2-lfc-dli
>> GlueServiceType: data-location-interface
>> GlueServiceEndpoint: http://rb1.egee.fr.cgg.com:8085/
>> GlueServiceURI: http://rb1.egee.fr.cgg.com:8085/
>> GlueServiceAccessPointURL: http://rb1.egee.fr.cgg.com:8085/
>> GlueServiceStatus: OK
>> GlueServiceStatusInfo: No Problems
>> GlueServiceWSDL: unset
>> GlueServiceSemantics: unset
>> GlueForeignKey: GlueSiteUniqueID=CGG-LCG2
>> GlueServiceStartTime: 2006-09-08 13:44:59.000000000 +0200
>> GlueServiceVersion: 1.5.7
>> GlueServiceOwner: egeode
>> GlueServiceAccessControlRule: egeode
>>
>> dn: GlueServiceUniqueID=rb1.egee.fr.cgg.com,o=grid
>> objectClass: GlueTop
>> objectClass: GlueService
>> GlueServiceName: CGG-LCG2-lfc
>> GlueServiceType: lcg-file-catalog
>> GlueServiceEndpoint: rb1.egee.fr.cgg.com
>> GlueServiceURI: rb1.egee.fr.cgg.com
>> GlueServiceAccessPointURL: rb1.egee.fr.cgg.com
>> GlueServiceStatus: OK
>> GlueServiceStatusInfo: No Problems
>> GlueServiceWSDL: unset
>> GlueServiceSemantics: unset
>> GlueForeignKey: GlueSiteUniqueID=CGG-LCG2
>> GlueServiceStartTime: 2006-09-08 13:44:59.000000000 +0200
>> GlueServiceVersion: 1.5.7
>> GlueServiceOwner: egeode
>> GlueServiceAccessControlRule: egeode
>>
>>
>> Louis Poncet wrote:
>>> Can you execute this on the LFC host?
>>>
>>>
>>> ${INSTALL_ROOT}/lcg/var/gip/provider/lcg-lfc-provider
>>>
>>> On Sep 8, 2006, at 2:42 PM, [log in to unmask] wrote:
>>>
>>>> Hi Louis,
>>>>
>>>> Yes I did and the output is here :
>>>>
>>>> Regards
>>>>
>>>> Ahmed
>>>>
>>>> [aberiach@ui1 hello]$ ldapsearch -xLLL -H
>>>> ldap://rb1.egee.fr.cgg.com:2135 -b mds-vo-name=local,o=grid
>>>> GlueServiceDataKey
>>>> dn: GlueServiceUniqueID=rb1.egee.fr.cgg.com:7772,mds-vo-name=local,o=grid
>>>>
>>>> dn: GlueServiceDataKey=HeldJobs,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> GlueServiceDataKey: HeldJobs
>>>>
>>>> dn: GlueServiceDataKey=IdleJobs,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> GlueServiceDataKey: IdleJobs
>>>>
>>>> dn: GlueServiceDataKey=JobController,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> GlueServiceDataKey: JobController
>>>>
>>>> dn: GlueServiceDataKey=Jobs,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> GlueServiceDataKey: Jobs
>>>>
>>>> dn: GlueServiceDataKey=LogMonitor,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> GlueServiceDataKey: LogMonitor
>>>>
>>>> dn: GlueServiceDataKey=RunningJobs,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> GlueServiceDataKey: RunningJobs
>>>>
>>>> dn: GlueServiceDataKey=WorkloadManager,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> GlueServiceDataKey: WorkloadManager
>>>>
>>>> dn: Mds-Vo-name=local,o=grid
>>>>
>>>> The full output is :
>>>> [aberiach@ui1 hello]$ ldapsearch -xLLL -H
>>>> ldap://rb1.egee.fr.cgg.com:2135 -b mds-vo-name=local,o=grid
>>>> dn: GlueServiceUniqueID=rb1.egee.fr.cgg.com:7772,mds-vo-name=local,o=grid
>>>> objectClass: GlueTop
>>>> objectClass: GlueService
>>>> objectClass: GlueKey
>>>> objectClass: GlueSchemaVersion
>>>> GlueServiceUniqueID: rb1.egee.fr.cgg.com:7772
>>>> GlueServiceName: CGG-LCG2-rb
>>>> GlueServiceType: ResourceBroker
>>>> GlueServiceVersion: 1.2.0
>>>> GlueServiceEndpoint: rb1.egee.fr.cgg.com:7772
>>>> GlueServiceURI: unset
>>>> GlueServiceAccessPointURL: not_used
>>>> GlueServiceStatus: OK
>>>> GlueServiceStatusInfo: No Problems
>>>> GlueServiceWSDL: unset
>>>> GlueServiceSemantics: unset
>>>> GlueServiceStartTime: 1970-01-01T00:00:00Z
>>>> GlueServiceOwner: dteam
>>>> GlueServiceOwner: egeode
>>>> GlueServiceOwner: esr
>>>> GlueServiceOwner: fusion
>>>> GlueServiceOwner: atlas
>>>> GlueServiceOwner: alice
>>>> GlueServiceOwner: cms
>>>> GlueServiceOwner: lhcb
>>>> GlueServiceOwner: biomed
>>>> GlueServiceOwner: auvergrid
>>>> GlueServiceOwner: ops
>>>> GlueServiceAccessControlRule: dteam
>>>> GlueServiceAccessControlRule: egeode
>>>> GlueServiceAccessControlRule: esr
>>>> GlueServiceAccessControlRule: fusion
>>>> GlueServiceAccessControlRule: atlas
>>>> GlueServiceAccessControlRule: alice
>>>> GlueServiceAccessControlRule: cms
>>>> GlueServiceAccessControlRule: lhcb
>>>> GlueServiceAccessControlRule: biomed
>>>> GlueServiceAccessControlRule: auvergrid
>>>> GlueServiceAccessControlRule: ops
>>>> GlueForeignKey: GlueSiteUniqueID=CGG-LCG2
>>>> GlueSchemaVersionMajor: 1
>>>> GlueSchemaVersionMinor: 2
>>>>
>>>> dn: GlueServiceDataKey=HeldJobs,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> objectClass: GlueTop
>>>> objectClass: GlueServiceData
>>>> objectClass: GlueKey
>>>> objectClass: GlueSchemaVersion
>>>> GlueServiceDataKey: HeldJobs
>>>> GlueServiceDataValue: 0
>>>> GlueChunkKey: GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772
>>>> GlueSchemaVersionMajor: 1
>>>> GlueSchemaVersionMinor: 2
>>>>
>>>> dn: GlueServiceDataKey=IdleJobs,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> objectClass: GlueTop
>>>> objectClass: GlueServiceData
>>>> objectClass: GlueKey
>>>> objectClass: GlueSchemaVersion
>>>> GlueServiceDataKey: IdleJobs
>>>> GlueServiceDataValue: 0
>>>> GlueChunkKey: GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772
>>>> GlueSchemaVersionMajor: 1
>>>> GlueSchemaVersionMinor: 2
>>>>
>>>> dn: GlueServiceDataKey=JobController,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> objectClass: GlueTop
>>>> objectClass: GlueServiceData
>>>> objectClass: GlueKey
>>>> objectClass: GlueSchemaVersion
>>>> GlueServiceDataKey: JobController
>>>> GlueServiceDataValue: 0
>>>> GlueChunkKey: GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772
>>>> GlueSchemaVersionMajor: 1
>>>> GlueSchemaVersionMinor: 2
>>>>
>>>> dn: GlueServiceDataKey=Jobs,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> objectClass: GlueTop
>>>> objectClass: GlueServiceData
>>>> objectClass: GlueKey
>>>> objectClass: GlueSchemaVersion
>>>> GlueServiceDataKey: Jobs
>>>> GlueServiceDataValue: 0
>>>> GlueChunkKey: GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772
>>>> GlueSchemaVersionMajor: 1
>>>> GlueSchemaVersionMinor: 2
>>>>
>>>> dn: GlueServiceDataKey=LogMonitor,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> objectClass: GlueTop
>>>> objectClass: GlueServiceData
>>>> objectClass: GlueKey
>>>> objectClass: GlueSchemaVersion
>>>> GlueServiceDataKey: LogMonitor
>>>> GlueServiceDataValue: 0
>>>> GlueChunkKey: GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772
>>>> GlueSchemaVersionMajor: 1
>>>> GlueSchemaVersionMinor: 2
>>>>
>>>> dn: GlueServiceDataKey=RunningJobs,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> objectClass: GlueTop
>>>> objectClass: GlueServiceData
>>>> objectClass: GlueKey
>>>> objectClass: GlueSchemaVersion
>>>> GlueServiceDataKey: RunningJobs
>>>> GlueServiceDataValue: 14
>>>> GlueChunkKey: GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772
>>>> GlueSchemaVersionMajor: 1
>>>> GlueSchemaVersionMinor: 2
>>>>
>>>> dn: GlueServiceDataKey=WorkloadManager,GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772,Mds-vo-name=local,o=grid
>>>> objectClass: GlueTop
>>>> objectClass: GlueServiceData
>>>> objectClass: GlueKey
>>>> objectClass: GlueSchemaVersion
>>>> GlueServiceDataKey: WorkloadManager
>>>> GlueServiceDataValue: 0
>>>> GlueChunkKey: GlueServiceUniqueID=gram://rb1.egee.fr.cgg.com:7772
>>>> GlueSchemaVersionMajor: 1
>>>> GlueSchemaVersionMinor: 2
>>>>
>>>> dn: Mds-Vo-name=local,o=grid
>>>> objectClass: GlobusStub
>>>>
>>>>
>>>>
>>>> Louis Poncet wrote:
>>>>> Hi,
>>>>> Did you try an LDAP search on it?
>>>>>
>>>>> Lp
>>>>>
>>>>> On Sep 8, 2006, at 1:46 PM, [log in to unmask] wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> After upgrading the CGG-LCG2 classic SE to a DPM SE, we moved the
>>>>>> LFC service hosted by that SE to the RB host. But the globus-mds
>>>>>> of the RB is not publishing the LFC service.
>>>>>> The command "lcg-infosites --vo egeode lfc" does not give any
>>>>>> response, and the lcg-XX commands fail for this VO because they
>>>>>> cannot get the LFC hostname from the information system.
>>>>>>
>>>>>>
>>>>>>
>>>>>> The hosts involved are:
>>>>>> RB/BDII/LFC : rb1.egee.fr.cgg.com
>>>>>> SE : se1.egee.fr.cgg.com
>>>>>>
>>>>>> Do you have any idea about this problem ?
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Ahmed
>>>>>
>>>
>
# YAIM example site configuration file - adapt it to your site!
MY_DOMAIN=egee.fr.cgg.com
CE_HOST=ce1.$MY_DOMAIN
# note: SE_HOST removed --> see CLASSIC_HOST, DCACHE_ADMIN, DPM_HOST below
RB_HOST=rb1.$MY_DOMAIN
WMS_HOST=rb1.$MY_DOMAIN # Only used to configure the UI to submit jobs to a WMSLB; the error is the same with this line commented out.
PX_HOST=myproxy.cern.ch
BDII_HOST=rb1.$MY_DOMAIN
MON_HOST=mon1.$MY_DOMAIN
FTS_HOST=se1.$MY_DOMAIN
REG_HOST=lcgic01.gridpp.rl.ac.uk # there is only 1 central registry for now
# Set this if you are building a VO-BOX
#VOBOX_HOST=my-vobox.$MY_DOMAIN
#VOBOX_PORT=1975
#Set this to "yes" if your site provides an X509toKERBEROS Authentication Server
#Only for sites with Experiment Software Area under AFS
#GSSKLOG=no
#GSSKLOG_SERVER=my-gssklog.$MY_DOMAIN
# LFC
# Set these if you are installing an LFC
LFC_HOST=rb1.$MY_DOMAIN
LFC_DB_PASSWORD=XXXXXX
# These are set to default to using the standard database on the same hosts
# as the LFC daemon is on
LFC_DB_HOST=$LFC_HOST
LFC_DB=cns_db
# All catalogues are local unless you add a VO to
# LFC_CENTRAL, in which case that will be central
LFC_CENTRAL="egeode"
# If you want to limit the VOs your LFC serves, add the locals here
#LFC_LOCAL=""
# If you use a DNS alias in front of your LFC, specify it here
#LFC_HOST_ALIAS=""
# Change this if your torque server is not on the CE
# it is ignored for other batch systems
TORQUE_SERVER=$CE_HOST
WN_LIST=/opt/glite/yaim/travail/wn-list.conf
USERS_CONF=/opt/glite/yaim/travail/users.conf
GROUPS_CONF=/opt/glite/yaim/travail/groups.conf
FUNCTIONS_DIR=/opt/glite/yaim/functions
YAIM_VERSION=3.0.0-3
# Pick the apt-get sources appropriate to your OS - uncomment one line
LCG_REPOSITORY="rpm http://glitesoft.cern.ch/EGEE/gLite/APT/R3.0/ rhel30 externals Release3.0 updates"
# This is the old one : CA_REPOSITORY="rpm http://grid-deployment.web.cern.ch/grid-deployment/gis apt/LCG_CA/en/i386 lcg"
CA_REPOSITORY="rpm http://linuxsoft.cern.ch/ LCG-CAs/current production"
#REPOSITORY_TYPE="apt" # or "yum"
REPOSITORY_TYPE="apt"
# For the relocatable (tarball) distribution, ensure
# that INSTALL_ROOT is set correctly
INSTALL_ROOT=/opt
# You will probably want to change these too for the relocatable dist
OUTPUT_STORAGE=/tmp/jobOutput
JAVA_LOCATION="/usr/java/j2sdk1.4.2_12"
# Set this to '/dev/null' or some other dir if you want
# to turn off yaim installation of cron jobs
CRON_DIR=/etc/cron.d
GLOBUS_TCP_PORT_RANGE="20000 25000"
MYSQL_PASSWORD=XXXXXX
APEL_DB_PASSWORD="XXXXXX"
#
# ---> GRID_TRUSTED_BROKERS: put single quotes around each trusted DN !!! <---
#
GRID_TRUSTED_BROKERS="rb1.egee.fr.cgg.com"
# The RB now uses the DLI by default; set VOs here which should use RLS
#RB_RLS="atlas cms"
GRIDMAP_AUTH="'ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org' 'ldap://vo-server.in2p3.fr/ou=People,o=auvergrid,dc=lcg,dc=org'"
#GRIDMAP_AUTH="ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org ldap://xyz"
GRIDICE_SERVER_HOST=$MON_HOST
[log in to unmask]
SITE_NAME=CGG-LCG2
SITE_LOC="Massy, France"
SITE_LAT=48.72230406591397 # -90 to 90 degrees
SITE_LONG=2.2701680660247803 # -180 to 180 degrees
SITE_WEB="http://www.cgg.com"
SITE_TIER="TIER 2"
SITE_SUPPORT_SITE="my-bigger-site.cern.ch"
JOB_MANAGER=pbs
CE_BATCH_SYS=torque
BATCH_BIN_DIR=/usr/bin
BATCH_VERSION=torque-1.0.1b
BATCH_LOG_DIR=/var/spool/pbs/server_priv/accounting
CE_CPU_MODEL=PIII
CE_CPU_VENDOR=intel
CE_CPU_SPEED=1266
CE_OS="Scientific Linux"
CE_OS_RELEASE=3.0.5
CE_OS_VERSION="SL"
CE_MINPHYSMEM=2048
CE_MINVIRTMEM=4096
CE_SMPSIZE=2
CE_SI00=611
CE_SF00=422
CE_OUTBOUNDIP=TRUE
CE_INBOUNDIP=FALSE
CE_RUNTIMEENV="
LCG-2
LCG-2_1_0
LCG-2_1_1
LCG-2_2_0
LCG-2_3_0
LCG-2_3_1
LCG-2_4_0
LCG-2_5_0
LCG-2_6_0
LCG-2_7_0
GLITE-3_0_0
R-GMA
MPICH
"
# Set this if your WNs have a shared directory for temporary storage
CE_DATADIR=""
CLASSIC_HOST=se1.egee.fr.cgg.com
CLASSIC_STORAGE_DIR="/storage"
# dCache-specific settings
# ignore if you are not running d-cache
# Your dcache admin node
#DCACHE_ADMIN=""
#DCACHE_POOLS="my-pool-node1:/pool-path1 my-pool-node2:/pool-path2"
# Optional
# DCACHE_PORT_RANGE="20000,25000"
# Set to "yes" only if YAIM shall reset the dCache configuration,
# i.e. if you want YAIM to configure dCache - WARNING:
# this may wipe out any dCache parameters previously configured!
#RESET_DCACHE_CONFIGURATION=no
#==== NEW variables ======
# The name of the DPM head node
DPM_HOST=se1.$MY_DOMAIN
# The DPM pool name
DPMPOOL=pool1
# The filesystems/partitions parts of the pool
#DPM_FILESYSTEMS="$DPM_HOST:/storage my-dpm-poolnode.$MY_DOMAIN:/path2"
DPM_FILESYSTEMS="$DPM_HOST:/storage/dpmdata"
# The database user
DPM_DB_USER=dpmuser
# The database user password
DPM_DB_PASSWORD=XXXXXX
# The DPM database host
DPM_DB_HOST=$DPM_HOST
# Specifies the default amount of space reserved for a file
DPMFSIZE=200M
# Variable for the port range - Optional, default value is shown
# RFIO_PORT_RANGE="20000 25000"
# ?? unsure whether these are necessary
#DPMMGR=dpmmgr
#DPMDATA=/storage
#======= Old variables NOT USED =======
# SE_dpm-specific settings
# Ignore if you are not running a DPM
#DPMDATA="/storage"
# The database user
#DPMMGR=the-dpm-db-user
# The database user password
#DPMUSER_PWD=the-dpm-db-pwd
#DPMFSIZE=200M
# Set this if you are building a DPM yourself
# and/or if you need a default DPM for the lcg-stdout-mon
#DPM_HOST="" # my-dpm.$MY_DOMAIN
#DPM_HOST=se1.$MY_DOMAIN
#DPMPOOL=the_dpm_pool_name
#DPMPOOL=pool1
#DPMPOOL_NODES="poolnode1.$MY_DOMAIN:/path1 poolnode2.$MY_DOMAIN:/path2"
# Optional
# DPM_PORT_RANGE="20000,25000" ??
#============ ================
# This largely replaces CE_CLOSE_SE but it is a list of hostnames
SE_LIST="$DPM_HOST" # $DPM_HOST $DCACHE_ADMIN"
SE_ARCH="disk" # "disk, tape, multidisk, other"
FTS_SERVER_URL="https://se1.${MY_DOMAIN}:8443/path/glite-data-transfer-fts"
FTS_DB_TYPE=mysql
BDII_HTTP_URL="http://grid-deployment.web.cern.ch/grid-deployment/gis/lcg2-bdii/dteam/lcg2-all-sites.conf"
# Set this to use FCR
BDII_FCR="http://goc.grid-support.ac.uk/gridsite/bdii/BDII/www/bdii-update.ldif"
#BDII_REGIONS="CE SE RB PX VOBOX"
BDII_REGIONS="CE SE RB LFC" # list of the services provided by the site
BDII_CE_URL="ldap://$CE_HOST:2135/mds-vo-name=local,o=grid"
BDII_SE_URL="ldap://$DPM_HOST:2135/mds-vo-name=local,o=grid"
BDII_RB_URL="ldap://$RB_HOST:2135/mds-vo-name=local,o=grid"
#BDII_PX_URL="ldap://$PX_HOST:2135/mds-vo-name=local,o=grid"
BDII_LFC_URL="ldap://$LFC_HOST:2135/mds-vo-name=local,o=grid"
#BDII_VOBOX_URL="ldap://$VOBOX_HOST:2135/mds-vo-name=local,o=grid"
# Use this to set your contact string.
# Ex.: BDII_BIND="mds-vo-name=mystorage,o=grid"
# E2EMONIT specific settings
# This specifies the location to download the host specific configuration file
#E2EMONIT_LOCATION=grid-deployment.web.cern.ch/grid-deployment/e2emonit/production
#
# Replace this with the siteid supplied by the person setting up the networking
# topology.
#E2EMONIT_SITEID=my.siteid
#VOS="atlas alice lhcb cms dteam biomed"
VOS="dteam egeode esr fusion atlas alice cms lhcb biomed auvergrid ops" # add the other VOs your site supports
QUEUES=${VOS}
VO_SW_DIR=/voarea
# set this if you want a scratch directory for jobs
EDG_WL_SCRATCH="/scr"
VO_ATLAS_SW_DIR=$VO_SW_DIR/atlas
VO_ATLAS_DEFAULT_SE=$DPM_HOST
VO_ATLAS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/atlas
VO_ATLAS_QUEUES="atlas"
VO_ATLAS_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=atlas,dc=eu-datagrid,dc=org
VO_ATLAS_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=atlas,dc=eu-datagrid,dc=org
VO_ATLAS_VOMS_POOL_PATH="/lcg1"
VO_ATLAS_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/atlas?/atlas/' 'vomss://voms.cern.ch:8443/voms/atlas?/atlas/'"
#VO_ATLAS_VOMS_EXTRA_MAPS="'Role=production production' 'usatlas .usatlas'"
VO_ATLAS_VOMSES="'atlas lcg-voms.cern.ch 15001 /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch atlas' 'atlas voms.cern.ch 15001 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch atlas'"
VO_ALICE_SW_DIR=$VO_SW_DIR/alice
VO_ALICE_DEFAULT_SE=$CLASSIC_HOST
VO_ALICE_STORAGE_DIR=$CLASSIC_STORAGE_DIR/alice
VO_ALICE_QUEUES="alice"
VO_ALICE_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=alice,dc=eu-datagrid,dc=org
VO_ALICE_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=alice,dc=eu-datagrid,dc=org
VO_ALICE_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/alice?/alice/' 'vomss://voms.cern.ch:8443/voms/alice?/alice/'"
VO_ALICE_VOMSES="'alice lcg-voms.cern.ch 15000 /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch alice' 'alice voms.cern.ch 15000 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch alice'"
VO_CMS_SW_DIR=$VO_SW_DIR/cms
VO_CMS_DEFAULT_SE=$CLASSIC_HOST
VO_CMS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/cms
VO_CMS_QUEUES="cms"
VO_CMS_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=cms,dc=eu-datagrid,dc=org
VO_CMS_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=cms,dc=eu-datagrid,dc=org
VO_CMS_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/cms?/cms/' 'vomss://voms.cern.ch:8443/voms/cms?/cms/'"
VO_CMS_VOMSES="'cms lcg-voms.cern.ch 15002 /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch cms' 'cms voms.cern.ch 15002 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch cms'"
VO_LHCB_SW_DIR=$VO_SW_DIR/lhcb
VO_LHCB_DEFAULT_SE=$CLASSIC_HOST
VO_LHCB_STORAGE_DIR=$CLASSIC_STORAGE_DIR/lhcb
VO_LHCB_QUEUES="lhcb"
VO_LHCB_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=lhcb,dc=eu-datagrid,dc=org
VO_LHCB_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=lhcb,dc=eu-datagrid,dc=org
VO_LHCB_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/lhcb?/lhcb/' 'vomss://voms.cern.ch:8443/voms/lhcb?/lhcb/'"
VO_LHCB_VOMS_EXTRA_MAPS="lcgprod lhcbprod"
VO_LHCB_VOMSES="'lhcb lcg-voms.cern.ch 15003 /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch lhcb' 'lhcb voms.cern.ch 15003 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch lhcb'"
VO_DTEAM_SW_DIR=$VO_SW_DIR/dteam
VO_DTEAM_DEFAULT_SE=$CLASSIC_HOST
VO_DTEAM_STORAGE_DIR=$CLASSIC_STORAGE_DIR/dteam
VO_DTEAM_QUEUES="dteam"
VO_DTEAM_SGM=ldap://lcg-vo.cern.ch/ou=lcgadmin,o=dteam,dc=lcg,dc=org
VO_DTEAM_USERS=ldap://lcg-vo.cern.ch/ou=lcg1,o=dteam,dc=lcg,dc=org
VO_DTEAM_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/dteam?/dteam/' 'vomss://voms.cern.ch:8443/voms/dteam?/dteam/'"
VO_DTEAM_VOMSES="'dteam lcg-voms.cern.ch 15004 /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch dteam' 'dteam voms.cern.ch 15004 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch dteam'"
VO_BIOMED_SW_DIR=$VO_SW_DIR/biomed
VO_BIOMED_DEFAULT_SE=$CLASSIC_HOST
VO_BIOMED_STORAGE_DIR=$CLASSIC_STORAGE_DIR/biomed
VO_BIOMED_QUEUES="biomed"
VO_BIOMED_USERS=ldap://vo-biome.in2p3.fr/ou=lcg1,o=biomedical,dc=lcg,dc=org
VO_BIOMED_SGM=ldap://vo-biome.in2p3.fr/ou=lcgadmin,o=biomedical,dc=lcg,dc=org
#VO_BIOMED_VOMSES="biomed cclcgvomsli01.in2p3.fr 15000 [log in to unmask] biomed"
VO_BIOMED_VOMSES="biomed cclcgvomsli01.in2p3.fr 15000 /O=GRID-FR/C=FR/O=CNRS/OU=CC-LYON/CN=cclcgvomsli01.in2p3.fr biomed"
VO_EGEODE_SW_DIR=$VO_SW_DIR/egeode
VO_EGEODE_DEFAULT_SE=$CLASSIC_HOST
VO_EGEODE_STORAGE_DIR=$CLASSIC_STORAGE_DIR/egeode
VO_EGEODE_QUEUES="egeode"
VO_EGEODE_USERS=ldap://vo-egeode.in2p3.fr/ou=lcg1,o=egeode,dc=lcg,dc=org
VO_EGEODE_SGM=ldap://vo-egeode.in2p3.fr/ou=lcgadmin,o=egeode,dc=lcg,dc=org
VO_EGEODE_VOMS_SERVERS="vomss://voms-egeode.in2p3.fr:8443/voms/egeode?/egeode/"
#VO_EGEODE_VOMSES="'"egeode" "cclcgvomsli01.in2p3.fr" "15001" "[log in to unmask]" "egeode"'"
VO_EGEODE_VOMSES="'"egeode" "cclcgvomsli01.in2p3.fr" "15001" "/O=GRID-FR/C=FR/O=CNRS/OU=CC-LYON/CN=cclcgvomsli01.in2p3.fr" "egeode"'"
VO_ESR_SW_DIR=$VO_SW_DIR/esr
VO_ESR_DEFAULT_SE=$CLASSIC_HOST
VO_ESR_STORAGE_DIR=$CLASSIC_STORAGE_DIR/esr
VO_ESR_QUEUES="esr"
VO_ESR_USERS=ldap://grid-vo.sara.nl/ou=eobs,o=esr,dc=eu-egee,dc=org
VO_ESR_SGM=ldap://grid-vo.sara.nl/ou=lcgadmin,o=esr,dc=eu-egee,dc=org
#VO_ESR_VOMS_SERVERS="vomss://kuiken.nikhef.nl:8443/voms/esr?/esr/"
#VO_ESR_VOMSES="'esr kuiken.nikhef.nl 15006 /O=dutchgrid/O=hosts/OU=nikhef.nl/CN=kuiken.nikhef.nl esr' 'esr mu4.matrix.sara.nl 30001 /O=dutchgrid/O=hosts/OU=sara.nl/CN=mu4.sara.nl esr'"
#IPSL site-def.conf
#VO_ESR_VOMS_SERVERS="'vomss://mu4.matrix.sara.nl:8443/voms/esr?/esr' 'vomss://kuiken.nikhef.nl:8443/voms/esr?/esr'"
#VO_ESR_VOMSES="'esr mu4.matrix.sara.nl 30001 /O=dutchgrid/O=hosts/OU=sara.nl/CN=mu4.matrix.sara.nl esr' 'esr kuiken.nikhef.nl 15006 /O=dutchgrid/O=hosts/OU=nikhef.nl/CN=kuiken.nikhef.nl esr'"
# D. Weissenbach's recommendation
VO_ESR_VOMS_SERVERS="'vomss://mu4.matrix.sara.nl:8443/voms/esr?/esr'"
VO_ESR_VOMSES="'esr mu4.matrix.sara.nl 30001 /O=dutchgrid/O=hosts/OU=sara.nl/CN=mu4.matrix.sara.nl esr'"
VO_FUSION_SW_DIR=$VO_SW_DIR/fusion
VO_FUSION_DEFAULT_SE=$CLASSIC_HOST
VO_FUSION_STORAGE_DIR=$CLASSIC_STORAGE_DIR/fusion
VO_FUSION_QUEUES="fusion"
VO_FUSION_SGM=ldap://swevo.ific.uv.es/ou=swadmin,o=fusion,dc=swe,dc=lcg,dc=org
VO_FUSION_USERS=ldap://swevo.ific.uv.es/ou=lcg1,o=fusion,dc=swe,dc=lcg,dc=org
VO_FUSION_VOMS_SERVERS="vomss://swevo.ific.uv.es:8443/voms/fusion?/fusion/"
VO_FUSION_VOMSES="'fusion swevo.ific.uv.es 14003 /C=ES/O=DATAGRID-ES/O=IFIC/CN=swevo.ific.uv.es fusion'"
VO_AUVERGRID_SW_DIR=$VO_SW_DIR/auvergrid
VO_AUVERGRID_DEFAULT_SE=$CLASSIC_HOST
VO_AUVERGRID_SGM=ldap://vo-server.in2p3.fr/ou=lcgadmin,o=auvergrid,dc=lcg,dc=org
VO_AUVERGRID_USERS=ldap://vo-server.in2p3.fr/ou=lcg1,o=auvergrid,dc=lcg,dc=org
VO_AUVERGRID_STORAGE_DIR=$CLASSIC_STORAGE_DIR/auvergrid
VO_AUVERGRID_QUEUES="auvergrid"
VO_OPS_SW_DIR=$VO_SW_DIR/ops
VO_OPS_DEFAULT_SE=$CLASSIC_HOST
VO_OPS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/ops
VO_OPS_QUEUES="ops"
VO_OPS_VOMS_SERVERS="vomss://lcg-voms.cern.ch:8443/voms/ops?/ops/"
VO_OPS_VOMSES="'ops lcg-voms.cern.ch 15009 /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch ops'"
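Since the thread above hinges on whether YAIM saw the LFC settings, one can do a crude pre-flight check that the LFC-related variables are actually set before rerunning configure_node. The helper below is a hypothetical sketch (not a YAIM tool); it only handles simple VAR=value lines, and the sample is an excerpt of the file above.

```python
# Hypothetical sketch: verify that the LFC-related variables are set
# (non-empty) in a site-info.def-style file before rerunning YAIM.
# Handles only simple VAR=value lines, which suffices for this check.

SAMPLE = """\
MY_DOMAIN=egee.fr.cgg.com
LFC_HOST=rb1.$MY_DOMAIN
LFC_DB_PASSWORD=XXXXXX
BDII_REGIONS="CE SE RB LFC"
"""

def parse_vars(text):
    """Parse simple VAR=value lines, ignoring comments and blank lines."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Drop any inline comment and surrounding double quotes.
        value = value.split("#", 1)[0].strip().strip('"')
        out[key.strip()] = value
    return out

cfg = parse_vars(SAMPLE)
missing = [v for v in ("LFC_HOST", "LFC_DB_PASSWORD") if not cfg.get(v)]
assert not missing, "LFC variables missing: %s" % missing
# The LFC region must also be listed for the site BDII to query it.
assert "LFC" in cfg["BDII_REGIONS"].split()
```

This does not expand `$MY_DOMAIN` or handle multi-line values like CE_RUNTIMEENV; it is only meant to catch an unset LFC_HOST or a missing LFC entry in BDII_REGIONS before a reconfiguration run.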