Hi Sophie,
I have now moved the LFC server from the DPM SE to the RB node, and the
information system is publishing it correctly.
The command: lcg-infosites --vo egeode lfc
returns:
rb1.egee.fr.cgg.com
But I am still having problems when I try to put files on this SE with
the lcg-cr command. Here are some commands I ran and the output they returned:
-------------
[aberiach@ui1 hello]$ lcg-cr --vo egeode -d se1.egee.fr.cgg.com -l
lfn:/grid/egeode/ahmed/test_111_
file://home/cgg/aberiach/dev/Kereon/GFAL/readme.txt
the server sent an error response: 550 550
se1.egee.fr.cgg.com:/storage/dpmdata/egeode/2006-09-11: Permission denied.
lcg_cr: Transport endpoint is not connected
-------------
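Since the 550 is raised on the disk-server path, it may help to compare the DPNS namespace permissions with the physical ones shown further down. A sketch of the checks, using this site's hostnames from the thread and assuming the standard DPM client tools (dpns-ls, dpns-getacl) are installed on the UI:

```shell
# Point the DPM name-server clients at this site's DPNS (hostname from this thread)
export DPNS_HOST=se1.egee.fr.cgg.com

# Namespace-side permissions and ACLs on the VO home directory
dpns-ls -ld /dpm/egee.fr.cgg.com/home/egeode
dpns-getacl /dpm/egee.fr.cgg.com/home/egeode

# Same check on the per-day area the transfer tried to write into
dpns-ls -ld /dpm/egee.fr.cgg.com/home/egeode/generated/2006-09-11
```

If these look correct, the failure is more likely at the filesystem level on the disk server rather than in the namespace.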
The log file /var/log/dpm/log shows the following lines. All but the
first four are generated by my command; the first four appear
periodically on their own.
09/11 09:28:11 27176,23 dpm_srv_getpools: DP092 - getpools request by
/O=GRID-FR/C=FR/O=CGG/OU=RDI/CN=se1.egee.fr.cgg.com (0,0) from
se1.egee.fr.cgg.com
09/11 09:28:11 27176,23 dpm_srv_getpools: returns 0
09/11 09:28:11 27176,23 dpm_srv_getpoolfs: DP092 - getpoolfs request by
/O=GRID-FR/C=FR/O=CGG/OU=RDI/CN=se1.egee.fr.cgg.com (0,0) from
se1.egee.fr.cgg.com
09/11 09:28:11 27176,23 dpm_srv_getpoolfs: returns 0
09/11 09:28:14 27176,23 dpm_srv_inc_reqctr: DP092 - inc_reqctr request
by /O=GRID-FR/C=FR/O=CGG/OU=RDI/CN=Ahmed Beriache (101,103) from
se1.egee.fr.cgg.com
09/11 09:28:14 27176,23 dpm_serv: incrementing reqctr
09/11 09:28:14 27176,23 dpm_serv: msthread signalled
09/11 09:28:14 27176,23 dpm_srv_inc_reqctr: returns 0
09/11 09:28:14 27176,2 msthread: calling Cpool_assign_ext
09/11 09:28:14 27176,2 msthread: decrementing reqctr
09/11 09:28:14 27176,2 msthread: calling Cpool_next_index_timeout_ext
09/11 09:28:14 27176,2 msthread: thread 1 selected
09/11 09:28:14 27176,2 msthread: calling Cthread_mutex_lock_ext
09/11 09:28:14 27176,2 msthread: reqctr = 0
09/11 09:28:14 27176,3 dpm_srv_proc_put: processing request 942 from
/O=GRID-FR/C=FR/O=CGG/OU=RDI/CN=Ahmed Beriache
09/11 09:28:14 27176,3 dpm_srv_proc_put: calling Cns_stat
09/11 09:28:14 27176,3 dpm_srv_proc_put: calling Cns_creatx
09/11 09:28:14 27176,3 dpm_srv_proc_put: calling dpm_selectfs
09/11 09:28:14 27176,3 dpm_selectfs: selected pool: pool1
09/11 09:28:14 27176,3 dpm_selectfs: selected file system:
se1.egee.fr.cgg.com:/storage/dpmdata
09/11 09:28:14 27176,3 dpm_selectfs:
se1.egee.fr.cgg.com:/storage/dpmdata reqsize=3386,
elemp->free=204973279544, pool_p->free=204973279544
09/11 09:28:14 27176,3 dpm_srv_proc_put: returns 0
09/11 09:28:18 27176,23 dpm_srv_rm: DP092 - rm request by
/O=GRID-FR/C=FR/O=CGG/OU=RDI/CN=Ahmed Beriache (101,103) from
se1.egee.fr.cgg.com
09/11 09:28:18 27176,23 dpm_srv_rm: DP098 - rm 0
srm://se1.egee.fr.cgg.com/dpm/egee.fr.cgg.com/home/egeode/generated/2006-09-11/file3f4ae692-c23d-4fe0-957f-bbb07d5258c1
09/11 09:28:18 27176,23 dpm_updfreespace:
se1.egee.fr.cgg.com:/storage/dpmdata incr=3386,
elemp->free=204973282930, pool_p->free=204973282930
09/11 09:28:18 27176,23 dpm_srv_rm: returns 0
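Since the put request itself returns 0 in this log, the 550 most likely comes from the gsiftp transfer that follows it. A few server-side checks, run as root on the SE; the credential path and the gsiftp log location are assumptions to adjust to this install:

```shell
# Identity the DPM daemons run under (the pool directories are dpmmgr:dpmmgr)
id dpmmgr

# The DPM daemons use a copy of the host credentials owned by dpmmgr;
# check that it exists and has the right ownership (path is an assumption)
ls -l /etc/grid-security/dpmmgr/

# Server-side view of the 550 from the gridftp daemon (path is an assumption)
tail -n 50 /var/log/dpm-gsiftp/gsiftp.log
```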
The other log files (dpnsdaemon, srmv1 and dpm-gsiftp) do not log
anything when I run the lcg-cr command.
The access rights on the DPM directories are shown here:
----------
[root@se1 storage]# dpm-qryconf
POOL pool1 DEFSIZE 200.00M GC_START_THRESH 0 GC_STOP_THRESH 0 DEFPINTIME
0 PUT_RETENP 86400 FSS_POLICY maxfreespace GC_POLICY lru RS_POLICY fifo
GID 0 S_TYPE -
CAPACITY 254.66G FREE 190.90G ( 75.0%)
se1.egee.fr.cgg.com /storage/dpmdata CAPACITY 254.66G FREE 190.90G (
75.0%)
----------
[root@se1 storage]# ls -ld /storage /storage/dpmdata/
/storage/dpmdata/egeode /storage/dpmdata/egeode/2006-09-11/
drwxrwx--- 5 dpmmgr dpmmgr 4096 Sep 7 16:28 /storage
drwxrwx--- 8 dpmmgr dpmmgr 4096 Sep 7 14:50 /storage/dpmdata/
drwxrwx--- 7 dpmmgr dpmmgr 4096 Sep 11 07:21
/storage/dpmdata/egeode
drwxrwx--- 2 dpmmgr dpmmgr 4096 Sep 11 07:21
/storage/dpmdata/egeode/2006-09-11/
-----------
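Given that every directory in the chain is dpmmgr:dpmmgr with mode 770, one quick sanity check is whether the dpmmgr account itself can write there. A sketch, run as root on the SE (the -s /bin/sh covers the case where dpmmgr has no login shell):

```shell
# Try to create and remove a test file in the directory the transfer failed on
su -s /bin/sh dpmmgr -c '
  touch /storage/dpmdata/egeode/2006-09-11/.write_test &&
  rm /storage/dpmdata/egeode/2006-09-11/.write_test &&
  echo write OK'
```

If this succeeds, the filesystem permissions are consistent and the question becomes which local identity the gsiftp daemon uses when it writes; if it fails, something below /storage (mount options, for instance) is refusing the write.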
Do you see any strange parameter?
Regards.
Ahmed
[log in to unmask] wrote:
> Hi Sophie,
>
> Before the Classic SE was transformed into a DPM SE, the LFC server of
> the EGEODE VO was installed on the SE and used by that VO. Since the
> DPNS server uses the same port as the LFC server, should I understand
> that I can switch off the LFC server and use the DPNS server instead?
> Or should I keep both the LFC server and the DPNS server, but not on
> the same node? And in both cases, what about the LFC-DLI service?
> Should it be switched off, or run alongside the dpnsdaemon?
>
> The problem we are having in the VO is that the data is not accessible
> from the grid.
>
> For information, I stopped the LFC server, and the lcg-cr commands now
> give the following errors:
> [aberiach@ui1 hello]$ lcg-cr --vo egeode -d se1.egee.fr.cgg.com -l
> lfn:/dpm/egee.fr.cgg.com/home/egeode/truc
> file://home/cgg/aberiach/dev/Kereon/runTest
> the server sent an error response: 550 550
> se1.egee.fr.cgg.com:/storage/dpmdata/egeode/2006-09-06: Permission denied.
>
> lcg_cr: Transport endpoint is not connected
>
> Regards
>
> Ahmed
>
>
> Sophie Lemaitre wrote:
>
>> Hello Ahmed,
>>
>> The LFC and the DPM should not be installed on the same machine.
>>
> The LFC and the DPM Name Server are based on the same code, and use the
> same MySQL database ("cns_db").
>
>
>> And although it might look fine to use the LFC instead of the DPM Name
>> Server, it is not.
>>
>> To browse the DPM namespace, you should use "dpns-ls", and not "lfc-ls".
>> The DPM Name Server daemon is probably not running on your machine, as
>> the port is already taken by the LFC daemon...
>>
>> Hope it helps.
>> Cheers, Sophie.
>>
>>
>>> Hi,
>>>
>>> After a migration from a Classic SE to a DPM SE, here at CGG-LCG2, we
>>> are experiencing very strange behaviour from this node.
>>> I installed this node with YAIM.
>>> Before installing the DPM SE, I did not remove the old packages such
>>> as LFC-mysql, and the lfcdaemon is running on the DPM SE. Is the DPM
>>> SE supposed to provide an LFC service?
>>> The data of the EGEODE VO is managed by this LFC server.
>>>
>>> Do you have any idea about the reasons for this problem?
>>> Many thanks in advance for your help.
>>>
>>> Cheers
>>>
>>> Ahmed
>>>
>>> The following information may be useful for understanding the problem.
>>>
>>>
>>> ./install_node my-site-info.def glite-SE_dpm_mysql
>>> ./configure_node my-site-info.def glite-SE_dpm_mysql
>>>
>>> the site-info.def is attached to this email
>>>
>>> After the install, the lcg-* commands worked for a while, but now
>>> they are failing.
>>> Here is the information about my configuration:
>>>
>>> On the SE:
>>> [root@se1 /]# dpm-qryconf
>>> POOL pool1 DEFSIZE 200.00M GC_START_THRESH 0 GC_STOP_THRESH 0 DEFPINTIME
>>> 0 PUT_RETENP 86400 FSS_POLICY maxfreespace GC_POLICY lru RS_POLICY fifo
>>> GID 0 S_TYPE -
>>> CAPACITY 254.66G FREE 190.91G ( 75.0%)
>>> se1.egee.fr.cgg.com /storage/dpmdata CAPACITY 254.66G FREE 190.91G (
>>> 75.0%)
>>>
>>> [root@se1 /]# netstat -ap | grep LISTEN
>>> tcp 0 0 *:32768
>>> *:* LISTEN 2337/rpc.statd
>>> tcp 0 0 *:32769
>>> *:* LISTEN -
>>> tcp 0 0 localhost.localdomain:32770
>>> *:* LISTEN 3217/xinetd
>>> tcp 0 0 *:2119
>>> *:* LISTEN 24401/globus-gateke
>>> tcp 0 0 *:rfio
>>> *:* LISTEN 3598/rfiod
>>> tcp 0 0 *:8649
>>> *:* LISTEN 3333/gmond
>>> tcp 0 0 *:mysql
>>> *:* LISTEN 2443/mysqld
>>> tcp 0 0 *:sunrpc
>>> *:* LISTEN 2318/portmap
>>> tcp 0 0 *:5010
>>> *:* LISTEN 13536/dpnsdaemon
>>> tcp 0 0 *:ssh
>>> *:* LISTEN 3203/sshd
>>> tcp 0 0 *:5015
>>> *:* LISTEN 15189/dpm
>>> tcp 0 0 se1.egee.fr.cgg.com:2135
>>> *:* LISTEN 3193/slapd
>>> tcp 0 0 localhost.localdomain:smtp
>>> *:* LISTEN 3360/sendmail: acce
>>> tcp 0 0 *:2811
>>> *:* LISTEN 14836/dpm.ftpd
>>> tcp 0 0 *:8443
>>> *:* LISTEN 13632/srmv1
>>> tcp 0 0 *:8444
>>> *:* LISTEN 2213/srmv2
>>> unix 2 [ ACC ] STREAM LISTENING 2582
>>> 2443/mysqld /var/lib/mysql/mysql.sock
>>>
>>> [root@se1 /]# ls -ld /storage /storage/dpmdata/ /storage/dpmdata/egeode/
>>> /storage/dpmdata/egeode/2006-09-05/
>>> drwxrwx--- 17 dpmmgr dpmmgr 4096 Sep 1 13:41 /storage
>>> drwxrwx--- 5 dpmmgr dpmmgr 4096 Sep 5 18:06
>>> /storage/dpmdata/
>>> drwxrwx--- 3 dpmmgr dpmmgr 4096 Sep 5 17:53
>>> /storage/dpmdata/egeode/
>>> drwxrwx--- 2 dpmmgr dpmmgr 4096 Sep 5 17:53
>>> /storage/dpmdata/egeode/2006-09-05/
>>>
>>>
>>> From the user interface:
>>>
>>> [aberiach@ui1 hello]$ lcg-cr --vo egeode -d se1.egee.fr.cgg.com -l
>>> lfn:/dpm/egee.fr.cgg.com/home/egeode/ahmed/test__2_
>>> file://home/cgg/aberiach/dev/Kereon/runTest
>>> the server sent an error response: 550 550
>>> se1.egee.fr.cgg.com:/storage/dpmdata/egeode/2006-09-05: Permission
>>> denied.
>>>
>>> lcg_cr: Transport endpoint is not connected
>>> [aberiach@ui1 hello]$ lfc-ls -ld /dpm/egee.fr.cgg.com/home/egeode/ahmed
>>> drwxrwxr-x 5 22062 2027 0 Sep 05 19:11
>>> /dpm/egee.fr.cgg.com/home/egeode/ahmed
>>>
>>>
>>>
>>>
>>>
>>>
>>> ------------------------------------------------------------------------
>>>
>>> # YAIM example site configuration file - adapt it to your site!
>>>
>>> MY_DOMAIN=egee.fr.cgg.com
>>>
>>> CE_HOST=ce1.$MY_DOMAIN
>>> # note: SE_HOST removed --> see CLASSIC_HOST, DCACHE_ADMIN, DPM_HOST
>>> below
>>> RB_HOST=rb1.$MY_DOMAIN
>>> PX_HOST=myproxy.cern.ch
>>> BDII_HOST=rb1.$MY_DOMAIN
>>> MON_HOST=mon1.$MY_DOMAIN
>>> FTS_HOST=se1.$MY_DOMAIN
>>> REG_HOST=lcgic01.gridpp.rl.ac.uk # there is only 1 central
>>> registry for now
>>>
>>> # Set this if you are building a VO-BOX
>>> #VOBOX_HOST=my-vobox.$MY_DOMAIN
>>> #VOBOX_PORT=1975
>>>
>>> # Set this to "yes" if your site provides an X509toKERBEROS
>>> # Authentication Server
>>> # Only for sites with Experiment Software Area under AFS
>>> #GSSKLOG=no
>>> #GSSKLOG_SERVER=my-gssklog.$MY_DOMAIN
>>>
>>> # LFC
>>> # Set these if you are installing an LFC
>>> LFC_HOST=se1.$MY_DOMAIN
>>> LFC_DB_PASSWORD=XXXXXXX
>>>
>>> # These are set to default to using the standard database on the same
>>> hosts
>>> # as the LFC daemon is on
>>> LFC_DB_HOST=$LFC_HOST
>>> LFC_DB=cns_db
>>>
>>> # All catalogues are local unless you add a VO to LFC_CENTRAL,
>>> # in which case that will be central
>>> LFC_CENTRAL="egeode"
>>>
>>> # If you want to limit the VOs your LFC serves, add the locals here
>>> LFC_LOCAL=""
>>>
>>> # If you use a DNS alias in front of your LFC, specify it here
>>> LFC_HOST_ALIAS=""
>>>
>>> # Change this if your torque server is not on the CE
>>> # it is ignored for other batch systems
>>> TORQUE_SERVER=$CE_HOST
>>>
>>> WN_LIST=/opt/glite/yaim/travail/wn-list.conf
>>> USERS_CONF=/opt/glite/yaim/travail/users.conf
>>> GROUPS_CONF=/opt/glite/yaim/travail/groups.conf
>>> FUNCTIONS_DIR=/opt/glite/yaim/functions
>>> YAIM_VERSION=3.0.0-3
>>>
>>> # Pick the apt-get sources appropriate to your OS - uncomment one line
>>> LCG_REPOSITORY="rpm http://glitesoft.cern.ch/EGEE/gLite/APT/R3.0/
>>> rhel30 externals Release3.0 updates"
>>>
>>> # This is the old one : CA_REPOSITORY="rpm
>>> http://grid-deployment.web.cern.ch/grid-deployment/gis
>>> apt/LCG_CA/en/i386 lcg"
>>> CA_REPOSITORY="rpm http://linuxsoft.cern.ch/ LCG-CAs/current production"
>>> #REPOSITORY_TYPE="apt" # or "yum"
>>> REPOSITORY_TYPE="apt"
>>>
>>> # For the relocatable (tarball) distribution, ensure
>>> # that INSTALL_ROOT is set correctly
>>> INSTALL_ROOT=/opt
>>>
>>> # You will probably want to change these too for the relocatable dist
>>> OUTPUT_STORAGE=/tmp/jobOutput
>>> JAVA_LOCATION="/usr/java/j2sdk1.4.2_08"
>>>
>>> # Set this to '/dev/null' or some other dir if you want
>>> # to turn off yaim installation of cron jobs
>>> CRON_DIR=/etc/cron.d
>>>
>>> GLOBUS_TCP_PORT_RANGE="20000 25000"
>>>
>>> MYSQL_PASSWORD=XXXX
>>>
>>> APEL_DB_PASSWORD="XXXX"
>>>
>>> #
>>> # ---> GRID_TRUSTED_BROKERS: put single quotes around each trusted DN
>>> !!! <---
>>> #
>>> GRID_TRUSTED_BROKERS="rb1.egee.fr.cgg.com"
>>> # The RB now uses the DLI by default; set VOs here which should use RLS
>>> RB_RLS="atlas cms"
>>>
>>> GRIDMAP_AUTH="'ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org'
>>> 'ldap://vo-server.in2p3.fr/ou=People,o=auvergrid,dc=lcg,dc=org'"
>>> #GRIDMAP_AUTH="ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org
>>> ldap://xyz"
>>>
>>> GRIDICE_SERVER_HOST=$MON_HOST
>>>
>>> [log in to unmask]
>>> SITE_NAME=CGG-LCG2
>>> SITE_LOC="Massy, France"
>>> SITE_LAT=48.72230406591397 # -90 to 90 degrees
>>> SITE_LONG=2.2701680660247803 # -180 to 180 degrees
>>> SITE_WEB="http://www.cgg.com"
>>> SITE_TIER="TIER 2"
>>> SITE_SUPPORT_SITE="my-bigger-site.cern.ch"
>>>
>>> JOB_MANAGER=pbs
>>> CE_BATCH_SYS=torque
>>> BATCH_BIN_DIR=/usr/bin
>>> BATCH_VERSION=torque-1.0.1b
>>> BATCH_LOG_DIR=/var/spool/pbs/server_priv/accounting
>>>
>>> CE_CPU_MODEL=PIII
>>> CE_CPU_VENDOR=intel
>>> CE_CPU_SPEED=1266
>>> CE_OS="Scientific Linux"
>>> CE_OS_RELEASE=3.0.5
>>> CE_OS_VERSION="SL"
>>> CE_MINPHYSMEM=2048
>>> CE_MINVIRTMEM=4096
>>> CE_SMPSIZE=2
>>> CE_SI00=611
>>> CE_SF00=422
>>> CE_OUTBOUNDIP=TRUE
>>> CE_INBOUNDIP=FALSE
>>> CE_RUNTIMEENV="
>>> LCG-2
>>> LCG-2_1_0
>>> LCG-2_1_1
>>> LCG-2_2_0
>>> LCG-2_3_0
>>> LCG-2_3_1
>>> LCG-2_4_0
>>> LCG-2_5_0
>>> LCG-2_6_0
>>> LCG-2_7_0
>>> GLITE-3_0_0
>>> R-GMA
>>> MPICH
>>> "
>>> # Set this if your WNs have a shared directory for temporary storage
>>> CE_DATADIR=""
>>>
>>> CLASSIC_HOST=se1.egee.fr.cgg.com
>>> CLASSIC_STORAGE_DIR="/storage"
>>>
>>> # dCache-specific settings
>>> # ignore if you are not running d-cache
>>>
>>> # Your dcache admin node
>>> #DCACHE_ADMIN=""
>>> #DCACHE_POOLS="my-pool-node1:/pool-path1 my-pool-node2:/pool-path2"
>>> # Optional
>>> # DCACHE_PORT_RANGE="20000,25000"
>>> # Set to "yes" only if YAIM shall reset the dCache configuration,
>>> # i.e. if you want YAIM to configure dCache - WARNING:
>>> # this may wipe out any dCache parameters previously configured!
>>> #RESET_DCACHE_CONFIGURATION=no
>>>
>>> #==== NEW variables ======
>>> # The name of the DPM head node
>>> DPM_HOST=se1.$MY_DOMAIN
>>>
>>> # The DPM pool name
>>> DPMPOOL=pool1
>>>
>>> # The filesystems/partitions parts of the pool
>>> #DPM_FILESYSTEMS="$DPM_HOST:/storage my-dpm-poolnode.$MY_DOMAIN:/path2"
>>> DPM_FILESYSTEMS="$DPM_HOST:/storage"
>>>
>>> # The database user
>>> DPM_DB_USER=dpmuser
>>>
>>> # The database user password
>>> DPM_DB_PASSWORD=XXXXX
>>>
>>> # The DPM database host
>>> DPM_DB_HOST=$DPM_HOST
>>>
>>> # Specifies the default amount of space reserved for a file
>>> DPMFSIZE=200M
>>>
>>> # Variable for the port range - Optional, default value is shown
>>> # RFIO_PORT_RANGE="20000 25000"
>>>
>>> # ?? unsure whether these are necessary
>>> DPMMGR=dpmmgr
>>> DPMDATA=/storage
>>>
>>> #======= Old variables NOT USED =======
>>> # SE_dpm-specific settings
>>> # Ignore if you are not running a DPM
>>> #DPMDATA="/storage"
>>> # The database user
>>> #DPMMGR=the-dpm-db-user
>>> # The database user password
>>> #DPMUSER_PWD=the-dpm-db-pwd
>>> #DPMFSIZE=200M
>>> # Set this if you are building a DPM yourself
>>> # and/or if you need a default DPM for the lcg-stdout-mon
>>> #DPM_HOST="" # my-dpm.$MY_DOMAIN
>>> DPM_HOST=se1.$MY_DOMAIN
>>> #DPMPOOL=the_dpm_pool_name
>>> DPMPOOL=pool1
>>> #DPMPOOL_NODES="poolnode1.$MY_DOMAIN:/path1 poolnode2.$MY_DOMAIN:/path2"
>>> # Optional
>>> # DPM_PORT_RANGE="20000,25000" ??
>>> #============ ================
>>>
>>>
>>>
>>> # This largely replaces CE_CLOSE_SE but it is a list of hostnames
>>> SE_LIST="$DPM_HOST" # $DPM_HOST $DCACHE_ADMIN"
>>> SE_ARCH="disk" # "disk, tape, multidisk, other"
>>>
>>> FTS_SERVER_URL="https://se1.${MY_DOMAIN}:8443/path/glite-data-transfer-fts"
>>>
>>> FTS_DB_TYPE=mysql
>>>
>>>
>>>
>>> BDII_HTTP_URL="http://grid-deployment.web.cern.ch/grid-deployment/gis/lcg2-bdii/dteam/lcg2-all-sites.conf"
>>>
>>> # Set this to use FCR
>>> BDII_FCR="http://goc.grid-support.ac.uk/gridsite/bdii/BDII/www/bdii-update.ldif"
>>>
>>> #BDII_REGIONS="CE SE RB PX VOBOX"
>>> BDII_REGIONS="CE SE RB" # list of the services provided by the site
>>> BDII_CE_URL="ldap://$CE_HOST:2135/mds-vo-name=local,o=grid"
>>> BDII_SE_URL="ldap://$DPM_HOST:2135/mds-vo-name=local,o=grid"
>>> BDII_RB_URL="ldap://$RB_HOST:2135/mds-vo-name=local,o=grid"
>>> #BDII_PX_URL="ldap://$PX_HOST:2135/mds-vo-name=local,o=grid"
>>> #BDII_LFC_URL="ldap://$LFC_HOST:2135/mds-vo-name=local,o=grid"
>>> #BDII_VOBOX_URL="ldap://$VOBOX_HOST:2135/mds-vo-name=local,o=grid"
>>>
>>> # Use this to set your contact string.
>>> # Ex.: BDII_BIND="mds-vo-name=mystorage,o=grid"
>>>
>>>
>>> # E2EMONIT specific settings
>>> # This specifies the location to download the host-specific
>>> # configuration file
>>> #E2EMONIT_LOCATION=grid-deployment.web.cern.ch/grid-deployment/e2emonit/production
>>>
>>>
>>> #
>>> # Replace this with the siteid supplied by the person setting up
>>> # the networking topology.
>>> #E2EMONIT_SITEID=my.siteid
>>>
>>>
>>> #VOS="atlas alice lhcb cms dteam biomed"
>>> VOS="dteam egeode esr fusion atlas alice cms lhcb biomed auvergrid
>>> ops" # add the other VOs your site supports
>>> QUEUES=${VOS}
>>>
>>> VO_SW_DIR=/voarea
>>>
>>> # set this if you want a scratch directory for jobs
>>> EDG_WL_SCRATCH="/scr"
>>>
>>> VO_ATLAS_SW_DIR=$VO_SW_DIR/atlas
>>> VO_ATLAS_DEFAULT_SE=$DPM_HOST
>>> VO_ATLAS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/atlas
>>> VO_ATLAS_QUEUES="atlas"
>>>
>>> VO_ATLAS_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=atlas,dc=eu-datagrid,dc=org
>>>
>>> VO_ATLAS_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=atlas,dc=eu-datagrid,dc=org
>>>
>>> VO_ATLAS_VOMS_POOL_PATH="/lcg1"
>>> VO_ATLAS_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/atlas?/atlas/'
>>> 'vomss://voms.cern.ch:8443/voms/atlas?/atlas/'"
>>> #VO_ATLAS_VOMS_EXTRA_MAPS="'Role=production production' 'usatlas
>>> .usatlas'"
>>> VO_ATLAS_VOMSES="'atlas lcg-voms.cern.ch 15001
>>> /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch atlas' 'atlas
>>> voms.cern.ch 15001 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch atlas'"
>>>
>>>
>>> VO_ALICE_SW_DIR=$VO_SW_DIR/alice
>>> VO_ALICE_DEFAULT_SE=$CLASSIC_HOST
>>> VO_ALICE_STORAGE_DIR=$CLASSIC_STORAGE_DIR/alice
>>> VO_ALICE_QUEUES="alice"
>>>
>>> VO_ALICE_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=alice,dc=eu-datagrid,dc=org
>>>
>>> VO_ALICE_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=alice,dc=eu-datagrid,dc=org
>>>
>>> VO_ALICE_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/alice?/alice/'
>>> 'vomss://voms.cern.ch:8443/voms/alice?/alice/'"
>>> VO_ALICE_VOMSES="'alice lcg-voms.cern.ch 15000
>>> /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch alice' 'alice
>>> voms.cern.ch 15000 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch alice'"
>>>
>>>
>>> VO_CMS_SW_DIR=$VO_SW_DIR/cms
>>> VO_CMS_DEFAULT_SE=$CLASSIC_HOST
>>> VO_CMS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/cms
>>> VO_CMS_QUEUES="cms"
>>>
>>> VO_CMS_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=cms,dc=eu-datagrid,dc=org
>>>
>>> VO_CMS_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=cms,dc=eu-datagrid,dc=org
>>>
>>> VO_CMS_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/cms?/cms/'
>>> 'vomss://voms.cern.ch:8443/voms/cms?/cms/'"
>>> VO_CMS_VOMSES="'cms lcg-voms.cern.ch 15002
>>> /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch cms' 'cms voms.cern.ch
>>> 15002 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch cms'"
>>>
>>>
>>> VO_LHCB_SW_DIR=$VO_SW_DIR/lhcb
>>> VO_LHCB_DEFAULT_SE=$CLASSIC_HOST
>>> VO_LHCB_STORAGE_DIR=$CLASSIC_STORAGE_DIR/lhcb
>>> VO_LHCB_QUEUES="lhcb"
>>>
>>> VO_LHCB_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=lhcb,dc=eu-datagrid,dc=org
>>>
>>> VO_LHCB_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=lhcb,dc=eu-datagrid,dc=org
>>>
>>> VO_LHCB_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/lhcb?/lhcb/'
>>> 'vomss://voms.cern.ch:8443/voms/lhcb?/lhcb/'"
>>> VO_LHCB_VOMS_EXTRA_MAPS="lcgprod lhcbprod"
>>> VO_LHCB_VOMSES="'lhcb lcg-voms.cern.ch 15003
>>> /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch lhcb' 'lhcb
>>> voms.cern.ch 15003 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch lhcb'"
>>>
>>>
>>> VO_DTEAM_SW_DIR=$VO_SW_DIR/dteam
>>> VO_DTEAM_DEFAULT_SE=$CLASSIC_HOST
>>> VO_DTEAM_STORAGE_DIR=$CLASSIC_STORAGE_DIR/dteam
>>> VO_DTEAM_QUEUES="dteam"
>>>
>>> VO_DTEAM_SGM=ldap://lcg-vo.cern.ch/ou=lcgadmin,o=dteam,dc=lcg,dc=org
>>> VO_DTEAM_USERS=ldap://lcg-vo.cern.ch/ou=lcg1,o=dteam,dc=lcg,dc=org
>>> VO_DTEAM_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/dteam?/dteam/'
>>> 'vomss://voms.cern.ch:8443/voms/dteam?/dteam/'"
>>> VO_DTEAM_VOMSES="'dteam lcg-voms.cern.ch 15004
>>> /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch dteam' 'dteam
>>> voms.cern.ch 15004 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch dteam'"
>>>
>>>
>>> VO_BIOMED_SW_DIR=$VO_SW_DIR/biomed
>>> VO_BIOMED_DEFAULT_SE=$CLASSIC_HOST
>>> VO_BIOMED_STORAGE_DIR=$CLASSIC_STORAGE_DIR/biomed
>>> VO_BIOMED_QUEUES="biomed"
>>>
>>> VO_BIOMED_USERS=ldap://vo-biome.in2p3.fr/ou=lcg1,o=biomedical,dc=lcg,dc=org
>>>
>>> VO_BIOMED_SGM=ldap://vo-biome.in2p3.fr/ou=lcgadmin,o=biomedical,dc=lcg,dc=org
>>>
>>> VO_BIOMED_VOMSES="biomed cclcgvomsli01.in2p3.fr 15000
>>> [log in to unmask]
>>> biomed"
>>>
>>>
>>> VO_EGEODE_SW_DIR=$VO_SW_DIR/egeode
>>> VO_EGEODE_DEFAULT_SE=$CLASSIC_HOST
>>> VO_EGEODE_STORAGE_DIR=$CLASSIC_STORAGE_DIR/egeode
>>> VO_EGEODE_QUEUES="egeode"
>>>
>>> VO_EGEODE_USERS=ldap://vo-egeode.in2p3.fr/ou=lcg1,o=egeode,dc=lcg,dc=org
>>> VO_EGEODE_SGM=ldap://vo-egeode.in2p3.fr/ou=lcgadmin,o=egeode,dc=lcg,dc=org
>>>
>>> VO_EGEODE_VOMS_SERVERS="vomss://voms-egeode.in2p3.fr:8443/voms/egeode?/egeode/"
>>>
>>> VO_EGEODE_VOMSES="'"egeode" "cclcgvomsli01.in2p3.fr" "15001"
>>> "[log in to unmask]"
>>> "egeode"'"
>>>
>>>
>>> VO_ESR_SW_DIR=$VO_SW_DIR/esr
>>> VO_ESR_DEFAULT_SE=$CLASSIC_HOST
>>> VO_ESR_STORAGE_DIR=$CLASSIC_STORAGE_DIR/esr
>>> VO_ESR_QUEUES="esr"
>>>
>>> VO_ESR_USERS=ldap://grid-vo.sara.nl/ou=eobs,o=esr,dc=eu-egee,dc=org
>>> VO_ESR_SGM=ldap://grid-vo.sara.nl/ou=lcgadmin,o=esr,dc=eu-egee,dc=org
>>> #VO_ESR_VOMS_SERVERS="vomss://kuiken.nikhef.nl:8443/voms/esr?/esr/"
>>> #VO_ESR_VOMSES="'esr kuiken.nikhef.nl 15006
>>> /O=dutchgrid/O=hosts/OU=nikhef.nl/CN=kuiken.nikhef.nl esr' 'esr
>>> mu4.matrix.sara.nl 30001
>>> /O=dutchgrid/O=hosts/OU=sara.nl/CN=mu4.sara.nl esr'"
>>> #IPSL site-def.conf
>>> #VO_ESR_VOMS_SERVERS="'vomss://mu4.matrix.sara.nl:8443/voms/esr?/esr'
>>> 'vomss://kuiken.nikhef.nl:8443/voms/esr?/esr'"
>>> #VO_ESR_VOMSES="'esr mu4.matrix.sara.nl 30001
>>> /O=dutchgrid/O=hosts/OU=sara.nl/CN=mu4.matrix.sara.nl esr' 'esr
>>> kuiken.nikhef.nl 15006
>>> /O=dutchgrid/O=hosts/OU=nikhef.nl/CN=kuiken.nikhef.nl esr'"
>>> #D.Weissenbach recommandation
>>> VO_ESR_VOMS_SERVERS="'vomss://mu4.matrix.sara.nl:8443/voms/esr?/esr'"
>>> VO_ESR_VOMSES="'esr mu4.matrix.sara.nl 30001
>>> /O=dutchgrid/O=hosts/OU=sara.nl/CN=mu4.matrix.sara.nl esr'"
>>>
>>> VO_FUSION_SW_DIR=$VO_SW_DIR/fusion
>>> VO_FUSION_DEFAULT_SE=$CLASSIC_HOST
>>> VO_FUSION_STORAGE_DIR=$CLASSIC_STORAGE_DIR/fusion
>>> VO_FUSION_QUEUES="fusion"
>>>
>>> VO_FUSION_SGM=ldap://swevo.ific.uv.es/ou=swadmin,o=fusion,dc=swe,dc=lcg,dc=org
>>>
>>> VO_FUSION_USERS=ldap://swevo.ific.uv.es/ou=lcg1,o=fusion,dc=swe,dc=lcg,dc=org
>>>
>>> VO_FUSION_VOMS_SERVERS="vomss://swevo.ific.uv.es:8443/voms/fusion?/fusion/"
>>>
>>> VO_FUSION_VOMSES="'fusion swevo.ific.uv.es 14003
>>> /C=ES/O=DATAGRID-ES/O=IFIC/CN=swevo.ific.uv.es fusion'"
>>>
>>> VO_AUVERGRID_SW_DIR=$VO_SW_DIR/auvergrid
>>> VO_AUVERGRID_DEFAULT_SE=$CLASSIC_HOST
>>> VO_AUVERGRID_SGM=ldap://vo-server.in2p3.fr/ou=lcgadmin,o=auvergrid,dc=lcg,dc=org
>>>
>>> VO_AUVERGRID_USERS=ldap://vo-server.in2p3.fr/ou=lcg1,o=auvergrid,dc=lcg,dc=org
>>>
>>> VO_AUVERGRID_STORAGE_DIR=$CLASSIC_STORAGE_DIR/auvergrid
>>> VO_AUVERGRID_QUEUES="auvergrid"
>>>
>>> VO_OPS_SW_DIR=$VO_SW_DIR/ops
>>> VO_OPS_DEFAULT_SE=$CLASSIC_HOST
>>> VO_OPS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/ops
>>> VO_OPS_QUEUES="ops"
>>>
>>> VO_OPS_VOMS_SERVERS="vomss://lcg-voms.cern.ch:8443/voms/ops?/ops/"
>>> VO_OPS_VOMSES="'ops lcg-voms.cern.ch 15009
>>> /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch ops'"
>>>
>>>
>>>
>>>
>>>
>
>