Hello again.
In /var/log/lcg-expiregridmapdir.log I see the following:
================
2005-10-12 05:00:00 Warning: no current users?!
2005-10-12 05:00:00 Found user: lhcb001
2005-10-12 05:00:00 Found unused certificate:
%2fc%3des%2fo%3ddatagrid%2des%2fo%3dusc%2dcesga%2fcn%3djuan%20jose%20saborido%20silva
2005-10-12 05:00:00 Found unused certificate:
%2fc%3duk%2fo%3descience%2fou%3dqueenmarylondon%2fl%3dphysics%2fcn%3ddave%20kant
2005-10-12 05:00:00 Found user: lhcb002
2005-10-12 05:00:00 Found unused certificate:
%2fc%3des%2fo%3ddatagrid%2des%2fo%3dub%2fcn%3dricardo%20graciani
2005-10-12 05:00:00 Warning: no current users?!
2005-10-12 05:00:00 Found user: lhcb001
2005-10-12 05:00:00 Found unused certificate:
%2fc%3des%2fo%3ddatagrid%2des%2fo%3dusc%2dcesga%2fcn%3djuan%20jose%20saborido%20silva
2005-10-12 05:00:00 Found unused certificate:
%2fc%3duk%2fo%3descience%2fou%3dqueenmarylondon%2fl%3dphysics%2fcn%3ddave%20kant
2005-10-12 05:00:00 Found user: lhcb002
2005-10-12 05:00:00 Found unused certificate:
%2fc%3des%2fo%3ddatagrid%2des%2fo%3dub%2fcn%3dricardo%20graciani
==================
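(Aside for anyone reading along: the certificate names in that log are just percent-encoded DNs. A quick way to decode one back into a readable subject is sketched below; it assumes bash, whose printf '%b' expands \xHH escapes.)

```shell
#!/bin/bash
# Turn a percent-encoded gridmapdir filename back into a certificate DN.
# Relies on bash's printf '%b' expanding \xHH escape sequences.
urldecode() {
    printf '%b' "$(printf '%s' "$1" | sed 's/%/\\x/g')"
}

urldecode '%2fc%3des%2fo%3ddatagrid%2des%2fo%3dub%2fcn%3dricardo%20graciani'
# prints: /c=es/o=datagrid-es/o=ub/cn=ricardo graciani
```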
As for the permissions in the gridmapdir, they look OK to me. Here is an
extract of
"ls -lirt" on that directory:
================
1017149 -rw-r--r-- 2 root root 0 Oct 11 19:13 dteamsgm
1017401 -rw-r--r-- 1 root root 0 Oct 11 19:13 dteam050
1017400 -rw-r--r-- 1 root root 0 Oct 11 19:13 dteam049
1017399 -rw-r--r-- 1 root root 0 Oct 11 19:13 dteam048
1017398 -rw-r--r-- 1 root root 0 Oct 11 19:13 dteam047
1017397 -rw-r--r-- 1 root root 0 Oct 11 19:13 dteam046
1017396 -rw-r--r-- 1 root root 0 Oct 11 19:13 dteam045
1017395 -rw-r--r-- 1 root root 0 Oct 11 19:13 dteam044
1017394 -rw-r--r-- 1 root root 0 Oct 11 19:13 dteam043
===================
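(Regarding the link-count check Stephen mentioned: a pool account file with link count 1 is free, and a leased one carries a second hard link named after the encoded DN, which is why dteamsgm above shows a count of 2. A throwaway sketch of the check, using a scratch directory rather than a real gridmapdir:)

```shell
#!/bin/bash
# Demonstrate the gridmapdir leasing convention with hard links.
# The directory here is a scratch example, not a real gridmapdir.
demo=$(mktemp -d)
touch "$demo/dteam001" "$demo/dteam002"          # two free pool accounts
ln "$demo/dteam001" "$demo/%2fcn%3dsome%20user"  # lease dteam001 to a DN

# Entries with link count > 1 are leased; this prints both names of
# the leased inode (the pool account and the encoded DN).
find "$demo" -type f -links +1
rm -rf "$demo"
```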
I also attach my site-info.def file, just in case you immediately see
something
wrong with the dteam settings.
Juan.
Burke, S (Stephen) wrote:
>LHC Computer Grid - Rollout
>
>
>>[mailto:[log in to unmask]] On Behalf Of Juan
>>J. Saborido Silva said:
>>I am afraid that's not the problem. Nobody from dteam is
>>getting in, and
>>I have 50 pool accounts for them, all free...
>>
>>
>
>If they're all free that's suspicious too! When were they last recycled?
>Did something go wrong with the recycling cron job? Do the permissions
>look right, and is the link count 1 (i.e. nothing else hard-linked to
>them)?
>
>Stephen
>
>
# YAIM example site configuration file - adapt it to your site!
MY_DOMAIN=usc.cesga.es
CE_HOST=lcg-ce.$MY_DOMAIN
SE_HOST=lcg-se.$MY_DOMAIN
RB_HOST=lxn1188.cern.ch
PX_HOST=adc0024.cern.ch
#BDII_HOST=lcgbdii02.ifae.es
BDII_HOST=lcg-bdii.cern.ch
MON_HOST=lcg-se.$MY_DOMAIN
REG_HOST=lcgic01.gridpp.rl.ac.uk # there is only 1 central registry for now
#RB_HOST=my-rb.$MY_DOMAIN
#PX_HOST=my-px.$MY_DOMAIN
#BDII_HOST=my-bdii.$MY_DOMAIN
# Set this if you are building a VO-BOX
#VOBOX_HOST=my-vobox.$MY_DOMAIN
#VOBOX_PORT=1975
#Set this to "yes" if your site provides an X509toKERBEROS Authentication Server
#Only for sites with Experiment Software Area under AFS
#GSSKLOG=no
#GSSKLOG_SERVER=my-gssklog.$MY_DOMAIN
# Set this if you are building an LFC server,
# not if you are just using clients
#LFC_HOST=my-lfc.$MY_DOMAIN
#LFC_DB_PASSWORD="lfc_password"
# LFC_TYPE is now ignored - all catalogues are local unless
# you add a VO to LFC_CENTRAL, in which case that will be 'central'
#LFC_TYPE="local" # or "central"
#LFC_CENTRAL=""
# Change this if your torque server is not on the CE
# it's ignored for other batch systems
#TORQUE_SERVER=$CE_HOST
WN_LIST=/opt/lcg/config/wn-list.conf
USERS_CONF=/opt/lcg/config/users.conf
FUNCTIONS_DIR=/opt/lcg/yaim/functions
# Pick the apt-get sources appropriate to your OS - uncomment one line
#LCG_REPOSITORY="'rpm http://linuxsoft.cern.ch LCG/apt/LCG-2_6_0/rh73/en/i386 lcg_rh73 lcg_rh73.updates' 'rpm http://grid-deployment.web.cern.ch/grid-deployment/gis apt/LCG-2_6_0/rh73/en/i386 lcg_rh73 lcg_rh73.updates'"
LCG_REPOSITORY="'rpm http://linuxsoft.cern.ch LCG/apt/LCG-2_6_0/sl3/en/i386 lcg_sl3 lcg_sl3.updates' 'rpm http://grid-deployment.web.cern.ch/grid-deployment/gis apt/LCG-2_6_0/sl3/en/i386 lcg_sl3 lcg_sl3.updates'"
CA_REPOSITORY="rpm http://grid-deployment.web.cern.ch/grid-deployment/gis apt/LCG_CA/en/i386 lcg"
# For the relocatable (tarball) distribution, ensure
# that INSTALL_ROOT is set correctly
INSTALL_ROOT=/opt
# You will probably want to change these too for the relocatable dist
OUTPUT_STORAGE=/tmp/jobOutput
JAVA_LOCATION="/usr/java/j2sdk1.4.2_08"
# Set this to '/dev/null' or some other dir if you want
# to turn off yaim installation of cron jobs
CRON_DIR=/etc/cron.d
GLOBUS_TCP_PORT_RANGE="20000 25000"
MYSQL_PASSWORD=lero
APEL_DB_PASSWORD="lero"
GRID_TRUSTED_BROKERS=" "
#GRID_TRUSTED_BROKERS="'broker one' 'broker two'"
GRIDMAP_AUTH="ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org ldap://swevo.ific.uv.es/ou=users,o=registrar,dc=swe,dc=lcg,dc=org"
#GRIDMAP_AUTH="ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org ldap://xxx"
GRIDICE_SERVER_HOST=$MON_HOST
[log in to unmask]
SITE_NAME=USC-LCG2
SITE_LOC="Santiago de Compostela, Spain"
SITE_LAT=42.8667
SITE_LONG=-8.5500
SITE_WEB="http://www.usc.es/gaes"
SITE_TIER="TIER 2"
SITE_SUPPORT_SITE="http://www.pic.es"
## SE_classic should be 'disk', SE_dpm or SE_dcache should be 'srm_v1'
SE_TYPE=disk
JOB_MANAGER=lcgpbs
CE_BATCH_SYS=torque
CE_CPU_MODEL=PIV
CE_CPU_VENDOR=intel
CE_CPU_SPEED=2500
CE_OS=ScientificLinux
CE_OS_RELEASE=3.0.5
CE_MINPHYSMEM=1024
CE_MINVIRTMEM=2048
CE_SMPSIZE=2
CE_SI00=570
CE_SF00=0
CE_OUTBOUNDIP=TRUE
CE_INBOUNDIP=TRUE
CE_RUNTIMEENV="LCG-2 LCG-2_1_0 LCG-2_1_1 LCG-2_2_0 LCG-2_3_0 LCG-2_3_1 LCG-2_4_0 LCG-2_5_0 LCG-2_6_0 R-GMA"
#CE_CLOSE_SE="SE1 SE2"
CE_CLOSE_SE="SE1"
CE_CLOSE_SE1_HOST=$SE_HOST
CE_CLOSE_SE1_ACCESS_POINT=/flatfiles/SE00
#CE_CLOSE_SE2_HOST=another-se.$MY_DOMAIN
#CE_CLOSE_SE2_ACCESS_POINT=/somewhere
# dCache-specific settings
# ignore if you are not running d-cache
#DCACHE_ADMIN="my-admin-node"
#DCACHE_POOLS="my-pool-node1:/pool-path1 my-pool-node2:/pool-path2"
# Optional
# DCACHE_PORT_RANGE="20000,25000"
# Set to "yes" only if YAIM shall reset the dCache configuration,
# i.e. if you want YAIM to configure dCache - WARNING:
# this may wipe out any dCache parameters previously configured!
#RESET_DCACHE_CONFIGURATION=no
# SE_dpm-specific settings
# Ignore if you are not running a DPM
#DPMDATA=$CE_CLOSE_SE1_ACCESS_POINT
# The database user
#DPMMGR=the-dpm-db-user
# The database user password
#DPMUSER_PWD=the-dpm-db-pwd
#DPMFSIZE=200M
# Set this if you are building a DPM yourself
# and/or if you need a default DPM for the lcg-stdout-mon
#DPM_HOST=$SE_HOST
#DPMPOOL=the_dpm_pool_name
# Optional
# DPM_PORT_RANGE="20000,25000" ??
#FTS_SERVER_URL="https://fts.${MY_DOMAIN}:8443/path/glite-data-transfer-fts"
BDII_HTTP_URL="http://grid-deployment.web.cern.ch/grid-deployment/gis/lcg2-bdii/dteam/lcg2-all-sites.conf"
#BDII_REGIONS="CE SE RB PX VOBOX" # list of the services provided by the site
BDII_REGIONS="CE SE" # list of the services provided by the site
BDII_CE_URL="ldap://$CE_HOST:2135/mds-vo-name=local,o=grid"
BDII_SE_URL="ldap://$SE_HOST:2135/mds-vo-name=local,o=grid"
#BDII_RB_URL="ldap://$RB_HOST:2135/mds-vo-name=local,o=grid"
#BDII_PX_URL="ldap://$PX_HOST:2135/mds-vo-name=local,o=grid"
#BDII_VOBOX_URL="ldap://$VOBOX_HOST:2135/mds-vo-name=local,o=grid"
#VOS="atlas alice lhcb cms dteam sixt na48"
VOS="lhcb dteam swetest"
QUEUES=${VOS}
VO_SW_DIR=/opt/expSoftware
# set this if you want a scratch directory for jobs
#EDG_WL_SCRATCH=""
#VO_ATLAS_SW_DIR=$VO_SW_DIR/atlas
#VO_ATLAS_DEFAULT_SE=$SE_HOST
#VO_ATLAS_STORAGE_DIR=$CE_CLOSE_SE1_ACCESS_POINT/atlas
#VO_ATLAS_QUEUES="atlas"
#VO_ATLAS_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=atlas,dc=eu-datagrid,dc=org
#VO_ATLAS_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=atlas,dc=eu-datagrid,dc=org
#VO_ATLAS_VOMS_SERVERS="vomss://lcg-voms.cern.ch:8443/voms/atlas?/atlas/"
#VO_ATLAS_VOMS_POOL_PATH="/lcg1"
#VO_ATLAS_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/atlas?/atlas/' 'vomss://voms.cern.ch:8443/edg-voms-admin/atlas?/atlas/'"
#VO_ATLAS_VOMS_EXTRA_MAPS="'Role=production production' 'usatlas .usatlas'"
#VO_ALICE_SW_DIR=$VO_SW_DIR/alice
#VO_ALICE_DEFAULT_SE=$SE_HOST
#VO_ALICE_STORAGE_DIR=$CE_CLOSE_SE1_ACCESS_POINT/alice
#VO_ALICE_QUEUES="alice"
#VO_ALICE_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=alice,dc=eu-datagrid,dc=org
#VO_ALICE_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=alice,dc=eu-datagrid,dc=org
#VO_ALICE_VOMS_SERVERS="vomss://lcg-voms.cern.ch:8443/voms/alice?/alice/"
#VO_CMS_SW_DIR=$VO_SW_DIR/cms
#VO_CMS_DEFAULT_SE=$SE_HOST
#VO_CMS_STORAGE_DIR=$CE_CLOSE_SE1_ACCESS_POINT/cms
#VO_CMS_QUEUES="cms"
#VO_CMS_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=cms,dc=eu-datagrid,dc=org
#VO_CMS_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=cms,dc=eu-datagrid,dc=org
#VO_CMS_VOMS_SERVERS="vomss://lcg-voms.cern.ch:8443/voms/cms?/cms/"
VO_LHCB_SW_DIR=$VO_SW_DIR/lhcb
VO_LHCB_DEFAULT_SE=$SE_HOST
VO_LHCB_STORAGE_DIR=$CE_CLOSE_SE1_ACCESS_POINT/lhcb
VO_LHCB_QUEUES="lhcb"
VO_LHCB_SGM=ldap://grid-vo.nikhef.nl/ou=lcgadmin,o=lhcb,dc=eu-datagrid,dc=org
VO_LHCB_USERS=ldap://grid-vo.nikhef.nl/ou=lcg1,o=lhcb,dc=eu-datagrid,dc=org
VO_LHCB_VOMS_SERVERS="vomss://lcg-voms.cern.ch:8443/voms/lhcb?/lhcb/"
VO_LHCB_VOMS_EXTRA_MAPS="lcgprod lhcbprod"
VO_DTEAM_SW_DIR=$VO_SW_DIR/dteam
VO_DTEAM_DEFAULT_SE=$SE_HOST
VO_DTEAM_STORAGE_DIR=$CE_CLOSE_SE1_ACCESS_POINT/dteam
VO_DTEAM_QUEUES="dteam"
VO_DTEAM_SGM=ldap://lcg-vo.cern.ch/ou=lcgadmin,o=dteam,dc=lcg,dc=org
VO_DTEAM_USERS=ldap://lcg-vo.cern.ch/ou=lcg1,o=dteam,dc=lcg,dc=org
VO_DTEAM_VOMS_SERVERS="vomss://lcg-voms.cern.ch:8443/voms/dteam?/dteam/"
VO_SWETEST_SW_DIR=$VO_SW_DIR/dteam
VO_SWETEST_DEFAULT_SE=$SE_HOST
VO_SWETEST_SGM=ldap://swevo.ific.uv.es/ou=swadmin,o=swetest,dc=swe,dc=lcg,dc=org
VO_SWETEST_USERS=ldap://swevo.ific.uv.es/ou=lcg1,o=swetest,dc=swe,dc=lcg,dc=org
VO_SWETEST_STORAGE_DIR=$CE_CLOSE_SE1_ACCESS_POINT/swetest
VO_SWETEST_QUEUES="swetest"
#VO_SWETEST_VOMS_SERVERS="vomss://lcg-voms.cern.ch:8443/swetest/swetest?/swetest/"
#VO_SIXT_SW_DIR=$VO_SW_DIR/sixt
#VO_SIXT_DEFAULT_SE=$SE_HOST
#VO_SIXT_STORAGE_DIR=$CE_CLOSE_SE1_ACCESS_POINT/sixt
#VO_SIXT_QUEUES="sixt"
#VO_SIXT_USERS=ldap://lcg-vo.cern.ch/ou=lcg1,o=sixt,dc=lcg,dc=org
#VO_SIXT_VOMS_SERVERS="vomss://lcg-voms.cern.ch:8443/voms/sixt?/sixt/"
#VO_NA48_SW_DIR=$VO_SW_DIR/na48
#VO_NA48_DEFAULT_SE=$SE_HOST
#VO_NA48_STORAGE_DIR=$CE_CLOSE_SE1_ACCESS_POINT/na48
#VO_NA48_QUEUES="na48"
#VO_NA48_VOMS_SERVERS="vomss://na48-voms.cern.ch:8443/voms/na48?/na48"
#VO_NA48_VOMS_EXTRA_MAPS="Role=admin na48adm"