Hi Sam,
Please find my site-info.def attached. I haven't modified any other YAIM
config files for the SE. Are there any other files you would like to have a look at?
Cheers,
Santanu
On 14/03/2012 10:21, Sam Skipsey wrote:
> So, now that Santanu's DPM is bleeding-edge updated, with everything
> new except the database, we should look at the anomalous biomed
> support again.
>
> If the DPM is still doing that, then there are relatively few places
> that can be the source of conflicting information; either YAIM
> scripts, or the databases (I assume that the install was onto fresh
> hardware, so config files weren't recreated).
>
> I believe you said you'd deleted all traces of biomed from your DPM
> database, Santanu? I don't suppose you could post a (suitably edited
> to remove passwords) copy of your site-info.def (and node/service
> specific config files for YAIM)?
>
> Sam
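The two candidate sources Sam lists above (YAIM configuration vs. the DPM databases) can each be checked for biomed remnants directly. A minimal sketch, assuming the common DPM defaults (database `cns_db`, table `Cns_groupinfo`, config at `/root/site-info.def`) — these names are not verified against this particular site:

```shell
# Sketch: look for stale biomed support in the two places it could
# survive an upgrade. Paths and DB/table names are assumptions.

# 1. YAIM config files: any leftover biomed VO variables?
grep -ri 'biomed' /root/site-info.def /opt/glite/yaim/etc 2>/dev/null \
    || echo "no biomed in YAIM config"

# 2. DPM databases: read-only query for biomed group rows
#    (run on the head node with the DPM_DB_USER credentials):
# mysql -u dpmmgr -p -e \
#   "SELECT gid, groupname FROM cns_db.Cns_groupinfo WHERE groupname LIKE 'biomed%';"
```

If the grep comes back empty and the group query returns no rows, the stale biomed information is coming from somewhere else (e.g. cached BDII/GIP output).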
#################################################################
# #
# site-info.def :: gLite-yaim-4.0.x (SL5) #
# =========================================================== #
# SITE CONFIGURATION :: Local node at HEP #
# Cavendish Laboratory, Cambridge #
# Last updated: 12/11/2010 #
# #
############################ o0()0o #############################
##########################
# YAIM related variables #
##########################
YAIM_VERSION=`rpm -qa | grep yaim | grep -e '.-[0-9]' | sed -e 's/[a-zA-Z].*-yaim-core-//'`
# Debug variable [Possible values: NONE, ABORT, ERROR, WARNING, INFO, DEBUG]
YAIM_LOGGING_LEVEL=WARNING
# Repository settings
REPOSITORY_TYPE="yum"
###################################
# General configuration variables #
###################################
CONF_DIR=/post-config/Script/source/config_dir
INSTALL_ROOT=/opt # INSTALL_ROOT should be set correctly
MY_DOMAIN=hep.phy.cam.ac.uk
# These variables tell YAIM where to find additional configuration files.
WN_LIST=${CONF_DIR}/wn-list.conf
WN_HOST=`hostname -s`
#USERS_CONF=${CONF_DIR}/users.conf
if [ "$WN_HOST" == "farm001" ]; then
    USERS_CONF=${CONF_DIR}/users.SGM.conf
else
    USERS_CONF=${CONF_DIR}/users.new2.conf
fi
GROUPS_CONF=${CONF_DIR}/groups.conf
FUNCTIONS_DIR=/opt/glite/yaim/functions
# Set this to "yes" if your site provides an X509toKERBEROS Authentication Server
# Only for sites with Experiment Software Area under AFS
GSSKLOG=no
#GSSKLOG_SERVER=my-gssklog.$MY_DOMAIN
OUTPUT_STORAGE=/tmp/jobOutput
# JAVA LOC: You will probably need to change these two too
#JAVA_VER=`rpm -qa | grep jdk-[0-9] | sed -e 's/\(.*\)-[0-9a-z].*/\1/g' -e 's/\(.*\)-/jdk/g'`
##JAVA_LOCATION="/usr/java/${JAVA_VER}"
JAVA_LOCATION="/usr/bin/java"
# Set this to '/dev/null' or some other dir if you want
# to turn off yaim installation of cron jobs
CRON_DIR=/etc/cron.d
# Set this to your preferred and firewall-allowed port range
GLOBUS_TCP_PORT_RANGE="20000,25000"
# MySQL password (make sure that this file cannot be read by any grid job)
MYSQL_PASSWORD="_xxxxx_"
# Site-wide settings
SITE_DESC="University of Cambridge, Cavendish Lab"
SITE_EMAIL=lcg-admin@$MY_DOMAIN
SITE_SUPPORT_EMAIL=$SITE_EMAIL
SITE_SECURITY_EMAIL=$SITE_EMAIL
SITE_NAME=UKI-SOUTHGRID-CAM-HEP
SITE_LOC="Cambridge, UK"
SITE_LAT=52.208
SITE_LONG=0.092782
SITE_WEB="http://serv01.hep.phy.cam.ac.uk:8880"
SITE_OTHER_GRID="EGI|WLCG|SOUTHGRID|GRIDPP"
SITE_OTHER_EGI_ROC="NGI_UK"
SITE_OTHER_EGEE_SERVICE="prod"
SITE_OTHER_WLCG_TIER=2
#SITE_HTTP_PROXY="myproxy.my.domain"
##############################
# Common node type variables #
##############################
CE_HOST=serv07.$MY_DOMAIN
CE2_HOST=serv03.$MY_DOMAIN
SE_HOST=serv02.$MY_DOMAIN # actually removed now --> see CLASSIC_HOST, DPM_HOST below
RB_HOST="'lcgrb01.gridpp.rl.ac.uk' 'lcgrb02.gridpp.rl.ac.uk'"
WMS_HOST=lcgwms01.gridpp.rl.ac.uk # my-wms.$MY_DOMAIN
PX_HOST=lcgrbp01.gridpp.rl.ac.uk
BDII_HOST=lcg-bdii.gridpp.ac.uk
#BDII_HOST=lcgbdii.gridpp.rl.ac.uk
MON_HOST=vserv01.$MY_DOMAIN
REG_HOST=lcgic01.gridpp.rl.ac.uk
#################################
# VOBOX configuration variables #
#################################
#VOBOX_HOST=$CE_HOST
#VOBOX_PORT=1975
##############################
# CE configuration variables #
##############################
# Architecture- and environment-specific settings
CE_CPU_MODEL=Xeon
CE_CPU_VENDOR=GenuineIntel
CE_CPU_SPEED=2660
CE_MINPHYSMEM=2048
CE_MINVIRTMEM=2048
CE_SMPSIZE=2
# New HEPSPEC06 values
#[static file: /opt/glite/etc/gip/ldif/static-file-Cluster.ldif]
CE_SI00=2180
CE_SF00=898
CE_PHYSCPU=55
CE_LOGCPU=$((CE_PHYSCPU * 4))
CE_CAPABILITY="CPUScalingReferenceSI00=2013"
CE_OTHERDESCR="Cores=4,Benchmark=8.72-HEP-SPEC06"
#CE_OS=`lsb_release -i | cut -f2`
#CE_OS_RELEASE=`lsb_release -r | cut -f2`
#CE_OS_VERSION=`lsb_release -c | cut -f2`
CE_OS=ScientificSL
CE_OS_RELEASE=5.4
CE_OS_VERSION="Boron"
CE_OS_ARCH=`uname -m`
CE_OUTBOUNDIP=TRUE
CE_INBOUNDIP=FALSE
CE_RUNTIMEENV="
LCG-2
LCG-2_1_0
LCG-2_1_1
LCG-2_2_0
LCG-2_3_0
LCG-2_3_1
LCG-2_4_0
LCG-2_5_0
LCG-2_6_0
LCG-2_7_0
GLITE-3_0_0
GLITE-3_1_0
R-GMA
LCG-CE
"
# Set this if your WNs have a shared directory for temporary storage
#CE_DATADIR=""
###################################
# FTS configuration variables #
###################################
FTS_HOST=lcgfts.gridpp.rl.ac.uk
FTS_SERVER_URL="https://lcgfts.gridpp.rl.ac.uk:8443/glite-data-transfer-fts/services/FileTransfer"
APEL_DB_USER="condorr"
APEL_DB_PASSWORD="_xxxxx_"
# GRID_TRUSTED_BROKERS: DNs of services (RBs) allowed to renew/retrieve
# credentials from/at the MyProxy server. Put single quotes around each trusted DN!
#GRID_TRUSTED_BROKERS="'broker one' 'broker two'"
# The RB now uses the DLI by default; set VOs here which should use RLS
#RB_RLS="" # "atlas cms"
# Space separated list of ldap servers in edg-mkgridmap.conf which authenticate users.
GRIDMAP_AUTH="ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org"
#GRIDICE_SERVER_HOST=$MON_HOST # usually run on the MON node
GRIDICE_SERVER_HOST=None
APEL_PUBLISH_USER_DN=no
GIN_BDII=yes
#########################################
# Jobmanager configuration variables #
#########################################
BATCH_SERVER=$CE_HOST
JOB_MANAGER=lcgcondor
CE_BATCH_SYS=condor
BATCH_BIN_DIR=/opt/condor/bin
BATCH_VERSION=`rpm -qa | grep ^condor | sed 's/\(.*\)-1.*/\1/g'`
BATCH_LOG_DIR=/home/condorr/spool
CONDOR_ARCH=INTEL
CONDOR_OS=`uname -s`
# Classic SE (do not ask why this is still needed)
CLASSIC_HOST=$SE_HOST
CLASSIC_STORAGE_DIR="/dpm/hep.phy.cam.ac.uk/home"
###############################
# DPM configuration variables #
###############################
# DPMDATA is now deprecated. Use an entry like $DPM_HOST:/filesystem in
# the DPM_FILESYSTEMS variable.
DPM_INFO_USER=dpminfo
DPM_INFO_PASS="_xxxxx_"
DPM_HOST=$SE_HOST # The name of the DPM head node
DPM_D01=disk01.$MY_DOMAIN # DPM pool nodes
DPM_D02=disk02.$MY_DOMAIN
DPM_D03=disk03.$MY_DOMAIN
DPM_D04=disk04.$MY_DOMAIN
DPM_D05=disk05.$MY_DOMAIN
DPM_D06=disk06.$MY_DOMAIN
DPM_D07=disk07.$MY_DOMAIN
DPM_D08=disk08.$MY_DOMAIN
DPM_D09=disk09.$MY_DOMAIN
DPM_D10=disk10.$MY_DOMAIN
DPM_D11=disk11.$MY_DOMAIN
DPM_D12=disk12.$MY_DOMAIN
DPMPOOL=dpmCam_2007 # The DPM pool name
# The filesystems/partitions parts of the pool
DPM_FILESYSTEMS="$DPM_D01:/dpm_data $DPM_D02:/dpm_data $DPM_D03:/dpm_data $DPM_D04:/dpm_data $DPM_D05:/dpm_data $DPM_D06:/dpm_data1 $DPM_D06:/dpm_data2 $DPM_D07:/dpm_data1 $DPM_D07:/dpm_data2 $DPM_D08:/dpm_data $DPM_D09:/dpm_data $DPM_D10:/dpm_data1 $DPM_D10:/dpm_data2 $DPM_D11:/dpm_data1 $DPM_D11:/dpm_data2 $DPM_D12:/dpm_data1 $DPM_D12:/dpm_data2"
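# Each host:/filesystem entry above corresponds to one filesystem
# registration in the pool. YAIM performs the registration itself; the
# mapping can be sketched with shell parameter expansion (dpm-addfs usage
# assumed from the standard DPM admin tools; echoed, not executed):

```shell
# Illustration only: how a host:/fs list splits into per-filesystem
# registrations (two sample entries from the list above).
DPMPOOL=dpmCam_2007
DPM_FILESYSTEMS="disk01.hep.phy.cam.ac.uk:/dpm_data disk06.hep.phy.cam.ac.uk:/dpm_data1"
for entry in $DPM_FILESYSTEMS; do
    host=${entry%%:*}   # part before the first ':'
    fs=${entry#*:}      # part after the first ':'
    echo dpm-addfs --poolname "$DPMPOOL" --server "$host" --fs "$fs"
done
```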
DPMFSIZE=200M # Default reserved space
DPMROOT="/dpm/$MY_DOMAIN/home" # Set by Santanu
# From now on we use DPM_DB_USER and DPM_DB_PASSWORD to make clear that
# this is a different role from that of the dpmmgr unix user, who owns the
# directories and runs the daemons.
DPM_DB_USER=dpmmgr # The database user
DPM_DB_PASSWORD="_xxxxx_" # The database user password
DPM_DB_HOST=$DPM_HOST # The DPM database host
DPMMGR_UID=505
DPMMGR_GID=505
# RFIO_PORT_RANGE="20000 25000" # Optional, default value
SE_LIST=$DPM_HOST
SE_ARCH="multidisk" # "disk, tape, multidisk, other"
SE_MOUNT_INFO_LIST=none
SE_GRIDFTP_LOGFILE=/var/log/dpm-gsiftp/dpm-gsiftp.log
################################
# BDII configuration variables #
################################
SITE_BDII_HOST=vserv02.hep.phy.cam.ac.uk
BDII_SITE_TIMEOUT=120
BDII_RESOURCE_TIMEOUT=`expr "$BDII_SITE_TIMEOUT" - 5`
GIP_RESPONSE=`expr "$BDII_RESOURCE_TIMEOUT" - 5`
GIP_FRESHNESS=60
GIP_CACHE_TTL=300
GIP_TIMEOUT=150
BDII_HTTP_URL="http://grid-deployment.web.cern.ch/grid-deployment/gis/lcg2-bdii/dteam/lcg2-all-sites.conf"
# Set this to use FCR
BDII_FCR="http://goc.grid-support.ac.uk/gridsite/bdii/BDII/www/bdii-update.ldif"
# List of the services provided by the site
BDII_REGIONS="CE SE"
BDII_CE_URL="ldap://$CE_HOST:2170/mds-vo-name=resource,o=grid"
BDII_CE2_URL="ldap://$CE2_HOST:2170/mds-vo-name=local,o=grid"
BDII_SE_URL="ldap://$CLASSIC_HOST:2170/mds-vo-name=resource,o=grid"
BDII_FTS_URL="ldap://$FTS_HOST:2170/mds-vo-name=resource,o=grid"
BDII_RB_URL="ldap://$RB_HOST:2135/mds-vo-name=local,o=grid"
BDII_PX_URL="ldap://$PX_HOST:2135/mds-vo-name=local,o=grid"
#BDII_VOBOX_URL="ldap://$VOBOX_HOST:2135/mds-vo-name=local,o=grid"
# Use this to set your contact string.
# Ex.: BDII_BIND="mds-vo-name=mystorage,o=grid"
# E2EMONIT specific settings
# This specifies the location to download the host specific configuration file
E2EMONIT_LOCATION=grid-deployment.web.cern.ch/grid-deployment/e2emonit/production
# Replace this with the siteid supplied by the person setting up the networking topology.
E2EMONIT_SITEID=my.siteid
##############################
# VO configuration variables #
##############################
# For help see: https://lcg-sft.cern.ch/yaimtool/yaimtool.py
#
# Space separated list of supported VOs
VOS="alice atlas calice camont cms dteam euindia gridpp lhcb ops vo.southgrid.ac.uk" #enmr.eu
QUEUES="alice atlas calice camont cms dteam euindia gridpp lhcb ops southgrid" #enmr
VO_SW_DIR=/experiment-software
EDG_WL_SCRATCH=/tmp # Scratch directory for jobs
ALICE_GROUP_ENABLE="alice"
ATLAS_GROUP_ENABLE="atlas"
CALICE_GROUP_ENABLE="calice"
CAMONT_GROUP_ENABLE="camont"
CMS_GROUP_ENABLE="cms"
DTEAM_GROUP_ENABLE="dteam"
#ENMR_GROUP_ENABLE="enmr.eu"
EUINDIA_GROUP_ENABLE="euindia"
GRIDPP_GROUP_ENABLE="gridpp"
LHCB_GROUP_ENABLE="lhcb"
OPS_GROUP_ENABLE="ops"
SOUTHGRID_GROUP_ENABLE="vo.southgrid.ac.uk"
SHORT_GROUP_ENABLE=$QUEUES