Hello all,

We are currently trying to set up a small LCG site (we have had some contact 
with Nikhef in the Netherlands). This minimal test site will initially be 
"closed" (no traffic with other sites). After some testing we will probably 
want to connect to the rest of LCG. At the moment I have some open (unclear) 
items to fill in in the site-info.def file; maybe someone has suggestions on 
the following points (I am looking for more background information on all 
the subjects mentioned):

VO
===
To which VO do we belong, or do we have to set one up ourselves, and if so: 
how do we do that?
Should we leave all the default VO entries in the def file even though we 
don't support them (yet)? I commented them all out. A guess at a minimal 
setup is sketched below.
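
If I understand the examples correctly, a closed test site could enable 
only the dteam VO. A minimal sketch, assuming the VO_<NAME>_* variable 
pattern from the example site-info.def (none of this is verified, and the 
paths are placeholders):

VOS="dteam"
VO_DTEAM_SW_DIR=$VO_SW_DIR/dteam   # assumed variable names, please correct
VO_DTEAM_DEFAULT_SE=$SE_HOST
VO_DTEAM_STORAGE_DIR=$CE_CLOSE_SE1_ACCESS_POINT/dteam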

WN SPECIFICATION
=================
We are trying to set up a minimal site, which also means we have a mixed 
environment of worker nodes:
2 Worker Nodes:  2.8GHz  1 CPU   1024MB
1 Worker Node :  1.8GHz  2 CPUs  512MB
What, for example, should be filled in for CE_CPU_SPEED, CE_MINPHYSMEM, 
etc.? My current guess is sketched below.
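
As far as I can tell, a mixed cluster should publish the values of its 
least capable node. A sketch of what I would fill in, assuming the CE_* 
variables from the example site-info.def and deriving the values from the 
hardware above (please correct me if the convention is different):

CE_CPU_SPEED=1800     # MHz of the slowest worker node (1.8GHz)
CE_MINPHYSMEM=512     # MB of RAM on the smallest node
CE_MINVIRTMEM=1024    # guessed; presumably physical memory plus swap
CE_SMPSIZE=2          # number of CPUs in the largest SMP node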

USERS
=====
users.conf
UID:LOGIN:GID:GROUP:VO:SGM_FLAG
Should we leave all the default users in this file? At this moment we 
don't want to give anyone access, just test the software internally within 
our network. Is it correct that only the users of the supported VOs are 
created anyway? An example entry is sketched below.
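
For reference, a hypothetical pair of entries in the format above (the 
UIDs, GIDs and account names are made up; as far as I understand, the last 
field is only set to "sgm" for the VO software manager account):

18118:dteam001:2688:dteam:dteam:
18120:dteamsgm:2688:dteam:dteam:sgm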

REG_HOST=lcgic01.gridpp.rl.ac.uk  # there is only 1 central registry for now

# Installation home of the re-locatable distribution.
# Does "re-locatable distribution" in this case refer only to the UI or WN?
# Is it not just the location of the LCG middleware?
INSTALL_ROOT=/opt

# YOUR GIIS: what should I fill in here? Do we make up our own name?
# (Examples found: HPTC-LCG2, ru-Novgorod-NSU-LCG2, ...)
SITE_NAME=my-site-name
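
Judging from the examples, the naming convention seems to be a unique 
site/institute identifier followed by the middleware release, so something 
like the following (entirely made up):

SITE_NAME=NL-MYINSTITUTE-LCG2  # hypothetical; presumably has to be unique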

INBOUND OUTBOUND CONNECTIVITY
==============================
# TRUE if outbound connectivity is enabled at your site, FALSE otherwise
# (WN specification)
CE_OUTBOUNDIP=TRUE
# TRUE if inbound connectivity is enabled at your site, FALSE otherwise
# (WN specification)
CE_INBOUNDIP=FALSE

# Mount point of the data partition on the SE
CE_CLOSE_SE1_ACCESS_POINT=/storage

DCACHE: NOT RECOMMENDED, IS IT?
===============================
# dCache-specific settings
# Hostname of the server node which manages the pool of nodes
DCACHE_ADMIN="my-admin-node"
# List of pool nodes managed by the DCACHE_ADMIN server node
DCACHE_POOLS="my-pool-node1:/pool-path1 my-pool-node2:/pool-path2"
# Optional
# DCACHE_PORT_RANGE="20000,25000"

DPM CONFIGURATION
==================
# SE_dpm-specific settings
DPM_POOLS="lxb1727:/dpmpool2"
# Optional
# DPM_PORT_RANGE="20000,25000" ??
DPMDATA=$CE_CLOSE_SE1_ACCESS_POINT
DPMDB_PWD=dpmu_Bar
DPMUSER_PWD=dpmu_Bar
DPMCONFIG=/home/dpmuser/DPMCONFIG
DPMLOGS=/var/tmp/DPMLogs
DPMFSIZE=200M
DPM_HOST=$SE_HOST
## Temp
DPMPOOL=dpmpool2

I hope someone can find the time to make these configuration settings 
clearer.
Kind Regards,

                         Serge