Hi all,
I've succeeded in setting up a functional LCFG server for LCG2 and have
installed a few standard nodes. My problem currently is that I'm not clear on
the hierarchical arrangements for LCG, specifically UI_RESBROKER and
UI_LOGBOOK. These are currently set as follows, but they are just poorly
educated guesses and don't seem to work.
#define UI_RESBROKER lcgrb02.gridpp.rl.ac.uk
#define UI_LOGBOOK https://lcgrb02.gridpp.rl.ac.uk:7846
Any advice would be appreciated.
I've attached my entire site config file in case anyone is feeling
particularly helpful.
Matt.
--
================================================
Matthew Robinson
HEP System Manager & Dark Matter Group
Department of Physics and Astronomy
University of Sheffield
Hicks Building
Hounsfield Road
Sheffield
S3 7RH
[log in to unmask]
Office: 0114 222 3553
Mobile: 07968 873 775
Fax: 0114 272 8079
================================================
quidquid latine dictum sit, profundum videtur.
/* SOURCE TREE LOCATIONS --------------------------------------------------
------------------------------------------------------------------------- */
/* Define the root locations of the Globus, EDG, and LCG software trees. These
are used in many configuration files and for setting the ld.so.conf
   libraries. NOTE the trailing underscore in each define; it is used to avoid
   confusion with the GLOBUS_LOCATION and EDG_LOCATION tags in configuration
   files. */
#define GLOBUS_LOCATION_ /opt/globus
#define EDG_LOCATION_ /opt/edg
#define EDG_LOCATION_VAR_ EDG_LOCATION_/var
#define EDG_LOCATION_TMP_ /tmp
#define LCG_LOCATION_ /opt/lcg
#define LCG_LOCATION_VAR_ LCG_LOCATION_/var
#define LCG_LOCATION_TMP_ /tmp
/* COMMON GRID DEFINITIONS ------------------------------------------------
--------------------------------------------------------------------------- */
/* This is a space-separated list of the subject names of all of the grid's
trusted brokers. Each subject name MUST be enclosed in double quotes.
This is used by the MyProxy server to recognize from which brokers to
allow proxy renewal. */
#define GRID_TRUSTED_BROKERS "/O=Grid/O=UKHEP/OU=lcgrb02.gridpp.rl.ac.uk/CN=lcgrb02.gridpp.rl.ac.uk"
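/* If your grid has more than one trusted broker, list the subject names one
   after another, each in its own double quotes. A sketch with a second,
   hypothetical broker added:
#define GRID_TRUSTED_BROKERS "/O=Grid/O=UKHEP/OU=lcgrb02.gridpp.rl.ac.uk/CN=lcgrb02.gridpp.rl.ac.uk" "/O=Grid/O=UKHEP/OU=rb02.example.ac.uk/CN=rb02.example.ac.uk"
*/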
/* COMMON SITE DEFINITIONS ------------------------------------------------
--------------------------------------------------------------------------- */
/* CE AND SE HOST NAMES. These are defined here because they are used in
some of the site definitions. */
/* ComputingElement hostname */
#define CE_HOSTNAME ce.gridpp.shef.ac.uk
/* StorageElement hostname */
#define SE_HOSTNAME se01.gridpp.shef.ac.uk
#define SITE_LOCALDOMAIN gridpp.shef.ac.uk
#define SITE_MAILROOT [log in to unmask]
#define SITE_ALLOWED_NETWORKS 127.0.0.1, 143.167.250.
#define SITE_GATEWAYS 143.167.250.30
#define SITE_NAMESERVERS 143.167.250.1
#define SITE_NETMASK 255.255.255.224
#define SITE_NETWORK 143.167.250.0
#define SITE_BROADCAST 143.167.250.31
/* NTP server */
#define SITE_NTP_HOSTNAME ntp.gridpp.shef.ac.uk
/* The time zone */
#define SITE_TIMEZONE Europe/London
/* Site name */
#define SITE_NAME_ MySite-LCG2
/* Site EDG version */
#define SITE_EDG_VERSION LCG-2_0_0
/* Site installation date year month day time */
#define SITE_INSTALLATION_DATE_ 20042004110000Z
#define SITE_LCFG_SERVER lcfg.gridpp.shef.ac.uk
/* The following settings are used to enable NFS mount access between CE/SE/WNs
The definitions below are appropriate for sites with one SE and one CE.
Sites with multiple SEs/CEs will have to modify the individual configuration
files as well as the SITE_NFS_ACL_FROM_CE/SE definitions.
The actual NFS options lists are correct if the HOSTS values can be
expressed as a single wildcarded value. If this is not the case, then the
SITE_NFS_ACL_FROM_CE/SE definitions will have to be specified manually. */
#define SITE_CE_HOSTS CE_HOSTNAME
#define SITE_SE_HOSTS SE_HOSTNAME
#define SITE_WN_HOSTS wn*.gridpp.shef.ac.uk
#define SITE_NFS_ACL_FROM_CE SITE_SE_HOSTS(rw,no_root_squash) SITE_WN_HOSTS(rw,no_root_squash)
#define SITE_NFS_ACL_FROM_SE SITE_CE_HOSTS(rw,no_root_squash) SITE_WN_HOSTS(rw,no_root_squash)
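/* A sketch of manually specified ACLs for the case mentioned above, where the
   WN hosts cannot be expressed as a single wildcarded value (the two extra
   hostname patterns are hypothetical):
#define SITE_NFS_ACL_FROM_CE SITE_SE_HOSTS(rw,no_root_squash) wna*.gridpp.shef.ac.uk(rw,no_root_squash) wnb*.gridpp.shef.ac.uk(rw,no_root_squash)
#define SITE_NFS_ACL_FROM_SE SITE_CE_HOSTS(rw,no_root_squash) wna*.gridpp.shef.ac.uk(rw,no_root_squash) wnb*.gridpp.shef.ac.uk(rw,no_root_squash)
*/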
/* The default configuration of MDS is that there is a GRIS running on each
functional node (CE, SE). There is a single site-level GIIS running by
default on the CE. This site-level GIIS then registers to a Regional MDS.
The details are handled via the globuscfg configuration object. */
#define SITE_GIIS uk-Sheffield
#define SITE_GIIS_HOSTNAME CE_HOSTNAME
/* You MUST use the quotes and space for the value. If you use a comma, PBS job
submissions will fail! */
#define SITE_GLOBUS_TCP_RANGE "65000 65500"
/* COMMON DEFAULT VALUES --------------------------------------------------
--------------------------------------------------------------------------- */
/* This defines the default location for the host certificates. If
this is different for your site define the new value here. If you
need to change it for the CE or SE separately, see below. */
#define SITE_DEF_GRIDSEC_ROOT /etc/grid-security
#define SITE_DEF_HOST_CERT SITE_DEF_GRIDSEC_ROOT/hostcert.pem
#define SITE_DEF_HOST_KEY SITE_DEF_GRIDSEC_ROOT/hostkey.pem
#define SITE_DEF_GRIDMAP SITE_DEF_GRIDSEC_ROOT/grid-mapfile
#define SITE_DEF_GRIDMAPDIR SITE_DEF_GRIDSEC_ROOT/gridmapdir/
#define SITE_DEF_CERTDIR SITE_DEF_GRIDSEC_ROOT/certificates/
#define SITE_DEF_VOMSDIR SITE_DEF_GRIDSEC_ROOT/vomsdir/
#define SITE_DEF_WEBSERVICES_CERT SITE_DEF_GRIDSEC_ROOT/tomcatcert.pem
#define SITE_DEF_WEBSERVICES_KEY SITE_DEF_GRIDSEC_ROOT/tomcatkey.pem
/* DATA MGT PARAMETERS FOR SEVERAL NODE TYPES ----------------------------
--------------------------------------------------------------------------- */
/* These variables define which VOs your site supports. At least one
must be defined. For each accepted VO, LCFG will create 50 accounts and
add the VO to your mkgridmap.conf file.
*/
#define SE_VO_ALICE
#define SE_VO_ATLAS
#define SE_VO_CMS
#define SE_VO_LHCB
#define SE_VO_DTEAM
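/* To support only a subset of VOs, leave out the corresponding defines. For
   example (purely illustrative), a site supporting only atlas and dteam would
   keep just:
#define SE_VO_ATLAS
#define SE_VO_DTEAM
*/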
/* COMPUTING ELEMENT DEFINITIONS ------------------------------------------
--------------------------------------------------------------------------- */
/* ComputingElement hostname. CE_HOSTNAME is DEFINED ABOVE. */
/* Define the batch system used for the CE. Only ONE in std. config! */
#define CE_LRMS_PBS
/* #define CE_LRMS_LSF */
/* #define CE_LRMS_CONDOR */
/* If you want /home to be shared between CE and WN you must comment out the
following line */
#define NO_HOME_SHARE
/* Setup variables for different batch systems. */
/* With PBS the WP4 resource management may also be used. */
#ifdef CE_LRMS_PBS
#define CE_LRMS_NAME pbs
#define CE_USE_RTCS 0
/* Globus job manager type to use with PBS. */
#define CE_JM_TYPE lcgpbs
#endif
/* LSF requires manual configuration. */
#ifdef CE_LRMS_LSF
#define CE_LRMS_NAME lsf
/* #define CE_JM_TYPE lcglsf */
#endif
/* Condor requires manual configuration. */
#ifdef CE_LRMS_CONDOR
#define CE_LRMS_NAME condor
/* #define CE_JM_TYPE lcgcondor */
#endif
/* If the batch commands are not in the system default path, then you
must set the following with the necessary paths. */
/* #define CE_LRMS_PATHS /some/path /some/additional/path */
/* Full path of the certificate */
#define CE_CERT_PATH SITE_DEF_HOST_CERT
/* Full path of the secret key */
#define CE_SECKEY_PATH SITE_DEF_HOST_KEY
/* System administrator e-mail */
#define CE_SYSADMIN SITE_MAILROOT
/* Local queue names. This is a space-separated list of queue names. */
#define CE_QUEUES short long infinite
/* The following information refers to the WNs connected to your CE, not to the
CE itself. If you follow the procedure to publish WN info from your CE
described in the installation notes, these data will not be used.
*/
/* CPU model */
#define CE_IP_PROCESSMODEL Athlon
/* CPU vendor */
#define CE_IP_PROCESSVENDOR AMD
/* CPU speed */
#define CE_IP_PROCESSSPEED 1540
/* CE Operating System */
#define CE_IP_OS Redhat
/* CE Operating System Release */
#define CE_IP_OS_RELEASE 7.3
/* CE InformationProviders: MinPhysMemory */
#define CE_IP_MINPHYSMEM 512
/* CE InformationProviders: MinVirtMemory */
#define CE_IP_MINVIRTMEM 1024
/* CE InformationProviders: SMPSize (number of cpus in an SMP box) */
#define CE_IP_SMPSIZE 2
/* The following information also refers to your WNs but is defined statically
   here. Please make sure that this information is approximately correct for
   your WNs.
*/
/* CE InformationProviders: for some example SpecInt values see
   http://www.specbench.org/osg/cpu2000/results/cint2000.html */
/* CE InformationProviders: SpecInt 2000 */
#define CE_IP_SI00 620
/* CE InformationProviders: SpecFloat 2000 */
#define CE_IP_SF00 520
/* CE InformationProviders: OutboundIP */
#define CE_IP_OUTBOUNDIP TRUE
/* CE InformationProviders: InboundIP */
#define CE_IP_INBOUNDIP FALSE
/* CE InformationProviders: RunTimeEnvironment */
#define CE_IP_RUNTIMEENV LCG-2
/* Set this to 1 if you want to include the old MDS information providers. */
/* These are not necessary but may be included if desired. */
#define CE_USE_MDS_INFO 0
/* STORAGE ELEMENT DEFINITIONS --------------------------------------------
--------------------------------------------------------------------------- */
/* StorageElement hostname. SE_HOSTNAME is DEFINED ABOVE. */
/* Full path of the certificate */
#define SE_CERT_PATH SITE_DEF_HOST_CERT
/* Full path of the secret key */
#define SE_SECKEY_PATH SITE_DEF_HOST_KEY
/* This is the path on your SE of the storage area dedicated to LCG VOs.
* E.g.: at CERN it is set to /castor/cern.ch/grid. */
#define CE_CLOSE_SE_MOUNTPOINT /data
/* Within the area defined by CE_CLOSE_SE_MOUNTPOINT, each VO must have a
* dedicated area. The following variables define the sub-path to these areas.
* Changing the name of these areas from the given default is possible but
 * requires some extra adjustments: contact us if you need to do that.
* Note that each area must be group-read/writable to the corresponding VO.
*/
#define SA_PATH_ALICE alice
#define SA_PATH_ATLAS atlas
#define SA_PATH_CMS cms
#define SA_PATH_LHCB lhcb
#define SA_PATH_DTEAM dteam
/* To publish GlueSLArchitectureType - disk or mss (effect unknown) */
#define SE_MSS disk
/* For your storage to be visible from the grid you must have a GRIS which
* publishes information about it. If you installed your SE using the classical
* SE configuration file provided by LCG (StorageElementClassic-cfg.h) then a
* GRIS is automatically started on that node and you can leave the default
 * settings below. If your storage is based on an external MSS system which
* only provides a GridFTP interface (an example is the GridFTP-enabled CASTOR
* service at CERN), then you will have to install an external GRIS server
* using the provided PlainGRIS-cfg.h profile. In this case you must define
* SE_GRIS_HOSTNAME to point to this node and define the SE_DYNAMIC_CASTOR
* variable instead of SE_DYNAMIC_CLASSIC (Warning: defining both variables at
* the same time is WRONG!).
*
* Currently the only supported external MSS is the GridFTP-enabled CASTOR used
* at CERN.
*/
#define SE_GRIS_HOSTNAME SE_HOSTNAME
#define SE_DYNAMIC_CLASSIC
/* #define SE_DYNAMIC_CASTOR */
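/* A sketch of the alternative setup for a GridFTP-only MSS (the CASTOR case
   described above), assuming a separate GRIS node installed from
   PlainGRIS-cfg.h (the hostname is hypothetical); SE_DYNAMIC_CLASSIC must then
   NOT be defined:
#define SE_GRIS_HOSTNAME gris01.gridpp.shef.ac.uk
#define SE_DYNAMIC_CASTOR
*/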
/* An SE can support several access protocols. By defining the port assigned to
* a protocol you also enable its publication on the information system. */
#define SE_PROTOCOL_GRIDFTP_PORT 2811
#define SE_PROTOCOL_RFIO_PORT 5001
/* Set this to 1 if you want to include the old MDS information providers. */
/* These are not necessary but may be included if desired. */
#define SE_USE_MDS_INFO 0
/* WORKER NODE DEFINITIONS ------------------------------------------------
--------------------------------------------------------------------------- */
/* Area on the WN for the installation of the experiment software */
/* If on your WNs you have predefined shared areas where VO managers can
pre-install software, then these variables should point to these areas.
If you do not have shared areas and each job must install the software,
then these variables should contain a dot ( . )
*/
/* #define WN_AREA_ALICE /opt/exp_software/alice */
/* #define WN_AREA_ATLAS /opt/exp_software/atlas */
/* #define WN_AREA_CMS /opt/exp_software/cms */
/* #define WN_AREA_LHCB /opt/exp_software/lhcb */
/* #define WN_AREA_DTEAM /opt/exp_software/dteam */
#define WN_AREA_ALICE .
#define WN_AREA_ATLAS .
#define WN_AREA_CMS .
#define WN_AREA_LHCB .
#define WN_AREA_DTEAM .
/* USER INTERFACE DEFINITIONS ---------------------------------------------
--------------------------------------------------------------------------- */
/* Resource broker */
#define UI_RESBROKER lcgrb02.gridpp.rl.ac.uk
/* Logging and Bookkeeping URL */
#define UI_LOGBOOK https://lcgrb02.gridpp.rl.ac.uk:7846
/* My Proxy Server */
#define MY_PROXY_SERVER px.gridpp.shef.ac.uk
/* GRIDICE MONITORING -----------------------------------------------------
--------------------------------------------------------------------------- */
/* The CE, SE, and RB nodes collect GridICE monitoring data and send it to a
collector node which will then be queried by the LCG central GridICE
monitor service. The settings below enable the collection of data for your
   site using the SE node as the collector. If your site has multiple SEs, you
   should choose one of them as the GridICE collector and point
   GRISHOST_FOR_SERVICES to it. It is also recommended that you comment out the
   definition of GRIDICE_COLLECTOR here and copy it into the node
   configuration file of the data collector. In this case, the
"#define GRIDICE_COLLECTOR" line should go immediately after
"#include site-cfg.h".
*/
#define GRIDICE_COLLECTOR
#define GRISHOST_FOR_SERVICES SE_GRIS_HOSTNAME
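/* A sketch of the recommended alternative described above: comment out the
   GRIDICE_COLLECTOR define here and add it to the node configuration file of
   the chosen collector (here the SE; the node file name depends on your site),
   immediately after the line that includes site-cfg.h:
   #include site-cfg.h
   #define GRIDICE_COLLECTOR
*/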
/* COMMON USER ACCOUNTS ---------------------------------------------------
----------------------------------------------------------------------------*/
/* Check that the defaults do not conflict with any site-specific users
or groups. If you are using the pooled accounts, edit the group and
user IDs separately in the Users-cfg.h file.
*/
/* Account for running information system daemons. */
#define USER_UID_EDGINFO 999
#define USER_GID_EDGINFO 999
/* Account for running workload mgt daemons. */
#define USER_UID_EDGUSER 995
#define USER_GID_EDGUSER 995
/* Account for running MySQL daemon. */
#define USER_UID_MYSQL 998
#define USER_GID_MYSQL 998
/* Privileged account for RFIO. */
#define USER_UID_STAGE 994
#define USER_GID_STAGE 994