Dear Storage folks,
I have a few questions about pool-account SE environment settings.
The Bristol CREAM-CE's site-info.def (used by the CE and its WNs) has:
#SE_HOST="lcgse02.phy.bris.ac.uk"
# 1506 changing from StoRM to dm-lite!
SE_HOST="lcgse01.phy.bris.ac.uk"
SE_LIST=$SE_HOST
SE_MOUNT_INFO_LIST=none
SE_ARCH="multidisk" # "disk, tape, multidisk, other"
# this is used in vo.d/${vo} along with $SE_HOST
#SE_ACCESSPOINT=/gpfs_phys/storm
# 1506 changing from StoRM to dm-lite!
SE_ACCESSPOINT=/hdfs/dpm/phy.bris.ac.uk/home
and vo.d/ops (other VOs are similar) has:
SW_DIR=$VO_SW_DIR/ops
DEFAULT_SE=$SE_HOST
STORAGE_DIR="$SE_ACCESSPOINT/ops"
Having yaim'd an offline WN, su - opssgm shows
VO_OPS_DEFAULT_SE=lcgse01.phy.bris.ac.uk
set, but nothing in the environment saying where the default storage path
now lives (/hdfs/etc/etc).
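For reference, this is roughly how I checked (run as root on the WN; the exported value below just simulates what YAIM left in the pool account's environment, so the snippet is self-contained):

```shell
# On the real WN one would run:
#   su - opssgm -c 'env | grep ^VO_OPS_'
# Simulated here with the single value actually observed at Bristol:
export VO_OPS_DEFAULT_SE=lcgse01.phy.bris.ac.uk
env | grep '^VO_OPS_'
```

Only the DEFAULT_SE line shows up; nothing carries the STORAGE_DIR value from vo.d/ops.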
For a stock DPM I expect /dpm is not actually mounted on the WN.
But is there a pool-account environment variable that tells the job "your
storage path is: <whatever>"?
In both the StoRM and dm-lite cases it is mounted on the Bristol WNs (it
was /gpfs_phys, it is now /hdfs).
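As a sanity check, a trivial probe (the two paths are just the Bristol access points quoted above; other sites would substitute their own):

```shell
# Check whether the WN sees the SE access point as a locally mounted path.
for p in /gpfs_phys/storm /hdfs/dpm/phy.bris.ac.uk/home; do
  if [ -d "$p" ]; then
    echo "$p: present on this node"
  else
    echo "$p: not present on this node"
  fi
done
```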
Does opssgm, or any other VO, actually _DO_ anything with the
VO_$VO_DEFAULT_SE env var setting? I know CMS doesn't - they rely entirely
on their own CMS-site-specific config. Neither ilc nor lhcb use T2 storage
here, so they probably ignore it.
Also, I believe the gridppnagios ops storage tests come from outside: they
talk to the site SE directly, then assess and report success or failure.
They don't involve a job landing on a WN at the site and testing SE access
from there. Can someone confirm?