JiscMail: Email discussion lists for the UK Education and Research communities

GRIDPP-STORAGE Archives (GRIDPP-STORAGE@JISCMAIL.AC.UK), February 2008

Subject: dcache 1.8.0, srm version mismatch and other animals
From: John Bland <[log in to unmask]>
Reply-To: John Bland <[log in to unmask]>
Date: Thu, 14 Feb 2008 09:28:20 +0000
Content-Type: multipart/mixed
Parts/Attachments: text/plain (89 lines), dCacheSetup (788 lines), srm.batch (393 lines)


Dear storage experts,

We have recently replaced an ailing dcache 1.7 SE 
(hepgrid5.ph.liv.ac.uk) with a shiny new 1.8 setup, keeping the old 
pnfs/postgresql databases and pool contents.

The system is running Scientific Linux 4.4 x86_64, jdk 1.6.0-03 and has 
been configured using yaim 4. We have not turned on the space manager or 
any other specific srm2.2 settings. We have new users/groups but are 
using the old permissions and dcache.kpwd for now.
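
For quick reference, these are what we believe to be the relevant SRM lines in the attached dCacheSetup (quoted from the file below):

  srmVersion=version1
  kpwdFile=${ourHomeDir}/etc/dcache.kpwd
  useGPlazmaAuthorizationModule=true
  useGPlazmaAuthorizationCell=false
  # srmSpaceManagerEnabled=no    <- left at the commented-out default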

We are able to see the contents of the pnfs namespace and access files 
directly through pnfs, and tools like lcg-cp, srmcp and globus-url-copy 
can copy files to and from the pools. We can register files after 
copying them, but copying and registering in one step with lcg-cr 
reports no error, yet no file is copied or registered.
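
For concreteness, this is roughly what we are doing (the VO, file names 
and pnfs paths here are only illustrative):

  # direct copy to the SE works:
  srmcp file:////tmp/testfile \
    srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/dteam/testfile

  # copy-and-register returns cleanly, but the file never appears:
  lcg-cr --vo dteam -d hepgrid5.ph.liv.ac.uk \
    -l lfn:/grid/dteam/testfile file:/tmp/testfile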

Tools like srmls also fail, with errors such as:

Wed Feb 13 15:23:27 GMT 2008: In SRMClient ExpectedName: host
Wed Feb 13 15:23:28 GMT 2008: SRMClient(https,srm/managerv1,true)
SRMClientV2 : user credentials are: /C=UK/O=eScience/OU=Liverpool/L=CSD/CN=john bland
SRMClientV2 : connecting to srm at httpg://hepgrid5.ph.liv.ac.uk:8443/srm/managerv1
SRMClientV2 : srmLs, contacting service httpg://hepgrid5.ph.liv.ac.uk:8443/srm/managerv1
SRMClientV2 : put: try # 0 failed with error
SRMClientV2 : org.xml.sax.SAXException: Deserializing parameter 'srmLsRequest': could not find deserializer for type {http://srm.lbl.gov/StorageResourceManager}srmLsRequest
SRMClientV2 : put: try again
SRMClientV2 : sleeping for 10000 milliseconds before retrying

This happens whether we ask srmls for version 1 or version 2, and 
whether or not we set srmVersion=version1 in dCacheSetup.
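
For example (path illustrative, and assuming we have the client's 
version-selection flag right), both of these end in the same 
deserializer error:

  srmls -srm_protocol_version=1 srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/
  srmls -srm_protocol_version=2 srm://hepgrid5.ph.liv.ac.uk:8443/pnfs/ph.liv.ac.uk/data/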

The Ops and Steve Lloyd analysis tests show the same problem when 
accessing the SE:

httpg://hepgrid5.ph.liv.ac.uk:8443/srm/managerv1: CGSI-gSOAP: Could not 
open connection !

In catalina.out we see errors about the space manager:

02/13 16:15:21 Cell(SrmSpaceManager@srm-hepgrid5Domain) : Message arrived: from [>SRM-hepgrid5@srm-hepgrid5Domain]
02/13 16:15:21 Cell(SrmSpaceManager@srm-hepgrid5Domain) : SpaceException: SpaceManager is disabled in configuration
02/13 16:15:21 Cell(SrmSpaceManager@srm-hepgrid5Domain) : Sending reply (-2)=diskCacheV111.services.space.SpaceException: SpaceManager is disabled in configuration
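
As far as we can tell from the attached srm.batch, the SrmSpaceManager 
cell is created unconditionally and the on/off switch is only passed in 
as a parameter, which would explain why a disabled space manager still 
receives (and rejects) these messages:

  create diskCacheV111.services.space.ManagerV2 SrmSpaceManager \
         "default \
          ...
          -spaceManagerEnabled=${srmSpaceManagerEnabled} \
  "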

We are also showing 0 free space in gstat.

Gstat additionally lists our endpoint as

SRM     httpg://hepgrid5.ph.liv.ac.uk:8443/srm/managerv1

but isn't the "SRM" service type supposed to indicate srm2.2?
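
(We are reading the published endpoint out of the information system 
with a query along these lines; the BDII host here is illustrative:

  ldapsearch -x -H ldap://site-bdii.example.ac.uk:2170 -b o=grid \
    '(GlueServiceType=srm*)' GlueServiceType GlueServiceEndpoint
)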

It seems to us that

a) there is some confusion about which version of srm is being used
b) the space manager is interfering with operations even though it has 
not been configured to be used.

We are considering activating the space manager (and all the other 
painful reservations, links etc that go with it), but surely dcache 1.8 
should be able to run as srm1 only?
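
From the comments in the dCacheSetup template, switching it on would 
seem to mean at least the following, which is exactly the machinery we 
were hoping to avoid for now:

  srmSpaceManagerEnabled=yes
  srmImplicitSpaceManagerEnabled=yes
  # plus link groups in PoolManager.conf and a LinkGroupAuthorizationFile
  # listing the FQANs allowed to make reservations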

This is the first dcache instance we've fully installed from scratch so 
we may be missing something obvious here. I've attached our dCacheSetup 
and srm.batch files. If you need any other files let me know.

Regards,

John

-- 
Dr John Bland, Systems Administrator
Room 210, Oliver Lodge
Particle Physics Group, University of Liverpool
Mail: [log in to unmask]
Tel : 0151 794 3396



=== Attachment: dCacheSetup ===

#
# based on dCacheSetup.template $Revision: 1.33 $
#
# -----------------------------------------------------------------------
# config/dCacheSetup
# -----------------------------------------------------------------------
# This is the central configuration file for a dCache instance. In most
# cases it should be possible to keep it identical across the nodes of
# one dCache instance.
#
# This template contains all options that can possibly be used. Most
# may be left at the default value. If the option is commented out below
# is indicates the default value. If it is not commented out it is set
# to a reasonable value.
#
# To get a dCache instance running it suffices to change the options:
# - java                The java binary
# - serviceLocatorHost  The hostname of the admin node
# The other values should only be changed when advised to do so by the
# documentation.
#
# -----------------------------------------------------------------------
# Service Location
# -----------------------------------------------------------------------
# ---- Service Locater Host and Port
# Adjust this to point to one unique server for one and only one
# dCache instance (usually the admin node)
#
#serviceLocatorHost=SERVER
serviceLocatorHost=hepgrid5.ph.liv.ac.uk
serviceLocatorPort=11111
# -----------------------------------------------------------------------
# Components
# -----------------------------------------------------------------------
# To activate Replica Manager you need make changes in all 3 places:
# 1) etc/node_config on ALL ADMIN NODES in this dcache instance.
# 2) replicaSetup file on node where replica manager is runnig
# 3) define Resilient pool group(s) in PoolManager.conf
# ---- Will Replica Manager be started?
# Values: no, yes
# Default: no
#
# This has to be set to 'yes' on every node, if there is a replica
# manager in this dCache instance. Where the replica manager is started
# is controlled in 'etc/node_config'. If it is not started and this is
# set to 'yes' there will be error messages in log/dCacheDomain.log. If
# this is set to 'no' and a replica manager is started somewhere, it will
# not work properly.
#
#
#replicaManager=no
# ---- Which pool-group will be the group of resilient pools?
# Values: <pool-Group-Name>, a pool-group name existing in the PoolManager.conf
# Default: ResilientPools
#
# Only pools defined in pool group ResilientPools in config/PoolManager.conf
# will be managed by ReplicaManager. You shall edit config/PoolManager.conf
# to make replica manager work. To use another pool group defined
# in PoolManager.conf for replication, please specify group name by changing setting :
#
#resilientGroupName=ResilientPools
# Please scroll down "replica manager tuning" make this and other changes.
# -----------------------------------------------------------------------
# Java Configuration
# -----------------------------------------------------------------------
# ---- The binary of the Java VM
# Adjust to the correct location.
#
# shold point to <JDK>/bin/java
#java="/usr/bin/java"
java=/usr/java/default//bin/java
#
# ---- Options for the Java VM
# Do not change unless yoy know what you are doing.
# If the globus.tcp.port.range is changed, the
# variable 'clientDataPortRange' below has to be changed accordingly.
#
#java_options="-server -Xmx512m -XX:MaxDirectMemorySize=512m \
# -Dsun.net.inetaddr.ttl=1800 \
# -Dorg.globus.tcp.port.range=20000,25000 \
# -Djava.net.preferIPv4Stack=true \
# -Dorg.dcache.dcap.port=0 \
# -Dorg.dcache.net.tcp.portrange=33115:33145 \
# -Dlog4j.configuration=file:${ourHomeDir}/config/log4j.properties \
# "
java_options="-server -Xmx512m -XX:MaxDirectMemorySize=512m -Dsun.net.inetaddr.ttl=1800 -Dorg.globus.tcp.port.range=20000,25000 -Djava.net.preferIPv4Stack=true -Dorg.dcache.dcap.port=0 -Dorg.dcache.net.tcp.portrange=60000:62000 -Dlog4j.configuration=file:${ourHomeDir}/config/log4j.properties "
# Option for Kerberos5 authentication:
# -Djava.security.krb5.realm=FNAL.GOV \
# -Djava.security.krb5.kdc=krb-fnal-1.fnal.gov \
# Other options that might be useful:
# -Dlog4j.configuration=/opt/d-cache/config/log4j.properties \
# -Djavax.security.auth.useSubjectCredsOnly=false \
# -Djava.security.auth.login.config=/opt/d-cache/config/jgss.conf \
# -Xms400m \
# ---- Classpath
# Do not change unless yoy know what you are doing.
#
classesDir=${ourHomeDir}/classes
classpath=
# ---- Librarypath
# Do not change unless yoy know what you are doing.
# Currently not used. Might contain .so librarys for JNI
#
librarypath=${ourHomeDir}/lib
# -----------------------------------------------------------------------
# Filesystem Locations
# -----------------------------------------------------------------------
# ---- Location of the configuration files
# Do not change unless yoy know what you are doing.
#
config=${ourHomeDir}/config
# ---- Location of the ssh
# Do not change unless yoy know what you are doing.
#
keyBase=${ourHomeDir}/config
# ---- SRM/GridFTP authentication file
# Do not change unless yoy know what you are doing.
#
kpwdFile=${ourHomeDir}/etc/dcache.kpwd
# -----------------------------------------------------------------------
# pool tuning
# -----------------------------------------------------------------------
# Do not change unless yoy know what you are doing.
#
# poolIoQueue=
# checkRepository=true
# waitForRepositoryReady=false
#
# ---- Which meta data repository implementation to use.
# Values: org.dcache.pool.repository.meta.file.FileMetaDataRepository
#         org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
# Default: org.dcache.pool.repository.meta.file.FileMetaDataRepository
#
# Selects which meta data repository implementation to use. This is
# essentially a choice between storing meta data in a large number
# of small files in the control/ directory, or to use the embedded
# Berkeley database stored in the meta/ directory (both directories
# placed in the pool directory).
#
# metaDataRepository=org.dcache.pool.repository.meta.file.FileMetaDataRepository
#
# ---- Which meta data repository to import from.
# Values: org.dcache.pool.repository.meta.file.FileMetaDataRepository
#         org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
#         ""
# Default: ""
#
# Selects which meta data repository to import data from if the
# information is missing from the main repository. This is useful
# for converting from one repository implementation to another,
# without having to fetch all the information from the central PNFS
# manager.
#
# metaDataRepositoryImport=""
#
# -----------------------------------------------------------------------
# gPlazma tuning
# -----------------------------------------------------------------------
# Do not change unless yoy know what you are doing.
#
gplazmaPolicy=${ourHomeDir}/etc/dcachesrm-gplazma.policy
#
#gPlazmaNumberOfSimutaneousRequests 30
#gPlazmaRequestTimeout 30
#
#useGPlazmaAuthorizationModule=false
useGPlazmaAuthorizationModule=true
#useGPlazmaAuthorizationCell=true
useGPlazmaAuthorizationCell=false
#delegateToGPlazma=false
#
#
# -----------------------------------------------------------------------
# dcap tuning
# -----------------------------------------------------------------------
#
# gsidcapIoQueue=
# gsidcapIoQueueOverwrite=denied
# gsidcapMaxLogin=1500
# dcapIoQueue=
# dcapIoQueueOverwrite=denied
# dcapMaxLogin=1500
#
# -----------------------------------------------------------------------
# gsiftp tuning
# -----------------------------------------------------------------------
#
# ---- Seconds between GridFTP performance markers
# Set performanceMarkerPeriod to 180 to get performanceMarkers
# every 3 minutes.
# Set to 0 to disable performance markers.
# Default: 180
#
# performanceMarkerPeriod=180
#
# gsiftpPoolManagerTimeout=5400
# gsiftpPoolTimeout=600
# gsiftpPnfsTimeout=300
# gsiftpMaxRetries=80
# gsiftpMaxStreamsPerClient=10
# gsiftpDeleteOnConnectionClosed=true
# gsiftpMaxLogin=100
clientDataPortRange=20000:25000
# gsiftpIoQueue=
# gsiftpAdapterInternalInterface=
# remoteGsiftpIoQueue=
# FtpTLogDir=
#
# ---- May pools accept incomming connection for GridFTP transfers?
# Values: 'true', 'false'
# Default: 'false' for FTP doors, 'true' for pools
#
# If set to true, pools are allowed accept incomming connections for
# for FTP transfers. This only affects passive transfers. Only passive
# transfers using GFD.47 GETPUT (aka GridFTP 2) can be redirected to
# the pool. Other passive transfers will be channelled through a
# proxy component at the FTP door. If set to false, all passive
# transfers to through a proxy.
#
# This setting is interpreted by both FTP doors and pools, with
# different defaults. If set to true at the door, then the setting
# at the individual pool will be used.
#
# gsiftpAllowPassivePool=false
#
#
# -----------------------------------------------------------------------
# common to gsiftp and srm
# -----------------------------------------------------------------------
#
# srmSpaceManagerEnabled=no
#
# will have no effect if srmSpaceManagerEnabled is "no"
# srmImplicitSpaceManagerEnabled=yes
# overwriteEnabled=no
#
# ---- Image and style directories for the dCache-internal web server
# Do not change unless yoy know what you are doing.
#
images=${ourHomeDir}/docs/images
styles=${ourHomeDir}/docs/styles
# -----------------------------------------------------------------------
# Network Configuration
# -----------------------------------------------------------------------
# ---- Port Numbers for the various services
# Do not change unless yoy know what you are doing.
#
portBase=22
#dCapPort=${portBase}125
dCapPort=22125
ftpPort=${portBase}126
kerberosFtpPort=${portBase}127
#dCapGsiPort=${portBase}128
dCapGsiPort=22128
#gsiFtpPortNumber=2811
gsiFtpPortNumber=2811
srmPort=8443
xrootdPort=1094
# ---- GridFTP port range
# Do not change unless yoy know what you are doing.
#
clientDataPortRange=20000:25000
#clientDataPortRange=33115:33215
# ---- Port Numbers for the monitoring and administration
# Do not change unless yoy know what you are doing.
#
adminPort=${portBase}223
httpdPort=${portBase}88
sshPort=${portBase}124
# Telnet is only started if the telnetPort line is uncommented.
# Debug only.
#telnetPort=${portBase}123
#
# -----------------------------------------------------------------------
# Maintenance Module Setup
# -----------------------------------------------------------------------
#
# maintenanceLibPath=${ourHomeDir}/var/lib/dCache/maintenance
# maintenanceLibAutogeneratePaths=true
# maintenanceLogoutTime=18000
#
# -----------------------------------------------------------------------
# Database Configuration
# -----------------------------------------------------------------------
# The variable 'srmDbHost' is obsolete. For compatibility reasons,
# it is still used if it is set and if the following variables are
# not set
# The current setup assumes that one or more PostgreSQL servers are
# used by the various dCache components. Currently the database user
# 'srmdcache' with password 'srmdcache' is used by all components.
# They use the databases 'dcache', 'replicas', 'companion',
# 'billing'. However, these might be located on separate hosts.
# The best idea is to have the database server running on the same
# host as the dCache component which accesses it. Therefore, the
# default value for the following variables is 'localhost'.
# Uncomment and change these variables only if you have a reason to
# deviate from this scheme.
# (One possibility would be, to put the 'billing' DB on another host than
# the pnfs server DB and companion, but keep the httpDomain on the admin
# host.)
# ---- pnfs Companion Database Host
# Do not change unless yoy know what you are doing.
# - Database name: companion
#
#companionDatabaseHost=localhost
# ---- SRM Database Host
# Do not change unless yoy know what you are doing.
# - Database name: dcache
# - If srmDbHost is set and this is not set, srmDbHost is used.
#
#srmDatabaseHost=localhost
# ---- Space Manager Database Host
# Do not change unless yoy know what you are doing.
# - Database name: dcache
# - If srmDbHost is set and this is not set, srmDbHost is used.
#
#spaceManagerDatabaseHost=localhost
# ---- Pin Manager Database Host
# Do not change unless yoy know what you are doing.
# - Database name: dcache
# - If srmDbHost is set and this is not set, srmDbHost is used.
#
#pinManagerDatabaseHost=localhost
# ---- Replica Manager Database Host
# Do not change unless yoy know what you are doing.
# - Database name: replicas
#
# ----------------------------------------------------------------
# replica manager tuning
# ----------------------------------------------------------------
#
# replicaManagerDatabaseHost=localhost
# replicaDbName=replicas
# replicaDbUser=srmdcache
# replicaDbPassword=srmdcache
# replicaPasswordFile=""
# resilientGroupName=ResilientPools
# replicaPoolWatchDogPeriod=600
# replicaWaitDBUpdateTimeout=600
# replicaExcludedFilesExpirationTimeout=43200
# replicaDelayDBStartTimeout=1200
# replicaAdjustStartTimeout=1200
# replicaWaitReplicateTimeout=43200
# replicaWaitReduceTimeout=43200
# replicaDebug=false
# replicaMaxWorkers=6
# replicaMin=2
# replicaMax=3
#
# ---- Transfer / TCP Buffer Size
# Do not change unless yoy know what you are doing.
#
bufferSize=1048576
tcpBufferSize=1048576
# ---- Allow overwrite of existing files via GSIdCap
# allow=true, disallow=false
# truncate=false
# ---- pnfs Mount Point for (Grid-)FTP
# The current FTP door needs pnfs to be mounted for some file exist
# checks and for the directory listing. Therefore it needs to know
# where pnfs is mounted. In future the Ftp and dCap deamons will
# ask the pnfsManager cell for help and the directory listing is
# done by a DirListPool.
ftpBase=/pnfs/ftpBase
# -----------------------------------------------------------------------
# pnfs Manager Configuration
# -----------------------------------------------------------------------
#
# ---- pnfs Mount Point
# The mount point of pnfs on the admin node. Default: /pnfs/fs
#
pnfs=/pnfs/fs
# An older version of the pnfsManager actually autodetects the
# possible pnfs filesystems. The ${defaultPnfsServer} is choosen
# from the list and used as primary pnfs filesystem. (currently the
# others are ignored). The ${pnfs} variable can be used to override
# this mechanism.
#
# defaultPnfsServer=localhost
#
# -- leave this unless you are running an enstore HSM backend.
#
# pnfsInfoExtractor=diskCacheV111.util.OsmInfoExtractor
#
# -- depending on the power of your pnfs server host you may
# set this to up to 50.
#
# pnfsNumberOfThreads=4
#
# -- don't change this
#
# namespaceProvider=diskCacheV111.namespace.provider.BasicNameSpaceProviderFactory
#
# --- change this if you configured you postgres instance
# other then described in the Book.
#
# pnfsDbUser=srmdcache
# pnfsDbPassword=srmdcache
# pnfsPasswordFile=
#
# ---- Storage Method for cacheinfo: companion or pnfs
# Values: 'comanion' -- cacheinfo will be stored in separate DB
#         other or missing -- cacheinfo will be stored in pnfs
# Default: 'pnfs' -- for backward compatibility of old dCacheSetup files
#
# 'companion' is the default for new installs. Old installations have
# to use 'pnfs register' in every pool after switching from 'pnfs' to
# 'companion'. See the documentation.
#
cacheInfo=companion
#
#
#
# ---- Location of the trash directory
# The cleaner (which can only run on the pnfs server machine itself)
# autodetects the 'trash' directory. Non-empty 'trash' overwrites the
# autodetect.
#
#trash=
# The cleaner stores persistency information in subdirectories of
# the following directory.
#
# cleanerDB=/opt/pnfsdb/pnfs/trash/2
# cleanerRefresh=120
# cleanerRecover=240
# cleanerPoolTimeout=100
# cleanerProcessFilesPerRun=500
# cleanerArchive=none
#
# ---- Whether to enable the HSM cleaner
# Values: 'disabled', 'enabled'
# Default: 'disabled'
#
# The HSM cleaner scans the PNFS trash directory for deleted
# files stored on an HSM and sends a request to an attached
# pool to delete that file from the HSM.
#
# The HSM cleaner by default runs in the PNFS domain. To
# enable the cleaner, this setting needs to be set to enabled
# at the PNFS domain *and* at all pools that are supposed
# to delete files from an HSM.
#
# hsmCleaner=disabled
#
#
# ---- Location of trash directory for files on tape
# The HSM cleaner periodically scans this directory to
# detect deleted files.
#
# hsmCleanerTrash=/opt/pnfsdb/pnfs/1
#
# ---- Location of repository directory of the HSM cleaner
# The HSM cleaner uses this directory to store information
# about files in could not clean right away. The cleaner
# will reattempt to clean the files later.
#
# hsmCleanerRepository=/opt/pnfsdb/pnfs/1/repository
#
# ---- Interval between scans of the trash directory
# Specifies the time in seconds between two scans of the
# trash directory.
#
# hsmCleanerScan=90
#
# ---- Interval between retries
# Specifies the time in seconds between two attempts to
# clean files stored in the cleaner repository.
#
# hsmCleanerRecover=3600
#
# ---- Interval between flushing failures to the repository
# When the cleaner failes to clean a file, information about this
# file is added to the repository. This setting specifies the time
# in seconds between flushes to the repository. Until the
# information is kept in memory and in the trash directory.
#
# Each flush will create a new file. A lower value will cause the
# repository to be split into more files. A higher value will cause
# a higher memory usage and a larger number of files in the trash
# directory.
#
# hsmCleanerFlush=60
#
# ---- Max. length of in memory queue of files to clean
# When the trash directory is scanned, information about deleted
# files is queued in memory. This setting specifies the maximum
# length of this queue. When the queue length is reached, scanning
# is suspended until files have been cleaned or flushed to the
# repository.
#
# hsmCleanerCleanerQueue=10000
#
# ---- Timeout for pool communication
# Files are cleaned from an HSM by sending a message to a pool to
# do so. This specifies the timeout in seconds after which the
# operation is considered failed.
#
# hsmCleanerTimeout=120
#
# ---- Maximum concurrent requests to a single HSM
# Files are cleaned in batches. This specified the largest number
# of files to include in a batch per HSM.
#
# hsmCleanerRequest=100
#
# -----------------------------------------------------------------------
# Directory Pools
# -----------------------------------------------------------------------
#
#directoryPoolPnfsBase=/pnfs/fs
#
# -----------------------------------------------------------------------
# Srm Settings for experts
# -----------------------------------------------------------------------
#
srmVersion=version1
pnfsSrmPath=/
parallelStreams=10
#srmAuthzCacheLifetime=60
# srmGetLifeTime=14400000
# srmPutLifeTime=14400000
# srmCopyLifeTime=14400000
# srmTimeout=3600
# srmVacuum=true
# srmVacuumPeriod=21600
# srmProxiesDirectory=/tmp
# srmBufferSize=1048576
# srmTcpBufferSize=1048576
# srmDebug=true
# srmGetReqThreadQueueSize=10000
# srmGetReqThreadPoolSize=250
# srmGetReqMaxWaitingRequests=1000
# srmGetReqReadyQueueSize=10000
# srmGetReqMaxReadyRequests=2000
# srmGetReqMaxNumberOfRetries=10
# srmGetReqRetryTimeout=60000
# srmGetReqMaxNumOfRunningBySameOwner=100
# srmPutReqThreadQueueSize=10000
# srmPutReqThreadPoolSize=250
# srmPutReqMaxWaitingRequests=1000
# srmPutReqReadyQueueSize=10000
# srmPutReqMaxReadyRequests=1000
# srmPutReqMaxNumberOfRetries=10
# srmPutReqRetryTimeout=60000
# srmPutReqMaxNumOfRunningBySameOwner=100
# srmCopyReqThreadQueueSize=10000
# srmCopyReqThreadPoolSize=250
# srmCopyReqMaxWaitingRequests=1000
# srmCopyReqMaxNumberOfRetries=10
# srmCopyReqRetryTimeout=60000
# srmCopyReqMaxNumOfRunningBySameOwner=100
# srmPoolManagerTimeout=300
# srmPoolTimeout=300
# srmPnfsTimeout=300
# srmMoverTimeout=7200
# remoteCopyMaxTransfers=150
# remoteHttpMaxTransfers=30
# remoteGsiftpMaxTransfers=${srmCopyReqThreadPoolSize}
#
# srmDbName=dcache
# srmDbUser=srmdcache
# srmDbPassword=srmdcache
# srmDbLogEnabled=false
#
# This variable enables logging of the history
# of the srm request transitions in the database
# so that it can be examined though the srmWatch
# monitoring tool
# srmJdbcMonitoringLogEnabled=false
#
# turning this on turns off the latest changes that made service
# to honor the srm client's prococol list order for
# get/put commands
# this is needed temprorarily to support old srmcp clients
# srmIgnoreClientProtocolOrder=false
#
# -- Set this to /root/.pgpass in case
# you need to have better security.
#
# srmPasswordFile=
#
# -- Set this to true if you want overwrite to be enabled for
# srm v1.1 interface as well as for srm v2.2 interface when
# client does not specify desired overwrite mode.
# This option will be considered only if overwriteEnabled is
# set to yes (or true)
#
# srmOverwriteByDefault=false
# ----srmCustomGetHostByAddr enables using the BNL developed
# procedure for host by ip resolution if standard
# InetAddress method failed
# srmCustomGetHostByAddr=false
# ---- Allow automatic creation of directories via SRM
# allow=true, disallow=false
#
RecursiveDirectoryCreation=true
# ---- Allow delete via SRM
# allow=true, disallow=false
#
AdvisoryDelete=true
#
# pinManagerDatabaseHost=${srmDbHost}
# spaceManagerDatabaseHost=${srmDbHost}
#
# ----if space reservation request does not specify retention policy
# we will assign this retention policy by default
# SpaceManagerDefaultRetentionPolicy=CUSTODIAL
#
# ----if space reservation request does not specify access latency
# we will assign this access latency by default
# SpaceManagerDefaultAccessLatency=NEARLINE
#
# ----if the transfer request come from the door, and there was not prior
# space reservation made for this file, should we try to reserve
# space before satisfying the request
# SpaceManagerReserveSpaceForNonSRMTransfers=false
# LinkGroupAuthorizationFile contains the list of FQANs that are allowed to
# make space reservations in a given link group
#SpaceManagerLinkGroupAuthorizationFileName=""
#
# -----------------------------------------------------------------------
# Logging Configuration
# -----------------------------------------------------------------------
# ---- Directory for the Log Files
# Default: ${ourHomeDir}/log/ (if unset or empty)
#
logArea=/var/log
# ---- Restart Behaviour
# Values: 'new' -- logfiles will be moved to LOG.old at restart.
#         other or missing -- logfiles will be appended at restart.
# Default: 'keep'
#
#logMode=keep
# -----------------------------------------------------------------------
# Billing / Accounting
# -----------------------------------------------------------------------
# The directory the billing logs are written to
billingDb=${ourHomeDir}/billing
# If billing information should be written to a
# PostgreSQL database set to 'yes'.
# A database called 'billing' has to be created there.
#billingToDb=no
# The PostgreSQL database host:
#billingDatabaseHost=localhost
# EXPERT: First is default if billingToDb=no, second for billingToDb=yes
# Do NOT put passwords in setup file! They can be read by anyone logging into
# the dCache admin interface!
#billingDbParams=
#billingDbParams="\
# -useSQL \
# -jdbcUrl=jdbc:postgresql://${billingDatabaseHost}/billing \
# -jdbcDriver=org.postgresql.Driver \
# -dbUser=srmdcache \
# -dbPass=srmdcache \
# "
# -----------------------------------------------------------------------
# Info Provider
# -----------------------------------------------------------------------
#
# The following variables are used by the dynamic info provider, which
# is used for integration of dCache as a storage element in the LCG
# information system. All variables are used by the client side of the
# dynamic info provider which is called regularly by the LCG GIP (generic
# info provider). It consists of the two scripts
# jobs/infoDynamicSE-plugin-dcache
# jobs/infoDynamicSE-provider-dcache
#
# ---- Seconds between information retrievals
# Default: 180
#infoCollectorInterval=180
# ---- The static file used by the GIP
# This is also used by the plugin to determine the info it should
# output.
# Default: /opt/lcg/var/gip/ldif/lcg-info-static-se.ldif
#infoProviderStaticFile=/opt/lcg/var/gip/ldif/lcg-info-static-se.ldif
infoProviderStaticFile=/opt/glite/etc/gip/ldif/static-file-SE.ldif
# ---- The host where the InfoCollector cell runs
# Default: localhost
#infoCollectorHost=localhost
# ---- The port where the InfoCollector cell will listen
# This will be used by the InfoCollector cell as well as the dynamic
# info provider scripts
# Default: 22111
#infoCollectorPort=22111
# ------------------------------------------------------------------------
# Statistics module
# ------------------------------------------------------------------------
# - point to place where statistic will be store
statisticsLocation=${ourHomeDir}/statistics
# ------------------------------------------------------------------------
# xrootd section
# ------------------------------------------------------------------------
#
# forbids write access in general (to avoid unauthenticated writes). Overrides all other authorization settings.
# xrootdIsReadOnly=true
#
# allow write access only to selected paths (and its subdirectories). Overrides any remote authorization settings (like from the filecatalogue)
# xrootdAllowedPaths=/path1:/path2:/path3
#
# This will allow to enable authorization in the xrootd door by specifying a valid
# authorization plugin. There is only one plugin in the moment, implementing token based
# authorization controlled by a remote filecatalogue. This requires an additional parameter
# 'keystore', holding keypairs needed to do the authorization plugin. A template keystore
# can be found in ${ourHomeDir}/etc/keystore.temp.
# xrootdAuthzPlugin=org.dcache.xrootd.security.plugins.tokenauthz.TokenAuthorizationFactory
# xrootdAuthzKeystore=${ourHomeDir}/etc/keystore
# the mover queue on the pool where this request gets scheduled to
# xrootdIoQueue=

=== Attachment: srm.batch ===

#
# $Id: srm.batch,v 1.35 2007-10-27 02:45:18 timur Exp $
#
set printout default 2
set printout CellGlue none
onerror shutdown
#
check -strong setupFile
#
copy file:${setupFile} context:setupContext
#
# import the variables into our $context.
# don't overwrite already existing variables.
#
import context -c setupContext
#
# Make sure we got what we need.
#
check -strong serviceLocatorPort serviceLocatorHost
check -strong srmPort
#
create dmg.cells.services.RoutingManager RoutingMgr
#
# The LocationManager Part
#
create dmg.cells.services.LocationManager lm \
       "${serviceLocatorHost} ${serviceLocatorPort}"
#
#
# srm c e l l
#
#
# Default values (it not specified in dCacheSetup
#
onerror continue
set context -c srmVersion version1
set context -c srmDbHost localhost
set context -c srmDatabaseHost ${srmDbHost}
set context -c srmDbName dcache
set context -c srmDbUser srmdcache
set context -c srmDbPassword srmdcache
set context -c srmPasswordFile ""
set context -c useGPlazmaAuthorizationCell true
set context -c delegateToGPlazma false
set context -c useGPlazmaAuthorizationModule false
set context -c gplazmaPolicy ${ourHomeDir}/etc/dcachesrm-gplazma.policy
set context -c srmAuthzCacheLifetime 180
set context -c parallelStreams 10
set context -c srmTimeout 3600
set context -c srmVacuum true
set context -c srmVacuumPeriod 21600
set context -c srmBufferSize 1048576
set context -c srmTcpBufferSize 1048576
set context -c srmDebug true
set context -c srmGetReqThreadQueueSize 10000
set context -c srmGetReqThreadPoolSize 250
set context -c srmGetReqMaxWaitingRequests 1000
set context -c srmGetReqReadyQueueSize 10000
set context -c srmGetReqMaxReadyRequests 2000
set context -c srmGetReqMaxNumberOfRetries 10
set context -c srmGetReqRetryTimeout 60000
set context -c srmGetReqMaxNumOfRunningBySameOwner 100
set context -c srmPutReqThreadQueueSize 10000
set context -c srmPutReqThreadPoolSize 250
set context -c srmPutReqMaxWaitingRequests 1000
set context -c srmPutReqReadyQueueSize 10000
set context -c srmPutReqMaxReadyRequests 1000
set context -c srmPutReqMaxNumberOfRetries 10
set context -c srmPutReqRetryTimeout 60000
set context -c srmPutReqMaxNumOfRunningBySameOwner 100
set context -c srmCopyReqThreadQueueSize 10000
set context -c srmCopyReqThreadPoolSize 250
set context -c srmCopyReqMaxWaitingRequests 1000
set context -c srmCopyReqMaxNumberOfRetries 10
set context -c srmCopyReqRetryTimeout 60000
set context -c srmCopyReqMaxNumOfRunningBySameOwner 100
set context -c srmGetLifeTime 14400000
set context -c srmPutLifeTime 14400000
set context -c srmCopyLifeTime 14400000
set context -c srmVacuum true
set context -c srmVacuumPeriod 21600
set context -c pnfsSrmPath /
set context -c srmPoolManagerTimeout 300
set context -c srmPoolTimeout 300
set context -c srmPnfsTimeout 300
set context -c srmMoverTimeout 7200
set context -c remoteCopyMaxTransfers 150
set context -c remoteHttpMaxTransfers 30
set context -c remoteGsiftpMaxTransfers ${srmCopyReqThreadPoolSize}
set context -c remoteGsiftpIoQueue ""
set context -c srmDbLogEnabled false
set context -c RecursiveDirectoryCreation true
set context -c AdvisoryDelete true
set context -c kpwdFile ${ourHomeDir}/etc/dcache.kpwd
set context -c useLambdaStation false
set context -c lsMapFile ${ourHomeDir}/lambdastation/config/l_station_map.xml
set context -c lsScript ${ourHomeDir}/lambdastation/scripts/open_ls_ticket
set context -c overwriteEnabled false
set context -c srmOverwriteByDefault false
# this is the directory in which the delegated user credentials will be stored
# as files. We recommend set permissions to 700 on this dir
set context -c srmUserCredentialsDirectory ${ourHomeDir}/credentials
set context -c srmPnfsManager PnfsManager
set context -c srmPoolManager PoolManager
#login broker timeout in millis
set context -c srmLoginBrokerUpdatePeriod 3000
#pool manager timeout in seconds
set context -c srmPoolManagerTimeout 60
#number of doors in the random selection
#srm will order doors according to their load
#and select sertain number of the least loaded
#and then randomly choose which one to use
set context -c srmNumberOfDoorsInRandomSelection 5
#srm will hold srm requests and their history in database
# for srmNumberOfDaysInDatabaseHistory days
#after that they will be removed
set context -c srmNumberOfDaysInDatabaseHistory 10
# how frequently to remove old requests from the database
set context -c srmOldRequestRemovalPeriodSeconds 60
# srmJdbcMonitoringLogEnabled is set to true srm will store sufficient
# information about srm requests and their execution history in database
# for monitoring interface to work
# if it is set to false, only the absiolutely necessary information will be stored
set context -c srmJdbcMonitoringLogEnabled false
#jdbc updates are now queued and their execution is
#decoupled from the execution of the srm requests
# the srmJdbcExecutionThreadNum controls the number of the threads
#that will be dedicated to execution of these updates
# and the srmMaxNumberOfJdbcTasksInQueue controls the maximum
# length of the queue
set context -c srmJdbcExecutionThreadNum 5
set context -c srmMaxNumberOfJdbcTasksInQueue 1000
# if space reservation request does not specify retention policy
# we will assign this retention policy by default
set context -c SpaceManagerDefaultRetentionPolicy CUSTODIAL
# if space reservation request does not specify access latency
# we will assign this access latency by default
set context -c SpaceManagerDefaultAccessLatency NEARLINE
#if the transfer request come from the door, and there was not prior
# space reservation made for this file, should we try to reserve
# space before satisfying the request
set context -c SpaceManagerReserveSpaceForNonSRMTransfers false
#
# ---- Usage of Srm Space Manager
#
# If srmSpaceManagerEnabled is on we need to use SrmSpaceManager
# as both poolManager and poolProxy
#
onerror continue
set context -c srmSpaceManagerEnabled no
define env srmSpaceManagerOn.exe endExe
  set env -c remoteTransferManagerPoolProxy "SrmSpaceManager"
  set env -c remoteTransferManagerPoolManager "SrmSpaceManager"
  set context -c srmImplicitSpaceManagerEnabled true
  set context -c srmSpaceReservationStrict true
endExe
define env srmSpaceManagerOff.exe endExe
  srmSpaceReservation=false
  srmSpaceReservationStrict=false
endExe
eval ${srmSpaceManagerEnabled} yes ==
set env srmSpaceManagerIsOn ${rc}
exec env srmSpaceManagerOn.exe -run -ifok=srmSpaceManagerIsOn
eval ${srmSpaceManageriEnabled} yes !=
set env srmSpaceManagerIsOff ${rc}
exec env srmSpaceManagerOff.exe -run -ifok=srmSpaceManagerIsOff
set context -c remoteTransferManagerPoolProxy "PoolManager"
set context -c remoteTransferManagerPoolManager "PoolManager"
# srmCustomGetHostByAddr enables using the BNL developed procedure
# for host by ip resolution if standard InetAddress method failed
#
set context -c srmCustomGetHostByAddr false
# LinkGroupAuthorizationFile contains the list of FQANs that are allowed to
# make space reservations in a given link group
set context -c SpaceManagerLinkGroupAuthorizationFileName ""
#
# turning this on turns off the latest changes that made service
# to honor the srm client's prococol list order for
# get/put commands
# this is needed temprorarily to support old srmcp clients
set context -c srmIgnoreClientProtocolOrder false
#
#
onerror shutdown
#
### This would do the same and leave ${srmDbHost} unset
#onerror continue
#set context localhost.exe "set context -c srmDatabaseHost localhost"
#set context srmdbhost.exe "set context -c srmDatabaseHost ${srmDbHost}"
#check srmDbHost
#set context srmDbHostIsSet ${rc}
#exec context srmdbhost.exe -run -ifok=srmDbHostIsSet
#exec context localhost.exe -run -ifnotok=srmDbHostIsSet
#onerror shutdown
#
create diskCacheV111.util.ThreadManager ThreadManager \
       "default \
       -num-threads=200 \
       -thread-timeout=15 \
"
#
# RemoteHttpTransferManager
#
#
create diskCacheV111.doors.RemoteHttpTransferManager RemoteHttpTransferManager \
        "default \
        -export \
        -pool_manager_timeout=${srmPoolManagerTimeout} \
        -pool_timeout=${srmPoolTimeout} \
        -pnfs_timeout=${srmPnfsTimeout} \
        -mover_timeout=${srmMoverTimeout} \
        -max_transfers=${remoteHttpMaxTransfers} \
"
#
# RemoteGsiftpTransferManager
#
create diskCacheV111.services.GsiftpTransferManager RemoteGsiftpTransferManager \
        "default -export \
        -pool_manager_timeout=${srmPoolManagerTimeout} \
        -pool_timeout=${srmPoolTimeout} \
        -pnfs_timeout=${srmPnfsTimeout} \
        -mover_timeout=${srmMoverTimeout} \
        -max_transfers=${remoteGsiftpMaxTransfers} \
        -io-queue=${remoteGsiftpIoQueue} \
        -jdbcUrl=jdbc:postgresql://${srmDatabaseHost}/${srmDbName} \
        -jdbcDriver=org.postgresql.Driver \
        -dbUser=${srmDbUser} \
        -dbPass=${srmDbPassword} \
        -pgPass=${srmPasswordFile} \
        -doDbLog=${srmDbLogEnabled} \
        -poolManager=${remoteTransferManagerPoolManager} \
        -poolProxy=${remoteTransferManagerPoolProxy} \
"
#
# Copy Manager Cell
#
create diskCacheV111.doors.CopyManager CopyManager \
       "default -export \
        -pool_manager_timeout=${srmPoolManagerTimeout} \
        -pool_timeout=${srmPoolTimeout} \
        -pnfs_timeout=${srmPnfsTimeout} \
        -mover_timeout=${srmMoverTimeout} \
        -max_transfers=${remoteCopyMaxTransfers} \
        -poolManager=${remoteTransferManagerPoolManager} \
        -poolProxy=${remoteTransferManagerPoolProxy} \
"
#
# SRM Space Manager
#
create diskCacheV111.services.space.ManagerV2 SrmSpaceManager \
       "default \
        -export \
        -jdbcUrl=jdbc:postgresql://${srmDatabaseHost}/${srmDbName} \
        -jdbcDriver=org.postgresql.Driver \
        -dbUser=${srmDbUser} \
        -dbPass=${srmDbPassword} \
        -poolManager=PoolManager \
        -pnfsManager=PnfsManager \
        -defaultRetentionPolicy=${SpaceManagerDefaultRetentionPolicy} \
        -defaultAccessLatency=${SpaceManagerDefaultAccessLatency} \
        -reserveSpaceForNonSRMTransfers=${SpaceManagerReserveSpaceForNonSRMTransfers} \
        -deleteStoredFileRecord=false \
        -returnFlushedSpaceToReservation=true \
        -returnRemovedSpaceToReservation=true \
        -linkGroupAuthorizationFileName=${SpaceManagerLinkGroupAuthorizationFileName} \
        -spaceManagerEnabled=${srmSpaceManagerEnabled} \
"
create diskCacheV111.srm.dcache.Storage SRM-${thisHostname} \
       "-srmport=${srmPort} \
        -export \
        -srmversion=${srmVersion} \
        -timout=${srmTimeout} \
        -pnfsManager=${srmPnfsManager} \
        -pnfs-timeout=${srmPnfsTimeout} \
        -poolManager=${srmPoolManager} \
        -pool-manager-timeout=${srmPoolManagerTimeout} \
        -vacuum=${srmVacuum} \
        -vacuum-period=${srmVacuumPeriod} \
        -pnfs-srm-path=${pnfsSrmPath} \
        -gsissl=true \
        -reserve-space-implicitly=${srmImplicitSpaceManagerEnabled} \
        -space-reservation-strict=${srmSpaceReservationStrict} \
        -credentials-dir=${srmUserCredentialsDirectory} \
        -buffer_size=${srmBufferSize} \
        -tcp_buffer_size=${srmTcpBufferSize} \
        -parallel_streams=${parallelStreams} \
        -debug=${srmDebug} \
        -usekftp=false \
        -get-lifetime=${srmGetLifeTime} \
        -put-lifetime=${srmPutLifeTime} \
        -copy-lifetime=${srmCopyLifeTime} \
        -get-req-thread-queue-size=${srmGetReqThreadQueueSize} \
        -get-req-thread-pool-size=${srmGetReqThreadPoolSize} \
        -get-req-max-waiting-requests=${srmGetReqMaxWaitingRequests} \
        -get-req-ready-queue-size=${srmGetReqReadyQueueSize} \
        -get-req-max-ready-requests=${srmGetReqMaxReadyRequests} \
        -get-req-max-number-of-retries=${srmGetReqMaxNumberOfRetries} \
        -get-req-retry-timeout=${srmGetReqRetryTimeout} \
        -get-req-max-num-of-running-by-same-owner=${srmGetReqMaxNumOfRunningBySameOwner} \
        -put-req-thread-queue-size=${srmPutReqThreadQueueSize} \
        -put-req-thread-pool-size=${srmPutReqThreadPoolSize} \
        -put-req-max-waiting-requests=${srmPutReqMaxWaitingRequests} \
        -put-req-ready-queue-size=${srmPutReqReadyQueueSize} \
        -put-req-max-ready-requests=${srmPutReqMaxReadyRequests} \
        -put-req-max-number-of-retries=${srmPutReqMaxNumberOfRetries} \
        -put-req-retry-timeout=${srmPutReqRetryTimeout} \
        -put-req-max-num-of-running-by-same-owner=${srmPutReqMaxNumOfRunningBySameOwner} \
        -copy-req-thread-queue-size=${srmCopyReqThreadQueueSize} \
        -copy-req-thread-pool-size=${srmCopyReqThreadPoolSize} \
        -copy-req-max-waiting-requests=${srmCopyReqMaxWaitingRequests} \
        -copy-req-max-number-of-retries=${srmCopyReqMaxNumberOfRetries} \
        -copy-req-retry-timeout=${srmCopyReqRetryTimeout} \
        -copy-req-max-num-of-running-by-same-owner=${srmCopyReqMaxNumOfRunningBySameOwner} \
        -recursive-dirs-creation=${RecursiveDirectoryCreation} \
        -advisory-delete=${AdvisoryDelete} \
        -jdbcUrl=jdbc:postgresql://${srmDatabaseHost}/${srmDbName} \
        -jdbcDriver=org.postgresql.Driver \
        -dbUser=${srmDbUser} \
        -dbPass=${srmDbPassword} \
        -pgPass=${srmPasswordFile} \
        -jdbc-monitoring-log=${srmJdbcMonitoringLogEnabled} \
        -num-days-history=${srmNumberOfDaysInDatabaseHistory} \
        -old-request-remove-period-secs=${srmOldRequestRemovalPeriodSeconds} \
        -jdbc-execution-thread-num=${srmJdbcExecutionThreadNum} \
        -max-queued-jdbc-tasks-num=${srmMaxNumberOfJdbcTasksInQueue} \
        -use-gplazma-authorization-cell=${useGPlazmaAuthorizationCell} \
        -delegate-to-gplazma=${delegateToGPlazma} \
        -use-gplazma-authorization-module=${useGPlazmaAuthorizationModule} \
        -gplazma-authorization-module-policy=${gplazmaPolicy} \
        -srm-authz-cache-lifetime=${srmAuthzCacheLifetime} \
        -srmLoginBroker=srm-LoginBroker \
        -protocolFamily=SRM \
        -protocolVersion=1.1.1 \
        -kpwd-file=${kpwdFile} \
# -loginBroker=LoginBroker \
# -brokerUpdateTime=300 \
        -start_server=false \
        -use_lambdastation=${useLambdaStation} \
        -lambdastation_map_file=${lsMapFile} \
        -lambdastation_script=${lsScript} \
        -login-broker-update-period=${srmLoginBrokerUpdatePeriod} \
        -num-doors-in-rand-selection=${srmNumberOfDoorsInRandomSelection} \
        -overwrite=${overwriteEnabled} \
        -overwrite_by_default=${srmOverwriteByDefault} \
        -custom-get-host-by-addr=${srmCustomGetHostByAddr} \
        -ignore-client-protocol-order=${srmIgnoreClientProtocolOrder}\
       "
