Hi John,
Do you have the same dCacheSetup on the SRM and door/pool nodes?
Can you create the graph of your latest PoolManager.conf and put it
somewhere where I can see it?
Unrelated to the discussion, but I would recommend using the gPlazma
cell (not module).
Cheers,
Greig
On 07/03/08 12:10, John Bland wrote:
> Hi Matt,
>
> Matt Doidge wrote:
>> Heya,
>>
>> One difference I see straight up is in our LinkGroups, we have:
>> psu set linkGroup custodialAllowed dteamops-disk-link-group false
>> psu set linkGroup outputAllowed dteamops-disk-link-group true
>> psu set linkGroup replicaAllowed dteamops-disk-link-group true
>> psu set linkGroup onlineAllowed dteamops-disk-link-group true
>> psu set linkGroup nearlineAllowed dteamops-disk-link-group false
>>
>> And I assume as your set up is similar to ours (all disk right?) so
>> you'd want the same options (all our linkGroups are set up
>> identically).
>
> Yes, we're all disk. I got our settings from one of the space token
> install guides, but looking at it ours is obviously wrong for our
> disk-only setup. I've set our links to be the same as above.
>
>> In order to get things to work we had to set the corresponding tags to
>> these options for each directory in pnfs, but I don't think that's
>> needed in 12p6 (I could be wrong, if I am I can get you a script that
>> recursively sets them in pnfs).
>
> Seen it, pinched it, ran it. Just to be sure they're set to
>
> [jbland@hepgrid11 dteam]$ cat ".(tag)(RetentionPolicy)"
> REPLICA
> [jbland@hepgrid11 dteam]$ cat ".(tag)(AccessLatency)"
> ONLINE
>
> [settings]
>
> All settings identical.
>
>> There was no single fix that got us running, you have to have
>> everything tweaked just right. I hope this info helps get you running.
>
> We're probably closer but it still won't accept writes without a space
> token. Log from catalina.out:
>
> 03/07 12:06:01 Cell(SrmSpaceManager@srm-hepgrid5Domain) : Message
> arrived: from [>SRM-hepgrid5@srm-
> hepgrid5Domain]
> 03/07 12:06:01 Cell(SrmSpaceManager@srm-hepgrid5Domain) :
> reserveSpace(group=dteam001, role=, sz=246,
> latency=ONLINE, policy=REPLICA, lifetime=14399728, description=null
> 03/07 12:06:01 Cell(SrmSpaceManager@srm-hepgrid5Domain) : policy is
> REPLICA, needHsmBackup is false
> 03/07 12:06:01 Cell(SrmSpaceManager@srm-hepgrid5Domain) :
> findLinkGroupIds(sizeInBytes=246, voGroup=d
> team001 voRole=, AccessLatency=ONLINE, RetentionPolicy=REPLICA)
> 03/07 12:06:01 Cell(SrmSpaceManager@srm-hepgrid5Domain) : executing
> statement: SELECT lg.*,lg.freespa
> ceinbytes-lg.reservedspaceinbytes as available
> from srmlinkgroup lg, srmlinkgroupvos lgvo
> where lg.id=lgvo.linkGroupId and lg.lastUpdateTime >= ? and
> lg.onlineallowed = 1 and lg.replicaa
> llowed = 1 and ( lgvo.VOGroup = ? OR lgvo.VOGroup = '*' ) and (
> lgvo.VORole = ? OR lgvo.VORole =
> '*' ) and lg.freespaceinbytes-lg.reservedspaceinbytes >= ? order by
> available desc ?=1204891558048
> ?=dteam001?=?=246
> 03/07 12:06:01 Cell(SrmSpaceManager@srm-hepgrid5Domain) : find LinkGroup
> Ids returned 0 linkGroups, n
> o linkGroups found
> 03/07 12:06:01 Cell(SrmSpaceManager@srm-hepgrid5Domain) :
> SpaceException: no space available
> 03/07 12:06:01 Cell(SrmSpaceManager@srm-hepgrid5Domain) : Sending reply
> (-2)=diskCacheV111.services.s
> pace.NoFreeSpaceException: no space available
> 03/07 12:06:01 Cell(SRM-hepgrid5@srm-hepgrid5Domain) :
> ReserveSpaceCompanion : Space Reservation Fai
> led message.getReturnCode () != 0
> =>diskCacheV111.services.space.NoFreeSpaceException: no space avai
> lable
> 03/07 12:06:01 Cell(SRM-hepgrid5@srm-hepgrid5Domain) : PutFileRequest #:
> PutReserveSpaceCallbacks err
> or: no space available
> 03/07 12:06:02 Cell(SRM-hepgrid5@srm-hepgrid5Domain) : PutRequestHandler
> error: copy request state ch
> anged to Failed
>
>
> Thanks,
>
> John
>
>> cheers,
>> Matt
>>
>>
>> On 07/03/2008, John Bland <[log in to unmask]> wrote:
>>> Hi,
>>>
>>> We have a working dcache 1.8.0-12p6 setup, with srm2.2. Without
>>> spacemanager enabled srm1/2 operations function correctly.
>>>
>>> When the space manager is enabled all attempts to write a file to
>>> the SE
>>> hepgrid5.ph.liv.ac.uk are met with "No space left on device" for
>>> srm1 or
>>> "no space available" for srm2 transfers.
>>>
>>> We have link groups defined for all supported VOs in our dcache eg
>>>
>>> psu create linkGroup dteam-linkGroup
>>> psu set linkGroup custodialAllowed dteam-linkGroup true
>>> psu set linkGroup replicaAllowed dteam-linkGroup true
>>> psu set linkGroup nearlineAllowed dteam-linkGroup true
>>> psu set linkGroup outputAllowed dteam-linkGroup true
>>> psu set linkGroup onlineAllowed dteam-linkGroup false
>>> psu addto linkGroup dteam-linkGroup dteam-link
>>>
>>> and I can create a reservation for that link-group with a space token
>>> DTEAM_TEST. If I copy a file using that space token it is accepted,
>>> without it is rejected as above.
>>>
>>> How do I enable space tokens without preventing all other operations
>>> without space tokens defined failing? Lancaster had a similar trouble
>>> but I can't see what I've done differently. I have link-groups for all
>>> vos, I wait until all link groups are visible in SrmSpaceManager and
>>> all
>>> services are running after a restart. Do we need some sort of default
>>> space token that all vos can write to without a specified token?
>>>
>>> Our dCacheSetup for when we have spacemanager enabled is attached.
>>>
>>> Regards,
>>>
>>> John
>>>
>>>
>>> --
>>> Dr John Bland, Systems Administrator
>>> Room 210, Oliver Lodge
>>> Particle Physics Group, University of Liverpool
>>> Mail: [log in to unmask]
>>> Tel : 0151 794 3396
>>>
>>> #
>>> # based on dCacheSetup.template $Revision: 1.33 $
>>> #
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # config/dCacheSetup
>>> #
>>> -----------------------------------------------------------------------
>>> # This is the central configuration file for a dCache instance. In
>>> most
>>> # cases it should be possible to keep it identical across the
>>> nodes of
>>> # one dCache instance.
>>> #
>>> # This template contains all options that can possibly be used. Most
>>> # may be left at the default value. If the option is commented out
>>> below
>>> # is indicates the default value. If it is not commented out it is
>>> set
>>> # to a reasonable value.
>>> #
>>> # To get a dCache instance running it suffices to change the options:
>>> # - java The java binary
>>> # - serviceLocatorHost The hostname of the admin node
>>> # The other values should only be changed when advised to do so by
>>> the
>>> # documentation.
>>> #
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Service Location
>>> #
>>> -----------------------------------------------------------------------
>>>
>>> # ---- Service Locater Host and Port
>>> # Adjust this to point to one unique server for one and only one
>>> # dCache instance (usually the admin node)
>>> #
>>> #serviceLocatorHost=SERVER
>>> serviceLocatorHost=hepgrid5.ph.liv.ac.uk
>>> serviceLocatorPort=11111
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Components
>>> #
>>> -----------------------------------------------------------------------
>>>
>>> # To activate Replica Manager you need make changes in all 3 places:
>>> # 1) etc/node_config on ALL ADMIN NODES in this dcache instance.
>>> # 2) replicaSetup file on node where replica manager is runnig
>>> # 3) define Resilient pool group(s) in PoolManager.conf
>>>
>>> # ---- Will Replica Manager be started?
>>> # Values: no, yes
>>> # Default: no
>>> #
>>> # This has to be set to 'yes' on every node, if there is a replica
>>> # manager in this dCache instance. Where the replica manager is
>>> started
>>> # is controlled in 'etc/node_config'. If it is not started and
>>> this is
>>> # set to 'yes' there will be error messages in
>>> log/dCacheDomain.log. If
>>> # this is set to 'no' and a replica manager is started somewhere,
>>> it will
>>> # not work properly.
>>> #
>>> #
>>> #replicaManager=no
>>>
>>> # ---- Which pool-group will be the group of resilient pools?
>>> # Values: <pool-Group-Name>, a pool-group name existing in the
>>> PoolManager.conf
>>> # Default: ResilientPools
>>> #
>>> # Only pools defined in pool group ResilientPools in
>>> config/PoolManager.conf
>>> # will be managed by ReplicaManager. You shall edit
>>> config/PoolManager.conf
>>> # to make replica manager work. To use another pool group defined
>>> # in PoolManager.conf for replication, please specify group name
>>> by changing setting :
>>> # #resilientGroupName=ResilientPools
>>> # Please scroll down "replica manager tuning" make this and other
>>> changes.
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Java Configuration
>>> #
>>> -----------------------------------------------------------------------
>>>
>>> # ---- The binary of the Java VM
>>> # Adjust to the correct location.
>>> #
>>> # shold point to <JDK>/bin/java
>>> #java="/usr/bin/java"
>>> java=/usr/java/default/bin/java
>>>
>>> #
>>> # ---- Options for the Java VM
>>> # Do not change unless yoy know what you are doing.
>>> # If the globus.tcp.port.range is changed, the
>>> # variable 'clientDataPortRange' below has to be changed accordingly.
>>> #
>>> #java_options="-server -Xmx512m -XX:MaxDirectMemorySize=512m \
>>> # -Dsun.net.inetaddr.ttl=1800 \
>>> # -Dorg.globus.tcp.port.range=20000,25000 \
>>> # -Djava.net.preferIPv4Stack=true \
>>> # -Dorg.dcache.dcap.port=0 \
>>> # -Dorg.dcache.net.tcp.portrange=33115:33145 \
>>> #
>>> -Dlog4j.configuration=file:${ourHomeDir}/config/log4j.properties \
>>> # "
>>> java_options="-server -Xmx512m -XX:MaxDirectMemorySize=512m
>>> -Dsun.net.inetaddr.ttl=1800 -Dorg.globus.tcp.port.range=20000,25000
>>> -Djava.net.preferIPv4Stack=true -Dorg.dcache.dcap.port=0
>>> -Dorg.dcache.net.tcp.portrange=60000:62000
>>> -Dlog4j.configuration=file:${ourHomeDir}/config/log4j.properties "
>>> # Option for Kerberos5 authentication:
>>> # -Djava.security.krb5.realm=FNAL.GOV \
>>> # -Djava.security.krb5.kdc=krb-fnal-1.fnal.gov \
>>> # Other options that might be useful:
>>> #
>>> -Dlog4j.configuration=/opt/d-cache/config/log4j.properties \
>>> # -Djavax.security.auth.useSubjectCredsOnly=false \
>>> #
>>> -Djava.security.auth.login.config=/opt/d-cache/config/jgss.conf \
>>> # -Xms400m \
>>>
>>> # ---- Classpath
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> classesDir=${ourHomeDir}/classes
>>> classpath=
>>>
>>> # ---- Librarypath
>>> # Do not change unless yoy know what you are doing.
>>> # Currently not used. Might contain .so librarys for JNI
>>> #
>>> librarypath=${ourHomeDir}/lib
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Filesystem Locations
>>> #
>>> -----------------------------------------------------------------------
>>>
>>> # ---- Location of the configuration files
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> config=${ourHomeDir}/config
>>>
>>> # ---- Location of the ssh
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> keyBase=${ourHomeDir}/config
>>>
>>> # ---- SRM/GridFTP authentication file
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> kpwdFile=${ourHomeDir}/etc/dcache.kpwd
>>>
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # pool tuning
>>> #
>>> -----------------------------------------------------------------------
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> # poolIoQueue=
>>> # checkRepository=true
>>> # waitForRepositoryReady=false
>>> #
>>> # ---- Which meta data repository implementation to use.
>>> # Values:
>>> org.dcache.pool.repository.meta.file.FileMetaDataRepository
>>> #
>>> org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
>>> # Default:
>>> org.dcache.pool.repository.meta.file.FileMetaDataRepository
>>> #
>>> # Selects which meta data repository implementation to use. This is
>>> # essentially a choice between storing meta data in a large number
>>> # of small files in the control/ directory, or to use the embedded
>>> # Berkeley database stored in the meta/ directory (both directories
>>> # placed in the pool directory).
>>> #
>>> #
>>> metaDataRepository=org.dcache.pool.repository.meta.file.FileMetaDataRepository
>>>
>>> #
>>> # ---- Which meta data repository to import from.
>>> # Values:
>>> org.dcache.pool.repository.meta.file.FileMetaDataRepository
>>> #
>>> org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
>>> # ""
>>> # Default: ""
>>> #
>>> # Selects which meta data repository to import data from if the
>>> # information is missing from the main repository. This is useful
>>> # for converting from one repository implementation to another,
>>> # without having to fetch all the information from the central PNFS
>>> # manager.
>>> #
>>> # metaDataRepositoryImport=""
>>> #
>>> #
>>> -----------------------------------------------------------------------
>>> # gPlazma tuning
>>> #
>>> -----------------------------------------------------------------------
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> gplazmaPolicy=${ourHomeDir}/etc/dcachesrm-gplazma.policy
>>> #
>>> #gPlazmaNumberOfSimutaneousRequests 30
>>> #gPlazmaRequestTimeout 30
>>> #
>>> #useGPlazmaAuthorizationModule=false
>>> useGPlazmaAuthorizationModule=true
>>> #useGPlazmaAuthorizationCell=true
>>> useGPlazmaAuthorizationCell=false
>>> #delegateToGPlazma=false
>>> #
>>> #
>>> #
>>> -----------------------------------------------------------------------
>>> # dcap tuning
>>> #
>>> -----------------------------------------------------------------------
>>> #
>>> # gsidcapIoQueue=
>>> # gsidcapIoQueueOverwrite=denied
>>> # gsidcapMaxLogin=1500
>>> # dcapIoQueue=
>>> # dcapIoQueueOverwrite=denied
>>> # dcapMaxLogin=1500
>>> #
>>> #
>>> -----------------------------------------------------------------------
>>> # gsiftp tuning
>>> #
>>> -----------------------------------------------------------------------
>>> #
>>> # ---- Seconds between GridFTP performance markers
>>> # Set performanceMarkerPeriod to 180 to get performanceMarkers
>>> # every 3 minutes.
>>> # Set to 0 to disable performance markers.
>>> # Default: 180
>>> #
>>> # performanceMarkerPeriod=180
>>> #
>>> # gsiftpPoolManagerTimeout=5400
>>> # gsiftpPoolTimeout=600
>>> # gsiftpPnfsTimeout=300
>>> # gsiftpMaxRetries=80
>>> # gsiftpMaxStreamsPerClient=10
>>> # gsiftpDeleteOnConnectionClosed=true
>>> # gsiftpMaxLogin=100
>>> clientDataPortRange=20000:25000
>>> # gsiftpIoQueue=
>>> # gsiftpAdapterInternalInterface=
>>> # remoteGsiftpIoQueue=
>>> # FtpTLogDir=
>>> #
>>> # ---- May pools accept incomming connection for GridFTP transfers?
>>> # Values: 'true', 'false'
>>> # Default: 'false' for FTP doors, 'true' for pools
>>> #
>>> # If set to true, pools are allowed accept incomming connections for
>>> # for FTP transfers. This only affects passive transfers. Only
>>> passive
>>> # transfers using GFD.47 GETPUT (aka GridFTP 2) can be redirected to
>>> # the pool. Other passive transfers will be channelled through a
>>> # proxy component at the FTP door. If set to false, all passive
>>> # transfers to through a proxy.
>>> #
>>> # This setting is interpreted by both FTP doors and pools, with
>>> # different defaults. If set to true at the door, then the setting
>>> # at the individual pool will be used.
>>> #
>>> # gsiftpAllowPassivePool=false
>>> #
>>> #
>>> #
>>> -----------------------------------------------------------------------
>>> # common to gsiftp and srm
>>> #
>>> -----------------------------------------------------------------------
>>> #
>>> srmSpaceManagerEnabled=yes
>>> #
>>> # will have no effect if srmSpaceManagerEnabled is "no"
>>> srmImplicitSpaceManagerEnabled=yes
>>> # overwriteEnabled=no
>>> #
>>> # ---- Image and style directories for the dCache-internal web server
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> images=${ourHomeDir}/docs/images
>>> styles=${ourHomeDir}/docs/styles
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Network Configuration
>>> #
>>> -----------------------------------------------------------------------
>>>
>>> # ---- Port Numbers for the various services
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> portBase=22
>>> #dCapPort=${portBase}125
>>> dCapPort=22125
>>> ftpPort=${portBase}126
>>> kerberosFtpPort=${portBase}127
>>> #dCapGsiPort=${portBase}128
>>> dCapGsiPort=22128
>>> #gsiFtpPortNumber=2811
>>> gsiFtpPortNumber=2811
>>> srmPort=8443
>>> xrootdPort=1094
>>>
>>> # ---- GridFTP port range
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> clientDataPortRange=20000:25000
>>> #clientDataPortRange=33115:33215
>>>
>>>
>>> # ---- Port Numbers for the monitoring and administration
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> adminPort=${portBase}223
>>> httpdPort=${portBase}88
>>> sshPort=${portBase}124
>>> # Telnet is only started if the telnetPort line is uncommented.
>>> # Debug only.
>>> #telnetPort=${portBase}123
>>> #
>>> #
>>> -----------------------------------------------------------------------
>>> # Maintenance Module Setup
>>> #
>>> -----------------------------------------------------------------------
>>> #
>>> # maintenanceLibPath=${ourHomeDir}/var/lib/dCache/maintenance
>>> # maintenanceLibAutogeneratePaths=true
>>> # maintenanceLogoutTime=18000
>>> #
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Database Configuration
>>> #
>>> -----------------------------------------------------------------------
>>> # The variable 'srmDbHost' is obsolete. For compatibility reasons,
>>> # it is still used if it is set and if the following variables are
>>> # not set
>>>
>>> # The current setup assumes that one or more PostgreSQL servers are
>>> # used by the various dCache components. Currently the database user
>>> # 'srmdcache' with password 'srmdcache' is used by all components.
>>> # They use the databases 'dcache', 'replicas', 'companion',
>>> # 'billing'. However, these might be located on separate hosts.
>>>
>>> # The best idea is to have the database server running on the same
>>> # host as the dCache component which accesses it. Therefore, the
>>> # default value for the following variables is 'localhost'.
>>> # Uncomment and change these variables only if you have a reason to
>>> # deviate from this scheme.
>>>
>>> # (One possibility would be, to put the 'billing' DB on another
>>> host than
>>> # the pnfs server DB and companion, but keep the httpDomain on the
>>> admin
>>> # host.)
>>>
>>> # ---- pnfs Companion Database Host
>>> # Do not change unless yoy know what you are doing.
>>> # - Database name: companion
>>> #
>>> #companionDatabaseHost=localhost
>>>
>>> # ---- SRM Database Host
>>> # Do not change unless yoy know what you are doing.
>>> # - Database name: dcache
>>> # - If srmDbHost is set and this is not set, srmDbHost is used.
>>> #
>>> #srmDatabaseHost=localhost
>>>
>>> # ---- Space Manager Database Host
>>> # Do not change unless yoy know what you are doing.
>>> # - Database name: dcache
>>> # - If srmDbHost is set and this is not set, srmDbHost is used.
>>> #
>>> #spaceManagerDatabaseHost=localhost
>>>
>>> # ---- Pin Manager Database Host
>>> # Do not change unless yoy know what you are doing.
>>> # - Database name: dcache
>>> # - If srmDbHost is set and this is not set, srmDbHost is used.
>>> #
>>> #pinManagerDatabaseHost=localhost
>>>
>>> # ---- Replica Manager Database Host
>>> # Do not change unless yoy know what you are doing.
>>> # - Database name: replicas
>>> #
>>> # ----------------------------------------------------------------
>>> # replica manager tuning
>>> # ----------------------------------------------------------------
>>> #
>>> # replicaManagerDatabaseHost=localhost
>>> # replicaDbName=replicas
>>> # replicaDbUser=srmdcache
>>> # replicaDbPassword=srmdcache
>>> # replicaPasswordFile=""
>>> # resilientGroupName=ResilientPools
>>> # replicaPoolWatchDogPeriod=600
>>> # replicaWaitDBUpdateTimeout=600
>>> # replicaExcludedFilesExpirationTimeout=43200
>>> # replicaDelayDBStartTimeout=1200
>>> # replicaAdjustStartTimeout=1200
>>> # replicaWaitReplicateTimeout=43200
>>> # replicaWaitReduceTimeout=43200
>>> # replicaDebug=false
>>> # replicaMaxWorkers=6
>>> # replicaMin=2
>>> # replicaMax=3
>>> #
>>>
>>>
>>> # ---- Transfer / TCP Buffer Size
>>> # Do not change unless yoy know what you are doing.
>>> #
>>> bufferSize=1048576
>>> tcpBufferSize=1048576
>>>
>>> # ---- Allow overwrite of existing files via GSIdCap
>>> # allow=true, disallow=false
>>> #
>>> truncate=false
>>>
>>> # ---- pnfs Mount Point for (Grid-)FTP
>>> # The current FTP door needs pnfs to be mounted for some file exist
>>> # checks and for the directory listing. Therefore it needs to know
>>> # where pnfs is mounted. In future the Ftp and dCap deamons will
>>> # ask the pnfsManager cell for help and the directory listing is
>>> # done by a DirListPool.
>>> ftpBase=/pnfs/ftpBase
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # pnfs Manager Configuration
>>> #
>>> -----------------------------------------------------------------------
>>> #
>>> # ---- pnfs Mount Point
>>> # The mount point of pnfs on the admin node. Default: /pnfs/fs
>>> #
>>> pnfs=/pnfs/fs
>>>
>>> # An older version of the pnfsManager actually autodetects the
>>> # possible pnfs filesystems. The ${defaultPnfsServer} is choosen
>>> # from the list and used as primary pnfs filesystem. (currently the
>>> # others are ignored). The ${pnfs} variable can be used to override
>>> # this mechanism.
>>> #
>>> # defaultPnfsServer=localhost
>>> #
>>> # -- leave this unless you are running an enstore HSM backend.
>>> #
>>> # pnfsInfoExtractor=diskCacheV111.util.OsmInfoExtractor
>>> #
>>> # -- depending on the power of your pnfs server host you may
>>> # set this to up to 50.
>>> #
>>> # pnfsNumberOfThreads=4
>>> #
>>> # -- don't change this
>>> #
>>> #
>>> namespaceProvider=diskCacheV111.namespace.provider.BasicNameSpaceProviderFactory
>>>
>>> #
>>> # --- change this if you configured you postgres instance
>>> # other then described in the Book.
>>> #
>>> # pnfsDbUser=srmdcache
>>> # pnfsDbPassword=srmdcache
>>> # pnfsPasswordFile=
>>> #
>>> # ---- Storage Method for cacheinfo: companion or pnfs
>>> # Values: 'comanion' -- cacheinfo will be stored in separate DB
>>> # other or missing -- cacheinfo will be stored in pnfs
>>> # Default: 'pnfs' -- for backward compatibility of old dCacheSetup
>>> files
>>> #
>>> # 'companion' is the default for new installs. Old installations have
>>> # to use 'pnfs register' in every pool after switching from 'pnfs' to
>>> # 'companion'. See the documentation.
>>> #
>>> cacheInfo=companion
>>> #
>>> #
>>> #
>>>
>>>
>>> # ---- Location of the trash directory
>>> # The cleaner (which can only run on the pnfs server machine itself)
>>> # autodetects the 'trash' directory. Non-empty 'trash' overwrites
>>> the
>>> # autodetect.
>>> #
>>> #trash=
>>>
>>> # The cleaner stores persistency information in subdirectories of
>>> # the following directory.
>>> #
>>> # cleanerDB=/opt/pnfsdb/pnfs/trash/2
>>> # cleanerRefresh=120
>>> # cleanerRecover=240
>>> # cleanerPoolTimeout=100
>>> # cleanerProcessFilesPerRun=500
>>> # cleanerArchive=none
>>> #
>>>
>>> # ---- Whether to enable the HSM cleaner
>>> # Values: 'disabled', 'enabled'
>>> # Default: 'disabled'
>>> #
>>> # The HSM cleaner scans the PNFS trash directory for deleted
>>> # files stored on an HSM and sends a request to an attached
>>> # pool to delete that file from the HSM.
>>> #
>>> # The HSM cleaner by default runs in the PNFS domain. To
>>> # enable the cleaner, this setting needs to be set to enabled
>>> # at the PNFS domain *and* at all pools that are supposed
>>> # to delete files from an HSM.
>>> #
>>> # hsmCleaner=disabled
>>> #
>>> #
>>> # ---- Location of trash directory for files on tape
>>> # The HSM cleaner periodically scans this directory to
>>> # detect deleted files.
>>> #
>>> # hsmCleanerTrash=/opt/pnfsdb/pnfs/1
>>> #
>>> # ---- Location of repository directory of the HSM cleaner
>>> # The HSM cleaner uses this directory to store information
>>> # about files in could not clean right away. The cleaner
>>> # will reattempt to clean the files later.
>>> #
>>> # hsmCleanerRepository=/opt/pnfsdb/pnfs/1/repository
>>> #
>>> # ---- Interval between scans of the trash directory
>>> # Specifies the time in seconds between two scans of the
>>> # trash directory.
>>> #
>>> # hsmCleanerScan=90
>>> #
>>> # ---- Interval between retries
>>> # Specifies the time in seconds between two attempts to
>>> # clean files stored in the cleaner repository.
>>> #
>>> # hsmCleanerRecover=3600
>>> #
>>> # ---- Interval between flushing failures to the repository
>>> # When the cleaner failes to clean a file, information about this
>>> # file is added to the repository. This setting specifies the time
>>> # in seconds between flushes to the repository. Until the
>>> # information is kept in memory and in the trash directory.
>>> #
>>> # Each flush will create a new file. A lower value will cause the
>>> # repository to be split into more files. A higher value will cause
>>> # a higher memory usage and a larger number of files in the trash
>>> # directory.
>>> #
>>> # hsmCleanerFlush=60
>>> #
>>> # ---- Max. length of in memory queue of files to clean
>>> # When the trash directory is scanned, information about deleted
>>> # files is queued in memory. This setting specifies the maximum
>>> # length of this queue. When the queue length is reached, scanning
>>> # is suspended until files have been cleaned or flushed to the
>>> # repository.
>>> #
>>> # hsmCleanerCleanerQueue=10000
>>> #
>>> # ---- Timeout for pool communication
>>> # Files are cleaned from an HSM by sending a message to a pool to
>>> # do so. This specifies the timeout in seconds after which the
>>> # operation is considered failed.
>>> #
>>> # hsmCleanerTimeout=120
>>> #
>>> # ---- Maximum concurrent requests to a single HSM
>>> # Files are cleaned in batches. This specified the largest number
>>> # of files to include in a batch per HSM.
>>> #
>>> # hsmCleanerRequest=100
>>> #
>>> #
>>> -----------------------------------------------------------------------
>>> # Directory Pools
>>> #
>>> -----------------------------------------------------------------------
>>> #
>>> #directoryPoolPnfsBase=/pnfs/fs
>>> #
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Srm Settings for experts
>>> #
>>> -----------------------------------------------------------------------
>>> #
>>> #srmVersion=version1
>>> #pnfsSrmPath=/
>>> #parallelStreams=10
>>>
>>> #srmAuthzCacheLifetime=60
>>>
>>> # srmGetLifeTime=14400000
>>> # srmPutLifeTime=14400000
>>> # srmCopyLifeTime=14400000
>>>
>>>
>>> # srmTimeout=3600
>>> # srmVacuum=true
>>> # srmVacuumPeriod=21600
>>> # srmProxiesDirectory=/tmp
>>> # srmBufferSize=1048576
>>> # srmTcpBufferSize=1048576
>>> # srmDebug=true
>>>
>>> # srmGetReqThreadQueueSize=10000
>>> # srmGetReqThreadPoolSize=250
>>> # srmGetReqMaxWaitingRequests=1000
>>> # srmGetReqReadyQueueSize=10000
>>> # srmGetReqMaxReadyRequests=2000
>>> # srmGetReqMaxNumberOfRetries=10
>>> # srmGetReqRetryTimeout=60000
>>> # srmGetReqMaxNumOfRunningBySameOwner=100
>>>
>>> # srmPutReqThreadQueueSize=10000
>>> # srmPutReqThreadPoolSize=250
>>> # srmPutReqMaxWaitingRequests=1000
>>> # srmPutReqReadyQueueSize=10000
>>> # srmPutReqMaxReadyRequests=1000
>>> # srmPutReqMaxNumberOfRetries=10
>>> # srmPutReqRetryTimeout=60000
>>> # srmPutReqMaxNumOfRunningBySameOwner=100
>>>
>>> # srmCopyReqThreadQueueSize=10000
>>> # srmCopyReqThreadPoolSize=250
>>> # srmCopyReqMaxWaitingRequests=1000
>>> # srmCopyReqMaxNumberOfRetries=10
>>> # srmCopyReqRetryTimeout=60000
>>> # srmCopyReqMaxNumOfRunningBySameOwner=100
>>>
>>> # srmPoolManagerTimeout=300
>>> # srmPoolTimeout=300
>>> # srmPnfsTimeout=300
>>> # srmMoverTimeout=7200
>>> # remoteCopyMaxTransfers=150
>>> # remoteHttpMaxTransfers=30
>>> # remoteGsiftpMaxTransfers=${srmCopyReqThreadPoolSize}
>>>
>>> #
>>> # srmDbName=dcache
>>> # srmDbUser=srmdcache
>>> # srmDbPassword=srmdcache
>>> # srmDbLogEnabled=false
>>> #
>>> # This variable enables logging of the history
>>> # of the srm request transitions in the database
>>> # so that it can be examined though the srmWatch
>>> # monitoring tool
>>> # srmJdbcMonitoringLogEnabled=false
>>> #
>>> # turning this on turns off the latest changes that made service
>>> # to honor the srm client's prococol list order for
>>> # get/put commands
>>> # this is needed temprorarily to support old srmcp clients
>>> # srmIgnoreClientProtocolOrder=false
>>>
>>> #
>>> # -- Set this to /root/.pgpass in case
>>> # you need to have better security.
>>> #
>>> # srmPasswordFile=
>>> #
>>> # -- Set this to true if you want overwrite to be enabled for
>>> # srm v1.1 interface as well as for srm v2.2 interface when
>>> # client does not specify desired overwrite mode.
>>> # This option will be considered only if overwriteEnabled is
>>> # set to yes (or true)
>>> #
>>> # srmOverwriteByDefault=false
>>>
>>> # ----srmCustomGetHostByAddr enables using the BNL developed
>>> # procedure for host by ip resolution if standard
>>> # InetAddress method failed
>>> srmCustomGetHostByAddr=true
>>>
>>> # ---- Allow automatic creation of directories via SRM
>>> # allow=true, disallow=false
>>> #
>>> RecursiveDirectoryCreation=true
>>>
>>> # ---- Allow delete via SRM
>>> # allow=true, disallow=false
>>> #
>>> AdvisoryDelete=true
>>> #
>>> # pinManagerDatabaseHost=${srmDbHost}
>>> # spaceManagerDatabaseHost=${srmDbHost}
>>> #
>>> # ----if space reservation request does not specify retention policy
>>> # we will assign this retention policy by default
>>> SpaceManagerDefaultRetentionPolicy=REPLICA
>>> #
>>> # ----if space reservation request does not specify access latency
>>> # we will assign this access latency by default
>>> SpaceManagerDefaultAccessLatency=ONLINE
>>> #
>>> # ----if the transfer request come from the door, and there was not
>>> prior
>>> # space reservation made for this file, should we try to reserve
>>> # space before satisfying the request
>>> SpaceManagerReserveSpaceForNonSRMTransfers=true
>>>
>>> # LinkGroupAuthorizationFile contains the list of FQANs that are
>>> allowed to
>>> # make space reservations in a given link group
>>> SpaceManagerLinkGroupAuthorizationFileName=/opt/d-cache/etc/LinkGroupAuthorization.conf
>>>
>>>
>>> #
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Logging Configuration
>>> #
>>> -----------------------------------------------------------------------
>>>
>>> # ---- Directory for the Log Files
>>> # Default: ${ourHomeDir}/log/ (if unset or empty)
>>> #
>>> logArea=/var/log
>>>
>>> # ---- Restart Behaviour
>>> # Values: 'new' -- logfiles will be moved to LOG.old at restart.
>>> # other or missing -- logfiles will be appended at restart.
>>> # Default: 'keep'
>>> #
>>> #logMode=keep
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Billing / Accounting
>>> #
>>> -----------------------------------------------------------------------
>>>
>>> # The directory the billing logs are written to
>>> billingDb=${ourHomeDir}/billing
>>>
>>> # If billing information should be written to a
>>> # PostgreSQL database set to 'yes'.
>>> # A database called 'billing' has to be created there.
>>> billingToDb=yes
>>>
>>> # The PostgreSQL database host:
>>> billingDatabaseHost=localhost
>>>
>>> # EXPERT: First is default if billingToDb=no, second for
>>> billingToDb=yes
>>> # Do NOT put passwords in setup file! They can be read by anyone
>>> logging into
>>> # the dCache admin interface!
>>> #billingDbParams=
>>> billingDbParams="\
>>> -useSQL \
>>>
>>> -jdbcUrl=jdbc:postgresql://${billingDatabaseHost}/billing \
>>> -jdbcDriver=org.postgresql.Driver \
>>> -dbUser=srmdcache \
>>> -dbPass=srmdcache \
>>> "
>>>
>>> #
>>> -----------------------------------------------------------------------
>>> # Info Provider
>>> #
>>> -----------------------------------------------------------------------
>>> #
>>> # The following variables are used by the dynamic info provider,
>>> which
>>> # is used for integration of dCache as a storage element in the LCG
>>> # information system. All variables are used by the client side of
>>> the
>>> # dynamic info provider which is called regularly by the LCG GIP
>>> (generic
>>> # info provider). It consists of the two scripts
>>> # jobs/infoDynamicSE-plugin-dcache
>>> # jobs/infoDynamicSE-provider-dcache
>>> #
>>>
>>> # ---- Seconds between information retrievals
>>> # Default: 180
>>> #infoCollectorInterval=180
>>>
>>> # ---- The static file used by the GIP
>>> # This is also used by the plugin to determine the info it should
>>> # output.
>>> # Default: /opt/lcg/var/gip/ldif/lcg-info-static-se.ldif
>>> #infoProviderStaticFile=/opt/lcg/var/gip/ldif/lcg-info-static-se.ldif
>>> infoProviderStaticFile=/opt/glite/etc/gip/ldif/static-file-SE.ldif
>>>
>>> # ---- The host where the InfoCollector cell runs
>>> # Default: localhost
>>> infoCollectorHost=localhost
>>>
>>> # ---- The port where the InfoCollector cell will listen
>>> # This will be used by the InfoCollector cell as well as the dynamic
>>> # info provider scripts
>>> # Default: 22111
>>> #infoCollectorPort=22111
>>>
>>>
>>>
>>> #
>>> ------------------------------------------------------------------------
>>> # Statistics module
>>> #
>>> ------------------------------------------------------------------------
>>>
>>> # - point to place where statistic will be store
>>> statisticsLocation=${ourHomeDir}/statistics
>>>
>>> #
>>> ------------------------------------------------------------------------
>>> # xrootd section
>>> #
>>> ------------------------------------------------------------------------
>>> #
>>> # forbids write access in general (to avoid unauthenticated
>>> writes). Overrides all other authorization settings.
>>> # xrootdIsReadOnly=true
>>> #
>>> # allow write access only to selected paths (and its
>>> subdirectories). Overrides any remote authorization settings (like
>>> from the filecatalogue)
>>> # xrootdAllowedPaths=/path1:/path2:/path3
>>> #
>>> # This will allow to enable authorization in the xrootd door
>>> by specifying a valid
>>> # authorization plugin. There is only one plugin in the
>>> moment, implementing token based
>>> # authorization controlled by a remote filecatalogue. This
>>> requires an additional parameter
>>> # 'keystore', holding keypairs needed to do the authorization
>>> plugin. A template keystore
>>> # can be found in ${ourHomeDir}/etc/keystore.temp.
>>>
>>> #
>>> xrootdAuthzPlugin=org.dcache.xrootd.security.plugins.tokenauthz.TokenAuthorizationFactory
>>>
>>> # xrootdAuthzKeystore=${ourHomeDir}/etc/keystore
>>>
>>> # the mover queue on the pool where this request gets
>>> scheduled to
>>> # xrootdIoQueue=
>>>
>>>
>
>
|