GRIDPP-STORAGE Archives
GRIDPP-STORAGE@JISCMAIL.AC.UK
GRIDPP-STORAGE February 2008
Subject: Re: Help
From: Matt Doidge <[log in to unmask]>
Reply-To: Matt Doidge <[log in to unmask]>
Date: Fri, 8 Feb 2008 18:16:45 +0000
Content-Type: text/plain
Parts/Attachments: text/plain (1595 lines)

Thanks Greig for the configs. I think I created my reservations in
pretty much the same way, although I didn't specify the acclat and
retpol options; they seem to have got the correct defaults though if
you list them in the SpaceManager. I don't think it's the reservations
that are the problem, it's the SpaceManager and how it handles transfers
that aren't into reserved spaces. I did notice that my
SpaceManagerReserveSpaceForNonSRMTransfers option wasn't explicitly
set (the option was still commented out, so it would be set to the
default, which I believe is "true") - this might cause some trouble.
I notice you have 2 pgroups and links for lhcb, is this for any
particular reason?
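
(For reference, explicitly setting the option Matt mentions - together
with the related defaults - in dCacheSetup amounts to the same block
quoted from Greig's config further down the thread:)

SpaceManagerDefaultRetentionPolicy=REPLICA
SpaceManagerDefaultAccessLatency=ONLINE
SpaceManagerReserveSpaceForNonSRMTransfers=true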

Brian pointed out what might be the reason why things are working for
you but not for us - we have a separate pnfs node, which (with the
exception of all the tagging malarkey) didn't get a new copy of the
dCacheSetup file or get fully restarted (it runs the pnfs and utility
domains) or otherwise receive any changes. My next experiment will
involve playing with that. But first, food.
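
(A minimal sketch of that experiment, assuming the stock /opt/d-cache
layout seen elsewhere in the thread, the "dcache-core" init script name
used later in the thread, and a hypothetical host name "pnfsnode" for
the separate pnfs/utility node:)

scp /opt/d-cache/config/dCacheSetup pnfsnode:/opt/d-cache/config/dCacheSetup
ssh pnfsnode service dcache-core restart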

cheers,
Matt

On 08/02/2008, Greig Alan Cowan <[log in to unmask]> wrote:
> Hi Matt,
>
> How's it going? After I set up the pnfs tags, I've found that I could
> get the Space manager working without too many problems. My PoolManager
> and dCacheSetup are attached.
>
> For the reservation I did something like this:
>
> reserve -vog=/lhcb -vor=lhcbprd -acclat=ONLINE -retpol=REPLICA
> -desc=LHCb_DST -lg=lhcb-linkGroup 24993207653 "-1"
>
> Note that there is a problem with (I think) gPlazma in that it caches
> user DNs for a short period. This means that if you try to transfer a
> file when belonging to one VO and then switch proxies to another, you
> are likely to get a permission denied error. Someone is working on
> fixing this.
>
> Cheers,
> Greig
>
> On 08/02/08 15:19, Matt Doidge wrote:
> > Helps if I attach the bloomin script doesn't it!
> >
> > Got that Friday feeling...
> >
> > Matt
> >
> > On 08/02/2008, Matt Doidge <[log in to unmask]> wrote:
> >> Heya guys,
> >>
> >> Here's the python script that I was given by Dmitri Litvintsev that
> >> recursively sets the AccessLatency and RetentionPolicy tags in pnfs to
> >> ONLINE and REPLICA. Usage is:
> >>
> >> set_tag.py --dir=/pnfs/wherever/data/vo/foo
> >>
> >> or to be careful about it cd to the directory and
> >> /pathtoscript/set_tag.py --dir=`pwd`
> >>
> >> This took nearly 3 hours for my admittedly gargantuan atlas directory,
> >> so you're best off doing it in chunks. Oh, and as a disclaimer, this
> >> script comes with no guarantees; it was written for us as a favour.
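
(The script itself was an attachment and isn't reproduced in this
archive. A minimal Python sketch of the same idea - walking a pnfs tree
and writing the ".(tag)(...)" pseudo-files, as the echo commands
elsewhere in the thread do for a single directory - might look like
this; it is not the original script:)

#!/usr/bin/env python
# Hypothetical sketch, not the attached script: recursively set the pnfs
# AccessLatency/RetentionPolicy tags by writing the ".(tag)(...)"
# pseudo-files in every subdirectory under --dir.
import os
from optparse import OptionParser

TAGS = {"AccessLatency": "ONLINE", "RetentionPolicy": "REPLICA"}

def set_tags(top):
    for dirpath, dirnames, filenames in os.walk(top):
        for tag, value in TAGS.items():
            # e.g. /pnfs/site/data/vo/foo/.(tag)(AccessLatency)
            f = open(os.path.join(dirpath, ".(tag)(%s)" % tag), "w")
            f.write(value + "\n")
            f.close()

if __name__ == "__main__":
    parser = OptionParser(usage="%prog --dir=/pnfs/wherever/data/vo/foo")
    parser.add_option("--dir", dest="dir", help="top-level pnfs directory")
    opts, args = parser.parse_args()
    if not opts.dir:
        parser.error("--dir is required")
    set_tags(opts.dir)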
> >>
> >> However, doing this doesn't seem to have fixed our troubles: srmv2
> >> writes still work if you specify the space token but fail if you don't,
> >> for dteam. I don't know about other VOs, as none of my colleagues seem
> >> to be able to get a proxy today. I might have to fiddle with
> >> permissions and pretend to be in other VOs to test.
> >>
> >> cheers,
> >> Matt
> >>
> >> On 08/02/2008, Greig Alan Cowan <[log in to unmask]> wrote:
> >>> Hi Matt,
> >>>
> >>> Yep, you are bang on. I just set the PNFS tags to REPLICA-ONLINE and now
> >>> it's all working. Seems to me that things have really been set up to work
> >>> for dCaches with HSM backends without thinking about the little guys.
> >>> I'll report this in the deployment meeting that's starting soon.
> >>>
> >>> $ echo ONLINE > ".(tag)(AccessLatency)"
> >>> $ echo REPLICA > ".(tag)(RetentionPolicy)"
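
(To check that the tags took effect, they can be read back from the same
pseudo-files in that directory, which should simply echo the values just
written:)

$ cat ".(tag)(AccessLatency)"
ONLINE
$ cat ".(tag)(RetentionPolicy)"
REPLICA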
> >>>
> >>> Can you send round that script?
> >>>
> >>> Cheers,
> >>> Greig
> >>>
> >>> On 08/02/08 14:46, Matt Doidge wrote:
> >>>> Heya guys,
> >>>>
> >>>> I've had similar experiences playing with Lancaster's dcache - I can
> >>>> get writes to work only if I specify the token to write into; if I
> >>>> leave it unspecified or try to use srmv1 I get "No Space Available".
> >>>>
> >>>> From the logs of our experiments Dmitri and Timur have concluded that
> >>>> there's some confusion involving default space settings. Despite
> >>>> having set us to "REPLICA" and "ONLINE" in the dCacheSetup, writes into
> >>>> our dcache with no write policies set (i.e. no token specified) are
> >>>> being made to look for a space which is "NEARLINE" and "CUSTODIAL".
> >>>>
> >>>> One fix suggested is to edit the srm.batch with:
> >>>> set context -c SpaceManagerDefaultRetentionPolicy REPLICA
> >>>> set context -c SpaceManagerDefaultAccessLatency ONLINE
> >>>> (these were set wrong for us)
> >>>>
> >>>> And also Dmitri advised setting the policy tags in the pnfs
> >>>> directories. Dmitri wrote a nice little python script to do that, I
> >>>> can forward it if you want, but be warned it took nearly 3 hours for
> >>>> it to get through our existing atlas directory. Luckily it should only
> >>>> ever have to be run once.
> >>>>
> >>>> I've set things up and am about to have a go at switching the Space
> >>>> Manager on without breaking our srm. Wish me luck.
> >>>>
> >>>> cheers,
> >>>> Matt
> >>>>
> >>>> On 08/02/2008, Greig Alan Cowan <[log in to unmask]> wrote:
> >>>>> Hi Chris, all,
> >>>>>
> >>>>> I've got the SRM2.2 transfers into a reserved space working for the
> >>>>> Edinburgh dCache.
> >>>>>
> >>>>> All I did was add a section to my PoolManager.conf file that created a
> >>>>> link group and added an existing dteam link to it, i.e.,
> >>>>>
> >>>>> psu create linkGroup dteam-linkGroup
> >>>>> psu set linkGroup custodialAllowed dteam-linkGroup false
> >>>>> psu set linkGroup replicaAllowed dteam-linkGroup true
> >>>>> psu set linkGroup nearlineAllowed dteam-linkGroup false
> >>>>> psu set linkGroup outputAllowed dteam-linkGroup false
> >>>>> psu set linkGroup onlineAllowed dteam-linkGroup true
> >>>>> psu addto linkGroup dteam-linkGroup dteam-link
> >>>>>
> >>>>> Nothing else changed in PoolManager.conf. In dCacheSetup on the SRM
> >>>>> node, I have
> >>>>>
> >>>>> srmSpaceManagerEnabled=yes
> >>>>> srmImplicitSpaceManagerEnabled=yes
> >>>>> SpaceManagerDefaultRetentionPolicy=REPLICA
> >>>>> SpaceManagerDefaultAccessLatency=ONLINE
> >>>>> SpaceManagerReserveSpaceForNonSRMTransfers=true
> >>>>> SpaceManagerLinkGroupAuthorizationFileName=/opt/d-cache/etc/LinkGroupAuthorization.conf
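
(The LinkGroupAuthorization.conf named on that last line isn't included
in the thread. Going by the dCache documentation of the time, it simply
lists, per link group, the FQANs allowed to make reservations in it,
along these lines - the FQANs here are only illustrative:)

LinkGroup dteam-linkGroup
/dteam
/dteam/Role=lcgadmin

LinkGroup lhcb-linkGroup
/lhcb
/lhcb/Role=production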
> >>>>>
> >>>>> It is also essential to have
> >>>>>
> >>>>> srmSpaceManagerEnabled=yes
> >>>>>
> >>>>> on all *door* nodes.
> >>>>>
> >>>>> I could then reserve a space in the newly created link group using the
> >>>>> "reserve" command line tool in the srmSpaceManager cell. You can then
> >>>>> test this with the latest dCache srmclient by doing something like:
> >>>>>
> >>>>> srmcp -2 -debug file:////etc/group \
> >>>>>   srm://srm.epcc.ed.ac.uk:8443/pnfs/epcc.ed.ac.uk/data/dteam/greig_test_dir/`date +%s` \
> >>>>>   -space_token=1
> >>>>>
> >>>>> Where space_token=1 is the numerical value of the space token
> >>>>> reservation that you made.
> >>>>>
> >>>>> Transfers using SRMv1 are still returning that there is no space
> >>>>> available. I need to investigate further why this is. I'll be in touch.
> >>>>>
> >>>>> Cheers,
> >>>>> Greig
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 07/02/08 17:41, Brew, CAJ (Chris) wrote:
> >>>>>> (I'm guessing jiscmail should be up now)
> >>>>>>
> >>>>>> Are there any sites without a MSS backend that have got this working?
> >>>>>>
> >>>>>> Chris.
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: Greig Alan Cowan [mailto:[log in to unmask]]
> >>>>>>> Sent: 07 February 2008 17:33
> >>>>>>> To: [log in to unmask]
> >>>>>>> Cc: cajbrew
> >>>>>>> Subject: Re: Help
> >>>>>>>
> >>>>>>> Hi Guys,
> >>>>>>>
> >>>>>>> Sorry for my silence this afternoon, I've been at CERN all week and
> >>>>>>> that's me just back home now. I've got a working PoolManager.conf
> >>>>>>> from FZK which I'm scrutinising. I'll be in touch later/tomorrow in
> >>>>>>> order to get you both up and running in SRM2.2 mode.
> >>>>>>>
> >>>>>>> Cheers,
> >>>>>>> Greig
> >>>>>>>
> >>>>>>> On 07/02/08 17:15, [log in to unmask] wrote:
> >>>>>>>> It's a pain in the arse, I'm managing to get some results, but
> >>>>>>>> writes only work when the space token is explicitly set in the
> >>>>>>>> srmPut and they fail in every other case. And for some reason even
> >>>>>>>> if I only set up a linkGroup for dteam I still seem to affect all
> >>>>>>>> other VOs as soon as I throw the SpaceManager on, and they get the
> >>>>>>>> "No Space Available" error.
> >>>>>>>>
> >>>>>>>> At least I'm seeing some progress I suppose - I can technically get
> >>>>>>>> SpaceTokens to work. It just means nothing else will.....
> >>>>>>>>
> >>>>>>>> Oh, and your cut-down arcane ritual does indeed seem to work
> >>>>>>>> wonders - but according to the dcache bods a restart of the door
> >>>>>>>> nodes (with the edits to dCacheSetup on board) is advisable after
> >>>>>>>> each change, something to do with the door processes retaining
> >>>>>>>> information about the SpaceManager stuff (to use the technical
> >>>>>>>> terms).
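
(That restart would presumably just be the usual init script on each
door node, e.g. something like the following - "dcache-core" is the
service name used elsewhere in this thread; exact paths and script
names may vary per install:)

service dcache-core restart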
> >>>>>>>>
> >>>>>>>> cheers,
> >>>>>>>> Matt
> >>>>>>>>
> >>>>>>>> On 07/02/2008, cajbrew <[log in to unmask]> wrote:
> >>>>>>>>> Hi,
> >>>>>>>>>
> >>>>>>>>> Thanks I'm back up now.
> >>>>>>>>>
> >>>>>>>>> OK, my arcane ritual was a bit shorter than yours so I'll share it:
> >>>>>>>>>
> >>>>>>>>> In dCacheSetup on the head node
> >>>>>>>>>
> >>>>>>>>> Reset
> >>>>>>>>> srmSpaceManagerEnabled=no
> >>>>>>>>>
> >>>>>>>>> and comment out:
> >>>>>>>>> #srmImplicitSpaceManagerEnabled=yes
> >>>>>>>>> #SpaceManagerDefaultRetentionPolicy=REPLICA
> >>>>>>>>> #SpaceManagerDefaultAccessLatency=ONLINE
> >>>>>>>>> #SpaceManagerReserveSpaceForNonSRMTransfers=true
> >>>>>>>>> #SpaceManagerLinkGroupAuthorizationFileName=/opt/d-cache/etc/LinkGroupAuthorization.conf
> >>>>>>>>>
> >>>>>>>>> In the PoolManager.conf file comment out all the LinkGroup
> >>>>>>>>> configuration.
> >>>>>>>>> Restart the dcache-core service on the head node.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> I thought I had SrmSpaceManager working for a while.
> >>>>>>>>>
> >>>>>>>>> I seemed to have a setup where it worked for babar but I could
> >>>>>>>>> only write to directories where I had explicitly set the
> >>>>>>>>> AccessLatency and RetentionPolicy using:
> >>>>>>>>>
> >>>>>>>>> echo "ONLINE" > ".(tag)(AccessLatency)"; echo "REPLICA" > ".(tag)(RetentionPolicy)"
> >>>>>>>>>
> >>>>>>>>> But when I restarted to try to replicate the config to CMS and
> >>>>>>>>> test it from there it stopped working even for BaBar. Now whatever
> >>>>>>>>> I try I cannot get writes working with SrmSpaceManager enabled.
> >>>>>>>>>
> >>>>>>>>> The trouble is we cannot test this without taking dCache
> >>>>>>>>> effectively offline for everyone.
> >>>>>>>>>
> >>>>>>>>> Thanks,
> >>>>>>>>> Chris.
> >>>>>>>>>
> >>>>>>>>>> -----Original Message-----
> >>>>>>>>>> From: [log in to unmask] [mailto:[log in to unmask]]
> >>>>>>>>>> Sent: 07 February 2008 13:39
> >>>>>>>>>> To: cajbrew
> >>>>>>>>>> Cc: [log in to unmask]
> >>>>>>>>>> Subject: Re: Help
> >>>>>>>>>>
> >>>>>>>>>> When we broke our dcache with the SpaceManager we found that in
> >>>>>>>>>> order to get things working again we had to:
> >>>>>>>>>>
> >>>>>>>>>> Cross fingers.
> >>>>>>>>>> Get rid of all the linkGroups in the PoolManager.conf (or at
> >>>>>>>>>> least remove all the links from them).
> >>>>>>>>>> Set dCacheSetup to have SpaceManager disabled on the srm and
> >>>>>>>>>> all the door nodes.
> >>>>>>>>>> Rerun install.sh on the srm node (I'm not sure if this is totally
> >>>>>>>>>> necessary, but it seems to do the trick).
> >>>>>>>>>> Restart the srm node.
> >>>>>>>>>> Restart the door nodes.
> >>>>>>>>>> Throw holy water at your nodes till the SpaceManager
> >>>>>>>>>> leaves them be.
> >>>>>>>>>>
> >>>>>>>>>> It's a bloody lot of hassle I tell you. To be honest half those
> >>>>>>>>>> steps might be unnecessary, but I'm not sure which half so I'll
> >>>>>>>>>> keep this arcane ritual.
> >>>>>>>>>>
> >>>>>>>>>> I'm totally stuck with the whole SpaceToken thing, after
> >>>>>>>>>> countless emails with attached configs and logs I've had to go
> >>>>>>>>>> and give access to our dcache to Dmitri so he can have a good
> >>>>>>>>>> poke - which goes against some University rules so I'm having to
> >>>>>>>>>> be a bit hush hush about it. Hopefully he's not filling my SRM
> >>>>>>>>>> with naughty pictures, and finds some way to get us working that
> >>>>>>>>>> I can spread to the other UK dcaches.
> >>>>>>>>>>
> >>>>>>>>>> Hope this gets your dcache up and running again,
> >>>>>>>>>>
> >>>>>>>>>> Matt
> >>>>>>>>>>
> >>>>>>>>>> On 07/02/2008, cajbrew <[log in to unmask]> wrote:
> >>>>>>>>>>> Hi Greig, Matt
> >>>>>>>>>>>
> >>>>>>>>>>> (The Atlas Centre has lost power so my work mail and the
> >>>>>>>>>>> maillist are all down)
> >>>>>>>>>>>
> >>>>>>>>>>> I'm trying to enable space tokens but seem to have run into the
> >>>>>>>>>>> same problem as Matt.
> >>>>>>>>>>>
> >>>>>>>>>>> When I try to transfer some data in I get:
> >>>>>>>>>>>
> >>>>>>>>>>> heplnx101 - ~ $ lcg-cr -v --vo babar -d heplnx204.pp.rl.ac.uk -P testfile.brew file:/opt/ppd/scratch/brew/LoadTestSeed
> >>>>>>>>>>> Using grid catalog type: lfc
> >>>>>>>>>>> Using grid catalog : lfcserver.cnaf.infn.it
> >>>>>>>>>>> Using LFN : /grid/babar/generated/2008-02-07/file-de6e10d4-db82-4658-8dd7-5b0390c4e8cc
> >>>>>>>>>>> Using SURL : srm://heplnx204.pp.rl.ac.uk/pnfs/pp.rl.ac.uk/data/babar/testfile.brew
> >>>>>>>>>>> Alias registered in Catalog: lfn:/grid/babar/generated/2008-02-07/file-de6e10d4-db82-4658-8dd7-5b0390c4e8cc
> >>>>>>>>>>> Source URL: file:/opt/ppd/scratch/brew/LoadTestSeed
> >>>>>>>>>>> File size: 2747015459
> >>>>>>>>>>> VO name: babar
> >>>>>>>>>>> Destination specified: heplnx204.pp.rl.ac.uk
> >>>>>>>>>>> Destination URL for copy: gsiftp://heplnx172.pp.rl.ac.uk:2811//pnfs/pp.rl.ac.uk/data/babar/testfile.brew
> >>>>>>>>>>> # streams: 1
> >>>>>>>>>>> # set timeout to 0 seconds
> >>>>>>>>>>>             0 bytes      0.00 KB/sec avg      0.00 KB/sec inst
> >>>>>>>>>>> globus_ftp_client: the server responded with an error
> >>>>>>>>>>> 451 Operation failed: Non-null return code from
> >>>>>>>>>>> [>PoolManager@dCacheDomain:*@dCacheDomain] with error No write pools
> >>>>>>>>>>> configured for <babar:babar@osm>
> >>>>>>>>>>>
> >>>>>>>>>>> Unfortunately when I try to back out and set
> >>>>>>>>>>>
> >>>>>>>>>>> srmSpaceManagerEnabled=no
> >>>>>>>>>>>
> >>>>>>>>>>> I still get the same error.
> >>>>>>>>>>>
> >>>>>>>>>>> So I now seem to be stuck, I cannot go forwards or back.
> >>>>>>>>>>>
> >>>>>>>>>>> No, actually I've gone further back and commented out all the
> >>>>>>>>>>> LinkGroup settings in PoolManager.conf and I can at least
> >>>>>>>>>>> transfer data in with both srmv1 and srmv2.
> >>>>>>>>>>>
> >>>>>>>>>>> So has Lancaster solved this or are we both in the same boat?
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks,
> >>>>>>>>>>> Chris.
> >>>>>>>>>>>
> >>>>>>>>>>>
>
> #
> # Setup of PoolManager (diskCacheV111.poolManager.PoolManagerV5) at Fri Feb 01 12:23:15 GMT 2008
> #
> set timeout pool 120
> #
> #
> # Printed by diskCacheV111.poolManager.PoolSelectionUnitV2 at Fri Feb 01 12:23:15 GMT 2008
> #
> #
> #
> # The units ...
> #
> psu create unit -store  atlas:GENERATED@osm
> psu create unit -store  babar:GENERATED@osm
> psu create unit -store  pheno:GENERATED@osm
> psu create unit -store  hone:GENERATED@osm
> psu create unit -net    0.0.0.0/0.0.0.0
> psu create unit -store  dteam:GENERATED@osm
> psu create unit -store  ops:GENERATED@osm
> psu create unit -store  ngs:GENERATED@osm
> psu create unit -store  pheno:STATIC@osm
> psu create unit -store  geant4:STATIC@osm
> psu create unit -store  minos:STATIC@osm
> psu create unit -store  ilc:STATIC@osm
> psu create unit -store  esr:GENERATED@osm
> psu create unit -store  magic:STATIC@osm
> psu create unit -store  alice:STATIC@osm
> psu create unit -store  hone:STATIC@osm
> psu create unit -store  zeus:STATIC@osm
> psu create unit -store  alice:GENERATED@osm
> psu create unit -store  cms:GENERATED@osm
> psu create unit -store  magic:GENERATED@osm
> psu create unit -store  dteam:STATIC@osm
> psu create unit -store  lhcb:GENERATED@osm
> psu create unit -store  t2k:GENERATED@osm
> psu create unit -store  geant4:GENERATED@osm
> psu create unit -store  cdf:GENERATED@osm
> psu create unit -store  biomed:GENERATED@osm
> psu create unit -store  cms:STATIC@osm
> psu create unit -store  ngs:STATIC@osm
> psu create unit -store  planck:GENERATED@osm
> psu create unit -store  biomed:STATIC@osm
> psu create unit -store  sixt:GENERATED@osm
> psu create unit -store  na48:GENERATED@osm
> psu create unit -store  fusion:STATIC@osm
> psu create unit -store  atlas:STATIC@osm
> psu create unit -store  ops:STATIC@osm
> psu create unit -store  fusion:GENERATED@osm
> psu create unit -store  ilc:GENERATED@osm
> psu create unit -store  zeus:GENERATED@osm
> psu create unit -store  babar:STATIC@osm
> psu create unit -store  na48:STATIC@osm
> psu create unit -store  planck:STATIC@osm
> psu create unit -store  minos:GENERATED@osm
> psu create unit -protocol */*
> psu create unit -store  dzero:GENERATED@osm
> psu create unit -store  cdf:STATIC@osm
> psu create unit -store  *@*
> psu create unit -store  t2k:STATIC@osm
> psu create unit -net    0.0.0.0/255.255.255.255
> psu create unit -store  dzero:STATIC@osm
> psu create unit -store  esr:STATIC@osm
> psu create unit -store  lhcb:STATIC@osm
> psu create unit -store  sixt:STATIC@osm
> #
> # The unit Groups ...
> #
> psu create ugroup ngs-groups
> psu addto ugroup ngs-groups ngs:STATIC@osm
> psu addto ugroup ngs-groups ngs:GENERATED@osm
> psu create ugroup na48-groups
> psu addto ugroup na48-groups na48:STATIC@osm
> psu addto ugroup na48-groups na48:GENERATED@osm
> psu create ugroup fusion-groups
> psu addto ugroup fusion-groups fusion:STATIC@osm
> psu addto ugroup fusion-groups fusion:GENERATED@osm
> psu create ugroup zeus-groups
> psu addto ugroup zeus-groups zeus:STATIC@osm
> psu addto ugroup zeus-groups zeus:GENERATED@osm
> psu create ugroup esr-groups
> psu addto ugroup esr-groups esr:GENERATED@osm
> psu addto ugroup esr-groups esr:STATIC@osm
> psu create ugroup geant4-groups
> psu addto ugroup geant4-groups geant4:GENERATED@osm
> psu addto ugroup geant4-groups geant4:STATIC@osm
> psu create ugroup alice-groups
> psu addto ugroup alice-groups alice:STATIC@osm
> psu addto ugroup alice-groups alice:GENERATED@osm
> psu create ugroup sixt-groups
> psu addto ugroup sixt-groups sixt:GENERATED@osm
> psu addto ugroup sixt-groups sixt:STATIC@osm
> psu create ugroup ops
> psu addto ugroup ops ops:GENERATED@osm
> psu addto ugroup ops ops:STATIC@osm
> psu create ugroup dzero-groups
> psu addto ugroup dzero-groups dzero:STATIC@osm
> psu addto ugroup dzero-groups dzero:GENERATED@osm
> psu create ugroup atlas-groups
> psu addto ugroup atlas-groups atlas:GENERATED@osm
> psu addto ugroup atlas-groups atlas:STATIC@osm
> psu create ugroup lhcb-groups
> psu addto ugroup lhcb-groups lhcb:GENERATED@osm
> psu addto ugroup lhcb-groups lhcb:STATIC@osm
> psu create ugroup cms-groups
> psu addto ugroup cms-groups cms:STATIC@osm
> psu addto ugroup cms-groups cms:GENERATED@osm
> psu create ugroup minos-groups
> psu addto ugroup minos-groups minos:STATIC@osm
> psu addto ugroup minos-groups minos:GENERATED@osm
> psu create ugroup hone-groups
> psu addto ugroup hone-groups hone:STATIC@osm
> psu addto ugroup hone-groups hone:GENERATED@osm
> psu create ugroup pheno-groups
> psu addto ugroup pheno-groups pheno:GENERATED@osm
> psu addto ugroup pheno-groups pheno:STATIC@osm
> psu create ugroup planck-groups
> psu addto ugroup planck-groups planck:GENERATED@osm
> psu addto ugroup planck-groups planck:STATIC@osm
> psu create ugroup babar-groups
> psu addto ugroup babar-groups babar:GENERATED@osm
> psu addto ugroup babar-groups babar:STATIC@osm
> psu create ugroup dteam-groups
> psu addto ugroup dteam-groups dteam:STATIC@osm
> psu addto ugroup dteam-groups dteam:GENERATED@osm
> psu create ugroup world-net
> psu addto ugroup world-net 0.0.0.0/0.0.0.0
> psu create ugroup magic-groups
> psu addto ugroup magic-groups magic:GENERATED@osm
> psu addto ugroup magic-groups magic:STATIC@osm
> psu create ugroup ilc-groups
> psu addto ugroup ilc-groups ilc:STATIC@osm
> psu addto ugroup ilc-groups ilc:GENERATED@osm
> psu create ugroup t2k-groups
> psu addto ugroup t2k-groups t2k:GENERATED@osm
> psu addto ugroup t2k-groups t2k:STATIC@osm
> psu create ugroup cdf-groups
> psu addto ugroup cdf-groups cdf:STATIC@osm
> psu addto ugroup cdf-groups cdf:GENERATED@osm
> psu create ugroup ops-groups
> psu addto ugroup ops-groups ops:GENERATED@osm
> psu addto ugroup ops-groups ops:STATIC@osm
> psu create ugroup dteam
> psu create ugroup any-store
> psu addto ugroup any-store atlas:GENERATED@osm
> psu addto ugroup any-store babar:GENERATED@osm
> psu addto ugroup any-store pheno:GENERATED@osm
> psu addto ugroup any-store hone:GENERATED@osm
> psu addto ugroup any-store dteam:GENERATED@osm
> psu addto ugroup any-store ngs:GENERATED@osm
> psu addto ugroup any-store ops:GENERATED@osm
> psu addto ugroup any-store pheno:STATIC@osm
> psu addto ugroup any-store geant4:STATIC@osm
> psu addto ugroup any-store minos:STATIC@osm
> psu addto ugroup any-store ilc:STATIC@osm
> psu addto ugroup any-store esr:GENERATED@osm
> psu addto ugroup any-store magic:STATIC@osm
> psu addto ugroup any-store alice:STATIC@osm
> psu addto ugroup any-store hone:STATIC@osm
> psu addto ugroup any-store zeus:STATIC@osm
> psu addto ugroup any-store alice:GENERATED@osm
> psu addto ugroup any-store cms:GENERATED@osm
> psu addto ugroup any-store magic:GENERATED@osm
> psu addto ugroup any-store dteam:STATIC@osm
> psu addto ugroup any-store t2k:GENERATED@osm
> psu addto ugroup any-store lhcb:GENERATED@osm
> psu addto ugroup any-store geant4:GENERATED@osm
> psu addto ugroup any-store cdf:GENERATED@osm
> psu addto ugroup any-store biomed:GENERATED@osm
> psu addto ugroup any-store cms:STATIC@osm
> psu addto ugroup any-store ngs:STATIC@osm
> psu addto ugroup any-store planck:GENERATED@osm
> psu addto ugroup any-store biomed:STATIC@osm
> psu addto ugroup any-store sixt:GENERATED@osm
> psu addto ugroup any-store na48:GENERATED@osm
> psu addto ugroup any-store fusion:STATIC@osm
> psu addto ugroup any-store atlas:STATIC@osm
> psu addto ugroup any-store fusion:GENERATED@osm
> psu addto ugroup any-store ops:STATIC@osm
> psu addto ugroup any-store ilc:GENERATED@osm
> psu addto ugroup any-store zeus:GENERATED@osm
> psu addto ugroup any-store babar:STATIC@osm
> psu addto ugroup any-store na48:STATIC@osm
> psu addto ugroup any-store planck:STATIC@osm
> psu addto ugroup any-store minos:GENERATED@osm
> psu addto ugroup any-store dzero:GENERATED@osm
> psu addto ugroup any-store cdf:STATIC@osm
> psu addto ugroup any-store *@*
> psu addto ugroup any-store t2k:STATIC@osm
> psu addto ugroup any-store dzero:STATIC@osm
> psu addto ugroup any-store esr:STATIC@osm
> psu addto ugroup any-store lhcb:STATIC@osm
> psu addto ugroup any-store sixt:STATIC@osm
> psu create ugroup biomed-groups
> psu addto ugroup biomed-groups biomed:STATIC@osm
> psu addto ugroup biomed-groups biomed:GENERATED@osm
> #
> # The pools ...
> #
> psu create pool pool1_23
> psu create pool pool1_01
> psu create pool pool1_24
> psu create pool pool2_4
> psu create pool pool1_04
> psu create pool pool1_02
> psu create pool pool2_00
> psu create pool pool1_26
> psu create pool pool1_05
> psu create pool pool1_03
> psu create pool pool1_27
> psu create pool pool1_14
> psu create pool pool2_01
> psu create pool pool2_2
> psu create pool pool1_19
> psu create pool pool1_16
> psu create pool pool1_25
> psu create pool pool2_06
> psu create pool pool1_20
> psu create pool pool1_06
> psu create pool pool2_7
> psu create pool pool1_28
> psu create pool pool1_12
> psu create pool pool2_1
> psu create pool pool2_03
> psu create pool pool2_04
> psu create pool pool1_09
> psu create pool pool2_5
> psu create pool pool2_05
> psu create pool pool1_10
> psu create pool pool2_6
> psu create pool pool1_08
> psu create pool pool1_07
> psu create pool pool1_21
> psu create pool pool1_17
> psu create pool pool1_18
> psu create pool pool2_02
> psu create pool pool2_3
> #
> # The pool groups ...
> #
> psu create pgroup na48
> psu create pgroup lhcb2
> psu addto pgroup lhcb2 pool1_09
> psu addto pgroup lhcb2 pool1_10
> psu addto pgroup lhcb2 pool1_19
> psu addto pgroup lhcb2 pool1_16
> psu addto pgroup lhcb2 pool2_00
> psu addto pgroup lhcb2 pool1_17
> psu addto pgroup lhcb2 pool1_18
> psu create pgroup ResilientPools
> psu create pgroup hone
> psu create pgroup ops
> psu addto pgroup ops pool2_01
> psu addto pgroup ops pool1_28
> psu create pgroup dzero
> psu create pgroup esr
> psu create pgroup minos
> psu create pgroup geant4
> psu create pgroup lhcb
> psu addto pgroup lhcb pool1_12
> psu addto pgroup lhcb pool2_03
> psu addto pgroup lhcb pool2_04
> psu addto pgroup lhcb pool1_09
> psu addto pgroup lhcb pool2_05
> psu addto pgroup lhcb pool2_01
> psu addto pgroup lhcb pool1_14
> psu addto pgroup lhcb pool1_19
> psu addto pgroup lhcb pool1_16
> psu addto pgroup lhcb pool2_06
> psu addto pgroup lhcb pool1_20
> psu addto pgroup lhcb pool1_18
> psu addto pgroup lhcb pool2_02
> psu addto pgroup lhcb pool1_17
> psu create pgroup zeus
> psu addto pgroup zeus pool1_12
> psu create pgroup planck
> psu create pgroup sixt
> psu create pgroup babar
> psu create pgroup cms
> psu addto pgroup cms pool1_12
> psu create pgroup pheno
> psu create pgroup cdf
> psu create pgroup magic
> psu create pgroup default
> psu create pgroup atlas
> psu addto pgroup atlas pool1_05
> psu addto pgroup atlas pool1_09
> psu addto pgroup atlas pool1_03
> psu addto pgroup atlas pool1_08
> psu addto pgroup atlas pool1_01
> psu addto pgroup atlas pool1_06
> psu addto pgroup atlas pool1_04
> psu addto pgroup atlas pool1_02
> psu addto pgroup atlas pool1_07
> psu create pgroup ngs
> psu create pgroup alice
> psu create pgroup ilc
> psu create pgroup t2k
> psu create pgroup biomed
> psu addto pgroup biomed pool1_12
> psu create pgroup dteam
> psu addto pgroup dteam pool1_23
> psu addto pgroup dteam pool1_27
> psu addto pgroup dteam pool1_25
> psu addto pgroup dteam pool1_24
> psu addto pgroup dteam pool1_28
> psu addto pgroup dteam pool1_26
> psu create pgroup fusion
> #
> # The links ...
> #
> psu create link ilc-link ilc-groups world-net
> psu set link ilc-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link ilc-link ilc
> psu create link geant4-link geant4-groups world-net
> psu set link geant4-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link geant4-link geant4
> psu create link lhcb-link world-net lhcb-groups
> psu set link lhcb-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link lhcb-link lhcb
> psu create link ngs-link ngs-groups world-net
> psu set link ngs-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link ngs-link ngs
> psu create link t2k-link t2k-groups world-net
> psu set link t2k-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link t2k-link t2k
> psu create link magic-link magic-groups world-net
> psu set link magic-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link magic-link magic
> psu create link babar-link babar-groups world-net
> psu set link babar-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link babar-link babar
> psu create link lhcb-link2 world-net lhcb-groups
> psu set link lhcb-link2 -readpref=20 -writepref=30 -cachepref=20 -p2ppref=-1
> psu add link lhcb-link2 lhcb2
> psu create link esr-link esr-groups world-net
> psu set link esr-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link esr-link esr
> psu create link planck-link planck-groups world-net
> psu set link planck-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link planck-link planck
> psu create link fusion-link fusion-groups world-net
> psu set link fusion-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link fusion-link fusion
> psu create link dteam-link dteam-groups world-net
> psu set link dteam-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link dteam-link dteam
> psu create link zeus-link zeus-groups world-net
> psu set link zeus-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link zeus-link zeus
> psu create link hone-link hone-groups world-net
> psu set link hone-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link hone-link hone
> psu create link sixt-link world-net sixt-groups
> psu set link sixt-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link sixt-link sixt
> psu create link biomed-link biomed-groups world-net
> psu set link biomed-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link biomed-link biomed
> psu create link ops-link ops world-net
> psu set link ops-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link ops-link ops
> psu create link cdf-link cdf-groups world-net
> psu set link cdf-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link cdf-link cdf
> psu create link na48-link na48-groups world-net
> psu set link na48-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link na48-link na48
> psu create link dzero-link dzero-groups world-net
> psu set link dzero-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link dzero-link dzero
> psu create link default-link any-store world-net
> psu set link default-link -readpref=10 -writepref=0 -cachepref=10 -p2ppref=-1
> psu add link default-link default
> psu create link cms-link cms-groups world-net
> psu set link cms-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link cms-link cms
> psu create link minos-link minos-groups world-net
> psu set link minos-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link minos-link minos
> psu create link alice-link alice-groups world-net
> psu set link alice-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link alice-link alice
> psu create link pheno-link pheno-groups world-net
> psu set link pheno-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
> psu add link pheno-link pheno
> psu create link atlas-link atlas-groups world-net
> psu set link atlas-link -readpref=20 -writepref=10 -cachepref=20 -p2ppref=-1
> psu add link atlas-link atlas
> #
> # The link Groups ...
> #
> # dteam, ops
> psu create linkGroup dteam-linkGroup
> psu set linkGroup custodialAllowed dteam-linkGroup false
> psu set linkGroup replicaAllowed dteam-linkGroup true
> psu set linkGroup nearlineAllowed dteam-linkGroup false
> psu set linkGroup outputAllowed dteam-linkGroup false
> psu set linkGroup onlineAllowed dteam-linkGroup true
> psu addto linkGroup dteam-linkGroup dteam-link
> psu addto linkGroup dteam-linkGroup ops-link
> # lhcb
> psu create linkGroup lhcb-linkGroup
> psu set linkGroup custodialAllowed lhcb-linkGroup false
> psu set linkGroup replicaAllowed lhcb-linkGroup true
> psu set linkGroup nearlineAllowed lhcb-linkGroup false
> psu set linkGroup outputAllowed lhcb-linkGroup false
> psu set linkGroup onlineAllowed lhcb-linkGroup true
> psu addto linkGroup lhcb-linkGroup lhcb-link2
> #
> # Submodule [rc] : class diskCacheV111.poolManager.RequestContainerV5
> #
> rc onerror suspend
> rc set max retries 3
> rc set retry 900
> rc set warning path billing
> rc set poolpingtimer 600
> rc set max restore unlimited
> rc set sameHostCopy besteffort
> rc set sameHostRetry notchecked
> rc set max threads 2147483647
> set pool decision -cpucostfactor=1.0 -spacecostfactor=2.0
> set costcuts -idle=0.0 -p2p=2.0 -alert=0.0 -halt=0.0 -fallback=0.0
> rc set p2p on
> rc set p2p oncost
> rc set stage oncost off
> rc set stage off
> rc set slope 0.0
> rc set max copies 500
>
>
> #
> # based on dCacheSetup.template $Revision: 1.33 $
> #
>
> #  -----------------------------------------------------------------------
> #          config/dCacheSetup
> #  -----------------------------------------------------------------------
> #   This is the central configuration file for a dCache instance. In most
> #   cases it should be possible to keep it identical across the nodes of
> #   one dCache instance.
> #
> #   This template contains all options that can possibly be used. Most
> #   may be left at the default value. If the option is commented out below
> #   it indicates the default value. If it is not commented out it is set
> #   to a reasonable value.
> #
> #   To get a dCache instance running it suffices to change the options:
> #    - java                     The java binary
> #    - serviceLocatorHost       The hostname of the admin node
> #   The other values should only be changed when advised to do so by the
> #   documentation.
> #
>
> #  -----------------------------------------------------------------------
> #          Service Location
> #  -----------------------------------------------------------------------
>
> #  ---- Service Locator Host and Port
> #   Adjust this to point to one unique server for one and only one
> #   dCache instance (usually the admin node)
> #
> serviceLocatorHost=srm.epcc.ed.ac.uk
> serviceLocatorPort=11111
>
> #  -----------------------------------------------------------------------
> #          Components
> #  -----------------------------------------------------------------------
>
> #  To activate Replica Manager you need to make changes in all 3 places:
> #   1) etc/node_config on ALL ADMIN NODES in this dcache instance.
> #   2) replicaSetup file on node where replica manager is running
> #   3) define Resilient pool group(s) in PoolManager.conf
>
> #  ---- Will Replica Manager be started?
> #   Values:  no, yes
> #   Default: no
> #
> #   This has to be set to 'yes' on every node, if there is a replica
> #   manager in this dCache instance. Where the replica manager is started
> #   is controlled in 'etc/node_config'. If it is not started and this is
> #   set to 'yes' there will be error messages in log/dCacheDomain.log. If
> #   this is set to 'no' and a replica manager is started somewhere, it will
> #   not work properly.
> #
> #
> #replicaManager=no
>
> #  ---- Which pool-group will be the group of resilient pools?
> #   Values:  <pool-Group-Name>, a pool-group name existing in the PoolManager.conf
> #   Default: ResilientPools
> #
> #   Only pools defined in pool group ResilientPools in config/PoolManager.conf
> #   will be managed by ReplicaManager. You shall edit config/PoolManager.conf
> #   to make replica manager work. To use another pool group defined
> #   in PoolManager.conf for replication, please specify group name by changing setting :
> #       #resilientGroupName=ResilientPools
> #   Please scroll down to "replica manager tuning" to make this and other changes.
>
> #  -----------------------------------------------------------------------
> #          Java Configuration
> #  -----------------------------------------------------------------------
>
> #  ---- The binary of the Java VM
> #   Adjust to the correct location.
> #
> # should point to <JDK>/bin/java
> #java="/usr/bin/java"
> java=/usr/java/jdk1.5.0_10/bin/java
>
> #
> #  ---- Options for the Java VM
> #   Do not change unless you know what you are doing.
> #   If the globus.tcp.port.range is changed, the
> #   variable 'clientDataPortRange' below has to be changed accordingly.
> #
> java_options="-server -Xmx512m -XX:MaxDirectMemorySize=512m \
>               -Dsun.net.inetaddr.ttl=1800 \
>               -Dorg.globus.tcp.port.range=50000,51000 \
>               -Djava.net.preferIPv4Stack=true \
>               -Dorg.dcache.dcap.port=0 \
>               -Dorg.dcache.net.tcp.portrange=51001:52000 \
>               -Dlog4j.configuration=file:${ourHomeDir}/config/log4j.properties \
>              "
> #   Option for Kerberos5 authentication:
> #              -Djava.security.krb5.realm=FNAL.GOV \
> #              -Djava.security.krb5.kdc=krb-fnal-1.fnal.gov \
> #   Other options that might be useful:
> #              -Dlog4j.configuration=/opt/d-cache/config/log4j.properties \
> #              -Djavax.security.auth.useSubjectCredsOnly=false \
> #              -Djava.security.auth.login.config=/opt/d-cache/config/jgss.conf \
> #              -Xms400m \
>
> #  ---- Classpath
> #   Do not change unless you know what you are doing.
> #
> classesDir=${ourHomeDir}/classes
> classpath=
>
> #  ---- Librarypath
> #   Do not change unless you know what you are doing.
> #   Currently not used. Might contain .so libraries for JNI
> #
> librarypath=${ourHomeDir}/lib
>
> #  -----------------------------------------------------------------------
> #          Filesystem Locations
> #  -----------------------------------------------------------------------
>
> #  ---- Location of the configuration files
> #   Do not change unless you know what you are doing.
> #
> config=${ourHomeDir}/config
>
> #  ---- Location of the ssh
> #   Do not change unless you know what you are doing.
> #
> keyBase=${ourHomeDir}/config
>
> #  ---- SRM/GridFTP authentication file
> #   Do not change unless you know what you are doing.
> #
> kpwdFile=${ourHomeDir}/etc/dcache.kpwd
>
>
> #  -----------------------------------------------------------------------
> #         pool tuning
> #  -----------------------------------------------------------------------
> #   Do not change unless you know what you are doing.
> #
> # poolIoQueue=
> # checkRepository=true
> # waitForRepositoryReady=false
> #
> #  ---- Which meta data repository implementation to use.
> #    Values: org.dcache.pool.repository.meta.file.FileMetaDataRepository
> #            org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
> #    Default: org.dcache.pool.repository.meta.file.FileMetaDataRepository
> #
> #   Selects which meta data repository implementation to use. This is
> #   essentially a choice between storing meta data in a large number
> #   of small files in the control/ directory, or to use the embedded
> #   Berkeley database stored in the meta/ directory (both directories
> #   placed in the pool directory).
> #
> # metaDataRepository=org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
> #
> #  ---- Which meta data repository to import from.
> #    Values: org.dcache.pool.repository.meta.file.FileMetaDataRepository
> #            org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
> #            ""
> #    Default: ""
> #
> #   Selects which meta data repository to import data from if the
> #   information is missing from the main repository. This is useful
> #   for converting from one repository implementation to another,
> #   without having to fetch all the information from the central PNFS
> #   manager.
> #
> # metaDataRepositoryImport=org.dcache.pool.repository.meta.file.FileMetaDataRepository
> #
> #  -----------------------------------------------------------------------
> #         gPlazma tuning
> #  -----------------------------------------------------------------------
> #   Do not change unless you know what you are doing.
> #
> gplazmaPolicy=${ourHomeDir}/etc/dcachesrm-gplazma.policy
> #
> #gPlazmaNumberOfSimutaneousRequests  30
> #gPlazmaRequestTimeout               30
> #
> useGPlazmaAuthorizationModule=false
> useGPlazmaAuthorizationCell=true
> #delegateToGPlazma=false
> #
> #
> #  -----------------------------------------------------------------------
> #         dcap tuning
> #  -----------------------------------------------------------------------
> #
> # gsidcapIoQueue=
> # gsidcapIoQueueOverwrite=denied
> # gsidcapMaxLogin=1500
> # dcapIoQueue=
> # dcapIoQueueOverwrite=denied
> # dcapMaxLogin=1500
> #
> #  -----------------------------------------------------------------------
> #         gsiftp tuning
> #  -----------------------------------------------------------------------
> #
> #  ---- Seconds between GridFTP performance markers
> #   Set  performanceMarkerPeriod to 180 to get performanceMarkers
> #   every 3 minutes.
> #   Set to 0 to disable performance markers.
> #   Default: 180
> #
> performanceMarkerPeriod=10
> #
> # gsiftpPoolManagerTimeout=5400
> # gsiftpPoolTimeout=600
> # gsiftpPnfsTimeout=300
> # gsiftpMaxRetries=80
> # gsiftpMaxStreamsPerClient=10
> # gsiftpDeleteOnConnectionClosed=true
> # gsiftpMaxLogin=100
> # clientDataPortRange=20000:25000
> # gsiftpIoQueue=
> # gsiftpAdapterInternalInterface=
> # remoteGsiftpIoQueue=
> # FtpTLogDir=
> #
> #  ---- May pools accept incoming connections for GridFTP transfers?
> #    Values: 'true', 'false'
> #    Default: 'false' for FTP doors, 'true' for pools
> #
> #    If set to true, pools are allowed to accept incoming connections
> #    for FTP transfers. This only affects passive transfers. Only passive
> #    transfers using GFD.47 GETPUT (aka GridFTP 2) can be redirected to
> #    the pool. Other passive transfers will be channelled through a
> #    proxy component at the FTP door. If set to false, all passive
> #    transfers go through a proxy.
> #
> #    This setting is interpreted by both FTP doors and pools, with
> #    different defaults. If set to true at the door, then the setting
> #    at the individual pool will be used.
> #
> # gsiftpAllowPassivePool=false
> #
> #
> #  -----------------------------------------------------------------------
> #         common to gsiftp and srm
> #  -----------------------------------------------------------------------
> #
> srmSpaceManagerEnabled=yes
> #
> # will have no effect if srmSpaceManagerEnabled is "no"
> srmImplicitSpaceManagerEnabled=yes
> # overwriteEnabled=no
> #
> #  ---- Image and style directories for the dCache-internal web server
> #   Do not change unless you know what you are doing.
> #
> images=${ourHomeDir}/docs/images
> styles=${ourHomeDir}/docs/styles
>
> #  -----------------------------------------------------------------------
> #          Network Configuration
> #  -----------------------------------------------------------------------
>
> #  ---- Port Numbers for the various services
> #   Do not change unless you know what you are doing.
> #
> portBase=22
> dCapPort=${portBase}125
> ftpPort=${portBase}126
> kerberosFtpPort=${portBase}127
> dCapGsiPort=${portBase}128
> #gsiFtpPortNumber=2811
> srmPort=8443
> xrootdPort=1094
>
> #  ---- GridFTP port range
> #   Do not change unless you know what you are doing.
> #
> clientDataPortRange=50000:52000
>
> #  ---- Port Numbers for the monitoring and administration
> #   Do not change unless you know what you are doing.
> #
> adminPort=${portBase}223
> httpdPort=${portBase}88
> sshPort=${portBase}124
> #   Telnet is only started if the telnetPort line is uncommented.
> #   Debug only.
> #telnetPort=${portBase}123
> #
> #  -----------------------------------------------------------------------
> #        Maintenance Module Setup
> #  -----------------------------------------------------------------------
> #
> # maintenanceLibPath=${ourHomeDir}/var/lib/dCache/maintenance
> # maintenanceLibAutogeneratePaths=true
> # maintenanceLogoutTime=18000
> #
>
> #  -----------------------------------------------------------------------
> #          Database Configuration
> #  -----------------------------------------------------------------------
> #   The variable 'srmDbHost' is obsolete. For compatibility reasons,
> #   it is still used if it is set and if the following variables are
> #   not set
>
> #   The current setup assumes that one or more PostgreSQL servers are
> #   used by the various dCache components. Currently the database user
> #   'srmdcache' with password 'srmdcache' is used by all components.
> #   They use the databases 'dcache', 'replicas', 'companion',
> #   'billing'.  However, these might be located on separate hosts.
>
> #   The best idea is to have the database server running on the same
> #   host as the dCache component which accesses it. Therefore, the
> #   default value for the following variables is 'localhost'.
> #   Uncomment and change these variables only if you have a reason to
> #   deviate from this scheme.
>
> #   (One possibility would be, to put the 'billing' DB on another host than
> #   the pnfs server DB and companion, but keep the httpDomain on the admin
> #   host.)
>
> #  ---- pnfs Companion Database Host
> #   Do not change unless you know what you are doing.
> #   - Database name: companion
> #
> #companionDatabaseHost=localhost
>
> #  ---- SRM Database Host
> #   Do not change unless you know what you are doing.
> #   - Database name: dcache
> #   - If srmDbHost is set and this is not set, srmDbHost is used.
> #
> #srmDatabaseHost=localhost
>
> #  ---- Space Manager Database Host
> #   Do not change unless you know what you are doing.
> #   - Database name: dcache
> #   - If srmDbHost is set and this is not set, srmDbHost is used.
> #
> #spaceManagerDatabaseHost=localhost
>
> #  ---- Pin Manager Database Host
> #   Do not change unless you know what you are doing.
> #   - Database name: dcache
> #   - If srmDbHost is set and this is not set, srmDbHost is used.
> #
> #pinManagerDatabaseHost=localhost
>
> #  ---- Replica Manager Database Host
> #   Do not change unless you know what you are doing.
> #   - Database name: replicas
> #
> # ----------------------------------------------------------------
> #   replica manager tuning
> # ----------------------------------------------------------------
> #
> # replicaManagerDatabaseHost=localhost
> # replicaDbName=replicas
> # replicaDbUser=srmdcache
> # replicaDbPassword=srmdcache
> # replicaPasswordFile=""
> # resilientGroupName=ResilientPools
> # replicaPoolWatchDogPeriod=600
> # replicaWaitDBUpdateTimeout=600
> # replicaExcludedFilesExpirationTimeout=43200
> # replicaDelayDBStartTimeout=1200
> # replicaAdjustStartTimeout=1200
> # replicaWaitReplicateTimeout=43200
> # replicaWaitReduceTimeout=43200
> # replicaDebug=false
> # replicaMaxWorkers=6
> # replicaMin=2
> # replicaMax=3
> #
>
>
> #  ---- Transfer / TCP Buffer Size
> #   Do not change unless you know what you are doing.
> #
> bufferSize=1048576
> tcpBufferSize=1048576
>
> #  ---- Allow overwrite of existing files via GSIdCap
> #   allow=true, disallow=false
> #
> truncate=false
>
> #  ---- pnfs Mount Point for (Grid-)FTP
> #   The current FTP door needs pnfs to be mounted for some file existence
> #   checks and for the directory listing. Therefore it needs to know
> #   where pnfs is mounted. In future the Ftp and dCap daemons will
> #   ask the pnfsManager cell for help and the directory listing is
> #   done by a DirListPool.
> ftpBase=/pnfs/ftpBase
>
> #  -----------------------------------------------------------------------
> #          pnfs Manager Configuration
> #  -----------------------------------------------------------------------
> #
> #  ---- pnfs Mount Point
> #   The mount point of pnfs on the admin node. Default: /pnfs/fs
> #
> pnfs=/pnfs/fs
>
> #   An older version of the pnfsManager actually autodetects the
> #   possible pnfs filesystems. The ${defaultPnfsServer} is chosen
> #   from the list and used as primary pnfs filesystem. (currently the
> #   others are ignored).  The ${pnfs} variable can be used to override
> #   this mechanism.
> #
> # defaultPnfsServer=localhost
> #
> #   -- leave this unless you are running an enstore HSM backend.
> #
> # pnfsInfoExtractor=diskCacheV111.util.OsmInfoExtractor
> #
> #   -- depending on the power of your pnfs server host you may
> #      set this to up to 50.
> #
> # pnfsNumberOfThreads=4
> #
> #   -- don't change this
> #
> # namespaceProvider=diskCacheV111.namespace.provider.BasicNameSpaceProviderFactory
> #
> #   --- change this if you configured your postgres instance
> #       other than described in the Book.
> #
> # pnfsDbUser=srmdcache
> # pnfsDbPassword=srmdcache
> # pnfsPasswordFile=
> #
> #  ---- Storage Method for cacheinfo: companion or pnfs
> #   Values:  'companion' -- cacheinfo will be stored in separate DB
> #            other or missing -- cacheinfo will be stored in pnfs
> #   Default: 'pnfs' -- for backward compatibility of old dCacheSetup files
> #
> #   'companion' is the default for new installs. Old installations have
> #   to use 'pnfs register' in every pool after switching from 'pnfs' to
> #   'companion'. See the documentation.
> #
> cacheInfo=companion
> #
> #
> #
>
>
> #  ---- Location of the trash directory
> #   The cleaner (which can only run on the pnfs server machine itself)
> #   autodetects the 'trash' directory.  Non-empty 'trash' overwrites the
> #   autodetect.
> #
> #trash=
>
> #   The cleaner stores persistency information in subdirectories of
> #   the following directory.
> #
> # cleanerDB=/opt/pnfsdb/pnfs/trash/2
> # cleanerRefresh=120
> # cleanerRecover=240
> # cleanerPoolTimeout=100
> # cleanerProcessFilesPerRun=500
> # cleanerArchive=none
> #
>
> #  ---- Whether to enable the HSM cleaner
> #   Values:  'disabled', 'enabled'
> #   Default: 'disabled'
> #
> #   The HSM cleaner scans the PNFS trash directory for deleted
> #   files stored on an HSM and sends a request to an attached
> #   pool to delete that file from the HSM.
> #
> #   The HSM cleaner by default runs in the PNFS domain. To
> #   enable the cleaner, this setting needs to be set to enabled
> #   at the PNFS domain *and* at all pools that are supposed
> #   to delete files from an HSM.
> #
> # hsmCleaner=disabled
> #
> #
> #  ---- Location of trash directory for files on tape
> #   The HSM cleaner periodically scans this directory to
> #   detect deleted files.
> #
> # hsmCleanerTrash=/opt/pnfsdb/pnfs/1
> #
> #  ---- Location of repository directory of the HSM cleaner
> #   The HSM cleaner uses this directory to store information
> #   about files it could not clean right away. The cleaner
> #   will reattempt to clean the files later.
> #
> # hsmCleanerRepository=/opt/pnfsdb/pnfs/1/repository
> #
> #  ---- Interval between scans of the trash directory
> #   Specifies the time in seconds between two scans of the
> #   trash directory.
> #
> # hsmCleanerScan=90
> #
> #  ---- Interval between retries
> #   Specifies the time in seconds between two attempts to
> #   clean files stored in the cleaner repository.
> #
> # hsmCleanerRecover=3600
> #
> #  ---- Interval between flushing failures to the repository
> #   When the cleaner fails to clean a file, information about this
> #   file is added to the repository. This setting specifies the time
> #   in seconds between flushes to the repository. Until then the
> #   information is kept in memory and in the trash directory.
> #
> #   Each flush will create a new file. A lower value will cause the
> #   repository to be split into more files. A higher value will cause
> #   a higher memory usage and a larger number of files in the trash
> #   directory.
> #
> # hsmCleanerFlush=60
> #
> #  ---- Max. length of in memory queue of files to clean
> #   When the trash directory is scanned, information about deleted
> #   files is queued in memory. This setting specifies the maximum
> #   length of this queue. When the queue length is reached, scanning
> #   is suspended until files have been cleaned or flushed to the
> #   repository.
> #
> # hsmCleanerCleanerQueue=10000
> #
> #  ---- Timeout for pool communication
> #   Files are cleaned from an HSM by sending a message to a pool to
> #   do so. This specifies the timeout in seconds after which the
> #   operation is considered failed.
> #
> # hsmCleanerTimeout=120
> #
> #  ---- Maximum concurrent requests to a single HSM
> #   Files are cleaned in batches. This specifies the largest number
> #   of files to include in a batch per HSM.
> #
> # hsmCleanerRequest=100
> #
> #  -----------------------------------------------------------------------
> #         Directory Pools
> #  -----------------------------------------------------------------------
> #
> #directoryPoolPnfsBase=/pnfs/fs
> #
>
> #  -----------------------------------------------------------------------
> #          Srm Settings for experts
> #  -----------------------------------------------------------------------
> #
> # srmVersion=version1
> # pnfsSrmPath=/
> parallelStreams=1
>
> #srmAuthzCacheLifetime=60
>
> # srmGetLifeTime=14400000
> # srmPutLifeTime=14400000
> # srmCopyLifeTime=14400000
>
>
> # srmTimeout=3600
> # srmVacuum=true
> # srmVacuumPeriod=21600
> # srmProxiesDirectory=/tmp
> # srmBufferSize=1048576
> # srmTcpBufferSize=1048576
> # srmDebug=true
>
> # srmGetReqThreadQueueSize=10000
> # srmGetReqThreadPoolSize=250
> # srmGetReqMaxWaitingRequests=1000
> # srmGetReqReadyQueueSize=10000
> # srmGetReqMaxReadyRequests=2000
> # srmGetReqMaxNumberOfRetries=10
> # srmGetReqRetryTimeout=60000
> # srmGetReqMaxNumOfRunningBySameOwner=100
>
> # srmPutReqThreadQueueSize=10000
> # srmPutReqThreadPoolSize=250
> # srmPutReqMaxWaitingRequests=1000
> # srmPutReqReadyQueueSize=10000
> # srmPutReqMaxReadyRequests=1000
> # srmPutReqMaxNumberOfRetries=10
> # srmPutReqRetryTimeout=60000
> # srmPutReqMaxNumOfRunningBySameOwner=100
>
> # srmCopyReqThreadQueueSize=10000
> # srmCopyReqThreadPoolSize=250
> # srmCopyReqMaxWaitingRequests=1000
> # srmCopyReqMaxNumberOfRetries=10
> # srmCopyReqRetryTimeout=60000
> # srmCopyReqMaxNumOfRunningBySameOwner=100
>
> # srmPoolManagerTimeout=300
> # srmPoolTimeout=300
> # srmPnfsTimeout=300
> # srmMoverTimeout=7200
> # remoteCopyMaxTransfers=150
> # remoteHttpMaxTransfers=30
> # remoteGsiftpMaxTransfers=${srmCopyReqThreadPoolSize}
>
> #
> # srmDbName=dcache
> # srmDbUser=srmdcache
> # srmDbPassword=srmdcache
> # srmDbLogEnabled=false
> #
> # This variable enables logging of the history
> # of the srm request transitions in the database
> # so that it can be examined through the srmWatch
> # monitoring tool
> # srmJdbcMonitoringLogEnabled=false
> #
> # turning this on turns off the latest changes that made service
> # to honor the srm client's protocol list order for
> # get/put commands
> # this is needed temporarily to support old srmcp clients
> # srmIgnoreClientProtocolOrder=false
>
> #
> #  -- Set this to /root/.pgpass in case
> #     you need to have better security.
> #
> # srmPasswordFile=
> #
> #  -- Set this to true if you want overwrite to be enabled for
> #     srm v1.1 interface as well as for srm v2.2 interface when
> #     client does not specify desired overwrite mode.
> #     This option will be considered only if overwriteEnabled is
> #     set to yes (or true)
> #
> # srmOverwriteByDefault=false
>
> # ----srmCustomGetHostByAddr enables using the BNL developed
> #  procedure for host by ip resolution if standard
> # InetAddress method failed
> # srmCustomGetHostByAddr=false
>
> #  ---- Allow automatic creation of directories via SRM
> #   allow=true, disallow=false
> #
> RecursiveDirectoryCreation=true
>
> #  ---- Allow delete via SRM
> #   allow=true, disallow=false
> #
> AdvisoryDelete=true
> #
> # pinManagerDatabaseHost=${srmDbHost}
> # spaceManagerDatabaseHost=${srmDbHost}
> #
> # ----if space reservation request does not specify retention policy
> #     we will assign this retention policy by default
> SpaceManagerDefaultRetentionPolicy=REPLICA
> #
> # ----if space reservation request does not specify access latency
> #     we will assign this access latency by default
> SpaceManagerDefaultAccessLatency=ONLINE
> #
> # ----if the transfer request comes from the door, and there was no prior
> #     space reservation made for this file, should we try to reserve
> #     space before satisfying the request
> SpaceManagerReserveSpaceForNonSRMTransfers=true
>
> # LinkGroupAuthorizationFile contains the list of FQANs that are allowed to
> # make space reservations in a given link group
> SpaceManagerLinkGroupAuthorizationFileName=/opt/d-cache/etc/LinkGroupAuthorization.conf
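> # For reference, a minimal sketch of what LinkGroupAuthorization.conf might
> # contain (link group name and FQANs are illustrative; check the exact
> # syntax against the dCache documentation for your release):
> #
> #   LinkGroup dteam-linkGroup
> #   /dteam/Role=production
> #   /dteam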
>
> #
>
> #  -----------------------------------------------------------------------
> #          Logging Configuration
> #  -----------------------------------------------------------------------
>
> #  ---- Directory for the Log Files
> #   Default: ${ourHomeDir}/log/  (if unset or empty)
> #
> logArea=/var/log
>
> #  ---- Restart Behaviour
> #   Values:  'new' -- logfiles will be moved to LOG.old at restart.
> #            other or missing -- logfiles will be appended at restart.
> #   Default: 'keep'
> #
> #logMode=keep
>
> #  -----------------------------------------------------------------------
> #       Billing / Accounting
> #  -----------------------------------------------------------------------
>
> #   The directory the billing logs are written to
> billingDb=${ourHomeDir}/billing
>
> #   Set to 'yes' if billing information should be written
> #   to a PostgreSQL database.
> #   A database called 'billing' has to be created there.
> billingToDb=yes
>
> #   The PostgreSQL database host:
> #billingDatabaseHost=localhost
>
> #   EXPERT: First is default if billingToDb=no, second for billingToDb=yes
> #   Do NOT put passwords in setup file! They can be read by anyone logging into
> #   the dCache admin interface!
> #billingDbParams=
> #billingDbParams="\
> #                 -useSQL \
> #                 -jdbcUrl=jdbc:postgresql://${billingDatabaseHost}/billing \
> #                 -jdbcDriver=org.postgresql.Driver \
> #                 -dbUser=srmdcache \
> #                 -dbPass=srmdcache \
> #                "
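> # With billingToDb=yes, the 'billing' database mentioned above can be
> # created on the PostgreSQL host with something along the lines of
> # (user name illustrative, matching the defaults above):
> #   createdb -U srmdcache billing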
>
> #  -----------------------------------------------------------------------
> #       Info Provider
> #  -----------------------------------------------------------------------
> #
> #   The following variables are used by the dynamic info provider, which
> #   integrates dCache as a storage element into the LCG information
> #   system. All variables are used by the client side of the dynamic
> #   info provider, which is called regularly by the LCG GIP (generic
> #   info provider). It consists of the two scripts
> #     jobs/infoDynamicSE-plugin-dcache
> #     jobs/infoDynamicSE-provider-dcache
> #
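> #   (A quick sanity check, assuming these behave like ordinary GIP
> #   providers: running jobs/infoDynamicSE-provider-dcache by hand should
> #   print the dynamic LDIF that the GIP would publish.)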
>
> #  ---- Seconds between information retrievals
> #   Default: 180
> #infoCollectorInterval=180
>
> #  ---- The static file used by the GIP
> #   This is also used by the plugin to determine the info it should
> #   output.
> #   Default: /opt/lcg/var/gip/ldif/lcg-info-static-se.ldif
> infoProviderStaticFile=/opt/lcg/var/gip/ldif/static-file-SE.ldif
>
> #  ---- The host where the InfoCollector cell runs
> #   Default: localhost
> #infoCollectorHost=localhost
>
> #  ---- The port where the InfoCollector cell will listen
> #   This will be used by the InfoCollector cell as well as the dynamic
> #   info provider scripts
> #   Default: 22111
> #infoCollectorPort=22111
>
>
>
> # ------------------------------------------------------------------------
> #    Statistics module
> # ------------------------------------------------------------------------
>
> #  ---- Directory where the statistics will be stored
> statisticsLocation=${ourHomeDir}/statistics
>
> # ------------------------------------------------------------------------
> # xrootd section
> # ------------------------------------------------------------------------
> #
> #   forbids write access in general (to avoid unauthenticated writes). Overrides all other authorization settings.
> # xrootdIsReadOnly=true
> #
> #   allow write access only to selected paths (and their subdirectories). Overrides any remote authorization settings (e.g. from the file catalogue)
> # xrootdAllowedPaths=/path1:/path2:/path3
> #
> #       This allows authorization to be enabled in the xrootd door by specifying a valid
> #       authorization plugin. There is only one plugin at the moment, implementing token-based
> #       authorization controlled by a remote file catalogue. This requires an additional
> #   'keystore' parameter holding the keypairs needed by the authorization plugin. A template
> #   keystore can be found in ${ourHomeDir}/etc/keystore.temp.
>
> # xrootdAuthzPlugin=org.dcache.xrootd.security.plugins.tokenauthz.TokenAuthorizationFactory
> # xrootdAuthzKeystore=${ourHomeDir}/etc/keystore
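> #       To enable this, one would presumably copy the template keystore and
> #       fill in the required keypairs before pointing xrootdAuthzKeystore at it:
> #         cp ${ourHomeDir}/etc/keystore.temp ${ourHomeDir}/etc/keystore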
>
> #       the mover queue on the pool to which this request gets scheduled
> # xrootdIoQueue=
>
>
