On 30/07/15 17:04, Matthew Mottram wrote:
> Huh! This works - any reason the extra ‘/’ is required compared to the
> SURL?
I think the path after the host for xroot can be an absolute path
(host//somedir/file) or relative to a predefined prefix (host/file); the
extra / indicates an absolute path. Or it could just be awkward - I'm not
a root expert!
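If it helps, the rule can be sketched in a few lines of Python (just an
illustration - the helper name and the shortened example path are made up,
not anything from DPM or xrootd itself):

```python
def surl_path_to_turl(host, path):
    """Build an xrootd TURL whose path is treated as absolute.

    root://host/PATH is resolved against a server-side prefix, while
    root://host//PATH (note the double slash) is an absolute namespace
    path -- hence the extra '/' in the working TURL.
    """
    if not path.startswith("/"):
        path = "/" + path
    # The '/' after the host plus the leading '/' of path yields '//'.
    return "root://{0}/{1}".format(host, path)

print(surl_path_to_turl("t2se01.physics.ox.ac.uk",
                        "/dpm/physics.ox.ac.uk/home/snoplus.snolab.ca/some/file.root"))
# root://t2se01.physics.ox.ac.uk//dpm/physics.ox.ac.uk/home/snoplus.snolab.ca/some/file.root
```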
> For QMUL the remote loading still hangs, followed by an error
QMUL is a special case as they use StoRM/Lustre rather than DPM or
dCache. I don't know the status of the XROOTD plugin for that. I would
expect most, if not all, other UK sites to have working XROOTD servers.
John
> like: Error in <TNetXNGFile::Open>: [FATAL] Connection error. I guess
> I’ll go through and compile a list of sites that we have problems at and
> ticket them. Is there someone I can cc on the ticket to help resolve
> issues at each site?
>
> Cheers,
> Matt
>
>> On 30 Jul 2015, at 16:57, John Bland <[log in to unmask]> wrote:
>>
>>
>> Hi,
>>
>> I think this is a simple typo. Although it looks optional the extra
>> '/' before dpm/ in the TURL is required, something like this should work
>>
>> xrdcp root://t2se01.physics.ox.ac.uk//dpm/physics.ox.ac.uk/home/snoplus.snolab.ca/production/TeLoadedPeryleneTl208_Hdropes/r5200/TeLoadedPeryleneTl208_Hdropes_r5206_s0_p0.ntuple.root local_file.ntuple.root
>>
>> John
>>
>> On 30/07/15 16:09, Matthew Mottram wrote:
>>> FYI, I get a similar error when using xrdcp:
>>>
>>> mottram@heppc401:~$ xrdcp
>>> root://t2se01.physics.ox.ac.uk/dpm/physics.ox.ac.uk/home/snoplus.snolab.ca/production/TeLoadedPeryleneTl208_Hdropes/r5200/TeLoadedPeryleneTl208_Hdropes_r5206_s0_p0.ntuple.root
>>> local_file.ntuple.root
>>> [0B/0B][100%][==================================================][0B/s]
>>> Run: [ERROR] Server responded with an error: [3010] Opening relative
>>> path
>>> 'dpm/physics.ox.ac.uk/home/snoplus.snolab.ca/production/TeLoadedPeryleneTl208_Hdropes/r5200/TeLoadedPeryleneTl208_Hdropes_r5206_s0_p0.ntuple.root'
>>> is disallowed.
>>>
>>>
>>>
>>>
>>>> On 30 Jul 2015, at 16:07, Matthew Mottram <[log in to unmask]> wrote:
>>>>
>>>> Hi Sam,
>>>>
>>>> Maybe I’m doing something wrong then; I was just running an
>>>> interactive ROOT session locally at QMUL (with a valid proxy) and
>>>> tried loading files from the QMUL and Oxford SEs:
>>>>
>>>> For Oxford I get an error:
>>>>
>>>> root [1] TFile* tf =
>>>> TFile::Open("root://t2se01.physics.ox.ac.uk/dpm/physics.ox.ac.uk/home/snoplus.snolab.ca/production/TeLoadedPeryleneTl208_Hdropes/r5200/TeLoadedPeryleneTl208_Hdropes_r5206_s0_p0.ntuple.root")
>>>> Error in <TNetXNGFile::Open>: [ERROR] Server responded with an error: [3010]
>>>> Opening relative path
>>>> 'dpm/physics.ox.ac.uk/home/snoplus.snolab.ca/production/TeLoadedPeryleneTl208_Hdropes/r5200/TeLoadedPeryleneTl208_Hdropes_r5206_s0_p0.ntuple.root'
>>>> is disallowed.
>>>>
>>>> For QMUL the command just hangs:
>>>>
>>>> root [0] TFile* tf
>>>> =
>>>> TFile::Open("root://se03.esc.qmul.ac.uk/snoplus.snolab.ca/production/TeLoadedPeryleneY88/r100/TeLoadedPeryleneY88_r159_s0_p0.ntuple.root")
>>>>
>>>> Is there something else I should do? I was just following
>>>> instructions from a Liverpool Wiki page
>>>> (https://hep.ph.liv.ac.uk/twiki/bin/view/Computing/GridStorageGuide#XROOTD).
>>>>
>>>> Cheers,
>>>> Matt
>>>>
>>>>
>>>>> On 30 Jul 2015, at 10:40, Sam Skipsey <[log in to unmask]> wrote:
>>>>>
>>>>> Following up on this, and as a test, I just successfully copied a
>>>>> test file (as the dteam VO) from our SE to a UI using xrdcp. There's
>>>>> no special VO configuration for dteam on our SE, so this demonstrates
>>>>> that VO specific configuration isn't needed for xrootd at a DPM site.
>>>>> (And it shouldn't be needed for dCache sites, either.)
>>>>>
>>>>> Sam
>>>>>
>>>>> On Thu, 30 Jul 2015 at 10:29 Sam Skipsey <[log in to unmask]> wrote:
>>>>>
>>>>> hi Matt,
>>>>>
>>>>> I'm not sure how other SE implementations set up xrootd access,
>>>>> but DPM sites should already support (all) VOs for xrootd access.
>>>>> (There's VO-specific stuff for federation of sites, but that's
>>>>> really only a feature that ATLAS and CMS use.)
>>>>>
>>>>> Specifically, looking at the values that Dan mentions, on our
>>>>> xrootd config here, I see:
>>>>>
>>>>> xrootd.export /
>>>>>
>>>>> (we don't have an auth file for DPM sites, as authorisation is
>>>>> delegated to the DPM itself).
>>>>>
>>>>> Can you tell us which SE endpoints you've tried using XrootD to
>>>>> access, so we can do some debugging?
>>>>>
>>>>>
>>>>> Sam
>>>>>
>>>>>
>>>>> On Thu, 30 Jul 2015 at 10:24 Matthew Mottram
>>>>> <[log in to unmask]> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> Is there any update following Dan’s comments on this? Right
>>>>> now I’m unable to load any SNO+ files over XRootD. I assume
>>>>> this is because access is not set up for the
>>>>> snoplus.snolab.ca VO.
>>>>>
>>>>> This is an issue of increasing importance for us - right now
>>>>> we have no way to manage reprocessing of our raw datasets.
>>>>>
>>>>> Cheers,
>>>>> Matt
>>>>>
>>>>>> On 22 Jul 2015, at 09:45, Daniel Traynor
>>>>>> <[log in to unmask]> wrote:
>>>>>>
>>>>>> But aren't the xrootd instances that sites have set up
>>>>>> restricted to atlas or cms only? I.e. in
>>>>>>
>>>>>> /etc/xrootd/auth_file
>>>>>>
>>>>>> I have only
>>>>>>
>>>>>> u * /atlas rl
>>>>>>
>>>>>> and in /etc/xrootd/xrootd-clustered.cfg
>>>>>>
>>>>>> I have
>>>>>>
>>>>>> all.export /atlas r/o
>>>>>>
>>>>>> which I understand as only allowing access to atlas
>>>>>>
>>>>>> how would you set up an xrootd server for another VO like
>>>>>> snoplus?
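>>>>>> My guess - untested, and the exact namespace path is a guess
>>>>>> too - would be adding parallel entries for the extra VO, i.e.
>>>>>> in /etc/xrootd/auth_file:
>>>>>>
>>>>>> u * /atlas rl
>>>>>> u * /snoplus.snolab.ca rl
>>>>>>
>>>>>> and in /etc/xrootd/xrootd-clustered.cfg:
>>>>>>
>>>>>> all.export /atlas r/o
>>>>>> all.export /snoplus.snolab.ca r/o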
>>>>>>
>>>>>> dan
>>>>>>
>>>>>> * Dr Daniel Traynor, Grid cluster system manager
>>>>>> * Tel +44(0)20 7882 6560, Particle Physics,QMUL
>>>>>>
>>>>>> ________________________________________
>>>>>> From: Testbed Support for GridPP member institutes
>>>>>> <[log in to unmask]> on behalf of Brian Davies
>>>>>> <[log in to unmask]>
>>>>>> Sent: 22 July 2015 09:14
>>>>>> To: [log in to unmask]
>>>>>> Subject: Re: Remote loading of Grid files for SNO+
>>>>>>
>>>>>> Hi Matt,
>>>>>> The xrootd protocol should allow you to access non-ROOT
>>>>>> format files as well as ROOT format files and will allow you
>>>>>> to stream the data from the storage system rather than
>>>>>> copying the whole file to the local disk on the worker node.
>>>>>> (Shaun DeWitt has some sample code for this if you are not
>>>>>> already doing so.)
>>>>>> Brian
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Testbed Support for GridPP member institutes
>>>>>> [mailto:[log in to unmask]] On Behalf Of Matthew Mottram
>>>>>> Sent: 20 July 2015 10:47
>>>>>> To: [log in to unmask]
>>>>>> Subject: Remote loading of Grid files for SNO+
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Gareth Smith suggested that someone on the TB support might
>>>>>> be able to help on the issue of loading files remotely for
>>>>>> SNO+. Our processing jobs need to be able to run over large
>>>>>> datasets in single jobs (variable but potentially around 100
>>>>>> GB). Until now we’ve just been splitting these jobs, with
>>>>>> each sub-job downloading single files to local disk (via
>>>>>> lcg-cp or gfal-copy) and then processing. However, we’ve
>>>>>> reached a point where we now know we need to process the
>>>>>> entire dataset in a single job. I know that we should
>>>>>> instead load files over the local network, but it’s not clear to
>>>>>> me how to do this with e.g. XRootD. Additionally, we may
>>>>>> have some non-ROOT format files, in which case we would want
>>>>>> some way to map the LFN/SURL to a local path (while this
>>>>>> is possible with lustre, I’m guessing it might not be
>>>>>> possible with other storage systems).
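>>>>>> (For Lustre the mapping I have in mind is just a prefix
>>>>>> substitution - the mount point below is a guess, not the real
>>>>>> QMUL path:
>>>>>>
>>>>>> SURL:  srm://se03.esc.qmul.ac.uk/snoplus.snolab.ca/production/TeLoadedPeryleneY88/r100/TeLoadedPeryleneY88_r159_s0_p0.ntuple.root
>>>>>> local: /mnt/lustre/snoplus.snolab.ca/production/TeLoadedPeryleneY88/r100/TeLoadedPeryleneY88_r159_s0_p0.ntuple.root
>>>>>>
>>>>>> i.e. strip the srm://host part and prepend the local mount.)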
>>>>>>
>>>>>> I’ve compiled ROOT with XRootD, but as I understand it there
>>>>>> are some extra configurations that we may need at each site
>>>>>> in order to support our loading files from the SURL
>>>>>> directly. If anyone could advise me of the necessary steps
>>>>>> (and whether these will also allow for loading of files from
>>>>>> non-grid nodes as well - as some of our processing is better
>>>>>> run on local batch systems) then I’d be very grateful.
>>>>>>
>>>>>> The output datasets of the jobs will also be large. We can
>>>>>> have a script that runs in parallel with the jobs to push
>>>>>> outputs to Grid storage as they are produced, but if there’s
>>>>>> a better approach (e.g. writing directly to Grid storage) then I’d be
>>>>>> interested to hear of it.
>>>>>>
>>>>>> Thanks,
>>>>>> Matt
>>>>>>
>>>>>> -------------------------------------------------
>>>>>> Matthew Mottram
>>>>>> School of Physics and Astronomy
>>>>>> Queen Mary, University of London
>>>>>> -------------------------------------------------
>>>>>
>>>>
>>>
>>
>>
>> --
>> John Bland [log in to unmask]
>> System Administrator office: 220
>> High Energy Physics Division tel (int): 42911
>> Oliver Lodge Laboratory tel (ext): +44 (0)151 794 2911
>> University of Liverpool http://www.liv.ac.uk/physics/hep/
>> "I canna change the laws of physics, Captain!"
>
--
John Bland [log in to unmask]
System Administrator office: 220
High Energy Physics Division tel (int): 42911
Oliver Lodge Laboratory tel (ext): +44 (0)151 794 2911
University of Liverpool http://www.liv.ac.uk/physics/hep/
"I canna change the laws of physics, Captain!"