The first certainly sounds like a proxy problem.
We can limit the transfer rate in FTS if it turns out to be a problem - otherwise my philosophy is that we keep going as quickly as we can until someone complains :-)
Cheers
--jens
________________________________________
From: DiRAC Users [[log in to unmask]] on behalf of Lydia Heck [[log in to unmask]]
Sent: 23 June 2015 21:01
To: [log in to unmask]
Subject: Re: Tick tick...
Hi Brian,
I will use the voms-proxy-init next time.
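For my own reference, I assume the incantation is something like this (taking
the VO name vo.dirac.ac.uk from Brian's dashboard link - do correct me if that
is wrong):

  # create a proxy carrying the DiRAC VO attribute, valid for 24 hours
  voms-proxy-init --voms vo.dirac.ac.uk --valid 24:00

  # confirm the VO attribute is actually attached
  voms-proxy-info --all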
At ~16:00 this afternoon all transfers stopped and nothing has gone through since.
I will investigate tomorrow, first killing the waiting transfers and then
resubmitting some of the failed ones.
At the time the transfers stopped, the file servers at RAL spiked to
~1.5-2 GByte/sec ...
Is it likely that such spikes in incoming traffic on the RAL servers can
effectively kill the DiRAC archive sessions?
Best wishes,
Lydia
On Tue, 23 Jun 2015, Brian Davies wrote:
> I think the VO showing up as nil is probably due to you using myproxy-init rather than voms-proxy-init.
> I would guess that our webdav endpoint is not visible through our web cache.
> I have also tested that I can recall a file back from tape to the disk cache.
>
> -----Original Message-----
> From: Lydia Heck [mailto:[log in to unmask]]
> Sent: 23 June 2015 16:02
> To: Davies, Brian (STFC,RAL,SC)
> Cc: [log in to unmask]
> Subject: Re: Tick tick...
>
>
> Could it be because I used alice instead of dirac?
> I am now using dirac, though, and have been for the last 90 minutes.
>
> Do you know why I get "permission denied" on the other webpage, although I am using my certificate?
>
>
> Lydia
>
> On Tue, 23 Jun 2015, Brian Davies wrote:
>
>> Not sure why you are showing up as nil VO, but here is the generic FTS monitoring tool tracking your data placement.
>> http://dashb-fts-transfers.cern.ch/ui/#d.src.country=%22n/a%22&d.src.site=%22n/a%22&date.interval=2880&grouping.src=%28country,site%29&m.content=%28efficiency,successes,throughput%29&server=%28bnl,cmsfts3.fnal.gov,fts.hep.pnnl.gov,fts3-pilot.cern.ch,fts3.cern.ch,lcgfts3.gridpp.rl.ac.uk%29&src.country=%28%22n/a%22%29&tab=transfer_plots&vo=%28nil,vo.dirac.ac.uk%29
>>
>>
>> -----Original Message-----
>> From: DiRAC Users [mailto:[log in to unmask]] On Behalf Of Lydia Heck
>> Sent: 23 June 2015 15:28
>> To: [log in to unmask]
>> Subject: Re: Tick tick...
>>
>> Hi Jens and all,
>>
>> yes you did mention it, but as with everything new, some of the good stuff slips past :-).
>>
>> I cancelled those jobs that had not yet started and changed the name from alice to dirac; on one of these submissions 71 files have already been transferred.
>>
>> Best wishes,
>> Lydia
>>
>>
>> On Tue, 23 Jun 2015, Jens Jensen wrote:
>>
>>> On 23/06/2015 14:00, Lydia Heck wrote:
>>>> Hi Brian,
>>>>
>>>> thank you for the information and pointing out the change in naming.
>>>>
>>>> I was not (yet) aware that the host name had to be changed. I will do
>>>> that in the next submissions. Currently I have two or three more running
>>>> with the name alice.
>>>>
>>> I think it was mentioned in our Skype call yesterday but don't worry
>>> too much about it: you can always get them back via srm-dirac. @Brian
>>> - we should check that they are stored with the right service class,
>>> so we can make sure they go to the right tapes.
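>>> On getting them back via srm-dirac: something like the following should
>>> pull a file back to local disk (the hostname and path here are only
>>> placeholders - substitute the real endpoint and file path):
>>>
>>>   # copy a file back from the srm-dirac endpoint to the local disk;
>>>   # reading it will also trigger a recall from tape if necessary
>>>   gfal-copy srm://srm-dirac.example.ac.uk:8443/dirac/somefile \
>>>             file://$PWD/somefile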
>>>> I have set no flags so the retry value is the default. Need to do
>>>> some more reading on using this.
>>>>
>>> You could just use --retry 3 for example.
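>>> Something along these lines, say (I am taking the FTS endpoint from
>>> Brian's dashboard link; the port and the source/destination URLs are
>>> placeholders, so do check them):
>>>
>>>   # submit a transfer job, retrying each failed file up to 3 times
>>>   fts-transfer-submit -s https://lcgfts3.gridpp.rl.ac.uk:8446 --retry 3 \
>>>       srm://source.example.ac.uk/path/to/file \
>>>       srm://dest.example.ac.uk/path/to/file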
>>>> Once a job is complete I cannot check whether there were any failures,
>>>> as it simply disappears. Is there a means of getting an email on
>>>> completion of a transfer job?
>>>>
>>> fts-transfer-status should give you the status - in particular, -F
>>> should give you the list of files that have failed. See section 3.1
>>> in http://fts3-service.web.cern.ch/content/clients
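>>> For example (same caveat about the endpoint as above; <job-id> is the
>>> id that fts-transfer-submit prints when you submit):
>>>
>>>   # show the job status, including the list of failed files
>>>   fts-transfer-status -s https://lcgfts3.gridpp.rl.ac.uk:8446 -F <job-id>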
>>>
>>> Cheers
>>> --jens
>>>
>>
>