Some folks always want the top brick off the chimney! :)
We are working on that issue now, and I will update folks just as soon as I have news.
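In the meantime, for anyone scripting around this, here is a minimal sketch of a client-side workaround, based on the symptom described further down the thread (gfal appears to fall back to port 80 when the BDII info is missing): pin the SRM port explicitly before handing the URL to gfal-ls/gfal-copy. The helper name and the hard-coded default port here are my own assumptions, not anything official:

```python
from urllib.parse import urlsplit, urlunsplit

# Assumption from this thread: with the BDII info gone, gfal seems to
# default to port 80 rather than 8443, so we pin the port ourselves.
DEFAULT_SRM_PORT = 8443  # hypothetical default; check your endpoint

def pin_srm_port(url: str, port: int = DEFAULT_SRM_PORT) -> str:
    """Return the URL with an explicit port, adding :8443 if none is set.

    Non-SRM URLs and URLs that already carry a port are left untouched.
    """
    parts = urlsplit(url)
    if parts.scheme != "srm" or parts.port is not None:
        return url
    netloc = f"{parts.hostname}:{port}"
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```

You could wrap gfal calls with this, or simply add :8443 by hand as in the working example further down.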
D.
-----Original Message-----
From: Testbed Support for GridPP member institutes <[log in to unmask]> On Behalf Of Daniela Bauer
Sent: 05 June 2018 13:40
To: [log in to unmask]
Subject: Re: Tickets for the 5th of June.
Isn't that just a consequence of the missing bdii info? Where is it meant to get the port number from?
Cheers,
Daniela
On 5 June 2018 at 13:33, Darren Moore - UKRI STFC <[log in to unmask]> wrote:
> Daniela,
>
> I think folks will find that by specifying the port explicitly gfal will work correctly. It appears to default to port 80 rather than 8443.
>
> So typically, if you do this:
>
> $ gfal-ls srm://srm-<vo_name>.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/<vo_name>/raw/-<vo_name>datatape/data16_13TeV/DRAW_RPVLL/r9264/data16_13TeV.00302872.physics_Main.recon.DRAW_RPVLL.r9264_tid11163882_00/DRAW_RPVLL.11163882._015703.pool.root.1
>
> You get the following error:
>
> gfal-ls error: 70 (Communication error on send) - srm-ifce err: Communication error on send, err: [SE][Ls][] httpg://srm-<vo_name>.gridpp.rl.ac.uk/srm/managerv2: CGSI-gSOAP running on lcgui04.gridpp.rl.ac.uk reports could not open connection to srm-atlas.gridpp.rl.ac.uk:80
>
> However, if you try (spot the difference!):
>
> $ gfal-ls srm://srm-<vo_name>.gridpp.rl.ac.uk:8443/castor/ads.rl.ac.uk/prod/<vo_name>/raw/-<vo_name>datatape/data16_13TeV/DRAW_RPVLL/r9264/data16_13TeV.00302872.physics_Main.recon.DRAW_RPVLL.r9264_tid11163882_00/DRAW_RPVLL.11163882._015703.pool.root.1
>
> Things should work correctly.
>
> D.
>
>
> -----Original Message-----
> From: Testbed Support for GridPP member institutes
> <[log in to unmask]> On Behalf Of Daniela Bauer
> Sent: 05 June 2018 13:18
> To: [log in to unmask]
> Subject: Re: Tickets for the 5th of June.
>
> And gfal should work, so which magic bit is missing?
>
> On 5 June 2018 at 13:18, Daniela Bauer
> <[log in to unmask]> wrote:
>> Yes, I know this broke MICE and solidexperiment as well. Something
>> went systematically wrong here.
>>
>> Cheers,
>> Daniela
>>
>> On 5 June 2018 at 13:09, Henry Nebrensky <[log in to unmask]> wrote:
>>> To be fair, Zdenek is quoting Darren's reply in my ticket - 135308 -
>>> and I was the one still using lcg-* ...
>>>
>>> Though I didn't get gfal-copy working flawlessly either.
>>>
>>> I guess this is also the sort of thing that we have the Tier1
>>> Liaison meetings for, especially if we pay attention in them!
>>>
>>> I'm separately concerned that I put in a "top priority" ticket after
>>> 9 on a Thursday morning, yet it barely reached the site by
>>> clocking-off time on the Friday.
>>>
>>> Thanks
>>>
>>> Henry
>>>
>>> On Tue, 5 Jun 2018, Daniela Bauer wrote:
>>>>
>>>> That ticket is a prime example of how not to treat a small VO.
>>>> Surely RAL must have a spare VM somewhere to host an interim
>>>> solution, and someone to help snoplus move to the new (if any?) system.
>>>> (I'd like to know how this is meant to work as
>>>> well...) And the "workaround" involves lcg-ls (coming from an EGI
>>>> person!?).
>>>>
>>>> snoplus has been one of the VOs that took the grid seriously and I
>>>> think we owe them a bit more support.
>>>>
>>>> Daniela
>>>>
>>>>
>>>> On 5 June 2018 at 11:56, Terry Froy <[log in to unmask]> wrote:
>>>>
>>>> Hi folks,
>>>>
>>>>
>>>> I do not seem to be able to connect to Vidyo again :-(
>>>>
>>>>
>>>> One of our local academics has asked me to get an update on
>>>> this
>>>> ticket:
>>>>
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135367
>>>>
>>>>
>>>> If somebody from RAL (Brian ?) could have a peek and confirm
>>>> that the CIP is definitely not coming back, I can at
>>>> least give our academic some closure on this.
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Terry
>>>>
>>>> --
>>>> Terry Froy
>>>> Cluster Systems Manager, Particle Physics Queen Mary University of
>>>> London
>>>> Tel: +44 (0)207 882 6560
>>>> E-mail: [log in to unmask]
>>>>
>>>>
>>>>
>>>> ________________________________
>>>>
>>>> From: Testbed Support for GridPP member institutes
>>>> <[log in to unmask]> on behalf of Matt Doidge
>>>> <[log in to unmask]>
>>>> Sent: 04 June 2018 16:30:29
>>>> To: [log in to unmask]
>>>> Subject: Tickets for the 5th of June.
>>>> Happy June everyone!
>>>>
>>>> As per the now ancient tradition it's the first Monday of the month
>>>> which means a look at all the tickets.
>>>>
>>>> 45 Open UK Tickets this month.
>>>>
>>>> IPv6 Tickets.
>>>>
>>>> SUSSEX: https://ggus.eu/?mode=ticket_info&ticket_id=131617
>>>>
>>>> Some good progress here with the last update on Friday painting a
>>>> hopeful picture of IPv6 come the autumn.
>>>>
>>>> RALPP: https://ggus.eu/?mode=ticket_info&ticket_id=131616
>>>> Last update had Chris trying to beat his dual-stacked PS boxes into
>>>> shape - but this was back in January. Needless to say the ticket
>>>> needs an update!
>>>>
>>>> OXFORD: https://ggus.eu/?mode=ticket_info&ticket_id=131615
>>>> Last update was back in March, with summer the likely timeframe for
>>>> v6 deployment. Three months on the ticket could do with a slight
>>>> update to re-confirm this is still the case.
>>>>
>>>> CAMBRIDGE: https://ggus.eu/?mode=ticket_info&ticket_id=131614
>>>> It's a similar case for Cambridge.
>>>>
>>>> BRISTOL: https://ggus.eu/?mode=ticket_info&ticket_id=131613
>>>> Any news on your plans from back in April to get your PS box onto a
>>>> v6-enabled network?
>>>>
>>>> BIRMINGHAM: https://ggus.eu/?mode=ticket_info&ticket_id=131612
>>>> Some recent good news here with Mark getting his PS box (kind of) v6
>>>> pingable, just waiting on the v6 DNS now.
>>>>
>>>> GLASGOW: https://ggus.eu/?mode=ticket_info&ticket_id=131611
>>>> Gareth covered his bases well with his update back in February.
>>>> Hopefully the new build is on schedule.
>>>>
>>>> ECDF: https://ggus.eu/?mode=ticket_info&ticket_id=131610
>>>> Andy gave a mixed update a few weeks ago, citing some v6 routing
>>>> differences and an upcoming wholesale networking overhaul scheduled
>>>> for September so the ticket is freshly on hold pending more information.
>>>>
>>>> DURHAM: https://ggus.eu/?mode=ticket_info&ticket_id=131609
>>>> A quick update last month reports no significant progress.
>>>>
>>>> SHEFFIELD: https://ggus.eu/?mode=ticket_info&ticket_id=131608
>>>> Elena gave an update at the end of April, with work on the border
>>>> routers scheduled for May. Hopefully that went well and you'll have
>>>> more information soon.
>>>>
>>>> MANCHESTER: https://ggus.eu/?mode=ticket_info&ticket_id=131607
>>>> Any plans on dual-stacking your storage after your Perfsonar successes?
>>>>
>>>> LIVERPOOL: https://ggus.eu/?mode=ticket_info&ticket_id=131606
>>>> Any news on those ongoing negotiations mentioned in the last March update?
>>>>
>>>> UCL: https://ggus.eu/?mode=ticket_info&ticket_id=131604
>>>> Have re-poked their network admins over this.
>>>>
>>>> RHUL: https://ggus.eu/?mode=ticket_info&ticket_id=131603
>>>> No news for a while after the February update that v6
>>>> reverse-lookup wasn't working.
>>>>
>>>> Back to the regular tickets...
>>>>
>>>> NGI
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135038 (9/5) Review of
>>>> the GOCDB info for the NGI. On to the second stage of the review
>>>> now, but it's still a good time for sites to double-check their
>>>> gocdb entries if they haven't recently. In progress (22/5)
>>>>
>>>> OXFORD
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135485 (3/6) A fresh
>>>> ticket in from Sno+, concerning the bdii information disappearing
>>>> from their feeds. Assigned (4/6)
>>>>
>>>> BRISTOL
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135121 (15/5) A ROD
>>>> ticket for failed webdav tests. The tests were doomed to never
>>>> work, so Lukasz disabled the endpoint in the gocdb. Daniela
>>>> reckoned the ticket needs to be closed to see if that disables the alarms.
>>>> In progress (24/5)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135120 (15/5) Another
>>>> week or so and this availability ticket should be closable - until
>>>> then it should be put On Hold. Reopened (4/6)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135302 (23/5) CMS
>>>> transfer failure ticket. It looks like this ticket hasn't been
>>>> noticed yet. Assigned (23/5)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134820 (29/4) This CMS
>>>> pledge enquiry ticket has had the question answered. I suspect it
>>>> can be closed. In progress (1/5)
>>>>
>>>> BIRMINGHAM
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=129930 (4/8/17) The old
>>>> atlas http test failure ticket. How goes the EOS migration? On hold
>>>> (23/4)
>>>>
>>>> GLASGOW
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134689 (23/4) Perfsonar
>>>> update ticket. Gareth is waiting on 4.1 to be released (which I
>>>> can't find any news on). On Hold (24/4)
>>>>
>>>> ECDF
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135243 (21/5) ROD
>>>> ticket for failed srm-put tests. Rob had to restart things to get
>>>> them working, but had no joy shifting the alarms at first. The tests
>>>> seem okay for the last day. In progress (24/5)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135314 (24/5) Another
>>>> ROD ticket, this one for old IGTF rpms on the workers. As a quick
>>>> note that may be helpful, an up-to-date version of the certificates
>>>> is kept in /cvmfs/grid.cern.ch/etc/grid-security/ . In progress
>>>> (28/5)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135404 (30/5) The
>>>> resulting low availability ticket for the previous issues. In
>>>> progress (30/5)
>>>>
>>>> DURHAM
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134687 (23/4) Request
>>>> to update the Durham perfsonar. Any news? In progress (30/4)
>>>>
>>>> SHEFFIELD
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134947 (4/5) Atlas
>>>> transfer failures - one of the C7 DPM problem tickets- see
>>>> https://its.cern.ch/jira/browse/LCGDM-2604. On hold (31/5)
>>>>
>>>> MANCHESTER
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134684 (23/4) Perfsonar
>>>> upgrade request ticket. Alessandra still wants to know how
>>>> necessary this update is (my thoughts are it will be quite
>>>> necessary,
>>>> *once* Perfsonar 4.1 is out, but I don't have Duncan's expertise).
>>>> Waiting for reply (23/4)
>>>>
>>>> UCL
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134686 (23/4) Another
>>>> perfsonar upgrade ticket, Ben was looking at it at the last update.
>>>> Any joy? On Hold (23/4)
>>>>
>>>> RHUL
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134945 (4/5) Another
>>>> atlas transfer ticket due to the C7 DPM troubles. On hold (17/5)
>>>>
>>>> QMUL
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134532 (12/4) The
>>>> return of an old LHCB download problem, where the turl can't be resolved.
>>>> Daniel has applied a fix to his production SE. Any news that it's
>>>> worked? In progress (14/5)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134573 (17/4) CMS
>>>> request to install singularity, on hold until the Summer move to C7.
>>>> On hold (17/4)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=132929 (18/1) CMS
>>>> seeing SLURM accounting problems. The APEL devs are involved now,
>>>> and have asked for some parser outputs to test some stuff. In
>>>> progress
>>>> (10/5)
>>>>
>>>> IMPERIAL
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135464 (1/6) A CMS
>>>> ticket about checksum failures that came in on Friday afternoon.
>>>> Files are being declared invalid after being double-checked, and
>>>> another transfer failure query has been tacked onto the ticket
>>>> today. In progress (4/6)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134567 (17/4) A ticket
>>>> concerning the site rather than a site ticket: the declaration of
>>>> some lost Pheno files. I poked it today. In progress (4/6)
>>>>
>>>> BRUNEL
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=133956 (9/3) A CMS
>>>> xroot config change ticket. Any luck with rolling out these changes
>>>> after your troubles getting the new hardware to roll them out onto?
>>>> In progress (23/4)
>>>>
>>>> THE TIER 1
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135367 (28/5) Another
>>>> SNO+ information system ticket, this one has a lot of conversation
>>>> going on in it about Castor publishing even before it landed at the
>>>> Tier 1 (see the mice ticket below). In progress (4/6)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135133 (15/5) CMS
>>>> spotting corrupt files on ECHO, which looked to be not just a problem
>>>> with the files but perhaps with their metadata as well. A lot of
>>>> conversation has occurred in this ticket so I'm not entirely sure
>>>> what has occurred, but corrupt files have been deleted. Waiting for
>>>> reply (4/6)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134685 (23/4) Another
>>>> request to upgrade Perfsonar to C7. At last check some C7
>>>> perfsonars were up and running in testing. Any luck getting them
>>>> into production? In progress (2/5)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135308 (24/5) MICE
>>>> problems after the loss of Castor publishing. Henry has hit a
>>>> problem when trying to combine the workarounds with LFC entries. In
>>>> progress (1/6)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135293 (23/5) ROD
>>>> tickets, again related to the loss of castor publishing. Alastair
>>>> has put in a request for the SRM Ops tests for Castor to be removed.
>>>> On Hold (31/5)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=134703 (23/4) CMS
>>>> transfers failing from RAL_disk. It appears files were being sent
>>>> to the wrong namespace. Since then a lot of lists of files have
>>>> been searched through. Any luck getting to the bottom of this?
>>>> In progress (25/5)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=135455 (31/5) CMS
>>>> checksum verification at RAL. This looks to be a duplicate of
>>>> 135133 but I think you guys already spotted that. In progress (4/6)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=127597 (7/4/17) CMS
>>>> wanting to know about the RAL networking. After the new firewall
>>>> went in at the end of April Chris asked for some RAL/RALPP job
>>>> performance comparisons to try to see how xroot proxies could
>>>> affect things. No news back, but the question could be lost in the noise.
>>>> On Hold (30/4)
>>>>
>>>> https://ggus.eu/?mode=ticket_info&ticket_id=124876 (7/11/16)
>>>> Gridftp tests failing for ECHO due to a problem with the tests -
>>>> after
>>>> 117683 was left unsolved this is our oldest ticket. Not a hint of
>>>> movement on the counter ticket (125026) for a long time. I think we
>>>> could do with weighing up our options here. On hold (13/11/17)
>>>>
>>>> And that's all the tickets! Thanks for bearing with them all!
>>>>
>>>> Cheers!
>>>> Matt
>>>>
>>>> ########################################################################
>>>>
>>>> To unsubscribe from the TB-SUPPORT list, click the following link:
>>>> https://www.jiscmail.ac.uk/cgi-bin/webadmin?SUBED1=TB-SUPPORT&A=1
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Sent from the pit of despair
>>>>
>>>> -----------------------------------------------------------
>>>> [log in to unmask]
>>>> HEP Group/Physics Dep
>>>> Imperial College
>>>> London, SW7 2BW
>>>> Tel: +44-(0)20-75947810
>>>> http://www.hep.ph.ic.ac.uk/~dbauer/
>>>>
>>>>
>>>
>>> --
>>> Dr. Henry Nebrensky [log in to unmask]
>>> http://people.brunel.ac.uk/~eesrjjn
>>> "The opossum is a very sophisticated animal.
>>> It doesn't even get up until 5 or 6 p.m."
>>>
>>>
>>
>>
>>
>
>
>
>