Do you think we can also get from LHCb (as mentioned by Raja) what timescale they mean by "ASAP"? I am afraid I am still unclear whether this applies just to the LHCb Tier 1s or whether it includes their T2s. (I point out that of the 6 sites LHCb are using in FTS transfers, only QMUL and IC-HEP are dual stack; we still need RALPP, Liverpool, Manchester and Glasgow.)
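For keeping tabs on who is dual stack, a quick AAAA-record check is enough to catch the DNS side of it. A rough Python sketch (the hostnames below are placeholders, not the sites' real SE endpoints):

import socket

# Placeholder hostnames -- substitute each site's real SE endpoint.
ENDPOINTS = ["se01.example-t2.ac.uk", "dpm.example-t2.ac.uk"]

def has_aaaa(host):
    # Restricting getaddrinfo to AF_INET6 only succeeds if an IPv6
    # address (AAAA record) is published for the host.
    try:
        socket.getaddrinfo(host, None, socket.AF_INET6)
        return True
    except socket.gaierror:
        return False

for host in ENDPOINTS:
    print("{0}: {1}".format(host, "dual stack" if has_aaaa(host) else "IPv4 only"))

Bear in mind this only shows that an AAAA record is published, not that the storage service actually answers on it.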
Also of note: the end of Run 2 is the timeframe for ALICE T2s, which gives us the deadline for Birmingham to upgrade.
On a separate note, the IPv6 WG is keeping this page up to date for the sites they are looking at:
http://hepix-ipv6.web.cern.ch/sites-connectivity
I guess we (Ops and sites) need to make sure that our UK-specific page is also kept up to date:
https://www.gridpp.ac.uk/wiki/IPv6_site_status
Brian
-----Original Message-----
From: GRIDPP2: Deployment and support of SRM and local storage management [mailto:[log in to unmask]] On Behalf Of Alastair Dewhurst
Sent: 08 March 2017 09:57
To: [log in to unmask]
Subject: Re: providing ipv6 at T2s; a cunning plan....
Hi All
It's great that there has been real interest in IPv6 recently, but could we look at the bigger picture first?
As I am sure most of you are already aware, there is an IPv6 mailing list: [log in to unmask] . There is a good chance people's needs will have been discussed there before.
Everybody has a different site setup, but it would be fair to say that most sites do not have full control over their networking and have to rely on a central university team for this. If that central networking team has neither the means nor the desire to upgrade things to support IPv6, then it doesn't matter what the site does. To help sites put pressure on the relevant management to get over this hurdle, the IPv6 working group came up with the proposal to allow IPv6-only WNs. This has helped some sites significantly. However, that agreement could only mandate Tier 1s to do things, so it isn't enough for some Tier 2s.
It is, however, possible to put pressure on Tier 2 network teams by getting the VOs to explicitly require IPv6 support. I know CMS have been pushing their Tier 2s quite hard on this. As I am in charge of IPv6 for ATLAS, I am also going to make it a requirement for all ATLAS sites to provide dual-stack storage by the end of Run 2 (January 2019). If you look at Dave Kelsey's talk today at the GDB [1] you can see that the statement is in there. I will finalise the exact wording for the ATLAS Software and Computing workshop next week. We are trying to keep jargon like T2Ds, Nucleus, etc. out of the statement so it is easier to send to people not familiar with the experiments. Feel free to email me if you have any particular preference for the choice of wording. Also, if you don't feel you would be covered by an ATLAS request, I can contact the other VOs and get them to produce a similar statement if necessary.
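For anyone wanting to check where their site stands ahead of that, a minimal probe along these lines (Python; the hostname and port are placeholders, not a real endpoint) confirms that a storage door actually accepts connections over IPv6, rather than just having an AAAA record in DNS:

import socket

# Placeholder endpoint -- point this at your own SE's webdav/https
# door (or whichever port the relevant service listens on).
HOST, PORT = "se01.example-t2.ac.uk", 443

try:
    # Resolve and connect over IPv6 explicitly, so a working IPv4
    # path cannot mask a broken IPv6 one.
    family, socktype, proto, _, sockaddr = socket.getaddrinfo(
        HOST, PORT, socket.AF_INET6, socket.SOCK_STREAM)[0]
    sock = socket.socket(family, socktype, proto)
    sock.settimeout(10)
    sock.connect(sockaddr)
    sock.close()
    print("{0} port {1} accepts IPv6 connections".format(HOST, PORT))
except socket.error as exc:
    print("{0} port {1} not reachable over IPv6: {2}".format(HOST, PORT, exc))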
Is Duncan Rand on this list? He did a lot of work on perfSONAR with IPv6, and I believe he would be a good person to talk to about early stress testing (or just ask on the IPv6 email list in general).
Alastair
[1] https://indico4.twgrid.org/indico/event/2/session/23/contribution/195/material/slides/1.pdf
> On 7 Mar 2017, at 23:28, Doidge, Matthew <[log in to unmask]> wrote:
>
> Hi Brian,
> I assume it wouldn't! But we're kind of stuck. Our networking guys are really worried that significant IPv6 traffic will destroy all they hold dear. To that end I'm trying to at least attempt to MacGyver something to stress the system in a controlled manner. And my only externally connected 10Gb servers are my perfSONAR boxes and the storage nodes.
>
> No precision is needed; we just need to pump as many IPv6 packets to and from our site as possible, with an easy on/off switch. It would be easier if we could do this properly, but I promised I would see what could be done without dual stacking 3 dozen servers fronting 2PB of disk space and crossing my fingers.
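> Something like this rough sketch is what I have in mind for the on/off
> switch: a small driver for iperf3 forced onto IPv6. The target hostname
> is made up; the far end just needs a 10Gb box running "iperf3 -s".
>
> import subprocess
>
> TARGET = "iperf.example-site.ac.uk"  # placeholder target box
>
> # One controlled burst of IPv6 traffic; "off" is simply not running it.
> subprocess.check_call([
>     "iperf3",
>     "-6",          # force IPv6
>     "-c", TARGET,  # client mode, towards the target
>     "-P", "8",     # parallel streams, to help fill a 10Gb pipe
>     "-t", "300",   # seconds per burst
> ])
> # Adding "-R" reverses the direction (the target sends to us), so the
> # inbound path gets exercised too.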
>
> As awkward as this is, it could well be a useful exercise to go through; IIRC Lancaster aren't the only site with a networking team nervous about opening the IPv6 floodgates. Having a recipe for testing things without dual stacking a whole SE could be useful.
>
> Cheers,
> Matt
>
> ________________________________________
> From: [log in to unmask] [[log in to unmask]]
> Sent: 07 March 2017 16:57
> To: Doidge, Matthew; [log in to unmask]
> Subject: RE: providing ipv6 at T2s; a cunning plan....
>
> Matt, re Lancaster, not sure how having a storage system partially dual-hosted will work. Is the head node also dual-hosted?
> Brian
>
> -----Original Message-----
> From: Matt Doidge [mailto:[log in to unmask]]
> Sent: 07 March 2017 16:49
> To: Davies, Brian (STFC,RAL,SC); [log in to unmask]
> Subject: Re: providing ipv6 at T2s; a cunning plan....
>
> Hi Brian,
> Lancaster is in the middle of a staged rollout of IPv6 to its storage nodes. We currently have 4 10Gb pools dual stacked (stor0[19-22].hec.lancs.ac.uk).
>
> Why only 4? Because our networking guys would like us to stress test
> in a controlled manner what happens when we start throwing about a lot
> of v6 traffic (i.e. see if we can cap out our 10Gb bandwidth for a bit
> with IPv6 traffic and see if anything melts).
>
> Between these 4 nodes and the perfSONAR boxes we should be able to do this, if we have a "target" (or two), i.e. a couple of 10Gb boxes running iperf at a site that we could throw tests at and receive tests from.
>
> Sadly I'm not sure I'll be back from a trip to the vets in time tomorrow to get to the meeting to discuss this, but if anyone wants to volunteer to help that would be great (especially if this would help you test your own IPv6 pipes as well).
>
> Cheers,
> Matt
>
> On 07/03/17 16:32, Brian Davies wrote:
>> So if I were to order the remaining sites by priority for IPv6 (for
>> storage), I would put them as follows.
>>
>> I have split them into two groups (depending on size, number of VOs
>> supported, and importance within the scheme of their supported VOs).
>>
>> RALPP
>> Glasgow
>> Manchester
>> Birmingham
>>
>> Lancaster
>> ECDF
>> Liverpool
>> RHUL
>>
>> And then the remaining sites.
>>
>> The timescale on which this is needed/should be requested/required is
>> up for debate... But in terms of the prioritisation order that we (the
>> storage group) should care about, does this make sense?
>>
>>
>>
>> Brian
>>
>>
>>
>>
>>
>> From: GRIDPP2: Deployment and support of SRM and local storage
>> management [mailto:[log in to unmask]] On Behalf Of L Kreczko
>> Sent: 01 March 2017 16:31
>> To: [log in to unmask]
>> Subject: Re: providing ipv6 at T2s; a cunning plan....
>>
>>
>>
>> Same in Bristol, but we still have some teething problems as IPv6 is
>> low priority:
>>
>> https://ggus.eu/index.php?mode=ticket_info&ticket_id=126865
>>
>> On 1 March 2017 at 16:07, Raul Lopes <[log in to unmask]> wrote:
>>
>> Brunel storage is DPM and has been on dual-stack for 2+ years.
>>
>> raul
>>
>> Brian Davies wrote:
>>
>>> Is it a stupid idea to consider that a cunning way to provide IPv6
>>> data access to storage at T2s is to dual-host webdav interfaces?
>>> Does this work for DPM/dCache/StoRM?
>>> Would this be enough for the VOs?
>>> Brian
>>
>> --
>> Dr Lukasz Kreczko
>> Research Associate
>> Department of Physics, Particle Physics Group
>> University of Bristol
>> HH Wills Physics Lab, Tyndall Avenue
>> Bristol BS8 1TL
>> +44 (0)117 928 8724
>> [log in to unmask]