Testbed Support for GridPP member institutes [mailto:TB-[log in to unmask]] On Behalf Of Matt Doidge said:
> OXFORD https://ggus.eu/ws/ticket_info.php?ticket=91996 (In Progress) - Is
> the deadline to upgrade the end of April, or do we need to be sorted before
> then?

https://operations-portal.egi.eu/broadcast/archive/id/880

"IMPORTANT. According to the EGI decommissioning policy [2], the decommissioning deadline expires one month after the end of security updates and support of the software. 
For EMI 1 products this is: 31-05-2013"

Stephen

> 
> BRISTOL https://ggus.eu/ws/ticket_info.php?ticket=91995 (In progress) -
> Winnie has asked for clarification of what's going on.
> 
> BIRMINGHAM https://ggus.eu/ws/ticket_info.php?ticket=91994 (In
> progress)
> - Mark will get onto this as soon as Birmingham's AC starts behaving.
> 
> GLASGOW https://ggus.eu/ws/ticket_info.php?ticket=91992 (In progress) -
> There are some red herrings at Glasgow due to hanging CE BDIIs. Just the
> WMSes and the LB to go; these are being handled.
> 
> SHEFFIELD https://ggus.eu/ws/ticket_info.php?ticket=91990 (In progress)
> - Elena plans to upgrade this month.
> 
> RHUL https://ggus.eu/ws/ticket_info.php?ticket=91987 (Assigned)
> https://ggus.eu/ws/ticket_info.php?ticket=91982 (Assigned)
> https://ggus.eu/ws/ticket_info.php?ticket=91981 (Assigned) (Poor RHUL
> getting 3 tickets - I assume this is the ROD dashboard being silly as Daniela
> mentioned)
> 
> LIVERPOOL https://ggus.eu/ws/ticket_info.php?ticket=91984 (In progress)
> - The Liver lads are working on it.
> 
> QMUL https://ggus.eu/ws/ticket_info.php?ticket=91980 (In Progress) - Chris
> has updated his BDII, so hopefully things will be sorted.
> 
> IC https://ggus.eu/ws/ticket_info.php?ticket=91978 (In Progress) - WMS
> updated; the last CE has a scheduled downtime, um, scheduled.
> 
> BRUNEL https://ggus.eu/ws/ticket_info.php?ticket=91975 (In Progress) -
> Raul plans to upgrade things at the end of the month. He asks about the
> dangers of upgrading the CE from EMI 1 to EMI 2 - Daniela replies that
> the DB change means it's recommended to drain your CE first.
> 
> TIER 1 https://ggus.eu/ws/ticket_info.php?ticket=91974 (In Progress) - The
> team plan to have all services updated by the end of March.
> 
> 
> Atlas data moving tickets:
> https://ggus.eu/ws/ticket_info.php?ticket=90242 (Lancaster)
> https://ggus.eu/ws/ticket_info.php?ticket=90243 (Liverpool)
> https://ggus.eu/ws/ticket_info.php?ticket=90244 (RALPP)
> https://ggus.eu/ws/ticket_info.php?ticket=90245 (Oxford)
> https://ggus.eu/ws/ticket_info.php?ticket=89804 (Glasgow)
> 
> Nearing the end of these. Lancaster and Oxford are down to their last
> few files (which might need to be manually fixed at the site end - the
> one left at Lancaster is lost for good). RALPP similarly have dark data
> files that might need to be cleaned up locally. Liverpool are waiting on
> ATLAS after giving them a new list of files. Glasgow have been asked for
> a fresh file dump.
> 
> 
> The Rest:
> TIER 1
> https://ggus.eu/ws/ticket_info.php?ticket=91687 (21/2)
> Support for the epic VO on the RAL WMS. Request for pool accounts went
> out but no word since. In progress (21/2)
> 
> https://ggus.eu/ws/ticket_info.php?ticket=91658 (20/2)
> Request from Chris W for WebDAV redirection support on the RAL LFC. As
> reported last week, waiting on the next release, which has better,
> stronger, faster WebDAV support in it. In Progress (22/2)
> 
> https://ggus.eu/ws/ticket_info.php?ticket=91146 (4/2)
> ATLAS tracking RAL bandwidth issues. The ticket was waiting on last
> week's downtime to hopefully sort things out. Did the picture improve?
> In progress (12/2)
> 
> https://ggus.eu/ws/ticket_info.php?ticket=91029 (30/1)
> Again from ATLAS, this is the ticket about FTS queries failing for some
> jobs involving users with odd characters in their names. A fix either
> needs to be implemented by the SRM developers, or ATLAS need to work
> around it by changing their robot DNs. On hold (27/2)
> 
> https://ggus.eu/ws/ticket_info.php?ticket=90528 (17/1)
> Sno+ jobs weren't making their way to Sheffield; this was tracked to a
> problem with one WMS. As the cause is unknown and far from obvious, it
> was suggested that Sno+ jobs be restricted to the working WMS, but
> there's still no reply from Sno+. Waiting for reply (19/2)
> 
> https://ggus.eu/ws/ticket_info.php?ticket=86152 (17/9/2012)
> Correlated packet loss on the RAL Perfsonar host. Did last week's
> network intervention fix things? Or maybe the problem evaporated (I'm
> ever the optimist)? On hold (16/1)
> 
> IMPERIAL
> https://ggus.eu/ws/ticket_info.php?ticket=91866 (28/2)
> It looks like ATLAS jobs were running afoul of some CVMFS problems on
> some nodes. They've been given a kick; it's worth seeing if the problem
> has gone away. In progress (28/2)
> 
> GLASGOW
> https://ggus.eu/ws/ticket_info.php?ticket=91792 (26/2)
> Atlas thought that they had lost some files, but it turns out that they
> just had bad permissions on a pool node (root.root) - the problem's been
> fixed and Sam is investigating with his DPM hat on, whilst checking the
> filesystems for more possible bad files. In progress (4/3)
> 
> https://ggus.eu/ws/ticket_info.php?ticket=90362 (13/1)
> All Glasgow's CEs have been switched over to use the GridPP VOMS server
> for ngs.ac.uk, they just need some testing. Waiting for reply (25/2)
> 
> SHEFFIELD
> https://ggus.eu/ws/ticket_info.php?ticket=91770 (25/2)
> LHCb complaining about the default value being published for Max CPU
> time. No news from Sheffield beyond the acknowledgement of the ticket.
> In Progress (25/2)
> 
> DURHAM
> https://ggus.eu/ws/ticket_info.php?ticket=91745 (24/2)
> enmr.eu having trouble with lcg-tagging things at Durham. Mike gave this
> a kick, and asked if the problem has gone away. Waiting for reply (25/2)
> 
> RHUL
> https://ggus.eu/ws/ticket_info.php?ticket=91711 (21/2)
> ATLAS having trouble copying files into RHUL. It's being looked at, but
> PRODDISK and ANALY_RHUL have been put offline. In Progress (28/2)
> 
> https://ggus.eu/ws/ticket_info.php?ticket=89751 (17/12/12)
> Path MTU discovery problems to RHUL. On hold since being handed over to
> the Network guys, who were following it up with Janet. On hold (28/1)
> 
> LANCASTER
> https://ggus.eu/ws/ticket_info.php?ticket=91304 (8/2)
> LHCb having trouble on one of Lancaster's clusters, as they like to run
> their jobs in the home directory rather than $TMPDIR. Forcing this
> behaviour is harder than it should be in LSF, so it looks like we're
> going to have to relocate the LHCb home directories. In Progress (1/3)
> 
> https://ggus.eu/ws/ticket_info.php?ticket=90395 (14/1)
> dteam jobs failed at Lancaster due to our old CE being rubbish. It's
> since been reborn with new disks, but embarrassingly I haven't found the
> time to set up a UI for dteam and test it myself (which I intend to do
> as part of testing the UI tarball, but that's a whole other story). In
> progress (18/2)
> 
> ECDF
> https://ggus.eu/ws/ticket_info.php?ticket=90878 (27/1)
> LHCb were having problems with CVMFS at Edinburgh, but the fixes
> attempted can't be checked due to DIRAC problems at the site. In
> progress (could be knocked back to waiting for reply) (28/2)
> 
> BRISTOL
> https://ggus.eu/ws/ticket_info.php?ticket=90328 (11/1)
> The Bristol SE is publishing some odd values - zero used space. Waiting
> on another, similar ticket (90325) to be resolved. On hold (11/2)
> 
> https://ggus.eu/ws/ticket_info.php?ticket=90275 (10/1)
> The CVMFS taskforce have asked for Bristol's CVMFS plans. One Bristol CE
> has been migrated to using it, with one left to go. On hold (5/2)
> 
> EFDA-JET
> https://ggus.eu/ws/ticket_info.php?ticket=88227 (6/11/2012)
> Jet have exhausted all options trying to fix this biomed job publishing
> problem. They're looking at reinstalling the CE to fix it, which seems
> like using a sledgehammer to crack a walnut (but I don't have any better
> ideas). On hold (25/2)