Hi Mark
You don't need the patch for EMI - the fix is already in the 1.8.3 release (though 1.8.3 itself was never released for gLite).
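
If you want to double-check what a node is actually running, something like:

rpm -qa | grep -i dpm

should list the installed DPM packages and their versions.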
Cheers
Wahid
On 7 Sep 2012, at 07:40, Mark Slater wrote:
> repo grabbed, update installed, fingers crossed and... Success!!
>
> Thanks so much for that, John! I had found references to this patch fixing a recent segfault issue but didn't realise that a new repo was required to get access to it!
>
> Out of interest - does anyone know if this patch is already included in EMI2, or would I need to add this repo there as well?
>
> Thanks!
>
> Mark
>
>
>
> On 07/09/12 00:03, John Bland wrote:
>> If it's the same problem some of us have been seeing, then there is
>> some guidance in the GRIDPP-STORAGE thread on DPM segfaulting. To
>> summarise quickly:
>>
>> There is a patch upgrade for gLite DPM from 1.8.2 to 1.8.2-5; there is
>> some info here:
>> https://svnweb.cern.ch/trac/lcgdm/blog/glite-release-1-8-2-5
>>
>> Add the GEP repository, then upgrade the base DPM packages with
>> something like:
>>
>> yum clean all; yum update dpm* DPM*
>>
>> Then restart all of the DPM services (no need to re-run YAIM or
>> anything) and cross your fingers.
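>>
>> For reference, the full sequence is roughly this (a sketch only - the
>> service names are what a standard gLite DPM head node runs, so check
>> which daemons your box actually has under /etc/init.d):
>>
>> yum clean all
>> yum update 'dpm*' 'DPM*'      # quote the globs so the shell passes them to yum
>> service dpnsdaemon restart    # DPNS name server
>> service dpm restart           # DPM daemon itself
>> service srmv2.2 restart       # SRM frontend
>> service dpm-gsiftp restart    # gridftp (disk servers also run rfiod)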
>>
>> John
>>
>> On 09/06/12 23:24, Mark Slater wrote:
>>> Hi Alessandra,
>>>
>>> Today, after running without any problems for the last 6 months or so,
>>> the DPM service at Bham has just started segfaulting for no reason (it
>>> won't even start up!). Have you fixed this, and if so, how? Or does
>>> anyone else have any ideas? I'm looking at using this as an excuse to
>>> upgrade to EMI2 but if I can avoid it for a bit that would be preferable :)
>>>
>>> Thanks,
>>>
>>> Mark
>>>
>>> On 04/09/12 17:30, Alessandra Forti wrote:
>>>> Manchester had DPM segfaulting a lot in August.
>>>>
>>>> cheers
>>>> alessandra
>>>>
>>>> On 04/09/2012 16:34, Jeremy Coles wrote:
>>>>> Dear All
>>>>>
>>>>> A few more sites requiring follow-up for the August
>>>>> availability/reliability but overall the Tier-2s are fine. Please
>>>>> take a look and let me know of any concerns, we'll review these next
>>>>> Tuesday.
>>>>>
>>>>> Many thanks,
>>>>> Jeremy
>>>>>
>>>>>
>>>>>
>>>>> Begin forwarded message:
>>>>>
>>>>>> From: WLCG Office <[log in to unmask]>
>>>>>> Subject: T2 Availability & Reliability - August 2012
>>>>>> Date: 4 September 2012 16:20:46 GMT+01:00
>>>>>> To: "project-wlcg-cb (Members of the WLCG CB)" <[log in to unmask]>
>>>>>> Cc: "project-lcg-gdb (LCG - Grid Deployment Board)" <[log in to unmask]>,
>>>>>> "[log in to unmask]" <[log in to unmask]>,
>>>>>> "[log in to unmask]" <[log in to unmask]>,
>>>>>> "sam-support (SAM support)" <[log in to unmask]>
>>>>>>
>>>>>> Dear all,
>>>>>>
>>>>>> Please find below the draft T2 Reliability & Availability report for
>>>>>> August 2012:
>>>>>>
>>>>>> http://sam-reports.web.cern.ch/sam-reports/2012/201208/wlcg/WLCG_Tier2_Aug2012.pdf
>>>>>>
>>>>>> Please verify your data and send comments to [log in to unmask]
>>>>>> by Friday 14 September.
>>>>>>
>>>>>> Requests for re-computations are to be entered via GGUS within 10
>>>>>> calendar days of this e-mail being sent. Full details are
>>>>>> here: https://tomtools.cern.ch/confluence/display/SAMDOC/Availability+Re-computation+Policy
>>>>>>
>>>>>> The final T2 reports are stored in the WLCG document repository
>>>>>> under http://cern.ch/wlcg-docs/ReliabilityAvailability/Tier-2 and
>>>>>> reported to the Overview Board.
>>>>>>
>>>>>> Kind regards,
>>>>>> Cath
>>>>>>
>>>>>> -----------------------------------------------
>>>>>> WLCG Office
>>>>>> IT Dept - CERN
>>>>>> CH-1211 Genève, Switzerland
>>>>>> www.cern.ch/wlcg
>>>>
>>>> --
>>>> Facts aren't facts if they come from the wrong people. (Paul Krugman)
>>
>
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.