I suspect that running cvmfs_fsck would also have fixed the problem...
(and in a way which is less dangerous to running jobs... but if lhcb
jobs are broken anyway, briefly breaking cvmfs a bit more might matter
less?)
Sam
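[For reference, the two recovery options discussed in this thread can be sketched roughly as below. This is a hedged sketch, not a site-tested procedure: the cache path shown is only the common default (set via CVMFS_CACHE_BASE in /etc/cvmfs/default.local), and both options are safest on a drained WN.]

```shell
# Option 1: check/repair the cache in place (the less disruptive route).
# Repositories should be unmounted first; the cache path is the common
# default and may differ at your site.
cvmfs_config umount
cvmfs_fsck /var/lib/cvmfs/shared

# Option 2: wipe the cache outright (what resolved the problem here).
# Running jobs with files open from cvmfs may misbehave while this runs.
cvmfs_config wipecache
```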
On 2 December 2014 at 15:53, Kashif Mohammad
<[log in to unmask]> wrote:
> Hi Sam
>
> Thanks, removing the complete cvmfs cache directory solved this problem. Earlier I tried 'cvmfs_config reload' but it didn't clean the cache directory. I removed the cache directory on a WN which was not running jobs, but I am not sure how a running job would react if I removed the cvmfs cache directory.
>
> Cheers
> Kashif
>
>
>> -----Original Message-----
>> From: Testbed Support for GridPP member institutes [mailto:TB-
>> [log in to unmask]] On Behalf Of Sam Skipsey
>> Sent: 02 December 2014 15:31
>> To: [log in to unmask]
>> Subject: Re: cvmfs problem with some WN
>>
>> Have you tried clearing the cvmfs cache on those nodes?
>>
>> Sam
>>
>> On 2 December 2014 at 15:24, Kashif Mohammad
>> <[log in to unmask]> wrote:
>> > Hi
>> >
>> > I have a strange problem with cvmfs on some WNs:
>> >
>> > cd lhcb.cern.ch
>> > -bash: cd: lhcb.cern.ch: No such file or directory
>> >
>> > cat /var/log/messages
>> >
>> > Dec 2 15:18:04 t2wn86 cvmfs2: (lhcb.cern.ch) SQlite3: API called with NULL prepared statement (21)
>> > Dec 2 15:18:04 t2wn86 cvmfs2: (lhcb.cern.ch) SQlite3: misuse at line 63669 of [118a3b3569] (21)
>> > Dec 2 15:18:04 t2wn86 cvmfs2: (lhcb.cern.ch) Failed to initialize root file catalog (16 - file catalog failure)
>> >
>> > ATLAS and CMS are fine on the same WN.
>> >
>> > This issue is limited to a few WNs. The CVMFS packages are the same,
>> > and I have rebooted one of the WNs, but the problem persists. Has
>> > anyone seen this issue?
>> >
>> > Thanks
>> > Kashif
>> >
>> >
>> >