Hi,
We are at 2.1.69-2, except for edg-wl-globus-gridftp and edg-wl-bypass,
which are at 1.12 and 2.5.3. I don't know what other rpms might involve
the milliseconds.
There was still a log-mon-js process running. I killed that one and
restarted everything, but that didn't help.
Probably installing this on an empty machine is easier than upgrading an
existing one.
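For reference, the "int() literal too large" traceback further down this thread comes from feeding a millisecond epoch value (1134553404116) to code that expects seconds. A minimal sketch of the difference (the `parse_state_enter_time` helper and the 10**11 threshold are illustrative assumptions, not the actual jobstatus.py code):

```python
import time

def parse_state_enter_time(raw):
    # Hypothetical helper: normalise a timestamp that may be in
    # milliseconds (pre-2.1.69-2 logs) or seconds (post-update logs).
    ts = int(raw)
    if ts > 10**11:  # no plausible seconds value is this large
        ts //= 1000  # assume milliseconds and convert to seconds
    return time.gmtime(ts)

# The value from the traceback below:
t = parse_state_enter_time("1134553404116")
print(t.tm_year, t.tm_mon, t.tm_mday)  # 2005 12 14
```

On an old 32-bit Python (2.2/2.3), `int()` of a string above 2**31-1 raises exactly this ValueError, so the daemon dies before it even reaches `time.gmtime`.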
Cheers,
Ron
> -----Original Message-----
> From: LHC Computer Grid - Rollout
> [mailto:[log in to unmask]] On Behalf Of Laurence Field
> Sent: Wednesday 14 December 2005 12:20
> To: [log in to unmask]
> Subject: Re: [LCG-ROLLOUT] Move to authenticated R-GMA
> connectors: lcg-mon-job status rpm broken???
>
> There was an RB service update on Sep 15. The RB rpms should
> now be at
> version 2.1.69-2. One of the differences in this version was that the
> timestamps changed from milliseconds to seconds.
>
> Laurence
>
>
> Ron Trompert wrote:
>
> >It is a 32-bit machine and the RB is plain LCG-2.6.0.
> >
> >Cheers,
> >
> >Ron
> >
> >
> >
> >>-----Original Message-----
> >>From: LHC Computer Grid - Rollout
> >>[mailto:[log in to unmask]] On Behalf Of Laurence Field
> >>Sent: Wednesday 14 December 2005 11:57
> >>To: [log in to unmask]
> >>Subject: Re: [LCG-ROLLOUT] Move to authenticated R-GMA
> >>connectors: lcg-mon-job status rpm broken???
> >>
> >>What version of the RB code do you have, and is your machine 64-bit?
> >>
> >>
> >>Ron Trompert wrote:
> >>
> >>>Hi,
> >>>
> >>>I have created the link by hand. But now it doesn't start
> >>>because of the following error.
> >>>
> >>>started at Wed Dec 14 11:44:11 2005
> >>>Traceback (most recent call last):
> >>>  File "/opt/lcg/libexec/lcg-mon-logfile-daemon.py", line 249, in ?
> >>>    main(params["LOG_FILE"], params["TABLE"])
> >>>  File "/opt/lcg/libexec/lcg-mon-logfile-daemon.py", line 182, in main
> >>>    command = parser.parse_entry(entry)
> >>>  File "/opt/lcg/lib/python/jobstatus.py", line 66, in parse_entry
> >>>    rgma_time = map(lambda x: "%.2d" % x,
> >>>                    time.gmtime(int(stateEnterTime)))
> >>>ValueError: int() literal too large: 1134553404116
> >>>
> >>>By the way, we run a BDII and RB on the same machine.
> >>>
> >>>Cheers,
> >>>
> >>>Ron
> >>>
> >>>
> >>>>-----Original Message-----
> >>>>From: LHC Computer Grid - Rollout
> >>>>[mailto:[log in to unmask]] On Behalf Of Laurence Field
> >>>>Sent: Wednesday 14 December 2005 11:30
> >>>>To: [log in to unmask]
> >>>>Subject: Re: [LCG-ROLLOUT] Move to authenticated R-GMA
> >>>>connectors: lcg-mon-job status rpm broken???
> >>>>
> >>>>Try removing the rpm and re-installing it.
> >>>>
> >>>>The job-status monitor and the gridftp monitor both do almost the
> >>>>same job: they both parse a log file and insert the entries into
> >>>>R-GMA. They were both changed so that they use the same core
> >>>>program, lcg-mon-logfile-common, and the new rpms only contain a
> >>>>parser and a configuration file. There is a post-install script in
> >>>>the rpm that creates a link from
> >>>>/etc/rc.d/init.d/lcg-mon-job-status to
> >>>>/opt/lcg/etc/init.d/lcg-mon-logfile-daemon, and I think there is a
> >>>>bug in this post-install script: under certain circumstances the
> >>>>link is not created.
> >>>>
> >>>>Laurence
> >>>>
> >>>>
> >>>>Ron Trompert wrote:
> >>>>
> >>>>>Hi,
> >>>>>
> >>>>>There is something weird with the lcg-mon-job-status rpm. On the
> >>>>>Wiki it was advocated that lcg-mon-job-status should be restarted
> >>>>>on the RB. However, I couldn't do this because
> >>>>>/etc/rc.d/init.d/lcg-mon-job-status simply was not there. By the
> >>>>>way, I found a peculiar difference between the old and the new rpm.
> >>>>>
> >>>>>The old lcg-mon-job-status-1.0.5 rpm contained the files:
> >>>>>/etc/rc.d/init.d/lcg-mon-job-status
> >>>>>/opt/lcg/etc/init.d/lcg-mon-job-status
> >>>>>/opt/lcg/etc/lcg-mon-job-status.conf
> >>>>>/opt/lcg/sbin/lcg-mon-job-status
> >>>>>/opt/lcg/var/log
> >>>>>
> >>>>>while the new 2.0.3 contains:
> >>>>>
> >>>>>/opt/lcg/etc/lcg-mon-job-status.conf
> >>>>>/opt/lcg/lib/python/jobstatus.py
> >>>>>
> >>>>>Any ideas on how to get this working??
> >>>>>
> >>>>>Cheers,
> >>>>>
> >>>>>Ron Trompert
> >>>>><[log in to unmask]>
> >>>>>
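The manual workaround Ron mentions ("I have created the link by hand") and the post-install bug Laurence describes both boil down to one symlink. A sketch of what the %post script is supposed to ensure, demonstrated in a scratch directory rather than /etc (the `ensure_init_link` helper is illustrative; the two paths are the ones quoted above):

```python
import errno
import os
import tempfile

def ensure_init_link(target, link_path):
    # Create the init-script symlink if it is missing, tolerating the
    # case where a previous install already created it.
    try:
        os.symlink(target, link_path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise

# Demonstrate in a temporary directory instead of /etc/rc.d/init.d:
scratch = tempfile.mkdtemp()
link = os.path.join(scratch, "lcg-mon-job-status")
ensure_init_link("/opt/lcg/etc/init.d/lcg-mon-logfile-daemon", link)
ensure_init_link("/opt/lcg/etc/init.d/lcg-mon-logfile-daemon", link)  # idempotent
print(os.readlink(link))
```

On the real RB the equivalent by-hand command is `ln -s /opt/lcg/etc/init.d/lcg-mon-logfile-daemon /etc/rc.d/init.d/lcg-mon-job-status`.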