Good evening ROLLOUTers

I've been trying -and trying- to upgrade IPSL-IPGP-LCG2 to LCG-2_3_0,
on RH7.3 with LCFG.

Seems to have been successful on the UI, and maybe on the SE, where no packages
_at all_ were replaced or added, so I really won't be surprised if it turns out
to have failed (I haven't had time yet to check the rpm lists on the SE for
confirmation).

But on the CE and the WN it went really badly:

First, I was warned that 

[WARNING] updaterpms: Couldn't find RPM header file for lcg-bdii-3.1.13-1
[WARNING] updaterpms: Couldn't find RPM header file for 
lcg-info-dynamic-pbs-1.0.3-1

That surprised me a bit, because the files .lcg-bdii-3.1.13-1.noarch.rpm and
.lcg-info-dynamic-pbs-1.0.3-1 are present in the same directory as the rpms
themselves.
When I run
genhdfile-static-402 lcg-bdii-3.1.13-1.noarch.rpm 
genhdfile-static-402 lcg-info-dynamic-pbs-1.0.3-1.noarch.rpm 
I get no complaints, but that doesn't change anything.
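
(For what it's worth, the check I did on the CE looks roughly like this - the
rpm directory path below is only a placeholder, replace it with whatever
updaterpms actually points to:

  cd /path/to/lcg/rpm/dir      # assumed location of the rpms and header files
  ls -la .lcg-bdii-3.1.13-1* .lcg-info-dynamic-pbs-1.0.3-1*   # headers there and readable?
  genhdfile-static-402 lcg-bdii-3.1.13-1.noarch.rpm           # regenerate, exits silently
  genhdfile-static-402 lcg-info-dynamic-pbs-1.0.3-1.noarch.rpm

and the hidden header files are indeed there afterwards, yet the warnings
persist.)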

Anyway, I would be very, very happy if that were the only problem I
encountered.

The fact is that the installation leaves the computing nodes without a batch
system! I asked for PBS in the profiles, but the upgrade just *removes*
the packages listed in pbs-(server|client)-rpm.h, despite many efforts.
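
(Just for reference, the relevant part of the CE/WN source profiles is, roughly,
the usual cpp includes of those rpm lists - a sketch from memory, surrounding
lines omitted:

  /* CE source profile */
  #include "pbs-server-rpm.h"

  /* WN source profile */
  #include "pbs-client-rpm.h"

so the lists are definitely requested, yet updaterpms drops the packages
anyway.)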

So finally, my idea was to install them again (using the old LCG-2_2_0
profiles, so as not to miss anything, hence the title of this post), set the
updaterpms.localpkgs flag to yes (with updaterpms.localpkgs set to cdb), and
then try the 2_3 upgrade again.
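
(Concretely, what I mean is something like the following - just a sketch,
assuming the plain `component.resource value' syntax of LCFG source profiles;
the exact resource to flip should be checked against the updaterpms schema:

  /* in the CE and WN source profiles, before recompiling them */
  updaterpms.localpkgs   yes

then rebuild the profiles and let the nodes fetch them again.)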

This didn't fail that badly, except that without (at least?)
lcg-info-dynamic-pbs, the advertising of the queues by the ldap tool is very
close to Swiss cheese: full of holes.

So maybe I could still force the install of these two packages by typing the
appropriate rpm commands (luckily I only take care of 1 CE and 4 WNs),
but in the end I took the wise (?) decision to seek further advice before
actually doing it, as I also felt this situation should be reported somehow.
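
(The kind of command I have in mind, just a sketch assuming the two rpms are
available locally on each node - I have not run it yet:

  rpm -Uvh lcg-bdii-3.1.13-1.noarch.rpm lcg-info-dynamic-pbs-1.0.3-1.noarch.rpm

run on the CE, and repeated on the WNs only if the same packages turn out to be
missing there too.)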

Thank you very much for having read this message up to here.

Yours, kindly,

David Weissenbach.