Hi,
It sounds like one of your colleagues is trying to help you ... this "error" is not an error; it's a debug statement. On our production CE, the code says:
DEBUG = 0
if DEBUG:
    print "dumping parse tree for static ldif file"
    for d in dndict.keys():
(this is from /opt/lcg/libexec/lcg-info-dynamic-scheduler)
-r-xr-xr-x 1 root root 14950 Aug 24 2007 /opt/lcg/libexec/lcg-info-dynamic-scheduler
The file comes from
lcg-info-dynamic-scheduler-generic-2.2.2-1
I don't see how it could be printing anything unless someone has changed "DEBUG" to a nonzero value.
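A quick way to check on your node, assuming the assignment still looks like the snippet above:

grep -n '^DEBUG' /opt/lcg/libexec/lcg-info-dynamic-scheduler

If that shows anything other than DEBUG = 0, the file has been edited.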
JT
On Nov 9, 2010, at 09:26, [log in to unmask] wrote:
> Hi,
>
> I dug deeper and found that if I run the following command I get an
> error:
> [root@tbit01 ~]# ./glite-info-dynamic-scheduler-wrapper | less
>
> dumping parse tree for static ldif file
> For dn: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-ifops,mds-vo-name=resource,o=grid
> LocalID: None
> CEUniqueID: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-ifops,mds-vo-name=resource,o=grid
> Queue Name: ifops
> ACBRs: [('VO', 'ifops')]
> For dn: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-alice,mds-vo-name=resource,o=grid
> LocalID: None
> CEUniqueID: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-alice,mds-vo-name=resource,o=grid
> Queue Name: alice
> ACBRs: [('VO', 'alice')]
> For dn: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-ops,mds-vo-name=resource,o=grid
> LocalID: None
> CEUniqueID: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-ops,mds-vo-name=resource,o=grid
> Queue Name: ops
> ACBRs: [('VO', 'ops')]
> For dn: GlueVOViewLocalID=lhcb,GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-lhcb,mds-vo-name=resource,o=grid
> LocalID: lhcb
> CEUniqueID: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-lhcb,mds-vo-name=resource,o=grid
> Queue Name: lhcb
> ACBRs: [('VO', 'lhcb')]
> For dn: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-dteam,mds-vo-name=resource,o=grid
> LocalID: None
> CEUniqueID: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-dteam,mds-vo-name=resource,o=grid
> Queue Name: dteam
> ACBRs: [('VO', 'dteam')]
> For dn: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-lhcb,mds-vo-name=resource,o=grid
> LocalID: None
> CEUniqueID: GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-lhcb,mds-vo-name=resource,o=grid
> Queue Name: lhcb
> ACBRs: [('VO', 'lhcb')]
> For dn: GlueVOViewLocalID=ifops,GlueCEUniqueID=tbit01.nipne.ro:2119/jobmanager-lcgpbs-ifops,mds-vo-name=resource,o=grid
> LocalID: ifops
>
> If I run it manually:
>
> [root@tbit01 ~]# /opt/glite/libexec/glite-info-generic /opt/glite/etc/gip/glite-info-generic.conf | grep subclusterlogical
> gluesubclusterlogicalcpus: 8
> gluesubclusterlogicalcpus: 800
> These are the values for both CEs: tbit03 (8) and tbit01 (800).
>
> So glite-info-generic is running but its output is not being published
> in the GIP file. I think the problem occurs when
> glite-info-dynamic-scheduler-wrapper runs, since it reads
> static-file-CE.ldif, which is not correct.
> But (if this is the problem) how can I fix static-file-CE.ldif?
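> To narrow this down, I suppose I can also query the resource BDII on
> tbit01 directly and compare with the plugin output above - assuming it
> listens on the usual port 2170 with base mds-vo-name=resource,o=grid:
>
> ldapsearch -x -h tbit01.nipne.ro -p 2170 -b mds-vo-name=resource,o=grid 'objectclass=GlueSubCluster' | grep -i LogicalCPUs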
>
> Cheers,
> Mihai
>
>
>> LHC Computer Grid - Rollout [mailto:[log in to unmask]] said:
>>> As you can see I don't get the CPUs for subcluster tbit01.nipne.ro
>>
>> The top BDII at CERN can see both:
>>
>> ldapsearch -x -h lcg-bdii.cern.ch -p 2170 -b o=grid 'gluesubclusteruniqueid=*tbit01*' | grep CPUs:
>> GlueSubClusterPhysicalCPUs: 406
>> GlueSubClusterLogicalCPUs: 406
>>
>> ldapsearch -x -h lcg-bdii.cern.ch -p 2170 -b o=grid 'gluesubclusteruniqueid=*tbit03*' | grep CPUs:
>> GlueSubClusterPhysicalCPUs: 8
>> GlueSubClusterLogicalCPUs: 8
>>
>> Incidentally, those numbers look wrong unless all your CPUs are
>> single-core - the ratio LogicalCPUs/PhysicalCPUs should equal the
>> average number of cores per physical CPU.
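>> (For example, if those 406 physical CPUs had two cores each, you would
>> expect PhysicalCPUs: 406 and LogicalCPUs: 812.)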
>>
>> Anyway, you're right that there is something strange. According to the
>> GOC DB your site BDII is on tbit01, and indeed that only has the CPUs
>> for tbit03, as you say. I think something in your site BDII
>> configuration is wrong. Among other things you appear to have *three*
>> site BDIIs, on tbit00, tbit01 and tbit03, and it's the one on tbit00
>> which sees the tbit01 CPUs:
>>
>> ldapsearch -x -h tbit00.nipne.ro -p 2170 -b o=grid 'gluesubclusteruniqueid=*tbit01*' | grep CPUs:
>> GlueSubClusterPhysicalCPUs: 406
>> GlueSubClusterLogicalCPUs: 406
>>
>> as well as tbit03:
>>
>> ldapsearch -x -h tbit00.nipne.ro -p 2170 -b o=grid 'gluesubclusteruniqueid=*tbit03*' | grep CPUs:
>> GlueSubClusterPhysicalCPUs: 8
>> GlueSubClusterLogicalCPUs: 8
>>
>> Presumably that's where the top-level BDII is getting the information,
>> although I'm not sure how! Anyway, I think you need to rationalise your
>> configuration so that you only have a single site BDII, preferably on a
>> standalone machine.
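>> Once you've consolidated, a query like the ones above against the
>> remaining site BDII (replace <site-bdii-host> with its hostname) should
>> show the CPU counts for both subclusters:
>>
>> ldapsearch -x -h <site-bdii-host> -p 2170 -b o=grid 'gluesubclusteruniqueid=*' | grep CPUs: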
>>
>> Stephen
>>