On Thu, 20 Jan 2005, Laurence wrote:
>> 1. yaim configure_CE does not configure a site giis. Adding the
>> appropriate lines into globus.conf and restarting globus-mds made it work
>> without issues though.
>>
> By default the BDII is used as the site GIIS. Using a BDII has proved to
> be more stable than running a GIIS. It runs on port 2170 rather than
> port 2135. You can run an MDS GIIS in parallel if you really need to.
Ah, ok. That certainly explains that.
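For the record, the BDII answers plain LDAP queries, so it can be checked
directly with ldapsearch, e.g. (the hostname below is a placeholder, and the
base DN may differ per site, so adjust both for your setup):

  $ ldapsearch -x -H ldap://my-ce.example.org:2170 -b mds-vo-name=local,o=grid

The same query against port 2135 would hit the MDS GIIS instead, if one is
still running in parallel.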
>> 2. yaim configures GlueCESEBindGroupSEUniqueID as "SE1", not the actual
>> SEUniqueID.
>>
> What version of yaim are you using? I think that this was a bug in an
> earlier version. The latest version is:
>
> lcg-yaim-2.3.0-9.noarch.rpm
> <http://grid-deployment.web.cern.ch/grid-deployment/gis/yaim/lcg-yaim-2.3.0-9.noarch.rpm>
I'm using what the apt source thinks is the latest version. Actually, this
seems to be it:
Filename: lcg-yaim-2.3.0-9.noarch.rpm
But perhaps some old file is still around since it isn't a clean install.
>> 3. As was discussed earlier, lcg-info-dynamic-pbs assumes that the pbs
>> server lives on the CE, this is my only code change this time around (so
>> far, pending an answer to the question above).
>>
> There is a new version of this plug-in, 1.0.3-1, which fixes this
> problem. There is now an option to specify the host running the pbs
> server.
Thank you.
Now for the next thing: I need to have the jobs source environment
variables on the worker nodes, since the tar WN has different paths than
the install on the CE. PBS doesn't source any of the system shell files,
it just replicates the environment from the one where qsub was run.
My current workaround for this is to add a line in the pbs job script that
does a "source $HOME/.bashrc" (all pool accounts have bash as login shell
anyway) after all the "#PBS" lines but before the rest of the shell
script. Is there a better way?
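For concreteness, a sketch of what that job script looks like; the #PBS
directives are placeholders, only the position of the "source" line matters:

```shell
#!/bin/bash
#PBS -q short
#PBS -l walltime=00:10:00
# ^ placeholder directives; PBS reads these, bash treats them as comments.

# Pull in the worker-node paths (the tar WN layout differs from the CE's),
# relying on all pool accounts having bash as their login shell. The guard
# only keeps this sketch runnable where no .bashrc exists; on the WN the
# file is always there.
[ -f "$HOME/.bashrc" ] && source "$HOME/.bashrc"

# ...the rest of the job script then sees the corrected environment...
ENV_SOURCED=yes
```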
/Mattias Wadenstein