Hi Winnie,

I enquired about this (on LCG-ROLLOUT, I believe) a few months back
when I upgraded our WN tarball. The response was that wn-list.conf
should include a single line containing the headnode hostname, i.e. the
actual machine on which the tarball is being configured.
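
If it helps, a minimal sketch of what that would look like (the hostname
below is only a placeholder; use whatever "hostname -f" reports on your
headnode):

  # /exports/gpfs/gridpp-shared/lcg/yaim-conf/wn-list.conf
  bluecrystal1.example.ac.uk

i.e. just the one line, so that yaim run on the headnode finds the node
it is configuring listed in WN_LIST.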

Cheers,
Gianfranco

On 21 Sep 2009, at 15:23, Winnie Lacesso wrote:

> Dear All,
>
> On our HPC, of which PP/LCG gets a portion, the WNs are busy running jobs,
> so we run yaim for the HPC WN tarball install on the HPC headnode, since
> users are supposed to compile etc. on the headnode, not on the WNs.
>
> We were at 3.1.24-0 and are preparing to move to 3.1.33-0.
> Yaim now seems to have broken, as it expects the node on which yaim is
> run to be a WN. But this is nonsensical for an HPC: the WNs are busy
> running jobs, so running yaim on a WN (where gpfs is already having a
> very hard time, so let's not add to it) would interfere with user jobs!
>
>   ERROR: The WN you are configuring is not defined in the WN_LIST file
>   /exports/gpfs/gridpp-shared/lcg/yaim-conf/wn-list.conf
>   ERROR: Configuration error !
>   ERROR: Configuration error !
> lcg@bluecrystal1>
>
> That's true: bluecrystal1 is the HPC headnode; it's not a WN and is not in
> wn-list.conf.
> This has never been a problem in previous working yaim versions.
>
> What is the solution - fake that the HPC headnode is a WN in wn-list.conf,
> or fix yaim, or interfere with user jobs by running yaim on a busy WN?
>
> Grateful for advice

-- 
Dr. Gianfranco Sciacca			Tel: +44 (0)20 7679 3044
Dept of Physics and Astronomy		Internal: 33044
University College London		D15 - Physics Building
London WC1E 6BT