On Fri, 2006-04-07 at 14:07 +0100, Peter Gronbech wrote:

>     PP LCG CE/SE/MON -> uni.cluster.PBS.headnode => many.cluster.WN
>
> What constraints are there re: configuration &/or OS of the
> cluster.PBS.headnode & cluster.WN to be 'the farm' of the PP LCG
> CE/SE/MON? 

I currently have a (mostly) working site that uses bi-arch 32bit/64bit
RHEL3 and GridEngine. 

(Because I use GridEngine, not PBS, your results may vary.)

> Does it matter if the remote cluster is Linux of some species (probably
> not Scientific Linux), or not? 

Binary compatibility with 32bit RHEL3 / SL3 is probably a necessity.

> Does there need to be _any_ LCG-specific software or config of the
> cluster.PBS.Headnode or cluster.WN? What & how much?

So far, I've managed to get away without any extensive changes.  The
only changes needed on the worker nodes are the creation of the pool
accounts (already centrally managed locally) and of the associated home
directories -- exported from a local NFS server.
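For illustration, pool-account creation can be sketched as below.  The
VO name, account count, and NFS home base are placeholders, not a
description of any real site's setup; the function only prints the
commands it would run, so a site's central account management can stay
in charge:

```shell
# Hypothetical sketch: generate the useradd commands for a block of
# pool accounts (e.g. dteam001..dteam050).  Names/paths are invented.
make_pool_accounts() {
    vo="$1"        # VO name, used as group and account prefix
    count="$2"     # number of pool accounts
    homebase="$3"  # NFS-exported home directory base

    i=1
    while [ "$i" -le "$count" ]; do
        acct=$(printf '%s%03d' "$vo" "$i")
        # Print, rather than execute, so this stays a dry run:
        echo "useradd -g $vo -d $homebase/$acct -m $acct"
        i=$((i + 1))
    done
}
```

Running `make_pool_accounts dteam 50 /nfs/home` would emit fifty
`useradd` lines for review before anything is actually created.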

A network-accessible volume for the storage of the WN TAR environment is
necessary.

Similarly, a network-accessible volume is necessary for the storage of
the experiments' software packages.
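If those two volumes were served from a plain Linux NFS server, the
export configuration might look something like this fragment (paths and
the cluster subnet are invented for illustration):

```shell
# Hypothetical /etc/exports fragment for the two shared volumes:

# WN TAR environment -- read-only is sufficient for the worker nodes
/export/wn-tar      192.168.0.0/24(ro,no_subtree_check)

# Experiment software area -- writable so VO software managers can
# install releases
/export/exp-soft    192.168.0.0/24(rw,no_subtree_check)
```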

By modifying the JobManager to source the grid_env.[sh|csh]
environment-setting script before the execution of every job, I haven't
needed to modify anything on the worker nodes themselves (modulo
users/groups/mounts) -- the grid_env script updates LD_LIBRARY_PATH,
PATH and other environment variables as necessary.
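The JobManager change amounts to wrapping the job in an
environment-loading step.  A minimal sketch, assuming a shared
grid_env.sh somewhere on the network volume (the path here is a
placeholder, not the real one):

```shell
# Hypothetical wrapper the JobManager could invoke in place of the job
# itself: source the grid environment, then hand off unchanged.
run_grid_job() {
    # Assumed location of the script on the shared WN TAR volume:
    GRID_ENV="${GRID_ENV:-/nfs/grid/wn-tar/etc/grid_env.sh}"

    # Pull in PATH, LD_LIBRARY_PATH, etc., if the script is readable:
    [ -r "$GRID_ENV" ] && . "$GRID_ENV"

    # Run the actual grid job with the updated environment:
    "$@"
}
```

Because the environment is loaded per-job, nothing persistent has to be
installed or configured on the worker nodes themselves.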

> There might be restrictions on having ports open on the cluster.WN to
> external machines (ie Bristol's PP LCG CE/SE/MON). Would that cause
> problems?

It depends on which direction is blocked.  IIRC, the WNs don't need to
accept any incoming connections, but they will likely need to be able
to make outbound TCP connections on various commonly-used ports.
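A quick way to verify outbound reachability from a WN is to probe the
relevant host/ports with netcat.  The host and port list below are
placeholders; the real set depends on which services the CE/SE/MON
expose:

```shell
# Sketch: report whether outbound TCP connections from this node to the
# given host/ports succeed.  Requires nc (netcat); -z just tests the
# connection, -w 5 caps the wait at 5 seconds.
check_outbound() {
    host="$1"; shift
    for port in "$@"; do
        if nc -z -w 5 "$host" "$port" 2>/dev/null; then
            echo "outbound to $host:$port OK"
        else
            echo "outbound to $host:$port BLOCKED"
        fi
    done
}
```

For example, `check_outbound ce.example.ac.uk 2119 2811 8443` would
test a few ports commonly associated with grid services.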

Hope this helps.

Cheers,
David
-- 
David McBride <[log in to unmask]>
Department of Computing, Imperial College, London