Hi Stephen and Steve...

> The CEs (i.e. the glue CEs, corresponding to the queues) have to point to different clusters and subclusters. See:
>
> http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_different_memory_limits_for_different_queues_on_the_same_CE
>
> http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_the_OS_name
>
> http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_my_machine_architecture

Does yaim support these GlueClusters? If not, these implementations are simply not feasible for us... At the rate we get middleware updates (once a week), most of them requiring a yaim reconfiguration, you would break everything you did by hand. I would not mind setting up such a mechanism if there were a guarantee that it would not be destroyed by a yaim reconfiguration; otherwise it is simply easier to start a new CE, as Steve pointed out.
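For reference, something like the following (just a sketch on my side; the hostname is a placeholder, and I am assuming the usual resource-BDII port 2170 and the GLUE 1.x attribute names) would dump what the CE currently publishes, so we can see whether each queue really points to its own subcluster:

#!/usr/bin/env python
# Rough sketch only: dump what the CE publishes for queues and subclusters,
# to check whether each queue (GlueCE) points to its own GlueSubCluster.
# Attribute names are from the GLUE 1.x schema; CE_HOST is a placeholder
# and 2170 is the usual resource-BDII port.
import subprocess

CE_HOST = "ce.example.org"  # placeholder, put the real CE hostname here

cmd = [
    "ldapsearch", "-x", "-LLL",
    "-h", CE_HOST, "-p", "2170",
    "-b", "mds-vo-name=resource,o=grid",
    "(|(objectClass=GlueCE)(objectClass=GlueSubCluster))",
    "GlueCEUniqueID", "GlueForeignKey",        # which cluster each queue points to
    "GlueSubClusterUniqueID", "GlueChunkKey",  # which cluster each subcluster belongs to
    "GlueHostMainMemoryRAMSize",               # the per-queue memory limit being published
]
subprocess.call(cmd)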

>> The information system is presenting a "GlueCEStateFreeJobSlots" value
>> (in the "dn: GlueVOViewLocalID" fields) representing the sum of
>> resources for both queues,

> That sounds like you don't have the info provider configured correctly, but I have no idea how the sge provider works - do you know who supports it?

This is presently a problem. It was supported by LeSC, but unfortunately that person has left and no one is taking over the job (it is just on a best-effort basis). Indeed, the two clusters do not share WNs, so they should present proper values. I'll try to look at the script myself...
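In the meantime, something along these lines (again just a sketch, with a placeholder hostname and the usual resource-BDII port assumed) should list the FreeJobSlots value published in each VO view, to confirm that both queues are reporting the summed number:

#!/usr/bin/env python
# Rough sketch only: list the GlueCEStateFreeJobSlots published in each
# VO view, to see whether both queues report the same summed value.
# CE_HOST is a placeholder; 2170 is the usual resource-BDII port.
import subprocess

CE_HOST = "ce.example.org"  # placeholder, put the real CE hostname here

cmd = [
    "ldapsearch", "-x", "-LLL",
    "-h", CE_HOST, "-p", "2170",
    "-b", "mds-vo-name=resource,o=grid",
    "(objectClass=GlueVOView)",
    "GlueVOViewLocalID", "GlueChunkKey", "GlueCEStateFreeJobSlots",
]
subprocess.call(cmd)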

Cheers
Goncalo