2009/8/6 Gonçalo Borges <[log in to unmask]>:
Hi Stephen and Steve...
The CEs (i.e. the glue CEs, corresponding to the queues) have to point to
different clusters and subclusters. See:
http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_different_memory_limits_for_different_queues_on_the_same_CE
http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_the_OS_name
http://goc.grid.sinica.edu.tw/gocwiki/How_to_publish_my_machine_architecture
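[For reference, the recipes in those wiki pages amount to publishing a separate GlueSubCluster per queue, so each queue can advertise its own limits. A purely hypothetical static-LDIF sketch (hostnames, DNs, and values invented here, not taken from the thread) might look like:

```ldif
# Hypothetical fragment: one GlueSubCluster per queue, each publishing
# its own memory limit (attribute names from the Glue 1.x LDAP schema).
dn: GlueSubClusterUniqueID=ce.example.org-short,GlueClusterUniqueID=ce.example.org,mds-vo-name=resource,o=grid
objectClass: GlueSubCluster
GlueSubClusterUniqueID: ce.example.org-short
GlueHostMainMemoryRAMSize: 2048

dn: GlueSubClusterUniqueID=ce.example.org-long,GlueClusterUniqueID=ce.example.org,mds-vo-name=resource,o=grid
objectClass: GlueSubCluster
GlueSubClusterUniqueID: ce.example.org-long
GlueHostMainMemoryRAMSize: 8192
```
]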
Does YAIM support these GlueClusters? If not, these implementations are
simply not feasible for us... At the rate we receive middleware updates (once
a week), most of them requiring a YAIM reconfiguration, you would break
everything you did by hand. I would not mind setting up such a mechanism if
there were a guarantee that it would not be destroyed by a YAIM
reconfiguration; otherwise it is just simpler to start a new CE, as Steve
pointed out.
As I mentioned, no.
https://twiki.cern.ch/twiki/bin/view/EGEE/WNWorkingGroup
has some cutting-edge material for doing it, but it still needs testing...
As in, don't use it unless you want to test it and provide feedback.
Steve
The information system is presenting a
"GlueCEStateFreeJobSlots" value
(in the "dn: GlueVOViewLocalID" fields) representing the sum of
resources for both queues,
That sounds like you don't have the info provider configured correctly, but
I have no idea how the SGE provider works - do you know who supports it?
This is presently a problem. It was supported by LeSC, but unfortunately the
person responsible left and no one has taken over the job (it is maintained
on a best-effort basis only). Indeed, the two clusters do not share WNs, and
therefore they should publish distinct values. I'll try to look at the script
myself...
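[Before digging into the provider itself, one quick sanity check is to parse the LDIF it emits and list GlueCEStateFreeJobSlots per GlueVOView entry; if two non-sharing queues publish the same summed figure, the bug is visible immediately. A minimal sketch, assuming generic LDIF output (the DNs and numbers below are invented, not from our site):

```python
# Hedged sketch (not the actual SGE info provider): map each GlueVOView DN
# in info-provider LDIF output to its GlueCEStateFreeJobSlots value.
def free_slots_per_voview(ldif_text):
    """Return {GlueVOView DN: free job slots} parsed from LDIF text."""
    slots = {}
    current_dn = None
    for line in ldif_text.splitlines():
        line = line.strip()
        if line.lower().startswith("dn:"):
            dn = line[3:].strip()
            # Only track VO view entries; ignore other DNs.
            current_dn = dn if "GlueVOViewLocalID" in dn else None
        elif current_dn and line.startswith("GlueCEStateFreeJobSlots:"):
            slots[current_dn] = int(line.split(":", 1)[1])
    return slots

# Invented sample showing the symptom: both queues report the same sum.
sample = """\
dn: GlueVOViewLocalID=atlas,GlueCEUniqueID=ce.example.org:2119/jobmanager-sge-short
GlueCEStateFreeJobSlots: 10

dn: GlueVOViewLocalID=atlas,GlueCEUniqueID=ce.example.org:2119/jobmanager-sge-long
GlueCEStateFreeJobSlots: 10
"""

for dn, n in free_slots_per_voview(sample).items():
    print(dn, "->", n)
```

If the per-queue values come out identical for queues that don't share WNs, the provider is aggregating where it should be splitting.]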
Cheers
Goncalo