Jeff Templon wrote:
> Hi
>
> Aside from middleware limitations that should be fixed, I see no
> reason why it would be wrong to publish multiple CEs that all point to
> the same Cluster object.
In fact, this is already the case for the different queues on the same
CE node, right (an infinite queue, a short queue, ...)? From the GLUE
(or WMS matchmaking) perspective these are different CEs, and it should
make no difference whether they are in fact different machines.
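
For illustration, a minimal (and simplified) sketch of two such GlueCEs
sharing one cluster, in GLUE 1.x LDIF. The host names are taken from
this thread; the cluster ID, port, and DN suffix are made up, and the
attribute spellings are from memory, so treat them as approximate:

  dn: GlueCEUniqueID=ce101.cern.ch:2119/jobmanager-lcglsf-short,mds-vo-name=resource,o=grid
  objectClass: GlueCE
  GlueCEUniqueID: ce101.cern.ch:2119/jobmanager-lcglsf-short
  GlueForeignKey: GlueClusterUniqueID=cern-prod-cluster

  dn: GlueCEUniqueID=ce102.cern.ch:2119/jobmanager-lcglsf-short,mds-vo-name=resource,o=grid
  objectClass: GlueCE
  GlueCEUniqueID: ce102.cern.ch:2119/jobmanager-lcglsf-short
  GlueForeignKey: GlueClusterUniqueID=cern-prod-cluster

Two queues on the same host would look just the same, except that only
the jobmanager suffix of the GlueCEUniqueID would differ.
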
Antonio.
>
> If this is done, then the goals of having these variables published
> have been reached, and I can do what I want to do :-)
>
> JT
>
>
> Steve Traylen wrote:
>> On Sep 20, 2007, at 2:17 PM, Ulrich Schwickerath wrote:
>>
>>> Hi,
>>>
>>> my 2 cents ;-)
>>>
>>>> But isn't that exactly what e.g. the CERN CEs are doing?
>>>> For example, ce101 and ce102 are fronting the same WNs.
>>>
>>> Plus some more identical CEs, because a single CE is not able to cope
>>> with the workload. If this service could be distributed over a
>>> load-balanced cluster of (stateless) machines, this problem would not
>>> exist, right?
>>>
>>> If only one CE is supposed to publish these values per subcluster,
>>> which one should be picked? What happens if this CE goes down (*)?
>>> Also, the number of cores (a.k.a. CPUs) in this environment is not at
>>> all static but very dynamic: it changes every day because machines
>>> come and go for various reasons. Is the number of cores/CPUs a useful
>>> number at all if the resources are in fact shared with local users,
>>> as they are at CERN (*)?
>>>
>>
>> Okay, this needs some explanation.
>>
>> As you say, multiple CE nodes are definitely required, and each of
>> these can have its own GlueCEs, e.g.
>>
>> ce101.cern.ch/jobmanager-lcglsf-short
>>
>> But the suggestion is that only one GlueCluster (say
>> cern-prod-cluster) and multiple GlueSubClusters (cern-sl3-nodes,
>> cern-sl4-nodes) should be published. As has been mentioned, YAIM does
>> not support this, and as far as I know the WMS has never been tested
>> with such a setup. As for where this single GlueCluster and its
>> GlueSubClusters should be published from: the point about not making
>> one CE node special in some way is noted. If we find a solution for
>> this, it should and will be designed so as to avoid a special CE node.
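
A minimal sketch of what such a setup might look like, again in
simplified GLUE 1.x LDIF (the object and key attribute names are from
memory, and the CPU numbers and DN suffix are invented):

  dn: GlueClusterUniqueID=cern-prod-cluster,mds-vo-name=resource,o=grid
  objectClass: GlueCluster
  GlueClusterUniqueID: cern-prod-cluster
  GlueForeignKey: GlueCEUniqueID=ce101.cern.ch:2119/jobmanager-lcglsf-short
  GlueForeignKey: GlueCEUniqueID=ce102.cern.ch:2119/jobmanager-lcglsf-short

  dn: GlueSubClusterUniqueID=cern-sl4-nodes,GlueClusterUniqueID=cern-prod-cluster,mds-vo-name=resource,o=grid
  objectClass: GlueSubCluster
  GlueSubClusterUniqueID: cern-sl4-nodes
  GlueChunkKey: GlueClusterUniqueID=cern-prod-cluster
  GlueSubClusterPhysicalCPUs: 2000
  GlueSubClusterLogicalCPUs: 4000

A second GlueSubCluster entry (cern-sl3-nodes) would look analogous;
the open question is only which node generates and publishes these
objects.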
>>
>> I really did not mean to single out CERN here. Every single site on
>> the grid is doing it this way.
>>
>>> I'm surprised to hear statements like "misuse of the schema" when this
>>> is the only way for us to actually survive the load. Please believe me
>>> that it is not easy to maintain a CE cluster of 24 machines, and if we
>>> have to start making individual machines special in some way, the
>>> system will very soon become unmaintainable. As a matter of fact, we
>>> currently have a job throughput of 130k jobs per day and about 50k
>>> jobs in the system, and both numbers are increasing.
>>>
>>> Practical suggestions for a solution are very welcome.
>>>
>>> Cheers,
>>> Ulrich
>>>
>>> (*) One could indeed set up a non-existent "fake" CE per cluster in
>>> the BDII which would publish these numbers. I do not really like this
>>> idea, because it is just another ugly hack to get around a limitation
>>> of the system, isn't it?
>>
>> In some ways it is a step in the correct direction of publishing only
>> a GlueCluster and GlueSubCluster.
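
Concretely, the "fake" CE hack would seem to amount to one extra entry
per cluster along these lines (the host name is invented, and marking
it Closed so that no jobs match it is only my guess at how it would be
kept out of matchmaking):

  dn: GlueCEUniqueID=fake-ce.cern.ch:2119/jobmanager-none-publish,mds-vo-name=resource,o=grid
  objectClass: GlueCE
  GlueCEUniqueID: fake-ce.cern.ch:2119/jobmanager-none-publish
  GlueCEStateStatus: Closed
  GlueForeignKey: GlueClusterUniqueID=cern-prod-cluster

with the GlueCluster and GlueSubCluster objects of the earlier sketch
hanging off it.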
>>
>>