LHC Computer Grid - Rollout
> On Behalf Of Gonçalo Borges said:
(NB I'm reading the mails in this thread in order, it seems easier than replying to all of them at once!)
> I have 113 hosts (2 quadcore cpus each) offering (113x8=) 904 cores.
> Here are my doubts:
>
> 1./ What's the correct value for GlueSubClusterPhysicalCPUs?
>
> I have set it to 226 (113 x 2 quadcore cpus)
226 is correct, the number of physical chips.
> Using those variables for some simple math operation like
> (GlueSubClusterPhysicalCPUs=226) x (SMPSize=8), you get a wrong value
> for the total number of logical CPUs (Cores).
True, but that's the wrong calculation! SMPSize is the number of cores per node, not per chip, so the multiplier should be the number of hosts: 113 x 8 = 904.
> 2./ What's the correct value for GlueSubClusterLogicalCPUs?
>
> I've set it up to 904 (the total number of cores)
That's correct.
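The arithmetic above can be sketched as follows (the counts are the ones quoted in this thread; the variable names are just for illustration):

```python
# CPU-count arithmetic for this site, using the numbers from the thread.
hosts = 113             # worker nodes
sockets_per_host = 2    # physical chips per node
cores_per_socket = 4    # quad-core chips
smp_size = sockets_per_host * cores_per_socket   # SMPSize = cores per node = 8

physical_cpus = hosts * sockets_per_host         # GlueSubClusterPhysicalCPUs = 226
logical_cpus = hosts * smp_size                  # GlueSubClusterLogicalCPUs = 904

# PhysicalCPUs * SMPSize (226 * 8 = 1808) double-counts the sockets,
# because SMPSize is per node, not per chip: multiply the node count.
```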
> but since we have
> sufficient memory per core, we allow some overbooking, and I
> wonder if I
> should publish the total number of slots (allowed jobs) instead...
No. The CPU power for your whole site is LogicalCPUs*HEPSPEC - you can't make your hardware more powerful just by defining more job slots!
In theory you could reduce the benchmark to compensate, but in practice that would be too complicated and the gain would be very small. It would also only matter when your site is completely full; most of the time you are probably running one job per core or fewer.
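A minimal sketch of why overbooking must not leak into the published numbers (the per-core HEP-SPEC06 figure here is purely illustrative, not a value from the thread):

```python
# Published site capacity = LogicalCPUs * per-core benchmark.
logical_cpus = 904
hepspec_per_core = 6.86   # illustrative per-core HEP-SPEC06 value, not measured

total_power = logical_cpus * hepspec_per_core

# Allowing extra job slots (overbooking) changes neither logical_cpus nor
# hepspec_per_core, so the published capacity must stay the same:
job_slots = 1200          # hypothetical overbooked slot count
overbooked_power = logical_cpus * hepspec_per_core   # unchanged by job_slots
```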
> 3./ I've set up "GlueHostProcessorOtherDescription:
> Cores=8,Benchmark=54.90-HEP-SPEC06". I assume that the
> benchmark value
> should be published by server (and not by core), and that the
> number of
> cores is the total number of cores per server, and not per
> CPU as stated
> in the docs (which in my case would be 4 since each host has
> 2 quadcore
> cpus). Is my assumption right?
No: as I just replied privately, the benchmark is per core. Most HEP jobs can only use a single core (at least at the moment!), so that's what we're interested in.
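For illustration, the attribute string with a per-core benchmark would be assembled like this (the 6.86 figure is a made-up per-core value; Cores=8 follows the per-node count used earlier in the thread, which is itself one of the points under discussion):

```python
# Building a GlueHostProcessorOtherDescription value with a per-core benchmark.
cores = 8                 # cores per worker node, as published in the thread
hepspec_per_core = 6.86   # illustrative per-core HEP-SPEC06 value

other_description = f"Cores={cores},Benchmark={hepspec_per_core:.2f}-HEP-SPEC06"
# e.g. "Cores=8,Benchmark=6.86-HEP-SPEC06"
```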
Stephen