On 09/03/2012 16:33, Sam Skipsey wrote:
> Hi Rob,
>
> On 9 March 2012 16:28, Rob Fay <[log in to unmask]> wrote:
>
> Hi Alessandra,
>
> On 09/03/2012 16:13, Alessandra Forti wrote:
>
> Hi Rob,
>
>
> This is what Manchester is doing, following the Glue schema. Of course we
> lose out when the cluster is half full, but we try to keep it full and we
> use all 24 slots, so not being full is less systematic than what you want
> to do.
>
>
> That's an option, but I don't think we'd really want to do that on our
> nodes. Aside from the increased contention for I/O, our benchmarking
> shows that total HS06 not only virtually flatlines but actually drops as
> you reach full hyperthreaded capacity on these boxes: the total HS06 for
> 24 runs is less than the total HS06 for 18 runs.
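>
> (For concreteness, a minimal sketch of that comparison; the numbers are
> purely illustrative, not measurements. The machine's total is just the sum
> of the per-copy scores while that many copies run concurrently:)
>
>     def total_hs06(per_copy_scores):
>         # A box's HS06 figure is the sum of the per-copy scores measured
>         # while all copies run at the same time.
>         return sum(per_copy_scores)
>
>     # Illustrative only: the per-copy score falls once the second
>     # hyperthread on each physical core is loaded, so the 24-run total
>     # can end up below the 18-run total.
>     print(total_hs06([9.0] * 18))   # 162.0
>     print(total_hs06([6.5] * 24))   # 156.0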
>
>
>
> Sure, but if you mean you're actually benchmarking with HS06, there's a
> caveat to consider: real-life performance of jobs (especially analysis jobs,
> which are more I/O-heavy and less CPU-heavy) is not precisely HS06-like.
> The real-life performance figures I've managed to get out of ATLAS's
> experimental monitoring system (built out of HammerCloud) suggest that, for
> their analysis load, the performance of QMUL's hyperthreaded cores converges
> (from above) on 50% of that of non-hyperthreaded cores. This is somewhat
> better than the pessimistic figure that HS06 gives you (although I suspect
> production will look more like the HS06 figures).
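>
> (To put rough numbers on "converges on 50% from above": illustrative
> figures only, not the HammerCloud results. If each loaded logical core
> delivers just over half the event rate of an unloaded physical core, the
> node as a whole still comes out slightly ahead with hyperthreading in use:)
>
>     # Hypothetical node: 12 physical cores, 24 logical cores with HT.
>     physical_cores = 12
>     rate_no_ht = 1.00    # events/s per core, one job per physical core
>     rate_ht = 0.55       # events/s per logical core, all 24 slots loaded
>
>     total_no_ht = physical_cores * rate_no_ht      # 12.0 events/s
>     total_ht = 2 * physical_cores * rate_ht        # 13.2 events/s
>     print(total_no_ht, total_ht)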
Don't QMUL use direct POSIX I/O to their Lustre rather than copy-to-disk like
DPM sites, though? If so, that's a different load profile from what we would
expect.
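
(Roughly the two access patterns I have in mind; a sketch only, with
placeholder paths, and shutil.copy standing in for whatever stage-in tool the
site actually uses:)

    import shutil

    def copy_to_disk(src, scratch="/tmp/job/input.root"):
        # Copy-to-disk style: stage the whole input file onto the worker
        # node's local disk first (one big sequential transfer), then the
        # job reads it locally for the rest of its run.
        shutil.copy(src, scratch)
        return open(scratch, "rb")

    def direct_posix(src):
        # Direct POSIX style: open the file in place on the shared
        # filesystem, so the job's (often small, scattered) analysis reads
        # go over the network filesystem for the whole job.
        return open(src, "rb")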
John
--
John Bland [log in to unmask]
System Administrator office: 220
High Energy Physics Division tel (int): 42911
Oliver Lodge Laboratory tel (ext): +44 (0)151 794 2911
University of Liverpool http://www.liv.ac.uk/physics/hep/
"I canna change the laws of physics, Captain!"