Andrew McNab wrote:
> On 06/04/2011 12:28, John Gordon wrote:
>>
>> Andrew, if you look at the HEPiX Benchmark site
>> http://w3.hepix.org/benchmarks/doku.php?id=bench:results_sl5_x86_64_gcc_412
>> you will see multiple examples of the same chip being benchmarked with
>> one job/core HT off and 2 jobs/core HT on. The latter is typically
>> 20-25% bigger in HS06/chip. Obviously you get less HS06/logical CPU but
>> it looks like there will be a benefit in total throughput.
>>
>> You are obviously right about the theoretical seconds/core, but HEP
>> codes and the HS06 benchmark are not 100% efficient, so doubling up
>> helps. I guess there is also some parallel capability in even a single
>> core.

Overcommitting will presumably give you an advantage when waiting for
disk/network I/O even without hyperthreading. Wikipedia
http://en.wikipedia.org/wiki/Hyper-Threading says "and especially when
the processor is stalled, a hyper-threading equipped processor can use
those execution resources to execute another scheduled task. (The
processor may stall due to a cache miss, branch misprediction, or data
dependency.)"

> Yes to all of the above. I'm not arguing against hyperthreading - just
> saying it's not true that "hyperthreaded CPUs and cores are effectively
> the same thing".
>
> There's a further complication that the "CPU seconds used" reported by
> the operating system _includes_ the idle times when one hyperthread is
> waiting for access to the shared execution units etc. (i.e. due to the
> internal queue within each core), even though that time doesn't actually
> correspond to instructions being carried out by that hyperthread.

Presumably that's why we get a lower HEPSPEC06 per core if hyperthreading
is turned on. It doesn't half make the accounting complicated though...
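The HEPiX figures John quotes can be sketched with some quick arithmetic. A minimal sketch follows; the HS06 value, core count, and the 22% gain are illustrative assumptions, not taken from any specific entry in the benchmark table:

```python
# Illustrative arithmetic only: all figures below are assumed, not
# taken from a specific HEPiX benchmark result.
cores = 8                # physical cores on the chip (assumed)
hs06_ht_off = 100.0      # HS06/chip, one job per core, HT off (assumed)
ht_gain = 0.22           # ~20-25% throughput gain reported with HT on

hs06_ht_on = hs06_ht_off * (1 + ht_gain)    # HS06/chip with 2 jobs/core
per_core_off = hs06_ht_off / cores          # HS06 per physical core, HT off
per_logical_on = hs06_ht_on / (2 * cores)   # HS06 per logical CPU, HT on

print(f"HT off: {per_core_off:.3f} HS06/core")
print(f"HT on:  {per_logical_on:.3f} HS06/logical CPU "
      f"({hs06_ht_on:.1f} HS06/chip in total)")
```

With these assumed numbers the chip as a whole gains 22% in HS06, while each logical CPU scores well under the HT-off per-core figure, which is the trade-off described above: more total throughput, less per logical CPU.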
Chris

> Andrew
>
>> -----Original Message-----
>> From: Testbed Support for GridPP member institutes
>> [mailto:[log in to unmask]] On Behalf Of Andrew McNab
>> Sent: 06 April 2011 11:21
>> To: [log in to unmask]
>> Subject: Re: HEPSPEC06 numbers for GridPP metrics
>>
>> On 05/04/2011 17:11, Stephen Burke wrote:
>>> Testbed Support for GridPP member institutes
>>> [mailto:[log in to unmask]] On Behalf Of Andrew McNab said:
>>>> The most you could ever deliver in total is one second per second for
>>>> each core, irrespective of hyperthreading, but two cores can
>>>> potentially deliver two seconds per second in total for some types
>>>> of task.
>>>
>>> Only if the code is running multiple threads, and the underlying
>>> assumption is that HEP code is single-threaded.
>>
>> When I say "the most you ever deliver in total", the total I'm referring
>> to is across all jobs on that machine.
>>
>> The most CPU time you can ever deliver in total to all jobs on a machine
>> "is one second per second for each core, irrespective of hyperthreading".
>>
>> But for some extreme types of task (e.g. generating random numbers), two
>> cores can deliver two seconds per second in total to the jobs.
>>
>> The point is that although sites can use hyperthreading to increase
>> their efficiency (by using the execution units of cores to deliver CPU
>> time to other jobs while one is waiting for I/O), it's not the same as
>> having more cores. So when we deliver a second of CPU time to a job,
>> it's had the exclusive use of one core's execution resources during all
>> the time slices that make up that second. So it's really the cores that
>> count.
>>
>> Cheers,
>>
>> Andrew
>>
>> --------------------------------------------------------------
>> Dr Andrew McNab, High Energy Physics, University of Manchester
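Andrew's accounting point in the thread above can be put in numbers with a toy model: over one wall-clock second, two hyperthreads sharing a core are each billed a full CPU second by the operating system, even though the core only retires somewhat more real work than a single HT-off thread would. The throughput factor below is an assumed figure in line with the 20-25% HT gain quoted earlier, not a measured one:

```python
# Toy model of the CPU-seconds accounting point; figures are assumed.
# Two hyperthreads share one core's execution units for one wall-clock
# second. The OS charges each thread a full CPU second, but the core
# only retires ~1.2 "HT-off core seconds" of real work in total.
wall_seconds = 1.0
threads_per_core = 2
ht_throughput_factor = 1.2   # assumed ~20% gain over one thread, HT off

reported_cpu_seconds = wall_seconds * threads_per_core   # what the OS bills
delivered_work = wall_seconds * ht_throughput_factor     # real execution

print(f"OS-reported CPU seconds: {reported_cpu_seconds:.1f}")
print(f"Work actually delivered: {delivered_work:.1f} core-seconds")
print(f"Work per reported CPU second: "
      f"{delivered_work / reported_cpu_seconds:.2f}")
```

Under these assumptions each reported CPU second corresponds to only 0.6 HT-off core seconds of execution, which is consistent with Chris's observation that HEPSPEC06 per logical core drops when hyperthreading is turned on, and with why the accounting gets complicated.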