Hi All,
Thanks for all the info! I sort-of assumed that this was going to be the
answer in the end (i.e. that HEPSPEC isn't a good indicator of real-life
job performance), but I was still a bit surprised that the difference was
so large and, indeed, that there's such a strong dependence on CPU
frequency alone. You live and learn, I suppose :)
Thanks,
Mark
P.S. As regards the desktops, they were 540 GBP inc. VAT without monitor
and I haven't bothered to check other prices as I *have* to buy these
due to an agreement between the University and the supplier. I'm just
glad they aren't doing the same for servers...
On 07/04/14 11:09, Sam Skipsey wrote:
> On 7 April 2014 10:56, Ewan MacMahon <[log in to unmask]> wrote:
>>> -----Original Message-----
>>> From: Testbed Support for GridPP member institutes [mailto:TB-
>>> [log in to unmask]] On Behalf Of Sam Skipsey
>>>
>>> (Xeons have hyperthreading, more on-die cache, more memory bandwidth, and
>>> better performance-per-watt... and given the memory use of WLCG jobs, you
>>> might expect that those things would be important in the overall real-
>>> world performance of code, vs the pure-CPU test values in HEPSPEC).
>>>
>> Though from a funding model POV, the ideal CPU is the one with the
>> highest ratio of HS06 score to actual speed.
> Sure, but (to give the response your statement is artfully constructed
> to produce) that's because the funding model assumes that HS06 (or any
> cpu-bound test with little memory-bandwidth dependence) is a good
> enough proxy for real job performance.
> (It's clear reading between the lines of several test results that
> HS06 scores do not scale the same way that real jobs do, and this is
> an entirely expected phenomenon.)
>
> From an "actually letting experiments do more work per unit time"
> perspective, the Xeons win ;)
>
> Sam
>
>> Also, £600 for an i5 desktop seems a bit high; is that with a monitor?
>>
>> Ewan