So we would be using UK ATLAS testing for this metric and not the central ATLAS dashboard? I don't see how we will be able to weight the cpu that ATLAS records.
The main problem is that APEL accounting, which can normalise cpu using a site average, doesn't know which are production jobs and which are analysis jobs. If we took the raw cpu from ATLAS for each and normalised it using the site average HS06, then we would have a reasonable approximation. Is that the plan?
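[Editorial note: the normalisation described above can be sketched as follows. This is a minimal illustration only, assuming the BDII-published site average is expressed as HS06 per core; the site names and figures are invented.]

```python
def normalise_cpu(raw_cpu_seconds, site_avg_hs06_per_core):
    """Convert raw CPU seconds into HS06-seconds using the site-average
    HS06 figure, as suggested in the message above."""
    return raw_cpu_seconds * site_avg_hs06_per_core

# Hypothetical per-site figures, for illustration only.
site_hs06 = {"UKI-SITE-A": 9.5, "UKI-SITE-B": 11.2}   # avg HS06 per core
raw_cpu = {"UKI-SITE-A": 3600.0, "UKI-SITE-B": 7200.0}  # raw CPU seconds

total_hs06_seconds = sum(
    normalise_cpu(raw_cpu[s], site_hs06[s]) for s in raw_cpu
)
```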
The site average HEPSPEC06 is already published in the BDII though. I can't see what value you are adding. Or does the ATLAS dashboard store cpu per queuename?
John
-----Original Message-----
From: Testbed Support for GridPP member institutes [mailto:[log in to unmask]] On Behalf Of Andrew McNab
Sent: 04 April 2011 13:03
To: [log in to unmask]
Subject: Re: HEPSPEC06 numbers for GridPP metrics
On 04/04/2011 12:45, Sam Skipsey wrote:
> On 4 April 2011 12:06, John Gordon<[log in to unmask]> wrote:
>> Andrew, my understanding was that Graeme etc wanted a lookup table that they could use from within a job to find out the HS06 value from the cputype returned by the OS, not a lookup per site. The wiki lists all the cputypes used, but I suspect that it is not in a form that is visible to a job.
>
> This is roughly what I recall too - at least, the mapping the table
> represents was supposed to be cpuid -> HEPSPEC, not site -> HEPSPEC
> (which would be poorly specified for sites).
That was the initial idea, but the HEPSPEC06 figure depends on more
than the CPU model and MHz (e.g. the kernel version), so it would need
to be at least per-site, and really per-subcluster. That just gets us
back to the CE-queue mapping, which is something Steve's script can
do without having to modify what ATLAS makes available via the dashboard
jobsummary.
Cheers,
Andrew
--------------------------------------------------------------
Dr Andrew McNab, High Energy Physics, University of Manchester