Hi Pete,
That's a typo in the script; I thought I'd mentioned it on TB-Support,
but I see now that it was only communicated to Duncan Rand et al.
Lines 289 & 290 of the script should contain the HEPSPEC scores per
core for each of our two classes of worker nodes,
i.e.
hepSpec1core= 7.63
hepSpec2core= 8.08
not
hepSpec1core= 7.63/4
hepSpec2core= 8.08/8
(as was coded in the original script).
For clarity, I've attached a version of the script with this bug
fixed, though of course you will all have to change the values anyway
to reflect your own HEPSPEC scores.
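To make the arithmetic concrete, here's a minimal sketch of why the per-core score must not be divided again by the core count (the helper function is hypothetical; the real script's internals may differ):

```python
# hepSpec2core must be the per-core score itself (e.g. 8.08 for a
# 65.24-scoring 8-core box), NOT that value divided again by cores.
hepSpec2core = 8.08  # per-core HEPSPEC score, as in the fixed script

def hepspec_hours(cpu_hours, spec_per_core):
    """Hypothetical helper: convert single-core CPU hours to HEPSPEChours."""
    return cpu_hours * spec_per_core

correct = hepspec_hours(10.0, hepSpec2core)    # ~80.8 HEPSPEChours
buggy = hepspec_hours(10.0, hepSpec2core / 8)  # ~10.1, too small
```

With the buggy value, delivered work is under-reported by a factor of the core count, which is why the fix matters.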
Sorry for any confusion,
Mike.
2009/7/7 Peter Gronbech <[log in to unmask]>:
> Hi Mike,
> I'm confused. First of all, are you using HEPSPEC06 or SpecInt2k?
>
> Assuming you are using HEPSPEC06, then your value for newer CPUs from the wiki is, say, 65.24, which you divided by 8 (cores) to get 8.15 per core.
> Does the below syntax then divide it again by the number of cores?
>
> If you are trying to use the fudge factor to get SpecInt2k per core, then you would divide by 4.
>
> Can you explain please.
>
> Thanks Pete
>
>
>
> #these are the HEPSPEC values per core of your (assumed two) clusters. If you only have one cluster, set these equal
> hepSpec1core= 7.63/4
> hepSpec2core= 8.08/8
>
>
> --
> ----------------------------------------------------------------------
> Peter Gronbech Senior Systems Manager and Tel No. : 01865 273389
> SouthGrid Technical Co-ordinator Fax No. : 01865 273418
> Department of Particle Physics,
> University of Oxford,
> Keble Road, Oxford OX1 3RH, UK E-mail : [log in to unmask]
> ----------------------------------------------------------------------
>
> -----Original Message-----
> From: Testbed Support for GridPP member institutes [mailto:[log in to unmask]] On Behalf Of Mike Kenyon
> Sent: 07 July 2009 12:17
> To: [log in to unmask]
> Subject: PBS accounting script
>
> As requested during today's dteam meeting; here's the script again.
>
> Cheers,
> Mike
>
>
> ---------- Forwarded message ----------
> From: Mike Kenyon <[log in to unmask]>
> Date: 2009/6/5
> Subject: PBS accounting script
> To: Testbed Support for GridPP member institutes <[log in to unmask]>
> Cc: "Coles, J (Jeremy)" <[log in to unmask]>, Scotgrid Glasgow
> <[log in to unmask]>
>
>
> Hi All,
>
> As requested, I've knocked together a script that goes through the pbs
> logs and calculates the CPU (or wall) time delivered per group in
> HEPSPEChours, where a group in this context is a unix group, such as
> atlas or atlasprd etc. If TB-support allows attachments, then the
> file's attached here. Yes, it's hacky, but it grew out of a
> grep-awk-sed one-liner.
>
> Running the script is fairly easy (at Glasgow ;-)). For example, to get
> the CPU time delivered (in HEPSPEChours and hours) for all jobs since
> Oct 1st 2008, I run
>
> ./account.py -f 20081001 -l 20090604
>
> To get the walltime instead, I'd add the -w flag.
>
> There's a --help option available and the code's fairly well
> commented. Officially this is unsupported, so, in the words of our T2
> coordinator - "if it breaks, you get to keep both parts".
>
>
> The following assumptions are made:
>
> - Your logs live at /var/spool/pbs/server_priv/accounting/ (see --help)
>   and are named YYYYMMDD.
> - Your worker nodes are named nodeXXX (hack the script if they aren't).
> - You have up to two sub-clusters with different HEPSPEC scores. If you
>   don't have sub-clusters, and all nodes have the same HEPSPEC score,
>   set hepSpec1core and hepSpec2core to be equal in the code. If you
>   have more than two sub-clusters, you'll have to do deeper fiddling.
>
> If there are any major bugs/problems, let me know...otherwise, enjoy.
>
> Cheers,
> Mike.
>