> -----Original Message-----
> From: Testbed Support for GridPP member institutes [mailto:TB-
> [log in to unmask]] On Behalf Of Peter Grandi
> 
> > is almost certainly IO bound, not CPU bound.
> 
> And I suspect that EwanMM was asking "how much" so he can make tradeoffs
> as to power consumption/cost (I guess).
>
Sort-of, but more the other way around. I wouldn't really think about
saving power/price etc. by buying slower CPUs (maybe fewer of them, 
but not slower ones), but it's more a question of:

"I have x HS06; how fast does my SE disk and network interconnect
 need to be to keep that fed?"

or very specifically in this case:

 "Given the number of worker nodes attached to them, do my rack 
  switches each need one, two or three 10Gbit uplinks to the core?"

Clearly the answer to that question is strongly quantized, so I
can tolerate quite large rounding errors. For the record, these
figures seem to suggest an answer of about one-and-a-half, so 
two uplinks each it is.
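The back-of-the-envelope sum is simple enough to write down. A minimal
sketch, with entirely hypothetical numbers (the real HS06-per-rack and
Mbit-per-HS06 figures have to come from your own site measurements):

```python
import math

# Hypothetical inputs -- substitute your own measured values.
hs06_per_rack = 5000     # assumed total HS06 of worker nodes behind one rack switch
mbit_per_hs06 = 3.0      # assumed sustained SE traffic per HS06, in Mbit/s
uplink_mbit = 10_000     # capacity of one 10 Gbit uplink

# Aggregate demand from the rack, and the (quantized) number of uplinks.
demand_mbit = hs06_per_rack * mbit_per_hs06
uplinks_exact = demand_mbit / uplink_mbit
uplinks_needed = math.ceil(uplinks_exact)

print(f"demand {demand_mbit / 1000:.1f} Gbit/s "
      f"= {uplinks_exact:.2f} uplinks -> provision {uplinks_needed}")
```

With these made-up inputs the exact answer comes out at 1.5 uplinks,
which rounds up to two, mirroring the "one-and-a-half, so two" figure
above.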

> It would be nice to have some guidelines from other experiments too,
>
AIUI, as far as tier 2 disk goes, there's basically ATLAS and CMS
to worry about, and the latter only at the specific CMS sites.

> As to my site, we are mostly ATLAS/LHCb oriented (with 'pheno'
> filling in) and strictly "production" so I guess for that the rule is
> simply "more cores", or as my local users would probably say, "more more
> more more more more more cores" :-).

Indeed :-) And as the EFDA-JET experience shows, you can deliver an 
awful lot of useful CPU work with very little storage space.

Ewan