On 14/04/11 16:16, Ewan MacMahon wrote:
[ ... ]
>> Suggestion is 5MB/s per analysis job. What the error on
>> this value is; well........
>
> Hmm. That clearly assumes a constant performance per CPU core,

I think the question that EwanMM really wanted to ask was: "How many HS06s does it take to analyze 1MB/s of ATLAS data?", very much on average, assuming no stalls from networking and IO.

> but it's probably good enough for what I'm doing at the moment.

Not quite. I think the reasoning is the same as that behind the "at least 20MB/s simultaneous RW per TB of space" rule that CERN use for disk servers: to have a balanced setup, that is one with enough network and storage bandwidth (and cooling and power) to feed the processors.

The idea is that ATLAS don't *require* a processor to be able to process more than 5MB/s (so if you are considering a faster processor, you can save power and cooling by using a slower one), but they do expect the network and IO of a worker node to be able to feed at least 5MB/s to each processor. So a worker node with 24 processors needs to have at least 120MB/s of network bandwidth and 120MB/s of local disk bandwidth with 24 threads active.

Since those figures are pretty much the current upper bounds (without switching to 10Gb/s networking and wide SSDs/wide RAID10s per worker node), what the ATLAS guidelines are really suggesting is to go for smaller, cheaper, lower-TCO worker nodes rather than top-of-the-line ones with 24 really fast cores. Right now, 12-16 core worker nodes are probably about as big as required.
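The arithmetic above can be sketched in a few lines of Python. This is just a back-of-the-envelope helper, not anything ATLAS publish; the 5MB/s per core figure is their suggestion, and the ~125MB/s ceiling is simply what 1Gb/s Ethernet works out to:

```python
PER_CORE_MB_S = 5        # ATLAS suggestion: 5 MB/s per analysis job/core
GIGABIT_MB_S = 125       # approx. ceiling of 1 Gb/s networking in MB/s

def required_node_bandwidth(cores, per_core=PER_CORE_MB_S):
    """Minimum network AND local-disk bandwidth (MB/s) needed to keep
    every core fed when all cores run analysis jobs simultaneously."""
    return cores * per_core

# A 24-core node needs 120 MB/s of both networking and disk I/O,
# which already sits right at the 1 Gb/s networking ceiling.
for cores in (12, 16, 24):
    need = required_node_bandwidth(cores)
    fits = "fits" if need <= GIGABIT_MB_S else "exceeds"
    print(f"{cores} cores -> {need} MB/s ({fits} 1Gb/s)")
```

A 24-core node comes out at 120MB/s, essentially saturating a 1Gb/s link, which is why the guideline effectively caps sensible node sizes around 12-16 cores on gigabit networking.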