Hi Peter,
That figure refers to the whole simulation chain: generation,
digitization and reconstruction. I assume the event size is not
specified because these are all heavily CPU-bound tasks (and this is a
CPU benchmark), while analysis requires reading a lot of possibly
unused information to select the interesting events with mostly binary
cuts. For these sorts of tasks (simulation, digitization,
reconstruction) a faster CPU is better; for analysis, I agree with your
previous email: slow CPUs do not affect job performance much, and disk
is more important.
cheers
alessandra
On 14/04/2011 17:33, Peter Grandi wrote:
> On 14/04/11 17:03, Peter Grandi wrote:
>> On 14/04/11 16:16, Ewan MacMahon wrote:
>>
>> [ ... ]
>>
>>>> Suggestion is 5MB/s per analysis job. What the error on
>>>> this value is; well........
>>> Hmm. That clearly assumes a constant performance per CPU core,
>> I think that the question that EwanMM wanted to ask was really:
>>
>> "How many HS06s does it take to analyze 1MB/s of ATLAS data?"
> I was curious so I did a web search and this came up:
>
> http://iopscience.iop.org/1742-6596/219/4/042037/pdf/1742-6596_219_4_042037.pdf
>
> Figure 3 on page 6 shows a rate of 0.01 events per 50 HS06, or 1 event per 5000 HS06.
>
> IIRC event size is described in GraemeS's presentation at GridPP26, but I have seen different numbers in this thread.
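
As a back-of-envelope check of the numbers above, here is a minimal sketch
combining the Figure 3 rate (1 event per 5000 HS06) with an *assumed* event
size; the event size is not settled in this thread, so the 1 MB/event used
below is purely illustrative, as is the helper name:

```python
# Back-of-envelope: HS06 needed to keep up with a given analysis data rate.
# events_per_s_per_hs06 comes from Figure 3 of the cited paper
# (0.01 events per 50 HS06 = 1 event per 5000 HS06); event size is an
# assumption, since different values appear in this thread.

def hs06_for_rate(mb_per_s, event_size_mb, events_per_s_per_hs06=1 / 5000):
    """HS06 required to process mb_per_s of input at the given event size."""
    events_per_s = mb_per_s / event_size_mb
    return events_per_s / events_per_s_per_hs06

# e.g. the suggested 5 MB/s per analysis job, at an assumed 1 MB/event:
print(hs06_for_rate(5, 1))  # -> 25000.0 HS06
```

Scaling is linear in both the data rate and the inverse event size, so the
answer to "how many HS06 per 1 MB/s?" depends directly on which event size
from the thread one plugs in.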