> -----Original Message-----
> From: Testbed Support for GridPP member institutes [mailto:TB-
>
> An issue which also needs to be looked into is the amount of memory
> available.
>
2G per job slot, more or less universally, I'd have thought.
> When ATLAS files were small, the entire file would be
> cached in memory and so disk IO was greatly reduced. Now that the
> input files are large and the number of jobs is increasing, this is
> no longer the case, which is why we now see local system disk IO wait.
> This is why this analysis is a calculation that needs to be done for
> each site.
>
I'm not really seeing that; there's some variation in hardware between
sites, but the vast majority of worker nodes are going to be 8 cores,
16 GB of RAM, and a single disk. Some people might have slightly faster
disks, or fewer cores, but regardless, if you try to run more than a
couple of heavily seeky processes like these on a single disk it's going
to suck, and there's really not a lot that sites can do about that.
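The per-site calculation mentioned above can be sketched as a back-of-envelope check: with the 16 GB / 8-core / ~2 GB-per-slot figures from this thread, whether input files stay in the page cache depends on how much RAM is left over after the jobs themselves. The `cache_fits` helper and the file sizes below are hypothetical illustration values, not figures from any actual site:

```python
def cache_fits(node_ram_gb, slots, job_rss_gb, input_file_gb):
    """Rough check: can every slot's input file be held in page cache?

    RAM not claimed by job processes is assumed available for caching;
    if the combined input files exceed it, jobs fall back to seeky
    disk reads and IO wait climbs.
    """
    free_for_cache = node_ram_gb - slots * job_rss_gb
    return slots * input_file_gb <= free_for_cache

# Small ATLAS files: everything caches, little disk IO.
print(cache_fits(16, 8, 1.0, 0.5))   # True
# Large input files: the cache can't hold them all.
print(cache_fits(16, 8, 1.0, 2.0))   # False
```

This is only an illustration of why the answer varies per site: the same file size that caches comfortably on a node with fewer slots or more RAM will thrash a fully loaded 8-slot node.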
Ewan