Leif Nixon writes:

> Those memory limits are a mixed blessing; our nodes don't run out of
> memory, but it's a bit of a pity that jobs are being killed when they
> exceed their asked-for memory limit of, say, 800 MB, when they happen
> to be running on a dedicated node with 2 GB RAM.

Leif, it *is* a blessing: if such a job keeps running (I have to admit that I increased this limit to 1300 MB two days ago), it gets stuck and occupies the processor not for 30 minutes but for 5 hours, doing nothing (well, perhaps filling up the disk with garbage messages), only to be killed by the requested (and correctly enforced) CPU time limit.

On the actual problem: it is a bug in the ATLAS s/w; perhaps we'll have to abort the jobs. Fixed s/w is not expected to be deployed for this stage of production.

Oxana
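For what it's worth, the enforcement being discussed amounts to roughly the following on the worker node. This is only a sketch assuming a plain POSIX shell job wrapper; the numbers (800 MB, 30 minutes) come from the thread, but the wrapper itself and the commented-out job script name are illustrative, not our actual batch configuration:

```shell
#!/bin/bash
# Run the job in a subshell with the requested limits enforced,
# so a runaway job dies early instead of burning its whole
# CPU-time allocation.
(
  ulimit -v $((800 * 1024))   # virtual memory limit: 800 MB (kB units)
  ulimit -t 1800              # CPU time limit: 30 minutes (seconds)
  # The actual job would be exec'ed here, e.g.:
  # exec ./run_atlas_job.sh   # hypothetical job script name
  ulimit -v                   # show the limits now in effect
  ulimit -t
)
```

A job exceeding the memory limit then fails on allocation (or is killed outright, depending on the system) well before the CPU-time limit fires.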