Discussion thread from a couple of weeks ago.
Summary: set up an env var on the WNs
export ATLAS_RECOVERDIR=/pool
where /pool is our local WN scratch area. I guess you can try an NFS area
if you want. Atlas needs to know when your site supports this, so reply
to this list when done.
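One way to set it is via a profile script on the WNs - just a sketch,
assuming your nodes source /etc/profile.d scripts (the filename here is
only an example):

# /etc/profile.d/atlas_recover.sh  (hypothetical filename/location)
# Point ATLAS job recovery at the local WN scratch partition.
export ATLAS_RECOVERDIR=/pool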
Cheers,
Peter
----- Forwarded message from Alessandra Forti <[log in to unmask]> -----
Hi Graeme,
I would bet turning it on and off is done on a site basis and not on a per
node basis. I was just warning you that on some nodes the expected area
might not be there, in case the software expects it. The env variable is
fine by me.
cheers
alessandra
Graeme Stewart wrote:
>Hi Alessandra
>
>Job recovery is turned on and off at a site level, not triggered on a
>per-node basis. However, you can set the recovery directory to be an
>environment variable - Peter chose $ATLAS_RECOVERDIR. You could then set
>this on a per-node basis if you wanted.
>
>I believe if the pilot cannot write into the recovery dir then the job
>fails as normal, but I need to check.
>
>Cheers
>
>Graeme
>
>On 25 Jan 2008, at 11:12, Alessandra Forti wrote:
>
>>Hi Graeme,
>>
>>In Manchester I really don't want /scratch or /tmp used for storing
>>data, because filling them is one of the most common failure causes.
>>However, on most of the nodes we have a /data1 partition originally
>>created for dcache but never used. I can create a /data1/atlas
>>directory. On 200 machines that partition is the xrootd directory, so
>>you will not have access to it and will have to check whether
>>/data1/atlas (or whatever the variable points to) exists.
>>
>>let me know
>>
>>cheers
>>alessandra
>>
>>Graeme Stewart wrote:
>>>Hi Guys
>>>I am going to try and get job recovery enabled for atlas production
>>>in the UK - targeting big sites first. So, would you folks be willing
>>>to have it enabled for your sites?
>>>Just a reminder of the issues: if the job runs correctly but somehow
>>>fails to store its outputs on your site, then it will either leave
>>>them in its working directory or shove them off into a site-defined
>>>directory. The next pilot can then search this directory and, if it
>>>finds job outputs, attempt to re-register them. This means that
>>>walltime isn't lost to short/medium outages of the site
>>>storage/catalog/infosystem. In panda the jobs enter the
>>>"holding" status and can be recovered for up to 3 days afterwards.
>>>As a site, you can clean out the holding area any time you like - it
>>>means the job fails (as it would have before).
>>>I need to clarify all the issues with the panda people, as to how to
>>>configure this, but things you should consider:
>>>1. If you put this on a shared filesystem then any
>>>other pilot can pick up lost outputs, so the "hit rate" is
>>>automatically 100%. On the other hand, you then have to be careful of
>>>nfs loading issues if a lot of jobs end up here - you probably
>>>don't want to provoke an nfs collapse!
>>>2. If you leave the outputs on a WN then the pilot has to land on
>>>that node to pick up the lost job. On the other hand, if you have a
>>>big fair share for atlas (don't we all?) then the chances of a hit in
>>>3 days are quite high. At the same time, you are leaving outputs on
>>>WN scratch, so you might run out of space and start blackholing if
>>>you are not careful.
>>>Let me know what you think,
>>>Cheers
>>>Graeme
>
----- End forwarded message -----
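For sites like Manchester where the area only exists on some nodes, a
per-node check before exporting the variable might be enough. This is
only a sketch - whether the pilot cleanly falls back when the variable is
unset is my assumption and still needs checking with the panda people
(as Graeme says above):

# Export the recovery dir only on nodes that have the partition and
# where it is writable (/data1/atlas is the path from Alessandra's mail).
if [ -d /data1/atlas ] && [ -w /data1/atlas ]; then
    export ATLAS_RECOVERDIR=/data1/atlas
fi
# Assumption: with the variable unset the pilot behaves as before and
# leaves any lost outputs in the job working directory.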
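Graeme also notes that the holding area can be cleaned out at any time
and that jobs are only recoverable for up to 3 days, so a daily cron job
along these lines would keep the space in check. Again a sketch only -
the recovery subdirectory layout is a guess, so check where the pilot
actually writes before deploying anything like this:

# Remove recovery outputs older than 3 days; panda cannot pick them up
# after that anyway. /pool/panda_recovery is a hypothetical layout.
find /pool/panda_recovery -mindepth 1 -maxdepth 1 -mtime +3 -exec rm -rf {} +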