Frederic,
Is the user supposed to make copies of all the data he wants? How
does he choose where to make them? I thought the Grid was supposed to
hide all this from the user. If not, isn't this something ATLAS should do
centrally? Surely they wouldn't want every user making his own copies in
random places.
Cheers Steve
--
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Steve Lloyd Queen Mary, University of London +
+ E-mail: [log in to unmask] Physics Department +
+ Phone: +44-(0)20-7882-5057 Mile End Road +
+ Fax: +44-(0)20-8981-9465 London E1 4NS, UK +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Frederic Brochu wrote:
> Dear Steves ;-)
>
> Well, in a way it is a good thing to see that castor(ftp,grid).cnaf.infn.it
> is not declared as a closeSE of any existing CE (including
> ce01-lcg.cr.cnaf.infn.it, which it is supposed to be related to). CNAF-T1
> is not listed anywhere either (JL error from the Gstat page).
>
> The castors have been misused throughout DC2 and the Rome production as
> fast-access data servers, while they are actually data repositories.
>
> May I suggest making a replica of this file on a "disk" SE
> (preferably one which will match your job's requirements) before
> submitting your job with "InputData" requirements?
> I know this is not the way InputData is meant to be used, but I think lots
> of Jimmy users will thank you afterwards for doing that (you included,
> as your job's execution time will be shortened significantly).
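
The replication step suggested above could be sketched like this on an LCG UI. It assumes the standard lcg-rep syntax from the LCG-2 User Guide; se.example.org is a placeholder destination SE, not a real host, and the echo makes it a dry run:

```shell
# Sketch of replicating the file to a "disk" SE before submission.
# se.example.org is a hypothetical disk SE -- pick one matching your
# job's requirements.
LFN="lfn:rome.004201.recov10.ZeeJimmy._00018.AOD.pool.root"
DEST_SE="se.example.org"

# Print the command rather than running it; drop the echo to replicate
# for real from a UI with a valid grid proxy.
echo lcg-rep -v --vo atlas -d "$DEST_SE" "$LFN"
```

lcg-rep copies an existing replica to the destination SE and registers the new copy in the catalogue under the same lfn, so the InputData attribute in the JDL needs no change afterwards.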
>
> Cheers,
> Frederic
>
> PS: the lfn says "Jimmy", not "UJimmy", and is valid as far as ATLAS is
> concerned. So...
>
> On Thu, 19 May 2005, Steve Traylen wrote:
>
>
>>On Thu, May 19, 2005 at 10:37:32AM +0100 or thereabouts, Steve Lloyd wrote:
>>
>>>Hi,
>>> I'm trying to read ATLAS AOD from my Grid job. I read the various LCG
>>>Manuals - they say to put something like this in the jdl:
>>>
>>>InputData = {"lfn:rome.004201.recov10.ZeeJimmy._00018.AOD.pool.root"};
>>>DataAccessProtocol = {"gsiftp"};
>>>
>>> The job is then rejected by the resource broker - no matching
>>>resources. If I remove these from the jdl and put this in my job script
>>>it works:
>>>lcg-cp -v --vo atlas
>>>lfn:rome.004201.recov10.ZeeJimmy._00018.AOD.pool.root
>>>file://`pwd`/Zee_18.pool.root
>>>
>>> I don't understand what the jdl commands are supposed to do and why
>>>they break it.
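
For reference, a minimal JDL sketch of what the manuals describe; the Executable and sandbox file names here are placeholders, and note that the protocol string is gsiftp:

```
Executable         = "run_job.sh";
StdOutput          = "stdout.txt";
StdError           = "stderr.txt";
OutputSandbox      = {"stdout.txt", "stderr.txt"};
InputData          = {"lfn:rome.004201.recov10.ZeeJimmy._00018.AOD.pool.root"};
DataAccessProtocol = {"gsiftp"};
```

With these two attributes set, the resource broker restricts matching to CEs whose closeSE holds a replica of the lfn, which is why a missing CE-SE binding produces "no matching resources".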
>>
>>Hi Steve,
>>
>> So the JDL is meant to land your job on a CE (queue) that is close to
>> where your data is.
>>
>> Your data is located using
>> $ lcg-lr --vo atlas lfn:rome.004201.recov10.ZeeJimmy._00018.AOD.pool.root
>>
>> which shows it is at castorftp.cnaf.infn.it.
>>
>> So the RB looks for a queue that is close to castorftp.cnaf.infn.it
>> and here lies the problem.
>>
>> Looking at a top level BDII
>>
>> $ ldapsearch -x -H ldap://lcgbdii02.gridpp.rl.ac.uk:2170 \
>> -b mds-vo-name=local,o=grid \
>> '(GlueCESEBindGroupSEUniqueID=castorgrid.cnaf.infn.it)'
>>
>> There is no binding object anywhere for this SE.
>>
>> You can also retrieve the same lack of information with:
>>
>> $ lcg-infosites --vo atlas closeSE | grep castorgrid.cnaf
>>
>>
>> I don't know, though, if this is strictly a deployment error as such;
>> it is not something that is checked by gstat and so followed up by the
>> CODs. It is possible to have an SE attached to no queue, as this one is,
>> but in this case I would say there is a problem with CNAF's deployment.
>>
>> Steve Burke,
>>
>> Do you think every SE should be bound to a queue for all VOs that
>> it supports? If so we can look at adding to gstat.
>>
>> Steve
>>
>>> Cheers Steve
>>
>>--
>>Steve Traylen
>>[log in to unmask]
>>http://www.gridpp.ac.uk/
>>