On 6 April 2011 16:27, Ewan MacMahon <[log in to unmask]> wrote:
>> -----Original Message-----
>> From: Sam Skipsey [mailto:[log in to unmask]]
>> Sent: 06 April 2011 16:22
>>
>> > What exactly does being a T2D imply, and require? Does this just mean
>> > being used in the caching mode that Graeme was describing at Sussex?
>> >
>>
>> Essentially, IIRC... but that also has concomitant effects on the amount
>> of work a site would expect (plus, it can only be good for a region to
>> have T2Ds, since they may end up caching copies of important data for the
>> cloud).
>>
>
> OK; I've just been having a look at the March GDB atlas presentation,
> and if I'm reading it correctly the T2D idea seems to be associating
> a Tier 2 with multiple Tier 1s/clouds. Based on Graeme's Sussex talk,
> the 'tier 2 as cache' thing is PD2P.
>
> So; to unpick this a bit - am I right in thinking that some sites will
> be T2D sites (and if so, what exactly are the requirements?) and all
> Tier 2 space will be used according to the PD2P 'caching' mode, and none
> by planned data placement?
Okay, so there are two ATLAS changes to the currently used model
("PD2P") for which the Sonar tests are significant indicators.
One is that T2s may be used as the primary sources for some data
(stuff in MC and GROUPDISK tokens).
The other is the "Multi-site T2" == T2D thing.
(I thought when you said caching you meant the first of those things.)
Both of these things require that the T2 in question has good "mesh
transfer" capacity - i.e., not just good networking to its T1 (which
is all you need for the old tree model), but also good networking to
(at least) the other T1s.
"Good" in this sense is, IIRC, > 10MB/s transfer speeds for large
(2GB) files in FTS.
It also doesn't hurt for your site to have a big whack of storage in
its SE, of course.
Sam
>
> Ewan
>