On 18 Aug 2011, at 19:22, Christopher J. Walker wrote:
> Another optimisation is that (at least AIUI), if you have a pair of
> squids, you can get them to try the other one first.
So long as you specify both squids in your default.local, the cvmfs client automatically fails over to the other one (and picks one at random at start-up to spread the load somewhat).
Experience at a number of sites has been that if traffic goes direct to the replicas at remote sites, then a larger number of jobs fail, typically at setup. We have yet to see significant load problems at the replicas, even with very large amounts of direct traffic.
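For concreteness, a proxy pair can be listed in /etc/cvmfs/default.local along these lines (the squid hostnames here are made up for illustration):

```shell
# /etc/cvmfs/default.local -- sketch only; squid hostnames are hypothetical.
# Proxies separated by "|" form one load-balance group: the client picks one
# at random at mount time and fails over to the other if it stops responding.
CVMFS_HTTP_PROXY="http://squid1.example.org:3128|http://squid2.example.org:3128"

# A ";" separates failover groups, so a remote site's squid can be tried
# only after both local ones have failed:
# CVMFS_HTTP_PROXY="http://squid1.example.org:3128|http://squid2.example.org:3128;http://squid.remote-site.example.org:3128"
```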
Cheers,
--Ian
>
> Chris
>
>>
>> Ian
>>
>>
>>
>>
>>
>>> Sam
>>>
>>> On 18 August 2011 15:25, Ewan MacMahon <[log in to unmask]> wrote:
>>>> Hi all,
>>>>
>>>> I've just had an interesting lesson in what happens to cvmfs
>>>> when its one and only squid server dies underneath it, and
>>>> now I'm thinking about getting some redundancy into the
>>>> system.
>>>>
>>>> cvmfs itself supports multiple squids, so it's just a matter
>>>> of having multiple squid servers available. The Tier 1 has
>>>> a couple, but for everyone else I was wondering whether it
>>>> would make sense to handle this in a similar manner to how
>>>> we deal with the Frontier squids and have cross-site failover?
>>>>
>>>> As far as I can see we could:
>>>> - use the same basic idea,
>>>> - use the exact same relationships, so everyone fails over
>>>> their cvmfs squid to the same place they fail over their
>>>> Frontier squid (which makes sense because in a lot of
>>>> cases they're the same squid),
>>>> - have everyone fail over to the Tier 1,
>>>> - something else,
>>>> - nothing at all, and just leave it to each site to run
>>>> a pair of squids.
>>>>
>>>> I think I'd favour either the second or third options, but
>>>> I'd be interested to know what everyone thinks, and indeed
>>>> how everyone else with deployed cvmfs is handling this now.
>>>>
>>>> Ewan
>>>>