Thanks for your input, Peter, Sam & Brian. Thanks also to Pete Clarke and
Robin Tasker for private comments.
At RHUL we currently have 2 separate 1Gb links - one for our Tier2 and
one for the rest of the college. We really do connect our Tier2 directly
into JANET equipment.
The new RHUL network connection can deliver up to 8 1Gb links and we
plan to make the case for another of these to take us to 2x 1Gb.
The role of LMN has recently been taken over by JANET, so we both
interact with and connect directly to them. Actually it was LMN who said
they don't do link aggregation, so we'll ask again now that it's JANET.
Brian, I will sketch out what we have in mind for you.
Do you mean problems with internal ganglia monitoring of our site, or FTS?
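
In case it helps the discussion: if the upstream does turn out to support
link aggregation (802.3ad/LACP), the host-side setup on Linux would look
roughly like the sketch below. This is only illustrative - the interface
names (eth0, eth1) and the address are assumptions, not our actual config:

```shell
# Sketch only: 802.3ad (LACP) bonding with iproute2.
# eth0/eth1 and 192.0.2.10/24 are placeholder assumptions.
ip link add bond0 type bond mode 802.3ad lacp_rate fast
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0
```

Of course this only gains us anything if the far end of both links speaks
LACP too, which is exactly the question for JANET.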
brian davies wrote:
> As a word of caution, I would say that having separately routed
> subnets for your site may cause unexpected issues. To start with,
> there have in the past been issues with storage system setups and
> with ganglia monitoring which are complicated by having multiple
> subnets and routing between them.
> Any further advice would probably depend exactly on the network
> configuration you had in mind.
> Brian
>
> On 11 January 2011 19:00, Peter Grandi <[log in to unmask]> wrote:
>> On 11/01/11 15:57, Simon George wrote:
>>
>>> now that we almost have the 1Gb/s link to RHUL set up (champagne
>>> is on ice) I am turning my thoughts to 2Gb/s. This would be
>>> provided by 2x1Gb/s links. I'm told by my network expert that
>>> Janet doesn't support link aggregation,
>> My guess is that how your institution connects to the regional network that feeds JANET is unlikely to be relevant to what you are doing. What matters to your situation is just the router (hopefully) or switch that your network uplinks to.
>>
>> BTW, 10Gb/s connections are *much* cheaper than they used to be, both at the server and at the router or switch level, and if your links are fibre (especially if singlemode/OS1) they can carry both 1Gb/s and 10Gb/s traffic equally well.
>>
>> I have liked for example Myri (or Dell) 10Gb/s cards and Dell (or Nortel or F10) 10Gb/s switches, and I would also look at newer suppliers like Arista.
>>
>> http://www.myri.com/Myri-10G/10gbe_solutions.html
>> http://www1.euro.dell.com/content/products/productdetails.aspx/nic-intel-10gb-at?c=uk&l=en
>> http://www1.euro.dell.com/content/products/productdetails.aspx/nic-broadcom-57711-standard?c=uk&l=en
>> http://www1.euro.dell.com/content/products/productdetails.aspx/switch-powerconnect-8024?c=uk&l=en
>> http://www1.euro.dell.com/content/products/productdetails.aspx/switch-powerconnect-8024f?c=uk&l=en
>> http://www1.euro.dell.com/content/products/productdetails.aspx/pwcnt_6224?c=uk
>>
>> The last one (Dell 6224) is a particularly interesting entry-level product, which can have 4x 10Gb/s ports (e.g. 2x copper and 2x fibre) plus 24 1Gb/s ones for what can be an amazing price.
>>
>>> so it seems the simplest setup will be to split our storage nodes
>>> into two subnets and assign these each to one of the links.
>> A split into two subnets may be a good idea anyhow, as various forms of bonding are often fiddly, the load on your storage nodes is probably fairly uniform anyway given many clients, and putting in two routers (or switches) may give some extra resilience.
>>
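For concreteness, the two-subnet split Peter describes might look like
this on the storage nodes - purely a sketch, with placeholder addresses,
gateways and interface names rather than anything we have deployed:

```shell
# Sketch: storage nodes split across two subnets, each subnet routed
# over its own 1Gb/s uplink. All addresses here are assumptions.

# Node in group A (traffic leaves via uplink 1's router):
ip addr add 192.0.2.10/25 dev eth0
ip route add default via 192.0.2.1

# Node in group B (traffic leaves via uplink 2's router):
#   ip addr add 192.0.2.140/25 dev eth0
#   ip route add default via 192.0.2.129
```

As Brian warns, we would then need to check that ganglia and the storage
system cope with nodes sitting on two routed subnets.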