Hi Santanu,
We have the same bonding setup as you here at RHUL and it works:
nothing special on the switch, and these module options:
options bond0 miimon=100 updelay=2000 downdelay=2000 mode=6
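
For reference, on RHEL-style boxes the same thing can also be set
per-interface rather than via module options; a sketch (file paths and
values as on our systems, adjust to taste):

    # /etc/modprobe.conf style (what we use):
    alias bond0 bonding
    options bond0 miimon=100 updelay=2000 downdelay=2000 mode=6

    # or equivalently in /etc/sysconfig/network-scripts/ifcfg-bond0:
    BONDING_OPTS="miimon=100 updelay=2000 downdelay=2000 mode=6"
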
We did once have a load of problems because one of the ports used in the
bond was faulty, but that showed up as intermittent packet loss, not what
you are seeing.
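
If you want to rule that out, the bonding driver exposes per-slave state,
and the NIC error counters are worth a look too; a sketch (interface
names are just examples):

    # per-slave link status, MII status and link failure counts:
    cat /proc/net/bonding/bond0

    # per-NIC error counters for each slave:
    ethtool -S eth0 | grep -i err
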
Simon
On 09/03/2011 14:28, Santanu Das wrote:
> On 09/03/11 14:17, Sam Skipsey wrote:
>> [ ... ]
>> We do IEEE 802.3ad link aggregation (that's mode=4) here, which seems
>> to work for everything, provided you take care to use the right
>> xmit_hash_policy.
>
> I have one machine with LACP/802.3ad link aggregation, but that was for
> a virtual machine and that's the only way VMware ESXi works. I don't see
> any difference between the two modes in terms of connecting nodes,
> data transfer, etc. Does anyone have a clue why it should be a problem
> for DPM?
>
> Cheers,
> Santanu
>
>
>> Sam
>>
>>> Cheers,
>>> Santanu
>>>
>>>
>>>
>>> On 09/03/11 13:16, Ewan MacMahon wrote:
>>>
>>> -----Original Message-----
>>> From: GRIDPP2: Deployment and support of SRM and local storage
>>> management
>>> [mailto:[log in to unmask]] On Behalf Of Santanu Das
>>>
>>> Here are the configuration files:
>>>
>>>
>>> [root@disk09 network-scripts]# cat save.ifcfg-bond0
>>> DEVICE=bond0
>>> BOOTPROTO=dhcp
>>> TYPE=bonding
>>> ONBOOT=yes
>>>
>>> You don't appear to be setting BONDING_OPTS. Also, how have you
>>> configured the switch?
>>>
>>> Ewan
>>>
>>>