[Ocfs2-users] Generic question about cluster communication (with user mode heartbeat)
Daniel Keisling
daniel.keisling at austin.ppdi.com
Mon Dec 1 10:47:58 PST 2008
We also use active-passive (mode=1) network bonding on both the
cluster interconnect (two separate switches) and the public interfaces
(also two separate switches). The setup works fine. We initially tried
an active-active configuration (mode=0), but saw terrible packet loss
when a significant amount of traffic was being passed. Also, our
switches either did not have the ability to do 802.3ad (mode=4) or we
didn't want to pay for the capability.
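
For anyone setting this up, a minimal sketch of how the mode is chosen
for the Linux bonding driver on a RHEL-style system (the interface name
and values here are illustrative, not our exact config):

/etc/modprobe.conf:

alias bond0 bonding
options bond0 mode=1 miimon=100

mode=1 is active-backup (what we run), mode=0 is balance-rr
(round-robin), and mode=4 is 802.3ad/LACP, which needs matching support
on the switch.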
Daniel
> -----Original Message-----
> From: ocfs2-users-bounces at oss.oracle.com
> [mailto:ocfs2-users-bounces at oss.oracle.com] On Behalf Of Ulf
> Zimmermann
> Sent: Wednesday, November 26, 2008 3:18 PM
> To: Petri Asikainen; Sunil Mushran
> Cc: ocfs2-users at oss.oracle.com
> Subject: Re: [Ocfs2-users] Generic question about cluster
> communication (with user mode heartbeat)
>
> > -----Original Message-----
> > From: ocfs2-users-bounces at oss.oracle.com [mailto:ocfs2-users-
> > bounces at oss.oracle.com] On Behalf Of Petri Asikainen
> > Sent: 11/26/2008 11:57
> > To: Sunil Mushran
> > Cc: ocfs2-users at oss.oracle.com
> > Subject: Re: [Ocfs2-users] Generic question about cluster
> > communication (with user mode heartbeat)
> >
> > No, I had not thought about bonding; maybe I should. ;)
> >
> > I assumed that with user mode heartbeat all communication between
> > nodes goes via the heartbeat interfaces. But that does not seem to
> > be the case.
> >
> > Another generic question: does bonding work nicely across switches,
> > or does it require some special feature to be available on the
> > switch?
>
> There are different modes of bonding. The simplest mode does not
> require support on the switch. One example of how to configure it
> (the install line below is a single line in the file):
>
> /etc/modprobe.conf:
>
> alias eth0 tg3
> alias eth1 tg3
> install bond0 /sbin/modprobe bonding -o bond0 miimon=100
> mode=active-backup primary=eth0
>
> This will create a bond0 interface that uses active-backup mode. It
> will monitor the MII link status of the active interface every 100 ms
> and switch to the backup interface in case of failure.
>
> Then you can do network interface configs like:
>
> /etc/sysconfig/network-scripts/ifcfg-eth0:
>
> DEVICE=eth0
> HWADDR=00:0B:CD:F0:76:25
> ONBOOT=yes
> TYPE=Ethernet
> BOOTPROTO=none
> MASTER=bond0
> SLAVE=yes
> MTU=9000
> USERCTL=no
>
> /etc/sysconfig/network-scripts/ifcfg-eth1:
>
> DEVICE=eth1
> HWADDR=00:0B:CD:F0:76:26
> ONBOOT=yes
> TYPE=Ethernet
> BOOTPROTO=none
> MASTER=bond0
> SLAVE=yes
> MTU=9000
> USERCTL=no
>
> /etc/sysconfig/network-scripts/ifcfg-bond0:
>
> DEVICE=bond0
> IPADDR=192.168.201.1
> NETMASK=255.255.255.0
> ONBOOT=yes
> BOOTPROTO=none
> USERCTL=no
> MTU=9000
>
> The above config is for Red Hat EL4, and the interface is set to
> jumbo frames (MTU 9000) because we also use this bonded interface for
> Oracle 10g inter-cluster traffic. The way to configure it on other
> distributions may differ.
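>
> Once the bond is up, one way to check which slave is currently active
> is the bonding driver's status file (this path is standard for the
> Linux bonding driver):
>
> cat /proc/net/bonding/bond0
>
> It reports the bonding mode, the currently active slave, and the MII
> status and link failure count of each slave.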
>
> >
> > Regards,
> >
> > Petri
> >
> >
> >
> >
> >
> > Sunil Mushran wrote:
> > > When eth0 went down, ocfs2 (specifically o2net/o2dlm) was unable
> > > to talk to the other node. So the result is expected.
> > >
> > > Have you looked into bonding the two and using that bonded
> > > interface for both heartbeat and ocfs2?
> > >
> > > Sunil
> > >
> > > Petri Asikainen wrote:
> > >> I'm using ocfs2 on SLES 10 (SP2) with user mode heartbeat on a
> > >> two-node cluster.
> > >>
> > >> ocfs2-tools-devel-1.4.0-0.5
> > >> ocfs2-tools-1.4.0-0.5
> > >> ocfs2console-1.4.0-0.5
> > >> heartbeat-2.1.4-0.4
> > >>
> > >> Heartbeat clustering is configured to use two separate network
> > >> interfaces, eth0 and eth1. The iSCSI SAN is accessed via eth2
> > >> and eth3.
> > >>
> > >> In /etc/ocfs2/cluster.conf I have configured the ocfs2 cluster
> > >> nodes with addresses from interface eth0. [A sketch of the file
> > >> format follows the quoted thread.]
> > >>
> > >> What happens if interface eth0 (or the switch behind it) goes
> > >> down while the cluster is online? Do the nodes still see each
> > >> other and the heartbeat via eth1?
> > >>
> > >> The reason I'm asking is that during offline hardware
> > >> maintenance the network cable (eth) of the second node was
> > >> disconnected, and when I was trying to start the cluster I could
> > >> not get ocfs2 up before I reconnected that cable.
> > >>
> > >> User mode heartbeat and the other cluster services were working
> > >> fine at the same time over the degraded network connections.
> > >>
> > >> Regards
> > >>
> > >> Petri Asikainen
> > >>
> > >
> >
> >
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>
>
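
For reference, a minimal sketch of the /etc/ocfs2/cluster.conf format
Petri refers to above (node names, addresses, and cluster name are
illustrative; 7777 is the default o2net port). o2net only uses the
single ip_address listed per node, which is why the eth0 address
matters:

cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.1
        number = 0
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.2
        number = 1
        name = node2
        cluster = ocfs2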