[Ocfs2-users] Generic question about cluster communication (with user mode heartbeat)

Ulf Zimmermann ulf at openlane.com
Wed Nov 26 13:18:00 PST 2008


> -----Original Message-----
> From: ocfs2-users-bounces at oss.oracle.com [mailto:ocfs2-users-
> bounces at oss.oracle.com] On Behalf Of Petri Asikainen
> Sent: 11/26/2008 11:57
> To: Sunil Mushran
> Cc: ocfs2-users at oss.oracle.com
> Subject: Re: [Ocfs2-users] Generic question about cluster
> communication (with user mode heartbeat)
> 
> No, I had not thought about bonding; maybe I should. ;)
> 
> I assumed that with user mode heartbeat all communication between
> nodes goes via heartbeat. But that does not seem to be the case.
> 
> Another generic question: does bonding work nicely between switches?
> Or does it require some special feature to be available on the switch?

There are different modes of bonding. The simplest mode does not
require support on the switch. One way to configure it is as follows:

/etc/modprobe.conf:

alias eth0 tg3
alias eth1 tg3
install bond0 /sbin/modprobe bonding -o bond0 miimon=100 mode=active-backup primary=eth0

This creates a bond0 interface in active-backup mode. It monitors the
MII link status of the slave interfaces every 100 ms and fails over to
the backup interface if the active one goes down.
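Once the bonding driver is loaded, the kernel exposes the bond's state
under /proc, which is a quick way to verify the mode and the failover
behavior described above (field names shown in the comments are the
driver's usual output; exact wording can vary by kernel version):

```shell
# Inspect the live state of the bond0 interface.
cat /proc/net/bonding/bond0
# Look for lines such as:
#   Bonding Mode: fault-tolerance (active-backup)
#   Currently Active Slave: eth0
#   MII Status: up          (once per slave)
# Pulling the cable on eth0 should flip "Currently Active Slave" to eth1.
```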

Then you can do network interface configs like:

/etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
HWADDR=00:0B:CD:F0:76:25
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
MTU=9000
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth1:

DEVICE=eth1
HWADDR=00:0B:CD:F0:76:26
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
MTU=9000
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-bond0:

DEVICE=bond0
IPADDR=192.168.201.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MTU=9000

The above config is for Red Hat EL4, and the interface is set to use
Jumbo Frames (MTU 9000) because we also use this bonded interface for
Oracle 10g cluster interconnect traffic. The way to configure this on
other distributions may differ.
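On an EL4-style system the ifcfg-* files above take effect after a
network restart; a short sanity check might look like this (standard
commands for that era, shown here only as a sketch):

```shell
service network restart        # re-read the ifcfg-* files and bring up bond0
ifconfig bond0                 # bond0 should carry the IP and show MTU 9000
cat /proc/net/bonding/bond0    # both slaves should be listed with MII Status: up
```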

> 
> Regards,
> 
> Petri
> 
> 
> 
> 
> 
> Sunil Mushran wrote:
> > When eth0 went down, ocfs2 (specifically o2net/o2dlm) was unable to
> > talk to the other node. So the result is expected.
> >
> > Have you looked into bonding the two and using that bonded interface
> > for both heartbeat and ocfs2?
> >
> > Sunil
> >
> > Petri Asikainen wrote:
> >> I'm using ocfs2 on SLES 10 (SP2) with user mode heartbeat on a two
> >> node cluster.
> >>
> >> ocfs2-tools-devel-1.4.0-0.5
> >> ocfs2-tools-1.4.0-0.5
> >> ocfs2console-1.4.0-0.5
> >> heartbeat-2.1.4-0.4
> >>
> >> Heartbeat clustering is configured to use two separate network
> >> interfaces, eth0 and eth1. The iSCSI SAN is accessed via eth2 and eth3.
> >>
> >> In /etc/ocfs2/cluster.conf I have configured the ocfs2 cluster
> >> nodes with addresses from interface eth0.
> >>
> >> What happens if interface eth0 (or the switch behind it) goes down
> >> while the cluster is online? Do the nodes still see each other's
> >> heartbeat via eth1?
> >>
> >> The reason I'm asking is that during offline hardware maintenance a
> >> network cable (eth) of the second node was disconnected, and when I
> >> tried to start the cluster I could not get ocfs2 up until I
> >> reconnected that cable.
> >>
> >> User mode heartbeat and the other cluster services were working
> >> fine at the same time with degraded network connections.
> >>
> >> Regards
> >>
> >> Petri Asikainen
> >>
> >
> 
> 
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users
