[Ocfs2-users] 2-node configuration ?

Luis Freitas lfreitas34 at yahoo.com
Thu Feb 28 13:32:36 PST 2008


Laurent,

  I have never used DRBD (we have a SAN here...) or heartbeat either, but I suspect you will need to configure the OCFS2 timeouts to be larger than the heartbeat2 timeouts, so that DRBD can resolve itself before OCFS2 causes the machine to fence.
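
  As a rough illustration only (the values below are placeholders, please check the defaults for your OCFS2 version), the o2cb timeouts usually live in /etc/sysconfig/o2cb or are set via "service o2cb configure", and they need to comfortably exceed the DRBD/heartbeat2 failover time:

    # /etc/sysconfig/o2cb -- example values, not a recommendation
    O2CB_ENABLED=true
    O2CB_BOOTCLUSTER=ocfs2
    # disk heartbeat: number of 2-second iterations before a node is declared dead
    O2CB_HEARTBEAT_THRESHOLD=31
    # network timeouts, in milliseconds
    O2CB_IDLE_TIMEOUT_MS=30000
    O2CB_KEEPALIVE_DELAY_MS=2000
    O2CB_RECONNECT_DELAY_MS=2000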

  Also, you will probably want to configure heartbeat2 to use a third machine (try to ping a router, for example) to decide which server to keep alive. When using Oracle, the CRS stack uses the Ethernet interface status to decide.
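
  Something along these lines in ha.cf is what I mean by a ping node (just a sketch, the node names and address are placeholders and I have not tested this myself):

    # /etc/ha.d/ha.cf -- sketch only, adapt to your setup
    node    node1 node2
    # third "ping node" (your router, for instance), used as a tiebreaker
    ping    192.168.0.254
    # pingd publishes the connectivity score as an attribute the CRM can use
    respawn root /usr/lib/heartbeat/pingd -m 100 -d 5s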

Regards,
Luis

Laurent Neiger <Laurent.Neiger at grenoble.cnrs.fr> wrote:

Hi all,

We're building a 2-node cluster that we'd like to run in active/active
mode. Since drbd 8.x this is possible, so we hope to end up with a
cluster where the two nodes share the load, rather than only one being
active while the other sits in standby mode waiting for the primary's
failure...

In order to run drbd in active/active mode, we need a cluster
filesystem with a DLM. Our choice went to OCFS2 for its numerous
features.

But we encountered a problem: when we cut off the network link of,
say, node2 to simulate a crash, we managed to make node1 fence node2
via drbd in order to avoid a split-brain situation, but node1 then
self-fenced, apparently because of ocfs2.

After some research, we understood this seems to be "normal"
behaviour: in a 2-node cluster, when communication (and thus the ocfs2
heartbeat) is lost, the remaining node has no way to know whether its
peer is down or it is itself cut off from the cluster, so it
self-fences.

Our first idea was therefore to add a third node, so that the two
remaining nodes can still communicate and no ocfs2 self-fencing is
triggered.

But the ocfs2 heartbeat, as explained in the FAQ, writes to the
heartbeat system file, which has to be shared. If we set up a third
node with a small ocfs2 partition and o2cb, it does not appear in the
cluster, even when declared as the third node in
/etc/ocfs2/cluster.conf (on each node), because the ocfs2 partition on
the third node is not shared, so its ocfs2 heartbeat is not shared
either.
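
To be clear, the cluster.conf on all three machines has this shape
(node names and addresses are simplified placeholders here):

    node:
            ip_port = 7777
            ip_address = 192.168.0.1
            number = 0
            name = maq1
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.0.2
            number = 1
            name = maq2
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.0.3
            number = 2
            name = maq3
            cluster = ocfs2

    cluster:
            node_count = 3
            name = ocfs2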

If we run ocfs2_hb_ctl -I -d /dev/drbd0 on node0 and node1, we get
back the same heartbeat reference, but a different one on node2 (the
third node).
And in /var/log/kern.log on node 0, we have
...
Feb 28 11:46:42 maq1 kernel: ocfs2_dlm: Node 1 joins domain FB305B8298D94DCA9F9BF75D0AA09B8D
Feb 28 11:46:42 maq1 kernel: ocfs2_dlm: Nodes in domain ("FB305B8298D94DCA9F9BF75D0AA09B8D"): 0 1

But nothing about node2...

And we cannot share a common partition between the three nodes, as drbd only works with 2 peers...

Would anyone have any hint about how we could solve this issue?

Is there a way to make a 2-node ocfs2 cluster work, or must we have at
least 3 nodes? And if 3 nodes are required, how do we make that work
with DRBD? Or, in a 2-node configuration, can we disable self-fencing
(and is that even desirable)?

Many thanks in advance for your help,

Best regards,

Laurent.

