[Ocfs-users] cluster with 2 nodes - heartbeat problem fencing
Sunil Mushran
Sunil.Mushran at oracle.com
Wed Mar 5 09:55:44 PST 2008
http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html#QUORUM
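In short: in a two-node cluster there is no majority when the link drops, so the quorum tie-break favors the lowest-numbered node. Node 0 (virtual1) survives and node 1 (virtual2) fences itself, which matches the behavior described below. Fencing cannot be disabled safely, but the disk heartbeat can be made more tolerant of short outages. A sketch, assuming the Debian-style /etc/default/o2cb settings file (the path and exact variable names may differ on your install):

```shell
# /etc/default/o2cb -- o2cb init script settings (Debian path; an assumption)
O2CB_ENABLED=true
O2CB_BOOTCLUSTER=ocfs2
# Number of 2-second heartbeat iterations a node may miss before it is
# declared dead; 31 iterations is roughly 60 seconds.
O2CB_HEARTBEAT_THRESHOLD=31
```

After editing, restart the cluster stack (e.g. `/etc/init.d/o2cb restart`) on every node; the threshold must be identical cluster-wide.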
g.digiambelardini at fabaris.it wrote:
> Hi,
> now the problem is different,
> this is My cluster.conf:
>
> ----------------------------------------------------------
> node:
> ip_port = 7777
> ip_address = 1.1.1.1
> number = 0
> name = virtual1
> cluster = ocfs2
>
> node:
> ip_port = 7777
> ip_address = 1.1.1.2
> number = 1
> name = virtual2
> cluster = ocfs2
>
> cluster:
> node_count = 2
> name = ocfs2
> -----------------------------------------------------
> now it seems that one node of the cluster is a master, or rather that
> virtual1 is the master: when we shut down the heartbeat interface ( eth0,
> with the partition mounted ) on virtual1, virtual2 goes into kernel panic.
> If instead we shut down eth0 on virtual2, virtual1 keeps working fine.
> Can somebody help us?
> Obviously, if we reboot either server, the partition is unmounted before
> the network goes down and everything works fine.
> THANKS
>
>
>
>
> -----ocfs-users-bounces at oss.oracle.com wrote: -----
>
> To: ocfs-users at oss.oracle.com
> From: g.digiambelardini at fabaris.it
> Sent by: ocfs-users-bounces at oss.oracle.com
> Date: 05/03/2008 13.51
> Subject: [Ocfs-users] cluster with 2 nodes - heartbeat problem fencing
>
>
>
> Hi to all, this is my first time on this mailing list.
> I have a problem with OCFS2 on Debian Etch 4.0.
> When a node goes down or freezes without unmounting the OCFS2 partition,
> I would like the heartbeat not to fence the server that is still working
> ( kernel panic ).
> I would like to disable either the heartbeat or the fencing, so that we
> can keep working with only one node.
> Thanks
>
>
> _______________________________________________
> Ocfs-users mailing list
> Ocfs-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs-users
>