[Ocfs2-users] about 2 node environment and Metalink note 394827.1

Sunil Mushran Sunil.Mushran at oracle.com
Thu Nov 9 11:05:51 PST 2006


I would imagine you are using RHEL4. If so, upgrade ocfs2-tools
to 1.2.2. The previous version of the ocfs2 init script did not always
unmount ocfs2 volumes on clean shutdowns, leading to this problem.
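A minimal sketch of how one might check for the affected package, assuming an RPM-based RHEL4 system (the `version_lt` helper and the exact package name queried are illustrative, not part of the original advice; `sort -V` may not exist on older coreutils, in which case compare versions by hand):

```shell
#!/bin/sh
# Sketch: warn if the installed ocfs2-tools predates the 1.2.2 release
# that fixes the shutdown-unmount problem described above.

# version_lt A B -- true (exit 0) if version A sorts strictly before B.
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

installed="$(rpm -q --qf '%{VERSION}' ocfs2-tools 2>/dev/null)"
if version_lt "${installed:-0}" 1.2.2; then
    echo "ocfs2-tools ${installed:-not installed}: upgrade to 1.2.2 or later"
fi
```

Until the upgrade, an interim workaround consistent with the reply would be to unmount the OCFS2 volumes and stop the cluster stack manually before a planned reboot (e.g. `umount -a -t ocfs2` followed by `/etc/init.d/o2cb stop`), so the shutdown is seen as clean by the surviving node.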

dballester.david at gmail.com wrote:
> Hi to all:
>
> In a 2 node environment I've 'suffered' the 'reboot of the 1st node hangs
> the 2nd one', as described in Metalink note 394827.1.
>
> This note says that this occurs when the interconnect fails. I
> understand, then, that if the interconnect fails the idea is that node 1
> stays up and running and node 2 'kills' itself to avoid split-brain.
>
> When the 1st node reboots (a planned reboot), does ocfs2 think that the
> interconnect has failed? If so, the cluster is condemned to die, because
> node 1 is rebooting and node 2 kills itself, isn't it?
>
> In a well-known 2 node environment, isn't there some type of message
> like '2nd node, I'm rebooting, don't panic and stay tuned'? :)
>
> Any tip to avoid this behaviour?
>
> I think that one way (not optimal in any way) could be adding another
> node, used only for ocfs2, to help the second node believe it still
> holds a majority of the node group when the 1st node reboots...
>
>
> Regards and TIA
>
> D. 
>
>
>
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>   


