[Ocfs2-devel] Re: dlm domain_map is not consistent after a node was fenced

Shichangkuo shi.changkuo at h3c.com
Tue Jan 12 22:07:41 PST 2016


Hi, Joseph, Junxiao and Jiufei,
	Thanks for your reply.
	I am sorry, my mistake; after checking the syslog on Node2 again, the disk was indeed unavailable on Node2 and Node3 while Node1 was mounting.
    The slotmap also changed, as Node1 did "recovery" on Node2 and Node3:
root at cvknode1:~# debugfs.ocfs2 -R slotmap /dev/sdd
	Slot#   Node#
	    1       2

When the storage link recovered and another LUN was mounted on Node1, the TCP connections to Node2 and Node3 were established, but the dlm map of LUN1 was still inconsistent.
Then the cluster enters a split-brain state. Can this issue fix itself automatically once the heartbeat becomes active again?
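
If it helps to compare, the dlm domain map that each node currently holds should be visible through the o2dlm debugfs state file for the domain from the log above, assuming this kernel exposes it (the path and field names are from memory and may differ between kernel versions):

root at cvknode1:~# grep -i map /sys/kernel/debug/o2dlm/0400EAB0791F4B4C85F3FCB5AAC76A1D/dlm_state

Running the same command on Node2 and Node3 would show whether the three nodes really disagree about the membership of that domain.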

-----------------
From: xuejiufei [mailto:xuejiufei at huawei.com] 
Sent: January 13, 2016 10:58
To: shichangkuo 09727 (CCPL); ocfs2-devel at oss.oracle.com
Subject: Re: [Ocfs2-devel] dlm domain_map is not consistent after a node was fenced

Hi Changkuo,

On 2016/1/11 19:52, Shichangkuo wrote:
> Hi,
> 
> I have three nodes in one cluster. Node1 lost its TCP connections to Node2 and Node3, and was then fenced by restarting.
> When LUNA was re-mounted on Node1, I found a strange issue: Node1 thought it was the only node mounted on LUNA. The kernel log is as follows:
> Jan  4 16:46:14 cvk12 kernel: [   99.223321] o2dlm: Joining domain 0400EAB0791F4B4C85F3FCB5AAC76A1D ( 5 ) 1 nodes
> 
> I don't think Node1 made a mistake when reading the heartbeat on the other slots.
Maybe Node2 and Node3 were busy writing user data and could not write their heartbeat to disk when Node1 remounted.
So Node1 thought Node2 and Node3 were dead, and the dlm domain_map on Node1 contained only itself.
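
Roughly, the idea behind the disk heartbeat liveness check is sketched below. This is only a simplified illustration, not the real o2hb code; the names are stand-ins, and dead_threshold stands for the configurable o2cb heartbeat dead threshold.

	/*
	 * Simplified sketch of the disk heartbeat liveness idea, not the
	 * actual o2hb implementation.  Every node bumps a sequence number
	 * in its own heartbeat slot each interval; a peer whose slot does
	 * not change for dead_threshold consecutive reads is treated as dead.
	 */
	struct hb_slot_state {
		unsigned long long last_seq;  /* sequence seen on the last read */
		unsigned int unchanged;       /* consecutive reads with no change */
	};

	static int peer_looks_dead(struct hb_slot_state *s,
				   unsigned long long seq_on_disk,
				   unsigned int dead_threshold)
	{
		if (seq_on_disk != s->last_seq) {
			s->last_seq = seq_on_disk;
			s->unchanged = 0;
			return 0;	/* the peer is still heartbeating */
		}
		return ++s->unchanged >= dead_threshold;
	}

So if Node2 and Node3 could not get their heartbeat writes to disk for long enough, Node1 would treat them as dead at mount time.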

> The most likely reason may be that the other nodes responded JOIN_OK_NO_MAP after receiving dlm_query_join messages from Node1.
> There are two ways for dlm_query_join_handler to respond JOIN_OK_NO_MAP:
>     1) dlm->dlm_state == DLM_CTXT_LEAVING
>     2) dlm->dlm_state == DLM_CTXT_NEW &&
>                     dlm->joining_node == DLM_LOCK_RES_OWNER_UNKNOWN 
> But neither of them applies, as the node had already mounted.
> I have no idea about it. If anyone else has encountered this issue or is familiar with it, please let me know.
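
For reference, the two cases above correspond roughly to the sketch below. It is only a simplified illustration of the decision, not the real dlm_query_join_handler() in fs/ocfs2/dlm/dlmdomain.c; the definitions are stand-ins so the fragment is self-contained.

	/* Stand-in definitions; the real ones live in fs/ocfs2/dlm/. */
	enum join_response { JOIN_OK, JOIN_OK_NO_MAP };
	enum domain_state { DLM_CTXT_NEW, DLM_CTXT_JOINED, DLM_CTXT_LEAVING };
	#define DLM_LOCK_RES_OWNER_UNKNOWN 255

	struct dlm_ctxt_sketch {
		enum domain_state dlm_state;
		unsigned char joining_node;
	};

	static enum join_response query_join_response(struct dlm_ctxt_sketch *dlm)
	{
		/* case 1: the domain is being torn down on this node */
		if (dlm->dlm_state == DLM_CTXT_LEAVING)
			return JOIN_OK_NO_MAP;

		/* case 2: the domain context is brand new and no node is
		 * recorded as being in the middle of joining */
		if (dlm->dlm_state == DLM_CTXT_NEW &&
		    dlm->joining_node == DLM_LOCK_RES_OWNER_UNKNOWN)
			return JOIN_OK_NO_MAP;

		/* otherwise the normal join path applies (not shown here) */
		return JOIN_OK;
	}

Neither branch should be reachable on a node where the domain is already fully joined, which is why the reported symptom is puzzling.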
> 
> Thanks,
> Changkuo.
> 


