[Ocfs2-users] another node is heartbeating in our slot!

Patrick Donker padonker at technoware.nl
Sun Dec 17 05:41:49 PST 2006


Hi everybody,
First of all, I am new to this list and ocfs2, so forgive my ignorance.
Anyhow, what I'm doing is this:
I'm experimenting with a shared filesystem between two Debian etch nodes
and have installed ocfs2-tools 1.2.1.
The two Debian guests run on a VMware ESX 3.0.0 server and are clones of
a default template.
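
For reference, on each node I bring the cluster stack up with the stock
o2cb init script from ocfs2-tools and then mount the volume, roughly like
this (cluster name as in the config below; the /mnt/san mount point is
just an example):

    /etc/init.d/o2cb load              # load the o2cb/ocfs2 kernel modules
    /etc/init.d/o2cb online san        # bring the "san" cluster online
    /etc/init.d/o2cb status            # sanity check
    mount -t ocfs2 /dev/sdb /mnt/san   # example mount point
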
This is my cluster.conf:
cluster:
        node_count = 2
        name = san

node:
        ip_port = 7777
        ip_address = 192.168.100.2
        number = 0
        name = mail
        cluster = san

node:
        ip_port = 7777
        ip_address = 192.168.100.5
        number = 1
        name = san
        cluster = san
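
As far as I understand, o2cb picks the local node entry by matching the
node name in cluster.conf against the machine's hostname, so since both
VMs are clones of the same template, one thing I'm double-checking is that
each box really reports its own name:

    # run on each node; the output must match that node's "name =" line above
    uname -n
    hostname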

If I start and mount the fs on one of the nodes, everything goes fine.
However, as soon as I mount the fs on the other node, I get a kernel
panic with this message:

Dec 17 13:06:01 san kernel: (2797,0):o2hb_do_disk_heartbeat:854 ERROR: 
Device "sdb": another node is heartbeating in our slot!

Running mounted.ocfs2 -d on both nodes tells me this:

/dev/sdb              ocfs2  6616a964-f474-4c5e-94b9-3a20343a7178 
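
If it helps, I can also run the full detect mode, which as I read the man
page should list the node(s) that actually have the volume mounted:

    mounted.ocfs2 -f /dev/sdb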

And fsck.ocfs2 -n /dev/sdb reports:

 Checking OCFS2 filesystem in /dev/sdb:
  label:              <NONE>
  uuid:               66 16 a9 64 f4 74 4c 5e 94 b9 3a 20 34 3a 71 78
  number of blocks:   26214400
  bytes per block:    4096
  number of clusters: 3276800
  bytes per cluster:  32768
  max slots:          16

Somehow both nodes seem to be heartbeating in the same slot. I'm not sure
what causes this or how to change it. Please help me debug this problem,
because I'm stuck.
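
In case it's useful, my next step is to compare what both boxes see under
configfs once the cluster is online (assuming the init script mounts
configfs in the usual place), where "local" marks the entry each machine
thinks is itself and "num" is the node number that, as far as I can tell,
also decides the heartbeat slot:

    ls /sys/kernel/config/cluster/san/node/
    cat /sys/kernel/config/cluster/san/node/*/num
    cat /sys/kernel/config/cluster/san/node/*/local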

Thanks
Patrick



