[Ocfs2-users] heartbeat and slot issues.

Ulf Zimmermann ulf at openlane.com
Wed Nov 24 15:33:47 PST 2010


Then my guess is that you haven't actually cloned the volume; both hosts are pointing at the same volume.
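
One quick way to check (the scsi_id path and flags differ between udev versions, so treat this as a sketch) is to compare the SCSI WWID of the device on both hosts. If both hosts print the same ID, they are talking to the same LUN, not a clone of it:

    # Run on each host; identical output means the same underlying LUN.
    # On older udev this is /sbin/scsi_id -g -u -s /block/sdb instead.
    /lib/udev/scsi_id --whitelisted --device=/dev/sdb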


From: brad hancock [mailto:braddhancock at gmail.com]
Sent: Wednesday, November 24, 2010 1:54 PM
To: Ulf Zimmermann
Cc: ocfs2-users at oss.oracle.com
Subject: Re: [Ocfs2-users] heartbeat and slot issues.

Thanks for the response.
Is it normal that when I change the UUID on one node, the other node shows the same new UUID?


node1:
tunefs.ocfs2 -q -Q "BS=%5B\nUUID=%U\n" /dev/sdb1
BS= 4096
UUID=ea0778bd-bdaa-44af-8fbf-cb4a5d85e79f


node2:
tunefs.ocfs2 -q -Q "BS=%5B\nUUID=%U\n" /dev/sdb1
BS= 4096
UUID=ea0778bd-bdaa-44af-8fbf-cb4a5d85e79f



On Wed, Nov 24, 2010 at 3:00 PM, Ulf Zimmermann <ulf at openlane.com> wrote:
After the clone, you probably want to run tunefs.ocfs2 -U to reset the UUID. This is one of the steps we do when cloning volumes for database refreshes.
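Roughly like this (a sketch; /ocfs2 is an example mount point, and the volume must be unmounted on every node before resetting the UUID):

    # On every node that has the volume mounted:
    umount /ocfs2
    # Then, on one node only, write a fresh UUID to the cloned volume:
    tunefs.ocfs2 -U /dev/sdb1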


From: ocfs2-users-bounces at oss.oracle.com [mailto:ocfs2-users-bounces at oss.oracle.com] On Behalf Of brad hancock
Sent: Wednesday, November 24, 2010 12:35 PM
To: ocfs2-users at oss.oracle.com
Subject: [Ocfs2-users] heartbeat and slot issues.

I set up a host with an OCFS2 partition on a SAN and then cloned that host to another and renamed it. Both machines mount their OCFS2 partitions but log the following errors.


Host that was cloned:
(1888,0):o2hb_do_disk_heartbeat:762 ERROR: Device "sdb1": another node is heartbeating in our slot!
[345413.242260] sd 1:0:0:0: reservation conflict
[345413.242270] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_OK,SUGGEST_OK
[345413.242274] end_request: I/O error, dev sdb, sector 1735
[345413.242536] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[345413.242788] (1888,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
[345413.243159] sd 1:0:0:0: reservation conflict
[345413.243163] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_OK,SUGGEST_OK
[345413.243166] end_request: I/O error, dev sdb, sector 1735
[345413.243401] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[345413.243639] (1888,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
[448460.370132] sd 1:0:0:0: reservation conflict
[448460.370145] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_OK,SUGGEST_OK
[448460.370149] end_request: I/O error, dev sdb, sector 1735
[448460.370395] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[448460.370638] (1888,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5


Clone:

 sd 1:0:0:0: reservation conflict
[17643.588011] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_OK,SUGGEST_OK
[17643.588011] end_request: I/O error, dev sdb, sector 1735
[17643.588011] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[17643.588011] (1859,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
[17643.588011] sd 1:0:0:0: reservation conflict

This didn't seem to be a problem at first, but I'm noticing the hosts are no longer seeing the same data. I unmounted and remounted the drives, and they were in sync again.


Thanks for any guidance,


cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 10.x.x.248
        number = 0
        name = smes01
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.x.x.249
        number = 1
        name = smes02
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2

The cluster.conf is the same on both hosts.
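
In case it helps, this is what I was going to check next (assuming mounted.ocfs2 from ocfs2-tools, which as I understand it can list detected devices and the nodes using them):

    # List detected OCFS2 devices with their UUID and label:
    mounted.ocfs2 -d
    # List the nodes that currently have each device mounted:
    mounted.ocfs2 -f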

