[Ocfs2-users] OCFS2 works like standalone

Ulf Zimmermann ulf at openlane.com
Wed May 5 19:14:43 PDT 2010


OCFS2 needs shared storage; your /dev/sda sounds like local storage, not shared.
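One quick way to confirm this is to compare the OCFS2 volume UUID seen on each node (on real hardware, "mounted.ocfs2 -d" prints it). The sketch below is purely illustrative: the rac1 UUID is copied from the dmesg output later in this thread, and the rac0 value is a made-up placeholder.

```shell
# rac1's UUID, taken from the dmesg line in this thread:
uuid_rac1="6BC17BABF90444138BFD125263D82586"
# Placeholder for rac0's UUID -- substitute the real "mounted.ocfs2 -d" output:
uuid_rac0="0123456789ABCDEF0123456789ABCDEF"

# Matching UUIDs mean both nodes mount the same volume; differing UUIDs
# mean each node has its own local disk, so nothing will ever be shared.
if [ "$uuid_rac1" = "$uuid_rac0" ]; then
    echo "same UUID: both nodes see one shared device"
else
    echo "different UUIDs: each node has its own local disk"
fi
```

If the UUIDs differ, no amount of network configuration will help; the fix is to present one LUN (SAN, iSCSI, shared virtual disk, etc.) to both nodes.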


From: ocfs2-users-bounces at oss.oracle.com [mailto:ocfs2-users-bounces at oss.oracle.com] On Behalf Of vm at ghs.l.google.com
Sent: Thursday, March 18, 2010 11:16 AM
To: ocfs2-users at oss.oracle.com
Subject: [Ocfs2-users] OCFS2 works like standalone

I have installed OCFS2 on two SuSE 10 nodes.
At first sight everything seems to work fine.
But:
the ocfs2 filesystem on /dev/sda on rac1 is not being shared over the network (port 7777) with rac0.

On both nodes I have 500 MB /dev/sda disks that are mounted (and are ocfs2), but they do not share their contents (files and folders) with each other. When I create a file on one node, I expect it to appear on the other node, but it never does. How do I make OCFS2 share the same disk between both nodes? ("mounted.ocfs2 -f" shows only one node handling the disk.)



1.      The two nodes are connected via 1 Gb interconnect cards.

2.      netstat on both nodes shows they are listening on port 7777.

3.      I can telnet from one node to the other on port 7777 (the connection is established and then closed with the ^ character, so that works).

4.      Both nodes are configured correctly; see below (the output shown is from rac1; rac0 gives analogous results).

5.      ocfs2console -> Configure Nodes shows these two nodes, Propagate was performed, and the device is mounted at its mountpoint.

6.      On both nodes the 500 MB /dev/sda disks are mounted and are ocfs2, but as described above they do not share their contents; "mounted.ocfs2 -f" shows only one node handling the disk.


rac1:/var/log # modinfo ocfs2
filename:       /lib/modules/2.6.16.21-0.8-default/kernel/fs/ocfs2/ocfs2.ko
author:         Oracle
license:        GPL
description:    OCFS2 1.2.1-SLES Tue Apr 25 14:46:36 PDT 2006 (build sles)
version:        1.2.1-SLES
vermagic:       2.6.16.21-0.8-default 586 REGPARM gcc-4.1
supported:      yes
depends:        ocfs2_nodemanager,ocfs2_dlm,jbd,configfs
srcversion:     B45E2E0A0B86D1E2295CD6B
rac1:/var/log #


rac1:/var/log # vi /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.56.121
        number = 0
        name = rac1
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 192.168.56.101
        number = 1
        name = rac0
        cluster = ocfs2
cluster:
        node_count = 2
        name = ocfs2


rac1:~ # netstat -anlp | grep 7777
tcp        0      0 0.0.0.0:7777            0.0.0.0:*               LISTEN      -
rac1:~ #


rac1:~ # /etc/rc.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking cluster ocfs2: Online
Checking heartbeat: Active
rac1:~ #


rac1:~ # /etc/rc.d/ocfs2 status
Active OCFS2 mountpoints:  /mnt/u01
rac1:~ #


rac1:~ # mounted.ocfs2 -f
Device                FS     Nodes
/dev/sda              ocfs2  rac1
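For comparison, on a genuinely shared device that both nodes have mounted, the Nodes column would list both hostnames, roughly like this (illustrative output, not from the original post):

```
Device                FS     Nodes
/dev/sda              ocfs2  rac0, rac1
```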

dmesg says:
ocfs2_dlm: Nodes in domain ("6BC17BABF90444138BFD125263D82586"): 0
kjournald starting.  Commit interval 5 seconds
ocfs2: Mounting device (8,0) on (node 0, slot 0)

SuSE Linux 10
#uname -r
2.6.16.21-0.8-default

Thanks in advance


More information about the Ocfs2-users mailing list