[Ocfs-users] Mounting Filesystem problems

David Sharples davidsharples at gmail.com
Thu Mar 10 13:16:17 CST 2005


Hi, thanks for the reply.  Here are the messages from when I tried
to mount it on the second node:

Mar 10 17:41:47 LNCSTRTMIT03 kernel: (30369) ERROR: The volume must be
mounted by node 0 before it can be used and you are node 1,
Linux/ocfsmount.c, 289
Mar 10 17:41:47 LNCSTRTMIT03 kernel: (30369) ERROR: Error mounting
volume, Linux/ocfsmain.c, 314
Mar 10 17:46:58 LNCSTRTMIT03 sshd(pam_unix)[27281]: session closed for user root
Mar 10 18:01:07 LNCSTRTMIT03 sshd(pam_unix)[31795]: session opened for
user root by (uid=0)
Mar 10 18:01:08 LNCSTRTMIT03 sshd(pam_unix)[31795]: session closed for user root
Mar 10 18:01:09 LNCSTRTMIT03 sshd(pam_unix)[31804]: session opened for
user root by (uid=0)
Mar 10 18:01:10 LNCSTRTMIT03 sshd(pam_unix)[31804]: session closed for user root
Mar 10 18:05:17 LNCSTRTMIT03 kernel: (32120) ERROR: Invalid volume
signature, Common/ocfsgenmisc.c, 820
Mar 10 18:05:17 LNCSTRTMIT03 kernel: (32120) ERROR: Device (8,0)
failed verification, Linux/ocfsmount.c, 232
Mar 10 18:05:17 LNCSTRTMIT03 kernel: (32120) ERROR: Error mounting
volume, Linux/ocfsmain.c, 314

The node part interests me: do I need a node 0 in /etc/ocfs.conf?
(The user's guide I have doesn't mention that parameter.)
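
If a node 0 does have to exist, I am guessing its /etc/ocfs.conf would simply
carry node_number = 0 alongside the same keys as the config I pasted further
down.  Something like this (only a guess -- the node name, IP address and
guid here are placeholders, not my real values):

        node_name = mit03
        node_number = 0
        ip_address = 10.85.151.10
        ip_port = 7000
        comm_voting = 1
        guid = <generated value, unique to this node>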


On Thu, 10 Mar 2005 10:32:56 -0800, Sunil Mushran
<Sunil.Mushran at oracle.com> wrote:
> When you do the mount, the actual errors are dumped in /var/log/messages.
> What are those messages?
> 
> David Sharples wrote:
> 
> >Hi,
> >
> >I have blatantly done something wrong here, but I don't know what.
> >
> >There are two nodes, and my SA presented the same SAN space to both.
> >
> >On node1 the device is /dev/sda, and on node2 the device is /dev/sdb.
> >
> >So on node2 I did this:
> >
> >mkfs.ocfs -F -b 1024 -g dba -u oracle -L OCFS1 -m /ocfs1 -p 755 /dev/sdb
> >
> >and the output was
> >
> >Cleared volume header sectors
> >Cleared node config sectors
> >Cleared publish sectors
> >Cleared vote sectors
> >Cleared bitmap sectors
> >Cleared data block
> >Wrote volume header
> >
> >
> >All returned OK with no problems.  When I try to mount that filesystem, I get:
> >
> >mount -t ocfs /dev/sdb /ocfs1
> >mount: wrong fs type, bad option, bad superblock on /dev/sdb, or too
> >many mounted filesystems
> >
> >So ok, I did fsck.ocfs
> >
> >fsck.ocfs /dev/sdb
> >
> >fsck.ocfs 1.1.2-PROD1 Fri Apr  2 13:59:23 PST 2004 (build
> >2df841d19c031db220f8cfb73439339d)
> >Checking Volume Header...
> >Volume has never been mounted on any node. Exiting
> >/dev/sdb: clean, 0 objects, 0/20478 blocks
> >
> >I tried the exact same thing on the other node and got the same
> >errors.  These partitions had already been mounted as Linux
> >filesystems before I ran mkfs.ocfs.
> >
> >ocfs.conf
> >
> >
> >        node_name = mit04
> >        node_number = 2
> >        ip_address = 10.85.151.11
> >        ip_port = 7000
> >        comm_voting = 1
> >        guid = 6EFDD701A21BAEED6A36000E7FAE352A
> >
> >ocfs-support-1.1.2-1
> >ocfs-tools-1.1.2-1
> >ocfs-2.4.9-e-smp-1.0.13-1
> >ocfs-2.4.9-e-enterprise-1.0.13-1
> >
> >
> >The ocfs module is loaded:
> >
> >lsmod | grep ocfs
> >ocfs                  306016   0
> >
> >Any ideas?
> >
> >Thanks
> >
>

