[Ocfs-users] Mounting Filesystem problems

Sunil Mushran Sunil.Mushran at oracle.com
Thu Mar 10 14:00:01 CST 2005


I just checked: stop does not unload the module. So you either need to
explicitly rmmod ocfs, or reboot.
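
For example, the full reload sequence (using the init script location from
your transcript) would be:

    /etc/init.d/ocfs stop
    rmmod ocfs
    /etc/init.d/ocfs start

The module only reads /etc/ocfs.conf during load, so the edited file takes
effect after this reload.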

David Sharples wrote:

>Hi, 
>
>Here is what I did now,
>
>on one of the nodes:
>
>vi /etc/ocfs.conf
> removed the node_number parameter value
>
>[root at LNCSTRTMIT03 etc]# more ocfs.conf
>#
># ocfs config
># Ensure this file exists in /etc
>#
>
>        node_name = lncstrtmit03
>        node_number =
>        ip_address = 10.85.151.21
>        ip_port = 7000
>        comm_voting = 1
>        guid = DF961E85CB3A49C9F7CD000E7FAF2AC4
>
>[root at LNCSTRTMIT03 etc]# cd /etc/init.d
>[root at LNCSTRTMIT03 init.d]# ./ocfs stop
>[root at LNCSTRTMIT03 init.d]# ./ocfs start
>Loading OCFS:                                              [  OK  ]
>
>[root at LNCSTRTMIT03 init.d]# mkfs.ocfs -F -b 1024 -g root -u root -L
>OCFS1 -m /ocfs1 -p 755 /dev/sda
>Cleared volume header sectors
>Cleared node config sectors
>Cleared publish sectors
>Cleared vote sectors
>Cleared bitmap sectors
>Cleared data block
>Wrote volume header
>
>[root at LNCSTRTMIT03 init.d]# mount -t ocfs /dev/sda /ocfs1
>mount: wrong fs type, bad option, bad superblock on /dev/sda,
>       or too many mounted file systems
>
>
>[root at LNCSTRTMIT03 init.d]# fsck.ocfs /dev/sda
>fsck.ocfs 1.1.2-PROD1 Fri Apr  2 13:59:23 PST 2004 (build
>2df841d19c031db220f8cfb73439339d)
>Checking Volume Header...
>Volume has never been mounted on any node. Exiting
>/dev/sda: clean, 0 objects, 0/20478 blocks
>
>
>Also tried it with the parameter completely removed and with a value of
>0; same errors.
>
>
>On Thu, 10 Mar 2005 11:33:53 -0800, Sunil Mushran
><Sunil.Mushran at oracle.com> wrote:
>
>>mkfs.ocfs does not build the root directory. That is up to the module
>>to do, and it does so at the very first mount. To prevent races, we
>>require node 0 to mount the volume first. Once that is done, the root
>>dir will be created and parallel mounts can happen.
>>
>>However, you have specified the optional node_number parameter in
>>ocfs.conf, and it is set to 2. Hence the problem.
>>
>>Either remove the node_number parameter from ocfs.conf. You will
>>need to reload the module, as that file is only read during load.
>>
>>Or mount the volume from another node which either has no
>>node_number set or has it set to 0.
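>>
>>For example (using the device names from your mails, which may differ
>>per node), the mount order would be something like:
>>
>>    # on the node that is node 0 (or has no node_number set):
>>    mount -t ocfs /dev/sda /ocfs1
>>
>>    # once the root dir exists, the other nodes can mount in parallel:
>>    mount -t ocfs /dev/sdb /ocfs1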
>>
>>David Sharples wrote:
>>
>>>Hi, thanks for the reply.  Here are the messages from the time I tried
>>>to mount it on the second node
>>>
>>>Mar 10 17:41:47 LNCSTRTMIT03 kernel: (30369) ERROR: The volume must be
>>>mounted by node 0 before it can be used and you are node 1,
>>>Linux/ocfsmount.c, 289
>>>Mar 10 17:41:47 LNCSTRTMIT03 kernel: (30369) ERROR: Error mounting
>>>volume, Linux/ocfsmain.c, 314
>>>Mar 10 17:46:58 LNCSTRTMIT03 sshd(pam_unix)[27281]: session closed for user root
>>>Mar 10 18:01:07 LNCSTRTMIT03 sshd(pam_unix)[31795]: session opened for
>>>user root by (uid=0)
>>>Mar 10 18:01:08 LNCSTRTMIT03 sshd(pam_unix)[31795]: session closed for user root
>>>Mar 10 18:01:09 LNCSTRTMIT03 sshd(pam_unix)[31804]: session opened for
>>>user root by (uid=0)
>>>Mar 10 18:01:10 LNCSTRTMIT03 sshd(pam_unix)[31804]: session closed for user root
>>>Mar 10 18:05:17 LNCSTRTMIT03 kernel: (32120) ERROR: Invalid volume
>>>signature, Common/ocfsgenmisc.c, 820
>>>Mar 10 18:05:17 LNCSTRTMIT03 kernel: (32120) ERROR: Device (8,0)
>>>failed verification, Linux/ocfsmount.c, 232
>>>Mar 10 18:05:17 LNCSTRTMIT03 kernel: (32120) ERROR: Error mounting
>>>volume, Linux/ocfsmain.c, 314
>>>
>>>The node part interests me. Do I need a node 0 in /etc/ocfs.conf?
>>>(The user's guide I have doesn't mention that parameter.)
>>>
>>>
>>>On Thu, 10 Mar 2005 10:32:56 -0800, Sunil Mushran
>>><Sunil.Mushran at oracle.com> wrote:
>>>
>>>>When you do the mount, the actual errors are dumped in /var/log/messages.
>>>>What are those messages?
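>>>>
>>>>For example, right after a failed mount attempt, something like
>>>>
>>>>    tail -n 50 /var/log/messages
>>>>
>>>>should show the ocfs kernel errors.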
>>>>
>>>>David Sharples wrote:
>>>>
>>>>>Hi,
>>>>>
>>>>>I have blatantly done something wrong here, but I don't know what.
>>>>>
>>>>>I have two nodes, and my SA presented the same SAN space to both.
>>>>>
>>>>>on node1 the device is /dev/sda and on node2 the device is /dev/sdb
>>>>>
>>>>>So on node2 I did this:
>>>>>
>>>>>mkfs.ocfs -F -b 1024 -g dba -u oracle -L OCFS1 -m /ocfs1 -p 755 /dev/sdb
>>>>>
>>>>>and the output was
>>>>>
>>>>>Cleared volume header sectors
>>>>>Cleared node config sectors
>>>>>Cleared publish sectors
>>>>>Cleared vote sectors
>>>>>Cleared bitmap sectors
>>>>>Cleared data block
>>>>>Wrote volume header
>>>>>
>>>>>
>>>>>All returned OK with no problems. When I try to mount that filesystem, I get:
>>>>>
>>>>>mount -t ocfs /dev/sdb /ocfs1
>>>>>mount: wrong fs type, bad option, bad superblock on /dev/sdb, or too
>>>>>many mounted filesystems
>>>>>
>>>>>So OK, I ran fsck.ocfs:
>>>>>
>>>>>fsck.ocfs /dev/sdb
>>>>>
>>>>>fsck.ocfs 1.1.2-PROD1 Fri Apr  2 13:59:23 PST 2004 (build
>>>>>2df841d19c031db220f8cfb73439339d)
>>>>>Checking Volume Header...
>>>>>Volume has never been mounted on any node. Exiting
>>>>>/dev/sdb: clean, 0 objects, 0/20478 blocks
>>>>>
>>>>>I tried the exact same thing on the other node and got the same
>>>>>errors.  These partitions had already been mounted as Linux
>>>>>filesystems before I did the mkfs.ocfs stuff.
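>>>>>
>>>>>(One generic way to rule out a stale signature left over from the old
>>>>>Linux filesystems, just a guess on my part, would be to zero the start
>>>>>of the device before re-running mkfs.ocfs, e.g.
>>>>>
>>>>>    dd if=/dev/zero of=/dev/sdb bs=1M count=4
>>>>>
>>>>>after double-checking the device name, since this destroys whatever is
>>>>>on it.)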
>>>>>
>>>>>ocfs.conf
>>>>>
>>>>>
>>>>>      node_name = mit04
>>>>>      node_number = 2
>>>>>      ip_address = 10.85.151.11
>>>>>      ip_port = 7000
>>>>>      comm_voting = 1
>>>>>      guid = 6EFDD701A21BAEED6A36000E7FAE352A
>>>>>
>>>>>ocfs-support-1.1.2-1
>>>>>ocfs-tools-1.1.2-1
>>>>>ocfs-2.4.9-e-smp-1.0.13-1
>>>>>ocfs-2.4.9-e-enterprise-1.0.13-1
>>>>>
>>>>>
>>>>>The ocfs module is loaded:
>>>>>
>>>>>lsmod | grep ocfs
>>>>>ocfs                  306016   0
>>>>>
>>>>>Any ideas?
>>>>>
>>>>>Thanks