[Ocfs2-users] ocfs2 mounting fails

Marcos E. Matsunaga Marcos.Matsunaga at oracle.com
Wed Jun 13 15:15:02 PDT 2007


Alison,

I don't think I understood it before. The mount problem only happens
when you try to mount automatically at boot time? If that is the
situation, the OCFS2 FAQ
(http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html)
has the following (see the combined sketch after the list):

# What do I need to do to mount OCFS2 volumes on boot?

    * Enable o2cb service using:

      	# chkconfig --add o2cb
          

    * Enable ocfs2 service using:

      	# chkconfig --add ocfs2
          

    * Configure o2cb to load on boot using:

      	# /etc/init.d/o2cb configure
          

    * Add entries into /etc/fstab as follows:

      	/dev/sdX	/dir	ocfs2	_netdev	0	0
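
Putting those steps together, here is a minimal sketch of the whole
sequence on the node that fails to mount at boot. The LABEL=u01_oradata
label and the /u02 mount point are taken from your earlier mount command,
so treat them as placeholders for your real label (or /dev/sdX device)
and directory:

	# chkconfig --add o2cb
	# chkconfig --add ocfs2
	# /etc/init.d/o2cb configure
	# echo 'LABEL=u01_oradata  /u02  ocfs2  _netdev  0  0' >> /etc/fstab
	# mount /u02

The o2cb configure step will ask whether to load the driver on boot;
answer yes. The _netdev option makes init mount the volume only after
the network is up (and unmount it before the network goes down), which
OCFS2 needs for its cluster interconnect. If the boot-time mount still
chokes on the LABEL= form, the /dev/sdX form from the FAQ example is
worth trying instead.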
          

Running 'chkconfig --list | grep o2cb' and 'chkconfig --list | grep ocfs2'
will tell you whether they are properly configured. The output should show
that both services are "on" at runlevels 2, 3 and 5.
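
For example, on a RHEL4-style init the check would look roughly like this
(the exact on/off pattern may differ on your box, but 2, 3 and 5 should
be on):

	# chkconfig --list | grep o2cb
	o2cb            0:off   1:off   2:on    3:on    4:on    5:on    6:off
	# chkconfig --list | grep ocfs2
	ocfs2           0:off   1:off   2:on    3:on    4:on    5:on    6:off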

Regarding the mount, I got this from the manpages:


    DUPLICATE LABELS

mount includes support for systems where the same partition is shared
between different devices (e.g. multipath kernel drivers). In the
particular case of mounting a device by LABEL, the mount command reports
a problem with duplicate labels.

You can define the priority of devices in the file /etc/fstab.order as a
simple list of devices (with the full pathname for each device). The
devices listed in this file have greater priority than other devices.
Devices in the file are listed in descending order of priority.
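
If that duplicate-label situation applies to you (for instance, the
Clariion LUN showing up under more than one /dev/sd* node), and assuming
your mount build supports the fstab.order feature described above, the
file is just a plain list of the preferred device paths in order. The
device names below are made up for illustration:

	# cat /etc/fstab.order
	/dev/sdc1
	/dev/sdd1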



Alison Jolley wrote:
> Marcos,
> I've attached the files you wanted.
> I didn't get any errors on the stop or online commands. I can mount
> fine after booting, just not during boot. It somehow didn't like the
> LABEL parameter in the mount command. See the attached file.
> Thanks!
>
> Alison
>
> Marcos E. Matsunaga wrote:
>> Alison,
>>
>> Can  you perform the following on the node that is failing:
>>
>> # date
>> # /etc/init.d/o2cb stop
>> # /etc/init.d/o2cb online
>> # mount LABEL=u01_oradata /u02
>>
>> Then copy the portion of /var/log/messages starting about 5 minutes
>> before the output of the date command above and send it to me. Also,
>> if you can cut/paste the output of the sequence above and attach it to
>> the email, that would help.
>>
>> Alison Jolley wrote:
>>> I'm able to ping both IP addresses, and I'm using the same ocfs2
>>> version on both nodes. I've also rerun the mounted.ocfs2 command and
>>> attached a file containing the output.
>>> Thanks!
>>>
>>> Alison
>>>
>>> Marcos E. Matsunaga wrote:
>>>> Alison,
>>>>
>>>> The outputs of mounted.ocfs2 are both from node 1 (tap2d2), but I
>>>> believe you will see the same UUID from node 0 (tap2d1).
>>>>
>>>> Make sure you can ping each other using the IP addresses you have
>>>> specified on cluster.conf.
>>>>
>>>> Also, make sure both nodes have the same ocfs2 version (rpm
>>>> -qa|grep ocfs2).
>>>>
>>>>
>>>>
>>>> Alison Jolley wrote:
>>>>> Marcos,
>>>>> I've attached the error I get while starting up, as well as the
>>>>> output from mounted.ocfs2 and the cluster.conf files.
>>>>>
>>>>> Here is the other information you have requested:
>>>>> OS Version: Redhat 2.6.9-42.ELsmp
>>>>> OCFS2 Version: 1.2.5-1
>>>>> Environment: EMC Clariion connected via fibre to 2 Dell 1950.
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Alison
>>>>>
>>>>> Marcos E. Matsunaga wrote:
>>>>>> Alison,
>>>>>>
>>>>>> You should use screen (depending on the distribution, it may
>>>>>> already be installed) while mounting on the second node and
>>>>>> capture the output (Ctrl-a then Shift-h starts/stops the
>>>>>> capture). That may give you a clue about what is going on. If you
>>>>>> don't find the problem, please add some details like kernel
>>>>>> version, disk storage type (iSCSI, FC, SCSI, etc.), ocfs2 versions,
>>>>>> and the network interface that ocfs2 is using for the DLM. Also,
>>>>>> it would be interesting to see /etc/ocfs2/cluster.conf from both
>>>>>> nodes and the output of mounted.ocfs2 -d on both nodes.
>>>>>>
>>>>>> Alison Jolley wrote:
>>>>>>> I'm having issues with ocfs2 mounting. I have a 2-node cluster,
>>>>>>> and one of the nodes works fine. The other one is experiencing
>>>>>>> "flaky" behavior. On server startup, it attempts to mount the
>>>>>>> drive, but it fails and produces an error that scrolls by too
>>>>>>> fast (I checked /var/log/messages and /var/log/dmesg with no
>>>>>>> luck). I can mount the drive immediately after startup with no
>>>>>>> errors, but it unmounts itself within a matter of hours (again,
>>>>>>> no errors in messages or dmesg). Is there another log I should
>>>>>>> look at? Does anyone have any ideas as to why this keeps failing?
>>>>>>> Thanks!
>>>>>>>
>>>>>>> Alison
>>>>>>> _______________________________________________
>>>>>>> Ocfs2-users mailing list
>>>>>>> Ocfs2-users at oss.oracle.com
>>>>>>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>>>>>>
>>>>>> -- 
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Marcos Eduardo Matsunaga
>>>>>>
>>>>>> Oracle USA
>>>>>> Linux Engineering
>>>>>>
>>>>>>
>>>>>>   
>>>>
>>>> -- 
>>>>
>>>> Regards,
>>>>
>>>> Marcos Eduardo Matsunaga
>>>>
>>>> Oracle USA
>>>> Linux Engineering
>>>>
>>>>
>>>>   
>>> _______________________________________________
>>> Ocfs2-users mailing list
>>> Ocfs2-users at oss.oracle.com
>>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>>
>> -- 
>>
>> Regards,
>>
>> Marcos Eduardo Matsunaga
>>
>> Oracle USA
>> Linux Engineering
>>
>>
>>   
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users

-- 

Regards,

Marcos Eduardo Matsunaga

Oracle USA
Linux Engineering


