[Ocfs2-users] mounted.ocfs2 -f shows Unknown

Sunil Mushran sunil.mushran at oracle.com
Tue Jan 12 14:49:09 PST 2010


Do you see this from all nodes?

Also, which version of ocfs2-tools?

# rpm -qa | grep ocfs2
# which mounted.ocfs2


David Johle wrote:
>
> So to double-check that, I get this on the unknown node (grate):
>
>
> # cd /sys/kernel/config/cluster/live2/node
> # for x in `find . -type f` ; do echo -n "$x:" ; cat $x; done
> ./grate/local:1
> ./grate/ipv4_address:1.2.3.47
> ./grate/ipv4_port:7777
> ./grate/num:3
> ./whip/local:0
> ./whip/ipv4_address:1.2.3.42
> ./whip/ipv4_port:7777
> ./whip/num:2
> ./pig/local:0
> ./pig/ipv4_address:1.2.3.41
> ./pig/ipv4_port:7777
> ./pig/num:1
>
>
> I did this on the other node that is up (whip), and the results were 
> the same except for the "local" value being 0 for grate and 1 for whip, 
> as it should be.
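
For reference, a rough one-liner that prints the same name/number/local mapping one configured node per line (assuming the cluster is still named live2, as in the output above):

# for n in /sys/kernel/config/cluster/live2/node/*; do echo "$(basename $n): num=$(cat $n/num) local=$(cat $n/local) ip=$(cat $n/ipv4_address)"; done

This just condenses the find/cat loop above, which makes the output easier to compare across nodes.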
>
>
>
>
> At 04:06 PM 1/12/2010, Sunil Mushran wrote:
>> mounted.ocfs2 looks up 
>> /sys/kernel/config/cluster/<clustername>/node/<nodename>/num
>> to match the node number with the node name. If it does not find it, 
>> it prints unknown.
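
As a rough illustration, a shell equivalent of that lookup (illustrative only; the cluster name live2 and node number 3 are taken from the outputs in this thread) would be something like:

# num=3; name=Unknown
# for n in /sys/kernel/config/cluster/live2/node/*; do [ "$(cat $n/num)" = "$num" ] && name=$(basename $n); done
# echo $name

If no node directory carries that number, the name stays "Unknown", which matches what mounted.ocfs2 prints.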
>>
>>
>> David Johle wrote:
>>> # debugfs.ocfs2 -R "slotmap" /dev/dm-6
>>> Slot#  Node#
>>>     0      3
>>>     1      2
>>>
>>> # debugfs.ocfs2 -R "hb" /dev/dm-6
>>> node:  node  seq               generation        checksum
>>>    1:     1  000000004b3949ce  0000000000000000  fb8894ff
>>>    2:     2  000000004b4cd7d0  e869dccb731433ed  805d51fc
>>>    3:     3  000000004b4cd796  2713837530adc63c  7c4bb936
>>>
>>> # debugfs.ocfs2 -R "hb" /dev/dm-6
>>> node:  node  seq               generation        checksum
>>>    1:     1  000000004b3949ce  0000000000000000  fb8894ff
>>>    2:     2  000000004b4cd7d8  e869dccb731433ed  03cb88bc
>>>    3:     3  000000004b4cd79e  2713837530adc63c  ffdd6076
>>>
>>> At 06:46 PM 1/11/2010, Sunil Mushran wrote:
>>>> Email me the outputs of the following:
>>>>
>>>> # debugfs.ocfs2 -R "slotmap" /dev/dm-6
>>>>
>>>> # debugfs.ocfs2 -R "hb" /dev/dm-6
>>>> wait 10 seconds.
>>>> # debugfs.ocfs2 -R "hb" /dev/dm-6
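
If it is easier, something like this captures both samples and the difference in one step (the /tmp file names are just examples):

# debugfs.ocfs2 -R "hb" /dev/dm-6 > /tmp/hb.1
# sleep 10
# debugfs.ocfs2 -R "hb" /dev/dm-6 > /tmp/hb.2
# diff /tmp/hb.1 /tmp/hb.2

Nodes whose heartbeat is active should show a changed seq (and checksum) between the two samples.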
>>>>
>>>>
>>>> David Johle wrote:
>>>>> I was setting up a new 3-node cluster and just so happened to have 
>>>>> only 2 of the nodes online after a kernel update. Those nodes were 
>>>>> numbers 2 & 3 in cluster.conf.
>>>>>
>>>>> It seems that the issue referenced here...
>>>>> http://kr.forums.oracle.com/forums/thread.jspa?threadID=1005937
>>>>>
>>>>> ...is still around. Not so much that I have configured a node 
>>>>> number too high (they are 1, 2, 3 and the node count is 3), but 
>>>>> rather that there is a node that joined the domain whose number is 
>>>>> higher than the total number of nodes joined thus far.
>>>>>
>>>>> ----sample output----
>>>>>
>>>>> # mounted.ocfs2 -f
>>>>>
>>>>> Device     FS     Nodes
>>>>> /dev/dm-6  ocfs2  Unknown, whip
>>>>>
>>>>> ----sample output----
>>>>>
>>>>>
>>>>>
>>>>> Has anyone else run across this, or can anyone else confirm that 
>>>>> this happens and that it's not just something configured wrong on 
>>>>> my system? (I've double-checked many times.)
>>>>>
>>>>> It seems to be only a cosmetic issue with this command; everything 
>>>>> works fine as far as filesystem access and whatnot.



