[Ocfs2-users] more ocfs2_delete_inode dmesg questions

Sunil Mushran sunil.mushran at oracle.com
Mon Aug 24 18:12:19 PDT 2009


So a delete was called for some inodes that had not been orphaned.
The pre-checks detected this and correctly aborted the deletes; the
status = -17 in the log is just -EEXIST propagating up from that check.
No harm done.
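
If you want to double-check that nothing is actually stuck pending a
wipe, you can also list the per-slot orphan directories with
debugfs.ocfs2. A sketch, assuming slot 0 (there is one orphan dir per
slot); an empty listing means nothing is waiting to be wiped:
# debugfs.ocfs2 -R "ls -l //orphan_dir:0000" /dev/sdX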

No, the messages do not pinpoint the device. It's something we have
discussed adding, but have not yet done.

Next time this happens and you can identify the volume, do:
# debugfs.ocfs2 -R "findpath <613069>" /dev/sdX

This will tell you the pathname for that inode number. Then see if you can
remember performing any op on that file. Anything. It may help us narrow
down the issue.
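
Since you have two volumes, run that against each device in turn; the
volume on which the inode number resolves to a path is the one with the
problem inodes (a sketch, using the device names from your mount output
below):
# debugfs.ocfs2 -R "findpath <613069>" /dev/sdb1
# debugfs.ocfs2 -R "findpath <613069>" /dev/sdc1

Once found, "stat" on the same inode will show its flags and other
metadata:
# debugfs.ocfs2 -R "stat <613069>" /dev/sdX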

Sunil

Brian Kroth wrote:
> I recently brought up a mail server with two ocfs2 volumes on it, one
> large one for the user maildirs, and one small one for queue/spool
> directories.  More information on the specifics below.  When flushing
> the queues from the MXs I saw the messages listed below fly by, but
> since then nothing.
>
> A couple of questions:
> - Should I be worried about these?  They seemed similar to, yet
>   different from, a number of other "out of space" and "failure to
>   delete" reports of late.
> - How can I tell which volume has the problem inodes?
> - Is there anything to be done about them?
>
> Here's the snip from the tail of dmesg:
>
> [   34.578787] netconsole: network logging started
> [   36.695679] ocfs2: Registered cluster interface o2cb
> [   43.354897] OCFS2 1.5.0
> [   43.373100] ocfs2_dlm: Nodes in domain ("94468EF57C9F4CA18C8D218C63E99A9C"): 1 
> [   43.386623] kjournald2 starting: pid 2328, dev sdb1:36, commit interval 5 seconds
> [   43.395413] ocfs2: Mounting device (8,17) on (node 1, slot 0) with ordered data mode.
> [   44.984201] eth1: no IPv6 routers present
> [   54.362580] warning: `ntpd' uses 32-bit capabilities (legacy support in use)
> [ 1601.560932] ocfs2_dlm: Nodes in domain ("10BBA4EB7687450496F7FCF0475F9372"): 1 
> [ 1601.581106] kjournald2 starting: pid 7803, dev sdc1:36, commit interval 5 seconds
> [ 1601.593065] ocfs2: Mounting device (8,33) on (node 1, slot 0) with ordered data mode.
> [ 3858.778792] (26441,0):ocfs2_query_inode_wipe:882 ERROR: Inode 613069 (on-disk 613069) not orphaned! Disk flags  0x1, inode flags 0x80
> [ 3858.779005] (26441,0):ocfs2_delete_inode:1010 ERROR: status = -17
> [ 4451.007580] (5053,0):ocfs2_query_inode_wipe:882 ERROR: Inode 613118 (on-disk 613118) not orphaned! Disk flags  0x1, inode flags 0x80
> [ 4451.007711] (5053,0):ocfs2_delete_inode:1010 ERROR: status = -17
> [ 4807.908463] (11859,0):ocfs2_query_inode_wipe:882 ERROR: Inode 612899 (on-disk 612899) not orphaned! Disk flags  0x1, inode flags 0x80
> [ 4807.908611] (11859,0):ocfs2_delete_inode:1010 ERROR: status = -17
> [ 5854.377155] (31074,1):ocfs2_query_inode_wipe:882 ERROR: Inode 612867 (on-disk 612867) not orphaned! Disk flags  0x1, inode flags 0x80
> [ 5854.377302] (31074,1):ocfs2_delete_inode:1010 ERROR: status = -17
> [ 6136.297464] (3463,0):ocfs2_query_inode_wipe:882 ERROR: Inode 612959 (on-disk 612959) not orphaned! Disk flags  0x1, inode flags 0x80
> [ 6136.297555] (3463,0):ocfs2_delete_inode:1010 ERROR: status = -17
> [19179.000100] NOHZ: local_softirq_pending 80
>
>
> There are actually three nodes, all VMs, that are set up for the ocfs2
> cluster volumes, but only one has them mounted.  The others are
> available as cold standbys that may eventually be managed by heartbeat,
> so there shouldn't be any locking contention going on.
>
> All nodes are running 2.6.30 with ocfs2-tools 1.4.2.
>
> Here are the commands used to make the volumes:
> mkfs.ocfs2 -v -L ocfs2mailcluster2 -N 8 -T mail /dev/sdb1
> mkfs.ocfs2 -v -L ocfs2mailcluster2spool -N 8 -T mail /dev/sdc1
>
> The features they were set up with:
> tunefs.ocfs2 -Q "Label: %V\nFeatures: %H %O\n" /dev/sdb1
> Label: ocfs2mailcluster2
> Features: sparse inline-data unwritten
>
> tunefs.ocfs2 -Q "Label: %V\nFeatures: %H %O\n" /dev/sdc1
> Label: ocfs2mailcluster2spool
> Features: sparse inline-data unwritten
>
> And their mount options:
> mount | grep cluster
> /dev/sdb1 on /cluster type ocfs2 (rw,noexec,nodev,_netdev,relatime,localflocks,heartbeat=local)
> /dev/sdc1 on /cluster-spool type ocfs2 (rw,noexec,nodev,_netdev,relatime,localflocks,heartbeat=local)
>
> localflocks is set because I ran into a problem with cluster flocks
> previously, and since it's currently a single-active-node model there's
> no need for them anyway.
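>
> For reference, the fstab entries for these would look roughly like this
> (a sketch, reconstructed from the mount output above):
> /dev/sdb1  /cluster        ocfs2  rw,noexec,nodev,_netdev,relatime,localflocks  0 0
> /dev/sdc1  /cluster-spool  ocfs2  rw,noexec,nodev,_netdev,relatime,localflocks  0 0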
>
> Let me know if you need any other information.
>
> Thanks,
> Brian
>



