[Ocfs2-users] ocfs2 - Kernel panic on many write/read from both servers

Dzianis Kahanovich mahatma at bspu.unibel.by
Sat Dec 24 07:58:54 PST 2011


No. I switched my second/failover kernel from "git" to "vanilla" (in Gentoo's 
terms). Vanilla 3.1.6 is now working without errors.

But 3.2.0-rc6 still has minor problems: seconds after starting I get runtime 
(non-fatal) register dumps or "panics" (sorry for the imprecise terminology) 
somewhere in the "inode" area of ocfs2 (I don't have the exact messages at 
hand). I am also left with multiple apache processes that cannot be killed even 
with "kill -s KILL"; IMHO they are still accessing these "unconnected inodes" 
(again, perhaps not the precise term). The ocfs2 volume is partially accessible 
and shows no errors after rebooting into the 3.1 kernel. Could mmap access be 
involved? (Sorry, I don't want to punish the system with more testing right now.)
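The stuck apache processes are consistent with holding open descriptors on
unlinked ("orphaned") inodes. The effect itself is plain POSIX semantics and can
be demonstrated on any filesystem (a minimal sketch, not ocfs2-specific; on a
cluster filesystem the orphan additionally pins DLM state until the last opener
goes away):

```shell
#!/bin/sh
# Minimal demo of the "open but unlinked" (orphaned inode) state.
# Not ocfs2-specific; the same semantics apply on any POSIX filesystem.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"   # keep a descriptor open on the file
rm -f "$tmp"     # unlink it: the inode survives until fd 3 is closed
cat <&3          # the data is still readable -> prints: still here
exec 3<&-        # closing the last descriptor finally frees the inode
```

Tools like lsof mark such files with a "(deleted)" suffix, which is one way to
check whether the unkillable apache processes are pinning orphaned inodes on the
ocfs2 mount.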

I have used ocfs2 since 1.4 and still don't use some of the newer features - 
like directory indexing (I am just waiting for a chance to back up and 
reformat - after old troubles I don't trust tunefs).

Marek Królikowski writes:
> So you suggest using a kernel with Gentoo patches?
> I have always used vanilla kernels downloaded from kernel.org - so far I have
> checked 2.6.39.4, 3.1.4 and, today, 3.2-rc5 - and got the same effect on all
> of them...
>
>
> -----Original message-----
> From: Dzianis Kahanovich
> Sent: Friday, December 16, 2011 6:38 PM
> To: Marek Królikowski
> Cc: ocfs2-users at oss.oracle.com
> Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from both
> servers
>
> I got a kernel panic on 3.2.0-rc2, but not on 3.1.5-gentoo. "Gentoo" means
> Gentoo patches, but I believe they are not related to ocfs2.
>
> Marek Królikowski writes:
>> Everything on this server is new, and that is the problem - on the old
>> servers OCFS2 worked fine, so either the new ocfs2 or the new kernel has a
>> BUG.
>> Try kernel 3.1.x with ocfs2-tools 1.6.x and you will see the kernel panic; I
>> checked the forums and many people get the same effect on new OCFS2
>> clusters..
>>
>> -----Original message-----
>> From: Eduardo Diaz - Gmail
>> Sent: Friday, December 16, 2011 4:34 PM
>> To: Marek Królikowski
>> Cc: ocfs2-users at oss.oracle.com
>> Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from
>> both
>> servers
>>
>> My recommendation is to upgrade everything and recreate the filesystem in
>> the cluster..
>>
>> If you need professional help, please open a support request or hire a
>> professional... ocfs2 is very hard to use if you don't know how to use
>> it..
>>
>> The list is for help, or not, but it is free..
>>
>> 2011/12/15 Marek Królikowski <admin at wset.edu.pl>:
>>> Can anyone help me with this?
>>>
>>>
>>>
>>> -----Original message-----
>>> From: Marek Królikowski
>>> Sent: Sunday, December 04, 2011 11:15 AM
>>> To: ocfs2-users at oss.oracle.com
>>> Subject: ocfs2 - Kernel panic on many write/read from both servers
>>>
>>> Overnight I ran write/read tests against ocfs2 from both servers,
>>> something like this:
>>> On MAIL1 server:
>>> #!/bin/bash
>>> while true
>>> do
>>>     rm -rf /mnt/EMC/MAIL1
>>>     mkdir /mnt/EMC/MAIL1
>>>     cp -r /usr /mnt/EMC/MAIL1
>>>     rm -rf /mnt/EMC/MAIL1
>>> done
>>> On MAIL2 server:
>>> #!/bin/bash
>>> while true
>>> do
>>>     rm -rf /mnt/EMC/MAIL2
>>>     mkdir /mnt/EMC/MAIL2
>>>     cp -r /usr /mnt/EMC/MAIL2
>>>     rm -rf /mnt/EMC/MAIL2
>>> done
>>>
>>> Today I checked the logs and saw:
>>> o2dlm: Node 1 joins domain EAC7942B71964050AE2046D3F0CDD7B2
>>> o2dlm: Nodes in domain EAC7942B71964050AE2046D3F0CDD7B2: 0 1
>>> (rm,26136,0):ocfs2_unlink:953 ERROR: status = -2
>>> (touch,26137,0):ocfs2_check_dir_for_entry:2120 ERROR: status = -17
>>> (touch,26137,0):ocfs2_mknod:461 ERROR: status = -17
>>> (touch,26137,0):ocfs2_create:631 ERROR: status = -17
>>> (rm,26142,0):ocfs2_unlink:953 ERROR: status = -2
>>> INFO: task kworker/u:2:20246 blocked for more than 120 seconds.
>>> "echo 0>   /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> kworker/u:2     D ffff88107f4525c0     0 20246      2 0x00000000
>>> ffff880b730b57d0 0000000000000046 ffff8810201297d0 00000000000125c0
>>> ffff880f5a399fd8 00000000000125c0 00000000000125c0 00000000000125c0
>>> ffff880f5a398000 00000000000125c0 ffff880f5a399fd8 00000000000125c0
>>> Call Trace:
>>> [<ffffffff81481b71>] ? __mutex_lock_slowpath+0xd1/0x140
>>> [<ffffffff814818d3>] ? mutex_lock+0x23/0x40
>>> [<ffffffffa0937d95>] ? ocfs2_wipe_inode+0x105/0x690 [ocfs2]
>>> [<ffffffffa0935cfb>] ? ocfs2_query_inode_wipe.clone.9+0xcb/0x370 [ocfs2]
>>> [<ffffffffa09385a4>] ? ocfs2_delete_inode+0x284/0x3f0 [ocfs2]
>>> [<ffffffffa0919a10>] ? ocfs2_dentry_attach_lock+0x5a0/0x5a0 [ocfs2]
>>> [<ffffffffa093872e>] ? ocfs2_evict_inode+0x1e/0x50 [ocfs2]
>>> [<ffffffff81145900>] ? evict+0x70/0x140
>>> [<ffffffffa0919322>] ? __ocfs2_drop_dl_inodes.clone.2+0x32/0x60 [ocfs2]
>>> [<ffffffffa0919a39>] ? ocfs2_drop_dl_inodes+0x29/0x90 [ocfs2]
>>> [<ffffffff8106e56f>] ? process_one_work+0x11f/0x440
>>> [<ffffffff8106f279>] ? worker_thread+0x159/0x330
>>> [<ffffffff8106f120>] ? manage_workers.clone.21+0x120/0x120
>>> [<ffffffff8106f120>] ? manage_workers.clone.21+0x120/0x120
>>> [<ffffffff81073fa6>] ? kthread+0x96/0xa0
>>> [<ffffffff8148bb24>] ? kernel_thread_helper+0x4/0x10
>>> [<ffffffff81073f10>] ? kthread_worker_fn+0x1a0/0x1a0
>>> [<ffffffff8148bb20>] ? gs_change+0x13/0x13
>>> INFO: task rm:5192 blocked for more than 120 seconds.
>>> "echo 0>   /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> rm              D ffff88107f2725c0     0  5192  16338 0x00000000
>>> ffff881014ccb040 0000000000000082 ffff8810206b8040 00000000000125c0
>>> ffff8804d7697fd8 00000000000125c0 00000000000125c0 00000000000125c0
>>> ffff8804d7696000 00000000000125c0 ffff8804d7697fd8 00000000000125c0
>>> Call Trace:
>>> [<ffffffff8148148d>] ? schedule_timeout+0x1ed/0x2e0
>>> [<ffffffffa0886162>] ? dlmconvert_master+0xe2/0x190 [ocfs2_dlm]
>>> [<ffffffffa08878bf>] ? dlmlock+0x7f/0xb70 [ocfs2_dlm]
>>> [<ffffffff81480e0a>] ? wait_for_common+0x13a/0x190
>>> [<ffffffff8104bc50>] ? try_to_wake_up+0x280/0x280
>>> [<ffffffffa0928a38>] ? __ocfs2_cluster_lock.clone.21+0x1d8/0x6b0 [ocfs2]
>>> [<ffffffffa0928fcc>] ? ocfs2_inode_lock_full_nested+0xbc/0x490 [ocfs2]
>>> [<ffffffffa0943c1b>] ? ocfs2_lookup_lock_orphan_dir+0x6b/0x1b0 [ocfs2]
>>> [<ffffffffa09454ba>] ? ocfs2_prepare_orphan_dir+0x4a/0x280 [ocfs2]
>>> [<ffffffffa094616f>] ? ocfs2_unlink+0x6ef/0xb90 [ocfs2]
>>> [<ffffffff811b35a9>] ? may_link.clone.22+0xd9/0x170
>>> [<ffffffff8113aa58>] ? vfs_unlink+0x98/0x100
>>> [<ffffffff8113ac41>] ? do_unlinkat+0x181/0x1b0
>>> [<ffffffff8113e7cd>] ? vfs_readdir+0x9d/0xe0
>>> [<ffffffff811653d8>] ? fsnotify_find_inode_mark+0x28/0x40
>>> [<ffffffff81166324>] ? dnotify_flush+0x54/0x110
>>> [<ffffffff8112b07f>] ? filp_close+0x5f/0x90
>>> [<ffffffff8148aa12>] ? system_call_fastpath+0x16/0x1b
>>>
>>>
>>> _______________________________________________
>>> Ocfs2-users mailing list
>>> Ocfs2-users at oss.oracle.com
>>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>>
>>
>> _______________________________________________
>> Ocfs2-users mailing list
>> Ocfs2-users at oss.oracle.com
>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>>
>>
>>
>
>


-- 
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/


