[Ocfs2-users] ocfs2 - Kernel panic on many write/read from both servers

Eduardo Diaz - Gmail ediazrod at gmail.com
Wed Dec 7 04:32:31 PST 2011


I ran more tests, and I now think my problem comes from the network
interface, not from ocfs2. I will swap the network interfaces for other ones.
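A quick way to back up the NIC theory is to count the r8169 link transitions in the saved kernel log. A minimal sketch over the excerpt below; the `/tmp` file name is just for illustration, on the real box grep `/var/log/kern.log` or the `dmesg` output instead:

```shell
# Count eth3 link transitions in a kernel log excerpt (file name is hypothetical).
cat > /tmp/kern-excerpt.log <<'EOF'
Dec  7 11:56:41 servidoradantra2 kernel: [77153.937731] r8169 0000:05:01.0: eth3: link down
Dec  7 11:56:47 servidoradantra2 kernel: [77160.049405] r8169 0000:05:01.0: eth3: link up
EOF
downs=$(grep -c 'eth3: link down' /tmp/kern-excerpt.log)
ups=$(grep -c 'eth3: link up' /tmp/kern-excerpt.log)
echo "eth3 flaps: down=$downs up=$ups"
```

If the counters keep climbing during the stress test, the interface (or cable/switch port) is flapping independently of ocfs2.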

Dec  7 11:56:41 servidoradantra2 kernel: [77153.937731] r8169 0000:05:01.0: eth3: link down
Dec  7 11:56:47 servidoradantra2 kernel: [77160.049405] r8169 0000:05:01.0: eth3: link up

Dec  7 11:54:48 servidoradantra2 kernel: [77040.644084] o2quot/0      D 00000000     0  2721      2 0x00000000
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644089]  f67dd940 00000046 00000400 00000000 c0426bc0 c1441e20 c1441e20 f68df800
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644096]  f67ddafc c3a08e20 00000000 c10d5782 c0426e80 f6dfa600 ed599b40 f68df800
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644103]  00000000 f67ddafc f54a22d8 00000005 00000000 f54a2308 0c000000 f54a227c
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644111] Call Trace:
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644118]  [<c10d5782>] ? __find_get_block+0x163/0x16d
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644125]  [<c127f0e9>] ? schedule_timeout+0x20/0xb0
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644141]  [<fcfe7bce>] ? ocfs2_metadata_cache_unlock+0x11/0x12 [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644155]  [<fcfe7d08>] ? ocfs2_buffer_cached+0xe2/0x137 [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644171]  [<fb3de16e>] ? o2cb_dlm_lock+0x40/0x57 [ocfs2_stack_o2cb]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644176]  [<c127eff2>] ? wait_for_common+0xa4/0x100
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644182]  [<c10335ff>] ? default_wake_function+0x0/0x8
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644200]  [<fcfaf3e0>] ? __ocfs2_cluster_lock+0x7eb/0x808 [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644211]  [<fcfb30d6>] ? ocfs2_inode_lock_full_nested+0x156/0xa85 [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644220]  [<fcfed2d7>] ? ocfs2_lock_global_qf+0x1d/0x6d [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644229]  [<fcfed2d7>] ? ocfs2_lock_global_qf+0x1d/0x6d [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644238]  [<fcfeddca>] ? ocfs2_sync_dquot_helper+0x9f/0x2c5 [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644241]  [<c10ec21d>] ? dquot_scan_active+0x63/0xab
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644250]  [<fcfedd2b>] ? ocfs2_sync_dquot_helper+0x0/0x2c5 [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644259]  [<fcfed25b>] ? qsync_work_fn+0x23/0x3b [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644262]  [<c1047917>] ? worker_thread+0x141/0x1bd
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644271]  [<fcfed238>] ? qsync_work_fn+0x0/0x3b [ocfs2]
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644274]  [<c104a65a>] ? autoremove_wake_function+0x0/0x2d
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644277]  [<c10477d6>] ? worker_thread+0x0/0x1bd
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644280]  [<c104a428>] ? kthread+0x61/0x66
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644282]  [<c104a3c7>] ? kthread+0x0/0x66
Dec  7 11:54:48 servidoradantra2 kernel: [77040.644286]  [<c1008d87>] ? kernel_thread_helper+0x7/0x10
Dec  7 11:56:41 servidoradantra2 kernel: [77153.937731] r8169 0000:05:01.0: eth3: link down
Dec  7 11:56:47 servidoradantra2 kernel: [77160.049405] r8169 0000:05:01.0: eth3: link up
Dec  7 11:56:48 servidoradantra2 kernel: [77160.644100] o2quot/0      D 00000000     0  2721      2 0x00000000
Dec  7 11:56:48 servidoradantra2 kernel: [77160.644106]  f67dd940 00000046 00000400 00000000 c0426bc0 c1441e20 c1441e20 f68df800
Dec  7 11:56:48 servidoradantra2 kernel: [77160.644114]  f67ddafc c3a08e20 00000000 c10d5782 c0426e80 f6dfa600 ed599b40 f68df800
Dec  7 11:56:48 servidoradantra2 kernel: [77160.644121]  00000000 f67ddafc f54a22d8 00000005 00000000 f54a2308 0c000000 f54a227c
Dec  7 11:56:48 servidoradantra2 kernel: [77160.644128] Call Trace:
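Note that the second o2quot stall is logged well under a second after eth3 comes back up, which is what points at the interface rather than ocfs2 itself. The gap can be checked directly from the two kernel uptime timestamps above (a quick sketch):

```shell
# Seconds between eth3 link-up [77160.049405] and the o2quot stall [77160.644100],
# using the uptime timestamps copied from the log above.
awk 'BEGIN { printf "gap=%.3f s\n", 77160.644100 - 77160.049405 }'
# prints: gap=0.595 s
```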


On Wed, Dec 7, 2011 at 12:02 PM, Eduardo Diaz - Gmail
<ediazrod at gmail.com> wrote:
> I ran the same test as you...
>
> and... got the error too :-(
>
> [75120.532071] INFO: task o2quot/0:3714 blocked for more than 120 seconds.
> [75120.532091] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [75120.532108] o2quot/0      D f5939dc8     0  3714      2 0x00000000
> [75120.532113]  f6bc4c80 00000046 c1441e20 f5939dc8 f6bc4e3c c1441e20 c1441e20 f9bbdbf2
> [75120.532121]  f6bc4e3c c3a08e20 00000000 c33d1f43 0000442f f9a7c716 f6bc4c80 f6bc4c80
> [75120.532128]  eedd5b7c f6bc4e3c f6bd6414 f6bd643c 00000004 00000000 011cd9ce f6bd6630
> [75120.532136] Call Trace:
> [75120.532154]  [<f9bbdbf2>] ? ocfs2_metadata_cache_io_unlock+0x11/0x12 [ocfs2]
> [75120.532163]  [<f9a7c716>] ? start_this_handle+0x2fb/0x37e [jbd2]
> [75120.532171]  [<c127f4d7>] ? __mutex_lock_common+0xe8/0x13b
> [75120.532176]  [<c127f539>] ? __mutex_lock_slowpath+0xf/0x11
> [75120.532180]  [<c127f5ca>] ? mutex_lock+0x17/0x24
> [75120.532184]  [<c127f5ca>] ? mutex_lock+0x17/0x24
> [75120.532198]  [<f9bc3e91>] ? ocfs2_sync_dquot_helper+0x166/0x2c5 [ocfs2]
> [75120.532204]  [<c10ec21d>] ? dquot_scan_active+0x63/0xab
> [75120.532217]  [<f9bc3d2b>] ? ocfs2_sync_dquot_helper+0x0/0x2c5 [ocfs2]
> [75120.532231]  [<f9bc325b>] ? qsync_work_fn+0x23/0x3b [ocfs2]
> [75120.532236]  [<c1047917>] ? worker_thread+0x141/0x1bd
> [75120.532249]  [<f9bc3238>] ? qsync_work_fn+0x0/0x3b [ocfs2]
> [75120.532254]  [<c104a65a>] ? autoremove_wake_function+0x0/0x2d
> [75120.532258]  [<c10477d6>] ? worker_thread+0x0/0x1bd
> [75120.532262]  [<c104a428>] ? kthread+0x61/0x66
> [75120.532266]  [<c104a3c7>] ? kthread+0x0/0x66
> [75120.532271]  [<c1008d87>] ? kernel_thread_helper+0x7/0x10
> [75120.532276] INFO: task jbd2/drbd0-21:3723 blocked for more than 120 seconds.
>
> On Wed, Dec 7, 2011 at 11:42 AM, Eduardo Diaz - Gmail
> <ediazrod at gmail.com> wrote:
>> Can you run fsck.ocfs2 to see whether the filesystem is broken?
>>
>> On Wed, Dec 7, 2011 at 11:30 AM, Marek Krolikowski <admin at wset.edu.pl> wrote:
>>> Hello
>>> I use sys-fs/ocfs2-tools-1.6.4 and created the file system with all features:
>>> mkfs.ocfs2 -N 2 -L MAIL --fs-feature-level=max-features /dev/dm-0
>>> and after this got a kernel panic :(
>>>
>>>
>>>
>>> -----Original Message----- From: Eduardo Diaz - Gmail
>>> Sent: Wednesday, December 07, 2011 11:08 AM
>>> To: Marek Krolikowski
>>>
>>> Cc: ocfs2-users at oss.oracle.com
>>> Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from both servers
>>>
>>> Try another filesystem, for example xfs, and run a full test.
>>>
>>> Try creating a one-node cluster and filesystem and run the tests there
>>> (the file index can be the problem).
>>>
>>> Run fsck on the filesystem.
>>>
>>> Try upgrading ocfs2 to the latest version. And do you really need the
>>> max features when you only have two nodes?
>>>
>>> What I would do: make a backup, then create a new filesystem with only
>>> the features you need, running mkfs with the node-slot count set to the
>>> number of nodes you will actually use.
>>>
>>> Restore the data.
>>>
>>> Run extensive tests for a week before putting it into production :-).
>>>
>>> On Tue, Dec 6, 2011 at 2:04 PM, Marek Krolikowski <admin at wset.edu.pl> wrote:
>>>>
>>>> Hey m8,
>>>> like I said, I am not an expert either, but when I used ext3,
>>>> write/read worked with no problems.
>>>>
>>>>
>>>> -----Original Message----- From: Eduardo Diaz - Gmail
>>>> Sent: Tuesday, December 06, 2011 3:06 AM
>>>> To: Marek Królikowski
>>>> Cc: ocfs2-users at oss.oracle.com
>>>> Subject: Re: [Ocfs2-users] ocfs2 - Kernel panic on many write/read from both servers
>>>>
>>>>
>>>> I am not an expert, but you may have a problem in your EMC system
>>>> (multipath setup) or its drivers.
>>>>
>>>> Did you test this before putting it into production, or test this NAS
>>>> with another filesystem, xfs for example?
>>>>
>>>> From the "hung_task_timeout_secs" messages I see that some tasks wait
>>>> more than 120 seconds; that may be an EMC/fibre/cable problem...
>>>>
>>>> 2011/12/4 Marek Królikowski <admin at wset.edu.pl>:
>>>>>
>>>>>
>>>>> I ran write/read tests on the ocfs2 filesystem from both servers all
>>>>> night, something like this:
>>>>> On MAIL1 server:
>>>>> #!/bin/bash
>>>>> while true
>>>>> do
>>>>> rm -rf /mnt/EMC/MAIL1
>>>>> mkdir /mnt/EMC/MAIL1
>>>>> cp -r /usr /mnt/EMC/MAIL1
>>>>> rm -rf /mnt/EMC/MAIL1
>>>>> done;
>>>>> On MAIL2 server:
>>>>> #!/bin/bash
>>>>> while true
>>>>> do
>>>>> rm -rf /mnt/EMC/MAIL2
>>>>> mkdir /mnt/EMC/MAIL2
>>>>> cp -r /usr /mnt/EMC/MAIL2
>>>>> rm -rf /mnt/EMC/MAIL2
>>>>> done;
>>>>>
>>>>> Today I checked the logs and saw:
>>>>> o2dlm: Node 1 joins domain EAC7942B71964050AE2046D3F0CDD7B2
>>>>> o2dlm: Nodes in domain EAC7942B71964050AE2046D3F0CDD7B2: 0 1
>>>>> (rm,26136,0):ocfs2_unlink:953 ERROR: status = -2
>>>>> (touch,26137,0):ocfs2_check_dir_for_entry:2120 ERROR: status = -17
>>>>> (touch,26137,0):ocfs2_mknod:461 ERROR: status = -17
>>>>> (touch,26137,0):ocfs2_create:631 ERROR: status = -17
>>>>> (rm,26142,0):ocfs2_unlink:953 ERROR: status = -2
>>>>> INFO: task kworker/u:2:20246 blocked for more than 120 seconds.
>>>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> kworker/u:2     D ffff88107f4525c0     0 20246      2 0x00000000
>>>>> ffff880b730b57d0 0000000000000046 ffff8810201297d0 00000000000125c0
>>>>> ffff880f5a399fd8 00000000000125c0 00000000000125c0 00000000000125c0
>>>>> ffff880f5a398000 00000000000125c0 ffff880f5a399fd8 00000000000125c0
>>>>> Call Trace:
>>>>> [<ffffffff81481b71>] ? __mutex_lock_slowpath+0xd1/0x140
>>>>> [<ffffffff814818d3>] ? mutex_lock+0x23/0x40
>>>>> [<ffffffffa0937d95>] ? ocfs2_wipe_inode+0x105/0x690 [ocfs2]
>>>>> [<ffffffffa0935cfb>] ? ocfs2_query_inode_wipe.clone.9+0xcb/0x370 [ocfs2]
>>>>> [<ffffffffa09385a4>] ? ocfs2_delete_inode+0x284/0x3f0 [ocfs2]
>>>>> [<ffffffffa0919a10>] ? ocfs2_dentry_attach_lock+0x5a0/0x5a0 [ocfs2]
>>>>> [<ffffffffa093872e>] ? ocfs2_evict_inode+0x1e/0x50 [ocfs2]
>>>>> [<ffffffff81145900>] ? evict+0x70/0x140
>>>>> [<ffffffffa0919322>] ? __ocfs2_drop_dl_inodes.clone.2+0x32/0x60 [ocfs2]
>>>>> [<ffffffffa0919a39>] ? ocfs2_drop_dl_inodes+0x29/0x90 [ocfs2]
>>>>> [<ffffffff8106e56f>] ? process_one_work+0x11f/0x440
>>>>> [<ffffffff8106f279>] ? worker_thread+0x159/0x330
>>>>> [<ffffffff8106f120>] ? manage_workers.clone.21+0x120/0x120
>>>>> [<ffffffff8106f120>] ? manage_workers.clone.21+0x120/0x120
>>>>> [<ffffffff81073fa6>] ? kthread+0x96/0xa0
>>>>> [<ffffffff8148bb24>] ? kernel_thread_helper+0x4/0x10
>>>>> [<ffffffff81073f10>] ? kthread_worker_fn+0x1a0/0x1a0
>>>>> [<ffffffff8148bb20>] ? gs_change+0x13/0x13
>>>>> INFO: task rm:5192 blocked for more than 120 seconds.
>>>>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>>> rm              D ffff88107f2725c0     0  5192  16338 0x00000000
>>>>> ffff881014ccb040 0000000000000082 ffff8810206b8040 00000000000125c0
>>>>> ffff8804d7697fd8 00000000000125c0 00000000000125c0 00000000000125c0
>>>>> ffff8804d7696000 00000000000125c0 ffff8804d7697fd8 00000000000125c0
>>>>> Call Trace:
>>>>> [<ffffffff8148148d>] ? schedule_timeout+0x1ed/0x2e0
>>>>> [<ffffffffa0886162>] ? dlmconvert_master+0xe2/0x190 [ocfs2_dlm]
>>>>> [<ffffffffa08878bf>] ? dlmlock+0x7f/0xb70 [ocfs2_dlm]
>>>>> [<ffffffff81480e0a>] ? wait_for_common+0x13a/0x190
>>>>> [<ffffffff8104bc50>] ? try_to_wake_up+0x280/0x280
>>>>> [<ffffffffa0928a38>] ? __ocfs2_cluster_lock.clone.21+0x1d8/0x6b0 [ocfs2]
>>>>> [<ffffffffa0928fcc>] ? ocfs2_inode_lock_full_nested+0xbc/0x490 [ocfs2]
>>>>> [<ffffffffa0943c1b>] ? ocfs2_lookup_lock_orphan_dir+0x6b/0x1b0 [ocfs2]
>>>>> [<ffffffffa09454ba>] ? ocfs2_prepare_orphan_dir+0x4a/0x280 [ocfs2]
>>>>> [<ffffffffa094616f>] ? ocfs2_unlink+0x6ef/0xb90 [ocfs2]
>>>>> [<ffffffff811b35a9>] ? may_link.clone.22+0xd9/0x170
>>>>> [<ffffffff8113aa58>] ? vfs_unlink+0x98/0x100
>>>>> [<ffffffff8113ac41>] ? do_unlinkat+0x181/0x1b0
>>>>> [<ffffffff8113e7cd>] ? vfs_readdir+0x9d/0xe0
>>>>> [<ffffffff811653d8>] ? fsnotify_find_inode_mark+0x28/0x40
>>>>> [<ffffffff81166324>] ? dnotify_flush+0x54/0x110
>>>>> [<ffffffff8112b07f>] ? filp_close+0x5f/0x90
>>>>> [<ffffffff8148aa12>] ? system_call_fastpath+0x16/0x1b
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Ocfs2-users mailing list
>>>>> Ocfs2-users at oss.oracle.com
>>>>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>>>>
>>>>
>>>>
>>>


