[Ocfs2-users] Oops during umount
Laurence Mayer
laurence at istraresearch.com
Thu Oct 1 00:30:32 PDT 2009
So is there currently nothing for me to do?
Would you know which official OCFS2 version 1.3.9-0ubuntu1 corresponds to?
Thanks again
Laurence
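As a side note on the version question: the Ubuntu package version string already encodes the upstream tools version per Debian conventions, and the kernel-side module reports its own version separately. A minimal sketch (the modinfo/sysfs paths are assumptions about what the node exposes):

```shell
# A Debian/Ubuntu package version has the form
# upstream_version-debian_revision; stripping everything from the first
# '-' yields the upstream ocfs2-tools version the package was built from.
pkg_ver='1.3.9-0ubuntu1'
upstream_ver="${pkg_ver%%-*}"
echo "upstream tools version: $upstream_ver"

# The filesystem/DLM code itself ships with the kernel, not the tools
# package; on a running node its version can be read from the loaded
# module (paths assumed, adjust for your kernel):
#   modinfo ocfs2 | grep -i '^version'
#   cat /sys/module/ocfs2/version
```

Note the tools package version and the kernel module version need not match, since Ubuntu builds them from different source trees.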
Sunil Mushran wrote:
> This bug requires two mainline patches specified in the following bugs:
> http://oss.oracle.com/bugzilla/show_bug.cgi?id=914
> http://oss.oracle.com/bugzilla/show_bug.cgi?id=1162
>
> Laurence Mayer wrote:
>> Looks very much like bugzilla 914
>>
>> http://oss.oracle.com/bugzilla/show_bug.cgi?id=914
>>
>>
>> Laurence Mayer wrote:
>>> OS: Ubuntu 8.04 x64
>>> Kern: Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009
>>> x86_64 GNU/Linux
>>> 10 Node Cluster
>>> OCFS2 Version: 1.3.9-0ubuntu1 (BTW: how does this map to official
>>> OCFS2 version numbers?)
>>>
>>> During a routine umount of the OCFS2 volume on a single node, I get
>>> the oops below.
>>>
>>> I see there is a bugzilla 785; I'm not sure if it is the same problem.
>>>
>>> Any ideas why?
>>>
>>> Thanks
>>> Laurence
>>>
>>>
>>> Sep 30 09:47:44 n8 kernel: [1702585.411505]
>>> (26626,3):dlm_empty_lockres:2774 ERROR: lockres
>>> M000000000000000030ba1100000000 still has local locks!
>>> Sep 30 09:47:44 n8 kernel: [1702585.413492] ------------[ cut here
>>> ]------------
>>> Sep 30 09:47:44 n8 kernel: [1702585.413526] kernel BUG at
>>> /build/buildd/linux-2.6.24/fs/ocfs2/dlm/dlmmaster.c:2775!
>>> Sep 30 09:47:44 n8 kernel: [1702585.413585] invalid opcode: 0000 [1]
>>> SMP
>>> Sep 30 09:47:44 n8 kernel: [1702585.413622] CPU 3
>>> Sep 30 09:47:44 n8 kernel: [1702585.413651] Modules linked in: ocfs2 ocfs2_dlmfs ocfs2_dlm ocfs2_nodemanager configfs ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi scsi_transport_iscsi nfs lockd nfs_acl sunrpc crc32c libcrc32c ipmi_devintf ipmi_si ipmi_msghandler iptable_filter ip_tables x_tables xfs ipv6 parport_pc lp parport loop serio_raw psmouse i2c_piix4 dcdbas i2c_core pcspkr button k8temp evdev shpchp pci_hotplug ext3 jbd mbcache sg sd_mod sr_mod cdrom ata_generic usbhid hid pata_acpi sata_svw ehci_hcd pata_serverworks ohci_hcd tg3 libata usbcore scsi_mod thermal processor fan fbcon tileblit font bitblit softcursor fuse
>>> Sep 30 09:47:44 n8 kernel: [1702585.414110] Pid: 26626, comm: umount
>>> Not tainted 2.6.24-24-server #1
>>> Sep 30 09:47:44 n8 kernel: [1702585.414145] RIP:
>>> 0010:[ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0]
>>> [ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0]
>>> :ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0
>>> Sep 30 09:47:44 n8 kernel: [1702585.414220] RSP:
>>> 0018:ffff8102f1d95be8 EFLAGS: 00010292
>>> Sep 30 09:47:44 n8 kernel: [1702585.414255] RAX: 000000000000007b
>>> RBX: ffff81037b08d200 RCX: 0000000000000001
>>> Sep 30 09:47:44 n8 kernel: [1702585.414311] RDX: ffffffff80590f68
>>> RSI: 0000000000000082 RDI: ffffffff80590f60
>>> Sep 30 09:47:44 n8 kernel: [1702585.414369] RBP: 00000000ffffffd9
>>> R08: 0000742df56cf5a4 R09: 0000000000000000
>>> Sep 30 09:47:44 n8 kernel: [1702585.414426] R10: ffff810220068c60
>>> R11: ffffffff802231e0 R12: ffff81037b08d200
>>> Sep 30 09:47:44 n8 kernel: [1702585.414480] R13: ffff81041190f000
>>> R14: 0000000000000001 R15: ffff8102891b8280
>>> Sep 30 09:47:44 n8 kernel: [1702585.414537] FS:
>>> 00007fe016e746e0(0000) GS:ffff810416da9c80(0000) knlGS:00000000f34f5b90
>>> Sep 30 09:47:44 n8 kernel: [1702585.414596] CS: 0010 DS: 0000 ES:
>>> 0000 CR0: 000000008005003b
>>> Sep 30 09:47:44 n8 kernel: [1702585.414632] CR2: 00007f36e7000000
>>> CR3: 0000000105a1e000 CR4: 00000000000006e0
>>> Sep 30 09:47:44 n8 kernel: [1702585.414689] DR0: 0000000000000000
>>> DR1: 0000000000000000 DR2: 0000000000000000
>>> Sep 30 09:47:44 n8 kernel: [1702585.414744] DR3: 0000000000000000
>>> DR6: 00000000ffff0ff0 DR7: 0000000000000400
>>> Sep 30 09:47:44 n8 kernel: [1702585.414801] Process umount (pid:
>>> 26626, threadinfo ffff8102f1d94000, task ffff81040ec617d0)
>>> Sep 30 09:47:44 n8 kernel: [1702585.414861] Stack: ffff8102891b8280
>>> ffffffff00000008 ffff810317045b30 ffffffff80682c00
>>> Sep 30 09:47:44 n8 kernel: [1702585.414931] ffffffff80682c00
>>> ffff81041190f0b8 ffff81041190f250 ffff81041190f268
>>> Sep 30 09:47:44 n8 kernel: [1702585.414996] ffff81037b08d300
>>> ffff8102f1d95ca0 ffff81041190f168 ffff81037b08d228
>>> Sep 30 09:47:44 n8 kernel: [1702585.415041] Call Trace:
>>> Sep 30 09:47:44 n8 kernel: [1702585.415106] [<ffffffff80254510>]
>>> autoremove_wake_function+0x0/0x30
>>> Sep 30 09:47:44 n8 kernel: [1702585.415149]
>>> [ocfs2_dlm:__dlm_lockres_unused+0x33/0x60]
>>> :ocfs2_dlm:__dlm_lockres_unused+0x33/0x60
>>> Sep 30 09:47:44 n8 kernel: [1702585.415194]
>>> [ocfs2_dlm:__dlm_lockres_calc_usage+0x4f/0x200]
>>> :ocfs2_dlm:__dlm_lockres_calc_usage+0x4f/0x200
>>> Sep 30 09:47:44 n8 kernel: [1702585.415254]
>>> [ocfs2_nodemanager:kref_get+0x3/0x1cf0] kref_get+0x3/0x40
>>> Sep 30 09:47:44 n8 kernel: [1702585.415294]
>>> [ocfs2_dlm:dlm_unregister_domain+0x127/0x780]
>>> :ocfs2_dlm:dlm_unregister_domain+0x127/0x780
>>> Sep 30 09:47:44 n8 kernel: [1702585.415359] [<ffffffff80236510>]
>>> default_wake_function+0x0/0x10
>>> Sep 30 09:47:44 n8 kernel: [1702585.415419] [<ffffffff88477cdb>]
>>> :ocfs2:ocfs2_dlm_shutdown+0x9b/0x170
>>> Sep 30 09:47:44 n8 kernel: [1702585.415471] [<ffffffff8849d883>]
>>> :ocfs2:ocfs2_dismount_volume+0x183/0x490
>>> Sep 30 09:47:44 n8 kernel: [1702585.415526] [<ffffffff8849df20>]
>>> :ocfs2:ocfs2_put_super+0x30/0xd0
>>> Sep 30 09:47:44 n8 kernel: [1702585.415567]
>>> [generic_shutdown_super+0x62/0x110] generic_shutdown_super+0x62/0x110
>>> Sep 30 09:47:44 n8 kernel: [1702585.415605]
>>> [fuse:kill_block_super+0xd/0x20] kill_block_super+0xd/0x20
>>> Sep 30 09:47:44 n8 kernel: [1702585.415641]
>>> [deactivate_super+0x6f/0x90] deactivate_super+0x6f/0x90
>>> Sep 30 09:47:44 n8 kernel: [1702585.415680] [sys_umount+0x6b/0x2e0]
>>> sys_umount+0x6b/0x2e0
>>> Sep 30 09:47:44 n8 kernel: [1702585.415720] [sys_newstat+0x27/0x50]
>>> sys_newstat+0x27/0x50
>>> Sep 30 09:47:44 n8 kernel: [1702585.415758] [__up_write+0x22/0x130]
>>> __up_write+0x22/0x130
>>> Sep 30 09:47:44 n8 kernel: [1702585.415801] [system_call+0x7e/0x83]
>>> system_call+0x7e/0x83
>>> Sep 30 09:47:44 n8 kernel: [1702585.415840]
>>> Sep 30 09:47:44 n8 kernel: [1702585.415865]
>>> Sep 30 09:47:44 n8 kernel: [1702585.415866] Code: 0f 0b eb fe 48 b8
>>> 00 09 00 00 01 00 00 00 48 85 05 9b d0 fc
>>> Sep 30 09:47:44 n8 kernel: [1702585.415999] RIP
>>> [ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0]
>>> :ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0
>>> Sep 30 09:47:44 n8 kernel: [1702585.416061] RSP <ffff8102f1d95be8>
>>> Sep 30 09:47:44 n8 kernel: [1702585.416496] ---[ end trace
>>> ecaa160501849e7b ]---
>>> Sep 30 09:47:44 n8 kernel: [1702585.416582] WARNING: at
>>> /build/buildd/linux-2.6.24/kernel/exit.c:916 do_exit()
>>> Sep 30 09:47:44 n8 kernel: [1702585.416690] Pid: 26626, comm: umount
>>> Tainted: G D 2.6.24-24-server #1
>>> Sep 30 09:47:44 n8 kernel: [1702585.416776]
>>> Sep 30 09:47:44 n8 kernel: [1702585.416776] Call Trace:
>>> Sep 30 09:47:44 n8 kernel: [1702585.416934] [do_exit+0x7ac/0x940]
>>> do_exit+0x7ac/0x940
>>> Sep 30 09:47:44 n8 kernel: [1702585.417025] [oops_end+0x34/0x60]
>>> oops_end+0x34/0x60
>>> Sep 30 09:47:44 n8 kernel: [1702585.417115] [die+0x52/0x70]
>>> die+0x52/0x70
>>> Sep 30 09:47:44 n8 kernel: [1702585.417200]
>>> [do_invalid_op+0x86/0xa0] do_invalid_op+0x86/0xa0
>>> Sep 30 09:47:44 n8 kernel: [1702585.417290]
>>> [ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0]
>>> :ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0
>>> Sep 30 09:47:44 n8 kernel: [1702585.417382]
>>> [try_to_wake_up+0x66/0x3c0] try_to_wake_up+0x66/0x3c0
>>> Sep 30 09:47:44 n8 kernel: [1702585.417473] [sg:printk+0x4e/0x150]
>>> printk+0x4e/0x60
>>> Sep 30 09:47:44 n8 kernel: [1702585.417562] [error_exit+0x0/0x51]
>>> error_exit+0x0/0x51
>>> Sep 30 09:47:44 n8 kernel: [1702585.417650]
>>> [flat_send_IPI_mask+0x0/0x70] flat_send_IPI_mask+0x0/0x70
>>> Sep 30 09:47:44 n8 kernel: [1702585.417747]
>>> [ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0]
>>> :ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0
>>> Sep 30 09:47:44 n8 kernel: [1702585.417842]
>>> [ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0]
>>> :ocfs2_dlm:dlm_empty_lockres+0xee0/0x17d0
>>> Sep 30 09:47:44 n8 kernel: [1702585.417939] [<ffffffff80254510>]
>>> autoremove_wake_function+0x0/0x30
>>> Sep 30 09:47:44 n8 kernel: [1702585.418031]
>>> [ocfs2_dlm:__dlm_lockres_unused+0x33/0x60]
>>> :ocfs2_dlm:__dlm_lockres_unused+0x33/0x60
>>> Sep 30 09:47:44 n8 kernel: [1702585.418126]
>>> [ocfs2_dlm:__dlm_lockres_calc_usage+0x4f/0x200]
>>> :ocfs2_dlm:__dlm_lockres_calc_usage+0x4f/0x200
>>> Sep 30 09:47:44 n8 kernel: [1702585.418237]
>>> [ocfs2_nodemanager:kref_get+0x3/0x1cf0] kref_get+0x3/0x40
>>> Sep 30 09:47:44 n8 kernel: [1702585.418328]
>>> [ocfs2_dlm:dlm_unregister_domain+0x127/0x780]
>>> :ocfs2_dlm:dlm_unregister_domain+0x127/0x780
>>> Sep 30 09:47:44 n8 kernel: [1702585.418433] [<ffffffff80236510>]
>>> default_wake_function+0x0/0x10
>>> Sep 30 09:47:44 n8 kernel: [1702585.418527] [<ffffffff88477cdb>]
>>> :ocfs2:ocfs2_dlm_shutdown+0x9b/0x170
>>> Sep 30 09:47:44 n8 kernel: [1702585.418619] [<ffffffff8849d883>]
>>> :ocfs2:ocfs2_dismount_volume+0x183/0x490
>>> Sep 30 09:47:44 n8 kernel: [1702585.418716] [<ffffffff8849df20>]
>>> :ocfs2:ocfs2_put_super+0x30/0xd0
>>> Sep 30 09:47:44 n8 kernel: [1702585.418797]
>>> [generic_shutdown_super+0x62/0x110] generic_shutdown_super+0x62/0x110
>>> Sep 30 09:47:44 n8 kernel: [1702585.418879]
>>> [fuse:kill_block_super+0xd/0x20] kill_block_super+0xd/0x20
>>> Sep 30 09:47:44 n8 kernel: [1702585.418957]
>>> [deactivate_super+0x6f/0x90] deactivate_super+0x6f/0x90
>>> Sep 30 09:47:44 n8 kernel: [1702585.419042] [sys_umount+0x6b/0x2e0]
>>> sys_umount+0x6b/0x2e0
>>> Sep 30 09:47:44 n8 kernel: [1702585.419124] [sys_newstat+0x27/0x50]
>>> sys_newstat+0x27/0x50
>>> Sep 30 09:47:44 n8 kernel: [1702585.419204] [__up_write+0x22/0x130]
>>> __up_write+0x22/0x130
>>> Sep 30 09:47:44 n8 kernel: [1702585.419286] [system_call+0x7e/0x83]
>>> system_call+0x7e/0x83
>>> Sep 30 09:47:44 n8 kernel: [1702585.419368]
>>>
>>
>> _______________________________________________
>> Ocfs2-users mailing list
>> Ocfs2-users at oss.oracle.com
>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>