[Ocfs2-users] Stopping O2CB failed / kernel panic on shutdown .... ???

Sunil Mushran Sunil.Mushran at oracle.com
Mon Nov 6 11:35:02 PST 2006


The shutdown ordering is incorrect. Shut down ocfs2 first, meaning all
ocfs2 volumes should be unmounted. Then stop o2cb, followed by
drbd, followed by the network.
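
In other words, the shutdown should run roughly like this (the
init-script paths below are assumptions based on a Debian-style layout,
as suggested by the /etc/rc6.d/K91ocfs2 script in your log; adjust the
names and K-numbers for your distro):

    # 1. unmount every ocfs2 volume first
    umount -a -t ocfs2

    # 2. then stop the o2cb cluster stack (this stops the heartbeat thread)
    /etc/init.d/o2cb stop

    # 3. then take drbd down
    /etc/init.d/drbd stop

    # 4. only after that deconfigure the network
    /etc/init.d/networking stop

The key point is that the o2hb heartbeat keeps writing to drbd0 until
o2cb is stopped; if drbd or the network goes away first, the heartbeat
write times out and the node fences itself, which is the panic you see
at the end of your log.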

Sébastien CRAMATTE wrote:
> Hi
>
> I've set up ocfs2 1.2.3 + drbd 0.8 and it seems to work, except that
> when I shut down I get the following.
> Note that I haven't set up heartbeat yet ...
>
> ------------[ cut here ]------------
> kernel BUG at /usr/src/ocfs2-1.2.3/fs/ocfs2/dlm/dlmmaster.c:1312!
> invalid opcode: 0000 [#1]
> SMP
> Modules linked in: drbd ocfs2_dlmfs md5 ipv6 ocfs2 ocfs2_dlm
> ocfs2_nodemanager configfs cn
> CPU:    0
> EIP:    0061:[<4db645e2>]    Not tainted VLI
> EFLAGS: 00010202   (2.6.16.19-xen #1)
> EIP is at dlm_do_master_request+0x63c/0x6b1 [ocfs2_dlm]
> eax: 00000043   ebx: 00000000   ecx: 00000000   edx: 00000000
> esi: fffffffd   edi: 46ce1baf   ebp: 00000001   esp: 46ce1b5c
> ds: 007b   es: 007b   ss: 0069
> Process umount (pid: 1513, threadinfo=46ce0000 task=47280030)
> Stack: <0>4db7dfc0 000005e9 00000000 4db7b085 0000051f 46ce1b84 4c715200 00000000
>        4015ac80 461dbe00 00000000 00001f00 00000000 3030304d 30303030 30303030
>        30303030 30303030 32373130 39333635 00613234 00000000 00000000 00000000
> Call Trace:
>  [<4015ac80>] cache_grow+0x162/0x1c9
>  [<4db5db8f>] __dlm_insert_lockres+0x6b/0x8e [ocfs2_dlm]
>  [<4db628fc>] dlm_get_lock_resource+0x38f/0xca8 [ocfs2_dlm]
>  [<4db6d90f>] dlmlock+0x2e5/0xd96 [ocfs2_dlm]
>  [<4d8bd6d7>] ocfs2_lock_create+0x12c/0x3ba [ocfs2]
>  [<4d8bbedd>] ocfs2_inode_ast_func+0x0/0x7ea [ocfs2]
>  [<4d8bc960>] ocfs2_inode_bast_func+0x0/0x198 [ocfs2]
>  [<4d8be337>] ocfs2_cluster_lock+0x8e2/0xa87 [ocfs2]
>  [<4013e2c5>] find_get_pages_tag+0x41/0xa5
>  [<401461fc>] pagevec_lookup_tag+0x36/0x40
>  [<40182f20>] mpage_writepages+0x15d/0x3db
>  [<4d8bd9bd>] ocfs2_status_completion_cb+0x0/0x12 [ocfs2]
>  [<4d8c00d7>] ocfs2_meta_lock_full+0x15e/0x698 [ocfs2]
>  [<4012fd4a>] autoremove_wake_function+0x0/0x57
>  [<401488d9>] kzalloc+0x25/0x55
>  [<401781cf>] igrab+0x39/0x3b
>  [<4d8f03e1>] ocfs2_get_system_file_inode+0x49/0x9f [ocfs2]
>  [<4d8d7990>] ocfs2_shutdown_local_alloc+0x11a/0x723 [ocfs2]
>  [<4d8ed666>] ocfs2_dismount_volume+0x4d/0x3a1 [ocfs2]
>  [<4013db91>] filemap_fdatawait+0x67/0x69
>  [<4013dbd0>] filemap_write_and_wait+0x3d/0x44
>  [<4d8ec61a>] ocfs2_put_super+0x3b/0xd2 [ocfs2]
>  [<40165053>] generic_shutdown_super+0x136/0x146
>  [<40165aa8>] kill_block_super+0x2d/0x49
>  [<40164e41>] deactivate_super+0x5a/0xa0
>  [<4017abb2>] sys_umount+0x3f/0x8e
>  [<401500a4>] sys_munmap+0x45/0x66
>  [<4017ac18>] sys_oldumount+0x17/0x1b
>  [<40104d15>] syscall_call+0x7/0xb
> Code: d2 a1 44 a8 85 4d 83 e0 01 89 d3 09 c3 74 1c f7 05 48 a8 85 4d 00
> 09 00 00 75 10 31 d2 a1 4c a8 85 4d 83 e0 01 89 d3 09 c3 74
> 0d <0f> 0b 20 05 c0 d2 b7 4d e9 24 fe ff ff b8 00 e0 ff ff 21 e0 8b
>  Badness in do_exit at kernel/exit.c:802
>  [<4011e72e>] do_exit+0x3cc/0x3d1
>  [<401056bf>] do_trap+0x0/0x108
>  [<40105a40>] do_invalid_op+0x0/0xc3
>  [<40105aee>] do_invalid_op+0xae/0xc3
>  [<4db645e2>] dlm_do_master_request+0x63c/0x6b1 [ocfs2_dlm]
>  [<4011c418>] release_console_sem+0x98/0xd6
>  [<4011c248>] vprintk+0x1a3/0x26a
>  [<401429d2>] get_page_from_freelist+0x8c/0xa6
>  [<40104ea7>] error_code+0x2b/0x30
>  [<4db645e2>] dlm_do_master_request+0x63c/0x6b1 [ocfs2_dlm]
>  [<4015ac80>] cache_grow+0x162/0x1c9
>  [<4db5db8f>] __dlm_insert_lockres+0x6b/0x8e [ocfs2_dlm]
>  [<4db628fc>] dlm_get_lock_resource+0x38f/0xca8 [ocfs2_dlm]
>  [<4db6d90f>] dlmlock+0x2e5/0xd96 [ocfs2_dlm]
>  [<4d8bd6d7>] ocfs2_lock_create+0x12c/0x3ba [ocfs2]
>  [<4d8bbedd>] ocfs2_inode_ast_func+0x0/0x7ea [ocfs2]
>  [<4d8bc960>] ocfs2_inode_bast_func+0x0/0x198 [ocfs2]
>  [<4d8be337>] ocfs2_cluster_lock+0x8e2/0xa87 [ocfs2]
>  [<4013e2c5>] find_get_pages_tag+0x41/0xa5
>  [<401461fc>] pagevec_lookup_tag+0x36/0x40
>  [<40182f20>] mpage_writepages+0x15d/0x3db
>  [<4d8bd9bd>] ocfs2_status_completion_cb+0x0/0x12 [ocfs2]
>  [<4d8c00d7>] ocfs2_meta_lock_full+0x15e/0x698 [ocfs2]
>  [<4012fd4a>] autoremove_wake_function+0x0/0x57
>  [<401488d9>] kzalloc+0x25/0x55
>  [<401781cf>] igrab+0x39/0x3b
>  [<4d8f03e1>] ocfs2_get_system_file_inode+0x49/0x9f [ocfs2]
>  [<4d8d7990>] ocfs2_shutdown_local_alloc+0x11a/0x723 [ocfs2]
>  [<4d8ed666>] ocfs2_dismount_volume+0x4d/0x3a1 [ocfs2]
>  [<4013db91>] filemap_fdatawait+0x67/0x69
>  [<4013dbd0>] filemap_write_and_wait+0x3d/0x44
>  [<4d8ec61a>] ocfs2_put_super+0x3b/0xd2 [ocfs2]
>  [<40165053>] generic_shutdown_super+0x136/0x146
>  [<40165aa8>] kill_block_super+0x2d/0x49
>  [<40164e41>] deactivate_super+0x5a/0xa0
>  [<4017abb2>] sys_umount+0x3f/0x8e
>  [<401500a4>] sys_munmap+0x45/0x66
>  [<4017ac18>] sys_oldumount+0x17/0x1b
>  [<40104d15>] syscall_call+0x7/0xb
> /etc/rc6.d/K91ocfs2: line 177:  1513 Segmentation fault      umount -a -t ocfs2 2>/dev/null
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> OK
> Stopping all DRBD resourcesFailure: (137) FailedToClaimMyself
> Command '/sbin/drbdsetup /dev/drbd0 down' terminated with exit code 10
> drbdsetup exited with code 10
> ERROR: Module drbd is in use
> .
> Sending all processes the TERM signal...done.
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> Sending all processes the KILL signal...done.
> Saving random seed...done.
> Unmounting remote and non-toplevel virtual filesystems...done.
> Deconfiguring network interfaces...(1415,0):o2hb_do_disk_heartbeat:963
> ocfs2_heartbeat: no configured nodes found!
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> done.
> Cleaning up ifupdown...done.
> Deactivating swap...(1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat:
> no configured nodes found!
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> (1415,0):o2hb_do_disk_heartbeat:963 ocfs2_heartbeat: no configured nodes
> found!
> (4,0):o2hb_write_timeout:269 ERROR: Heartbeat write timeout to device
> drbd0 after 12000 milliseconds
> Heartbeat thread (4) printing last 24 blocking operations (cur = 22):
> Heartbeat thread stuck at msleep, stuffing current time into that
> blocker (index 22)
> Index 23: took 16 ms to do waiting for read completion
> Index 0: took 0 ms to do bio alloc write
> Index 1: took 0 ms to do bio add page write
> Index 2: took 0 ms to do submit_bio for write
> Index 3: took 0 ms to do checking slots
> Index 4: took 14 ms to do waiting for write completion
> Index 5: took 1973 ms to do msleep
> Index 6: took 0 ms to do allocating bios for read
> Index 7: took 0 ms to do bio alloc read
> Index 8: took 0 ms to do bio add page read
> Index 9: took 0 ms to do submit_bio for read
> Index 10: took 10 ms to do waiting for read completion
> Index 11: took 0 ms to do bio alloc write
> Index 12: took 0 ms to do bio add page write
> Index 13: took 0 ms to do submit_bio for write
> Index 14: took 0 ms to do checking slots
> Index 15: took 13 ms to do waiting for write completion
> Index 16: took 1974 ms to do msleep
> Index 17: took 2004 ms to do msleep
> Index 18: took 2004 ms to do msleep
> Index 19: took 2004 ms to do msleep
> Index 20: took 2004 ms to do msleep
> Index 21: took 2004 ms to do msleep
> Index 22: took 0 ms to do msleep
> (4,0):o2hb_stop_all_regions:1908 ERROR: stopping heartbeat on all active
> regions.
> Kernel panic - not syncing: ocfs2 is very sorry to be fencing this
> system by panicing
>
>
