[Ocfs2-users] system crash

Sunil Mushran sunil.mushran at oracle.com
Thu Feb 11 10:50:24 PST 2010


It looks similar to bz#1202.
http://oss.oracle.com/bugzilla/show_bug.cgi?id=1202

File a bugzilla with Novell. They should be able to look at the
objdump and confirm it is the same issue.

If so, the fix is already in mainline.
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=71656fa6ec10473eb9b646c10a2173fdea2f83c9
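
The exception RIP in the trace below (dlm_add_lock_to_array+249, reached
via do_invalid_op) points at a BUG() firing while dlm_empty_lockres()
packs locks into a migration message during the unmount. Purely as an
illustration, here is a small userspace model of the kind of LVB (lock
value block) consistency check believed to be involved, and of the
"ignore locks on the blocked queue" behaviour the commit above is thought
to add. LVB_LEN, the queue constants and pack_lock_lvb() are invented
names for this sketch, not the actual fs/ocfs2/dlm code.

/*
 * Userspace model only -- NOT the real ocfs2 dlm code.  It sketches how a
 * stale LVB on a lock can trip a consistency check while locks are packed
 * for migration, and how skipping blocked-queue locks avoids that.
 */
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define LVB_LEN 64

enum queue { QUEUE_GRANTED, QUEUE_CONVERTING, QUEUE_BLOCKED };

static int lvb_is_empty(const unsigned char *lvb)
{
	for (int i = 0; i < LVB_LEN; i++)
		if (lvb[i])
			return 0;
	return 1;
}

/*
 * Pack one lock's LVB into the migration message buffer mres_lvb.
 * Returns 0 on success, -1 where the kernel would hit BUG(): two locks
 * that should carry the same LVB disagree.
 */
static int pack_lock_lvb(unsigned char *mres_lvb,
			 const unsigned char *lock_lvb,
			 enum queue queue)
{
	/* Locks on the blocked queue carry no meaningful LVB; ignoring
	 * them here models the behaviour the fix is believed to add. */
	if (queue == QUEUE_BLOCKED)
		return 0;

	if (!lvb_is_empty(mres_lvb) &&
	    memcmp(mres_lvb, lock_lvb, LVB_LEN) != 0)
		return -1;	/* kernel: mlog(ML_ERROR, ...); BUG(); */

	memcpy(mres_lvb, lock_lvb, LVB_LEN);
	return 0;
}

int main(void)
{
	unsigned char mres_lvb[LVB_LEN] = { 0 };
	unsigned char lvb_a[LVB_LEN] = { 0xaa };
	unsigned char lvb_b[LVB_LEN] = { 0xbb };

	assert(pack_lock_lvb(mres_lvb, lvb_a, QUEUE_GRANTED) == 0);
	/* A stale LVB on a blocked lock no longer trips the check... */
	assert(pack_lock_lvb(mres_lvb, lvb_b, QUEUE_BLOCKED) == 0);
	/* ...but a real mismatch between granted locks still would. */
	assert(pack_lock_lvb(mres_lvb, lvb_b, QUEUE_GRANTED) == -1);

	printf("LVB model: checks passed\n");
	return 0;
}

Build with cc -std=c99 lvb_model.c and run; the last call is the
mismatch case that would have taken the node down.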

Sunil

Charlie Sharkey wrote:
>
> We had a system crash while unmounting some volumes. It is a 6-node
> cluster; hostnames are n1 -> n6. Info below. Any ideas?
>
> Thanks in advance,
>
> charlie
>
> ocfs2-tools-1.4.0-0.5
>
> ocfs2console-1.4.0-0.5
>
> Linux sr2600-1 2.6.16.60-0.34-smp #1 SMP Fri Jan 16 14:59:01 UTC 2009 
> x86_64 x86_64 x86_64 GNU/Linux
>
> OCFS2 Node Manager 1.4.1-1-SLES Wed Jul 23 18:33:42 UTC 2008 (build 
> f922955d99ef972235bd0c1fc236c5ddbb368611)
>
> o2cb heartbeat: registered disk mode
>
> OCFS2 DLM 1.4.1-1-SLES Wed Jul 23 18:33:42 UTC 2008 (build
> f922955d99ef972235bd0c1fc236c5ddbb368611)
>
> OCFS2 DLMFS 1.4.1-1-SLES Wed Jul 23 18:33:42 UTC 2008 (build
> f922955d99ef972235bd0c1fc236c5ddbb368611)
>
> From /var/log/messages:
>
> Feb  2 15:04:05 n2 kernel: ocfs2_dlm: Node 3 leaves domain 
> 27169AFBA10C425B91D386F55FC37AF5
>
> Feb  2 15:04:05 n2 kernel: ocfs2_dlm: Nodes in domain 
> ("27169AFBA10C425B91D386F55FC37AF5"): 1 2 4
>
> Feb  2 15:04:05 n2 kernel: ocfs2_dlm: Node 4 leaves domain 
> 27169AFBA10C425B91D386F55FC37AF5
>
> Feb  2 15:04:05 n2 kernel: ocfs2_dlm: Nodes in domain 
> ("27169AFBA10C425B91D386F55FC37AF5"): 1 2
>
> Feb  2 15:04:05 n2 kernel: ocfs2: Unmounting device (253,8) on (node 1)
>
> Feb  2 15:04:05 n2 kernel: ocfs2_dlm: Node 5 leaves domain 
> 65EAD2E63E7549B69335BCAD58C5A476
>
> Feb  2 15:04:05 n2 kernel: ocfs2_dlm: Nodes in domain 
> ("65EAD2E63E7549B69335BCAD58C5A476"): 0 1 2 3 4
>
> Feb  2 15:04:06 n2 kernel: o2net: no longer connected to node n6 (num 
> 5) at 192.168.100.60:7777
>
> Feb  2 15:04:08 n2 kernel: ocfs2_dlm: Node 0 leaves domain 
> 65EAD2E63E7549B69335BCAD58C5A476
>
> Feb  2 15:04:08 n2 kernel: ocfs2_dlm: Nodes in domain 
> ("65EAD2E63E7549B69335BCAD58C5A476"): 1 2 3 4
>
> Feb  2 15:04:09 n2 kernel: o2net: no longer connected to node n1 (num 
> 0) at 192.168.100.10:7777
>
> Feb  2 15:04:09 n2 kernel: ocfs2: Unmounting device (253,9) on (node 1)
>
> Feb  2 15:04:13 n2 kernel: ocfs2: Unmounting device (253,6) on (node 1)
>
> Feb  2 15:04:18 n2 kernel: ocfs2: Unmounting device (253,7) on (node 1)
>
> Feb  2 15:07:22 n2 syslog-ng[5679]: syslog-ng version 1.6.8
> starting               <------------ system restart
>
> stack trace from core dump file:
>
> PID: 14669  TASK: ffff8101289cb7b0  CPU: 1   COMMAND: "umount"
>
>  #0 [ffff8101040ef810] machine_kexec at ffffffff8011c036
>
>  #1 [ffff8101040ef8e0] crash_kexec   at ffffffff80153ea9
>
>  #2 [ffff8101040ef9a0] __die         at ffffffff802ec510
>
>  #3 [ffff8101040ef9e0] die           at ffffffff8010c786
>
>  #4 [ffff8101040efa10] do_invalid_op at ffffffff8010cd2d
>
>  #5 [ffff8101040efad0] error_exit    at ffffffff8010bc91
>
>     [exception RIP: dlm_add_lock_to_array+249]
>
>     RIP: ffffffff8852750f  RSP: ffff8101040efb88  RFLAGS: 00010286
>
>     RAX: ffff810103897ac8  RBX: ffff810103e02ec0  RCX: 0000000000016a72
>
>     RDX: 0000000000000000  RSI: 0000000000000292  RDI: ffffffff8035a99c
>
>     RBP: ffff810104056000   R8: ffffffff8045a260   R9: 0000000000000001
>
>     R10: ffff810103897a80  R11: 0000000000000000  R12: ffff810104056080
>
>     R13: 0000000000000000  R14: 0000000000000001  R15: ffff810104056000
>
>     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
>
>  #6 [ffff8101040efb80] dlm_add_lock_to_array  at ffffffff8852750f
>
>  #7 [ffff8101040efba0] dlm_send_one_lockres   at ffffffff88527a55
>
>  #8 [ffff8101040efc80] dlm_empty_lockres      at ffffffff88520ae2
>
>  #9 [ffff8101040efdb0] ocfs2_dlm_shutdown     at ffffffff8856fc8a
>
> #10 [ffff8101040efdd0] ocfs2_dismount_volume  at ffffffff8859416c
>
> #11 [ffff8101040efe40] ocfs2_put_super        at ffffffff88594668
>
> #12 [ffff8101040efe50] generic_shutdown_super at ffffffff8018c305
>
> #13 [ffff8101040efe70] kill_block_super       at ffffffff8018c3d6
>
> #14 [ffff8101040efe90] deactivate_super       at ffffffff8018c4ac
>
> #15 [ffff8101040efeb0] sys_umount             at ffffffff801a1376
>
> #16 [ffff8101040eff80] system_call            at ffffffff8010adba
>
>     RIP: 00002b4ba5ba0987  RSP: 00007fff050e3990  RFLAGS: 00010246
>
>     RAX: 00000000000000a6  RBX: ffffffff8010adba  RCX: 00007fff050e4140
>
>     RDX: 000000000000006e  RSI: 0000000000000001  RDI: 00000000005101e0
>
>     RBP: 00000000005101c0   R8: 000000000000006f   R9: fefeff344d544b2e
>
>     R10: 0000000000000000  R11: 0000000000000206  R12: 0000000000510250
>
>     R13: 00000000005101e0  R14: 0000000000510200  R15: 00007fff050e42e0
>
>     ORIG_RAX: 00000000000000a6  CS: 0033  SS: 002b
>



