Hello,<br><br>Filed as bugzilla number 912.<br><br><div><span class="gmail_quote">On 8/28/07, <b class="gmail_sendername">Sunil Mushran</b> &lt;<a href="mailto:Sunil.Mushran@oracle.com">Sunil.Mushran@oracle.com</a>&gt; wrote:
</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Please file a bugzilla. It is very hard to track issues via email.<br>Attach the trace below. You should also see a corresponding
<br>message on one of the other nodes, specifically node 0. Add<br>that to the bugzilla too.<br><br>Daniel wrote:<br>&gt; Hello<br>&gt;<br>&gt; I&#39;m still having weekly panics on my system, but now I&#39;ve at least got
<br>&gt; something to report back from the netconsole.<br>&gt;<br>&gt; To summarize the system: 2x Dell 1950 connected to an EMC CX3-20 SAN.<br>&gt; Centos 5 x86_64 2.6.18-8.1.8.el5 #1 SMP.<br>&gt;<br>&gt; Tonight both servers locked up - both while idling afaik. But this
<br>&gt; time tilesrv2 reported the following via netconsole before it went dead.<br>&gt;<br>&gt; (4225,2):dlm_drop_lockres_ref:2289 ERROR: while dropping ref on<br>&gt; 359E1C1D38374654BC5E5896EB7D5187:M0000000000000009cb578f4d7803fc
<br>&gt; (master=0) got -22.<br>&gt; (4225,2):dlm_print_one_lock_resource:294 lockres:<br>&gt; M0000000000000009cb578f4d7803fc, owner=0, state=64<br>&gt; (4225,2):__dlm_print_one_lock_resource:309 lockres:<br>&gt; M0000000000000009cb578f4d7803fc, owner=0, state=64
<br>&gt; (4225,2):__dlm_print_one_lock_resource:311&nbsp;&nbsp; last used: 4354492857, on<br>&gt; purge list: yes<br>&gt; (4225,2):dlm_print_lockres_refmap:277&nbsp;&nbsp; refmap nodes: [ ], inflight=0<br>&gt; (4225,2):__dlm_print_one_lock_resource:313&nbsp;&nbsp; granted queue:
<br>&gt; (4225,2):__dlm_print_one_lock_resource:328&nbsp;&nbsp; converting queue:<br>&gt; (4225,2):__dlm_print_one_lock_resource:343&nbsp;&nbsp; blocked queue:<br>&gt; ----------- [cut here ] --------- [please bite here ] ---------<br>&gt; Kernel BUG at ...mushran/BUILD/ocfs2-
1.2.6/fs/ocfs2/dlm/dlmmaster.c:2291<br>&gt; invalid opcode: 0000 [1] SMP<br>&gt; last sysfs file: /devices/pci0000:00/0000:00:<br>&gt; 04.0/0000:0c:00.0/host1/rport-1:0-1/target1:0:1/1:0:1:4/vendor<br>&gt; CPU 2<br>&gt; Modules linked in: netconsole autofs4 hidp ocfs2(U) nfs lockd fscache
<br>&gt; nfs_acl rfcomm l2cap bluetooth ocfs2_dlmfs(U) ocfs2_dlm(U)<br>&gt; ocfs2_nodemanager(U) configfs sunrpc ipt_REJECT ip6t_REJECT xt_tcpudp<br>&gt; ip6table_filter ip6_tables x_tables dm_emc dm_round_robin dm_multipath
<br>&gt; video sbs i2c_ec i2c_core button battery asus_acpi acpi_memhotplug ac<br>&gt; ipv6 parport_pc lp parport joydev shpchp bnx2 sr_mod ide_cd serio_raw<br>&gt; cdrom sg pcspkr dm_snapshot dm_zero dm_mirror dm_mod usb_storage
<br>&gt; qla2xxx scsi_transport_fc megaraid_sas sd_mod scsi_mod ext3 jbd<br>&gt; ehci_hcd ohci_hcd<br>&gt; Pid: 4225, comm: dlm_thread Not tainted<br>&gt; 2.6.18-8.1.8.el5 #1<br>&gt;&nbsp;&nbsp;[&lt;ffffffff884d60d3&gt;] :ocfs2_dlm:dlm_drop_lockres_ref+0x1d3/0x1ec
<br>&gt; RDX: 00000000ffffffff RSI: 0000000000000000 RDI: ffffffff802da65c<br>&gt; R13: ffff81012d087000 R14: ffff8100435c5f60 R15: ffffffff8009b4f6<br>&gt; CR2: 000000001ec07000 CR3: 00000001289c9000 CR4: 00000000000006e0
<br>&gt;&nbsp;&nbsp;303030303030304d<br>&gt;&nbsp;&nbsp;0000000000000000 0000000000000000 ffff81001defe648<br>&gt; [&lt;ffffffff884e9031&gt;] :ocfs2_dlm:dlm_purge_lockres+0x175/0x34a<br>&gt;&nbsp;&nbsp;[&lt;ffffffff8009b6b9&gt;] autoremove_wake_function+0x0/0x2e
<br>&gt;&nbsp;&nbsp;[&lt;ffffffff884e93c2&gt;] :ocfs2_dlm:dlm_thread+0x0/0x579<br>&gt;&nbsp;&nbsp;[&lt;ffffffff80032189&gt;] kthread+0xfe/0x132<br>&gt;&nbsp;&nbsp;[&lt;ffffffff8005bfe5&gt;] child_rip+0xa/0x11<br>&gt;&nbsp;&nbsp;[&lt;ffffffff8009b4f6&gt;] keventd_create_kthread+0x0/0x61
<br>&gt;&nbsp;&nbsp;[&lt;ffffffff8005bfdb&gt;] child_rip+0x0/0x11<br>&gt; 0f d6 c2 83 d8 5c&nbsp;&nbsp;[&lt;ffffffff884d60d3&gt;]<br>&gt; :ocfs2_dlm:dlm_drop_lockres_ref+0x1d3/0x1ec<br>&gt; &lt;0&gt;Kernel panic - not syncing: Fatal exception
<br>&gt;<br>&gt; I&#39;d be happy to provide more info or open a bug-report. Just tell me<br>&gt; what you need. I hope this is a better report than last time :)<br>&gt;<br>&gt; Daniel<br><br></blockquote></div><br>