Sunil,<br><br>It has been happening every week or two. One of our production environments hit the issue again today; its logs are below. The kdump handler captured only a partial vmcore at the time of the crash: the file took longer to write than we had anticipated and was not finished before the fence agent cut power.<br>
<br>(10826,0):dlm_deref_lockres_handler:2363 ERROR: 15DE52931133472797A07E1747BC9364:M0000000000000010f0e3e6277279c9: node 1 trying to drop ref but it is already dropped!<br>(10826,0):dlm_print_one_lock_resource:461 lockres: M0000000000000010f0e3e6277279c9, owner=2, state=0<br>
(10826,0):__dlm_print_one_lock_resource:476 lockres: M0000000000000010f0e3e6277279c9, owner=2, state=0<br>(10826,0):__dlm_print_one_lock_resource:478   last used: 0, on purge list: no<br>(10826,0):dlm_print_lockres_refmap:444   refmap nodes: [<br>
], inflight=0<br>(10826,0):__dlm_print_one_lock_resource:480   granted queue:<br>(10826,0):__dlm_print_one_lock_resource:492     type=3, conv=-1, node=2, cookie=2:4061845, ast=(empty=y,pend=n), bast=(empty=y,pend=n)<br>(10826,0):__dlm_print_one_lock_resource:495   converting queue:<br>
(10826,0):__dlm_print_one_lock_resource:510   blocked queue:<br>(11672,7):dlm_drop_lockres_ref:2298 ERROR: while dropping ref on 15DE52931133472797A07E1747BC9364:M0000000000000011a3a0d2277333a8 (master=1) got -22.<br>(11672,7):dlm_print_one_lock_resource:461 lockres: M0000000000000011a3a0d2277333a8, owner=1, state=64<br>
(11672,7):__dlm_print_one_lock_resource:476 lockres: M0000000000000011a3a0d2277333a8, owner=1, state=64<br>(11672,7):__dlm_print_one_lock_resource:478   last used: 4683571329, on purge list: yes<br>(11672,7):dlm_print_lockres_refmap:444   refmap nodes: [<br>
], inflight=0<br>(11672,7):__dlm_print_one_lock_resource:480   granted queue:<br>(11672,7):__dlm_print_one_lock_resource:495   converting queue:<br>(11672,7):__dlm_print_one_lock_resource:510   blocked queue:<br>----------- [cut here ] --------- [please bite here ] ---------<br>
Kernel BUG at .../redhat/BUILD/ocfs2-1.2.9/fs/ocfs2/dlm/dlmmaster.c:2300<br>invalid opcode: 0000 [1]<br>SMP<br><br>last sysfs file: /devices/pci0000:00/0000:00:04.0/0000:17:00.0/0000:18:02.0/0000:22:00.2/0000:24:05.0/irq<br>
CPU 7<br><br>Modules linked in:<br>iptable_filter ip_tables x_tables nfsd exportfs auth_rpcgss netconsole autofs4 hidp ocfs2(U) nls_utf8 nls_iso8859_1 cifs nfs lockd fscache nfs_acl rfcomm l2cap bluetooth ocfs2_dlmfs(U) ocfs2_dlm(U) ocfs2_nodemanager(U) lock_dlm gfs2(U) dlm configfs sunrpc bonding ipv6 xfrm_nalgo crypto_api dm_round_robin dm_emc dm_multipath video sbs backlight i2c_ec i2c_core button battery asus_acpi acpi_memhotplug ac parport_pc lp parport joydev sg ata_piix libata ide_cd shpchp bnx2 cdrom pcspkr serio_raw dm_snapshot dm_zero dm_mirror dm_mod qla2xxx scsi_transport_fc cciss sd_mod scsi_mod ext3 jbd uhci_hcd ohci_hcd ehci_hcd<br>
<br>Pid: 11672, comm: dlm_thread Tainted: G      2.6.18-92.1.22.el5 #1<br>RIP: 0010:[&lt;ffffffff885b76a0&gt;]<br>[&lt;ffffffff885b76a0&gt;] :ocfs2_dlm:dlm_drop_lockres_ref+0x1dc/0x1f5<br>RSP: 0018:ffff81065a3c1de0  EFLAGS: 00010246<br>
RAX: ffff8102ba2f2a38 RBX: 0000000000000000 RCX: ffffffff802ee9a8<br>RDX: ffffffff802ee9a8 RSI: 0000000000000000 RDI: ffffffff802ee9a0<br>RBP: 000000000000001f R08: ffffffff802ee9a8 R09: 0000000000000046<br>R10: 0000000000000000 R11: 0000000000000280 R12: ffff8102ba2f2a00<br>
R13: ffff81102c7cec00 R14: ffff8103ebbaf520 R15: ffffffff8009dc54<br>FS:  0000000000000000(0000) GS:ffff81102fea7340(0000) knlGS:0000000000000000<br>CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b<br>CR2: 00002b05068eb0a0 CR3: 0000000642cb0000 CR4: 00000000000006e0<br>
Process dlm_thread (pid: 11672, threadinfo ffff81065a3c0000, task ffff81065db52040)<br>Stack:<br> 1f02000000000000<br> 303030303030304d<br> 3130303030303030<br> 3232643061336131<br><br> 0038613333333737<br> 0000000000000000<br>
last message repeated 2 times<br><br> 0000000000000000<br> ffffffea00100100<br> ffff8102ba2f2a48<br> ffff8102ba2f2a00<br><br>Call Trace:<br> [&lt;ffffffff885ca733&gt;] :ocfs2_dlm:dlm_purge_lockres+0x175/0x34f<br> [&lt;ffffffff885caba0&gt;] :ocfs2_dlm:dlm_thread+0xd7/0x579<br>
 [&lt;ffffffff8009de6c&gt;] autoremove_wake_function+0x0/0x2e<br> [&lt;ffffffff885caac9&gt;] :ocfs2_dlm:dlm_thread+0x0/0x579<br> [&lt;ffffffff80032569&gt;] kthread+0xfe/0x132<br> [&lt;ffffffff8005dfb1&gt;] child_rip+0xa/0x11<br>
 [&lt;ffffffff8009dc54&gt;] keventd_create_kthread+0x0/0xc4<br> [&lt;ffffffff8003246b&gt;] kthread+0x0/0x132<br> [&lt;ffffffff8005dfa7&gt;] child_rip+0x0/0x11<br><br>Code:<br>0f 0b 68 53 e0 5c 88<br><br>Thanks,<br>Ward<br>
<br><div class="gmail_quote">On Mon, Mar 9, 2009 at 9:29 PM, Sunil Mushran <span dir="ltr">&lt;<a href="mailto:sunil.mushran@oracle.com">sunil.mushran@oracle.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Known issue. We have a potential fix for it. It is in testing.<br>
<br>
How often do you hit this?<br>
<div><div></div><div class="h5"><br>
On Mon, Mar 09, 2009 at 06:52:57PM -0400, Ward Fenton wrote:<br>
&gt;    We have been experiencing unplanned outages on a subset of the<br>
&gt;    clustered systems we have deployed to support SAP. The following<br>
&gt;    captured information came from one of three ocfs2 clusters which handle<br>
&gt;    SAP SEM/BW functionality. Each of those clusters has experienced<br>
&gt;    multiple kernel panics, reported as dlm_drop_lockres_ref and<br>
&gt;    dlm_deref_lockres_handler errors.<br>
</div></div></blockquote></div><br>