<div dir="ltr"><div>Yet another panic today:</div>
<div> </div>
<div> </div>
<div><font size="1">Oct 8 12:36:00 n9 kernel: [79230.175890] Unable to handle kernel NULL pointer dereference at 0000000000000258 RIP: <br>Oct 8 12:36:00 n9 kernel: [79230.175917] [<ffffffff88473a7e>] :ocfs2:ocfs2_get_dentry_osb+0xe/0x20<br>
Oct 8 12:36:00 n9 kernel: [79230.176023] PGD 3d08c5067 PUD 331112067 PMD 0 <br>Oct 8 12:36:00 n9 kernel: [79230.176059] Oops: 0000 [1] SMP <br>Oct 8 12:36:00 n9 kernel: [79230.176091] CPU 3 <br>Oct 8 12:36:00 n9 kernel: [79230.176117] Modules linked in: nfs lockd nfs_acl sunrpc ocfs2 crc32c libcrc32c ipmi_devintf ipmi_si ipmi_msghandler ocfs2_dlmfs ocfs2_dlm ocfs2_nodemanager configfs iptabl<br>
e_filter ip_tables x_tables xfs ipv6 ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi scsi_transport_iscsi parport_pc lp parport loop i2c_piix4 dcdbas i2c_core psmouse button<br> shpchp pci_hotplug k8temp serio_raw pcspkr evdev ext3 jbd mbcache sr_mod cdrom sg sd_mod pata_serverworks usbhid hid ata_generic tg3 ehci_hcd pata_acpi sata_svw ohci_hcd libata scsi_mod usbcore therma<br>
l processor fan fbcon tileblit font bitblit softcursor fuse<br>Oct 8 12:36:00 n9 kernel: [79230.176537] Pid: 4915, comm: o2net Not tainted 2.6.24-24-server #1<br>Oct 8 12:36:00 n9 kernel: [79230.176571] RIP: 0010:[<ffffffff88473a7e>] [<ffffffff88473a7e>] :ocfs2:ocfs2_get_dentry_osb+0xe/0x20<br>
Oct 8 12:36:00 n9 kernel: [79230.176636] RSP: 0000:ffff8104119b3ca8 EFLAGS: 00010282<br>Oct 8 12:36:00 n9 kernel: [79230.176667] RAX: 0000000000000000 RBX: ffff8103def84018 RCX: 0000000000000005<br>Oct 8 12:36:00 n9 kernel: [79230.176703] RDX: ffff8103def83100 RSI: 0000000000000005 RDI: ffff8103def84018<br>
Oct 8 12:36:00 n9 kernel: [79230.176738] RBP: ffff8103def84400 R08: ffff8103def84400 R09: ffff8103dee43a00<br>Oct 8 12:36:00 n9 kernel: [79230.176774] R10: 000000000000004e R11: ffffffff8847b580 R12: 0900000000007aa4<br>
Oct 8 12:36:00 n9 kernel: [79230.176809] R13: 0000000000000005 R14: 0000000000000000 R15: 000000000000001f<br>Oct 8 12:36:00 n9 kernel: [79230.176845] FS: 00002ad989b79670(0000) GS:ffff810416d4ac80(0000) knlGS:00000000f5420b90<br>
Oct 8 12:36:00 n9 kernel: [79230.176899] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b<br>Oct 8 12:36:00 n9 kernel: [79230.176931] CR2: 0000000000000258 CR3: 0000000370517000 CR4: 00000000000006e0<br>Oct 8 12:36:00 n9 kernel: [79230.176966] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000<br>
Oct 8 12:36:00 n9 kernel: [79230.177002] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400<br>Oct 8 12:36:00 n9 kernel: [79230.177037] Process o2net (pid: 4915, threadinfo ffff8104119b2000, task ffff8104115247f0)<br>
Oct 8 12:36:00 n9 kernel: [79230.177092] Stack: ffffffff8847b5a6 ffff810411440400 00000000161974a2 ffff8104114c1028<br>Oct 8 12:36:00 n9 kernel: [79230.177155] 0000000000000000 ffff8103def84400 0900000000007aa4 ffff8104114c1018<br>
Oct 8 12:36:00 n9 kernel: [79230.177215] 0000000000000000 000000000000001f ffffffff8840bef4 000000000000012c<br>Oct 8 12:36:00 n9 kernel: [79230.177256] Call Trace:<br>Oct 8 12:36:00 n9 kernel: [79230.177312] [<ffffffff8847b5a6>] :ocfs2:ocfs2_blocking_ast+0x26/0x310<br>
Oct 8 12:36:00 n9 kernel: [79230.177366] [ocfs2_dlm:dlm_proxy_ast_handler+0x824/0x830] :ocfs2_dlm:dlm_proxy_ast_handler+0x824/0x830<br>Oct 8 12:36:00 n9 kernel: [79230.177427] [ocfs2_nodemanager:do_gettimeofday+0x2f/0x2fb90] do_gettimeofday+0x2f/0xc0<br>
Oct 8 12:36:00 n9 kernel: [79230.177481] [ocfs2_nodemanager:o2net_process_message+0x4cc/0x5b0] :ocfs2_nodemanager:o2net_process_message+0x4cc/0x5b0<br>Oct 8 12:36:00 n9 kernel: [79230.177540] [__dequeue_entity+0x3d/0x50] __dequeue_entity+0x3d/0x50<br>
Oct 8 12:36:00 n9 kernel: [79230.177580] [ocfs2_nodemanager:o2net_recv_tcp_msg+0x65/0x80] :ocfs2_nodemanager:o2net_recv_tcp_msg+0x65/0x80<br>Oct 8 12:36:00 n9 kernel: [79230.177643] [ocfs2_nodemanager:o2net_rx_until_empty+0x38b/0x900] :ocfs2_nodemanager:o2net_rx_until_empty+0x38b/0x900<br>
Oct 8 12:36:00 n9 kernel: [79230.177707] [ocfs2_nodemanager:o2net_rx_until_empty+0x0/0x900] :ocfs2_nodemanager:o2net_rx_until_empty+0x0/0x900<br>Oct 8 12:36:00 n9 kernel: [79230.177765] [run_workqueue+0xcc/0x170] run_workqueue+0xcc/0x170<br>
Oct 8 12:36:00 n9 kernel: [79230.177799] [worker_thread+0x0/0x110] worker_thread+0x0/0x110<br>Oct 8 12:36:00 n9 kernel: [79230.177832] [worker_thread+0x0/0x110] worker_thread+0x0/0x110<br>Oct 8 12:36:00 n9 kernel: [79230.177865] [worker_thread+0xa3/0x110] worker_thread+0xa3/0x110<br>
Oct 8 12:36:00 n9 kernel: [79230.177899] [<ffffffff80254510>] autoremove_wake_function+0x0/0x30<br>Oct 8 12:36:00 n9 kernel: [79230.177935] [worker_thread+0x0/0x110] worker_thread+0x0/0x110<br>Oct 8 12:36:00 n9 kernel: [79230.177969] [worker_thread+0x0/0x110] worker_thread+0x0/0x110<br>
Oct 8 12:36:00 n9 kernel: [79230.178001] [kthread+0x4b/0x80] kthread+0x4b/0x80<br>Oct 8 12:36:00 n9 kernel: [79230.178036] [child_rip+0xa/0x12] child_rip+0xa/0x12<br>Oct 8 12:36:00 n9 kernel: [79230.178073] [kthread+0x0/0x80] kthread+0x0/0x80<br>
Oct 8 12:36:00 n9 kernel: [79230.178104] [child_rip+0x0/0x12] child_rip+0x0/0x12<br>Oct 8 12:36:00 n9 kernel: [79230.179971] <br>Oct 8 12:36:00 n9 kernel: [79230.179993] <br>Oct 8 12:36:00 n9 kernel: [79230.179993] Code: 48 8b 80 58 02 00 00 c3 66 2e 0f 1f 84 00 00 00 00 00 8b 47 <br>
Oct 8 12:36:00 n9 kernel: [79230.180111] RIP [<ffffffff88473a7e>] :ocfs2:ocfs2_get_dentry_osb+0xe/0x20<br>Oct 8 12:36:00 n9 kernel: [79230.180156] RSP <ffff8104119b3ca8><br>Oct 8 12:36:00 n9 kernel: [79230.180183] CR2: 0000000000000258<br>
Oct 8 12:36:00 n9 kernel: [79230.180566] ---[ end trace ae9a4fee19ded66d ]---</font></div>
<div> </div>
<div><br><br> </div>
<div class="gmail_quote">On Wed, Oct 7, 2009 at 8:31 PM, Sunil Mushran <span dir="ltr"><<a href="mailto:sunil.mushran@oracle.com">sunil.mushran@oracle.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">It could be the stale inode info was propagated by the nfs node<br>to the oopsing node via the lvb. But I am not sure about that.<br>
<br>In any event, applying the fix would be a step forward. The fix<br>has been in mainline for quite some time now.<br><br>Laurence Mayer wrote:<br>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">
<div class="im">Nope, the node that crashed is not the NFS server.<br> How should I proceed?<br> What do you suggest?<br> Could this happen again?<br><br></div>
<div>
<div></div>
<div class="h5">On Wed, Oct 7, 2009 at 8:16 PM, Sunil Mushran <<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a> <mailto:<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a>>> wrote:<br>
<br> And does the node exporting the volume encounter the oops?<br><br> If so, the likeliest candidate would be:<br> <a href="http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=6ca497a83e592d64e050c4d04b6dedb8c915f39a" target="_blank">http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=6ca497a83e592d64e050c4d04b6dedb8c915f39a</a><br>
<br> If it is on another node, I am currently unsure whether a nfs<br> export on one node could cause this to occur on another. Need more<br> coffee.<br><br> The problem in short is due to how nfs bypasses the normal fs lookup<br>
to access files. It uses the file handle to directly access the inode,<br> bypassing the locking. Normally that is not a problem. The race window<br> is if the file is deleted (on any node in the cluster) and nfs<br>
reads that<br> inode without the lock. In the oops we see the disk generation is<br> greater<br> than the in-memory inode generation. That means the inode was<br> deleted and<br> reused. The fix closes the race window.<br>
<br> Sunil<br><br> Laurence Mayer wrote:<br><br> Yes.<br> We have set up a 10-node cluster, with one of the nodes exporting<br> NFS to the workstations.<br> Please expand on your answer.<br> Thanks<br>
Laurence<br><br><br> On Wed, Oct 7, 2009 at 7:12 PM, Sunil Mushran<br> <<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a> <mailto:<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a>><br>
<mailto:<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a><br> <mailto:<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a>>>> wrote:<br>
<br> Are you exporting this volume via nfs? We fixed a small<br> race (in<br> the nfs<br> access path) that could lead to this oops.<br><br> Laurence Mayer wrote:<br><br> Hi again,<br>
OS: Ubuntu 8.04 x64<br> Kern: Linux n1 2.6.24-24-server #1 SMP Tue Jul 7<br> 19:39:36 UTC<br> 2009 x86_64 GNU/Linux<br> 10 Node Cluster<br> OCFS2 Version: 1.3.9-0ubuntu1<br>
I received this panic on the 5th of Oct; I cannot work<br> out why<br> this has started to happen.<br> Please can you provide direction.<br> Let me know if you require any further details or<br>
information.<br> Oct 5 10:21:22 n1 kernel: [1006473.993681]<br> (1387,3):ocfs2_meta_lock_update:1675 ERROR: bug expression:<br> inode->i_generation != le32_to_cpu(fe->i_generation)<br>
Oct 5 10:21:22 n1 kernel: [1006473.993756]<br> (1387,3):ocfs2_meta_lock_update:1675 ERROR: Invalid dinode<br> 3064741 disk generation: 1309441612 inode->i_generation: 13<br> 09441501<br>
Oct 5 10:21:22 n1 kernel: [1006473.993865]<br> ------------[ cut<br> here ]------------<br> Oct 5 10:21:22 n1 kernel: [1006473.993896] kernel BUG at<br> /build/buildd/linux-2.6.24/fs/ocfs2/dlmglue.c:1675!<br>
Oct 5 10:21:22 n1 kernel: [1006473.993949] invalid opcode:<br> 0000 [3] SMP<br> Oct 5 10:21:22 n1 kernel: [1006473.993982] CPU 3<br> Oct 5 10:21:22 n1 kernel: [1006473.994008] Modules<br>
linked in:<br> ocfs2 crc32c libcrc32c nfsd auth_rpcgss exportfs<br> ipmi_devintf<br> ipmi_si ipmi_msghandler ipv6 ocfs2_dlmfs ocfs2_dlm<br> ocfs2_nodemanager configfs iptable_filter ip_tables<br>
x_tables<br> xfs ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core<br> ib_addr<br> iscsi_tcp libiscsi scsi_transport_iscsi nfs lockd nfs_acl<br> sunrpc parport_pc lp parport loop serio_raw psmouse<br>
i2c_piix4<br> i2c_core dcdbas evdev button k8temp shpchp pci_hotplug<br> pcspkr<br> ext3 jbd mbcache sg sr_mod cdrom sd_mod ata_generic<br> pata_acpi<br> usbhid hid ehci_hcd tg3 sata_svw pata_serverworks ohci_hcd<br>
libata scsi_mod usbcore thermal processor fan fbcon<br> tileblit<br> font bitblit softcursor fuse<br> Oct 5 10:21:22 n1 kernel: [1006473.994445] Pid: 1387,<br> comm: R<br>
Tainted: G D 2.6.24-24-server #1<br> Oct 5 10:21:22 n1 kernel: [1006473.994479] RIP:<br> 0010:[<ffffffff8856c404>] [<ffffffff8856c404>]<br> :ocfs2:ocfs2_meta_lock_full+0x6a4/0xec0<br>
Oct 5 10:21:22 n1 kernel: [1006473.994558] RSP:<br> 0018:ffff8101238f9d58 EFLAGS: 00010296<br> Oct 5 10:21:22 n1 kernel: [1006473.994590] RAX:<br> 0000000000000093 RBX: ffff8102eaf03000 RCX:<br>
00000000ffffffff<br> Oct 5 10:21:22 n1 kernel: [1006473.994642] RDX:<br> 00000000ffffffff RSI: 0000000000000000 RDI:<br> ffffffff8058ffa4<br> Oct 5 10:21:22 n1 kernel: [1006473.994694] RBP:<br>
0000000100080000 R08: 0000000000000000 R09:<br> 00000000ffffffff<br> Oct 5 10:21:22 n1 kernel: [1006473.994746] R10:<br> 0000000000000000 R11: 0000000000000000 R12:<br> ffff81012599ee00<br>
Oct 5 10:21:22 n1 kernel: [1006473.994799] R13:<br> ffff81012599ef08 R14: ffff81012599f2b8 R15:<br> ffff81012599ef08<br> Oct 5 10:21:22 n1 kernel: [1006473.994851] FS:<br> 00002b3802fed670(0000) GS:ffff810418022c80(0000)<br>
knlGS:00000000f546bb90<br> Oct 5 10:21:22 n1 kernel: [1006473.994906] CS: 0010<br> DS: 0000<br> ES: 0000 CR0: 000000008005003b<br> Oct 5 10:21:22 n1 kernel: [1006473.994938] CR2:<br>
00007f5db5542000 CR3: 0000000167ddf000 CR4:<br> 00000000000006e0<br> Oct 5 10:21:22 n1 kernel: [1006473.994990] DR0:<br> 0000000000000000 DR1: 0000000000000000 DR2:<br> 0000000000000000<br>
Oct 5 10:21:22 n1 kernel: [1006473.995042] DR3:<br> 0000000000000000 DR6: 00000000ffff0ff0 DR7:<br> 0000000000000400<br> Oct 5 10:21:22 n1 kernel: [1006473.995095] Process R (pid:<br>
1387, threadinfo ffff8101238f8000, task ffff8104110cc000)<br> Oct 5 10:21:22 n1 kernel: [1006473.995148] Stack:<br> 000000004e0c7e4c ffff81044e0c7ddd ffff8101a3b4d2b8<br> 00000000802c34c0<br>
Oct 5 10:21:22 n1 kernel: [1006473.995212]<br> 0000000000000000<br> 0000000100000000 ffffffff80680c00 00000000804715e2<br> Oct 5 10:21:22 n1 kernel: [1006473.995272]<br> 0000000100000000<br>
ffff8101238f9e48 ffff810245558b80 ffff81031e358680<br> Oct 5 10:21:22 n1 kernel: [1006473.995313] Call Trace:<br> Oct 5 10:21:22 n1 kernel: [1006473.995380]<br> [<ffffffff8857d03f>]<br>
:ocfs2:ocfs2_inode_revalidate+0x5f/0x290<br> Oct 5 10:21:22 n1 kernel: [1006473.995427]<br> [<ffffffff88577fe6>] :ocfs2:ocfs2_getattr+0x56/0x1c0<br> Oct 5 10:21:22 n1 kernel: [1006473.995470]<br>
[vfs_stat_fd+0x46/0x80] vfs_stat_fd+0x46/0x80<br> Oct 5 10:21:22 n1 kernel: [1006473.995514]<br> [<ffffffff88569634>] :ocfs2:ocfs2_meta_unlock+0x1b4/0x210<br> Oct 5 10:21:22 n1 kernel: [1006473.995553]<br>
[filldir+0x0/0xf0] filldir+0x0/0xf0<br> Oct 5 10:21:22 n1 kernel: [1006473.995594]<br> [<ffffffff8856799e>] :ocfs2:ocfs2_readdir+0xce/0x230<br> Oct 5 10:21:22 n1 kernel: [1006473.995631]<br>
[sys_newstat+0x27/0x50] sys_newstat+0x27/0x50<br> Oct 5 10:21:22 n1 kernel: [1006473.995664]<br> [vfs_readdir+0xa5/0xd0] vfs_readdir+0xa5/0xd0<br> Oct 5 10:21:22 n1 kernel: [1006473.995699]<br>
[sys_getdents+0xcf/0xe0] sys_getdents+0xcf/0xe0<br> Oct 5 10:21:22 n1 kernel: [1006473.997568]<br> [system_call+0x7e/0x83] system_call+0x7e/0x83<br> Oct 5 10:21:22 n1 kernel: [1006473.997605]<br>
Oct 5 10:21:22 n1 kernel: [1006473.997627]<br> Oct 5 10:21:22 n1 kernel: [1006473.997628] Code: 0f 0b<br> eb fe<br> 83 fd fe 0f 84 73 fc ff ff 81 fd 00 fe ff ff 0f<br> Oct 5 10:21:22 n1 kernel: [1006473.997745] RIP<br>
[<ffffffff8856c404>]<br> :ocfs2:ocfs2_meta_lock_full+0x6a4/0xec0<br> Oct 5 10:21:22 n1 kernel: [1006473.997808] RSP<br> <ffff8101238f9d58><br> Thanks<br>
Laurence<br> ------------------------------------------------------------------------<br><br> _______________________________________________<br> Ocfs2-users mailing list<br>
<a href="mailto:Ocfs2-users@oss.oracle.com" target="_blank">Ocfs2-users@oss.oracle.com</a><br> <mailto:<a href="mailto:Ocfs2-users@oss.oracle.com" target="_blank">Ocfs2-users@oss.oracle.com</a>><br>
<mailto:<a href="mailto:Ocfs2-users@oss.oracle.com" target="_blank">Ocfs2-users@oss.oracle.com</a><br> <mailto:<a href="mailto:Ocfs2-users@oss.oracle.com" target="_blank">Ocfs2-users@oss.oracle.com</a>>><br>
<br> <a href="http://oss.oracle.com/mailman/listinfo/ocfs2-users" target="_blank">http://oss.oracle.com/mailman/listinfo/ocfs2-users</a><br><br><br><br><br><br></div></div></blockquote><br></blockquote></div>
<br></div>