<div>The fallocate() oops is probably the same one that is fixed by this patch.</div><a href="https://oss.oracle.com/git/?p=smushran/linux-2.6.git;a=commit;h=a2118b301104a24381b414bc93371d666fe8d43a">https://oss.oracle.com/git/?p=smushran/linux-2.6.git;a=commit;h=a2118b301104a24381b414bc93371d666fe8d43a</a><div>
<br></div><div>It is in the list of patches that are ready to be pushed:</div><div><a href="https://oss.oracle.com/git/?p=smushran/linux-2.6.git;a=shortlog;h=mw-3.4-mar15">https://oss.oracle.com/git/?p=smushran/linux-2.6.git;a=shortlog;h=mw-3.4-mar15</a></div>
<div><br><div class="gmail_quote">On Mon, Jul 30, 2012 at 12:53 AM, Joel Becker <span dir="ltr"><<a href="mailto:jlbec@evilplan.org" target="_blank">jlbec@evilplan.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5">On Mon, Jul 30, 2012 at 09:45:14AM +0200, Vincent ETIENNE wrote:<br>
><br>
> Hi,<br>
><br>
> On 30/07/2012 08:30, Joel Becker wrote:<br>
> > On Sat, Jul 28, 2012 at 12:18:30AM +0200, Vincent ETIENNE wrote:<br>
> >> Hello<br>
> >><br>
> >> I get this on the first write made (by deliver sending mail to announce<br>
> >> the restart of services).<br>
> >> The home partition (the one receiving the mail) is an ocfs2 filesystem<br>
> >> created on a drbd block device in primary/primary mode.<br>
> >> These drbd devices are based on LVM.<br>
> >><br>
> >> The system is running Linux 3.5.0; the symptom is identical with Linux<br>
> >> 3.3 and 3.2, but it works with a Linux 3.0 kernel.<br>
> >><br>
> >> This is reproduced on two machines (so different hardware is involved:<br>
> >> software md RAID on SATA on this one, an Areca hardware RAID card on the<br>
> >> second), but these two machines are the ones sharing this partition (so<br>
> >> they share the same data).<br>
> > Hmm. Any chance you can bisect this further?<br>
><br>
> I will try to. It will take a few days, as the server is in production<br>
> (but used as backup, so...)<br>
><br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.169213] ------------[ cut here<br>
> >> ]------------<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.169261] kernel BUG at<br>
> >> fs/buffer.c:2886!<br>
> > This is:<br>
> ><br>
> > BUG_ON(!buffer_mapped(bh));<br>
> ><br>
> > in submit_bh().<br>
> ><br>
> ><br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] Call Trace:<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff81327546>] ?<br>
> >> ocfs2_read_blocks+0x176/0x6c0<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff8114e541>] ?<br>
> >> T.1552+0x91/0x2b0<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff81346ad0>] ?<br>
> >> ocfs2_find_actor+0x120/0x120<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff813464f7>] ?<br>
> >> ocfs2_read_inode_block_full+0x37/0x60<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff813964ff>] ?<br>
> >> ocfs2_fast_symlink_readpage+0x2f/0x160<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff81111585>] ?<br>
> >> do_read_cache_page+0x85/0x180<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff813964d0>] ?<br>
> >> ocfs2_fill_super+0x2500/0x2500<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff811116d9>] ?<br>
> >> read_cache_page+0x9/0x20<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff8115c705>] ?<br>
> >> page_getlink+0x25/0x80<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff8115c77b>] ?<br>
> >> page_follow_link_light+0x1b/0x30<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff8116099b>] ?<br>
> >> path_lookupat+0x38b/0x720<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff81160d5c>] ?<br>
> >> do_path_lookup+0x2c/0xd0<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff81346f31>] ?<br>
> >> ocfs2_inode_revalidate+0x71/0x160<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff81161c0c>] ?<br>
> >> user_path_at_empty+0x5c/0xb0<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff8106714a>] ?<br>
> >> do_page_fault+0x1aa/0x3c0<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff81156f2d>] ?<br>
> >> cp_new_stat+0x10d/0x120<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff81157021>] ?<br>
> >> vfs_fstatat+0x41/0x80<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff8115715f>] ?<br>
> >> sys_newstat+0x1f/0x50<br>
> >> Jul 27 23:41:41 jupiter2 kernel: [ 351.170003] [<ffffffff817ecee2>] ?<br>
> >> system_call_fastpath+0x16/0x1b<br>
> > This stack trace is from 3.5, because of the location of the<br>
> > BUG. The call path in the trace suggests the code added by Al's ea022d,<br>
> > but you say it breaks in 3.2 and 3.3 as well. Can you give me a trace<br>
> > from 3.2?<br>
><br>
> For a 3.2 kernel I get this stack trace. It is different from the 3.5<br>
> trace but occurs at exactly the same moment, and for the same reasons.<br>
> It seems less immediate than with 3.5, but that is more a subjective<br>
> impression than something based on fact (it takes a few seconds after<br>
> deliver is started to hit the bug).<br>
<br>
</div></div>Totally different stack trace. Not in symlink code, but instead in<br>
fallocate. Weird. I wonder if you are hitting two things. Bisection<br>
will definitely help.<br>
<br>
Joel<br>
<div><div class="h5"><br>
> [ 716.402833] o2dlm: Joining domain B43153ED20B942E291251F2C138ADA9E (<br>
> 0 1 ) 2 nodes<br>
> [ 716.501511] ocfs2: Mounting device (147,2) on (node 1, slot 0) with<br>
> ordered data mode.<br>
> [ 716.505744] mount.ocfs2 used greatest stack depth: 2936 bytes left<br>
> [ 727.133743] deliver used greatest stack depth: 2632 bytes left<br>
> [ 764.167029] deliver used greatest stack depth: 1896 bytes left<br>
> [ 764.778872] BUG: unable to handle kernel NULL pointer dereference at<br>
> 0000000000000038<br>
> [ 764.778897] IP: [<ffffffff8133c51a>]<br>
> __ocfs2_change_file_space+0x75a/0x1690<br>
> [ 764.778922] PGD 62697067 PUD 67a81067 PMD 0<br>
> [ 764.778939] Oops: 0000 [#1] SMP<br>
> [ 764.778953] CPU 0<br>
> [ 764.778959] Modules linked in: drbd lru_cache ipv6 [last unloaded: drbd]<br>
> [ 764.778986]<br>
> [ 764.778993] Pid: 5909, comm: deliver Not tainted 3.2.12-gentoo #2 HP<br>
> ProLiant ML150 G3/ML150 G3<br>
> [ 764.779017] RIP: 0010:[<ffffffff8133c51a>] [<ffffffff8133c51a>]<br>
> __ocfs2_change_file_space+0x75a/0x1690<br>
> [ 764.779041] RSP: 0018:ffff880067b2dd98 EFLAGS: 00010246<br>
> [ 764.779053] RAX: 0000000000000000 RBX: ffff880067f82000 RCX:<br>
> ffff880063d11000<br>
> [ 764.779069] RDX: 0000000000000000 RSI: 0000000000000001 RDI:<br>
> ffff88007ae83288<br>
> [ 764.779085] RBP: ffff880055d1f138 R08: 0010000000000000 R09:<br>
> ffff880063d11000<br>
> [ 764.779100] R10: 0000000000000000 R11: 0000000000000000 R12:<br>
> ffff88007ae83288<br>
> [ 764.779115] R13: 0000000000000000 R14: 0000000000000000 R15:<br>
> 00000000000000df<br>
> [ 764.779132] FS: 00007f1e40eb5700(0000) GS:ffff88007fc00000(0000)<br>
> knlGS:0000000000000000<br>
> [ 764.779149] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b<br>
> [ 764.779219] CR2: 0000000000000038 CR3: 0000000067ab5000 CR4:<br>
> 00000000000006f0<br>
> [ 764.779291] DR0: 0000000000000000 DR1: 0000000000000000 DR2:<br>
> 0000000000000000<br>
> [ 764.779364] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:<br>
> 0000000000000400<br>
> [ 764.779436] Process deliver (pid: 5909, threadinfo ffff880067b2c000,<br>
> task ffff88007bedbc00)<br>
> [ 764.779569] Stack:<br>
> [ 764.779634] ffffea0001647840 ffffffff8112983f 0000000000000000<br>
> ffff880000000000<br>
> [ 764.779768] 00000000000de000 ffffffff81333f35 ffffffff8133f880<br>
> 0000000000000000<br>
> [ 764.779903] 000000017d002240 ffff880055d1f1d8 ffff880000000001<br>
> ffff880067976708<br>
> [ 764.780009] Call Trace:<br>
> [ 764.780009] [<ffffffff8112983f>] ? handle_pte_fault+0x7cf/0x9e0<br>
> [ 764.780009] [<ffffffff81333f35>] ?<br>
> ocfs2_inode_lock_full_nested+0x355/0xb40<br>
> [ 764.780009] [<ffffffff8133f880>] ? ocfs2_inode_revalidate+0x70/0x160<br>
> [ 764.780009] [<ffffffff8106337a>] ? do_page_fault+0x1aa/0x3c0<br>
> [ 764.780009] [<ffffffff8114e780>] ? cp_new_stat+0xe0/0x100<br>
> [ 764.780009] [<ffffffff8133d4cd>] ? ocfs2_fallocate+0x7d/0x90<br>
> [ 764.780009] [<ffffffff811489e7>] ? do_fallocate+0x117/0x120<br>
> [ 764.780009] [<ffffffff81148a34>] ? sys_fallocate+0x44/0x70<br>
> [ 764.780009] [<ffffffff81771bbb>] ? system_call_fastpath+0x16/0x1b<br>
> [ 764.780009] Code: 89 45 60 48 89 55 68 48 89 45 70 48 89 55 78 4c 89<br>
> e7 48 8b 94 24 00 01 00 00 e8 12 31 00 00 41 89 c2 85 c0 78 2e 48 8b 54<br>
> 24 38 <f7> 42 38 00 10 10 00 74 06 41 80 4c 24 14 01 44 89 54 24 18 4c<br>
> [ 764.780785] RIP [<ffffffff8133c51a>]<br>
> __ocfs2_change_file_space+0x75a/0x1690<br>
> [ 764.780785] RSP <ffff880067b2dd98><br>
> [ 764.780785] CR2: 0000000000000038<br>
> [ 764.781561] ---[ end trace 654757aba94c3768 ]---<br>
><br>
> Vincent<br>
><br>
> > Joel<br>
> ><br>
><br>
> --<br>
</div></div>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in<br>
<div class="im">> the body of a message to <a href="mailto:majordomo@vger.kernel.org">majordomo@vger.kernel.org</a><br>
> More majordomo info at <a href="http://vger.kernel.org/majordomo-info.html" target="_blank">http://vger.kernel.org/majordomo-info.html</a><br>
</div>> Please read the FAQ at <a href="http://www.tux.org/lkml/" target="_blank">http://www.tux.org/lkml/</a><br>
<span class="HOEnZb"><font color="#888888"><br>
--<br>
<br>
Life's Little Instruction Book #456<br>
<br>
"Send your loved one flowers. Think of a reason later."<br>
<br>
<a href="http://www.jlbec.org/" target="_blank">http://www.jlbec.org/</a><br>
<a href="mailto:jlbec@evilplan.org">jlbec@evilplan.org</a><br>
</font></span><div class="HOEnZb"><div class="h5">
</div></div></blockquote></div><br></div>