<div dir="ltr"><div>Hi Marty,<br></div><div>Since we are using a Wind River Linux release, I think I will have to ask them to look at it.<br></div><div><br>One more thing: if I build a newer ocfs2 release on kernel 2.6.34, would that help fix these issues?<br>
</div><div><br>regards,<br></div>Harsha<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Apr 25, 2014 at 7:37 PM, Marty Sweet <span dir="ltr"><<a href="mailto:msweet.dev@gmail.com" target="_blank">msweet.dev@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
This is a kernel bug; I would recommend upgrading your kernel to a<br>
newer version. Your currently running version, 2.6.34, is ~4 years<br>
old, and OCFS2 has seen a lot of improvements since then.<br>
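As a quick sanity check before and after the upgrade, the running kernel and ocfs2 module versions can be confirmed with something like this (the modinfo call is a sketch; its output depends on how the module was packaged, and it may find nothing for a built-in module):<br>

```shell
# Kernel release currently running
uname -r

# ocfs2 module version, if modinfo can locate the module
# (falls back to a notice when it cannot)
modinfo -F version ocfs2 2>/dev/null || echo "ocfs2 module info not available"
```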
<br>
Let us know if this resolves the problem.<br>
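As Herbert notes below, the lines logged just before the panic matter. A sketch of pulling that context out of the messages log with grep — the sample file and its contents here are illustrative stand-ins; on the real system, point grep at /var/log/messages instead:<br>

```shell
# Build a small sample log standing in for /var/log/messages
cat > /tmp/sample-messages.log <<'EOF'
Mar 31 12:13:58 MC0 kernel: some earlier message worth keeping
Mar 31 12:13:58 MC0 kernel: [   74.249870] ------------[ cut here ]------------
Mar 31 12:13:58 MC0 kernel: [   74.254633] kernel BUG at fs/ocfs2/inode.c:510!
EOF

# -B 25 keeps up to 25 lines logged before the "cut here" marker,
# -A 60 keeps the trace that follows it
grep -B 25 -A 60 'cut here' /tmp/sample-messages.log
```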
<br>
Marty<br>
<br>
On 22 April 2014 22:21, Herbert van den Bergh<br>
<div class="HOEnZb"><div class="h5"><<a href="mailto:herbert.van.den.bergh@oracle.com">herbert.van.den.bergh@oracle.com</a>> wrote:<br>
> When you see "------------[ cut here ]------------" in the messages log,<br>
> don't cut there. Cut a few lines above it, to include possible error<br>
> messages that were logged before the panic. Please send the messages from<br>
> before the panic.<br>
><br>
> Thanks,<br>
> Herbert.<br>
><br>
><br>
> On 03/31/2014 05:29 AM, Harsha A Patankar wrote:<br>
><br>
> During the boot process, mount.ocfs2 is dumping core.<br>
><br>
> [ 74.249870] ------------[ cut here ]------------<br>
> [ 74.254633] kernel BUG at<br>
> /h/workspace/HNBGW_platform_wr_rsys/build/build/linux/fs/ocfs2/inode.c:510!<br>
> [ 74.264225] invalid opcode: 0000 [#1] PREEMPT SMP<br>
> [ 74.269209] LTT NESTING LEVEL : 0<br>
> [ 74.272668] last sysfs file: /sys/fs/o2cb/interface_revision<br>
> [ 74.278567] CPU 0<br>
> [ 74.280472] Modules linked in: sg i2c_core i2c_i801 x_tables ip_tables<br>
> ipv6 sctp binfmt_misc ipmi_msghandler ipmi_si ipmi_devintf bonding sunrpc<br>
> lockd exportfs auth_rpcgss nfs_acl nfsd dlm lru_cache drbd ocfs2_stackglue<br>
> ocfs2_stack_user ocfs2_nodemanager ocfs2 ocfs2_dlmfs [last unloaded:<br>
> scsi_wait_scan]<br>
> [ 74.308963]<br>
> [ 74.310505] Pid: 3728, comm: mount.ocfs2 Not tainted<br>
> 2.6.34.13-ipa-WR4.3.0.0_cgl #1 ATCA-4550 /ATCA-4550<br>
> [ 74.321196] RIP: 0010:[<ffffffffa036c145>] [<ffffffffa036c145>]<br>
> ocfs2_iget+0x265/0xb90 [ocfs2]<br>
> [ 74.330226] RSP: 0018:ffff880284d2fb78 EFLAGS: 00010296<br>
> [ 74.335754] RAX: 0000000000000079 RBX: 0000000000018b16 RCX:<br>
> 0000000000000003<br>
> [ 74.343140] RDX: ffff88000a600000 RSI: 0000000000000082 RDI:<br>
> 0000000000000001<br>
> [ 74.350494] RBP: ffff880284d2fbd8 R08: 00000000000102b6 R09:<br>
> 0000000000000000<br>
> [ 74.357933] R10: 0000000000000000 R11: 0000000000000000 R12:<br>
> 0000000000000000<br>
> [ 74.365309] R13: 0000000000000000 R14: ffff88021b636000 R15:<br>
> ffff880218d34160<br>
> [ 74.372672] FS: 000003b218921720(0000) GS:ffff88000a600000(0000)<br>
> knlGS:0000000000000000<br>
> [ 74.381124] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b<br>
> [ 74.387040] CR2: 0000000000432910 CR3: 000000000189a000 CR4:<br>
> 00000000000006b0<br>
> [ 74.394408] DR0: 0000000000000000 DR1: 0000000000000000 DR2:<br>
> 0000000000000000<br>
> [ 74.401823] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:<br>
> 0000000000000400<br>
> [ 74.409304] Process mount.ocfs2 (pid: 3728, threadinfo ffff880284d2e000,<br>
> task ffff88028384b680)<br>
> [ 74.418327] Stack:<br>
> [ 74.420411] 0000000000018b16 ffff88033fc00040 0000000000018b16<br>
> 0000000000018b16<br>
> [ 74.427912] <0> 0000000000000001 ffff8802b0cd3cf8 ffff880284d2fbd8<br>
> ffff8802870df000<br>
> [ 74.435912] <0> ffff8802939a739e ffff880218d34000 ffff8802a15f8800<br>
> ffff880218d34160<br>
> [ 74.444034] Call Trace:<br>
> [ 74.446544] [<ffffffffa03a5ccd>] ocfs2_initialize_super+0x8cd/0x1de0<br>
> [ocfs2]<br>
> [ 74.453928] [<ffffffff81732600>] ? packet_mmap_ops+0x11b400/0x128e00<br>
> [ 74.460649] [<ffffffffa03a7313>] ocfs2_fill_super+0x133/0x1b70 [ocfs2]<br>
> [ 74.467488] [<ffffffff81129d64>] get_sb_bdev+0x174/0x1b0<br>
> [ 74.473096] [<ffffffffa03a71e0>] ? ocfs2_fill_super+0x0/0x1b70 [ocfs2]<br>
> [ 74.479877] [<ffffffff81113352>] ? alloc_pages_current+0x82/0xd0<br>
> [ 74.486106] [<ffffffffa03a1673>] ocfs2_get_sb+0x13/0x20 [ocfs2]<br>
> [ 74.492242] [<ffffffff81129243>] vfs_kern_mount+0x83/0x230<br>
> [ 74.497918] [<ffffffff8112945d>] do_kern_mount+0x4d/0x130<br>
> [ 74.503514] [<ffffffff81585857>] ? _lock_kernel+0x47/0x19a<br>
> [ 74.509207] [<ffffffff81147770>] do_mount+0x330/0x980<br>
> [ 74.514447] [<ffffffff81113352>] ? alloc_pages_current+0x82/0xd0<br>
> [ 74.520694] [<ffffffff810e46e9>] ? __get_free_pages+0x9/0x50<br>
> [ 74.526531] [<ffffffff81147e4b>] sys_mount+0x8b/0xe0<br>
> [ 74.531705] [<ffffffff81002d10>] system_call_done+0x0/0x5<br>
> [ 74.537291] Code: 01 c1 ea 04 83 e2 01 38 c2 74 24 48 b8 00 00 80 00 01<br>
> 00 00 00 48 85 05 0a 42 f7 ff 74 0d 48 85 05 09 42 f7 ff 0f 84 9d 06 00 00<br>
> <0f> 0b eb fe 41 0f b7 46 28 25 00 f0 00 00 3d 00 60 00 00 0f 84<br>
> [ 74.557539] RIP [<ffffffffa036c145>] ocfs2_iget+0x265/0xb90 [ocfs2]<br>
> [ 74.564045] RSP <ffff880284d2fb78><br>
> [ 74.567647] ---[ end trace 46a2ed9b1b91cb15 ]---<br>
> 2014 Mar 31 12:13:58 MC0 [ 74.249870] ------------[ cut here ]------------<br>
> 2014 Mar 31 12:13:58 MC0 [ 74.264225] invalid opcode: 0000 [#1] PREEMPT<br>
> SMP<br>
> 2014 Mar 31 12:13:58 MC0 [ 74.269209] LTT NESTING LEVEL : 0<br>
> 2014 Mar 31 12:13:58 MC0 [ 74.272668] last sysfs file:<br>
> /sys/fs/o2cb/interface_revision<br>
> 2014 Mar 31 12:13:58 MC0 [ 74.418327] Stack:<br>
> 2014 Mar 31 12:13:58 MC0 [ 74.444034] Call Trace:<br>
> 2014 Mar 31 12:13:58 MC0 [ 74.537291] Code: 01 c1 ea 04 83 e2 01 38 c2 74<br>
> 24 48 b8 00 00 80 00 01 00 00 00 48 85 05 0a 42 f7 ff 74 0d 48 85 05 09 42<br>
> f7 ff 0f 84 9d 06 00 00 <0f> 0b eb fe 41 0f b7 46 28 25 00 f0 00 00 3d 00 60<br>
> 00 00 0f 84<br>
><br>
> regards,<br>
> Harsha<br>
><br>
><br>
> _______________________________________________<br>
> Ocfs2-users mailing list<br>
> <a href="mailto:Ocfs2-users@oss.oracle.com">Ocfs2-users@oss.oracle.com</a><br>
> <a href="https://oss.oracle.com/mailman/listinfo/ocfs2-users" target="_blank">https://oss.oracle.com/mailman/listinfo/ocfs2-users</a><br>
><br>
</div></div></blockquote></div><br></div>