Hello Gang,

Thank you for the quick response - it looks like the right direction for me, similar to what other (non-clustered) file systems have.

I've checked and saw that mount forwards this parameter to the OCFS2 kernel driver, and it looks like the version in my kernel does not support errors=continue, only errors=panic and errors=remount-ro.

You mentioned the "latest code" ... my question is: in which kernel version should it be supported? I'm currently using 3.16 on Ubuntu 14.04.

Thanks,

Guy

On Wed, Jan 20, 2016 at 4:21 AM, Gang He <ghe@suse.com> wrote:

Hello Guy,

First, OCFS2 is a shared-disk cluster file system, not a distributed file system (like Ceph). All nodes share the same data/metadata copy on the shared disk, so please make sure the shared disk stays intact.
Second, if the file system encounters an error, the behavior is specified by the mount option "errors=xxx".
The latest code supports the "errors=continue" option, which means the file system will not panic the OS; it just returns -EIO and lets the file system continue.
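
For example, on a kernel that includes this support, it is just another mount option (the device and mount point below are placeholders):

mount -o errors=continue /dev/<path to device> /mnt/ocfs2-mountpoint
grep ocfs2 /proc/mounts   # the active errors= mode should show up among the mount options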

Thanks
Gang


>>>
> Dear OCFS2 guys,
>
>
>
> My name is Guy, and I'm testing OCFS2 for its features as a clustered
> filesystem that I need.
>
> As part of the stability and reliability testing I’ve performed, I've
> encountered an issue with OCFS2 (format + mount + remove disk...) that I
> wanted to verify is a real issue and not just a misconfiguration.
>
>
>
> The main concern is that the stability of the whole system is compromised
> when a single disk/volume fails. It looks like OCFS2 does not handle the
> error correctly but gets stuck in an endless loop that interferes with the
> work of the server.
>
>
>
> I’ve tested two cluster configurations – (1) Corosync/Pacemaker and
> (2) o2cb – which react similarly.
>
> The process and the relevant log entries follow:
>
>
> Additional configurations that were tested are also listed below.
>
>
> Node 1:
>
> =======
>
> 1. service corosync start
>
> 2. service dlm start
>
> 3. mkfs.ocfs2 -v -Jblock64 -b 4096 --fs-feature-level=max-features
> --cluster-stack=pcmk --cluster-name=cluster-name -N 2 /dev/<path to device>
>
> 4. mount -o
> rw,noatime,nodiratime,data=writeback,heartbeat=none,cluster_stack=pcmk
> /dev/<path to device> /mnt/ocfs2-mountpoint
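>
> For reference, one way to double-check what was stamped on the volume is the
> mounted.ocfs2 tool from ocfs2-tools (exact output layout may vary by version):
>
> mounted.ocfs2 -d   # lists detected OCFS2 volumes with their cluster stack, cluster name, UUID and label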
>
>
>
> Node 2:
>
> =======
>
> 5. service corosync start
>
> 6. service dlm start
>
> 7. mount -o
> rw,noatime,nodiratime,data=writeback,heartbeat=none,cluster_stack=pcmk
> /dev/<path to device> /mnt/ocfs2-mountpoint
>
>
>
> So far everything is working well, including reading and writing.
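>
> For example, a quick read/write sanity check from either node (the test file
> name is arbitrary):
>
> dd if=/dev/zero of=/mnt/ocfs2-mountpoint/testfile bs=1M count=100 oflag=direct   # write 100 MB
> dd if=/mnt/ocfs2-mountpoint/testfile of=/dev/null bs=1M iflag=direct   # read it back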
>
> Next:
>
> 8. I physically pulled out the disk at /dev/<path to device> to simulate a
> hardware failure (one that may occur in real life, where the disk would be
> protected in hardware or software). Nonetheless, I’m testing a hardware
> failure in which one of the OCFS2 file systems in my server fails.
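>
> If physical access is inconvenient, a software-side equivalent (with sdX as a
> hypothetical device name) is to drop the disk through sysfs, which makes the
> block device disappear much like a physical pull:
>
> echo 1 > /sys/block/sdX/device/delete   # detaches the SCSI device from the kernel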
>
> The following messages were observed in the system log (see below), and
>
> ==> 9. a kernel panic(!) ... on one of the nodes or on both, or a reboot of
> one of the nodes or of both.
>
>
> Is there any configuration or set of parameters that will enable the system
> to keep working, disabling access to the failed disk without compromising
> system stability and without causing a kernel panic?
>
>
>
> From my point of view it looks basic – when a hardware failure occurs:
>
> 1. All remaining hardware should continue working.
>
> 2. The failed disk/volume should become inaccessible – but without
> compromising the whole system's availability (kernel panic).
>
> 3. OCFS2 “understands” there’s a failed disk and stops trying to access it.
>
> 4. All disk commands such as mount/umount, df, etc. should continue working.
>
> 5. When a new/replacement drive is connected to the system, it can be
> accessed.
>
> My settings:
>
> Ubuntu 14.04
>
> Linux: 3.16.0-46-generic
>
> mkfs.ocfs2 1.8.4 (downloaded from git)
>
>
>
>
>
> Some other scenarios that were also tested:
>
> 1. Removing max-features from the mkfs command (i.e. mkfs.ocfs2 -v -Jblock64 -b
> 4096 --cluster-stack=pcmk --cluster-name=cluster-name -N 2 /dev/<path to
> device>)
>
> This helped in some cases – no kernel panic – but the stability of the
> system was still compromised, and the syslog indicates that something
> unrecoverable is going on (see below - Appendix A1). Furthermore, the
> system hangs when a software reboot is attempted.
>
> 2. I also tried the o2cb stack, with similar outcomes.
>
> 3. The configuration was also tested with (1, 2 and 3) local and global
> heartbeat(s) that were NOT on the simulated failed disk but on other
> physical disks.
>
> 4. Also tested:
>
> Ubuntu 15.10
>
> Kernel: 4.2.0-23-generic
>
> mkfs.ocfs2 1.8.4 (git clone git://oss.oracle.com/git/ocfs2-tools.git)
>
>
>
>
> ==============
>
> Appendix A1:
>
> ==============
>
> From syslog:
>
> [ 1676.608123] (ocfs2cmt,5316,14):ocfs2_commit_thread:2195 ERROR: status =
> -5, journal is already aborted.
>
> [ 1677.611827] (ocfs2cmt,5316,14):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1678.616634] (ocfs2cmt,5316,15):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1679.621419] (ocfs2cmt,5316,15):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1680.626175] (ocfs2cmt,5316,15):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1681.630981] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1682.107356] INFO: task kworker/u64:0:6 blocked for more than 120 seconds.
>
> [ 1682.108440] Not tainted 3.16.0-46-generic #62~14.04.1
>
> [ 1682.109388] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
> this message.
>
> [ 1682.110381] kworker/u64:0 D ffff88103fcb30c0 0 6 2
> 0x00000000
>
> [ 1682.110401] Workqueue: fw_event0 _firmware_event_work [mpt3sas]
>
> [ 1682.110405] ffff88102910b8a0 0000000000000046 ffff88102977b2f0
> 00000000000130c0
>
> [ 1682.110411] ffff88102910bfd8 00000000000130c0 ffff88102928c750
> ffff88201db284b0
>
> [ 1682.110415] ffff88201db28000 ffff881028cef000 ffff88201db28138
> ffff88201db28268
>
> [ 1682.110419] Call Trace:
>
> [ 1682.110427] [<ffffffff8176a8b9>] schedule+0x29/0x70
>
> [ 1682.110458] [<ffffffffc08d6c11>] ocfs2_clear_inode+0x3b1/0xa30 [ocfs2]
>
> [ 1682.110464] [<ffffffff810b4de0>] ? prepare_to_wait_event+0x100/0x100
>
> [ 1682.110487] [<ffffffffc08d8c7e>] ocfs2_evict_inode+0x6e/0x730 [ocfs2]
>
> [ 1682.110493] [<ffffffff811eee04>] evict+0xb4/0x180
>
> [ 1682.110498] [<ffffffff811eef09>] dispose_list+0x39/0x50
>
> [ 1682.110501] [<ffffffff811efdb4>] invalidate_inodes+0x134/0x150
>
> [ 1682.110506] [<ffffffff8120a64a>] __invalidate_device+0x3a/0x60
>
> [ 1682.110510] [<ffffffff81367e81>] invalidate_partition+0x31/0x50
>
> [ 1682.110513] [<ffffffff81368f45>] del_gendisk+0xf5/0x290
>
> [ 1682.110519] [<ffffffff815177a1>] sd_remove+0x61/0xc0
>
> [ 1682.110524] [<ffffffff814baf7f>] __device_release_driver+0x7f/0xf0
>
> [ 1682.110529] [<ffffffff814bb013>] device_release_driver+0x23/0x30
>
> [ 1682.110534] [<ffffffff814ba918>] bus_remove_device+0x108/0x180
>
> [ 1682.110538] [<ffffffff814b7169>] device_del+0x129/0x1c0
>
> [ 1682.110543] [<ffffffff815123a5>] __scsi_remove_device+0xd5/0xe0
>
> [ 1682.110547] [<ffffffff815123d6>] scsi_remove_device+0x26/0x40
>
> [ 1682.110551] [<ffffffff81512590>] scsi_remove_target+0x170/0x230
>
> [ 1682.110561] [<ffffffffc03551e5>] sas_rphy_remove+0x65/0x80
> [scsi_transport_sas]
>
> [ 1682.110570] [<ffffffffc035707d>] sas_port_delete+0x2d/0x170
> [scsi_transport_sas]
>
> [ 1682.110575] [<ffffffff8124a6f9>] ? sysfs_remove_link+0x19/0x30
>
> [ 1682.110588] [<ffffffffc03f1599>]
> mpt3sas_transport_port_remove+0x1c9/0x1e0 [mpt3sas]
>
> [ 1682.110598] [<ffffffffc03e60b5>] _scsih_remove_device+0x55/0x80
> [mpt3sas]
>
> [ 1682.110610] [<ffffffffc03e6159>]
> _scsih_device_remove_by_handle.part.21+0x79/0xa0 [mpt3sas]
>
> [ 1682.110619] [<ffffffffc03eca97>] _firmware_event_work+0x1337/0x1690
> [mpt3sas]
>
> [ 1682.110626] [<ffffffff8101c315>] ? native_sched_clock+0x35/0x90
>
> [ 1682.110630] [<ffffffff8101c379>] ? sched_clock+0x9/0x10
>
> [ 1682.110636] [<ffffffff81011574>] ? __switch_to+0xe4/0x580
>
> [ 1682.110640] [<ffffffff81087bc9>] ? pwq_activate_delayed_work+0x39/0x80
>
> [ 1682.110644] [<ffffffff8108a302>] process_one_work+0x182/0x450
>
> [ 1682.110648] [<ffffffff8108aa71>] worker_thread+0x121/0x570
>
> [ 1682.110652] [<ffffffff8108a950>] ? rescuer_thread+0x380/0x380
>
> [ 1682.110657] [<ffffffff81091309>] kthread+0xc9/0xe0
>
> [ 1682.110662] [<ffffffff81091240>] ? kthread_create_on_node+0x1c0/0x1c0
>
> [ 1682.110667] [<ffffffff8176e818>] ret_from_fork+0x58/0x90
>
> [ 1682.110672] [<ffffffff81091240>] ? kthread_create_on_node+0x1c0/0x1c0
>
> [ 1682.635761] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1683.640549] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1684.645336] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1685.650114] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1686.654911] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1687.659684] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1688.664466] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1689.669252] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1690.674026] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1691.678810] (ocfs2cmt,5316,9):ocfs2_commit_cache:324 ERROR: status = -5
>
> [ 1691.679920] (ocfs2cmt,5316,9):ocfs2_commit_thread:2195 ERROR: status =
> -5, journal is already aborted.
>
>
>
> Thanks in advance,
>
> Guy