[Ocfs2-devel] [PATCH 0/1] test case for patch 1/1
Joseph Qi
joseph.qi at linux.alibaba.com
Sat Jun 18 02:35:22 UTC 2022
On 6/8/22 6:48 PM, Heming Zhao wrote:
> ==== test cases ====
>
> <1> remount on local node for cluster env
>
> mount -t ocfs2 /dev/vdb /mnt
> mount -t ocfs2 /dev/vdb /mnt <=== failure
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
>
This is mounting the volume multiple times, not remounting it.
I don't see how it relates to your changes.
> <2> remount on local node for nocluster env
>
> mount -t ocfs2 -o nocluster /dev/vdb /mnt
> mount -t ocfs2 /dev/vdb /mnt <=== failure
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
>
> <3> remount on another node for cluster env
>
> node2:
> mount -t ocfs2 /dev/vdb /mnt
>
> node1:
> mount -t ocfs2 /dev/vdb /mnt <== success
> umount
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== failure
>
> <4> remount on another node for nocluster env
>
> node2:
> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>
> node1:
> mount -t ocfs2 /dev/vdb /mnt <== failure
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== success, see below comment
>
Why allow two nodes to mount in nocluster mode successfully?
Since no cluster locking is enabled, that will corrupt data.
> <5> simulate after crash status for cluster env
>
> (all steps below are done on node1; node2 stays unmounted)
> mount -t ocfs2 /dev/vdb /mnt
> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.cluster.mnted
> umount /mnt
> dd if=/root/slotmap.cluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== failure
> mount -t ocfs2 /dev/vdb /mnt && umount /mnt <== clean slot 0
> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== success
>
> <6> simulate after crash status for nocluster env
>
> (all steps below are done on node1; node2 stays unmounted)
> mount -t ocfs2 -o nocluster /dev/vdb /mnt
> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.nocluster.mnted
> umount /mnt
> dd if=/root/slotmap.nocluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
> mount -t ocfs2 /dev/vdb /mnt <== failure
> mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt <== clean slot 0
> mount -t ocfs2 /dev/vdb /mnt <== success
>
'bs=1 count=8 skip=76058624', is this for slotmap backup?
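Just to make sure I read those offsets right, here is a minimal standalone
sketch that dumps the 8 bytes your dd commands save/restore; /dev/vdb and
the 76058624 offset are taken as-is from your steps, not derived here:
```
/* Minimal sketch: print the 8 bytes the dd commands above save/restore.
 * /dev/vdb and offset 76058624 come straight from the test steps. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const char *device = "/dev/vdb";
	const off_t offset = 76058624;
	uint8_t buf[8];
	int fd = open(device, O_RDONLY);

	if (fd < 0 || pread(fd, buf, sizeof(buf), offset) != (ssize_t)sizeof(buf)) {
		perror("read");
		return EXIT_FAILURE;
	}
	for (size_t i = 0; i < sizeof(buf); i++)
		printf("%02x ", buf[i]);
	printf("\n");
	close(fd);
	return EXIT_SUCCESS;
}
```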
Thanks,
Joseph
>
> -----
> For test case <4>, the kernel side is done, but there is still
> userspace work to do.
> In my view, mount.ocfs2 needs to add a double confirmation for this scenario.
>
> current style:
> ```
> # mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt
> Warning: to mount a clustered volume without the cluster stack.
> Please make sure you only mount the file system from one node.
> Otherwise, the file system may be damaged.
> Proceed (y/N): y
> ```
>
> I plan to change it to:
> ```
> # mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt
> Warning: to mount a clustered volume without the cluster stack.
> Please make sure you only mount the file system from one node.
> Otherwise, the file system may be damaged.
> Proceed (y/N): y
> Warning: detected the volume is already mounted in nocluster mode.
> Is this volume mounted on another node?
> Please confirm you want to mount this volume on this node.
> Proceed (y/N): y
> ```
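>
> A rough sketch of how that extra confirmation could sit in mount.ocfs2
> (only a sketch under assumptions: ask_yes_no() and
> volume_mounted_as_nocluster() are hypothetical helpers, not existing
> mount.ocfs2 code, and the slot map check is stubbed out):
> ```
> /*
>  * Sketch only. ask_yes_no() and volume_mounted_as_nocluster() are
>  * hypothetical helpers, not existing mount.ocfs2 functions.
>  */
> #include <stdbool.h>
> #include <stdio.h>
>
> static bool ask_yes_no(const char *prompt)
> {
> 	char answer[8] = "";
>
> 	printf("%sProceed (y/N): ", prompt);
> 	fflush(stdout);
> 	if (!fgets(answer, sizeof(answer), stdin))
> 		return false;
> 	return answer[0] == 'y' || answer[0] == 'Y';
> }
>
> /* Hypothetical: would read the on-disk slot map and report whether a
>  * slot is already held by a nocluster mount. Stubbed for illustration. */
> static bool volume_mounted_as_nocluster(const char *device)
> {
> 	(void)device;
> 	return true;
> }
>
> /* Returns 0 if the user confirmed every prompt, -1 otherwise. */
> static int confirm_nocluster_mount(const char *device)
> {
> 	if (!ask_yes_no("Warning: to mount a clustered volume without the cluster stack.\n"
> 			"Please make sure you only mount the file system from one node.\n"
> 			"Otherwise, the file system may be damaged.\n"))
> 		return -1;
>
> 	/* The second confirmation proposed above for test case <4>. */
> 	if (volume_mounted_as_nocluster(device) &&
> 	    !ask_yes_no("Warning: detected the volume is already mounted in nocluster mode.\n"
> 			"Is this volume mounted on another node?\n"
> 			"Please confirm you want to mount this volume on this node.\n"))
> 		return -1;
>
> 	return 0;
> }
>
> int main(void)
> {
> 	return confirm_nocluster_mount("/dev/vdb") ? 1 : 0;
> }
> ```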
>
> Heming Zhao (1):
> ocfs2: fix ocfs2_find_slot repeats alloc same slot issue
>
> fs/ocfs2/dlmglue.c | 3 ++
> fs/ocfs2/ocfs2_fs.h | 3 ++
> fs/ocfs2/slot_map.c | 70 ++++++++++++++++++++++++++++++++++++---------
> 3 files changed, 62 insertions(+), 14 deletions(-)
>