[Ocfs2-devel] [PATCH 0/1] test case for patch 1/1

heming.zhao at suse.com
Sat Jun 18 10:18:34 UTC 2022


On 6/18/22 10:35, Joseph Qi wrote:
> 
> 
> On 6/8/22 6:48 PM, Heming Zhao wrote:
>> === test cases ====
>>
>> <1> remount on local node for cluster env
>>
>> mount -t ocfs2 /dev/vdb /mnt
>> mount -t ocfs2 /dev/vdb /mnt              <=== failure
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
>>
> 
> This is mounting multiple times, not remounting.
> I don't see how it relates to your changes.

Yes, it is not related to my patch.
I included this test only to observe the remount result.

> 
>> <2> remount on local node for nocluster env
>>
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>> mount -t ocfs2 /dev/vdb /mnt              <=== failure
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
>>
>> <3> remount on another node for cluster env
>>
>> node2:
>> mount -t ocfs2 /dev/vdb /mnt
>>
>> node1:
>> mount -t ocfs2 /dev/vdb /mnt  <== success
>> umount
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== failure
>>
>> <4> remount on another node for nocluster env
>>
>> node2:
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>>
>> node1:
>> mount -t ocfs2 /dev/vdb /mnt              <== failure
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== success, see below comment
>>
> Why allow two nodes to mount in nocluster mode successfully?
> Since there is no cluster lock enabled, it will corrupt data.

I didn't know about the ext4 MMP feature at that time. Since ext4 allows
mounting on a different machine (which can corrupt data), ocfs2 also allowed
this case to happen. But following ext4 MMP, we had better add some similar
code to block multiple mounts.
> 
>> <5> simulate after crash status for cluster env
>>
>> (below all steps did on node1. node2 is unmount status)
>> mount -t ocfs2 /dev/vdb /mnt
>> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.cluster.mnted
>> umount /mnt
>> dd if=/root/slotmap.cluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt   <== failure
>> mount -t ocfs2 /dev/vdb /mnt && umount /mnt <== clean slot 0
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt   <== success
>>
>> <6> simulate after crash status for nocluster env
>>
>> (below all steps did on node1. node2 is unmount status)
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.nocluster.mnted
>> umount /mnt
>> dd if=/root/slotmap.nocluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
>> mount -t ocfs2 /dev/vdb /mnt   <== failure
>> mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt <== clean slot 0
>> mount -t ocfs2 /dev/vdb /mnt   <== success
>>
> 'bs=1 count=8 skip=76058624', is this for slotmap backup?

Sorry, I forgot to explain the meaning of this magic number. Your guess is right.

How to calculate it:
```
My test disk is a 500M raw file, attached to kvm-qemu in shared mode.
(my env) block size: 1K, cluster size: 4K, '//slot_map' inode number: 0xD.
debugfs: stat //slot_map
         Inode: 13   Mode: 0644   Generation: 4183895025 (0xf9612bf1)
         FS Generation: 4183895025 (0xf9612bf1)
         CRC32: 00000000   ECC: 0000
         Type: Regular   Attr: 0x0   Flags: Valid System
         Dynamic Features: (0x0)
         User: 0 (root)   Group: 0 (root)   Size: 4096
         Links: 1   Clusters: 1
         ctime: 0x62286e49 0x0 -- Wed Mar  9 17:07:21.0 2022
         atime: 0x62286e49 0x0 -- Wed Mar  9 17:07:21.0 2022
         mtime: 0x62286e4a 0x0 -- Wed Mar  9 17:07:22.0 2022
         dtime: 0x0 -- Thu Jan  1 08:00:00 1970
         Refcount Block: 0
         Last Extblk: 0   Orphan Slot: 0
         Sub Alloc Slot: Global   Sub Alloc Bit: 5
         Tree Depth: 0   Count: 51   Next Free Rec: 1
         ## Offset        Clusters       Block#          Flags
         0  0             1              74276           0x0

74276 * 1024 => 76058624 (0x4889000)
```
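
The arithmetic above can be scripted so the dd offset does not have to be
worked out by hand. A minimal sketch, assuming the Block# and block size from
my debugfs output above (they will differ on a volume formatted with other
parameters):

```shell
#!/bin/sh
# Compute the byte offset of //slot_map for dd(1) from the Block#
# shown in the debugfs extent list and the filesystem block size.
block_no=74276        # Block# column from "stat //slot_map" above
block_size=1024       # this test volume uses a 1K block size
offset=$((block_no * block_size))
echo "$offset"        # offset to pass to dd via seek=/skip=
```

The result is the value used in the `seek=`/`skip=` arguments of the dd
commands in test cases <5> and <6>.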

Finally, do you think I could send a v2 patch which includes part of the ext4
MMP feature? I plan to copy ext4_multi_mount_protect(), but won't include the
code that spawns the kmmpd kthread.
Btw, to be honest, I don't totally get the idea of kmmpd. kmmpd periodically
updates/checks the mmp area, and ext4_multi_mount_protect() already blocks a
new mount action. So in my view, kmmpd's update/detect work only matters when
a user or something else directly modifies the on-disk mmp area. If my guess
is right, kmmpd is not necessary.
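
To illustrate the open-time part of the protection (the part I plan to keep),
here is a minimal sketch. Assumptions of mine, not the real ext4 format: a
temp file stands in for the on-disk mmp block, the check interval is 1 second,
and the sequence is just a pid/timestamp string:

```shell
#!/bin/sh
# Hypothetical sketch of an ext4_multi_mount_protect()-style handshake,
# without a kmmpd thread.  A temp file stands in for the mmp block.
mmp=$(mktemp)
myseq="$$-$(date +%s)"        # a sequence unlikely to collide
echo "$myseq" > "$mmp"        # publish our sequence in the mmp block
sleep 1                       # wait one check interval
if [ "$(cat "$mmp")" = "$myseq" ]; then
    verdict="mount allowed"   # unchanged: no concurrent opener raced us
else
    verdict="mount refused"   # someone overwrote our sequence meanwhile
fi
echo "$verdict"
rm -f "$mmp"
```

With only this open-time check, a writer that touches the mmp block after the
mount has completed goes unnoticed, which matches the "directly modifies the
mmp area" case described above.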

Thanks,
Heming
> 
> Thanks,
> Joseph
> 
>>
>> -----
>> For test case <4>, the kernel side is done, but there is still
>> userspace work to do.
>> In my view, mount.ocfs2 needs to add a double confirmation for this scenario.
>>
>> current style:
>> ```
>> # mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt
>> Warning: to mount a clustered volume without the cluster stack.
>> Please make sure you only mount the file system from one node.
>> Otherwise, the file system may be damaged.
>> Proceed (y/N): y
>> ```
>>
>> I plan to change as:
>> ```
>> # mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt
>> Warning: to mount a clustered volume without the cluster stack.
>> Please make sure you only mount the file system from one node.
>> Otherwise, the file system may be damaged.
>> Proceed (y/N): y
>> Warning: this volume was already mounted in nocluster mode.
>> Did you mount this volume on another node?
>> Please confirm you want to mount this volume on this node.
>> Proceed (y/N): y
>> ```
>>
>> Heming Zhao (1):
>>    ocfs2: fix ocfs2_find_slot repeats alloc same slot issue
>>
>>   fs/ocfs2/dlmglue.c  |  3 ++
>>   fs/ocfs2/ocfs2_fs.h |  3 ++
>>   fs/ocfs2/slot_map.c | 70 ++++++++++++++++++++++++++++++++++++---------
>>   3 files changed, 62 insertions(+), 14 deletions(-)
>>
