[Ocfs2-devel] [PATCH 0/1] test case for patch 1/1

Joseph Qi joseph.qi at linux.alibaba.com
Sat Jun 25 12:47:49 UTC 2022



On 6/18/22 6:18 PM, heming.zhao at suse.com wrote:
> On 6/18/22 10:35, Joseph Qi wrote:
>>
>>
>> On 6/8/22 6:48 PM, Heming Zhao wrote:
>>> ==== test cases ====
>>>
>>> <1> remount on local node for cluster env
>>>
>>> mount -t ocfs2 /dev/vdb /mnt
>>> mount -t ocfs2 /dev/vdb /mnt              <=== failure
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
>>>
>>
>> This is mounting multiple times, not remounting.
>> I don't see how it relates to your changes.
> 
> Yes, it's not related to my patch.
> I included this test only to observe the remount result.
> 
>>
>>> <2> remount on local node for nocluster env
>>>
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>>> mount -t ocfs2 /dev/vdb /mnt              <=== failure
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <=== failure
>>>
>>> <3> remount on another node for cluster env
>>>
>>> node2:
>>> mount -t ocfs2 /dev/vdb /mnt
>>>
>>> node1:
>>> mount -t ocfs2 /dev/vdb /mnt  <== success
>>> umount
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== failure
>>>
>>> <4> remount on another node for nocluster env
>>>
>>> node2:
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>>>
>>> node1:
>>> mount -t ocfs2 /dev/vdb /mnt              <== failure
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt <== success, see below comment
>>>
>> Why allow two nodes to mount with nocluster successfully?
>> Since there is no cluster lock enabled, it will corrupt data.
> 
> I didn't know about the ext4 MMP feature at that time. Since ext4 allows mounting
> on a different machine (which can corrupt data), ocfs2 also allowed this case to
> happen. But following ext4 MMP, we had better add some similar code to block
> multiple mounts.
>>
>>> <5> simulate after crash status for cluster env
>>>
>>> (all steps below were done on node1; node2 stays unmounted)
>>> mount -t ocfs2 /dev/vdb /mnt
>>> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.cluster.mnted
>>> umount /mnt
>>> dd if=/root/slotmap.cluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt   <== failure
>>> mount -t ocfs2 /dev/vdb /mnt && umount /mnt <== clean slot 0
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt   <== success
>>>
>>> <6> simulate after crash status for nocluster env
>>>
>>> (all steps below were done on node1; node2 stays unmounted)
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt
>>> dd if=/dev/vdb bs=1 count=8 skip=76058624 of=/root/slotmap.nocluster.mnted
>>> umount /mnt
>>> dd if=/root/slotmap.nocluster.mnted of=/dev/vdb seek=76058624 bs=1 count=8
>>> mount -t ocfs2 /dev/vdb /mnt   <== failure
>>> mount -t ocfs2 -o nocluster /dev/vdb /mnt && umount /mnt <== clean slot 0
>>> mount -t ocfs2 /dev/vdb /mnt   <== success
>>>
>> 'bs=1 count=8 skip=76058624', is this for backing up the slot map?
> 
> Sorry, I forgot to explain the meaning of this magic number. Your guess is right.
> 
> How to calculate it:
> ```
> my test disk is a 500M raw file, attached to kvm-qemu in shared mode.
> (my env) block size: 1K, cluster size: 4K, '//slot_map' inode number: 0xD.
> debugfs: stat //slot_map
>         Inode: 13   Mode: 0644   Generation: 4183895025 (0xf9612bf1)
>         FS Generation: 4183895025 (0xf9612bf1)
>         CRC32: 00000000   ECC: 0000
>         Type: Regular   Attr: 0x0   Flags: Valid System
>         Dynamic Features: (0x0)
>         User: 0 (root)   Group: 0 (root)   Size: 4096
>         Links: 1   Clusters: 1
>         ctime: 0x62286e49 0x0 -- Wed Mar  9 17:07:21.0 2022
>         atime: 0x62286e49 0x0 -- Wed Mar  9 17:07:21.0 2022
>         mtime: 0x62286e4a 0x0 -- Wed Mar  9 17:07:22.0 2022
>         dtime: 0x0 -- Thu Jan  1 08:00:00 1970
>         Refcount Block: 0
>         Last Extblk: 0   Orphan Slot: 0
>         Sub Alloc Slot: Global   Sub Alloc Bit: 5
>         Tree Depth: 0   Count: 51   Next Free Rec: 1
>         ## Offset        Clusters       Block#          Flags
>         0  0             1              74276           0x0
> 
> 74276 * 1024 => 76058624 (0x4889000)
> ```
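> 
> For convenience, here is a small sketch that derives this offset automatically.
> It assumes debugfs.ocfs2's -R option and the extent record layout shown above;
> the field positions are from my env and may differ on other versions:
> ```
> #!/bin/sh
> DEV=/dev/vdb
> BLKSZ=1024      # fs block size in my env
> # extent record line "0  0  1  74276  0x0": field 4 is Block#
> BLKNO=$(debugfs.ocfs2 -R "stat //slot_map" "$DEV" | \
>         awk '$1 == "0" { print $4; exit }')
> echo $((BLKNO * BLKSZ))   # prints 76058624 on my test disk
> ```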
> 
> Lastly, do you think I could send a v2 patch which includes part of the ext4 MMP
> feature? I plan to copy ext4_multi_mount_protect(), but won't include the code
> that spawns the kmmpd kthread.
> BTW, to be honest, I haven't fully grasped the idea of kmmpd. kmmpd periodically
> updates/checks the MMP area, and ext4_multi_mount_protect() already blocks new
> mount attempts. So in my view, kmmpd's update/check actions only matter when a
> user or something else directly modifies the on-disk MMP area. If my guess is
> right, the kmmpd is not necessary.
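> 
> My mental model of the ext4_multi_mount_protect() check, as a rough shell analogy
> only (I reuse the slot_map offset from this thread as a placeholder; the real MMP
> block location and check interval are different, and this is not ext4's actual
> code):
> ```
> # read the protection area, wait one check interval, then read it again;
> # if the contents changed, another node is actively updating the area,
> # so a new mount must be refused.
> OFF=76058624             # placeholder offset, see the calculation above
> dd if=/dev/vdb of=/tmp/mmp.1 bs=1 count=8 skip=$OFF 2>/dev/null
> sleep 5                  # assumed check interval
> dd if=/dev/vdb of=/tmp/mmp.2 bs=1 count=8 skip=$OFF 2>/dev/null
> cmp -s /tmp/mmp.1 /tmp/mmp.2 || echo "area changed: another node is alive, refuse mount"
> ```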
> 

Sorry for the late reply.
Since this feature is currently incomplete and o2cb is the default stack, I'd
like to take Junxiao's suggestion: revert this feature first to quickly fix the
regression.
We can take it up again in the future once the feature is mature.

Thanks,
Joseph


