[Ocfs2-tools-devel] [PATCH 0/6] Fix extent records for holes/overlaps

Tiger Yang tiger.yang at oracle.com
Thu Sep 1 23:22:21 PDT 2011


On 08/24/2011 11:52 PM, Goldwyn Rodrigues wrote:
> Hi Tiger,
>
> On Wed, Aug 24, 2011 at 2:21 AM, Tiger Yang <tiger.yang at oracle.com> wrote:
>> Hi, Goldwyn,
>>
>> I have tested the new patches with test cases that punch holes at the
>> beginning and in the middle; they can't fix the problems.
>> I have studied this problem and found that the reason is that the new
>> size of the dir inode set by fsck is wrong. In my test case, the block
>> size is 1024, the cluster size is 4096, and the dir size is 23552 (23
>> blocks), but it takes 6 clusters (24 blocks), so when fsck finds a hole
>> and removes that cluster, the new size becomes 20480, which includes
>> the last uninitialized block in the last cluster. So in pass2,
>> ocfs2_read_dir_block -> ocfs2_validate_meta_ecc ->
>> ocfs2_block_check_validate will return OCFS2_ET_IO. For a hole at the
>> beginning, the fix should initialize "." and ".." in the directory, as
>> Joel commented on bug 1324.
> How did you punch the holes? Did you wipe out existing data from the
> directories? The idea of the patchset is to fix extent records.
ocfs2 doesn't allow punching holes in a directory inode, so I modified the
code both in the kernel and in the tools; you can see the attached patches
below. Yes, the tool wipes out the existing data, just like punching holes
in a regular file. I think fixing only the extent records is not enough to
fix this problem; a customer may have a corrupted directory inode like I
had. Do we need another patch to fix this? (The size arithmetic from my
test case is sketched below.)
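
For illustration only, here is a minimal sketch (not taken from the
attached patches) of the size arithmetic in my test case above; the
helper name div_round_up() is made up for this example:

#include <stdio.h>
#include <stdint.h>

/* round-up division, as used informally in the discussion above */
static uint64_t div_round_up(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

int main(void)
{
	uint64_t blocksize = 1024, clustersize = 4096, i_size = 23552;

	uint64_t used_blocks  = div_round_up(i_size, blocksize);        /* 23 */
	uint64_t clusters     = div_round_up(i_size, clustersize);      /* 6  */
	uint64_t alloc_blocks = clusters * (clustersize / blocksize);   /* 24 */

	/*
	 * The last cluster holds 4 blocks but i_size only covers 3 of
	 * them, so the 24th block was never initialized as a dir block.
	 * When fsck drops one cluster and sets i_size to 5 * 4096 =
	 * 20480, the new size reaches that uninitialized block and
	 * pass2's metaecc check fails with OCFS2_ET_IO.
	 */
	printf("used %llu, allocated %llu, uninitialized %llu\n",
	       (unsigned long long)used_blocks,
	       (unsigned long long)alloc_blocks,
	       (unsigned long long)(alloc_blocks - used_blocks));
	return 0;
}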
> After fixing DIRENT_NOT_DOTTY, were you able to access the file of
> inode 37414 from the directory? Do you know of a situation where,
> after fixing DIRENT_NOT_DOTTY, the file represented by the fixed
> dirent is lost?
I know that fixing DIRENT_NOT_DOTTY will cause files to be lost, but I
think we can add "." and ".." instead of changing existing entries in the
directory into them. Joel also commented on this in bugzilla.
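
To illustrate what I mean (this is only a rough sketch, not code from the
attached patches; the struct below is a simplified stand-in for the
on-disk dirent layout and dot_fill_block() is a made-up helper), fsck
could write fresh "." and ".." entries at the head of the first dir block
rather than renaming an existing entry, which is what loses the file; a
real fix would also have to move whatever dirents already sit there:

#include <stdint.h>
#include <string.h>

/* simplified stand-in for the on-disk dirent layout:
 * 64-bit inode, 16-bit rec_len, name length, file type, name */
struct dirent_disk {
	uint64_t inode;
	uint16_t rec_len;
	uint8_t  name_len;
	uint8_t  file_type;
	char     name[255];
} __attribute__((packed));

#define DIRENT_HEADER	12	/* bytes before the name field */
#define FT_DIR		2	/* "directory" file type */

/* made-up helper: lay out "." and ".." at the start of an empty first
 * directory block, with ".." claiming the rest of the block */
static void dot_fill_block(void *block, uint16_t blocksize,
			   uint64_t dir_ino, uint64_t parent_ino)
{
	struct dirent_disk *dot = block;
	struct dirent_disk *dotdot;

	dot->inode = dir_ino;
	dot->name_len = 1;
	dot->file_type = FT_DIR;
	dot->rec_len = DIRENT_HEADER + 4;	/* header plus "." padded to 4 */
	memcpy(dot->name, ".", 1);

	dotdot = (struct dirent_disk *)((char *)block + dot->rec_len);
	dotdot->inode = parent_ino;
	dotdot->name_len = 2;
	dotdot->file_type = FT_DIR;
	dotdot->rec_len = blocksize - dot->rec_len;
	memcpy(dotdot->name, "..", 2);
}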
> This is a generic question: what is the best way to identify the size
> of the directory for a corrupted directory inode whose reported i_size
> is more than the number of blocks? IOW, how do we know how many blocks
> of the last cluster are used by the directory? Is it safe to assume
> that if, for the first dirent of a block, dirent->inode ==
> dirent->reclen == 0, then the block is unused?
I haven't found the best way to identify the size of a corrupted
directory, but I think that if we find a hole in the directory inode,
then in order to fix it we have to trust i_size. Since we also know the
cluster size and the block size, we can tell which blocks in the last
cluster are used and which are unused.
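
As a rough sketch of that rule (the function names here are made up, just
to show the idea, and the first-dirent check at the end is the heuristic
from your question, not something I have verified against fsck):

#include <stdint.h>

/* is logical block 'blkno' of the directory covered by i_size?
 * blocks past this point in the last cluster are allocated but were
 * never initialized as dir blocks */
static int dir_block_in_use(uint64_t i_size, uint32_t blocksize,
			    uint64_t blkno)
{
	uint64_t valid_blocks = (i_size + blocksize - 1) / blocksize;

	return blkno < valid_blocks;	/* blkno is zero-based */
}

/* simplified view of the start of an on-disk dirent */
struct dirent_head {
	uint64_t inode;
	uint16_t rec_len;
} __attribute__((packed));

/* the heuristic from the question: the first dirent of a block that was
 * never written has inode == 0 and rec_len == 0, while a deleted entry
 * in an initialized block still keeps a nonzero rec_len */
static int dir_block_looks_unused(const void *block)
{
	const struct dirent_head *de = block;

	return de->inode == 0 && de->rec_len == 0;
}

With my numbers, i_size 23552 and block size 1024 give 23 valid blocks,
so the zero-based block 23 (the 24th and last block of the 6th cluster)
is unused.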

Thanks,
Tiger

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: punch_hole.patch
Url: http://oss.oracle.com/pipermail/ocfs2-tools-devel/attachments/20110902/06bf86e3/attachment.pl 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: punchdir.patch
Url: http://oss.oracle.com/pipermail/ocfs2-tools-devel/attachments/20110902/06bf86e3/attachment-0001.pl 

