[Ocfs2-tools-devel] [PATCH 2/5] Add "list-sparse" option in tunefs.ocfs2, take 2
Mark Fasheh
mark.fasheh at oracle.com
Tue Oct 30 22:31:17 PDT 2007
On Wed, Oct 31, 2007 at 01:22:13PM +0800, tao.ma wrote:
>>> +	/*
>>> +	 * If the tail of the file is a hole, let the
>>> +	 * hole length only cover the last i_size.
>>> +	 */
>>> +	if (v_cluster + num_clusters == UINT32_MAX)
>>> +		num_clusters = clusters - v_cluster;
>>>
>>
>> I think you want:
>> 	if (!p_cluster && (v_cluster + num_clusters == UINT32_MAX))
>> 		num_clusters = clusters - v_cluster;
>>
>> Otherwise, we might wrongly clamp the length of the last extent in a very
>> large file.
>>
> Does it really happen? Even if it happens, I think clusters - v_cluster
> still works, since clusters is derived from i_size, and i_size should fall
> within the last allocated cluster; otherwise the file is corrupted.
> But you are somewhat right: the comment is a little confusing, which is
> what led to that reading. ;)
> So I will modify it as you suggested below.
I was mostly thinking of the code below that, which adds num_clusters to
v_cluster to get the next lookup offset. In that case we'd do multiple
lookups within the same extent, which is more of an efficiency problem than
anything else.
>> Really, we could then just combine both if statements into something
>> like this:
>>
>> 	if (!p_cluster) {
>> 		if (v_cluster + num_clusters == UINT32_MAX)
>> 			num_clusters = clusters - v_cluster;
>>
>> 		if (func)
>> 			func(priv_data, v_cluster, num_clusters);
>> 	}
>>
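To make the loop shape concrete, here is a rough sketch of the iteration we
are talking about. This is not the patch code: get_clusters_range() is just a
stand-in name for the libocfs2 extent-map lookup, and the rest is only meant
to show where the clamp and the "next lookup" advance sit relative to each
other.

	#include <stdint.h>

	/*
	 * Stand-in for the real extent-map lookup; p_cluster == 0 means
	 * the range starting at v_cluster is a hole.
	 */
	void get_clusters_range(uint32_t v_cluster, uint32_t *p_cluster,
				uint32_t *num_clusters);

	static void walk_holes(uint32_t clusters,  /* clusters backing i_size */
			       void (*func)(void *priv_data, uint32_t v_cluster,
					    uint32_t num_clusters),
			       void *priv_data)
	{
		uint32_t v_cluster = 0, p_cluster, num_clusters;

		while (v_cluster < clusters) {
			get_clusters_range(v_cluster, &p_cluster, &num_clusters);

			if (!p_cluster) {
				/*
				 * A hole that runs off the end of the file is
				 * reported as reaching UINT32_MAX, so clamp it
				 * to what i_size actually covers.  Testing
				 * !p_cluster first keeps an allocated tail
				 * extent from being clamped by mistake.
				 */
				if (v_cluster + num_clusters == UINT32_MAX)
					num_clusters = clusters - v_cluster;

				if (func)
					func(priv_data, v_cluster, num_clusters);
			}

			/*
			 * This is the "code below" mentioned above: the next
			 * lookup starts right after the extent or hole we
			 * just looked at.
			 */
			v_cluster += num_clusters;
		}
	}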
>>> +static errcode_t list_sparse_file(ocfs2_filesys *fs,
>>> +				  struct ocfs2_dinode *di,
>>> +				  struct list_ctxt *ctxt)
>>> +{
>>> +	errcode_t ret = 0;
>>> +	struct multi_link_file *file = NULL;
>>> +
>>> +	assert(S_ISREG(di->i_mode));
>>> +
>>> +	ctxt->file_hole_len = 0;
>>> +
>>> +	if (di->i_links_count > 1) {
>>> +		file = multi_link_file_lookup(ctxt, di->i_blkno);
>>> +
>>> +		if (file) {
>>> +			ctxt->file_hole_len = file->clusters;
>>> +			ctxt->duplicated = 1;
>>> +			goto print;
>>>
>>
>> Shouldn't we skip printing the same inode number twice?
>>
> We are iterating from "/", not over all the inodes, remember? And we need
> to print the file name as well.
> So I am afraid the user would wonder why some file names were missing when
> going through the whole output.
Ahh, ok. Sounds good then!
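Just to spell out the flow we ended up with, here is a sketch of the
per-name path. multi_link_file_lookup() and file->clusters mirror the hunk
above (minus the ctxt argument, to keep this self-contained); the insert
helper, the hole scan, and the printing are illustrative assumptions, not
the patch.

	#include <stdint.h>
	#include <stdio.h>

	struct multi_link_file {
		uint64_t	blkno;		/* inode block number */
		uint32_t	clusters;	/* cached hole length */
	};

	/* Assumed helpers: a lookup/insert pair keyed by i_blkno. */
	struct multi_link_file *multi_link_file_lookup(uint64_t blkno);
	void multi_link_file_insert(uint64_t blkno, uint32_t clusters);
	uint32_t scan_holes(uint64_t blkno);	/* walk the extent list */

	/*
	 * Every name that resolves to the inode produces a line of output,
	 * but a hard-linked inode is only scanned once; names seen later
	 * reuse the cached hole count.
	 */
	void list_one_name(const char *name, uint64_t blkno, uint16_t links)
	{
		struct multi_link_file *file = NULL;
		uint32_t hole_len;

		if (links > 1)
			file = multi_link_file_lookup(blkno);

		if (file)
			hole_len = file->clusters;
		else {
			hole_len = scan_holes(blkno);
			if (links > 1)
				multi_link_file_insert(blkno, hole_len);
		}

		printf("%s\t%u\n", name, hole_len);
	}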
>>> +	/*
>>> +	 * We use the block cache in io so that extent block
>>> +	 * information can be cached to speed up our extent iteration.
>>> +	 * If io_init_cache fails, we just go on with the listing
>>> +	 * without the io cache, so there is no error check here.
>>> +	 */
>>> +	io_init_cache(fs->fs_io, ocfs2_extent_recs_per_eb(fs->fs_blocksize));
>>>
>>
>> I think it's best if we just do this up in the main body of tunefs.c
>> (after the cluster checks) so that all operations get the benefit of
>> caching. It should also be in a separate patch.
>> 	--Mark
>>
>>
> Hmm, there is the problem of how many blocks tunefs wants. For list-sparse
> we need "ocfs2_extent_recs_per_eb(fs->fs_blocksize)", but the other
> operations may only need 1 or 2 blocks. So I am not sure they can be
> organized together.
I was thinking we'd just pick some sane defaults and go with that. I'm not
really concerned much with memory usage of the cache at this point.
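Something like the following in the tunefs.c setup path, right after the
cluster checks, is all I have in mind. The sizing here just reuses the value
from your patch as an example of a "sane default"; any reasonable number of
blocks would do, and failure stays non-fatal.

	/*
	 * Sketch only: initialize the io cache once, up front, so every
	 * operation gets the benefit of it.  As in the current patch, a
	 * failure simply means we keep going uncached, so the return
	 * value is deliberately ignored.
	 */
	io_init_cache(fs->fs_io,
		      ocfs2_extent_recs_per_eb(fs->fs_blocksize));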
> And I think that is also the reason why the io cache isn't initialized by
> fs_io itself; if it were, all block reads and writes could benefit from it.
> But I don't think that was Joel's original intent, and the operation itself
> should handle the cache size.
I think we didn't want to initialize it in fs_io because there was a worry
that it wouldn't be correct for all possible usages (especially when we
didn't have a cluster lock).
--Mark
--
Mark Fasheh
Senior Software Developer, Oracle
mark.fasheh at oracle.com