[Ocfs2-users] Diagnosing poor write performance

Graeme Donaldson graeme at donaldson.za.net
Mon Apr 4 02:02:49 PDT 2016


On 2 April 2016 at 02:43, Eric Ren <zren at suse.com> wrote:

>
> Hi,
>
>
>> Yes, so using OCFS2 on top of cLVM is a good idea if you want
>> resilience. I'm not sure whether tunefs.ocfs2 can change the block size
>> and the like offline. FWIW, fragmentation is always evil ;-)
>
>
> The files are the code, images, etc. that make up the customer's
> website. I ran something to show me the distribution of file sizes, and
> only around 10% of the files are under 4KB, so I wouldn't think a 4K
> block/cluster size ought to be an issue. Perhaps it's just down to the
> overall size. We're going to see if re-creating the filesystem with a 1K
> block size (the cluster size cannot be smaller than 4K) and making it
> larger makes the issue go away.
>
> For interest's sake, this is the size distribution on the volume. The
> first column is the size in bytes and the second column is the count of
> files that fall in the size range, so there are 545 files of 0 bytes, there
> are 265 files between 16 bytes and 32 bytes, etc.
>
>          0 545
>          1  12
>          2   1
>          8   9
>         16  51
>         32 265
>         64 593
>        128 899
>        256 6902
>        512 1247
>       1024 10290
>       2048 21719
>       4096 46908
>       8192 53438
>      16384 42749
>      32768 68509
>      65536 62462
>     131072 32245
>     262144 13349
>     524288 5458
>    1048576 2193
>    2097152 245
>    4194304  66
>    8388608  15
>   67108864   3
>  268435456   1
>  536870912   1
>
>
> Yes, you're right. Thanks for correcting me. The big idea is that the
> bigger the allocation unit is, the more space will be wasted; the
> smaller the cluster size is, the more easily the disk becomes
> fragmented. So, a 4KB block size is fine because we have the inline-data
> feature; you should try a bigger cluster size if disk space is not a big
> concern.
>
> BTW, could you share how you got those statistics? It's cool!
>
> Eric
>
>
This worked for me; it may need adjustment depending on the versions of
find and awk in use:

    find . -type f -printf "%s\n" | \
      awk '{ size[int(log($1)/log(2))]++ }
           END { for (i in size) printf("%10d %3d\n", 2^i, size[i]) }' | \
      sort -n
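
One caveat: log(0) is undefined, so the 545 zero-byte files on this volume
only land in a sensible bucket by accident of how a given awk handles it.
A variant that gives empty files their own bucket explicitly (same output
format, just a guard added) would be something like:

    find . -type f -printf "%s\n" | \
      awk '$1 == 0 { zero++; next }          # log(0) is undefined, count separately
           { size[int(log($1)/log(2))]++ }   # bucket by floor(log2(size))
           END {
               printf("%10d %3d\n", 0, zero)
               for (i in size) printf("%10d %3d\n", 2^i, size[i])
           }' | \
      sort -n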

We ended up growing the customer's filesystem to 150GB for now, which
appears to have resolved the poor write performance. It would be great to
figure out optimal block and cluster sizes, but I'm not sure we'll be able
to get anything meaningful there any time soon.
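
For the archives, here is roughly what the two options look like on a
cLVM-backed volume. The device name, sizes, and label below are made up,
and you should check the tunefs.ocfs2 and mkfs.ocfs2 man pages for your
ocfs2-tools version before relying on any of this:

    # Grow in place: extend the clustered LV, then grow the filesystem
    # to fill the device. Recent ocfs2-tools can resize online; older
    # versions may require the volume to be unmounted on all nodes.
    lvextend -L 150G /dev/vg0/webdata
    tunefs.ocfs2 -S /dev/vg0/webdata

    # Or recreate with explicit sizes (destroys data -- restore from
    # backup afterwards): 1K blocks with the minimum 4K clusters.
    mkfs.ocfs2 -b 1K -C 4K -L webdata /dev/vg0/webdata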

Thanks for all the assistance!

Graeme