[Ocfs2-users] A Billion Files on OCFS2 -- Best Practices?

Mark Hampton mark at cleverdba.com
Wed Feb 1 10:44:38 PST 2012


On Wed, Feb 1, 2012 at 1:27 PM, Sunil Mushran <sunil.mushran at oracle.com> wrote:

> On 02/01/2012 10:24 AM, Mark Hampton wrote:
>
>> Here's what I got from debugfs.ocfs2 -R "stats".  I have to type it out
>> manually, so I'm only including the "features" lines:
>>
>>    Feature Compat: 3 backup-super strict-journal-super
>>    Feature Incompat: 16208 sparse extended-slotmap inline-data metaecc
>>    xattr indexed-dirs refcount discontig-bg
>>    Feature RO compat: 7 unwritten usrquota grpquota
>>
>>
>> Some other info that may be interesting:
>>
>>    Links: 0   Clusters: 52428544
>>
>
>
> I would disable quotas. That line suggests the vol is 200G in size.
>

OK, I'll try disabling quotas and metaecc.
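
For reference, here's roughly what I'm planning to run. This is only a
sketch: it assumes the volume is unmounted on every node first, the
device path /dev/mapper/vol01 is a placeholder, and I'm not yet sure
this version of tunefs.ocfs2 allows clearing metaecc once it's set.

    # Unmount on all nodes first; feature changes need an offline volume.
    umount /ocfs2/vol01

    # Drop the user/group quota features (RO compat flags).
    tunefs.ocfs2 --fs-features=nousrquota,nogrpquota /dev/mapper/vol01

    # Try to drop metadata checksums too, if the tools permit it.
    tunefs.ocfs2 --fs-features=nometaecc /dev/mapper/vol01

    # Verify the resulting feature flags.
    debugfs.ocfs2 -R "stats" /dev/mapper/vol01 | grep -i feature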

Yes, the filesystem is 200GB.  This particular filesystem is 1 out of 16
OCFS2 filesystems.  This is a 1/5 scale test of what will be in production.

I wonder whether having 16 OCFS2 filesystems is ideal. Sixteen small OCFS2
filesystems definitely performed better than one large OCFS2 filesystem:
writing across the 16 small filesystems turned out to be at least twice as
fast. I don't know what the optimal number is, though.
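
For what it's worth, the write comparison was a simple parallel test
along these lines (a sketch; the mount points /mnt/ocfs2/vol01..vol16
and the 1GB file size are placeholders for our actual workload):

    #!/bin/bash
    # Write one 1GB file per volume in parallel and time the whole batch.
    time {
        for i in $(seq -w 1 16); do
            dd if=/dev/zero of=/mnt/ocfs2/vol$i/testfile \
               bs=1M count=1024 oflag=direct &
        done
        wait
    }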

I noticed that there appear to be separate dlm processes for each
filesystem, so maybe reduced serialization is a factor in the improved
performance?
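
That observation came from counting kernel threads; the exact thread
names vary by kernel version, so this just groups anything dlm- or
ocfs2-related:

    # With the o2cb stack there is typically one dlm domain, with its
    # own set of kernel threads, per mounted ocfs2 volume.
    ps -e -o comm= | grep -Ei 'dlm|ocfs2' | sort | uniq -c | sort -rn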