[Ocfs2-users] Ocfs2 Questions!
Antonis Kopsaftis
akops at edu.teiath.gr
Thu Feb 3 10:27:49 PST 2011
Hello,
My setup consists of 2 backend mail servers running
dovecot/postfix/mailscanner in primary/secondary mode. The primary
server has 6GB of RAM, and the secondary is a virtual machine with 2GB
of RAM. Both servers use an external SAN over fibre. I have over
15,000 users, and performance is very good without any exotic
performance tuning; I just mount the filesystem with noatime. I have
run this setup for almost a year without any problems.
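For reference, a noatime mount looks like this (a sketch only; the device path and mount point are placeholders, not details from this setup):

```shell
# Hypothetical device and mount point -- substitute your own SAN volume.
DEVICE=/dev/mapper/mailvol
MOUNTPOINT=/var/mail

# noatime stops the filesystem from updating access times on every read,
# which on a cluster filesystem saves a metadata write (and a cluster
# lock) for every mail file touched.
OPTS="noatime"

# Equivalent /etc/fstab entry (printed here rather than written):
echo "$DEVICE $MOUNTPOINT ocfs2 _netdev,$OPTS 0 0"

# To apply on a live mount without unmounting:
#   mount -o remount,$OPTS $MOUNTPOINT
```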
After the discovery of the bug in ocfs2 last October, and the
notification from the Oracle people that the bug would not be fixed, I
searched for an alternative filesystem.
So during December I moved all my user files to gfs2. On the same
setup, gfs2 is a little slower than ocfs2.
I have another (non-mail) cluster running ocfs2. Sadly, I have to
change that too...
Regards,
akops
On 3/2/2011 7:45 PM, Henrique Fernandes wrote:
> I have just read some of it; the problem is that it would be too
> difficult for us to migrate now, because they migrated to ocfs2
> quite recently.
>
> Are there no other docs on performance improvements for maildir?
>
> []'sf.rique
>
>
> On Thu, Feb 3, 2011 at 4:40 AM, Antonis Kopsaftis <akops at teiath.gr
> <mailto:akops at teiath.gr>> wrote:
>
> Hello,
>
> You should read the archives of this list from Oct/Nov 2010. The
> ocfs2 1.4 release has the "No space left on device" bug, which
> (according to the Oracle people on this list) will
> not be fixed, as the 2.6.18 kernel that CentOS 5.x ships is very,
> very old.
> You can try installing the Unbreakable Kernel from Oracle, but
> don't count on it; I tried it and my system ended up pretty messed up.
>
> I was using ocfs2 with a setup much like yours
> (dovecot/qmail/mailscanner) in production, and it worked very
> well, but because of the bug I had to switch to another
> filesystem, although I was happy with ocfs2.
>
> akops
>
>
>
> On 3/2/2011 3:10 AM, Henrique Fernandes wrote:
>> CentOS 5.5, ocfs2 1.4.
>> The mail stack (dovecot + postfix + mailscanner) uses about 1 GB
>> and the machine has 1.5 GB; in my tests I am raising it to 3 GB so
>> it has 2 GB for cache.
>>
>> How much memory do you think I should set aside for cache?
>>
>> When we ran some tests, ocfs2 1.4 had better performance than
>> 1.6, but our tests were very simple: one script that writes lots
>> of files and another that reads them back.
>>
>> And it is a problem to get ocfs2 1.6 on this CentOS!!
>>
>> Should I use ocfs2 in production?
>>
>> What about the commit and noatime mount options?
>>
>>
>> thanks!!
>>
>> []'sf.rique
>>
>>
>> On Wed, Feb 2, 2011 at 9:27 PM, Sunil Mushran
>> <sunil.mushran at oracle.com <mailto:sunil.mushran at oracle.com>> wrote:
>>
>> Version? Distro?
>>
>> This workload will benefit a lot from the indexed directories
>> available in ocfs2 1.6 (and mainline and SLES11).
>>
>> The other thing to check is the amount of memory in the virtual
>> machines. Filesystems need memory to cache inodes. If memory is
>> lacking, the inodes are freed and have to be re-read from disk
>> time and again. While this is a problem even on a local fs, it is
>> a bigger problem on a cluster fs, as a cfs also needs to do lock
>> mastery for the same inode time and again.
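A quick way to see whether a node has memory left for caching (a sketch assuming a standard Linux /proc/meminfo; what counts as "small" is left to the reader):

```shell
# Read free and page-cache memory from /proc/meminfo (values in kB).
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "free=${free_kb}kB cached=${cached_kb}kB"

# If 'cached' stays small while I/O wait is high, the node is likely
# evicting inodes and re-reading them (and re-mastering their cluster
# locks) from disk time and again.
```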
>>
>>
>> On 02/02/2011 01:09 PM, Henrique Fernandes wrote:
>>> Hello,
>>>
>>> First of all, I am new to the list and I have several questions
>>> about ocfs2 performance.
>>>
>>> Where I work, we are having huge performance problems with ocfs2.
>>>
>>> Let me describe my environment.
>>>
>>> 3 Xen virtual machines with ocfs2 mounting a LUN exported over
>>> iSCSI (actually 3 LUNs, 3 ocfs2 clusters).
>>>
>>> I am not the one who configured the environment, but it is
>>> making the performance of my MAIL system very bad.
>>>
>>> We have about 9k accounts, but only 4k are active. It is a
>>> maildir system (postfix + dovecot).
>>>
>>> Now that these performance problems are affecting my system, I
>>> am going to try to help tune ocfs2.
>>>
>>> Pretty much all default settings.
>>>
>>> OCFS2 is mounted in ordered data mode. We know that changing to
>>> writeback would make performance much better, but we are not
>>> willing to lose any data, so it is not an option.
>>>
>>> Now we are going to add the noatime mount option. Will this
>>> improve performance?
>>>
>>> Also, what about the commit mount option? The default is 5s; if
>>> I increase it, what is the potential data loss if we lose power?
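To make that trade-off concrete, a sketch of the commit interval (the 15s value and the mount point are illustrative, not recommendations from this thread):

```shell
# The commit=N mount option flushes the journal at most every N seconds.
# A power loss can therefore cost up to roughly N seconds of acknowledged
# writes, so commit=15 triples the default 5s data-loss window in
# exchange for fewer journal flushes.
COMMIT=15
DEFAULT_COMMIT=5
echo "worst-case loss window grows from ${DEFAULT_COMMIT}s to ${COMMIT}s"

# Applied to a live mount (mount point is hypothetical):
#   mount -o remount,commit=15 /var/mail
```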
>>>
>>> Does anyone have any other parameters that could help us?
>>>
>>> Another data point: the incremental backup takes 10 to 12 hours.
>>>
>>> All nodes have VERY high I/O wait.
>>>
>>> Thanks to all!!
>>>
>>> If you could point me to any docs I should read, that would be
>>> nice too!
>>>
>>> []'sf.rique
>>>
>>>
>>> _______________________________________________
>>> Ocfs2-users mailing list
>>> Ocfs2-users at oss.oracle.com <mailto:Ocfs2-users at oss.oracle.com>
>>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>>
>>
>>
>
> --
>
> ============================================================
> ____________
> | __________ |\\ .....................................
> || 0 0 || | . . Antonis Kopsaftis . .
> || J || | . . System Administrator . .
> || [___] || | . . Network Management Center . .
> ||__________|| | . . TEI of Athens . .
> | __________ | | . . 210-5385790 . .
> | ______==== | | . . akops at teiath.gr <mailto:akops at teiath.gr> . .
> | __________ | | . . VMware Certified Professional . .
> |____________|/ .....................................
>
> =============================================================
>
>
>
>