I just read some of it; the problem is that it would be too difficult for us to migrate now, because we moved to ocfs2 quite recently.<br><br>Are there any other docs on performance improvement for maildir?<br><br clear="all">[]'sf.rique <br>
<br><br><div class="gmail_quote">On Thu, Feb 3, 2011 at 4:40 AM, Antonis Kopsaftis <span dir="ltr"><<a href="mailto:akops@teiath.gr">akops@teiath.gr</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div bgcolor="#ffffff" text="#000000">
Hello,<br>
<br>
You should read the archives of this list from Oct/Nov 2010. ocfs2
1.4 has the "No space left on device" bug which (according to
Oracle people on this list) will<br>
not be fixed, as the 2.6.18 kernel that CentOS 5.x ships is very,
very old.<br>
You can try to install the Unbreakable Kernel from Oracle, but don't
count on it. I tried it and my system was pretty messed up.<br>
<br>
I was using ocfs2 with a setup much like
yours (dovecot/qmail/mailscanner) in production, and it worked very
well, but because of the bug I had to switch to another<br>
filesystem, although I was happy with ocfs2.<br>
<br>
akops<div><div></div><div class="h5"><br>
<br>
<br>
On 3/2/2011 3:10 AM, Henrique Fernandes wrote:
<blockquote type="cite">CentOS 5.5<br>
ocfs2 1.4<br>
The mail stack (dovecot+postfix+mailscanner) uses about 1 GB; the
machine has 1.5 GB. In my tests I am raising it to 3 GB so it has
2 GB available for cache.<br>
<br>
How much memory do you think I should set aside for cache?<br>
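For what it's worth, a rough way to see how much memory is actually going to filesystem caches on each node (slab names vary by kernel; I'm assuming ocfs2's inode slab, usually `ocfs2_inode_cache` in `slabtop`, is the one to watch):

```shell
# Show kernel slab and page-cache figures; compare across nodes while
# the mail load runs to see whether inode/dentry caches get squeezed out.
grep -E '^(Cached|Slab|SReclaimable)' /proc/meminfo
```

If these numbers shrink sharply under load, more RAM for the VM would likely keep more inodes cached.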
<br>
When we ran some tests, ocfs2 1.4 had better performance than 1.6,
but our tests were very simple: one script that writes lots of files
and another that reads them back.<br>
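For reference, a minimal sketch of that kind of test (the `BENCH` path is hypothetical; point it at the ocfs2 mount):

```shell
# Write many small files, then read them back, timing each phase.
# Lots of small files roughly mimics a maildir workload, but this is
# still a very synthetic test.
BENCH=${BENCH:-/tmp/ocfs2-bench}
mkdir -p "$BENCH"
time sh -c "for i in \$(seq 1 1000); do echo data > \"$BENCH\"/f\$i; done"
time sh -c "cat \"$BENCH\"/f* > /dev/null"
```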
<br>
And it is a problem to get ocfs2 1.6 on this CentOS!<br>
<br>
Should I use ocfs2 in production?<br>
<br>
How about the commit and noatime mount options?<br>
<br>
<br>
thanks!!<br>
<br clear="all">
[]'sf.rique <br>
<br>
<br>
<div class="gmail_quote">On Wed, Feb 2, 2011 at 9:27 PM, Sunil
Mushran <span dir="ltr"><<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div bgcolor="#ffffff" text="#000000"> Version? Distro?<br>
<br>
This workload will benefit a lot from the indexed
directories available in<br>
ocfs2 1.6 (and mainline and SLES 11).<br>
<br>
The other thing to check is the amount of memory in the virtual
machines.<br>
File systems need memory to cache the inodes. If memory is
lacking,<br>
the inodes are freed and have to be re-read from disk time
and again.<br>
While this is a problem even in a local fs, it is a bigger
problem in a cfs<br>
as a cfs needs to do lock mastery for the same inode time
and again.
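One knob that bears on this (my addition, not something Sunil recommended, so benchmark before relying on it) is `vm.vfs_cache_pressure`, which controls how aggressively the kernel reclaims cached inodes/dentries:

```shell
# Default is 100; lower values make the kernel prefer keeping
# inode/dentry caches over page cache when reclaiming memory.
cat /proc/sys/vm/vfs_cache_pressure
# As root, to keep inodes cached longer (verify against your workload):
# echo 50 > /proc/sys/vm/vfs_cache_pressure
```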
<div>
<div><br>
<br>
On 02/02/2011 01:09 PM, Henrique Fernandes wrote: </div>
</div>
<blockquote type="cite">
<div>
<div>Hello,<br>
<br>
First of all, I am new to the list and I have several
questions about ocfs2 performance.<br>
<br>
Where I work, we are having huge performance
problems with ocfs2.<br>
<br>
Let me describe my environment.<br>
<br>
3 Xen virtual machines with ocfs2 mounting a LUN
exported over iSCSI (actually 3 LUNs, 3 ocfs2
clusters).<br>
<br>
I am not the one who configured the environment, but it
is making the performance of my MAIL system very bad.<br>
<br>
We have about 9k accounts but only 4k are active. It is a
maildir system (postfix + dovecot).<br>
<br>
Now that these performance problems are affecting my
system, I am going to try to help tune ocfs2.<br>
<br>
Pretty much all default settings.<br>
<br>
ocfs2 is configured to write in ordered mode. We
know that changing to writeback would make performance
much better, but we are not willing to lose any data,
so it is not an option.<br>
<br>
Now we are going to add the noatime option to the
mount. Does this improve performance?<br>
<br>
Another one: how about the commit mount option? The
default is 5s; if I increase it, what is the
potential data loss in case we lose power?<br>
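To make the question concrete, here is a sketch of the kind of fstab entry being considered (device and mountpoint are examples, not the real ones); with commit=15, the worst case on power loss should be roughly the last 15 seconds of acknowledged writes:

```shell
# Hypothetical /etc/fstab line; _netdev because the LUN arrives over iSCSI.
# noatime skips atime updates on every read; commit=15 raises the journal
# flush interval from the 5s default (so up to ~15s of data is at risk).
/dev/sdb1  /var/vmail  ocfs2  _netdev,noatime,commit=15  0 0
```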
<br>
Does anyone have any other parameter that could help
us?<br>
<br>
Another data point: the incremental backup is taking 10 to
12 hours.<br>
<br>
All nodes have VERY high I/O wait.<br>
<br>
Thanks to all!!<br>
<br>
If you could point me to any doc that I should read,
that would be nice too!<br>
<br>
<br>
<br clear="all">
[]'sf.rique <br>
</div>
</div>
<pre><fieldset></fieldset>
_______________________________________________
Ocfs2-users mailing list
<a href="mailto:Ocfs2-users@oss.oracle.com" target="_blank">Ocfs2-users@oss.oracle.com</a>
<a href="http://oss.oracle.com/mailman/listinfo/ocfs2-users" target="_blank">http://oss.oracle.com/mailman/listinfo/ocfs2-users</a></pre>
</blockquote>
<br>
</div>
</blockquote>
</div>
<br>
</blockquote>
<br>
</div></div><pre cols="72">--
============================================================
____________
| __________ |\\ .....................................
|| 0 0 || | . . Κοψαύτης Αντώνης . .
|| J || | . . System Administrator . .
|| [___] || | . . Κέντρο Διαχείρισης Δικτύου . .
||__________|| | . . ΤΕΙ Αθήνας . .
| __________ | | . . 210-5385790 . .
| ______==== | | . . <a href="mailto:akops@teiath.gr" target="_blank">akops@teiath.gr</a> . .
| __________ | | . . VMware Certified Professional . .
|____________|/ .....................................
=============================================================
</pre>
</div>
</blockquote></div><br>