Lorenzo,

My 2 cents. This is pure speculation, since I have never worked with an environment like yours.

Your configuration is very different from what is usual with OCFS2. The filesystem is tested on systems with a fast network connection between the nodes, so it is probably not tuned for environments where network bandwidth is low.

You might get some improvement by changing some of the VM tunables (/proc/sys/vm/*). On 2.6 there are not many of them for the filesystem cache, and some seem to have no effect.

vfs_cache_pressure could give you some control over the number of inodes the kernel caches. Try both increasing and decreasing it. Some of the problems you describe, like the long umount time, may actually be caused by keeping a large number of structures in memory: since the network is slow, clearing all of them takes a long time.
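For example, something along these lines (only a rough sketch, untested in your environment; the value 200 is just a starting point to experiment with, not a recommendation):

    # Check the current value (the 2.6 default is 100).
    cat /proc/sys/vm/vfs_cache_pressure

    # Values above 100 make the kernel reclaim cached dentries/inodes
    # more aggressively; values below 100 make it hold on to them longer.
    sysctl -w vm.vfs_cache_pressure=200

    # Add "vm.vfs_cache_pressure = 200" to /etc/sysctl.conf to keep the
    # setting across reboots.

Try it in both directions: a lower value might speed up directory browsing (more inodes stay cached), while a higher one might shorten the umount (fewer structures left to tear down over the slow link).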
Swappiness controls how aggressively pages are swapped out. Since you don't have swap on the OCFS2 filesystem, it should not have much impact. (You don't have swap there, right?) But you may be able to force the kernel to release memory so that it can be used by OCFS2. Then again, this could make the problem worse.

There used to be a parameter, dcache_priority, that controlled how much of the cache was used for inodes and how much for data blocks, but it is no longer available on 2.6 and I could not find an equivalent.
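If you want to see how much memory those inodes and dentries actually occupy, and to test whether flushing them helps the umount, something like this should work on your 2.6.26 kernel (again just a sketch; drop_caches only discards clean caches, but do it off-peak anyway):

    # Watch the dentry and inode slab caches grow and shrink.
    grep -E 'dentry|inode' /proc/slabinfo
    # Or interactively, sorted by cache size:
    slabtop -s c

    # Free cached dentries and inodes, then retry the slow operation
    # (e.g. the umount) to see if the cached structures are to blame.
    sync
    echo 2 > /proc/sys/vm/drop_caches

    # Current swappiness (0-100, default 60); higher values swap
    # anonymous pages out sooner, leaving more RAM for caches.
    sysctl vm.swappiness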
Regards,
Luis

--- On Tue, 12/2/08, Lorenzo Milesi <lorenzo.milesi@yetopen.it> wrote:

> From: Lorenzo Milesi <lorenzo.milesi@yetopen.it>
> Subject: [Ocfs2-users] Please urgent help required - OCFS2 and VPN again
> To: ocfs2-users@oss.oracle.com
> Date: Tuesday, December 2, 2008, 8:59 AM
>
> Hi all...
>
> I already wrote to the list about the solution I have at a customer
> running DRBD8+OCFS2 on two remote sites connected via VPN. The
> various suggestions helped improve the situation, but we are still
> having big trouble. We have also replaced the old server with a new,
> much more powerful one, but there was nearly no improvement at all!
> The situation is summarized as:
>
> SITE A: Dual Core 2GHz Pentium, 1GB RAM, 1 SATA disk for /, 3 SATA
> disks in software RAID5, DRBD on /dev/md0.
> SITE B: Quad Core 2.4GHz Pentium, 2GB RAM, 3 SATA disks in software
> RAID5, DRBD on /dev/md1.
>
> The two sites are connected through two ADSL lines, with two bonded
> VPNs.
>
> Both machines run Debian Etch, fully updated: kernel 2.6.26-bpo.1-686
> SMP with the deadline scheduler, DRBD 8.0.13, OCFS2 1.4.1-1. The
> shared data partition is 187GB, 30GB of which are used.
>
> The recent upgrade to OCFS2 1.4 and kernel 2.6.26 didn't improve
> performance as much as I expected.
>
> The main problems we have are:
> 1. Very high load average: this was previously caused by very high
> iowait percentages, but with the new server the load is high while
> top says the machine is 99-100% idle!
> 2. Very slow directory browsing: Sunil pointed me to the user guide,
> where he talks about inode stat. How can I raise the inode cache
> memory? I've done several searches without result... The server
> actually uses less than 300MB of the 1GB of RAM installed...
> 3. Very long umount time: I often (not always) experience an
> extremely long umount. While the process is running, iftop shows
> heavy network transfer. I suppose it's transferring file locks, but
> is it possible that it stays stuck for more than an hour, and
> counting?
>
> This is the OCFS2 configuration file. The quad-core is file-server-2.
>
> # /etc/ocfs2/cluster.conf
> node:
>         ip_port = 7777
>         ip_address = 192.168.0.1
>         number = 0
>         name = file-server-1
>         cluster = ocfs2
> node:
>         ip_port = 7777
>         ip_address = 192.168.2.31
>         number = 1
>         name = file-server-2
>         cluster = ocfs2
> cluster:
>         node_count = 2
>         name = ocfs2
>
> What is stunning me is that on file-server-2 we run a nightly rsync
> backup to a local machine on the network and it takes less than 20
> minutes! Doing the same on the other server sends the load average
> through the roof!
>
> We are in a critical situation, because this solution was deployed a
> long time ago and it is still not working as expected. If nobody has
> suggestions, we have no problem paying for qualified support to solve
> these problems; in that case please contact me directly.
> Sunil, can I get Oracle support for this?
>
> Thank you.
> --
> Lorenzo Milesi - lorenzo.milesi@yetopen.it
>
> YetOpen S.r.l. - http://www.yetopen.it/
> C.so E. Filiberto, 74 - 23900 Lecco - ITALY
> Tel 0341 220 205 - Fax 178 607 8199
>
> GPG/PGP Key-Id: 0xE704E230 - http://keyserver.linux.it
>
> -------- D.Lgs. 196/2003 --------
>
> Please note that all information contained in this message is
> confidential and for the exclusive use of the addressee. If you have
> received this message in error, please delete it without copying it,
> do not forward it to third parties, and notify us as soon as
> possible. Thank you.