[Ocfs2-users] mem usage

Brian Kroth bpkroth at gmail.com
Mon Jan 5 09:24:38 PST 2009


I've got a question about tuning mem usage which may or may not be ocfs2
related.  I have some VMs that share an iSCSI device formatted with
ocfs2.  They're all running a Debian-based 2.6.26 kernel.  We basically
just dialed the kernel timer frequency down to 100HZ rather than the
default 1000HZ.
Everything else is the same.  All the machines have 2 CPUs and 2GB of
RAM.

Over time I would expect that the amount of free mem decreases towards 0
and the amount of (fs) cached mem increases.  I think one can simulate
this by doing the following:
echo 1 > /proc/sys/vm/drop_caches   # 1 drops the page cache; 3 would also drop dentries/inodes
ls -lR /ocfs2/ > /dev/null          # walk the tree to repopulate the caches
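
For reference, one way to check whether the walk is actually populating
the caches is to watch the dentry and inode slabs while it runs (the
grep pattern is just a guess, since slab cache names vary by kernel):

watch -n1 "grep -E 'dentry|inode_cache' /proc/slabinfo"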

When I do this on a physical machine with a large ext3 volume, the
cached field steadily increases as I expected.  However, on the ocfs2
volume what I actually see is that the free mem and cached mem remain
fairly constant.

# free
             total       used       free     shared    buffers     cached
Mem:       2076376     989256    1087120          0     525356      33892
-/+ buffers/cache:     430008    1646368
Swap:      1052216          0    1052216

# top -n1  | head -n5
top - 11:11:19 up 2 days, 20:26,  2 users,  load average: 1.45, 1.39, 1.26
Tasks: 140 total,   1 running, 139 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.8%us,  3.7%sy,  0.0%ni, 79.5%id, 15.2%wa,  0.1%hi,  0.6%si,  0.0%st
Mem:   2076376k total,   989772k used,  1086604k free,   525556k buffers
Swap:  1052216k total,        0k used,  1052216k free,    33920k cached

I've tried to trick the machines into allowing for more cached inodes by
decreasing vfs_cache_pressure, but it doesn't seem to have had much
effect.  I also get the same results if only one machine has the ocfs2
fs mounted.  I have also tried mounting it with the localalloc=16 option
that I found in a previous mailing list post.
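
For reference, that looks roughly like the following (the device path is
a placeholder and 10 is just an example value; the vfs_cache_pressure
default is 100):

echo 10 > /proc/sys/vm/vfs_cache_pressure
mount -t ocfs2 -o localalloc=16 /dev/sdX /ocfs2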

The ocfs2 filesystem is 2TB and has about 600GB of maildirs on it (many
small files).  The ext3 volume is about 200GB and has a couple of
workstation images on it (a mix of file sizes).

I haven't yet been able to narrow down whether this is VM vs. physical,
ocfs2 vs. ext3, iSCSI vs. local, or something else.

Has anyone else seen similar results or have some advice as to how to
improve the situation?

Thanks,
Brian
