[Ocfs2-users] mem usage

Brian Kroth bpkroth at gmail.com
Mon Jan 5 13:02:19 PST 2009


I had read in the past that lots of RAM would be helpful for caching
inodes, locks, and whatnot, so it concerned me that the machines didn't
appear to be using it.

As for my goals, the machines will be hosting maildirs, so I think that
caching directories would be of most use.
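
So far the only knob I've found for nudging the VM towards keeping
directory entries and inodes around is vfs_cache_pressure.  For
reference, this is roughly what I've been setting (the exact value is
just something I picked; the default is 100), without much visible
effect so far:

    sysctl -w vm.vfs_cache_pressure=10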

Although I agree that ls -lR is perhaps not the best test, I saw similar
results with other VMs that didn't have ocfs2, so I'm leaning towards an
iSCSI/VM problem and not ocfs2.  However, getting it to cache more
directories would still be of interest to me.
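
In case it helps, this is roughly how I've been checking whether the
ls -lR actually populates the dentry/inode slabs Herbert mentions below
(the cache names here are a guess and may differ per kernel/filesystem):

    echo 3 > /proc/sys/vm/drop_caches    # drop pagecache plus dentries/inodes
    grep -E 'dentry|ocfs2_inode|ext3_inode' /proc/slabinfo
    ls -lR /ocfs2/ > /dev/null
    grep -E 'dentry|ocfs2_inode|ext3_inode' /proc/slabinfo

If the scan is being cached at all, the object counts in the second
grep should come out much higher.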

I'll have to report back later about the number of disk reads.
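
When I do, I'll probably just diff the per-device read counters around a
test run, something along these lines (sdb is just a stand-in for
whatever the iSCSI device shows up as):

    awk '$3 == "sdb" { print $4, $6 }' /proc/diskstats   # reads completed, sectors read
    ls -lR /ocfs2/ > /dev/null
    awk '$3 == "sdb" { print $4, $6 }' /proc/diskstats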

Thanks,
Brian

Herbert van den Bergh <herbert.van.den.bergh at oracle.com> 2009-01-05 10:09:
>
> The ls -lR command will only access directory entries and inodes.  These  
> are cached in slabs (see /proc/slabinfo).  Not sure what happens to the  
> disk pages that they live in, but those disk pages can be discarded  
> immediately after the slab caches have been populated.  So it's probably  
> not a good test of filesystem data caching.  You may want to explain  
> what you hope to achieve with this: more cache hits on directory and  
> inode entries, or on file data?  Are you seeing more disk reads in this  
> configuration than in the one you're comparing with?
>
> Thanks,
> Herbert.
>
>
> Brian Kroth wrote:
>> I've got a question about tuning mem usage which may or may not be ocfs2
>> related.  I have some VMs that share an iSCSI device formatted with
>> ocfs2.  They're all running a Debian-based 2.6.26 kernel.  We basically
>> just dialed the kernel timer down to 100 Hz rather than the default 1000 Hz.
>> Everything else is the same.  All the machines have 2 CPUs and 2GB of
>> RAM.
>>
>> Over time I would expect that the amount of free mem decreases towards 0
>> and the amount of (fs) cached mem increases.  I think one can simulate
>> this by doing the following:
>>
>>     echo 1 >> /proc/sys/vm/drop_caches
>>     ls -lR /ocfs2/ > /dev/null
>>
>> When I do this on a physical machine with a large ext3 volume, the
>> cached field steadily increases as I expected.  However, on the ocfs2
>> volume what I actually see is that the free mem and cache mem remain
>> fairly constant.
>>
>> # free
>>              total       used       free     shared    buffers     cached
>> Mem:       2076376     989256    1087120          0     525356      33892
>> -/+ buffers/cache:     430008    1646368
>> Swap:      1052216          0    1052216
>>
>> # top -n1  | head -n5
>> top - 11:11:19 up 2 days, 20:26,  2 users,  load average: 1.45, 1.39, 1.26
>> Tasks: 140 total,   1 running, 139 sleeping,   0 stopped,   0 zombie
>> Cpu(s):  0.8%us,  3.7%sy,  0.0%ni, 79.5%id, 15.2%wa,  0.1%hi,  0.6%si,  0.0%st
>> Mem:   2076376k total,   989772k used,  1086604k free,   525556k buffers
>> Swap:  1052216k total,        0k used,  1052216k free,    33920k cached
>>
>> I've tried to trick the machines into allowing for more cached inodes by
>> decreasing vfs_cache_pressure, but it doesn't seem to have had much
>> effect.  I also get the same results if only one machine has the ocfs2
>> fs mounted.  I have also tried mounting it with the localalloc=16 option
>> that I found in a previous mailing list post.
>>
>> The ocfs2 filesystem is 2TB and has about 600GB of maildirs on it (many
>> small files).  The ext3 machine is about 200GB and has a couple of
>> workstation images on it (a mix of file sizes).
>>
>> I haven't yet been able to narrow down whether this is VM vs. physical,
>> ocfs2 vs. ext3, iSCSI vs. local, or something else.
>>
>> Has anyone else seen similar results or have some advice as to how to
>> improve the situation?
>>
>> Thanks,
>> Brian
>>


