[Ocfs2-users] Slow OCFS2 on very high-end hardware

Marek Królikowski admin at wset.edu.pl
Mon Dec 5 09:15:31 PST 2011


>> >> Hello
>> >> Today I created a cluster with OCFS2.
>> >> I named the servers MAIL1 and MAIL2.
>> >> Both connect via HBA cards with two 4Gbit/s links to EMC storage with FC
>> >> RAID10.
>> >> Both connect to the same Cisco switch over 1Gbit/s links.
>> >> The hardware is awesome, but OCFS2 is very slow.
>> >> I use Gentoo Linux with kernel 3.0.6 and ocfs2-tools-1.6.4. This will be a
>> >> postfix/imap/pop3 cluster with maildir support, so there will be many,
>> >> many directories and small files.
>> >> I linked /home to my OCFS2 volume and ran a few tests, but it is very slow...
>> >> When I write any file on server MAIL1 and try to check a mailbox from MAIL2,
>> >> it is amazingly slow...
>> >I've gotta ask, what is "amazingly slow" to you?  A cluster
>> >filesystem accessing the same files from two places is necessarily
>> >slower than local access.  But if it is slow enough that you notice it
>> >by hand, it's probably something in the configuration.
>> >Did you select the 'mail' filesystem type when creating the
>> >filesystem?  This probably shouldn't affect your simple test, but it
>> >will absolutely help as your system grows.
>> When I copy any file from/to MAIL1 and then enter the home directory (using MC),
>> which holds 7000 users, on MAIL2, I have to wait 30+ seconds... Normally, when I
>> am not copying or writing on OCFS2, I wait about 3 seconds.
>
>Yeah, that seems really long.  Are you using indexed
>directories?  I don't think your multipath setup is the problem.

Hello
Yes, I use indexed directories - I created a new OCFS2 partition again with:
mkfs.ocfs2 -N 2 -L MAIL --fs-features=backup-super,usrquota,indexed-dirs --fs-feature-level=max-features -v /dev/dm-0
and got exactly the same problem...
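
For what it is worth, the 'mail' type suggested earlier in the thread is selected
at mkfs time with -T; a minimal sketch of that variant, plus a read-only check of
which features actually ended up on the volume, assuming the same label and device
as above (re-running mkfs destroys the existing data, so this only applies if the
volume can still be recreated):

# hypothetical re-run using the 'mail' profile instead of max-features
mkfs.ocfs2 -N 2 -T mail -L MAIL \
    --fs-features=backup-super,usrquota,indexed-dirs -v /dev/dm-0

# read-only query of the superblock, showing the enabled feature flags
debugfs.ocfs2 -R "stats" /dev/dm-0 | grep -i feat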

But when I entered this directory after the 30-second wait, I saw this in dmesg:
rm              D ffff88107f3b25c0     0  8447  13527 0x00000000
ffff88101542c040 0000000000000082 ffff881020053080 00000000000125c0
ffff880401a8ffd8 00000000000125c0 00000000000125c0 00000000000125c0
ffff880401a8e000 00000000000125c0 ffff880401a8ffd8 00000000000125c0
Call Trace:
[<ffffffff8148148d>] ? schedule_timeout+0x1ed/0x2e0
[<ffffffff81480932>] ? __schedule+0x3a2/0x6b0
[<ffffffffa0a358bf>] ? dlmlock+0x7f/0xb70 [ocfs2_dlm]
[<ffffffff81480e0a>] ? wait_for_common+0x13a/0x190
[<ffffffff8104bc50>] ? try_to_wake_up+0x280/0x280
[<ffffffffa0902a38>] ? __ocfs2_cluster_lock.clone.21+0x1d8/0x6b0 [ocfs2]
[<ffffffff81480e2f>] ? wait_for_common+0x15f/0x190
[<ffffffffa0902fcc>] ? ocfs2_inode_lock_full_nested+0xbc/0x490 [ocfs2]
[<ffffffffa091dc1b>] ? ocfs2_lookup_lock_orphan_dir+0x6b/0x1b0 [ocfs2]
[<ffffffffa091f4ba>] ? ocfs2_prepare_orphan_dir+0x4a/0x280 [ocfs2]
[<ffffffffa092016f>] ? ocfs2_unlink+0x6ef/0xb90 [ocfs2]
[<ffffffff811b35a9>] ? may_link.clone.22+0xd9/0x170
[<ffffffff8113aa58>] ? vfs_unlink+0x98/0x100
[<ffffffff8113ac41>] ? do_unlinkat+0x181/0x1b0
[<ffffffff8113e7cd>] ? vfs_readdir+0x9d/0xe0
[<ffffffff811653d8>] ? fsnotify_find_inode_mark+0x28/0x40
[<ffffffff8148aa12>] ? system_call_fastpath+0x16/0x1b
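
The trace shows rm stuck in ocfs2_unlink -> ocfs2_prepare_orphan_dir waiting on a
cluster (DLM) lock, so the next time it hangs, something like the following might
show which lock resource it is waiting on (a sketch, assuming debugfs.ocfs2 from
ocfs2-tools and that the kernel debug filesystem is available):

# make the kernel debug filesystem available, if it is not already mounted
mount -t debugfs debugfs /sys/kernel/debug 2>/dev/null

# dump the filesystem lock resources along with their holders and waiters
debugfs.ocfs2 -R "fs_locks" /dev/dm-0 | less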

Thanks



