[Ocfs2-users] ocfs2 become very slow

Dmitry Rybin kirgudu at kirgudu.org
Mon Dec 14 23:08:24 PST 2009


In September 2009 I updated ocfs2 from 1.4.1 to 1.4.4-1.
In November 2009 I updated the RHEL kernel from 2.6.18-128 to
2.6.18-164.6.1.el5 (5.3 to 5.4).

I can't say exactly, but I think it was a gradual slowdown.

2 TB LVM partition (not CLVM!)
directories ~ 300'000
files ~ 30'000'000

Usually over 500 files per dir.
Structure:
mail(dir)->aa.sub(600 dirs)->user(300-500 dirs)->maildirs(dirs)->files
The OCFS2 volume is shared out over NFS.

On the memory question: I tried upgrading to 16 GB, and the result was the same.
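With ~30 million files, much of the "Slab" memory in /proc/meminfo further down is probably dentry and ocfs2 inode caches. A hedged sketch of how one might check: on the live node this would be `grep -E 'dentry|ocfs2' /proc/slabinfo` (as root); below, the same filter runs over an invented excerpt so it can be demonstrated without root. All figures in `sample` are made up for illustration only.

```shell
# Sketch: find out which slab caches hold the memory reported in /proc/meminfo.
# On the real node (as root):  grep -E 'dentry|ocfs2' /proc/slabinfo
# The cache sizes below are hypothetical, not taken from the system above.
sample='dentry_cache      5000000 5200000  216
ocfs2_inode_cache 3000000 3100000 1664
ext3_inode_cache      100     120  800'

# Keep only the caches relevant to this workload.
printf '%s\n' "$sample" | grep -E 'dentry|ocfs2'
```

If those caches dominate, that points at metadata pressure (many small files) rather than raw I/O throughput.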

I use a standard system, only with the HP multipath and Emulex drivers for
the FC disk storage.

OCFS2 creation options:
mkfs.ocfs2 -T mail -N 2 -L clmail --fs-features=inline-data -J
size=512M /dev/eva4400/mail

Maybe the journal is too small, or is LVM to blame?
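On the journal question, the per-slot journal size can be inspected and changed offline. A hedged sketch, assuming the device path from the mkfs line above; since `debugfs.ocfs2`/`tunefs.ocfs2` must run against the real (and, for resizing, unmounted-everywhere) volume, the script only prints the commands it would run:

```shell
# Sketch: inspect and resize the OCFS2 journal. This is a dry run that
# echoes the commands instead of executing them, because they need the
# actual ocfs2 volume (and tunefs.ocfs2 needs it unmounted on all nodes).
DEV=/dev/eva4400/mail   # device path taken from the mkfs.ocfs2 line above

# Show the journal system file for slot 0 (each node slot has one journal).
echo "debugfs.ocfs2 -R 'stat //journal:0000' $DEV"

# Change the journal size, e.g. to 256M, with the volume unmounted everywhere.
echo "tunefs.ocfs2 -J size=256M $DEV"
```

Whether 512M is actually too large or too small for this workload is a question for the list; the sketch only shows where to look.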

2009/12/15 Sunil Mushran <sunil.mushran at oracle.com>:
> Did you make any changes just before you noticed the slowdown,
> or was this a gradual slowdown?
>
> how many files do you have in a typical directory?
>
> Dmitry Rybin wrote:
>>
>> Hello!
>>
>> I have a problem with OCFS2 on a mail storage array (HP EVA 4400), 2 TB
>> (2 LVM PVs of 1 TB each). At first OCFS2 performed well (100-150 MB/s
>> read/write), but now it does only 5-10 MB/s. :( Reads from the raw
>> device still reach 80-100 MB/s. With a single node the problem is the
>> same. Very, very slow.
>>
>> I stopped all processes, unmounted OCFS2, and ran fsck.ocfs2 -- no
>> result. Please advise me what to do. Go back to GFS? :(
>>
>>
>> 2 identical nodes
>>
>> $ df -h
>> /dev/mapper/eva4400-mail    2.2T  1.3T  932G  59% /mnt/hp
>>
>> over 30'000'000 files (mail storage)
>>
>> Linux megastorage 2.6.18-164.6.1.el5 #1 SMP Tue Nov 3 16:12:36 EST
>> 2009 x86_64 x86_64 x86_64 GNU/Linux (RHEL/CentOS 5.4)
>> ocfs2-tools-1.4.3-1.el5
>> ocfs2-2.6.18-164.6.1.el5-1.4.4-1.el5
>>
>> FS was made with -T mail.
>>
>> $ mount
>> /dev/mapper/eva4400-mail on /mnt/hp type ocfs2
>> (rw,_netdev,noatime,heartbeat=local)
>>
>> $ debugfs.ocfs2 -R "stats" /dev/mapper/eva4400-mail
>>        Revision: 0.90
>>        Mount Count: 0   Max Mount Count: 20
>>        State: 0   Errors: 0
>>        Check Interval: 0   Last Check: Mon Dec 14 22:22:38 2009
>>        Creator OS: 0
>>        Feature Compat: 3 backup-super strict-journal-super
>>        Feature Incompat: 80 sparse inline-data
>>        Tunefs Incomplete: 0
>>        Feature RO compat: 1 unwritten
>>        Root Blknum: 5   System Dir Blknum: 6
>>        First Cluster Group Blknum: 3
>>        Block Size Bits: 12   Cluster Size Bits: 12
>>        Max Node Slots: 2
>>        Extended Attributes Inline Size: 0
>>        Label: clmail
>>        UUID: D423B00940564F968D999FE698D6DADC
>>        UUID_hash: 0 (0x0)
>>        Cluster stack: classic o2cb
>>        Inode: 2   Mode: 00   Generation: 2801943371 (0xa702434b)
>>        FS Generation: 2801943371 (0xa702434b)
>>        CRC32: 00000000   ECC: 0000
>>        Type: Unknown   Attr: 0x0   Flags: Valid System Superblock
>>        Dynamic Features: (0x0)
>>        User: 0 (root)   Group: 0 (root)   Size: 0
>>        Links: 0   Clusters: 585103360
>>        ctime: 0x4aaa1a74 -- Fri Sep 11 13:37:56 2009
>>        atime: 0x0 -- Thu Jan  1 03:00:00 1970
>>        mtime: 0x4aaa1a74 -- Fri Sep 11 13:37:56 2009
>>        dtime: 0x0 -- Thu Jan  1 03:00:00 1970
>>        ctime_nsec: 0x00000000 -- 0
>>        atime_nsec: 0x00000000 -- 0
>>        mtime_nsec: 0x00000000 -- 0
>>        Last Extblk: 0
>>        Sub Alloc Slot: Global   Sub Alloc Bit: 65535
>>
>> $ cat /proc/meminfo
>> MemTotal:      8177108 kB
>> MemFree:       1441212 kB
>> Buffers:       2748692 kB
>> Cached:        1237632 kB
>> SwapCached:     103928 kB
>> Active:         568000 kB
>> Inactive:      3528176 kB
>> HighTotal:           0 kB
>> HighFree:            0 kB
>> LowTotal:      8177108 kB
>> LowFree:       1441212 kB
>> SwapTotal:     2097144 kB
>> SwapFree:      1944548 kB
>> Dirty:          660588 kB
>> Writeback:           0 kB
>> AnonPages:       64228 kB
>> Mapped:           8808 kB
>> Slab:          2602388 kB
>> PageTables:       4048 kB
>> NFS_Unstable:        0 kB
>> Bounce:              0 kB
>> CommitLimit:   6185696 kB
>> Committed_AS:   241504 kB
>> VmallocTotal: 34359738367 kB
>> VmallocUsed:    263944 kB
>> VmallocChunk: 34359474295 kB
>> HugePages_Total:     0
>> HugePages_Free:      0
>> HugePages_Rsvd:      0
>> Hugepagesize:     2048 kB
>>
>>
>> $ lvdisplay
>>  --- Logical volume ---
>>  LV Name                /dev/eva4400/mail
>>  VG Name                eva4400
>>  LV UUID                A1aQvz-TNuj-xAKh-s84R-dcL0-CI2i-53pkxA
>>  LV Write Access        read/write
>>  LV Status              available
>>  # open                 2
>>  LV Size                2.18 TB
>>  Current LE             571390
>>  Segments               2
>>  Allocation             inherit
>>  Read ahead sectors     auto
>>  - currently set to     256
>>  Block device           253:6
>>
>> $ pvdisplay
>>  --- Physical volume ---
>>  PV Name               /dev/sda
>>  VG Name               eva4400
>>  PV Size               1.09 TB / not usable 4.00 MB
>>  Allocatable           yes (but full)
>>  PE Size (KByte)       4096
>>  Total PE              285695
>>  Free PE               0
>>  Allocated PE          285695
>>  PV UUID               NfTtKa-o8sd-1Ho5-GSMp-aBDT-51ip-A9ogEz
>>
>>  --- Physical volume ---
>>  PV Name               /dev/sdb
>>  VG Name               eva4400
>>  PV Size               1.09 TB / not usable 4.00 MB
>>  Allocatable           yes (but full)
>>  PE Size (KByte)       4096
>>  Total PE              285695
>>  Free PE               0
>>  Allocated PE          285695
>>  PV UUID               HvzBfj-UbWY-3tR9-a7v1-UcB5-UJ8C-gdWAf0
>>
>> $ cat /etc/sysconfig/o2cb | grep ^O
>> O2CB_ENABLED=true
>> O2CB_STACK=o2cb
>> O2CB_BOOTCLUSTER=ocfs2
>> O2CB_HEARTBEAT_THRESHOLD=
>> O2CB_IDLE_TIMEOUT_MS=
>> O2CB_KEEPALIVE_DELAY_MS=
>> O2CB_RECONNECT_DELAY_MS=
>>
>> _______________________________________________
>> Ocfs2-users mailing list
>> Ocfs2-users at oss.oracle.com
>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>>
>
>


