[Ocfs2-users] Slow I/O on ocfs2 file system

Sunil Mushran sunil.mushran at oracle.com
Mon Dec 13 10:40:14 PST 2010


Can you email me the following:

debugfs.ocfs2 -R "stats" /dev/mapper/3600a0b800067df83000003ba4ba0f5ef
debugfs.ocfs2 -R "stat dd-1G"  /dev/mapper/3600a0b800067df83000003ba4ba0f5ef

This will tell us the fs params and the file layout.
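If it helps, something like this (just a sketch; the output file names are
arbitrary) captures both into files you can attach:

debugfs.ocfs2 -R "stats" /dev/mapper/3600a0b800067df83000003ba4ba0f5ef > stats.txt 2>&1
debugfs.ocfs2 -R "stat dd-1G" /dev/mapper/3600a0b800067df83000003ba4ba0f5ef > stat-dd-1G.txt 2>&1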

Next, run the same test with different block sizes, bs=4K and bs=1M.
iflag=direct will bypass the page cache.

echo 3 >/proc/sys/vm/drop_caches
strace -c dd if=/data/verejna/dd-1G of=/dev/null bs=4K
echo 3 >/proc/sys/vm/drop_caches
strace -c dd if=/data/verejna/dd-1G of=/dev/null bs=1M
strace -c dd if=/data/verejna/dd-1G of=/dev/null bs=4K iflag=direct
strace -c dd if=/data/verejna/dd-1G of=/dev/null bs=1M iflag=direct
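A note on the cache drop: writing 3 to /proc/sys/vm/drop_caches frees the
page cache plus the dentry and inode caches, but it does not write out dirty
pages, so for a fully cold cache run sync first:

sync
echo 3 >/proc/sys/vm/drop_caches

The iflag=direct runs need no drop because O_DIRECT bypasses the page cache
entirely.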

echo 3 >/proc/sys/vm/drop_caches
strace -c dd if=/dev/... of=/dev/null bs=4K
echo 3 >/proc/sys/vm/drop_caches
strace -c dd if=/dev/... of=/dev/null bs=1M
strace -c dd if=/dev/... of=/dev/null bs=4K iflag=direct
strace -c dd if=/dev/... of=/dev/null bs=1M iflag=direct
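For convenience, a small loop like this (a sketch; swap in the raw device
path for the second set, and -o sends each strace summary to its own file)
runs the buffered cases back to back:

for bs in 4K 1M; do
    echo 3 >/proc/sys/vm/drop_caches
    strace -c -o dd-bs$bs.txt dd if=/data/verejna/dd-1G of=/dev/null bs=$bs
done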

Sunil

On 12/13/2010 04:18 AM, Michal Vyoral wrote:
> Hello,
>
> I have found that ocfs2 is very slow when doing I/O operations without
> the cache. See a simple test:
>
> ng-vvv1:~# dd if=/data/verejna/dd-1G bs=1k | dd of=/dev/null
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes (1.1 GB) copied, 395.183 s, 2.7 MB/s
> 2097152+0 records in
> 2097152+0 records out
> 1073741824 bytes (1.1 GB) copied, 395.184 s, 2.7 MB/s
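> (Both sets of figures describe the same 1 GiB: the first dd copies
> 1048576 blocks of 1 KiB, the second copies 2097152 blocks of 512 B,
> dd's default block size; each totals 1073741824 bytes.)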
>
> The underlying block device is quite quick:
>
> ng-vvv1:~# dd if=/dev/mapper/3600a0b800067df83000003ba4ba0f5ef bs=1k count=1M | dd of=/dev/null
> 1048576+0 records in
> 1048576+0 records out
> 2097152+0 records in
> 2097152+0 records out
> 1073741824 bytes (1.1 GB) copied, 6.32306 s, 170 MB/s
> 1073741824 bytes (1.1 GB) copied, 6.32293 s, 170 MB/s
>
> Here is the environment:
>
> * Infrastructure: two nodes, each with a two-port FC card connected to
> a disk array
>
> * OS: Debian GNU/Linux 5.0
>
> * Kernel: 2.6.26-2-amd64
>
> * Ocfs2: 1.4.1
>
> * Multipath:
>    # multipath -ll 3600a0b800067df83000003ba4ba0f5ef
>    3600a0b800067df83000003ba4ba0f5ef dm-7 SUN     ,LCSM100_F
>    [size=100G][features=1 queue_if_no_path][hwhandler=1 rdac]
>    \_ round-robin 0 [prio=3][active]
>     \_ 7:0:0:0 sdb 8:16  [active][ready]
>    \_ round-robin 0 [prio=0][enabled]
>     \_ 8:0:0:0 sde 8:64  [active][ghost]
>
>
> * Creation of the FS:
>    # mkfs.ocfs2 -T mail /dev/mapper/mpath0
>
> * Mount parameters:
>    # mount | grep verejna
>    /dev/mapper/3600a0b800067df83000003ba4ba0f5ef on /data/verejna type ocfs2 (rw,_netdev,heartbeat=local)
>    # mounted.ocfs2 -d
>    Device                FS     UUID                                  Label
>    /dev/sdb              ocfs2  91ae6ead-eee5-42fe-957c-9392e02b5d90
>    /dev/dm-7             ocfs2  91ae6ead-eee5-42fe-957c-9392e02b5d90
>
> Could you tell me what might be wrong? Thanks.
>
> Best regards,
> Michal Vyoral