[Ocfs2-users] Concurrent write performance issues with OCFS2

Erik Schwartz schwartz.erik.c at gmail.com
Tue Feb 28 09:24:35 PST 2012


I have a two-node RHEL5 cluster that runs the following Linux kernel and
accompanying OCFS2 module packages:

  * kernel-2.6.18-274.17.1.el5
  * ocfs2-2.6.18-274.17.1.el5-1.4.7-1.el5

A 2.5 TB LUN is presented to both nodes via DM-Multipath. I carved out
a single partition spanning the entire LUN and formatted it with OCFS2:

  # mkfs.ocfs2 -N 2 -L 'foofs' -T datafiles /dev/mapper/bams01p1
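
For reference, the resulting block and cluster sizes can be inspected
on either node with debugfs.ocfs2:

  # debugfs.ocfs2 -R "stats" /dev/mapper/bams01p1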

Finally, the filesystem is mounted on both nodes with the following options:

  # mount | grep bams01
/dev/mapper/bams01p1 on /foofs type ocfs2
(rw,_netdev,noatime,data=writeback,heartbeat=local)
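
For completeness, the corresponding /etc/fstab entry looks roughly like
this (reconstructed from the mount output above; heartbeat=local is the
default and is not listed explicitly):

  /dev/mapper/bams01p1  /foofs  ocfs2  _netdev,noatime,data=writeback  0 0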

----------

When a single node writes a large (say, 10 GB) file of zeros to /foofs
(i.e. dd(1) with /dev/zero as input), I see the expected throughput of
~850 MB/s.
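
The single-node test was along these lines (the file name is arbitrary):

  # dd if=/dev/zero of=/foofs/zero.dat bs=1M count=10240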

When both nodes write large zero-filled files to /foofs concurrently,
throughput drops dramatically, to ~45 MB/s. I also experimented with
having each node write to its own subdirectory (/foofs/test01/ and
/foofs/test02/, respectively), which improved throughput only slightly,
to a still-poor ~65 MB/s.
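
Roughly, the concurrent test was the following, started at the same
time on each node (file names again arbitrary):

  node1# dd if=/dev/zero of=/foofs/test01/zero.dat bs=1M count=10240
  node2# dd if=/dev/zero of=/foofs/test02/zero.dat bs=1M count=10240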

----------

From searching past mailing-list threads, I understand that the likely
culprit is the negotiation of cluster (DLM) file locks between the
nodes, combined with waiting for data to be flushed to the journal and
disk.
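
If it helps with diagnosis, my understanding is that the live lock
state can be dumped on either node with something like:

  # debugfs.ocfs2 -R "fs_locks" /dev/mapper/bams01p1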

My two questions are:

1. Is this dramatic drop in write performance reasonable and expected?

2. Are there any OCFS2-level steps I can take to improve this situation?


Thanks -

-- 
Erik Schwartz <schwartz.erik.c at gmail.com> | GPG key 14F1139B


