[Ocfs-users] OCFS Performance on a Hitachi SAN

Eric Hensley eric.hensley at instill.com
Fri Jun 25 23:19:55 CDT 2004


   I've been reading this group for a while and have noticed a variety of comments about running OCFS on top of path-management packages such as EMC's PowerPath, which brought to mind a problem I've been having.
   I'm currently testing a six-node cluster connected to a Hitachi 9570V SAN storage array, using OCFS 1.0.12.  I have six LUNs presented to the hosts through HDLM, Hitachi's equivalent of PowerPath.  Here's the problem: I've been seeing outstandingly bad performance with OCFS.  Now, I've benchmarked the Hitachi using iozone on an ext2 filesystem on top of the HDLM-managed LUNs, and seen the performance I would expect - around 500 MB/sec sustained reads and writes.  Obviously I can't just use iozone to test an equivalent OCFS filesystem on the same LUNs, since OCFS only supports the various Oracle data files.
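   For what it's worth, a rough way to get an iozone-style sequential-write number directly on the OCFS volume would be something along the lines of the sketch below.  The /u01/ocfs_test.dat path is just a placeholder for a scratch file on the OCFS mount, and the 1 MiB O_DIRECT writes are an assumption about how the volume is best driven - adjust to whatever matches your setup.

    #!/usr/bin/env python
    # Rough sequential-write probe (sketch only).
    # /u01/ocfs_test.dat is a placeholder path on the OCFS mount.
    import mmap
    import os
    import time

    TEST_FILE = "/u01/ocfs_test.dat"   # placeholder path, adjust as needed
    BLOCK_SIZE = 1024 * 1024           # 1 MiB per write, aligned for O_DIRECT
    BLOCK_COUNT = 1024                 # 1 GiB total

    # O_DIRECT needs a page-aligned buffer; an anonymous mmap provides one.
    buf = mmap.mmap(-1, BLOCK_SIZE)
    buf.write(b"\xab" * BLOCK_SIZE)

    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)

    start = time.time()
    for _ in range(BLOCK_COUNT):
        os.write(fd, buf)              # each write is block-sized and block-aligned
    os.fsync(fd)
    elapsed = time.time() - start
    os.close(fd)

    mb = BLOCK_COUNT * BLOCK_SIZE / (1024.0 * 1024.0)
    print("wrote %.0f MiB in %.1f s -> %.1f MiB/sec" % (mb, elapsed, mb / elapsed))

   Running the same script against a file on one of the ext2-formatted LUNs would give an apples-to-apples comparison of direct-I/O throughput through HDLM, without Oracle in the picture.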
   So, I've loaded one of our larger databases onto the array, roughly 180 GB.  However, when I start an I/O-intensive internal Oracle operation (a partition swap, in this case, which does full table scans and then rewrites the same data back to disk), I only see writes of 2-6 MB/sec per node, for maybe 15-40 MB/sec aggregate throughput.  The nodes are mostly idle during this process, with low iowait and load averages under 1 (dual 3.2 GHz Xeons per node).
   It feels to me like I've set something up horribly wrong, but I can't seem to find it.  It doesn't look like a problem with HDLM itself, or with my HBA drivers (QLogic 2300s), since the ext2 benchmarks come out fine; the slowdown only appears when running OCFS.  Any ideas?

Thanks for your help.

Eric Hensley
Director, IT
Instill Corp.
eric.hensley at instill.com


