[Ocfs2-users] OCFS2 Block / Clustersize with Oracle 10gR2

Brian Long brilong at cisco.com
Thu Nov 9 11:21:53 PST 2006


The DBA wrote a patented Java-based application that stress tests the
Oracle IO subsystem.  We use it to benchmark our IO subsystems
(comparing SAN to NAS, etc.).  On OCFS2 this benchmark shows a maximum
sustained rate of 3,400 IO/sec, while the same benchmark with the same
data maxes out at 7K+ IO/sec on raw devices.

I'll grab the iostat data which we've kept over time and try to make
some sense of it before posting anything additional.
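As a rough sketch of how those iostat samples might be reduced to a combined IOPS figure per device (the column names assume the sysstat `iostat -x` layout; the helper names and the sample output are mine, not from the thread):

```python
# Sketch: reduce `iostat -x` samples to an average combined IOPS figure
# per device. Column positions are taken from the header line, so this
# should tolerate minor layout differences between sysstat versions.
from collections import defaultdict

def parse_iostat(text):
    """Return {device: [r/s + w/s for each sample interval]}."""
    samples = defaultdict(list)
    cols = None
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0].rstrip(":") == "Device":
            # Header line: remember which column holds r/s and w/s.
            cols = {name.rstrip(":"): i for i, name in enumerate(fields)}
            continue
        if cols is None or len(fields) != len(cols):
            continue  # skip CPU lines, banners, etc.
        try:
            iops = float(fields[cols["r/s"]]) + float(fields[cols["w/s"]])
        except (KeyError, ValueError):
            continue
        samples[fields[0]].append(iops)
    return samples

def average_iops(samples):
    """Average the combined IOPS across all sample intervals."""
    return {dev: sum(v) / len(v) for dev, v in samples.items()}

# Hypothetical single-interval capture for illustration:
iostat_output = """\
Device:  rrqm/s wrqm/s    r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz await svctm %util
sda        0.00   1.20 1700.0 1700.0 13600.0 13600.0     8.00     4.00  1.10  0.25 85.00
"""
print(average_iops(parse_iostat(iostat_output)))  # {'sda': 3400.0}
```

Averaging over many intervals, rather than eyeballing a few, should make the raw-vs-OCFS2 comparison less noisy.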

Thanks.

/Brian/

On Thu, 2006-11-09 at 10:20 -0800, Sunil Mushran wrote:
> Why are you looking at IOPS and not the IO throughput?
> 
> What is the actual IO throughput? Could you please share some iostat
> numbers with us? In all our tests, we've seen very little difference
> in actual IO throughput between raw and OCFS2.
> 
> Cluster size will mainly affect alloc/dealloc performance; it has very
> little role to play in IO performance. If anything, it could help coalesce
> requests and reduce the number of IOs (read: CDBs) required to do the task.
> 
> Brian Long wrote:
> > Hello,
> >
> > I followed the user's guide recommendation of 4K block size and 128K
> > cluster size.  I have 8 32GB OCFS2 filesystems mounted on two nodes.
> > The DBA has created a large tablespace with 4GB data files on each
> > filesystem.
> >
> > The benchmark achieves only 3,400 IO/sec read/write combined.  If I
> > re-use the LUNs and give the DBA 4GB raw partitions, he can get over
> > 7,000 IO/sec read/write combined on a single node and over 11,000
> > IO/sec on two nodes.
> >
> > What's my next step to improve OCFS2 performance?  Since the DBA is
> > using 4GB datafiles, should I increase the cluster size to the 1MB
> > maximum?
> >
> > Thanks for any hints.
> >
> > /Brian/
> >   
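For anyone trying the 1MB cluster size asked about above: it can only be set at format time, so the filesystem must be remade. A minimal sketch, assuming the device path, label, and slot count below (all placeholders, not from the thread):

```shell
# DESTRUCTIVE: reformats the volume. /dev/sdb1 and the label "oradata"
# are placeholders -- substitute your own LUN.
# -b = block size, -C = cluster size, -N = number of node slots
mkfs.ocfs2 -b 4K -C 1M -N 2 -L oradata /dev/sdb1

# Read back the sizes that were actually written (shown as bit shifts,
# e.g. 12 = 4K blocks, 20 = 1M clusters):
debugfs.ocfs2 -R "stats" /dev/sdb1 | grep -i "size bits"
```

Per Sunil's note above, this mainly changes alloc/dealloc behavior, so it is worth benchmarking before and after rather than assuming a win.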
-- 
       Brian Long                             |       |
       IT Infrastructure                  . | | | . | | | .
       Data Center Systems                    '       '
       Cisco Enterprise Linux                 C I S C O



