[Ocfs2-users] ocfs Vs ocfs2

Alexei_Roudnev Alexei_Roudnev at exigengroup.com
Wed Jan 17 08:52:43 PST 2007


I have the opposite statistics - OCFS (v1) was very slow on 'tar x' and other file-appending operations, while OCFSv2 had
comparable speed (with ext3), except that it used a lot of CPU to synchronize locks (the system was SLES9 SP3, kernel >= 244).

Moreover, the db1 statistics below look wrong - the realistic time to dd from /dev/zero to a 1 GB file is 18 seconds (db2). 0.7
seconds means the data are in the cache, which in turn means that you can't use OCFSv1 at all in such a scenario.
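
For reference, one way to keep the page cache out of such a measurement,
assuming a dd new enough to support the direct-I/O flags (oflag=/iflag=,
present in newer GNU coreutils builds):

  # write 1 GB bypassing the page cache (O_DIRECT)
  time dd if=/dev/zero of=./sill.t bs=1M count=1000 oflag=direct

  # read it back, again bypassing the cache
  time dd if=./sill.t of=/dev/null bs=1M count=1000 iflag=direct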

  ----- Original Message ----- 
  From: Luis Freitas
  To: ocfs2-users at oss.oracle.com ; ocfs-users at oss.oracle.com
  Sent: Wednesday, January 17, 2007 2:06 AM
  Subject: Re: [Ocfs2-users] ocfs Vs ocfs2


  Joel,

     It is not using O_DIRECT only if the patched coreutils package
(coreutils-4.5.3-41.i386.rpm) was not installed on the RHEL 3.0 machine.

  http://oss.oracle.com/projects/coreutils/files/

     If it is installed, then both tests are using O_DIRECT, and can be compared.
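
   A quick way to check which coreutils build a machine actually has
   (the O_DIRECT-patched package from the URL above is coreutils-4.5.3-41):

     # stock RHEL 3 coreutils does plain buffered I/O; the patched
     # oss.oracle.com build is the one that uses O_DIRECT
     rpm -q coreutils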

     I do not have both an OCFS and an OCFS2 environment to compare here, but I too am seeing very slow performance
with copy operations on the OCFS2 volume, compared to what I was used to on OCFS.

  Regards,
  Luis

  Joel Becker <Joel.Becker at oracle.com> wrote:
    On Tue, Jan 16, 2007 at 01:28:41AM -0800, GOKHAN wrote:
    > Hi everybody, this is my first post.
    > I have two test servers. (Both of them are idle.)
    > db1 : RHEL4 OCFS2
    > db2 : RHEL3 OCFS
    >
    > I tested the I/O on both of them.
    > The results are below.
    >
    > Test            db1 (time)   db2 (time)   Command
    > dd 1GB (write)  0m0.796s     0m18.420s    time dd if=/dev/zero of=./sill.t bs=1M count=1000
    > dd 1GB (read)   0m0.241s     8m16.406s    time dd of=/dev/zero if=./sill.t bs=1M count=1000
    > cp 1GB          0m0.986s     7m32.452s    time cp sill.t sill2.t

    You are using dd(1), which does not use O_DIRECT. The original
    ocfs (on 2.4 kernels) does not really support buffered I/O well. What
    you are seeing is ocfs2 taking much better care of your buffered I/Os.
    They will be consistent across the cluster. In the ocfs case, you are
    caching a lot more because these safety precautions aren't taken.
    HOWEVER, the most important factor is that you are not using
    O_DIRECT. When you actually run the database, you _will_ be using
    O_DIRECT (make sure to mount ocfs2 with '-o datavolume'). Without the
    OS caching in the way, both filesystems should run at the same speed.
    The upshot is that buffered I/O operations (such as plain dd(1))
    are often not good indicators of database speed.
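
    For example, a datavolume mount would look like this (the device and
    mount point are just placeholders):

        # per the note above, datavolume ensures database file I/O on
        # this volume goes through O_DIRECT
        mount -t ocfs2 -o datavolume /dev/sdb1 /u02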

    Joel

    -- 

    "To announce that there must be no criticism of them president, or
    that we are to stand by the president, right or wrong, is not only
    unpatriotic and servile, but is morally treasonable to the American
    public."
    - Theodore Roosevelt

    Joel Becker
    Principal Software Developer
    Oracle
    E-mail: joel.becker at oracle.com
    Phone: (650) 506-8127






