[Ocfs2-users] Cluster setup

Alexei_Roudnev Alexei_Roudnev at exigengroup.com
Wed Oct 10 18:16:03 PDT 2007


Oracle can run anything, but:
- Oracle QA quality is below any requirements (not because Oracle is bad, but because they test only 10 - 20% of all possible configurations).
  We found 5 Oracle bugs during 4 days of heavy testing on our new DB system alone... and an unlimited number of bugs besides that (the best one: sqlplus dies if system uptime > 204 days! Fixed in 10.2.0.3).
- More important, Oracle tests Oracle access (which means: files are stable; the number of files is limited; the number of users is one; access lists are not used;
the set of FS operations is very limited). That is very different from a _common use NFS server_.

Does Oracle test the behavior of OCFSv2 in cases like:
- 1,000 different users;
- host1 appends to a file while host2 truncates it, then host3 renames it;
- a file is removed on node1 but still open on node2;
- one node creates a file while another tries to rename a different file onto it;
- a 900 GB file system in which we create 1,000,000 directories with 5,000,000 files in them;
- a directory with 100,000 files inside;
- file names 512 characters long in UTF-8 encoding;
- running 'mkdir x; rmdir x' in a loop for a week;
etc., etc.?
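The last scenario above - the mkdir/rmdir loop - is trivial to script. Here is a minimal sketch (the iteration count and directory name are only illustrative; a real soak test would run for a week rather than 1,000 iterations):

```shell
#!/bin/sh
# Minimal mkdir/rmdir soak loop (illustrative sketch).
# Raise the iteration count, or wrap it in a timed loop, for a week-long run.
i=0
while [ "$i" -lt 1000 ]; do
    mkdir x || { echo "mkdir failed at iteration $i" >&2; exit 1; }
    rmdir x || { echo "rmdir failed at iteration $i" >&2; exit 1; }
    i=$((i + 1))
done
echo "completed $i mkdir/rmdir cycles"
```

Run it on the OCFSv2 mount from one node (or from several nodes at once, each with its own directory name) and watch for failures, fencing events, or slowdowns.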

I am more concerned about OCFSv2 usage as a common file system, and not so much about LVM + OCFSv2. OCFSv2 looks pretty stable when used for a limited number of files in a limited usage scenario (such as Oracle usage), but not as a common file system used by a thousand students with unlimited imagination...
  ----- Original Message ----- 
  From: Luis Freitas 
  To: Alexei_Roudnev ; Pavel Georgiev ; ocfs2-users at oss.oracle.com 
  Sent: Wednesday, October 10, 2007 5:09 PM
  Subject: Re: [Ocfs2-users] Cluster setup


  Alexei,

    I do not agree on the heavily loaded part. Oracle runs certification tests for their database, so OCFS2 must have passed through this certification process, which must include high-load scenarios. Last time I checked, LVM2 was not supported for RAC use, so it probably was not tested.

    I do agree on the part about OCFS2+LVM and OCFS2+LVM+NFS not being well tested. Also, I suspect one cannot run an Active-Active NFS cluster without special NFS software; it would need to be Active-Passive.

  Regards,
  Luis

  Alexei_Roudnev <Alexei_Roudnev at exigengroup.com> wrote:
    Yes, it can be done.

    The question is reliability:
    - OCFSv2 is not very stable when it comes to millions of files;
    - OCFSv2 clusters tend to self-fence after small SAN storage glitches (this
    is by design, so you can't eliminate it even if you fix all the timeouts -
    just improve it);
    - OCFSv2 + LVM + NFS is not well-tested territory.

    It should work in theory, and IT works in practice, under average load and FS
    size. No one knows how it behaves on very big storage and very big file
    systems after 1 - 2 years of active usage. I managed to get a stable OCFSv2
    system here, after applying a few patches and discovering a few issues, BUT
    I use it on a lightly-loaded file system (which is not critical at all) to get
    more statistics on its behavior before I will use it for anything else.

    Comparing with heartbeat + LVM + ReiserFS + NFS:
    - all technologies in the stack are well tested and heavily used;
    - heartbeat has external fencing (stonith), so it is extremely reliable in
    the long term - it can recover from almost any failure (though sometimes it
    doesn't detect a failure, it's true);
    - ReiserFS (or ext3) has proved to be very stable on huge file systems (it is
    widely used, so we don't expect any problems here).
    One problem comes from Novell - since they stopped using it as the default, I
    can't trust ReiserFS on SLES10 (because it is not the default), but we still
    can trust it on SLES9 etc... (where it is the default).

    Common rule: if you want a reliable system, use defaults where possible.
    OCFSv2 + NFS is not a default yet (though OCFSv2 has improved dramatically
    during the last 2 years).
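    For reference, such a cold-failover stack is typically tied together by a single heartbeat v1 resource line; this is only an illustrative sketch (node name, volume group, device, mount point and IP address are all made up, and the NFS init-script name varies by distribution):

    ```
    # /etc/ha.d/haresources (heartbeat v1) - illustrative sketch only.
    # node1 normally owns the service chain: take over the IP, activate
    # the VG, mount the FS, start NFS; on failover node2 runs the same
    # chain in the same order.
    node1 IPaddr::192.168.1.100 LVM::datavg \
          Filesystem::/dev/datavg/export::/export::reiserfs nfsserver
    ```

    Stonith devices themselves are declared separately in ha.cf, which is what gives the stack its external fencing.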

    ----- Original Message ----- 
    From: "Pavel Georgiev" 
    To: 
    Sent: Wednesday, October 10, 2007 1:25 AM
    Subject: Re: [Ocfs2-users] Cluster setup


    > How about using just OCFSv2 as I described in my first mail - two servers
    > export their storage, the rest of the servers mount it, and a failure of
    > either of the two storage servers remains transparent to the clients. Can
    > this be done with OCFSv2?
    >
    >
    > On Tuesday 09 October 2007 21:46:15 Alexei_Roudnev wrote:
    >> You'd better use
    >>
    >> LVM + heartbeat + NFS + cold failover cluster.
    >>
    >> It works 100% stably and is 100% safe from bugs (and it allows online
    >> resizing, if your HBA or iSCSI can add LUNs on the fly).
    >>
    >> Combining NFS + LVM + OCFSv2 can cause many unpredictable problems, esp.
    >> on an unusual (for OCFSv2) system (such as Ubuntu).
    >>
    >> ----- Original Message -----
    >> From: "Brian Anderson" 
    >> To: 
    >> Sent: Tuesday, October 09, 2007 11:35 AM
    >> Subject: RE: [Ocfs2-users] Cluster setup
    >>
    >>
    >>
    >> Not exactly. I'm in a similar boat right now. I have 3 NFS servers all
    >> mounting an OCFS2 volume. Each NFS server has its own IP, and the
    >> clients load balance manually... some mount fs1, others fs2, and the
    >> rest fs3. In an ideal world, I'd have the NFS cluster presenting a
    >> single IP, and failing over / load balancing some other way.
    >>
    >> I'm looking at NFS v4 as one potential avenue (no single IP, but it does
    >> let you fail over from 1 server to the next in line), and commercial
    >> products such as IBRIX.
    >>
    >>
    >>
    >>
    >> Brian
    >>
    >> > -----Original Message-----
    >> > From: ocfs2-users-bounces at oss.oracle.com
    >> > [mailto:ocfs2-users-bounces at oss.oracle.com] On Behalf Of Sunil Mushran
    >> > Sent: Tuesday, October 09, 2007 2:27 PM
    >> > To: Luis Freitas
    >> > Cc: ocfs2-users at oss.oracle.com
    >> > Subject: Re: [Ocfs2-users] Cluster setup
    >> >
    >> > Unsure what you mean. If the two servers mount the same
    >> > ocfs2 volume and export it via nfs, isn't that clustered nfs?
    >> >
    >> > Luis Freitas wrote:
    >> > > Is there any cluster NFS solution out there? (Two NFS servers
    >> > > sharing the same filesystem with distributed locking and
    >> > > failover capability)
    >> > >
    >> > > Regards,
    >> > > Luis
    >> > >
    >> > > Sunil Mushran wrote:
    >> > >
    >> > > Appears what you are looking for is a mix of ocfs2 and nfs.
    >> > > The storage servers mount the shared disks and then reexport
    >> > > them via nfs to the remaining servers.
    >> > >
    >> > > ubuntu 6.06 is too old. If you are stuck on Ubuntu LTS, the
    >> > > next version 7.10 should have all you want.
    >> > >
    >> > > Pavel Georgiev wrote:
    >> > > > Hi List,
    >> > > >
    >> > > > I`m trying to build a cluster storage with commodity hardware
    >> > > > in a way that all the data would be on > 1 server. It should
    >> > > > meet the following requirements:
    >> > > > 1) If one of the servers goes down, the cluster should
    >> > > > continue to work with rw access from all clients.
    >> > > > 2) Clients that mount the storage should not be part of the
    >> > > > cluster (not export any disk storage) - I have a few servers
    >> > > > with huge disks that I want to store data on (currently 2
    >> > > > servers, maybe more in the future), and I want to store the
    >> > > > data only on them; the rest of the servers should just mount
    >> > > > and use that storage, with the ability to continue operation
    >> > > > if one of the two storage servers goes down.
    >> > > > 3) More servers should be able to join the cluster at a given
    >> > > > point, expanding the total size of the cluster, hopefully
    >> > > > without rebuilding the storage.
    >> > > > 4) Load balancing is not an issue - all the load can go to
    >> > > > one of the two storage servers (although it's better if it is
    >> > > > balanced); the main goal is to have redundant storage.
    >> > > >
    >> > > > Does ocfs2 meet these requirements? I read a few howtos but
    >> > > > none of them mentioned my second requirement (only some of
    >> > > > the servers hold the data). Are there any specific steps to
    >> > > > accomplish (2) and (3)?
    >> > > >
    >> > > > I`m using Ubuntu 6.06 on x86.
    >> > > >
    >> > > > Thanks!
    >> > > > _______________________________________________
    >> > > > Ocfs2-users mailing list
    >> > > > Ocfs2-users at oss.oracle.com
    >> > > > http://oss.oracle.com/mailman/listinfo/ocfs2-users
    >> >
    >>
    >>
    >>
    >
    >
    >
    > 








