[Ocfs2-users] Cluster setup

Luis Freitas lfreitas34 at yahoo.com
Thu Oct 11 10:49:49 PDT 2007


     Yes, you could set up DRBD to mirror a device between the two servers, and you would need to configure heartbeat to mount the filesystem and start NFS on the second node only when the first node dies. The filesystem cannot be mounted on both nodes simultaneously.
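     A rough sketch of what that could look like, with made-up hostnames, devices and addresses (adjust everything to your setup). The DRBD resource definition, kept identical on both nodes:

         # /etc/drbd.conf - example resource; names and devices are made up
         resource r0 {
             protocol C;                       # synchronous replication
             on nfs1 {
                 device    /dev/drbd0;         # mirrored block device
                 disk      /dev/sdb1;          # local backing partition
                 address   192.168.1.1:7788;   # replication link
                 meta-disk internal;
             }
             on nfs2 {
                 device    /dev/drbd0;
                 disk      /dev/sdb1;
                 address   192.168.1.2:7788;
                 meta-disk internal;
             }
         }

     And a heartbeat v1 haresources entry (all on one line) so that the active node promotes DRBD to primary, mounts the filesystem, brings up a floating service IP and starts NFS, in that order:

         # /etc/ha.d/haresources - IP, mount point and fs type are examples
         nfs1 drbddisk::r0 Filesystem::/dev/drbd0::/export::ext3 IPaddr::192.168.1.100/24 nfs-kernel-server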

     This kind of configuration is an Active-Passive cluster and is common on many Unix servers, and even with Oracle software on the Windows platform (Oracle Fail Safe), though using shared storage instead of DRBD.

    The downside is that you have to keep an idle server.

     You will probably get high availability, but not transparent failover: the clients will probably hang, or see some errors, while the primary server dies and the standby takes over.
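     Whether the clients recover by themselves depends mostly on their mount options. With the default "hard" NFS mounts against the floating service IP, I/O should just block and retry until the takeover finishes rather than return errors (a "soft" mount would start failing requests instead). For example, on a client (using the example floating IP from above):

         mount -t nfs -o hard,intr 192.168.1.100:/export /mnt/data

     Keeping /etc/exports identical on both nodes and pinning the fsid should also help the clients' NFS file handles stay valid across the takeover:

         # /etc/exports on both nodes - network and fsid value are examples
         /export  192.168.1.0/24(rw,sync,no_subtree_check,fsid=1)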

I have never set this up myself, though.

Regards,
Luis

Pavel Georgiev <pavel at netclime.com> wrote:

On Thursday 11 October 2007 00:24:41 Alexei_Roudnev wrote:
> Yes, it can be done.
>
> The question is reliability:
> - OCFSv2 is not very stable when it comes to millions of files;
> - OCFSv2 clusters tend to self-fence after small SAN storage glitches (this
> is by design, so you cannot eliminate it even if
> you fix all the timeouts - only improve it);
> - OCFSv2 + LVM + NFS is not well-tested territory.
>
> It should work in theory, and it works in practice under average load and FS
> size. No one knows how it behaves on very big storage and very big file
> systems after 1-2 years of active usage. I managed to get a stable OCFSv2
> system here, after applying a few patches and discovering a few issues, BUT
> I use it on a lightly-loaded file system (which is not critical at all) to get
> more statistics on its behavior before I use it for anything else.
>
> Comparing with heartbeat + LVM + reiserfs + NFS:
This is getting a little off-topic for an ocfs2 list, but how do I meet my 
requirements with just reiserfs, LVM and heartbeat? Do you mean running DRBD 
and reiserfs on top of it? This looks like a combination I could use - 
currently I have a single storage server which many clients mount. If I 
implement that (DRBD + heartbeat + NFS, no LVM) it would meet my requirements 
(except maybe for expanding the storage) and I would not have to make any 
changes to the clients, which is great. Are you saying there is a way to do 
this (replicating the filesystem between two nodes) with just heartbeat + LVM? 
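For the expansion part, I imagine that if DRBD were layered on top of LVM on 
both boxes, something roughly like the following could grow the storage online 
(volume names and sizes are made up, and I have not tried this):

    lvextend -L +50G /dev/vg0/data     # grow the backing LV on both nodes
    drbdadm resize r0                  # then on the primary, once both grew
    resize_reiserfs -f /dev/drbd0      # grow the mounted reiserfs online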


> - all technologies in the stack are well tested and heavily used;
> - heartbeat has external fencing (STONITH), so it is extremely reliable in
> the long term - it can recover from almost any failure (though sometimes it
> does not detect the failure, it's true);
> - ReiserFS (or ext3) has proved to be very stable on huge file systems (it is
> widely used, so we don't expect any problems here).
> One problem comes from Novell - since they stopped using it as the default, I
> can't trust ReiserFS on SLES10 (because it is not the default), but we can
> still trust it on SLES9 etc. (where it is the default).
>
> Common rule - if you want a reliable system, use defaults where possible.
> OCFSv2 + NFS is not a default yet (though OCFSv2 has improved dramatically
> during the last 2 years).
>
> ----- Original Message -----
> From: "Pavel Georgiev" 

> To: 
> Sent: Wednesday, October 10, 2007 1:25 AM
> Subject: Re: [Ocfs2-users] Cluster setup
>
> > How about using just OCFSv2 as I described in my first mail - two servers
> > export their storage, the rest of the servers mount it, and a failure of
> > either of the two storage servers remains transparent to the clients. Can
> > this be done with OCFSv2?
> >
> > On Tuesday 09 October 2007 21:46:15 Alexei_Roudnev wrote:
> >> You are better off using
> >>
> >>  LVM + heartbeat + NFS + cold failover cluster.
> >>
> >> It is 100% stable and 100% safe from bugs (and it allows
> >> online resizing, if your HBA or iSCSI can add LUNs on the fly).
> >>
> >> Combining NFS + LVM + OCFSv2 can cause many unpredictable problems,
> >> especially on a system that is unusual for OCFSv2 (such as Ubuntu).
> >>
> >> ----- Original Message -----
> >> From: "Brian Anderson" 
> >> To: 
> >> Sent: Tuesday, October 09, 2007 11:35 AM
> >> Subject: RE: [Ocfs2-users] Cluster setup
> >>
> >>
> >>
> >> Not exactly. I'm in a similar boat right now. I have 3 NFS servers all
> >> mounting an OCFS2 volume. Each NFS server has its own IP, and the
> >> clients load balance manually... some mount fs1, others fs2, and the
> >> rest fs3. In an ideal world, I'd have the NFS cluster presenting a
> >> single IP, and failing over / load balancing some other way.
> >>
> >> I'm looking at NFS v4 as one potential avenue (no single IP, but it does
> >> let you fail over from 1 server to the next in line), and commercial
> >> products such as IBRIX.
> >>
> >>
> >>
> >>
> >> Brian
> >>
> >> > -----Original Message-----
> >> > From: ocfs2-users-bounces at oss.oracle.com
> >> > [mailto:ocfs2-users-bounces at oss.oracle.com] On Behalf Of Sunil Mushran
> >> > Sent: Tuesday, October 09, 2007 2:27 PM
> >> > To: Luis Freitas
> >> > Cc: ocfs2-users at oss.oracle.com
> >> > Subject: Re: [Ocfs2-users] Cluster setup
> >> >
> >> > Unsure what you mean. If the two servers mount the same
> >> > ocfs2 volume and export it via nfs, isn't that clustered nfs?
> >> >
> >> > Luis Freitas wrote:
> >> > > Is there any cluster NFS solution out there? (Two NFS servers
> >> > > sharing the same filesystem with distributed locking and failover
> >> > > capability)
> >> >
> >> > > Regards,
> >> > > Luis
> >> > >
> >> > > Sunil Mushran wrote:
> >> > >
> >> > >     It appears what you are looking for is a mix of ocfs2 and nfs.
> >> > >     The storage servers mount the shared disks and then re-export
> >> > >     them via nfs to the remaining servers.
> >> > >
> >> > >     Ubuntu 6.06 is too old. If you are stuck on Ubuntu LTS, the
> >> > >     next version, 7.10, should have all you want.
> >> > >
> >> > >     Pavel Georgiev wrote:
> >> > >     > Hi List,
> >> > >     >
> >> > >     > I'm trying to build cluster storage with commodity hardware in
> >> > >     > such a way that all the data would be on more than one server.
> >> > >     > It should meet the following requirements:
> >> > >     > 1) If one of the servers goes down, the cluster should continue
> >> > >     > to work with rw access from all clients.
> >> > >     > 2) Clients that mount the storage should not be part of the
> >> > >     > cluster (not export any disk storage) - I have a few servers
> >> > >     > with huge disks that I want to store data on (currently 2
> >> > >     > servers, maybe more in the future) and I want to store the data
> >> > >     > only on them; the rest of the servers should just mount and use
> >> > >     > that storage, with the ability to continue operation if one of
> >> > >     > the two storage servers goes down.
> >> > >     > 3) More servers should be able to join the cluster at a given
> >> > >     > point, expanding the total size of the cluster, hopefully
> >> > >     > without rebuilding the storage.
> >> > >     > 4) Load balancing is not an issue - all the load can go to one
> >> > >     > of the two storage servers (although it's better to be
> >> > >     > balanced); the main goal is to have redundant storage.
> >> > >     >
> >> > >     > Does ocfs2 meet these requirements? I read a few howtos but none
> >> > >     > of them mentioned my second requirement (only some of the
> >> > >     > servers hold the data). Are there any specific steps to
> >> > >     > accomplish (2) and (3)?
> >> > >     >
> >> > >     > I'm using Ubuntu 6.06 on x86.
> >> > >     >
> >> > >     > Thanks!
> >> > >     >



_______________________________________________
Ocfs2-users mailing list
Ocfs2-users at oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users



       