<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="MSHTML 6.00.6000.16544" name=GENERATOR>
<STYLE></STYLE>
</HEAD>
<BODY bgColor=#ffffff>
<DIV><FONT face=Arial size=2>Oracle can run anything, but</FONT></DIV>
<DIV><FONT face=Arial size=2>- Oracle's QA coverage falls below any such
requirements (not because Oracle is bad, but because they test only 10 - 20% of
all possible configurations).</FONT></DIV>
<DIV><FONT face=Arial size=2> We found 5 Oracle bugs during just 4 days of
heavy testing on our new DB system alone... and countless bugs beyond that (the
best one: sqlplus dies if system uptime > 204 days! Fixed in
10.2.0.3).</FONT></DIV>
<DIV><FONT face=Arial size=2>- more importantly, Oracle tests Oracle's own
access patterns (which means: files are stable; the number of files is limited;
there is exactly 1 - one - user; access lists are not used; </FONT></DIV>
<DIV><FONT face=Arial size=2>the set of FS operations is very limited). That is
very different from a _general-purpose NFS server_.</FONT></DIV>
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>Does Oracle test the behavior of OCFSv2 in cases
like:</FONT></DIV>
<DIV><FONT face=Arial size=2>- 1,000 different users;</FONT></DIV>
<DIV><FONT face=Arial size=2>- host1 appends to a file while host2 truncates
it, then host3 renames it;</FONT></DIV>
<DIV><FONT face=Arial size=2>- a file is removed on node1 but still open on
node2;</FONT></DIV>
<DIV><FONT face=Arial size=2>- one node creates a file while another tries to
rename a different file onto it;</FONT></DIV>
<DIV><FONT face=Arial size=2>- a 900 GB file system in which we create
1,000,000 directories holding 5,000,000 files;</FONT></DIV>
<DIV><FONT face=Arial size=2>- a directory with 100,000 files
inside;</FONT></DIV>
<DIV><FONT face=Arial size=2>- file names 512 characters long in UTF-8
encoding;</FONT></DIV>
<DIV><FONT face=Arial size=2>- we run 'mkdir x; rmdir x' in a loop for a
week...</FONT></DIV>
<DIV><FONT face=Arial size=2>etc., etc.?</FONT></DIV>
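<DIV><FONT face=Arial size=2>Scenarios like these are easy to script. Below is
a minimal single-machine smoke-test sketch; the mount point, the scaled-down
loop counts, and running the "hosts" as background jobs on one box are all
assumptions - on a real OCFSv2 cluster each block would run from a different
node against the shared mount.</FONT></DIV>

```shell
#!/bin/sh
# Hypothetical smoke test for a few of the scenarios above.
# MNT defaults to a local directory for illustration; point it at the
# shared OCFSv2 mount on a real cluster, and run each block on its own node.
set -e
MNT=${MNT:-/tmp/ocfs2-smoke}
mkdir -p "$MNT"

# 'mkdir x; rmdir x' in a loop (scaled down from a week to 1000 rounds).
i=0
while [ "$i" -lt 1000 ]; do
    mkdir "$MNT/x"
    rmdir "$MNT/x"
    i=$((i + 1))
done

# Append, truncate, and rename racing on one file
# ("host1", "host2", "host3" in the scenario list).
echo data >> "$MNT/f" &   # "host1" appends
: > "$MNT/f" &            # "host2" truncates
wait
mv "$MNT/f" "$MNT/g"      # "host3" renames

# A directory with many files inside (scaled down from 100,000).
mkdir -p "$MNT/big"
i=0
while [ "$i" -lt 500 ]; do
    : > "$MNT/big/f$i"
    i=$((i + 1))
done
echo "big dir populated under $MNT/big"
```

On a real run you would leave the loops running for days and watch dmesg on
every node for fencing or DLM errors, not just check the final state.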
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>I am more concerned about OCFSv2 usage as a
general-purpose file system, and not so much about LVM + OCFSv2. OCFSv2 looks
pretty stable when used with a limited number of files and a limited usage
scenario (such as Oracle's), but not as a general-purpose file system used by
thousands of students with unlimited imagination...</FONT></DIV>
<BLOCKQUOTE
style="PADDING-RIGHT: 0px; PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: #000000 2px solid; MARGIN-RIGHT: 0px">
<DIV style="FONT: 10pt arial">----- Original Message ----- </DIV>
<DIV
style="BACKGROUND: #e4e4e4; FONT: 10pt arial; font-color: black"><B>From:</B>
<A title=lfreitas34@yahoo.com href="mailto:lfreitas34@yahoo.com">Luis
Freitas</A> </DIV>
<DIV style="FONT: 10pt arial"><B>To:</B> <A
title=Alexei_Roudnev@exigengroup.com
href="mailto:Alexei_Roudnev@exigengroup.com">Alexei_Roudnev</A> ; <A
title=pavel@netclime.com href="mailto:pavel@netclime.com">Pavel Georgiev</A> ;
<A title=ocfs2-users@oss.oracle.com
href="mailto:ocfs2-users@oss.oracle.com">ocfs2-users@oss.oracle.com</A> </DIV>
<DIV style="FONT: 10pt arial"><B>Sent:</B> Wednesday, October 10, 2007 5:09
PM</DIV>
<DIV style="FONT: 10pt arial"><B>Subject:</B> Re: [Ocfs2-users] Cluster
setup</DIV>
<DIV><BR></DIV>
<DIV>Alexei,</DIV>
<DIV> </DIV>
<DIV> I do not agree on the heavily loaded part. Oracle runs
certification tests for its database, so OCFS2 must have passed through
this certification process, which must include high-load scenarios. Last time
I checked, LVM2 was not supported for RAC use, so it probably was not
tested.</DIV>
<DIV> </DIV>
<DIV> I do agree on the part about OCFS2+LVM and OCFS2+LVM+NFS not
being well tested. Also, I suspect one cannot run an Active-Active NFS
cluster without special NFS software. It would need to be
Active-Passive.</DIV>
<DIV> </DIV>
<DIV>Regards,</DIV>
<DIV>Luis<BR><BR><B><I>Alexei_Roudnev
<Alexei_Roudnev@exigengroup.com></I></B> wrote:</DIV>
<BLOCKQUOTE class=replbq
style="PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: #1010ff 2px solid">Yes,
it can be done.<BR><BR>The question is reliability:<BR>- OCFSv2 is not very
stable when it comes to millions of files;<BR>- OCFSv2 clusters tend to
self-fence after small SAN storage glitches (this is <BR>by design, so you
can't eliminate it even if<BR>you tune all the timeouts - only improve it);<BR>-
OCFSv2 + LVM + NFS is not well-tested territory.<BR><BR>It should work in
theory, and it works in practice under average load and FS <BR>size. No one
knows how it behaves on very big storage and very big file <BR>systems
over 1 - 2 years of active usage. I managed to get a stable OCFSv2 system
<BR>here, after applying a few patches and discovering a few issues, BUT<BR>I
use it on a lightly loaded file system (which is not critical at all) to get
<BR>more statistics on its behavior before I will use it for anything
else.<BR><BR>Comparing with heartbeat + LVM + ReiserFS + NFS:<BR>- all
technologies in that stack are well tested and heavily used;<BR>- heartbeat has
external fencing (STONITH), so it is extremely reliable in <BR>the long term
- it can recover from almost any failure (sometimes it doesn't <BR>detect a
failure, it's true);<BR>- ReiserFS (or ext3) has proved to be very stable on
huge file systems (they <BR>are widely used, so we don't expect any problems
here).<BR>One problem comes from Novell - since they stopped using it as the
default, I <BR>can't trust ReiserFS on SLES10 (because it is not the
default), but we can <BR>still trust it on SLES9, etc. (where it is the
default).<BR><BR>Common rule: if you want a reliable system, use defaults
where possible. <BR>OCFSv2 + NFS is not a default yet (though OCFSv2 has
improved dramatically <BR>over the last 2 years).<BR><BR>----- Original Message -----
<BR>From: "Pavel Georgiev" <pavel@netclime.com><BR>To:
<ocfs2-users@oss.oracle.com><BR>Sent: Wednesday, October 10, 2007 1:25
AM<BR>Subject: Re: [Ocfs2-users] Cluster setup<BR><BR><BR>> How about
using just OCFSv2 as I described in my first mail - two servers<BR>>
export their storage, the rest of the servers mount it and a failure of
<BR>> any<BR>> of the two storage servers remains transparent to the
clients. Can this be<BR>> done with OCFSv2?<BR>><BR>><BR>> On
Tuesday 09 October 2007 21:46:15 Alexei_Roudnev wrote:<BR>>> You
better use<BR>>><BR>>> LVM + heartbeat + NFS + cold failover
cluster.<BR>>><BR>>> It works 100% stable and is 100% safe from
the bugs (and it allows online<BR>>> resizing, if your HBA or iSCSI
can add lun's on the fly).<BR>>><BR>>> Combining NFS + LVM +
OCFSv2 can cause many unpredictable problems, esp. <BR>>>
on<BR>>> the unusual (for OCFSv2) system (such as
Ubuntu).<BR>>><BR>>> ----- Original Message -----<BR>>>
From: "Brian Anderson" <banderson@athenahealth.com><BR>>> To:
<ocfs2-users@oss.oracle.com><BR>>> Sent: Tuesday, October 09, 2007
11:35 AM<BR>>> Subject: RE: [Ocfs2-users] Cluster
setup<BR>>><BR>>><BR>>><BR>>> Not exactly. I'm in a
similar boat right now. I have 3 NFS servers all<BR>>> mounting an
OCFS2 volume. Each NFS server has its own IP, and the<BR>>> clients
load balance manually... some mount fs1, others fs2, and the<BR>>>
rest fs3. In an ideal world, I'd have the NFS cluster presenting
a<BR>>> single IP, and failing over / load balancing some other
way.<BR>>><BR>>> I'm looking at NFS v4 as one potential avenue
(no single IP, but it does<BR>>> let you fail over from 1 server to
the next in line), and commercial<BR>>> products such as
IBRIX.<BR>>><BR>>><BR>>><BR>>><BR>>>
Brian<BR>>><BR>>> > -----Original Message-----<BR>>>
> From: ocfs2-users-bounces@oss.oracle.com<BR>>> >
[mailto:ocfs2-users-bounces@oss.oracle.com] On Behalf Of Sunil
Mushran<BR>>> > Sent: Tuesday, October 09, 2007 2:27 PM<BR>>>
> To: Luis Freitas<BR>>> > Cc:
ocfs2-users@oss.oracle.com<BR>>> > Subject: Re: [Ocfs2-users]
Cluster setup<BR>>> ><BR>>> > Unsure what you mean. If the
two servers mount the same<BR>>> > ocfs2 volume and export them via
nfs, isn't that clustered nfs?<BR>>> ><BR>>> > Luis
Freitas wrote:<BR>>> > > Is there any cluster NFS solution out
there? (Two NFS<BR>>> ><BR>>> > servers
sharing<BR>>> ><BR>>> > > the same filesystem with
distributed locking and failover<BR>>> ><BR>>> >
capability)<BR>>> ><BR>>> > > Regards,<BR>>> >
> Luis<BR>>> > ><BR>>> > > */Sunil Mushran
<sunil.mushran@oracle.com>/* wrote:<BR>>> > ><BR>>> >
> Appears what you are looking for is a mix of ocfs2 and nfs.<BR>>>
> > The storage servers mount the shared disks and the
reexport<BR>>> > > them via nfs to the remaining
servers.<BR>>> > ><BR>>> > > ubuntu 6.06 is too old.
If you are stuck on Ubuntu LTS, the<BR>>> > > next version 7.10
should have all you want.<BR>>> > ><BR>>> > > Pavel
Georgiev wrote:<BR>>> > > > Hi List,<BR>>> > >
><BR>>> > > > I`m trying to build a cluster storage with
commodity<BR>>> ><BR>>> > hardware in<BR>>>
><BR>>> > > a way that<BR>>> > ><BR>>> >
> > the all the data would be on > 1 server. It should<BR>>>
><BR>>> > have the meet<BR>>> ><BR>>> > >
the<BR>>> > ><BR>>> > > > following
requirements:<BR>>> > > > 1) If one of the servers goes down,
the cluster<BR>>> ><BR>>> > should continue<BR>>>
><BR>>> > > to work with<BR>>> > ><BR>>>
> > > rw access from all clients.<BR>>> > > > 2)
Clients that mount the storage should not be part<BR>>>
><BR>>> > of cluster<BR>>> ><BR>>> > > (not
export<BR>>> > ><BR>>> > > > any disk storage) -
I have few servers with huge disks that I<BR>>> > ><BR>>>
> > want to store<BR>>> > ><BR>>> > > >
data on (currently 2 servers, maybe more in the future) and I<BR>>>
> ><BR>>> > > want to<BR>>> > ><BR>>>
> > > storethe data only on them, the rest of the server should
just<BR>>> > ><BR>>> > > mount and use<BR>>>
> ><BR>>> > > > that storage with the ability to
continue operation if one of<BR>>> > ><BR>>> > > the
two storage<BR>>> > ><BR>>> > > > servers goes
down.<BR>>> > > > 3) More servers should be able to join the
cluster<BR>>> ><BR>>> > and at given<BR>>>
><BR>>> > > point,<BR>>> > ><BR>>> >
> > expanding the total size of the cluster, hopefully
without<BR>>> > ><BR>>> > > rebuilding
the<BR>>> > ><BR>>> > > > storage.<BR>>>
> > > 4) Load balance is not a issue - all the load can go to one
of<BR>>> > ><BR>>> > > the two storage<BR>>>
> ><BR>>> > > > servers (although its better to be
balanced), the main goal is<BR>>> > ><BR>>> > > to
have<BR>>> > ><BR>>> > > > redundant
storage<BR>>> > > ><BR>>> > > > Does ocfs2
meet these requirements? I read few howtos but none<BR>>> >
><BR>>> > > of them<BR>>> > ><BR>>> >
> > mentioned my second requirement (only some of the servers
to<BR>>> > ><BR>>> > > hold the data).<BR>>>
> ><BR>>> > > > Are there any specific steps to do to
accomplish (2) and (3)?<BR>>> > > ><BR>>> > >
> I`m using Ubuntu 6.06 on x86.<BR>>> > > ><BR>>>
> > > Thanks!<BR>>> > > ><BR>>> > > >
_______________________________________________<BR>>> > > >
Ocfs2-users mailing list<BR>>> > > >
Ocfs2-users@oss.oracle.com<BR>>> > > >
http://oss.oracle.com/mailman/listinfo/ocfs2-users<BR></BLOCKQUOTE><BR>
</BLOCKQUOTE></BODY></HTML>