Luis,

Things can be worse, because we can run three clusterwares at the same
time on the same Linux box:

- CRS (Oracle RAC)
- O2CB
- Heartbeat2

The problem is that each system makes independent decisions: it
independently selects its masters and slaves, and independently
decides _to fence_ or _to self-fence_.

This makes for a common failure case: if we have a short SAN service
interruption or IP network interruption, the different components make
different decisions and fence themselves or each other. (BTW, in the
case of CRS, fencing is actually a feature of CSS, not of CRS.)

Of these three clusterwares, only heartbeat (or heartbeat2) is
reliable. Both O2CB and CRS use a very primitive heartbeat, without
redundancy and with poor default parameters, and both easily make
wrong decisions.
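
For contrast, this is roughly what redundant heartbeat paths look like
in heartbeat's /etc/ha.d/ha.cf (a minimal sketch; the interface,
timing and node names are illustrative, not from a real config):

    # /etc/ha.d/ha.cf - three independent heartbeat paths
    bcast  eth0          # primary LAN
    bcast  eth1          # second, independent NIC
    serial /dev/ttyS0    # serial link survives any switch failure
    baud   19200
    keepalive 1          # heartbeat interval (seconds)
    deadtime  30         # declare a node dead after 30s of silence
    node   node1 node2

With three paths, a single switch reboot cannot make the nodes lose
sight of each other, which is exactly what saves the heartbeat cluster
in the lab story at the end of this mail.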

Fortunately, SuSE 10 ships an integrated O2CB + heartbeat2 version (I
am not sure how stable it is yet, but stability is only a matter of
time), and Oracle CRS (CSS) is conservative enough to prevent many
unnecessary reboots. But you are right - all this mess doesn't
increase overall reliability.

OCFSv2 is a great thing with great potential (not yet revealed),
especially counting the heartbeat2 integration and the datavolume
options (and because it is well tested with Oracle). But it really
requires some improvements to become production-grade. Some of these
improvements are cheap and safe (such as multiple interfaces for the
heartbeat - I always wonder what the problem is with implementing such
a simple and standard thing), others are already in progress
(heartbeat2 integration), and some require careful design and testing
(an improved, smarter fencing policy).
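
Until then, the least one can do is loosen O2CB's default timeouts. A
sketch, assuming an OCFS2 1.2-era /etc/sysconfig/o2cb (the threshold
value here is illustrative; tune it to your storage):

    # /etc/sysconfig/o2cb
    O2CB_ENABLED=true
    O2CB_BOOTCLUSTER=ocfs2
    # disk heartbeat: a node is declared dead after roughly
    # (O2CB_HEARTBEAT_THRESHOLD - 1) * 2 seconds of missed writes
    O2CB_HEARTBEAT_THRESHOLD=31

A larger threshold rides out short SAN glitches (such as a switch
reboot) at the cost of slower detection of a genuinely dead node.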

PS. I really have seen such a thing as independent self-fencing. We
have a RAC cluster in the lab, running on iSCSI with the same switch
for the SAN and the interconnect (generally I use a crossover cable
for the interconnect, but I used a switch connection in this case).
Once upon a time we had a UPS glitch and the switch rebooted.
All nodes in the cluster rebooted - one because 'OCFS fenced itself'
and the other because 'CSS fenced itself' (though not a single
non-cluster system even noticed the switch reboot). The heartbeat
cluster was not affected either (because of its redundant heartbeat -
eth0, eth1, /dev/ttyS0). So multiple self-fencing is a real problem.
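
And when a node does panic, it should at least come back on its own
instead of hanging. The panic knobs discussed below in the quoted
thread control exactly that; a sketch (the 30-second delay is an
arbitrary choice):

    # make a fenced/panicked node reboot instead of hanging
    echo 30 > /proc/sys/kernel/panic          # reboot 30s after panic
    echo 1  > /proc/sys/kernel/panic_on_oops  # treat an oops as panic

    # or persistently, in /etc/sysctl.conf:
    #   kernel.panic = 30
    #   kernel.panic_on_oops = 1

Without them, a self-fenced node just sits dead at the console until
someone power-cycles it - the 'catatonic' behavior David described.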

----- Original Message -----
From: Luis Freitas <lfreitas34@yahoo.com>
To: Sunil Mushran <Sunil.Mushran@oracle.com>
Cc: ocfs2-users@oss.oracle.com
Sent: Monday, April 09, 2007 4:54 PM
Subject: Re: [Ocfs2-users] Catatonic nodes under SLES10

Sunil,

   First I want to make clear that I do think that Oracle Cluster File
System provides great value for Oracle Linux customers, and I do know
that one has to pay top dollar for equivalent functionality on other
platforms, for example Veritas Storage Foundation and others offered
by IBM and HP.

   But the Linux platform is the only one where there are two
independent clusterwares running (O2CB and CRS). On all the other
platforms, as far as I know, when there is a second clusterware on the
machine, CRS acts as a client to it. Use of an uncertified clusterware
stack independently and concurrently with CRS is not even allowed on
other platforms.

   This is kind of funny, because both O2CB and CRS are Oracle
products.

Regards,
Luis Freitas

Sunil Mushran <Sunil.Mushran@oracle.com> wrote:

Fencing is not a fs operation but a cluster operation. The fs is only
a client of the cluster stack.

Alexei_Roudnev wrote:
> It all depends on the usage scenario.
>
> Typical usage is, for example:
>
> (1) Shared application home. Writes happen once a week during
> maintenance; the rest of the time files are opened for reading only.
> The few logfiles can be redirected if required.
>
> So, when the server sees a problem, it has had no pending IO for
> three days - so what is the purpose of a reboot? It knows 100% that
> no IO is pending, and that the other nodes have no pending IO
> either.
>
> (2) Backup storage for RAC. The FS is not open 90% of the time. At
> night, one node opens it and creates a few files. The other node has
> no pending IO on this FS. Fencing the passive node (which doesn't
> run any backup) is not useful, because it has had no pending IO for
> a few hours.
>
> (3) Web server. 10 nodes, only 1 makes updates. The same - most
> nodes have no pending IO.
>
> Of course there is always a risk of FS corruption in clusters. Any
> layer can keep pending IO forever (I saw the Linux kernel keep it
> for 10 minutes). The problem is that in such cases software fencing
> can't help either, because the node is half-dead and can't detect
> its own status.
>
> So, the key point here is not to _fence on every hiccup_ but to
> _keep the system without pending writes as long as possible_ and to
> make clean transitions between the active-write / active-read /
> passive states. Then you can avoid self-fencing in 90% of cases
> (because the server will be in the passive or active-read state). I
> mount the FS but don't cd into it, or just cd but don't read -
> passive state. I read a file - active read for 1 minute, then flush
> buffers so that it is in passive mode again. I begin to write -
> switch the system to write mode. I have not written blocks for 1
> minute - flush everything, wait 1 more minute, and switch to passive
> mode.
>
> ----- Original Message -----
> From: "Sunil Mushran" <Sunil.Mushran@oracle.com>
> To: "David Miller" <syslog@d.sparks.net>
> Cc: <ocfs2-users@oss.oracle.com>
> Sent: Monday, April 09, 2007 3:18 PM
> Subject: Re: [Ocfs2-users] Catatonic nodes under SLES10
>
>> For io fencing to be graceful, one requires better hardware. Read:
>> expensive. As in, switches where one can choke off all the ios to
>> the storage from a specific node.
>>
>> Read the following for a discussion on forced umounts. In short,
>> not possible as yet.
>> http://lwn.net/Articles/192632/
>>
>> Readonly does not work wrt io fencing. As in, ro only stops any new
>> userspace writes but cannot stop pending writes. And writes could
>> be lodged in any io layer. A reboot is the cheapest way to avoid
>> corruption. (While a reboot is painful, it is much less painful
>> than a corrupted fs.)
>>
>> With 1.2.5 you should be able to increase the network timeouts and
>> hopefully avoid the problem.
>>
>> David Miller wrote:
>>> Alexei_Roudnev wrote:
>>>> Did you check the
>>>>
>>>> /proc/sys/kernel/panic
>>>> /proc/sys/kernel/panic_on_oops
>>>>
>>>> system variables?
>>>>
>>> No. Maybe I'm missing something here.
>>>
>>> Are you saying that a panic/freeze/reboot is the
>>> expected/desirable behavior? That nothing more graceful could be
>>> done, like just dismounting the ocfs2 file systems, or forcing
>>> them to a read-only mount or something like that? We have to
>>> reload the kernel?
>>>
>>> Thanks,
>>>
>>> --- David
>>>
>>>> ----- Original Message -----
>>>> From: "David Miller" <syslog@d.sparks.net>
>>>> To: <ocfs2-users@oss.oracle.com>
>>>> Sent: Monday, April 02, 2007 9:01 AM
>>>> Subject: [Ocfs2-users] Catatonic nodes under SLES10
>>>>
>>> [snip]
>>>
>>>> Both servers will be connected to a dual-host external RAID
>>>> system. I've set up ocfs2 on a couple of test systems and
>>>> everything appears to work fine.
>>>>
>>>> Until, that is, one of the systems loses network connectivity.
>>>>
>>>> When the systems can't talk to each other anymore, but the disk
>>>> heartbeat is still alive, the high numbered node goes catatonic.
>>>> Under SLES 9 it fenced itself off with a kernel panic; under 10
>>>> it simply stops responding to network or console. A power cycle
>>>> is required to bring it back up.
>>>>
>>>> The desired behavior would be for the higher numbered node to
>>>> lose access to the ocfs2 file system(s). I don't really care
>>>> whether it would simply time out a la stale NFS mounts, or
>>>> immediately error out like access to non-existent files.

_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users