[Ocfs2-users] Cluster setup

Sunil Mushran Sunil.Mushran at oracle.com
Thu Oct 11 15:28:02 PDT 2007


How is this a filesystem problem?

Ulf Zimmermann wrote:
> We don't, and when we were investigating why we had reassembly
> problems on the ProCurve 4108gl, we were specifically asked whether
> we were doing bonding or VLAN tagging (we were doing neither). It
> just looks like the ProCurves lose packets without reporting it. We
> have switched in Cisco 2960G-48s with Jumbo Frames now and haven't
> had any reassembly timeouts since then. Global Cache timeouts have
> gone down significantly. Each Interconnect for Oracle 10g has its
> own Cisco 2960G-48 now.
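
Incidentally, those silent drops usually do show up somewhere: the
kernel keeps IP reassembly counters. A quick check on a Linux node,
assuming the interconnect NIC is eth1 and a peer at 192.168.1.2 (both
are placeholders for your setup):

    # fragments the kernel could not reassemble
    netstat -s | grep -i reassembl

    # confirm the NIC and the whole switch path carry jumbo frames
    ip link show eth1 | grep mtu
    ping -M do -s 8972 192.168.1.2   # 9000-byte MTU minus 28 header bytes

If the reassembly-failure counter climbs while the switch reports no
drops, the switch is discarding fragments silently.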
>
>   
>> -----Original Message-----
>> From: Sunil Mushran [mailto:Sunil.Mushran at oracle.com]
>> Sent: Thursday, October 11, 2007 15:13
>> To: Ulf Zimmermann
>> Cc: Randy Ramsdell; ocfs2-users at oss.oracle.com
>> Subject: Re: [Ocfs2-users] Cluster setup
>>
>> Use network bonding.
>>
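
In practice that means putting the interconnect/heartbeat IP on a bond
whose slaves are cabled to two different switches. A minimal sketch in
RHEL-style network-scripts (device names, addresses, and the
active-backup mode are assumptions; use whatever mode your switches
support):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bonding mode=active-backup miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth1 (same for eth2)
    DEVICE=eth1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

With the slaves on separate switches, a single switch failure no
longer takes the link down.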
>> Ulf Zimmermann wrote:
>>     
>>>> -----Original Message-----
>>>> From: ocfs2-users-bounces at oss.oracle.com [mailto:ocfs2-users-
>>>> bounces at oss.oracle.com] On Behalf Of Alexei_Roudnev
>>>> Sent: Thursday, October 11, 2007 11:10
>>>> To: Sunil Mushran; Randy Ramsdell
>>>> Cc: ocfs2-users at oss.oracle.com
>>>> Subject: Re: [Ocfs2-users] Cluster setup
>>>>
>>>> I explained to you:
>>>> 1 - A single heartbeat interface IS A BUG for me.
>>>>
>>>>         
>>> I haven't really followed the whole discussion, but that point
>>> above did just come to my mind a few days ago when we replaced our
>>> HP ProCurve 4108gl used for 3 separate Interconnects on 10g, where
>>> only 1 also carries the OCFS2 heartbeat. So if that switch dies,
>>> OCFS2 will go down while Oracle 10g could survive (if OCFS2 didn't
>>> die).
>>>
>>> I have to agree that this is a bad design. The heartbeat should
>>> also run over at least two links for OCFS2 (see the cluster.conf
>>> sketch below).
>>>
>>> Ulf.
>>>
>>>
>>>       
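
For completeness: the o2cb stack only knows about one interface per
node, so the redundancy has to come from the layer underneath.
Assuming the bond0 sketched above, /etc/ocfs2/cluster.conf simply
points each node's ip_address at the bond (node names and addresses
are placeholders):

    cluster:
            node_count = 2
            name = mycluster

    node:
            ip_port = 7777
            ip_address = 192.168.1.1
            number = 0
            name = node1
            cluster = mycluster

    node:
            ip_port = 7777
            ip_address = 192.168.1.2
            number = 1
            name = node2
            cluster = mycluster

OCFS2 still sees a single link; the failover happens inside the
bonding driver, so a dying switch no longer takes the heartbeat with
it.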



