[Ocfs2-users] cluster manager does not start/run ERROR: OemInit2: Attempting to open the CMDiskFile for a multi-node RAC

Sunil Mushran sunil.mushran at oracle.com
Sun May 24 17:31:38 PDT 2009


Use SLES or (RH)EL. It won't work on other distros.

On May 24, 2009, at 3:43 PM, sundar mahadevan <sundarmahadevan82 at gmail.com> wrote:

> To add some more details to this issue:
>
> /u01/oradata/orcl/orcl/cmquorumfile, the file named in the error
> message, is accessible from the shell prompt. I also checked
> $ORACLE_HOME/oracm/admin/cmcfg.ora, and its entry reads
> CmDiskFile=/u01/oradata/orcl/orcl/cmquorumfile, which again points to
> an accessible file. This makes me suspect that OCFS2 is not
> recognising the file. I'm a newbie. Help please!
>
> On Sat, May 23, 2009 at 5:05 PM, sundar mahadevan <sundarmahadevan82 at gmail.com> wrote:
>> Hi Members,
>>
>> I'm trying to install 9i RAC (an initial installation) on a 2-node
>> cluster (OCFS2 on openSUSE 11.1) using my home test boxes, both
>> 900 MHz with 512 MB RAM. Because of the RAM limitation I can only go
>> for 9i or 10gR1. I'm using OCFS2, and the shared disk is visible and
>> accessible from both boxes. The permissions are set correctly. I'm
>> executing the ocmstart.sh script as root, and root is added to the
>> dba group, which has rwx on all the required directories starting
>> from /u01. The cmquorumfile and sharedsrvctlconfig files are on
>> /u01/oradata/orcl/orcl. There are no error messages in
>> /var/log/messages. I know that Oracle does not support openSUSE and
>> only supports Enterprise Linux, Red Hat, SLES and Asianux. I'm a
>> newbie. Help please!
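>>
>> Is there anything beyond these basic checks that I should be running
>> on each node (assuming the stock o2cb init script that ships with
>> ocfs2-tools)?
>>
>> /etc/init.d/o2cb status   # cluster stack should be loaded and online
>> mount -t ocfs2            # the shared volume should be listed on both boxes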
>>
>>
>> sunny1:~ # cat /u01/oradata/orcl/product/9.2/db_1/oracm/log/cm.log
>> oracm, version[ 9.2.0.2.0.47 ] started {Sat May 23 15:53:16 2009 }
>> KernelModuleName is hangcheck-timer {Sat May 23 15:53:16 2009 }
>> OemNodeConfig(): Network Address of node0: 10.1.1.1 (port 9998) {Sat May 23 15:53:16 2009 }
>> OemNodeConfig(): Network Address of node1: 10.1.1.2 (port 9998) {Sat May 23 15:53:16 2009 }
>> ERROR:    OemInit2: Attempting to open the CMDiskFile for a multi-node RAC on a non-NFS, non-OCFS, or non-raw device cluster, tid = main:-1210403136 file = oem.c, line = 482 {Sat May 23 15:53:16 2009 }
>> ERROR:    OemInit2: If the CMDiskFile is supposed to be an NFS or OCFS file, please make sure that the relevant shared file system is mounted properly, tid = main:-1210403136 file = oem.c, line = 483 {Sat May 23 15:53:16 2009 }
>> ERROR:    OemInit2: If the CMDiskFile is supposed to be a raw device, please make sure that it has been created properly, tid = main:-1210403136 file = oem.c, line = 484 {Sat May 23 15:53:16 2009 }
>> --- DUMP GROUP STATE DB ---
>> --- END OF GROUP STATE DUMP ---
>> --- Begin Dump ---
>> TRACE:    LogListener: Spawned with tid 0xb70c6b90., tid = -1223922800 file = logging.c, line = 116 {Sat May 23 15:53:16 2009 }
>> oracm, version[ 9.2.0.2.0.47 ] started {Sat May 23 15:53:16 2009 }
>> TRACE:    Can't read registry value for HeartBeat, tid = main:-1210403136 file = unixinc.c, line = 1011 {Sat May 23 15:53:16 2009 }
>> TRACE:    Can't read registry value for PollInterval, tid = main:-1210403136 file = unixinc.c, line = 1011 {Sat May 23 15:53:16 2009 }
>> TRACE:    Can't read registry value for WatchdogTimerMargin, tid = main:-1210403136 file = unixinc.c, line = 1011 {Sat May 23 15:53:16 2009 }
>> TRACE:    Can't read registry value for WatchdogSafetyMargin, tid = main:-1210403136 file = unixinc.c, line = 1011 {Sat May 23 15:53:16 2009 }
>> KernelModuleName is hangcheck-timer {Sat May 23 15:53:16 2009 }
>> TRACE:    Can't read registry value for ClientTimeout, tid = main:-1210403136 file = unixinc.c, line = 1011 {Sat May 23 15:53:16 2009 }
>> TRACE:    InitNMInfo: setting clientTimeout to 620s based on MissCount 620 and PollInterval 1000ms, tid = main:-1210403136 file = nmconfig.c, line = 137 {Sat May 23 15:53:16 2009 }
>> TRACE:    InitClusterDb(): getservbyname on CMSrvr failed - 0 : assigning 9998, tid = main:-1210403136 file = nmconfig.c, line = 212 {Sat May 23 15:53:16 2009 }
>> OemNodeConfig(): Network Address of node0: 10.1.1.1 (port 9998) {Sat May 23 15:53:16 2009 }
>> TRACE:    OemCreateListenPort: bound at 9998, tid = main:-1210403136 file = oem.c, line = 858 {Sat May 23 15:53:16 2009 }
>> TRACE:    InitClusterDb(): found my node info at 0 name sunny1, priv sunny1, port 3623, tid = main:-1210403136 file = nmconfig.c, line = 265 {Sat May 23 15:53:16 2009 }
>> OemNodeConfig(): Network Address of node1: 10.1.1.2 (port 9998) {Sat May 23 15:53:16 2009 }
>> TRACE:    InitClusterDb(): Local Node(0) NodeName[sunny1], tid = main:-1210403136 file = nmconfig.c, line = 283 {Sat May 23 15:53:16 2009 }
>> TRACE:    InitClusterDb(): Cluster(Oracle) with (2) Defined Nodes, tid = main:-1210403136 file = nmconfig.c, line = 286 {Sat May 23 15:53:16 2009 }
>> TRACE:    OEMInits(): CM Disk File (/u01/oradata/orcl/orcl/cmquorumfile), tid = main:-1210403136 file = oem.c, line = 243 {Sat May 23 15:53:16 2009 }
>> ERROR:    OemInit2: Attempting to open the CMDiskFile for a multi-node RAC on a non-NFS, non-OCFS, or non-raw device cluster, tid = main:-1210403136 file = oem.c, line = 482 {Sat May 23 15:53:16 2009 }
>> ERROR:    OemInit2: If the CMDiskFile is supposed to be an NFS or OCFS file, please make sure that the relevant shared file system is mounted properly, tid = main:-1210403136 file = oem.c, line = 483 {Sat May 23 15:53:16 2009 }
>> ERROR:    OemInit2: If the CMDiskFile is supposed to be a raw device, please make sure that it has been created properly, tid = main:-1210403136 file = oem.c, line = 484 {Sat May 23 15:53:16 2009 }
>> TRACE:    IncrementEventValue: *(80f2920) = (1, 1), tid = main:-1210403136 file = unixinc.c, line = 253 {Sat May 23 15:53:17 2009 }
>> --- End Dump ---
>>
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users
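
For what it's worth, the oem.c errors in the log mean that the check
oracm runs on the CMDiskFile did not classify the backing mount as
NFS, OCFS, or a raw device. A rough way to see what the kernel reports
for that mount (GNU stat; the reported type is what such a
filesystem-type check keys off, not the file permissions):

# Filesystem type and magic for the mount holding the quorum file
stat -f -c 'fs type: %T (magic 0x%t)' /u01/oradata/orcl/orcl/cmquorumfile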


