[Ocfs2-users] initial cluster setup

Sunil Mushran Sunil.Mushran at oracle.com
Thu Mar 23 12:23:51 CST 2006


/etc/ocfs.conf is used in OCFS (Release 1).
/etc/ocfs2/cluster.conf is used in OCFS2.

The two products are different.

As others have commented, OCFS2 is a shared disk clustering
file system. It requires the disks to be concurrently accessible from
all nodes in the cluster.
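
The volume itself is formatted only once, from any one node; the other
nodes simply mount the same shared device. A minimal sketch, assuming
the shared disk shows up as /dev/sdX (the label is just an example):

# mkfs.ocfs2 -L myvolume /dev/sdX        (run once, from either node)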

# echo "stats" | debugfs.ocfs2 /dev/sdX | grep UUID
For a shared disk, the UUID must match on all nodes.
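
Once /etc/ocfs2/cluster.conf is identical on every node, something
along these lines should bring the cluster online and mount the volume
on each node (the device and mount point below are examples only):

# /etc/init.d/o2cb load
# /etc/init.d/o2cb online ocfs2
# mount -t ocfs2 /dev/sdX /ocfs2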

Kevin Hulse wrote:
> What about /etc/ocfs.conf?
>
> Matthew Boeckman wrote:
>
>   
>> I followed the manual's advice on setting up the initial nodes and
>> let it create the /etc/cluster/ocfs2/cluster.conf (below) for me. This
>> file is identical on both systems. I restarted o2cb on both systems
>> after this file was in place.
>>
>> I get conceptually how it all fits together; are you saying that I
>> have a config problem?
>>
>> cluster.conf:
>> node:
>>         ip_port = 7777
>>         ip_address = 10.1.1.100
>>         number = 0
>>         name = leto-ii
>>         cluster = ocfs2
>>
>> node:
>>         ip_port = 7777
>>         ip_address = 10.1.1.33
>>         number = 1
>>         name = voyager
>>         cluster = ocfs2
>>
>> cluster:
>>         node_count = 2
>>         name = ocfs2
>>
>>
>> Kevin Hulse wrote:
>>
>>     
>>> Think of the nodes that share ocfs filesystems as
>>> a cluster. The configuration and modules for ocfs
>>> form a cluster of sorts. The ocfs configuration
>>> must be aware of all nodes that attach to the
>>> filesystems. Add a node and that shared configuration
>>> needs to be changed and updated on all nodes
>>> of the "cluster".
>>>
>>> Didn't you configure any files in /etc for the
>>> first node?
>>>
>>> Matthew Boeckman wrote:
>>>
>>>       
>>>> Hello list!
>>>>
>>>> Just pulled down ocfs2 this morning and installed it on two RHEL 4 
>>>> 2.6.9-34 systems. I have followed the manual, but am not seeing some 
>>>> things it describes and am not sure what the problem is.
>>>>
>>>> I guess my first question is - do I need to have a partition 
>>>> formatted as ocfs2 on all nodes in the cluster for this to work?
>>>>
>>>> I have formatted a partition via ocfs2console on node1, and mounted
>>>> it as well. However, I cannot figure out how to make that filesystem
>>>> available on node0, where I have not created an ocfs2 partition.
>>>>
>>>> I've been assuming that formatting as ocfs2 will destroy existing
>>>> data, and I don't have a spare partition on node0 yet; I can get one
>>>> if that is required.
>>>>
>>>> Basically, my overall question is: how do I mount the filesystem(s)
>>>> from various nodes on each other, or make the whole thing available
>>>> to all the nodes?
>>>>
>>>> Thanks in advance!
>>>>


