[Ocfs2-users] initial cluster setup

Wim Coekaerts wim.coekaerts at oracle.com
Thu Mar 23 11:20:15 CST 2006


Well, cluster.conf is there so that all nodes can talk over the network for 
locking and membership.

If you have

node1 : /dev/sda1  formatted w/ ocfs2

then on node2 you will have to have THAT disk under some device name, 
whether that's also sda1 or sdd1 or whatever, 
and there you just have to mount it.

You format once, on one node only.
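
For example (a rough sketch; the device names, label, and mount point are 
just placeholders, and it assumes the o2cb stack is already online on both 
nodes):

    # on node1 only: format the shared disk once
    # -N 2 reserves two node slots, -L sets a volume label
    mkfs.ocfs2 -L mydata -N 2 /dev/sda1

    # on node1: mount it
    mount -t ocfs2 /dev/sda1 /mnt/mydata

    # on node2: mount the SAME disk, under whatever device name it has there
    mount -t ocfs2 /dev/sdd1 /mnt/mydata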

But your disks need to be shared across all nodes! And that's not 
OCFS2's job; that's your job to set up.
Have you done that part?
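
A quick way to check (assuming the mounted.ocfs2 tool from ocfs2-tools is 
installed): run it on each node; both should report the same volume UUID, 
even if the device name differs:

    # on each node: quick-detect ocfs2 volumes (device, UUID, label)
    mounted.ocfs2 -d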

Wim

Matthew Boeckman wrote:
> I followed the manual's advice on setting up the initial nodes, and let 
> it create the /etc/cluster/ocfs2/cluster.conf (below) for me. This file 
> is identical on both systems. I restarted o2cb on both systems after 
> this file was in place.
>
> I get conceptually how it all fits together; are you saying that I have 
> a config problem?
>
> cluster.conf:
> node:
>          ip_port = 7777
>          ip_address = 10.1.1.100
>          number = 0
>          name = leto-ii
>          cluster = ocfs2
>
> node:
>          ip_port = 7777
>          ip_address = 10.1.1.33
>          number = 1
>          name = voyager
>          cluster = ocfs2
>
> cluster:
>          node_count = 2
>          name = ocfs2
>
>
> Kevin Hulse wrote:
>   
>> Think of the nodes that share ocfs filesystems as
>> a cluster. The configuration and modules for ocfs
>> form a cluster of sorts. The ocfs configuration
>> must be aware of all nodes that attach to the
>> filesystems. Add a node and that shared configuration
>> needs to be changed and updated on all nodes
>> of the "cluster".
>>
>> Didn't you configure any files in /etc for the
>> first node?
>>
>> Matthew Boeckman wrote:
>>
>>     
>>> Hello list!
>>>
>>> Just pulled down ocfs2 this morning and installed it on two RHEL 4 
>>> 2.6.9-34 systems. I have followed the manual, but am not seeing some 
>>> things it describes and am not sure what the problem is.
>>>
>>> I guess my first question is - do I need to have a partition formatted 
>>> as ocfs2 on all nodes in the cluster for this to work?
>>>
>>> I have formatted a partition via ocfs2console on node1, and mounted it 
>>> as well. However, I cannot figure out how to make that filesystem 
>>> available on node0, where I have not created an ocfs2 partition.
>>>
>>> I've been assuming that formatting as ocfs2 will destroy existing 
>>> data, and I don't have a spare partition on node0 yet; I can get one 
>>> if that is required.
>>>
>>> Basically, my overall question is: how do I mount the filesystem(s) 
>>> from various nodes on each other, or make the whole thing available 
>>> to all the nodes?
>>>
>>> Thanks in advance!
>>>



