[Ocfs2-users] Another node is heartbeating in our slot

Werner van der Walt Werner at softselect.biz
Tue Jan 8 12:21:04 PST 2008


Okay, I see what you are getting at. Even though each node is mounting two
separate iSCSI block devices from two separate hosts, the setup is still
treated as one cluster. So the conf on each of the two nodes will look like
this:
cluster:
	node_count = 2
	name = mycluster

node:
	...
	number = 0
	name = node1
	cluster = mycluster

node:
	...
	number = 1
	name = node2
	cluster = mycluster
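
(Just as a side note, once that cluster.conf is in place on both nodes I
assume the cluster still has to be brought online with the o2cb init script
before mounting anything; roughly something like this, using the cluster
name from the example above:)

	# on each of the two nodes, after /etc/ocfs2/cluster.conf is written
	/etc/init.d/o2cb load
	/etc/init.d/o2cb online mycluster
	/etc/init.d/o2cb status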

What I actually meant was treating iSCSI block device /dev/sdb and iSCSI
block device /dev/sdc as totally separate entities, so that each node can
join either one independently, as needed. The first node would join both,
and the second node only one. Hypothetically, the conf files would then look
like this:
-------Node 1 (conf file on node 1)
cluster:
	node_count = 1
	name = cluster1

node:
	...
	number = 0
	name = node1
	cluster = cluster1

cluster:
	node_count = 2
	name = cluster2

node:
	...
	number = 0
	name = node1
	cluster = cluster2

node:
	...
	number = 1
	name = node2
	cluster = cluster2

-------Node 2 (conf file on node 2)
cluster:
	node_count = 2
	name = cluster2

node:
	...
	number = 0
	name = node1
	cluster = cluster2

node:
	...
	number = 1
	name = node2
	cluster = cluster2

My way, I have the flexibility to create more than one independent cluster
membership per node. Your way, all mounted OCFS2 volumes will always be
treated as part of a single unbreakable group for every node that has the
same conf file, even though you may not want it to be like that. Looking at
my example above, node 2 will always have membership in cluster2 as well,
because the conf files on both nodes must be the same, and as far as I can
see there is no way around this?

Werner

-----Original Message-----
From: Sunil Mushran [mailto:Sunil.Mushran at oracle.com] 
Sent: Tuesday, January 08, 2008 9:54 PM
To: Werner van der Walt
Cc: ocfs2-users at oss.oracle.com
Subject: Re: [Ocfs2-users] Another node is heartbeating in our slot

host1 and host2 are exporting one iSCSI volume each.

node1 and node2 discover these exported iSCSI volumes as sda and sdc
(for this example, say the devnames are consistent on both nodes).

cluster.conf will need to have the two nodes, node1 and node2, described
as a cluster.

host1 and host2 are not part of the cluster. (Unsure what you mean by
pointing to host1.)

Run mkfs.ocfs2 on both volumes, sda and sdc, from any one node, and mount
them on both node1 and node2.
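
Roughly like this, for example (the volume labels and mount points below
are just placeholders):

	# run once, from any one node
	mkfs.ocfs2 -L vol1 /dev/sda
	mkfs.ocfs2 -L vol2 /dev/sdc

	# then on both node1 and node2
	mkdir -p /mnt/vol1 /mnt/vol2
	mount -t ocfs2 /dev/sda /mnt/vol1
	mount -t ocfs2 /dev/sdc /mnt/vol2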

Werner van der Walt wrote:
> Okay, so we are in agreement then, as per your second point, that it's
> multiple nodes with a single shared block device :)
>
> I am still unclear on your first point though...
> Scenario is as follows:
> - 2 x iSCSI hosts, each exporting an iSCSI block device (let's call them
> host1 and host2). The block devices are locally attached storage on each
> of the hosts, so they are different block devices.
> - 2 x nodes attempting connection to both iSCSI hosts at the same time.
> Setting up both iSCSI connections on each node is no problem, as I just do
> a discovery per node for the two iSCSI targets and they get added locally
> (let's say /dev/sdb and /dev/sdc).
> - I now need to set up OCFS2 for each of /dev/sdb and /dev/sdc to format
> the file systems and mount.
> - Configuring /etc/ocfs2/cluster.conf only allows me to set up one
> cluster, so I will point that at host1 on each of the nodes by giving it a
> cluster name of host1 and node numbers of 0 and 1 respectively.
>
> Now my problem is: how do I create the OCFS2 cluster config file for host2
> and give it node numbers 0 and 1 again for the shared file system on
> host2? According to the docs I cannot do this in /etc/ocfs2/cluster.conf
> a second time, even if the cluster name details are different?
>
> Werner
>   