[Ocfs2-users] Another node is heartbeating in our slot

Werner van der Walt Werner at softselect.biz
Tue Jan 8 09:26:32 PST 2008


Thank you very much, Sunil. I will give it a try with multiple iSCSI
initiator nodes and no mount on the host, and check the results :)

Just another question relating to the number of hosts and nodes. According to
the docs, it seems you can only set up one cluster. What if you want a node
to be able to connect to two separate iSCSI hosts serving the same or
different OCFS2 block devices; can that be done? The config file only allows
for a single cluster's configuration, and you can only have one config file
per node, correct?
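
For reference, here is roughly what I understand a single-cluster
/etc/ocfs2/cluster.conf to look like; the cluster name, node names, and IP
addresses below are made-up placeholders, not from my actual setup (and in a
real file the attribute lines must be tab-indented):

    # placeholder values only -- a hypothetical two-node example
    cluster:
            node_count = 2
            name = mycluster

    node:
            ip_port = 7777
            ip_address = 192.168.0.10
            number = 0
            name = sanhost
            cluster = mycluster

    node:
            ip_port = 7777
            ip_address = 192.168.0.11
            number = 1
            name = node1
            cluster = mycluster

Every node: stanza has to name that one cluster, which is what makes me think
a node cannot belong to two clusters at once.
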
Am I then correct in saying that OCFS2 is more of a shared-access, multi-node
file system than a distributed cluster file system? Since I cannot connect
to more than one OCFS2 block device host at the same time, it will always be
a one-to-many relationship: if my host dies, the block device dies with it.
It does not keep multiple synchronised copies of the block device distributed
over several hosts, which I could set up to fail over if one of the hosts
dies?

Maybe my interpretation of a clustered file system is wrong?

Thanks.

Werner

-----Original Message-----
From: Sunil Mushran [mailto:sunil.mushran at oracle.com]
Sent: Saturday, January 05, 2008 9:16 PM
To: Werner van der Walt
Cc: ocfs2-users at oss.oracle.com
Subject: Re: [Ocfs2-users] Another node is heartbeating in our slot

You cannot mount it simultaneously on an iSCSI target (the host) and an
iSCSI initiator node, but you can mount it on multiple iSCSI initiator
nodes.
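
A rough sketch of that layout; the IQN, addresses, device name, and mount
point below are placeholders, and the target-side example assumes something
like the iSCSI Enterprise Target (the iscsitarget package) on Ubuntu:

    # --- on the iscsi target (host): export the device, do NOT mount it ---
    # hypothetical /etc/ietd.conf entry (placeholder IQN and device):
    #   Target iqn.2008-01.example:ocfs2vol
    #           Lun 0 Path=/dev/sdb,Type=blockio

    # --- on each iscsi initiator node (placeholder target address) ---
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node -T iqn.2008-01.example:ocfs2vol -p 192.168.0.10 --login

    # bring the o2cb cluster stack online, then mount the shared device
    /etc/init.d/o2cb online mycluster
    mount -t ocfs2 /dev/sdc /mnt/ocfs2

Every initiator node can mount the volume this way; only the machine
exporting the device stays unmounted.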

Werner van der Walt wrote:
> Hi All,
>
> Have been trying to set up ocfs2 on a machine that will act as a SAN.
> The O/S used is Ubuntu 7.10 for both host and node. I am mounting the
> shared ocfs2 storage via iSCSI target+initiator on the node and as a
> local mount on the host. Everything installed perfectly, the modules are
> loaded, and the services started. So far so good :)
> The problem comes in when booting the node and mounting the block device
> there. The host then starts giving the error "Another node is
> heartbeating in our slot", with error number 767. If I run mount and look
> at the settings on both host and node, both state that the device is
> mounted with heartbeat=local. Also, mounted.ocfs2 -f with no argument
> shows nothing, but mounted.ocfs2 -f /dev/sdc shows only the node, not
> also the host, as having the device mounted, even though it is mounted on
> the host and accessible (I can cd into the directory on the host and see
> the contents created on the node).
> The moment I umount on the node, the error messages stop on the host.
> The volume remains mounted on the host, as I can still cd into it, even
> though mounted.ocfs2 -f /dev/sdb shows nothing mounted.
>
> Any advice? I have been googling the whole day for info on using ocfs2
> and iSCSI together and the correct procedure to follow, but can't find
> anything of value. The ocfs2 user guide states that you can export via
> NFS but doesn't mention iSCSI in any way. Also unclear to me is how you
> are supposed to mount a remote file system if you are not making use of
> something like iSCSI; or is it always simply assumed that the different
> nodes are all directly attached to the storage? The guide's procedure
> descriptions always refer to local block devices but never show how
> those devices got there.
>
> Thanks for the assistance.
>
> Werner
> ------------------------------------------------------------------------
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users

