[Ocfs2-users] initial cluster setup

Matthew Boeckman matthewb at saepio.com
Thu Mar 23 13:36:35 CST 2006


Thanks for all the replies on this topic; I'm slowly getting it!

I originally had both systems using local disks. Now node0 has a 
directly connected IDE drive that I have exported over iSCSI to both 
node0 (via localhost) and node1 (via 10.1.1.100). Both systems now see 
it as /dev/sda. I think this is what I need to have, correct?
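
In case the details matter: the export side is just the IDE disk offered 
as a single LUN. With the iSCSI Enterprise Target it would look something 
like the below (the target name and device path are guesses, not my 
actual config):

    # /etc/ietd.conf on node0 -- export the local IDE disk as one LUN
    Target iqn.2006-03.com.saepio:node0.ocfs2disk
            Lun 0 Path=/dev/hda,Type=blockio    # or Type=fileio

    # each initiator (node0 via localhost, node1 via 10.1.1.100) then
    # discovers and logs in, e.g. with open-iscsi:
    #   iscsiadm -m discovery -t sendtargets -p 10.1.1.100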

So now my question is procedural - what's the next step? I have 
formatted /dev/sda with ocfs2console on node1... do I have to do the 
same thing on node0?
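
Here is what I think the command-line equivalent of those steps is; 
node1's address and the mount point below are placeholders, so please 
correct me if I have the procedure wrong:

    # /etc/ocfs2/cluster.conf -- an identical copy goes on BOTH nodes
    # (node0 is 10.1.1.100; node1's address below is a placeholder)
    cluster:
            node_count = 2
            name = ocfs2

    node:
            ip_port = 7777
            ip_address = 10.1.1.100
            number = 0
            name = node0
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 10.1.1.101
            number = 1
            name = node1
            cluster = ocfs2

    # bring the cluster stack online on both nodes
    /etc/init.d/o2cb online ocfs2

    # format once, from either node -- the disk is shared, so there is
    # no second format on the other node
    mkfs.ocfs2 -N 2 -L shared /dev/sda

    # then mount it on each node
    mount -t ocfs2 /dev/sda /volume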

Perhaps I should have started with what I'm trying to accomplish; it's 
possible I'm in entirely the wrong place:

One or more SCSI RAID systems directly attached to Linux heads that can 
share storage to a dozen clients via gigabit ethernet. NFS is how we do 
it now; it's slow, and I'm looking to replace it. The clients each need 
to access the volume, but do not need to access common directories 
(server a accesses /volume/client1 and /volume/client2, server b 
accesses /volume/client3 and /volume/client4, etc.).
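
Concretely, each client head would just mount the shared volume and work 
only inside its own directories; the fstab entry would be something along 
these lines (device name assumed, and _netdev keeps the mount from being 
attempted before the network/iSCSI is up):

    # /etc/fstab on each client head
    /dev/sda    /volume    ocfs2    _netdev,defaults    0 0

    # server a then works in /volume/client1 and /volume/client2,
    # server b in /volume/client3 and /volume/client4, and so on

I'm guessing that also means every client would need an entry in 
cluster.conf, and the filesystem would need enough node slots 
(mkfs.ocfs2 -N) to cover all of them.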

Am I on the right path, or down a rabbit hole with ocfs2?

THANKS!

-Matthew


Zach Brown wrote:
>> I have formatted a partition via ocfs2console on node1, and mounted it 
>> as well. However I cannot figure out how to make that filesystem 
>> available on node0, where I have not created an ocfs2 partition.
> 
> Can you describe specifically the storage you're using and how the nodes
> access it?
> 
> My apologies if I have it wrong, but your question makes it sound like
> the nodes don't have shared access to storage, which is required for
> ocfs2.  You can't build a shared ocfs2 volume out of nodes that all have
> individual private disks.
> 
> - z
> 


