[Ocfs2-users] Missing something basic...

Ulf Zimmermann ulf at atc-onlane.com
Wed Oct 17 21:30:02 PDT 2007


You need shared storage to use OCFS, not local storage on each server. 
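For concreteness, here is a sketch of what that looks like once both nodes can see the same block device (the device name /dev/sdb1 and the SAN/iSCSI transport are assumptions for illustration, not part of your setup):

```shell
# Run ONCE, on either node: format the shared device as OCFS2,
# reserving slots for two cluster members and labeling it "oracle".
mkfs.ocfs2 -N 2 -L oracle /dev/sdb1

# Run on EVERY node: bring the O2CB cluster stack online, then mount
# the SAME shared device. Each node mounts the one shared volume;
# nothing is mounted from a node-local disk.
/etc/init.d/o2cb online ocfs2
mkdir -p /media/cluster
mount -t ocfs2 /dev/sdb1 /media/cluster
```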

> -----Original Message-----
> From: ocfs2-users-bounces at oss.oracle.com [mailto:ocfs2-users-
> bounces at oss.oracle.com] On Behalf Of Benjamin Smith
> Sent: Wednesday, October 17, 2007 18:00
> To: ocfs2-users at oss.oracle.com
> Subject: [Ocfs2-users] Missing something basic...
> 
> I'm stumped. I'm doing some research on clustered file systems to be
> deployed
> over winter break, and am testing on spare machines first.
> 
> I have two identically configured computers, each with a 10 GB
> partition, /dev/hda2. I intend to combine these two LAN/RAID1 style to
> represent 10 GB of redundant cluster storage, so that if either machine
> fails, computing can resume with reasonable efficiency.
> 
> These machines are called "cluster1" and "cluster2", and are currently
> on a local Gb LAN. They are running CentOS 4.4 (a recompile of RHEL 4.4).
> I've set up SSH RSA keys so that I can ssh directly from either to the
> other without passwords, though I use a non-standard port, defined in
> ssh_config and sshd_config.
> 
> I've installed the RPMs without incident. I've set up a cluster called
> "ocfs2" with nodes "cluster1" and "cluster2", with the corresponding LAN
> IP addresses. I've confirmed that configuration changes propagate to
> cluster2 when I push the appropriate button in the X11 ocfs2console on
> cluster1. I've checked the firewall(s) to allow inbound TCP connections
> to port 7777 on both machines, and verified this with nmap. I've also
> tried turning off iptables completely. On cluster1, I've formatted the
> partition with the label "oracle" and mounted it at /media/cluster using
> the ocfs2console, and I can read/write to this partition with other
> applications. There's about a 5-second delay when mounting/unmounting,
> and the FAQ reflects that this is normal. SELinux is completely off.
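> For reference, the file that ocfs2console pushes to each node is
> /etc/ocfs2/cluster.conf; for this two-node cluster it would look roughly
> like the following (the IP addresses here are placeholders):

```
node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = cluster1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = cluster2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
```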
> 
> Questions:
> 
> 1) How do I get this "oracle" partition to show/mount on host cluster2,
> and on subsequent systems added to the cluster? Should I be expecting a
> /dev/* block device to mount, or is there some other program I should be
> using, similar to smbmount?
> 
> 2) How do I get this /dev/hda2 (aka "oracle") on cluster1 to combine
> (RAID1 style) with /dev/hda2 on cluster2, so that if either host goes
> down I still have a complete FS to work from? Am I misunderstanding the
> abilities and intentions of OCFS2? Do I need to do something with NBD,
> GNBD, ENBD, or similar? If so, what's the "recommended" approach?
> 
> Thanks,
> 
> -Ben
> 
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users
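On question 2 specifically: OCFS2 itself does not replicate data between
nodes, so mirroring two local partitions over the LAN has to happen below
the filesystem. One common approach at the time was DRBD 8 in
primary/primary mode, which turns the two local /dev/hda2 partitions into
a single replicated block device that both nodes can then mount. A sketch
of /etc/drbd.conf along those lines (hostnames match the poster's setup;
the IP addresses, port, and resource name are placeholders):

```
resource r0 {
  protocol C;                # synchronous replication
  net {
    allow-two-primaries;     # required for a cluster FS on top
  }
  on cluster1 {
    device    /dev/drbd0;
    disk      /dev/hda2;
    address   192.168.1.101:7789;
    meta-disk internal;
  }
  on cluster2 {
    device    /dev/drbd0;
    disk      /dev/hda2;
    address   192.168.1.102:7789;
    meta-disk internal;
  }
}
```

With this in place, OCFS2 would be created on /dev/drbd0 rather than on
/dev/hda2 directly.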
