[Ocfs2-users] sanity check - Xen+iSCSI+LVM+OCFS2 at dom0/domU

Alok K. Dhir adhir at symplicity.com
Thu Feb 7 07:07:20 PST 2008


Hello all - we're evaluating OCFS2 in our development environment to  
see if it meets our needs.

We're testing it with an iSCSI storage array (Dell MD3000i) and 5  
servers running CentOS 5.1 (2.6.18-53.1.6.el5xen).

1) Each of the 5 servers runs the CentOS 5.1 open-iscsi initiator and  
sees the volumes exposed by the array just fine.  So far so good.
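
For reference, the per-node initiator setup was essentially the stock  
open-iscsi sequence (the portal address and target IQN below are  
placeholders, not our real values):

    # discover the MD3000i targets and log in; 10.0.0.10 and the IQN
    # are placeholders
    iscsiadm -m discovery -t sendtargets -p 10.0.0.10
    iscsiadm -m node -T iqn.1984-05.com.dell:md3000i.example -p 10.0.0.10 --login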

2) Created a volume group from the exposed iSCSI volumes, then carved  
out a few LVM2 logical volumes.
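
Roughly the following, run once from a single node (the /dev/sdX names  
are examples; actual device names depend on how the iSCSI LUNs  
enumerate):

    pvcreate /dev/sdb /dev/sdc               # the iSCSI LUNs
    vgcreate md3000vg /dev/sdb /dev/sdc
    lvcreate -L 100G -n testvol0 md3000vg    # size is illustrative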

3) Ran 'vgscan; vgchange -a y' on all the cluster members.  All of  
them see the "md3000vg" volume group.  Looking good.  (We have no  
intention of changing the LVM2 configuration much, if at all, and can  
make sure any such changes happen while the volumes are offline on all  
cluster members, so in theory this should not be a problem.)
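
For the record, the exact per-node sequence (adding the VG name and an  
lvs check here for clarity):

    # run on every cluster member after the iSCSI login
    vgscan
    vgchange -a y md3000vg
    lvs md3000vg      # testvol0 shows up everywhere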

4) mkfs.ocfs2 /dev/md3000vg/testvol0 -- works great
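
(We took the defaults there.  A more explicit invocation would look  
like the line below -- the -N node-slot count is illustrative, but it  
does need to cover every dom0 and domU that will mount the volume  
concurrently:)

    # -L sets the label; -N 8 is a guess, size it to your node count
    mkfs.ocfs2 -L testvol0 -N 8 /dev/md3000vg/testvol0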

5) Mounted it on all the Xen dom0 boxes in the cluster -- works great.
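
That is, on each dom0 (the mount point is arbitrary):

    # mount point is an example
    mkdir -p /mnt/testvol0
    mount -t ocfs2 /dev/md3000vg/testvol0 /mnt/testvol0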

6) Created a VM on one of the cluster members, set up iSCSI, ran  
vgscan, and md3000vg shows up -- looking good.

7) Installed OCFS2, ran 'service o2cb enable' -- starts up fine.  
Mounted /dev/md3000vg/testvol0 -- works fine.
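
The same /etc/ocfs2/cluster.conf is in place on every member, dom0s  
and the domU alike, in the usual layout.  Node names and addresses  
below are sanitized placeholders, and the domU gets its own node  
stanza (the real file lists all 5 dom0s plus the domU, with node_count  
to match):

    node:
            ip_port = 7777
            ip_address = 192.168.1.101
            number = 0
            name = dom0-node01
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.1.201
            number = 1
            name = domu-node01
            cluster = ocfs2

    cluster:
            node_count = 2
            name = ocfs2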

** Thanks for making it this far -- this is where it gets interesting

8) Ran 'iozone' in the domU against the OCFS2 share - BANG - immediate  
kernel panic, repeatable all day long:

	"kernel BUG at fs/inode.c"

So my questions:

1) Should this work?

2) If not, what should we do differently?

3) Currently we're tracking the latest RHEL/CentOS 5.1 kernels --  
would we have better luck with the latest mainline kernel?

Thanks for any assistance.

Alok Dhir



