[Ocfs2-users] How are people using OCFS2 - any limitations

Henrik Carlqvist hc11 at poolhem.se
Mon Oct 13 14:14:00 PDT 2008


On Mon, 13 Oct 2008 08:50:33 -0700
Patrick Kelly <pjkelly at ucdavis.edu> wrote:
> We are running sakai using RHEL 4.0. We have six application servers
> that retrieve and update files stored in our AFS (Andrew File System)
> infrastructure service. There is currently about 500GB of data stored
> there, and we anticipate this will grow to 1TB over the next year or
> two. We are considering moving that data to an OCFS2 file system on an
> EMC SAN, using fiber channel connections to the application servers.
> 
> Can anyone give us some idea of their usage? How large have the file
> systems grown? How many nodes are connected? Any issues with expansion?
> Any information would be appreciated.

About two years ago I made an attempt to get OCFS2 working on two Dell
servers connected to an EMC SAN. If I remember right, they were
connected through a Brocade FC switch to their single-channel QLogic
FC cards. The purpose of this 2-node configuration was to build an
active-active HA NFS server.
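
In case it is useful as a reference: the O2CB cluster layout lives in
/etc/ocfs2/cluster.conf and has to be identical on both nodes. A
minimal 2-node sketch (the node names, addresses and port below are
made-up examples, not our real ones):

  cluster:
          node_count = 2
          name = ocfs2

  node:
          ip_port = 7777
          ip_address = 192.168.0.1
          number = 0
          name = nfs1
          cluster = ocfs2

  node:
          ip_port = 7777
          ip_address = 192.168.0.2
          number = 1
          name = nfs2
          cluster = ocfs2

Note that each node's name should match the machine's hostname, or
o2cb will not be able to identify the local node.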

I don't remember for sure, but I think they were running RHEL 4.0; it
could also have been RHEL 5.0.

We chose this solution because my company wanted commercial support
from trusted vendors. Dell did the installation and configuration of
the Red Hat servers, and they had also delivered the EMC SAN.

We had a number of problems with that configuration. Sometimes the
computers lost their connections to the SAN. For reasons we never
found out, one of the servers would sometimes reboot, followed by a
reboot of the other server as well. The servers also had a piece of
closed-source multipathing software called EMC PowerPath installed.
That software might be useful with dual-channel HBA cards, but we only
had single-channel HBA cards. It turned out that the version of
PowerPath we had installed reduced the disk bandwidth to less than
half the bandwidth of raw access to the disks.
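
If you want to check for the same problem yourself, a plain sequential
read test is enough to show it. Something like the following (the
device names are just examples; the emcpower device is the
pseudo-device that PowerPath presents):

  # read straight from one of the underlying SCSI paths
  dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct

  # the same read through the PowerPath pseudo-device
  dd if=/dev/emcpowera of=/dev/null bs=1M count=4096 iflag=direct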

Dell gave us support and tried to solve our problems, but eventually
we came to a point where our system still wasn't usable and continued
support would cost far too much. We had to give up on that
configuration.

For almost a year now the two Dell servers have been running Slackware
12.0, with the QLogic cards directly connected to an EasyRAID disk
array. We have this system working fine as our 2-node HA NFS server. I
don't know for sure which of the changes made it work, but we did the
following:

Slackware 12.0 instead of RHEL
No closed-source modules tainting the kernel (see the check below)
No EMC PowerPath installed
HBA cards directly connected to the RAID array without any FC switch
EasyRAID disk array with IDE disks instead of EMC with FC disks
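
The second point is easy to verify, as the kernel exports its taint
flags in /proc:

  # 0 means the kernel is untainted; a non-zero value means e.g. a
  # proprietary module has been loaded since boot
  cat /proc/sys/kernel/tainted

  # list the loaded modules to find the offender
  lsmod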

The servers share a total of about 900 GB over NFS and SMB, but those
900 GB are split into several different partitions.
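
If a sketch helps, a per-partition split like ours would be expressed
in /etc/exports roughly as follows (the paths, network and options are
made-up examples, not copied from our real setup):

  /export/home    192.168.1.0/24(rw,sync,no_subtree_check)
  /export/proj    192.168.1.0/24(rw,sync,no_subtree_check)

with a matching share in smb.conf:

  [proj]
          path = /export/proj
          read only = no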

My configuration has some differences from yours, but also some
similarities. I hope that you will find my experiences useful.

regards Henrik
-- 
NOTE: Dear Outlook users: Please remove me from your address books.
      Read this article and you know why:
      http://newsforge.com/article.pl?sid=03/08/21/143258


