[Ocfs2-tools-commits] smushran commits r999 - trunk/documentation

svn-commits at oss.oracle.com
Tue Jul 26 16:50:48 CDT 2005


Author: smushran
Date: 2005-07-26 16:50:43 -0500 (Tue, 26 Jul 2005)
New Revision: 999

Added:
   trunk/documentation/ocfs2_faq.txt
   trunk/documentation/users_guide.sxw
Log:
user's guide and faq added

Added: trunk/documentation/ocfs2_faq.txt
===================================================================
--- trunk/documentation/ocfs2_faq.txt	2005-07-26 18:50:43 UTC (rev 998)
+++ trunk/documentation/ocfs2_faq.txt	2005-07-26 21:50:43 UTC (rev 999)
@@ -0,0 +1,210 @@
+/* -*- mode: txt; c-basic-offset: 4; -*-
+ * vim: noexpandtab sw=4 ts=4 sts=0:
+ */
+
+General
+-------
+
+Q01 How do I get started?
+A01 a) Download and install the module and tools rpms.
+	b) Create cluster.conf and propagate to all nodes.
+	c) Configure and start the O2CB cluster service.
+	d) Format the volume.
+	e) Mount the volume.
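Assuming the rpms are already installed, steps b) through e) boil down to a short command session. This is only a sketch: the second node name, volume label, device name and mount point below are illustrative placeholders.

```sh
# b) propagate the cluster layout to every node (node2 is a placeholder)
scp /etc/ocfs2/cluster.conf node2:/etc/ocfs2/cluster.conf

# c) configure O2CB, then bring the cluster online
/etc/init.d/o2cb configure
/etc/init.d/o2cb start

# d) format the volume from one node only
mkfs.ocfs2 -L "myvolume" /dev/sdX

# e) mount the volume on each node
mount -t ocfs2 /dev/sdX /mnt/myvolume
```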
+==============================================================================
+
+Download and Install
+--------------------
+
+Q01 How do I download the rpms?
+A01 If you are on Novell's SLES9, upgrade to SP2 and you will have the
+	required module installed. However, you will still need to install
+	the ocfs2-tools and ocfs2console rpms from the distribution.
+	If you are on Red Hat's EL4, download and install the appropriate module
+	rpm and the two tools rpms, ocfs2-tools and ocfs2console. "Appropriate"
+	here means the module rpm matching your kernel flavor: uniprocessor,
+	smp or hugemem.
+
+Q02 How do I install the rpms?
+A02 You can install all three rpms in one go using:
+	rpm -ivh ocfs2-tools-X.i386.rpm ocfs2console-X.i386.rpm ocfs2-2.6.9-11.ELsmp-X.i686.rpm
+	If you need to upgrade, do:
+	rpm -Uvh ocfs2-2.6.9-11.ELsmp-Y.i686.rpm
+
+Q03 Do I need to install the console?
+A03 No, the console is recommended but not required.
+
+Q04	What are the dependencies for installing ocfs2console?
+A04	ocfs2console requires e2fsprogs, glib2 2.2.3 or later, vte 0.11.10 or
+	later, pygtk2 (EL4) or python-gtk (SLES9) 1.99.16 or later, python
+	and ocfs2-tools.
+==============================================================================
+
+Configure
+---------
+
+Q01 How do I populate /etc/ocfs2/cluster.conf?
+A01 If you have installed the console, use it to create this
+	configuration file. For details, refer to the user's guide.
+	If you do not have the console installed, check the appendix in the
+	user's guide for a sample cluster.conf and the details of all its
+	components.
+	Do not forget to copy this file to all the nodes in the cluster.
+	If you ever edit this file on any node, ensure the other nodes are
+	updated as well.
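For orientation, a minimal two-node cluster.conf follows the stanza layout sketched below. The cluster name, node names, node numbers and IP addresses here are illustrative only; consult the appendix in the user's guide for the authoritative sample.

```
cluster:
	node_count = 2
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.1.101
	number = 0
	name = node1
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.1.102
	number = 1
	name = node2
	cluster = ocfs2
```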
+==============================================================================
+
+O2CB Cluster Service
+--------------------
+
+Q01 How do I configure the cluster service?
+A01 # /etc/init.d/o2cb configure
+	Answer 'y' if you want the service to load on boot, and provide the
+	name of the cluster (as listed in /etc/ocfs2/cluster.conf).
+
+Q02	How do I start the cluster service?
+A02	a) Load the modules as:
+		# /etc/init.d/o2cb load
+	b) Online it as:
+		# /etc/init.d/o2cb online [cluster_name]
+	If you have configured the cluster to load on boot, you could
+	combine the two as follows:
+		# /etc/init.d/o2cb start [cluster_name]
+	The cluster name is not required if you have specified the name
+	during configuration.
+
+Q03 How do I stop the cluster service?
+A03	a) Offline it as:
+		# /etc/init.d/o2cb offline [cluster_name]
+	b) Unload the modules as:
+		# /etc/init.d/o2cb unload
+	If you have configured the cluster to load on boot, you could
+	combine the two as follows:
+		# /etc/init.d/o2cb stop [cluster_name]
+	The cluster name is not required if you have specified the name
+	during configuration.
+
+Q04	How can I learn the status of the cluster?
+A04	Use the status command as follows:
+		# /etc/init.d/o2cb status
+
+Q05	I am unable to get the cluster online. What could be wrong?
+A05	Check whether the node name in cluster.conf exactly matches the
+	hostname. Also, the node itself must be one of the nodes listed in
+	cluster.conf for the cluster to come online.
+==============================================================================
+
+Format
+------
+
+Q01 How do I format a volume?
+A01	You could either use the console or run mkfs.ocfs2 directly to
+	format the volume. For the console, refer to the user's guide.
+		# mkfs.ocfs2 -L "oracle_home" /dev/sdX
+	The above formats the volume with default block and cluster sizes,
+	which are computed based upon the size of the volume.
+		# mkfs.ocfs2 -b 4k -C 32K -L "oracle_home" -N 4 /dev/sdX
+	The above formats the volume with 4 node slots, a 4K block size and
+	a 32K cluster size.
+
+Q02	What does the number of node slots during format mean?
+A02	The number of node slots specifies the number of nodes that can
+	concurrently mount the volume. This number is specified during
+	format and can be increased using tunefs.ocfs2. This number cannot
+	be decreased.
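For example, growing a volume to 8 node slots with tunefs.ocfs2 would look roughly like the following. The slot count and device name are illustrative, and the volume should be unmounted cluster-wide first.

```sh
# Grow the volume to 8 node slots; remember this cannot be decreased later
tunefs.ocfs2 -N 8 /dev/sdX
```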
+
+Q03	What should I consider when determining the number of node slots?
+A03	OCFS2 allocates system files, such as the journal, for each node
+	slot. To avoid wasting space, specify a number close to the actual
+	number of nodes. Also, as this number can be increased later, there
+	is no need to specify a much larger number than the number of nodes
+	you plan to mount the volume on.
+
+Q04 Does the number of node slots have to be the same for all volumes?
+A04 No. This number can be different for each volume.
+
+Q05	What block size should I use?
+A05	The block size is the smallest unit of space addressable by the
+	file system. OCFS2 supports block sizes of 512 bytes, 1K, 2K and 4K.
+	The block size cannot be changed after format. For most volume
+	sizes, a 4K block size is recommended. A 512-byte block size, on
+	the other hand, is never recommended.
+
+Q06	What cluster size should I use?
+A06 The cluster size is the smallest unit of space allocated to a file
+	to hold its data. OCFS2 supports cluster sizes of 4K, 8K, 16K, 32K,
+	64K, 128K, 256K, 512K and 1M. For database volumes, a cluster size
+	of 128K or larger is recommended. For an Oracle home volume, 32K to
+	64K works well.
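As a sanity check on cluster-size choices, the arithmetic is straightforward. The sketch below counts the clusters on a 100 GB volume at a 128K cluster size; the volume size is just an illustrative number.

```shell
# Back-of-the-envelope: clusters on a 100 GB volume at a 128K cluster size
# (smaller clusters mean more of them to track in allocation metadata).
vol_bytes=$((100 * 1024 * 1024 * 1024))
cluster_bytes=$((128 * 1024))
echo $((vol_bytes / cluster_bytes))    # 819200 clusters
```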
+
+Q07	Any advantage of labeling the volumes?
+A07	As the disk names (/dev/sdX) in a shared disk environment can
+	change from boot to boot, labeling is a must for easy
+	identification. You could also mount volumes by label (using the
+	mount -L "label" command). The label can be changed using the
+	tunefs.ocfs2 utility.
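For instance, relabeling with tunefs.ocfs2 would look roughly like the following; the new label and device name are placeholders, and the volume should typically be unmounted first.

```sh
# Change the volume label on an unmounted volume
tunefs.ocfs2 -L "new_label" /dev/sdX
```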
+==============================================================================
+
+Mount
+-----
+
+Q01	How do I mount the volume?
+A01	You could either use the console or use mount directly. For console,
+	refer to the user's guide.
+		# mount -t ocfs2 /dev/sdX /dir
+	The above command will mount device /dev/sdX on directory /dir.
+
+Q02	How do I mount by label?
+A02	To mount by label do:
+		# mount -L "label" /dir
+
+Q03	What entry do I add to /etc/fstab to mount an ocfs2 volume?
+A03	Add the following:
+		/dev/sdX	/dir	ocfs2	noauto,_netdev	0	0
+	The _netdev option indicates that the device needs to be mounted
+	after the network is up.
+
+Q04	What do I need to do to automount OCFS2 volumes on boot?
+A04	a) Enable o2cb service using:
+		# chkconfig --add o2cb
+	b) Configure o2cb to load on boot using:
+		# /etc/init.d/o2cb configure
+	c) Add entries into /etc/fstab as follows:
+		/dev/sdX	/dir	ocfs2	_netdev	0	0
+
+Q05	How do I know my volume is mounted?
+A05	a) Enter mount without arguments:
+		# mount
+	b) List /etc/mtab:
+		# cat /etc/mtab
+	c) List /proc/mounts:
+		# cat /proc/mounts
+	The mount command reads /etc/mtab to display this information.
+
+Q06	What are the /config and /dlm mountpoints for?
+A06	OCFS2 comes bundled with two in-memory filesystems, configfs and
+	ocfs2_dlmfs. configfs is used by the ocfs2 tools to pass the list
+	of nodes in the cluster to the in-kernel node manager, and the
+	resource to heartbeat on to the in-kernel heartbeat thread.
+	ocfs2_dlmfs is used by the ocfs2 tools to communicate with the
+	in-kernel dlm to take and release cluster-wide locks on resources.
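When the cluster stack is up, the two filesystems show up in the mount table; the output looks roughly like the following (illustrative):

```
configfs on /config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
```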
+==============================================================================
+
+Oracle RAC
+----------
+
+Q01	Any special flags to run Oracle RAC?
+A01	OCFS2 volumes containing the Voting diskfile (CRS), Cluster
+	registry (OCR), Data files, Redo logs, Archive logs and Control
+	files should be mounted with the "datavolume" mount option. This
+	ensures that the Oracle processes open these files with the
+	O_DIRECT flag.
+
+Q02	What about the volume containing Oracle home?
+A02	Oracle home volume should be mounted normally, that is, without the
+	"datavolume" mount option. This mount option is only relevant for
+	Oracle files listed above.
+
+Q03	Does that mean I cannot have my data file and Oracle home on the
+	same volume?
+A03	Yes. The volume containing the Oracle data files, redo logs, etc.
+	should never be the same as the volume containing the distribution
+	(including trace logs such as alert.log).
+==============================================================================

Added: trunk/documentation/users_guide.sxw
===================================================================
(Binary files differ)


Property changes on: trunk/documentation/users_guide.sxw
___________________________________________________________________
Name: svn:mime-type
   + application/octet-stream


