[Ocfs2-test-devel] [PATCH 5/5] Ocfs2-test: Add multi-nodes testing launcher for dx-dirs test

tristan.ye tristan.ye at oracle.com
Wed Feb 25 17:29:59 PST 2009


On Wed, 2009-02-25 at 08:50 -0500, Marcos E. Matsunaga wrote:
> Just one comment here.. You have two logfiles. If O2TDIR or if the
> logdir argument is on a shared partition (NFS or OCFS2), what are the
> chances that you will be using a single logfile for RUN_LOG_FILE? Is
> it meant to be that way or each node should have its own logfile?

Marcos,

Both logs are kept locally on the master node; I did not keep any
separate logs on the slave nodes. Since these two logs record the
running information for all nodes (such as hostname, rank number and
command name), they have been enough for me to trace back errors so far.

1. RUN_LOG_FILE is a simplified log that gives a quick overview of each
testcase and whether it survived or not.
All testcases are synchronized across the nodes and are expected to be
launched simultaneously on all of them, so a failure on any node fails
the test.
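
For example, each testcase block in the script below writes one headline
to RUN_LOG_FILE and then appends its PASS/FAILED marker there, roughly:

	f_LogRunMsg "[${TEST_NO}] Basic Grow Test:"
	${MPIRUN} ... ${MULTI_INDEXED_DIRS_TEST_BIN} ... >>${LOG_FILE} 2>&1
	RET=$?
	f_echo_status ${RET} | tee -a ${RUN_LOG_FILE}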


2. LOG_FILE is the more detailed, command-level log, where all running
information is kept; it lets us locate an error down to a single test
command.
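
The command-level entries come from f_LogMsg in the script, which
prefixes a timestamp before appending to LOG_FILE:

	function f_LogMsg()
	{
		echo "$(date +%Y/%m/%d,%H:%M:%S)  $@" >>${LOG_FILE}
	}

and every mpirun invocation also redirects its stdout/stderr there via
">>${LOG_FILE} 2>&1", so the full output of a failing command is kept.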


Regards,
Tristan



> Regards,
> 
> Marcos Eduardo Matsunaga
> 
> Oracle USA
> Linux Engineering
> 
> “The statements and opinions expressed here are my own and do not
> necessarily represent those of Oracle Corporation.”
> 
> 
> Tristan Ye wrote: 
> > This script behaves as an openmpi binary launcher to perform the
> > following tests among multiple nodes for indexed-dirs on ocfs2:
> > 
> > 	1. Grow test
> > 
> > 	2. Rename test
> > 
> > 	3. Read test
> > 
> > 	4. Unlink test
> > 
> > 	5. Fillup test
> > 
> > 	6. Stress test
> > 
> > Signed-off-by: Tristan Ye <tristan.ye at oracle.com>
> > ---
> >  programs/dx_dirs_tests/multi_index_dir_run.sh |  432 +++++++++++++++++++++++++
> >  1 files changed, 432 insertions(+), 0 deletions(-)
> >  create mode 100755 programs/dx_dirs_tests/multi_index_dir_run.sh
> > 
> > diff --git a/programs/dx_dirs_tests/multi_index_dir_run.sh b/programs/dx_dirs_tests/multi_index_dir_run.sh
> > new file mode 100755
> > index 0000000..af81c44
> > --- /dev/null
> > +++ b/programs/dx_dirs_tests/multi_index_dir_run.sh
> > @@ -0,0 +1,432 @@
> > +#!/bin/bash
> > +#
> > +# vim: noexpandtab sw=8 ts=8 sts=0:
> > +#
> > +# multi_index_dir_run.sh
> > +#
> > +# description:  This script behaves as an openmpi binary launcher to
> > +#		perform the following tests among multiple nodes for
> > +#		indexed-dirs on ocfs2.
> > +#
> > +#		1. Grow test
> > +#		
> > +#		2. Rename test
> > +#		
> > +#		3. Read test
> > +#		
> > +#		4. Unlink test
> > +#
> > +#		5. Fillup test
> > +#		
> > +#		6. Stress test 
> > +#
> > +# Author:       Tristan Ye,     tristan.ye at oracle.com
> > +#
> > +# History:      10 Feb 2009
> > +#
> > +#
> > +# Copyright (C) 2008 Oracle.  All rights reserved.
> > +#
> > +# This program is free software; you can redistribute it and/or
> > +# modify it under the terms of the GNU General Public
> > +# License, version 2,  as published by the Free Software Foundation.
> > +#
> > +# This program is distributed in the hope that it will be useful,
> > +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > +# General Public License for more details.
> > +#
> > +
> > +
> > +################################################################################
> > +# Global Variables
> > +################################################################################
> > +PATH=$PATH:/sbin      # Add /sbin to the path for ocfs2 tools
> > +export PATH=$PATH:.
> > +
> > +. ./config.sh
> > +
> > +MKFS_BIN="`which sudo` -u root `which mkfs.ocfs2`"
> > +MOUNT_BIN="`which sudo` -u root `which mount`"
> > +REMOTE_MOUNT_BIN="${BINDIR}/remote_mount.py"
> > +UMOUNT_BIN="`which sudo` -u root `which umount`"
> > +REMOTE_UMOUNT_BIN="${BINDIR}/remote_umount.py"
> > +TEE_BIN=`which tee`
> > +RM_BIN=`which rm`
> > +TAR_BIN=`which tar`
> > +MKDIR_BIN=`which mkdir`
> > +TOUCH_BIN=`which touch`
> > +DIFF_BIN=`which diff`
> > +MOVE_BIN=`which mv`
> > +CP_BIN=`which cp`
> > +SED_BIN=`which sed`
> > +CUT_BIN=`which cut`
> > +CHOWN_BIN=`which chown`
> > +CHMOD_BIN=`which chmod`
> > +
> > +SUDO="`which sudo` -u root"
> > +
> > +export PATH=$PATH:.
> > +MULTI_INDEXED_DIRS_TEST_BIN="${BINDIR}/multi_index_dir"
> > +
> > +USERNAME=`id -un`
> > +GROUPNAME=`id -gn`
> > +
> > +BLOCKSIZE=
> > +CLUSTERSIZE=
> > +SLOTS=4
> > +JOURNALSIZE=
> > +BLOCKS=
> > +DEVICE=
> > +LABELNAME=ocfs2-multi-indexed-dirs-tests
> > +WORK_PLACE_DIRENT=multi-indexed-dirs-tests
> > +WORK_PLACE=
> > +
> > +DEFAULT_LOG_DIR=${O2TDIR}/log
> > +LOG_DIR=
> > +RUN_LOG_FILE=
> > +LOG_FILE=
> > +MKFSLOG=
> > +MOUNTLOG=
> > +
> > +DEFAULT_RANKS=4
> > +MPI_RANKS=
> > +MPI_HOSTS=
> > +MPI_ACCESS_METHOD="ssh"
> > +MPI_PLS_AGENT_ARG="-mca pls_rsh_agent ssh:rsh"
> > +MPI_BTL_ARG="-mca btl tcp,self"
> > +MPI_BTL_IF_ARG=
> > +
> > +TEST_NO=0
> > +TEST_PASS=0
> > +
> > +set -o pipefail
> > +
> > +BOOTUP=color
> > +RES_COL=80
> > +MOVE_TO_COL="echo -en \\033[${RES_COL}G"
> > +SETCOLOR_SUCCESS="echo -en \\033[1;32m"
> > +SETCOLOR_FAILURE="echo -en \\033[1;31m"
> > +SETCOLOR_WARNING="echo -en \\033[1;33m"
> > +SETCOLOR_NORMAL="echo -en \\033[0;39m"
> > +
> > +################################################################################
> > +# Utility Functions
> > +################################################################################
> > +function f_echo_success()
> > +{
> > +	[ "$BOOTUP" = "color" ] && $MOVE_TO_COL
> > +		echo -n "["
> > +	[ "$BOOTUP" = "color" ] && $SETCOLOR_SUCCESS
> > +		echo -n $" PASS "
> > +	[ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
> > +		echo -n "]"
> > +
> > +	return 0
> > +}
> > +
> > +function f_echo_failure()
> > +{
> > +	[ "$BOOTUP" = "color" ] && $MOVE_TO_COL
> > +		echo -n "["
> > +	[ "$BOOTUP" = "color" ] && $SETCOLOR_FAILURE
> > +		echo -n $"FAILED"
> > +	[ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
> > +		echo -n "]"
> > +
> > +	return 1
> > +}
> > +
> > +function f_echo_status()
> > +{
> > +	if [ "${1}" == "0" ];then
> > +		f_echo_success
> > +		echo
> > +	else
> > +		f_echo_failure
> > +		echo
> > +		exit 1
> > +	fi
> > +}
> > +
> > +function f_exit_or_not()
> > +{
> > +	if [ "${1}" != "0" ];then
> > +		exit 1;
> > +	fi
> > +}
> > +
> > +function f_usage()
> > +{
> > +    echo "usage: `basename ${0}` [-r MPI_Ranks] <-f MPI_Hosts> [-a access method] [-o logdir] <-d <device>> <mountpoint path>"
> > +    echo "       -r number of MPI ranks"
> > +    echo "       -a access method for process propagation; should be ssh or rsh, ssh is used by default when omitted."
> > +    echo "       -f list of MPI hosts, separated by commas, e.g. -f node1.us.oracle.com,node2.us.oracle.com."
> > +    echo "       -o output directory for the logs"
> > +    echo "       -d specify the device which has been formatted as an ocfs2 volume."
> > +    echo "       <mountpoint path> path of the mountpoint where the ocfs2 volume will be mounted."
> > +    exit 1;
> > +
> > +}
> > +function f_getoptions()
> > +{
> > +	 if [ $# -eq 0 ]; then
> > +                f_usage;
> > +                exit 1
> > +         fi
> > +
> > +	 while getopts "o:d:r:f:a:h" options; do
> > +                case $options in
> > +		r ) MPI_RANKS="$OPTARG";;
> > +                f ) MPI_HOSTS="$OPTARG";;
> > +                o ) LOG_DIR="$OPTARG";;
> > +                d ) DEVICE="$OPTARG";;
> > +		a ) MPI_ACCESS_METHOD="$OPTARG";;
> > +                h ) f_usage
> > +                    exit 1;;
> > +                * ) f_usage
> > +                   exit 1;;
> > +                esac
> > +        done
> > +	shift $(($OPTIND -1))
> > +	MOUNT_POINT=${1}
> > +}
> > +
> > +function f_setup()
> > +{
> > +	if [ "${UID}" = "0" ];then
> > +		echo "Should not run tests as root"
> > +		exit 1
> > +	fi
> > +
> > +	f_getoptions $*
> > +	
> > +	if [ "$MPI_ACCESS_METHOD" = "rsh" ];then
> > +		MPI_PLS_AGENT_ARG="-mca pls_rsh_agent rsh:ssh"
> > +	fi
> > +
> > +	if [ -z "${MOUNT_POINT}" ];then 
> > +		f_usage
> > +	else
> > +		if [ ! -d ${MOUNT_POINT} ]; then
> > +			echo "Mount point ${MOUNT_POINT} does not exist." 
> > +			exit 1
> > +		else
> > +		# Ensure that the mount point does not end with a trailing '/'
> > +			if [ "`dirname ${MOUNT_POINT}`" = "/" ]; then
> > +				MOUNT_POINT="`dirname ${MOUNT_POINT}``basename ${MOUNT_POINT}`"
> > +			else
> > +				MOUNT_POINT="`dirname ${MOUNT_POINT}`/`basename ${MOUNT_POINT}`"
> > +			fi
> > +		fi
> > +	fi
> > +
> > +	if [ -z "$MPI_HOSTS" ];then
> > +		f_usage
> > +	fi
> > +
> > +	MPI_RANKS=${MPI_RANKS:-$DEFAULT_RANKS}
> > +
> > +	LOG_DIR=${LOG_DIR:-$DEFAULT_LOG_DIR}
> > +        ${MKDIR_BIN} -p ${LOG_DIR} || exit 1
> > +
> > +	RUN_LOG_FILE="`dirname ${LOG_DIR}`/`basename ${LOG_DIR}`/`date +%F-%H-%M-%S`-multi-indexed-dirs-tests-run.log"
> > +	LOG_FILE="`dirname ${LOG_DIR}`/`basename ${LOG_DIR}`/`date +%F-%H-%M-%S`-multi-indexed-dirs-tests.log"
> > +	MKFSLOG="`dirname ${LOG_DIR}`/`basename ${LOG_DIR}`/$$_mkfs.log"
> > +	MOUNTLOG="`dirname ${LOG_DIR}`/`basename ${LOG_DIR}`/$$_mount.log"
> > +}
> > +
> > +function f_LogRunMsg()
> > +{
> > +        echo -ne "$@"| ${TEE_BIN} -a ${RUN_LOG_FILE}
> > +}
> > +
> > +function f_LogMsg()
> > +{
> > +        echo "$(date +%Y/%m/%d,%H:%M:%S)  $@" >>${LOG_FILE}
> > +}
> > +
> > +function f_mkfs()
> > +{
> > +	f_LogMsg "Mkfs volume ${DEVICE} by ${BLOCKSIZE} bs and ${CLUSTERSIZE} cs"
> > +	echo "y"|${MKFS_BIN} --fs-features=indexed-dirs -b ${BLOCKSIZE} -C ${CLUSTERSIZE} -L ${LABELNAME} -N ${SLOTS} ${DEVICE}>>${MKFSLOG} 2>&1
> > +	RET=$?
> > +
> > +	if [ "${RET}" != "0" ];then
> > +		f_LogMsg "Mkfs failed"
> > +		return ${RET}
> > +	fi
> > +	
> > +	return 0
> > +}
> > +
> > +function f_remote_mount()
> > +{
> > +	f_LogMsg "Mounting device ${LABELNAME} to nodes(${MPI_HOSTS}):"
> > +	${REMOTE_MOUNT_BIN} -l ${LABELNAME} -m ${MOUNT_POINT} -n ${MPI_HOSTS}>>${LOG_FILE} 2>&1
> > +	RET=$?
> > +
> > +	if [ "${RET}" != "0" ];then
> > +                f_LogMsg "Remote mount failed"
> > +		return ${RET}
> > +        fi
> > +
> > +	${SUDO} chown -R ${USERNAME}:${GROUPNAME} ${MOUNT_POINT}
> > +        ${SUDO} chmod -R 777 ${MOUNT_POINT}
> > +
> > +	WORK_PLACE=${MOUNT_POINT}/${WORK_PLACE_DIRENT}
> > +
> > +        ${MKDIR_BIN} -p ${WORK_PLACE}
> > +
> > +	return 0
> > +}
> > +
> > +function f_remote_umount()
> > +{
> > +	f_LogMsg "Remote umount from nodes(${MPI_HOSTS}):"
> > +	${REMOTE_UMOUNT_BIN} -m ${MOUNT_POINT} -n ${MPI_HOSTS}>>${LOG_FILE} 2>&1
> > +
> > +	RET=$?
> > +
> > +        if [ "${RET}" != "0" ];then
> > +                f_LogMsg "Remote umount failed"
> > +                return ${RET}
> > +        fi
> > +
> > +	return 0
> > +}
> > +
> > +function f_runtest()
> > +{
> > +	f_LogRunMsg "[*] Mkfs device ${DEVICE}:"
> > +	f_mkfs
> > +	RET=$?
> > +	f_echo_status ${RET}| tee -a ${RUN_LOG_FILE}
> > +	f_exit_or_not ${RET}
> > +
> > +	f_LogRunMsg "[*] Remote mount among nodes ${MPI_HOSTS}:"
> > +	f_remote_mount
> > +        RET=$?
> > +        f_echo_status ${RET}| tee -a ${RUN_LOG_FILE}
> > +        f_exit_or_not ${RET}
> > +
> > +	((TEST_NO++))
> > +	f_LogRunMsg "[${TEST_NO}] Basic Grow Test:"
> > +	f_LogMsg "[${TEST_NO}] Basic Grow Test, CMD:${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 10 -n 4000 -w ${WORK_PLACE} -g"
> > +	${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 10 -n 4000 -w ${WORK_PLACE} -g >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +	f_echo_status ${RET}| tee -a ${RUN_LOG_FILE}
> > +	f_exit_or_not ${RET}
> > +	((TEST_PASS++))
> > +	f_LogMsg "Cleanup working place"
> > +	${RM_BIN} -rf ${WORK_PLACE}/* >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +        f_exit_or_not ${RET}
> > +
> > +	((TEST_NO++))
> > +	f_LogRunMsg "[${TEST_NO}] Rename Test:"
> > +	f_LogMsg "[${TEST_NO}] Rename Test, CMD:${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 1 -n 2000 -w ${WORK_PLACE} -m"
> > +	${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 1 -n 2000 -w ${WORK_PLACE} -m >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +	f_echo_status ${RET}| tee -a ${RUN_LOG_FILE}
> > +	f_exit_or_not ${RET}
> > +	((TEST_PASS++))
> > +	f_LogMsg "Cleanup working place"
> > +	${RM_BIN} -rf ${WORK_PLACE}/* >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +        f_exit_or_not ${RET}
> > +
> > +	((TEST_NO++))
> > +	f_LogRunMsg "[${TEST_NO}] Read Test:"
> > +	f_LogMsg "[${TEST_NO}] Read Test, CMD:${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 1 -n 3000 -w ${WORK_PLACE} -r"
> > +	${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 1 -n 3000 -w ${WORK_PLACE} -r >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +	f_echo_status ${RET}| tee -a ${RUN_LOG_FILE}
> > +	f_exit_or_not ${RET}
> > +	((TEST_PASS++))
> > +	f_LogMsg "Cleanup working place"
> > +	${RM_BIN} -rf ${WORK_PLACE}/* >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +        f_exit_or_not ${RET}
> > + 
> > +	((TEST_NO++))
> > +	f_LogRunMsg "[${TEST_NO}] Unlink Test:"
> > +	f_LogMsg "[${TEST_NO}] Unlink Test, CMD:${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 1 -n 100 -w ${WORK_PLACE} -u"
> > +	${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 1 -n 100 -w ${WORK_PLACE} -u >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +	f_echo_status ${RET}| tee -a ${RUN_LOG_FILE}
> > +	f_exit_or_not ${RET}
> > +	((TEST_PASS++))
> > +	f_LogMsg "Cleanup working place"
> > +	${RM_BIN} -rf ${WORK_PLACE}/* >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +        f_exit_or_not ${RET}
> > +
> > +	((TEST_NO++))
> > +        f_LogRunMsg "[${TEST_NO}] Fillup Test:"
> > +        f_LogMsg "[${TEST_NO}] Fillup Test, CMD:${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 1 -n 10000 -w ${WORK_PLACE} -f"
> > +	${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 1 -n 10000 -w ${WORK_PLACE} -f >>${LOG_FILE} 2>&1
> > +        RET=$?
> > +        f_echo_status ${RET}| tee -a ${RUN_LOG_FILE}
> > +        f_exit_or_not ${RET}
> > +        ((TEST_PASS++))
> > +        f_LogMsg "Cleanup working place"
> > +        ${RM_BIN} -rf ${WORK_PLACE}/* >>${LOG_FILE} 2>&1
> > +        RET=$?
> > +        f_exit_or_not ${RET}
> > +	
> > +	((TEST_NO++))
> > +	f_LogRunMsg "[${TEST_NO}] Stress Test:"
> > +	f_LogMsg "[${TEST_NO}] Stress Test, CMD:${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 10 -n 60000 -w ${WORK_PLACE} -s"
> > +	${MPIRUN} ${MPI_PLS_AGENT_ARG} ${MPI_BTL_ARG} ${MPI_BTL_IF_ARG} -np ${MPI_RANKS} --host ${MPI_HOSTS} ${MULTI_INDEXED_DIRS_TEST_BIN} -i 10 -n 60000 -w ${WORK_PLACE} -s >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +	f_echo_status ${RET}| tee -a ${RUN_LOG_FILE}
> > +	f_exit_or_not ${RET}
> > +	((TEST_PASS++))
> > +	f_LogMsg "Cleanup working place"
> > +	${RM_BIN} -rf ${WORK_PLACE}/* >>${LOG_FILE} 2>&1
> > +	RET=$?
> > +        f_exit_or_not ${RET}
> > +
> > +	f_LogRunMsg "[*] Umount volume ${LABELNAME} among nodes ${MPI_HOSTS}:"
> > +        f_remote_umount
> > +        RET=$?
> > +        f_echo_status ${RET}| tee -a ${RUN_LOG_FILE}
> > +        f_exit_or_not ${RET}
> > +}
> > +
> > +f_cleanup()
> > +{
> > +	:
> > +}
> > +
> > +################################################################################
> > +# Main Entry
> > +################################################################################
> > +
> > +trap 'echo -ne "\n\n">>${RUN_LOG_FILE};echo "Interrupted by Ctrl+C, cleaning up... "|tee -a ${RUN_LOG_FILE}; f_cleanup;exit 1' SIGINT
> > +
> > +f_setup $*
> > +
> > +START_TIME=${SECONDS}
> > +f_LogRunMsg "=====================Multi-node indexed dirs tests start:  `date`=====================\n"
> > +f_LogMsg "=====================Multi-node indexed dirs tests start:  `date`====================="
> > +
> > +for BLOCKSIZE in 512 1024 2048 4096
> > +do
> > +        for CLUSTERSIZE in  4096 32768 1048576
> > +        do
> > +		f_LogRunMsg "<- Running test with ${BLOCKSIZE} bs and ${CLUSTERSIZE} cs ->\n"
> > +                f_LogMsg "<- Running test with ${BLOCKSIZE} bs and ${CLUSTERSIZE} cs ->"
> > +		f_runtest
> > +        done
> > +done
> > +f_cleanup
> > +
> > +END_TIME=${SECONDS}
> > +f_LogRunMsg "=====================Multi-node indexed dirs tests end: `date`=====================\n"
> > +f_LogMsg "=====================Multi-node indexed dirs tests end: `date`====================="
> > +
> > +f_LogRunMsg "Time elapsed(s): $((${END_TIME}-${START_TIME}))\n"
> > +f_LogRunMsg "Tests total: ${TEST_NO}\n"
> > +f_LogRunMsg "Tests passed: ${TEST_PASS}\n"
> >   



