<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body text="#000066" bgcolor="#ffffcc">
<tt>This is all i686 to i686... The path is the same, but it points to
a different location for each arch. If you look at
/scratch/mmatsuna/ocfs2test, you will notice it is a link to
/usr/local/links/ocfs2test, which is a link to the
/scratch/mmatsuna/runtest_el5_&lt;arch&gt; directory. That's the real
location of the binaries, so each arch has its own set of binaries and
they do not get mixed up.<br>
<br>
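A link chain like the one above can be verified hop by hop; a minimal sketch (the <tt>follow_links</tt> helper is hypothetical, not part of the test suite, and the /scratch paths exist only on that test cluster):

```shell
# Print each hop of a symlink chain, one per line. Relative link
# targets are resolved against the directory of the link itself.
follow_links() {
    p="$1"
    while [ -L "$p" ]; do
        t=$(readlink "$p")
        case "$t" in
            /*) p="$t" ;;
            *)  p="$(dirname "$p")/$t" ;;
        esac
        printf '%s\n' "$p"
    done
}
```

Running <tt>follow_links /scratch/mmatsuna/ocfs2test</tt> on one of those nodes should print the two hops described above.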
It seems to me that openmpi on i686 is behaving the way x86_64 did
without the fix. Very weird. I'll be checking it again tomorrow.<br>
</tt>
<pre class="moz-signature" cols="72">Regards,
Marcos Eduardo Matsunaga
Oracle USA
Linux Engineering
“The statements and opinions expressed here are my own and do not
necessarily represent those of Oracle Corporation.”
</pre>
<br>
On 12/29/2009 08:29 PM, tristan wrote:
<blockquote cite="mid:4B3AAD15.1090402@oracle.com" type="cite">Marcos
E. Matsunaga wrote:
<br>
<blockquote type="cite">Tristan.
<br>
One comment.
<br>
<br>
Instead of making SLOTS=4, you could add to the end of f_getoptions the
<br>
following:
<br>
<br>
echo $MPI_HOSTS|sed -e 's/,/\n/g' >/tmp/$$
<br>
SLOTS=`cat /tmp/$$ |wc -l`
<br>
rm -f /tmp/$$
<br>
<br>
That way, it will always format the partition with the number of nodes
<br>
specified.
<br>
</blockquote>
<br>
Good point; it's better to derive the slot count from the MPI hosts.
<br>
<br>
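For the record, the same count can be computed without a temp file; a sketch equivalent to the /tmp/$$ approach quoted above (the MPI_HOSTS value here is an example; in the scripts it comes from the command line):

```shell
# Count the comma-separated host entries in MPI_HOSTS.
MPI_HOSTS="node1,node2,node3,node4"
SLOTS=$(echo "$MPI_HOSTS" | tr ',' '\n' | grep -c .)
echo "$SLOTS"   # prints 4
```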
<blockquote type="cite">Also, SLOTS is defined, but not used
anywhere. You should make the
<br>
MKFS_BIN line to include "-N ${SLOTS}" instead of "-N 2".
<br>
</blockquote>
<br>
Did I hardcode this before? Oops, I may have done that for debugging and
forgotten to correct it afterward...
<br>
Thanks for pointing this out!
<br>
<br>
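Concretely, the fix amounts to building the format command with the computed slot count; a sketch with illustrative variable values (only -N, the ocfs2 node-slot count, is the point here; the other flags and values are hypothetical stand-ins for what the script parses from its command line):

```shell
# Hypothetical values; in the scripts these come from getopts.
MKFS_BIN=mkfs.ocfs2
SLOTS=4
LABELNAME=ocfs2test
OCFS2_DEVICE=/dev/sdb1
# -N sets the number of node slots the volume is formatted with.
MKFS_CMD="${MKFS_BIN} -N ${SLOTS} -L ${LABELNAME} ${OCFS2_DEVICE}"
echo "$MKFS_CMD"
```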
<blockquote type="cite">One thing that I found interesting is that it
works fine on x86_64, but
<br>
on i686, it has the same behavior that I reported you before. It stays
<br>
in this loop:
<br>
</blockquote>
<br>
Interesting. What did you mean exactly? Does it hang when communicating
between x86_64 and i686, or between i686 and i686?
<br>
<br>
Do you keep the MPI binaries on a volume shared among multiple nodes?
Could that hurt the execution of the MPI binary on arches that differ
but share the same ELF format?
<br>
<br>
<br>
<br>
Tristan.
<br>
<blockquote type="cite"><br>
14:19:48.575946 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
<br>
{fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
<br>
{fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
<br>
{fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
<br>
14:19:48.576051 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
<br>
{fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
<br>
{fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
<br>
{fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
<br>
14:19:48.576156 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
<br>
{fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
<br>
{fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
<br>
{fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
<br>
14:19:48.576262 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
<br>
{fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
<br>
{fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
<br>
{fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
<br>
14:19:48.576369 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
<br>
{fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
<br>
{fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
<br>
{fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
<br>
14:19:48.576498 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
<br>
{fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
<br>
{fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
<br>
{fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000008>
<br>
<br>
I still haven't had a chance to test it on ia64 and ppc, as I'm waiting
<br>
for other tests to complete before I test this patch.
<br>
<br>
Regards,
<br>
<br>
Marcos Eduardo Matsunaga
<br>
<br>
Oracle USA
<br>
Linux Engineering
<br>
<br>
“The statements and opinions expressed here are my own and do not
<br>
necessarily represent those of Oracle Corporation.”
<br>
<br>
<br>
On 12/21/2009 12:44 AM, Tristan Ye wrote:
<br>
<blockquote type="cite">There are still some corners where changes
needed for latest 1.3.2
<br>
still untouched there, this patch is going to correct them accordingly.
<br>
<br>
Marcos, will you please help verify whether it solves the issues you
<br>
encountered in the xattr tests before giving a SOB on this patch?
<br>
<br>
Signed-off-by: Tristan Ye <a class="moz-txt-link-rfc2396E" href="mailto:tristan.ye@oracle.com">&lt;tristan.ye@oracle.com&gt;</a>
<br>
---
<br>
programs/dx_dirs_tests/multi_index_dir_run.sh | 4 +-
<br>
.../multi_inode_alloc_perf.sh | 6 +-
<br>
programs/reflink_tests/multi_reflink_test_run.sh | 4 +-
<br>
programs/xattr_tests/xattr-multi-run.sh | 67
++++++--------------
<br>
4 files changed, 26 insertions(+), 55 deletions(-)
<br>
<br>
diff --git a/programs/dx_dirs_tests/multi_index_dir_run.sh
b/programs/dx_dirs_tests/multi_index_dir_run.sh
<br>
index 735fc79..3a3624c 100755
<br>
--- a/programs/dx_dirs_tests/multi_index_dir_run.sh
<br>
+++ b/programs/dx_dirs_tests/multi_index_dir_run.sh
<br>
@@ -63,7 +63,7 @@ DEFAULT_RANKS=4
<br>
MPI_RANKS=
<br>
MPI_HOSTS=
<br>
MPI_ACCESS_METHOD="ssh"
<br>
-MPI_PLS_AGENT_ARG="-mca pls_rsh_agent ssh:rsh"
<br>
+MPI_PLS_AGENT_ARG="-mca plm_rsh_agent ssh:rsh"
<br>
MPI_BTL_ARG="-mca btl tcp,self"
<br>
MPI_BTL_IF_ARG=
<br>
<br>
@@ -123,7 +123,7 @@ function f_setup()
<br>
f_getoptions $*
<br>
<br>
if [ "$MPI_ACCESS_METHOD" = "rsh" ];then
<br>
- MPI_PLS_AGENT_ARG="-mca pls_rsh_agent rsh:ssh"
<br>
+ MPI_PLS_AGENT_ARG="-mca plm_rsh_agent rsh:ssh"
<br>
fi
<br>
<br>
if [ -z "${MOUNT_POINT}" ];then diff --git
a/programs/inode_alloc_perf_tests/multi_inode_alloc_perf.sh
b/programs/inode_alloc_perf_tests/multi_inode_alloc_perf.sh
<br>
index e419961..a815ef0 100755
<br>
--- a/programs/inode_alloc_perf_tests/multi_inode_alloc_perf.sh
<br>
+++ b/programs/inode_alloc_perf_tests/multi_inode_alloc_perf.sh
<br>
@@ -88,7 +88,7 @@ ORIG_COMMITID=
<br>
declare -i MPI_RANKS
<br>
MPI_HOSTS=
<br>
MPI_ACCESS_METHOD="ssh"
<br>
-MPI_PLS_AGENT_ARG="-mca pls_rsh_agent ssh:rsh"
<br>
+MPI_PLS_AGENT_ARG="-mca plm_rsh_agent ssh:rsh"
<br>
<br>
set -o pipefail
<br>
<br>
@@ -196,7 +196,7 @@ function f_check()
<br>
f_getoptions $*
<br>
<br>
if [ "$MPI_ACCESS_METHOD" = "rsh" ];then
<br>
- MPI_PLS_AGENT_ARG="-mca pls_rsh_agent rsh:ssh"
<br>
+ MPI_PLS_AGENT_ARG="-mca plm_rsh_agent rsh:ssh"
<br>
fi
<br>
<br>
if [ -z "${MOUNT_POINT}" ];then
<br>
@@ -314,7 +314,7 @@ f_run_test_one_time()
<br>
f_exit_or_not ${RET}
<br>
<br>
f_LogRunMsg "<Iteration ${1}>Run inode alloc perf tests
among nodes ${MPI_HOSTS}:"
<br>
- ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
btl_tcp_if_include eth0 --host ${MPI_HOSTS} ${LOCAL_TEST_RUNNER} -l
${LOCAL_LOG_DIR} -b ${LABELNAME} -t ${KERNEL_TARBALL} -k ${KERNEL_PATH}
-p ${PATCH_PATH} -s ${ISCSI_SERVER} ${MOUNT_POINT} >>${LOG_FILE}
2>&1
<br>
+ ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self --host
${MPI_HOSTS} ${LOCAL_TEST_RUNNER} -l ${LOCAL_LOG_DIR} -b ${LABELNAME}
-t ${KERNEL_TARBALL} -k ${KERNEL_PATH} -p ${PATCH_PATH} -s
${ISCSI_SERVER} ${MOUNT_POINT} >>${LOG_FILE} 2>&1
<br>
RET=$?
<br>
f_echo_status ${RET} | tee -a ${RUN_LOG_FILE}
<br>
f_exit_or_not ${RET}
<br>
diff --git a/programs/reflink_tests/multi_reflink_test_run.sh
b/programs/reflink_tests/multi_reflink_test_run.sh
<br>
index f389172..13b72e0 100755
<br>
--- a/programs/reflink_tests/multi_reflink_test_run.sh
<br>
+++ b/programs/reflink_tests/multi_reflink_test_run.sh
<br>
@@ -74,7 +74,7 @@ DEFAULT_RANKS=4
<br>
MPI_RANKS=
<br>
MPI_HOSTS=
<br>
MPI_ACCESS_METHOD="ssh"
<br>
-MPI_PLS_AGENT_ARG="-mca pls_rsh_agent ssh:rsh"
<br>
+MPI_PLS_AGENT_ARG="-mca plm_rsh_agent ssh:rsh"
<br>
MPI_BTL_ARG="-mca btl tcp,self"
<br>
MPI_BTL_IF_ARG=
<br>
<br>
@@ -135,7 +135,7 @@ function f_setup()
<br>
f_getoptions $*
<br>
<br>
if [ "$MPI_ACCESS_METHOD" = "rsh" ];then
<br>
- MPI_PLS_AGENT_ARG="-mca pls_rsh_agent rsh:ssh"
<br>
+ MPI_PLS_AGENT_ARG="-mca plm_rsh_agent rsh:ssh"
<br>
fi
<br>
<br>
if [ -z "${MOUNT_POINT}" ];then
<br>
diff --git a/programs/xattr_tests/xattr-multi-run.sh
b/programs/xattr_tests/xattr-multi-run.sh
<br>
index cb6984f..701d5fb 100755
<br>
--- a/programs/xattr_tests/xattr-multi-run.sh
<br>
+++ b/programs/xattr_tests/xattr-multi-run.sh
<br>
@@ -71,18 +71,17 @@ OCFS2_DEVICE=
<br>
BLOCKSIZE=
<br>
CLUSTERSIZE=
<br>
BLOCKNUMS=
<br>
+SLOTS=4
<br>
<br>
WORKPLACE=
<br>
<br>
TMP_DIR=/tmp
<br>
-DEFAULT_HOSTFILE=".openmpi_hostfile"
<br>
DEFAULT_RANKS=4
<br>
<br>
declare -i MPI_RANKS
<br>
MPI_HOSTS=
<br>
-MPI_HOSTFILE=
<br>
MPI_ACCESS_METHOD="ssh"
<br>
-MPI_PLS_AGENT_ARG="-mca pls_rsh_agent ssh:rsh"
<br>
+MPI_PLS_AGENT_ARG="-mca plm_rsh_agent ssh:rsh"
<br>
<br>
TEST_NO=0
<br>
TEST_PASS=0
<br>
@@ -176,31 +175,6 @@ f_getoptions()
<br>
<br>
}
<br>
<br>
-f_create_hostfile()
<br>
-{
<br>
- MPI_HOSTFILE="${TMP_DIR}/${DEFAULT_HOSTFILE}"
<br>
- TMP_FILE="${TMP_DIR}/.tmp_openmpi_hostfile_$$"
<br>
-
<br>
- echo ${MPI_HOSTS}|sed -e 's/,/\n/g'>$TMP_FILE
<br>
-
<br>
- if [ -f "$MPI_HOSTFILE" ];then
<br>
- ${RM} -rf ${MPI_HOSTFILE}
<br>
- fi
<br>
-
<br>
- while read line
<br>
- do
<br>
- if [ -z $line ];then
<br>
- continue
<br>
- fi
<br>
-
<br>
- echo "$line">>$MPI_HOSTFILE
<br>
-
<br>
- done<$TMP_FILE
<br>
-
<br>
- ${RM} -rf $TMP_FILE
<br>
-}
<br>
-
<br>
-
<br>
f_setup()
<br>
{
<br>
if [ "${UID}" = "0" ];then
<br>
@@ -211,7 +185,7 @@ f_setup()
<br>
f_getoptions $*
<br>
<br>
if [ "$MPI_ACCESS_METHOD" = "rsh" ];then
<br>
- MPI_PLS_AGENT_ARG="-mca pls_rsh_agent rsh:ssh"
<br>
+ MPI_PLS_AGENT_ARG="-mca plm_rsh_agent rsh:ssh"
<br>
REMOTE_SH_BIN=${RSH_BIN}
<br>
fi
<br>
<br>
@@ -244,11 +218,8 @@ f_setup()
<br>
<br>
if [ -z "$MPI_HOSTS" ];then
<br>
f_usage
<br>
- else
<br>
- f_create_hostfile
<br>
fi
<br>
<br>
-
<br>
${CHMOD_BIN} -R 777 ${MOUNT_POINT}
<br>
<br>
${CHOWN_BIN} -R ${USERNAME}:${GROUPNAME} ${MOUNT_POINT}
<br>
@@ -303,10 +274,10 @@ f_runtest()
<br>
do
<br>
for filetype in normal directory symlink
<br>
do
<br>
- echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 20 -n ${namespace} -t
${filetype} -l 50 -s 200 ${WORKPLACE}">>${LOG_FILE}
<br>
+ echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 20 -n ${namespace} -t ${filetype} -l 50 -s
200 ${WORKPLACE}">>${LOG_FILE}
<br>
echo "********${namespace} mode on
${filetype}********">>${LOG_FILE}
<br>
<br>
- ${SUDO} ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self
-mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 20 -n ${namespace} -t ${filetype} -l 50 -s
200 ${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
+ ${SUDO} ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self
-np ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 20 -n
${namespace} -t ${filetype} -l 50 -s 200
${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
rc=$?
<br>
if [ "$rc" != "0" ];then
<br>
if [ "$namespace" == "user" -a "$filetype" ==
"symlink" ]; then
<br>
@@ -346,8 +317,8 @@ f_runtest()
<br>
echo >>${LOG_FILE}
<br>
echo
"==========================================================">>${LOG_FILE}
<br>
for((i=0;i&lt;4;i++));do
<br>
- echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 10 -n user -t normal -l 50 -s
100 ${WORKPLACE}">>${LOG_FILE}
<br>
- ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 10 -n user -t normal -l 50 -s 100
${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
+ echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 10 -n user -t normal -l 50 -s 100
${WORKPLACE}">>${LOG_FILE}
<br>
+ ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 10 -n user
-t normal -l 50 -s 100 ${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
rc=$?
<br>
if [ ! "$rc" == "0" ];then
<br>
echo_failure |tee -a ${RUN_LOG_FILE}
<br>
@@ -370,8 +341,8 @@ f_runtest()
<br>
echo -ne "[${TEST_NO}] Check Max Multinode Xattr
EA_Name_Length:">> ${LOG_FILE}
<br>
echo >>${LOG_FILE}
<br>
echo
"==========================================================">>${LOG_FILE}
<br>
- echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca
btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 4 -n user -t normal -l 255 -s
300 ${WORKPLACE}">>${LOG_FILE}
<br>
- ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 4 -n user -t normal -l 255 -s 300
${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
+ echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca
btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i
1 -x 4 -n user -t normal -l 255 -s 300 ${WORKPLACE}">>${LOG_FILE}
<br>
+ ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np ${MPI_RANKS}
--host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 4 -n user -t normal -l
255 -s 300 ${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
RET=$?
<br>
echo_status ${RET} |tee -a ${RUN_LOG_FILE}
<br>
exit_or_not ${RET}
<br>
@@ -386,8 +357,8 @@ f_runtest()
<br>
echo -ne "[${TEST_NO}] Check Max Multinode Xattr
EA_Size:">> ${LOG_FILE}
<br>
echo >>${LOG_FILE}
<br>
echo
"==========================================================">>${LOG_FILE}
<br>
- echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1 -n user -t normal -l 50 -s
65536 ${WORKPLACE}">>${LOG_FILE}
<br>
- ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 1 -n user -t normal -l 50 -s 65536
${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
+ echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 1 -n user -t normal -l 50 -s 65536
${WORKPLACE}">>${LOG_FILE}
<br>
+ ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1 -n user -t
normal -l 50 -s 65536 ${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
RET=$?
<br>
echo_status ${RET} |tee -a ${RUN_LOG_FILE}
<br>
exit_or_not ${RET}
<br>
@@ -402,8 +373,8 @@ f_runtest()
<br>
echo -ne "[${TEST_NO}] Check Huge Multinode Xattr
EA_Entry_Nums:">> ${LOG_FILE}
<br>
echo >>${LOG_FILE}
<br>
echo
"==========================================================">>${LOG_FILE}
<br>
- echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 10000 -n user -t normal -l 100
-s 200 ${WORKPLACE}">>${LOG_FILE}
<br>
- ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 10000 -n user -t normal -l 100 -s 200
${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
+ echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 10000 -n user -t normal -l 100 -s 200
${WORKPLACE}">>${LOG_FILE}
<br>
+ ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 10000 -n
user -t normal -l 100 -s 200 ${WORKPLACE}>>${LOG_FILE}
2>&1
<br>
RET=$?
<br>
echo_status ${RET} |tee -a ${RUN_LOG_FILE}
<br>
exit_or_not ${RET}
<br>
@@ -418,8 +389,8 @@ f_runtest()
<br>
echo -ne "[${TEST_NO}] Check All Max Multinode Xattr Arguments
Together:">> ${LOG_FILE}
<br>
echo >>${LOG_FILE}
<br>
echo
"==========================================================">>${LOG_FILE}
<br>
- echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s
65536 ${WORKPLACE}">>${LOG_FILE}
<br>
- ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s 65536
${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
+ echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s 65536
${WORKPLACE}">>${LOG_FILE}
<br>
+ ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1000 -n user
-t normal -l 255 -s 65536 ${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
RET=$?
<br>
echo_status ${RET} |tee -a ${RUN_LOG_FILE}
<br>
exit_or_not ${RET}
<br>
@@ -434,8 +405,8 @@ f_runtest()
<br>
echo -ne "[${TEST_NO}] Launch Concurrent Adding Test:">>
${LOG_FILE}
<br>
echo >>${LOG_FILE}
<br>
echo
"==========================================================">>${LOG_FILE}
<br>
- echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s
5000 -o -r -k ${WORKPLACE}">>${LOG_FILE}
<br>
- ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s 5000 -o -r
-k ${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
+ echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s 5000 -o -r
-k ${WORKPLACE}">>${LOG_FILE}
<br>
+ ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1000 -n user
-t normal -l 255 -s 5000 -o -r -k ${WORKPLACE}>>${LOG_FILE}
2>&1
<br>
RET=$?
<br>
echo_status ${RET} |tee -a ${RUN_LOG_FILE}
<br>
exit_or_not ${RET}
<br>
@@ -450,8 +421,8 @@ f_runtest()
<br>
echo -ne "[${TEST_NO}] Launch MultiNode Xattr Stress
Test:">> ${LOG_FILE}
<br>
echo >>${LOG_FILE}
<br>
echo
"==========================================================">>${LOG_FILE}
<br>
- echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 2000 -n user -t normal -l 255 -s
5000 -r -k ${WORKPLACE}">>${LOG_FILE}
<br>
- ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 2000 -n user -t normal -l 255 -s 5000 -r -k
${WORKPLACE}>>${LOG_FILE} 2>&1
<br>
+ echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
-mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
${XATTR_TEST_BIN} -i 1 -x 2000 -n user -t normal -l 255 -s 5000 -r -k
${WORKPLACE}">>${LOG_FILE}
<br>
+ ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 2000 -n user
-t normal -l 255 -s 5000 -r -k ${WORKPLACE}>>${LOG_FILE}
2>&1
<br>
RET=$?
<br>
echo_status ${RET} |tee -a ${RUN_LOG_FILE}
<br>
exit_or_not ${RET}
<br>
</blockquote>
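The pls_rsh_agent to plm_rsh_agent change throughout the patch above tracks Open MPI's rename of the "pls" framework to "plm" in the 1.3 series. A sketch of selecting the parameter name by version, if a script ever has to support both (the <tt>mpi_agent_arg</tt> helper and its version parsing are hypothetical, not part of this patch):

```shell
# Return the MCA rsh-agent argument for an Open MPI version string
# such as "1.2.8" or "1.3.2" (pls_* before 1.3, plm_* from 1.3 on).
mpi_agent_arg() {
    major=${1%%.*}
    rest=${1#*.}
    minor=${rest%%.*}
    if [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 3 ]; }; then
        echo "-mca plm_rsh_agent ssh:rsh"
    else
        echo "-mca pls_rsh_agent ssh:rsh"
    fi
}
```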
<br>
</blockquote>
<br>
</blockquote>
</body>
</html>