[Ocfs2-test-devel] [PATCH 1/1] Ocfs2-test: Changes needed from openmpi-1.2.5 to 1.3.2

Marcos E. Matsunaga Marcos.Matsunaga at oracle.com
Tue Dec 29 19:16:50 PST 2009


This is all i686 to i686... The path is the same, but it resolves to a
different location for each of the archs. If you look at
/scratch/mmatsuna/ocfs2test, you will notice it is a link to
/usr/local/links/ocfs2test, which in turn is a link to the
/scratch/mmatsuna/runtest_el5_<arch> directory. That is the real location
of the binaries, so each arch has its own set of binaries and they do
not get mixed up.
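
For illustration, on an x86_64 node the chain would look something like
this (the per-arch target shown here is only an example):

    /scratch/mmatsuna/ocfs2test -> /usr/local/links/ocfs2test
    /usr/local/links/ocfs2test  -> /scratch/mmatsuna/runtest_el5_x86_64

so every node references the same path but picks up its own binaries.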

It seems to me that openmpi on i686 is acting like x86_64 did without
the fix. Very weird. I'll check it again tomorrow.

Regards,

Marcos Eduardo Matsunaga

Oracle USA
Linux Engineering

“The statements and opinions expressed here are my own and do not
necessarily represent those of Oracle Corporation.”


On 12/29/2009 08:29 PM, tristan wrote:
> Marcos E. Matsunaga wrote:
>> Tristan.
>> One comment.
>>
>> Instead of setting SLOTS=4, you could add the following to the end of
>> f_getoptions:
>>
>>         echo $MPI_HOSTS|sed -e 's/,/\n/g' >/tmp/$$
>>         SLOTS=`cat /tmp/$$ |wc -l`
>>         rm -f /tmp/$$
>>
>> That way, it will always format the partition with the number of nodes
>> specified.
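>>
>> Or, without the temp file, something along the lines of
>>
>>         SLOTS=`echo ${MPI_HOSTS} | awk -F, '{print NF}'`
>>
>> should give the same count.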
>>   
>
> Good point, it's better to determine the slot number from the MPI hosts.
>
>> Also, SLOTS is defined but not used anywhere. You should make the
>> MKFS_BIN line include "-N ${SLOTS}" instead of "-N 2".
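>>
>> Something along these lines, for example (the exact mkfs.ocfs2 options
>> in the script may differ; this is only a sketch):
>>
>>         ${MKFS_BIN} -b ${BLOCKSIZE} -C ${CLUSTERSIZE} -N ${SLOTS} ${OCFS2_DEVICE}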
>>   
>
> Did I hardcode this before? Oops, I may have done this for debugging and
> forgotten to correct it afterward...
> Thanks for pointing this out!
>
>> One thing that I found interesting is that it works fine on x86_64, but
>> on i686 it shows the same behavior that I reported to you before. It
>> stays in this loop:
>>   
>
> Interesting, what did you mean exactly? Does it hang when communicating
> between x86_64 and i686, or between i686 and i686?
>
> You kept the MPI binary in a shared volume among multiple nodes? Will
> that hurt the execution of the MPI binary on different arches that
> nonetheless share the same ELF format?
>
>
>
> Tristan.
>>
>> 14:19:48.575946 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
>> {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
>> {fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
>> {fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
>> 14:19:48.576051 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
>> {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
>> {fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
>> {fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
>> 14:19:48.576156 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
>> {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
>> {fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
>> {fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
>> 14:19:48.576262 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
>> {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
>> {fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
>> {fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
>> 14:19:48.576369 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
>> {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
>> {fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
>> {fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000009>
>> 14:19:48.576498 poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN},
>> {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN},
>> {fd=9, events=POLLIN}, {fd=10, events=POLLIN}, {fd=11, events=POLLIN},
>> {fd=12, events=POLLIN}], 9, 0) = 0 (Timeout) <0.000008>
>>
>> I still haven't had a chance to test it on ia64 and ppc, as I'm waiting
>> for other tests to complete before I test this patch.
>>
>> Regards,
>>
>> Marcos Eduardo Matsunaga
>>
>> Oracle USA
>> Linux Engineering
>>
>> “The statements and opinions expressed here are my own and do not
>> necessarily represent those of Oracle Corporation.”
>>
>>
>> On 12/21/2009 12:44 AM, Tristan Ye wrote:
>>  
>>> There are still some corners where the changes needed for the latest
>>> 1.3.2 were left untouched; this patch corrects them accordingly.
>>>
>>> Marcos, will you please help verify whether it solves the issues you
>>> encountered in the xattr tests before giving a SOB on this patch?
>>>
>>> Signed-off-by: Tristan Ye <tristan.ye at oracle.com>
>>> ---
>>>  programs/dx_dirs_tests/multi_index_dir_run.sh      |    4 +-
>>>  .../multi_inode_alloc_perf.sh                      |    6 +-
>>>  programs/reflink_tests/multi_reflink_test_run.sh   |    4 +-
>>>  programs/xattr_tests/xattr-multi-run.sh            |   67
>>> ++++++--------------
>>>  4 files changed, 26 insertions(+), 55 deletions(-)
>>>
>>> diff --git a/programs/dx_dirs_tests/multi_index_dir_run.sh
>>> b/programs/dx_dirs_tests/multi_index_dir_run.sh
>>> index 735fc79..3a3624c 100755
>>> --- a/programs/dx_dirs_tests/multi_index_dir_run.sh
>>> +++ b/programs/dx_dirs_tests/multi_index_dir_run.sh
>>> @@ -63,7 +63,7 @@ DEFAULT_RANKS=4
>>>  MPI_RANKS=
>>>  MPI_HOSTS=
>>>  MPI_ACCESS_METHOD="ssh"
>>> -MPI_PLS_AGENT_ARG="-mca pls_rsh_agent ssh:rsh"
>>> +MPI_PLS_AGENT_ARG="-mca plm_rsh_agent ssh:rsh"
>>>  MPI_BTL_ARG="-mca btl tcp,self"
>>>  MPI_BTL_IF_ARG=
>>>  
>>> @@ -123,7 +123,7 @@ function f_setup()
>>>      f_getoptions $*
>>>     
>>>      if [ "$MPI_ACCESS_METHOD" = "rsh" ];then
>>> -        MPI_PLS_AGENT_ARG="-mca pls_rsh_agent rsh:ssh"
>>> +        MPI_PLS_AGENT_ARG="-mca plm_rsh_agent rsh:ssh"
>>>      fi
>>>  
>>>      if [ -z "${MOUNT_POINT}" ];then
>>> diff --git a/programs/inode_alloc_perf_tests/multi_inode_alloc_perf.sh
>>> b/programs/inode_alloc_perf_tests/multi_inode_alloc_perf.sh
>>> index e419961..a815ef0 100755
>>> --- a/programs/inode_alloc_perf_tests/multi_inode_alloc_perf.sh
>>> +++ b/programs/inode_alloc_perf_tests/multi_inode_alloc_perf.sh
>>> @@ -88,7 +88,7 @@ ORIG_COMMITID=
>>>  declare -i MPI_RANKS
>>>  MPI_HOSTS=
>>>  MPI_ACCESS_METHOD="ssh"
>>> -MPI_PLS_AGENT_ARG="-mca pls_rsh_agent ssh:rsh"
>>> +MPI_PLS_AGENT_ARG="-mca plm_rsh_agent ssh:rsh"
>>>  
>>>  set -o pipefail
>>>  
>>> @@ -196,7 +196,7 @@ function f_check()
>>>          f_getoptions $*
>>>  
>>>      if [ "$MPI_ACCESS_METHOD" = "rsh" ];then
>>> -                MPI_PLS_AGENT_ARG="-mca pls_rsh_agent rsh:ssh"
>>> +                MPI_PLS_AGENT_ARG="-mca plm_rsh_agent rsh:ssh"
>>>          fi
>>>  
>>>          if [ -z "${MOUNT_POINT}" ];then
>>> @@ -314,7 +314,7 @@ f_run_test_one_time()
>>>          f_exit_or_not ${RET}
>>>  
>>>      f_LogRunMsg "<Iteration ${1}>Run inode alloc perf tests among
>>> nodes ${MPI_HOSTS}:"
>>> -    ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
>>> btl_tcp_if_include eth0  --host ${MPI_HOSTS} ${LOCAL_TEST_RUNNER} -l
>>> ${LOCAL_LOG_DIR} -b ${LABELNAME} -t ${KERNEL_TARBALL} -k
>>> ${KERNEL_PATH} -p ${PATCH_PATH} -s ${ISCSI_SERVER} ${MOUNT_POINT}
>>> >>${LOG_FILE} 2>&1
>>> +    ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self --host
>>> ${MPI_HOSTS} ${LOCAL_TEST_RUNNER} -l ${LOCAL_LOG_DIR} -b
>>> ${LABELNAME} -t ${KERNEL_TARBALL} -k ${KERNEL_PATH} -p ${PATCH_PATH}
>>> -s ${ISCSI_SERVER} ${MOUNT_POINT} >>${LOG_FILE} 2>&1
>>>      RET=$?
>>>          f_echo_status ${RET} | tee -a ${RUN_LOG_FILE}
>>>          f_exit_or_not ${RET}
>>> diff --git a/programs/reflink_tests/multi_reflink_test_run.sh
>>> b/programs/reflink_tests/multi_reflink_test_run.sh
>>> index f389172..13b72e0 100755
>>> --- a/programs/reflink_tests/multi_reflink_test_run.sh
>>> +++ b/programs/reflink_tests/multi_reflink_test_run.sh
>>> @@ -74,7 +74,7 @@ DEFAULT_RANKS=4
>>>  MPI_RANKS=
>>>  MPI_HOSTS=
>>>  MPI_ACCESS_METHOD="ssh"
>>> -MPI_PLS_AGENT_ARG="-mca pls_rsh_agent ssh:rsh"
>>> +MPI_PLS_AGENT_ARG="-mca plm_rsh_agent ssh:rsh"
>>>  MPI_BTL_ARG="-mca btl tcp,self"
>>>  MPI_BTL_IF_ARG=
>>>  
>>> @@ -135,7 +135,7 @@ function f_setup()
>>>      f_getoptions $*
>>>  
>>>      if [ "$MPI_ACCESS_METHOD" = "rsh" ];then
>>> -        MPI_PLS_AGENT_ARG="-mca pls_rsh_agent rsh:ssh"
>>> +        MPI_PLS_AGENT_ARG="-mca plm_rsh_agent rsh:ssh"
>>>      fi
>>>  
>>>      if [ -z "${MOUNT_POINT}" ];then
>>> diff --git a/programs/xattr_tests/xattr-multi-run.sh
>>> b/programs/xattr_tests/xattr-multi-run.sh
>>> index cb6984f..701d5fb 100755
>>> --- a/programs/xattr_tests/xattr-multi-run.sh
>>> +++ b/programs/xattr_tests/xattr-multi-run.sh
>>> @@ -71,18 +71,17 @@ OCFS2_DEVICE=
>>>  BLOCKSIZE=
>>>  CLUSTERSIZE=
>>>  BLOCKNUMS=
>>> +SLOTS=4
>>>  
>>>  WORKPLACE=
>>>  
>>>  TMP_DIR=/tmp
>>> -DEFAULT_HOSTFILE=".openmpi_hostfile"
>>>  DEFAULT_RANKS=4
>>>  
>>>  declare -i MPI_RANKS
>>>  MPI_HOSTS=
>>> -MPI_HOSTFILE=
>>>  MPI_ACCESS_METHOD="ssh"
>>> -MPI_PLS_AGENT_ARG="-mca pls_rsh_agent ssh:rsh"
>>> +MPI_PLS_AGENT_ARG="-mca plm_rsh_agent ssh:rsh"
>>>  
>>>  TEST_NO=0
>>>  TEST_PASS=0
>>> @@ -176,31 +175,6 @@ f_getoptions()
>>>  
>>>  }
>>>  
>>> -f_create_hostfile()
>>> -{
>>> -        MPI_HOSTFILE="${TMP_DIR}/${DEFAULT_HOSTFILE}"
>>> -    TMP_FILE="${TMP_DIR}/.tmp_openmpi_hostfile_$$"
>>> -
>>> -    echo ${MPI_HOSTS}|sed -e 's/,/\n/g'>$TMP_FILE
>>> -
>>> -        if [ -f "$MPI_HOSTFILE" ];then
>>> -                ${RM} -rf ${MPI_HOSTFILE}
>>> -        fi
>>> -
>>> -        while read line
>>> -        do
>>> -        if [ -z $line ];then
>>> -            continue
>>> -        fi
>>> -
>>> -                echo "$line">>$MPI_HOSTFILE
>>> -
>>> -        done<$TMP_FILE
>>> -
>>> -        ${RM} -rf $TMP_FILE
>>> -}
>>> -
>>> -
>>>  f_setup()
>>>  {
>>>      if [ "${UID}" = "0" ];then
>>> @@ -211,7 +185,7 @@ f_setup()
>>>      f_getoptions $*
>>>     
>>>      if [ "$MPI_ACCESS_METHOD" = "rsh" ];then
>>> -        MPI_PLS_AGENT_ARG="-mca pls_rsh_agent rsh:ssh"
>>> +        MPI_PLS_AGENT_ARG="-mca plm_rsh_agent rsh:ssh"
>>>          REMOTE_SH_BIN=${RSH_BIN}
>>>      fi
>>>  
>>> @@ -244,11 +218,8 @@ f_setup()
>>>     
>>>      if [ -z "$MPI_HOSTS" ];then
>>>          f_usage
>>> -    else
>>> -        f_create_hostfile
>>>      fi
>>>  
>>> -
>>>      ${CHMOD_BIN} -R 777 ${MOUNT_POINT}
>>>  
>>>          ${CHOWN_BIN} -R ${USERNAME}:${GROUPNAME} ${MOUNT_POINT}
>>> @@ -303,10 +274,10 @@ f_runtest()
>>>      do
>>>          for filetype in normal directory symlink
>>>          do
>>> -            echo -e "Testing Binary:\t\t${MPIRUN}
>>> ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca btl_tcp_if_include eth0
>>> -np ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 20 -n
>>> ${namespace} -t ${filetype} -l 50 -s 200 ${WORKPLACE}">>${LOG_FILE}
>>> +            echo -e "Testing Binary:\t\t${MPIRUN}
>>> ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np ${MPI_RANKS} --host
>>> ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 20 -n ${namespace} -t
>>> ${filetype} -l 50 -s 200 ${WORKPLACE}">>${LOG_FILE}
>>>              echo "********${namespace} mode on
>>> ${filetype}********">>${LOG_FILE}
>>>  
>>> -            ${SUDO} ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl
>>> tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
>>> ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 20 -n ${namespace} -t
>>> ${filetype} -l 50 -s 200 ${WORKPLACE}>>${LOG_FILE} 2>&1
>>> +            ${SUDO} ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl
>>> tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1
>>> -x 20 -n ${namespace} -t ${filetype} -l 50 -s 200
>>> ${WORKPLACE}>>${LOG_FILE} 2>&1
>>>              rc=$?
>>>              if [ "$rc" != "0" ];then
>>>                  if [ "$namespace" == "user" -a "$filetype" ==
>>> "symlink" ]; then
>>> @@ -346,8 +317,8 @@ f_runtest()
>>>          echo >>${LOG_FILE}
>>>          echo
>>> "==========================================================">>${LOG_FILE}
>>>
>>>      for((i=0;i<4;i++));do
>>> -        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS}
>>> --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 10 -n user -t normal
>>> -l 50 -s 100 ${WORKPLACE}">>${LOG_FILE}
>>> -        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
>>> btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 10 -n user -t normal -l 50 -s 100
>>> ${WORKPLACE}>>${LOG_FILE} 2>&1
>>> +        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 10 -n user -t normal -l 50 -s 100
>>> ${WORKPLACE}">>${LOG_FILE}
>>> +        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
>>> ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 10 -n
>>> user -t normal -l 50 -s 100 ${WORKPLACE}>>${LOG_FILE} 2>&1
>>>          rc=$?
>>>          if [ ! "$rc" == "0"  ];then
>>>              echo_failure |tee -a ${RUN_LOG_FILE}
>>> @@ -370,8 +341,8 @@ f_runtest()
>>>      echo -ne "[${TEST_NO}] Check Max Multinode Xattr
>>> EA_Name_Length:">> ${LOG_FILE}
>>>      echo >>${LOG_FILE}
>>>          echo
>>> "==========================================================">>${LOG_FILE}
>>>
>>> -    echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca
>>> btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS} --host
>>> ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 4 -n user -t normal -l 255 -s
>>> 300 ${WORKPLACE}">>${LOG_FILE}
>>> -    ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
>>> btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 4 -n user -t normal -l 255 -s 300
>>> ${WORKPLACE}>>${LOG_FILE} 2>&1
>>> +    echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca
>>> btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN}
>>> -i 1 -x 4 -n user -t normal -l 255 -s 300 ${WORKPLACE}">>${LOG_FILE}
>>> +    ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
>>> ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 4 -n user
>>> -t normal -l 255 -s 300 ${WORKPLACE}>>${LOG_FILE} 2>&1
>>>      RET=$?
>>>          echo_status ${RET} |tee -a ${RUN_LOG_FILE}
>>>          exit_or_not ${RET}
>>> @@ -386,8 +357,8 @@ f_runtest()
>>>          echo -ne "[${TEST_NO}] Check Max Multinode Xattr
>>> EA_Size:">> ${LOG_FILE}
>>>          echo >>${LOG_FILE}
>>>          echo
>>> "==========================================================">>${LOG_FILE}
>>>
>>> -        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS}
>>> --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1 -n user -t normal -l
>>> 50 -s 65536 ${WORKPLACE}">>${LOG_FILE}
>>> -        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
>>> btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 1 -n user -t normal -l 50 -s 65536
>>> ${WORKPLACE}>>${LOG_FILE} 2>&1
>>> +        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 1 -n user -t normal -l 50 -s 65536
>>> ${WORKPLACE}">>${LOG_FILE}
>>> +        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
>>> ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1 -n user
>>> -t normal -l 50 -s 65536 ${WORKPLACE}>>${LOG_FILE} 2>&1
>>>          RET=$?
>>>          echo_status ${RET} |tee -a ${RUN_LOG_FILE}
>>>          exit_or_not ${RET}
>>> @@ -402,8 +373,8 @@ f_runtest()
>>>          echo -ne "[${TEST_NO}] Check Huge Multinode Xattr
>>> EA_Entry_Nums:">> ${LOG_FILE}
>>>          echo >>${LOG_FILE}
>>>          echo
>>> "==========================================================">>${LOG_FILE}
>>>
>>> -        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS}
>>> --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 10000 -n user -t
>>> normal -l 100 -s 200 ${WORKPLACE}">>${LOG_FILE}
>>> -        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
>>> btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 10000 -n user -t normal -l 100 -s 200
>>> ${WORKPLACE}>>${LOG_FILE} 2>&1
>>> +        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 10000 -n user -t normal -l 100 -s 200
>>> ${WORKPLACE}">>${LOG_FILE}
>>> +        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
>>> ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 10000 -n
>>> user -t normal -l 100 -s 200 ${WORKPLACE}>>${LOG_FILE} 2>&1
>>>          RET=$?
>>>          echo_status ${RET} |tee -a ${RUN_LOG_FILE}
>>>          exit_or_not ${RET}
>>> @@ -418,8 +389,8 @@ f_runtest()
>>>          echo -ne "[${TEST_NO}] Check All Max Multinode Xattr
>>> Arguments Together:">> ${LOG_FILE}
>>>          echo >>${LOG_FILE}
>>>          echo
>>> "==========================================================">>${LOG_FILE}
>>>
>>> -        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS}
>>> --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal
>>> -l 255 -s 65536 ${WORKPLACE}">>${LOG_FILE}
>>> -        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
>>> btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s 65536
>>> ${WORKPLACE}>>${LOG_FILE} 2>&1
>>> +        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s 65536
>>> ${WORKPLACE}">>${LOG_FILE}
>>> +        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
>>> ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1000 -n
>>> user -t normal -l 255 -s 65536 ${WORKPLACE}>>${LOG_FILE} 2>&1
>>>          RET=$?
>>>          echo_status ${RET} |tee -a ${RUN_LOG_FILE}
>>>          exit_or_not ${RET}
>>> @@ -434,8 +405,8 @@ f_runtest()
>>>          echo -ne "[${TEST_NO}] Launch Concurrent Adding Test:">>
>>> ${LOG_FILE}
>>>          echo >>${LOG_FILE}
>>>          echo
>>> "==========================================================">>${LOG_FILE}
>>>
>>> -        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS}
>>> --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal
>>> -l 255 -s 5000 -o -r -k ${WORKPLACE}">>${LOG_FILE}
>>> -        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
>>> btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s 5000 -o
>>> -r -k ${WORKPLACE}>>${LOG_FILE} 2>&1
>>> +        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 1000 -n user -t normal -l 255 -s 5000 -o
>>> -r -k ${WORKPLACE}">>${LOG_FILE}
>>> +        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
>>> ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 1000 -n
>>> user -t normal -l 255 -s 5000 -o -r -k ${WORKPLACE}>>${LOG_FILE} 2>&1
>>>          RET=$?
>>>          echo_status ${RET} |tee -a ${RUN_LOG_FILE}
>>>          exit_or_not ${RET}
>>> @@ -450,8 +421,8 @@ f_runtest()
>>>          echo -ne "[${TEST_NO}] Launch MultiNode Xattr Stress
>>> Test:">> ${LOG_FILE}
>>>          echo >>${LOG_FILE}
>>>          echo
>>> "==========================================================">>${LOG_FILE}
>>>
>>> -        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -mca btl_tcp_if_include eth0 -np ${MPI_RANKS}
>>> --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 2000 -n user -t normal
>>> -l 255 -s 5000  -r -k ${WORKPLACE}">>${LOG_FILE}
>>> -        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -mca
>>> btl_tcp_if_include eth0 -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 2000 -n user -t normal -l 255 -s 5000  -r
>>> -k ${WORKPLACE}>>${LOG_FILE} 2>&1
>>> +        echo -e "Testing Binary:\t\t${MPIRUN} ${MPI_PLS_AGENT_ARG}
>>> -mca btl tcp,self -np ${MPI_RANKS} --host ${MPI_HOSTS}
>>> ${XATTR_TEST_BIN} -i 1 -x 2000 -n user -t normal -l 255 -s 5000  -r
>>> -k ${WORKPLACE}">>${LOG_FILE}
>>> +        ${MPIRUN} ${MPI_PLS_AGENT_ARG} -mca btl tcp,self -np
>>> ${MPI_RANKS} --host ${MPI_HOSTS} ${XATTR_TEST_BIN} -i 1 -x 2000 -n
>>> user -t normal -l 255 -s 5000  -r -k ${WORKPLACE}>>${LOG_FILE} 2>&1
>>>          RET=$?
>>>          echo_status ${RET} |tee -a ${RUN_LOG_FILE}
>>>          exit_or_not ${RET}
>>>       
>>
>>   
>