OCFS2 Test on 16TB+ Volume

Introduction

This document lays out the testing plan for a 16TB+ OCFS2 volume, a configuration of real significance in production environments. A thorough test plan is illustrated below with each testcase well defined, and the document also serves as a testing report to track all of the issues we hit during the tests. Since ocfs2-test has been around for years and integrates a rich set of OCFS2-specific and common testcases, testing on 16TB+ volumes will mainly rely on it; some additional tests are also mentioned.

Test Limitation

Until we support a full 64 bits worth of clusters, testing large volumes requires cluster sizes larger than 4KB. As things stand today, we should be able to support OCFS2 volumes of up to 4PB (32-bit cluster numbers combined with a 2^20-byte, i.e. 1MB, cluster size) without 64-bit cluster support implemented.
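As a quick check of that figure: 2^32 clusters times a 2^20-byte (1MB) cluster size gives 2^52 bytes, i.e. 4PB. A minimal Python illustration:

    # Maximum addressable OCFS2 volume without 64-bit cluster counts:
    # 32-bit cluster numbers x 1MB (2**20-byte) maximum cluster size.
    max_clusters = 2 ** 32
    max_cluster_size = 2 ** 20            # 1MB
    max_bytes = max_clusters * max_cluster_size

    assert max_bytes == 2 ** 52           # 4PB
    print(max_bytes // 2 ** 50, "PB")     # prints: 4 PB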

Testcases

We concentrate on functionality, stress, and boundary tests of the kernel filesystem on both single and multiple nodes. Beyond that, 16TB+ support in the userspace ocfs2-tools will also be checked.

1. Single-node Tests

Single-node tests mainly include basic functional tests and stress tests.

1. Basic functional tests, which include (see the sketch after this list):

    1) Tools support on a 16TB+ volume (mkfs-tests, tunefs-test and fsck-tests from ocfs2-test, plus a manual debugfs.ocfs2 test).
    2) stat_sysdir.sh
    3) loop mounting check on a sparse file.
    4) inode64 check on mount.
    5) single_run-WIP tests, a set of sub-testcases that carefully check the separate functionalities of the fs.
    6) xattr_test
    7) reflink_test
    8) inline_test
    9) mmap_test
    10) quota_test
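For illustration, below is a minimal sketch of the manual tools sanity check, written in Python around the standard tool invocations; the device path and the slot count are placeholder assumptions, and the authoritative coverage comes from the mkfs-tests, tunefs-test and fsck-tests drivers named above.

    import subprocess

    DEVICE = "/dev/sdb1"  # placeholder: the 16TB+ test device

    def run(cmd):
        """Echo a command and fail loudly on a non-zero exit."""
        print("#", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Format with a 4KB block size and a 1MB cluster size, large enough
    # to address a 16TB+ volume (see the Test Limitation section above).
    run(["mkfs.ocfs2", "-b", "4K", "-C", "1M", "-N", "4", DEVICE])

    # Force a full check; it should come back clean on a fresh volume.
    run(["fsck.ocfs2", "-fy", DEVICE])

    # Dump the superblock so the cluster count and size can be eyeballed.
    run(["debugfs.ocfs2", "-R", "stats", DEVICE])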

2. Random I/O tests, to be performed on a huge file (say, 20TB in size); a sketch of the workload follows this list:

    1) Random writes
    2) Random reads
    3) Random rw
    4) Random truncate
    5) Random appends
    6) Random direct I/O rw
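A minimal sketch of such a random read/write workload, assuming the volume is mounted and using placeholder paths and sizes (the direct-io variant would additionally open with os.O_DIRECT and aligned buffers):

    import os
    import random

    PATH = "/mnt/ocfs2/hugefile"   # placeholder mount point and file name
    FILE_SIZE = 20 * 2 ** 40       # a 20TB file, kept sparse until written
    BLOCK = 4096
    ROUNDS = 1000

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o644)
    os.ftruncate(fd, FILE_SIZE)    # sparse: no blocks allocated yet

    for _ in range(ROUNDS):
        # Random write of one block somewhere in the 20TB range ...
        os.pwrite(fd, os.urandom(BLOCK), random.randrange(0, FILE_SIZE - BLOCK))
        # ... followed by a random read from an unrelated offset.
        os.pread(fd, BLOCK, random.randrange(0, FILE_SIZE - BLOCK))

    os.close(fd)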

3. Stress and destructive tests. The random and basic testcases above already stress the fs to some extent; the following adds a few straightforward manual tests (a sketch follows this list):

   1) Quick-and-dirty fill-up of the volume with dd.
   2) Mass dirents with considerable directory depth and size.
   3) Mass inode propagation.
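A sketch of the mass-dirent and mass-inode stress is shown below; the mount point, depth and fan-out are placeholder parameters chosen only to make the pattern concrete (the dd fill-up, by contrast, is just a write loop run until the volume returns ENOSPC).

    import os

    ROOT = "/mnt/ocfs2/stress"   # placeholder mount point
    DEPTH = 64                   # directory nesting depth
    FANOUT = 10000               # dirents created per directory level

    path = ROOT
    for level in range(DEPTH):
        path = os.path.join(path, "d%03d" % level)
        os.makedirs(path, exist_ok=True)
        # Populate each level with many empty files: this both grows the
        # directories (mass dirents) and allocates many inodes.
        for i in range(FANOUT):
            open(os.path.join(path, "f%06d" % i), "w").close()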

 

2. Multi-node Tests

Currently, multi-node tests focus only on the testcases that multiple_run.sh in ocfs2-test already includes.

1. multi_inline_test

2. write_append_truncate_test (see the sketch after this list)

3. multi_mmap_test

4. lvb_torture_test

5. create_racer_test

6. multi_xattr_test

7. multi_reflink_test
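For reference, write_append_truncate exercises concurrent appends and truncates against one shared file from different nodes. A minimal single-machine sketch of that access pattern, with two processes standing in for two nodes and a placeholder path (the real test additionally verifies the file contents):

    import multiprocessing
    import os

    PATH = "/mnt/ocfs2/shared"   # placeholder file on the cluster volume

    def appender(rounds):
        # Stands in for a node that keeps appending records.
        fd = os.open(PATH, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
        for _ in range(rounds):
            os.write(fd, b"A" * 4096)
        os.close(fd)

    def truncator(rounds):
        # Stands in for a node that keeps truncating the same file.
        for _ in range(rounds):
            if os.path.exists(PATH):
                os.truncate(PATH, 0)

    if __name__ == "__main__":
        procs = [multiprocessing.Process(target=appender, args=(1000,)),
                 multiprocessing.Process(target=truncator, args=(1000,))]
        for p in procs:
            p.start()
        for p in procs:
            p.join()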

Testing Status

Test run 1 (Date: 1/11/2010)

    Kernel/Patches: Ubuntu 9.10 linux-2.6.32.2.31339
    Arch/Nodes: i686 single-node
    Storage Info: SAN: 37TB Volume
    Ocfs2-Tools: ocfs2-tools-1.4.3
    Testing Tool: ocfs2-test, stat_sysdir.sh
    Testing Briefing: Single-node basic test:
        1. manual sanity check of tools support for the 16TB+ volume passed
        2. all random testcases passed
        3. loop mounting check on a sparse file passed
        4. inode64 check on mount passed
        5. inline-data test passed
        6. mmap_test passed
        7. filling-up test passed
        8. mass inode propagation test passed
    Testing Report&Log: inline-dirs-test.log, inline-data-test.log, stat_sysdir.log

Test run 2 (Date: 1/13/2010)

    Kernel/Patches: Ubuntu 9.10 linux-2.6.32.2.31339
    Arch/Nodes: i686 single-node
    Storage Info: SAN: 37TB Volume
    Ocfs2-Tools: ocfs2-tools-1.4.3
    Testing Tool: ocfs2-test
    Testing Briefing: Single-node basic test:
        1. single_run-WIP test
        2. xattr single-node test passed
        3. reflink single-node test passed
    Testing Report&Log: single_logs.tgz, reflink_single_logs.tgz, xattr_logs.tgz

