The source is available from the project's HG (Mercurial) repository.
Quick and dirty usage: (note the -d option changed in 0.6)
- Untar compilebench
- ./compilebench -D some_working_dir -i 10 -r 30
- ./compilebench -D some_working_dir -i 10 --makej
- ./compilebench -D some_working_dir -i 10 --makej -d /dev/xxx -t trace_file
- ./compilebench --help for more
In --makej mode, compilebench does a shorter run that simulates creating the files make -j produces in a kernel tree, then reads and deletes the trees. This spreads the object files across all of the kernel directories, and it tests the filesystem's ability to maintain metadata and data locality while writing to a large number of directories at once.
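To picture the layout --makej produces, here is a hypothetical sketch of the effect (not compilebench's actual code; tree_root and the 4KB object size are invented for illustration):

    import os

    # Drop a .o file next to every .c file in a tree, roughly the on-disk
    # layout "make -j" leaves behind: object data interleaved with sources
    # in every directory.
    tree_root = "kernel-0"
    for dirpath, dirnames, filenames in os.walk(tree_root):
        for name in filenames:
            if name.endswith(".c"):
                obj_path = os.path.join(dirpath, name[:-2] + ".o")
                with open(obj_path, "wb") as f:
                    f.write(b"\0" * 4096)  # stand-in for real object data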
compilebench can drive seekwatcher for each phase of the run: compilebench -d /dev/xxx -t trace_file starts seekwatcher and blkparse automatically as each phase begins.
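Roughly what the -t option automates, sketched here with Python's subprocess module (blktrace does the actual capture, which blkparse post-processes); /dev/xxx and trace_file are placeholders, and the whole thing needs root:

    import subprocess, time

    # Hedged sketch: capture a blktrace while a phase runs, then graph the
    # I/O pattern with seekwatcher. Device and file names are placeholders.
    bt = subprocess.Popen(["blktrace", "-d", "/dev/xxx", "-o", "trace_file"])
    try:
        time.sleep(30)  # stand-in for one benchmark phase doing I/O
    finally:
        bt.terminate()
        bt.wait()
    subprocess.run(["seekwatcher", "-t", "trace_file", "-o", "phase.png"])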
compilebench includes dataset files that record the names and sizes of files in a kernel tree in 4 different states:
- clean, unpatched (v2.6.20 after untar)
- compiled, unpatched (v2.6.20 after compile)
- clean, patched (v2.6.21)
- compiled, patched (v2.6.21 after compile)
compilebench starts by putting these lists of file names into an order native to the filesystem it is working on. The files are created in sorted name order, and then readdir is used to find the order the filesystem actually stores the names in. After this initial phase, the filesystem-native order is used for creates, patches and compiles. Deleting, reading and stating the trees are done in readdir order.
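A minimal sketch of that discovery step (the directory and file names here are invented, not from the real dataset files):

    import os

    # Create files in sorted name order, then ask readdir (via os.scandir)
    # what order the filesystem actually returns them in.
    workdir = "order-probe"
    os.mkdir(workdir)
    for name in sorted(["Makefile", "init.c", "main.c", "sched.c"]):
        open(os.path.join(workdir, name), "w").close()

    native_order = [entry.name for entry in os.scandir(workdir)]
    print(native_order)  # e.g. hash order on ext3/ext4 with dir_index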
Next, a number of trees in the clean/unpatched state are created. This is controlled by -i, and defaults to 30. Then compilebench randomly selects from a list of operations similar to the following (a sketch of the selection loop appears after the list):
- compile
- patch
- make clean
- rm -rf kernel tree
- create a new kernel tree
- read an entire kernel tree
- stat each file in a kernel tree
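A minimal sketch of that loop, assuming a uniform random pick (the real tool's selection logic and the work done per operation differ; the stand-in functions below just print):

    import random

    trees = ["kernel-%d" % i for i in range(30)]    # the -i initial trees

    def compile_tree(t): print("compile", t)   # hypothetical stand-ins for
    def patch_tree(t):   print("patch", t)     # the real operations, which
    def clean_tree(t):   print("clean", t)     # create, rewrite, read, stat
    def read_tree(t):    print("read", t)      # or delete files according
    def stat_tree(t):    print("stat", t)      # to the dataset lists
    def delete_tree(t):  print("rm -rf", t)
    def create_tree(t):  print("create", t)

    ops = [compile_tree, patch_tree, clean_tree, read_tree,
           stat_tree, delete_tree, create_tree]
    for run in range(150):                      # -r runs, 150 in the sample
        op = random.choice(ops)
        op(random.choice(trees))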
By default, once all of the trees are created, a call to sync is included in the timings for each random operation, and /proc/sys/vm/drop_caches is used to drop the filesystem caches. This can be turned off with --no-sync, but the results will be less reliable, because the cache can artificially slow operations down (due to writeback in progress) or speed them up depending on what came before them. compilebench is trying to measure the decisions made by the allocator, so it is best to remove caching from the equation.
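For reference, the cache-dropping step between timed operations amounts to this (Linux only, needs root; os.sync needs Python 3.3+):

    import os

    # Flush dirty data to disk, then ask the kernel to drop the clean
    # pagecache plus dentry and inode caches ("3" drops all three).
    os.sync()
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")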
Longer runs take some time, but I'll post comparisons of a few filesystems here soon.
Sample output: (trimmed a little)
using working directory /mnt/default, 30 intial dirs 150 runs

create dir kernel-0 222MB in 7.59 seconds (29.30 MB/s)
...
create dir kernel-29 222MB in 30.78 seconds (7.22 MB/s)
compile dir kernel-6 680MB in 29.55 seconds (23.03 MB/s)
stat dir kernel-1 in 13.85 seconds
delete kernel-6 in 28.15 seconds
patch dir kernel-26 109MB in 43.70 seconds (2.51 MB/s)
...
read dir kernel-15 in 75.06 3.04 MB/s
...
clean kernel-15 691MB in 5.37 seconds (128.79 MB/s)
...
run complete:
==========================================================================
intial create total runs 30 avg 8.74 MB/s (user 1.17s sys 5.43s)
create total runs 29 avg 9.41 MB/s (user 1.04s sys 4.84s)
patch total runs 18 avg 1.99 MB/s (user 0.52s sys 2.61s)
compile total runs 15 avg 20.82 MB/s (user 0.25s sys 2.24s)
clean total runs 11 avg 116.46 MB/s (user 0.05s sys 0.70s)
read tree total runs 30 avg 5.17 MB/s (user 1.16s sys 5.57s)
delete tree total runs 16 avg 24.70 seconds (user 0.76s sys 6.40s)
stat tree total runs 31 avg 14.36 seconds (user 0.79s sys 2.53s)