Download the latest revision (0.6)
HG repo

Quick and dirty usage: (note the -d option changed in 0.6)
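
A typical run looks something like this (a sketch only: -D as the working-directory flag and the exact option spellings are assumptions based on the options described below; check compilebench --help for your version):

    compilebench -D /mnt/test -i 30 -r 150

Here -i sets the number of initial kernel trees and -r the number of random operations, matching the defaults and the sample run shown below.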

Compilebench tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating and reading kernel trees. It indirectly measures how well filesystems can maintain directory locality as the disk fills up and directories age. Thanks to Matt Mackall for the idea of simulating kernel compiles to achieve this.

In --makej mode, it does a shorter run that simulates the files created by a make -j in a kernel tree, then reads and deletes the trees. This spreads the object files out all over the kernel directories, and it tests the filesystem's ability to maintain metadata and data locality while writing to a large number of directories at once.
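
For example, a make -j style run might look like this (again a sketch; the -D working-directory flag is an assumption):

    compilebench -D /mnt/test -i 10 --makej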

compilebench can start seekwatcher at the beginning of each phase: running compilebench -d /dev/xxx -t trace_file starts seekwatcher and blkparse automatically.

compilebench includes dataset files that record the names and sizes of the files in a kernel tree in 4 different states; these datasets drive the create, patch, compile and clean operations described below.

compilebench starts by putting these lists of file names into an order native to the filesystem it is working on. The files are created in sorted order by name, and then readdir is used to find the order the filesystem uses for storing the names. After this initial phase, the filesystem-native order is used for creates, patches and compiles. Deleting, reading and stating the trees are done in readdir order.
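
A minimal Python sketch of that ordering step, with a hypothetical directory and file list (os.listdir returns names in whatever order the filesystem's readdir produces them):

    import os

    def native_order(dirname, names):
        wanted = set(names)
        # Create the files in sorted-by-name order.
        for name in sorted(names):
            open(os.path.join(dirname, name), "w").close()
        # os.listdir returns entries in readdir order, i.e. the order
        # the filesystem actually stores the names.
        return [n for n in os.listdir(dirname) if n in wanted]

    # On a hash-ordered directory (ext3/4 with dir_index, for example)
    # this usually differs from the sorted creation order.
    print(native_order("/mnt/test/ordering", ["Makefile", "a.c", "b.c"]))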

Next, a number of trees in the clean/unpatched state are created. This is controlled by -i, and defaults to 30. Then compilebench randomly selects from a list of operations similar to: create a new kernel tree, patch a tree, compile a tree, clean a tree, read a tree, stat every file in a tree, and delete a tree.

Average throughput for each operation is recorded and printed when the run is over. The number of random operations is controlled by -r. The same random seed is used for each run, so repeated runs with identical command line options will do the same operations in the same order each time.
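
The deterministic ordering can be sketched in a few lines of Python; the operation names come from the sample output below, while the seed value is an arbitrary placeholder:

    import random

    OPS = ["create", "patch", "compile", "clean", "read", "stat", "delete"]

    random.seed(42)  # fixed, arbitrary seed: same sequence on every run
    sequence = [random.choice(OPS) for _ in range(150)]  # e.g. -r 150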

By default, once all of the trees are created, a call to sync is included in the timings for each random operation, and /proc/sys/vm/drop_caches is used to drop the filesystem caches. This can be turned off with --no-sync, but the results will be less reliable because caching can artificially slow down operations (due to writeback in progress) or speed them up based on what came before them. compilebench is trying to measure the decisions made by the allocator, so it is best to remove caching from the equation.
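
The sync-and-drop step amounts to flushing dirty data and then writing to /proc/sys/vm/drop_caches, roughly like this sketch (needs root; writing "3" drops the page cache plus dentries and inodes):

    import os

    def drop_caches():
        # Flush dirty data first so nothing is left to write back.
        os.system("sync")
        # 1 = page cache, 2 = dentries and inodes, 3 = both.
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3\n")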

Longer runs take some time, but I'll post comparisons of a few filesystems here soon.

Sample output: (trimmed a little)

using working directory /mnt/default, 30 intial dirs 150 runs
create dir kernel-0 222MB in 7.59 seconds (29.30 MB/s)
create dir kernel-29 222MB in 30.78 seconds (7.22 MB/s)
compile dir kernel-6 680MB in 29.55 seconds (23.03 MB/s)
stat dir kernel-1 in 13.85 seconds
delete kernel-6 in 28.15 seconds
patch dir kernel-26 109MB in 43.70 seconds (2.51 MB/s)
read dir kernel-15 in 75.06 3.04 MB/s
clean kernel-15 691MB in 5.37 seconds (128.79 MB/s)
run complete:
intial create total runs 30 avg 8.74 MB/s (user 1.17s sys 5.43s)
create total runs 29 avg 9.41 MB/s (user 1.04s sys 4.84s)
patch total runs 18 avg 1.99 MB/s (user 0.52s sys 2.61s)
compile total runs 15 avg 20.82 MB/s (user 0.25s sys 2.24s)
clean total runs 11 avg 116.46 MB/s (user 0.05s sys 0.70s)
read tree total runs 30 avg 5.17 MB/s (user 1.16s sys 5.57s)
delete tree total runs 16 avg 24.70 seconds (user 0.76s sys 6.40s)
stat tree total runs 31 avg 14.36 seconds (user 0.79s sys 2.53s)