[Ocfs2-users] ocfs2 performance and scaling

Sabuj Pattanayek sabujp at gmail.com
Tue Jul 22 10:34:25 PDT 2008


Hi,

> Try it out. If not, then we have a bottleneck somewhere.
>
> One obvious bottleneck is the global bitmap. The fs works around this by
> using a node local bitmap cache called localalloc. By default it is 8MB.
> So if you are using a 4K/4K (block/cluster), then you will hit the global
> bitmap (and thus cluster lock) every 2048 extents. If that is a bottleneck,
> you can mount with a larger localalloc.
>
> To mount with 16MB localalloc, do:
> mount -olocalalloc=16

I've given up on using volumes >16TB for now and will settle for a
15TB volume and a 10TB volume created with 4k/4k block/cluster sizes
and -T mail. However, throughput is exactly halved when I start a
dd on both nodes at once. The storage isn't being maxed out; it did
around 200MB/s when I was using XFS. The mount on both orca and
porpoise:

/dev/mapper/vg-ocfs2_0 on /export/ocfs2_0 type ocfs2
(rw,_netdev,localalloc=16,data=writeback,heartbeat=local)
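For reference, I plan to pin the same options in /etc/fstab so they
survive a remount. This is just a sketch for my device and mount
point (heartbeat=local is the default, so I've left it out):

/dev/mapper/vg-ocfs2_0  /export/ocfs2_0  ocfs2  _netdev,localalloc=16,data=writeback  0 0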

When the tests are run individually:

orca tmp # time dd if=/dev/zero of=testFile.orca bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 11.3476 s, 180 MB/s

porpoise tmp # time dd if=/dev/zero of=testFile.porpoise bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 12.6702 s, 162 MB/s

Now when I run them almost simultaneously:

orca tmp # time dd if=/dev/zero of=testFile.orca bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 23.9214 s, 85.6 MB/s

porpoise tmp # time dd if=/dev/zero of=testFile.porpoise bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 25.319 s, 80.9 MB/s

I couldn't stripe the LV with lvcreate -i 3 because two of the three
physical volumes are smaller than the third. I'll get the sizes
matched, make sure the LV stays below 16TB, re-create the FS, and try
the test again (rough commands below).
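Something like this is what I have in mind once the PVs match; the PV
names, stripe size, LV size, and node-slot count are just placeholders
for my setup:

# three equally sized PVs already added to the VG "vg"
lvcreate -i 3 -I 64 -L 15T -n ocfs2_0 vg /dev/sdb /dev/sdc /dev/sdd
mkfs.ocfs2 -b 4k -C 4k -T mail -N 2 /dev/vg/ocfs2_0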

Thanks,
Sabuj
