[Ocfs2-users] Free space oddities on OCFS2
Robinson Maureira Castillo
rmaureira at solint.cl
Wed Aug 2 15:27:37 PDT 2006
Hi all,
I'm testing OCFS2 as a cluster filesystem for a mail system based on maildir, so basically the filesystem must be able to deal with lots of directories, and lots of small files.
The first "oddity" is that when I mount a newly formatted ocfs2 fs, it already contains used space:
[root at ocfs1 /]# df /cgp02
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb2 10710016 135004 10575012 2% /cgp02
The info for that partition:
[root at ocfs1 /]# fsck.ocfs2 -n /dev/sdb2
Checking OCFS2 filesystem in /dev/sdb2:
label: cgp02
uuid: ad 2e 20 38 60 70 45 b8 97 68 48 d7 b9 88 5e 59
number of blocks: 10710016
bytes per block: 1024
number of clusters: 2677504
bytes per cluster: 4096
max slots: 2
After creating 300 directories with 300 files (2 kB each) in each directory, renaming them, and then deleting them, the df output is:
[root at ocfs1 cgp02]# df /cgp02/
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb2 10710016 228256 10481760 3% /cgp02
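For reference, the test loop was roughly the following (a sketch; the mount point and the dirN/fileN names are placeholders, not the ones actually used). Note that with the 4 kB cluster size reported above, each 2 kB file should occupy a full cluster, so 300 x 300 files take at least ~352 MB of data clusters while they exist.

```shell
# Sketch of the test: <ndirs> directories, each holding <nfiles> 2 kB files.
make_tree() {   # usage: make_tree <mountpoint> <ndirs> <nfiles>
    local mnt=$1 ndirs=$2 nfiles=$3 d f
    for d in $(seq 1 "$ndirs"); do
        mkdir -p "$mnt/dir$d"
        for f in $(seq 1 "$nfiles"); do
            # each file is 2 kB, i.e. half of one 4 kB cluster
            dd if=/dev/zero of="$mnt/dir$d/file$f" bs=1K count=2 2>/dev/null
        done
    done
}
# make_tree /cgp02 300 300    # the run described above
```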
I've created another partition, using these parameters:
[root at ocfs1 cgp01]# fsck.ocfs2 -n /dev/sdb1
Checking OCFS2 filesystem in /dev/sdb1:
label: cgp01
uuid: cf 1c 34 6b 10 87 45 37 84 fd 98 ea 8a 46 d2 7a
number of blocks: 2441724
bytes per block: 4096
number of clusters: 2441724
bytes per cluster: 4096
max slots: 2
After creating 100000 accounts and then deleting them, the df output is:
[root at ocfs1 cgp01]# df /cgp01
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 9766896 2122080 7644816 22% /cgp01
The usage reported by du in that directory:
[root at ocfs1 cgp01]# du -sh .
23K .
If I create the 100000 accounts again...
[root at ocfs1 example.lan]# df /cgp01/
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 9766896 3342204 6424692 35% /cgp01
[root at ocfs1 example.lan]# du -sh .
531M .
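A back-of-envelope check (my own estimate, assuming each account directory and each non-empty file consumes one full 4 kB cluster, per the geometry reported by fsck.ocfs2 above, and the 3-file account layout shown below):

```shell
# Rough estimate: 1 directory cluster + 2 non-empty-file clusters
# per account, at the 4 kB cluster size reported above.
accounts=100000
clusters_per_account=3
cluster_kb=4
echo "$(( accounts * clusters_per_account * cluster_kb )) kB"   # prints "1200000 kB"
```

That comes to 1200000 kB (~1.15 GB), which is at least in the same ballpark as the growth df reports between the two runs (3342204 - 2122080 = 1220124 kB), even though du sees far less.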
After a reboot...
[root at ocfs1 ~]# df /cgp01/
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 9766896 3344956 6421940 35% /cgp01
The application uses 2-level hashing for directory creation; a typical user account resides in a hierarchy as shown below, with these default files:
[root at ocfs1 example.lan]# ll aa.sub/g.sub/test10110.macnt/
total 1
-rw-rw---- 1 root mail 134 Aug 2 16:57 account.info
-rw-rw---- 1 root mail 78 Aug 2 16:57 account.settings
-rw-rw---- 1 root mail 0 Aug 2 16:57 INBOX.mbox
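For completeness, one such account entry can be recreated like this (a sketch; the file contents are dummies, only the layout matters):

```shell
# Recreate the 2-level hashed layout shown above; contents are dummies.
make_account() {   # usage: make_account <mountpoint> <l1> <l2> <name>
    local acct="$1/$2.sub/$3.sub/$4.macnt"
    mkdir -p "$acct"
    printf 'dummy account info\n' > "$acct/account.info"
    printf 'dummy settings\n'     > "$acct/account.settings"
    : > "$acct/INBOX.mbox"        # empty mailbox file
}
# make_account /cgp01 aa g test10110   # matches the listing above
```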
Where is the other ~2.8 GB being used? Is this expected behaviour? If so, then maybe I'm doing something terribly wrong, and I would appreciate advice on what settings I should use for this scenario.
On the production systems, each LUN presented is 1 TB in size, with 3 LUNs per server, holding ~300000 user accounts and expected to grow to 1 million in the near future.
Thanks in advance, and best regards,
__________________________
Robinson Maureira Castillo
Soluciones Integrales S.A.
Eleodoro Flores 2425, Ñuñoa, Santiago - Chile
Central: (56 2) 411 9000 Fax: (56 2) 411 9001
Directo: (56 2) 411 9047
Móvil: (56 9) 599 4987
e-mail: rmaureira at solint.cl