[Ocfs2-users] The trouble with the ocfs2 partition continues ...

Juan Pablo Marco Cardona jmarco.cardona at cast-info.es
Thu Oct 29 15:58:16 PDT 2009


Hi,
you are right, I can create a 5 GB file but no more.
But the trouble isn't the file size; it is the total free space of the 
partition.

For example, if I create two files of 2.5 GB each, I can't create a third 
2.5 GB file.

If I remove the files after creating them and create them again, the same 
thing happens.

Really, I don't know what the problem is!

Tomorrow, as Sunil suggested, I will open a bug report in Bugzilla, 
attaching the stat_sysdir.sh output.

What is the Bugzilla URL for opening a bug report about this problem?

Thanks in advance.

Regards,
Pablo

P.S.:
Right now I can only write ~2.8 GB to the same partition:

df -kh | grep mail
300G  239G   62G  80% /home/mail

dd if=/dev/zero of=./file1 bs=100M count=25
25+0 records in
25+0 records out
2621440000 bytes (2.6 GB) copied, 129.77 seconds, 20.2 MB/s
 
dd if=/dev/zero of=./file2 bs=100M count=25
dd: writing `./file2': No space left on device (translated from the 
Spanish locale message)
3+0 records in
2+0 records out
251658240 bytes (252 MB) copied, 3.52704 seconds, 71.4 MB/s
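
(A side note on how this can happen: df counts free clusters, but new
writes are served from the global bitmap, so if its free bits are badly
fragmented, a large write can fail with ENOSPC while df still shows free
space. A minimal, read-only sketch with debugfs.ocfs2, assuming the same
/dev/sda1 device as above:)

# Superblock stats: block size, cluster size and total/free cluster counts
debugfs.ocfs2 -R "stats" /dev/sda1

# Inspect the global bitmap system file that new allocations draw from
debugfs.ocfs2 -R "stat //global_bitmap" /dev/sda1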



Srinivas Eeda wrote:
> I don't see any problem with your filesystem configuration. df reports 
> 63G of free space, so you should be able to create files of approximately 
> that size. But can you create only one 5 GB file and no more, or can you 
> create another 5 GB file? Once you get ENOSPC, what happens? Can you not 
> create anything anymore? If you delete the 5 GB file, can you create it 
> again?
>
> A 4K cluster size and a 1K block size are good. But if your files are 
> typically bigger than N*4K, you can set the cluster size to that value so 
> you benefit more.
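>
> As an illustration (a sketch only, not something to run on this volume:
> cluster size is fixed at format time, so changing it means reformatting
> and losing all data; the device, label and sizes are placeholders):
>
> # Format with 4K blocks, 16K clusters and 5 node slots
> mkfs.ocfs2 -b 4K -C 16K -N 5 -L mailvol /dev/sdX1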
>
> Juan Pablo Marco Cardona wrote:
>> Hi Srinivas,
>> the trouble happens when creating files like this:
>>
>> dd if=/dev/zero of=./file bs=100M count=60
>>
>> When the file reaches 5 GB, the filesystem returns an error about no 
>> free space left!
>>
>> Theoretically there is about 63 GB of free space:
>>
>> df -kh | grep mail
>> 300G  238G   63G  80% /home/mail
>>
>> This trouble happens when creating small files, too.
>>
>> The inodes do not seem to be exhausted:
>>
>> df -i | grep mail
>> 78642183 62365063 16277120   80% /home/mail
>>
>> I think there are only a few orphaned inodes, so maybe they don't 
>> affect this:
>>
>> echo "ls //orphan_dir:0000" | debugfs.ocfs2 /dev/sda1
>>
>> debugfs.ocfs2 1.4.3
>> debugfs:        16              16   1    2  .
>>         10              16   2    2  ..
>>         79813023        28   16   1  0000000004c1d99f
>>         28472617        28   16   1  0000000001b27529
>>         8438318         28   16   1  000000000080c22e
>>         80973610        28   16   1  0000000004d38f2a
>>         213406992       28   16   1  000000000cb85510
>>         13234609        28   16   1  0000000000c9f1b1
>>         228704523       28   16   1  000000000da1c10b
>>         225968869       28   16   1  000000000d7802e5
>>         256692752       28   16   1  000000000f4cd210
>>         103224605       28   16   1  000000000627151d
>>         83675914        28   16   1  0000000004fccb0a
>>         225968588       28   16   1  000000000d7801cc
>>         278103558       28   16   1  0000000010938606
>>         256692760       28   16   1  000000000f4cd218
>>         13235439        28   16   1  0000000000c9f4ef
>>         8861985         28   16   1  0000000000873921
>>         228997031       28   16   1  000000000da637a7
>>         111786205       28   16   1  0000000006a9b8dd
>>         24850261        28   16   1  00000000017b2f55
>>         29889095        28   16   1  0000000001c81247
>>         311022924       28   16   1  000000001289d54c
>>         13235309        28   16   1  0000000000c9f46d
>>         129665645       28   16   1  0000000007ba8a6d
>>         79605831        28   16   1  0000000004beb047
>>         28104589        28   16   1  0000000001acd78d
>>         294769884       28   16   1  000000001191d4dc
>>         253519012       28   16   1  000000000f1c64a4
>>         80973232        28   16   1  0000000004d38db0
>>         13234376        28   16   1  0000000000c9f0c8
>>         312527073       28   16   1  0000000012a0c8e1
>>         25863407        28   16   1  00000000018aa4ef
>>         305612210       28   16   1  00000000123745b2
>>         226494741       28   16   1  000000000d800915
>>         228705439       28   16   1  000000000da1c49f
>>         79604433        40   16   1  0000000004beaad1
>>         80973605        28   16   1  0000000004d38f25
>>         226494890       28   16   1  000000000d8009aa
>>         80973659        28   16   1  0000000004d38f5b
>>         13236194        56   16   1  0000000000c9f7e2
>>         80973320        56   16   1  0000000004d38e08
>>         46226861        28   16   1  0000000002c15dad
>>         228995934       28   16   1  000000000da6335e
>>         294770302       28   16   1  000000001191d67e
>>         256692906       28   16   1  000000000f4cd2aa
>>         255321636       56   16   1  000000000f37e624
>>         80974273        28   16   1  0000000004d391c1
>>         209062548       56   16   1  000000000c760a94
>>         46227042        168  16   1  0000000002c15e62
>>         108197615       28   16   1  000000000672f6ef
>>         13236185        56   16   1  0000000000c9f7d9
>>         294768980       112  16   1  000000001191d154
>>         294768983       212  16   1  000000001191d157
>>
>>
>> I will show you some info about the partition:
>>
>> Block size:
>>
>> tunefs.ocfs2 -q -Q "BS=%5B\n" /dev/sda1
>> BS= 1024
>>
>> Cluster size:
>>
>> tunefs.ocfs2 -q -Q "CS=%5T\n" /dev/sda1
>> CS= 4096
>>
>> Number of cluster nodes:
>>
>> tunefs.ocfs2 -q -Q "CN=%5N\n" /dev/sda1
>> CN=    5
>>
>> I think it is better to give you more information about this partition.
>>
>> This 300 GB ocfs2 partition holds the maildirs of about 3K users, with 
>> a lot of small files (~62M used inodes).
>> In the near future we plan to set up a 2-node mail cluster, since this 
>> volume is on a SAN.
>>
>> Perhaps the cluster and block sizes are too big? Maybe the trouble is 
>> fragmentation?
>> What are the best ocfs2 options for formatting this mail volume?
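>>
>> For reference, a hedged sketch of one option I have seen mentioned: if
>> the tools version supports it, mkfs.ocfs2 takes a filesystem-type hint,
>> -T mail, meant for volumes with many small files and heavy metadata
>> churn. Reformatting destroys all data; device and label are placeholders:
>>
>> mkfs.ocfs2 -b 4K -C 4K -T mail -N 5 -L mailvol /dev/sdX1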
>>
>> Thanks in advance.
>>
>> Regards,
>> Pablo
>>
>>
>>
>>
>> Srinivas Eeda wrote:
>>> Do you run into ENOSPC when creating new files or when extending 
>>> existing files? What is the cluster size? I don't think this is the 
>>> issue, but are there any files under the orphan directory? Run (echo "ls 
>>> //orphan_dir:000X" | debugfs.ocfs2 <device>) to check whether there are any.
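>>>
>>> A small sketch to check every slot in one pass (assuming a volume with
>>> five node slots, and a placeholder device):
>>>
>>> for i in 0 1 2 3 4; do
>>>     echo "ls //orphan_dir:000$i" | debugfs.ocfs2 /dev/sdX1
>>> done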
>>>
>>>
>>> Juan Pablo Marco Cardona wrote:
>>>> Hi Sunil,
>>>> we still have problems with the ocfs2 mail partition. The old 
>>>> free space problem continues.
>>>> Theoretically we have 62 GB of free space, but we can only use about 5 GB!
>>>>
>>>> df -kh | grep mail
>>>> 300G  238G   63G  80% /home/mail
>>>>
>>>> Also, yesterday we had a power outage that forced a fsck.ocfs2 of 
>>>> the partition.
>>>>
>>>> We have upgraded to ocfs2 1.4.4 and ocfs2-tools 1.4.3; the kernel 
>>>> version is the same:
>>>>
>>>> rpm -qa | grep ocfs2
>>>> ocfs2-2.6.18-128.2.1.el5-1.4.4-1.el5
>>>> ocfs2-tools-1.4.3-1.el5
>>>> ocfs2console-1.4.3-1.el5
>>>>
>>>> modinfo ocfs2 | grep ^version
>>>> version:        1.4.4
>>>>
>>>> uname -r
>>>> 2.6.18-128.2.1.el5
>>>>
>>>> What can we do? Maybe run fsck.ocfs2, or mkfs.ocfs2 with the new 
>>>> tools version (1.4.3)?
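>>>>
>>>> If it helps, a minimal sketch of the check I have in mind, assuming
>>>> the filesystem is first unmounted on all cluster nodes (the device is
>>>> a placeholder):
>>>>
>>>> # Force a full check and answer yes to any repair prompts
>>>> fsck.ocfs2 -fy /dev/sdX1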
>>>>
>>>> Thanks in advance.
>>>>
>>>> Regards,
>>>> Pablo
>>>>
