[Ocfs2-users] No space left on the device

Tao Ma tao.ma at oracle.com
Thu Mar 18 17:25:28 PDT 2010


Hi Aravind,

Aravind Divakaran wrote:
>> Hi Aravind,
>>
>> Aravind Divakaran wrote:
>>> Hi Tao,
>>>
>>>> Hi Aravind,
>>>>
>>>> Aravind Divakaran wrote:
>>>>> Hi All,
>>>>>
>>>>> I have already sent one mail regarding the space issue I am
>>>>> facing with my OCFS2 filesystem. As mentioned in the link below,
>>>>> it is an issue related to free-space fragmentation.
>>>>>
>>>>> http://oss.oracle.com/bugzilla/show_bug.cgi?id=1189
>>>>>
>>>>> I have seen a patch for extent allocation stealing that is in the
>>>>> 2.6.34-rc1 kernel, so I compiled the new kernel and installed it on
>>>>> my system.
>>>>>
>>>>> Below are the OCFS2 details on my system:
>>>>>
>>>>> #modinfo ocfs2
>>>>>
>>>>> filename:       /lib/modules/2.6.34-rc1/kernel/fs/ocfs2/ocfs2.ko
>>>>> license:        GPL
>>>>> author:         Oracle
>>>>> version:        1.5.0
>>>>> description:    OCFS2 1.5.0
>>>>> srcversion:     A8B69947E8FF56D74858993
>>>>> depends:        jbd2,ocfs2_stackglue,quota_tree,ocfs2_nodemanager
>>>>> vermagic:       2.6.34-rc1 SMP mod_unload modversions
>>>>>
>>>>> This is my stat_sysdir.sh output:
>>>>>
>>>>> http://pastebin.com/RZH9DkTk
>>>>>
>>>>> Can anyone help me resolve this, please? The problem occurs on a
>>>>> production mail server with 3000 email IDs.
>>>> I just checked your stat_sysdir output. It isn't actually caused by
>>>> extent block allocation, so that patch doesn't work for you. Yes,
>>>> the problem you are hitting is a fragmentation issue, but the root
>>>> cause is that inode_alloc can't allocate any more inodes (a little
>>>> different from bug 1189).
>>>>
>>>> I am now working on discontiguous block groups. I think it will
>>>> resolve your issue. I hope it can get into mainline in 2.6.35.
>>>>
>>>> Regards,
>>>> Tao
>>>>
>>> For my previous mail I got a reply from you:
>>>
>>> "Another way is that you can cp the file to another volume, remove
>>> it and then cp back. It should be contiguous enough."
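The copy-out/copy-back workaround quoted above can be sketched as a short shell sequence. The paths here are illustrative placeholders under /tmp so the steps can be shown end to end, not real OCFS2 mount points; substitute a file on the fragmented volume and a staging directory on a different volume.

```shell
# Sketch of the copy-out/copy-back workaround with placeholder paths.
SRC=/tmp/ocfs2-demo/mail/user01.mbox   # placeholder for a fragmented file
STAGE=/tmp/ocfs2-demo/stage            # placeholder for another volume

mkdir -p "$(dirname "$SRC")" "$STAGE"
echo "sample data" > "$SRC"            # stand-in for the real file

cp -p "$SRC" "$STAGE/"                 # 1. copy to the other volume
rm "$SRC"                              # 2. remove the fragmented copy
cp -p "$STAGE/$(basename "$SRC")" "$SRC"   # 3. copy back: the new copy is
                                           #    written into freshly
                                           #    allocated, more contiguous
                                           #    extents
cat "$SRC"
```

On the real volume the same three steps apply per file; `cp -p` preserves ownership and timestamps, which matters for a mail spool.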
>>>
>>> As mentioned in bug 1189:
>>>
>>> "However, reducing the slot count by 1 (to 4) may not be enough, as
>>> it does not have much contiguous space. It may work. But reducing it
>>> by 2 will definitely work.
>>>
>>> Umount the volume on all nodes and run:
>>> # tunefs.ocfs2 -N 3 /dev/sda1
>>>
>>> Run fsck.ocfs2 for sanity checking."
>>>
>>> Will either of the above solutions temporarily solve my problem?
>> Yes, it works. I just replied to you in another e-mail.
>>
>> Regards,
>> Tao
>>
> I am running tunefs.ocfs2 on my 500 GB hard disk, which contains
> 215 GB of data, in order to reduce the slots. I used the command
> below:
> 
> tunefs.ocfs2 -N 3 /dev/mapper/store
> 
> Now almost 7 hours have passed and it still hasn't finished
> executing. Below is the output I am getting:
> 
> node01:~# tunefs.ocfs2 -N 3 /dev/mapper/store
> tunefs.ocfs2 1.4.1
> 
> How long will it take to reduce the slots? Will it finish within
> 10 hours? Can anyone help me?
It shouldn't take this much time. I guess it got blocked somehow. Is
this volume unmounted on all the nodes? If yes, could you please
strace it to see what's wrong?
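A rough way to run that check from the shell: the device path is the one from this thread, and the strace invocation in the comment is only one possible form of attaching to the already-running process.

```shell
# Placeholder: substitute your own OCFS2 device path.
DEV=/dev/mapper/store

# tunefs.ocfs2 must run with the volume unmounted on every node, so
# first confirm this node no longer has it mounted (-s keeps grep
# quiet if /proc/mounts does not exist on this system).
if grep -qs "$DEV" /proc/mounts; then
    echo "$DEV is still mounted locally -- umount it first"
else
    echo "$DEV is not mounted here -- check the other nodes too"
fi

# If the volume is unmounted everywhere and tunefs.ocfs2 still hangs,
# attach strace to the running process to see which syscall it is
# blocked in, for example:
#   strace -f -p "$(pgrep -f tunefs.ocfs2)"
```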

Regards,
Tao



More information about the Ocfs2-users mailing list