[Ocfs2-devel] [PATCH 2/2] ocfs2: fix for local alloc window restore unconditionally

heming.zhao at suse.com heming.zhao at suse.com
Mon Jun 13 01:48:52 UTC 2022


On 6/12/22 20:38, Joseph Qi wrote:
> 
> 
> On 6/12/22 3:45 PM, heming.zhao at suse.com wrote:
>> On 6/12/22 10:57, Joseph Qi wrote:
>>>
>>>
>>> On 5/21/22 6:14 PM, Heming Zhao wrote:
>>>> When the la state is ENABLED, ocfs2_recalc_la_window restores the la window
>>>> unconditionally. This logic is wrong.
>>>>
>>>> Let's look at the path below.
>>>>
>>>> 1. The la state (->local_alloc_state) is set to THROTTLED or DISABLED.
>>>>
>>>> 2. After about 30s (OCFS2_LA_ENABLE_INTERVAL), the delayed work is triggered, and
>>>>      ocfs2_la_enable_worker sets the la state to ENABLED directly.
>>>>
>>>> 3. A write IO thread runs:
>>>>
>>>>      ```
>>>>      ocfs2_write_begin
>>>>       ...
>>>>        ocfs2_lock_allocators
>>>>         ocfs2_reserve_clusters
>>>>          ocfs2_reserve_clusters_with_limit
>>>>           ocfs2_reserve_local_alloc_bits
>>>>            ocfs2_local_alloc_slide_window // [1]
>>>>             + ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_SLIDE) // [2]
>>>>             + ...
>>>>             + ocfs2_local_alloc_new_window
>>>>                ocfs2_claim_clusters // [3]
>>>>      ```
>>>>
>>>> [1]: called when the la window bits are used up.
>>>> [2]: while the la state is ENABLED (e.g. the OCFS2_LA_ENABLE_INTERVAL delayed
>>>>        work has fired), it unconditionally restores the la window to the default value.
>>>> [3]: uses the default la window size to search for clusters. IMO the cost
>>>>        is O(n^4), which takes a huge amount of time scanning the global
>>>>        bitmap and makes write IOs (e.g. a user-space 'dd') dramatically
>>>>        slow.
>>>>
>>>> e.g.
>>>> an ocfs2 partition of 1.45TB, cluster size 4KB,
>>>> la window default size 106MB.
>>>> The partition is fragmented by creating & deleting a huge amount of
>>>> small files.
>>>>
>>>> The cost works out as follows (numbers taken from a real-world case):
>>>> - la window size shrink sequence (size: MB):
>>>>     106, 53, 26.5, 13, 6.5, 3.25, 1.6, 0.8
>>>>     only 0.8MB succeeds, and 0.8MB also triggers the la window to be disabled.
>>>>     ocfs2_local_alloc_new_window retries 8 times; the first 7 attempts all
>>>>     run the worst case.
>>>> - group chain number: 242
>>>>     ocfs2_claim_suballoc_bits runs its for-loop 242 times
>>>> - each chain has 49 block groups
>>>>     ocfs2_search_chain runs its while-loop 49 times
>>>> - each bg has 32256 blocks
>>>>     ocfs2_block_group_find_clear_bits runs its while-loop over 32256 bits.
>>>>     Since ocfs2_find_next_zero_bit uses ffz() to find a zero bit, let's use
>>>>     (32256/64) for the calculation.
>>>>
>>>> So the total loop count is: 7*242*49*(32256/64) = 41835024 (~42 million)
>>>>
>>>> In the worst case, a user-space write of 100MB of data triggers 42M scan
>>>> iterations, and if the write can't finish within 30s (OCFS2_LA_ENABLE_INTERVAL),
>>>> the write IO suffers another 42M scan iterations. It keeps the ocfs2
>>>> partition at poor performance all the time.
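
Just to sanity-check the arithmetic above, here is a trivial stand-alone
calculation with the same figures from the commit message (nothing
ocfs2-specific, purely illustrative):

```c
#include <stdio.h>

int main(void)
{
	/* 7 failing window sizes * 242 chains * 49 block groups per chain
	 * * (32256 bits / 64 bits examined per ffz() word)
	 */
	unsigned long scans = 7UL * 242 * 49 * (32256 / 64);

	printf("worst-case scan iterations: %lu\n", scans); /* 41835024 */
	return 0;
}
```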
>>>>
>>>> The fix:
>>>>
>>>> 1. restore the la window by doubling its size each time.
>>>>
>>>> The current code shrinks the la window by half each time, but restores it
>>>> directly to default_bits in a single step. That bounces the la window between
>>>> '<1M' and default_bits. This patch makes the restore process smoother.
>>>> e.g.
>>>> the la default window is 106MB and the current la window is 13MB.
>>>> When a free action releases one block group of space, la should roll the
>>>> window back to 26MB (13*2).
>>>> When many free actions release many block groups of space, la smoothly
>>>> rolls back to the default window (106MB).
>>>>
>>>> 2. introduce a new state: OCFS2_LA_RESTORE.
>>>>
>>>> The current code uses OCFS2_LA_ENABLED to mark that a new big space is available.
>>>> That state overwrites OCFS2_LA_THROTTLED, so the la window forgets that it is
>>>> already in the throttled status.
>>>> '->local_alloc_state' should keep OCFS2_LA_THROTTLED until the la window is
>>>> restored to default_bits.
>>>
>>> Since we now have enough free space, why not restore to the default la
>>> window?
>>
>> The key is that the shrink speed is not the same as the restore speed.
>> The la window is shrunk when the current la window size is no longer available.
>> But the current restore action happens as soon as any single chunk of default_bits space appears.
>> e.g.:
>> the default la window is 100MB, and the system only has some 20MB of contiguous space.
>> The la window shrinks to half size (10MB) once there is no 20MB space any more,
>> but the current restore logic will restore it to 100MB, and the allocation path will
>> suffer the O(n^4) cost.
>>
>> From my understanding, most ocfs2 users use ocfs2 to manage big files in not huge
>> numbers. But the scenario this patch addresses is an ocfs2 volume containing a huge
>> number of small files, where the use case is creating/deleting/moving small files all
>> the time. That fragments the fs completely.
>>
> 
> Yes, the typical scenario is VM images with cluster size 1M.
> This is an old topic, and I'm afraid the above case still cannot be resolved
> with an optimized local alloc window.

You're right. The optimized la window helps the small-files case but can't fully resolve it.
A complete solution would need to redesign something, e.g. the allocation algorithm.
This issue was reported by a SUSE customer who had a bad experience in their production
environment with frequently recurring poor performance. In my view, if ocfs2 only works
well for managing VM images (big & not too many files), that is wasteful & ridiculous.
A powerful fs should work well for files of any size. ocfs2 should support the small-files
use case, but there is a long way to go.

> 
>>>
>>> This is an issue that happens in a corner case, where the current restore
>>> window is too large. I agree with your method of restoring to double the la
>>> size each time instead of directly to the default. So why not just change the
>>> logic of ocfs2_recalc_la_window() to do this?
>>
>> This patch is related to two issues:
>> - the restore speed is quicker than the shrink speed.
>> - the la window is restored unconditionally.
>>
>> Only changing ocfs2_recalc_la_window() can't avoid the unconditional restore of the la window.
> 
> Seems the following will restore double each time?
YES.
> 
> if (osb->local_alloc_state != OCFS2_LA_THROTTLED)
> 	osb->local_alloc_bits <<= 1;
> 
> This may introduce another issue, that the restore is too slow. So it's a
> balance.

Yes, I agree it's a balance.
But I believe my patch works better than the existing restore method.
(IIUC) there are several reasons:
1. The truncate feature may delay the release of freed space. So there is a time gap
    between restoring the la window and the freed space actually becoming available.
2. The existing method suits the big-file use case:
    restoring to the default la window gives a chance to grab big space at once.
    A VM file may be tens of GB in size; a user may free one VM file and then create a
    new VM file. That free action very likely releases enough space for the later create
    action, so ocfs2 benefits from restoring the la window to the default size.
3. The slowed-down restore introduced by this patch gives significant help for small-file
    allocation, and does not regress overall ocfs2 performance.
4. The scenario is described in this patch's commit log: if the la window is restored to
    the default size unconditionally from the previous DISABLED state, ocfs2 stays busy
    waiting on la window scanning.

For point 3, here is an example (the analysis below is based on the patch code as merged).
(The ocfs2 volume size is 1TB.)

3.1> csize:1M, group:32G, la:31G
(these numbers come from the OCFS2_LA_MAX_DEFAULT_MB source code comment in fs/ocfs2/localalloc.c)

In this case, the ocfs2 volume is used for storing big files. The worst case is
described below (please tell me if this case is not enough).

Current la window: 2MB; a 40GB file is released. The la window changes to 4MB.

The user wants to create a 40GB file. Because la is 4MB, it can successfully
get a 4MB contiguous space from the just-released 40GB space.
The scanning cost is the same as before, but it only gets 4MB from the global bitmap.
Later slide-window actions will be sped up by the saved 'hint' in ocfs2_claim_suballoc_bits().
The cost is not O(n^4) here; it may be O(n).

In this case, the la window doubles in size on every la window slide:
   ocfs2_local_alloc_slide_window
    ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_SLIDE)
     if (osb->local_alloc_state & OCFS2_LA_RESTORE) // doubles the 'la window'
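
As a side note, here is a minimal user-space sketch of the doubling logic being
discussed (the flag values, struct, and the clamp to default_bits are my own
assumptions based on this thread, not the exact merged kernel code):

```c
#include <stdio.h>

/* Illustrative flag values only, not taken from the kernel headers. */
#define LA_THROTTLED 0x01
#define LA_RESTORE   0x02

struct la {
	unsigned int state;            /* models osb->local_alloc_state */
	unsigned int local_alloc_bits; /* current la window, in clusters */
	unsigned int default_bits;     /* default la window, in clusters */
};

/* On every window slide: while RESTORE is set, double the window until it
 * reaches the default size, then drop RESTORE (and THROTTLED). */
static void la_recalc_on_slide(struct la *la)
{
	if (!(la->state & LA_RESTORE))
		return;

	la->local_alloc_bits <<= 1;
	if (la->local_alloc_bits >= la->default_bits) {
		la->local_alloc_bits = la->default_bits;
		la->state &= ~(LA_RESTORE | LA_THROTTLED);
	}
}

int main(void)
{
	/* 3.1 numbers: csize is 1MB, so bits ~ MB; start at 4MB, default 31GB */
	struct la la = { LA_THROTTLED | LA_RESTORE, 4, 31 * 1024 };

	la_recalc_on_slide(&la);
	printf("after one slide: %u MB\n", la.local_alloc_bits); /* 8 */
	return 0;
}
```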

So the la window restore sequence (14 sizes) is:
4MB,8,16,32,64,128,256,512,1024,2048,4096,8192,16384,default_window(31GB)

It costs 13 extra la window slides to allocate the space.
But in my view, the ocfs2 hot spot here should be the new file's write IOs,
not busy-waiting on la window scanning. The user will very likely never
feel that allocation is slower than before.

3.2> csize:4K, group:126M, la:121M
(see fs/ocfs2/localalloc.c for above numbers)

The worst case is like <3.1>: allocating a big file.

Current la window: 2MB; a 40GB file is released. The la window changes to 4MB.
The user wants to create a 40GB file. In this case, the la window restore sequence (6 sizes) is:
4MB,8,16,32,64,default_window(121MB)
It costs 5 extra slides vs before. My conclusion is the same:
waiting on la window scanning is not the hot spot.
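
For what it's worth, a quick check of the slide counts in 3.1 and 3.2 (plain
arithmetic only; the helper name below is made up for illustration):

```c
#include <stdio.h>

/* How many doublings does it take to grow from cur_mb up to (at least) def_mb? */
static int slides_to_default(unsigned int cur_mb, unsigned int def_mb)
{
	int n = 0;

	while (cur_mb < def_mb) {
		cur_mb <<= 1;
		n++;
	}
	return n;
}

int main(void)
{
	printf("3.1 (4MB -> 31GB):  %d slides\n", slides_to_default(4, 31 * 1024)); /* 13 */
	printf("3.2 (4MB -> 121MB): %d slides\n", slides_to_default(4, 121));        /* 5 */
	return 0;
}
```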

/Heming

> 
> Thanks,
> Joseph
> 
>> So I introduced the new state OCFS2_LA_RESTORE, which helps ocfs2 keep the throttled
>> state.
>>
>> Btw, while investigating this bug, I found two other issues/work items:
>> - the current allocation algorithm very easily generates fragmentation.
>> - the cost is O(n^4); even with this patch, it only becomes O(n^3):
>>    <group chain number> * <block groups per chain> * <bg bitmap size>
>> We need to improve the allocation algorithm to make ocfs2 more powerful.
>>
>> Thanks,
>> Heming
