[Ocfs2-devel] [PATCH 1/3] ocfs2: add ocfs2_try_rw_lock and ocfs2_try_inode_lock

Gang He ghe at suse.com
Mon Nov 27 21:26:36 PST 2017


Hello Changwei,


> Hi Gang,
> 
> On 2017/11/27 17:48, Gang He wrote:
>> Add ocfs2_try_rw_lock and ocfs2_try_inode_lock functions, which
>> will be used in non-block IO scenarios.
>> 
>> Signed-off-by: Gang He <ghe at suse.com>
>> ---
>>   fs/ocfs2/dlmglue.c | 22 ++++++++++++++++++++++
>>   fs/ocfs2/dlmglue.h |  4 ++++
>>   2 files changed, 26 insertions(+)
>> 
>> diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
>> index 4689940..5cfbd04 100644
>> --- a/fs/ocfs2/dlmglue.c
>> +++ b/fs/ocfs2/dlmglue.c
>> @@ -1742,6 +1742,28 @@ int ocfs2_rw_lock(struct inode *inode, int write)
>>   	return status;
>>   }
>>   
>> +int ocfs2_try_rw_lock(struct inode *inode, int write)
>> +{
>> +	int status, level;
>> +	struct ocfs2_lock_res *lockres;
>> +	struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
>> +
>> +	mlog(0, "inode %llu try to take %s RW lock\n",
>> +	     (unsigned long long)OCFS2_I(inode)->ip_blkno,
>> +	     write ? "EXMODE" : "PRMODE");
>> +
>> +	if (ocfs2_mount_local(osb))
>> +		return 0;
>> +
>> +	lockres = &OCFS2_I(inode)->ip_rw_lockres;
>> +
>> +	level = write ? DLM_LOCK_EX : DLM_LOCK_PR;
>> +
>> +	status = ocfs2_cluster_lock(OCFS2_SB(inode->i_sb), lockres, level,
>> +				    DLM_LKF_NOQUEUE, 0);
>> +	return status;
>> +}
> 
> The newly added function ocfs2_try_rw_lock has almost the same logic 
> as ocfs2_rw_lock. Is it possible to combine them into a single function?
> That would be more elegant.
I prefer to keep ocfs2_try_rw_lock() separate, since a similar function already exists here (e.g. ocfs2_try_open_lock()).
Second, adding a new ocfs2_try_rw_lock() function avoids impacting the existing callers of ocfs2_rw_lock().

> 
> Moreover, can you elaborate further on why we need a *NOQUEUE* lock to 
> support non-blocking AIO?
Non-blocking IO means that the call should return -EAGAIN instead of blocking to wait for some resource (e.g. a lock, block allocation, etc.).
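
To illustrate, a hypothetical caller might look like the sketch below (kernel context, not part of this patch; the IOCB_NOWAIT check and the helper name are assumptions for illustration only):

```c
/* Hypothetical caller sketch, not part of this patch: a non-blocking IO
 * path takes the RW lock with NOQUEUE semantics and bails out with
 * -EAGAIN if the cluster lock cannot be granted immediately, while the
 * normal path sleeps until the lock is granted. */
static ssize_t example_write(struct kiocb *iocb, struct inode *inode)
{
	int ret;

	if (iocb->ki_flags & IOCB_NOWAIT)
		ret = ocfs2_try_rw_lock(inode, 1); /* NOQUEUE: may fail fast */
	else
		ret = ocfs2_rw_lock(inode, 1);     /* blocks until granted */
	if (ret)
		return ret;

	/* ... perform the write ... */

	ocfs2_rw_unlock(inode, 1);
	return 0;
}
```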

> 
> Why can't we wait for a while to grant a lock request? Is this necessary?
Non-blocking IO is a way for the upper application to submit IO: if the call would block, it fails with -EAGAIN instead,
and the application then resubmits that IO in the normal (blocking) way from a deferred worker thread. This IO mode benefits some database applications.

> 
> Thanks,
> Changwei
> 
>> +
>>   void ocfs2_rw_unlock(struct inode *inode, int write)
>>   {
>>   	int level = write ? DLM_LOCK_EX : DLM_LOCK_PR;
>> diff --git a/fs/ocfs2/dlmglue.h b/fs/ocfs2/dlmglue.h
>> index a7fc18b..05910fc 100644
>> --- a/fs/ocfs2/dlmglue.h
>> +++ b/fs/ocfs2/dlmglue.h
>> @@ -116,6 +116,7 @@ void ocfs2_refcount_lock_res_init(struct ocfs2_lock_res *lockres,
>>   int ocfs2_create_new_inode_locks(struct inode *inode);
>>   int ocfs2_drop_inode_locks(struct inode *inode);
>>   int ocfs2_rw_lock(struct inode *inode, int write);
>> +int ocfs2_try_rw_lock(struct inode *inode, int write);
>>   void ocfs2_rw_unlock(struct inode *inode, int write);
>>   int ocfs2_open_lock(struct inode *inode);
>>   int ocfs2_try_open_lock(struct inode *inode, int write);
>> @@ -140,6 +141,9 @@ int ocfs2_inode_lock_with_page(struct inode *inode,
>>   /* 99% of the time we don't want to supply any additional flags --
>>    * those are for very specific cases only. */
>>   #define ocfs2_inode_lock(i, b, e) ocfs2_inode_lock_full_nested(i, b, e, 0, OI_LS_NORMAL)
>> +#define ocfs2_try_inode_lock(i, b, e)\
>> +		ocfs2_inode_lock_full_nested(i, b, e, OCFS2_META_LOCK_NOQUEUE,\
>> +		OI_LS_NORMAL)
>>   void ocfs2_inode_unlock(struct inode *inode,
>>   		       int ex);
>>   int ocfs2_super_lock(struct ocfs2_super *osb,
>> 



