[Ocfs2-users] Diagnosing poor write performance

Graeme Donaldson graeme at donaldson.za.net
Wed Mar 30 22:52:26 PDT 2016


On 31 March 2016 at 04:17, Eric Ren <zren at suse.com> wrote:

> Hi,
>
>>> How did you perform the testing? It really matters. If you write a file
>>> on the shared disk from one node, and read this file from another node
>>> with little or no interval, the write IO speed could decrease by ~20
>>> times according to my previous testing (just as a reference). That's an
>>> extremely bad situation for a 2-node cluster, isn't it?
>>>
>>> But it's incredible that in your case the write speed dropped by >3000
>>> times!
>>>
>>
>>
>> I simply used 'dd' to create a file with /dev/zero as the source. If
>> there is a better way to do this, I am all ears.
>>
>
> Alright, so you just did local IO on ocfs2; the performance shouldn't
> be that bad. I'd guess the ocfs2 volume is more than 60% full, or seriously
> fragmented?
> Please send the output of `df -h`, the superblock info from debugfs.ocfs2,
> and the exact `dd` command you performed. Additionally, run `dd` on
> each node.
>
> You know, ocfs2 is a shared-disk fs, so the 3 basic testing cases I can
> think of are:
> 1. only one node of the cluster does IO;
> 2. more than one node of the cluster performs IO, but each node just
> reads/writes its own file on the shared disk;
> 3. like 2), but some nodes read and some write the same file on the shared
> disk.
>
> The above model is theoretically simplified, though. Practical scenarios
> can be much more complicated, such as the fragmentation issue that your
> case most likely is.
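
To spell out Eric's three cases in `dd` terms, here is a rough sketch of how
I understand the tests; the node names, the mount point /mnt/ocfs2 and the
block size/count are only illustrative, not what was actually run:

  # case 1: single writer, run on one node only
  node1$ dd if=/dev/zero of=/mnt/ocfs2/node1.img bs=1M count=1024 conv=fsync

  # case 2: each node writes its own file, run concurrently
  node1$ dd if=/dev/zero of=/mnt/ocfs2/node1.img bs=1M count=1024 conv=fsync
  node2$ dd if=/dev/zero of=/mnt/ocfs2/node2.img bs=1M count=1024 conv=fsync

  # case 3: one node writes and another reads the same file, concurrently
  node1$ dd if=/dev/zero of=/mnt/ocfs2/shared.img bs=1M count=1024 conv=fsync
  node2$ dd if=/mnt/ocfs2/shared.img of=/dev/null bs=1M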


Here is all the output requested: http://pastebin.com/raw/BnJAQv9T
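
For reference, the usage and superblock details in that paste were gathered
with commands along these lines (the device /dev/sdb1 and mount point
/mnt/ocfs2 are placeholders for our real names):

  df -h /mnt/ocfs2
  debugfs.ocfs2 -R "stats" /dev/sdb1    # dump the superblock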

It's interesting to me that you guessed the usage is over 60%. It is indeed
sitting at 65%. Is the solution as simple as ensuring that an OCFS2
filesystem doesn't go over the 60% usage mark? Or am I getting ahead of
myself a little?
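
If fragmentation turns out to be the culprit, I assume something like the
following would show how fragmented a given file is (the file path and the
device /dev/sdb1 are again only examples):

  debugfs.ocfs2 -R "frag /path/to/testfile" /dev/sdb1   # extent count vs. cluster count
  debugfs.ocfs2 -R "stat /path/to/testfile" /dev/sdb1   # list the file's extent records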

Thanks for your effort so far!

Graeme.

