[Ocfs2-users] Diagnosing poor write performance

Graeme Donaldson graeme at donaldson.za.net
Wed Mar 30 07:26:22 PDT 2016


On 30 March 2016 at 14:24, Eric Ren <zren at suse.com> wrote:

> Hi,
>>
>> We're seeing very poor write performance on a cluster that was built
>> roughly a year ago. I am by no means an expert on OCFS2, nor the DRBD
>> layer
>> that we have under it. We do have several clusters that are configured in
>> much the same way via our Puppet infrastructure, yet this particular one
>> gives us write speeds around the 15 kilobyte/sec mark, where some of our
>> other clients do 55 megabytes/sec on similar hardware.
>>
>
> How did you perform the testing? It really matters. If you write a file on
> the shared disk from one node and read the same file from another node with
> little or no interval, the write I/O speed can drop by roughly 20 times
> according to my previous testing (just as a reference). That is an extremely
> bad situation even for a 2-node cluster, isn't it?
>
> But it's incredible that in your case the write speed drops by more than
> 3000 times!


I simply used 'dd' to create a file with /dev/zero as a source. If there is
a better way to do this, I am all ears.
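
For reference, the command was roughly along these lines (the target path,
block size and count here are placeholders rather than the exact values I
used):

    # write a test file on the OCFS2 mount, sourcing zeros
    dd if=/dev/zero of=/path/to/ocfs2/testfile bs=1M count=256 conv=fsync

(conv=fsync makes dd flush the data to disk before it reports a rate, so the
figure isn't just the page cache.)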



> I realise that this is all very vague, so for now I am just hoping for
>> general pointers on where to start in diagnosing this, from which I can do
>> more research and then hopefully revisit the thread with more detailed
>> questions and data.
>>
>> Some basic info to get started:
>>
>> O/S: Debian Wheezy
>> Kernel: Linux hostname 3.2.0-4-amd64 #1 SMP Debian 3.2.73-2+deb7u3 x86_64
>> GNU/Linux
>> ocfs2-tools: 1.6.4-1+deb7u1
>> 2 servers in the cluster. OCFS2 filesystem lives on a DRBD dual-primary
>> device, which itself is built on an LVM volume, whose VG lives on a RAID1
>> pair of 1TB SATA HDDs.
>>
>
> Could you first run the test on LVM, then on DRBD, and then on OCFS2? That
> way we can apportion the blame more fairly.
>
> Eric
>
>
If I do a similar write of a file to a directory that sits directly on an
LVM LV, I get roughly 100 megabytes/sec.

I can't write straight to the DRBD device, as that would entail wiping the
customer's OCFS2 filesystem, which I cannot do.
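
So the per-layer comparison I can actually run looks roughly like this (the
device and mount paths are placeholders for this cluster's real ones):

    # 1. write test on a directory backed by a plain LVM LV
    dd if=/dev/zero of=/srv/lvm-test/testfile bs=1M count=256 conv=fsync

    # 2. the DRBD device can only be exercised read-only without touching
    #    the customer's data
    dd if=/dev/drbd0 of=/dev/null bs=1M count=256 iflag=direct

    # 3. write test on the OCFS2 mount itself
    dd if=/dev/zero of=/path/to/ocfs2/testfile bs=1M count=256 conv=fsync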

Graeme