[Ocfs2-users] 10 Node OCFS2 Cluster - Performance

Sunil Mushran sunil.mushran at oracle.com
Mon Sep 14 12:15:32 PDT 2009


Add a sync. Both utilities are showing very little I/O. And do the same
for the runs on both ocfs2 and xfs.

# dd if=... ; sync
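
For example, timed end to end (a sketch only; the output path /cfs1/testfile
below is a placeholder, substitute your own test file):

# time sh -c 'dd if=/dev/zero of=/cfs1/testfile bs=2048000 count=1000; sync'

If your dd supports it, conv=fdatasync forces the data to disk before dd
exits and gives the same effect:

# time dd if=/dev/zero of=/cfs1/testfile bs=2048000 count=1000 conv=fdatasync

Run iostat -x 1 on the node in a second terminal while the dd is in flight
so the actual device throughput shows up.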

Laurence Mayer wrote:
> Here is the output of iostat while running the test on all nodes against the OCFS2 volume.
>  
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.23    0.00   15.80    0.45    0.00   83.52
>
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sdc               0.00     4.00    5.00    4.00    43.00    57.00    11.11     0.08    8.89   8.89   8.00
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.28    0.00    4.46    0.00    0.00   95.26
>
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.25    0.00    0.25    3.23    0.00   96.28
>
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sdc               0.00     7.00    1.00   13.00    11.00   153.00    11.71     0.24   17.14  11.43  16.00
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.00    0.00    0.00    0.00    0.00  100.00
>
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.00    0.00    0.00    0.00    0.00  100.00
>
> Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sdc               0.00     0.00    1.00    1.00    11.00     1.00     6.00     0.03   15.00  15.00   3.00
>
> vmstat:
>  
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  0  0      0  54400 279320 15651312    0    0     9     8    2    4 30  1 69  0
>  0  0      0  54384 279320 15651316    0    0     6     0   24  299  0  0 100  0
>  0  0      0  54384 279320 15651316    0    0     0     0   92  409  0  0 100  0
>  2  0      0  54384 279320 15651316    0    0     5     1   81  386  0  0 100  0
>  0  0      0  53756 279320 15651352    0    0     8     0  730 1664  0  1 99  0
>  0  0      0  53232 279320 15651352    0    0     6    88  586 1480  0  0 99  0
>  0  0      0 242848 279320 15458608    0    0     8     0  348 1149  0  3 97  0
>  0  0      0 242868 279320 15458608    0    0     5     1  220  721  0  0 100  0
>  0  0      0 242868 279320 15458608    0    0     0     0  201  709  0  0 100  0
>  0  0      0 243116 279320 15458608    0    0     6     0  239  775  0  0 100  0
>  0  0      0 243116 279320 15458608    0    0     0     0  184  676  0  0 100  0
>  0  0      0 243116 279336 15458608    0    0     5    65  236  756  0  0 99  0
>  0  0      0 243488 279336 15458608    0    0     0     0  231  791  0  0 100  0
>  1  0      0 243488 279336 15458608    0    0     6     0  193  697  0  1 100  0
>  0  0      0 243488 279336 15458608    0    0     0     0  221  762  0  0 100  0
>  0  0      0 243860 279336 15458608    0    0     9     1  240  793  0  0 100  0
>  0  0      0 243860 279336 15458608    0    0     0     0  197  708  0  0 100  0
>  1  0      0 117384 279348 15585384    0    0    26    16  124  524  0 15 84  1
>  0  0      0  53204 279356 15651364    0    0     0   112  141  432  0  8 91  1
>  0  0      0  53212 279356 15651320    0    0     5     1   79  388  0  0 100  0
>  0  0      0  53212 279356 15651320    0    0     0    20   30  301  0  0 100  0
>  
> Does this give you any clue to the bottleneck?
>  
>  
>
>  
> On Mon, Sep 14, 2009 at 9:42 PM, Sunil Mushran
> <sunil.mushran at oracle.com> wrote:
>
>     Get some iostat/vmstat numbers.
>     # iostat -x /dev/sdX 1
>     # vmstat 1
>
>     How much memory do the nodes have? If more than 2G, XFS
>     is probably leveraging its delayed allocation feature to heavily
>     cache the writes. iostat/vmstat should show that.
>
>     Is the timing for the 10 node test cumulative?
>
>     Laurence Mayer wrote:
>
>         Hi,
>
>         I am currently running a 10 Node OCFS2 Cluster (version
>         1.3.9-0ubuntu1) on Ubuntu Server 8.04 x86_64.
>         Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009
>         x86_64 GNU/Linux
>
>         The cluster is connected to a 1TB iSCSI device presented by
>         an IBM 3300 storage system, running over a 1Gbit network.
>         Mounted on all nodes:  /dev/sdc1 on /cfs1 type ocfs2
>         (rw,_netdev,data=writeback,heartbeat=local)
>         Maximum Nodes: 32
>         Block Size=4k
>         Cluster Size=4k
>
>         My testing shows that writing 10 x 200MB files simultaneously
>         from the 10 nodes (1 file per node, 2GB total) takes ~23.54 secs.
>         Reading the files back can take just as long.
>
>         Do these numbers sound correct?
>
>         Doing dd if=/dev/zero of=/cfs1/xxxxx/txt count=1000 bs=2048000
>         (2GB) from a single node takes 16 secs.
>
>         (running the same dd command on an XFS filesystem connected to
>         the same iSCSI storage takes 2.2 secs)
>
>         Are there any tips & tricks to improve performance on OCFS2?
>
>         Thanks in advance
>         Laurence
>
>         _______________________________________________
>         Ocfs2-users mailing list
>         Ocfs2-users at oss.oracle.com
>         http://oss.oracle.com/mailman/listinfo/ocfs2-users
>          
>
>
>




More information about the Ocfs2-users mailing list