<div dir="ltr"><div>No, I have 2 NICs in each server:</div>
<div>1) Dedicated iSCSI</div>
<div>2) Public network and O2CB</div>
<div> </div>
<div><br><br> </div>
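<div>(In case it is useful for double-checking: the interconnect IP that O2CB uses is the one listed per node in /etc/ocfs2/cluster.conf, so it can be pulled out and compared against the iSCSI subnet. A rough sketch below, run against a made-up sample stanza; on a real node you would point it at /etc/ocfs2/cluster.conf instead of the here-document.)</div>

```shell
# Sketch: extract the per-node O2CB interconnect IPs from a cluster.conf
# so they can be checked against the dedicated iSCSI subnet.
# The stanza below is a hypothetical sample (10.0.1.11 is made up);
# on a real node, replace /dev/stdin with /etc/ocfs2/cluster.conf.
awk -F'= *' '/ip_address/ { print $2 }' /dev/stdin <<'EOF'
node:
        ip_port = 7777
        ip_address = 10.0.1.11
        number = 0
        name = n1
        cluster = ocfs2
EOF
```

<div>(Then `ip -4 addr show` on each node confirms which NIC actually carries that address.)</div>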
<div class="gmail_quote">On Tue, Sep 15, 2009 at 8:58 PM, Sunil Mushran <span dir="ltr"><<a href="mailto:sunil.mushran@oracle.com">sunil.mushran@oracle.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">Is the o2cb interconnect and iscsi sharing the same network channel?<br><br>Laurence Mayer wrote:<br>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">*1 x Node*:<br>root@n1:~# dd if=/dev/sdc1 of=/dev/null bs=1M count=1000 skip=2000
<div class="im"><br>1000+0 records in<br>1000+0 records out<br>1048576000 bytes (1.0 GB) copied, 10.9246 s, 96.0 MB/s<br>*2 x Nodes*<br></div>root@n1:/cfs1/laurence# cat run.sh.e7470.1
<div class="im"><br>1000+0 records in<br>1000+0 records out<br>1048576000 bytes (1.0 GB) copied, 18.6313 s, 56.3 MB/s<br></div>root@n1:/cfs1/laurence# cat run.sh.e7470.2
<div>
<div></div>
<div class="h5"><br>1000+0 records in<br>1000+0 records out<br>1048576000 bytes (1.0 GB) copied, 19.0982 s, 54.9 MB/s<br>real 0m21.557s<br>user 0m0.010s<br>sys 0m0.000s<br>*5 x Nodes*<br>run.sh.e7471.1:1048576000 bytes (1.0 GB) copied, 45.7561 s, 22.9 MB/s<br>
run.sh.e7471.2:1048576000 bytes (1.0 GB) copied, 43.3075 s, 24.2 MB/s<br>run.sh.e7471.3:1048576000 bytes (1.0 GB) copied, 38.9945 s, 26.9 MB/s<br>run.sh.e7471.4:1048576000 bytes (1.0 GB) copied, 43.535 s, 24.1 MB/s<br>run.sh.e7471.5:1048576000 bytes (1.0 GB) copied, 41.4462 s, 25.3 MB/s<br>
real 0m49.552s<br>user 0m0.000s<br>sys 0m0.010s<br>*8 x Nodes:*<br>run.sh.e7472.1:1048576000 bytes (1.0 GB) copied, 60.7164 s, 17.3 MB/s<br>run.sh.e7472.2:1048576000 bytes (1.0 GB) copied, 50.3527 s, 20.8 MB/s<br>run.sh.e7472.3:1048576000 bytes (1.0 GB) copied, 57.4285 s, 18.3 MB/s<br>
run.sh.e7472.4:1048576000 bytes (1.0 GB) copied, 47.4362 s, 22.1 MB/s<br>run.sh.e7472.5:1048576000 bytes (1.0 GB) copied, 61.4835 s, 17.1 MB/s<br>run.sh.e7472.6:1048576000 bytes (1.0 GB) copied, 48.5347 s, 21.6 MB/s<br>run.sh.e7472.7:1048576000 bytes (1.0 GB) copied, 63.9391 s, 16.4 MB/s<br>
run.sh.e7472.8:1048576000 bytes (1.0 GB) copied, 60.6223 s, 17.3 MB/s<br>real 1m7.497s<br>user 0m0.010s<br>sys 0m0.010s<br>*10 x Nodes:*<br>run.sh.e7473.1:1048576000 bytes (1.0 GB) copied, 58.4126 s, 18.0 MB/s<br>run.sh.e7473.10:1048576000 bytes (1.0 GB) copied, 50.982 s, 20.6 MB/s<br>
run.sh.e7473.2:1048576000 bytes (1.0 GB) copied, 53.1949 s, 19.7 MB/s<br>run.sh.e7473.3:1048576000 bytes (1.0 GB) copied, 48.3755 s, 21.7 MB/s<br>run.sh.e7473.4:1048576000 bytes (1.0 GB) copied, 60.8544 s, 17.2 MB/s<br>run.sh.e7473.5:1048576000 bytes (1.0 GB) copied, 59.9801 s, 17.5 MB/s<br>
run.sh.e7473.6:1048576000 bytes (1.0 GB) copied, 61.6221 s, 17.0 MB/s<br>run.sh.e7473.7:1048576000 bytes (1.0 GB) copied, 59.2011 s, 17.7 MB/s<br>run.sh.e7473.8:1048576000 bytes (1.0 GB) copied, 56.3118 s, 18.6 MB/s<br>run.sh.e7473.9:1048576000 bytes (1.0 GB) copied, 54.2202 s, 19.3 MB/s<br>
real 1m6.979s<br>user 0m0.010s<br>sys 0m0.010s<br>Do you think the hardware cannot handle the load?<br><br></div></div>
<div>
<div></div>
<div class="h5"> On Tue, Sep 15, 2009 at 7:53 PM, Sunil Mushran <<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a>> wrote:<br>
<br> All clusters are running release tests. So not at the moment.<br><br> But you can see if your hardware is limiting you.<br><br> # time dd if=/dev/sdX1 of=/dev/null bs=1M count=1000 skip=2000<br><br> Run this on one node, then two nodes concurrently, 5 nodes, 10 nodes.<br>
The idea is to see whether you see any drop off in read performance<br> when multiple nodes are hitting the iscsi io stack.<br><br> # echo 3 > /proc/sys/vm/drop_caches<br> Do remember to clear the caches between runs.<br>
<br> Sunil<br><br><br> Laurence Mayer wrote:<br><br> Hi Sunil<br> I am running iostat on only one of the nodes, so the results<br> you see are only from a single node.<br> However I am running this concurrently on the 10 nodes,<br>
resulting in a total of 2Gig being written, so yes on this node<br> it took 8 secs to write 205Megs.<br><br> My latest results (using sync after the dd) show that when<br> running on the 10 nodes concurrently it takes 37secs<br>
to write the 10 x 205Meg files (2Gig).<br> Here are the results from ALL the nodes:<br> run.sh.e7212.1:204800000 bytes (205 MB) copied, 17.9657 s, 11.4 MB/s<br> run.sh.e7212.10:204800000 bytes (205 MB) copied, 30.1489 s, 6.8 MB/s<br>
 run.sh.e7212.2:204800000 bytes (205 MB) copied, 16.4605 s, 12.4 MB/s<br> run.sh.e7212.3:204800000 bytes (205 MB) copied, 18.1461 s, 11.3 MB/s<br> run.sh.e7212.4:204800000 bytes (205 MB) copied, 20.9716 s, 9.8 MB/s<br>
 run.sh.e7212.5:204800000 bytes (205 MB) copied, 22.6265 s, 9.1 MB/s<br> run.sh.e7212.6:204800000 bytes (205 MB) copied, 12.9318 s, 15.8 MB/s<br> run.sh.e7212.7:204800000 bytes (205 MB) copied, 15.1739 s, 13.5 MB/s<br>
 run.sh.e7212.8:204800000 bytes (205 MB) copied, 13.8953 s, 14.7 MB/s<br> run.sh.e7212.9:204800000 bytes (205 MB) copied, 29.5445 s, 6.9 MB/s<br><br> real 0m37.920s<br>
user 0m0.000s<br> sys 0m0.030s<br><br> (This averages 11.17MB/sec per node, which seems very low.)<br><br> This compares to 23.5secs when writing 2Gig from a single node:<br><br> root@n2:# time (dd if=/dev/zero of=txt bs=2048000 count=1000;<br>
sync)<br> 1000+0 records in<br> 1000+0 records out<br> 2048000000 bytes (2.0 GB) copied, 16.1369 s, 127 MB/s<br><br> real 0m23.495s<br> user 0m0.000s<br> sys 0m15.180s<br>
<br><br> Sunil, do you have any way to run the same test (10 x 200Megs)<br> concurrently on 10 or more nodes to compare results?<br><br> Thanks again<br><br> Laurence<br><br><br> Sunil Mushran wrote:<br>
<br> Always cc ocfs2-users.<br><br> Strange. The ocfs2 numbers look incomplete. It shows only<br> 200MB written.<br> You said it was taking 16 secs. Yet the iostat numbers are<br> for 8 secs only.<br>
<br> The xfs numbers look complete. Shows 90+ MB/s.<br><br> On my iscsi setup (netapp backend, gige, node with single<br> cpu box and<br> 512M RAM), I get 85MB/s.<br><br> # time (dd if=/dev/zero of=/mnt/boq7 count=2000 bs=1M ;<br>
sync ;)<br> sync<br> 2000+0 records in<br> 2000+0 records out<br> 2097152000 bytes (2.1 GB) copied, 24.4168 seconds, 85.9 MB/s<br><br> real 0m24.515s<br> user 0m0.035s<br>
sys 0m14.967s<br><br> This is with data=writeback.<br><br> The 2.2 secs is probably because of delayed allocation.<br> Since your box has<br> enough memory, xfs can cache all the writes and return to<br>
the user. Its<br> writeback then flushes the data in the background. The<br> iostat/vmstat<br> numbers should show similar writeback numbers.<br><br> Sunil<br><br> Laurence Mayer wrote:<br>
<br> iostat from cfs volume<br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 1.77 2.28 0.00 95.95<br><br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
sdc 0.00 4.00 2.00 4.00 16.00 64.00 13.33 0.12 15.00 15.00 9.00<br><br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 6.90 7.14 0.00 85.96<br>
<br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br> sdc 0.00 16.00 9.00 40.00 75.00 441.00 10.53 0.43 9.39 6.73 33.00<br>
<br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 7.67 7.18 0.00 85.15<br><br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
sdc 0.00 20.00 11.00 47.00 88.00 536.00 10.76 0.36 6.21 4.48 26.00<br><br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 5.65 10.07 0.00 84.28<br>
<br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br> sdc 0.00 16.00 9.00 37.00 75.00 417.00 10.70 0.55 11.96 8.48 39.00<br>
<br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.25 0.00 12.69 31.22 0.00 55.84<br><br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
sdc 0.00 40324.00 2.00 181.00 16.00 174648.00 954.45 94.58 364.86 4.81 88.00<br><br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 13.35 14.14 0.00 72.51<br>
<br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br> sdc 0.00 9281.00 1.00 228.00 11.00 224441.00 980.14 100.93 559.17 4.37 100.00<br>
<br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 0.25 0.50 0.00 99.25<br><br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
sdc 0.00 0.00 0.00 3.00 0.00 1040.00 346.67 0.03 240.00 6.67 2.00<br><br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 0.00 0.00 0.00 100.00<br>
<br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br> sdc 0.00 0.00 1.00 1.00 11.00 1.00 6.00 0.04 20.00 20.00 4.00<br>
<br> vmstat from cfs volume:<br> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----<br> r b swpd free buff cache si so bi bo in cs us sy id wa<br>
0 0 0 447656 279416 15254408 0 0 0 0 39 350 0 0 100 0<br> 0 0 0 447656 279416 15254408 0 0 5 21 61 358 0 0 100 0<br>
0 0 0 447656 279416 15254408 0 0 0 0 49 369 0 0 100 0<br> 0 0 0 447656 279416 15254408 0 0 6 0 28 318 0 0 100 0<br>
0 0 0 447656 279416 15254408 0 0 0 0 26 321 0 0 100 0<br> 0 0 0 447656 279416 15254408 0 0 5 1 45 339 0 0 100 0<br>
0 0 0 447656 279416 15254412 0 0 0 0 8 283 0 0 100 0<br> 0 1 0 439472 279424 15262604 0 0 14 80 93 379 0 1 90 9<br>
0 0 0 439472 279424 15262604 0 0 0 4 43 338 0 0 97 2<br> 0 0 0 382312 279456 15319964 0 0 37 209 208 562 0 7 85 8<br>
0 0 0 324524 279500 15377292 0 0 44 264 250 647 0 7 86 7<br> 0 0 0 266864 279532 15434636 0 0 38 208 213 548 0 7 83 10<br>
0 3 0 250072 279544 15450584 0 0 44 124832 13558 2038 0 11 62 27<br> 0 1 0 250948 279564 15450584 0 0 5 75341 19596 2735 0 13 71 16<br>
0 0 0 252808 279564 15450548 0 0 0 52 2777 849 0 2 95 3<br> 0 0 0 252808 279564 15450548 0 0 6 0 21 310 0 0 100 0<br>
0 0 0 252808 279564 15450548 0 0 0 0 15 298 0 0 100 0<br> 0 0 0 253012 279564 15450548 0 0 5 1 29 310 0 0 100 0<br>
0 0 0 253048 279564 15450552 0 0 0 0 19 290 0 0 100 0<br> 0 0 0 253048 279564 15450552 0 0 6 0 26 305 0 0 100 0<br>
1 0 0 253172 279564 15450552 0 0 0 60 28 326 0 0 100 0<br> xfs volume:<br> iostat<br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
sdd 0.00 0.00 4.00 0.00 40.00 0.00 10.00 0.05 12.00 12.00 4.80<br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 14.98 0.25 0.00 84.77<br>
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br> sdd 0.00 0.00 3.00 5.00 24.00 3088.00 389.00 6.54 44.00 17.00 13.60<br>
avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 10.67 21.86 0.00 67.47<br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
sdd 0.00 1.00 0.00 221.00 0.00 202936.00 918.26 110.51 398.39 4.52 100.00<br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 4.92 21.84 0.00 73.23<br>
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br> sdd 0.00 2.00 0.00 232.00 0.00 209152.00 901.52 110.67 493.50 4.31 100.00<br>
avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 3.67 22.78 0.00 73.54<br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
sdd 0.00 1.00 0.00 215.00 0.00 185717.00 863.80 111.37 501.67 4.65 100.00<br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.12 0.00 6.24 12.61 0.00 81.02<br>
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br> sdd 0.00 1.00 0.00 200.00 0.00 178456.00 892.28 80.01 541.82 4.88 97.60<br>
avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.12 0.00 4.61 8.34 0.00 86.92<br> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
sdd 0.00 0.00 0.00 179.00 0.00 183296.00 1024.00 134.56 470.61 5.21 93.20<br> avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 4.25 9.96 0.00 85.79<br>
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br> sdd 0.00 0.00 0.00 201.00 0.00 205824.00 1024.00 142.86 703.92 4.98 100.00<br>
vmstat<br> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----<br>
 r b swpd free buff cache si so bi bo in cs us sy id wa<br>
 1 0 45396 214592 6332 31771312 0 0 668 908 3 6 3 2 92 3<br>
 0 0 45396 214460 6332 31771336 0 0 0 0 14 4874 0 0 100 0<br>
 2 0 45396 161032 6324 31822524 0 0 20 0 42 6074 0 13 87 0<br>
 5 1 45396 166380 6324 31820072 0 0 12 77948 8166 6416 0 16 77 7<br>
 1 2 45396 163176 6324 31824580 0 0 28 102920 24190 6660 0 6 73 21<br>
 0 2 45396 163096 6332 31824580 0 0 0 102743 22576 6700 0 5 72 23<br>
 0 2 45396 163076 6332 31824580 0 0 0 90400 21831 6500 0 4 76 21<br>
 0 1 45396 163012 6332 31824580 0 0 0 114732 19686 5894 0 7 83 10<br>
 0 1 45396 162972 6332 31824580 0 0 0 98304 24882 6314 0 4 87 8<br>
 0 1 45396 163064 6332 31824580 0 0 0 98304 24118 6285 0 4 84 12<br>
 0 1 45396 163096 6340 31824576 0 0 0 114720 24800 6166 0 4 87 9<br>
 0 1 45396 162964 6340 31824584 0 0 0 98304 24829 6105 0 3 85 12<br>
 0 1 45396 162856 6340 31824584 0 0 0 98304 23506 6402 0 5 83 12<br>
 0 1 45396 162888 6340 31824584 0 0 0 114688 24685 7057 0 4 87 9<br>
 0 1 45396 162600 6340 31824584 0 0 0 98304 24902 7107 0 4 86 10<br>
 0 1 45396 162740 6340 31824584 0 0 0 98304 24906 7019 0 4 91 6<br>
 0 1 45396 162616 6348 31824584 0 0 0 114728 24997 7169 0 4 86 9<br>
 0 1 45396 162896 6348 31824584 0 0 0 98304 23700 6857 0 4 85 11<br>
 0 1 45396 162732 6348 31824584 0 0 0 94512 24468 6995 0 3 89 8<br>
 0 1 45396 162836 6348 31824584 0 0 0 81920 19764 6604 0 7 81 11<br>
 0 3 45396 162996 6348 31824584 0 0 0 114691 24303 7270 0 4 81 14<br>
 0 1 45396 163160 6356 31824584 0 0 0 98332 22695 7174 0 4 78 18<br>
 0 1 45396 162848 6356 31824584 0 0 0 90549 24836 7347 0 4 82 15<br>
 1 0 45396 163092 6364 31824580 0 0 0 37 13990 6216 0 6 83 11<br>
0 0 45396 163272 6364 31824588 0 0 0 320 65 3817 0 0 100 0<br> 0 0 45396 163272 6364 31824588 0 0 0 0 8 3694 0 0 100 0<br>
0 0 45396 163272 6364 31824588 0 0 0 0 25 3833 0 0 100 0<br> 0 0 45396 163272 6364 31824588 0 0 0 1 13 3690 0 0 100 0<br>
On Mon, Sep 14, 2009 at 10:15 PM, Sunil Mushran<br> <<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a>><br>
</div></div>
<div>
<div></div>
<div class="h5"> wrote:<br>
<br> Add a sync. Both utils are showing very little io.<br> And do the same<br> for runs on both ocfs2 and xfs.<br><br> # dd if... ; sync;<br><br> Laurence Mayer wrote:<br>
<br> Here is the output of iostat while running the test on the OCFS volume.<br>
 avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.23 0.00 15.80 0.45 0.00 83.52<br><br>
 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
 sdc 0.00 4.00 5.00 4.00 43.00 57.00 11.11 0.08 8.89 8.89 8.00<br><br>
 avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.28 0.00 4.46 0.00 0.00 95.26<br><br>
 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br><br>
 avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.25 0.00 0.25 3.23 0.00 96.28<br><br>
 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
 sdc 0.00 7.00 1.00 13.00 11.00 153.00 11.71 0.24 17.14 11.43 16.00<br><br>
 avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.00 0.00 0.00 0.00 0.00 100.00<br><br>
 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br><br>
 avg-cpu: %user %nice %system %iowait %steal %idle<br>
 0.00 0.00 0.00 0.00 0.00 100.00<br><br>
 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>
 sdc 0.00 0.00 1.00 1.00 11.00 1.00 6.00 0.03 15.00 15.00 3.00<br><br>
 vmstat:<br>
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----<br>
 r b swpd free buff cache si so bi bo in cs us sy id wa<br>
 0 0 0 54400 279320 15651312 0 0 9 8 2 4 30 1 69 0<br>
 0 0 0 54384 279320 15651316 0 0 6 0 24 299 0 0 100 0<br>
 0 0 0 54384 279320 15651316 0 0 0 0 92 409 0 0 100 0<br>
 2 0 0 54384 279320 15651316 0 0 5 1 81 386 0 0 100 0<br>
 0 0 0 53756 279320 15651352 0 0 8 0 730 1664 0 1 99 0<br>
 0 0 0 53232 279320 15651352 0 0 6 88 586 1480 0 0 99 0<br>
 0 0 0 242848 279320 15458608 0 0 8 0 348 1149 0 3 97 0<br>
 0 0 0 242868 279320 15458608 0 0 5 1 220 721 0 0 100 0<br>
 0 0 0 242868 279320 15458608 0 0 0 0 201 709 0 0 100 0<br>
 0 0 0 243116 279320 15458608 0 0 6 0 239 775 0 0 100 0<br>
 0 0 0 243116 279320 15458608 0 0 0 0 184 676 0 0 100 0<br>
 0 0 0 243116 279336 15458608 0 0 5 65 236 756 0 0 99 0<br>
 0 0 0 243488 279336 15458608 0 0 0 0 231 791 0 0 100 0<br>
 1 0 0 243488 279336 15458608 0 0 6 0 193 697 0 1 100 0<br>
 0 0 0 243488 279336 15458608 0 0 0 0 221 762 0 0 100 0<br>
 0 0 0 243860 279336 15458608 0 0 9 1 240 793 0 0 100 0<br>
 0 0 0 243860 279336 15458608 0 0 0 0 197 708 0 0 100 0<br>
 1 0 0 117384 279348 15585384 0 0 26 16 124 524 0 15 84 1<br>
 0 0 0 53204 279356 15651364 0 0 0 112 141 432 0 8 91 1<br>
 0 0 0 53212 279356 15651320 0 0 5 1 79 388 0 0 100 0<br>
 0 0 0 53212 279356 15651320 0 0 0 20 30 301 0 0 100 0<br>
 Does this give you any clue to the bottleneck?<br>
On Mon, Sep 14, 2009 at 9:42 PM,<br> Sunil Mushran<br> <<a href="mailto:sunil.mushran@oracle.com" target="_blank">sunil.mushran@oracle.com</a>> wrote:<br>
<br> Get some iostat/vmstat numbers.<br> # iostat -x /dev/sdX 1<br> # vmstat 1<br><br> How much memory do the nodes have? If more<br>
than 2G, XFS<br> is probably leveraging its delayed<br> allocation feature to<br> heavily<br> cache the writes. iostat/vmstat should show<br>
that.<br><br> Is the timing for the 10 node test cumulative?<br><br> Laurence Mayer wrote:<br><br> Hi,<br><br> I am currently running a 10 Node OCFS2<br>
Cluster (version<br> 1.3.9-0ubuntu1) on Ubuntu Server 8.04<br> x86_64.<br> Linux n1 2.6.24-24-server #1 SMP Tue Jul<br> 7 19:39:36 UTC<br>
2009<br> x86_64 GNU/Linux<br><br> The Cluster is connected to a 1Tera<br> iSCSI Device<br> presented by<br>
an IBM 3300 Storage System, running over<br>
a 1Gig Network.<br> Mounted on all nodes: /dev/sdc1 on<br> /cfs1 type ocfs2<br> (rw,_netdev,data=writeback,heartbeat=local)<br> Maximum Nodes: 32<br>
Block Size=4k<br> Cluster Size=4k<br><br> My testing shows that to write<br> simultaneously from the 10<br> nodes, 10 x 200Meg files (1 file per<br>
node, total of 2Gig)<br> takes ~23.54secs.<br> Reading the files back can take just as<br> long.<br><br> Do these numbers sound correct?<br>
<br> Doing dd if=/dev/zero of=/cfs1/xxxxx/txt<br> count=1000<br> bs=2048000<br> (2Gig) from a single node takes 16secs.<br><br> (running the same dd command on an XFS<br>
filesystem<br> connected to<br> the same iSCSI Storage takes 2.2secs)<br><br> Are there any tips & tricks to improve<br> performance on OCFS2?<br>
<br> Thanks in advance<br> Laurence<br><br> _______________________________________________<br> Ocfs2-users mailing list<br>
<a href="mailto:Ocfs2-users@oss.oracle.com" target="_blank">Ocfs2-users@oss.oracle.com</a><br>
<br> <a href="http://oss.oracle.com/mailman/listinfo/ocfs2-users" target="_blank">http://oss.oracle.com/mailman/listinfo/ocfs2-users</a><br> <br><br><br>
<br>
</div></div></blockquote><br></blockquote></div><br></div>
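<div>For what it's worth, summing the per-node rates from the read tests above shows the aggregate growing only sub-linearly while the per-node rate collapses, which would be consistent with contention on the shared iSCSI path (array or network) rather than OCFS2 itself. A rough sketch of the arithmetic, with the rates copied from the 2-node and 10-node runs quoted above:</div>

```shell
# Sum a list of per-node dd read rates (MB/s) quoted earlier in the thread.
sum_rates() {
    echo "$@" | awk '{ for (i = 1; i <= NF; i++) s += $i
                       printf "%d node(s): aggregate %.1f MB/s, mean %.1f MB/s per node\n", NF, s, s / NF }'
}

sum_rates 56.3 54.9                                          # 2-node run
sum_rates 18.0 20.6 19.7 21.7 17.2 17.5 17.0 17.7 18.6 19.3  # 10-node run
```

<div>(The single-node run managed 96 MB/s, so ten nodes together only roughly double the pipe rather than scaling it 10x.)</div>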