<div>FYI, the VG is set up on /dev/sdc1.</div>
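<div> </div>
<div>(To double-check that mapping, the standard LVM2 reporting commands can be used; raidVG is the volume group name from the mounts below, and the field lists are just one way to slice the output:)</div>
<div>[root@impax ~]# pvs -o pv_name,vg_name,pv_size<br>[root@impax ~]# lvs -o lv_name,vg_name,devices raidVG</div>
<div> </div>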
<div>Also, I noticed something strange: the mounted.ocfs2 command doesn't show anything:</div>
<div>[root@impaxdb ~]# mounted.ocfs2 -d<br>Device FS UUID Label<br>[root@impaxdb ~]# mounted.ocfs2 -f<br>Device FS Nodes<br>[root@impaxdb ~]#<br> </div>
<div>[root@impax ~]# mounted.ocfs2 -d<br>Device FS UUID Label<br>[root@impax ~]# mounted.ocfs2 -f<br>Device FS Nodes<br>[root@impax ~]#<br> </div>
<div> </div>
<div>but the devices are mounted (see the mounted.ocfs2 cross-check sketched after the listings below):</div>
<div> </div>
<div>[root@impax ~]# mount -t ocfs2<br>/dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2 (rw,_netdev,heartbeat=local)
<br>/dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2 (rw,_netdev,heartbeat=local)
<br>/dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2 (rw,_netdev,heartbeat=local)
<br>/dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-xmbeanattrsLV on /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2 (rw,_netdev,heartbeat=local)
<br>/dev/mapper/raidVG-installLV on /install type ocfs2 (rw,_netdev,heartbeat=local)<br> </div>
<div>[root@impaxdb ~]# mount -t ocfs2<br>/dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2 (rw,_netdev,heartbeat=local)
<br>/dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2 (rw,_netdev,heartbeat=local)
<br>/dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2 (rw,_netdev,heartbeat=local)
<br>/dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type ocfs2 (rw,_netdev,heartbeat=local)<br>/dev/mapper/raidVG-xmbeanattrsLV on /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2 (rw,_netdev,heartbeat=local)
<br>/dev/mapper/raidVG-installLV on /install type ocfs2 (rw,_netdev,heartbeat=local)<br> </div>
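<div>One guess on my side about the mounted.ocfs2 oddity: it may not be picking up the device-mapper nodes the LVs live on during its default device scan. If the ocfs2-tools build here accepts explicit device arguments (I believe mounted.ocfs2 takes an optional device list), pointing it at the LV paths directly might make them show up, e.g.:</div>
<div>[root@impax ~]# mounted.ocfs2 -f /dev/mapper/raidVG-u01LV /dev/mapper/raidVG-u02LV<br>[root@impax ~]# mounted.ocfs2 -d /dev/mapper/raidVG-u01LV</div>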
<div> </div>
<div>yours</div>
<div> </div>
<div>arnold</div>
<div> </div>
<div><br><br> </div>
<div><span class="gmail_quote">On 8/24/07, <b class="gmail_sendername">Arnold Maderthaner</b> <<a href="mailto:arnold.maderthaner@j4care.com">arnold.maderthaner@j4care.com</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">
<div>vmstat server1:</div>
<div>[root@impax ~]# vmstat 5<br>procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------<br> r b swpd free buff cache si so bi bo in cs us sy id wa st<br> 0 0 0 2760468 119924 872240 0 0 4 4 107 63 0 0 99 0 0
<br> 0 2 0 2760468 119928 872240 0 0 4 30 1095 277 1 0 99 0 0<br> 0 0 0 2760468 119940 872248 0 0 7 11 1076 232 0 0 100 0 0<br> 0 0 0 2760468 119948 872272 0 0 4 18 1084 244 0 0 100 0 0
<br> 0 0 0 2760468 119948 872272 0 0 7 34 1063 220 0 0 100 0 0<br> 0 0 0 2760468 119948 872272 0 0 0 1 1086 243 0 0 100 0 0<br> 0 0 0 2760716 119956 872272 0 0 7 10 1065 234 2 0 98 0 0
<br>vmstat server2:</div>
<div>[root@impaxdb ~]# vmstat 5<br>procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------<br> r b swpd free buff cache si so bi bo in cs us sy id wa st<br> 0 0 24676 143784 46428 3652848 0 0 11 20 107 122 0 0 84 16 0
<br> 0 1 24676 143784 46452 3652832 0 0 4 14 1080 237 0 0 89 11 0<br> 0 1 24672 143660 46460 3652812 6 0 14 34 1074 246 0 0 90 10 0<br> 0 0 24672 143660 46460 3652852 0 0 7 22 1076 246 0 0 80 20 0
<br> 0 1 24672 143660 46476 3652844 0 0 7 10 1068 227 0 0 85 15 0<br> </div>
<div>iostat server1:</div>
<div> </div>
<div>
<p>[root@impax ~]# iostat -x 5<br>Linux 2.6.18-8.el5PAE (impax) 08/24/2007</p>
<p>avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.07 0.00 0.04 0.45 0.00 99.44</p>
<p>Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>sda 0.24 0.90 0.23 0.76 15.72 13.33 29.22 0.01 12.68 4.77 0.47<br>sdb
0.03 1.23 0.02 0.58 1.34 6.81 13.59 0.00 0.79 0.30 0.02<br>sdc 0.58 0.87 3.88 3.78 16.69 10.87 3.60 7.98 1042.08 89.69 68.68<br>dm-0 0.00 0.00
0.31 1.14 0.96 6.99 5.46 1.55 1061.28 459.00 66.92<br>dm-1 0.00 0.00 0.31 0.31 0.95 0.31 2.01 0.66 1054.16 1054.17 66.15<br>dm-2 0.00 0.00 0.31 0.31
0.95 0.31 2.01 0.66 1057.72 1057.84 66.24<br>dm-3 0.00 0.00 0.96 0.34 6.15 0.55 5.12 0.77 592.01 504.72 66.04<br>dm-4 0.00 0.00 0.32 0.33 0.96
0.41 2.14 0.66 1031.13 1028.75 66.00<br>dm-5 0.00 0.00 0.31 0.32 0.95 0.40 2.12 0.66 1037.27 1036.55 66.09<br>dm-6 0.00 0.00 0.31 0.31 0.94 0.31
2.01 0.66 1064.34 1064.45 66.37<br>dm-7 0.00 0.00 0.32 0.31 0.95 0.32 2.01 0.66 1043.05 1042.99 65.77<br>dm-8 0.00 0.00 0.31 0.31 0.95 0.32 2.03
0.66 1050.73 1048.23 65.97<br>dm-9 0.00 0.00 0.34 0.31 0.99 0.31 2.01 0.67 1027.41 1006.68 65.71<br>dm-10 0.00 0.00 0.32 0.32 0.95 0.32 2.01 0.68
1078.71 1050.22 66.41<br>dm-12 0.00 0.00 0.31 0.31 0.96 0.31 2.03 0.66 1065.33 1065.10 66.43<br>dm-13 0.00 0.00 0.04 1.27 1.27 2.54 2.91 0.00 2.43
0.10 0.01<br>dm-18 0.00 0.00 0.01 0.35 0.04 2.80 8.00 0.01 25.42 0.03 0.00<br>dm-19 0.00 0.00 0.00 0.18 0.01 1.47 8.00 0.00 0.42 0.21 0.00
<br>dm-21 0.00 0.00 0.02 0.01 2.20 0.11 65.49 0.00 9.84 3.01 0.01<br>dm-26 0.00 0.00 0.01 0.05 0.12 0.43 9.21 0.00 11.66 7.05 0.04<br>dm-27
0.00 0.00 0.00 0.21 0.02 1.70 8.02 0.00 12.09 4.57 0.10<br>dm-28 0.00 0.00 0.26 0.11 8.90 0.91 26.40 0.01 14.94 1.85 0.07<br>dm-29 0.00 0.00
0.05 1.03 1.04 8.22 8.56 0.01 13.76 2.48 0.27</p>
<p>avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 0.00 0.40 0.00 99.60</p>
<p>Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>sda 0.00 2.80 0.00 2.20 0.00 40.00 18.18 0.03 12.27 4.73 1.04<br>sdb
0.00 0.40 0.00 0.40 0.00 1.60 4.00 0.00 5.50 5.50 0.22<br>sdc 0.00 0.00 4.80 4.80 14.40 4.80 2.00 7.21 1560.12 63.67 61.12<br>dm-0 0.00 0.00
0.40 0.40 1.20 0.40 2.00 0.60 1559.50 750.25 60.02<br>dm-1 0.00 0.00 0.40 0.40 1.20 0.40 2.00 0.60 1558.50 750.50 60.04<br>dm-2 0.00 0.00 0.40 0.40
1.20 0.40 2.00 0.60 1559.50 752.75 60.22<br>dm-3 0.00 0.00 0.40 0.40 1.20 0.40 2.00 0.60 1562.75 752.25 60.18<br>dm-4 0.00 0.00 0.40 0.40 1.20
0.40 2.00 0.60 1559.50 752.50 60.20<br>dm-5 0.00 0.00 0.40 0.40 1.20 0.40 2.00 0.60 1563.00 750.25 60.02<br>dm-6 0.00 0.00 0.40 0.40 1.20 0.40 2.00
0.60 1558.50 750.25 60.02<br>dm-7 0.00 0.00 0.40 0.40 1.20 0.40 2.00 0.60 1559.50 752.50 60.20<br>dm-8 0.00 0.00 0.40 0.40 1.20 0.40 2.00 0.60
1559.75 750.25 60.02<br>dm-9 0.00 0.00 0.40 0.40 1.20 0.40 2.00 0.60 1554.00 750.25 60.02<br>dm-10 0.00 0.00 0.40 0.40 1.20 0.40 2.00 0.60 1555.75 752.25
60.18 <br>dm-12 0.00 0.00 0.40 0.40 1.20 0.40 2.00 0.60 1571.25 750.25 60.02<br>dm-13 0.00 0.00 0.00 0.80 0.00 1.60 2.00 0.01 8.25 2.75 0.22<br>
dm-18 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-19 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-21
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-26 0.00 0.00 0.00 0.60 0.00 4.80 8.00 0.01 12.67 8.33 0.50<br>dm-27 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-28 0.00 0.00 0.00 1.80 0.00 14.40 8.00 0.02 10.22 2.44 0.44<br>dm-29 0.00 0.00 0.00 0.80
0.00 6.40 8.00 0.01 10.25 3.75 0.30</p>
<p>avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.05 0.00 0.00 0.00 0.00 99.95</p>
<p>Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>sda 0.00 0.20 0.00 0.60 0.00 6.40 10.67 0.01 14.67 6.00 0.36<br>sdb
0.00 0.40 0.00 0.40 0.00 1.60 4.00 0.00 0.00 0.00 0.00<br>sdc 0.00 0.00 4.80 4.80 14.40 4.80 2.00 8.81 1292.92 77.06 73.98<br>dm-0 0.00 0.00
0.20 0.40 0.60 0.40 1.67 0.74 1723.33 1233.00 73.98<br>dm-1 0.00 0.00 0.20 0.40 0.60 0.40 1.67 0.73 1722.00 1223.33 73.40<br>dm-2 0.00 0.00 0.20
0.40 0.60 0.40 1.67 0.73 1726.33 1222.00 73.32<br>dm-3 0.00 0.00 0.20 0.40 0.60 0.40 1.67 0.73 1725.67 1222.00 73.32<br>dm-4 0.00 0.00 0.20 0.40
0.60 0.40 1.67 0.73 1726.00 1222.00 73.32<br>dm-5 0.00 0.00 0.20 0.40 0.60 0.40 1.67 0.73 1722.33 1222.00 73.32<br>dm-6 0.00 0.00 0.20 0.40 0.60
0.40 1.67 0.73 1722.00 1223.67 73.42<br>dm-7 0.00 0.00 0.20 0.40 0.60 0.40 1.67 0.73 1726.00 1222.00 73.32<br>dm-8 0.00 0.00 0.20 0.40 0.60 0.40
1.67 0.73 1722.33 1222.00 73.32<br>dm-9 0.00 0.00 0.20 0.40 0.60 0.40 1.67 0.73 1723.00 1214.33 72.86<br>dm-10 0.00 0.00 0.20 0.40 0.60 0.40 1.67
0.73 1725.67 1222.00 73.32<br>dm-12 0.00 0.00 0.20 0.40 0.60 0.40 1.67 0.74 1722.00 1231.67 73.90<br>dm-13 0.00 0.00 0.00 0.80 0.00 1.60 2.00 0.00 0.00
0.00 0.00<br>dm-18 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-19 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
<br>dm-21 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-26 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-27
0.00 0.00 0.00 0.60 0.00 4.80 8.00 0.01 13.00 4.33 0.26<br>dm-28 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-29 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br></p></div>
<div><br>iostat server2:</div>
<p>[root@impaxdb ~]# iostat -x 5<br>Linux 2.6.18-8.el5PAE (impaxdb) 08/24/2007</p>
<p>avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.48 0.00 0.10 15.70 0.00 83.72</p>
<p>Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>sda 0.28 1.38 0.32 0.82 18.52 17.59 31.55 0.00 1.98 0.83 0.10<br>sdb
0.01 1.73 0.02 0.04 3.20 14.15 297.21 0.00 9.18 0.87 0.01<br>sdc 6.44 14.59 3.94 4.26 65.36 126.12 23.36 9.52 1160.63 91.33 74.88<br>dm-0 0.00 0.00
0.29 0.29 0.89 0.29 2.01 0.71 1214.34 1214.01 71.18<br>dm-1 0.00 0.00 0.29 0.29 0.89 0.29 2.01 0.71 1206.47 1206.38 70.95<br>dm-2 0.00 0.00 0.29
0.29 0.89 0.29 2.01 0.71 1207.35 1207.27 70.96<br>dm-3 0.00 0.00 7.10 9.84 55.33 76.63 7.79 25.51 1506.27 43.35 73.42<br>dm-4 0.00 0.00 0.32 5.48
1.10 41.82 7.39 9.21 1586.36 122.41 71.06<br>dm-5 0.00 0.00 0.30 0.88 0.90 5.02 5.02 0.72 606.80 600.72 70.89<br>dm-6 0.00 0.00 0.29 0.29 0.89 0.29
2.01 0.71 1206.90 1206.94 70.91<br>dm-7 0.00 0.00 0.29 0.29 0.89 0.29 2.01 0.71 1202.65 1202.69 70.82<br>dm-8 0.00 0.00 0.29 0.29 0.89 0.29 2.01
0.71 1204.47 1204.43 70.84<br>dm-9 0.00 0.00 0.29 0.29 0.89 0.29 2.01 0.71 1202.38 1202.42 70.77<br>dm-10 0.00 0.00 0.29 0.29 0.88 0.29 2.01 0.71
1208.53 1208.48 70.93<br>dm-12 0.00 0.00 0.30 0.29 0.91 0.29 2.04 0.71 1202.82 1202.49 70.92<br>dm-13 0.00 0.00 0.00 0.00 0.01 0.00 6.99 0.00 0.47
0.06 0.00<br>dm-18 0.00 0.00 0.02 1.77 3.16 14.15 9.67 0.03 16.69 0.03 0.01<br>dm-19 0.00 0.00 0.00 0.00 0.01 0.00 8.02 0.00 0.69 0.34 0.00
<br>dm-20 0.00 0.00 0.05 0.28 3.48 2.26 17.18 0.00 3.37 0.47 0.02<br>dm-25 0.00 0.00 0.01 0.05 0.16 0.44 9.68 0.00 1.20 0.65 0.00<br>dm-26
0.00 0.00 0.00 0.14 0.02 1.11 8.03 0.00 3.28 0.05 0.00<br>dm-27 0.00 0.00 0.31 0.15 9.69 1.24 23.61 0.00 9.39 0.88 0.04<br>dm-28 0.00 0.00
0.06 1.08 1.24 8.61 8.65 0.00 1.04 0.21 0.02</p>
<p>avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.05 0.00 0.05 23.62 0.00 76.27</p>
<p>Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>sda 0.00 4.85 0.00 2.22 0.00 56.57 25.45 0.00 0.91 0.36 0.08<br>sdb
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>sdc 0.00 1.21 4.85 5.45 14.55 24.24 3.76 12.51 1556.98 94.24 97.09<br>dm-0 0.00 0.00
0.20 0.40 0.61 0.40 1.67 0.97 2055.67 1602.00 97.09<br>dm-1 0.00 0.00 0.20 0.40 0.61 0.40 1.67 0.96 2064.67 1589.00 96.30<br>dm-2 0.00 0.00 0.20
0.40 0.61 0.40 1.67 0.96 2063.67 1582.67 95.92<br>dm-3 0.00 0.00 0.20 0.40 0.61 0.40 1.67 4.79 8988.33 1582.00 95.88<br>dm-4 0.00 0.00 0.20 1.21
0.61 6.87 5.29 0.96 886.57 679.14 96.04<br>dm-5 0.00 0.00 0.20 1.21 0.61 6.87 5.29 0.96 878.00 680.00 96.16<br>dm-6 0.00 0.00 0.20 0.40 0.61 0.40
1.67 0.97 2055.67 1593.00 96.55<br>dm-7 0.00 0.00 0.20 0.40 0.61 0.40 1.67 0.96 2064.67 1584.33 96.02<br>dm-8 0.00 0.00 0.20 0.40 0.61 0.40 1.67
0.96 2064.67 1586.33 96.14<br>dm-9 0.00 0.00 0.20 0.40 0.61 0.40 1.67 0.96 2064.67 1583.33 95.96<br>dm-10 0.00 0.00 0.20 0.40 0.61 0.40 1.67 0.97
2055.67 1597.67 96.83<br>dm-12 0.00 0.00 0.20 0.40 0.61 0.40 1.67 0.97 2071.67 1595.67 96.71<br>dm-13 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00<br>dm-18 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-19 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
<br>dm-20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-25 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-26
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-27 0.00 0.00 0.00 0.81 0.00 6.46 8.00 0.00 0.00 0.00 0.00<br>dm-28 0.00 0.00
0.00 4.85 0.00 38.79 8.00 0.01 2.17 0.17 0.08</p>
<p>avg-cpu: %user %nice %system %iowait %steal %idle<br> 0.00 0.00 0.00 19.85 0.00 80.15</p>
<div>Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util<br>sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>sdb
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>sdc 0.00 0.61 0.00 0.00 0.00 0.00 0.00 9.12 0.00 0.00 80.26<br>dm-0 0.00 0.00
0.20 0.00 0.61 0.00 3.00 0.65 0.00 3196.00 64.70<br>dm-1 0.00 0.00 0.20 0.00 0.61 0.00 3.00 0.65 0.00 3235.00 65.49<br>dm-2 0.00 0.00 0.20
0.00 0.61 0.00 3.00 0.66 0.00 3254.00 65.87<br>dm-3 0.00 0.00 0.40 0.81 2.23 6.48 7.17 3.29 0.00 660.83 80.26<br>dm-4 0.00 0.00 0.20 0.00
0.61 0.00 3.00 0.66 0.00 3248.00 65.75<br>dm-5 0.00 0.00 0.20 0.00 0.61 0.00 3.00 0.66 0.00 3242.00 65.63<br>dm-6 0.00 0.00 0.20 0.00 0.61
0.00 3.00 0.65 0.00 3223.00 65.24<br>dm-7 0.00 0.00 0.20 0.00 0.61 0.00 3.00 0.66 0.00 3249.00 65.77<br>dm-8 0.00 0.00 0.20 0.00 0.61 0.00
3.00 0.66 0.00 3243.00 65.65<br>dm-9 0.00 0.00 0.20 0.00 0.61 0.00 3.00 0.66 0.00 3252.00 65.83<br>dm-10 0.00 0.00 0.20 0.00 0.61 0.00 3.00
0.65 0.00 3209.00 64.96<br>dm-12 0.00 0.00 0.20 0.00 0.61 0.00 3.00 0.65 0.00 3215.00 65.08<br>dm-13 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
0.00 0.00<br>dm-18 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-19 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
<br>dm-20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-25 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-26
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-27 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00<br>dm-28 0.00 0.00
0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00</div>
<div> </div>
<div>top server1:</div>
<div> </div>
<div> </div>
<div>
<p>Tasks: 217 total, 1 running, 214 sleeping, 0 stopped, 2 zombie<br>Cpu(s): 0.1%us, 0.0%sy, 0.0%ni, 99.4%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st<br>Mem: 4084444k total, 1323728k used, 2760716k free, 120072k buffers
<br>Swap: 8193020k total, 0k used, 8193020k free, 872284k cached</p>
<p> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND<br>14245 root 16 0 2288 988 704 R 2 0.0 0:00.01 top<br> 1 root 15 0 2036 652 564 S 0 0.0 0:01.13 init<br> 2 root RT 0 0 0 0 S 0
0.0 0:00.00 migration/0<br> 3 root 34 19 0 0 0 S 0 0.0 0:00.46 ksoftirqd/0<br> 4 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0<br> 5 root RT 0 0 0 0 S 0
0.0 0:00.00 migration/1<br> 6 root 34 19 0 0 0 S 0 0.0 0:00.07 ksoftirqd/1<br> 7 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1<br> 8 root RT 0 0 0 0 S 0
0.0 0:00.00 migration/2<br> 9 root 34 19 0 0 0 S 0 0.0 0:00.19 ksoftirqd/2<br> 10 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/2<br> 11 root RT 0 0 0 0 S 0
0.0 0:00.00 migration/3<br> 12 root 34 19 0 0 0 S 0 0.0 0:00.23 ksoftirqd/3<br> 13 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/3<br> 14 root 10 -5 0 0 0 S 0
0.0 0:00.00 events/0<br> 15 root 10 -5 0 0 0 S 0 0.0 0:00.00 events/1<br> 16 root 10 -5 0 0 0 S 0 0.0 0:00.00 events/2<br> 17 root 10 -5 0 0 0 S 0 0.0
0:00.00 events/3<br> 18 root 19 -5 0 0 0 S 0 0.0 0:00.00 khelper<br> 19 root 10 -5 0 0 0 S 0 0.0 0:00.00 kthread<br> 25 root 10 -5 0 0 0 S 0 0.0 0:
00.00 kblockd/0<br> 26 root 10 -5 0 0 0 S 0 0.0 0:00.04 kblockd/1<br> 27 root 10 -5 0 0 0 S 0 0.0 0:00.00 kblockd/2<br> 28 root 10 -5 0 0 0 S 0 0.0 0:
00.06 kblockd/3<br> 29 root 14 -5 0 0 0 S 0 0.0 0:00.00 kacpid<br> 147 root 14 -5 0 0 0 S 0 0.0 0:00.00 cqueue/0<br> 148 root 14 -5 0 0 0 S 0 0.0 0:00.00
cqueue/1<br> 149 root 14 -5 0 0 0 S 0 0.0 0:00.00 cqueue/2<br> 150 root 14 -5 0 0 0 S 0 0.0 0:00.00 cqueue/3<br> 153 root 10 -5 0 0 0 S 0 0.0 0:00.00 khubd
<br> 155 root 10 -5 0 0 0 S 0 0.0 0:00.00 kseriod<br> 231 root 19 0 0 0 0 S 0 0.0 0:00.00 pdflush<br> 232 root 15 0 0 0 0 S 0 0.0 0:00.15 pdflush<br> 233 root 14 -5 0 0 0 S 0
0.0 0:00.00 kswapd0<br> 234 root 14 -5 0 0 0 S 0 0.0 0:00.00 aio/0<br> 235 root 14 -5 0 0 0 S 0 0.0 0:00.00 aio/1<br> 236 root 14 -5 0 0 0 S 0 0.0 0:00.00
aio/2<br></p></div>
<div> </div>
<div>top server2:</div>
<div>
<p>Tasks: 264 total, 1 running, 260 sleeping, 0 stopped, 3 zombie<br>Cpu(s): 0.5%us, 0.1%sy, 0.0%ni, 83.7%id, 15.7%wa, 0.0%hi, 0.0%si, 0.0%st<br>Mem: 4084444k total, 3941216k used, 143228k free, 46608k buffers
<br>Swap: 8193020k total, 24660k used, 8168360k free, 3652864k cached</p>
<p> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND<br> 9507 root 15 0 2292 1020 704 R 2 0.0 0:00.01 top<br> 1 root 15 0 2032 652 564 S 0 0.0 0:01.20 init<br> 2 root RT 0 0 0 0 S 0
0.0 0:00.00 migration/0<br> 3 root 34 19 0 0 0 S 0 0.0 0:00.25 ksoftirqd/0<br> 4 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0<br> 5 root RT 0 0 0 0 S 0
0.0 0:00.00 migration/1<br> 6 root 34 19 0 0 0 S 0 0.0 0:00.42 ksoftirqd/1<br> 7 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1<br> 8 root RT 0 0 0 0 S 0
0.0 0:00.00 migration/2<br> 9 root 34 19 0 0 0 S 0 0.0 0:00.56 ksoftirqd/2<br> 10 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/2<br> 11 root RT 0 0 0 0 S 0
0.0 0:00.00 migration/3<br> 12 root 39 19 0 0 0 S 0 0.0 0:00.08 ksoftirqd/3<br> 13 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/3<br> 14 root 10 -5 0 0 0 S 0
0.0 0:00.03 events/0<br> 15 root 10 -5 0 0 0 S 0 0.0 0:00.00 events/1<br> 16 root 10 -5 0 0 0 S 0 0.0 0:00.02 events/2<br> 17 root 10 -5 0 0 0 S 0 0.0
0:00.02 events/3<br> 18 root 10 -5 0 0 0 S 0 0.0 0:00.00 khelper<br> 19 root 11 -5 0 0 0 S 0 0.0 0:00.00 kthread<br> 25 root 10 -5 0 0 0 S 0 0.0 0:
00.08 kblockd/0<br> 26 root 10 -5 0 0 0 S 0 0.0 0:00.00 kblockd/1<br> 27 root 10 -5 0 0 0 S 0 0.0 0:00.00 kblockd/2<br> 28 root 10 -5 0 0 0 S 0 0.0 0:
00.08 kblockd/3<br> 29 root 18 -5 0 0 0 S 0 0.0 0:00.00 kacpid<br> 147 root 18 -5 0 0 0 S 0 0.0 0:00.00 cqueue/0<br> 148 root 20 -5 0 0 0 S 0 0.0 0:00.00
cqueue/1<br> 149 root 10 -5 0 0 0 S 0 0.0 0:00.00 cqueue/2<br> 150 root 10 -5 0 0 0 S 0 0.0 0:00.00 cqueue/3<br> 153 root 10 -5 0 0 0 S 0 0.0 0:00.00 khubd
<br> 155 root 10 -5 0 0 0 S 0 0.0 0:00.00 kseriod<br> 233 root 15 -5 0 0 0 S 0 0.0 0:00.84 kswapd0<br> 234 root 20 -5 0 0 0 S 0 0.0 0:00.00 aio/0<br> 235 root 20 -5 0 0 0 S 0
0.0 0:00.00 aio/1<br> 236 root 20 -5 0 0 0 S 0 0.0 0:00.00 aio/2<br> 237 root 20 -5 0 0 0 S 0 0.0 0:00.00 aio/3<br> 409 root 11 -5 0 0 0 S 0 0.0 0:00.00
kpsmoused<br></p>
<p>slabtop server1:</p>
<p> Active / Total Objects (% used) : 301060 / 312027 (96.5%)<br> Active / Total Slabs (% used) : 12055 / 12055 (100.0%)<br> Active / Total Caches (% used) : 106 / 146 (72.6%)<br> Active / Total Size (% used) :
45714.46K / 46814.27K (97.7%)<br> Minimum / Average / Maximum Object : 0.01K / 0.15K / 128.00K</p>
<p> OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME<br>148176 148095 99% 0.05K 2058 72 8232K buffer_head<br> 44573 44497 99% 0.13K 1537 29 6148K dentry_cache<br> 32888 32878 99%
0.48K 4111 8 16444K ext3_inode_cache<br> 10878 10878 100% 0.27K 777 14 3108K radix_tree_node<br> 7943 7766 97% 0.02K 47 169 188K dm_io<br> 7917 7766 98% 0.02K 39 203 156K dm_tio
<br> 6424 5414 84% 0.09K 146 44 584K vm_area_struct<br> 6384 6223 97% 0.04K 76 84 304K sysfs_dir_cache<br> 5989 5823 97% 0.03K 53 113 212K size-32<br> 5074 4833 95%
0.06K 86 59 344K size-64<br> 4488 4488 100% 0.33K 408 11 1632K inode_cache<br> 2610 2561 98% 0.12K 87 30 348K size-128<br> 2540 1213 47% 0.01K 10 254 40K anon_vma
<br> 2380 1475 61% 0.19K 119 20 476K filp<br> 1430 1334 93% 0.35K 130 11 520K proc_inode_cache<br> 1326 1031 77% 0.05K 17 78 68K selinux_inode_security<br>
1320 1304 98% 0.25K 88 15 352K size-256<br> 1015 629 61% 0.02K 5 203 20K biovec-1<br> 1008 125 12% 0.05K 14 72 56K journal_head<br> 930 841 90%
0.12K 31 30 124K bio<br> 920 666 72% 0.19K 46 20 184K skbuff_head_cache<br> 798 757 94% 2.00K 399 2 1596K size-2048<br> 791 690 87% 0.03K 7 113 28K ocfs2_em_ent
<br> 784 748 95% 0.50K 98 8 392K size-512<br> 590 503 85% 0.06K 10 59 40K biovec-4<br> 564 562 99% 0.88K 141 4 564K ocfs2_inode_cache<br> 546 324 59%
0.05K 7 78 28K delayacct_cache<br> 531 497 93% 0.43K 59 9 236K shmem_inode_cache<br> 520 487 93% 0.19K 26 20 104K biovec-16<br> 505 327 64% 0.04K
5 101 20K pid<br> 500 487 97% 0.75K 100 5 400K biovec-64<br> 460 376 81% 0.04K 5 92 20K Acpi-Operand<br> 452 104 23% 0.03K 4 113 16K pgd
<br> 347 347 100% 4.00K 347 1 1388K size-4096<br> 338 303 89% 0.02K 2 169 8K Acpi-Namespace<br> 336 175 52% 0.04K 4 84 16K crq_pool<br></p></div>
<div> </div>
<div>slabtop server2:</div>
<div> </div>
<div>
<p> Active / Total Objects (% used) : 886188 / 907066 (97.7%)<br> Active / Total Slabs (% used) : 20722 / 20723 (100.0%)<br> Active / Total Caches (% used) : 104 / 147 (70.7%)<br> Active / Total Size (% used) :
76455.01K / 78620.30K (97.2%)<br> Minimum / Average / Maximum Object : 0.01K / 0.09K / 128.00K</p>
<p> OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME<br>736344 729536 99% 0.05K 10227 72 40908K buffer_head<br> 37671 34929 92% 0.13K 1299 29 5196K dentry_cache<br> 32872 32666 99%
0.48K 4109 8 16436K ext3_inode_cache<br> 20118 20097 99% 0.27K 1437 14 5748K radix_tree_node<br> 10912 9437 86% 0.09K 248 44 992K vm_area_struct<br> 7714 7452 96%
0.02K 38 203 152K dm_tio<br> 7605 7452 97% 0.02K 45 169 180K dm_io<br> 6552 6475 98% 0.04K 78 84 312K sysfs_dir_cache<br> 5763 5461 94% 0.03K 51 113 204K size-32
<br> 4661 4276 91% 0.06K 79 59 316K size-64<br> 4572 3569 78% 0.01K 18 254 72K anon_vma<br> 4440 3770 84% 0.19K 222 20 888K filp<br> 2288 1791 78%
0.35K 208 11 832K proc_inode_cache<br> 1980 1884 95% 0.12K 66 30 264K size-128<br> 1804 1471 81% 0.33K 164 11 656K inode_cache<br> 1404 1046 74% 0.05K 18 78 72K selinux_inode_security
<br> 1060 670 63% 0.19K 53 20 212K skbuff_head_cache<br> 904 767 84% 0.03K 8 113 32K ocfs2_em_ent<br> 870 870 100% 0.12K 29 30 116K bio<br> 840 816 97%
0.50K 105 8 420K size-512<br> 812 578 71% 0.02K 4 203 16K biovec-1<br> 810 800 98% 0.25K 54 15 216K size-256<br> 800 761 95% 2.00K 400 2 1600K size-2048
<br> 549 514 93% 0.43K 61 9 244K shmem_inode_cache<br> 546 289 52% 0.05K 7 78 28K delayacct_cache<br> 531 478 90% 0.06K 9 59 36K biovec-4<br>
520 471 90% 0.19K 26 20 104K biovec-16<br> 505 292 57% 0.04K 5 101 20K pid<br> 490 479 97% 0.75K 98 5 392K biovec-64<br> 460 376 81% 0.04K
5 92 20K Acpi-Operand<br> 452 175 38% 0.03K 4 113 16K pgd<br> 440 357 81% 0.38K 44 10 176K sock_inode_cache<br> 369 303 82% 0.44K 41 9 164K UNIX
<br> 360 320 88% 1.00K 90 4 360K size-1024<br> 354 124 35% 0.06K 6 59 24K fs_cache<br> 351 351 100% 4.00K 351 1 1404K pmd<br></p>
<p>uptime server1:</p>
<p>[root@impax ~]# uptime<br> 10:21:09 up 18:16, 1 user, load average: 7.22, 7.72, 7.83<br></p>
<p>uptime server2:</p>
<p>[root@impaxdb ~]# uptime<br> 10:21:17 up 18:16, 1 user, load average: 8.79, 9.02, 8.98<br></p></div>
<div>yours</div>
<div> </div>
<div>arnold<br> </div>
<div><span class="e" id="q_11496375f2de82b9_1">
<div><span class="gmail_quote">On 8/24/07, <b class="gmail_sendername">Alexei_Roudnev</b> <<a onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:Alexei_Roudnev@exigengroup.com" target="_blank">Alexei_Roudnev@exigengroup.com
</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">
<div bgcolor="#ffffff">
<div><font size="2">Run and send here, please:</font></div>
<div><font size="2"></font> </div>
<div><font size="2">vmstat 5</font></div>
<div><font size="2">(5 lines)</font></div>
<div><font size="2"></font> </div>
<div><font size="2">iostat -x 5</font></div>
<div><font size="2">(2 - 3 outputs)</font></div>
<div><font size="2"></font> </div>
<div><font size="2">top</font></div>
<div><font size="2">(1 screen)</font></div>
<div><font size="2"></font> </div>
<div><font size="2">slabtop (or equivalent, I dont remember)</font></div>
<div><font size="2"></font> </div>
<div><font size="2"></font> </div>
<div>----- Original Message ----- </div>
<blockquote style="PADDING-RIGHT: 0px; PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: #000000 2px solid; MARGIN-RIGHT: 0px"><span>
<div style="BACKGROUND: #e4e4e4; FONT: 10pt arial"><b>From:</b> <a title="arnold.maderthaner@j4care.com" onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:arnold.maderthaner@j4care.com" target="_blank">
Arnold Maderthaner</a> </div>
<div style="FONT: 10pt arial"><b>To:</b> <a title="Sunil.Mushran@oracle.com" onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:Sunil.Mushran@oracle.com" target="_blank">Sunil Mushran</a> ; <a title="ocfs2-users@oss.oracle.com" onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:ocfs2-users@oss.oracle.com" target="_blank">
ocfs2-users@oss.oracle.com</a> </div>
<div style="FONT: 10pt arial"><b>Sent:</b> Thursday, August 23, 2007 4:12 AM</div>
<div style="FONT: 10pt arial"><b>Subject:</b> Re: [Ocfs2-users] Slow concurrent actions on the same LVM logicalvolume</div>
<div><br> </div></span>
<div><span>Hi !<br><br>I watched the servers today and noticed that the server load is very high but not much is working on the server:<br>server1:<br> 16:37:11 up 32 min, 1 user, load average: 6.45, 7.90, 7.45<br>server2:
<br> 16:37:19 up 32 min, 4 users, load average: 9.79, 9.76, 8.26<br>I also tried to do some selects on the Oracle database today.<br>A "select count(*) from xyz", where xyz has 120,000 rows, took about 10 sec.
<br>The process doing the filesystem access is always the first one. <br>And every command (ls, du, vgdisplay, lvdisplay, dd, cp, ...) which needs access to an ocfs2 filesystem seems to block, or it seems to me that there is a lock.
<br>Can anyone please help me ?<br><br>yours<br><br>Arnold<br><br>
<div><span class="gmail_quote">On 8/22/07, <b class="gmail_sendername">Arnold Maderthaner</b> <<a onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:arnold.maderthaner@j4care.com" target="_blank">arnold.maderthaner@j4care.com
</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0pt 0pt 0pt 0.8ex; BORDER-LEFT: rgb(204,204,204) 1px solid">There is not really a high server load when writing to the disks but will try it asap. what do you mean with AST ?
<br>Can I enable some debuging on ocfs2 ? We have 2 Dell servers with about 4gb of ram each and 2 Dual Core CPUs with 2GHZ each. <br>Can you please describe which tests todo again ? <br><br>Thx for your help<br><br>Arnold
<div><span><br><br>
<div><span class="gmail_quote">On 8/22/07, <b class="gmail_sendername">Sunil Mushran </b><<a onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:Sunil.Mushran@oracle.com" target="_blank"> Sunil.Mushran@oracle.com
</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0pt 0pt 0pt 0.8ex; BORDER-LEFT: rgb(204,204,204) 1px solid">Repeat the first test. This time run top on the first server. Which <br>process is eating the cpu? From the time it appears that the second
<br>server is waiting for the AST that the first server is slow to send. And<br>it could be slow because it may be busy flushing the data to disk.<br>How much memory/cpu do these servers have?<br><br>As far as the second test goes, what is the concurrent write performance
<br>directly to the LV (minus fs).<br><br>On Wed, Aug 22, 2007 at 01:14:02PM +0530, Arnold Maderthaner wrote: <br>> Hi 2 all !<br>><br>> I have problems with concurrent filesystem actions on a ocfs2<br>> filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=
1.2.6<br>> F.e.: If I have a LV called testlv which is mounted on /mnt on both <br>> servers and I do a "dd if=/dev/zero of=/mnt/test.a bs=1024<br>> count=1000000" on server 1 and do at the same time a du -hs
<br>> /mnt/test.a it takes about 5 seconds for du -hs to execute:<br>> 270M test.a<br>><br>> real 0m3.627s<br>> user 0m0.000s<br>> sys 0m0.002s<br>><br>> or even longer. If I do a du -hs on a file which is not written by
<br>> another server, it's fast:<br>> 977M test.b<br>><br>> real 0m0.001s<br>> user 0m0.001s<br>> sys 0m0.000s<br>><br>> If I write 2 different files from both servers to the same LV I get
<br>> only 2.8mb/sec.<br>> If I write 2 different files from both servers to a different LV <br>> (testlv,test2lv) then I get about 30mb/sec on each write (so 60mb/sec<br>> combined, which is very good).<br>> Does anyone have an idea what could be wrong? Are there any mount
<br>> options for ocfs2 which could help me? Is this an lvm2 or ocfs2 problem <br>> (locking, blocking, ... issue)?<br>><br>> Any help will be appreciated.<br>><br>> yours<br>><br>> Arnold<br>> --
<br>> Arnold Maderthaner<br>> J4Care Inc.<br>><br>> _______________________________________________ <br>> Ocfs2-users mailing list<br>> <a onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:Ocfs2-users@oss.oracle.com" target="_blank">
Ocfs2-users@oss.oracle.com</a><br>> <a onclick="return top.js.OpenExtLink(window,event,this)" href="http://oss.oracle.com/mailman/listinfo/ocfs2-users" target="_blank">http://oss.oracle.com/mailman/listinfo/ocfs2-users
</a><br></blockquote></div><br><br clear="all"><br></span></div><span>-- <br>Arnold Maderthaner<br>J4Care Inc. </span></blockquote></div><br><br clear="all"><br>-- <br>Arnold Maderthaner<br>J4Care Inc. </span></div>
<p>
<hr>
<p></p>_______________________________________________<span><br>Ocfs2-users mailing list<br><a onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:Ocfs2-users@oss.oracle.com" target="_blank">Ocfs2-users@oss.oracle.com
</a><br><a onclick="return top.js.OpenExtLink(window,event,this)" href="http://oss.oracle.com/mailman/listinfo/ocfs2-users" target="_blank">http://oss.oracle.com/mailman/listinfo/ocfs2-users</a></span>
<p></p>
<p></p></p></blockquote></div></blockquote></div><br><br clear="all"><br></span></div><span class="sg">-- <br>Arnold Maderthaner<br>J4Care Inc. </span></blockquote></div><br><br clear="all"><br>-- <br>Arnold Maderthaner<br>
J4Care Inc.