<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="MSHTML 6.00.6000.16525" name=GENERATOR>
<STYLE></STYLE>
</HEAD>
<BODY bgColor=#ffffff>
<DIV><FONT face=Arial size=2>You should collect the same data under a heavier load (you
had 0-1 active tasks while running these reports).</FONT></DIV>
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>But look here:</FONT></DIV>
<DIV><PRE>
Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await  svctm  %util
sdc        0.00    0.00  4.80  4.80   14.40    4.80      2.00      7.21  1560.12  63.67  61.12
</PRE></DIV>
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>This device is waiting for I/O to complete about 60%
of the time while doing almost nothing (very little writing there). Can you
check it first of all?</FONT></DIV>
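<DIV><FONT face=Arial size=2>For scanning reports like this one, a small filter can print only the devices whose average wait is suspiciously high. This is just a sketch: the field positions assume the sysstat column layout shown in this thread (await is field 10, %util field 12), so adjust them if your sysstat version differs.</FONT></DIV>

```shell
# flag_slow_devices: from saved `iostat -x` output, print devices whose
# await column exceeds 100 ms. Field numbers assume the layout above
# (await = field 10, %util = field 12); adjust for other sysstat versions.
flag_slow_devices() {
  awk 'NF >= 12 && $10 ~ /^[0-9]/ && $10 > 100 { print $1, "await=" $10 "ms", "util=" $12 "%" }' "$@"
}

# Example against the sdc line from the report above:
echo "sdc 0.00 0.00 4.80 4.80 14.40 4.80 2.00 7.21 1560.12 63.67 61.12" | flag_slow_devices
# -> sdc await=1560.12ms util=61.12%
```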
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>It may be that you have a bad driver or a bad device
that experiences long delays during writes. While such a problem can go unnoticed on
a normal file system (because of buffering), it can cause significant delays on a
clustered file system (which waits for I/O to complete in many
cases).</FONT></DIV>
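<DIV><FONT face=Arial size=2>One quick way to test this is to time a small write that is forced to the device with fsync, so buffering cannot hide the latency. A sketch; the target path is a placeholder, so point it at a file on the volume backed by the suspect device (sdc above):</FONT></DIV>

```shell
# Write 10 MB and fsync it before dd returns, so the timing reflects the
# device rather than the page cache. TARGET is a placeholder -- use a
# file on the filesystem backed by the suspect device.
TARGET=${TARGET:-/tmp/ioprobe.$$}
time dd if=/dev/zero of="$TARGET" bs=1M count=10 conv=fsync
rm -f "$TARGET"
```

On a healthy local disk this typically completes in well under a second; multi-second times point at the device or driver rather than at OCFS2.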
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2></FONT> </DIV>
<BLOCKQUOTE
style="PADDING-RIGHT: 0px; PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: #000000 2px solid; MARGIN-RIGHT: 0px">
<DIV style="FONT: 10pt arial">----- Original Message ----- </DIV>
<DIV
style="BACKGROUND: #e4e4e4; FONT: 10pt arial; font-color: black"><B>From:</B>
<A title=arnold.maderthaner@j4care.com
href="mailto:arnold.maderthaner@j4care.com">Arnold Maderthaner</A> </DIV>
<DIV style="FONT: 10pt arial"><B>To:</B> <A
title=Alexei_Roudnev@exigengroup.com
href="mailto:Alexei_Roudnev@exigengroup.com">Alexei_Roudnev</A> </DIV>
<DIV style="FONT: 10pt arial"><B>Cc:</B> <A title=Sunil.Mushran@oracle.com
href="mailto:Sunil.Mushran@oracle.com">Sunil Mushran</A> ; <A
title=ocfs2-users@oss.oracle.com
href="mailto:ocfs2-users@oss.oracle.com">ocfs2-users@oss.oracle.com</A> </DIV>
<DIV style="FONT: 10pt arial"><B>Sent:</B> Thursday, August 23, 2007 9:53
PM</DIV>
<DIV style="FONT: 10pt arial"><B>Subject:</B> Re: [Ocfs2-users] Slow
concurrent actions on the same LVM logicalvolume</DIV>
<DIV><BR></DIV>
<DIV>vmstat server1:</DIV>
<DIV><PRE>
[root@impax ~]# vmstat 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd    free    buff   cache   si   so    bi    bo    in    cs us sy  id wa st
 0  0      0 2760468  119924  872240    0    0     4     4   107    63  0  0  99  0  0
 0  2      0 2760468  119928  872240    0    0     4    30  1095   277  1  0  99  0  0
 0  0      0 2760468  119940  872248    0    0     7    11  1076   232  0  0 100  0  0
 0  0      0 2760468  119948  872272    0    0     4    18  1084   244  0  0 100  0  0
 0  0      0 2760468  119948  872272    0    0     7    34  1063   220  0  0 100  0  0
 0  0      0 2760468  119948  872272    0    0     0     1  1086   243  0  0 100  0  0
 0  0      0 2760716  119956  872272    0    0     7    10  1065   234  2  0  98  0  0
</PRE></DIV>
<DIV>vmstat server2:</DIV>
<DIV><PRE>
[root@impaxdb ~]# vmstat 5
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd    free    buff   cache   si   so    bi    bo    in    cs us sy  id wa st
 0  0  24676  143784   46428 3652848    0    0    11    20   107   122  0  0  84 16  0
 0  1  24676  143784   46452 3652832    0    0     4    14  1080   237  0  0  89 11  0
 0  1  24672  143660   46460 3652812    6    0    14    34  1074   246  0  0  90 10  0
 0  0  24672  143660   46460 3652852    0    0     7    22  1076   246  0  0  80 20  0
 0  1  24672  143660   46476 3652844    0    0     7    10  1068   227  0  0  85 15  0
</PRE></DIV>
<DIV>iostat server1:</DIV>
<DIV> </DIV>
<DIV>
<PRE>
[root@impax ~]# iostat -x 5
Linux 2.6.18-8.el5PAE (impax)   08/24/2007

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.07   0.00     0.04     0.45    0.00  99.44

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
sda        0.24    0.90  0.23  0.76   15.72   13.33     29.22      0.01    12.68     4.77   0.47
sdb        0.03    1.23  0.02  0.58    1.34    6.81     13.59      0.00     0.79     0.30   0.02
sdc        0.58    0.87  3.88  3.78   16.69   10.87      3.60      7.98  1042.08    89.69  68.68
dm-0       0.00    0.00  0.31  1.14    0.96    6.99      5.46      1.55  1061.28   459.00  66.92
dm-1       0.00    0.00  0.31  0.31    0.95    0.31      2.01      0.66  1054.16  1054.17  66.15
dm-2       0.00    0.00  0.31  0.31    0.95    0.31      2.01      0.66  1057.72  1057.84  66.24
dm-3       0.00    0.00  0.96  0.34    6.15    0.55      5.12      0.77   592.01   504.72  66.04
dm-4       0.00    0.00  0.32  0.33    0.96    0.41      2.14      0.66  1031.13  1028.75  66.00
dm-5       0.00    0.00  0.31  0.32    0.95    0.40      2.12      0.66  1037.27  1036.55  66.09
dm-6       0.00    0.00  0.31  0.31    0.94    0.31      2.01      0.66  1064.34  1064.45  66.37
dm-7       0.00    0.00  0.32  0.31    0.95    0.32      2.01      0.66  1043.05  1042.99  65.77
dm-8       0.00    0.00  0.31  0.31    0.95    0.32      2.03      0.66  1050.73  1048.23  65.97
dm-9       0.00    0.00  0.34  0.31    0.99    0.31      2.01      0.67  1027.41  1006.68  65.71
dm-10      0.00    0.00  0.32  0.32    0.95    0.32      2.01      0.68  1078.71  1050.22  66.41
dm-12      0.00    0.00  0.31  0.31    0.96    0.31      2.03      0.66  1065.33  1065.10  66.43
dm-13      0.00    0.00  0.04  1.27    1.27    2.54      2.91      0.00     2.43     0.10   0.01
dm-18      0.00    0.00  0.01  0.35    0.04    2.80      8.00      0.01    25.42     0.03   0.00
dm-19      0.00    0.00  0.00  0.18    0.01    1.47      8.00      0.00     0.42     0.21   0.00
dm-21      0.00    0.00  0.02  0.01    2.20    0.11     65.49      0.00     9.84     3.01   0.01
dm-26      0.00    0.00  0.01  0.05    0.12    0.43      9.21      0.00    11.66     7.05   0.04
dm-27      0.00    0.00  0.00  0.21    0.02    1.70      8.02      0.00    12.09     4.57   0.10
dm-28      0.00    0.00  0.26  0.11    8.90    0.91     26.40      0.01    14.94     1.85   0.07
dm-29      0.00    0.00  0.05  1.03    1.04    8.22      8.56      0.01    13.76     2.48   0.27

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     0.00     0.40    0.00  99.60

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
sda        0.00    2.80  0.00  2.20    0.00   40.00     18.18      0.03    12.27     4.73   1.04
sdb        0.00    0.40  0.00  0.40    0.00    1.60      4.00      0.00     5.50     5.50   0.22
sdc        0.00    0.00  4.80  4.80   14.40    4.80      2.00      7.21  1560.12    63.67  61.12
dm-0       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.50   750.25  60.02
dm-1       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1558.50   750.50  60.04
dm-2       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.50   752.75  60.22
dm-3       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1562.75   752.25  60.18
dm-4       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.50   752.50  60.20
dm-5       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1563.00   750.25  60.02
dm-6       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1558.50   750.25  60.02
dm-7       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.50   752.50  60.20
dm-8       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.75   750.25  60.02
dm-9       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1554.00   750.25  60.02
dm-10      0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1555.75   752.25  60.18
dm-12      0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1571.25   750.25  60.02
dm-13      0.00    0.00  0.00  0.80    0.00    1.60      2.00      0.01     8.25     2.75   0.22
dm-18      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-19      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-21      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-26      0.00    0.00  0.00  0.60    0.00    4.80      8.00      0.01    12.67     8.33   0.50
dm-27      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-28      0.00    0.00  0.00  1.80    0.00   14.40      8.00      0.02    10.22     2.44   0.44
dm-29      0.00    0.00  0.00  0.80    0.00    6.40      8.00      0.01    10.25     3.75   0.30

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.05   0.00     0.00     0.00    0.00  99.95

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
sda        0.00    0.20  0.00  0.60    0.00    6.40     10.67      0.01    14.67     6.00   0.36
sdb        0.00    0.40  0.00  0.40    0.00    1.60      4.00      0.00     0.00     0.00   0.00
sdc        0.00    0.00  4.80  4.80   14.40    4.80      2.00      8.81  1292.92    77.06  73.98
dm-0       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.74  1723.33  1233.00  73.98
dm-1       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1722.00  1223.33  73.40
dm-2       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1726.33  1222.00  73.32
dm-3       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1725.67  1222.00  73.32
dm-4       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1726.00  1222.00  73.32
dm-5       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1722.33  1222.00  73.32
dm-6       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1722.00  1223.67  73.42
dm-7       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1726.00  1222.00  73.32
dm-8       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1722.33  1222.00  73.32
dm-9       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1723.00  1214.33  72.86
dm-10      0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1725.67  1222.00  73.32
dm-12      0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.74  1722.00  1231.67  73.90
dm-13      0.00    0.00  0.00  0.80    0.00    1.60      2.00      0.00     0.00     0.00   0.00
dm-18      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-19      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-21      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-26      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-27      0.00    0.00  0.00  0.60    0.00    4.80      8.00      0.01    13.00     4.33   0.26
dm-28      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-29      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
</PRE></DIV>
<DIV><BR>iostat server2:</DIV>
<PRE>
[root@impaxdb ~]# iostat -x 5
Linux 2.6.18-8.el5PAE (impaxdb)   08/24/2007

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.48   0.00     0.10    15.70    0.00  83.72

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
sda        0.28    1.38  0.32  0.82   18.52   17.59     31.55      0.00     1.98     0.83   0.10
sdb        0.01    1.73  0.02  0.04    3.20   14.15    297.21      0.00     9.18     0.87   0.01
sdc        6.44   14.59  3.94  4.26   65.36  126.12     23.36      9.52  1160.63    91.33  74.88
dm-0       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1214.34  1214.01  71.18
dm-1       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1206.47  1206.38  70.95
dm-2       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1207.35  1207.27  70.96
dm-3       0.00    0.00  7.10  9.84   55.33   76.63      7.79     25.51  1506.27    43.35  73.42
dm-4       0.00    0.00  0.32  5.48    1.10   41.82      7.39      9.21  1586.36   122.41  71.06
dm-5       0.00    0.00  0.30  0.88    0.90    5.02      5.02      0.72   606.80   600.72  70.89
dm-6       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1206.90  1206.94  70.91
dm-7       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1202.65  1202.69  70.82
dm-8       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1204.47  1204.43  70.84
dm-9       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1202.38  1202.42  70.77
dm-10      0.00    0.00  0.29  0.29    0.88    0.29      2.01      0.71  1208.53  1208.48  70.93
dm-12      0.00    0.00  0.30  0.29    0.91    0.29      2.04      0.71  1202.82  1202.49  70.92
dm-13      0.00    0.00  0.00  0.00    0.01    0.00      6.99      0.00     0.47     0.06   0.00
dm-18      0.00    0.00  0.02  1.77    3.16   14.15      9.67      0.03    16.69     0.03   0.01
dm-19      0.00    0.00  0.00  0.00    0.01    0.00      8.02      0.00     0.69     0.34   0.00
dm-20      0.00    0.00  0.05  0.28    3.48    2.26     17.18      0.00     3.37     0.47   0.02
dm-25      0.00    0.00  0.01  0.05    0.16    0.44      9.68      0.00     1.20     0.65   0.00
dm-26      0.00    0.00  0.00  0.14    0.02    1.11      8.03      0.00     3.28     0.05   0.00
dm-27      0.00    0.00  0.31  0.15    9.69    1.24     23.61      0.00     9.39     0.88   0.04
dm-28      0.00    0.00  0.06  1.08    1.24    8.61      8.65      0.00     1.04     0.21   0.02

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.05   0.00     0.05    23.62    0.00  76.27

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
sda        0.00    4.85  0.00  2.22    0.00   56.57     25.45      0.00     0.91     0.36   0.08
sdb        0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
sdc        0.00    1.21  4.85  5.45   14.55   24.24      3.76     12.51  1556.98    94.24  97.09
dm-0       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.97  2055.67  1602.00  97.09
dm-1       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2064.67  1589.00  96.30
dm-2       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2063.67  1582.67  95.92
dm-3       0.00    0.00  0.20  0.40    0.61    0.40      1.67      4.79  8988.33  1582.00  95.88
dm-4       0.00    0.00  0.20  1.21    0.61    6.87      5.29      0.96   886.57   679.14  96.04
dm-5       0.00    0.00  0.20  1.21    0.61    6.87      5.29      0.96   878.00   680.00  96.16
dm-6       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.97  2055.67  1593.00  96.55
dm-7       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2064.67  1584.33  96.02
dm-8       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2064.67  1586.33  96.14
dm-9       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2064.67  1583.33  95.96
dm-10      0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.97  2055.67  1597.67  96.83
dm-12      0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.97  2071.67  1595.67  96.71
dm-13      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-18      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-19      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-20      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-25      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-26      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-27      0.00    0.00  0.00  0.81    0.00    6.46      8.00      0.00     0.00     0.00   0.00
dm-28      0.00    0.00  0.00  4.85    0.00   38.79      8.00      0.01     2.17     0.17   0.08

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     0.00    19.85    0.00  80.15

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
sda        0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
sdb        0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
sdc        0.00    0.61  0.00  0.00    0.00    0.00      0.00      9.12     0.00     0.00  80.26
dm-0       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3196.00  64.70
dm-1       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3235.00  65.49
dm-2       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3254.00  65.87
dm-3       0.00    0.00  0.40  0.81    2.23    6.48      7.17      3.29     0.00   660.83  80.26
dm-4       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3248.00  65.75
dm-5       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3242.00  65.63
dm-6       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3223.00  65.24
dm-7       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3249.00  65.77
dm-8       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3243.00  65.65
dm-9       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3252.00  65.83
dm-10      0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3209.00  64.96
dm-12      0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3215.00  65.08
dm-13      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-18      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-19      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-20      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-25      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-26      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-27      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
dm-28      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
</PRE>
<DIV> </DIV>
<DIV>top server1:</DIV>
<DIV> </DIV>
<DIV> </DIV>
<DIV><PRE>
Tasks: 217 total,   1 running, 214 sleeping,   0 stopped,   2 zombie
Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.4%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4084444k total,  1323728k used,  2760716k free,   120072k buffers
Swap:  8193020k total,        0k used,  8193020k free,   872284k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
14245 root  16   0  2288  988  704 R    2  0.0  0:00.01 top
    1 root  15   0  2036  652  564 S    0  0.0  0:01.13 init
    2 root  RT   0     0    0    0 S    0  0.0  0:00.00 migration/0
    3 root  34  19     0    0    0 S    0  0.0  0:00.46 ksoftirqd/0
    4 root  RT   0     0    0    0 S    0  0.0  0:00.00 watchdog/0
    5 root  RT   0     0    0    0 S    0  0.0  0:00.00 migration/1
    6 root  34  19     0    0    0 S    0  0.0  0:00.07 ksoftirqd/1
    7 root  RT   0     0    0    0 S    0  0.0  0:00.00 watchdog/1
    8 root  RT   0     0    0    0 S    0  0.0  0:00.00 migration/2
    9 root  34  19     0    0    0 S    0  0.0  0:00.19 ksoftirqd/2
   10 root  RT   0     0    0    0 S    0  0.0  0:00.00 watchdog/2
   11 root  RT   0     0    0    0 S    0  0.0  0:00.00 migration/3
   12 root  34  19     0    0    0 S    0  0.0  0:00.23 ksoftirqd/3
   13 root  RT   0     0    0    0 S    0  0.0  0:00.00 watchdog/3
   14 root  10  -5     0    0    0 S    0  0.0  0:00.00 events/0
   15 root  10  -5     0    0    0 S    0  0.0  0:00.00 events/1
   16 root  10  -5     0    0    0 S    0  0.0  0:00.00 events/2
   17 root  10  -5     0    0    0 S    0  0.0  0:00.00 events/3
   18 root  19  -5     0    0    0 S    0  0.0  0:00.00 khelper
   19 root  10  -5     0    0    0 S    0  0.0  0:00.00 kthread
   25 root  10  -5     0    0    0 S    0  0.0  0:00.00 kblockd/0
   26 root  10  -5     0    0    0 S    0  0.0  0:00.04 kblockd/1
   27 root  10  -5     0    0    0 S    0  0.0  0:00.00 kblockd/2
   28 root  10  -5     0    0    0 S    0  0.0  0:00.06 kblockd/3
   29 root  14  -5     0    0    0 S    0  0.0  0:00.00 kacpid
  147 root  14  -5     0    0    0 S    0  0.0  0:00.00 cqueue/0
  148 root  14  -5     0    0    0 S    0  0.0  0:00.00 cqueue/1
  149 root  14  -5     0    0    0 S    0  0.0  0:00.00 cqueue/2
  150 root  14  -5     0    0    0 S    0  0.0  0:00.00 cqueue/3
  153 root  10  -5     0    0    0 S    0  0.0  0:00.00 khubd
  155 root  10  -5     0    0    0 S    0  0.0  0:00.00 kseriod
  231 root  19   0     0    0    0 S    0  0.0  0:00.00 pdflush
  232 root  15   0     0    0    0 S    0  0.0  0:00.15 pdflush
  233 root  14  -5     0    0    0 S    0  0.0  0:00.00 kswapd0
  234 root  14  -5     0    0    0 S    0  0.0  0:00.00 aio/0
  235 root  14  -5     0    0    0 S    0  0.0  0:00.00 aio/1
  236 root  14  -5     0    0    0 S    0  0.0  0:00.00 aio/2
</PRE></DIV>
<DIV> </DIV>
<DIV>top server2:</DIV>
<DIV>
<PRE>
Tasks: 264 total,   1 running, 260 sleeping,   0 stopped,   3 zombie
Cpu(s):  0.5%us,  0.1%sy,  0.0%ni, 83.7%id, 15.7%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4084444k total,  3941216k used,   143228k free,    46608k buffers
Swap:  8193020k total,    24660k used,  8168360k free,  3652864k cached

  PID USER  PR  NI  VIRT   RES  SHR S %CPU %MEM    TIME+ COMMAND
 9507 root  15   0  2292  1020  704 R    2  0.0  0:00.01 top
    1 root  15   0  2032   652  564 S    0  0.0  0:01.20 init
    2 root  RT   0     0     0    0 S    0  0.0  0:00.00 migration/0
    3 root  34  19     0     0    0 S    0  0.0  0:00.25 ksoftirqd/0
    4 root  RT   0     0     0    0 S    0  0.0  0:00.00 watchdog/0
    5 root  RT   0     0     0    0 S    0  0.0  0:00.00 migration/1
    6 root  34  19     0     0    0 S    0  0.0  0:00.42 ksoftirqd/1
    7 root  RT   0     0     0    0 S    0  0.0  0:00.00 watchdog/1
    8 root  RT   0     0     0    0 S    0  0.0  0:00.00 migration/2
    9 root  34  19     0     0    0 S    0  0.0  0:00.56 ksoftirqd/2
   10 root  RT   0     0     0    0 S    0  0.0  0:00.00 watchdog/2
   11 root  RT   0     0     0    0 S    0  0.0  0:00.00 migration/3
   12 root  39  19     0     0    0 S    0  0.0  0:00.08 ksoftirqd/3
   13 root  RT   0     0     0    0 S    0  0.0  0:00.00 watchdog/3
   14 root  10  -5     0     0    0 S    0  0.0  0:00.03 events/0
   15 root  10  -5     0     0    0 S    0  0.0  0:00.00 events/1
   16 root  10  -5     0     0    0 S    0  0.0  0:00.02 events/2
   17 root  10  -5     0     0    0 S    0  0.0  0:00.02 events/3
   18 root  10  -5     0     0    0 S    0  0.0  0:00.00 khelper
   19 root  11  -5     0     0    0 S    0  0.0  0:00.00 kthread
   25 root  10  -5     0     0    0 S    0  0.0  0:00.08 kblockd/0
   26 root  10  -5     0     0    0 S    0  0.0  0:00.00 kblockd/1
   27 root  10  -5     0     0    0 S    0  0.0  0:00.00 kblockd/2
   28 root  10  -5     0     0    0 S    0  0.0  0:00.08 kblockd/3
   29 root  18  -5     0     0    0 S    0  0.0  0:00.00 kacpid
  147 root  18  -5     0     0    0 S    0  0.0  0:00.00 cqueue/0
  148 root  20  -5     0     0    0 S    0  0.0  0:00.00 cqueue/1
  149 root  10  -5     0     0    0 S    0  0.0  0:00.00 cqueue/2
  150 root  10  -5     0     0    0 S    0  0.0  0:00.00 cqueue/3
  153 root  10  -5     0     0    0 S    0  0.0  0:00.00 khubd
  155 root  10  -5     0     0    0 S    0  0.0  0:00.00 kseriod
  233 root  15  -5     0     0    0 S    0  0.0  0:00.84 kswapd0
  234 root  20  -5     0     0    0 S    0  0.0  0:00.00 aio/0
  235 root  20  -5     0     0    0 S    0  0.0  0:00.00 aio/1
  236 root  20  -5     0     0    0 S    0  0.0  0:00.00 aio/2
  237 root  20  -5     0     0    0 S    0  0.0  0:00.00 aio/3
  409 root  11  -5     0     0    0 S    0  0.0  0:00.00 kpsmoused
</PRE>
<P>slabtop server1:</P>
<PRE>
 Active / Total Objects (% used)    : 301060 / 312027 (96.5%)
 Active / Total Slabs (% used)      : 12055 / 12055 (100.0%)
 Active / Total Caches (% used)     : 106 / 146 (72.6%)
 Active / Total Size (% used)       : 45714.46K / 46814.27K (97.7%)
 Minimum / Average / Maximum Object : 0.01K / 0.15K / 128.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
148176 148095  99%    0.05K   2058       72      8232K buffer_head
 44573  44497  99%    0.13K   1537       29      6148K dentry_cache
 32888  32878  99%    0.48K   4111        8     16444K ext3_inode_cache
 10878  10878 100%    0.27K    777       14      3108K radix_tree_node
  7943   7766  97%    0.02K     47      169       188K dm_io
  7917   7766  98%    0.02K     39      203       156K dm_tio
  6424   5414  84%    0.09K    146       44       584K vm_area_struct
  6384   6223  97%    0.04K     76       84       304K sysfs_dir_cache
  5989   5823  97%    0.03K     53      113       212K size-32
  5074   4833  95%    0.06K     86       59       344K size-64
  4488   4488 100%    0.33K    408       11      1632K inode_cache
  2610   2561  98%    0.12K     87       30       348K size-128
  2540   1213  47%    0.01K     10      254        40K anon_vma
  2380   1475  61%    0.19K    119       20       476K filp
  1430   1334  93%    0.35K    130       11       520K proc_inode_cache
  1326   1031  77%    0.05K     17       78        68K selinux_inode_security
  1320   1304  98%    0.25K     88       15       352K size-256
  1015    629  61%    0.02K      5      203        20K biovec-1
  1008    125  12%    0.05K     14       72        56K journal_head
   930    841  90%    0.12K     31       30       124K bio
   920    666  72%    0.19K     46       20       184K skbuff_head_cache
   798    757  94%    2.00K    399        2      1596K size-2048
   791    690  87%    0.03K      7      113        28K ocfs2_em_ent
   784    748  95%    0.50K     98        8       392K size-512
   590    503  85%    0.06K     10       59        40K biovec-4
   564    562  99%    0.88K    141        4       564K ocfs2_inode_cache
   546    324  59%    0.05K      7       78        28K delayacct_cache
   531    497  93%    0.43K     59        9       236K shmem_inode_cache
   520    487  93%    0.19K     26       20       104K biovec-16
   505    327  64%    0.04K      5      101        20K pid
   500    487  97%    0.75K    100        5       400K biovec-64
   460    376  81%    0.04K      5       92        20K Acpi-Operand
   452    104  23%    0.03K      4      113        16K pgd
   347    347 100%    4.00K    347        1      1388K size-4096
   338    303  89%    0.02K      2      169         8K Acpi-Namespace
   336    175  52%    0.04K      4       84        16K crq_pool
</PRE></DIV>
<DIV> </DIV>
<DIV>slabtop server2:</DIV>
<DIV> </DIV>
<DIV>
<PRE>
 Active / Total Objects (% used)    : 886188 / 907066 (97.7%)
 Active / Total Slabs (% used)      : 20722 / 20723 (100.0%)
 Active / Total Caches (% used)     : 104 / 147 (70.7%)
 Active / Total Size (% used)       : 76455.01K / 78620.30K (97.2%)
 Minimum / Average / Maximum Object : 0.01K / 0.09K / 128.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
736344 729536  99%    0.05K  10227       72     40908K buffer_head
 37671  34929  92%    0.13K   1299       29      5196K dentry_cache
 32872  32666  99%    0.48K   4109        8     16436K ext3_inode_cache
 20118  20097  99%    0.27K   1437       14      5748K radix_tree_node
 10912   9437  86%    0.09K    248       44       992K vm_area_struct
  7714   7452  96%    0.02K     38      203       152K dm_tio
  7605   7452  97%    0.02K     45      169       180K dm_io
  6552   6475  98%    0.04K     78       84       312K sysfs_dir_cache
  5763   5461  94%    0.03K     51      113       204K size-32
  4661   4276  91%    0.06K     79       59       316K size-64
  4572   3569  78%    0.01K     18      254        72K anon_vma
  4440   3770  84%    0.19K    222       20       888K filp
  2288   1791  78%    0.35K    208       11       832K proc_inode_cache
  1980   1884  95%    0.12K     66       30       264K size-128
  1804   1471  81%    0.33K    164       11       656K inode_cache
  1404   1046  74%    0.05K     18       78        72K selinux_inode_security
  1060    670  63%    0.19K     53       20       212K skbuff_head_cache
   904    767  84%    0.03K      8      113        32K ocfs2_em_ent
   870    870 100%    0.12K     29       30       116K bio
   840    816  97%    0.50K    105        8       420K size-512
   812    578  71%    0.02K      4      203        16K biovec-1
   810    800  98%    0.25K     54       15       216K size-256
   800    761  95%    2.00K    400        2      1600K size-2048
   549    514  93%    0.43K     61        9       244K shmem_inode_cache
   546    289  52%    0.05K      7       78        28K delayacct_cache
   531    478  90%    0.06K      9       59        36K biovec-4
   520    471  90%    0.19K     26       20       104K biovec-16
   505    292  57%    0.04K      5      101        20K pid
   490    479  97%    0.75K     98        5       392K biovec-64
   460    376  81%    0.04K      5       92        20K Acpi-Operand
   452    175  38%    0.03K      4      113        16K pgd
   440    357  81%    0.38K     44       10       176K sock_inode_cache
   369    303  82%    0.44K     41        9       164K UNIX
   360    320  88%    1.00K     90        4       360K size-1024
   354    124  35%    0.06K      6       59        24K fs_cache
   351    351 100%    4.00K    351        1      1404K pmd
</PRE>
<P>uptime server1:</P>
<P>[root@impax ~]# uptime<BR> 10:21:09 up 18:16, 1 user, load average: 7.22, 7.72, 7.83<BR></P>
<P>uptime server2:</P>
<P>[root@impaxdb ~]# uptime<BR> 10:21:17 up 18:16, 1 user, load average: 8.79, 9.02, 8.98<BR></P></DIV>
<DIV>yours</DIV>
<DIV> </DIV>
<DIV>arnold<BR> </DIV>
<DIV><SPAN class=gmail_quote>On 8/24/07, <B
class=gmail_sendername>Alexei_Roudnev</B> <<A
href="mailto:Alexei_Roudnev@exigengroup.com">Alexei_Roudnev@exigengroup.com</A>>
wrote:</SPAN>
<BLOCKQUOTE class=gmail_quote
style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">
<DIV bgcolor="#ffffff">
<DIV><FONT size=2>Run and send here, please:</FONT></DIV>
<DIV><FONT size=2></FONT> </DIV>
<DIV><FONT size=2>vmstat 5</FONT></DIV>
<DIV><FONT size=2>(5 lines)</FONT></DIV>
<DIV><FONT size=2></FONT> </DIV>
<DIV><FONT size=2>iostat -x 5</FONT></DIV>
<DIV><FONT size=2>(2 - 3 outputs)</FONT></DIV>
<DIV><FONT size=2></FONT> </DIV>
<DIV><FONT size=2>top</FONT></DIV>
<DIV><FONT size=2>(1 screen)</FONT></DIV>
<DIV><FONT size=2></FONT> </DIV>
<DIV><FONT size=2>slabtop (or equivalent, I don't remember)</FONT></DIV>
<DIV><FONT size=2></FONT> </DIV>
<DIV><FONT size=2></FONT> </DIV>
<DIV>----- Original Message ----- </DIV>
<BLOCKQUOTE
style="PADDING-RIGHT: 0px; PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: #000000 2px solid; MARGIN-RIGHT: 0px"><SPAN
class=q>
<DIV style="BACKGROUND: #e4e4e4; FONT: 10pt arial"><B>From:</B> <A
title=arnold.maderthaner@j4care.com
onclick="return top.js.OpenExtLink(window,event,this)"
href="mailto:arnold.maderthaner@j4care.com" target=_blank>Arnold
Maderthaner</A> </DIV>
<DIV style="FONT: 10pt arial"><B>To:</B> <A title=Sunil.Mushran@oracle.com
onclick="return top.js.OpenExtLink(window,event,this)"
href="mailto:Sunil.Mushran@oracle.com" target=_blank>Sunil Mushran</A> ;
<A title=ocfs2-users@oss.oracle.com
onclick="return top.js.OpenExtLink(window,event,this)"
href="mailto:ocfs2-users@oss.oracle.com"
target=_blank>ocfs2-users@oss.oracle.com</A> </DIV>
<DIV style="FONT: 10pt arial"><B>Sent:</B> Thursday, August 23, 2007 4:12
AM</DIV>
<DIV style="FONT: 10pt arial"><B>Subject:</B> Re: [Ocfs2-users] Slow
concurrent actions on the same LVM logicalvolume</DIV>
<DIV><BR> </DIV></SPAN>
<DIV><SPAN class=e id=q_1149408bed155e69_2>Hi!<BR><BR>I watched the
servers today and noticed that the load is very high even though not much
is running on them:<BR>server1:<BR> 16:37:11 up 32 min, 1 user, load average: 6.45, 7.90, 7.45<BR>server2:<BR> 16:37:19
up 32 min, 4 users, load average: 9.79, 9.76, 8.26<BR>I also
ran some selects on the Oracle database today: a "select
count(*) from xyz" where xyz has 120 000 rows took about 10 sec.
<BR>The process doing the filesystem access is always the first one.
<BR>Every command (ls, du, vgdisplay, lvdisplay, dd, cp, ...) that needs
access to an OCFS2 filesystem seems to block, as if it were waiting on a lock.
<BR>Can anyone please help me?<BR><BR>yours<BR><BR>Arnold<BR><BR>
<DIV><SPAN class=gmail_quote>On 8/22/07, <B class=gmail_sendername>Arnold
Maderthaner</B> <<A
onclick="return top.js.OpenExtLink(window,event,this)"
href="mailto:arnold.maderthaner@j4care.com"
target=_blank>arnold.maderthaner@j4care.com </A>> wrote:</SPAN>
<BLOCKQUOTE class=gmail_quote
style="PADDING-LEFT: 1ex; MARGIN: 0pt 0pt 0pt 0.8ex; BORDER-LEFT: rgb(204,204,204) 1px solid">There
is not really a high server load when writing to the disks but will try
it asap. what do you mean with AST ? <BR>Can I enable some debuging on
ocfs2 ? We have 2 Dell servers with about 4gb of ram each and 2 Dual
Core CPUs with 2GHZ each. <BR>Can you please describe which tests todo
again ? <BR><BR>Thx for your help<BR><BR>Arnold
<DIV><SPAN><BR><BR>
<DIV><SPAN class=gmail_quote>On 8/22/07, <B class=gmail_sendername>Sunil
Mushran </B><<A
onclick="return top.js.OpenExtLink(window,event,this)"
href="mailto:Sunil.Mushran@oracle.com" target=_blank>
Sunil.Mushran@oracle.com </A>> wrote:</SPAN>
<BLOCKQUOTE class=gmail_quote
style="PADDING-LEFT: 1ex; MARGIN: 0pt 0pt 0pt 0.8ex; BORDER-LEFT: rgb(204,204,204) 1px solid">Repeat
the first test. This time run top on the first server. Which
<BR>process is eating the cpu? From the time it appears that the
second <BR>server is waiting for the AST that the first server is slow
to send. And<BR>it could be slow because it may be busy flushing the
data to disk.<BR>How much memory/cpu do these servers have?<BR><BR>As
far as the second test goes, what is the concurrent write performance
<BR>directly to the LV (minus fs).<BR><BR>On Wed, Aug 22, 2007 at
01:14:02PM +0530, Arnold Maderthaner wrote: <BR>> Hi 2 all
!<BR>><BR>> I have problems with concurrent filesystem actions
on a ocfs2<BR>> filesystem which is mounted by 2 nodes. OS=RH5ES
and OCFS2= 1.2.6<BR>> F.e.: If I have a LV called testlv which is
mounted on /mnt on both <BR>> servers and I do a "dd if=/dev/zero
of=/mnt/test.a bs=1024<BR>> count=1000000" on server 1 and do at
the same time a du -hs <BR>> /mnt/test.a it takes about 5 seconds
for du -hs to execute:<BR>> 270M
test.a<BR>><BR>> real 0m3.627s<BR>>
user 0m0.000s<BR>>
sys 0m0.002s<BR>><BR>> or even longer.
If i do a du -hs ond a file which is not written by <BR>> another
server its fast:<BR>> 977M
test.b<BR>><BR>> real 0m0.001s<BR>>
user 0m0.001s<BR>>
sys 0m0.000s<BR>><BR>> If I write 2
different files from both servers to the same LV I get <BR>> only
2.8mb/sec.<BR>> If I write 2 different files from both servers to a
different LV <BR>> (testlv,test2lv) then I get about 30mb/sec on
each write (so 60mb/sec<BR>> combined which is very good).<BR>>
does anyone has an idea what could be wrong. Are there any mount
<BR>> options for ocfs2 which could help me? is this a lvm2 or
ocfs2 problem <BR>> (locking,blocking,.... issue) ?<BR>><BR>>
Any help will be appriciated.<BR>><BR>> yours<BR>><BR>>
Arnold<BR>> -- <BR>> Arnold Maderthaner<BR>> J4Care
Inc.<BR>><BR>> _______________________________________________
<BR>> Ocfs2-users mailing list<BR>> <A
onclick="return top.js.OpenExtLink(window,event,this)"
href="mailto:Ocfs2-users@oss.oracle.com"
target=_blank>Ocfs2-users@oss.oracle.com</A><BR>> <A
onclick="return top.js.OpenExtLink(window,event,this)"
href="http://oss.oracle.com/mailman/listinfo/ocfs2-users"
target=_blank>http://oss.oracle.com/mailman/listinfo/ocfs2-users
</A><BR></BLOCKQUOTE></DIV><BR><BR clear=all><BR></SPAN></DIV><SPAN>--
<BR>Arnold Maderthaner<BR>J4Care Inc. </SPAN></BLOCKQUOTE></DIV><BR><BR
clear=all><BR>-- <BR>Arnold Maderthaner<BR>J4Care Inc. </SPAN></DIV>
<P>
<HR>
<P></P>_______________________________________________<SPAN
class=q><BR>Ocfs2-users mailing list<BR><A
onclick="return top.js.OpenExtLink(window,event,this)"
href="mailto:Ocfs2-users@oss.oracle.com"
target=_blank>Ocfs2-users@oss.oracle.com </A><BR><A
onclick="return top.js.OpenExtLink(window,event,this)"
href="http://oss.oracle.com/mailman/listinfo/ocfs2-users"
target=_blank>http://oss.oracle.com/mailman/listinfo/ocfs2-users</A></SPAN>
<P></P>
<P></P></BLOCKQUOTE></DIV></BLOCKQUOTE></DIV><BR><BR clear=all><BR>--
<BR>Arnold Maderthaner<BR>J4Care Inc. </BLOCKQUOTE></BODY></HTML>