OCFSv2 reads and writes (correct? I don't know for sure) heartbeat data from/to
the volume every few seconds, so that may explain why it doesn't work properly
in your case.

Are you sure that you don't experience FC-level problems (such as reconnections
every 10-20 seconds)?

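For example (a rough sketch; the device name sdc and the grep patterns are
guesses for your setup), something like this would show both the periodic
heartbeat writes and any FC resets:

    # watch the shared OCFS2 device; the heartbeat shows up as a small,
    # regular trickle of writes every couple of seconds
    iostat -x sdc 2

    # look for FC link resets / QLogic driver errors around the stall times
    dmesg | grep -iE 'qla|rport|scsi|abort|reset' | tail -n 50
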
----- Original Message -----
From: Arnold Maderthaner <arnold.maderthaner@j4care.com>
To: Sunil Mushran <Sunil.Mushran@oracle.com>
Cc: Alexei_Roudnev <Alexei_Roudnev@exigengroup.com>; ocfs2-users@oss.oracle.com
Sent: Friday, August 24, 2007 10:44 AM
Subject: Re: [Ocfs2-users] Slow concurrent actions on the same LVM logicalvolume
We have the newest QLogic driver installed, and I can write at full speed
(40 MB/sec) with OCFS2 (and also with ext3) when I'm the only one writing to
the device (even if it is mounted on both servers). What I noticed today is
that it is not so much a write problem as a read problem: when I'm reading
files, every 3-5 seconds the read halts for, say, 2-3 seconds. In between it
is fast, and even when I read the same data again it is fast.
Do you have any comments on the server load?

yours

Arnold
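(A rough way to put numbers on those stalls, using the /mnt/test.a file from
earlier in this thread as an example; iflag=direct assumes GNU dd so the page
cache doesn't hide the pauses:)

    # read a large file on the OCFS2 mount in 100 MB chunks and time each
    # chunk; a stalled chunk shows up as one iteration taking seconds
    for i in $(seq 0 9); do
        time dd if=/mnt/test.a of=/dev/null bs=1M count=100 skip=$((i * 100)) iflag=direct 2>/dev/null
    done
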
On 8/24/07, Sunil Mushran <Sunil.Mushran@oracle.com> wrote:
mounted.ocfs2 is still not devmapper savvy. We'll address that in the next
full release.

Arnold Maderthaner wrote:
> FYI the vg is set up on /dev/sdc1.
> Also I noticed a strange thing: the mounted command doesn't show anything.
>
> [root@impaxdb ~]# mounted.ocfs2 -d
> Device    FS    UUID    Label
> [root@impaxdb ~]# mounted.ocfs2 -f
> Device    FS    Nodes
> [root@impaxdb ~]#
>
> [root@impax ~]# mounted.ocfs2 -d
> Device    FS    UUID    Label
> [root@impax ~]# mounted.ocfs2 -f
> Device    FS    Nodes
> [root@impax ~]#
>
> but devices are mounted:
>
> [root@impax ~]# mount -t ocfs2
> /dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-xmbeanattrsLV on /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-installLV on /install type ocfs2 (rw,_netdev,heartbeat=local)
>
> [root@impaxdb ~]# mount -t ocfs2
> /dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-xmbeanattrsLV on /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-installLV on /install type ocfs2 (rw,_netdev,heartbeat=local)
>
> yours
>
> arnold
>
> On 8/24/07, Arnold Maderthaner <arnold.maderthaner@j4care.com> wrote:
>
> vmstat server1:
> [root@impax ~]# vmstat 5
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd    free   buff   cache   si   so    bi    bo    in    cs us sy  id wa st
>  0  0      0 2760468 119924  872240    0    0     4     4   107    63  0  0  99  0  0
>  0  2      0 2760468 119928  872240    0    0     4    30  1095   277  1  0  99  0  0
>  0  0      0 2760468 119940  872248    0    0     7    11  1076   232  0  0 100  0  0
>  0  0      0 2760468 119948  872272    0    0     4    18  1084   244  0  0 100  0  0
>  0  0      0 2760468 119948  872272    0    0     7    34  1063   220  0  0 100  0  0
>  0  0      0 2760468 119948  872272    0    0     0     1  1086   243  0  0 100  0  0
>  0  0      0 2760716 119956  872272    0    0     7    10  1065   234  2  0  98  0  0
>
> vmstat server2:
> [root@impaxdb ~]# vmstat 5
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd    free   buff   cache   si   so    bi    bo    in    cs us sy  id wa st
>  0  0  24676  143784  46428 3652848    0    0    11    20   107   122  0  0  84 16  0
>  0  1  24676  143784  46452 3652832    0    0     4    14  1080   237  0  0  89 11  0
>  0  1  24672  143660  46460 3652812    6    0    14    34  1074   246  0  0  90 10  0
>  0  0  24672  143660  46460 3652852    0    0     7    22  1076   246  0  0  80 20  0
>  0  1  24672  143660  46476 3652844    0    0     7    10  1068   227  0  0  85 15  0
>
> iostat server1:
>
> [root@impax ~]# iostat -x 5
> Linux 2.6.18-8.el5PAE (impax)   08/24/2007
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.07    0.00    0.04    0.45    0.00   99.44
>
> Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await   svctm  %util
> sda        0.24    0.90  0.23  0.76   15.72   13.33    29.22     0.01   12.68    4.77   0.47
> sdb        0.03    1.23  0.02  0.58    1.34    6.81    13.59     0.00    0.79    0.30   0.02
> sdc        0.58    0.87  3.88  3.78   16.69   10.87     3.60     7.98 1042.08   89.69  68.68
> dm-0       0.00    0.00  0.31  1.14    0.96    6.99     5.46     1.55 1061.28  459.00  66.92
> dm-1       0.00    0.00  0.31  0.31    0.95    0.31     2.01     0.66 1054.16 1054.17  66.15
> dm-2       0.00    0.00  0.31  0.31    0.95    0.31     2.01     0.66 1057.72 1057.84  66.24
> dm-3       0.00    0.00  0.96  0.34    6.15    0.55     5.12     0.77  592.01  504.72  66.04
> dm-4       0.00    0.00  0.32  0.33    0.96    0.41     2.14     0.66 1031.13 1028.75  66.00
> dm-5       0.00    0.00  0.31  0.32    0.95    0.40     2.12     0.66 1037.27 1036.55  66.09
> dm-6       0.00    0.00  0.31  0.31    0.94    0.31     2.01     0.66 1064.34 1064.45  66.37
> dm-7       0.00    0.00  0.32  0.31    0.95    0.32     2.01     0.66 1043.05 1042.99  65.77
> dm-8       0.00    0.00  0.31  0.31    0.95    0.32     2.03     0.66 1050.73 1048.23  65.97
> dm-9       0.00    0.00  0.34  0.31    0.99    0.31     2.01     0.67 1027.41 1006.68  65.71
> dm-10      0.00    0.00  0.32  0.32    0.95    0.32     2.01     0.68 1078.71 1050.22  66.41
> dm-12      0.00    0.00  0.31  0.31    0.96    0.31     2.03     0.66 1065.33 1065.10  66.43
> dm-13      0.00    0.00  0.04  1.27    1.27    2.54     2.91     0.00    2.43    0.10   0.01
> dm-18      0.00    0.00  0.01  0.35    0.04    2.80     8.00     0.01   25.42    0.03   0.00
> dm-19      0.00    0.00  0.00  0.18    0.01    1.47     8.00     0.00    0.42    0.21   0.00
> dm-21      0.00    0.00  0.02  0.01    2.20    0.11    65.49     0.00    9.84    3.01   0.01
> dm-26      0.00    0.00  0.01  0.05    0.12    0.43     9.21     0.00   11.66    7.05   0.04
> dm-27      0.00    0.00  0.00  0.21    0.02    1.70     8.02     0.00   12.09    4.57   0.10
> dm-28      0.00    0.00  0.26  0.11    8.90    0.91    26.40     0.01   14.94    1.85   0.07
> dm-29      0.00    0.00  0.05  1.03    1.04    8.22     8.56     0.01   13.76    2.48   0.27
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.00    0.00    0.00    0.40    0.00   99.60
>
> Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await   svctm  %util
> sda        0.00    2.80  0.00  2.20    0.00   40.00    18.18     0.03   12.27    4.73   1.04
> sdb        0.00    0.40  0.00  0.40    0.00    1.60     4.00     0.00    5.50    5.50   0.22
> sdc        0.00    0.00  4.80  4.80   14.40    4.80     2.00     7.21 1560.12   63.67  61.12
> dm-0       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1559.50  750.25  60.02
> dm-1       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1558.50  750.50  60.04
> dm-2       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1559.50  752.75  60.22
> dm-3       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1562.75  752.25  60.18
> dm-4       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1559.50  752.50  60.20
> dm-5       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1563.00  750.25  60.02
> dm-6       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1558.50  750.25  60.02
> dm-7       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1559.50  752.50  60.20
> dm-8       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1559.75  750.25  60.02
> dm-9       0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1554.00  750.25  60.02
> dm-10      0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1555.75  752.25  60.18
> dm-12      0.00    0.00  0.40  0.40    1.20    0.40     2.00     0.60 1571.25  750.25  60.02
> dm-13      0.00    0.00  0.00  0.80    0.00    1.60     2.00     0.01    8.25    2.75   0.22
> dm-18      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-19      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-21      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-26      0.00    0.00  0.00  0.60    0.00    4.80     8.00     0.01   12.67    8.33   0.50
> dm-27      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-28      0.00    0.00  0.00  1.80    0.00   14.40     8.00     0.02   10.22    2.44   0.44
> dm-29      0.00    0.00  0.00  0.80    0.00    6.40     8.00     0.01   10.25    3.75   0.30
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.05    0.00    0.00    0.00    0.00   99.95
>
> Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await   svctm  %util
> sda        0.00    0.20  0.00  0.60    0.00    6.40    10.67     0.01   14.67    6.00   0.36
> sdb        0.00    0.40  0.00  0.40    0.00    1.60     4.00     0.00    0.00    0.00   0.00
> sdc        0.00    0.00  4.80  4.80   14.40    4.80     2.00     8.81 1292.92   77.06  73.98
> dm-0       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.74 1723.33 1233.00  73.98
> dm-1       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1722.00 1223.33  73.40
> dm-2       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1726.33 1222.00  73.32
> dm-3       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1725.67 1222.00  73.32
> dm-4       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1726.00 1222.00  73.32
> dm-5       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1722.33 1222.00  73.32
> dm-6       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1722.00 1223.67  73.42
> dm-7       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1726.00 1222.00  73.32
> dm-8       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1722.33 1222.00  73.32
> dm-9       0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1723.00 1214.33  72.86
> dm-10      0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.73 1725.67 1222.00  73.32
> dm-12      0.00    0.00  0.20  0.40    0.60    0.40     1.67     0.74 1722.00 1231.67  73.90
> dm-13      0.00    0.00  0.00  0.80    0.00    1.60     2.00     0.00    0.00    0.00   0.00
> dm-18      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-19      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-21      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-26      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-27      0.00    0.00  0.00  0.60    0.00    4.80     8.00     0.01   13.00    4.33   0.26
> dm-28      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-29      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
>
> iostat server2:
>
> [root@impaxdb ~]# iostat -x 5
> Linux 2.6.18-8.el5PAE (impaxdb)   08/24/2007
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.48    0.00    0.10   15.70    0.00   83.72
>
> Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await   svctm  %util
> sda        0.28    1.38  0.32  0.82   18.52   17.59    31.55     0.00    1.98    0.83   0.10
> sdb        0.01    1.73  0.02  0.04    3.20   14.15   297.21     0.00    9.18    0.87   0.01
> sdc        6.44   14.59  3.94  4.26   65.36  126.12    23.36     9.52 1160.63   91.33  74.88
> dm-0       0.00    0.00  0.29  0.29    0.89    0.29     2.01     0.71 1214.34 1214.01  71.18
> dm-1       0.00    0.00  0.29  0.29    0.89    0.29     2.01     0.71 1206.47 1206.38  70.95
> dm-2       0.00    0.00  0.29  0.29    0.89    0.29     2.01     0.71 1207.35 1207.27  70.96
> dm-3       0.00    0.00  7.10  9.84   55.33   76.63     7.79    25.51 1506.27   43.35  73.42
> dm-4       0.00    0.00  0.32  5.48    1.10   41.82     7.39     9.21 1586.36  122.41  71.06
> dm-5       0.00    0.00  0.30  0.88    0.90    5.02     5.02     0.72  606.80  600.72  70.89
> dm-6       0.00    0.00  0.29  0.29    0.89    0.29     2.01     0.71 1206.90 1206.94  70.91
> dm-7       0.00    0.00  0.29  0.29    0.89    0.29     2.01     0.71 1202.65 1202.69  70.82
> dm-8       0.00    0.00  0.29  0.29    0.89    0.29     2.01     0.71 1204.47 1204.43  70.84
> dm-9       0.00    0.00  0.29  0.29    0.89    0.29     2.01     0.71 1202.38 1202.42  70.77
> dm-10      0.00    0.00  0.29  0.29    0.88    0.29     2.01     0.71 1208.53 1208.48  70.93
> dm-12      0.00    0.00  0.30  0.29    0.91    0.29     2.04     0.71 1202.82 1202.49  70.92
> dm-13      0.00    0.00  0.00  0.00    0.01    0.00     6.99     0.00    0.47    0.06   0.00
> dm-18      0.00    0.00  0.02  1.77    3.16   14.15     9.67     0.03   16.69    0.03   0.01
> dm-19      0.00    0.00  0.00  0.00    0.01    0.00     8.02     0.00    0.69    0.34   0.00
> dm-20      0.00    0.00  0.05  0.28    3.48    2.26    17.18     0.00    3.37    0.47   0.02
> dm-25      0.00    0.00  0.01  0.05    0.16    0.44     9.68     0.00    1.20    0.65   0.00
> dm-26      0.00    0.00  0.00  0.14    0.02    1.11     8.03     0.00    3.28    0.05   0.00
> dm-27      0.00    0.00  0.31  0.15    9.69    1.24    23.61     0.00    9.39    0.88   0.04
> dm-28      0.00    0.00  0.06  1.08    1.24    8.61     8.65     0.00    1.04    0.21   0.02
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.05    0.00    0.05   23.62    0.00   76.27
>
> Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await   svctm  %util
> sda        0.00    4.85  0.00  2.22    0.00   56.57    25.45     0.00    0.91    0.36   0.08
> sdb        0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> sdc        0.00    1.21  4.85  5.45   14.55   24.24     3.76    12.51 1556.98   94.24  97.09
> dm-0       0.00    0.00  0.20  0.40    0.61    0.40     1.67     0.97 2055.67 1602.00  97.09
> dm-1       0.00    0.00  0.20  0.40    0.61    0.40     1.67     0.96 2064.67 1589.00  96.30
> dm-2       0.00    0.00  0.20  0.40    0.61    0.40     1.67     0.96 2063.67 1582.67  95.92
> dm-3       0.00    0.00  0.20  0.40    0.61    0.40     1.67     4.79 8988.33 1582.00  95.88
> dm-4       0.00    0.00  0.20  1.21    0.61    6.87     5.29     0.96  886.57  679.14  96.04
> dm-5       0.00    0.00  0.20  1.21    0.61    6.87     5.29     0.96  878.00  680.00  96.16
> dm-6       0.00    0.00  0.20  0.40    0.61    0.40     1.67     0.97 2055.67 1593.00  96.55
> dm-7       0.00    0.00  0.20  0.40    0.61    0.40     1.67     0.96 2064.67 1584.33  96.02
> dm-8       0.00    0.00  0.20  0.40    0.61    0.40     1.67     0.96 2064.67 1586.33  96.14
> dm-9       0.00    0.00  0.20  0.40    0.61    0.40     1.67     0.96 2064.67 1583.33  95.96
> dm-10      0.00    0.00  0.20  0.40    0.61    0.40     1.67     0.97 2055.67 1597.67  96.83
> dm-12      0.00    0.00  0.20  0.40    0.61    0.40     1.67     0.97 2071.67 1595.67  96.71
> dm-13      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-18      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-19      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-20      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-25      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-26      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-27      0.00    0.00  0.00  0.81    0.00    6.46     8.00     0.00    0.00    0.00   0.00
> dm-28      0.00    0.00  0.00  4.85    0.00   38.79     8.00     0.01    2.17    0.17   0.08
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.00    0.00    0.00   19.85    0.00   80.15
>
> Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz   await   svctm  %util
> sda        0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> sdb        0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> sdc        0.00    0.61  0.00  0.00    0.00    0.00     0.00     9.12    0.00    0.00  80.26
> dm-0       0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.65    0.00 3196.00  64.70
> dm-1       0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.65    0.00 3235.00  65.49
> dm-2       0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.66    0.00 3254.00  65.87
> dm-3       0.00    0.00  0.40  0.81    2.23    6.48     7.17     3.29    0.00  660.83  80.26
> dm-4       0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.66    0.00 3248.00  65.75
> dm-5       0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.66    0.00 3242.00  65.63
> dm-6       0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.65    0.00 3223.00  65.24
> dm-7       0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.66    0.00 3249.00  65.77
> dm-8       0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.66    0.00 3243.00  65.65
> dm-9       0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.66    0.00 3252.00  65.83
> dm-10      0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.65    0.00 3209.00  64.96
> dm-12      0.00    0.00  0.20  0.00    0.61    0.00     3.00     0.65    0.00 3215.00  65.08
> dm-13      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-18      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-19      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-20      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-25      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-26      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-27      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
> dm-28      0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00    0.00    0.00   0.00
>
> top server1:
>
> Tasks: 217 total,   1 running, 214 sleeping,   0 stopped,   2 zombie
> Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.4%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:   4084444k total,  1323728k used,  2760716k free,   120072k buffers
> Swap:  8193020k total,        0k used,  8193020k free,   872284k cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 14245 root      16   0  2288  988  704 R    2  0.0   0:00.01 top
>     1 root      15   0  2036  652  564 S    0  0.0   0:01.13 init
>     2 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
>     3 root      34  19     0    0    0 S    0  0.0   0:00.46 ksoftirqd/0
>     4 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
>     5 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
>     6 root      34  19     0    0    0 S    0  0.0   0:00.07 ksoftirqd/1
>     7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
>     8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/2
>     9 root      34  19     0    0    0 S    0  0.0   0:00.19 ksoftirqd/2
>    10 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/2
>    11 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/3
>    12 root      34  19     0    0    0 S    0  0.0   0:00.23 ksoftirqd/3
>    13 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/3
>    14 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/0
>    15 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/1
>    16 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/2
>    17 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/3
>    18 root      19  -5     0    0    0 S    0  0.0   0:00.00 khelper
>    19 root      10  -5     0    0    0 S    0  0.0   0:00.00 kthread
>    25 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/0
>    26 root      10  -5     0    0    0 S    0  0.0   0:00.04 kblockd/1
>    27 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/2
>    28 root      10  -5     0    0    0 S    0  0.0   0:00.06 kblockd/3
>    29 root      14  -5     0    0    0 S    0  0.0   0:00.00 kacpid
>   147 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/0
>   148 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/1
>   149 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/2
>   150 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/3
>   153 root      10  -5     0    0    0 S    0  0.0   0:00.00 khubd
>   155 root      10  -5     0    0    0 S    0  0.0   0:00.00 kseriod
>   231 root      19   0     0    0    0 S    0  0.0   0:00.00 pdflush
>   232 root      15   0     0    0    0 S    0  0.0   0:00.15 pdflush
>   233 root      14  -5     0    0    0 S    0  0.0   0:00.00 kswapd0
>   234 root      14  -5     0    0    0 S    0  0.0   0:00.00 aio/0
>   235 root      14  -5     0    0    0 S    0  0.0   0:00.00 aio/1
>   236 root      14  -5     0    0    0 S    0  0.0   0:00.00 aio/2
>
> top server2:
>
> Tasks: 264 total,   1 running, 260 sleeping,   0 stopped,   3 zombie
> Cpu(s):  0.5%us,  0.1%sy,  0.0%ni, 83.7%id, 15.7%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:   4084444k total,  3941216k used,   143228k free,    46608k buffers
> Swap:  8193020k total,    24660k used,  8168360k free,  3652864k cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  9507 root      15   0  2292 1020  704 R    2  0.0   0:00.01 top
>     1 root      15   0  2032  652  564 S    0  0.0   0:01.20 init
>     2 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
>     3 root      34  19     0    0    0 S    0  0.0   0:00.25 ksoftirqd/0
>     4 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
>     5 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
>     6 root      34  19     0    0    0 S    0  0.0   0:00.42 ksoftirqd/1
>     7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
>     8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/2
>     9 root      34  19     0    0    0 S    0  0.0   0:00.56 ksoftirqd/2
>    10 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/2
>    11 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/3
>    12 root      39  19     0    0    0 S    0  0.0   0:00.08 ksoftirqd/3
>    13 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/3
>    14 root      10  -5     0    0    0 S    0  0.0   0:00.03 events/0
>    15 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/1
>    16 root      10  -5     0    0    0 S    0  0.0   0:00.02 events/2
>    17 root      10  -5     0    0    0 S    0  0.0   0:00.02 events/3
>    18 root      10  -5     0    0    0 S    0  0.0   0:00.00 khelper
>    19 root      11  -5     0    0    0 S    0  0.0   0:00.00 kthread
>    25 root      10  -5     0    0    0 S    0  0.0   0:00.08 kblockd/0
>    26 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/1
>    27 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/2
>    28 root      10  -5     0    0    0 S    0  0.0   0:00.08 kblockd/3
>    29 root      18  -5     0    0    0 S    0  0.0   0:00.00 kacpid
>   147 root      18  -5     0    0    0 S    0  0.0   0:00.00 cqueue/0
>   148 root      20  -5     0    0    0 S    0  0.0   0:00.00 cqueue/1
>   149 root      10  -5     0    0    0 S    0  0.0   0:00.00 cqueue/2
>   150 root      10  -5     0    0    0 S    0  0.0   0:00.00 cqueue/3
>   153 root      10  -5     0    0    0 S    0  0.0   0:00.00 khubd
>   155 root      10  -5     0    0    0 S    0  0.0   0:00.00 kseriod
>   233 root      15  -5     0    0    0 S    0  0.0   0:00.84 kswapd0
>   234 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/0
>   235 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/1
>   236 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/2
>   237 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/3
>   409 root      11  -5     0    0    0 S    0  0.0   0:00.00 kpsmoused
>
> slabtop server1:
>
> Active / Total Objects (% used)    : 301060 / 312027 (96.5%)
> Active / Total Slabs (% used)      : 12055 / 12055 (100.0%)
> Active / Total Caches (% used)     : 106 / 146 (72.6%)
> Active / Total Size (% used)       : 45714.46K / 46814.27K (97.7%)
> Minimum / Average / Maximum Object : 0.01K / 0.15K / 128.00K
>
>   OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> 148176  148095  99%    0.05K   2058       72      8232K buffer_head
>  44573   44497  99%    0.13K   1537       29      6148K dentry_cache
>  32888   32878  99%    0.48K   4111        8     16444K ext3_inode_cache
>  10878   10878 100%    0.27K    777       14      3108K radix_tree_node
>   7943    7766  97%    0.02K     47      169       188K dm_io
>   7917    7766  98%    0.02K     39      203       156K dm_tio
>   6424    5414  84%    0.09K    146       44       584K vm_area_struct
>   6384    6223  97%    0.04K     76       84       304K sysfs_dir_cache
>   5989    5823  97%    0.03K     53      113       212K size-32
>   5074    4833  95%    0.06K     86       59       344K size-64
>   4488    4488 100%    0.33K    408       11      1632K inode_cache
>   2610    2561  98%    0.12K     87       30       348K size-128
>   2540    1213  47%    0.01K     10      254        40K anon_vma
>   2380    1475  61%    0.19K    119       20       476K filp
>   1430    1334  93%    0.35K    130       11       520K proc_inode_cache
>   1326    1031  77%    0.05K     17       78        68K selinux_inode_security
>   1320    1304  98%    0.25K     88       15       352K size-256
>   1015     629  61%    0.02K      5      203        20K biovec-1
>   1008     125  12%    0.05K     14       72        56K journal_head
>    930     841  90%    0.12K     31       30       124K bio
>    920     666  72%    0.19K     46       20       184K skbuff_head_cache
>    798     757  94%    2.00K    399        2      1596K size-2048
>    791     690  87%    0.03K      7      113        28K ocfs2_em_ent
>    784     748  95%    0.50K     98        8       392K size-512
>    590     503  85%    0.06K     10       59        40K biovec-4
>    564     562  99%    0.88K    141        4       564K ocfs2_inode_cache
>    546     324  59%    0.05K      7       78        28K delayacct_cache
>    531     497  93%    0.43K     59        9       236K shmem_inode_cache
>    520     487  93%    0.19K     26       20       104K biovec-16
>    505     327  64%    0.04K      5      101        20K pid
>    500     487  97%    0.75K    100        5       400K biovec-64
>    460     376  81%    0.04K      5       92        20K Acpi-Operand
>    452     104  23%    0.03K      4      113        16K pgd
>    347     347 100%    4.00K    347        1      1388K size-4096
>    338     303  89%    0.02K      2      169         8K Acpi-Namespace
>    336     175  52%    0.04K      4       84        16K crq_pool
>
> slabtop server2:
>
> Active / Total Objects (% used)    : 886188 / 907066 (97.7%)
> Active / Total Slabs (% used)      : 20722 / 20723 (100.0%)
> Active / Total Caches (% used)     : 104 / 147 (70.7%)
> Active / Total Size (% used)       : 76455.01K / 78620.30K (97.2%)
> Minimum / Average / Maximum Object : 0.01K / 0.09K / 128.00K
>
>   OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> 736344  729536  99%    0.05K  10227       72     40908K buffer_head
>  37671   34929  92%    0.13K   1299       29      5196K dentry_cache
>  32872   32666  99%    0.48K   4109        8     16436K ext3_inode_cache
>  20118   20097  99%    0.27K   1437       14      5748K radix_tree_node
>  10912    9437  86%    0.09K    248       44       992K vm_area_struct
>   7714    7452  96%    0.02K     38      203       152K dm_tio
>   7605    7452  97%    0.02K     45      169       180K dm_io
>   6552    6475  98%    0.04K     78       84       312K sysfs_dir_cache
>   5763    5461  94%    0.03K     51      113       204K size-32
>   4661    4276  91%    0.06K     79       59       316K size-64
>   4572    3569  78%    0.01K     18      254        72K anon_vma
>   4440    3770  84%    0.19K    222       20       888K filp
>   2288    1791  78%    0.35K    208       11       832K proc_inode_cache
>   1980    1884  95%    0.12K     66       30       264K size-128
>   1804    1471  81%    0.33K    164       11       656K inode_cache
>   1404    1046  74%    0.05K     18       78        72K selinux_inode_security
>   1060     670  63%    0.19K     53       20       212K skbuff_head_cache
>    904     767  84%    0.03K      8      113        32K ocfs2_em_ent
>    870     870 100%    0.12K     29       30       116K bio
>    840     816  97%    0.50K    105        8       420K size-512
>    812     578  71%    0.02K      4      203        16K biovec-1
>    810     800  98%    0.25K     54       15       216K size-256
>    800     761  95%    2.00K    400        2      1600K size-2048
>    549     514  93%    0.43K     61        9       244K shmem_inode_cache
>    546     289  52%    0.05K      7       78        28K delayacct_cache
>    531     478  90%    0.06K      9       59        36K biovec-4
>    520     471  90%    0.19K     26       20       104K biovec-16
>    505     292  57%    0.04K      5      101        20K pid
>    490     479  97%    0.75K     98        5       392K biovec-64
>    460     376  81%    0.04K      5       92        20K Acpi-Operand
>    452     175  38%    0.03K      4      113        16K pgd
>    440     357  81%    0.38K     44       10       176K sock_inode_cache
>    369     303  82%    0.44K     41        9       164K UNIX
>    360     320  88%    1.00K     90        4       360K size-1024
>    354     124  35%    0.06K      6       59        24K fs_cache
>    351     351 100%    4.00K    351        1      1404K pmd
>
> uptime server1:
>
> [root@impax ~]# uptime
>  10:21:09 up 18:16,  1 user,  load average: 7.22, 7.72, 7.83
>
> uptime server2:
>
> [root@impaxdb ~]# uptime
>  10:21:17 up 18:16,  1 user,  load average: 8.79, 9.02, 8.98
>
> yours
>
> arnold
>
> On 8/24/07, Alexei_Roudnev <Alexei_Roudnev@exigengroup.com> wrote:
>
> Run and send here, please:
>
> vmstat 5
> (5 lines)
>
> iostat -x 5
> (2 - 3 outputs)
>
> top
> (1 screen)
>
> slabtop (or equivalent, I don't remember)
>
> ----- Original Message -----
> From: Arnold Maderthaner <arnold.maderthaner@j4care.com>
> To: Sunil Mushran <Sunil.Mushran@oracle.com>; ocfs2-users@oss.oracle.com
> Sent: Thursday, August 23, 2007 4:12 AM
> Subject: Re: [Ocfs2-users] Slow concurrent actions on the same LVM logicalvolume
>
> Hi!
>
> I watched the servers today and noticed that the server load is very high
> even though not much is actually running on the servers:
> server1:
>  16:37:11 up 32 min,  1 user,  load average: 6.45, 7.90, 7.45
> server2:
>  16:37:19 up 32 min,  4 users,  load average: 9.79, 9.76, 8.26
> I also tried some selects on the Oracle database today: a
> "select count(*) from xyz", where xyz has 120 000 rows, took about 10 sec.
> The process doing the filesystem access is always the first one (in top),
> and every command (ls, du, vgdisplay, lvdisplay, dd, cp, ...) that needs
> access to an OCFS2 filesystem seems to block, or it seems to me that there
> is a lock.
> Can anyone please help me?
>
> yours
>
> Arnold
>
> On 8/22/07, Arnold Maderthaner <arnold.maderthaner@j4care.com> wrote:
>
> There is not really a high server load when writing to the disks, but I
> will try it asap. What do you mean by AST? Can I enable some debugging on
> OCFS2? We have 2 Dell servers with about 4 GB of RAM each and 2 dual-core
> CPUs at 2 GHz each.
> Can you please describe which tests to do again?
>
> Thx for your help
>
> Arnold
>
> On 8/22/07, Sunil Mushran <Sunil.Mushran@oracle.com> wrote:
>
> Repeat the first test. This time run top on the first server. Which
> process is eating the cpu? From the time it appears that the second
> server is waiting for the AST that the first server is slow to send. And
> it could be slow because it may be busy flushing the data to disk.
> How much memory/cpu do these servers have?
>
> As far as the second test goes, what is the concurrent write performance
> directly to the LV (minus fs)?
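(A rough sketch of that raw-LV test, reusing the raidVG/testlv names from this
thread; oflag=direct assumes GNU dd, and writing to the raw device destroys
whatever is on the LV, so only run it against a scratch LV:)

    # run on both servers at the same time, straight to the LV, no filesystem
    dd if=/dev/zero of=/dev/raidVG/testlv bs=1M count=1000 oflag=direct
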
> On Wed, Aug 22, 2007 at 01:14:02PM +0530, Arnold Maderthaner wrote:
> > Hi 2 all!
> >
> > I have problems with concurrent filesystem actions on an ocfs2
> > filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6
> > F.e.: if I have a LV called testlv which is mounted on /mnt on both
> > servers, and I do a "dd if=/dev/zero of=/mnt/test.a bs=1024
> > count=1000000" on server 1 and at the same time a du -hs /mnt/test.a,
> > it takes about 5 seconds for du -hs to execute:
> > 270M    test.a
> >
> > real    0m3.627s
> > user    0m0.000s
> > sys     0m0.002s
> >
> > or even longer. If I do a du -hs on a file which is not being written
> > by another server, it's fast:
> > 977M    test.b
> >
> > real    0m0.001s
> > user    0m0.001s
> > sys     0m0.000s
> >
> > If I write 2 different files from both servers to the same LV I get
> > only 2.8 MB/sec.
> > If I write 2 different files from both servers to different LVs
> > (testlv, test2lv) then I get about 30 MB/sec on each write (so
> > 60 MB/sec combined, which is very good).
> > Does anyone have an idea what could be wrong? Are there any mount
> > options for ocfs2 which could help me? Is this a lvm2 or ocfs2 problem
> > (locking, blocking, ... issue)?
> >
> > Any help will be appreciated.
> >
> > yours
> >
> > Arnold
> > --
> > Arnold Maderthaner
> > J4Care Inc.
> >
> > _______________________________________________
> > Ocfs2-users mailing list
> > Ocfs2-users@oss.oracle.com
> > http://oss.oracle.com/mailman/listinfo/ocfs2-users
>
> --
> Arnold Maderthaner
> J4Care Inc.
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users@oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users

--
Arnold Maderthaner
J4Care Inc.