[Ocfs2-users] Slow concurrent actions on the same LVM logical volume
Arnold Maderthaner
arnold.maderthaner at j4care.com
Fri Aug 24 11:38:15 PDT 2007
I don't think that it is the FC, because I did lots of tests, and the
problem really looks like it is on the OCFS2 side.
I think it's more that when I read/write, there is also the heartbeat write at the same time.
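
One way to verify this (a rough sketch, assuming blktrace is installed and
/dev/sdc is the shared LUN) would be to trace the raw device and see whether
the stalls line up with a periodic write from the o2hb heartbeat thread:

# capture 30 seconds of block I/O on the shared device; a write arriving
# every ~2s in sync with the stalls would point at the heartbeat rather
# than the FC layer
blktrace -d /dev/sdc -w 30 -o - | blkparse -i - | grep -w W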
yours
arnold
On 8/24/07, Alexei_Roudnev <Alexei_Roudnev at exigengroup.com> wrote:
>
> OCFSv2 reads and writes (correct? I don't know for sure) heartbeat data
> from/to the volume every few seconds. That may explain why it doesn't work
> properly in your case.
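>
> If I remember right, each local heartbeat thread rewrites its slot roughly
> every two seconds; the threshold in the o2cb config only controls how many
> missed writes mark a node dead. Worth checking what the cluster stack is
> set to on your boxes (assuming the stock init script):
>
> cat /etc/sysconfig/o2cb
> service o2cb status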
>
> Are you sure that you don't experience FC-level problems (such as
> reconnections every 10-20 seconds)?
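>
> A quick way to rule that out (just a sketch, assuming a QLogic HBA on a
> 2.6 kernel with the FC transport class) is to watch the port state and the
> driver messages while the slowdown happens:
>
> cat /sys/class/fc_host/host*/port_state   # should stay "Online"
> dmesg | grep -i qla                       # look for loop down/up events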
>
>
>
> ----- Original Message -----
> *From:* Arnold Maderthaner <arnold.maderthaner at j4care.com>
> *To:* Sunil Mushran <Sunil.Mushran at oracle.com>
> *Cc:* Alexei_Roudnev <Alexei_Roudnev at exigengroup.com> ;
> ocfs2-users at oss.oracle.com
> *Sent:* Friday, August 24, 2007 10:44 AM
> *Subject:* Re: [Ocfs2-users] Slow concurrent actions on the same LVM logical volume
>
> We have the newest QLogic driver installed, and I can write at full speed
> (40 MB/sec) with OCFS2 (also with ext3) when I'm the only one writing to the
> device (even if it is mounted on both servers). The thing I noticed today
> is that it is not a write problem; it is more a read problem. When I'm
> reading files, it halts every 3-5 seconds for, let's say, 2-3 seconds.
> In between it is fast. Even when I read the same data again, it is fast.
> Do you have any comments on the server load?
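>
> To put numbers on those stalls I can run a timed read loop (a rough sketch;
> /srv/dcm4chee/sts/001/testfile stands in for any file on one of the OCFS2
> volumes):
>
> while true; do
>     date +%T
>     # time one 10 MB read; iflag=direct bypasses the page cache where dd supports it
>     dd if=/srv/dcm4chee/sts/001/testfile of=/dev/null bs=1M count=10 iflag=direct 2>&1 | tail -n1
>     sleep 1
> done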
>
> yours
>
> Arnold
>
>
> On 8/24/07, Sunil Mushran <Sunil.Mushran at oracle.com> wrote:
> >
> > mounted.ocfs2 is still not devmapper savvy. We'll address that
> > in the next full release.
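> >
> > Until then you can pull the same information straight from the kernel
> > instead (a workaround, not mounted.ocfs2 itself):
> >
> > # list the mounted ocfs2 filesystems with their device-mapper paths
> > awk '$3 == "ocfs2"' /proc/mounts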
> >
> > Arnold Maderthaner wrote:
> > > FYI, the VG is set up on /dev/sdc1.
> > > I also noticed something strange: the mounted.ocfs2 command doesn't show
> > > anything:
> > > [root at impaxdb ~]# mounted.ocfs2 -d
> > > Device     FS      UUID      Label
> > > [root at impaxdb ~]# mounted.ocfs2 -f
> > > Device     FS      Nodes
> > > [root at impaxdb ~]#
> > >
> > > [root at impax ~]# mounted.ocfs2 -d
> > > Device     FS      UUID      Label
> > > [root at impax ~]# mounted.ocfs2 -f
> > > Device     FS      Nodes
> > > [root at impax ~]#
> > >
> > >
> > > but devices are mounted:
> > >
> > > [root at impax ~]# mount -t ocfs2
> > > /dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type
> > > ocfs2 (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-xmbeanattrsLV on
> > > /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-installLV on /install type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > >
> > > [root at impaxdb ~]# mount -t ocfs2
> > > /dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type
> > > ocfs2 (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-xmbeanattrsLV on
> > > /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2
> > > (rw,_netdev,heartbeat=local)
> > > /dev/mapper/raidVG-installLV on /install type ocfs2
> > > (rw,_netdev,heartbeat=local)
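> > >
> > > A side note on the above: with heartbeat=local, each of these twelve
> > > mounts should run its own o2hb thread writing to the shared disk, so
> > > if that traffic is what competes with the reads, there should be one
> > > heartbeat thread per volume (this assumes the usual o2hb-<UUID> kernel
> > > thread naming):
> > >
> > > ps -e | grep o2hb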
> > >
> > >
> > > yours
> > >
> > > arnold
> > >
> > >
> > >
> > >
> > > On 8/24/07, *Arnold Maderthaner* <arnold.maderthaner at j4care.com> wrote:
> > >
> > > vmstat server1:
> > > [root at impax ~]# vmstat 5
> > > procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
> > >  r  b   swpd    free   buff   cache   si   so   bi   bo    in   cs us sy  id wa st
> > >  0  0      0 2760468 119924  872240    0    0    4    4   107   63  0  0  99  0  0
> > >  0  2      0 2760468 119928  872240    0    0    4   30  1095  277  1  0  99  0  0
> > >  0  0      0 2760468 119940  872248    0    0    7   11  1076  232  0  0 100  0  0
> > >  0  0      0 2760468 119948  872272    0    0    4   18  1084  244  0  0 100  0  0
> > >  0  0      0 2760468 119948  872272    0    0    7   34  1063  220  0  0 100  0  0
> > >  0  0      0 2760468 119948  872272    0    0    0    1  1086  243  0  0 100  0  0
> > >  0  0      0 2760716 119956  872272    0    0    7   10  1065  234  2  0  98  0  0
> > > vmstat server2:
> > > [root at impaxdb ~]# vmstat 5
> > > procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
> > >  r  b   swpd    free   buff   cache   si   so   bi   bo    in   cs us sy  id wa st
> > >  0  0  24676  143784  46428 3652848    0    0   11   20   107  122  0  0  84 16  0
> > >  0  1  24676  143784  46452 3652832    0    0    4   14  1080  237  0  0  89 11  0
> > >  0  1  24672  143660  46460 3652812    6    0   14   34  1074  246  0  0  90 10  0
> > >  0  0  24672  143660  46460 3652852    0    0    7   22  1076  246  0  0  80 20  0
> > >  0  1  24672  143660  46476 3652844    0    0    7   10  1068  227  0  0  85 15  0
> > >
> > > iostat server1:
> > >
> > > [root at impax ~]# iostat -x 5
> > > Linux 2.6.18-8.el5PAE (impax)   08/24/2007
> > >
> > > avg-cpu:  %user  %nice %system %iowait  %steal  %idle
> > >            0.07   0.00    0.04    0.45    0.00  99.44
> > >
> > > Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
> > > sda        0.24    0.90  0.23  0.76   15.72   13.33     29.22      0.01    12.68     4.77   0.47
> > > sdb        0.03    1.23  0.02  0.58    1.34    6.81     13.59      0.00     0.79     0.30   0.02
> > > sdc        0.58    0.87  3.88  3.78   16.69   10.87      3.60      7.98  1042.08    89.69  68.68
> > > dm-0       0.00    0.00  0.31  1.14    0.96    6.99      5.46      1.55  1061.28   459.00  66.92
> > > dm-1       0.00    0.00  0.31  0.31    0.95    0.31      2.01      0.66  1054.16  1054.17  66.15
> > > dm-2       0.00    0.00  0.31  0.31    0.95    0.31      2.01      0.66  1057.72  1057.84  66.24
> > > dm-3       0.00    0.00  0.96  0.34    6.15    0.55      5.12      0.77   592.01   504.72  66.04
> > > dm-4       0.00    0.00  0.32  0.33    0.96    0.41      2.14      0.66  1031.13  1028.75  66.00
> > > dm-5       0.00    0.00  0.31  0.32    0.95    0.40      2.12      0.66  1037.27  1036.55  66.09
> > > dm-6       0.00    0.00  0.31  0.31    0.94    0.31      2.01      0.66  1064.34  1064.45  66.37
> > > dm-7       0.00    0.00  0.32  0.31    0.95    0.32      2.01      0.66  1043.05  1042.99  65.77
> > > dm-8       0.00    0.00  0.31  0.31    0.95    0.32      2.03      0.66  1050.73  1048.23  65.97
> > > dm-9       0.00    0.00  0.34  0.31    0.99    0.31      2.01      0.67  1027.41  1006.68  65.71
> > > dm-10      0.00    0.00  0.32  0.32    0.95    0.32      2.01      0.68  1078.71  1050.22  66.41
> > > dm-12      0.00    0.00  0.31  0.31    0.96    0.31      2.03      0.66  1065.33  1065.10  66.43
> > > dm-13      0.00    0.00  0.04  1.27    1.27    2.54      2.91      0.00     2.43     0.10   0.01
> > > dm-18      0.00    0.00  0.01  0.35    0.04    2.80      8.00      0.01    25.42     0.03   0.00
> > > dm-19      0.00    0.00  0.00  0.18    0.01    1.47      8.00      0.00     0.42     0.21   0.00
> > > dm-21      0.00    0.00  0.02  0.01    2.20    0.11     65.49      0.00     9.84     3.01   0.01
> > > dm-26      0.00    0.00  0.01  0.05    0.12    0.43      9.21      0.00    11.66     7.05   0.04
> > > dm-27      0.00    0.00  0.00  0.21    0.02    1.70      8.02      0.00    12.09     4.57   0.10
> > > dm-28      0.00    0.00  0.26  0.11    8.90    0.91     26.40      0.01    14.94     1.85   0.07
> > > dm-29      0.00    0.00  0.05  1.03    1.04    8.22      8.56      0.01    13.76     2.48   0.27
> > >
> > > avg-cpu:  %user  %nice %system %iowait  %steal  %idle
> > >            0.00   0.00    0.00    0.40    0.00  99.60
> > >
> > > Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
> > > sda        0.00    2.80  0.00  2.20    0.00   40.00     18.18      0.03    12.27     4.73   1.04
> > > sdb        0.00    0.40  0.00  0.40    0.00    1.60      4.00      0.00     5.50     5.50   0.22
> > > sdc        0.00    0.00  4.80  4.80   14.40    4.80      2.00      7.21  1560.12    63.67  61.12
> > > dm-0       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.50   750.25  60.02
> > > dm-1       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1558.50   750.50  60.04
> > > dm-2       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.50   752.75  60.22
> > > dm-3       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1562.75   752.25  60.18
> > > dm-4       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.50   752.50  60.20
> > > dm-5       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1563.00   750.25  60.02
> > > dm-6       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1558.50   750.25  60.02
> > > dm-7       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.50   752.50  60.20
> > > dm-8       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1559.75   750.25  60.02
> > > dm-9       0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1554.00   750.25  60.02
> > > dm-10      0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1555.75   752.25  60.18
> > > dm-12      0.00    0.00  0.40  0.40    1.20    0.40      2.00      0.60  1571.25   750.25  60.02
> > > dm-13      0.00    0.00  0.00  0.80    0.00    1.60      2.00      0.01     8.25     2.75   0.22
> > > dm-18      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-19      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-21      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-26      0.00    0.00  0.00  0.60    0.00    4.80      8.00      0.01    12.67     8.33   0.50
> > > dm-27      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-28      0.00    0.00  0.00  1.80    0.00   14.40      8.00      0.02    10.22     2.44   0.44
> > > dm-29      0.00    0.00  0.00  0.80    0.00    6.40      8.00      0.01    10.25     3.75   0.30
> > >
> > > avg-cpu:  %user  %nice %system %iowait  %steal  %idle
> > >            0.05   0.00    0.00    0.00    0.00  99.95
> > >
> > > Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
> > > sda        0.00    0.20  0.00  0.60    0.00    6.40     10.67      0.01    14.67     6.00   0.36
> > > sdb        0.00    0.40  0.00  0.40    0.00    1.60      4.00      0.00     0.00     0.00   0.00
> > > sdc        0.00    0.00  4.80  4.80   14.40    4.80      2.00      8.81  1292.92    77.06  73.98
> > > dm-0       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.74  1723.33  1233.00  73.98
> > > dm-1       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1722.00  1223.33  73.40
> > > dm-2       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1726.33  1222.00  73.32
> > > dm-3       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1725.67  1222.00  73.32
> > > dm-4       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1726.00  1222.00  73.32
> > > dm-5       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1722.33  1222.00  73.32
> > > dm-6       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1722.00  1223.67  73.42
> > > dm-7       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1726.00  1222.00  73.32
> > > dm-8       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1722.33  1222.00  73.32
> > > dm-9       0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1723.00  1214.33  72.86
> > > dm-10      0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.73  1725.67  1222.00  73.32
> > > dm-12      0.00    0.00  0.20  0.40    0.60    0.40      1.67      0.74  1722.00  1231.67  73.90
> > > dm-13      0.00    0.00  0.00  0.80    0.00    1.60      2.00      0.00     0.00     0.00   0.00
> > > dm-18      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-19      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-21      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-26      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-27      0.00    0.00  0.00  0.60    0.00    4.80      8.00      0.01    13.00     4.33   0.26
> > > dm-28      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-29      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > >
> > >
> > > iostat server2:
> > >
> > > [root at impaxdb ~]# iostat -x 5
> > > Linux 2.6.18-8.el5PAE (impaxdb)   08/24/2007
> > >
> > > avg-cpu:  %user  %nice %system %iowait  %steal  %idle
> > >            0.48   0.00    0.10   15.70    0.00  83.72
> > >
> > > Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
> > > sda        0.28    1.38  0.32  0.82   18.52   17.59     31.55      0.00     1.98     0.83   0.10
> > > sdb        0.01    1.73  0.02  0.04    3.20   14.15    297.21      0.00     9.18     0.87   0.01
> > > sdc        6.44   14.59  3.94  4.26   65.36  126.12     23.36      9.52  1160.63    91.33  74.88
> > > dm-0       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1214.34  1214.01  71.18
> > > dm-1       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1206.47  1206.38  70.95
> > > dm-2       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1207.35  1207.27  70.96
> > > dm-3       0.00    0.00  7.10  9.84   55.33   76.63      7.79     25.51  1506.27    43.35  73.42
> > > dm-4       0.00    0.00  0.32  5.48    1.10   41.82      7.39      9.21  1586.36   122.41  71.06
> > > dm-5       0.00    0.00  0.30  0.88    0.90    5.02      5.02      0.72   606.80   600.72  70.89
> > > dm-6       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1206.90  1206.94  70.91
> > > dm-7       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1202.65  1202.69  70.82
> > > dm-8       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1204.47  1204.43  70.84
> > > dm-9       0.00    0.00  0.29  0.29    0.89    0.29      2.01      0.71  1202.38  1202.42  70.77
> > > dm-10      0.00    0.00  0.29  0.29    0.88    0.29      2.01      0.71  1208.53  1208.48  70.93
> > > dm-12      0.00    0.00  0.30  0.29    0.91    0.29      2.04      0.71  1202.82  1202.49  70.92
> > > dm-13      0.00    0.00  0.00  0.00    0.01    0.00      6.99      0.00     0.47     0.06   0.00
> > > dm-18      0.00    0.00  0.02  1.77    3.16   14.15      9.67      0.03    16.69     0.03   0.01
> > > dm-19      0.00    0.00  0.00  0.00    0.01    0.00      8.02      0.00     0.69     0.34   0.00
> > > dm-20      0.00    0.00  0.05  0.28    3.48    2.26     17.18      0.00     3.37     0.47   0.02
> > > dm-25      0.00    0.00  0.01  0.05    0.16    0.44      9.68      0.00     1.20     0.65   0.00
> > > dm-26      0.00    0.00  0.00  0.14    0.02    1.11      8.03      0.00     3.28     0.05   0.00
> > > dm-27      0.00    0.00  0.31  0.15    9.69    1.24     23.61      0.00     9.39     0.88   0.04
> > > dm-28      0.00    0.00  0.06  1.08    1.24    8.61      8.65      0.00     1.04     0.21   0.02
> > >
> > > avg-cpu:  %user  %nice %system %iowait  %steal  %idle
> > >            0.05   0.00    0.05   23.62    0.00  76.27
> > >
> > > Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
> > > sda        0.00    4.85  0.00  2.22    0.00   56.57     25.45      0.00     0.91     0.36   0.08
> > > sdb        0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > sdc        0.00    1.21  4.85  5.45   14.55   24.24      3.76     12.51  1556.98    94.24  97.09
> > > dm-0       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.97  2055.67  1602.00  97.09
> > > dm-1       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2064.67  1589.00  96.30
> > > dm-2       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2063.67  1582.67  95.92
> > > dm-3       0.00    0.00  0.20  0.40    0.61    0.40      1.67      4.79  8988.33  1582.00  95.88
> > > dm-4       0.00    0.00  0.20  1.21    0.61    6.87      5.29      0.96   886.57   679.14  96.04
> > > dm-5       0.00    0.00  0.20  1.21    0.61    6.87      5.29      0.96   878.00   680.00  96.16
> > > dm-6       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.97  2055.67  1593.00  96.55
> > > dm-7       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2064.67  1584.33  96.02
> > > dm-8       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2064.67  1586.33  96.14
> > > dm-9       0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.96  2064.67  1583.33  95.96
> > > dm-10      0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.97  2055.67  1597.67  96.83
> > > dm-12      0.00    0.00  0.20  0.40    0.61    0.40      1.67      0.97  2071.67  1595.67  96.71
> > > dm-13      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-18      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-19      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-20      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-25      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-26      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-27      0.00    0.00  0.00  0.81    0.00    6.46      8.00      0.00     0.00     0.00   0.00
> > > dm-28      0.00    0.00  0.00  4.85    0.00   38.79      8.00      0.01     2.17     0.17   0.08
> > >
> > > avg-cpu:  %user  %nice %system %iowait  %steal  %idle
> > >            0.00   0.00    0.00   19.85    0.00  80.15
> > >
> > > Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz    await    svctm  %util
> > > sda        0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > sdb        0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > sdc        0.00    0.61  0.00  0.00    0.00    0.00      0.00      9.12     0.00     0.00  80.26
> > > dm-0       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3196.00  64.70
> > > dm-1       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3235.00  65.49
> > > dm-2       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3254.00  65.87
> > > dm-3       0.00    0.00  0.40  0.81    2.23    6.48      7.17      3.29     0.00   660.83  80.26
> > > dm-4       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3248.00  65.75
> > > dm-5       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3242.00  65.63
> > > dm-6       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3223.00  65.24
> > > dm-7       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3249.00  65.77
> > > dm-8       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3243.00  65.65
> > > dm-9       0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.66     0.00  3252.00  65.83
> > > dm-10      0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3209.00  64.96
> > > dm-12      0.00    0.00  0.20  0.00    0.61    0.00      3.00      0.65     0.00  3215.00  65.08
> > > dm-13      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-18      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-19      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-20      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-25      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-26      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-27      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > > dm-28      0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00   0.00
> > >
> > > top server1:
> > >
> > > Tasks: 217 total,   1 running, 214 sleeping,   0 stopped,   2 zombie
> > > Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.4%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
> > > Mem:   4084444k total,  1323728k used,  2760716k free,   120072k buffers
> > > Swap:  8193020k total,        0k used,  8193020k free,   872284k cached
> > >
> > >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> > > 14245 root      16   0  2288  988  704 R    2  0.0   0:00.01 top
> > >     1 root      15   0  2036  652  564 S    0  0.0   0:01.13 init
> > >     2 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
> > >     3 root      34  19     0    0    0 S    0  0.0   0:00.46 ksoftirqd/0
> > >     4 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
> > >     5 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
> > >     6 root      34  19     0    0    0 S    0  0.0   0:00.07 ksoftirqd/1
> > >     7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
> > >     8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/2
> > >     9 root      34  19     0    0    0 S    0  0.0   0:00.19 ksoftirqd/2
> > >    10 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/2
> > >    11 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/3
> > >    12 root      34  19     0    0    0 S    0  0.0   0:00.23 ksoftirqd/3
> > >    13 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/3
> > >    14 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/0
> > >    15 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/1
> > >    16 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/2
> > >    17 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/3
> > >    18 root      19  -5     0    0    0 S    0  0.0   0:00.00 khelper
> > >    19 root      10  -5     0    0    0 S    0  0.0   0:00.00 kthread
> > >    25 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/0
> > >    26 root      10  -5     0    0    0 S    0  0.0   0:00.04 kblockd/1
> > >    27 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/2
> > >    28 root      10  -5     0    0    0 S    0  0.0   0:00.06 kblockd/3
> > >    29 root      14  -5     0    0    0 S    0  0.0   0:00.00 kacpid
> > >   147 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/0
> > >   148 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/1
> > >   149 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/2
> > >   150 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/3
> > >   153 root      10  -5     0    0    0 S    0  0.0   0:00.00 khubd
> > >   155 root      10  -5     0    0    0 S    0  0.0   0:00.00 kseriod
> > >   231 root      19   0     0    0    0 S    0  0.0   0:00.00 pdflush
> > >   232 root      15   0     0    0    0 S    0  0.0   0:00.15 pdflush
> > >   233 root      14  -5     0    0    0 S    0  0.0   0:00.00 kswapd0
> > >   234 root      14  -5     0    0    0 S    0  0.0   0:00.00 aio/0
> > >   235 root      14  -5     0    0    0 S    0  0.0   0:00.00 aio/1
> > >   236 root      14  -5     0    0    0 S    0  0.0   0:00.00 aio/2
> > >
> > > top server2:
> > >
> > > Tasks: 264 total,   1 running, 260 sleeping,   0 stopped,   3 zombie
> > > Cpu(s):  0.5%us,  0.1%sy,  0.0%ni, 83.7%id, 15.7%wa,  0.0%hi,  0.0%si,  0.0%st
> > > Mem:   4084444k total,  3941216k used,   143228k free,    46608k buffers
> > > Swap:  8193020k total,    24660k used,  8168360k free,  3652864k cached
> > >
> > >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> > >  9507 root      15   0  2292 1020  704 R    2  0.0   0:00.01 top
> > >     1 root      15   0  2032  652  564 S    0  0.0   0:01.20 init
> > >     2 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
> > >     3 root      34  19     0    0    0 S    0  0.0   0:00.25 ksoftirqd/0
> > >     4 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
> > >     5 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
> > >     6 root      34  19     0    0    0 S    0  0.0   0:00.42 ksoftirqd/1
> > >     7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
> > >     8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/2
> > >     9 root      34  19     0    0    0 S    0  0.0   0:00.56 ksoftirqd/2
> > >    10 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/2
> > >    11 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/3
> > >    12 root      39  19     0    0    0 S    0  0.0   0:00.08 ksoftirqd/3
> > >    13 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/3
> > >    14 root      10  -5     0    0    0 S    0  0.0   0:00.03 events/0
> > >    15 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/1
> > >    16 root      10  -5     0    0    0 S    0  0.0   0:00.02 events/2
> > >    17 root      10  -5     0    0    0 S    0  0.0   0:00.02 events/3
> > >    18 root      10  -5     0    0    0 S    0  0.0   0:00.00 khelper
> > >    19 root      11  -5     0    0    0 S    0  0.0   0:00.00 kthread
> > >    25 root      10  -5     0    0    0 S    0  0.0   0:00.08 kblockd/0
> > >    26 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/1
> > >    27 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/2
> > >    28 root      10  -5     0    0    0 S    0  0.0   0:00.08 kblockd/3
> > >    29 root      18  -5     0    0    0 S    0  0.0   0:00.00 kacpid
> > >   147 root      18  -5     0    0    0 S    0  0.0   0:00.00 cqueue/0
> > >   148 root      20  -5     0    0    0 S    0  0.0   0:00.00 cqueue/1
> > >   149 root      10  -5     0    0    0 S    0  0.0   0:00.00 cqueue/2
> > >   150 root      10  -5     0    0    0 S    0  0.0   0:00.00 cqueue/3
> > >   153 root      10  -5     0    0    0 S    0  0.0   0:00.00 khubd
> > >   155 root      10  -5     0    0    0 S    0  0.0   0:00.00 kseriod
> > >   233 root      15  -5     0    0    0 S    0  0.0   0:00.84 kswapd0
> > >   234 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/0
> > >   235 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/1
> > >   236 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/2
> > >   237 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/3
> > >   409 root      11  -5     0    0    0 S    0  0.0   0:00.00 kpsmoused
> > >
> > > slabtop server1:
> > >
> > > Active / Total Objects (% used)    : 301060 / 312027 (96.5%)
> > > Active / Total Slabs (% used)      : 12055 / 12055 (100.0%)
> > > Active / Total Caches (% used)     : 106 / 146 (72.6%)
> > > Active / Total Size (% used)       : 45714.46K / 46814.27K (97.7%)
> > > Minimum / Average / Maximum Object : 0.01K / 0.15K / 128.00K
> > >
> > >   OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> > > 148176  148095  99%    0.05K   2058       72      8232K buffer_head
> > >  44573   44497  99%    0.13K   1537       29      6148K dentry_cache
> > >  32888   32878  99%    0.48K   4111        8     16444K ext3_inode_cache
> > >  10878   10878 100%    0.27K    777       14      3108K radix_tree_node
> > >   7943    7766  97%    0.02K     47      169       188K dm_io
> > >   7917    7766  98%    0.02K     39      203       156K dm_tio
> > >   6424    5414  84%    0.09K    146       44       584K vm_area_struct
> > >   6384    6223  97%    0.04K     76       84       304K sysfs_dir_cache
> > >   5989    5823  97%    0.03K     53      113       212K size-32
> > >   5074    4833  95%    0.06K     86       59       344K size-64
> > >   4488    4488 100%    0.33K    408       11      1632K inode_cache
> > >   2610    2561  98%    0.12K     87       30       348K size-128
> > >   2540    1213  47%    0.01K     10      254        40K anon_vma
> > >   2380    1475  61%    0.19K    119       20       476K filp
> > >   1430    1334  93%    0.35K    130       11       520K proc_inode_cache
> > >   1326    1031  77%    0.05K     17       78        68K selinux_inode_security
> > >   1320    1304  98%    0.25K     88       15       352K size-256
> > >   1015     629  61%    0.02K      5      203        20K biovec-1
> > >   1008     125  12%    0.05K     14       72        56K journal_head
> > >    930     841  90%    0.12K     31       30       124K bio
> > >    920     666  72%    0.19K     46       20       184K skbuff_head_cache
> > >    798     757  94%    2.00K    399        2      1596K size-2048
> > >    791     690  87%    0.03K      7      113        28K ocfs2_em_ent
> > >    784     748  95%    0.50K     98        8       392K size-512
> > >    590     503  85%    0.06K     10       59        40K biovec-4
> > >    564     562  99%    0.88K    141        4       564K ocfs2_inode_cache
> > >    546     324  59%    0.05K      7       78        28K delayacct_cache
> > >    531     497  93%    0.43K     59        9       236K
>
> ...
>
> [Message clipped]
--
Arnold Maderthaner
J4Care Inc.