[Ocfs2-users] Slow concurrent actions on the same LVM logical volume

Sunil Mushran Sunil.Mushran at oracle.com
Fri Aug 24 10:38:44 PDT 2007


mounted.ocfs2 is still not devmapper savvy. We'll address that
in the next full release.
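
In the meantime you can point the tool at each device-mapper node
explicitly instead of relying on its device scan. A minimal sketch,
assuming your ocfs2-tools build accepts an explicit device argument
to mounted.ocfs2 (check the man page for your version):

    #!/bin/bash
    # Query each LVM logical volume directly, since the automatic
    # device scan skips /dev/mapper nodes.
    for dev in /dev/mapper/*; do
        [ -b "$dev" ] || continue    # skip /dev/mapper/control etc.
        mounted.ocfs2 -f "$dev"
    done

Plain "mount -t ocfs2" or /proc/mounts (as shown below) also lists what
is mounted locally; mounted.ocfs2 -f is only interesting when you want
to see which cluster nodes have a device mounted.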

Arnold Maderthaner wrote:
> FYI, the VG is set up on /dev/sdc1.
> I also noticed something strange: the mounted.ocfs2 command doesn't show
> anything:
> [root at impaxdb ~]# mounted.ocfs2 -d
> Device                FS     UUID                                  Label
> [root at impaxdb ~]# mounted.ocfs2 -f
> Device                FS     Nodes
> [root at impaxdb ~]#
>  
> [root at impax ~]# mounted.ocfs2 -d
> Device                FS     UUID                                  Label
> [root at impax ~]# mounted.ocfs2 -f
> Device                FS     Nodes
> [root at impax ~]#
>  
>  
> but the devices are mounted:
>  
> [root at impax ~]# mount -t ocfs2
> /dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-xmbeanattrsLV on /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-installLV on /install type ocfs2 (rw,_netdev,heartbeat=local)
>  
> [root at impaxdb ~]# mount -t ocfs2
> /dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-xmbeanattrsLV on /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-installLV on /install type ocfs2 (rw,_netdev,heartbeat=local)
>  
>  
> yours
>  
> arnold
>  
>
>
>  
> On 8/24/07, Arnold Maderthaner <arnold.maderthaner at j4care.com> wrote:
>
>     vmstat server1:
>     [root at impax ~]# vmstat 5
>     procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>      r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>      0  0      0 2760468 119924 872240    0    0     4     4  107   63  0  0 99  0  0
>      0  2      0 2760468 119928 872240    0    0     4    30 1095  277  1  0 99  0  0
>      0  0      0 2760468 119940 872248    0    0     7    11 1076  232  0  0 100 0  0
>      0  0      0 2760468 119948 872272    0    0     4    18 1084  244  0  0 100 0  0
>      0  0      0 2760468 119948 872272    0    0     7    34 1063  220  0  0 100 0  0
>      0  0      0 2760468 119948 872272    0    0     0     1 1086  243  0  0 100 0  0
>      0  0      0 2760716 119956 872272    0    0     7    10 1065  234  2  0 98  0  0
>     vmstat server2:
>     [root at impaxdb ~]# vmstat 5
>     procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>      r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>      0  0  24676 143784  46428 3652848    0    0    11    20  107  122  0  0 84 16  0
>      0  1  24676 143784  46452 3652832    0    0     4    14 1080  237  0  0 89 11  0
>      0  1  24672 143660  46460 3652812    6    0    14    34 1074  246  0  0 90 10  0
>      0  0  24672 143660  46460 3652852    0    0     7    22 1076  246  0  0 80 20  0
>      0  1  24672 143660  46476 3652844    0    0     7    10 1068  227  0  0 85 15  0
>      
>     iostat server1:
>      
>
>     [root at impax ~]# iostat -x 5
>     Linux 2.6.18-8.el5PAE (impax)   08/24/2007
>
>     avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                0.07    0.00    0.04    0.45    0.00   99.44
>
>     Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>     sda               0.24     0.90  0.23  0.76    15.72    13.33    29.22     0.01   12.68   4.77   0.47
>     sdb               0.03     1.23  0.02  0.58     1.34     6.81    13.59     0.00    0.79   0.30   0.02
>     sdc               0.58     0.87  3.88  3.78    16.69    10.87     3.60     7.98 1042.08  89.69  68.68
>     dm-0              0.00     0.00  0.31  1.14     0.96     6.99     5.46     1.55 1061.28  459.00  66.92
>     dm-1              0.00     0.00  0.31  0.31     0.95     0.31     2.01     0.66 1054.16 1054.17  66.15
>     dm-2              0.00     0.00  0.31  0.31     0.95     0.31     2.01     0.66 1057.72 1057.84  66.24
>     dm-3              0.00     0.00  0.96  0.34     6.15     0.55     5.12     0.77  592.01  504.72  66.04
>     dm-4              0.00     0.00  0.32  0.33     0.96     0.41     2.14     0.66 1031.13 1028.75  66.00
>     dm-5              0.00     0.00  0.31  0.32     0.95     0.40     2.12     0.66 1037.27 1036.55  66.09
>     dm-6              0.00     0.00  0.31  0.31     0.94     0.31     2.01     0.66 1064.34 1064.45  66.37
>     dm-7              0.00     0.00  0.32  0.31     0.95     0.32     2.01     0.66 1043.05 1042.99  65.77
>     dm-8              0.00     0.00  0.31  0.31     0.95     0.32     2.03     0.66 1050.73 1048.23  65.97
>     dm-9              0.00     0.00  0.34  0.31     0.99     0.31     2.01     0.67 1027.41 1006.68  65.71
>     dm-10             0.00     0.00  0.32  0.32     0.95     0.32     2.01     0.68 1078.71 1050.22  66.41
>     dm-12             0.00     0.00  0.31  0.31     0.96     0.31     2.03     0.66 1065.33 1065.10  66.43
>     dm-13             0.00     0.00  0.04  1.27     1.27     2.54     2.91     0.00    2.43    0.10   0.01
>     dm-18             0.00     0.00  0.01  0.35     0.04     2.80     8.00     0.01   25.42    0.03   0.00
>     dm-19             0.00     0.00  0.00  0.18     0.01     1.47     8.00     0.00    0.42    0.21   0.00
>     dm-21             0.00     0.00  0.02  0.01     2.20     0.11    65.49     0.00    9.84    3.01   0.01
>     dm-26             0.00     0.00  0.01  0.05     0.12     0.43     9.21     0.00   11.66    7.05   0.04
>     dm-27             0.00     0.00  0.00  0.21     0.02     1.70     8.02     0.00   12.09    4.57   0.10
>     dm-28             0.00     0.00  0.26  0.11     8.90     0.91    26.40     0.01   14.94    1.85   0.07
>     dm-29             0.00     0.00  0.05  1.03     1.04     8.22     8.56     0.01   13.76    2.48   0.27
>
>     avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                0.00    0.00    0.00    0.40    0.00   99.60
>
>     Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>     sda               0.00     2.80  0.00  2.20     0.00    40.00    18.18     0.03   12.27    4.73   1.04
>     sdb               0.00     0.40  0.00  0.40     0.00     1.60     4.00     0.00    5.50    5.50   0.22
>     sdc               0.00     0.00  4.80  4.80    14.40     4.80     2.00     7.21 1560.12   63.67  61.12
>     dm-0              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1559.50  750.25  60.02
>     dm-1              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1558.50  750.50  60.04
>     dm-2              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1559.50  752.75  60.22
>     dm-3              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1562.75  752.25  60.18
>     dm-4              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1559.50  752.50  60.20
>     dm-5              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1563.00  750.25  60.02
>     dm-6              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1558.50  750.25  60.02
>     dm-7              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1559.50  752.50  60.20
>     dm-8              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1559.75  750.25  60.02
>     dm-9              0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1554.00  750.25  60.02
>     dm-10             0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1555.75  752.25  60.18
>     dm-12             0.00     0.00  0.40  0.40     1.20     0.40     2.00     0.60 1571.25  750.25  60.02
>     dm-13             0.00     0.00  0.00  0.80     0.00     1.60     2.00     0.01    8.25    2.75   0.22
>     dm-18             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-19             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-21             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-26             0.00     0.00  0.00  0.60     0.00     4.80     8.00     0.01   12.67    8.33   0.50
>     dm-27             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-28             0.00     0.00  0.00  1.80     0.00    14.40     8.00     0.02   10.22    2.44   0.44
>     dm-29             0.00     0.00  0.00  0.80     0.00     6.40     8.00     0.01   10.25    3.75   0.30
>
>     avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                0.05    0.00    0.00    0.00    0.00   99.95
>
>     Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>     sda               0.00     0.20  0.00  0.60     0.00     6.40    10.67     0.01   14.67    6.00   0.36
>     sdb               0.00     0.40  0.00  0.40     0.00     1.60     4.00     0.00    0.00    0.00   0.00
>     sdc               0.00     0.00  4.80  4.80    14.40     4.80     2.00     8.81 1292.92   77.06  73.98
>     dm-0              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.74 1723.33 1233.00  73.98
>     dm-1              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1722.00 1223.33  73.40
>     dm-2              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1726.33 1222.00  73.32
>     dm-3              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1725.67 1222.00  73.32
>     dm-4              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1726.00 1222.00  73.32
>     dm-5              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1722.33 1222.00  73.32
>     dm-6              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1722.00 1223.67  73.42
>     dm-7              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1726.00 1222.00  73.32
>     dm-8              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1722.33 1222.00  73.32
>     dm-9              0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1723.00 1214.33  72.86
>     dm-10             0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.73 1725.67 1222.00  73.32
>     dm-12             0.00     0.00  0.20  0.40     0.60     0.40     1.67     0.74 1722.00 1231.67  73.90
>     dm-13             0.00     0.00  0.00  0.80     0.00     1.60     2.00     0.00    0.00    0.00   0.00
>     dm-18             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-19             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-21             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-26             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-27             0.00     0.00  0.00  0.60     0.00     4.80     8.00     0.01   13.00    4.33   0.26
>     dm-28             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-29             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>
>
>     iostat server2:
>
>     [root at impaxdb ~]# iostat -x 5
>     Linux 2.6.18-8.el5PAE (impaxdb)         08/24/2007
>
>     avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                0.48    0.00    0.10   15.70    0.00   83.72
>
>     Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>     sda               0.28     1.38  0.32  0.82    18.52    17.59    31.55     0.00    1.98    0.83   0.10
>     sdb               0.01     1.73  0.02  0.04     3.20    14.15   297.21     0.00    9.18    0.87   0.01
>     sdc               6.44    14.59  3.94  4.26    65.36   126.12    23.36     9.52 1160.63   91.33  74.88
>     dm-0              0.00     0.00  0.29  0.29     0.89     0.29     2.01     0.71 1214.34 1214.01  71.18
>     dm-1              0.00     0.00  0.29  0.29     0.89     0.29     2.01     0.71 1206.47 1206.38  70.95
>     dm-2              0.00     0.00  0.29  0.29     0.89     0.29     2.01     0.71 1207.35 1207.27  70.96
>     dm-3              0.00     0.00  7.10  9.84    55.33    76.63     7.79    25.51 1506.27   43.35  73.42
>     dm-4              0.00     0.00  0.32  5.48     1.10    41.82     7.39     9.21 1586.36  122.41  71.06
>     dm-5              0.00     0.00  0.30  0.88     0.90     5.02     5.02     0.72  606.80  600.72  70.89
>     dm-6              0.00     0.00  0.29  0.29     0.89     0.29     2.01     0.71 1206.90 1206.94  70.91
>     dm-7              0.00     0.00  0.29  0.29     0.89     0.29     2.01     0.71 1202.65 1202.69  70.82
>     dm-8              0.00     0.00  0.29  0.29     0.89     0.29     2.01     0.71 1204.47 1204.43  70.84
>     dm-9              0.00     0.00  0.29  0.29     0.89     0.29     2.01     0.71 1202.38 1202.42  70.77
>     dm-10             0.00     0.00  0.29  0.29     0.88     0.29     2.01     0.71 1208.53 1208.48  70.93
>     dm-12             0.00     0.00  0.30  0.29     0.91     0.29     2.04     0.71 1202.82 1202.49  70.92
>     dm-13             0.00     0.00  0.00  0.00     0.01     0.00     6.99     0.00    0.47    0.06   0.00
>     dm-18             0.00     0.00  0.02  1.77     3.16    14.15     9.67     0.03   16.69    0.03   0.01
>     dm-19             0.00     0.00  0.00  0.00     0.01     0.00     8.02     0.00    0.69    0.34   0.00
>     dm-20             0.00     0.00  0.05  0.28     3.48     2.26    17.18     0.00    3.37    0.47   0.02
>     dm-25             0.00     0.00  0.01  0.05     0.16     0.44     9.68     0.00    1.20    0.65   0.00
>     dm-26             0.00     0.00  0.00  0.14     0.02     1.11     8.03     0.00    3.28    0.05   0.00
>     dm-27             0.00     0.00  0.31  0.15     9.69     1.24    23.61     0.00    9.39    0.88   0.04
>     dm-28             0.00     0.00  0.06  1.08     1.24     8.61     8.65     0.00    1.04    0.21   0.02
>
>     avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                0.05    0.00    0.05   23.62    0.00   76.27
>
>     Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>     sda               0.00     4.85  0.00  2.22     0.00    56.57    25.45     0.00    0.91    0.36   0.08
>     sdb               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     sdc               0.00     1.21  4.85  5.45    14.55    24.24     3.76    12.51 1556.98   94.24  97.09
>     dm-0              0.00     0.00  0.20  0.40     0.61     0.40     1.67     0.97 2055.67 1602.00  97.09
>     dm-1              0.00     0.00  0.20  0.40     0.61     0.40     1.67     0.96 2064.67 1589.00  96.30
>     dm-2              0.00     0.00  0.20  0.40     0.61     0.40     1.67     0.96 2063.67 1582.67  95.92
>     dm-3              0.00     0.00  0.20  0.40     0.61     0.40     1.67     4.79 8988.33 1582.00  95.88
>     dm-4              0.00     0.00  0.20  1.21     0.61     6.87     5.29     0.96  886.57  679.14  96.04
>     dm-5              0.00     0.00  0.20  1.21     0.61     6.87     5.29     0.96  878.00  680.00  96.16
>     dm-6              0.00     0.00  0.20  0.40     0.61     0.40     1.67     0.97 2055.67 1593.00  96.55
>     dm-7              0.00     0.00  0.20  0.40     0.61     0.40     1.67     0.96 2064.67 1584.33  96.02
>     dm-8              0.00     0.00  0.20  0.40     0.61     0.40     1.67     0.96 2064.67 1586.33  96.14
>     dm-9              0.00     0.00  0.20  0.40     0.61     0.40     1.67     0.96 2064.67 1583.33  95.96
>     dm-10             0.00     0.00  0.20  0.40     0.61     0.40     1.67     0.97 2055.67 1597.67  96.83
>     dm-12             0.00     0.00  0.20  0.40     0.61     0.40     1.67     0.97 2071.67 1595.67  96.71
>     dm-13             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-18             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-19             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-20             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-25             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-26             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-27             0.00     0.00  0.00  0.81     0.00     6.46     8.00     0.00    0.00    0.00   0.00
>     dm-28             0.00     0.00  0.00  4.85     0.00    38.79     8.00     0.01    2.17    0.17   0.08
>
>     avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>                0.00    0.00    0.00   19.85    0.00   80.15
>
>     Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>     sda               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     sdb               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     sdc               0.00     0.61  0.00  0.00     0.00     0.00     0.00     9.12    0.00    0.00  80.26
>     dm-0              0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.65    0.00 3196.00  64.70
>     dm-1              0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.65    0.00 3235.00  65.49
>     dm-2              0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.66    0.00 3254.00  65.87
>     dm-3              0.00     0.00  0.40  0.81     2.23     6.48     7.17     3.29    0.00  660.83  80.26
>     dm-4              0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.66    0.00 3248.00  65.75
>     dm-5              0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.66    0.00 3242.00  65.63
>     dm-6              0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.65    0.00 3223.00  65.24
>     dm-7              0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.66    0.00 3249.00  65.77
>     dm-8              0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.66    0.00 3243.00  65.65
>     dm-9              0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.66    0.00 3252.00  65.83
>     dm-10             0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.65    0.00 3209.00  64.96
>     dm-12             0.00     0.00  0.20  0.00     0.61     0.00     3.00     0.65    0.00 3215.00  65.08
>     dm-13             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-18             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-19             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-20             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-25             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-26             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-27             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>     dm-28             0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00    0.00   0.00
>      
>     top server1:
>      
>      
>
>     Tasks: 217 total,   1 running, 214 sleeping,   0 stopped,   2 zombie
>     Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.4%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
>     Mem:   4084444k total,  1323728k used,  2760716k free,   120072k buffers
>     Swap:  8193020k total,        0k used,  8193020k free,   872284k cached
>
>       PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>     14245 root      16   0  2288  988  704 R    2  0.0   0:00.01 top
>         1 root      15   0  2036  652  564 S    0  0.0   0:01.13 init
>         2 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
>         3 root      34  19     0    0    0 S    0  0.0   0:00.46 ksoftirqd/0
>         4 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
>         5 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
>         6 root      34  19     0    0    0 S    0  0.0   0:00.07 ksoftirqd/1
>         7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
>         8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/2
>         9 root      34  19     0    0    0 S    0  0.0   0:00.19 ksoftirqd/2
>        10 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/2
>        11 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/3
>        12 root      34  19     0    0    0 S    0  0.0   0:00.23 ksoftirqd/3
>        13 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/3
>        14 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/0
>        15 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/1
>        16 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/2
>        17 root      10  -5     0    0    0 S    0  0.0    0:00.00 events/3
>        18 root      19  -5     0    0    0 S    0  0.0   0:00.00 khelper
>        19 root      10  -5     0    0    0 S    0  0.0   0:00.00 kthread
>        25 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/0
>        26 root      10  -5     0    0    0 S    0  0.0   0:00.04 kblockd/1
>        27 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/2
>        28 root      10  -5     0    0    0 S    0  0.0   0:00.06 kblockd/3
>        29 root      14  -5     0    0    0 S    0  0.0   0:00.00 kacpid
>       147 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/0
>       148 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/1
>       149 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/2
>       150 root      14  -5     0    0    0 S    0  0.0   0:00.00 cqueue/3
>       153 root      10  -5     0    0    0 S    0  0.0   0:00.00 khubd
>       155 root      10  -5     0    0    0 S    0  0.0   0:00.00 kseriod
>       231 root      19   0     0    0    0 S    0  0.0   0:00.00 pdflush
>       232 root      15   0     0    0    0 S    0  0.0   0:00.15 pdflush
>       233 root      14  -5     0    0    0 S    0  0.0   0:00.00 kswapd0
>       234 root      14  -5     0    0    0 S    0  0.0   0:00.00 aio/0
>       235 root      14  -5     0    0    0 S    0  0.0   0:00.00 aio/1
>       236 root      14  -5     0    0    0 S    0  0.0   0:00.00 aio/2
>
>      
>     top server2:
>
>     Tasks: 264 total,   1 running, 260 sleeping,   0 stopped,   3 zombie
>     Cpu(s):  0.5%us,  0.1%sy,  0.0%ni, 83.7%id, 15.7%wa,  0.0%hi,  0.0%si,  0.0%st
>     Mem:   4084444k total,  3941216k used,   143228k free,    46608k buffers
>     Swap:  8193020k total,    24660k used,  8168360k free,  3652864k cached
>
>       PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>      9507 root      15   0  2292 1020  704 R    2  0.0   0:00.01 top
>         1 root      15   0  2032  652  564 S    0  0.0   0:01.20 init
>         2 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
>         3 root      34  19     0    0    0 S    0  0.0   0:00.25 ksoftirqd/0
>         4 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
>         5 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
>         6 root      34  19     0    0    0 S    0  0.0   0:00.42 ksoftirqd/1
>         7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
>         8 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/2
>         9 root      34  19     0    0    0 S    0  0.0   0:00.56 ksoftirqd/2
>        10 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/2
>        11 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/3
>        12 root      39  19     0    0    0 S    0  0.0   0:00.08 ksoftirqd/3
>        13 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/3
>        14 root      10  -5     0    0    0 S    0  0.0   0:00.03 events/0
>        15 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/1
>        16 root      10  -5     0    0    0 S    0  0.0   0:00.02 events/2
>        17 root      10  -5     0    0    0 S    0  0.0    0:00.02 events/3
>        18 root      10  -5     0    0    0 S    0  0.0   0:00.00 khelper
>        19 root      11  -5     0    0    0 S    0  0.0   0:00.00 kthread
>        25 root      10  -5     0    0    0 S    0  0.0   0:00.08 kblockd/0
>        26 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/1
>        27 root      10  -5     0    0    0 S    0  0.0   0:00.00 kblockd/2
>        28 root      10  -5     0    0    0 S    0  0.0   0:00.08 kblockd/3
>        29 root      18  -5     0    0    0 S    0  0.0   0:00.00 kacpid
>       147 root      18  -5     0    0    0 S    0  0.0   0:00.00 cqueue/0
>       148 root      20  -5     0    0    0 S    0  0.0   0:00.00 cqueue/1
>       149 root      10  -5     0    0    0 S    0  0.0   0:00.00 cqueue/2
>       150 root      10  -5     0    0    0 S    0  0.0   0:00.00 cqueue/3
>       153 root      10  -5     0    0    0 S    0  0.0   0:00.00 khubd
>       155 root      10  -5     0    0    0 S    0  0.0   0:00.00 kseriod
>       233 root      15  -5     0    0    0 S    0  0.0   0:00.84 kswapd0
>       234 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/0
>       235 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/1
>       236 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/2
>       237 root      20  -5     0    0    0 S    0  0.0   0:00.00 aio/3
>       409 root      11  -5     0    0    0 S    0  0.0   0:00.00 kpsmoused
>
>     slabtop server1:
>
>      Active / Total Objects (% used)    : 301060 / 312027 (96.5%)
>      Active / Total Slabs (% used)      : 12055 / 12055 (100.0%)
>      Active / Total Caches (% used)     : 106 / 146 (72.6%)
>      Active / Total Size (% used)       : 45714.46K / 46814.27K (97.7%)
>      Minimum / Average / Maximum Object : 0.01K / 0.15K / 128.00K
>
>       OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>     148176 148095  99%    0.05K   2058       72      8232K buffer_head
>      44573  44497  99%    0.13K   1537       29      6148K dentry_cache
>      32888  32878  99%    0.48K   4111        8     16444K ext3_inode_cache
>      10878  10878 100%    0.27K    777       14      3108K radix_tree_node
>       7943   7766  97%    0.02K     47      169       188K dm_io
>       7917   7766  98%    0.02K     39      203       156K dm_tio
>       6424   5414  84%    0.09K    146       44       584K vm_area_struct
>       6384   6223  97%    0.04K     76       84       304K sysfs_dir_cache
>       5989   5823  97%    0.03K     53      113       212K size-32
>       5074   4833  95%    0.06K     86       59       344K size-64
>       4488   4488 100%    0.33K    408       11      1632K inode_cache
>       2610   2561  98%    0.12K     87       30       348K size-128
>       2540   1213  47%    0.01K     10      254        40K anon_vma
>       2380   1475  61%    0.19K    119       20       476K filp
>       1430   1334  93%    0.35K    130       11       520K proc_inode_cache
>       1326   1031  77%    0.05K     17       78        68K selinux_inode_security
>       1320   1304  98%    0.25K     88       15       352K size-256
>       1015    629  61%    0.02K      5      203        20K biovec-1
>       1008    125  12%    0.05K     14       72        56K journal_head
>        930    841  90%    0.12K     31       30       124K bio
>        920    666  72%    0.19K     46       20       184K skbuff_head_cache
>        798    757  94%    2.00K    399        2      1596K size-2048
>        791    690  87%    0.03K      7      113        28K ocfs2_em_ent
>        784    748  95%    0.50K     98        8       392K size-512
>        590    503  85%    0.06K     10       59        40K biovec-4
>        564    562  99%    0.88K    141        4       564K ocfs2_inode_cache
>        546    324  59%    0.05K      7       78        28K delayacct_cache
>        531    497  93%    0.43K     59        9       236K shmem_inode_cache
>        520    487  93%    0.19K     26       20       104K biovec-16
>        505    327  64%    0.04K       5      101        20K pid
>        500    487  97%    0.75K    100        5       400K biovec-64
>        460    376  81%    0.04K      5       92        20K Acpi-Operand
>        452    104  23%    0.03K      4      113        16K pgd
>        347    347 100%    4.00K    347        1      1388K size-4096
>        338    303  89%    0.02K      2      169         8K Acpi-Namespace
>        336    175  52%    0.04K      4       84        16K crq_pool
>
>      
>     slabtop server2:
>      
>
>      Active / Total Objects (% used)    : 886188 / 907066 (97.7%)
>      Active / Total Slabs (% used)      : 20722 / 20723 (100.0%)
>      Active / Total Caches (% used)     : 104 / 147 (70.7%)
>      Active / Total Size (% used)       : 76455.01K / 78620.30K (97.2%)
>      Minimum / Average / Maximum Object : 0.01K / 0.09K / 128.00K
>
>       OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>     736344 729536  99%    0.05K  10227       72     40908K buffer_head
>      37671  34929  92%    0.13K   1299       29      5196K dentry_cache
>      32872  32666  99%    0.48K   4109        8     16436K ext3_inode_cache
>      20118  20097  99%    0.27K   1437       14      5748K radix_tree_node
>      10912   9437  86%    0.09K    248       44       992K vm_area_struct
>       7714   7452  96%    0.02K     38      203       152K dm_tio
>       7605   7452  97%    0.02K     45      169       180K dm_io
>       6552   6475  98%    0.04K     78       84       312K sysfs_dir_cache
>       5763   5461  94%    0.03K     51      113       204K size-32
>       4661   4276  91%    0.06K     79       59       316K size-64
>       4572   3569  78%    0.01K     18      254        72K anon_vma
>       4440   3770  84%    0.19K    222       20       888K filp
>       2288   1791  78%    0.35K    208       11       832K proc_inode_cache
>       1980   1884  95%    0.12K     66       30       264K size-128
>       1804   1471  81%    0.33K    164       11       656K inode_cache
>       1404   1046  74%    0.05K     18       78        72K selinux_inode_security
>       1060    670  63%    0.19K     53       20       212K skbuff_head_cache
>        904    767  84%    0.03K      8      113        32K ocfs2_em_ent
>        870    870 100%    0.12K     29       30       116K bio
>        840    816  97%    0.50K    105        8       420K size-512
>        812    578  71%    0.02K      4      203        16K biovec-1
>        810    800  98%    0.25K     54       15       216K size-256
>        800    761  95%    2.00K    400        2      1600K size-2048
>        549    514  93%    0.43K     61        9       244K shmem_inode_cache
>        546    289  52%    0.05K      7       78        28K delayacct_cache
>        531    478  90%    0.06K      9       59        36K biovec-4
>        520    471  90%    0.19K     26       20       104K biovec-16
>        505    292  57%    0.04K      5      101        20K pid
>        490    479  97%    0.75K     98        5       392K biovec-64
>        460    376  81%    0.04K       5       92        20K Acpi-Operand
>        452    175  38%    0.03K      4      113        16K pgd
>        440    357  81%    0.38K     44       10       176K sock_inode_cache
>        369    303  82%    0.44K     41        9       164K UNIX
>        360    320  88%    1.00K     90        4       360K size-1024
>        354    124  35%    0.06K      6       59        24K fs_cache
>        351    351 100%    4.00K    351        1      1404K pmd
>
>     uptime server1:
>
>     [root at impax ~]# uptime
>      10:21:09 up 18:16,  1 user,  load average: 7.22, 7.72, 7.83
>
>     uptime server2:
>
>     [root at impaxdb ~]# uptime
>      10:21:17 up 18:16,  1 user,  load average: 8.79, 9.02, 8.98
>
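
(A load average of 7-9 on boxes that are ~99% idle is the classic
signature of processes stuck in uninterruptible D state: Linux counts
tasks sleeping in disk or cluster-lock waits toward the load average.
A quick way to list the stuck tasks on each node, using only standard
procps:

    # show PID, state, the kernel function each task sleeps in, and command
    ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

The wchan column hints at whether they are blocked in the block layer
or in the cluster locking code.)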
>     yours
>      
>     arnold
>      
>     On 8/24/07, Alexei_Roudnev <Alexei_Roudnev at exigengroup.com> wrote:
>
>         Please run these and send the output here:
>          
>         vmstat 5
>         (5 lines)
>          
>         iostat -x 5
>         (2 - 3 outputs)
>          
>         top
>         (1 screen)
>          
>         slabtop (or equivalent, I don't remember)
>          
>          
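
(For convenience, the diagnostics requested above can be captured in one
pass; a sketch using only the standard procps/sysstat tools named above:

    #!/bin/bash
    # Collect the requested diagnostics into one file per host.
    out=/tmp/diag-$(hostname)-$(date +%Y%m%d%H%M).txt
    {
        echo '=== vmstat ===';  vmstat 5 5           # 5 samples, 5s apart
        echo '=== iostat ===';  iostat -x 5 3        # 3 extended reports
        echo '=== top ===';     top -b -n 1 | head -40
        echo '=== slabtop ==='; slabtop -o -s c | head -40
    } > "$out"
    echo "wrote $out"

slabtop -o prints one snapshot and exits; -s c sorts by cache size.)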
>         ----- Original Message -----
>
>             *From:* Arnold Maderthaner <arnold.maderthaner at j4care.com>
>             *To:* Sunil Mushran <Sunil.Mushran at oracle.com>;
>             ocfs2-users at oss.oracle.com
>             *Sent:* Thursday, August 23, 2007 4:12 AM
>             *Subject:* Re: [Ocfs2-users] Slow concurrent actions on
>             the same LVM logical volume
>
>              
>             Hi !
>
>             I watched the servers today and noticed that the server
>             load is very high even though not much is running on them:
>             server1:
>              16:37:11 up 32 min,  1 user,  load average: 6.45, 7.90, 7.45
>             server2:
>              16:37:19 up 32 min,  4 users,  load average: 9.79, 9.76, 8.26
>             I also tried some selects on the Oracle database today:
>             a "select count(*) from xyz", where xyz has 120,000 rows,
>             took about 10 seconds.
>             In top, the process doing the filesystem access is always
>             the first one listed.
>             And every command (ls, du, vgdisplay, lvdisplay, dd, cp, ...)
>             that needs access to an ocfs2 filesystem seems to block,
>             as if it were waiting on a lock.
>             Can anyone please help me?
>
>             yours
>
>             Arnold
>
>             On 8/22/07, Arnold Maderthaner <arnold.maderthaner at j4care.com> wrote:
>
>                 There is not really a high server load when writing to
>                 the disks, but I will try it ASAP. What do you mean by
>                 AST?
>                 Can I enable some debugging on ocfs2? We have 2 Dell
>                 servers with about 4 GB of RAM each and 2 dual-core
>                 CPUs at 2 GHz each.
>                 Can you please describe which tests to do again?
>
>                 Thx for your help
>
>                 Arnold
>
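
(On the debugging question: the 1.2-era tools expose the kernel trace
masks through debugfs.ocfs2 -l. The exact mask names vary by version,
so treat this as a sketch and check debugfs.ocfs2(8) first:

    # list the available log masks and their current state
    debugfs.ocfs2 -l
    # e.g. enable DLM tracing, watch the kernel log, then turn it off
    debugfs.ocfs2 -l DLM allow
    tail /var/log/messages
    debugfs.ocfs2 -l DLM off

Tracing is verbose; leave it enabled only long enough to capture one
slow du/ls.)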
>
>                 On 8/22/07, Sunil Mushran <Sunil.Mushran at oracle.com> wrote:
>
>                     Repeat the first test. This time run top on the
>                     first server. Which process is eating the cpu?
>                     From the timing, it appears that the second server
>                     is waiting for the AST that the first server is
>                     slow to send. And it could be slow because it may
>                     be busy flushing the data to disk. How much
>                     memory/cpu do these servers have?
>
>                     As far as the second test goes, what is the
>                     concurrent write performance directly to the LV
>                     (minus the fs)?
>
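
(A sketch of that raw-LV comparison, assuming a scratch LV that carries
no filesystem; oflag=direct bypasses the page cache so the numbers
reflect the storage path. CAUTION: this overwrites the target device.

    # server1 writes the first 1000 MB of the scratch LV:
    dd if=/dev/zero of=/dev/raidVG/scratchLV bs=1M count=1000 oflag=direct
    # server2, at the same time, writes the next 1000 MB of the same LV:
    dd if=/dev/zero of=/dev/raidVG/scratchLV bs=1M count=1000 seek=1000 oflag=direct

If the raw device sustains full speed from both nodes at once, the
slowdown is above the block layer, i.e. in the fs/DLM rather than in
LVM or the storage.)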
>                     On Wed, Aug 22, 2007 at 01:14:02PM +0530, Arnold
>                     Maderthaner wrote:
>                     > Hi all!
>                     >
>                     > I have problems with concurrent filesystem
>                     > actions on an ocfs2 filesystem which is mounted
>                     > by 2 nodes. OS=RH5ES and OCFS2=1.2.6.
>                     > For example: if I have an LV called testlv
>                     > mounted on /mnt on both servers, and I do a
>                     > "dd if=/dev/zero of=/mnt/test.a bs=1024
>                     > count=1000000" on server 1 and at the same time
>                     > a "du -hs /mnt/test.a", it takes about 5 seconds
>                     > for du -hs to execute:
>                     > 270M     test.a
>                     >
>                     > real    0m3.627s
>                     > user    0m0.000s
>                     > sys     0m0.002s
>                     >
>                     > or even longer. If I do a du -hs on a file that
>                     > is not being written by another server, it's fast:
>                     > 977M     test.b
>                     >
>                     > real    0m0.001s
>                     > user    0m0.001s
>                     > sys     0m0.000s
>                     >
>                     > If I write 2 different files from both servers
>                     > to the same LV, I get only 2.8 MB/sec.
>                     > If I write 2 different files from both servers
>                     > to different LVs (testlv, test2lv), then I get
>                     > about 30 MB/sec on each write (60 MB/sec
>                     > combined, which is very good).
>                     > Does anyone have an idea what could be wrong?
>                     > Are there any mount options for ocfs2 that could
>                     > help me? Is this an lvm2 or ocfs2 problem
>                     > (locking, blocking, ... issue)?
>                     >
>                     > Any help will be appreciated.
>                     >
>                     > yours
>                     >
>                     > Arnold
>                     > --
>                     > Arnold Maderthaner
>                     > J4Care Inc.
>                     >
>                     > _______________________________________________
>                     > Ocfs2-users mailing list
>                     > Ocfs2-users at oss.oracle.com
>                     > http://oss.oracle.com/mailman/listinfo/ocfs2-users
>
>
>
>
>                 -- 
>                 Arnold Maderthaner
>                 J4Care Inc. 
>
>
>
>
>             -- 
>             Arnold Maderthaner
>             J4Care Inc.
>
>             ------------------------------------------------------------------------
>             _______________________________________________
>             Ocfs2-users mailing list
>             Ocfs2-users at oss.oracle.com
>             http://oss.oracle.com/mailman/listinfo/ocfs2-users 
>
>
>
>
>     -- 
>     Arnold Maderthaner
>     J4Care Inc. 
>
>
>
>
> -- 
> Arnold Maderthaner
> J4Care Inc.
> ------------------------------------------------------------------------
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users



