We have the newest QLogic driver installed, and I can write at full speed (40 MB/sec) with ocfs2 (also with ext3) as long as I'm the only one writing to the device (even if it is mounted on both servers). What I noticed today is that it is not so much a write problem as a read problem: when I'm reading files, every 3-5 seconds the read halts for roughly 2-3 seconds, and in between it is fast. Even when I read the same data again it is fast.
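For reference, this is roughly how I reproduce and time the stalls (the mount point is one of ours; the file name is just an example):

  # read back a large file that the other node wrote, and watch for pauses
  time dd if=/srv/dcm4chee/sts/001/test.img of=/dev/null bs=1M
  # in a second terminal, watch await/%util on the shared device while the read runs
  iostat -x 5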

Do you have any comments on the server load?
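The CPUs are mostly idle while the load average stays at 7-9, so I suspect the load is processes stuck in uninterruptible sleep (D state) waiting on I/O. A generic check (nothing OCFS2-specific) would be:

  # list processes in uninterruptible sleep and the kernel symbol they are waiting in
  ps -eo state,pid,wchan:32,cmd | awk '$1 == "D"'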
yours

Arnold

On 8/24/07, Sunil Mushran <Sunil.Mushran@oracle.com> wrote:

mounted.ocfs2 is still not devmapper savvy. We'll address that
in the next full release.

Arnold Maderthaner wrote:
> FYI the VG is set up on /dev/sdc1.
> I also noticed something strange: the mounted command doesn't show
> anything:
> [root@impaxdb ~]# mounted.ocfs2 -d
> Device    FS    UUID    Label
> [root@impaxdb ~]# mounted.ocfs2 -f
> Device    FS    Nodes
> [root@impaxdb ~]#
>
> [root@impax ~]# mounted.ocfs2 -d
> Device    FS    UUID    Label
> [root@impax ~]# mounted.ocfs2 -f
> Device    FS    Nodes
> [root@impax ~]#
>
> but devices are mounted:
>
> [root@impax ~]# mount -t ocfs2
> /dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-xmbeanattrsLV on /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-installLV on /install type ocfs2 (rw,_netdev,heartbeat=local)
>
> [root@impaxdb ~]# mount -t ocfs2
> /dev/mapper/raidVG-sts001LV on /srv/dcm4chee/sts/001 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts002LV on /srv/dcm4chee/sts/002 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-sts003LV on /srv/dcm4chee/sts/003 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-cacheLV on /srv/dcm4chee/sts/cache type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u01LV on /srv/oracle/u01 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u02LV on /srv/oracle/u02 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u03LV on /srv/oracle/u03 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-u04LV on /srv/oracle/u04 type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-webstartLV on /var/www/html/webstart type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-configLV on /opt/dcm4chee/server/default/conf type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-xmbeanattrsLV on /opt/dcm4chee/server/default/data/xmbean-attrs type ocfs2 (rw,_netdev,heartbeat=local)
> /dev/mapper/raidVG-installLV on /install type ocfs2 (rw,_netdev,heartbeat=local)
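>
> As a cross-check (since mounted.ocfs2 shows nothing for these LVs), the kernel's own mount table lists them too, e.g.:
>
>   grep ocfs2 /proc/mounts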
>
> yours
>
> arnold
>
> On 8/24/07, Arnold Maderthaner <arnold.maderthaner@j4care.com> wrote:
>
> vmstat server1:
> [root@impax ~]# vmstat 5
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd    free   buff   cache   si   so    bi    bo    in    cs us sy  id wa st
>  0  0      0 2760468 119924  872240    0    0     4     4   107    63  0  0  99  0  0
>  0  2      0 2760468 119928  872240    0    0     4    30  1095   277  1  0  99  0  0
>  0  0      0 2760468 119940  872248    0    0     7    11  1076   232  0  0 100  0  0
>  0  0      0 2760468 119948  872272    0    0     4    18  1084   244  0  0 100  0  0
>  0  0      0 2760468 119948  872272    0    0     7    34  1063   220  0  0 100  0  0
>  0  0      0 2760468 119948  872272    0    0     0     1  1086   243  0  0 100  0  0
>  0  0      0 2760716 119956  872272    0    0     7    10  1065   234  2  0  98  0  0
>
> vmstat server2:
> [root@impaxdb ~]# vmstat 5
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd    free   buff   cache   si   so    bi    bo    in    cs us sy  id wa st
>  0  0  24676  143784  46428 3652848    0    0    11    20   107   122  0  0  84 16  0
>  0  1  24676  143784  46452 3652832    0    0     4    14  1080   237  0  0  89 11  0
>  0  1  24672  143660  46460 3652812    6    0    14    34  1074   246  0  0  90 10  0
>  0  0  24672  143660  46460 3652852    0    0     7    22  1076   246  0  0  80 20  0
>  0  1  24672  143660  46476 3652844    0    0     7    10  1068   227  0  0  85 15  0
>
> iostat server1:
>
> [root@impax ~]# iostat -x 5
> Linux 2.6.18-8.el5PAE (impax)   08/24/2007
>
> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>            0.07   0.00     0.04     0.45    0.00  99.44
>
> Device:  rrqm/s  wrqm/s  r/s   w/s   rsec/s  wsec/s  avgrq-sz  avgqu-sz  await    svctm    %util
> sda      0.24    0.90    0.23  0.76  15.72   13.33   29.22     0.01      12.68    4.77     0.47
> sdb      0.03    1.23    0.02  0.58  1.34    6.81    13.59     0.00      0.79     0.30     0.02
> sdc      0.58    0.87    3.88  3.78  16.69   10.87   3.60      7.98      1042.08  89.69    68.68
> dm-0     0.00    0.00    0.31  1.14  0.96    6.99    5.46      1.55      1061.28  459.00   66.92
> dm-1     0.00    0.00    0.31  0.31  0.95    0.31    2.01      0.66      1054.16  1054.17  66.15
> dm-2     0.00    0.00    0.31  0.31  0.95    0.31    2.01      0.66      1057.72  1057.84  66.24
> dm-3     0.00    0.00    0.96  0.34  6.15    0.55    5.12      0.77      592.01   504.72   66.04
> dm-4     0.00    0.00    0.32  0.33  0.96    0.41    2.14      0.66      1031.13  1028.75  66.00
> dm-5     0.00    0.00    0.31  0.32  0.95    0.40    2.12      0.66      1037.27  1036.55  66.09
> dm-6     0.00    0.00    0.31  0.31  0.94    0.31    2.01      0.66      1064.34  1064.45  66.37
> dm-7     0.00    0.00    0.32  0.31  0.95    0.32    2.01      0.66      1043.05  1042.99  65.77
> dm-8     0.00    0.00    0.31  0.31  0.95    0.32    2.03      0.66      1050.73  1048.23  65.97
> dm-9     0.00    0.00    0.34  0.31  0.99    0.31    2.01      0.67      1027.41  1006.68  65.71
> dm-10    0.00    0.00    0.32  0.32  0.95    0.32    2.01      0.68      1078.71  1050.22  66.41
> dm-12    0.00    0.00    0.31  0.31  0.96    0.31    2.03      0.66      1065.33  1065.10  66.43
> dm-13    0.00    0.00    0.04  1.27  1.27    2.54    2.91      0.00      2.43     0.10     0.01
> dm-18    0.00    0.00    0.01  0.35  0.04    2.80    8.00      0.01      25.42    0.03     0.00
> dm-19    0.00    0.00    0.00  0.18  0.01    1.47    8.00      0.00      0.42     0.21     0.00
> dm-21    0.00    0.00    0.02  0.01  2.20    0.11    65.49     0.00      9.84     3.01     0.01
> dm-26    0.00    0.00    0.01  0.05  0.12    0.43    9.21      0.00      11.66    7.05     0.04
> dm-27    0.00    0.00    0.00  0.21  0.02    1.70    8.02      0.00      12.09    4.57     0.10
> dm-28    0.00    0.00    0.26  0.11  8.90    0.91    26.40     0.01      14.94    1.85     0.07
> dm-29    0.00    0.00    0.05  1.03  1.04    8.22    8.56      0.01      13.76    2.48     0.27
>
> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>            0.00   0.00     0.00     0.40    0.00  99.60
>
> Device:  rrqm/s  wrqm/s  r/s   w/s   rsec/s  wsec/s  avgrq-sz  avgqu-sz  await    svctm   %util
> sda      0.00    2.80    0.00  2.20  0.00    40.00   18.18     0.03      12.27    4.73    1.04
> sdb      0.00    0.40    0.00  0.40  0.00    1.60    4.00      0.00      5.50     5.50    0.22
> sdc      0.00    0.00    4.80  4.80  14.40   4.80    2.00      7.21      1560.12  63.67   61.12
> dm-0     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1559.50  750.25  60.02
> dm-1     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1558.50  750.50  60.04
> dm-2     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1559.50  752.75  60.22
> dm-3     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1562.75  752.25  60.18
> dm-4     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1559.50  752.50  60.20
> dm-5     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1563.00  750.25  60.02
> dm-6     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1558.50  750.25  60.02
> dm-7     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1559.50  752.50  60.20
> dm-8     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1559.75  750.25  60.02
> dm-9     0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1554.00  750.25  60.02
> dm-10    0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1555.75  752.25  60.18
> dm-12    0.00    0.00    0.40  0.40  1.20    0.40    2.00      0.60      1571.25  750.25  60.02
> dm-13    0.00    0.00    0.00  0.80  0.00    1.60    2.00      0.01      8.25     2.75    0.22
> dm-18    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00    0.00
> dm-19    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00    0.00
> dm-21    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00    0.00
> dm-26    0.00    0.00    0.00  0.60  0.00    4.80    8.00      0.01      12.67    8.33    0.50
> dm-27    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00    0.00
> dm-28    0.00    0.00    0.00  1.80  0.00    14.40   8.00      0.02      10.22    2.44    0.44
> dm-29    0.00    0.00    0.00  0.80  0.00    6.40    8.00      0.01      10.25    3.75    0.30
>
> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>            0.05   0.00     0.00     0.00    0.00  99.95
>
> Device:  rrqm/s  wrqm/s  r/s   w/s   rsec/s  wsec/s  avgrq-sz  avgqu-sz  await    svctm    %util
> sda      0.00    0.20    0.00  0.60  0.00    6.40    10.67     0.01      14.67    6.00     0.36
> sdb      0.00    0.40    0.00  0.40  0.00    1.60    4.00      0.00      0.00     0.00     0.00
> sdc      0.00    0.00    4.80  4.80  14.40   4.80    2.00      8.81      1292.92  77.06    73.98
> dm-0     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.74      1723.33  1233.00  73.98
> dm-1     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1722.00  1223.33  73.40
> dm-2     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1726.33  1222.00  73.32
> dm-3     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1725.67  1222.00  73.32
> dm-4     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1726.00  1222.00  73.32
> dm-5     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1722.33  1222.00  73.32
> dm-6     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1722.00  1223.67  73.42
> dm-7     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1726.00  1222.00  73.32
> dm-8     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1722.33  1222.00  73.32
> dm-9     0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1723.00  1214.33  72.86
> dm-10    0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.73      1725.67  1222.00  73.32
> dm-12    0.00    0.00    0.20  0.40  0.60    0.40    1.67      0.74      1722.00  1231.67  73.90
> dm-13    0.00    0.00    0.00  0.80  0.00    1.60    2.00      0.00      0.00     0.00     0.00
> dm-18    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-19    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-21    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-26    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-27    0.00    0.00    0.00  0.60  0.00    4.80    8.00      0.01      13.00    4.33     0.26
> dm-28    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-29    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
>
> iostat server2:
>
> [root@impaxdb ~]# iostat -x 5
> Linux 2.6.18-8.el5PAE (impaxdb)   08/24/2007
>
> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>            0.48   0.00     0.10    15.70    0.00  83.72
>
> Device:  rrqm/s  wrqm/s  r/s   w/s   rsec/s  wsec/s  avgrq-sz  avgqu-sz  await    svctm    %util
> sda      0.28    1.38    0.32  0.82  18.52   17.59   31.55     0.00      1.98     0.83     0.10
> sdb      0.01    1.73    0.02  0.04  3.20    14.15   297.21    0.00      9.18     0.87     0.01
> sdc      6.44    14.59   3.94  4.26  65.36   126.12  23.36     9.52      1160.63  91.33    74.88
> dm-0     0.00    0.00    0.29  0.29  0.89    0.29    2.01      0.71      1214.34  1214.01  71.18
> dm-1     0.00    0.00    0.29  0.29  0.89    0.29    2.01      0.71      1206.47  1206.38  70.95
> dm-2     0.00    0.00    0.29  0.29  0.89    0.29    2.01      0.71      1207.35  1207.27  70.96
> dm-3     0.00    0.00    7.10  9.84  55.33   76.63   7.79      25.51     1506.27  43.35    73.42
> dm-4     0.00    0.00    0.32  5.48  1.10    41.82   7.39      9.21      1586.36  122.41   71.06
> dm-5     0.00    0.00    0.30  0.88  0.90    5.02    5.02      0.72      606.80   600.72   70.89
> dm-6     0.00    0.00    0.29  0.29  0.89    0.29    2.01      0.71      1206.90  1206.94  70.91
> dm-7     0.00    0.00    0.29  0.29  0.89    0.29    2.01      0.71      1202.65  1202.69  70.82
> dm-8     0.00    0.00    0.29  0.29  0.89    0.29    2.01      0.71      1204.47  1204.43  70.84
> dm-9     0.00    0.00    0.29  0.29  0.89    0.29    2.01      0.71      1202.38  1202.42  70.77
> dm-10    0.00    0.00    0.29  0.29  0.88    0.29    2.01      0.71      1208.53  1208.48  70.93
> dm-12    0.00    0.00    0.30  0.29  0.91    0.29    2.04      0.71      1202.82  1202.49  70.92
> dm-13    0.00    0.00    0.00  0.00  0.01    0.00    6.99      0.00      0.47     0.06     0.00
> dm-18    0.00    0.00    0.02  1.77  3.16    14.15   9.67      0.03      16.69    0.03     0.01
> dm-19    0.00    0.00    0.00  0.00  0.01    0.00    8.02      0.00      0.69     0.34     0.00
> dm-20    0.00    0.00    0.05  0.28  3.48    2.26    17.18     0.00      3.37     0.47     0.02
> dm-25    0.00    0.00    0.01  0.05  0.16    0.44    9.68      0.00      1.20     0.65     0.00
> dm-26    0.00    0.00    0.00  0.14  0.02    1.11    8.03      0.00      3.28     0.05     0.00
> dm-27    0.00    0.00    0.31  0.15  9.69    1.24    23.61     0.00      9.39     0.88     0.04
> dm-28    0.00    0.00    0.06  1.08  1.24    8.61    8.65      0.00      1.04     0.21     0.02
>
> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>            0.05   0.00     0.05    23.62    0.00  76.27
>
> Device:  rrqm/s  wrqm/s  r/s   w/s   rsec/s  wsec/s  avgrq-sz  avgqu-sz  await    svctm    %util
> sda      0.00    4.85    0.00  2.22  0.00    56.57   25.45     0.00      0.91     0.36     0.08
> sdb      0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> sdc      0.00    1.21    4.85  5.45  14.55   24.24   3.76      12.51     1556.98  94.24    97.09
> dm-0     0.00    0.00    0.20  0.40  0.61    0.40    1.67      0.97      2055.67  1602.00  97.09
> dm-1     0.00    0.00    0.20  0.40  0.61    0.40    1.67      0.96      2064.67  1589.00  96.30
> dm-2     0.00    0.00    0.20  0.40  0.61    0.40    1.67      0.96      2063.67  1582.67  95.92
> dm-3     0.00    0.00    0.20  0.40  0.61    0.40    1.67      4.79      8988.33  1582.00  95.88
> dm-4     0.00    0.00    0.20  1.21  0.61    6.87    5.29      0.96      886.57   679.14   96.04
> dm-5     0.00    0.00    0.20  1.21  0.61    6.87    5.29      0.96      878.00   680.00   96.16
> dm-6     0.00    0.00    0.20  0.40  0.61    0.40    1.67      0.97      2055.67  1593.00  96.55
> dm-7     0.00    0.00    0.20  0.40  0.61    0.40    1.67      0.96      2064.67  1584.33  96.02
> dm-8     0.00    0.00    0.20  0.40  0.61    0.40    1.67      0.96      2064.67  1586.33  96.14
> dm-9     0.00    0.00    0.20  0.40  0.61    0.40    1.67      0.96      2064.67  1583.33  95.96
> dm-10    0.00    0.00    0.20  0.40  0.61    0.40    1.67      0.97      2055.67  1597.67  96.83
> dm-12    0.00    0.00    0.20  0.40  0.61    0.40    1.67      0.97      2071.67  1595.67  96.71
> dm-13    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-18    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-19    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-20    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-25    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-26    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00     0.00     0.00
> dm-27    0.00    0.00    0.00  0.81  0.00    6.46    8.00      0.00      0.00     0.00     0.00
> dm-28    0.00    0.00    0.00  4.85  0.00    38.79   8.00      0.01      2.17     0.17     0.08
>
> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>            0.00   0.00     0.00    19.85    0.00  80.15
>
> Device:  rrqm/s  wrqm/s  r/s   w/s   rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm    %util
> sda      0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
> sdb      0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
> sdc      0.00    0.61    0.00  0.00  0.00    0.00    0.00      9.12      0.00   0.00     80.26
> dm-0     0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.65      0.00   3196.00  64.70
> dm-1     0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.65      0.00   3235.00  65.49
> dm-2     0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.66      0.00   3254.00  65.87
> dm-3     0.00    0.00    0.40  0.81  2.23    6.48    7.17      3.29      0.00   660.83   80.26
> dm-4     0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.66      0.00   3248.00  65.75
> dm-5     0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.66      0.00   3242.00  65.63
> dm-6     0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.65      0.00   3223.00  65.24
> dm-7     0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.66      0.00   3249.00  65.77
> dm-8     0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.66      0.00   3243.00  65.65
> dm-9     0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.66      0.00   3252.00  65.83
> dm-10    0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.65      0.00   3209.00  64.96
> dm-12    0.00    0.00    0.20  0.00  0.61    0.00    3.00      0.65      0.00   3215.00  65.08
> dm-13    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
> dm-18    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
> dm-19    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
> dm-20    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
> dm-25    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
> dm-26    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
> dm-27    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
> dm-28    0.00    0.00    0.00  0.00  0.00    0.00    0.00      0.00      0.00   0.00     0.00
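>
> (Looking at the iostat output above: the dm-* volumes on sdc sit at roughly 1000-2000 ms await and 60-97% util even at only a few I/Os per second. A quick way to pick those rows out -- column 10 is await and column 12 is %util in this iostat -x layout -- would be:)
>
>   iostat -x 5 | awk 'NF == 12 && $10+0 > 1000 {print $1, $10, $12}'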
>
> top server1:
>
> Tasks: 217 total,   1 running, 214 sleeping,   0 stopped,   2 zombie
> Cpu(s):  0.1%us,  0.0%sy,  0.0%ni, 99.4%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:   4084444k total,  1323728k used,  2760716k free,   120072k buffers
> Swap:  8193020k total,        0k used,  8193020k free,   872284k cached
>
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 14245 root  16   0  2288  988  704 R    2  0.0  0:00.01  top
>     1 root  15   0  2036  652  564 S    0  0.0  0:01.13  init
>     2 root  RT   0     0    0    0 S    0  0.0  0:00.00  migration/0
>     3 root  34  19     0    0    0 S    0  0.0  0:00.46  ksoftirqd/0
>     4 root  RT   0     0    0    0 S    0  0.0  0:00.00  watchdog/0
>     5 root  RT   0     0    0    0 S    0  0.0  0:00.00  migration/1
>     6 root  34  19     0    0    0 S    0  0.0  0:00.07  ksoftirqd/1
>     7 root  RT   0     0    0    0 S    0  0.0  0:00.00  watchdog/1
>     8 root  RT   0     0    0    0 S    0  0.0  0:00.00  migration/2
>     9 root  34  19     0    0    0 S    0  0.0  0:00.19  ksoftirqd/2
>    10 root  RT   0     0    0    0 S    0  0.0  0:00.00  watchdog/2
>    11 root  RT   0     0    0    0 S    0  0.0  0:00.00  migration/3
>    12 root  34  19     0    0    0 S    0  0.0  0:00.23  ksoftirqd/3
>    13 root  RT   0     0    0    0 S    0  0.0  0:00.00  watchdog/3
>    14 root  10  -5     0    0    0 S    0  0.0  0:00.00  events/0
>    15 root  10  -5     0    0    0 S    0  0.0  0:00.00  events/1
>    16 root  10  -5     0    0    0 S    0  0.0  0:00.00  events/2
>    17 root  10  -5     0    0    0 S    0  0.0  0:00.00  events/3
>    18 root  19  -5     0    0    0 S    0  0.0  0:00.00  khelper
>    19 root  10  -5     0    0    0 S    0  0.0  0:00.00  kthread
>    25 root  10  -5     0    0    0 S    0  0.0  0:00.00  kblockd/0
>    26 root  10  -5     0    0    0 S    0  0.0  0:00.04  kblockd/1
>    27 root  10  -5     0    0    0 S    0  0.0  0:00.00  kblockd/2
>    28 root  10  -5     0    0    0 S    0  0.0  0:00.06  kblockd/3
>    29 root  14  -5     0    0    0 S    0  0.0  0:00.00  kacpid
>   147 root  14  -5     0    0    0 S    0  0.0  0:00.00  cqueue/0
>   148 root  14  -5     0    0    0 S    0  0.0  0:00.00  cqueue/1
>   149 root  14  -5     0    0    0 S    0  0.0  0:00.00  cqueue/2
>   150 root  14  -5     0    0    0 S    0  0.0  0:00.00  cqueue/3
>   153 root  10  -5     0    0    0 S    0  0.0  0:00.00  khubd
>   155 root  10  -5     0    0    0 S    0  0.0  0:00.00  kseriod
>   231 root  19   0     0    0    0 S    0  0.0  0:00.00  pdflush
>   232 root  15   0     0    0    0 S    0  0.0  0:00.15  pdflush
>   233 root  14  -5     0    0    0 S    0  0.0  0:00.00  kswapd0
>   234 root  14  -5     0    0    0 S    0  0.0  0:00.00  aio/0
>   235 root  14  -5     0    0    0 S    0  0.0  0:00.00  aio/1
>   236 root  14  -5     0    0    0 S    0  0.0  0:00.00  aio/2
>
> top server2:
>
> Tasks: 264 total,   1 running, 260 sleeping,   0 stopped,   3 zombie
> Cpu(s):  0.5%us,  0.1%sy,  0.0%ni, 83.7%id, 15.7%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:   4084444k total,  3941216k used,   143228k free,    46608k buffers
> Swap:  8193020k total,    24660k used,  8168360k free,  3652864k cached
>
>   PID USER  PR  NI  VIRT   RES  SHR S %CPU %MEM    TIME+  COMMAND
>  9507 root  15   0  2292  1020  704 R    2  0.0  0:00.01  top
>     1 root  15   0  2032   652  564 S    0  0.0  0:01.20  init
>     2 root  RT   0     0     0    0 S    0  0.0  0:00.00  migration/0
>     3 root  34  19     0     0    0 S    0  0.0  0:00.25  ksoftirqd/0
>     4 root  RT   0     0     0    0 S    0  0.0  0:00.00  watchdog/0
>     5 root  RT   0     0     0    0 S    0  0.0  0:00.00  migration/1
>     6 root  34  19     0     0    0 S    0  0.0  0:00.42  ksoftirqd/1
>     7 root  RT   0     0     0    0 S    0  0.0  0:00.00  watchdog/1
>     8 root  RT   0     0     0    0 S    0  0.0  0:00.00  migration/2
>     9 root  34  19     0     0    0 S    0  0.0  0:00.56  ksoftirqd/2
>    10 root  RT   0     0     0    0 S    0  0.0  0:00.00  watchdog/2
>    11 root  RT   0     0     0    0 S    0  0.0  0:00.00  migration/3
>    12 root  39  19     0     0    0 S    0  0.0  0:00.08  ksoftirqd/3
>    13 root  RT   0     0     0    0 S    0  0.0  0:00.00  watchdog/3
>    14 root  10  -5     0     0    0 S    0  0.0  0:00.03  events/0
>    15 root  10  -5     0     0    0 S    0  0.0  0:00.00  events/1
>    16 root  10  -5     0     0    0 S    0  0.0  0:00.02  events/2
>    17 root  10  -5     0     0    0 S    0  0.0  0:00.02  events/3
>    18 root  10  -5     0     0    0 S    0  0.0  0:00.00  khelper
>    19 root  11  -5     0     0    0 S    0  0.0  0:00.00  kthread
>    25 root  10  -5     0     0    0 S    0  0.0  0:00.08  kblockd/0
>    26 root  10  -5     0     0    0 S    0  0.0  0:00.00  kblockd/1
>    27 root  10  -5     0     0    0 S    0  0.0  0:00.00  kblockd/2
>    28 root  10  -5     0     0    0 S    0  0.0  0:00.08  kblockd/3
>    29 root  18  -5     0     0    0 S    0  0.0  0:00.00  kacpid
>   147 root  18  -5     0     0    0 S    0  0.0  0:00.00  cqueue/0
>   148 root  20  -5     0     0    0 S    0  0.0  0:00.00  cqueue/1
>   149 root  10  -5     0     0    0 S    0  0.0  0:00.00  cqueue/2
>   150 root  10  -5     0     0    0 S    0  0.0  0:00.00  cqueue/3
>   153 root  10  -5     0     0    0 S    0  0.0  0:00.00  khubd
>   155 root  10  -5     0     0    0 S    0  0.0  0:00.00  kseriod
>   233 root  15  -5     0     0    0 S    0  0.0  0:00.84  kswapd0
>   234 root  20  -5     0     0    0 S    0  0.0  0:00.00  aio/0
>   235 root  20  -5     0     0    0 S    0  0.0  0:00.00  aio/1
>   236 root  20  -5     0     0    0 S    0  0.0  0:00.00  aio/2
>   237 root  20  -5     0     0    0 S    0  0.0  0:00.00  aio/3
>   409 root  11  -5     0     0    0 S    0  0.0  0:00.00  kpsmoused
>
> slabtop server1:
>
> Active / Total Objects (% used)    : 301060 / 312027 (96.5%)
> Active / Total Slabs (% used)      : 12055 / 12055 (100.0%)
> Active / Total Caches (% used)     : 106 / 146 (72.6%)
> Active / Total Size (% used)       : 45714.46K / 46814.27K (97.7%)
> Minimum / Average / Maximum Object : 0.01K / 0.15K / 128.00K
>
>   OBJS  ACTIVE  USE  OBJ SIZE  SLABS  OBJ/SLAB  CACHE SIZE  NAME
> 148176  148095  99%     0.05K   2058        72       8232K  buffer_head
>  44573   44497  99%     0.13K   1537        29       6148K  dentry_cache
>  32888   32878  99%     0.48K   4111         8      16444K  ext3_inode_cache
>  10878   10878 100%     0.27K    777        14       3108K  radix_tree_node
>   7943    7766  97%     0.02K     47       169        188K  dm_io
>   7917    7766  98%     0.02K     39       203        156K  dm_tio
>   6424    5414  84%     0.09K    146        44        584K  vm_area_struct
>   6384    6223  97%     0.04K     76        84        304K  sysfs_dir_cache
>   5989    5823  97%     0.03K     53       113        212K  size-32
>   5074    4833  95%     0.06K     86        59        344K  size-64
>   4488    4488 100%     0.33K    408        11       1632K  inode_cache
>   2610    2561  98%     0.12K     87        30        348K  size-128
>   2540    1213  47%     0.01K     10       254         40K  anon_vma
>   2380    1475  61%     0.19K    119        20        476K  filp
>   1430    1334  93%     0.35K    130        11        520K  proc_inode_cache
>   1326    1031  77%     0.05K     17        78         68K  selinux_inode_security
>   1320    1304  98%     0.25K     88        15        352K  size-256
>   1015     629  61%     0.02K      5       203         20K  biovec-1
>   1008     125  12%     0.05K     14        72         56K  journal_head
>    930     841  90%     0.12K     31        30        124K  bio
>    920     666  72%     0.19K     46        20        184K  skbuff_head_cache
>    798     757  94%     2.00K    399         2       1596K  size-2048
>    791     690  87%     0.03K      7       113         28K  ocfs2_em_ent
>    784     748  95%     0.50K     98         8        392K  size-512
>    590     503  85%     0.06K     10        59         40K  biovec-4
>    564     562  99%     0.88K    141         4        564K  ocfs2_inode_cache
>    546     324  59%     0.05K      7        78         28K  delayacct_cache
>    531     497  93%     0.43K     59         9        236K  shmem_inode_cache
>    520     487  93%     0.19K     26        20        104K  biovec-16
>    505     327  64%     0.04K      5       101         20K  pid
>    500     487  97%     0.75K    100         5        400K  biovec-64
>    460     376  81%     0.04K      5        92         20K  Acpi-Operand
>    452     104  23%     0.03K      4       113         16K  pgd
>    347     347 100%     4.00K    347         1       1388K  size-4096
>    338     303  89%     0.02K      2       169          8K  Acpi-Namespace
>    336     175  52%     0.04K      4        84         16K  crq_pool
>
> slabtop server2:
>
> Active / Total Objects (% used)    : 886188 / 907066 (97.7%)
> Active / Total Slabs (% used)      : 20722 / 20723 (100.0%)
> Active / Total Caches (% used)     : 104 / 147 (70.7%)
> Active / Total Size (% used)       : 76455.01K / 78620.30K (97.2%)
> Minimum / Average / Maximum Object : 0.01K / 0.09K / 128.00K
>
>   OBJS  ACTIVE  USE  OBJ SIZE  SLABS  OBJ/SLAB  CACHE SIZE  NAME
> 736344  729536  99%     0.05K  10227        72      40908K  buffer_head
>  37671   34929  92%     0.13K   1299        29       5196K  dentry_cache
>  32872   32666  99%     0.48K   4109         8      16436K  ext3_inode_cache
>  20118   20097  99%     0.27K   1437        14       5748K  radix_tree_node
>  10912    9437  86%     0.09K    248        44        992K  vm_area_struct
>   7714    7452  96%     0.02K     38       203        152K  dm_tio
>   7605    7452  97%     0.02K     45       169        180K  dm_io
>   6552    6475  98%     0.04K     78        84        312K  sysfs_dir_cache
>   5763    5461  94%     0.03K     51       113        204K  size-32
>   4661    4276  91%     0.06K     79        59        316K  size-64
>   4572    3569  78%     0.01K     18       254         72K  anon_vma
>   4440    3770  84%     0.19K    222        20        888K  filp
>   2288    1791  78%     0.35K    208        11        832K  proc_inode_cache
>   1980    1884  95%     0.12K     66        30        264K  size-128
>   1804    1471  81%     0.33K    164        11        656K  inode_cache
>   1404    1046  74%     0.05K     18        78         72K  selinux_inode_security
>   1060     670  63%     0.19K     53        20        212K  skbuff_head_cache
>    904     767  84%     0.03K      8       113         32K  ocfs2_em_ent
>    870     870 100%     0.12K     29        30        116K  bio
>    840     816  97%     0.50K    105         8        420K  size-512
>    812     578  71%     0.02K      4       203         16K  biovec-1
>    810     800  98%     0.25K     54        15        216K  size-256
>    800     761  95%     2.00K    400         2       1600K  size-2048
>    549     514  93%     0.43K     61         9        244K  shmem_inode_cache
>    546     289  52%     0.05K      7        78         28K  delayacct_cache
>    531     478  90%     0.06K      9        59         36K  biovec-4
>    520     471  90%     0.19K     26        20        104K  biovec-16
>    505     292  57%     0.04K      5       101         20K  pid
>    490     479  97%     0.75K     98         5        392K  biovec-64
>    460     376  81%     0.04K      5        92         20K  Acpi-Operand
>    452     175  38%     0.03K      4       113         16K  pgd
>    440     357  81%     0.38K     44        10        176K  sock_inode_cache
>    369     303  82%     0.44K     41         9        164K  UNIX
>    360     320  88%     1.00K     90         4        360K  size-1024
>    354     124  35%     0.06K      6        59         24K  fs_cache
>    351     351 100%     4.00K    351         1       1404K  pmd
>
> uptime server1:
>
> [root@impax ~]# uptime
>  10:21:09 up 18:16, 1 user, load average: 7.22, 7.72, 7.83
>
> uptime server2:
>
> [root@impaxdb ~]# uptime
>  10:21:17 up 18:16, 1 user, load average: 8.79, 9.02, 8.98
>
> yours
>
> arnold
>
> On 8/24/07, Alexei_Roudnev <Alexei_Roudnev@exigengroup.com> wrote:
>
> Run and send here, please:
>
> vmstat 5
> (5 lines)
>
> iostat -x 5
> (2 - 3 outputs)
>
> top
> (1 screen)
>
> slabtop (or equivalent, I don't remember)
>
> ----- Original Message -----
> From: Arnold Maderthaner <arnold.maderthaner@j4care.com>
> To: Sunil Mushran <Sunil.Mushran@oracle.com>; ocfs2-users@oss.oracle.com
> Sent: Thursday, August 23, 2007 4:12 AM
> Subject: Re: [Ocfs2-users] Slow concurrent actions on the same LVM logicalvolume
>
> Hi!
>
> I watched the servers today and noticed that the server load is very
> high even though not much is running on them:
> server1:
> 16:37:11 up 32 min, 1 user, load average: 6.45, 7.90, 7.45
> server2:
> 16:37:19 up 32 min, 4 users, load average: 9.79, 9.76, 8.26
> I also tried some selects on the Oracle database today: a
> "select count(*) from xyz", where xyz has 120,000 rows, took about
> 10 seconds. The process doing the filesystem access is always the
> first one, and every command (ls, du, vgdisplay, lvdisplay, dd, cp, ...)
> that needs access to an OCFS2 filesystem seems to block, as if there
> were a lock.
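> To see which system call a slow command is actually hanging in, it could be traced, for example (a sketch using one of our mount points):
>
>   strace -T -tt du -hs /srv/dcm4chee/sts/001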
> Can anyone please help me?
>
> yours
>
> Arnold
>
> On 8/22/07, Arnold Maderthaner <arnold.maderthaner@j4care.com> wrote:
>
> There is not really a high server load when writing to the disks, but
> I will try it asap. What do you mean by AST?
> Can I enable some debugging on ocfs2? We have 2 Dell servers with
> about 4 GB of RAM each and 2 dual-core CPUs at 2 GHz each.
> Can you please describe again which tests to do?
>
> Thanks for your help
>
> Arnold
>
> On 8/22/07, Sunil Mushran <Sunil.Mushran@oracle.com> wrote:
>
> Repeat the first test. This time run top on the first server. Which
> process is eating the cpu? From the time it appears that the second
> server is waiting for the AST that the first server is slow to send.
> And it could be slow because it may be busy flushing the data to disk.
> How much memory/cpu do these servers have?
>
> As far as the second test goes, what is the concurrent write
> performance directly to the LV (minus fs).
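>
> (A raw-device comparison could be something along these lines, run on both nodes at once -- only against a scratch LV, since it overwrites the LV's contents; the VG/LV names here are just the ones from the earlier mail, and oflag=direct assumes GNU dd:)
>
>   dd if=/dev/zero of=/dev/raidVG/testlv bs=1M count=1000 oflag=direct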
>
> On Wed, Aug 22, 2007 at 01:14:02PM +0530, Arnold Maderthaner wrote:
> > Hi 2 all!
> >
> > I have problems with concurrent filesystem actions on an ocfs2
> > filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6
> > For example: if I have an LV called testlv which is mounted on /mnt
> > on both servers, and I do a "dd if=/dev/zero of=/mnt/test.a bs=1024
> > count=1000000" on server 1 and at the same time a "du -hs /mnt/test.a",
> > it takes about 5 seconds for du -hs to execute:
> > 270M test.a
> >
> > real 0m3.627s
> > user 0m0.000s
> > sys  0m0.002s
> >
> > or even longer. If I do a du -hs on a file which is not being written
> > by another server, it is fast:
> > 977M test.b
> >
> > real 0m0.001s
> > user 0m0.001s
> > sys  0m0.000s
> >
> > If I write 2 different files from both servers to the same LV I get
> > only 2.8 MB/sec.
> > If I write 2 different files from both servers to different LVs
> > (testlv, test2lv) then I get about 30 MB/sec on each write (so
> > 60 MB/sec combined, which is very good).
> > Does anyone have an idea what could be wrong? Are there any mount
> > options for ocfs2 which could help me? Is this an lvm2 or ocfs2
> > problem (locking, blocking, ... issue)?
> >
> > Any help will be appreciated.
> >
> > yours
> >
> > Arnold
> > --
> > Arnold Maderthaner
> > J4Care Inc.
> >
> > _______________________________________________
> > Ocfs2-users mailing list
> > Ocfs2-users@oss.oracle.com
> > http://oss.oracle.com/mailman/listinfo/ocfs2-users
>
> --
> Arnold Maderthaner
> J4Care Inc.
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users@oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users

--
Arnold Maderthaner
J4Care Inc.