<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD><TITLE>[Ocfs2-users] ocfs2 is still eating memory</TITLE>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="MSHTML 6.00.2800.1561" name=GENERATOR>
<STYLE></STYLE>
</HEAD>
<BODY bgColor=#ffffff>
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>Note that you are running a beta of SLES10
(SLES10 SP1 beta), which means you may run into problems beyond OCFS2 ones.
Can you try the same test on the most stable combination, SLES9 SP3 with
kernel update 283 or later, just to determine whether the problem is in OCFS2
or not?</FONT></DIV>
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>PS: My own assessment of OCFSv2 (I tested mainly
the SLES9 SP3 builds, not SLES10 or the mainline releases): it is already fine
for backups, probably fine for archive logs, needs thorough testing before
holding database files, and should not yet be used for general file storage.
But this is _my own_ observation; it may be wrong or outdated, as OCFSv2 has
improved dramatically over the last few months.</FONT></DIV>
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2></FONT> </DIV>
<BLOCKQUOTE
style="PADDING-RIGHT: 0px; PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: #000000 2px solid; MARGIN-RIGHT: 0px">
<DIV style="FONT: 10pt arial">----- Original Message ----- </DIV>
<DIV
style="BACKGROUND: #e4e4e4; FONT: 10pt arial; COLOR: black"><B>From:</B>
<A title=Ernest.Cline@petersons.com
href="mailto:Ernest.Cline@petersons.com">Cline, Ernie</A> </DIV>
<DIV style="FONT: 10pt arial"><B>To:</B> <A title=ocfs2-users@oss.oracle.com
href="mailto:ocfs2-users@oss.oracle.com">ocfs2-users</A> </DIV>
<DIV style="FONT: 10pt arial"><B>Sent:</B> Monday, March 05, 2007 1:45
PM</DIV>
<DIV style="FONT: 10pt arial"><B>Subject:</B> RE: [Ocfs2-users] ocfs2 is still
eating memory</DIV>
<DIV><BR></DIV>
<DIV id=idOWAReplyText95312 dir=ltr>
<DIV dir=ltr><FONT face=Arial color=#000000 size=2>If this is not an OCFS2
issue, then what is it? I have recently been told by Oracle support that
the supported method for RAC on Oracle 10G is OCFS2. These systems are
going into production very shortly ... </FONT></DIV></DIV>
<DIV dir=ltr><BR>
<HR tabIndex=-1>
<FONT face=Tahoma size=2><B>From:</B> <A
href="mailto:ocfs2-users-bounces@oss.oracle.com">ocfs2-users-bounces@oss.oracle.com</A>
on behalf of John Lange<BR><B>Sent:</B> Mon 3/5/2007 4:25 PM<BR><B>To:</B>
ocfs2-users<BR><B>Subject:</B> [Ocfs2-users] ocfs2 is still eating
memory<BR></FONT><BR></DIV>
<DIV>
<P><FONT size=2>With a large (12 TB) ocfs2 file system mounted, all I need to
do is run a "find ." and ocfs2 will slowly consume all RAM until the
oom-killer is invoked.<BR><BR>
Previously this was thought to be related to the file system being exported
via NFS, but that has now been ruled out.<BR><BR>
Below are some stats. The first section was captured after the "find ." had
been running for about 10 minutes; the box is far from out of memory at that
point, but I didn't have time this afternoon to run it until it died. The
second section was captured right after issuing<BR><BR>
sync ; echo 3 &gt; /proc/sys/vm/drop_caches<BR><BR>
so you can see the memory get freed up.<BR><BR>
This is running a recent SUSE SP1 BRANCH kernel:
2.6.16.37-SLES10_SP1_BRANCH_20070213192756-smp<BR><BR>
I have already been told on the bug tracker that this is not an ocfs2 issue.
I'm not trying to be a pain; I just wanted to repeat my findings so that the
ocfs2 list is aware that, while it may not be an ocfs2 code problem, it
certainly does prevent you from using ocfs2 in production.<BR><BR>
If someone has a better place where these findings could be reported, please
let me know.</FONT></P>
<PRE>
======= section 1 =========

Mon Mar 5 15:15:01 CST 2007
# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 2 1 240 1563768 241264 47076 0 0 1 4 11 6 0 1 94 4 0
# vmstat -s
  2075084 total memory
   511456 used memory
   182952 active memory
   223820 inactive memory
  1563628 free memory
   241264 buffer memory
    47076 swap cache
  2056304 total swap
      240 used swap
  2056064 free swap
   612937 non-nice user cpu ticks
     7186 nice user cpu ticks
  2272575 system cpu ticks
 293987671 idle cpu ticks
  13392542 IO-wait cpu ticks
    50629 IRQ cpu ticks
  1667207 softirq cpu ticks
        0 steal cpu ticks
  88657366 pages paged in
 1816763473 pages paged out
        0 pages swapped in
       61 pages swapped out
 2268215696 interrupts
 2382140383 CPU context switches
 1171569943 boot time
   109385 forks
# vmstat -m
Cache Num Total Size Pages
rpc_buffers 8 8 2048 2
rpc_tasks 8 15 256 15
rpc_inode_cache 0 0 512 7
ocfs2_lock 152 203 16 203
ocfs2_inode_cache 29868 29868 896 4
ocfs2_uptodate 6655 6780 32 113
ocfs2_em_ent 29854 29854 64 59
dlmfs_inode_cache 1 6 640 6
dlm_mle_cache 10 10 384 10
configfs_dir_cache 33 78 48 78
fib6_nodes 7 113 32 113
ip6_dst_cache 7 15 256 15
ndisc_cache 1 15 256 15
RAWv6 5 6 640 6
UDPv6 3 6 640 6
tw_sock_TCPv6 0 0 128 30
request_sock_TCPv6 0 0 128 30
TCPv6 10 12 1280 3
ip_fib_alias 16 113 32 113
ip_fib_hash 16 113 32 113
dm_events 16 169 20 169
dm_tio 3676 3857 16 203
dm_io 3695 3887 20 169
uhci_urb_priv 0 0 40 92
ext3_inode_cache 950 1912 512 8
ext3_xattr 0 0 48 78
journal_handle 2 169 20 169
journal_head 462 648 52 72
revoke_table 6 254 12 254
revoke_record 0 0 16 203
qla2xxx_srbs 242 270 128 30
scsi_cmd_cache 59 90 384 10
sgpool-256 32 32 4096 1
sgpool-128 32 32 2048 2
sgpool-64 32 32 1024 4
sgpool-32 32 32 512 8
sgpool-16 32 45 256 15
sgpool-8 156 210 128 30
scsi_io_context 0 0 104 37
UNIX 410 427 512 7
ip_mrt_cache 0 0 128 30
tcp_bind_bucket 12 203 16 203
inet_peer_cache 89 118 64 59
secpath_cache 0 0 128 30
xfrm_dst_cache 0 0 384 10
ip_dst_cache 178 285 256 15
arp_cache 5 15 256 15
RAW 3 7 512 7
UDP 32 56 512 7
tw_sock_TCP 0 0 128 30
request_sock_TCP 0 0 64 59
TCP 16 35 1152 7
flow_cache 0 0 128 30
cfq_ioc_pool 184 720 96 40
cfq_pool 166 600 96 40
crq_pool 181 312 48 78
deadline_drq 0 0 52 72
as_arq 0 0 64 59
mqueue_inode_cache 1 6 640 6
isofs_inode_cache 0 0 384 10
minix_inode_cache 0 0 420 9
hugetlbfs_inode_cache 1 11 356 11
ext2_inode_cache 0 0 492 8
ext2_xattr 0 0 48 78
dnotify_cache 1 169 20 169
dquot 0 0 128 30
eventpoll_pwq 17 101 36 101
eventpoll_epi 17 30 128 30
inotify_event_cache 0 0 28 127
inotify_watch_cache 40 184 40 92
kioctx 0 0 256 15
kiocb 0 0 128 30
fasync_cache 1 203 16 203
shmem_inode_cache 618 624 460 8
posix_timers_cache 0 0 100 39
uid_cache 7 59 64 59
blkdev_ioc 88 254 28 127
blkdev_queue 58 60 960 4
blkdev_requests 204 264 176 22
biovec-(256) 312 312 3072 2
biovec-128 368 370 1536 5
biovec-64 480 485 768 5
biovec-16 480 495 256 15
biovec-4 480 531 64 59
biovec-1 609 5684 16 203
bio 592 720 128 30
sock_inode_cache 505 553 512 7
skbuff_fclone_cache 73 80 384 10
skbuff_head_cache 673 870 256 15
file_lock_cache 8 126 92 42
acpi_operand 634 828 40 92
acpi_parse_ext 0 0 44 84
acpi_parse 0 0 28 127
acpi_state 0 0 48 78
delayacct_cache 191 312 48 78
taskstats_cache 32 32 236 16
proc_inode_cache 59 140 372 10
sigqueue 97 135 144 27
radix_tree_node 87360 87360 276 14
bdev_cache 56 56 512 7
sysfs_dir_cache 4887 4968 40 92
mnt_cache 29 60 128 30
inode_cache 1071 1309 356 11
dentry_cache 34133 34133 132 29
filp 3048 4180 192 20
names_cache 26 26 4096 1
idr_layer_cache 203 232 136 29
buffer_head 60840 60840 52 72
mm_struct 120 171 448 9
vm_area_struct 5831 8888 88 44
fs_cache 120 236 64 59
files_cache 121 198 448 9
signal_cache 166 200 384 10
sighand_cache 157 177 1344 3
task_struct 182 230 1376 5
anon_vma 2781 3048 12 254
pgd 97 97 4096 1
size-131072(DMA) 0 0 131072 1
size-131072 0 0 131072 1
size-65536(DMA) 0 0 65536 1
size-65536 0 0 65536 1
size-32768(DMA) 0 0 32768 1
size-32768 3 3 32768 1
size-16384(DMA) 0 0 16384 1
size-16384 21 21 16384 1
size-8192(DMA) 0 0 8192 1
size-8192 175 175 8192 1
size-4096(DMA) 0 0 4096 1
size-4096 111 111 4096 1
size-2048(DMA) 0 0 2048 2
size-2048 682 708 2048 2
size-1024(DMA) 0 0 1024 4
size-1024 373 404 1024 4
size-512(DMA) 1 8 512 8
size-512 529 568 512 8
size-256(DMA) 0 0 256 15
size-256 30690 30690 256 15
size-128(DMA) 0 0 128 30
size-128 33360 33360 128 30
size-64(DMA) 0 0 64 59
size-32(DMA) 0 0 32 113
size-64 4166 7080 64 59
size-32 35030 35030 32 113
kmem_cache 150 150 256 15


======== section 2 ========

Mon Mar 5 15:15:28 CST 2007
# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 0 1 240 1873872 5460 42120 0 0 1 4 11 6 0 1 94 4 0
# vmstat -s
  2075084 total memory
   201212 used memory
   161028 active memory
     4184 inactive memory
  1873872 free memory
     5460 buffer memory
    42120 swap cache
  2056304 total swap
      240 used swap
  2056064 free swap
   612951 non-nice user cpu ticks
     7186 nice user cpu ticks
  2272690 system cpu ticks
 293990233 idle cpu ticks
  13395382 IO-wait cpu ticks
    50630 IRQ cpu ticks
  1667219 softirq cpu ticks
        0 steal cpu ticks
  88673230 pages paged in
 1816764408 pages paged out
        0 pages swapped in
       61 pages swapped out
 2268233528 interrupts
 2382233674 CPU context switches
 1171569943 boot time
   109411 forks
# vmstat -m
Cache Num Total Size Pages
rpc_buffers 8 8 2048 2
rpc_tasks 8 15 256 15
rpc_inode_cache 0 0 512 7
ocfs2_lock 152 203 16 203
ocfs2_inode_cache 452 452 896 4
ocfs2_uptodate 6496 6780 32 113
ocfs2_em_ent 523 1357 64 59
dlmfs_inode_cache 1 6 640 6
dlm_mle_cache 10 20 384 10
configfs_dir_cache 33 78 48 78
fib6_nodes 7 113 32 113
ip6_dst_cache 7 15 256 15
ndisc_cache 1 15 256 15
RAWv6 5 6 640 6
UDPv6 3 6 640 6
tw_sock_TCPv6 0 0 128 30
request_sock_TCPv6 0 0 128 30
TCPv6 10 12 1280 3
ip_fib_alias 16 113 32 113
ip_fib_hash 16 113 32 113
dm_events 16 169 20 169
dm_tio 3718 3857 16 203
dm_io 3663 3887 20 169
uhci_urb_priv 0 0 40 92
ext3_inode_cache 859 1904 512 8
ext3_xattr 0 0 48 78
journal_handle 18 169 20 169
journal_head 127 648 52 72
revoke_table 6 254 12 254
revoke_record 0 0 16 203
qla2xxx_srbs 211 270 128 30
scsi_cmd_cache 86 90 384 10
sgpool-256 32 32 4096 1
sgpool-128 32 32 2048 2
sgpool-64 32 32 1024 4
sgpool-32 40 40 512 8
sgpool-16 49 60 256 15
sgpool-8 159 210 128 30
scsi_io_context 0 0 104 37
UNIX 394 427 512 7
ip_mrt_cache 0 0 128 30
tcp_bind_bucket 12 203 16 203
inet_peer_cache 89 118 64 59
secpath_cache 0 0 128 30
xfrm_dst_cache 0 0 384 10
ip_dst_cache 169 285 256 15
arp_cache 6 15 256 15
RAW 3 7 512 7
UDP 32 56 512 7
tw_sock_TCP 0 0 128 30
request_sock_TCP 0 0 64 59
TCP 16 35 1152 7
flow_cache 0 0 128 30
cfq_ioc_pool 261 720 96 40
cfq_pool 243 600 96 40
crq_pool 241 312 48 78
deadline_drq 0 0 52 72
as_arq 0 0 64 59
mqueue_inode_cache 1 6 640 6
isofs_inode_cache 0 0 384 10
minix_inode_cache 0 0 420 9
hugetlbfs_inode_cache 1 11 356 11
ext2_inode_cache 0 0 492 8
ext2_xattr 0 0 48 78
dnotify_cache 1 169 20 169
dquot 0 0 128 30
eventpoll_pwq 1 101 36 101
eventpoll_epi 1 30 128 30
inotify_event_cache 0 127 28 127
inotify_watch_cache 40 184 40 92
kioctx 0 0 256 15
kiocb 0 0 128 30
fasync_cache 1 203 16 203
shmem_inode_cache 618 624 460 8
posix_timers_cache 0 0 100 39
uid_cache 7 59 64 59
blkdev_ioc 150 254 28 127
blkdev_queue 58 60 960 4
blkdev_requests 211 264 176 22
biovec-(256) 312 312 3072 2
biovec-128 368 370 1536 5
biovec-64 485 485 768 5
biovec-16 495 495 256 15
biovec-4 531 531 64 59
biovec-1 723 5684 16 203
bio 654 750 128 30
sock_inode_cache 473 553 512 7
skbuff_fclone_cache 78 80 384 10
skbuff_head_cache 673 870 256 15
file_lock_cache 12 126 92 42
acpi_operand 634 828 40 92
acpi_parse_ext 0 0 44 84
acpi_parse 0 0 28 127
acpi_state 0 0 48 78
delayacct_cache 236 312 48 78
taskstats_cache 12 32 236 16
proc_inode_cache 21 140 372 10
sigqueue 93 135 144 27
radix_tree_node 3225 9464 276 14
bdev_cache 56 56 512 7
sysfs_dir_cache 4887 4968 40 92
mnt_cache 29 60 128 30
inode_cache 1055 1309 356 11
dentry_cache 3351 9541 132 29
filp 3096 4180 192 20
names_cache 26 26 4096 1
idr_layer_cache 203 232 136 29
buffer_head 1577 11592 52 72
mm_struct 120 171 448 9
vm_area_struct 5748 8888 88 44
fs_cache 210 236 64 59
files_cache 121 198 448 9
signal_cache 150 200 384 10
sighand_cache 157 177 1344 3
task_struct 182 230 1376 5
anon_vma 2751 3048 12 254
pgd 96 96 4096 1
size-131072(DMA) 0 0 131072 1
size-131072 0 0 131072 1
size-65536(DMA) 0 0 65536 1
size-65536 0 0 65536 1
size-32768(DMA) 0 0 32768 1
size-32768 3 3 32768 1
size-16384(DMA) 0 0 16384 1
size-16384 21 21 16384 1
size-8192(DMA) 0 0 8192 1
size-8192 169 169 8192 1
size-4096(DMA) 0 0 4096 1
size-4096 109 109 4096 1
size-2048(DMA) 0 0 2048 2
size-2048 682 708 2048 2
size-1024(DMA) 0 0 1024 4
size-1024 373 404 1024 4
size-512(DMA) 1 8 512 8
size-512 491 560 512 8
size-256(DMA) 0 0 256 15
size-256 1560 2295 256 15
size-128(DMA) 0 0 128 30
size-128 3870 17580 128 30
size-64(DMA) 0 0 64 59
size-32(DMA) 0 0 32 113
size-64 4118 7080 64 59
size-32 5920 19097 32 113
kmem_cache 150 150 256 15

================
</PRE>
<P><FONT size=2>_______________________________________________<BR>Ocfs2-users
mailing list<BR>Ocfs2-users@oss.oracle.com<BR><A
href="http://oss.oracle.com/mailman/listinfo/ocfs2-users">http://oss.oracle.com/mailman/listinfo/ocfs2-users</A></FONT></P></DIV>
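[Editor's note] The before/after comparison described in the quoted message (the ocfs2 slab caches shrinking sharply once the caches are dropped, e.g. ocfs2_inode_cache going from 29868 to 452 objects) can be captured with a small script. The sketch below is untested and only reproduces the procedure the message already uses (`vmstat -m` plus `drop_caches`); it must run as root, and `/proc/sys/vm/drop_caches` requires a 2.6.16 or later kernel.

```shell
#!/bin/sh
# Sketch: snapshot the ocfs2 slab caches, drop the page/dentry/inode
# caches, then snapshot again to show the memory was reclaimable slab,
# not a true leak. Run as root.

ocfs2_slabs() {
    # vmstat -m columns: Cache Num Total Size Pages;
    # keep the cache name and current object count
    vmstat -m | awk '/^ocfs2/ { print $1, $2 }'
}

echo "--- before drop_caches ---"
ocfs2_slabs

sync                                # flush dirty data first
echo 3 > /proc/sys/vm/drop_caches   # free pagecache, dentries and inodes

echo "--- after drop_caches ---"
ocfs2_slabs
```

If the "after" object counts stay low until the next big directory scan, the growth is cache pressure rather than leaked memory, which is what the numbers in this thread show.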
<P>
<HR>
<P></P>_______________________________________________<BR>Ocfs2-users mailing
list<BR>Ocfs2-users@oss.oracle.com<BR>http://oss.oracle.com/mailman/listinfo/ocfs2-users</BLOCKQUOTE></BODY></HTML>