[Ocfs2-users] ocfs2 is still eating memory
Alexei_Roudnev
Alexei_Roudnev at exigengroup.com
Mon Mar 5 18:10:04 PST 2007
Note that you are running a beta version of SLES10 (SLES10 SP1 beta), which means you may run into other problems besides OCFS2 ones. Can you try the same test on the most stable release, SLES9 SP3 with kernel 283 (not older), just to determine whether the problem is in OCFS2 or not?
PS> My own estimation of OCFSv2 (I tested mainly the SLES9 SP3 versions, not SLES10 or the mainline versions): it is already OK for backups, may be fine for archive logs, must be well tested before being trusted with database files, and should not be used for general file storage yet. But this is _my own_ observation; it may be wrong or outdated. OCFSv2 has improved dramatically over the last few months.
----- Original Message -----
From: Cline, Ernie
To: ocfs2-users
Sent: Monday, March 05, 2007 1:45 PM
Subject: RE: [Ocfs2-users] ocfs2 is still eating memory
If this is not an OCFS2 issue, then what is it? I have recently been told by Oracle support that the supported method for RAC on Oracle 10G is OCFS2. These systems are going into production very shortly ...
------------------------------------------------------------------------------
From: ocfs2-users-bounces at oss.oracle.com on behalf of John Lange
Sent: Mon 3/5/2007 4:25 PM
To: ocfs2-users
Subject: [Ocfs2-users] ocfs2 is still eating memory
With a large (12 TB) ocfs2 file system mounted, all I need to do is run a
"find ." and ocfs2 will slowly consume all RAM until the oom-killer is
invoked.
Previously this was thought to be related to the file system being
exported via NFS, but that has now been ruled out.
The following are some stats. The first section was taken after the
"find ." had been running for about 10 minutes; the system is far from
out of memory at that point, but I didn't have time this afternoon to
let it run until it died. The second section was taken right after
issuing
sync ; echo 3 > /proc/sys/vm/drop_caches
so you can see the memory being freed up.
This is running a recent SUSE SP1 BRANCH kernel
2.6.16.37-SLES10_SP1_BRANCH_20070213192756-smp
I have already been told on the bug tracker that this is not an ocfs2
issue. I'm not trying to be a pain; I just wanted to repeat my findings
so that the ocfs2 list is aware that, while it may not be an ocfs2 code
problem, it certainly does prevent you from using ocfs2 in production.
If someone has a better place where these findings could be reported,
please let me know.
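To get a sense of which caches account for the memory in the `vmstat -m` dumps below, the per-cache byte totals can be estimated by multiplying active object count by object size (columns 2 and 4). A rough sketch, assuming the output has been saved to a file named `slab.txt` (an assumed filename):

```shell
# Estimate bytes held per slab cache: active objects (col 2) x object
# size in bytes (col 4). slab.txt is an assumed saved copy of `vmstat -m`;
# the repeated "Cache Num Total Size Pages" header lines are skipped by
# the numeric test on $2.
awk 'NF == 5 && $2 ~ /^[0-9]+$/ { printf "%-24s %12d bytes\n", $1, $2 * $4 }' slab.txt |
    sort -k2 -rn | head
```

Against section 1 below, for example, ocfs2_inode_cache alone (29868 objects x 896 bytes) comes to roughly 27 MB.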
======= section 1 =========
Mon Mar 5 15:15:01 CST 2007
# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 1 240 1563768 241264 47076 0 0 1 4 11 6 0 1 94 4 0
# vmstat -s
2075084 total memory
511456 used memory
182952 active memory
223820 inactive memory
1563628 free memory
241264 buffer memory
47076 swap cache
2056304 total swap
240 used swap
2056064 free swap
612937 non-nice user cpu ticks
7186 nice user cpu ticks
2272575 system cpu ticks
293987671 idle cpu ticks
13392542 IO-wait cpu ticks
50629 IRQ cpu ticks
1667207 softirq cpu ticks
0 steal cpu ticks
88657366 pages paged in
1816763473 pages paged out
0 pages swapped in
61 pages swapped out
2268215696 interrupts
2382140383 CPU context switches
1171569943 boot time
109385 forks
# vmstat -m
Cache Num Total Size Pages
rpc_buffers 8 8 2048 2
rpc_tasks 8 15 256 15
rpc_inode_cache 0 0 512 7
ocfs2_lock 152 203 16 203
ocfs2_inode_cache 29868 29868 896 4
ocfs2_uptodate 6655 6780 32 113
ocfs2_em_ent 29854 29854 64 59
dlmfs_inode_cache 1 6 640 6
dlm_mle_cache 10 10 384 10
configfs_dir_cache 33 78 48 78
fib6_nodes 7 113 32 113
ip6_dst_cache 7 15 256 15
ndisc_cache 1 15 256 15
RAWv6 5 6 640 6
UDPv6 3 6 640 6
tw_sock_TCPv6 0 0 128 30
request_sock_TCPv6 0 0 128 30
TCPv6 10 12 1280 3
ip_fib_alias 16 113 32 113
ip_fib_hash 16 113 32 113
dm_events 16 169 20 169
Cache Num Total Size Pages
dm_tio 3676 3857 16 203
dm_io 3695 3887 20 169
uhci_urb_priv 0 0 40 92
ext3_inode_cache 950 1912 512 8
ext3_xattr 0 0 48 78
journal_handle 2 169 20 169
journal_head 462 648 52 72
revoke_table 6 254 12 254
revoke_record 0 0 16 203
qla2xxx_srbs 242 270 128 30
scsi_cmd_cache 59 90 384 10
sgpool-256 32 32 4096 1
sgpool-128 32 32 2048 2
sgpool-64 32 32 1024 4
sgpool-32 32 32 512 8
sgpool-16 32 45 256 15
sgpool-8 156 210 128 30
scsi_io_context 0 0 104 37
UNIX 410 427 512 7
ip_mrt_cache 0 0 128 30
tcp_bind_bucket 12 203 16 203
Cache Num Total Size Pages
inet_peer_cache 89 118 64 59
secpath_cache 0 0 128 30
xfrm_dst_cache 0 0 384 10
ip_dst_cache 178 285 256 15
arp_cache 5 15 256 15
RAW 3 7 512 7
UDP 32 56 512 7
tw_sock_TCP 0 0 128 30
request_sock_TCP 0 0 64 59
TCP 16 35 1152 7
flow_cache 0 0 128 30
cfq_ioc_pool 184 720 96 40
cfq_pool 166 600 96 40
crq_pool 181 312 48 78
deadline_drq 0 0 52 72
as_arq 0 0 64 59
mqueue_inode_cache 1 6 640 6
isofs_inode_cache 0 0 384 10
minix_inode_cache 0 0 420 9
hugetlbfs_inode_cache 1 11 356 11
ext2_inode_cache 0 0 492 8
Cache Num Total Size Pages
ext2_xattr 0 0 48 78
dnotify_cache 1 169 20 169
dquot 0 0 128 30
eventpoll_pwq 17 101 36 101
eventpoll_epi 17 30 128 30
inotify_event_cache 0 0 28 127
inotify_watch_cache 40 184 40 92
kioctx 0 0 256 15
kiocb 0 0 128 30
fasync_cache 1 203 16 203
shmem_inode_cache 618 624 460 8
posix_timers_cache 0 0 100 39
uid_cache 7 59 64 59
blkdev_ioc 88 254 28 127
blkdev_queue 58 60 960 4
blkdev_requests 204 264 176 22
biovec-(256) 312 312 3072 2
biovec-128 368 370 1536 5
biovec-64 480 485 768 5
biovec-16 480 495 256 15
biovec-4 480 531 64 59
Cache Num Total Size Pages
biovec-1 609 5684 16 203
bio 592 720 128 30
sock_inode_cache 505 553 512 7
skbuff_fclone_cache 73 80 384 10
skbuff_head_cache 673 870 256 15
file_lock_cache 8 126 92 42
acpi_operand 634 828 40 92
acpi_parse_ext 0 0 44 84
acpi_parse 0 0 28 127
acpi_state 0 0 48 78
delayacct_cache 191 312 48 78
taskstats_cache 32 32 236 16
proc_inode_cache 59 140 372 10
sigqueue 97 135 144 27
radix_tree_node 87360 87360 276 14
bdev_cache 56 56 512 7
sysfs_dir_cache 4887 4968 40 92
mnt_cache 29 60 128 30
inode_cache 1071 1309 356 11
dentry_cache 34133 34133 132 29
filp 3048 4180 192 20
Cache Num Total Size Pages
names_cache 26 26 4096 1
idr_layer_cache 203 232 136 29
buffer_head 60840 60840 52 72
mm_struct 120 171 448 9
vm_area_struct 5831 8888 88 44
fs_cache 120 236 64 59
files_cache 121 198 448 9
signal_cache 166 200 384 10
sighand_cache 157 177 1344 3
task_struct 182 230 1376 5
anon_vma 2781 3048 12 254
pgd 97 97 4096 1
size-131072(DMA) 0 0 131072 1
size-131072 0 0 131072 1
size-65536(DMA) 0 0 65536 1
size-65536 0 0 65536 1
size-32768(DMA) 0 0 32768 1
size-32768 3 3 32768 1
size-16384(DMA) 0 0 16384 1
size-16384 21 21 16384 1
size-8192(DMA) 0 0 8192 1
Cache Num Total Size Pages
size-8192 175 175 8192 1
size-4096(DMA) 0 0 4096 1
size-4096 111 111 4096 1
size-2048(DMA) 0 0 2048 2
size-2048 682 708 2048 2
size-1024(DMA) 0 0 1024 4
size-1024 373 404 1024 4
size-512(DMA) 1 8 512 8
size-512 529 568 512 8
size-256(DMA) 0 0 256 15
size-256 30690 30690 256 15
size-128(DMA) 0 0 128 30
size-128 33360 33360 128 30
size-64(DMA) 0 0 64 59
size-32(DMA) 0 0 32 113
size-64 4166 7080 64 59
size-32 35030 35030 32 113
kmem_cache 150 150 256 15
======== section 2 ========
Mon Mar 5 15:15:28 CST 2007
# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 1 240 1873872 5460 42120 0 0 1 4 11 6 0 1 94 4 0
# vmstat -s
2075084 total memory
201212 used memory
161028 active memory
4184 inactive memory
1873872 free memory
5460 buffer memory
42120 swap cache
2056304 total swap
240 used swap
2056064 free swap
612951 non-nice user cpu ticks
7186 nice user cpu ticks
2272690 system cpu ticks
293990233 idle cpu ticks
13395382 IO-wait cpu ticks
50630 IRQ cpu ticks
1667219 softirq cpu ticks
0 steal cpu ticks
88673230 pages paged in
1816764408 pages paged out
0 pages swapped in
61 pages swapped out
2268233528 interrupts
2382233674 CPU context switches
1171569943 boot time
109411 forks
# vmstat -m
Cache Num Total Size Pages
rpc_buffers 8 8 2048 2
rpc_tasks 8 15 256 15
rpc_inode_cache 0 0 512 7
ocfs2_lock 152 203 16 203
ocfs2_inode_cache 452 452 896 4
ocfs2_uptodate 6496 6780 32 113
ocfs2_em_ent 523 1357 64 59
dlmfs_inode_cache 1 6 640 6
dlm_mle_cache 10 20 384 10
configfs_dir_cache 33 78 48 78
fib6_nodes 7 113 32 113
ip6_dst_cache 7 15 256 15
ndisc_cache 1 15 256 15
RAWv6 5 6 640 6
UDPv6 3 6 640 6
tw_sock_TCPv6 0 0 128 30
request_sock_TCPv6 0 0 128 30
TCPv6 10 12 1280 3
ip_fib_alias 16 113 32 113
ip_fib_hash 16 113 32 113
dm_events 16 169 20 169
Cache Num Total Size Pages
dm_tio 3718 3857 16 203
dm_io 3663 3887 20 169
uhci_urb_priv 0 0 40 92
ext3_inode_cache 859 1904 512 8
ext3_xattr 0 0 48 78
journal_handle 18 169 20 169
journal_head 127 648 52 72
revoke_table 6 254 12 254
revoke_record 0 0 16 203
qla2xxx_srbs 211 270 128 30
scsi_cmd_cache 86 90 384 10
sgpool-256 32 32 4096 1
sgpool-128 32 32 2048 2
sgpool-64 32 32 1024 4
sgpool-32 40 40 512 8
sgpool-16 49 60 256 15
sgpool-8 159 210 128 30
scsi_io_context 0 0 104 37
UNIX 394 427 512 7
ip_mrt_cache 0 0 128 30
tcp_bind_bucket 12 203 16 203
Cache Num Total Size Pages
inet_peer_cache 89 118 64 59
secpath_cache 0 0 128 30
xfrm_dst_cache 0 0 384 10
ip_dst_cache 169 285 256 15
arp_cache 6 15 256 15
RAW 3 7 512 7
UDP 32 56 512 7
tw_sock_TCP 0 0 128 30
request_sock_TCP 0 0 64 59
TCP 16 35 1152 7
flow_cache 0 0 128 30
cfq_ioc_pool 261 720 96 40
cfq_pool 243 600 96 40
crq_pool 241 312 48 78
deadline_drq 0 0 52 72
as_arq 0 0 64 59
mqueue_inode_cache 1 6 640 6
isofs_inode_cache 0 0 384 10
minix_inode_cache 0 0 420 9
hugetlbfs_inode_cache 1 11 356 11
ext2_inode_cache 0 0 492 8
Cache Num Total Size Pages
ext2_xattr 0 0 48 78
dnotify_cache 1 169 20 169
dquot 0 0 128 30
eventpoll_pwq 1 101 36 101
eventpoll_epi 1 30 128 30
inotify_event_cache 0 127 28 127
inotify_watch_cache 40 184 40 92
kioctx 0 0 256 15
kiocb 0 0 128 30
fasync_cache 1 203 16 203
shmem_inode_cache 618 624 460 8
posix_timers_cache 0 0 100 39
uid_cache 7 59 64 59
blkdev_ioc 150 254 28 127
blkdev_queue 58 60 960 4
blkdev_requests 211 264 176 22
biovec-(256) 312 312 3072 2
biovec-128 368 370 1536 5
biovec-64 485 485 768 5
biovec-16 495 495 256 15
biovec-4 531 531 64 59
Cache Num Total Size Pages
biovec-1 723 5684 16 203
bio 654 750 128 30
sock_inode_cache 473 553 512 7
skbuff_fclone_cache 78 80 384 10
skbuff_head_cache 673 870 256 15
file_lock_cache 12 126 92 42
acpi_operand 634 828 40 92
acpi_parse_ext 0 0 44 84
acpi_parse 0 0 28 127
acpi_state 0 0 48 78
delayacct_cache 236 312 48 78
taskstats_cache 12 32 236 16
proc_inode_cache 21 140 372 10
sigqueue 93 135 144 27
radix_tree_node 3225 9464 276 14
bdev_cache 56 56 512 7
sysfs_dir_cache 4887 4968 40 92
mnt_cache 29 60 128 30
inode_cache 1055 1309 356 11
dentry_cache 3351 9541 132 29
filp 3096 4180 192 20
Cache Num Total Size Pages
names_cache 26 26 4096 1
idr_layer_cache 203 232 136 29
buffer_head 1577 11592 52 72
mm_struct 120 171 448 9
vm_area_struct 5748 8888 88 44
fs_cache 210 236 64 59
files_cache 121 198 448 9
signal_cache 150 200 384 10
sighand_cache 157 177 1344 3
task_struct 182 230 1376 5
anon_vma 2751 3048 12 254
pgd 96 96 4096 1
size-131072(DMA) 0 0 131072 1
size-131072 0 0 131072 1
size-65536(DMA) 0 0 65536 1
size-65536 0 0 65536 1
size-32768(DMA) 0 0 32768 1
size-32768 3 3 32768 1
size-16384(DMA) 0 0 16384 1
size-16384 21 21 16384 1
size-8192(DMA) 0 0 8192 1
Cache Num Total Size Pages
size-8192 169 169 8192 1
size-4096(DMA) 0 0 4096 1
size-4096 109 109 4096 1
size-2048(DMA) 0 0 2048 2
size-2048 682 708 2048 2
size-1024(DMA) 0 0 1024 4
size-1024 373 404 1024 4
size-512(DMA) 1 8 512 8
size-512 491 560 512 8
size-256(DMA) 0 0 256 15
size-256 1560 2295 256 15
size-128(DMA) 0 0 128 30
size-128 3870 17580 128 30
size-64(DMA) 0 0 64 59
size-32(DMA) 0 0 32 113
size-64 4118 7080 64 59
size-32 5920 19097 32 113
kmem_cache 150 150 256 15
================
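Comparing the two `vmstat -m` sections above makes the reclaim visible (ocfs2_inode_cache drops from 29868 to 452 active objects, dentry_cache from 34133 to 3351). With each snapshot saved to a file, a minimal sketch of that diff (`before.txt` and `after.txt` are assumed filenames):

```shell
# Show per-cache object-count changes between two saved `vmstat -m`
# snapshots (before.txt / after.txt are assumed filenames). join needs
# sorted input; after the join, $2 is the old active-object count and
# $6 the new one. Non-numeric header lines fail the $2 test and are dropped.
sort before.txt > before.sorted
sort after.txt > after.sorted
join -j 1 before.sorted after.sorted |
    awk '$2 ~ /^[0-9]+$/ && $2 != $6 {
        printf "%-24s %8d -> %8d (%+d)\n", $1, $2, $6, $6 - $2
    }'
```

Caches whose counts did not change (kmem_cache, the sgpools, etc.) are filtered out, leaving only the ones drop_caches actually shrank.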
_______________________________________________
Ocfs2-users mailing list
Ocfs2-users at oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users