OpenVZ Forum


Kernel cache dentry leak? (Kernel CentOS 6 2.6.32-042stab044.11 and 2.6.32-042stab044.17)
Re: Kernel cache dentry leak? [message #45014 is a reply to message #44943] Fri, 20 January 2012 19:10
insider
Messages: 11
Registered: January 2012
Junior Member
Two days have passed since the last manual cache clear with echo 2 > /proc/sys/vm/drop_caches; the "dentry" slab now holds 15174252 objects, uses 3372056K, and keeps increasing...
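To see whether the growth is steady, I am logging the dentry counters over time with a loop like this (a minimal sketch; nr_dentry and nr_unused are the first two fields of dentry-state, and the 60-second interval is arbitrary):

# log dentry-state and the slab counters from /proc/meminfo
while true; do
    date
    cat /proc/sys/fs/dentry-state
    grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
    sleep 60
done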

Output of the slabtop command:
 Active / Total Objects (% used)    : 15292469 / 15303649 (99.9%)
 Active / Total Slabs (% used)      : 851679 / 851681 (100.0%)
 Active / Total Caches (% used)     : 122 / 240 (50.8%)
 Active / Total Size (% used)       : 3232528.04K / 3234571.11K (99.9%)
 Minimum / Average / Maximum Object : 0.02K / 0.21K / 4096.00K

    OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
15174252 15174204  99%    0.21K 843014       18   3372056K dentry   <<======!!!!!
   24790    24759  99%    0.10K    670       37      2680K buffer_head
   20048    19762  98%    0.03K    179      112       716K size-32
   12528    12373  98%    0.08K    261       48      1044K sysfs_dir_cache
   10384     9963  95%    0.06K    176       59       704K size-64
    6816     6777  99%    0.62K   1136        6      4544K inode_cache
    4928     3144  63%    0.05K     64       77       256K anon_vma_chain
    4921     4204  85%    0.20K    259       19      1036K vm_area_struct
    4500     4425  98%    0.12K    150       30       600K size-128
    3899     3866  99%    0.55K    557        7      2228K radix_tree_node
    3411     3404  99%    1.05K   1137        3      4548K ext4_inode_cache
    3372     3354  99%    0.83K    843        4      3372K ext3_inode_cache
    3205     3184  99%    0.68K    641        5      2564K proc_inode_cache
    2862     2694  94%    0.07K     54       53       216K Acpi-Operand
    2695     1937  71%    0.05K     35       77       140K anon_vma
    1940     1116  57%    0.19K     97       20       388K cred_jar
    1740     1714  98%    1.00K    435        4      1740K size-1024
    1620     1583  97%    0.19K     81       20       324K size-192
    1440     1055  73%    0.25K     96       15       384K filp
    1380     1332  96%    0.04K     15       92        60K Acpi-Namespace
    1376     1286  93%    0.50K    172        8       688K size-512
     945      904  95%    0.84K    105        9       840K shmem_inode_cache
     612      571  93%    2.00K    306        2      1224K size-2048
     540      345  63%    0.25K     36       15       144K size-256
     510      240  47%    0.11K     15       34        60K task_delay_info
     468      287  61%    0.31K     39       12       156K skbuff_head_cache
     424       56  13%    0.06K      8       53        32K fs_cache
     420      222  52%    0.12K     14       30        56K pid
     308      298  96%    0.53K     44        7       176K idr_layer_cache
     288      231  80%    1.00K     72        4       288K signal_cache
     288       29  10%    0.08K      6       48        24K blkdev_ioc
     288      256  88%    0.02K      2      144         8K dm_target_io
     280      239  85%    0.19K     14       20        56K kmem_cache
     280       54  19%    0.13K     10       28        40K cfq_io_context
     276       62  22%    0.03K      3       92        12K size-32(UBC)
     276      256  92%    0.04K      3       92        12K dm_io
     270      233  86%    2.06K     90        3       720K sighand_cache
     260      238  91%    2.75K    130        2      1040K task_struct
     242      242 100%    4.00K    242        1       968K size-4096
     240      182  75%    0.75K     48        5       192K sock_inode_cache
     202        2   0%    0.02K      1      202         4K jbd2_revoke_table
     202        4   1%    0.02K      1      202         4K revoke_table
     187       54  28%    0.69K     17       11       136K files_cache
     168       56  33%    0.27K     12       14        48K cfq_queue
     162       55  33%    0.81K     18        9       144K task_xstate
     159       18  11%    0.06K      3       53        12K size-64(UBC)
     153       99  64%    0.81K     17        9       136K UNIX
     144       32  22%    0.02K      1      144         4K jbd2_journal_handle
     124       74  59%    1.00K     31        4       124K size-1024(UBC)
     120       18  15%    0.19K      6       20        24K bio-0
     120       45  37%    0.12K      4       30        16K inotify_inode_mark_entry



Is there a way to dump the contents of the dentry cache to a file, so that I can inspect and investigate what this cache contains?

I have tried copying from /dev/mem to a file with "dd", but that does not allow dumping the full kernel memory...
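A less raw approach might be the crash utility against /proc/kcore; with the kernel-debuginfo package matching the running kernel installed it should be able to walk the dentry slab (a sketch, assuming the usual RHEL/CentOS debuginfo path; crash can redirect a command's output to a file):

# requires kernel-debuginfo for the running kernel
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /proc/kcore

# inside crash: summarize the dentry cache, then dump every object to a file
crash> kmem -s dentry
crash> kmem -S dentry > /tmp/dentry-slabs.txt

Individual objects from that dump could then be examined with "struct dentry <address>" to see what names they hold.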

Maybe this problem is related to one of the filesystems?
We have these filesystems mounted:
/dev/md2 on / type ext4 (rw)
proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md0 on /boot type ext3 (rw)
/dev/mapper/vg0-vz on /vz type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
beancounter on /proc/vz/beancounter type cgroup (rw,name=beancounter)
container on /proc/vz/container type cgroup (rw,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,name=fairsched)
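
Since this is an OpenVZ node, the per-container beancounters might narrow it down: if I read the UBC documentation right, the dcachesize resource in /proc/user_beancounters is the dentry/inode cache charged to each container (a sketch; the awk just carries the container ID onto the dcachesize row):

# held/maxheld are in bytes; the first field of each block is the CT ID
awk '$1 ~ /^[0-9]+:$/ { uid = $1 }
     $1 == "dcachesize" || $2 == "dcachesize" { print uid, $0 }' /proc/user_beancounters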


I keep searching for a solution or a way to investigate this problem, but so far without success...
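If debuginfo is available, SystemTap could also show which processes allocate dentries fastest; something like this (a sketch, assuming d_alloc is probeable on this kernel; it samples for 30 seconds and prints the top 10):

stap -e 'global counts
probe kernel.function("d_alloc") { counts[execname()]++ }
probe timer.s(30) {
    foreach (name in counts- limit 10)
        printf("%-20s %d\n", name, counts[name])
    exit()
}'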

I upgraded the kernel to 2.6.32-042stab044.17 #1 SMP Fri Jan 13 12:53:58 MSK 2012 x86_64 x86_64 x86_64 GNU/Linux, but this did not solve the dentry leak problem.
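As a stopgap (assuming the dentries are at least reclaimable under memory pressure), I am considering raising vm.vfs_cache_pressure above its default of 100, so the kernel reclaims dentries and inodes more aggressively:

# higher values bias reclaim towards the dentry/inode caches
sysctl -w vm.vfs_cache_pressure=200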

Any thoughts?

Has nobody else hit this problem with RHEL 6 64-bit 2.6.32?

[Updated on: Fri, 20 January 2012 21:23]
