high shmpages after debian upgrade to squeeze [message #41714] Tue, 15 February 2011 19:56
Bevan
Messages: 1
Registered: February 2011
Junior Member
Hi!

I just tested an "apt-get dist-upgrade" of a Debian lenny container to squeeze, and afterwards the resource usage is extremely high.
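
For reference, the upgrade itself was the usual procedure, roughly (from memory):

vztest:~# sed -i 's/lenny/squeeze/g' /etc/apt/sources.list
vztest:~# apt-get update
vztest:~# apt-get dist-upgrade

Afterwards the beancounters look like this: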

vztest:~# cat /proc/user_beancounters 
Version: 2.5
       uid  resource                     held              maxheld              barrier                limit              failcnt
      101:  kmemsize                  9689201             15546245             50000000             55000000                    4
            lockedpages                     0                  109                  256                  256                   30
            privvmpages                151894               230561               300000               350000                   11
            shmpages                   151086               208080               256000               256000                    6
            dummy                           0                    0                    0                    0                    0
            numproc                         5                   43                  240                  240                    0
            physpages                  150932               211221                    0  9223372036854775807                    0
            vmguarpages                     0                    0                33792  9223372036854775807                    0
            oomguarpages               150932               211221                26112  9223372036854775807                    0
            numtcpsock                      2                   12                  360                  360                    0
            numflock                        0                    9                  188                  206                    0
            numpty                          1                    4                   16                   16                    0
            numsiginfo                      0                    9                  256                  256                    0
            tcpsndbuf                   25432               117120              1720320              2703360                    0
            tcprcvbuf                   32768               342464              1720320              2703360                    0
            othersockbuf                    0                49000              1126080              2097152                    0
            dgramrcvbuf                     0                 8456               262144               262144                    0
            numothersock                    0                   23                  360                  360                    0
            dcachesize                      0                    0              3409920              3624960                    0
            numfile                       113                  801                 9312                 9312                    0
            dummy                           0                    0                    0                    0                    0
            dummy                           0                    0                    0                    0                    0
            dummy                           0                    0                    0                    0                    0
            numiptent                      10                   10                  128                  128                    0


As you can see, shmpages is at 151086. Before the upgrade it was at 712.
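
For anyone who wants to watch just that counter, this is enough (the row matches the table above):

vztest:~# grep shmpages /proc/user_beancounters
            shmpages                   151086               208080               256000               256000                    6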

I tried killing all running processes and setting a lower limit for shared memory (kernel.shmmax and kernel.shmall). Neither changed anything.
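
The limits were set roughly like this (values reconstructed from the "ipcs -l" output below; kernel.shmmax is in bytes, kernel.shmall in 4 KB pages, so 10240 pages = 40960 kbytes):

vztest:~# sysctl -w kernel.shmmax=10240
vztest:~# sysctl -w kernel.shmall=10240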

Here is some output that may be useful:

vztest:~# ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0   8348   772 ?        Ss   00:53   0:00 init [2]      
root      9562  0.0  0.0  49164  1156 ?        Ss   00:53   0:00 /usr/sbin/sshd
root      9705  0.0  0.2  70452  3332 ?        Ss   00:56   0:00 sshd: root@pts/0 
root      9707  0.0  0.1  17704  1888 pts/0    Ss   00:56   0:00 -bash
root     10163  0.0  0.0  14804   996 pts/0    R+   01:24   0:00 ps aux


vztest:~# ipcs 

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      

------ Semaphore Arrays --------
key        semid      owner      perms      nsems     

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages    


Here you can see that shared memory is limited to 40 MB:
vztest:~# ipcs -l

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 10
max total shared memory (kbytes) = 40960
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 16
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384


Also, no space is used by tmpfs volumes:
vztest:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/simfs             2560000   1062068   1497932  42% /
tmpfs                   600000         0    600000   0% /lib/init/rw
tmpfs                   600000         0    600000   0% /dev/shm
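
For completeness, listing /dev/shm directly turns up nothing either:

vztest:~# ls -A /dev/shm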


To compare the resource usage I created a new container from the debian-6.0-amd64-minimal template; in that container shmpages stays at 16 (even when running more processes).
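
The comparison container was created the usual way, roughly (CTID 102 is just an example):

host:~# vzctl create 102 --ostemplate debian-6.0-amd64-minimal
host:~# vzctl start 102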

Do you have any idea what the problem could be? What could be using so much shared memory here?

Greetings,
Michael

Update:
I don't think it is related to Debian or the "apt-get dist-upgrade". In general, all resources stay marked as used after stopping processes, or even after stopping the whole container.
After stopping all containers and waiting several minutes, vzmemcheck and /proc/user_beancounters on the host system still show the resources as used.
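
Roughly what I ran on the host to check this (CTID 101, as in the table above):

host:~# vzctl stop 101
host:~# vzmemcheck -v
host:~# grep shmpages /proc/user_beancounters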

I'm using the "Openwall GNU/*/Linux (or Owl)" Live-CD with kernel 2.6.18-194.26.1.el5.028stab079.1.owl2.

[Updated on: Tue, 15 February 2011 21:44]
