Broken memory reporting since moving containers via vzmigrate [message #44771]
Posted by untal3nted on Sat, 31 December 2011 03:27
Hello,
Recently I was doing some node upgrades and container consolidation for a company. Previously, the nodes and containers were all running CentOS 5 32-bit. The new nodes were set up and configured with Scientific Linux 6 64-bit, while the containers kept their original CentOS 5 32-bit configuration. Four containers were moved onto these two nodes (two containers on each). After the move, I noticed the following output from free in 3 of the 4 containers:
total used free shared buffers cached
Mem: 4294967295 0 4294967295 0 0 672692
-/+ buffers/cache: 4294294604 672691
Swap: 0 0 0
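As far as I understand, free inside a container takes its numbers from the beancounters, so this is roughly how I have been poking at one of the broken containers (the CTID 101 below is just a placeholder for illustration, not the real ID):

# Look at the beancounters and meminfo as the container itself sees them
vzctl exec 101 cat /proc/user_beancounters
vzctl exec 101 cat /proc/meminfo
# Compare with the memory-related limits in the container config on the node
grep -iE 'privvmpages|physpages|vmguarpages|oomguarpages' /etc/vz/conf/101.conf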
Obviously the container output above is just not right, as the node this container runs on reports the following:
total used free shared buffers cached
Mem: 8040768 7816836 223932 0 65908 6514288
-/+ buffers/cache: 1236640 6804128
Swap: 8388600 0 8388600
Does anyone have any ideas? All these containers are configured nearly identically, and I have basically given up trying to figure out the issue. The containers reporting the incorrect usage all seem to think they have 4294967295 kB of memory available (that is 2^32 - 1, which looks like a 32-bit overflow). The funny thing is, there is one container which reports the memory assigned to it, and its usage, correctly:
total used free shared buffers cached
Mem: 3014656 1794340 1220316 0 0 1794340
-/+ buffers/cache: 0 3014656
Swap: 0 0 0
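In case it is relevant, this is roughly how I have been comparing the working container against a broken one (the CTIDs 101 and 102 are placeholders):

# Diff the node-side configs of the "good" and a "broken" container
diff /etc/vz/conf/101.conf /etc/vz/conf/102.conf
# And compare what each container actually sees at the top of /proc/meminfo
vzctl exec 101 head -n3 /proc/meminfo
vzctl exec 102 head -n3 /proc/meminfo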
Node#1 (both containers reporting memory wrong): 2.6.32-042stab037.1 #1 SMP Fri Sep 16 22:18:06 MSD 2011 x86_64 x86_64 x86_64 GNU/Linux
Node#2 (one container reporting wrong, one reporting correct): 2.6.32-042stab044.11 #1 SMP Wed Dec 14 16:02:00 MSK 2011 x86_64 x86_64 x86_64 GNU/Linux
I have gone over the configurations multiple times, and I even tried resetting oomguarpages and privvmpages, but it seems to make no difference in the "broken" containers. vzmigrate was used to move the containers, and memory reporting worked without issue before the move.
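For reference, the reset attempt looked roughly like this (the CTID and the barrier:limit values here are placeholders, not the exact numbers I used):

# Re-apply the memory-related beancounter limits and persist them to the config
vzctl set 101 --oomguarpages 786432:786432 --privvmpages 786432:786432 --save
# Restart so the container picks the values up cleanly
vzctl restart 101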
Thanks in advance for any help or insight.