OpenVZ Forum


Re: occasional high loadavg without any noticeable cpu/memory/io load [message #47136 is a reply to message #47135] Tue, 10 July 2012 18:36
Rene Dokbua
Thanks, that'd be very cool. Access to the hardware node is restricted by IP,
so if you send me (privately if you prefer) the IP address you will be
connecting from, I'll add it to the allowed hosts and reply with the login
credentials.

Rene

On Tue, Jul 10, 2012 at 11:34 PM, Kirill Korotaev <dev@parallels.com> wrote:

> I can take a look if you give me access to the node.
> If you agree, send it privately, without users@ on CC.
>
> Kirill
>
>
> On Jul 10, 2012, at 18:40, Rene C. wrote:
>
> No takers for this one?
>
> If I missed providing any important information, please let me know. The
> issue happens regularly on several hardware nodes, so if I missed anything I
> can check it the next time it happens.
>
> On Wed, Jul 4, 2012 at 4:16 PM, Rene C. <openvz@dokbua.com> wrote:
>
>> Today I again had a VE that went up to a relatively high load for no
>> apparent reason.
>>
>> Below are the details for the hardware node, followed by the high-load
>> container.
>>
>> I realize it's not the latest kernel, but a reboot takes half an hour
>> (from the first VE going down to the last VE coming back up, assuming
>> everything goes well and no fsck is forced), so we only reboot into a new
>> kernel when there is a really serious reason for it or the server crashes.
>> In any case, I don't see anything in the kernel updates since our current
>> kernel that would address this issue.
>>
>> Why does the load in this container suddenly go up like that? Websites
>> hosted by the container become very sluggish, so it is a real problem.
>>
>> It isn't just a problem with this container - or even with this hardware
>> node, for that matter; I occasionally see it with containers on other
>> hardware nodes as well. One idea I brought up before was that perhaps it's
>> the file system journal, as suggested in http://wiki.openvz.org/Ploop/Why -
>> but wouldn't that affect all containers on that file system, not just a
>> single container?
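>>
>> One check I plan to try the next time the load spikes (just a rough idea):
>> since loadavg counts tasks in uninterruptible sleep (D state) as well as
>> runnable ones, listing any D-state tasks inside the container at that
>> moment should show whether it is blocked on I/O or the journal rather than
>> actually busy. Something along these lines, run on the hardware node:
>>
>> # vzctl exec 1506 ps -eo pid,stat,wchan,comm | awk '$2 ~ /^D/'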
>>
>> --- HARDWARE NODE ---
>>
>> # uname -a
>> Linux server15.hardwarenode.com 2.6.32-042stab049.6 #1 SMP Mon Feb 6 19:17:43 MSK 2012 x86_64 x86_64 x86_64 GNU/Linux
>>
>> # rpm -q sl-release
>> sl-release-6.1-2.x86_64
>>
>> # top -cbn1 | head -17
>> top - 21:00:02 up 123 days, 15:31, 1 user, load average: 0.97, 2.70, 2.37
>> Tasks: 886 total, 6 running, 880 sleeping, 0 stopped, 0 zombie
>> Cpu(s): 8.4%us, 1.7%sy, 0.0%ni, 86.3%id, 3.5%wa, 0.0%hi, 0.1%si, 0.0%st
>> Mem: 16420716k total, 15566264k used, 854452k free, 1477372k buffers
>> Swap: 16777184k total, 623672k used, 16153512k free, 4578176k cached
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 94153 27 20 0 164m 41m 3392 S 150.9 0.3 50575:37 /usr/libexec/mys
>> 9178 27 20 0 159m 29m 3000 S 72.6 0.2 1284:50 /usr/libexec/mysq
>> 567031 apache 20 0 40296 15m 3588 S 17.2 0.1 0:00.09 /usr/sbin/httpd
>> 567382 root 20 0 15672 1820 864 R 5.7 0.0 0:00.04 top -cbn1
>> 38 root 20 0 0 0 0 S 1.9 0.0 2:55.25 [events/3]
>> 41 root 20 0 0 0 0 S 1.9 0.0 0:29.00 [events/6]
>> 566362 apache 20 0 43240 19m 4448 R 1.9 0.1 0:01.04 /usr/sbin/httpd
>> 566857 apache 20 0 55248 11m 3456 R 1.9 0.1 0:00.05 /usr/sbin/httpd
>> 566918 apache 20 0 42596 17m 3704 S 1.9 0.1 0:00.15 /usr/sbin/httpd
>> 567033 apache 20 0 39784 14m 3468 S 1.9 0.1 0:00.01 /usr/sbin/httpd
>>
>> # vzlist -o ctid,laverage
>> CTID LAVERAGE
>> 1501 0.00/0.05/0.02
>> 1502 0.00/0.00/0.00
>> 1503 0.08/0.03/0.01
>> 1504 0.00/0.00/0.00
>> 1505 8.29/6.04/3.67
>> 1506 27.11/16.97/7.89
>> 1507 0.00/0.00/0.00
>> 1508 0.19/0.06/0.01
>> 1509 0.07/0.03/0.00
>> 1510 0.02/0.02/0.00
>> 1512 0.00/0.00/0.00
>> 1514 0.00/0.00/0.00
>>
>> # iostat -xN
>> Linux 2.6.32-042stab049.6 (server15.hardwarenode.com) 07/03/12 _x86_64_ (8 CPU)
>>
>> avg-cpu: %user %nice %system %iowait %steal %idle
>> 8.41 0.04 1.75 3.51 0.00 86.28
>>
>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
>> sdd 0.76 56.58 0.59 0.59 20.27 457.28 402.66 0.25 211.66 4.03 0.48
>> sdc 1.72 27.94 17.20 16.16 887.30 336.18 36.68 0.02 12.71 5.23 17.45
>> sdb 1.65 27.79 19.48 12.95 975.43 318.64 39.91 0.09 15.22 3.77 12.23
>> sda 0.01 0.16 0.10 0.24 1.95 2.79 13.79 0.00 7.06 4.16 0.14
>> vg01-swap 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 3.68 2.22 0.00
>> vg01-root 0.00 0.00 0.11 0.35 1.94 2.78 10.30 0.02 38.30 3.12 0.14
>> vg04-swap 0.00 0.00 1.30 0.22 10.41 1.80 8.00 0.01 9.28 1.44 0.22
>> vg04-vz 0.00 0.00 0.05 56.94 9.86 455.49 8.17 0.01 0.18 0.05 0.27
>> vg03-swap 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 6.72 1.10 0.00
>> vg03-vz 0.00 0.00 18.98 42.41 887.30 336.18 19.93 0.39 6.33 2.84 17.45
>> vg02-swap 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 7.03 0.89 0.00
>> vg02-vz 0.00 0.00 21.19 39.91 975.43 318.64 21.18 0.15 8.99 2.00 12.23
>> vg01-vz 0.00 0.00 0.00 0.00 0.00 0.00 7.98 0.00 17.73 17.73 0.00
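>>
>> (I realize iostat without an interval only reports averages since boot, so
>> a short spike probably wouldn't show up above anyway; next time I'll also
>> grab a few interval samples while the load is high, for example:)
>>
>> # iostat -xN 5 3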
>>
>> --- CONTAINER ---
>>
>> # top -cbn1 | head -100
>> top - 21:00:04 up 123 days, 15:25, 0 users, load average: 27.11, 16.97, 7.89
>> Tasks: 86 total, 2 running, 84 sleeping, 0 stopped, 0 zombie
>> Cpu(s): 1.4%us, 0.2%sy, 0.0%ni, 98.1%id, 0.1%wa, 0.0%hi, 0.0%si, 0.2%st
>> Mem: 655360k total, 316328k used, 339032k free, 0k buffers
>> Swap: 1310720k total, 68380k used, 1242340k free, 58268k cached
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>> 916 mysql 20 0 159m 29m 3000 S 79.3 4.6 1284:51 /usr/libexec/mysqld
>> 1 root 20 0 2156 92 64 S 0.0 0.0 0:36.50 init [3]
>> 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kthreadd/1506]
>> 3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [khelper/1506]
>> 97 root 16 -4 2244 8 4 S 0.0 0.0 0:00.00 /sbin/udevd -d
>> 634 root 20 0 1812 212 136 S 0.0 0.0 2:39.88 syslogd -m 0
>> 667 root 20 0 7180 268 168 S 0.0 0.0 1:01.55 /usr/sbin/sshd
>> 676 root 20 0 2832 392 304 S 0.0 0.1 0:15.13 xinetd -stayalive -
>> 690 root 20 0 6040 124 72 S 0.0 0.0 0:02.45 /usr/lib/courier-im
>> 693 root 20 0 4872 252 200 S 0.0 0.0 0:01.94 /usr/sbin/courierlo
>> 701 root 20 0 6040 124 72 S 0.0 0.0 0:06.34 /usr/lib/courier-im
>> 703 root 20 0 4872 256 200 S 0.0 0.0 0:03.09 /usr/sbin/courierlo
>> 709 root 20 0 6040 128 72 S 0.0 0.0 0:18.15 /usr/lib/courier-im
>> 711 root 20 0 4872 256 200 S 0.0 0.0 0:09.15 /usr/sbin/courierlo
>> 718 root 20 0 6040 124 72 S 0.0 0.0 0:05.68 /usr/lib/courier-im
>> 720 root 20 0 4872 252 200 S 0.0 0.0 0:02.54 /usr/sbin/courierlo
>> 730 qmails 20 0 1796 224 144 S 0.0 0.0 1:27.21 qmail-send
>> 732 qmaill 20 0 1752 244 192 S 0.0 0.0 0:22.64 splogger qmail
>> 733 root 20 0 1780 140 64 S 0.0 0.0 0:07.85 qmail-lspawn | /usr
>> 734 qmailr 20 0 1776 148 76 S 0.0 0.0 0:14.07 qmail-rspawn
>> 735 qmailq 20 0 1748 104 68 S 0.0 0.0 0:14.01 qmail-clean
>> 781 root 20 0 51880 4364 196 S 0.0 0.7 1:35.02 /usr/sbin/httpd
>> 828 named 20 0 44104 5708 1112 S 0.0 0.9 10:10.53 /usr/sbin/named -u
>> 866 root 20 0 3708 8 4 S 0.0 0.0 0:00.00 /bin/sh /usr/bin/my
>> 981 root 20 0 33912 3756 916 S 0.0 0.6 10:55.30 /usr/bin/spamd --us
>> 1107 xfs 20 0 3392 72 40 S 0.0 0.0 0:00.09 xfs -droppriv -daem
>> 1115 root 20 0 5672 8 4 S 0.0 0.0 0:00.00 /usr/sbin/saslauthd
>> 1116 root 20 0 5672 8 4 S 0.0 0.0 0:00.00 /usr/sbin/saslauthd
>> 1122 root 20 0 22992 1868 1084 S 0.0 0.3 2:09.79 /usr/bin/sw-engine
>> 1123 root 20 0 27328 1508 1160 S 0.0 0.2 6:06.30 /usr/local/psa/admi
>> 7251 root 20 0 4488 192 136 S 0.0 0.0 0:22.85 crond
>> 9463 apache 20 0 59184 14m 4356 S 0.0 2.3 0:05.10 /usr/sbin/httpd
>>
...

 