OpenVZ Forum


Home » General » Support » NOHZ: local_softirq_pending 100 - is there something to worry about? (Kernel 2.6.32-042stab044.17 64bit)
Re: NOHZ: local_softirq_pending 100 - is there something to worry about? [message #45135 is a reply to message #45134] Tue, 31 January 2012 21:30
insider
Messages: 11
Registered: January 2012
Junior Member
Paparaciz wrote on Tue, 31 January 2012 22:56
I just wanted to say that I have a small set of servers running CentOS 6 with RHEL6-based OpenVZ kernels, hosting CentOS 5 or CentOS 6 CTs inside them, and I have not had any kernel panics or other issues. It works like a charm.

I suggest that you use the latest stable kernel, and if you still have issues and can show why the server crashes, then submit a bug report.


p.s. I use 2.6.32-042stab044.xx kernel versions


Yes, on our production CentOS 6 nodes we use only the latest stable 2.6.32 versions. I have already submitted three bug reports in Bugzilla about our previous issues with this kernel, so I hope that helps get these bugs fixed and makes the 2.6.32 kernel more and more stable.

So, until the 2.6.32 kernel becomes more stable, is there any reason not to temporarily run the stable 2.6.18 kernel from CentOS 5 on the CentOS 6 nodes, and then switch back to a 2.6.32 kernel once it is stable enough?
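Switching between the two kernels on a CentOS 6 node mostly means pointing the "default=N" line in GRUB 0.97's /boot/grub/grub.conf at the right "title" entry. A minimal sketch of finding that entry index, assuming the legacy grub.conf layout CentOS 6 ships (the sample grub.conf below is hypothetical):

```shell
#!/bin/sh
# Sketch: find the grub.conf entry index for a given kernel version, so
# "default=N" can be pointed at it (GRUB 0.97 numbers title entries from 0).
# Usage: grub_index <version-substring> <path-to-grub.conf>
grub_index() {
    grep '^title' "$2" | awk -v pat="$1" \
        'index($0, pat) { print NR - 1; exit }'
}

# Hypothetical grub.conf with a 2.6.32 OpenVZ kernel and a 2.6.18 one:
cat > /tmp/grub.conf.sample <<'EOF'
title CentOS (2.6.32-042stab044.17)
title CentOS (2.6.18-274.el5.028stab098.1)
EOF

grub_index 2.6.18 /tmp/grub.conf.sample   # prints 1
```

On a real node you would then edit "default=" accordingly (and keep the 2.6.32 entry installed, so switching back later is just another one-line change and a reboot).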
The reason to use CentOS 6 (rather than CentOS 5) is its longer support lifetime. Currently that is our only reason to start using CentOS 6, because all our CentOS 5 nodes run stably. But one day CentOS 5 will reach end-of-life, and we will be forced to move all our CentOS 5 nodes to CentOS 6 anyway; the more CentOS 5 nodes we are running by then, the more migration there will be to do, with more downtime, more work and more problems.
An in-place upgrade from CentOS 5 to CentOS 6 is not officially supported. So to go 5 => 6 you have to install CentOS 6 from scratch and then move the containers over. With just a few nodes this is not a big problem, but with tens or hundreds of nodes it becomes one.
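Draining containers off an old node is usually scripted around vzmigrate. A rough sketch, assuming a freshly installed destination host (the hostname "newnode" is a placeholder); container IDs are passed as arguments, e.g. taken from `vzlist -H -o ctid` on the real node, and DRY_RUN=1 only prints the commands:

```shell
#!/bin/sh
# Sketch: migrate a list of containers to another node with vzmigrate.
# DRY_RUN=1 (the default) only echoes the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
DEST=${DEST:-newnode}   # placeholder destination hostname

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"            # execute for real when DRY_RUN=0
    fi
}

for CT in "$@"; do
    # --online performs a live migration, keeping per-container downtime small
    run vzmigrate --online "$DEST" "$CT"
done
```

Run once per batch of CTIDs (e.g. `DRY_RUN=0 ./drain.sh 101 102 103`); doing it container by container rather than all at once keeps the blast radius small if a migration fails.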