mtrob Messages: 3 Registered: April 2011 Location: Bradenton, Fl, US
Got it, worked great!
PS for others: that was a manual update of the kernel; at the moment the yum repo does not have it. Download it from the link above and then do a "yum localinstall xxxxx"; after that you can do a regular "yum update" and things should flow once again.
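For reference, the manual path described above looks roughly like this (a sketch; "xxxxx" stands in for the actual RPM filename from the download link, and both commands need root):

```
# Install the downloaded kernel RPM locally, resolving dependencies via yum.
# "xxxxx" is a placeholder for the real ovzkernel RPM filename.
yum localinstall xxxxx.rpm

# Once the new kernel is in place, regular updates work again.
yum update
```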
I can see that the latest stable RHEL5 i686 and x86_64 kernels (238.5.1.el5.028stab085.3) are available through the yum repository, so it seems odd to me when you say that they aren't. These kernels should be ok with vzctl-184.108.40.206. If not, a bug should be filed ASAP.
Did you have problems with some other kernel subvariant such as PAE or XEN?
If you don't wish to upgrade the stock kernel, you can use the yum configuration to exclude it from updates, or you can even uninstall it.
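For example, the exclusion can go in the `[main]` section of /etc/yum.conf (a sketch; the exact package names to exclude depend on what is installed on your node):

```
# /etc/yum.conf (excerpt)
[main]
# Keep yum from ever touching the stock kernel packages.
exclude=kernel kernel-devel kernel-headers
```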
Otherwise yum offers to upgrade all the existing packages on a server, including the kernel, as it should. Just installing and upgrading the stock kernel will not break your server in any way. You can have it on the server, just don't use it.
I find this solution the easiest and cleanest: the default-kernel setting will prevent yum updates from displacing your ovzkernel as the default one in grub.conf. That's all you need.
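On RHEL/CentOS 5 that default-kernel setting lives in /etc/sysconfig/kernel (a sketch; verify that the value of DEFAULTKERNEL matches the package name of your installed OpenVZ kernel):

```
# /etc/sysconfig/kernel (excerpt)
# UPDATEDEFAULT=yes makes a newly installed kernel the grub default,
# but only when its package name matches DEFAULTKERNEL below.
UPDATEDEFAULT=yes
DEFAULTKERNEL=ovzkernel
```

With this in place, installing a stock `kernel` package adds a grub.conf entry but leaves the ovzkernel entry as the default.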
I really think you are confused about something here... The stock kernel and ovzkernel do not conflict in any way. Accidentally booting into the stock kernel should not require remote hands to fix: you can simply log in to the hardware node remotely, change the grub.conf setting so that ovzkernel is the default, and reboot again. That's it.
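A grub.conf sketch of that fix (the kernel versions and paths are hypothetical; `default` is a zero-based index into the `title` entries, so `default=1` here selects the second entry):

```
# /boot/grub/grub.conf (excerpt)
# default is a zero-based index into the title entries below;
# setting default=1 boots the ovzkernel entry on the next reboot.
default=1
timeout=5

# index 0: stock kernel
title CentOS (2.6.18-238.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-238.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-238.el5.img

# index 1: OpenVZ kernel
title OpenVZ (2.6.18-238.5.1.el5.028stab085.3)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-238.5.1.el5.028stab085.3 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-238.5.1.el5.028stab085.3.img
```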
The latest ovz kernel is installed and there are 20 customers on that box, so I don't fancy experimenting. Thanks for the offer.
I don't know whether iptables configured for the host node (or another ovz kernel setting conflicting with the stock kernel) was responsible for the lockout, but it is a caution for admins to take care to manually exclude stock kernels from yum updates.
It might also be down to the fact that the server has been a host node for 2+ years, and the ovz version installed at the time did not set up the server in exactly the same way as the current ovz package does.
A lot of unknowns, but when a routine update takes down a host node with 20 customers, leaving them waiting 24 hours for their service to be restored, there is a serious issue. It has been said that ovz is not suitable for the retail environment.
I've been in the sysadmin/hosting business for more than a decade now and yes, there are many variables involved in administering a server that can cause unforeseen problems. Every server is a unique environment and the situation further worsens over time.
Not wanting to experiment on a live server is understandable; that's what test servers are for. I only suggested troubleshooting further because this kind of error seems likely to repeat itself unless properly fixed.
In my experience openvz is production ready. Point anyone who thinks differently to this forum and I'm sure we'll have an interesting discussion.