OpenVZ Forum


Active SSH Session to Container is Lost when Suspend/Resume Container
Re: Active SSH Session to Container is Lost when Suspend/Resume Container [message #53457 is a reply to message #53455] Mon, 08 October 2018 07:13
sindimo is currently offline  sindimo
Messages: 2
Registered: October 2018
Location: USA
Junior Member
Dear Vasily,

Thank you for opening the ticket and informing me about jira.

Dear Tom,

Thank you for the clarification and pointing me to the supported OpenVZ iso.

I finally got an AWS instance installed with the latest OpenVZ release iso. Unfortunately the problem still remains even after installing with the supported OpenVZ iso. For reference, below are the versions used after doing a "yum update" on the system:

[ec2-user@openvz-node ~]$ cat /etc/redhat-release
Virtuozzo Linux release 7.5

[ec2-user@openvz-node ~]$ uname -r
3.10.0-862.14.4.vz7.72.4

[ec2-user@openvz-node ~]$ rpm -qa | egrep "openvz-release|criu|prlctl|prl-disp-service|vzkernel|ploop|python-subprocess32|yum-plugin-priorities|libprlsdk"

criu-3.10.0.7-1.vz7.x86_64
libprlsdk-7.0.220-6.vz7.x86_64
libprlsdk-python-7.0.220-6.vz7.x86_64
openvz-release-7.0.9-2.vz7.x86_64
ploop-7.0.131-1.vz7.x86_64
ploop-lib-7.0.131-1.vz7.x86_64
prlctl-7.0.156-1.vz7.x86_64
prl-disp-service-7.0.863-1.vz7.x86_64
prl-disp-service-tests-7.0.863-1.vz7.x86_64
python-criu-3.10.0.7-1.vz7.x86_64
python-ploop-7.0.131-1.vz7.x86_64
python-subprocess32-3.2.7-1.vz7.5.x86_64
vzkernel-3.10.0-862.14.4.vz7.72.4.x86_64
yum-plugin-priorities-1.1.31-46.vl7.noarch


I investigated this further and was able to figure out what's triggering the issue, but I'm not sure how to fix it.

The container I am launching has an NFS4 mount inside it.

If I disable that NFS mount and then suspend/resume the container, everything works fine: the active SSH sessions to the container resume once the resume operation completes.

However, if I keep the NFS mount inside the container and suspend/resume it, any active SSH session to the container is broken after the resume completes (broken pipe error). Please note that once the container is resumed, I am able to establish a new SSH session to it, and the NFS mount inside it is active and accessible with no issues. So the NFS mount itself survives the resume intact; it's just that the presence of an NFS mount inside the container seems to break the restoration of active SSH sessions.
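For clarity, the reproduction steps above can be sketched as the following command sequence. The container name "ct-nfs" and the NFS export "nfs-server:/export" are placeholders for my setup, and this assumes NFS client support is already enabled for the container:

```shell
#!/bin/sh
# Hypothetical repro sketch; "ct-nfs" and "nfs-server:/export" are
# placeholders, not the actual names from my environment.
CT=ct-nfs

repro() {
  # 1. Mount NFSv4 inside the running container.
  prlctl exec "$CT" mount -t nfs4 nfs-server:/export /mnt

  # 2. From another terminal, open an interactive SSH session to the
  #    container, then suspend and resume it on the host:
  prlctl suspend "$CT"
  prlctl resume "$CT"

  # 3. With the NFS mount present, the pre-existing SSH session dies
  #    with a broken pipe; without it, the session survives the
  #    suspend/resume cycle.
}
```

Running `repro` on an affected host reproduces the broken-pipe behavior for me every time; skipping step 1 avoids it.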

I hope this gives more insight for investigating the problem further. If you have any suggestions for working around this, I would truly appreciate your feedback.

Many thanks for your help.

Sincerely,

Mohamad
 