OpenVZ Forum


OpenVZ7 - VE Migration fails [message #52749] Fri, 24 February 2017 10:33
A(r|d)min
Hi,

I have a fresh installation of Virtuozzo Linux 7.2 with OpenVZ 7 on two physical servers. I created a VE and tried to migrate it to the other host, but the migration fails. It would be great if someone could give me a hint. Here are the details:

# cat /etc/redhat-release 
Virtuozzo Linux release 7.2


# uname -a
Linux srv01 3.10.0-327.36.1.vz7.18.7 #1 SMP Tue Oct 11 15:39:22 MSK 2016 x86_64 x86_64 x86_64 GNU/Linux


# rpm -qa | grep -i vz7
qemu-kvm-vz-2.3.0-31.2.21.vz7.69.x86_64
libvzevent-7.0.7-1.vz7.x86_64
ploop-lib-7.0.74-1.vz7.x86_64
libprlcommon-7.0.78-1.vz7.x86_64
libvirt-daemon-driver-nwfilter-1.3.3.2-1.vz7.11.x86_64
libvirt-daemon-driver-vz-1.3.3.2-1.vz7.11.x86_64
libvirt-daemon-driver-lxc-1.3.3.2-1.vz7.11.x86_64
libvzctl-7.0.293-1.vz7.x86_64
vztt-7.0.43-1.vz7.x86_64
python-subprocess32-3.2.6-5.vz7.x86_64
libvirt-daemon-driver-qemu-1.3.3.2-1.vz7.11.x86_64
vzkernel-3.10.0-327.36.1.vz7.18.7.x86_64
seavgabios-bin-1.7.5-11.vz7.3.noarch
coripper-1.0.3-2.vz7.x86_64
vzpkgenv410x64-7.0.9-11.vz7.x86_64
libvcmmd-7.0.12-1.vz7.x86_64
openvz-logos-70.0.11-1.vz7.noarch
seabios-bin-1.7.5-11.vz7.3.noarch
libvirt-daemon-kvm-1.3.3.2-1.vz7.11.x86_64
libguestfs-1.32.3-1.vz7.11.x86_64
prlctl-7.0.95-1.vz7.x86_64
centos-7-x86_64-ez-7.0.0-16.vz7.noarch
libreport-plugin-vz-bugs-1.0.1-1.vz7.noarch
qt-4.8.5-12.vz7.2.x86_64
libprlxmlmodel-7.0.50-1.vz7.x86_64
libprlsdk-python-7.0.142-3.vz7.x86_64
libvirt-daemon-1.3.3.2-1.vz7.11.x86_64
libvirt-daemon-driver-nodedev-1.3.3.2-1.vz7.11.x86_64
libvirt-daemon-config-nwfilter-1.3.3.2-1.vz7.11.x86_64
libvirt-python-1.3.5-1.vz7.1.x86_64
libvirt-daemon-driver-network-1.3.3.2-1.vz7.11.x86_64
criu-2.5.0.16-1.vz7.x86_64
vztt-lib-7.0.43-1.vz7.x86_64
vzctl-7.0.120-1.vz7.x86_64
qemu-img-vz-2.3.0-31.2.21.vz7.69.x86_64
libvirt-daemon-driver-storage-1.3.3.2-1.vz7.11.x86_64
libvirt-daemon-driver-interface-1.3.3.2-1.vz7.11.x86_64
ipxe-roms-qemu-20130517-7.gitc4bce43.vz7.3.noarch
prl-disp-service-7.0.533.1-1.vz7.x86_64
spfs-0.08.018-1.vz7.x86_64
ploop-7.0.74-1.vz7.x86_64
libprlsdk-7.0.142-3.vz7.x86_64
libvirt-client-1.3.3.2-1.vz7.11.x86_64
libvirt-daemon-driver-secret-1.3.3.2-1.vz7.11.x86_64
vcmmd-7.0.91-1.vz7.x86_64
qemu-kvm-common-vz-2.3.0-31.2.21.vz7.69.x86_64
libvirt-1.3.3.2-1.vz7.11.x86_64


# prlctl list -a
UUID                                    STATUS       IP_ADDR         T  NAME
{fea3c1cf-40d4-4b10-89c4-f5c3ac9bfa61}  running      172.1.1.10    CT VE01


# prlctl migrate VE01 srv02 -v 10
Logging in
server uuid={41ca7251-106c-44ec-9aa4-4b28dc490d58}
sessionid={210ea24c-280e-44b2-bb49-3357d4974269}
The virtual machine found: VE01
Migrate the CT VE01 on srv02  ()
security_level=0
PrlCleanup::register_hook: 8179c0
EVENT type=100030
Migration started.
EVENT type=100031
Migration cancelled!

Failed to migrate the CT: Failed to migrate the Container. An internal error occurred when performing the operation. Try to migrate the Container again. If the problem persists, contact the Parallels support team for assistance.
resultCount: 0
PrlCleanup::unregister_hook: 8179c0
Logging off


This is the output of prl-disp.log on srv01 (migration source):
02-24 11:27:25.806 W /disp:1751:20480/ handleClientConnected
02-24 11:27:25.807 F /disp:1751:20480/ Processing command 'DspCmdUserLoginLocal' 2040 (PJOC_SRV_LOGIN_LOCAL)
02-24 11:27:25.811 F /disp:1751:20480/ Processing command 'DspCmdUserLoginLocalStage2' 2041 (PJOC_SRV_LOGIN_LOCAL)
02-24 11:27:25.812 F /disp:1751:20480/ Parallels user [root@.] successfully logged on( LOCAL ). [sessionId = {0e00a342-236b-476c-aa8d-a128f6a49350} ]
02-24 11:27:25.812 F /disp:1751:20480/ Session with uuid[ {0e00a342-236b-476c-aa8d-a128f6a49350} ] was started.
02-24 11:27:25.817 F /disp:1751:20480/ Processing command 'DspCmdSetNonInteractiveSession' 2125 (PJOC_SRV_SET_NON_INTERACTIVE_SESSION)
02-24 11:27:25.818 F /disp:1751:20480/ Session was non-interactive, session now is non-interactive
02-24 11:27:25.825 F /disp:1751:20480/ Processing command 'DspCmdGetVmConfigById' 2173 (PJOC_SRV_GET_VM_CONFIG)
02-24 11:27:26.131 F /disp:1751:20480/ Processing command 'DspCmdGetHostCommonInfo' 2048 (PJOC_SRV_GET_COMMON_PREFS)
02-24 11:27:26.147 F /disp:1751:20480/ Processing command 'DspCmdUserGetProfile' 2045 (PJOC_SRV_GET_USER_PROFILE)
02-24 11:27:26.153 F /disp:1751:20480/ Processing command 'DspCmdDirVmMigrate' 2028 (PJOC_VM_MIGRATE)
02-24 11:27:26.154 F /disp:1751:20480/ Processing command 'DspCmdDirVmMigrate' 2028 for CT uuid='{fea3c1cf-40d4-4b10-89c4-f5c3ac9bfa61}' 
02-24 11:27:26.155 F /disp:1751:20486/ Task '20Task_MigrateCtSource' with uuid = {f90a25d5-ffba-4c54-95aa-04395ee170e8} was started. Flags = 0
02-24 11:27:26.163 F /disp:1751:20486/ connect to the target dispatcher: host 'srv02' port 64000
02-24 11:27:26.316 I /disp:20489:20489/ Run migration command: '/usr/sbin/vzmsrc -ps 64 66 68 70 --online --nonsharedfs srv02 140093'
02-24 11:27:26.337 F /disp:1751:20486/ Sending SIGTERM to 20489...
02-24 11:27:26.337 F /disp:1751:20486/ Sending SIGKILL to 20489...
02-24 11:27:26.339 F /disp:1751:20486/ Task '20Task_MigrateCtSource' with uuid = {f90a25d5-ffba-4c54-95aa-04395ee170e8} was finished with result PRL_ERR_CT_MIGRATE_INTERNAL_ERROR (0x80031035) ) 
02-24 11:27:26.340 I /IOCommunication:1751:20487/ IO client ctx [read thr] (sender 2): Stop in progress for read thread
02-24 11:27:26.345 F /disp:1751:20480/ Processing command 'DspCmdUserLogoff' 2042 (PJOC_SRV_LOGOFF)
02-24 11:27:26.345 F /disp:1751:20480/ Parallels user [root@.] successfully logged off. [sessionId = {0e00a342-236b-476c-aa8d-a128f6a49350} ]
02-24 11:27:26.346 I /IOCommunication:1751:20480/ IO server ctx [read thr] (handle 46, sender 2): Stop in progress for read thread
02-24 11:27:26.346 W /disp:1751:20480/ handleClientDisconnected


This is the output of prl-disp.log on srv02 (migration destination):
02-24 11:27:25.626 W /disp:1910:19181/ handleClientConnected
02-24 11:27:25.627 F /disp:1910:19181/ Processing command 'DspCmdUserLoginLocal' 2040 (PJOC_SRV_LOGIN_LOCAL)
02-24 11:27:25.631 F /disp:1910:19181/ Processing command 'DspCmdUserLoginLocalStage2' 2041 (PJOC_SRV_LOGIN_LOCAL)
02-24 11:27:25.632 F /disp:1910:19181/ Parallels user [root@.] successfully logged on( LOCAL ). [sessionId = {19f8afe6-efe5-495e-a41c-b2734608bd86} ]
02-24 11:27:25.633 F /disp:1910:19181/ Session with uuid[ {19f8afe6-efe5-495e-a41c-b2734608bd86} ] was started.
02-24 11:27:25.646 F /disp:1910:19181/ Processing command 'DspCmdGetHostCommonInfo' 2048 (PJOC_SRV_GET_COMMON_PREFS)
02-24 11:27:25.748 F /disp:1910:19186/ Processing command 6001
02-24 11:27:25.749 F /disp:1910:19186/ Dispatcher session was successfully authorized with '{19f8afe6-efe5-495e-a41c-b2734608bd86}' session UUID
02-24 11:27:25.769 F /disp:1910:19186/ Processing command 6501
02-24 11:27:25.782 F /disp:1910:19188/ Task '20Task_MigrateCtTarget' with uuid = {fa161df0-96f7-4710-9451-8258f6f4021b} was started. Flags = 0
02-24 11:27:25.804 F /disp:1910:19186/ Processing command 6502
02-24 11:27:25.824 I /disp:19189:19189/ Run migration command: '/usr/sbin/vzmdest -ps 62 64 66 68 --online --nonsharedfs localhost 140093'
02-24 11:27:25.843 F /disp:1910:19188/ waitpid() : No child processes
02-24 11:27:25.844 F /disp:1910:19188/ Task '20Task_MigrateCtTarget' with uuid = {fa161df0-96f7-4710-9451-8258f6f4021b} was finished with result PRL_ERR_CT_MIGRATE_INTERNAL_ERROR (0x80031035) ) 
02-24 11:27:25.872 F /disp:1910:19186/ Processing command 6002
02-24 11:27:25.873 I /IOCommunication:1910:19186/ IO server ctx [read thr] (handle 50, sender 2): Socket graceful shutdown detected. No worries, everything goes fine.
02-24 11:27:25.877 F /disp:1910:19181/ Processing command 'DspCmdUserLogoff' 2042 (PJOC_SRV_LOGOFF)
02-24 11:27:25.877 F /disp:1910:19181/ Parallels user [root@.] successfully logged off. [sessionId = {19f8afe6-efe5-495e-a41c-b2734608bd86} ]
02-24 11:27:25.878 I /IOCommunication:1910:19181/ IO server ctx [read thr] (handle 44, sender 2): Stop in progress for read thread
02-24 11:27:25.878 W /disp:1910:19181/ handleClientDisconnected
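
The source and destination logs can be correlated through the migration task lines. A small grep helper (a sketch only: the helper name is my own, and /var/log/prl-disp.log is assumed to be the default dispatcher log location on a VZ7 install) pulls out the Task_MigrateCt* and PRL_ERR lines on either node:

```shell
# Extract the migration task start/finish lines and PRL_ERR result codes
# from a dispatcher log, so the source and destination runs can be compared.
migrate_lines() {
  # $1 = path to a prl-disp.log; "|| true" keeps a missing file from aborting
  grep -E "Task '[0-9]+Task_MigrateCt(Source|Target)'|PRL_ERR" "$1" || true
}

migrate_lines /var/log/prl-disp.log
```

Run on both srv01 and srv02, this shows each side finishing with the same PRL_ERR_CT_MIGRATE_INTERNAL_ERROR within milliseconds of starting, which suggests the failure comes from the underlying vzmsrc/vzmdest call rather than from the dispatcher itself.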


Does anyone know what "EVENT type=100031" means? Is it possible to get more debug information? Does anyone have a hint on how to fix this?
Re: OpenVZ7 - VE Migration fails [message #52751 is a reply to message #52749] Fri, 24 February 2017 16:39
A(r|d)min
Reading some Jira issues, I found out that people still use "vzmigrate" to migrate a VE from one VZ7 host to another, even though "prlctl migrate" should be able to do this. So I installed vzmigrate. At first I got this error:
# vzmigrate -r yes srv01 2f64cf14-2534-4d29-bc71-6046ecfce652
Connection to destination node (srv01) is successfully established
Moving/copying CT 2f64cf14-2534-4d29-bc71-6046ecfce652 -> CT 2f64cf14-2534-4d29-bc71-6046ecfce652, [], [] ...
locking 2f64cf14-2534-4d29-bc71-6046ecfce652
Checking bindmounts
Check cluster ID
Checking keep dir for private area copy
Checking technologies
Check target CT name: VE02
rsync : rsync: --fdin: unknown option
rsync : rsync error: syntax or usage error (code 1) at main.c(1435) [client=3.0.9]
rsync exited with code 1
Can't move/copy CT 2f64cf14-2534-4d29-bc71-6046ecfce652 -> CT 2f64cf14-2534-4d29-bc71-6046ecfce652, [], [] : rsync exited with code 1
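
Before upgrading, the mismatch can be confirmed without running a migration: the `--fdin` option only exists in the patched Virtuozzo rsync build, so a stock binary rejects it with exactly the "unknown option" error shown above. A small probe (a sketch; the helper name and the binary path are my own, not part of any tool):

```shell
# Check whether an rsync binary accepts the Virtuozzo-specific --fdin flag.
# A stock build (e.g. 3.0.9 from the base repo) prints
# "rsync: --fdin: unknown option" and exits non-zero.
probe_fdin() {
  # $1 = rsync binary to probe; stderr is folded in so the message is caught,
  # and stdin is closed so a patched build cannot block waiting for input
  if "$1" --fdin </dev/null 2>&1 | grep -q 'unknown option'; then
    echo "stock rsync: --fdin unsupported"
  else
    echo "no 'unknown option' complaint: possibly the patched vz build"
  fi
}

probe_fdin /usr/bin/rsync
```

Running the probe on both nodes (locally and via ssh) would have shown the stock 3.0.9 build before any migration attempt.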

After reading https://bugs.openvz.org/browse/OVZ-6308, I upgraded both systems to VZ Linux 7.3 and the latest related packages (including rsync, which is mentioned in the ticket). Afterwards the migration completed successfully:
# vzmigrate -r yes srv01 2f64cf14-2534-4d29-bc71-6046ecfce652
Connection to destination node (srv01) is successfully established
Moving/copying CT 2f64cf14-2534-4d29-bc71-6046ecfce652 -> CT 2f64cf14-2534-4d29-bc71-6046ecfce652, [], [] ...
locking 2f64cf14-2534-4d29-bc71-6046ecfce652
Checking bindmounts
Check cluster ID
Checking keep dir for private area copy
Checking technologies
Check target CT name: VE02
Checking RATE parameters in config
Checking ploop format 2
copy CT private /vz/private/2f64cf14-2534-4d29-bc71-6046ecfce652
Successfully completed


I hope this post helps someone.