We have a lot of nodes running CentOS 6 + OpenVZ (SolusVM slaves), and on all of the old nodes IPv6 works inside containers without any problems.
We recently installed a new CentOS 6 + OpenVZ (SolusVM slave) node with the same network configuration as our old nodes, but on this new node IPv6 does not work inside containers.
We have tried a lot of configurations, but nothing helps...
Symptoms and details of the problem:
1) The node can ping external IPv6 addresses, so IPv6 on the node itself works.
2) The node can ping the local container's IPv6 address.
3) The container can ping the node's IPv6 address.
4) The container cannot ping any external IPv6 address.
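For completeness, the tests above were along the following lines (prefix anonymized as elsewhere in this post; the container ID 101 and the external test address are just examples, not our real values):

```shell
# On the node: external IPv6 works
ping6 -c 3 2001:4860:4860::8888              # example external target - works

# On the node: the container's address is reachable
ping6 -c 3 AAAA:BBBB:CCCC:4208::3            # works

# Inside the container (CTID 101 is an example):
vzctl exec 101 ping6 -c 3 AAAA:BBBB:CCCC:4208::2   # node address - works
vzctl exec 101 ping6 -c 3 2001:4860:4860::8888     # external address - fails
```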
One difference on this new node: when we restart networking with `service network restart`, it shows some errors (some IP details changed to AAAA:BBB:CCC):
Shutting down interface eth0: [ OK ]
Shutting down interface venet0: Shutting down interface venet0:
[ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: RTNETLINK answers: Invalid argument
ndsend: Error in sendto(): Cannot assign requested address
ifup-local WARNING: arpsend -c 1 -w 1 -U -i AAAA:BBB:CCC:4208:0:0:0:3 -e AAAA:BBB:CCC:4208:0:0:0:3 eth0 FAILED
[ OK ]
Bringing up interface venet0: Bringing up interface venet0:
Configuring interface venet0:
net.ipv4.conf.venet0.send_redirects = 0
Configuring ipv6 venet0:
ndsend: Error in sendto(): Cannot assign requested address
ifup-venet WARNING: arpsend -c 1 -w 1 -U -i AAAA:BBB:CCC:4208:0:0:0:3 -e AAAA:BBB:CCC:4208:0:0:0:3 eth0 FAILED
[ OK ]
So these errors may be related to our problem:
ndsend: Error in sendto(): Cannot assign requested address
ifup-local WARNING: arpsend -c 1 -w 1 -U -i AAAA:BBB:CCC:4208:0:0:0:3 -e AAAA:BBB:CCC:4208:0:0:0:3 eth0 FAILED
We have contacted the data center staff regarding this problem, but they replied that everything is working normally on their side (IPv6 works on the node itself).
IPv6 /64 subnet (changed) assigned to node: AAAA:BBBB:CCCC:4208::/64
Node IP (changed): AAAA:BBBB:CCCC:4208::2
Container IP (changed): AAAA:BBBB:CCCC:4208::3
Gateway: fe80::1
# cat /etc/sysctl.conf
net.ipv4.icmp_echo_ignore_broadcasts=1
net.ipv4.ip_forward = 1
kernel.sysrq = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.proxy_ndp = 1
net.ipv6.conf.all.proxy_ndp = 1
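Given that proxy_ndp is enabled above, and that ndsend fails during ifup, one thing we think is worth checking (a sketch, not a confirmed fix; addresses anonymized as in the rest of this post) is whether the node actually holds a proxy-NDP entry for the container's address on eth0, and what happens if one is added by hand:

```shell
# List the proxy-NDP entries currently installed on eth0
ip -6 neigh show proxy dev eth0

# Manually add a proxy entry for the container's address, then retest
# external IPv6 connectivity from inside the container
ip -6 neigh add proxy AAAA:BBBB:CCCC:4208::3 dev eth0
```

vzctl normally installs this entry itself when the container's IPv6 address is added, so if adding it manually restores external connectivity, the problem would be in the per-container network setup on this node rather than in the data center's routing.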
# cat /etc/sysconfig/network
# general networking
NETWORKING=yes
HOSTNAME=server.domain.com
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# device: eth0
DEVICE=eth0
BOOTPROTO=static
BROADCAST=x.x.x.x
HWADDR=x:x:x:x:x:x
IPADDR=y.y.y.y
NETMASK=255.255.255.255
SCOPE="peer x.x.x.x"
IPV6INIT=yes
IPV6ADDR=AAAA:BBBB:CCCC:4208::2
IPV6_DEFAULTGW=fe80::1
IPV6_DEFAULTDEV=eth0
# cat /etc/sysconfig/network-scripts/route6-eth0
fe80::1 dev eth0
::/0 via fe80::1
# ip -6 route list
unreachable ::/96 dev lo metric 1024 error -101 mtu 16436 advmss 16376 hoplimit 4294967295
unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -101 mtu 16436 advmss 16376 hoplimit 4294967295
unreachable 2002:a00::/24 dev lo metric 1024 error -101 mtu 16436 advmss 16376 hoplimit 4294967295
unreachable 2002:7f00::/24 dev lo metric 1024 error -101 mtu 16436 advmss 16376 hoplimit 4294967295
unreachable 2002:a9fe::/32 dev lo metric 1024 error -101 mtu 16436 advmss 16376 hoplimit 4294967295
unreachable 2002:ac10::/28 dev lo metric 1024 error -101 mtu 16436 advmss 16376 hoplimit 4294967295
unreachable 2002:c0a8::/32 dev lo metric 1024 error -101 mtu 16436 advmss 16376 hoplimit 4294967295
unreachable 2002:e000::/19 dev lo metric 1024 error -101 mtu 16436 advmss 16376 hoplimit 4294967295
AAAA:BBBB:CCCC:4208::3 dev venet0 metric 1024 mtu 1500 advmss 1440 hoplimit 4294967295
AAAA:BBBB:CCCC:4208::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
unreachable 3ffe:ffff::/32 dev lo metric 1024 error -101 mtu 16436 advmss 16376 hoplimit 4294967295
fe80::1 dev venet0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
fe80::1 dev eth0 metric 1024 mtu 1500 advmss 1440 hoplimit 4294967295
fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
fe80::/64 dev venet0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
default via fe80::1 dev eth0 metric 1 mtu 1500 advmss 1440 hoplimit 4294967295
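To see whether the upstream router is even asking for the container's address, a packet capture of neighbor discovery traffic on eth0 may help (a diagnostic sketch; the filter matches ICMPv6 types 135/136, neighbor solicitation/advertisement, assuming no IPv6 extension headers):

```shell
# Watch for neighbor solicitations/advertisements on eth0 while pinging
# an external IPv6 address from inside the container
tcpdump -n -i eth0 'icmp6 and (ip6[40] == 135 or ip6[40] == 136)'
```

If solicitations for AAAA:BBBB:CCCC:4208::3 arrive from the router but the node never answers, that would point at proxy NDP not being set up for the container.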
Has anyone else experienced this kind of problem, or does anyone know a solution?
Thank you very much for any reply, help, or advice regarding this issue.
[Updated on: Fri, 21 December 2012 20:25]