OpenVZ Forum


Container Networking Problem [message #37985] Sat, 07 November 2009 02:35
RogerHenry
Messages: 3
Registered: November 2009
Junior Member
Hello,

I have completed a smooth install of OpenVZ on CentOS 5 and was able to create a CentOS container from the standard template. However, the IP I assigned to the container still hits the main server instead of the container. I am using the default venet setup on a rented dedicated server. The server has a public IP plus 7 other public IPs (eth0:0 is the main IP, eth0:2 a spare IP, etc.).

I believe my sysctl.conf settings are correct; they are as follows:

# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 1

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 268435456
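If these values were edited after boot, they can be applied and double-checked without a restart; a minimal sketch, run as root on the hardware node:

```shell
# Re-apply /etc/sysctl.conf so the running kernel picks up the changes
sysctl -p

# Double-check the two settings OpenVZ cares about here
cat /proc/sys/net/ipv4/ip_forward              # expect 1
cat /proc/sys/net/ipv4/conf/default/proxy_arp  # expect 0
```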


I noticed in another post something about checking ARP. The ARP table currently has only one entry, and I'm unsure what to do with it.

Is anyone able to offer some guidance please ?


*update*
I used the CentOS network GUI tool to give venet0 an IP. I can now access the outside world from the container, but the container's IP is still hitting the main machine...

[Updated on: Sat, 07 November 2009 03:04]


Re: Container Networking Problem [message #37990 is a reply to message #37985] Sat, 07 November 2009 10:34
kir
Messages: 1645
Registered: August 2005
Location: Moscow, Russia
Senior Member

Remove the IP you have assigned to the container from the main server.

If that doesn't help, post the following here:
output of /sbin/ip a l
output of /sbin/ip r l
output of cat /proc/sys/net/ipv4/ip_forward
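For the first step, the move usually looks roughly like this (1.2.3.4, the /27 prefix, and CTID 101 are placeholders for your spare IP, netmask, and container ID):

```shell
# Drop the address from the node's interface (also remove any
# ifcfg-eth0:N alias file so it does not come back on a network restart)
ip addr del 1.2.3.4/27 dev eth0

# Let OpenVZ own the address instead of the distro network scripts
vzctl set 101 --ipadd 1.2.3.4 --save

# Verify from inside the container
vzctl exec 101 ip a l
```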


Kir Kolyshkin
http://static.openvz.org/userbars/openvz-developer.png
Re: Container Networking Problem [message #37998 is a reply to message #37990] Sat, 07 November 2009 17:16
RogerHenry
Messages: 3
Registered: November 2009
Junior Member
Thank you for your reply.

I have removed all the spare IPs from the server, and I am still experiencing the same problem. Below is the ifconfig output along with the requested output:

/sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:30:48:B0:3C:0A
inet addr:64.15.156.213 Bcast:64.15.156.223 Mask:255.255.255.224
inet6 addr: fe80::230:48ff:feb0:3c0a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:65629 errors:0 dropped:0 overruns:0 frame:0
TX packets:73614 errors:1 dropped:0 overruns:0 carrier:1
collisions:16 txqueuelen:10
RX bytes:7156084 (6.8 MiB) TX bytes:12516149 (11.9 MiB)
Memory:d0100000-d0120000

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:6737 errors:0 dropped:0 overruns:0 frame:0
TX packets:6737 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:310024 (302.7 KiB) TX bytes:310024 (302.7 KiB)

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:174.142.148.210 P-t-P:174.142.148.210 Bcast:174.142.255.255 Mask:255.255.0.0
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:71 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3976 (3.8 KiB) TX bytes:0 (0.0 b)

/sbin/ip a l
2: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 10
link/ether 00:30:48:b0:3c:0a brd ff:ff:ff:ff:ff:ff
inet 64.15.156.213/27 brd 64.15.156.223 scope global eth0
inet6 fe80::230:48ff:feb0:3c0a/64 scope link
valid_lft forever preferred_lft forever
1: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0
3: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/void
inet 174.142.148.210/16 brd 174.142.255.255 scope global venet0

/sbin/ip r l
64.15.156.192/27 dev eth0 proto kernel scope link src 64.15.156.213
174.142.0.0/16 dev venet0 proto kernel scope link src 174.142.148.210
169.254.0.0/16 dev venet0 scope link
default via 64.15.156.193 dev eth0


cat /proc/sys/net/ipv4/ip_forward
1
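One detail that stands out in the ip a l / ip r l output above: venet0 on the hardware node itself carries 174.142.148.210/16 (presumably from the GUI tool mentioned earlier), so the node answers for that address directly. A sketch of cleaning that up, assuming the address should belong to a container (CTID 101 is a placeholder):

```shell
# Remove the address (and its broad /16 route) from the node's venet0;
# venet0 on the hardware node normally carries no address of its own
ip addr del 174.142.148.210/16 dev venet0

# Re-add the address through OpenVZ so traffic is routed to the container
vzctl set 101 --ipadd 174.142.148.210 --save
ip r l   # should now show a /32 host route to the CT via venet0
```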


Re: Container Networking Problem [message #38048 is a reply to message #37985] Thu, 12 November 2009 02:58
RogerHenry
Messages: 3
Registered: November 2009
Junior Member
This problem has since been solved. The thread can be closed.

