Run XORP in VE [message #27517]
Tue, 19 February 2008 04:28
yoolee
Hi, all,
I would like to host multiple XORP instances using OpenVZ, but I have run into a strange problem:
After XORP runs for 7-8 minutes, I can no longer ping any interface in that particular VE, including the lo interface. This is strange because XORP uses the lo interface to communicate among its own processes, yet it keeps running even though I cannot ping lo.
XORP uses a netlink socket to fetch interface information and to update forwarding information in the kernel. It also uses normal and raw sockets to exchange routing messages.
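In case it helps, this is roughly how I check that XORP still has its sockets open inside the VE (a rough sketch; xorp_fea is the forwarding-engine process name in my build, adjust as needed):

# List the netlink sockets currently open in this VE; XORP's FEA should appear here.
cat /proc/net/netlink
# List raw IPv4 sockets (XORP uses these to exchange routing protocol messages).
cat /proc/net/raw
# Watch the FEA's network-related system calls to confirm it is still talking to the kernel.
strace -e trace=network -p "$(pidof xorp_fea)"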
I did the same thing with Xen before, and I would like to port that work to OpenVZ for large-scale testing.
Could anyone give me some clues? Thanks a lot!
Re: Run XORP in VE [message #27559 is a reply to message #27556]
Tue, 19 February 2008 16:30
yoolee
Oh, those commands show nothing when the problem occurs.
Are the routes being removed by XORP for some reason?
The following is the normal output of those commands:
# ip route list table local
broadcast 10.16.0.4 dev tun1 proto kernel scope link src 10.16.0.5
local 10.16.0.5 dev tun1 proto kernel scope host src 10.16.0.5
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 10.16.0.7 dev tun1 proto kernel scope link src 10.16.0.5
broadcast 10.16.0.0 dev tun0 proto kernel scope link src 10.16.0.2
local 10.10.0.102 dev venet0 proto kernel scope host src 10.10.0.102
local 10.16.0.2 dev tun0 proto kernel scope host src 10.16.0.2
broadcast 10.16.0.3 dev tun0 proto kernel scope link src 10.16.0.2
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev venet0 proto kernel scope host src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
# ip l l
1: lo: <LOOPBACK,UP,10000> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,10000> mtu 1500 qdisc noqueue
link/void
13: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,10000> mtu 1500 qdisc pfifo_fast qlen 500
link/[65534]
15: tun1: <POINTOPOINT,MULTICAST,NOARP,UP,10000> mtu 1500 qdisc pfifo_fast qlen 500
link/[65534]
# ip a l
1: lo: <LOOPBACK,UP,10000> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,10000> mtu 1500 qdisc noqueue
link/void
inet 127.0.0.1/32 scope host venet0
inet 10.10.0.102/32 scope global venet0:0
13: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,10000> mtu 1500 qdisc pfifo_fast qlen 500
link/[65534]
inet 10.16.0.2/30 scope global tun0
15: tun1: <POINTOPOINT,MULTICAST,NOARP,UP,10000> mtu 1500 qdisc pfifo_fast qlen 500
link/[65534]
inet 10.16.0.5/30 scope global tun1
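To pin down what deletes them, I am going to leave a route monitor running inside the VE and, once ping fails, try putting the loopback entries back by hand. Roughly something like this (untested here; the route arguments are copied from the normal output above):

# Log routing-table changes so we can see when the local-table entries get deleted.
ip monitor route > /tmp/route-changes.log &

# If the local table really is emptied, these should bring loopback pings back;
# they recreate the kernel's default local/broadcast routes for lo.
ip route add local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
ip route add broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1
ip route add broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1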
Re: Run XORP in VE [message #27653 is a reply to message #27625]
Thu, 21 February 2008 21:27
yoolee
From my observation, "dgramrcvbuf" usage depends on how many multicast packets XORP sends in that VE. For example, if I increase the OSPF Hello interval (the Hello packet is a multicast packet with destination 224.0.0.5) from 1 second to 2 seconds, it takes exactly twice as long for "dgramrcvbuf" to reach the quota.
Now I have set the "dgramrcvbuf" limit to 680000; the usage climbs to about 620000 and then stops increasing, so I can run my experiment for a long time.
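For the record, this is roughly how I adjust the limit and watch the counter from the hardware node (a sketch; 101 stands in for my real VE ID):

# Raise the dgramrcvbuf barrier and limit for the container and save it to its config.
vzctl set 101 --dgramrcvbuf 680000:680000 --save
# Watch the held usage, barrier, limit, and failcnt columns for this beancounter.
grep dgramrcvbuf /proc/user_beancounters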
Yes, I will try to find an easy way to reproduce the problem for your debugging.
Thanks a lot!