Using multicast in virtual servers [message #20707] Tue, 25 September 2007 12:01
Peter Hinse

Hi all,

we have several OpenVZ instances (CentOS 4.5) running on several
physical servers (CentOS 5.0) as a QA/testing environment for Java
applications running in the JBoss application server. Since we have to
test clustering, multicast has to work for all virtual servers, no
matter which physical host they run on.

Right now we use veth interfaces with local IPs (192.168.*.*), and we
have tested multicast with ssmping (http://www.venaas.no/multicast/ssmping/)
without success.
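
(For context, the test boils down to running the ssmpingd responder on one
node and ssmping on another; a minimal sketch, assuming the tools' default
behaviour, with an example address taken from later in this thread:)

# on the node acting as multicast source: start the responder
ssmpingd

# on the receiving node: join the source-specific multicast group for
# that source and compare unicast vs. multicast replies
ssmping 192.168.198.54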

Any idea how to get this running?

Regards,

	Peter
Re: Using multicast in virtual servers [message #20709 is a reply to message #20707] Tue, 25 September 2007 13:00
dev (Kirill Korotaev)

What OVZ kernel do you use?
We'll try to check with this tool as well.

Regards,
Kirill

Re: Using multicast in virtual servers [message #20710 is a reply to message #20709] Tue, 25 September 2007 13:00
Peter Hinse

Kirill Korotaev wrote:
> What OVZ kernel do you use?
> We'll try to check with this tool as well.

We use the RHEL5 kernel series:

Linux bladeG4 2.6.18-8.1.8.el5.028stab039.1 #1 SMP Mon Jul 23 18:02:32 MSD 2007 x86_64 x86_64 x86_64 GNU/Linux

Re: Using multicast in virtual servers [message #20716 is a reply to message #20710] Tue, 25 September 2007 14:03
dev (Kirill Korotaev)

Peter,

Is the same setup working without OpenVZ?
Have you used multicast before? Multicast is a bit complex to set up and
requires support from routers/switches etc., so this might well not be
OpenVZ-related. But we are setting up this test case right now to check ourselves.

Can you please also provide a bit more information about your configuration,
such as whether you use a bridge for veth-eth0 traffic or routed networking,
and any configuration options (including sysctl) you used or changed?
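
(A minimal sketch of how that information could be collected on the hardware
node; the veth interface name is an example taken from later in this thread:)

# OVZ kernel version
uname -r
# bridges, if any are configured
brctl show
# routed vs. bridged veth setup
route -n
# per-interface sysctl settings
sysctl -a | grep veth1981320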

Thanks,
Kirill

Re: Using multicast in virtual servers [message #20717 is a reply to message #20707] Tue, 25 September 2007 14:18
Vitaliy Gusev

On Tuesday 25 September 2007 16:01, Peter Hinse wrote:
> [...]
> Any idea how to get this running?
Please post your ssmping command and the output of ifconfig, route -n, etc.

-- 
Thanks,
Vitaliy Gusev
Re: Using multicast in virtual servers [message #20760 is a reply to message #20716] Wed, 26 September 2007 03:18
Daniel Pittman

Kirill Korotaev <dev@sw.ru> writes:

> [...]

One thing that is worth noting: I found a bug in (I think) the veth code
where it wouldn't pass a multicast packet through. The code checked the
'is_broadcast' flag and for a matching MAC, and assumed that anything else
was not for this host.

Perhaps this is a similar issue? I can try to dig out the fault report
if it helps, but at the time the fix was simply changing is_broadcast to
include an is_multicast test on the Ethernet MAC.

I hope that actually helps. :)

Regards,
        Daniel
-- 
Daniel Pittman <daniel@cybersource.com.au>           Phone: 03 9621 2377
Level 4, 10 Queen St, Melbourne             Web: http://www.cyber.com.au
Cybersource: Australia's Leading Linux and Open Source Solutions Company
Re: Using multicast in virtual servers [message #20766 is a reply to message #20760] Wed, 26 September 2007 06:47
Andrey Mirkin

Hello,

On Wednesday 26 September 2007 07:18, Daniel Pittman wrote:
> One thing that is worth noting: I found a bug in (I think) the veth code
> where it wouldn't pass a multicast packet through. [...]
Actually, the bug you mention was fixed in 2.6.18-el5-028stab034.1,
so it seems we have another problem here.

Best regards,
Andrey
Re: Using multicast in virtual servers [message #20776 is a reply to message #20760] Wed, 26 September 2007 07:13
dev (Kirill Korotaev)

Daniel Pittman wrote:
> One thing that is worth noting: I found a bug in (I think) the veth code
> where it wouldn't pass a multicast packet through. The code checked the
> 'is_broadcast' flag and for a matching MAC, and assumed that anything else
> was not for this host.

Please make sure you are really running, and looking at the sources of, the 028stab039 kernel.
This check in veth_xmit() was fixed in 028stab034 with this commit:
http://git.openvz.org/?p=linux-2.6.18-openvz;a=commitdiff;h=993241dcdfc8ae22d339e08ed78db6e9760b1d89
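
(A quick sanity check, as a sketch: compare the stab number in the running
kernel version string against 034:)

# the running kernel must be 028stab034 or later to contain the fix
uname -r
# -> 2.6.18-8.1.8.el5.028stab039.1   (039 >= 034, so the fix is included)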
> Perhaps this is a similar issue? I can try to dig out the fault report
> if it helps, but at the time the fix was simply changing is_broadcast to
> include an is_multicast test on the Ethernet MAC.

And it started to work?

Thanks,
Kirill
Re: Using multicast in virtual servers [message #20777 is a reply to message #20716] Wed, 26 September 2007 07:28
Peter Hinse

Kirill Korotaev wrote:
> Is the same setup working without OpenVZ?
> Have you used multicast before? [...]

We use several JBoss clusters with multicast in our datacenters.
Multicast itself works between the physical hosts:

ssmping joined (S,G) = (192.168.198.54,232.43.211.234)
pinging S from 192.168.198.53
  unicast from 192.168.198.54, seq=1 dist=0 time=1.513 ms
multicast from 192.168.198.54, seq=1 dist=0 time=1.528 ms
  unicast from 192.168.198.54, seq=2 dist=0 time=0.107 ms
multicast from 192.168.198.54, seq=2 dist=0 time=0.115 ms
  unicast from 192.168.198.54, seq=3 dist=0 time=0.097 ms
multicast from 192.168.198.54, seq=3 dist=0 time=0.104 ms
  unicast from 192.168.198.54, seq=4 dist=0 time=0.113 ms
multicast from 192.168.198.54, seq=4 dist=0 time=0.123 ms
  unicast from 192.168.198.54, seq=5 dist=0 time=0.102 ms
multicast from 192.168.198.54, seq=5 dist=0 time=0.113 ms
  unicast from 192.168.198.54, seq=6 dist=0 time=0.119 ms
multicast from 192.168.198.54, seq=6 dist=0 time=0.130 ms

--- 192.168.198.54 statistics ---
6 packets transmitted, time 5276 ms
unicast:
   6 packets received, 0% packet loss
   rtt min/avg/max/std-dev = 0.097/0.341/1.513/0.524 ms
multicast:
   6 packets received, 0% packet loss since first mc packet (seq 1) recvd
   rtt min/avg/max/std-dev = 0.104/0.352/1.528/0.526 ms

If I try ssmpingd/ssmping between two virtual instances on two different
hosts (or from one virtual instance to a physical host and vice versa),
only the unicast packets get through:

pinging S from 192.168.198.142
  unicast from 192.168.198.132, seq=1 dist=1 time=2627.063 ms
  unicast from 192.168.198.132, seq=2 dist=1 time=1626.828 ms
  unicast from 192.168.198.132, seq=3 dist=1 time=626.718 ms
  unicast from 192.168.198.132, seq=4 dist=1 time=0.100 ms
  unicast from 192.168.198.132, seq=5 dist=1 time=0.101 ms
  unicast from 192.168.198.132, seq=6 dist=1 time=0.150 ms

--- 192.168.198.132 statistics ---
6 packets transmitted, time 5372 ms
unicast:
   6 packets received, 0% packet loss
   rtt min/avg/max/std-dev = 0.100/813.493/2627.063/997.511 ms
multicast:
   0 packets received, 100% packet loss


> Can you please also provide a bit more information about your configuration,
> such as whether you use a bridge for veth-eth0 traffic or routed networking,
> and any configuration options (including sysctl) you used or changed?

sysctl settings for the virtual interface on one of the host systems:

net.ipv4.conf.veth1981320.promote_secondaries = 0
net.ipv4.conf.veth1981320.force_igmp_version = 0
net.ipv4.conf.veth1981320.disable_policy = 0
net.ipv4.conf.veth1981320.disable_xfrm = 0
net.ipv4.conf.veth1981320.arp_accept = 0
net.ipv4.conf.veth1981320.arp_ignore = 0
net.ipv4.conf.veth1981320.arp_announce = 0
net.ipv4.conf.veth1981320.arp_filter = 0
net.ipv4.conf.veth1981320.tag = 0
net.ipv4.conf.veth1981320.log_martians = 0
net.ipv4.conf.veth1981320.bootp_relay = 0
net.ipv4.conf.veth1981320.medium_id = 0
net.ipv4.conf.veth1981320.proxy_arp = 1
net.ipv4.conf.veth1981320.accept_source_route = 1
net.ipv4.conf.veth1981320.send_redirects = 1
net.ipv4.conf.veth1981320.rp_filter = 0
net.ipv4.conf.veth1981320.shared_media = 1
net.ipv4.conf.veth1981320.secure_redirects = 1
net.ipv4.conf.veth1981320.accept_redirects = 1
net.ipv4.conf.veth1981320.mc_forwarding = 0
net.ipv4.conf.veth1981320.forwarding = 1


Network config on the VPS:

eth0      Link encap:Ethernet  HWaddr 00:0C:29:0A:3D:C6
          inet addr:192.168.198.132  Bcast:192.168.198.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe0a:3dc6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2855871 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5495903 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1453942778 (1.3 GiB)  TX bytes:3721100335 (3.4 GiB)

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.198.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U     0      0        0 eth0

Any more information you need?

Regards,

	Peter
Re: Using multicast in virtual servers [message #20779 is a reply to message #20777] Wed, 26 September 2007 07:35
dev (Kirill Korotaev)

# uname -a
(from both VEs if you do VE <-> VE multicasting)

Thanks,
Kirill
P.S. Please check my other email from today about the veth code.


Re: Using multicast in virtual servers [message #20780 is a reply to message #20779] Wed, 26 September 2007 07:39
Peter Hinse

Kirill Korotaev wrote:
> # uname -a
> (from both VEs if you do VE <-> VE multicasting)

identical for both VEs:

Linux vps-mpp132 2.6.18-8.1.8.el5.028stab039.1 #1 SMP Mon Jul 23 18:02:32 MSD 2007 x86_64 x86_64 x86_64 GNU/Linux

Linux vps-mpp142 2.6.18-8.1.8.el5.028stab039.1 #1 SMP Mon Jul 23 18:02:32 MSD 2007 x86_64 x86_64 x86_64 GNU/Linux

Regards,

	Peter
Re: Using multicast in virtual servers [message #20785 is a reply to message #20780] Wed, 26 September 2007 08:12
Vitaliy Gusev

Please post the ssmping output in the VE (which doesn't work), plus the
route, brctl show, and ifconfig output for VE0.


-- 
Thanks,
Vitaliy Gusev
Re: Using multicast in virtual servers [message #20787 is a reply to message #20776] Wed, 26 September 2007 08:31
Daniel Pittman

Kirill Korotaev <dev@sw.ru> writes:
> Daniel Pittman wrote:
>> One thing that is worth noting: I found a bug in (I think) the veth code
>> where it wouldn't pass a multicast packet through. The code checked the
>> 'is_broadcast' flag and for a matching MAC, and assumed that anything else
>> was not for this host.
>
> Please make sure you are really running, and looking at the sources of, the 028stab039 kernel.
> This check in veth_xmit() was fixed in 028stab034 with this commit:
> http://git.openvz.org/?p=linux-2.6.18-openvz;a=commitdiff;h=993241dcdfc8ae22d339e08ed78db6e9760b1d89

I suspected that you would remember. :)

>> Perhaps this is a similar issue? I can try to dig out the fault report
>> if it helps, but at the time the fix was simply changing is_broadcast to
>> include an is_multicast test on the Ethernet MAC.
>
> And it started to work?

I never got to test it; that particular job (enabling CUPS server browse
announcements in a VE) is still outstanding on my list because it was
low priority. (Sorry.)

        Daniel
-- 
Daniel Pittman <daniel@cybersource.com.au>           Phone: 03 9621 2377
Level 4, 10 Queen St, Melbourne             Web: http://www.cyber.com.au
Cybersource: Australia's Leading Linux and Open Source Solutions Company
Re: Using multicast in virtual servers [message #20788 is a reply to message #20777] Wed, 26 September 2007 08:52
Vitaliy Gusev

On Wednesday 26 September 2007 11:28, Peter Hinse wrote:
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> 192.168.198.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
> 169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
> 224.0.0.0       0.0.0.0         240.0.0.0       U     0      0        0 eth0

Try setting a default gateway in the VE.
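
(For illustration, inside the VE that would be something along these lines;
the gateway address is a made-up example, not from this thread:)

# add a default route inside the VE
route add default gw 192.168.198.1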

-- 
Thanks,
Vitaliy Gusev
Re: Using multicast in virtual servers [message #20789 is a reply to message #20785] Wed, 26 September 2007 09:00
Peter Hinse

Vitaliy Gusev wrote:

> Please post the ssmping output in the VE (which doesn't work), plus the
> route, brctl show, and ifconfig output for VE0.

ssmping output in VE:

pinging S from 192.168.198.142
  unicast from 192.168.198.132, seq=1 dist=1 time=2627.063 ms
  unicast from 192.168.198.132, seq=2 dist=1 time=1626.828 ms
  unicast from 192.168.198.132, seq=3 dist=1 time=626.718 ms
  unicast from 192.168.198.132, seq=4 dist=1 time=0.100 ms
  unicast from 192.168.198.132, seq=5 dist=1 time=0.101 ms
  unicast from 192.168.198.132, seq=6 dist=1 time=0.150 ms

--- 192.168.198.132 statistics ---
6 packets transmitted, time 5372 ms
unicast:
   6 packets received, 0% packet loss
   rtt min/avg/max/std-dev = 0.100/813.493/2627.063/997.511 ms
multicast:
   0 packets received, 100% packet loss

route for VE0:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.198.141 0.0.0.0         255.255.255.255 UH    0      0        0 veth1981410
192.168.198.142 0.0.0.0         255.255.255.255 UH    0      0        0 veth1981420
195.x.x.x       0.0.0.0         255.255.255.255 UH    0      0        0 veth1981411
195.x.x.x       0.0.0.0         255.255.255.224 U     0      0        0 eth0
192.168.198.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U     0      0        0 eth0
0.0.0.0         195.x.x.x       0.0.0.0         UG    0      0        0 eth0

ifconfig for VE0:

eth0      Link encap:Ethernet  HWaddr 00:1A:64:32:0A:F8
          inet addr:192.168.198.54  Bcast:192.168.198.255  Mask:255.255.255.0
          inet6 addr: fe80::21a:64ff:fe32:af8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:19330162 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3542937 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4931551683 (4.5 GiB)  TX bytes:1232779800 (1.1 GiB)
          Interrupt:98 Memory:da000000-da011100

eth0:1    Link encap:Ethernet  HWaddr 00:1A:64:32:0A:F8
          inet addr:195.x.x.x  Bcast:195.x.x.x  Mask:255.255.255.224
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:98 Memory:da000000-da011100

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:788 errors:0 dropped:0 overruns:0 frame:0
          TX packets:788 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:246284 (240.5 KiB)  TX bytes:246284 (240.5 KiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

veth1981410 Link encap:Ethernet  HWaddr 00:0C:29:91:B1:81
          inet6 addr: fe80::20c:29ff:fe91:b181/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:345225 errors:0 dropped:0 overruns:0 frame:0
          TX packets:379368 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:221623267 (211.3 MiB)  TX bytes:349531019 (333.3 MiB)

veth1981411 Link encap:Ethernet  HWaddr 00:0C:29:91:B1:83
          inet6 addr: fe80::20c:29ff:fe91:b183/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:558932 errors:0 dropped:0 overruns:0 frame:0
          TX packets:696100 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:75514675 (72.0 MiB)  TX bytes:432185018 (412.1 MiB)

veth1981420 Link encap:Ethernet  HWaddr 00:0C:29:F7:A0:88
          inet6 addr: fe80::20c:29ff:fef7:a088/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4101690 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3166888 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1071935233 (1022.2 MiB)  TX bytes:3124980969 (2.9 GiB)

No bridging used right now.
Re: Using multicast in virtual servers [message #20794 is a reply to message #20789] Wed, 26 September 2007 09:58
Vitaliy Gusev

On Wednesday 26 September 2007 13:00, Peter Hinse wrote:
> [...]
> No bridging used right now.

I use veth with bridges and it works.

Now I'll try without bridges.

-- 
Thanks,
Vitaliy Gusev
Re: Using multicast in virtual servers [message #20803 is a reply to message #20794] Wed, 26 September 2007 12:50
Vitaliy Gusev

> On Wednesday 26 September 2007 13:00, Peter Hinse wrote:
> > [...]
> > No bridging used right now.
You must use a bridge. Multicast packets are not forwarded.
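
(For readers finding this thread later: a rough sketch of a bridged veth
setup on the hardware node, assuming bridge-utils; the bridge name and
address here are examples, not the exact config from this thread:)

# create a bridge and enslave the physical NIC
brctl addbr vzbr0
brctl addif vzbr0 eth0

# enslave the container's host-side veth device
brctl addif vzbr0 veth1981420

# clear the veth IP and move the host address onto the bridge
ifconfig veth1981420 0.0.0.0 up
ifconfig vzbr0 192.168.198.54 netmask 255.255.255.0 up

With all interfaces on one bridge, multicast frames are flooded at layer 2
instead of having to be routed, which is why the ssmping test then succeeds.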

-- 
Thanks,
Vitaliy Gusev
Re: Using multicast in virtual servers [message #21090 is a reply to message #20803] Mon, 01 October 2007 17:00
Peter Hinse

Vitaliy Gusev wrote:
>> On Wednesday 26 September 2007 13:00, Peter Hinse wrote:
>>> [...]
>>> No bridging used right now.
> You must use a bridge. Multicast packets are not forwarded.

OK, multicast works with bridging enabled. Big thanks for the help!

Regards,

	Peter