OpenVZ Forum

Multicast in OVZ CTs and VMs? (How to enable Multicast to pass from OVZ guests to the LAN)
Multicast in OVZ CTs and VMs? [message #53808] Mon, 09 January 2023 01:28
jjs - mainphrame
Messages: 42
Registered: January 2012
I've been doing some testing with ucarp, in debian VMs and containers.

(ucarp is an implementation of VRRP, a means of providing a highly available floating virtual IP within a cluster of machines)

It works fine on Proxmox VMs and CTs, and I would rather run it on OpenVZ, but so far my attempts to get it fully up and running have failed.

Basically, both nodes become master, because neither node is seeing the multicast traffic from the other.

What is the secret to allowing multicast traffic to pass from OpenVZ VMs and CTs onto the LAN?
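As a first diagnostic step, it helps to confirm whether multicast datagrams are delivered at all on a given host. The sketch below sends and receives one UDP datagram on VRRP's multicast group over the loopback interface; the port, payload, and loopback interface are arbitrary choices for this probe (VRRP itself is IP protocol 112, not UDP), so treat this as a generic reachability check, not a ucarp test:

```python
import socket

GROUP = "224.0.0.18"  # the VRRP multicast group; any 224.0.0.0/4 test group would do
PORT = 5007           # arbitrary UDP port for this probe
IFACE = "127.0.0.1"   # probe over loopback so no LAN is needed

# Receiver: bind the port and join the multicast group on the chosen interface.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = socket.inet_aton(GROUP) + socket.inet_aton(IFACE)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(5)

# Sender: transmit via the same interface, with multicast loopback enabled
# so this single-host run can see its own datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton(IFACE))
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
tx.sendto(b"mcast-probe", (GROUP, PORT))

data, addr = rx.recvfrom(1024)
print("received", data, "from", addr)
rx.close()
tx.close()
```

To test the actual path onto the LAN, run the receiver half on another host on the segment and the sender inside the container, with IFACE set to the container's own address on each side.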

Re: Multicast in OVZ CTs and VMs? [message #53810 is a reply to message #53808] Tue, 10 January 2023 08:23
vzadmin
Messages: 10
Registered: December 2008
Junior Member

OpenVZ is a container-based virtualization solution for Linux. It allows multiple isolated containers (sometimes referred to as "Virtual Private Servers" or VPSs) to run on a single physical host. All containers share the host's Linux kernel, but each has its own network stack and IP addresses.

Multicast is a method of sending network packets to multiple destinations simultaneously. It is often used for streaming multimedia or other data that needs to be received by multiple recipients at the same time. In the context of OpenVZ, it is possible to enable multicast support in the containers, but there are some considerations to keep in mind.

One important thing to note is that the default venet network device is a point-to-point interface with no MAC address, so multicast (and broadcast) packets sent from a container over venet never make it onto the LAN, and containers on the same host will not see each other's multicast traffic either. To pass multicast, the container needs a veth interface whose host-side end is attached to a bridge on the physical network; the container then participates in the segment like any other host.

The trade-off is that a bridged veth interface exposes the container at layer 2, so it gives up some of the network isolation that venet provides.
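The usual way to get multicast out of a container is to give it a bridged veth device rather than the default venet one. A minimal sketch with legacy vzctl, assuming a CT ID of 101 and a host bridge named br0 (both placeholders; on OpenVZ 7 the prlctl tool plays a similar role):

```shell
# Add a veth interface (eth0 inside the CT) instead of relying on venet:
vzctl set 101 --netif_add eth0 --save

# On the host, attach the container's host-side veth device to the bridge
# that carries the physical LAN, so multicast frames can reach the segment:
brctl addif br0 veth101.0
```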

Another option is to use a hypervisor-level virtualization solution such as KVM or Xen, where each guest has its own fully virtualized network interface attached to a bridge. There multicast works between guests out of the box, which could be useful if you need multicast-enabled communication between multiple guests.

It's also worth noting that, even if you do configure the container to use the host's network stack for multicast traffic, there may still be some limitations depending on the specific version of OpenVZ that you are using. For example, some older versions of OpenVZ do not support multicast routing, which could prevent multicast packets from being forwarded between different interfaces.

In general, if you need robust multicast support in an OpenVZ environment, it may be more appropriate to consider using a different virtualization solution that provides better support for this feature.

Re: Multicast in OVZ CTs and VMs? [message #53814 is a reply to message #53810] Tue, 17 January 2023 20:44
jjs - mainphrame
Messages: 42
Registered: January 2012
Thanks vzadmin - duly noted

But to follow up on my post, I discovered that keepalived works, where ucarp doesn't.

I find that puzzling, because they both use VRRP multicast. Some rainy day I may dig into why the VRRP multicast from keepalived is able to get to the LAN while the VRRP multicast from ucarp doesn't. At this point I'm just happy to have a working solution.

Details -
Host OS: OpenVZ release 7.0.19 (347)
Container - Debian, uses virtual adapters connected to host bridge networks.
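For anyone landing here later, the working keepalived side can be sketched as a minimal VRRP instance; the interface name, router ID, priority, and floating address below are placeholders, not the poster's actual values:

```
vrrp_instance VI_1 {
    state BACKUP            # let priority decide the initial master
    interface eth0          # the CT's virtual adapter on the host bridge
    virtual_router_id 51    # must match on every node in the cluster
    priority 100            # higher priority wins the election
    advert_int 1            # advertisement interval in seconds
    virtual_ipaddress {
        192.168.1.240/24    # the floating virtual IP
    }
}
```

The same file goes on each node, typically with a different priority per node so the election is deterministic.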