OpenVZ Forum


Outbound networking failure [message #45518] Wed, 14 March 2012 06:28
iowissen
Sorry if this is more of a Xen problem.

We are running OpenVZ in Xen PV guests (kernel
2.6.18-194.3.1.el5.028stab069.6xen), and occasionally a PV loses networking
on one of its virtual interfaces, together with the containers hosted on
that PV. Sometimes restarting the network solves the problem, but in most
cases we have to restart the PV. The PV's other virtual interface keeps
working, and the other PVs and the hypervisor keep working as well.

Detailed observation with tcpdump at the time of the failure shows:
1. inbound traffic is delivered correctly from eth0 of the PV all the way
to venet0;
2. venet0 also appears to work correctly, because a TCP SYN gets a response
from the container and the SYN-ACK is visible on venet0;
3. but on eth0 of the PV we only see the TCP SYN requests towards the
containers, while the SYN-ACKs are missing, so the TCP state on the
container's side stays in SYN_RECV;
4. if we ping the PV, on eth0 we only see the ICMP echo requests; the
replies are missing.
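
(For illustration, a capture setup along the following lines shows the
asymmetry described above; the container address 10.0.0.2, port 80 and
CTID 101 are placeholders, not values from our setup.)

  # run inside the PV; compare what each interface sees for one container
  tcpdump -nni eth0   'host 10.0.0.2 and (tcp port 80 or icmp)'
  tcpdump -nni venet0 'host 10.0.0.2 and (tcp port 80 or icmp)'
  # failing state: eth0 shows only inbound SYNs and ICMP echo requests,
  # while venet0 shows the SYNs plus the container's SYN-ACKs and echo replies
  vzctl exec 101 netstat -tn | grep SYN_RECV   # half-open connections stuck in the CT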

We did not encounter this problem when the same OpenVZ setup was running on
physical machines instead of Xen.

Does anyone have any hints or ideas? Thanks a lot in advance.

- maoke
Re: Outbound networking failure [message #45520 is a reply to message #45518] Wed, 14 March 2012 07:12
Andrew Vagin
2.6.18-194.3.1.el5.028stab069.6xen is a very old version. Could you
update it?

Re: Outbound networking failure [message #45522 is a reply to message #45520] Wed, 14 March 2012 07:26
iowissen
Yes, but we still have other problems with 2.6.32-042stab044.11. :P So we
are trying to understand this problem first. - maoke

Re: Outbound networking failure [message #45523 is a reply to message #45522] Wed, 14 March 2012 07:33
Andrew Vagin
On 03/14/2012 11:26 AM, Maoke wrote:
> Yes, but we still have other problems with 2.6.32-042stab044.11. :P
> So we are trying to understand this problem first.
I am not suggesting a move to the 2.6.32 kernel. I meant updating within the
2.6.18-028stabXXX.Y series. You can find the latest stable 2.6.18 kernel here:
http://download.openvz.org/kernel/branches/rhel5-2.6.18/stable/

This issue may already be fixed there.
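
(A rough sketch of that update path, assuming the Xen flavour is packaged
as ovzkernel-xen like the other RHEL5-based OpenVZ kernels; <version> and
<arch> stand for the newest RPM listed under stable/.)

  # on the PV, as root
  wget http://download.openvz.org/kernel/branches/rhel5-2.6.18/stable/ovzkernel-xen-<version>.<arch>.rpm
  rpm -ivh ovzkernel-xen-<version>.<arch>.rpm   # installs alongside the running kernel
  # make sure the PV's boot config (grub.conf / pygrub entry) points at the new kernel, then reboot
  uname -r                                      # verify the new 028stab kernel is running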

Re: Outbound networking failure [message #45524 is a reply to message #45523] Wed, 14 March 2012 07:55
iowissen

Thanks for the information! Is this a known bug of the old kernel? Is there
a way to reproduce it, so that we can test before putting the new kernel
into the production environment? Thanks again!

maoke
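
(There is no confirmed trigger to point at, but given the symptoms above,
one crude way to exercise the path while watching for the failure is a
sustained connection load against a container service; 10.0.0.2 is again a
placeholder.)

  # from a host outside the PV: keep opening short-lived connections to a CT service
  while true; do curl -s -m 2 -o /dev/null http://10.0.0.2/ || echo "timeout $(date)"; sleep 0.2; done
  # meanwhile, inside the CT, watch whether half-open connections pile up
  watch -n 5 'netstat -tn | grep -c SYN_RECV'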


separate block device for CT's swap [message #45533 is a reply to message #45518] Thu, 15 March 2012 09:38
stealth
Is it possible to use a separate block device as swap for a CT?
Re: separate block device for CT's swap [message #45534 is a reply to message #45533] Thu, 15 March 2012 09:59
Rick van Rein
Hello,

> Is it possible to use a separate block device as swap for a CT?

That conflicts with the design of OpenVZ: resources such as memory are
shared between containers, so the swap cannot be split per container.
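
(A minimal illustration of that sharing: per-container memory use shows up
in the shared beancounters rather than on any per-CT device; CTID 101 is an
example.)

  # on the hardware node: resource accounting for all containers, backed by the same RAM and swap
  cat /proc/user_beancounters
  # or for a single container
  vzctl exec 101 cat /proc/user_beancounters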

-Rick
Re: separate block device for CT's swap [message #45535 is a reply to message #45533] Thu, 15 March 2012 09:58
Tim Small
On 15/03/12 09:38, stealth wrote:
> Is it possible to use a separate block device as swap for a CT?

AFAIK, no, because of the way the kernel's VM subsystem works. You can,
of course, use multiple block devices as swap for the entire machine, and
that will give you better overall performance than separate swap for each
CT would anyway...
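
(A sketch of that whole-machine approach; the device names are examples.
Giving the swap devices equal priority lets the kernel stripe across them.)

  # on the node (or PV), not per CT
  mkswap /dev/sdb2
  swapon -p 1 /dev/sdb2        # equal-priority swap devices are used round-robin
  swapon -s                    # confirm all swap devices are active
  # give the existing device the same priority so both are striped, e.g. in /etc/fstab:
  # /dev/sda3   swap   swap   pri=1   0 0
  # /dev/sdb2   swap   swap   pri=1   0 0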

Cheers,

Tim.

--
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53 http://seoss.co.uk/ +44-(0)1273-808309