OpenVZ Forum


Occasionally iptables blocks simply stop working [message #53614] Wed, 08 January 2020 15:18
wsap (Member) - Messages: 60, Registered: March 2018, Location: Halifax, NS
Heya folks,

We've got one container out of hundreds where, every once in a while, iptables simply stops blocking traffic, and we're having a hard time figuring out why.

We have at least a dozen containers running essentially the same software stack (CentOS 7 x64 with CSF (ConfigServer Firewall) and LFD, among other things), and none of the others have encountered this issue.

Here's the timeline each time this occurs:

1. Notice higher than normal load on the container (a load average of 2-3 rather than 0.5).
2. Check processes and see php-fpm processes using CPU. Monitor that website's logs and see xml-rpc or wp-login attacks against WordPress sites. LFD has rules in place to detect both of these types of attacks on WordPress and add the offending IPs to CSF.
3. Query CSF with csf -g {ip}. CSF reports that the IP is blocked, either directly in iptables or in an IPSET chain. (Note: if we disable IPSET in CSF's config and use only iptables, the result is the same, so I don't think this is specific to IPSET.) Just to be sure, I also queried iptables directly with iptables -L -n | grep {ip} (with ipset disabled), and the IP is definitely listed there and configured to DROP all packets from it. The exact checks are shown after this list.
4. Yet the brute-force attack continues, despite clearly seeing LFD block the IP and finding the IP in iptables or IPSET's block list.
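For reference, these are the checks behind steps 3 and 4 (1.2.3.4 stands in for the attacking IP; exact chain and set names will vary with the CSF configuration):

# Ask CSF where the IP is blocked (iptables rule or ipset entry)
csf -g 1.2.3.4

# With ipset disabled in CSF, confirm the IP is present in the iptables rules
iptables -L -n | grep 1.2.3.4

# With ipset enabled, look for the IP in the sets instead
ipset list | grep 1.2.3.4

# Check the packet/byte counters - if the DROP rule were matching,
# these should be climbing while the attack is in progress
iptables -vnL | grep 1.2.3.4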

I've confirmed that the chain in iptables has a default policy of DROP. Running csf -r to reload the config does not resolve it.

Restarting the container always resolves it for some unknown period of time; typically the issue returns anywhere from 24 hours to a week later.

Recently I've had good luck resolving it without a reboot by running: systemctl restart network

The *one* major difference between this container and our other, similarly configured containers is that it has a larger number of non-contiguous IPv4 addresses assigned.

While I don't know that this is specifically an OpenVZ problem, it seems plausible, given that a container's entire network stack is emulated / provided by the host node in OpenVZ.

Part of the problem is that we don't even know *when* it begins each time. It could be hours or days before we detect the issue. Does anyone know where best to look to find the source of the issue?
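For example, would the UBC fail counters be the right sort of thing to watch? That's just a guess on my part about where a silent firewall/network failure inside a container might show up:

# Inside the container: a non-zero failcnt (last column) means that
# resource has hit its limit at some point since the container started
cat /proc/user_beancounters

# Narrow it down to the firewall/socket related counters
grep -E 'numiptent|numtcpsock|tcpsndbuf|tcprcvbuf' /proc/user_beancounters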

Thanks in advance for any guidance.

Re: Occasionally iptables blocks simply stop working [message #53742 is a reply to message #53614] Fri, 02 April 2021 00:13
wsap (Member)
I believe I've found the solution to this. Unfortunately I don't know exactly which setting resolved it, and it's a bit perplexing that this would be necessary. Here's everything that was changed; it's all container config values:

PHYSPAGES="3130368:3130368"
SWAPPAGES="0:1048576"
KMEMSIZE="3G:4G"
LOCKEDPAGES="256M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
DCACHESIZE="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"


The astounding part is that the previous values were all numeric equivalents of 'unlimited' (massive integer values), which makes me wonder whether that older notation no longer works properly. They were set that way because the container was migrated from a vz6 node using the ovzmigrate script.
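For anyone wanting to make the same change without hand-editing the container's config file, the equivalent (as far as I know) is to apply the values with vzctl and --save; 1001 below is just a placeholder CTID:

# Applied to the running container and written to its config file
vzctl set 1001 --numiptent unlimited --save
vzctl set 1001 --numtcpsock unlimited --save
vzctl set 1001 --tcpsndbuf unlimited --tcprcvbuf unlimited --save

# Confirm what actually took effect inside the container
vzctl exec 1001 cat /proc/user_beancounters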

The most likely candidates are the TCP*, NUMIPTENT, and NUMTCPSOCK changes. Again, though, these were changed from massive numeric values to 'unlimited', which in real-world usage should have meant the same thing, yet apparently it didn't.