loopback problem on some containers [message #49306] Fri, 19 April 2013 12:48
chut
Messages: 3
Registered: April 2013
Location: Thailand
Junior Member
Can somebody help me?

My VPS host:
HP ProLiant 165G7
2x AMD Opteron 6128
ECC DDR3 4x 4GB Transcend UDIMM
2x WD 2TB Black in software RAID (OS and swap)
1x IBM HBA with 2 dual SAS ports

IBM DS3512 with Dual Controller
6x IBM NL-SAS 2TB in RAID 10 (VPS container data)

All my VPSes run under SolusVM 1.13.03 (VPS control panel).

But some containers have a problem with telnet to the loopback address. I tried rebuilding them with a new OS template, and it happens on both (see the checks after this list):
1. CentOS 5 x86_64 - telnet 127.0.0.1 80 does not work
2. CentOS 6 x86_64 - telnet 127.0.0.1 80 does not work
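
To show what I mean, this is the check I run inside a failing container (CTID 101 is just a placeholder for one of the broken containers; the telnet test can only succeed if something actually listens on port 80):

[root@vpsserver3 ~]# vzctl exec 101 ip addr show lo      # loopback should be UP with 127.0.0.1/8
[root@vpsserver3 ~]# vzctl exec 101 ping -c 1 127.0.0.1  # basic loopback reachability
[root@vpsserver3 ~]# vzctl exec 101 netstat -tln         # confirm there is a listener on :80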

On the VPS host I ran the following diagnostics:
[root@vpsserver3 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              1.7T  2.1G  1.6T   1% /
tmpfs                 7.8G     0  7.8G   0% /dev/shm
/dev/mapper/mpathbp1  493G  132G  337G  29% /backup_SAN
/dev/mapper/mpathbp2   20G  1.1G   18G   6% /var
/dev/mapper/mpathap1  2.0T  278G  1.6T  15% /vz
/dev/mapper/mpathcp1  2.0T  199M  1.9T   1% /vz2
/dev/mapper/mpathbp3  957G  200M  908G   1% /vz3
[root@vpsserver3 ~]#

[root@vpsserver3 ~]# uname -a
Linux vpsserver3.dlthhost.com 2.6.32-042stab076.5 #1 SMP Mon Mar 18 20:41:34 MSK 2013 x86_64 x86_64 x86_64 GNU/Linux

[root@vpsserver3 ~]# nmap -p2086,2087 203.151.45.x6

Starting Nmap 5.51  at 2013-04-19 19:26 ICT
Failed to find device venet0 which was referenced in /proc/net/route
(the line above is repeated 16 times)
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 0.51 seconds
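
My guess is that those venet0 errors mean the host still has container routes in /proc/net/route while the venet0 device itself is gone, so nmap gives up. This is how I would confirm that (just an assumption on my part):

[root@vpsserver3 ~]# ip link show venet0          # does the device exist at all?
[root@vpsserver3 ~]# grep venet0 /proc/net/route  # routes still referencing it
[root@vpsserver3 ~]# service vz status            # venet0 is created by the vz service

If venet0 is really missing, "service vz restart" should recreate it, but that restarts every container, so it needs a maintenance window.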

[root@vpsserver3 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.all.rp_filter = 1
kernel.sysrq = 1
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
error: "net.ipv4.ip_conntrack_max" is an unknown key
kernel.shmall = 4294967296
net.core.netdev_max_backlog = 2048
net.core.dev_weight = 64
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_sack = 0
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_max_orphans = 32768
net.core.optmem_max = 20480
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_default = 16777216
net.core.wmem_max = 16777216
net.core.somaxconn = 500
net.ipv4.tcp_orphan_retries = 1
net.ipv4.tcp_max_tw_buckets = 540000
[root@vpsserver3 ~]#
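
The ip_conntrack_max error is probably a side issue: on 2.6.32-based kernels that sysctl key was renamed, so (if I understand it correctly) the old line in /etc/sysctl.conf just needs updating:

[root@vpsserver3 ~]# modprobe nf_conntrack                  # the key only appears once the module is loaded
[root@vpsserver3 ~]# sysctl net.netfilter.nf_conntrack_max  # new name of the old net.ipv4.ip_conntrack_max

After that, replacing net.ipv4.ip_conntrack_max with net.netfilter.nf_conntrack_max in /etc/sysctl.conf should make the error go away.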

[root@vpsserver3 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr xx:xx:DE:F1:B1:CE
          inet addr:203.151.45.x1  Bcast:203.151.45.255  Mask:255.255.254.0
          inet6 addr: xxxx::xxxx:deff:fef1:b1ce/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:189829860 errors:0 dropped:0 overruns:890 frame:0
          TX packets:241780265 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:48337595152 (45.0 GiB)  TX bytes:174247780127 (162.2 GiB)
          Memory:fea60000-fea80000

eth3      Link encap:Ethernet  HWaddr xx:xx:DE:F1:B1:CD
          inet addr:203.151.45.x2  Bcast:203.151.45.255  Mask:255.255.254.0
          inet6 addr: xxxx::xxxx:deff:fef1:b1cd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4052234 errors:0 dropped:0 overruns:0 frame:0
          TX packets:512 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1258038993 (1.1 GiB)  TX bytes:32220 (31.4 KiB)
          Memory:fe9e0000-fea00000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3987227 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3987227 errors:0 dropped:0 
...
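
The host's lo looks healthy, so I also compare it with the failing containers like this (101 again stands for one of the broken CTIDs; the ifcfg-lo path assumes a stock CentOS template):

[root@vpsserver3 ~]# vzctl exec 101 ifconfig lo                                  # should show UP LOOPBACK RUNNING
[root@vpsserver3 ~]# vzctl exec 101 cat /etc/sysconfig/network-scripts/ifcfg-lo  # lo config shipped by the template
[root@vpsserver3 ~]# vzctl exec 101 service network restart                      # brings lo back up if it was down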
