System Status Page Shows Errors when Port Forwarding to a Container

I have installed Mail-in-a-Box in an LXD container.

I’m port forwarding to the container’s private IP.

Every service seems to work fine, but the system status page fails because the container cannot connect to the public IP address.

I’m not sure if this is something that could or should be fixed in code, or if there is simply another workaround?

You can see how I set up the LXD container here:
http://blog.inetpeople.net/mail-in-a-box-with-lxd-container/

The issue is that the Mail-in-a-Box server (the container) cannot connect to the external IP address. I have managed to configure the host that the container runs on so that it can curl the external IP address (e.g. curl box.mail.me),
but the container itself still cannot.
The box runs with a 10.0.x.x IP, and the server port-forwards to that private IP.

Incoming Mail (SMTP/postfix) is running but is not publicly accessible at PUBLIC_IP:25 (timed out).

# This is the port forwarding:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 -j DNAT --to-destination 10.0.3.218
# This lets me curl from the server to the mail-in-a-box container:
iptables -t nat -A OUTPUT -o lo ! -s 127.0.0.0/8 -p tcp --dport 25 -j DNAT --to-destination 10.0.3.8
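
For anyone debugging a similar setup: a quick way to see whether these DNAT rules are actually matching is to watch the packet counters in the nat table (the commands below are generic; PUBLIC_IP is a placeholder for your own address):

# Show the nat table with packet/byte counters; if the pkts column for a
# DNAT rule stays at 0 while you test, traffic never reached that rule.
iptables -t nat -L PREROUTING -n -v
iptables -t nat -L OUTPUT -n -v

# Generate a test connection to watch the counters move (netcat-openbsd syntax)
nc -vz PUBLIC_IP 25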

I’m struggling with this:

[UFW BLOCK] IN=lxcbr0 OUT= MAC=fe:3a:26:0a:91:64:00:16:3e:b8:f6:24:08:00 SRC=10.0.3.198 DST=128.199.144.187 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=48622 DF PROTO=TCP SPT=40541 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0

Is the only workaround to set up a proxy server?

I think it’s a firewall problem.

I’m interested in running in LXC as well. I’ve followed @efries’ guide and have come to the conclusion that there are a few ways to go about this:

  1. Set up ufw to do hairpinning on the host (a rough sketch of what that might look like follows this list). I wasn’t able to get this to work, but I didn’t have all of my variables isolated, so it could still be worth attempting. (I think this would be the preferred option if it could be scripted to work universally.)
  2. As @efries suggested, having a proxy server set up outside of the host. EDIT: Thinking about this, maybe a proxy server on the host would be sufficient, such that it uses the loopback rules… Sounds a lot like hairpinning, doesn’t it.
  3. Modify mailinabox to recognize split DNS instances (setting an environment variable SPLIT_DNS=1 perhaps). Mailinabox would need to do status checks against the hostname and just have the status screen display a warning that these results aren’t representative of external availability.
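
To make option 1 a bit more concrete: ufw keeps its raw iptables rules in /etc/ufw/before.rules, which is plain iptables-restore syntax, so hairpinning would roughly mean adding a *nat section there. This is only a sketch with placeholder values (PUBLIC_IP, CONTAINER_IP, and the 10.0.3.0/24 bridge subnet), not a tested configuration:

# /etc/ufw/before.rules -- a *nat section added above the existing *filter section
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# Send traffic arriving on the public IP for the mail/web ports into the container
-A PREROUTING -d PUBLIC_IP/32 -p tcp -m multiport --dports 25,80,443,587,993,995 -j DNAT --to-destination CONTAINER_IP
# Hairpin: masquerade traffic from the bridge subnet that loops back to the container
-A POSTROUTING -s 10.0.3.0/24 -d CONTAINER_IP/32 -p tcp -m multiport --dports 25,80,443,587,993,995 -j MASQUERADE
COMMIT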

Disclaimer: I don’t have a whole lot of background in networking (just a class or two in college) and I haven’t dug too deeply into Mail-in-a-Box. Option 3 may be misguided (there’s probably a reason the scripts check against PUBLIC_IP instead of HOSTNAME). Setting up hairpinning seems like the way to go. I appreciate the work @efries did to pave the way.

It seems like this would be an issue for the Docker implementation as well, but I didn’t see any mention of it in those GitHub issues. Having Mail-in-a-Box “own” the server is a big hurdle for me, so it would be great to get container support ironed out soon. :smile:

Hi, thank you for the nice post. I can confirm that there is a problem with checking the system status. My Mail-in-a-Box real/virtual machine has a private IP (192.168.0.x) behind a router/firewall with an external public IP address, so port forwarding and split DNS are involved.

Hey, thank you for pointing me to hairpinning.
I tried this on my box:

*nat
:PREROUTING ACCEPT [0:0]
-A PREROUTING -d 128.199.144.187/32 -p tcp -m multiport --dports 25,53,80,443,587,993,995 -j  DNAT --to-destination 10.0.3.243
-A PREROUTING -i eth0 -p UDP -d 128.199.144.187/32 --dport 53 -j DNAT --to-destination 10.0.3.243:53
-A POSTROUTING -s 10.0.3.0/24 -d 10.0.3.243/32 -p tcp -m multiport --dports 25,53,80,443,587,993,995 -j MASQUERADE
-A POSTROUTING -s 10.0.3.0/24 -d 10.0.3.243/32 -p udp --dport 53 -j MASQUERADE
COMMIT

No other rules seem to be needed.

It seems to be working at least on my box. Can anyone confirm it please?
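
One host-side prerequisite worth mentioning, since it is easy to overlook (it may already be set if LXC networking otherwise works): the DNAT’d traffic only gets forwarded into the container if IP forwarding is enabled on the host.

# Check whether forwarding is enabled (1 = on)
sysctl net.ipv4.ip_forward
# Enable it for the current boot
sudo sysctl -w net.ipv4.ip_forward=1
# On a ufw host it can also be set persistently in /etc/ufw/sysctl.conf
# (net/ipv4/ip_forward=1) so it survives reloads and reboots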


Fantastic! I experimented with MASQUERADE, but only for the LXC subnet’s outbound traffic via eth0.

-A POSTROUTING -s 10.0.3.78/24 -o eth0 -j MASQUERADE

I’ll give your new rules a try tonight. I still haven’t been able to get hairpinning to work (unless tcpdump is dumping…)

I also have a

:POSTROUTING ACCEPT [0:0]

at the start of the file, but I’m not entirely sure what difference it makes, especially since you report it works without it.
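
For what it’s worth, in iptables-restore syntax a line of that form just declares a chain together with its default policy and its packet/byte counters:

# :<chain> <policy> [<packets>:<bytes>]
# The built-in nat chains already exist with policy ACCEPT, so omitting the
# declaration (as in the rules above) should not change behaviour.
:POSTROUTING ACCEPT [0:0]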

OK, I have a new batch of rules and another command.
I had to turn hairpinning on for the bridge (the clue was that tcpdump made everything magically work).
Both names can be found via ifconfig; lxcbr0 is the bridge interface and vethG9BNU8 is the port (I don’t fully get it, but it works):

brctl hairpin lxcbr0 vethG9BNU8 on
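
Since the veth name changes every time the container restarts, hard-coding vethG9BNU8 is fragile. Hairpin mode is also exposed through sysfs, so a small (untested) sketch like this should turn it on for every port currently attached to lxcbr0:

# Enable hairpin mode on every port of the lxcbr0 bridge
for port in /sys/class/net/lxcbr0/brif/*; do
    echo 1 | sudo tee "$port/hairpin_mode" > /dev/null
done
# Verify: each port should now report 1
grep . /sys/class/net/lxcbr0/brif/*/hairpin_mode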

The test cases were:

lxc exec mail -- wget https://google.com 
lxc exec mail -- wget https://mymaildomain.com
wget https://mymaildomain.com

All three finally succeeded with this configuration:

*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# Inspired by @efries
# https://discourse.mailinabox.email/t/system-status-pages-shows-errors-when-port-forward-to-container/470
# http://blog.inetpeople.net/mail-in-a-box-with-lxd-container/
# To parse these rules, check out http://explainshell.com
 
# Test: from the browser, access https://$MYDOMAIN.com
# The following route incoming connections to the relevant ports to the LXC container
-A PREROUTING -d $MY_IP/32 -p tcp -m multiport --dports 25,53,80,443,587,993,995 -j  DNAT --to-destination $LXC_IP
-A PREROUTING -d $MY_IP/32 -p udp --dport 53 -j DNAT --to-destination $LXC_IP
 
# The following route connections from the server to itself 
# Interestingly, with the commented set of rules,
# wget https://127.0.0.1 times out, instead of connection refused
#-A OUTPUT -o lo -p tcp -m multiport --dports 25,53,80,443,587,993,995 -j  DNAT --to-destination $LXC_IP
#-A OUTPUT -o lo -p udp --dport 53 -j DNAT --to-destination $LXC_IP
-A OUTPUT -o lo ! -s 127.0.0.0/8 -p tcp -m multiport --dports 25,53,80,443,587,993,995 -j  DNAT --to-destination $LXC_IP
-A OUTPUT -o lo ! -s 127.0.0.0/8 -p udp --dport 53 -j DNAT --to-destination $LXC_IP
 
# Test: lxc exec mail -- wget https://$MYDOMAIN.com
# The following will "hairpin" connections back from the LXC containers
-A POSTROUTING -s 10.0.3.0/24 -d 10.0.3.0/24 -p tcp -m multiport --dports 25,53,80,443,587,993,995 -j MASQUERADE
-A POSTROUTING -s 10.0.3.0/24 -d 10.0.3.0/24 -p udp --dport 53 -j MASQUERADE
 
# Test: lxc exec mail -- wget https://google.com
# The following will disguise the container's outbound connections as originating from the server
-A POSTROUTING -s 10.0.3.0/24 -o eth0 -j MASQUERADE
 
# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT
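
In case anyone wants to try this verbatim: the block above is iptables-restore syntax, so after substituting real values for $MY_IP and $LXC_IP, one way to apply it is something like the following (the file path is just an example):

# Apply the nat rules from a file containing the *nat ... COMMIT block;
# -n (--noflush) keeps existing rules instead of flushing them first
sudo iptables-restore -n < /etc/lxc-mailinabox-nat.rules

# Alternatively, paste the block into /etc/ufw/before.rules (above its
# *filter section) so ufw re-applies it on every reload and at boot
sudo ufw reload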

I wonder if there’s any more configuration I’ve forgotten about? I also need to repeat the rules for my IPv6 address.
I’m going to wipe the VPS and try loading up Mail-in-a-Box again tonight; hopefully between the two of us, everything needed is documented. @JoshData, would you be interested in merging a PR if I wrote an install script that prepares an LXC host server to run Mail-in-a-Box in a container?

This is all over my head. I couldn’t say until I see/understand what it takes.

@landon:
I will give it a try on a fresh system.

Crossing my fingers, I have a “script” with some manual parts. I’m erasing the Linode one last time tonight to see if I’ve taken enough notes, then I’ll post the “script”.

I also noticed high CPU use on the container I left on overnight. I’ve added some changes that should reduce it.

Here’s the “script”. I was running on an Ubuntu 15.04 Linode. I had to change the kernel w/ the Linode dashboard to Linux 4.0.1 (4.0.3+ will panic from lxd). I’m going to have to let someone with more bash-fu roll with it and fill in the missing bits.

Status page looks green except for some DNS issues unrelated to being in a container :smile:

Also, don’t forget to add

sudo ufw allow ssh

before enabling ufw on the host :slight_smile:
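
While on the subject of ufw on the host: its default forward policy is DROP, so even with the nat rules in place, traffic DNAT’d toward the container can still be dropped in the FORWARD chain. The bluntest fix (a sketch; per-port rules would be tighter) is to change the default before enabling:

# Allow the host to forward traffic so the DNAT'd ports reach the container
sudo sed -i 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw

# Keep SSH reachable, then turn the firewall on
sudo ufw allow ssh
sudo ufw enable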

A day later and CPU use looks decent, although it continuously rises and falls throughout the day: it climbs from 0% to about 20%, temporarily drops to 16%, then starts rising again. I should note that when I got home, eth0 in the container had lost its IPv4 address. Not sure if that was an oversight on my part or an actual issue.