
I have 3 nodes, each with a public and a local IP address:

  • Node A: edge router #1 (10.41.1.0/24)
  • Node B: edge router #2 (10.48.2.0/24)
  • Node C: VM with Debian 12, Docker containers and firewalld (e.g. 172.17.0.1, one of many Docker containers)

Configuration:

  1. I followed the official guide and set up a site-to-site IPsec (strongSwan) connection between Node A and Node B, which works fine.
  2. I followed these instructions to enable ping between nodes A & B (between the EdgeRouters), which works fine.
  3. I followed this guide to set up site-to-site IPsec (strongSwan) connections between nodes C & A and C & B (between Debian and the EdgeRouters), which seem to work (status output below, followed by a sketch of the connection definitions):
user@node-C:~$ ipsec status
Routed Connections:
peer-node-A-tunnel-1{381}:  ROUTED, TUNNEL, reqid 1
peer-node-A-tunnel-1{381}:   172.16.0.0/12 === 10.41.1.0/24
peer-node-B-tunnel-1{2}:  ROUTED, TUNNEL, reqid 2
peer-node-B-tunnel-1{2}:   172.16.0.0/12 === 10.48.2.0/24
Security Associations (2 up, 0 connecting):
peer-node-A-tunnel-1[167]: ESTABLISHED 18 hours ago, public_ip_C[]...public_ip_A[]
peer-node-A-tunnel-1{384}:  INSTALLED, TUNNEL, reqid 1, ESP SPIs: c9b1c091_i c5a71907_o
peer-node-A-tunnel-1{384}:   172.16.0.0/12 === 10.41.1.0/24
peer-node-B-tunnel-1[163]: ESTABLISHED 20 hours ago, public_ip_C[]...public_ip_B[]
peer-node-B-tunnel-1{385}:  INSTALLED, TUNNEL, reqid 2, ESP SPIs: c25a4162_i c0940a2b_o
peer-node-B-tunnel-1{385}:   172.16.0.0/12 === 10.48.2.0/24
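For context, the connection definitions on node C are roughly of the following shape (a sketch reconstructed from the selectors in the status output above, not a verbatim copy; the key exchange and authentication settings are assumptions):

# /etc/ipsec.conf on node C (sketch; peer-node-B-tunnel-1 is analogous,
# with right=public_ip_B and rightsubnet=10.48.2.0/24)
conn peer-node-A-tunnel-1
    # trap policy, matching the "Routed Connections" state above
    auto=route
    type=tunnel
    # assumptions:
    keyexchange=ikev2
    authby=secret
    left=%defaultroute
    leftsubnet=172.16.0.0/12
    right=public_ip_A
    rightsubnet=10.41.1.0/24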

From nodes A and B, I can ping node C, so I assume the IPsec setup is correct.

user@node-A:~$ ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=57.4 ms

user@node-B:~$ ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=32.5 ms

From node C, I can NOT ping A or B:

user@node-C:~$ ping 10.41.1.1 
PING 10.41.1.1 (10.41.1.1) 56(84) bytes of data.
From 85.204.x.y icmp_seq=7 Destination Net Unreachable

user@node-C:~$ ping 10.48.2.1
PING 10.48.2.1 (10.48.2.1) 56(84) bytes of data.
From 85.204.x.y icmp_seq=1 Destination Net Unreachable

It looks like a routing problem, but my understanding is that IPsec does not require a dedicated entry in the routing table, unlike OpenVPN or WireGuard. Even if it did, I do not know what the gateway should be in this case - there is no IPsec/VPN-related interface or address (see the policy check after the routing table below).

user@node-C:~$ ip route show               
default via 194.a.b.c dev ens3 proto dhcp src 194.a.b.d metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.20.0.0/16 dev br-f0cdd0669e2f proto kernel scope link src 172.20.10.128 
172.21.0.0/16 dev br-f871835d2c04 proto kernel scope link src 172.21.10.128 
194.a.b.0/24 dev ens3 proto kernel scope link src 194.a.b.d metric 100 
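If I understand correctly, policy-based IPsec is handled by kernel XFRM policies rather than routes, which would explain why nothing tunnel-related shows up in the table above. Assuming that is right, the installed policies should be visible with:

user@node-C:~$ sudo ip xfrm policy

(which should list the 172.16.0.0/12 <-> 10.41.1.0/24 and 172.16.0.0/12 <-> 10.48.2.0/24 selectors from the status output above).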

I am sure the issue is with the firewalld setup. When I disable it, all pings work, which confirms the IPsec setup is fine.

user@node-C:~$ systemctl stop firewalld.service

user@node-C:~$ ping 10.41.1.1
PING 10.41.1.1 (10.41.1.1) 56(84) bytes of data.
64 bytes from 10.41.1.1: icmp_seq=1 ttl=64 time=58.8 ms

user@node-C:~$ ping 10.48.2.1
PING 10.48.2.1 (10.48.2.1) 56(84) bytes of data.
64 bytes from 10.48.2.1: icmp_seq=1 ttl=64 time=26.0 ms

My current firewalld setup is quite basic: two active zones, zero policies. After reading this answer I thought I had to add something like the rules below, but it did not help - the Destination Net Unreachable errors remain.

# 1. Allow encrypted traffic to IPSec tunnel
iptables -I FORWARD -s 127.0.0.1/8 -d 10.0.0.0/8 -m policy --dir out --pol ipsec -j ACCEPT
# 2. Allow encrypted traffic from IPSec tunnel
iptables -I FORWARD -s 10.0.0.0/8 -d 127.0.0.1/8 -m policy --dir in --pol ipsec -j ACCEPT

I guess I should be using rich rules or firewalld's iptables direct interface, but I cannot figure it out even after going through all of these materials: firewalld.org/documentation/concepts.html, access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/securing_networks/using-and-configuring-firewalld_securing-networks, fedoraproject.org/wiki/Firewalld. What exact settings am I missing?
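
For what it's worth, my best guess at a direct-interface equivalent of the two rules above would be something like this (addresses copied verbatim from my attempt; I am not sure this is even the right translation):

firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -s 127.0.0.1/8 -d 10.0.0.0/8 -m policy --dir out --pol ipsec -j ACCEPT
firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -s 10.0.0.0/8 -d 127.0.0.1/8 -m policy --dir in --pol ipsec -j ACCEPT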

  • Looks like you have a masquerade (NAT) rule enabled. For this to work with IPsec you have to exclude the IPsec traffic from it (see the strongSwan docs). If you don't do that, the packets won't match the IPsec policies anymore after applying the NAT and won't get tunneled.
    – ecdsa
    Commented Jun 17 at 7:47
  • Thank you @ecdsa. Indeed, iptables -L POSTROUTING -t nat shows several MASQUERADE entries created by Docker, and adding iptables -t nat -I POSTROUTING -m policy --pol ipsec --dir out -j ACCEPT as mentioned in the link you provided solved the issue. Would you like to create an official answer? I must admit I am very disappointed that Docker, which claims to be well integrated with firewalld, adds entries directly to iptables. I cannot see them using firewall-cmd --list-rich-rules, firewall-cmd --direct --get-all-rules, or firewall-cmd --direct --get-all-chains. Or am I checking this the wrong way?
    – BCT
    Commented Jun 18 at 14:44
  • @BCT firewalld probably uses nftables behind the scenes, while Docker uses iptables. If iptables (the command-line/userspace program) on your system is the "actual" (legacy) variant, rather than the nft variant (which translates most if not all commands into nftables chains/rules), you end up with both iptables and nftables (both "engines" in kernel space, that is) active on your system.
    – Tom Yan
    Commented Jun 18 at 16:54

1 Answer


If there are MASQUERADE (or SNAT) rules deployed in POSTROUTING, IPsec traffic usually has to be excluded from them as the packets won't match the IPsec policies anymore after the source address is modified. Of course, that's only the case if the NAT's purpose isn't to actually enforce such a match in the first place (like NATing forwarded traffic to a virtual IP on a VPN client).

To exclude traffic that matches an outbound IPsec policy, the following generic rule can be inserted at the top of POSTROUTING:

iptables -t nat -I POSTROUTING -m policy --pol ipsec --dir out -j ACCEPT

The strongSwan docs have some more information on this (also for cases where client traffic should be NATed to the public IP of a server).
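
If the exclusion should also survive reboots and reloads on a firewalld host, one possible approach (a sketch, not verified against this exact setup) is firewalld's direct interface, which hands the arguments through to iptables:

firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -m policy --pol ipsec --dir out -j ACCEPT
firewall-cmd --reload

Since Docker installs its MASQUERADE rules directly in iptables as well, whether the exclusion actually ends up in front of them depends on how the chains are ordered on the particular system; it is worth checking the effective order with iptables -t nat -S (and iptables --version to see whether the legacy or nft variant is in use).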
