Static Egress SNAT IP for Kubernetes Workloads Using Project Antrea — Small Yet Super Useful

Vino Alex
Jul 21, 2021


Project Antrea

One of the exciting features of the latest release of Project Antrea, v1.2.0, is the option to configure an Egress IP pool or a static Egress IP for Kubernetes workloads. The Egress IP feature is a great convenience, especially for use cases where Kubernetes operators need to configure IP-based access control/firewall rules to allow egress traffic to reach services running outside the K8s cluster.

I prefer to go with fewer words and more screenshots in this post, for easy reference.

The demo environment used for the lab is as follows.

Fig:1- The Environment


I used kubeadm with the following Parameters to bootstrap the Cluster.

sudo kubeadm init --apiserver-advertise-address 10.105.18.30 --pod-network-cidr 192.168.100.0/22 --cri-socket /run/containerd/containerd.sock --v=10 --service-cidr 172.30.0.0/16

After provisioning the K8s cluster nodes, deploying the Antrea CNI is a simple task. The following single command deploys Project Antrea v1.2.0 into the cluster.

kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.2.0/antrea.yml

Let me show the default egress network traffic from the Pods, for those new to Kubernetes CNIs and the default Antrea network packet flow behavior.

Create a Namespace to deploy the Test Apps

kubectl create ns antrea-test

I deployed the following application with a custom image containing binaries like `ping` to demonstrate the Egress Network flow.

kubectl create deploy antrea-test-app --image=quay.io/valex/wpcustom:v1 --replicas=2 -n antrea-test

You may find the Pod Status in the following screenshot

Fig:2- Test App Pod Status


To show the default SNAT behavior of an Antrea Pod, I accessed the shell of the Pod running on the first node (referred to as Node-1 in this post) and pinged a remote workstation with the IP address 10.105.18.35.

I used tcpdump to capture the packets at the destination. You can see the Pod traffic SNATed to its Node IP as it leaves the cluster.

Fig:3- Ping test from the Pod running on Node-1


Fig:4- TCP DUMP Output from the Workstation NIC


To clarify it further, I accessed the shell of the Pod running on the second node (referred to as Node-2 in this post) and repeated the ping test.

Fig:5- Ping test from the Pod running on Node-2


Fig:6- TCP DUMP Output from the Workstation NIC


From the tcpdump output, you can see that the Pod IPs are SNATed to their respective Node IPs to reach external network destinations. This is the default network traffic pattern of the Antrea CNI.

In this mode, if you wish to communicate from Kubernetes Pods to an external service outside the cluster, you need to allow traffic from all the cluster Node IPs, provided no node affinity rules are in place. This may create security concerns.

Let us see how the latest Project Antrea Egress feature helps define an SNAT IP pool or a static SNAT IP for the outgoing network traffic from Pods. The versatility of the feature is that it enables K8s operators to selectively assign the SNAT IP based on Pod labels, Namespace labels, or both. Also, the ExternalIPPool resource has a spec field to define the Nodes from which the SNATed traffic originates.

Project Antrea Egress SNAT

In the current distribution, the Egress option is disabled by default: its value is set to false in the default configuration of the Antrea Controller and Agents. You need to change it to true before testing the Egress SNAT feature.

To modify the Antrea Controller and Agent configuration, edit or patch the Antrea ConfigMap in the kube-system Namespace.

You can find the ConfigMap in the kube-system Namespace.

Fig:7-Sample Output


Edit or patch the ConfigMap to uncomment and set the value of Egress to true for both the Antrea Controller and Agent configurations.

Fig:8 Antrea Controller Config. segment


Fig:9 Enable Egress in the Antrea Controller Config


Fig:10 Antrea Agent Config. segment


Fig:11 Enable Egress in the Antrea Agent Config


Please make sure you enable Egress in both the Agent and the Controller configuration segments of the same ConfigMap.
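For reference, the relevant data segments of the Antrea ConfigMap look roughly like the sketch below once the feature gate is enabled (the actual ConfigMap name carries a hash suffix that varies per deployment; always check your own copy before editing):

```yaml
# Sketch of the Antrea ConfigMap data after enabling Egress.
# Both the agent and the controller configuration files carry a
# featureGates map; uncomment the Egress key and set it to true.
antrea-agent.conf: |
  featureGates:
    # Enable the Egress feature on every agent.
    Egress: true
antrea-controller.conf: |
  featureGates:
    # Enable the Egress feature on the controller as well.
    Egress: true
```

Any other settings already present in the two conf files should be left untouched; only the Egress key changes.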

After updating the ConfigMap, you need to delete the existing Antrea Controller and Agent Pods so that new Pods respawn with the latest configuration. You can execute the following command to delete the Antrea Controller and Agent Pods:

kubectl delete -n kube-system pods -l app=antrea

Use Case -1

Assign the static Egress SNAT IP 10.105.18.100 to the Pods with the label app=antrea-test-app. The SNATed traffic should originate from the cluster Nodes with the label network-role=snat-origin.

Step: 1

1.1 Set the K8S Cluster Node Labels

$ kubectl label no k8santrean1 network-role=snat-origin
$ kubectl label no k8santrean2 network-role=snat-origin

1.2 Create an ExternalIPPool manifest. Note the /32 CIDR, which ensures that the pool contains only one IP, and the Node selector specification as per the use case. The Antrea Controller manages the ExternalIPPool; you don't need to assign the IP manually to any Node.

Fig:11 ExternalIPPool

https://github.com/emailtovinod/antrea-snat-demo.git
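In case the screenshot is hard to read, a minimal ExternalIPPool manifest for this use case might look like the sketch below, based on the Antrea v1.2 CRD schema (the resource name snat-ippool is my placeholder; verify the fields against the manifest in the linked repo):

```yaml
apiVersion: crd.antrea.io/v1alpha2
kind: ExternalIPPool
metadata:
  name: snat-ippool              # illustrative name
spec:
  ipRanges:
  - cidr: 10.105.18.100/32       # /32 so the pool holds exactly one IP
  nodeSelector:
    matchLabels:
      network-role: snat-origin  # only Nodes with this label host the SNAT IP
```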

1.3 Create the External IP Pool

kubectl create -f <externalippool_manifest.yaml>

Step: 2

2.1 Prepare an Egress object manifest

Fig:12 Egress Manifest. You may notice the Application Pod Selector as per the use case

https://github.com/emailtovinod/antrea-snat-demo.git
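A minimal Egress manifest matching the use case might look like the following sketch, based on the Antrea v1.2 CRD schema (the resource name static-egress-1 and the referenced pool name snat-ippool are my placeholders; check them against the manifest in the linked repo):

```yaml
apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: static-egress-1          # illustrative name
spec:
  # Select the Pods whose egress traffic should be SNATed.
  appliedTo:
    podSelector:
      matchLabels:
        app: antrea-test-app
  # Allocate the SNAT IP from the ExternalIPPool, referenced by name.
  externalIPPool: snat-ippool    # placeholder pool name
```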

2.2 Create the Egress Rule

kubectl create -f <egress_manifest.yaml>

Step: 3

You can see the ping test results after applying the SNAT rule.

Fig:13 Ping test from the Pod running on Node-1


Fig:14 TCP DUMP Output from the Workstation


Fig:15 Ping test from the Pod running on Node-2

Fig:16 TCP DUMP Output from the Workstation


You can see in the screenshots that in both tests, the network packet source is the static Egress IP 10.105.18.100. This helps the operations team pre-plan and configure specific IPs or ranges of IPs to permit network traffic from application Pods destined for external services deployed outside the cluster.

If you are further curious, SSH into the cluster Nodes and execute the following command to show all IP addresses associated with all network devices. You can see the SNAT IP assigned to an interface named antrea-egress0.

$ ip a

Fig:17 antrea-egress interface in Node-1


Fig:18 antrea-egress interface in Node-2


You may notice an additional virtual interface, antrea-egress0, on all the Nodes. In the demo environment, antrea-egress0 on Node-1 acts as active and owns the SNAT IP. If Node-1 goes into NotReady status, the SNAT IP moves to the antrea-egress0 interface of Node-2, and the SNATed traffic remains intact (I didn't notice any packet drop in the demo environment).

How cool is that?

Use Case -2

Assign the static Egress SNAT IP 10.105.18.101 to the Pods with the label app=staging deployed into a Namespace with the label role=staging. The SNATed traffic can originate from any of the cluster Nodes.

Step: 1

1.1 Label the Namespace

kubectl label ns app-staging role=staging

1.2 Create an ExternalIPPool manifest. Note the /32 CIDR, which ensures that the pool contains only one IP. Also note the {} as the nodeSelector value: it allows Antrea to assign the SNAT IP to any of the Nodes, increasing the redundancy of the egress network flow.

Fig:19 ExternalIPPool Manifest

https://github.com/emailtovinod/antrea-snat-demo.git
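A minimal ExternalIPPool manifest for this use case might look like the sketch below, based on the Antrea v1.2 CRD schema (the resource name staging-snat-pool is my placeholder; verify against the manifest in the linked repo):

```yaml
apiVersion: crd.antrea.io/v1alpha2
kind: ExternalIPPool
metadata:
  name: staging-snat-pool        # illustrative name
spec:
  ipRanges:
  - cidr: 10.105.18.101/32       # /32 so the pool holds exactly one IP
  # Empty selector: any Node in the cluster may host the SNAT IP.
  nodeSelector: {}
```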

1.3 Create the External IP Pool

kubectl create -f <externalippool_manifest.yaml>

2.1 Prepare an Egress object manifest. Note the application Pod selector as per the use case. You can also see that the new Egress uses the newly created ExternalIPPool.

Fig:20 Egress Manifest.

https://github.com/emailtovinod/antrea-snat-demo.git
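An Egress manifest combining the Pod selector and the Namespace selector might look like this sketch, based on the Antrea v1.2 CRD schema (the resource name staging-egress and the referenced pool name staging-snat-pool are my placeholders; check them against the manifest in the linked repo):

```yaml
apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: staging-egress           # illustrative name
spec:
  appliedTo:
    # Both selectors must match: Pods labeled app=staging
    # running in Namespaces labeled role=staging.
    podSelector:
      matchLabels:
        app: staging
    namespaceSelector:
      matchLabels:
        role: staging
  externalIPPool: staging-snat-pool   # placeholder pool name
```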

2.2 Create the Egress Rule

kubectl create -f <egress_manifest.yaml>

Fig:21 Ping test from the Pod running on Node-1


Fig:21 TCP DUMP Output from the Workstation


Fig:22 Ping test from the Pod running on Node-2


Fig:21 TCP DUMP Output from the Workstation


You can see in the screenshots that in both tests, the network packet source is the static Egress IP 10.105.18.101.

For further investigation, if you check the interface antrea-egress0 of Node-1, you can see the SNAT IP 10.105.18.101 currently assigned to it.

Fig:22 antrea-egress interface in Node-1


Summary

Along with its other excellent features, Project Antrea now offers an Egress SNAT IP feature. It helps mitigate security concerns in use cases such as creating ACLs/firewall rules based on specific IPs to allow network traffic from application Pods deployed in a Kubernetes cluster.

Here I demonstrated one of the configuration options: assigning a single static SNAT IP to application Pods selectively. The configuration provides redundancy and avoids complex requirements like keepalived to manage the SNAT IP across a set of K8s cluster Nodes.

You can try Antrea with any of the conformant Kubernetes distributions, including Tanzu Kubernetes Grid, OpenShift, etc.

For further reference: https://antrea.io/

Join the Community : #antrea


Written by Vino Alex

Cloud Evangelist & Cloud-Native Architect
