SUSE Support


Network ingress traffic from 192.168.0.0/16 always SNAT'd in Kubernetes clusters with canal network provider

This document (000020231) is provided subject to the disclaimer at the end of this document.

Situation

Issue

Network ingress traffic to a Kubernetes cluster with the canal network provider, from IP addresses in the range 192.168.0.0/16, is always SNAT'd, even in instances where this is not desired.

For example, on NodePort services configured with externalTrafficPolicy: Local, the client source IP should be preserved without SNAT, per the Kubernetes documentation. With this issue, the source IP is SNAT'd even for such services.
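The affected configuration can be reproduced with a minimal NodePort service; the service name, labels, and ports below are examples only, not taken from any specific cluster:

```shell
# Create a hypothetical NodePort service with externalTrafficPolicy: Local.
# With this policy, kube-proxy only routes traffic to pods on the receiving
# node, and the client source IP should reach the pod unmodified.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
EOF
```

With the issue present, connections to this service from clients in 192.168.0.0/16 arrive at the pod with a node IP as the source address rather than the client address.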

Pre-requisites

  • A Kubernetes cluster provisioned via the RKE CLI or Rancher, using the canal network provider

Root cause

When a cluster is provisioned with the canal network provider selected, Flannel is used for networking and Calico for network policy enforcement; IP address management is therefore handled by Flannel.

The calico-node container in the canal pod is still configured with an (unused) IP pool, which defaults to 192.168.0.0/16. By default, Calico programs iptables rules in the cali-nat-outgoing chain of the nat table on cluster nodes to perform SNAT on traffic from this IP pool. The purpose of these rules is to masquerade egress traffic from pods in clusters where Calico is used for networking (and not just network policy). As a result, in a canal network provider cluster, where the calico-node container is present for network policy enforcement, these rules are still programmed, and any ingress traffic from the range 192.168.0.0/16 will match them and be SNAT'd.
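The rules in question can be observed directly on a cluster node; exact rule text and ipset names vary by Calico version, so treat this as a sketch of what to look for rather than authoritative output:

```shell
# Run as root on a cluster node. List the NAT chain Calico programs for
# outgoing masquerade; a MASQUERADE rule keyed to the IP pool (via an ipset
# holding 192.168.0.0/16) indicates matching traffic will be SNAT'd.
iptables -t nat -L cali-nat-outgoing -n -v

# The chain is reached from POSTROUTING via Calico's cali-POSTROUTING chain:
iptables -t nat -L POSTROUTING -n -v | grep cali
```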

Resolution

The permanent solution to prevent this issue is to update the RKE deployment templates for the canal daemonset to set the environment variable CALICO_IPV4POOL_NAT_OUTGOING to 0 for the calico-node container. This prevents programming of the problematic cali-nat-outgoing iptables rules and is tracked in Rancher Issue #20500.
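Until the templates are updated, the same environment variable can be set manually on the daemonset; this sketch assumes the canal daemonset lives in the kube-system namespace, as is typical for RKE clusters. Note that this variable influences IP pool creation, so on a cluster whose pool already exists, the ippool workaround below is still required:

```shell
# Set CALICO_IPV4POOL_NAT_OUTGOING=0 on the calico-node container of the
# canal daemonset (assumption: daemonset name "canal" in kube-system).
kubectl -n kube-system set env daemonset/canal -c calico-node \
  CALICO_IPV4POOL_NAT_OUTGOING=0
```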

To work around the issue in existing clusters, the Calico ippool configuration can be edited to disable outgoing NAT, which removes the cali-nat-outgoing iptables rules. To implement this workaround, run kubectl against the affected cluster to edit the default-ipv4-ippool object: kubectl edit ippools default-ipv4-ippool. Change the line natOutgoing: true to natOutgoing: false and save the change. Calico will detect the configuration update and remove the cali-nat-outgoing iptables rules.
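For scripted environments, the same edit can be applied non-interactively; this assumes the default pool name default-ipv4-ippool as created by calico-node:

```shell
# Disable outgoing NAT on the default IP pool without an interactive editor.
kubectl patch ippool default-ipv4-ippool --type=merge \
  -p '{"spec":{"natOutgoing":false}}'

# Confirm the setting took effect:
kubectl get ippool default-ipv4-ippool -o jsonpath='{.spec.natOutgoing}'
```

After the change, the cali-nat-outgoing rules should disappear from the nat table on each node within a short interval.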

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 000020231
  • Creation Date: 06-May-2021
  • Modified Date: 06-May-2021
  • SUSE Rancher

