How to set up your network CIDR for a large cluster
This document (000020167) is provided subject to the disclaimer at the end of this document.
If you are expecting to use Rancher to deploy a Kubernetes cluster with more than 256 nodes, you'll need to make sure you adjust the default cluster CIDR settings. The default settings only allow clusters of 256 nodes or fewer.
- Rancher v2.x
- A lot of hardware or VMs!
Kubernetes provides each pod with an IP address and each node with a block of IP addresses. Each cluster is also provided a block of IP addresses that is distributed to each node.
This is controlled by two settings: the cluster_cidr block and the node-cidr-mask-size. By default, the cluster_cidr block is 10.42.0.0/16 and the node-cidr-mask-size is 24. This gives the cluster 256 /24 networks to distribute out to the pool of nodes. For example, node1 will get 10.42.0.0/24, node2 will get 10.42.1.0/24, node3 will get 10.42.2.0/24, and so on.
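The block arithmetic above can be checked with Python's standard ipaddress module (an illustrative sketch, not part of Rancher or Kubernetes tooling):

```python
import ipaddress

# The default cluster CIDR and per-node mask size described above.
cluster_cidr = ipaddress.ip_network("10.42.0.0/16")
node_mask = 24

# Enumerate the per-node /24 blocks carved out of the cluster CIDR.
node_blocks = list(cluster_cidr.subnets(new_prefix=node_mask))

print(len(node_blocks))   # 256 blocks, so at most 256 nodes
print(node_blocks[0])     # 10.42.0.0/24 (node1)
print(node_blocks[1])     # 10.42.1.0/24 (node2)
print(node_blocks[2])     # 10.42.2.0/24 (node3)
```

In general, a cluster with a /P cluster_cidr and node-cidr-mask-size M can hold 2^(M - P) nodes.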
To support more than 256 nodes, you will need to use a larger cluster_cidr block, a larger node-cidr-mask-size (which gives each node a smaller block of pod IPs), or adjust both. For example, if you want to support up to 512 nodes you can set:
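One way to do this (a sketch using the RKE cluster-config keys from this article; only the changed setting is shown) is to widen the cluster_cidr to a /15 while keeping the default /24 node mask:

```yaml
# Sketch: 10.40.0.0/15 with the default node-cidr-mask-size of 24
# yields 2^(24-15) = 512 per-node /24 blocks.
rancher_kubernetes_engine_config:
  services:
    kube-controller:
      cluster_cidr: 10.40.0.0/15
```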
To support up to 1024 nodes, you can use an even larger cluster_cidr block, a larger node-cidr-mask-size, or a combination of both:
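As a sketch (again using the RKE cluster-config keys from this article), widening the cluster_cidr to a /14 while keeping the default /24 node mask gives 1024 node blocks:

```yaml
# Sketch: a /14 cluster_cidr with the default node-cidr-mask-size of 24
# yields 2^(24-14) = 1024 per-node /24 blocks.
# Note: /14 networks align on multiples of 4 in the second octet, so some
# tools will normalize 10.38.0.0/14 to 10.36.0.0/14.
rancher_kubernetes_engine_config:
  services:
    kube-controller:
      cluster_cidr: 10.38.0.0/14
```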
You should be aware of the following caveats when specifying your cluster_cidr and node-cidr-mask-size:
- Make sure you don't set your cluster_cidr to overlap with the default cluster service network of 10.43.0.0/16. That's why the examples above use 10.40.0.0/15 and 10.38.0.0/14; a CIDR of 10.42.0.0/15 would clash with the default cluster service CIDR.
- Make sure you don't set your cluster_cidr to overlap with IP address ranges already used in your enterprise infrastructure, such as your node IPs, firewalls, load balancers, DNS, or other internal networks.
- Make sure your node-cidr-mask-size leaves each node a block large enough for the number of pods you want to run on it. A size of 24 gives roughly 250 usable IP addresses per node, comfortably above the default 110-pod-per-node maximum, while a size of 26 gives only about 60, which is below that maximum. If you plan to raise the default pod-per-node limit beyond 110, make sure your node-cidr-mask-size is able to support it. Note that pods with hostNetwork: true do not count toward this total.
- Set it right the first time! Once your cluster has been deployed, these values cannot be changed. You'll need to decommission your cluster and start over if you don't set them correctly.
- As of v1.17, Kubernetes supports clusters up to 5000 nodes. If you plan to go beyond this, you're venturing into unknown territory. For the latest large cluster best practices, see https://kubernetes.io/docs/setup/best-practices/cluster-large/
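The overlap and pod-capacity caveats above can be sanity-checked before deploying. This is an illustrative Python sketch (not Rancher tooling); the exact number of addresses a CNI plugin reserves per node varies, so the capacity function is a rough estimate:

```python
import ipaddress

# Caveat: a candidate cluster_cidr must not overlap the default
# cluster service network of 10.43.0.0/16.
service_cidr = ipaddress.ip_network("10.43.0.0/16")
print(ipaddress.ip_network("10.42.0.0/15").overlaps(service_cidr))  # True: clashes
print(ipaddress.ip_network("10.40.0.0/15").overlaps(service_cidr))  # False: safe

# Caveat: rough usable pod IPs per node for a given node-cidr-mask-size
# (total addresses minus two reserved, as in a classic subnet; the exact
# reservation depends on the CNI plugin).
def usable_pod_ips(node_cidr_mask_size: int) -> int:
    return 2 ** (32 - node_cidr_mask_size) - 2

print(usable_pod_ips(24))  # 254: above the default 110-pods-per-node limit
print(usable_pod_ips(26))  # 62: below the default limit
```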
Setting these values can be done when first creating the cluster. You'll need to click on the Edit as YAML button and merge in the following YAML:
rancher_kubernetes_engine_config:
  services:
    kube-controller:
      cluster_cidr: 10.40.0.0/15
      extra_args:
        node-cidr-mask-size: 25

The above configuration should allow you to have about 120 pods per node and 1024 nodes in your cluster. That's over 100,000 pods!
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 000020167
- Creation Date: 06-May-2021
- Modified Date: 06-May-2021
- SUSE Rancher
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com