How to run multiple ingress controllers

This document (000020160) is provided subject to the disclaimer at the end of this document.


Why use multiple ingress controllers?

With large numbers of ingresses and related workloads, a single ingress controller can become a bottleneck in both throughput and reliability. In these scenarios, it is recommended to shard ingresses across multiple ingress controllers.


Requirements

  • A Kubernetes cluster created by Rancher v2.x or RKE
  • A Linux cluster; Windows is not currently supported
  • Helm installed and configured


At a high level, the process for sharding ingresses is to build out one or more extra ingress controllers and logically separate your ingresses to evenly split the load between your ingress controllers. This separation is handled through annotations on the ingresses. When an nginx-ingress-controller pod starts up with an ingressClass set, it will only try to satisfy ingresses that are annotated with the same ingressClass. This allows you to run as many ingress-controllers as needed to satisfy your ingress needs.
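As a sketch of this separation, the following writes two illustrative ingresses: one left to the default controller (ingress class nginx) and one annotated for a second controller with ingress class ingress-class-2. Hostnames, service names, and ports are placeholders, and the networking.k8s.io/v1beta1 API version is assumed for clusters of this era.

```shell
# Two example ingresses split across two controllers via the
# kubernetes.io/ingress.class annotation. All names are illustrative.
cat > /tmp/sharded-ingresses.yaml <<'EOF'
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-one
  annotations:
    kubernetes.io/ingress.class: "nginx"           # default controller
spec:
  rules:
  - host: app-one.example.com
    http:
      paths:
      - backend:
          serviceName: app-one
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-two
  annotations:
    kubernetes.io/ingress.class: "ingress-class-2" # second controller
spec:
  rules:
  - host: app-two.example.com
    http:
      paths:
      - backend:
          serviceName: app-two
          servicePort: 80
EOF
# Apply with: kubectl apply -f /tmp/sharded-ingresses.yaml
```

Each controller then satisfies only the ingresses carrying its own class, so the two hostnames can be pointed at different controllers in DNS or on an external load balancer.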

Creating extra nginx-ingress-controller charts

It is recommended to use the community nginx-ingress helm chart to install the extra ingress-controllers with NodePort services.

This deployment method allows you to run multiple ingress controllers on a single node, as there are no conflicting ports. You are required to route traffic to the correct ingress controller ports through an external load balancer.

Deploy a second default backend and ingress-controller from the nginx-ingress helm chart with the following values:

  • controller.ingressClass - a unique name for the ingress class, such as ingress-class-2
  • controller.service.type=NodePort
  • controller.service.nodePorts.http - the NodePort (30000-32767) you want to expose for HTTP traffic. Optional; if not defined, one is randomly assigned
  • controller.service.nodePorts.https - the NodePort (30000-32767) you want to expose for HTTPS traffic. Optional; if not defined, one is randomly assigned

For more configuration options, see the chart readme.
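The same values can also be captured in a values file passed to helm with -f instead of --set flags. A minimal sketch, assuming the chart's documented keys; the NodePort numbers here are arbitrary picks from the valid range:

```shell
# Write a values file equivalent to the --set flags used later in this
# document; 30080/30443 are example NodePorts, not required values.
cat > /tmp/nginx-ingress-second-values.yaml <<'EOF'
controller:
  ingressClass: ingress-class-2
  kind: DaemonSet
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443
EOF
# Install with:
#   helm install nginx-ingress-second -n ingress-nginx stable/nginx-ingress \
#     -f /tmp/nginx-ingress-second-values.yaml
```

A values file keeps the configuration under version control and avoids long --set chains.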

An example daemonset install would be:

helm repo add stable https://charts.helm.sh/stable
helm install nginx-ingress-second -n ingress-nginx stable/nginx-ingress \
  --set controller.ingressClass="ingress-class-2" \
  --set controller.service.type=NodePort \
  --set controller.kind=DaemonSet

This will create an nginx-ingress daemonset and NodePort service. This ingress controller will handle any ingress annotated with kubernetes.io/ingress.class: "ingress-class-2".

Sharding Ingresses

It is recommended to shard (split) your ingresses in a way that evenly splits load and configuration size between ingress controllers.

Sharding in this way does mean changing DNS and ingress hosts so that traffic for each ingress is sent to the correct ingress controller, typically through an external load balancer.

The process for sharding ingresses is to annotate each ingress with the ingressClass of the ingress controller you want to route it through. For example:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-1-ingress
  annotations:
    kubernetes.io/ingress.class: "ingress-class-2"

Once annotated with an ingressClass, these ingresses are now only handled by the ingress-controller that has that ingressClass.

In the default configuration, the Rancher-provided nginx-ingress-controller will only handle ingresses that either have the default ingress.class annotation of nginx or do not have an ingress.class annotation at all.

Next steps

From here it is just a matter of ensuring that the traffic for each ingress is routed to the correct nodePort on the nodes that the daemonset runs on.

If you did not specify a nodePort when deploying the chart, you can determine the nodePort that was assigned by checking the service created:

$ kubectl describe svc -n ingress-nginx nginx-ingress-second-controller
Name:                     nginx-ingress-second-controller
Namespace:                ingress-nginx
Labels:                   app=nginx-ingress
Selector:                 app=nginx-ingress,release=nginx-ingress-second
Type:                     NodePort
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30155/TCP
Endpoints:                <none>
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30636/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

In this example, the service is exposed on every node on port 30155 for HTTP and port 30636 for HTTPS.
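As an illustration of the external load balancer piece, an HAProxy configuration could forward traffic for this shard to those NodePorts on each node. A minimal sketch, assuming two nodes at 10.0.0.11 and 10.0.0.12 (placeholder addresses) and the ports from the example above; HTTPS is passed through unterminated in TCP mode:

```shell
# Write an example haproxy.cfg fragment routing this shard's traffic to
# the NodePorts shown above. Node IPs are placeholders.
cat > /tmp/haproxy-ingress-shard.cfg <<'EOF'
frontend ingress_class_2_http
    bind *:80
    default_backend ingress_class_2_http_nodes

backend ingress_class_2_http_nodes
    balance roundrobin
    server node1 10.0.0.11:30155 check
    server node2 10.0.0.12:30155 check

frontend ingress_class_2_https
    mode tcp
    bind *:443
    default_backend ingress_class_2_https_nodes

backend ingress_class_2_https_nodes
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:30636 check
    server node2 10.0.0.12:30636 check
EOF
```

Each additional ingress controller would get its own frontend/backend pair (or its own load balancer address) pointing at that controller's NodePorts.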


This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 000020160
  • Creation Date: 06-May-2021
  • Modified Date: 06-May-2021
    • SUSE Rancher
