Resolving Uneven Pod Distribution Using Anti-Affinity


This document (000022017) is provided subject to the disclaimer at the end of this document.

Environment

  • RKE2
  • K3S

Situation

When a large number of pods are deployed in a Kubernetes cluster, it's common for them to be packed onto a few nodes, leaving other nodes with very little workload. This creates a situation where some nodes are overloaded, while others are underutilized, leading to performance problems and a higher risk of service disruption if an overloaded node fails.
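
To see how the pods are currently distributed, the number of pods scheduled on each node can be counted, for example with a command such as the following (the exact output will vary by cluster):

kubectl get pods -A -o custom-columns=NODE:.spec.nodeName --no-headers | sort | uniq -c

A heavily skewed count per node confirms the uneven distribution described above.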

Resolution


There are multiple ways to control pod placement, and pod anti-affinity is one of the most effective for solving this problem. Anti-affinity gives the scheduler a rule to follow: it tells the scheduler to actively avoid placing a new pod on a node that already runs a pod of the same type.

Here is a simple example of how to use anti-affinity in a Kubernetes Deployment. The configuration ensures that all pods of the same application are spread across different nodes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app-container
        image: nginx:latest
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - example-app
            topologyKey: "kubernetes.io/hostname"

In this file:

  • podAntiAffinity: This is the rule for repelling pods.

  • requiredDuringSchedulingIgnoredDuringExecution: This ensures the rule must be followed for the pod to be scheduled.

  • labelSelector: The scheduler looks for any pod with the label app: example-app.

  • topologyKey: "kubernetes.io/hostname": This tells the scheduler to treat each individual node as a separate location, which ensures that no two pods with the label app: example-app are placed on the same node.

By using this configuration, an application with three replicas will be automatically placed on three separate nodes, which distributes the workload and improves overall reliability.
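
To confirm the spread after applying the Deployment above, the node assigned to each replica can be listed, for example with:

kubectl get pods -l app=example-app -o wide

Note that requiredDuringSchedulingIgnoredDuringExecution is a hard rule: if there are more replicas than schedulable nodes, the extra replicas will remain in Pending. When spreading is desirable but not mandatory, the softer preferredDuringSchedulingIgnoredDuringExecution variant of pod anti-affinity can be used instead.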

Cause

The default Kubernetes scheduler is designed to find a suitable node for each new pod as it is created. It focuses on finding a node with enough available resources (such as CPU and memory), but it does not prioritise distributing pods evenly across all available nodes.

Additional Information

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 000022017
  • Creation Date: 28-Aug-2025
  • Modified Date: 01-Sep-2025
    • SUSE Rancher
