RKE2 Is CNCF Certified Kubernetes AI Conformant


RKE2 has achieved CNCF Certified Kubernetes AI Conformance, the community-defined standard for running AI and machine learning workloads reliably on Kubernetes. At KubeCon EU 2026 in Amsterdam, SUSE is putting this milestone in the spotlight. This post covers what that certification means, why it matters for teams building AI infrastructure today, and how it connects to RKE2’s position as a security-hardened foundation for production AI.

Moving AI workloads into production is no longer an experimental exercise for most organizations. According to Linux Foundation Research, 82% of organizations are building custom AI solutions and 58% already run them on Kubernetes. For platform teams, that means Kubernetes isn’t just an application platform anymore. It’s the foundation for GPU scheduling, distributed training, and inference at scale. The question isn’t whether to run AI on Kubernetes. It’s whether your Kubernetes distribution can handle it reliably and consistently. RKE2 now has an independent answer to that question.

AI infrastructure decisions deserve external validation

Choosing a Kubernetes distribution for AI workloads carries real risk. GPU scheduling, high-performance networking, and distributed training frameworks stress clusters in ways that general-purpose workloads simply don’t. Without a shared standard, platform teams have had to evaluate each distribution’s AI readiness on their own, often discovering gaps only after things break in production.

The CNCF Certified Kubernetes AI Conformance Program was created to address exactly this. It establishes open, community-defined standards for running AI workloads on Kubernetes, reducing fragmentation and giving enterprises confidence that certified platforms behave consistently and predictably.

RKE2 achieving this certification means that confidence now comes with independent validation, not just SUSE’s word for it.

What the certification actually validates for your cluster

The program requires platforms to demonstrate capabilities across accelerator support, networking, scheduling, observability, security, and operator support. For RKE2, certification confirms that the distribution reliably handles GPU device plugin integration, gang scheduling for distributed training jobs, high-performance networking requirements for multi-node workloads, and support for widely used AI frameworks including PyTorch and TensorFlow.
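To make the device plugin point concrete: workloads request GPUs through the extended resource that the plugin advertises on each node. The manifest below is a minimal sketch, assuming the NVIDIA device plugin is already deployed in the cluster; the pod name and image are illustrative, not prescribed by RKE2.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: pytorch/pytorch:latest # any CUDA-enabled image works here
    command: ["python", "-c", "import torch; print(torch.cuda.is_available())"]
    resources:
      limits:
        nvidia.com/gpu: 1         # extended resource advertised by the NVIDIA device plugin
```

If the plugin is healthy on every GPU node, this pod schedules onto one of them and prints `True`; a pod stuck in `Pending` with an "insufficient nvidia.com/gpu" event is exactly the kind of inconsistent GPU visibility the conformance checks are meant to surface.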

These aren’t checkboxes. They map directly to the failure modes platform teams encounter when scaling AI from a proof of concept to a production fleet. Gang scheduling failures break distributed training runs. Inconsistent GPU visibility across nodes causes silent workload failures. RKE2’s conformance certification means the foundation your AI infrastructure runs on has been validated against the community standard for all of these.
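Gang scheduling itself is typically supplied by a batch scheduler running on top of the distribution rather than by Kubernetes' default scheduler. As one hedged illustration, assuming the Volcano scheduler has been installed in the cluster (it is not bundled with RKE2), a PodGroup holds every replica of a training job until all of them can start together:

```yaml
apiVersion: scheduling.volcano.sh/v1beta1
kind: PodGroup
metadata:
  name: dist-train        # hypothetical name
spec:
  minMember: 3            # all-or-nothing: schedule only when all 3 worker pods fit
```

Worker pods then opt in by setting `schedulerName: volcano` and referencing the group via the `scheduling.volcano.sh/group-name` annotation, so a partial placement (two of three workers running, the third starved) can't occur.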

RKE2 brings its security-hardened defaults into this picture as well. CIS benchmark compliance, mandatory access controls, and a minimal attack surface come built in, so teams don’t have to choose between a platform optimized for AI workloads and one that meets enterprise security requirements.
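Those hardened defaults are activated through RKE2's own configuration file rather than bolted on afterwards. A minimal sketch, assuming a recent RKE2 release (the accepted profile values vary by version, so check the docs for yours):

```yaml
# /etc/rancher/rke2/config.yaml
profile: cis    # applies CIS-hardened component arguments and host prerequisite checks
```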

Available now as part of RKE2 General Availability

CNCF AI Conformance for RKE2 is available now. There’s no separate configuration or add-on required. Any team running RKE2 in their environment is already running on a CNCF Certified Kubernetes AI Conformant distribution.

For teams evaluating Kubernetes distributions for new AI infrastructure projects, this certification is a direct signal that RKE2 meets the community standard for production AI workloads. For teams already running RKE2, it’s external validation of the platform decision you’ve already made.

See it in production at KubeCon EU 2026

If your team is building or expanding AI infrastructure on Kubernetes and you want to understand what CNCF AI conformance means for your specific workloads, visit the SUSE booth at KubeCon EU 2026 in Amsterdam. The team can walk through GPU scheduling, distributed training support, and the security defaults that come with RKE2 in a live environment.

Come see us at KubeCon EU in Amsterdam.
Visit the SUSE booth, join our sessions, and experience firsthand what an AI-native cloud native platform can do for your organization.

For the latest updates, visit suse.com/kubecon or connect with a SUSE expert to explore what’s possible for your organization.

Emina Cosic is a Senior Product Manager leading provisioning, lifecycle management, and Kubernetes distributions for Rancher.