SUSE Rancher Prime Launches First Agentic AI Ecosystem with MCP Plug and Play — KubeCon EU 2026


At KubeCon + CloudNativeCon Europe 2026, we’re evolving SUSE Rancher Prime from a management tool into the industry’s first context-aware Agentic AI Ecosystem.
This single, integrated platform experience unifies AI operations, virtualization, security, observability, developer tooling, and third-party tools.

Instead of stitching together disconnected tools with different licenses and workflows, teams run everything on one platform. This cuts complexity, speeds up time to value, lowers cost, and keeps operations consistent from data center to edge.

Every announcement at KubeCon is built around one idea: the platform gets stronger without adding complexity.

In the latest release of SUSE Rancher Prime and updates across the portfolio, we are highlighting three areas: AI-enhanced operations, enterprise virtualization, and trusted developer lifecycles. One platform delivers compounding value that separate tools cannot match, and this release shows how.

AI-Enhanced Operations: From a Single Assistant to an Agentic Crew

SUSE Rancher Prime is now an AI-native platform where intelligence isn’t bolted on as an afterthought but woven into the fabric of operations. Because AI is built into the platform itself, every product in the SUSE Cloud Native portfolio benefits automatically, without requiring separate AI tooling or custom integrations.

SUSE Rancher Prime AI Assistant is an Entire Agentic SRE Crew

SUSE’s Rancher AI Assistant, Liz, is no longer the platform’s only context-aware infrastructure expert. With this release, Liz expands into a “Crew” of specialized agents: dedicated experts for Linux, Observability, Security, Provisioning, and Fleet management, all operating directly within the SUSE Rancher Prime experience. An intelligent routing layer directs every prompt to the right expert automatically, so platform engineers get precise, domain-specific answers without context bleed or manual agent selection.
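To make the routing idea concrete, here is a minimal sketch of how a prompt-routing layer can dispatch to domain agents. The agent names and keyword sets are hypothetical illustrations, not SUSE's implementation, which uses far richer context than keyword matching:

```python
# Hypothetical sketch of an agent-routing layer: score each prompt against
# per-domain keyword sets and dispatch to the best-matching agent.
# Agent names and keywords are illustrative only.

AGENT_KEYWORDS = {
    "observability": {"latency", "metrics", "logs", "trace", "alert"},
    "security": {"cve", "policy", "rbac", "vulnerability", "admission"},
    "provisioning": {"node", "cluster", "provision", "upgrade", "scale"},
    "fleet": {"gitops", "bundle", "rollout", "fleet", "repo"},
    "linux": {"kernel", "systemd", "package", "sles", "patch"},
}

def route(prompt: str) -> str:
    """Return the agent whose keyword set best matches the prompt."""
    words = set(prompt.lower().split())
    scores = {agent: len(words & kw) for agent, kw in AGENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a general assistant when no domain matches.
    return best if scores[best] > 0 else "general"

print(route("why did the cluster upgrade fail on node 3?"))  # provisioning
```

The key property this models is that the caller never picks an agent by hand; the router owns that decision, which is what keeps domain answers free of context bleed.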

It’s an embedded SRE team that never sleeps. When a cluster fails, the Observability and Provisioning agents collaborate autonomously to correlate logs with infrastructure changes. When a security policy drifts, the Security agent flags it in context. Agents can also complete tasks with human approval. Teams get dramatically faster mean time to recovery (MTTR), reduced cognitive load, and a level of cross-domain intelligence no single chatbot can match. For organizations struggling to hire and retain specialized SRE talent, that translates directly into lower staffing pressure and more consistent operational outcomes.

And with new support for external Model Context Protocol (MCP) servers, enterprises can extend the crew into their own tools and data sources, from Atlassian to internal architecture docs, building toward the largest open agentic ecosystem in cloud-native infrastructure. Read more about the Rancher AI Crew → 

SUSE Rancher Prime’s AI Crew connects to your external data sources via MCP

SUSE Rancher Prime’s AI crew just got extensible. Through the Model Context Protocol (MCP), customers can now “plug in” external services like a proprietary CMDB, a custom security scanner, or a third-party monitoring tool, and turn them into full AI Crew members. No custom integration code. No glue scripts. Just declare the connection, and your tools become intelligent agents inside Rancher.
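MCP itself is built on JSON-RPC 2.0, with tools discovered via `tools/list` and invoked via `tools/call`. As a rough illustration of the message shape an external tool sees, here is a toy dispatcher; the `cmdb_lookup` tool and its inventory are hypothetical stand-ins for whatever a proprietary MCP server would expose:

```python
import json

# MCP is built on JSON-RPC 2.0: clients discover tools via "tools/list" and
# invoke them via "tools/call". The CMDB tool below is a hypothetical
# stand-in for an external MCP server's capability.

def cmdb_lookup(hostname: str) -> dict:
    # Toy stand-in for a proprietary CMDB query.
    inventory = {"db-01": {"owner": "payments", "env": "prod"}}
    return inventory.get(hostname, {})

TOOLS = {"cmdb_lookup": cmdb_lookup}

def handle(request: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call' request to a registered tool."""
    req = json.loads(request)
    tool = TOOLS[req["params"]["name"]]
    result = tool(**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "cmdb_lookup", "arguments": {"hostname": "db-01"}},
})
print(handle(request))
```

Because every MCP server speaks this same protocol, the AI Crew can treat a CMDB, a scanner, or a monitoring tool uniformly, which is what makes the "no glue scripts" claim possible.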

Rather than forcing organizations to rip and replace their existing tooling, SUSE Rancher Prime becomes the central brain that connects everything. We’re building the industry’s largest cloud native agentic ecosystem, and MCP is the foundation. Organizations protect their existing investments while gaining AI-driven automation across their entire stack, not just the SUSE parts. Read more on expanding Rancher with MCP →

Turn Siloed GPUs into a Self-Service AI Factory with Virtual Cluster GPU Multitenancy and Developer Workflows

AI workloads demand GPU resources, but GPUs are expensive and often siloed. SUSE Rancher Prime now enables Virtual Cluster GPU Multi-Tenancy via K3k, providing each tenant with a fully isolated Kubernetes control plane on shared GPU infrastructure. Automated quota management ensures every team gets its fair share of compute without the risk of a single experiment monopolizing the fleet.
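The fair-share guarantee works like a per-tenant hard cap, in the same spirit as a Kubernetes ResourceQuota limiting `requests.nvidia.com/gpu` in a namespace. A minimal sketch of that admission logic, with hypothetical tenant names and quota values:

```python
# Illustrative quota enforcer for shared GPU capacity: each tenant's virtual
# cluster gets a hard cap, mirroring how a Kubernetes ResourceQuota limits
# requests.nvidia.com/gpu per namespace. Tenants and caps are hypothetical.

QUOTAS = {"team-a": 4, "team-b": 2}   # max GPUs per tenant
usage = {"team-a": 0, "team-b": 0}    # GPUs currently allocated

def request_gpus(tenant: str, count: int) -> bool:
    """Grant the request only if it stays within the tenant's quota."""
    if usage[tenant] + count > QUOTAS[tenant]:
        return False  # would exceed fair share; reject to protect the fleet
    usage[tenant] += count
    return True

print(request_gpus("team-a", 3))  # True: within the 4-GPU quota
print(request_gpus("team-a", 2))  # False: would exceed the 4-GPU cap
```

Rejecting at admission time, rather than letting workloads contend for GPUs at runtime, is what prevents a single experiment from monopolizing the fleet.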

Organizations can transform expensive, fragmented GPU hardware into a high-velocity, multitenant AI factory, maximizing ROI through high-density resource sharing while maintaining the isolation and security enterprise teams require. There’s no separate GPU orchestration product to buy or manage. Teams provision GPU-enabled environments through the same Rancher interface they already use for everything else, reducing operational complexity and accelerating time to first AI workload. Read more on Virtual Cluster GPU Multi-Tenancy →

Remediate Without Hallucinations Using Time-Traveling Topology with SUSE Observability’s AI Agent and 35+ MCPs

SUSE Observability now features AI-assisted automated triaging, connecting directly to the Crew’s intelligence to surface root causes faster and guide SREs toward resolution before incidents escalate. Observability and AI share context natively, so triaging is informed by the full picture of your environment rather than isolated telemetry from a single product. The result is shorter outages, fewer escalations, and less time spent on manual investigation. Read more about AI-Assisted Triaging in SUSE Observability →

SUSE Virtualization — Enterprise-Ready, VMware-Free

For organizations modernizing away from legacy virtualization platforms, SUSE Virtualization continues to close every remaining gap with features that enterprise teams have been asking for. And because SUSE Virtualization is part of the broader Rancher platform, customers get unified management of both VMs and containers through a single pane of glass, eliminating the operational overhead of running separate virtualization and container stacks.

Unlock AI Efficiency with NVIDIA MIG vGPU Multitenancy on Kubernetes with SUSE Virtualization

NVIDIA Multi-Instance GPU (MIG) vGPU support is now generally available. Harvester automatically detects GPUs that support MIG-based partitioning, turning a single physical GPU into multiple independent, hardware-isolated instances. Each VM gets dedicated streaming multiprocessors and memory bandwidth, eliminating noisy neighbor problems and performance degradation. For organizations running AI inference, data analytics, or MLOps workloads, MIG support means a single GPU can serve multiple teams simultaneously with guaranteed performance, dramatically improving the return on what’s often the most expensive hardware in the data center. Explore NVIDIA MIG vGPU support in detail →
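The capacity math behind MIG is worth seeing: an NVIDIA A100 40GB exposes 7 compute slices, and each MIG profile consumes a fixed number of them. A simplified sketch of the fit check (real MIG placement has additional memory-slice and placement constraints beyond pure slice counting):

```python
# Sketch of MIG capacity math on an NVIDIA A100 40GB: the GPU exposes 7
# compute slices, and each MIG profile consumes a fixed number of them.
# The profile-to-slice mapping follows NVIDIA's published A100 profiles;
# real placement rules are stricter than this simple sum.

SLICES_PER_PROFILE = {"1g.5gb": 1, "2g.10gb": 2, "3g.20gb": 3,
                      "4g.20gb": 4, "7g.40gb": 7}
TOTAL_SLICES = 7

def fits(profiles: list) -> bool:
    """Check whether the requested MIG instances fit on one physical GPU."""
    needed = sum(SLICES_PER_PROFILE[p] for p in profiles)
    return needed <= TOTAL_SLICES

print(fits(["3g.20gb", "2g.10gb", "2g.10gb"]))  # True: 3+2+2 = 7 slices
print(fits(["4g.20gb", "4g.20gb"]))             # False: 8 > 7 slices
```

Because each instance owns its slices in hardware, a noisy tenant in one partition cannot steal compute or memory bandwidth from another, which is the guarantee the noisy-neighbor claim above rests on.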

VM Auto Balance: Open Source DRS That Works

Arriving in early access, VM Auto Balancing automatically distributes virtual machines across cluster nodes to optimize hardware utilization and prevent hotspots when demand spikes. VMware users will recognize this as the KubeVirt answer to DRS, and it removes one of the last significant barriers to adoption for organizations making the move. Better hardware utilization means organizations get more out of their existing infrastructure investment, directly reducing the cost per workload. Dive deeper into VM Auto Balancing →
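At its simplest, a DRS-style balancer places each VM on the node with the most headroom. The toy heuristic below captures that core idea; production balancers such as this one also weigh memory, affinity rules, and live-migration cost, and the node names and utilization figures here are made up:

```python
# Toy placement heuristic in the spirit of VM auto-balancing: place each VM
# on the least-utilized node. Real DRS-style balancers also consider memory,
# affinity/anti-affinity rules, and the cost of live-migrating a running VM.

nodes = {"node-1": 70, "node-2": 30, "node-3": 55}  # % CPU utilization

def pick_node(utilization: dict) -> str:
    """Return the least-utilized node as the placement target."""
    return min(utilization, key=utilization.get)

print(pick_node(nodes))  # node-2
```

Run continuously against live utilization data, the same rule also identifies hotspots: when a node's load spikes, VMs are candidates to migrate toward the node this function would pick.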

Move volumes without downtime with Live Storage Migration

With certified third-party storage now supported, Live Storage Migration becomes even more powerful. Customers can move VMs between storage arrays without any downtime to workloads, which is especially valuable for organizations leveraging storage infrastructure from partners like NetApp, Dell, HPE, and Pure Storage alongside SUSE Virtualization. Businesses can refresh storage hardware, rebalance capacity, or shift between providers without impacting production services or scheduling maintenance windows. Learn more about Live Storage Migration →

Upgrade Control to roll upgrades on your terms, not the platform’s

New Upgrade Control capabilities let administrators pause automatic node upgrades for specific cluster nodes, which is essential for large-scale deployments where manual maintenance or verification must happen before changes propagate. Once ready, teams explicitly resume the process, reducing risk and protecting uptime. For enterprises running business-critical virtualized workloads, this level of control over the upgrade lifecycle means fewer surprises, less unplanned downtime, and greater confidence in day-two operations. See how Upgrade Control works →

Trusted Developer Lifecycle — From Inner Loop to Production

When developers build on the same platform that runs production, handoff friction disappears. Security policies, trusted images, and operational guardrails travel with the application from the first line of code to the production cluster. That’s what we’re delivering with this pillar: self-service developer access with zero compromise on security.

SUSE Rancher Developer Access – Free Trial

SUSE Rancher Developer Access is now generally available and purchasable through the SUSE Shop, complete with free 30-day trials. It integrates the SUSE Application Collection directly into Rancher Desktop, giving developers secure local development workflows with curated, zero-CVE container images out of the box. From developer tooling to production Kubernetes, it eliminates the “works on my machine” gap with enterprise-grade trust built in from the start. That means faster onboarding for new developers, reduced security risk from the very first build, and a natural on-ramp to the broader SUSE Rancher Prime ecosystem. Read more about SUSE Rancher Developer Access →

SUSE Application Collection

The SUSE Application Collection continues to grow, now featuring over 140 curated applications. A new Helm Charts comparison feature helps teams make faster, more informed decisions when evaluating chart options. The collection is also deepening its role as the trusted registry for the SUSE Rancher Prime ecosystem, now hosting SUSE Security Vulnerability Scanner and SUSE Virtual Cluster Engine alongside the broader catalog of developer-oriented tooling and SUSE Rancher Prime extensions. A single, curated source for platform components and applications reduces supply chain risk and eliminates the time teams spend vetting third-party images on their own. Read more about the SUSE Application Collection →

Virtual Clusters for GPU Workflows

Virtual Clusters now support GPU workflows in shared mode via K3k, letting platform engineers provision GPU-enabled instances for development teams with full control plane isolation and precise quota management. Developers get self-service access to GPU resources for AI and ML experimentation without waiting on infrastructure tickets. That removes one of the biggest bottlenecks in AI development today, the wait for GPU access, while giving platform teams the cost controls they need to keep infrastructure spending predictable. Read more about Virtual Clusters GPU Workflows →

Additional News Across the Portfolio

Beyond these three pillars, this KubeCon brings a broad set of updates across the SUSE Cloud Native portfolio. Each of these capabilities arrives as part of the integrated platform, so customers gain new functionality without adding new vendors, new contracts, or new operational overhead.

SUSE Observability Hosted Prime delivers a fully managed, hosted observability solution for Prime and Suite customers, removing the need for self-management entirely. Teams can focus on insights rather than running observability infrastructure. Discover SUSE Observability Hosted Prime →

RKE2 Is CNCF Certified Kubernetes AI Conformant. RKE2 has achieved CNCF Certified Kubernetes AI Conformance, which is the community-defined standard for running AI and machine learning workloads reliably on Kubernetes. This certification independently validates RKE2’s capabilities, including reliable GPU device plugin integration, gang scheduling for distributed training jobs, high-performance networking, and support for AI frameworks such as PyTorch and TensorFlow. The CNCF AI Conformance is available now as part of RKE2 General Availability and comes with RKE2’s built-in security-hardened defaults and CIS benchmark compliance. Read more about RKE2 CNCF AI Conformance →

RKE2 Ingress Migration with Traefik provides a clear, supported migration path from end-of-life ingress-nginx to the industry-standard Traefik Gateway API implementation. Customers get a risk-free transition plan with LTS support for existing deployments, protecting production stability while modernizing their ingress layer. Get the details on RKE2 ingress migration →

Rancher Continuous Delivery (Fleet) introduces Image Pull Secrets management for HelmOps, simplifying cluster bootstrapping and enabling seamless integration with Application Collection. Platform engineers can now manage curated content through GitOps workflows, reducing manual steps and configuration drift across large cluster fleets. See what’s new in Rancher Continuous Delivery →
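Fleet's pull-secret support builds on a standard Kubernetes primitive: a Secret of type `kubernetes.io/dockerconfigjson` carrying registry credentials. For context on what such a secret actually contains, here is a sketch that assembles one in memory; the registry URL and credentials are placeholders, and this is not Fleet's own API:

```python
import base64
import json

# Sketch of what an image pull secret contains under the hood: a Kubernetes
# Secret of type kubernetes.io/dockerconfigjson whose payload carries
# registry credentials. Registry and credentials below are placeholders.

def make_pull_secret(name, registry, user, password):
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    dockerconfig = {"auths": {registry: {"username": user,
                                         "password": password,
                                         "auth": auth}}}
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "kubernetes.io/dockerconfigjson",
        "data": {".dockerconfigjson": base64.b64encode(
            json.dumps(dockerconfig).encode()).decode()},
    }

secret = make_pull_secret("appco-pull", "registry.example.com",
                          "svc-user", "token")
print(secret["type"])  # kubernetes.io/dockerconfigjson
```

Managing these secrets declaratively through Fleet, rather than creating them by hand on each cluster, is what removes the manual bootstrapping steps across large fleets.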

Longhorn v2 Tech Preview introduces the new v2 data engine, delivering dramatic storage performance improvements for stateful workloads across cloud native and virtualized environments. Data-intensive AI and virtualized applications can run with fewer bottlenecks, improving workload density and infrastructure ROI. Explore the Longhorn v2 Tech Preview →

SUSE Security brings a new Vulnerability Scanner powered by Trivy and a new Process Enforcer using eBPF, both in tech preview. Because they are deeply integrated into the Rancher Suite, organizations can enforce policies through the same GitOps workflows they already use for application delivery. Learn about the new SUSE Security →

SCIM Identity and Access Management enables automated user provisioning and deprovisioning via external identity providers like Okta, strengthening compliance and eliminating manual account management. Automated identity lifecycle management reduces security exposure from orphaned accounts and cuts administrative overhead across the platform. Learn about SCIM for SUSE Rancher Prime →
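SCIM 2.0 (RFC 7643/7644) defines the payload an identity provider such as Okta sends when provisioning a user. A minimal example of that core user resource, with illustrative attribute values:

```python
import json

# Minimal SCIM 2.0 user-provisioning payload (RFC 7643) of the kind an IdP
# like Okta POSTs to a service provider's /Users endpoint.
# Names and addresses below are illustrative.

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.com", "primary": True}],
    "active": True,  # deprovisioning flips this to False or deletes the user
}

print(json.dumps(user, indent=2))
```

The `active` flag is the key to eliminating orphaned accounts: when the IdP deactivates a user, the same automated flow that created the account revokes its access, with no manual cleanup.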

One Platform. Open Infrastructure. Intelligent Operations.

SUSE Rancher Prime v2.14 isn’t just a release. It’s the inflection point where management becomes intelligent, virtualization becomes enterprise-complete, and the developer experience becomes seamlessly connected to the production platform.

AI that understands your entire stack because it’s embedded in the platform. Virtualization that’s managed alongside your containers because it’s part of the same platform. Developer tools that connect to production because they’re built on the same platform. That’s what a true cloud native platform of choice looks like, and it’s why consolidating on a single, unified platform delivers business outcomes that a collection of best-of-breed point tools never will.

We’re building the future of open infrastructure at SUSE, and we’re doing it with the community, our partners, and our customers at the center of everything.


Come see us at KubeCon EU in Amsterdam.
Visit the SUSE booth, join our sessions, and experience firsthand what an AI-native cloud native platform can do for your organization.

For the latest updates, visit suse.com/kubecon and follow us on social media throughout the week.

Peter Smails is SVP and GM of SUSE’s Enterprise Container Management (ECM) business unit, where he leads engineering and product management while also ensuring cross-functional alignment and execution across the business.