One Release, Three Sovereignty Gaps Fortified by SUSE at KubeCon Amsterdam 2026
KubeCon Amsterdam has started, and this year it will be interesting to assess whether digital sovereignty is actually a topic on the exhibition floor: in booth conversations, in the questions coming from analysts, in the hallway exchanges between platform teams and their architects. The theme is certainly not new, but I want to find out whether it carries an urgency now that I haven’t felt before.
In recent months, most of those conversations have been about the same thing: taking control of the infrastructure layer. Specifically, Kubernetes running on-prem, with no cloud dependency for the stack itself. That work is genuinely important, and I’m encouraged by how much progress I see. Many of the teams here are well into that journey. But as I walk from booth to booth, I keep coming back to the same question: what about the layers above?
Today, SUSE is announcing SUSE Rancher Prime 2.14. It’s a single platform release, but its three key elements each address a different dimension of the sovereignty gap that most organizations haven’t fully confronted yet.
The AI managing your platform needs to be sovereign too
Here’s a pattern I’m seeing that deserves more attention. An organization invests significant effort in ensuring its infrastructure has no external dependencies: Kubernetes on-prem, air-gapped environments, full control of the Linux layer. Then the AI system managing that infrastructure calls a public large language model (LLM) to determine whether a cluster is healthy. The sovereign foundation has a ceiling with a hole in it.
In SUSE Rancher Prime 2.14, Liz expands from an AI assistant into the supervisor of a specialized crew. It now oversees dedicated agents in Observability, Security, Linux, Provisioning, and Fleet management, all working within a single platform experience. The operator works with one interface; Liz directs the crew. The practical effect is significant: deployment, health management, and incident investigation are all handled through a unified AI layer, reducing the cognitive load on platform teams and improving mean time to resolution.
The sovereignty design is deliberate. Liz can connect to public LLMs when organizations choose that path, but it can equally run on sovereign LLMs including Ollama, vLLM, and SUSE AI, in fully on-premises or air-gapped deployments. Through Model Context Protocol (MCP) plug-and-play, any third-party tool already in your stack becomes an active participant in the crew. No custom integration code. No data leaving your perimeter. That’s AI-assisted platform management built for organizations where control is non-negotiable.
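To make the “sovereign LLM” option concrete, here is a minimal sketch, not SUSE’s actual integration code, of routing an assistant query to a local Ollama server instead of a public API. The endpoint and payload shape follow Ollama’s public `/api/chat` convention; the model name and the question are hypothetical placeholders.

```python
import json
from urllib import request

# Ollama's default local endpoint -- the request never leaves the host.
OLLAMA_URL = "http://localhost:11434/api/chat"


def build_chat_payload(model: str, question: str) -> dict:
    """Build a chat request in the shape Ollama's /api/chat expects."""
    return {
        "model": model,  # e.g. a locally pulled model such as "llama3"
        "messages": [{"role": "user", "content": question}],
        "stream": False,  # ask for a single JSON response, not a stream
    }


def ask_local_llm(question: str, model: str = "llama3") -> str:
    """Send the question to the local Ollama server and return its answer."""
    payload = json.dumps(build_chat_payload(model, question)).encode()
    req = request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]


if __name__ == "__main__":
    print(ask_local_llm("Is cluster node worker-3 healthy?"))
```

Swapping the public endpoint for a local one is, in essence, the whole sovereignty move: the same assistant pattern, with inference kept inside your perimeter.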
What makes this significant from a business perspective is that the entire AI ecosystem can run wherever your organization needs it, fully on-premises or in an air-gapped environment. Most cloud-native platforms that offer AI-assisted management require SaaS capabilities, which means operational data leaves your perimeter whether you intend it to or not. SUSE builds this with sovereignty as a design principle, not an afterthought.
That distinction matters enormously for businesses in regulated industries, defense, public sector, and any organization operating under strict data residency requirements: you get the full benefit of a modern, AI-assisted platform without sharing information with the outside world.
Virtualization lock-in is the sovereignty conversation nobody is naming
The migration away from proprietary virtualization is one of the most consequential infrastructure movements I’ve observed in recent years. The customers I speak with are well past the “should we explore this?” stage. The question now is how to execute the migration without disrupting production workloads.
SUSE Virtualization addresses that transition directly. VM Auto Balance rebalances workloads across nodes automatically, the open-source equivalent of the Distributed Resource Scheduler (DRS) capability that VMware customers have long depended on. Live Storage Migration moves volumes between storage arrays without touching running workloads. Upgrade Control gives administrators full authority over the process: pause before each node, validate, resume when ready.
From a digital sovereignty perspective, this is about more than data residency. It’s about owning your operational model. When you migrate to SUSE Virtualization, your environment isn’t subject to licensing changes made without your input, acquisition decisions that alter roadmaps, or pricing structures you can’t predict. The platform runs where your organization needs it to run, on infrastructure you control. CNCF-validated RKE2 underpins the Kubernetes layer, which means the foundation carrying your virtualized workloads has independent conformance certification behind it.
Another valuable addition to the sovereignty story in SUSE Virtualization is native NVIDIA Multi-Instance GPU (MIG) support. Proprietary platforms such as VMware require NVIDIA Enterprise licensing for this capability, whereas it is native to SUSE Virtualization. MIG turns a single GPU into many by partitioning it into isolated instances. In the past, engineers had to wait their turn for GPU resources through time slicing, which works well for bursty workloads; with MIG, they get a dedicated GPU instance carved from a single card, increasing both resource efficiency and data sovereignty.
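For readers who haven’t worked with MIG, the underlying partitioning uses NVIDIA’s standard `nvidia-smi` tooling. This command fragment is a generic illustration of the mechanism, not SUSE-specific workflow; the GPU index and profile name assume an 80 GB A100-class card.

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset to take effect)
sudo nvidia-smi -i 0 -mig 1

# Carve the card into isolated GPU instances; the 1g.10gb profile
# yields up to seven independent instances on an 80 GB A100.
# -C also creates the matching compute instances.
sudo nvidia-smi mig -i 0 -cgi 1g.10gb,1g.10gb,1g.10gb -C

# Each MIG instance now appears as its own schedulable device.
nvidia-smi -L
```

Each resulting instance has its own memory and compute slice, which is what lets a platform schedule them as dedicated GPUs rather than time-shared ones.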
The business case here is straightforward: organizations that migrate to SUSE Virtualization take back control of a cost and risk that has been sitting outside their control. Proprietary virtualization lock-in means someone else sets the pricing, someone else decides when the roadmap changes, and someone else determines what happens to your environment when their company is acquired or restructured.
That is not sovereignty in any meaningful sense. The shift happening now is not just a technology migration; it is organizations choosing to own the financial and operational decisions that govern their core infrastructure, on their terms.
Sovereignty stops at the developer layer. It shouldn’t.
This is the point I find most underappreciated in the current conversation. Organizations spend significant energy building sovereign control at the infrastructure layer. That’s necessary. It isn’t sufficient.
The dynamic layer above that foundation is where development teams actually work: what they build from, what they pull, what they deploy into production every day. The sovereignty question there isn’t just “where does our Kubernetes cluster run?” It’s also: what’s inside those container images? Who curated them, who patched them, and how would we know if something changed between releases?
SUSE Application Collection has grown from 74 to 141 curated applications over the past year. Daily pulls have increased from 2,500 to 10,000. Active subscriptions have reached 2,000 across 1,300 enterprise organizations. Every application is built to Supply-chain Levels for Software Artifacts (SLSA) Level 3, patched daily, with near-zero Common Vulnerabilities and Exposures (CVEs). A new Helm Charts comparison feature gives developers a side-by-side view of exactly what changed between releases: vulnerabilities addressed, packages updated, size differences. That’s the kind of traceability that regulated enterprises and sovereignty-conscious platform teams need.
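To make the comparison idea concrete, here is a small, purely illustrative sketch, not the actual Application Collection feature, of diffing the package manifests of two chart releases. This is the kind of delta a release-comparison view surfaces; the package names and versions are hypothetical.

```python
def compare_releases(old: dict, new: dict) -> dict:
    """Diff two {package: version} manifests the way a release
    comparison view would: what was added, removed, or updated."""
    added = {p: v for p, v in new.items() if p not in old}
    removed = {p: v for p, v in old.items() if p not in new}
    updated = {p: (old[p], new[p])
               for p in old.keys() & new.keys() if old[p] != new[p]}
    return {"added": added, "removed": removed, "updated": updated}


# Hypothetical manifests for two releases of the same chart.
v1 = {"openssl": "3.1.4", "zlib": "1.2.13", "curl": "8.4.0"}
v2 = {"openssl": "3.1.7", "zlib": "1.2.13", "libxml2": "2.12.6"}

diff = compare_releases(v1, v2)
# diff["updated"] -> {"openssl": ("3.1.4", "3.1.7")}
# diff["removed"] -> {"curl": "8.4.0"}
# diff["added"]   -> {"libxml2": "2.12.6"}
```

In practice the interesting part is attaching CVE data to each updated package, which is what turns a raw diff into an auditable record of what a release actually fixed.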
SUSE Rancher Developer Access extends this further, bringing the Application Collection directly into Rancher Desktop so local development runs on the same trusted foundations as production. Same platform, same trusted content, from the developer’s laptop to the enterprise cluster.
The business argument here is one that too few organizations have connected to their sovereignty strategy: if you build full control at the infrastructure layer but leave the developer layer unmanaged, you have not solved the problem. Development teams are the most dynamic part of the stack. They are constantly pulling, building, and deploying, and the software supply chain they rely on carries real risk if it is not curated and verified.
Businesses that extend their sovereignty posture to include trusted developer tools gain consistency from development all the way through to production, which is where compliance, auditability, and genuine operational control actually live.
Sovereignty is being built now, not planned for
Walking the show floor and talking with analysts, platform leads, and practitioners, I’m curious to see how the conversation has shifted in quality. A year ago, sovereignty felt like something enterprises were being asked to plan for. I’m looking forward to hearing whether the practitioners at this show are asking specific architectural questions, the kind that signal the planning is done and the building has started.
What I find genuinely exciting about today’s release is that it addresses sovereignty as a property of the whole stack, not just the foundation. The infrastructure layer matters deeply and that work continues. But the AI intelligence layer, the virtualization layer, and the developer tools layer all have sovereignty implications that deserve the same rigour.
If you’re here at KubeCon Amsterdam this week, I’d welcome the conversation. Find me on the floor.
Come see us at KubeCon EU in Amsterdam.
Visit the SUSE booth, join our sessions, and experience firsthand what an AI-native cloud native platform can do for your organization.
For the latest updates, visit suse.com/kubecon and follow us on social media throughout the week.