SUSE AI: Next-Generation Enterprise AI Platform Announced at Cloud Native KubeCon NA 2025


SUSE made a significant entry into the AI platform market at Cloud Native KubeCon NA 2024 with the announcement of SUSE AI. This cloud-native, open enterprise platform is engineered specifically for the secure management and deployment of AI workloads.

A year on, we are excited to introduce the next version of the product. This release focuses on deploying AI simply and securely at scale, delivering new innovations, and expanding partnerships.

As we start Cloud Native KubeCon NA 2025, we are delighted to announce that SUSE AI is a fully CNCF-conformant AI platform!

Achieving CNCF conformance with SUSE AI

SUSE has always been pleased to be a part of the CNCF community, and this news is particularly exciting for us!

But what does being CNCF conformant mean to you? CNCF conformance means that our platform is recognized for trust, interoperability, and community commitment.

It operates consistently across public clouds, private clouds, and on-premises environments, preventing vendor lock-in. It is rigorously tested for reliability and security. And it stays modern and relevant by aligning with and leveraging the latest features of the fast-moving AI ecosystem.

In addition, SUSE AI simplifies deployment and management by leveraging standardized open source tools and components, building confidence and accelerating adoption by meeting the highest standards of cloud native technology.

Simplifying AI

With this release of SUSE AI, we are excited to announce a tech preview of the SUSE AI Universal Proxy. With the number of MCP endpoints growing exponentially, we are certain that this component will help simplify and secure your infrastructure.

The SUSE AI Universal Proxy serves as a comprehensive platform for proxying all of your MCP servers, tackling the complexities of deploying and managing AI services across the enterprise. With the AI Universal Proxy you get:

  • A single entry point for all your AI services
  • Automatic discovery and registration of services
  • Intelligent routing through smart traffic control
  • Comprehensive monitoring and logging through integrated observability
  • Autoscaling with Kubernetes-native deployment

With the AI Universal Proxy, SUSE AI will bridge the divide between development and production, streamlining the building and maintenance of AI-powered applications.
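
To make this concrete, here is a minimal, hypothetical sketch of what client code could look like once every MCP server sits behind the proxy's single entry point. The proxy URL, route scheme, and payload shape below are illustrative assumptions, not the shipping API:

    # Hypothetical sketch: all MCP traffic flows through one proxy endpoint.
    # The URL, route scheme, and payload shape are illustrative only.
    import requests

    PROXY = "https://ai-proxy.example.internal"  # assumed single entry point

    # Instead of tracking one URL (and one set of credentials) per MCP
    # server, a client addresses each server by name through the proxy,
    # which handles discovery, routing, and observability behind the scenes.
    resp = requests.post(
        f"{PROXY}/mcp/weather-server/tools/call",  # hypothetical route
        json={"name": "get_forecast", "arguments": {"city": "Nuremberg"}},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())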

The best part? We are inviting you to contribute to the project. Join us in truly building a community around managing and securing the proliferation of MCP servers.

Optimizing resources through observability

AI resources continue to be expensive. From specialized hardware to the numerous tools needed to run AI workloads, it's imperative that companies ensure those workloads run efficiently.

With this release, we’ve enhanced the integrated observability features of SUSE AI, providing even more metrics that matter as you scale your AI projects.  A few of these include:

  • Out-of-the-box observability for a variety of the frameworks in the AI Library, using Open WebUI pipelines
  • Inclusion of the OpenTelemetry Operator, so you can auto-instrument any component, including the ones you choose to download from the internet
  • More GPU performance metrics, including heat and temperature

SUSE AI Observability now shows you the metrics that matter for your AI workloads, from the hardware to the applications.
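
The OpenTelemetry Operator handles auto-instrumentation, but the same SDK also lets you add custom spans around your own inference code. Here is a minimal sketch using the OpenTelemetry Python SDK; the tracer, span, and attribute names are illustrative:

    # Minimal sketch of custom instrumentation with the OpenTelemetry
    # Python SDK. The Operator can inject instrumentation automatically;
    # this shows the manual path. Names here are illustrative.
    from opentelemetry import trace

    tracer = trace.get_tracer("suse-ai-demo")  # illustrative scope name

    with tracer.start_as_current_span("llm-inference") as span:
        span.set_attribute("llm.model", "llama-3")      # example attributes
        span.set_attribute("llm.tokens.generated", 128)
        # ... run the actual inference call here ...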

Delivering on the SUSE AI promise of choice

It seems like every day there is a new LLM to investigate. Whether it's Gemini, GPT-5, Llama, or Mistral, the choice of LLM should be up to you and your company. That's one of the reasons we recently added vLLM to the AI Library. With vLLM, you get:

  • Exceptional throughput, by boosting the number of requests an LLM can handle at once
  • High efficiency and cost savings, by lowering the hardware requirements for serving LLMs by maximizing GPU resources
  • Simplified deployment, streamlining the process of moving an LLM from experimentation to production

vLLM is a true enterprise inference engine that allows you to run any LLM at scale, efficiently.
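
To illustrate the simplified-deployment point, vLLM's offline API takes a model from Hugging Face to batched inference in a few lines. A minimal sketch follows; the model name is only an example, so substitute any LLM your hardware supports:

    # Minimal vLLM offline-inference example. The model name is just an
    # example; use any Hugging Face model your GPUs can serve.
    from vllm import LLM, SamplingParams

    llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3")
    params = SamplingParams(temperature=0.7, max_tokens=128)

    # vLLM batches prompts automatically for high throughput.
    outputs = llm.generate(["Explain CNCF conformance in one sentence."], params)
    for out in outputs:
        print(out.outputs[0].text)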

Expanding partnerships to deliver full-stack solutions

SUSE continues to work with the vast ecosystem of AI partners to deliver full-stack AI solutions to our customers. As we enter Cloud Native KubeCon NA 2025, we are delighted to announce that even more partners have achieved SUSE AI certification. These include:

  • Avesha: Provides AI-powered orchestration and scaling solutions for cloud infrastructure, particularly managing Kubernetes workloads across hybrid and multi-cloud environments.
  • Katonic: Offers a full-stack MLOps and generative AI platform for building, training, and deploying AI models at scale.
  • ClearML: Automates and simplifies the entire AI/ML lifecycle, from experiment tracking and data management to deployment and scaling.
  • AI & Partners: Assists companies in complying with the EU AI Act through a combination of software, training, and consultancy services.
  • Altair: Provides solutions for managing and optimizing complex computing tasks for HPC clusters, clouds, and supercomputers.

Operationalizing AI for the enterprise continues to be difficult: a recent MIT study showed that 93% of AI projects did not meet ROI expectations. SUSE AI extends SUSE Rancher Suite, a Leader in the Gartner Magic Quadrant, and is purpose-built to operationalize AI for the enterprise on a CNCF-conformant platform.

Let us help you be one of the 7% that succeed. Learn more about SUSE AI!


 

Stacey Miller is a Principal Product Marketing Manager at SUSE. With more than 25 years in the high-tech industry, Stacey has a wide breadth of technical marketing expertise.