SUSE Receives 15 Badges in the Winter G2 Report Across its Product Portfolio

Thursday, 12 January, 2023

I’m pleased to share that G2, the world’s largest and most trusted tech marketplace, has recognized our solutions in its 2023 Winter Report. We received a total of 15 badges across our business units for Rancher, SUSE Linux Enterprise Server (SLES), SLE Desktop and SLE Real Time – including the Users Love Us badge for all products – as well as three badges for the openSUSE community with Leap and Tumbleweed.

We recently celebrated 30 years of service to our customers, partners and the open source communities, and it’s wonderful to keep the celebrations going with this recognition by our peers. Receiving 15 badges this quarter reinforces the depth and breadth of our strong product portfolio as well as our team’s dedication to our customers.

As the use of hybrid, multi-cloud and cloud native infrastructures grows, many of our customers are looking to containers. For their business success, they look to Rancher, which has been the leading multi-cluster management platform for nearly a decade and has one of the strongest adoption rates in the industry.

G2 awarded Rancher four badges, including High Performer badges in the Container Management and the Small Business Container Management categories and Most Implementable and Easiest Admin in the Small Business Container Management category.

Adding to the badges it received in October, SLES was once again named Momentum Leader and Leader in the Server Virtualization category; Momentum Leader and High Performer in the Infrastructure as a Service category; and earned two badges in the Mid-Market Server Virtualization category for Best Support and High Performer.

In addition, SLE Desktop was again awarded two High Performer badges in the Mid-Market Operating System and Operating System categories. SLE Real Time also received a High Performer badge in the Operating System category. The openSUSE community distribution Leap was recognized as the Fastest Implementation in the Operating System category. It’s clear that our Business Critical Linux solutions continue to be the cornerstone of success for many of our customers and that we continue to provide excellent service for the open source community.

Here’s what some of our customers said in their reviews on G2:

“[Rancher is a] complete package for Kubernetes.”

“RBAC simple management is one of the best upsides in Rancher, attaching Rancher post creation process to manage RBAC, ingress and [getting] a simple UI overview of what is going on.”

“[Rancher is the] best tool for managing multiple production clusters of Kubernetes orchestration. Easy to deploy services, scale and monitor services on multiple clusters.”

“SLES the best [for] SAP environments. The support is fast and terrific.”

Providing our customers with solutions that they know they can rely on and trust is critical to the work we do every day. These badges are a direct result of customer feedback and product reviews and underscore our ability to serve the needs of our customers across all of our solutions. I’m looking forward to seeing what new badges our team will be awarded in the future as a result of their excellent work.

 

Rancher Wrap: Another Year of Innovation and Growth

Monday, 12 December, 2022

2022 was another year of innovation and growth for SUSE’s Enterprise Container Management business. We introduced significant upgrades to our Rancher and NeuVector products, launched new open source projects and matured others. Exiting 2022, Rancher remains the industry’s most widely adopted container management platform and SUSE remains the preferred vendor for enabling enterprise cloud native transformation. Here’s a quick look at a few key themes from 2022.  

Security Takes Center Stage 

As the container management market matured in 2022, container security took center stage. Customers and the open source community alike voiced concerns about the risks posed by their increasing reliance on hybrid-cloud, multi-cloud and edge infrastructure. Beginning with the open sourcing of NeuVector, which we acquired in Q4 2021, we continued throughout 2022 to meet our customers’ most stringent security and assurance requirements, making strategic investments across our portfolio, including:

  • Kubewarden – In June, we donated Kubewarden to the CNCF. Now a CNCF sandbox project, Kubewarden is an open source policy engine for Kubernetes that automates the management and governance of policies across Kubernetes clusters, thereby reducing risk. It also simplifies the management of policies by enabling users to integrate policy management into their CI/CD engines and existing infrastructure (see the example policy sketched after this list).
  • SUSE NeuVector 5.1 – In November, we released SUSE NeuVector 5.1, further strengthening our already industry-leading container security platform.
  • Rancher Prime – Most recently, we introduced Rancher Prime, our new commercial offering, replacing SUSE Rancher. Supporting our focus on security assurances, Rancher Prime offers customers the option of accessing their Rancher Prime software directly from a trusted private registry. Additionally, Rancher Prime FIPS 140-3 and SLSA Level 2 and 3 certifications will be finalized in 2023.
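To make the Kubewarden item above concrete, here is a minimal sketch of what a policy looks like once the Kubewarden stack is installed in a cluster. The policy module URL, tag and field layout follow Kubewarden’s public examples but should be treated as assumptions; check the project documentation for current values.

$ kubectl apply -f - <<'EOF'
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: disallow-privileged-pods
spec:
  # WebAssembly policy module pulled from Kubewarden's policy registry (illustrative tag)
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.5
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE", "UPDATE"]
  mutating: false
EOF

Because policies are distributed as WebAssembly modules, the same artifact can also be validated in a CI/CD pipeline before it ever reaches a cluster.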

Open Source Continues to Fuel Innovation 

 Our innovation did not stop at security. In 2022, we also introduced new projects and matured others, including:  

  • Elemental – Designed for edge deployments, Elemental is an open source project that enables centralized management and operations of RKE2 and K3s clusters when deployed with Rancher. 
  • Harvester – SUSE’s open source, cloud native hyperconverged infrastructure (HCI) alternative to proprietary HCI is now utilized across more than 710 active clusters. 
  • Longhorn – Now a CNCF incubating project, Longhorn is deployed across more than 72,000 nodes. 
  • K3s – SUSE’s lightweight Kubernetes distribution designed for the edge, which we donated to the CNCF, has surpassed 4 million downloads. 
  • Rancher Desktop – SUSE’s desktop-based container development environment for Windows, macOS and Linux has surpassed 520,000 downloads and 4,000 GitHub stars since its January release. 
  • Epinio – SUSE’s Kubernetes-powered application development platform-as-a-service (PaaS) solution, in which users can deploy apps without setting up infrastructure themselves, has surpassed 4,000 downloads and 300 stars on GitHub since its introduction in September. 
  • Opni – SUSE’s multi-cluster observability tool (including logging, monitoring and alerting) with AIOps has seen steady growth, with more than 75 active deployments this year.  

As we head into 2023, Gartner research indicates the container management market will grow at roughly a 25% CAGR to $1.4B by 2025. In that same time period, 85% of large enterprises will have adopted container management solutions, up from 30% in 2022. SUSE’s 30-year heritage in delivering enterprise infrastructure solutions, combined with our market-leading container management solutions, uniquely positions SUSE as the vendor of choice for helping organizations on their cloud native transformation journeys. I can’t wait to see what 2023 holds in store! 

Q&A: How to Find Value at the Edge Featuring Michele Pelino

Tuesday, 6 December, 2022

We recently held a webinar, “Find Value at the Edge: Innovation Opportunities and Use Cases,” where Forrester Principal Analyst Michele Pelino was our guest speaker. After the event, we held a Q&A with Pelino highlighting edge infrastructure solutions and benefits. Here’s a look into the interview: 

SUSE: What technologies (containers, Kubernetes, cloud native, etc.) enable workload affinity in the context of edge? 

Michele: The concept of workload affinity enables firms to deploy software where it runs best. Workload affinity is increasingly important as firms deploy AI code across a variety of specialized chips and networks. As firms explore these new possibilities, running the right workloads in the right locations — cloud, data center, and edge — is critical. Increasingly, firms are embracing cloud native technologies to achieve these deployment synergies. 

Many technologies enable workload affinity for firms — for example, cloud native integration tools and container platforms’ application architecture solutions that enable the benefits of cloud everywhere. Kubernetes, a key open source system, enables enterprises to automate deployment, as well as to scale and manage containerized applications in a cloud native environment. Kubernetes solutions also provide developers with software design, deployment, and portability strategies to extend applications in a seamless, scalable manner. 

SUSE: What are the benefits of using cloud native technology in implementing edge computing solutions? 

Michele: Proactive enterprises are extending applications to the edge by deploying compute, connectivity, storage, and intelligence close to where it’s needed. Cloud native technologies deliver massive scalability, as well as enable performance, resilience, and ease of management for critical applications and business scenarios. In addition, cloud functions can analyze large data sets, identify trends, generate predictive analytics models, and remotely manage data and applications globally. 

Cloud native apps can leverage development principles such as containers and microservices to make edge solutions more dynamic. Applications running at the edge can be developed, iterated, and deployed at an accelerated rate, which reduces the time it takes to launch new features and services. This approach improves end user experience because updates can be made swiftly. In addition, when connections are lost between the edge and the cloud, those applications at the edge remain up to date and functional. 

SUSE: How do you mitigate/address some of the operational challenges in implementing edge computing at scale? 

Michele: Edge solutions make real-time decisions across key operational processes in distributed sites and local geographies. Firms must address key impacts on network operations and infrastructure. It is essential to ensure interoperability of edge computing deployments, which often have different device, infrastructure, and connectivity requirements. Third-party partners can help stakeholders deploy seamless solutions across edge environments, as well as connect to the cloud when appropriate. Data centers in geographically diverse locations make maintenance more difficult and highlight the need for automated and orchestrated management systems spanning various edge environments. 

Other operational issues include assessing data response requirements for edge use cases and the distance between edge environments and computing resources, which impacts response times. Network connectivity issues include evaluating bandwidth limitations and determining processing characteristics at the edge. It is also important to ensure that deployment initiatives enable seamless orchestration and maintenance of edge solutions. Finally, it is important to identify employee expertise to determine skill-set gaps in areas such as mesh networking, software-defined networking (SDN), analytics, and development expertise. 

SUSE: What are some of the must-haves for securing the edge? 

Michele: Thousands of connected edge devices across multiple locations create a fragmented attack surface for hackers, as well as business-wide networking fabrics that interweave business assets, customers, partners, and digital assets connecting the business ecosystem. This complex environment elevates the importance of addressing edge security and implementing strong end-to-end security from sensors to data centers in order to mitigate security threats. 

Implementing a Zero Trust edge (ZTE) policy for networks and devices powering edge solutions using a least-privileged approach to access control addresses these security issues. ZTE solutions securely connect and transport traffic using Zero Trust access principles in and out of remote sites, leveraging mostly cloud-based security and networking services. These ZTE solutions protect businesses from customers, employees, contractors, and devices at remote sites connecting through WAN fabrics to more open, dangerous, and turbulent environments. When designing a system architecture that incorporates edge computing resources, technology stakeholders need to ensure that the architecture adheres to cybersecurity best practices and regulations that govern data wherever it is located. 

SUSE: Once cloud radio access network (RAN) becomes a reality, will operators be able to monetize the underlying edge infrastructure to run customer applications side by side? 

Michele: Cloud RAN can enhance network versatility and agility, accelerate introduction of new radio features, and enable shared infrastructure with other edge services, such as multiaccess edge computing or fixed-wireless access. In the future, new opportunities will extend use cases to transform business operations and industry-focused applications. Infrastructure sharing will help firms reduce costs, enhance service scalability, and facilitate portable applications. RAN and cloud native application development will extend private 5G in enterprise and industrial environments by reducing latency from the telco edge to the device edge. Enabling compute functions closer to the data will power AI and machine-learning insights to build smarter infrastructure, smarter industry, and smarter city environments. Sharing insights and innovations through open source communities will facilitate evolving innovation in cloud RAN deployments and emerging applications that leverage new hardware features and cloud native design principles.
 

What’s next? 

Register and watch the “Find Value at the Edge: Innovation Opportunities and Use Cases” Webinar today! Also, get a complimentary copy of the Forrester report: The Future of Edge Computing.  

 

Harvester 1.1.0: The Latest Hyperconverged Infrastructure Solution

Wednesday, 26 October, 2022

The Harvester team is pleased to announce the next release of our open source hyperconverged infrastructure product. For those unfamiliar with how Harvester works, I invite you to check out this blog from our 1.0 launch that explains it further. This next version of Harvester adds several new and important features to help our users get more value out of Harvester. It reflects the efforts of many people, both at SUSE and in the open source community, who have contributed to the product thus far. Let’s dive into some of the key features.  

GPU and PCI device pass-through 

The GPU and PCI device pass-through experimental features are some of the most requested features this year, and they are now officially live. These features enable Harvester users to run applications in VMs that need to take advantage of PCI devices on the physical host. Most notably, GPUs are an ever-increasing use case to support the growing demand for machine learning, artificial intelligence and analytics workloads. Our users have learned that both container and VM workloads need to access GPUs to power their businesses. This feature can also support a variety of other use cases that need PCI devices; for instance, SR-IOV-enabled network interface cards can expose virtual functions as PCI devices, which Harvester can then attach to VMs. In the future, we plan to extend this function to support advanced forms of device passthrough, such as vGPU technologies.  

VM Import Operator  

Many Harvester users maintain other HCI solutions running a wide array of VM workloads, and in some cases they want to migrate those VMs to Harvester. To make this process easier, we created the VM Import Operator, which automates the migration of VMs from an existing HCI solution to Harvester. It currently supports two popular flavors: OpenStack and VMware vSphere. The operator connects to either of those systems and copies the virtual disk data for each VM to Harvester’s datastore. It then translates the metadata that configures the VM to the comparable settings in Harvester.   

Storage network 

Harvester runs on various hardware profiles, some clusters being more compute-optimized and others optimized for storage performance. In the case of workloads needing high-performance storage, one way to increase efficiency is to dedicate a network to storage replication. For this reason, we created the Storage Network feature. A dedicated storage network removes I/O contention between workload traffic (pod-to-pod communication, VM-to-VM, etc.) and the storage traffic, which is latency sensitive. Additionally, higher-capacity network interfaces, such as 40 or 100 Gigabit Ethernet, can be procured for storage.  

Storage tiering  

When supporting workloads requiring different types of storage, it is important to be able to define classes or tiers of storage that a user can choose from when provisioning a VM. Tiers can be labeled with convenient terms such as “fast” or “archival” to make them user-friendly. In turn, the administrator can then map those storage tiers to specific disks on the bare metal system. Both node and disk label selectors define the mapping, so a user can specify a unique combination of nodes and disks on those nodes that should be used to back a storage tier. Some of our Harvester users want to use this feature to utilize slower magnetic storage technologies for parts of the application where IOPS is not a concern and low-cost storage is preferred.
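Under the hood, Harvester’s storage tiers map to Longhorn storage classes, so a “fast” tier backed only by tagged SSDs looks roughly like the sketch below. The ssd and fast-node tags are illustrative labels you would first apply to the relevant disks and nodes, so treat the exact values as assumptions rather than a prescription.

$ kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  # Only disks tagged "ssd" on nodes tagged "fast-node" back this tier (illustrative tags)
  diskSelector: "ssd"
  nodeSelector: "fast-node"
EOF

When a user provisions a VM, they simply pick the fast storage class, and the volume replicas land on the matching disks.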

In summary, the past year has been an important chapter in the evolution of Harvester. As we look to the future, we expect to see more features and enhancements in store. Harvester plans to have two feature releases next year, allowing for a more rapid iteration of the ideas in our roadmap. You can download the latest version of Harvester on GitHub. Please continue to share your feedback with us through our community Slack or your SUSE account representative.  

Learn more

Download our FREE eBook, 6 Reasons Why Harvester Accelerates IT Modernization Initiatives. This eBook identifies the top drivers of IT modernization, outlines an IT modernization framework and introduces Harvester, an open, interoperable hyperconverged infrastructure (HCI) solution.

How to Deliver a Successful Technical Presentation: From Zero to Hero

Wednesday, 12 October, 2022

Introduction

I had the chance to talk about Predictive Autoscaling Patterns with Kubernetes at the Container Days 2022 conference in September. I delivered the talk with a former colleague in Hamburg, Germany, and it was an outstanding experience! The entire process began when the Call for Papers opened back in March 2022. My colleague and I worked together, playing with the technology, getting a better understanding of the components and preparing the labs. 

In this article, I will discuss my experiences, lessons learned and suggestions for providing a successful technical presentation. 

My Experiences

As a Cloud Consultant in a previous role, I attended events such as CNCF KubeCon and the Open Source Infra Summit. I also helped in workshops, serving as booth staff, performing demos and introducing the product to attendees. Public speaking was something that always piqued my interest, but I didn’t know where to start. 

One of my previous duties was to provide technical expertise to customers and help sales organizations identify potential solutions and create workshops to work with the customers. Doing this gave me a unique opportunity to introduce myself to the process of speaking; I found it interesting and a great source of self-reflection.

Developing communication skills is not something you achieve just by taking a training course or listening to others speak. I consider rehearsal mandatory, as I learn something new every time. Ultimately, though, the best way to develop communication skills is to deliver content. 

How to Select the Right Topic 

Selecting the right topic for a speech is one of the first things you should consider. The topic should be a mix of something you are comfortable with and something you have enough technical background knowledge of; it does not need to be work-related, just something you find interesting and want to discuss. 

I delivered the talk with a former colleague, Roberto Carratalá, who works for a competitor. Right now, some of the most-used technologies (Kubernetes, its SIGs, programming languages, KubeVirt and many others) are open source projects with no single company behind them. Focusing on the technology itself opens the door to an agnostic topic that you and your co-speaker can both discuss. Don’t let company differences get in the way of delivering a great talk.

In our case, we decided to go with the Vertical Pod Autoscaler (VPA) and our architecture around it. We used examples and created use cases to showcase it. It is important to narrow the concept down to real use cases so the audience can relate them to their own, and so the talk can serve as a baseline they can adapt for their customers. 

VPA is a vendor-agnostic technology that can be used within any vendor’s distribution with minimal changes. Consider talking about a technology like this, which can then be applied to vendor-specific products. 

Whether you are an Engineer, Project Manager, Architect, Consultant or hold a non-technical role, we are all involved in IT. Within your area of specialization, you can talk about your experiences, what you learned, how you performed or even the challenges you faced explaining the process.

From “How to contribute to an Open Source project” to “How to write eBPF programs with Golang,” each topic will draw a different audience. 

Here are some ideas: 

  • Have you recently had a good experience with a tool or project and want to share your experiences? 
  • Did you overcome a downtime situation with your customer? What a good experience to share! 
  • Business challenges and how you faced them. 
  • Are you a maintainer or contributor to a project? Take your chance and generate some hype among developers about your project. 

The bottom line is to not underestimate yourself and share your experiences; we all grow when we share! 

Practice Makes Perfect

In my experience, taking the time to practice and record yourself is important. Every time I reviewed my own recording, I found opportunities for improvement. Rehearse your delivery!

I had to understand that there is no “perfect word” to use; there is no better way to explain yourself than when you feel comfortable speaking about the topic. Use language you are comfortable with, and the audience will appreciate your understanding. 

Repeat your talk, stand up and try to feel comfortable while you’re speaking. Become familiar with the sound of your voice and the flow of the content. Once you feel comfortable enough, deliver the talk to your partner, your family or even close friends. This is a wonderful opportunity to get initial feedback in a friendly environment, and it helped me greatly.

The Audience 

Talking to hundreds or even thousands of attendees is a great challenge but can be frightening. Try to remember that all these people are there because they’re interested in the content you created. They are not expecting to become experts after the talk, nor do they want or expect you to fail. Don’t be afraid to find ‘your’ space on the stage so that you feel more comfortable. Always tell the audience that you’re excited to be at the event and looking forward to sharing your knowledge and experience with them. Speak freely, and remember to have fun while you do! 

Own the content; a speech is not a script. Don’t expect to remember every word that you wrote because it will feel very wooden. Try to riff on your content – evolve it every time it’s delivered, sharpening the emphasis of certain sections or dropping in a bit of humor along the way.  Make sure each time you give the speech it’s a unique experience. 

The Conference 

The time has come: I overcame the lack of self-confidence and all the doubts. It was time to polish up the final details before giving the speech. 

First, I found it useful to familiarize myself with the speaking room. If you are not told to stay in the same place (like a lectern or a marked spot on the stage), spend some time walking around the room, looking at the empty chairs, imagining yourself delivering the speech, and breathe slowly and deeply to reduce any anxiety that you feel. 

While delivering a talk is not 100% a conversation, attempt to talk to the audience; don’t focus on the first few rows and forget about the rest of the auditorium. Look at different parts of the audience when you are talking, make eye contact with them and ask questions. If possible, try to make it interactive. 

The last part of the speech usually consists of a question-and-answer section. One of the most common fears is around “what if they ask something I don’t know?” Remember that no one expects you to know everything, so don’t be afraid to recognize you don’t know something. Some questions can be tricky or too long to answer; just calm down and point to the right resources where they can find the answers from the source directly. 

We got many questions, which shocked me because that proved that the audience was interested.  It was fun to answer many questions and interact with the audience. 

Don’t be in a rush, talk about the content and take your time to breathe while you are speaking. Remind yourself you wrote the content, you own the content and nobody was forced to attend your talk; they attended freely because your content is worth it!  

Conclusion 

Overall, my speaking experiences were outstanding! I delivered mine with my former colleague and friend Roberto Carratalá, and we both really enjoyed the experience. We received good feedback, including some improvements to consider for our future speeches. 

I will submit to the next call for papers, whether it is standalone or co-speaking. So get out there and get speaking!

Meet Epinio: The Application Development Engine for Kubernetes

Tuesday, 4 October, 2022

Epinio is a Kubernetes-powered application development engine. Adding Epinio to your cluster creates your own platform-as-a-service (PaaS) solution in which you can deploy apps without setting up infrastructure yourself.

Epinio abstracts away the complexity of Kubernetes so you can get back to writing code. Apps are launched by pushing their source directly to the platform, eliminating complex CD pipelines and Kubernetes YAML files. You move directly to a live instance of your system that’s accessible at a URL.

This tutorial will show you how to install Epinio and deploy a simple application.

Prerequisites

You’ll need an existing Kubernetes cluster to use Epinio. You can start a local cluster with a tool like K3s, minikube or Rancher Desktop, or use a managed service such as Azure Kubernetes Service (AKS) or Google Kubernetes Engine (GKE).

You must have kubectl and Helm available to follow along with this guide. Install them if they’re missing from your system. You don’t need them to use Epinio itself, but they are required for the initial installation procedure.

The steps in this guide have been tested with K3s v1.24 (Kubernetes v1.24) and minikube v1.26 (Kubernetes v1.24) on a Linux host. Additional steps may be required to run Epinio in other environments.

What Is Epinio?

Epinio is an application platform that offers a simplified development experience by using Kubernetes to automatically build and deploy your apps. It’s like having your own PaaS solution that runs in a Kubernetes cluster you can control.

Using Epinio to run your apps lets you focus on the logic of your business functions instead of tediously configuring containers and Kubernetes objects. Epinio will automatically work out which programming languages you use, build an appropriate image with a Paketo Buildpack and launch your containers inside your Kubernetes cluster. You can optionally use your own image if you’ve already got one available.

Developer experience (DX) is a hot topic because good tools reduce stress, improve productivity and encourage engineers to concentrate on their strengths without being distracted by low-level components. A simpler app deployment experience frees up developers to work on impactful changes. It also promotes experimentation by allowing new app instances to be rapidly launched in staging and test environments.

Epinio Tames Developer Workflows

Epinio is purpose-built to enhance development workflows by handling deployment for you. It’s quick to set up, simple to use and suitable for all environments from your own laptop to your production cloud. New apps can be deployed by running a single command, removing the hours of work required if you were to construct container images and deployment pipelines from scratch.

While Epinio does a lot of work for you, it’s also flexible in how apps run. You’re not locked into the platform, unlike other PaaS solutions. Because Epinio runs within your own Kubernetes cluster, operators can interact directly with Kubernetes to monitor running apps, optimize cluster performance and act on problems. Epinio is a developer-oriented layer that imbues Kubernetes with greater ease of use.

The platform is compatible with most Kubernetes environments. It’s edge-friendly and capable of running with 2 vCPUs and 4 GB of RAM. Epinio currently supports Kubernetes versions 1.20 to 1.23 and is tested with K3s, k3d, minikube and Rancher Desktop.

How Does Epinio Work?

Epinio wraps several Kubernetes components in higher-level abstractions that allow you to push code straight to the platform. Your Epinio installation inspects your source, selects an appropriate buildpack and creates Kubernetes objects to deploy your app.

The deployment process is fully automated and handled entirely by Epinio. You don’t need to understand containers or Kubernetes to launch your app. Pushing up new code sets off a sequence of actions that allows you to access the project at a public URL.

Epinio first compresses your source and uploads the archive to a MinIO object storage server that runs in your cluster. It then “stages” your application by matching its components to a Paketo Buildpack. This process produces a container image that can be used with Kubernetes.

Once Epinio is installed in your cluster, you can interact with it using the CLI. Epinio also comes with a web UI for managing your applications.

Installing Epinio

Epinio is usually installed with its official Helm chart. This bundles everything needed to run the system, although there are still a few prerequisites.

Before deploying Epinio, you must have an ingress controller available in your cluster; NGINX and Traefik are two popular options. Ingresses let you expose your applications using URLs instead of raw hostnames and ports. Epinio requires your apps to be deployed with a URL, so it won’t work without an ingress controller. New deployments automatically generate a URL, but you can manually assign one instead. Most popular single-node Kubernetes distributions, such as K3s, minikube and Rancher Desktop, come with one either built in or as a bundled add-on.

You can manually install the Traefik ingress controller if you need to by running the following commands:

$ helm repo add traefik https://helm.traefik.io/traefik
$ helm repo update
$ helm install traefik --create-namespace --namespace traefik traefik/traefik

You can skip this step if you’re following along using minikube or K3s.

Preparing K3s

Epinio on K3s doesn’t have any special prerequisites. You’ll need to know your machine’s IP address, though—use it instead of 192.168.49.2 in the following examples.

Preparing minikube

Install the official minikube ingress add-on before you try to run Epinio:

$ minikube addons enable ingress

You should also double-check your minikube IP address with minikube ip:

$ minikube ip
192.168.49.2

Use this IP address instead of 192.168.49.2 in the following examples.

Installing Epinio on K3s or minikube

Epinio needs cert-manager so it can automatically acquire TLS certificates for your apps. You can install cert-manager using its own Helm chart:

$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install cert-manager --create-namespace --namespace cert-manager jetstack/cert-manager --set installCRDs=true

All other components are included with Epinio’s Helm chart. Before you continue, set up a domain to use with Epinio. It needs to be a wildcard where all subdomains resolve back to the IP address of your ingress controller or load balancer. You can use a service such as sslip.io to set up a magic domain that fulfills this requirement while running Epinio locally. sslip.io runs a DNS service that resolves to the IP address given in the hostname used for the query. For instance, any request to *.192.168.49.2.sslip.io will resolve to 192.168.49.2.
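You can verify the wildcard behavior with a quick DNS lookup (assuming the dig utility is installed; any subdomain label works):

$ dig +short epinio.192.168.49.2.sslip.io
192.168.49.2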

Next, run the following commands to add Epinio to your cluster. Change the value of global.domain if you’ve set up a real domain name:

$ helm repo add epinio https://epinio.github.io/helm-charts
$ helm install epinio --create-namespace --namespace epinio epinio/epinio --set global.domain=192.168.49.2.sslip.io

You should get an output similar to the following. It provides information about the Helm chart deployment and some getting started instructions from Epinio.

NAME: epinio
LAST DEPLOYED: Fri Aug 19 17:56:37 2022
NAMESPACE: epinio
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To interact with your Epinio installation download the latest epinio binary from https://github.com/epinio/epinio/releases/latest.

Login to the cluster with any of these:

    `epinio login -u admin https://epinio.192.168.49.2.sslip.io`
    `epinio login -u epinio https://epinio.192.168.49.2.sslip.io`

or go to the dashboard at: https://epinio.192.168.49.2.sslip.io

If you didn't specify a password, the default one is `password`.

For more information about Epinio, feel free to check out https://epinio.io/ and https://docs.epinio.io/.

Epinio is now installed and ready to use. If you hit a problem and Epinio doesn’t start, refer to the documentation to check any specific steps required for compatibility with your Kubernetes distribution.

Installing the CLI

Install the Epinio CLI from the project’s GitHub releases page. It’s available as a self-contained binary for Linux, Mac and Windows. Download the appropriate binary and move it into a location on your PATH:

$ wget https://github.com/epinio/epinio/releases/latest/download/epinio-linux-x86_64
$ sudo mv epinio-linux-x86_64 /usr/local/bin/epinio
$ sudo chmod +x /usr/local/bin/epinio

Try running the epinio command:

$ epinio version
Epinio Version: v1.1.0
Go Version: go1.18.3

Next, you can connect the CLI to the Epinio installation running in your cluster.

Connecting the CLI to Epinio

Login instructions are shown in the Helm output displayed after you install Epinio. The Epinio API server is exposed at epinio.<global.domain>. The default user credentials are admin and password. Run the following command in your terminal to connect your CLI to Epinio, assuming you used 192.168.49.2.sslip.io as your global domain:

$ epinio login -u admin https://epinio.192.168.49.2.sslip.io

You’ll be prompted to trust the self-signed certificate generated by your Kubernetes ingress controller if you’re using a magic domain without setting up SSL. Press the Y key at the prompt to continue:

Logging in to Epinio in the CLI

You should see a green Login successful message that confirms the CLI is ready to use.

Accessing the Web UI

The Epinio web UI is accessed by visiting your global domain in your browser. The login credentials match the CLI, defaulting to admin and password. You’ll see a browser certificate warning and a prompt to continue when you’re using an untrusted SSL certificate.

Epinio web UI

Once logged in, you can view your deployed applications, interactively create a new one using a form and manage templates for quickly launching new app instances. The UI replicates most of the functionality available in the CLI.

Creating a Simple App

Now you’re ready to start your first Epinio app from a directory containing your source. You don’t have to create a container image or run any external tools.

You can use the following Node.js code if you need something simple to deploy. Save it to a file called index.js inside a new directory. It runs an Express web server that responds to incoming HTTP requests with a simple message:

const express = require('express')
const app = express()
const port = 8080;

app.get('/', (req, res) => {
  res.send('This application is served by Epinio!')
})

app.listen(port, () => {
  console.log(`Epinio application is listening on port ${port}`)
});

Next, use npm to install Express as a dependency in your project:

$ npm install express

The Epinio CLI has a push command that deploys the contents of your working directory to your Kubernetes cluster. The only required argument is a name for your app.

$ epinio push -n epinio-demo

Press the Enter key at the prompt to confirm your deployment. Your terminal will fill with output as Epinio logs what’s happening behind the scenes. It first uploads your source to its internal MinIO object storage server, then acquires the right Paketo Buildpack to create your application’s container image. The final step adds the Kubernetes deployment, service and ingress resources to run the app.

Deploying an application with Epinio

Wait until the green App is online message appears in your terminal, then visit the displayed URL in your browser to see your live application:

App is online

If everything has worked correctly, you’ll see This application is served by Epinio! when using the source code provided above.

Application running in Epinio

Managing Deployed Apps

App updates are deployed by repeating the epinio push command:

$ epinio push -n epinio-demo

You can retrieve a list of deployed apps with the Epinio CLI:

$ epinio app list
Namespace: workspace

✔️  Epinio Applications:
|        NAME         |            CREATED            | STATUS |                     ROUTES                     | CONFIGURATIONS | STATUS DETAILS |
|---------------------|-------------------------------|--------|------------------------------------------------|----------------|----------------|
| epinio-demo         | 2022-08-23 19:26:38 +0100 BST | 1/1    | epinio-demo-a279f.192.168.49.2.sslip.io         |                |                |

The app logs command provides access to the logs written by your app’s standard output and error streams:

$ epinio app logs epinio-demo

🚢  Streaming application logs
Namespace: workspace
Application: epinio-demo
🕞  [repinio-demo-057d58004dbf05e7fb7516a0c911017766184db8-6d9fflt2w] repinio-demo-057d58004dbf05e7fb7516a0c911017766184db8 Epinio application is listening on port 8080

Scale your application with more instances using the app update command:

$ epinio app update epinio-demo --instances 3

You can delete an app with app delete. This will completely remove the deployment from your cluster, rendering it inaccessible. Epinio won’t touch the local source code on your machine.

$ epinio app delete epinio-demo

You can perform all these operations within the web UI as well.

Conclusion

Epinio makes application development in Kubernetes simple because you can go from code to a live URL in one step. Running a single command gives you a live deployment that runs in your own Kubernetes cluster. It lets developers launch applications without surmounting the Kubernetes learning curve, while operators can continue using their familiar management tools and processes.

Epinio can be used anywhere you’re working, whether on your own workstation or as a production environment in the cloud. Local setup is quick and easy with zero configuration, letting you concentrate on your code. The platform uses Paketo Buildpacks to discover your source, so it’s language and framework-agnostic.

Epinio is one of the many offerings from SUSE, which provides open source technologies for Linux, cloud computing and containers. Epinio is SUSE’s solution to support developers building apps on Kubernetes, sitting alongside products like Rancher Desktop that simplify Kubernetes cluster setup. Install and try Epinio in under five minutes so you can push app deployments straight from your source.

How to Explain Zero Trust to Your Tech Leadership: Gartner Report

Wednesday, 24 August, 2022

Does it seem like everyone’s talking about Zero Trust? Maybe you know everything there is to know about Zero Trust, especially Zero Trust for container security. But if your Zero Trust initiatives are being met with brick walls or blank stares, maybe you need some help from Gartner®. And they’ve got just the thing to help you explain the value of Zero Trust to your leadership. It’s called Quick Answer: How to Explain Zero Trust to Technology Executives.

So What is Zero Trust?

According to authors Charlie Winckless and Neil MacDonald from Gartner, “Zero Trust is a misnomer; it does not mean ‘no trust’ but zero implicit trust and use of risk-appropriate, explicit trust. To obtain funding and support for Zero Trust initiatives, security and risk management leaders must be able to explain the benefits to their technical executive leaders.”

Explaining Zero Trust to Technology Executives

This Quick Answer starts by introducing the concept of Zero Trust so that you can do the same.  According to the authors, “Zero Trust is a mindset (or paradigm) that defines key security initiatives. A Zero Trust mindset extends beyond networking and can be applied to multiple aspects of enterprise systems. It is not solely purchased as a product or set of products.” Furthermore,

”Zero Trust involves systematically removing implicit trust in IT infrastructures.”

The report also helps you explain the business value of Zero Trust to your leadership. For example, “Zero trust forms a guiding principle for security architectures that improve security posture and increase cyber-resiliency,” write Winckless and MacDonald.

Next Steps to Learn about Zero Trust Container Security

Get this report and learn more about Zero Trust, how it can bring greater security to your container infrastructure and how you can explain the need for Zero Trust to your leadership team.

For even more on Zero Trust, read our new book, Zero Trust Container Security for Dummies.

Cloud Modernization Best Practices

Monday, 8 August, 2022

Cloud services have revolutionized the technical industry, and services and tools of all kinds have been created to help organizations migrate to the cloud and become more scalable in the process. This migration is often referred to as cloud modernization.

To successfully implement cloud modernization, you must adapt your existing processes for future feature releases. This could mean adjusting your continuous integration/continuous delivery (CI/CD) pipeline and its technical implementation, updating or redesigning your release approval process (e.g., moving from manual to automated approvals), or making other changes to your software development lifecycle.

In this article, you’ll learn some best practices and tips for successfully modernizing your cloud deployments.

Best practices for cloud modernization

The following are a few best practices that you should consider when modernizing your cloud deployments.

Split your app into microservices where possible

Most existing applications deployed on-premises were developed and deployed with a monolithic architecture in mind. In this context, monolithic architecture means that the application is single-tiered and has no modularity. This makes it hard to bring new versions into a production environment because any change in the code can influence every part of the application. Often, this leads to a lot of additional and, at times, manual testing.

Monolithic applications often do not scale horizontally and can cause various problems, including complex development, tight coupling, slow application starts due to application size, and reduced reliability.

To address the challenges that a monolithic architecture presents, you should consider splitting your monolith into microservices. This means that your application is split into different, loosely coupled services that each serve a single purpose.

All of these services are independent solutions, but they are meant to work together to contribute to a larger system at scale. This increases reliability, as one failing service does not take down the whole application with it. You also gain the freedom to scale each component of your application without affecting other components. On the development side, since each component is independent, you can split the development of your app among your team and work on multiple components in parallel to ensure faster delivery.

For example, the Lyft engineering team managed to quickly grow from a handful of different services to hundreds of services while keeping their developer productivity up. As part of this process, they included automated acceptance testing as part of their pipeline to production.

Isolate apps away from the underlying infrastructure

In many older applications and workloads, engineers built scripts or pieces of code that were tightly coupled to the infrastructure they were deployed on. They wrote scripts that referenced specific folders or required predefined libraries to be available in the environment in which the scripts were executed. Often, this was due to required configurations on the hardware or operating system, or due to a dependency on certain packages required by the application.

Most cloud providers refer to this as a shared responsibility model. In this model, the cloud provider or service provider takes responsibility for the parts of the services being used, and the service user takes responsibility for protecting and securing the data for any services or infrastructure they use. The interaction between the services or applications deployed on the infrastructure is well-defined through APIs or integration points. This means that the more you move away from managing and relying on the underlying infrastructure, the easier it becomes for you to replace it later. For instance, if required, you only need to adjust the APIs or integration points that connect your application to the underlying infrastructure.

To isolate your apps, you can containerize them, which bakes your application into a repeatable and reproducible container. To further separate your apps from the underlying infrastructure, you can move toward serverless-first development, which includes a serverless architecture. You will be required to re-architect your existing applications to be able to execute on AWS Lambda or Azure Functions or adopt other serverless technologies or services.

While going serverless is recommended in some cases, such as simple CRUD operations or applications with high scaling demands, it’s not a requirement for successful cloud modernization.
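If you take the containerization route, here is a minimal sketch of what that looks like for a generic Node.js service; the base image, file names and port are illustrative assumptions rather than a prescription:

$ cat > Dockerfile <<'EOF'
# Illustrative base image; choose one that matches your runtime and support policy
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so they are cached between builds
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application source and declare how it runs
COPY . .
EXPOSE 8080
CMD ["node", "index.js"]
EOF
$ docker build -t my-app:1.0 .
$ docker run --rm -p 8080:8080 my-app:1.0

Because everything the app needs is baked into the image, the same artifact can run unchanged on a laptop, a VM or a Kubernetes cluster.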

Pay attention to your app security

As you begin to incorporate cloud modernization, you’ll need to ensure that any deliverables you ship to your clients are secure and follow a shift-left process. This process lets you quickly provide feedback to your developers by incorporating security checks and guardrails early in your development lifecycle (e.g., running static code analysis directly after a commit to a feature branch). To keep things secure at all times during the development cycle, it’s also best to set up continuous runtime checks for your workloads. This will ensure that you actively catch future issues in your infrastructure and workloads.
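One lightweight way to shift such checks left is a Git hook that refuses to push code until the scanner passes. The sketch below assumes a wrapper script at ./scripts/scan.sh that invokes whichever static analysis tool you use (SonarQube scanner, Checkmarx CLI, etc.); both the path and the script are hypothetical placeholders:

$ cat > .git/hooks/pre-push <<'EOF'
#!/bin/sh
# Run the project's static analysis before allowing a push.
# ./scripts/scan.sh is a placeholder for your scanner of choice.
if ! ./scripts/scan.sh; then
  echo "Static analysis failed - fix the findings before pushing." >&2
  exit 1
fi
EOF
$ chmod +x .git/hooks/pre-push

In most teams the same scan also runs in the CI pipeline, so the hook acts as a fast local safety net rather than the only gate.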

Quickly delivering features, functionality, or bug fixes to customers gives you and your organization more responsibility in ensuring automated verifications in each stage of the software development lifecycle (SDLC). This means that in each stage of the delivery chain, you will need to ensure that the delivered application and customer experience are secure; otherwise, you could expose your organization to data breaches that can cause reputational risk.

Making your deliverables secure includes ensuring that any personally identifiable information is encrypted in transit and at rest. However, it also requires that you ensure your application does not have open security risks. This can be achieved by running static code analysis tools like SonarQube or Checkmarx.

In this blog post, you can read more about the importance of application security in your cloud modernization journey.

Use infrastructure as code and configuration as code

Infrastructure as code (IaC) is an important part of your cloud modernization journey. For instance, if you want to be able to provision infrastructure (i.e., the required hardware, network and databases) in a repeatable way, IaC empowers you to apply existing software development practices (such as pull requests and code reviews) to infrastructure changes. Using IaC also gives you immutable infrastructure, which prevents you from accidentally introducing risk while making changes to existing infrastructure.

Configuration drift is a prominent issue with making ad hoc changes to an infrastructure. If you make any manual changes to your infrastructure and forget to update the configuration, you might end up with an infrastructure that doesn’t match its own configuration. Using IaC enforces that you make changes to the infrastructure only by updating the configuration code, which helps maintain consistency and a reliable record of changes.

All the major cloud providers have their own definition language for IaC, such as AWS CloudFormation, Google Cloud Deployment Manager on Google Cloud Platform (GCP) and Azure Resource Manager (ARM) templates on Microsoft Azure.

Ensuring that you can deploy and redeploy your application or workload in a repeatable manner will empower your teams further because you can deploy the infrastructure in additional regions or target markets without changing your application. If you don’t want to use any of the major cloud providers’ offerings to avoid vendor lock-in, other IaC alternatives include Terraform and Pulumi. These tools offer capabilities to deploy infrastructure into different cloud providers from a single codebase.
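As a brief sketch of what that workflow looks like with a vendor-neutral tool such as Terraform, the day-to-day loop is just a few commands; running plan on every pull request is a common way to review changes and surface drift before anything is applied:

$ terraform init      # download providers and prepare the working directory
$ terraform plan      # preview changes and surface any drift from the declared state
$ terraform apply     # apply the reviewed changes to the target cloud

The configuration files themselves live in version control, so every infrastructure change goes through the same pull request and code review process as application code.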

Another way of writing IaC is the AWS Cloud Development Kit (CDK), which has unique capabilities that make it a good choice for writing IaC while driving cultural change within your organization. For instance, AWS CDK lets you write automated unit tests for your IaC. From a cultural perspective, it allows developers to write IaC in their preferred programming language, meaning developers can be part of a DevOps team without needing to learn a new language. AWS CDK targets AWS, while cdk8s applies the same model to Kubernetes and the Cloud Development Kit for Terraform (CDKTF) applies it to any provider Terraform supports.

After adopting IaC, it’s also recommended to manage all your configuration as code (CaC). With CaC, you can put the same guardrails (i.e., pull requests) around configuration changes that you require for any code change headed to a production environment.

Pay attention to resource usage

It’s common for new entrants to the cloud to neglect tracking their resource consumption while they’re migrating. Some organizations start with far more resources than they need (on the order of 20 percent extra), while others forget to set up restricted access to avoid overuse. This is why tracking the resource usage of your new cloud infrastructure from day one is very important.

There are a couple of things you can do about this. The first, very high-level solution is to set budget alerts so that you’re notified when your resources start to cost more than they should in a given time period. The next step is to go a level deeper and set up cost consolidation for each resource being used in the cloud. This will help you understand which resources are responsible for overshooting your budget.

The final and very effective solution is to track and audit the usage of all resources in your cloud. This will give you a direct answer as to why a certain resource overshot its expected budget and might even point you towards the root cause and probable solutions for the issue.

Culture and process recommendations for cloud modernization

How cloud modernization impacts your organization’s culture and processes often goes unnoticed. If you really want to implement cloud modernization, you need a drastic mindset shift from every engineer in your organization.

Modernize SDLC processes

Oftentimes, organizations with a more traditional, non-cloud delivery model follow a checklist-based approach for their SDLC. During your cloud modernization journey, existing SDLC processes will need to be enhanced to be able to cope with the faster delivery of new application versions to the production environment. Verifications that are manual today will need to be automated to ensure faster response times. In addition, client feedback needs to flow faster through the organization to be quickly incorporated into software deliverables. Different tools, such as SecureStack and SUSE Manager, can help automate and improve efficiency in your SDLC, as they take away the burden of manually managing rules and policies.

Drive cultural change toward blameless conversations

As your cloud journey continues to evolve and you need to deliver new features faster or quickly fix bugs as they arise, this higher change frequency and higher usage of applications will lead to more incidents and cause disruptions. To avoid attrition and arguments within the DevOps team, it’s important to create a culture of blameless communication. Blameless conversations are the foundation of a healthy DevOps culture.

One way you can do this is by running blameless post-mortems. A blameless post-mortem is usually set up after a negative experience within an organization. In the post-mortem, which is usually run as a meeting, everyone explains his or her view on what happened in a non-accusing, objective way. If you facilitate a blameless post-mortem, you need to emphasize that there is no intention of blaming or attacking anyone during the discussion.

Track key performance metrics

Google’s annual State of DevOps report uses four key metrics to measure DevOps performance: deploy frequency, lead time for changes, time to restore service, and change fail rate. While this article doesn’t focus specifically on DevOps, tracking these four metrics is also beneficial for your cloud modernization journey because it allows you to compare yourself with other industry leaders. Any improvement of key performance indicators (KPIs) will motivate your teams and ensure you reach your goals.
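For example, if every merge to your main branch triggers a production deploy, a rough proxy for deploy frequency can be pulled straight from Git history; the branch name and 30-day window here are assumptions to adapt to your own process:

$ git log --merges --since='30 days ago' --oneline origin/main | wc -l

Dividing that count by 30 gives deploys per day, which you can track month over month alongside the other three metrics.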

One of the key things you can measure is the duration of your modernization project. The project’s duration will directly impact the project’s cost, which is another important metric to pay attention to in your cloud modernization journey.

Ultimately, different companies will prioritize different KPIs depending on their goals. The most important thing is to pick metrics that are meaningful to you. For instance, a software-as-a-service (SaaS) business hosting a rapidly growing consumer website will need to track the time it takes to deliver a new feature (from commit to production). However, this metric isn’t meant for a traditional bank that only updates its software once a year.

You should review your chosen metrics regularly. Are they still in line with your current goals? If not, it’s time to adapt.

Conclusion

Migrating your company to the cloud requires changing the entirety of your applications or workloads. But it doesn’t stop there. In order to effectively implement cloud modernization, you need to adjust your existing operations, software delivery process, and organizational culture.

In this roundup, you learned about some best practices that can help you in your cloud modernization journey. By isolating your applications from the underlying infrastructure, you gain flexibility and the ability to shift your workloads easily between different cloud providers. You also learned how a modern SDLC process can help your organization protect your customers’ data and avoid the reputational damage caused by security breaches.

SUSE supports enterprises of all sizes on their cloud modernization journey through their Premium Technical Advisory Services. If you’re looking to restructure your existing solutions and accelerate your business, SUSE’s cloud native transformation approach can help you avoid common pitfalls and accelerate your business transformation.

Learn more in the SUSE & Rancher Community. We offer free classes on Kubernetes, Rancher, and more to support you on your cloud native learning path.

Managing Your Hyperconverged Network with Harvester

Friday, 22 July, 2022

Hyperconverged infrastructure (HCI) is a data center architecture that uses software to provide a scalable, efficient, cost-effective way to deploy and manage resources. HCI virtualizes and combines storage, computing, and networking into a single system that can be easily scaled up or down as required.

A hyperconverged network, the networking layer of the HCI stack, helps simplify network management for your IT infrastructure and reduce costs by virtualizing your network. Of the three layers, networking is the most complicated to virtualize because you need to abstract the physical controllers and switches while still providing the isolation and bandwidth that storage and compute require. HCI lets organizations simplify their IT infrastructure through a single pane of glass while reducing costs and setup time.

This article will dive deeper into HCI with a new tool from SUSE called Harvester. By using Kubernetes’ Container Network Interface (CNI) mechanisms, Harvester enables you to better manage the network in an HCI. You’ll learn the key features of Harvester and how to use it with your infrastructure.

Why you should use Harvester

The data center market offers plenty of virtualization platforms, but few are both open source and enterprise-grade. Harvester fills that gap. The HCI solution, built on Kubernetes, had garnered about 2,200 GitHub stars as of this writing.

In addition to traditional virtual machines (VMs), Harvester supports containerized environments, bridging the gap between legacy and cloud native IT. Harvester allows enterprises to replicate HCI instances across remote locations while managing these resources through a single pane.

Following are several reasons why Harvester could be ideal for your organization.

Open source solution

Most HCI solutions are proprietary, requiring complicated licenses, high fees and support plans to implement across your data centers. Harvester is a free, open source solution with no license fees or vendor lock-in, and it supports environments ranging from core to edge infrastructure. You can also submit a feature request or an issue on the GitHub repository, where engineers review the suggestions, in contrast to proprietary software that often evolves too slowly for market demands and only supports existing versions.

An active community can help you adopt Harvester and troubleshoot issues. If needed, you can also buy a support plan to receive round-the-clock assistance from SUSE support engineers.

Rancher integration

Rancher is an open source platform from SUSE that allows organizations to run containers in clusters while simplifying operations and providing security features. Harvester and Rancher, developed by the same engineering team, work together to manage VMs and Kubernetes clusters across environments in a single pane.

Importing an existing Harvester installation is as easy as clicking a few buttons on the Rancher virtualization management page. The tight integration enables you to use authentication and role-based access control for multitenancy support across Rancher and Harvester.

This integration also allows for multicluster management and load balancing of persistent storage resources in both VM and container environments. You can deploy workloads to existing VMs and containers on edge environments to take advantage of edge processing and data analytics.

Lightweight architecture

Harvester was built with the ethos and design principles of the Cloud Native Computing Foundation (CNCF), so it’s lightweight with a small footprint. Despite that, it’s powerful enough to orchestrate VMs and support edge and core use cases.

The three main components of Harvester are:

  • Kubernetes: Used as the Harvester base to produce an enterprise-grade HCI.
  • Longhorn: Provides distributed block storage for your HCI needs.
  • KubeVirt: Provides a VM management kit on top of Kubernetes for your virtualization needs.

The best part is that you don’t need experience in these technologies to use Harvester.

What Harvester offers

As an HCI solution, Harvester is powerful and easy to use, with a web-based dashboard for managing your infrastructure. It offers a comprehensive set of features, including the following:

VM lifecycle management

If you’re creating Windows or Linux VMs on the host, Harvester supports cloud-init, which allows you to assign a startup script to a VM instance that runs when the VM boots up.

A cloud-init script can contain custom user data or network configuration and is attached to the VM instance via a temporary disk. With the QEMU guest agent installed, you can also dynamically inject SSH keys into your VM through the dashboard via cloud-init.
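
To give a rough sense of what such user data can look like, the sketch below generates a minimal cloud-config document with an injected SSH key and a command that enables the guest agent. The key value is a placeholder, and the exact options Harvester passes through should be checked against its documentation; this is an illustration of the cloud-init format, not Harvester's own tooling.

```python
import yaml  # PyYAML

# Placeholder values for illustration only.
user_data = {
    "ssh_authorized_keys": [
        "ssh-ed25519 AAAA... user@example.com",  # hypothetical public key
    ],
    "packages": ["qemu-guest-agent"],  # agent used for dynamic SSH key injection
    "runcmd": [
        ["systemctl", "enable", "--now", "qemu-guest-agent"],
    ],
}

# cloud-init expects the document to start with the "#cloud-config" header.
print("#cloud-config\n" + yaml.safe_dump(user_data, sort_keys=False))
```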

Destroying and creating a VM is a click away with a clearly defined UI.

VM live migration support

VMs inside Harvester are created on hosts or bare-metal infrastructure. One of the essential tasks in any infrastructure is reducing downtime and increasing availability. Harvester offers a high-availability solution with VM live migration.

If you want to move your VM to Host 1 while performing maintenance on Host 2, you only need to click Migrate. During the migration, the VM's memory pages and disk blocks are transferred to the new host.

Supported VM backup and restore

Backing up a VM allows you to restore it to a previous state if something goes wrong. This is crucial if you're running a business-critical application on the machine; otherwise, you could lose data or valuable working time if the machine goes down.

Harvester allows you to easily back up your machines in Amazon Simple Storage Service (Amazon S3) or network-attached storage (NAS) devices. After configuring your backup target, click Take Backup on the virtual machine page. You can use the backup to replace or restore a failed VM or create a new machine on a different cluster.
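
If you use S3 as the backup target, you can sanity-check that backup data is actually landing in the bucket with a short script like the one below. The bucket name and prefix are assumptions made for this example; the real backup target is configured in Harvester's settings, and credentials come from your normal AWS configuration.

```python
import boto3  # AWS SDK for Python

# Hypothetical bucket and prefix used as the Harvester backup target.
BUCKET = "harvester-vm-backups"
PREFIX = "backupstore/"

s3 = boto3.client("s3")  # credentials are read from the usual AWS config/env vars

# List the most recently modified objects under the backup prefix.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
objects = sorted(response.get("Contents", []), key=lambda o: o["LastModified"], reverse=True)

for obj in objects[:10]:
    print(f'{obj["LastModified"].isoformat()}  {obj["Size"]:>12}  {obj["Key"]}')
```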

Network interface controllers

Harvester relies on Kubernetes CNI plug-ins to provide networking for VMs. There are two network options available, and you can use either or both, depending on your needs.

Management network

This is the default network for a VM, using the eth0 interface and configured by the Canal CNI plug-in. Because there is no DHCP server, a VM on this network may change its IP address after a reboot, and it is only reachable from within the cluster nodes.

Secondary network

The secondary network uses the Multus and bridge CNI plug-ins to implement a customized Layer 2 bridge VLAN network. VMs are connected to the host network through a Linux bridge and are assigned IPv4 addresses, so they can be reached from both internal and external networks via the physical switch.
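
To give a sense of what this bridge/VLAN wiring looks like at the Kubernetes level, the sketch below builds a Multus NetworkAttachmentDefinition with a bridge CNI configuration. The bridge name and VLAN ID are placeholders; in practice Harvester creates these definitions for you when you add a VM network in the UI, so treat this as an illustration of the underlying mechanism rather than a required manual step.

```python
import json
import yaml  # PyYAML

# Placeholder bridge name and VLAN ID; Harvester manages the real values for you.
cni_config = {
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "vm-bridge",   # hypothetical Linux bridge on the host
    "promiscMode": True,
    "vlan": 100,             # hypothetical VLAN ID
    "ipam": {},              # IPs are handed out by the physical network's DHCP server
}

nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "vlan100", "namespace": "default"},
    # Multus stores the CNI configuration as a JSON string in spec.config.
    "spec": {"config": json.dumps(cni_config)},
}

print(yaml.safe_dump(nad, sort_keys=False))
```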

When to use Harvester

There are multiple use cases for Harvester. The following are some examples:

Host management

The Harvester dashboard lets you view your infrastructure nodes from the Hosts page. Because Harvester builds HCI on Kubernetes, features like live migration are possible, and Kubernetes provides fault tolerance to keep your workloads running on other nodes if one node goes down.

VM management

Harvester offers flexible VM management, with the ability to create Windows or Linux VMs easily and quickly. You can mount volumes to your VMs if needed and switch between the management network and a secondary network, according to your networking strategy.

As noted above, live migration, backups, and cloud-init help manage VM infrastructure.

Monitoring

Harvester has built-in monitoring integration with Prometheus and Grafana, which are installed automatically during setup. You can observe high-level CPU, memory and storage metrics as well as more detailed ones, such as CPU utilization, load average, network I/O and traffic. Metrics are available at both the host level and the individual VM level.

These stats help ensure your cluster is healthy and provide valuable details when troubleshooting your hosts or machines. You can also pop out the Grafana dashboard for more detailed metrics.
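
Because the monitoring stack is standard Prometheus, you can also pull metrics programmatically. The sketch below queries average node CPU usage over Prometheus's HTTP API; the endpoint URL is a placeholder, and how you expose Prometheus (port-forward, ingress, and so on) will depend on your setup.

```python
import requests

# Hypothetical Prometheus endpoint exposed from the cluster.
PROMETHEUS_URL = "http://localhost:9090"

# Average CPU utilization per node over the last 5 minutes, based on node_exporter metrics.
query = '100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))'

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    instance = result["metric"].get("instance", "unknown")
    _ts, value = result["value"]
    print(f"{instance}: {float(value):.1f}% CPU used")
```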

Conclusion

Harvester is the HCI solution you need to manage and improve your hyperconverged infrastructure. The open source tool provides storage, networking and compute in a single pane of glass that's scalable, reliable and easy to use.

Harvester is the latest innovation brought to you by SUSE, the open source leader that provides enterprise Linux and cloud native solutions, such as Rancher and K3s, designed to help organizations more easily achieve digital transformation.

Get started

For more on Harvester or to get started, check the official documentation.