Harvester Integrates with Rancher: What Does This Mean for You?

Thursday, 21 October, 2021

Thousands of new technology solutions are created each year, all designed to serve the open source community of developers and engineers who build better applications. In 2014, Rancher was founded as a software solution aiming to simplify the lives of engineers building applications in the then-new market of containers.

Today, Rancher is a market-leading, open source solution supported by a rich community helping thousands of global organizations deliver Kubernetes at scale.

Harvester is a new project from the SUSE Rancher team. It is a 100% open source, Hyperconverged Infrastructure (HCI) solution that offers the same expected integrations as traditional commercial solutions while also incorporating beneficial components of Kubernetes. Harvester is built on a foundation of cloud native technology to bridge the gap between traditional HCI and cloud native solutions.

Why Is This Significant? 

Harvester addresses the intersection of traditional HCI frameworks and modern containerization strategies. Developed by SUSE’s team of Rancher engineers, Harvester preserves Rancher’s core values: it enriches the open source community as a 100% open, interoperable, and reliable HCI solution that fits any environment while retaining the traditional functions of HCI solutions, helping users efficiently manage and operate their virtual machine workloads.

When Harvester is used with Rancher, it provides cloud native users with a holistic platform to manage new cloud native environments, including Kubernetes and containers alongside legacy Virtual Machine (VM) workloads. Rancher and Harvester together can help organizations modernize their IT environment by simplifying the operations and management of workloads across their infrastructure, reducing the amount of operational debt.

What Can We Expect in the Rancher & Harvester Integration?

Harvester v0.3.0 brings a couple of significant updates to the Rancher integration. The integration with Rancher v2.6.1 gives users extended usability across both platforms, including importing and managing multiple Harvester clusters using the Virtualization Management feature in Rancher v2.6.1. Users can also leverage the authentication mechanisms and RBAC controls for multi-tenancy support available in Rancher.

Harvester users can now provision RKE and RKE2 clusters within Rancher v2.6.1 using the built-in Harvester node driver. Additionally, Harvester now provides built-in load balancer support and raw cluster persistent storage support to guest Kubernetes clusters.

Harvester remains on track to hit its general availability v1.0.0 release later this year.

Learn more about the Rancher and Harvester integration here.  

You can also check out additional feature releases in v0.3.0 of Harvester on GitHub or at harvesterhci.io.

How to Manage Harvester 0.3.0 with Rancher 2.6.1 Running in a VM within Harvester

Wednesday, 20 October, 2021

What I liked about the release of Harvester 0.2.0 was the ease of enabling the embedded Rancher server, which allowed you to create Kubernetes clusters in the same Harvester cluster.

With the release of Harvester 0.3.0, this option was removed in favor of installing Rancher 2.6.1 separately and then importing your Harvester cluster into Rancher, where you could manage it. A Harvester node driver is provided with Rancher 2.6.1 to allow you to create Kubernetes clusters in the same Harvester 0.3.0 cluster.

I replicated my Harvester 0.2.0 plus the Rancher server experience using Harvester 0.3.0 and Rancher 2.6.1.

There’s no upgrade path from Harvester 0.2.0 to 0.3.0, so the first step was reinstalling my Intel NUC with Harvester 0.3.0 following the docs at: https://docs.harvesterhci.io/v0.3/install/iso-install/.

Given that my previous Harvester 0.2.0 install included Rancher, I figured I’d install Rancher in a VM running on my newly installed Harvester 0.3.0 node – but which OS would I use? With Rancher deployed as a single Docker container, I was looking for a small, lightweight OS for the edge that included Docker. From past experience, I knew that openSUSE Leap had slimmed-down images of its distribution available at https://get.opensuse.org/leap/ – click the alternative downloads link immediately under the initial downloads. Known as Just enough OS (JeOS), these are available for both Leap and Tumbleweed (their rolling release). I opted for Leap and created an image using the URL for the OpenStack Cloud image (trust me – the KVM and XEN image hangs on boot).

Knowing that I wanted to be able to access Rancher on the same network my Harvester node was attached to, I also enabled VLAN support (Advanced | Settings | vlan) and created a network using VLAN ID 1 (Advanced | Networks).

The next step is to install Rancher in a VM. While I could do this manually, I prefer automation and wanted to do something I could reliably repeat (something I did a lot while getting this working) and perhaps adapt when installing future versions. When creating a virtual machine, I was intrigued by the user data and network data sections in the advanced options tab, referenced in the docs at https://docs.harvesterhci.io/v0.3/vm/create-vm/, along with some basic examples. I knew from past experience that cloud-init could be used to initialize cloud instances, and with the openSUSE OpenStack Cloud images using cloud-init, I wondered if this could be used here. According to the examples in the cloud-init docs at https://cloudinit.readthedocs.io/en/latest/topics/examples.html, it can!

When creating the Rancher VM, I had to be frugal: with a 4-core NUC and Harvester 0.3.0 not supporting CPU over-provisioning (it’s a bug – phew!), I gave it just 1 CPU. Through trial and error, I also found that the minimum memory required for Rancher to work is 3 GB. I chose my openSUSE Leap 15.3 JeOS OpenStack Cloud image on the volumes tab, and on the networks tab, I chose my custom (VLAN 1) network.

The real work is done on the advanced options tab. I already knew JeOS didn’t include Docker, so that would need to be installed before I could launch the Docker container for Rancher. I also knew the keyboard wasn’t set up for me in the UK, so I wanted to fix that too. Plus, I’d like a message to indicate it was ready to use. I came up with the following User Data:

password: changeme
packages:
  - docker
runcmd:
  - localectl set-keymap uk
  - systemctl enable --now docker
  - docker run --name=rancher -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.1
  - until curl -sk https://127.0.0.1 -o /dev/null; do sleep 30s; done
final_message: Rancher is ready!

Let me go through the above lines:

  • Line 1 sets the password of the default opensuse user – you will be prompted to change this the first time you log in as this user, so don’t set it to anything secret!
  • Lines 2 & 3 install the docker package.
  • Line 4 says we’ll run some commands once it’s booted the first time.
  • Line 5 sets the UK keyboard.
  • Line 6 enables and starts the Docker service.
  • Line 7 pulls and runs the Docker container for Rancher 2.6.1 – this is the same line as in the Harvester docs, except I’ve added "--name=rancher" to make it easier when you need to find the Bootstrap Password later.
    NOTE: When you create the VM, this line will be split into two lines with an additional preceding line with “>-” – it will look a bit different, but it’s nothing to worry about!
  • Line 8 is a loop checking for the Rancher server to become available – I test localhost, so it works regardless of the assigned IP address.
  • Line 9 prints out a message saying it’s finished (which happens after the previous loop completes).
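
The retry loop on line 8 is a handy general pattern. Here is a sketch of the same idea as a reusable helper with a bounded number of attempts (the `wait_for` function is my own addition for illustration, not part of the Harvester docs):

```shell
#!/usr/bin/env bash
# Retry a command until it succeeds or the attempt limit is reached.
# Generalizes the "until curl ...; do sleep ...; done" loop in the user data.
wait_for() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Same idea as line 8 of the user data, but it gives up eventually, e.g.:
#   wait_for 60 curl -sk https://127.0.0.1 -o /dev/null
```

A bounded loop like this is often preferable in automation, since an endless `until` loop will hang the boot sequence forever if the service never comes up.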

An extra couple of lines will be automatically added when you click the create button, but don’t click it yet – we’re not done!

This still left a problem: which IP address would I use to access Rancher? With devices being assigned random IP addresses via DHCP, how do I control which address is used? Fortunately, the Network Data section allows us to set a static address (without having to mess with config files or run custom scripting within the VM):

network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: static
          address: 192.168.144.190/24
          gateway: 192.168.144.254
    - type: nameserver
      address:
        - 192.168.144.254
      search:
        - example.com

I won’t go through all the lines above but will call out those you need to change for your own network:

  • Line 8 sets the IP address to use with the CIDR netmask (/24 means 255.255.255.0).
  • Line 9 sets the default gateway.
  • Line 12 sets the default DNS nameserver.
  • Line 14 sets the default DNS search domain.

See https://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v1.html# for information on the other lines.
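
As a quick aside on the CIDR notation used on line 8, the prefix length maps directly to a dotted-quad netmask. Here is a small bash sketch of the conversion (the `cidr_to_netmask` helper is my own, purely for illustration):

```shell
#!/usr/bin/env bash
# Convert a CIDR prefix length (0-32) to a dotted-quad netmask.
cidr_to_netmask() {
  local prefix=$1
  # Set the top `prefix` bits of a 32-bit value, clear the rest.
  local mask=$(( 0xffffffff ^ ((1 << (32 - prefix)) - 1) ))
  printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8) & 255 ))  $((  mask        & 255 ))
}

cidr_to_netmask 24   # 255.255.255.0
cidr_to_netmask 16   # 255.255.0.0
```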

Unless you unticked “start virtual machine on creation”, your VM should start booting once you click the Create button. If you open the Web Console in VNC, you’ll be able to keep an eye on the progress of your VM. When you see the message “Rancher is ready!”, you can try accessing Rancher in a web browser at the IP address you specified above. Depending on the web browser you’re using and its configuration, you may see warning messages about the self-signed certificate Rancher is using.

The first time you log in to Rancher, you will be prompted for the random bootstrap password which was generated. To get this, you can SSH as the opensuse user to your Rancher VM, then run:

sudo docker logs rancher 2>&1 | grep "Bootstrap Password:"

Copy the password and paste it into the password field of the Rancher login screen, then click the login with Local User button.

You’re then prompted to set a password for the default admin user. Unless you can remember random strings or use a password manager, I’d set a specific password. You also need to agree to the terms and conditions for using Rancher!

Finally, you’re logged into Rancher, but we’re not entirely done yet as we need to add our Harvester cluster. To do this, click on the hamburger menu and then the Virtualization Management tab. Don’t panic if you see a failed whale error – just try reloading.

Clicking the Import Existing button will give you some registration commands to run on one of your Harvester nodes.

To do this, SSH to your Harvester node as the rancher user and run the first kubectl command prefixed with sudo. Unless you’ve changed your Harvester installation, you’ll also need to run the curl command, again prefixing its kubectl portion with sudo. The webpage should then refresh, showing your Harvester cluster’s management page. If you click the Harvester Cluster link or tab, your Harvester cluster should be listed, and clicking on your cluster name should show something familiar!

Finally, we need to activate the Harvester node driver by clicking the hamburger menu and then the Cluster Management tab. Click Drivers, then Node Drivers, find Harvester in the list, and click Activate.

Now we have Harvester 0.3.0 integrated with Rancher 2.6.1, running similarly to Harvester 0.2.0, although sacrificing 1 CPU (which will be less of an issue once the CPU over-provisioning bug is fixed) and 3GB RAM.

Admittedly, running Rancher within a VM in the same Harvester you’re managing through Rancher doesn’t seem like the best plan, and you wouldn’t do it in production, but for the home lab, it’s fine. Just remember not to chop off the branch you’re standing on!

Harvester: Intro and Setup    

Tuesday, 17 August, 2021

I mentioned about a month back that I was using Harvester in my home lab. I didn’t go into much detail, so this post will add some depth. We will cover what Harvester does, as well as my hardware, installation, setup and how to deploy your first virtual machine. Let’s get started.

What is Harvester?

Harvester is Rancher’s open source answer to a hyperconverged infrastructure platform. Like most things Rancher is involved with, it is built on Kubernetes using tools like KubeVirt and Longhorn. KubeVirt is an exciting project that leverages KVM and libvirt to run virtual machines inside Kubernetes; this allows you to run both containers and VMs in your cluster. It reduces operational overhead and provides consistency. This combination of tried and tested technologies provides an open source solution in this space.

It is also designed to be used with bare metal, making it an excellent option for a home lab.

Hardware

If you check the hardware requirements, you will notice they focus more on business usage. So far, my personal experience says that you want at least a 4-core/8-thread CPU, 16GB of RAM, and a large SSD, preferably an NVMe drive. Anything less resource-wise doesn’t leave enough capacity for running many containers or VMs. I will install it on an Intel NUC 8i5BEK, which has an Intel Core i5-8259U, 32GB of RAM, and a 512GB NVMe drive. It can handle running Harvester without any issues. Of course, this is just my experience; yours may differ.

Installation

Harvester ships as an ISO, which you can download on the GitHub Releases page. You can pull it quickly using wget.

$ wget https://releases.rancher.com/harvester/v0.2.0/harvester-amd64.iso

Once you have it downloaded, you will need to create a bootable USB. I typically use Balena Etcher since it is cross-platform and intuitive. Then place it in the machine you want to use and boot from the drive. This screen should greet you:

Select “New Cluster”:

Select the drive you want to use.

Enter your hostname, select your network interface, and make sure you use automatic DHCP.

You will then be prompted to enter your cluster token. This can be any phrase you want; I recommend using your password manager to generate one.

Set a password to use, and remember that the default user name is rancher.

The next several options are attractive, especially if you want to leverage the SSH keys you use with GitHub. Since this is a home lab, I left the SSH keys, proxy and cloud-init setup blank; in an enterprise environment, these would be really useful. Now you will see the final screen before installation. Verify that everything is configured as desired before proceeding.

If it all looks great, proceed with the installation. It will take a few minutes to complete; when it does, you will need to reboot.

After the reboot, the system will start up, and you will see a screen letting you know the URL for Harvester and the system’s status. Wait until it reports that Harvester is ready before trying to connect.

Great! It is now reporting that it is up and running, so it’s now time to set up Harvester.

Initial Setup

We can navigate to the URL listed once the OS boots. Mine is https://harvest:30443. It uses a self-signed certificate by default, so you will see a warning in your browser. Just click on “advanced” to proceed, and accept it. Set a password for the default admin account.

Now you should see the dashboard and the health of the system.

I like to disable the default account and add my own account for authentication. Probably not necessary for a home lab, but a good habit to get into. First, you need to navigate to it.

Now log out and back in with your new account. Once that’s finished, we can create our first VM.

Deploying Your First VM

Harvester has native support for qcow2 images and can import them from a URL. Let’s grab the URL for the openSUSE Leap 15.3 JeOS image.

https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2

The JeOS image for openSUSE is roughly 225MB, which is a perfect size for downloading and creating VMs quickly. Let’s make the image in Harvester.

Create a new image, and add the URL above as the image URL.

You should now see it listed.

Now we can create a VM using that image. Navigate to the VM screen.

Once we’ve made our way to the VM screen, we’ll create a new VM.

When that is complete, the VM will show up in the list. Wait until it has been started, then you can start using it.

Wrapping Up

In this article, I wanted to show you how to set up VMs with Harvester, even starting from scratch! There are plenty of features to explore and plenty more on the roadmap. This project is still early in its life, so now is a great time to jump in and get involved with its direction.

Hyperconverged Infrastructure and Harvester

Monday, 2 August, 2021

Virtual machines (VMs) have transformed infrastructure deployment and management. VMs are so ubiquitous that I can’t think of a single instance where I deployed production code to a bare metal server in my many years as a professional software engineer.

VMs provide secure, isolated environments hosting your choice of operating system while sharing the resources of the underlying server. This allows resources to be allocated more efficiently, reducing the cost of over-provisioned hardware.

Given the power and flexibility provided by VMs, it is common to find many VMs deployed across many servers. However, managing VMs at this scale introduces challenges.

Managing VMs at Scale

Hypervisors provide comprehensive management of the VMs on a single server. The ability to create new VMs, start and stop them, clone them, and back them up are exposed through simple management consoles or command-line interfaces (CLIs).

But what happens when you need to manage two servers instead of one? Suddenly you find yourself having first to gain access to the appropriate server to interact with the hypervisor. You’ll also quickly find that you want to move VMs from one server to another, which means you’ll need to orchestrate a sequence of shutdown, backup, file copy, restore and boot operations.

Routine tasks performed on one server become just that little bit more difficult with two, and quickly become overwhelming with 10, 100 or 1,000 servers.

Clearly, administrators need a better way to manage VMs at scale.

Hyperconverged Infrastructure

This is where Hyperconverged Infrastructure (HCI) comes in. HCI is a marketing term rather than a strict definition. Still, it is typically used to describe a software layer that abstracts the compute, storage and network resources of multiple (often commodity or whitebox) servers to present a unified view of the underlying infrastructure. By building on top of the virtualization functionality included in all major operating systems, HCI allows many systems to be managed as a single, shared resource.

With HCI, administrators no longer need to think in terms of VMs running on individual servers. New hardware can be added and removed as needed. VMs can be provisioned wherever there is appropriate capacity, and operations that span servers, such as moving VMs, are as routine with 2 servers as they are with 100.

Harvester

Harvester, created by Rancher, is open source HCI software built using Kubernetes.

While Kubernetes has become the defacto standard for container orchestration, it may seem like an odd choice as the foundation for managing VMs. However, when you think of Kubernetes as an extensible orchestration platform, this choice makes sense.

Kubernetes provides authentication, authorization, high availability, fault tolerance, CLIs, software development kits (SDKs), application programming interfaces (APIs), declarative state, node management, and flexible resource definitions. All of these features have been battle tested over the years with many large-scale clusters.

More importantly, Kubernetes orchestrates many kinds of resources beyond containers. Thanks to the use of custom resource definitions (CRDs), and custom operators, Kubernetes can describe and provision any kind of resource.

By building on Kubernetes, Harvester takes advantage of a well tested and actively developed platform. With the use of KubeVirt and Longhorn, Harvester extends Kubernetes to allow the management of bare metal servers and VMs.
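
To make this concrete, here is roughly what a VM looks like when expressed as a KubeVirt custom resource. This is a minimal sketch only: the VM name and container disk image are illustrative, and Harvester creates and manages such objects for you, so end users never write this by hand.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm              # illustrative name
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            cpu: "1"
            memory: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:     # root disk served from a container image
            image: quay.io/containerdisks/fedora:latest
```

Because the VM is just another Kubernetes resource, it automatically inherits the declarative state, RBAC, and API machinery described above.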

Harvester is not the first time VM management has been built on top of Kubernetes; Rancher’s own RancherVM is one such example. But these solutions have not been as popular as hoped:

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators.

To address this, Harvester does not expose the underlying Kubernetes platform to the end user. Instead, it presents more familiar concepts like VMs, NICs, ISO images and disk volumes. This allows Harvester to take advantage of Kubernetes while giving administrators a more traditional view of their infrastructure.

Managing VMs at Scale

The fusion of Kubernetes and VMs provides the ability to perform common tasks such as VM creation, backups, restores, migrations, SSH-Key injection and more across multiple servers from one centralized administration console.

Consolidating virtualized resources like CPU, memory, network, and storage allows for greater resource utilization and simplified administration, allowing Harvester to satisfy the core premise of HCI.

Conclusion

HCI abstracts the resources exposed by many individual servers to provide administrators with a unified and seamless management interface, providing a single point to perform common tasks like VM provisioning, moving, cloning, and backups.

Harvester is an HCI solution leveraging popular open source projects like Kubernetes, KubeVirt, and Longhorn, but with the explicit goal of not exposing Kubernetes to the end user.

The end result is an HCI solution built on the best open source platforms available while still providing administrators with a familiar view of their infrastructure.

Download Harvester from the project website and learn more from the project documentation.

Meet the Harvester developer team! Join our free Summer is Open session on Harvester: Tuesday, July 27 at 12pm PT and on demand. Get details about the project, watch a demo, ask questions and get a challenge to complete offline.

Announcing Harvester Beta Availability

Friday, 28 May, 2021

It has been five months since we announced project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Since then, we’ve received a lot of feedback from the early adopters. This feedback has encouraged us and helped in shaping Harvester’s roadmap. Today, I am excited to announce the Harvester v0.2.0 release, along with the Beta availability of the project!

Let’s take a look at what’s new in Harvester v0.2.0.

Raw Block Device Support

We’ve added raw block device support in v0.2.0. Since it’s a change that’s mostly under the hood, the updates might not be immediately obvious to end users, so let me explain in more detail:

In Harvester v0.1.0, the image to VM flow worked like this:

  1. Users added a new VM image.

  2. Harvester downloaded the image into the built-in MinIO object store.

  3. Users created a new VM using the image.

  4. Harvester created a new volume, and copied the image from the MinIO object store.

  5. The image was presented to the VM as a block device, but it was stored as a file in the volume created by Harvester.

This approach had a few issues:

  1. Read/write operations to the VM volume needed to be translated into reading/writing the image file, which performed worse compared to reading/writing the raw block device, due to the overhead of the filesystem layer.

  2. If one VM image was used multiple times by different VMs, it was replicated many times in the cluster. This is because each VM had its own copy of the volume, even though the majority of the content was likely the same since it came from the same image.

  3. The dependency on MinIO to store the images meant Harvester had to keep MinIO highly available and expandable, which placed an extra burden on the Harvester management plane.

In v0.2.0, we took another approach to tackle the problem, resulting in a simpler solution with better performance and less duplicated data:

  1. Instead of an image file on the filesystem, now we’re providing the VM with raw block devices, which allows for better performance for the VM.

  2. We’ve taken advantage of a new feature called Backing Image in Longhorn v1.1.1 to reduce unnecessary copies of the VM image. The VM image is now served as a read-only layer shared by all the VMs using it, and Longhorn is responsible for creating a copy-on-write (COW) layer on top of the image for each VM to use.

  3. Since Longhorn now manages VM images via the Backing Image feature, the dependency on MinIO can be removed.
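
Under the hood, an imported image becomes a Longhorn custom resource along these lines. This is a rough sketch only: exact field names vary between Longhorn versions, the name and URL are placeholders, and Harvester creates this object for you.

```yaml
apiVersion: longhorn.io/v1beta1
kind: BackingImage
metadata:
  name: opensuse-leap-15-3          # illustrative name
  namespace: longhorn-system
spec:
  # The read-only base layer shared by every VM volume built from this
  # image; each VM volume adds its own copy-on-write layer on top.
  sourceType: download
  sourceParameters:
    url: https://example.com/openSUSE-Leap-15.3.qcow2   # placeholder URL
```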

Image 02
A comprehensive view of images in Harvester

From the user experience perspective, you may have noticed that importing an image is now instantaneous, while starting a VM based on a new image takes a bit longer due to the image download in Longhorn. After that, any other VMs using the same image will boot significantly faster than in the previous v0.1.0 release, and disk I/O performance will be better as well.

VM Live Migration Support

In preparation for the future upgrade process, VM live migration is now supported in Harvester v0.2.0.

VM live migration allows a VM to migrate from one node to another, without any downtime. It’s mostly used when you want to perform maintenance work on one of the nodes or want to balance the workload across the nodes.

One thing worth noting: when using the default management network, a VM’s IP address may change after migration. We therefore highly recommend using a VLAN network instead; otherwise, you might not be able to keep the same IP for the VM after it migrates to another node.

You can read more about live migration support here.

VM Backup Support

We’ve added VM backup support to Harvester v0.2.0.

The backup support provides a way for you to back up your VMs outside of the cluster.

To use the backup/restore feature, you need an S3-compatible endpoint or an NFS server; the backup destination is referred to as the backup target.
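
For illustration, the backup target value typically takes one of two forms (the bucket, region, server address and export path below are placeholders; check the Harvester docs for the exact syntax):

```
# S3-compatible endpoint:
s3://backup-bucket@us-east-1/

# NFS server:
nfs://192.168.1.100:/mnt/backups
```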

You can get more details on how to set up the backup target in Harvester here.

Image 03
Easily manage and operate your virtual machines in Harvester

In the meantime, we’re also working on a snapshot feature for VMs. In contrast to the backup feature, snapshots store the image state inside the cluster, giving VMs the ability to revert to a previous snapshot. Unlike a backup, no data is copied outside the cluster for a snapshot, so it’s a quick way to try something experimental – but not ideal for keeping your data safe if the cluster goes down.

PXE Boot Installation Support

PXE boot installation is widely used in data centers to automatically populate bare-metal nodes with the desired operating system. We’ve added PXE boot installation in Harvester v0.2.0 to help users who have a large number of servers and want a fully automated installation process.

You can find more information regarding how to do the PXE boot installation in Harvester v0.2.0 here.

We’ve also provided a few examples of doing iPXE on public bare-metal cloud providers, including Equinix Metal. More information is available here.

Rancher Integration

Last but not least, Harvester v0.2.0 now ships with a built-in Rancher server for Kubernetes management.

This was one of the most requested features since we announced Harvester v0.1.0, and we’re very excited to deliver the first version of the Rancher integration in the v0.2.0 release.

For v0.2.0, you can use the built-in Rancher server to create Kubernetes clusters on top of your Harvester bare-metal clusters.

To start using the built-in Rancher in Harvester v0.2.0, go to Settings, then set the rancher-enabled option to true. Now you should be able to see a Rancher button on the top right corner of the UI. Clicking the button takes you to the Rancher UI.

Harvester and Rancher share the authentication process, so once you’re logged in to Harvester, you don’t need to redo the login process in Rancher and vice versa.

If you want to create a new Kubernetes cluster using Rancher, you can follow the steps here. A reminder that VLAN networking needs to be enabled for creating Kubernetes clusters on top of Harvester, since the default management network cannot guarantee a stable IP for the VMs, especially after reboot or migration.

What’s Next?

Now with v0.2.0 behind us, we’re working on the v0.3.0 release, which will be the last feature release before Harvester reaches GA.

We’re working on many things for v0.3.0 release. Here are some highlights:

  • Built-in load balancer
  • Rancher 2.6 integration
  • Replace K3OS with a small-footprint OS designed for container workloads
  • Multi-tenant support
  • Multi-disk support
  • VM snapshot support
  • Terraform provider
  • Guest Kubernetes cluster CSI driver
  • Enhanced monitoring

You can get started today and give Harvester v0.2.0 a try via our website.

Let us know what you think via the Rancher User Slack #harvester channel. And start contributing by filing issues and feature requests via our GitHub page.

Enjoy Harvester!

Announcing Harvester: Open Source Hyperconverged Infrastructure (HCI) Software

Wednesday, 16 December, 2020

Today, I am excited to announce project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Harvester provides fully integrated virtualization and storage capabilities on bare-metal servers. No Kubernetes knowledge is required to use Harvester.

Why Harvester?

In the past few years, we’ve seen many attempts to bring VM management into container platforms, including our own RancherVM, and other solutions like KubeVirt and Virtlet. We’ve seen some demand for solutions like this, mostly for running legacy software side by side with containers. But in the end, none of these solutions have come close to the popularity of industry-standard virtualization products like vSphere and Nutanix.

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators. They are familiar with concepts like ISO images, disk volumes, NICs and VLANS – not concepts like pods and PVCs.

Enter Harvester.

Project Harvester is an open source alternative to traditional proprietary hyperconverged infrastructure software. Harvester is built on top of cutting-edge open source technologies including Kubernetes, KubeVirt and Longhorn. We’ve designed Harvester to be easy to understand, install and operate. Users don’t need to understand anything about Kubernetes to use Harvester and enjoy all the benefits of Kubernetes.

Harvester v0.1.0

Harvester v0.1.0 has the following features:

Installation from ISO

You can download the ISO from the release page on GitHub and install it directly on bare-metal nodes. During the installation, you can choose to create a new cluster or add the current node into an existing cluster. Harvester will automatically create a cluster based on the information you provide.

Install as a Helm Chart on an Existing Kubernetes Cluster

For development purposes, you can install Harvester on an existing Kubernetes cluster. The nodes must be able to support KVM through either hardware virtualization (Intel VT-x or AMD-V) or nested virtualization.

VM Lifecycle Management

Powered by KubeVirt, Harvester supports creating/deleting/updating operations for VMs, as well as SSH key injection and cloud-init.

Harvester also provides a graphical console and a serial port console for users to access the VM in the UI.

Storage Management

Harvester has a built-in, highly available block storage system powered by Longhorn. It uses the storage space on the nodes to provide highly available storage to the VMs inside the cluster.

Networking Management

Harvester provides several different options for networking.

By default, each VM inside Harvester will have a management NIC, powered by Kubernetes overlay networking.

Users can also add additional NICs to the VMs. Currently, VLAN is supported.

The multi-network functionality in Harvester is powered by Multus.
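Under the hood, Multus models each additional network as a NetworkAttachmentDefinition. A VLAN-backed attachment could look roughly like the following sketch (the bridge name and VLAN ID are illustrative, and Harvester normally generates this resource for you):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100
spec:
  # Bridge CNI configuration tagging traffic with VLAN 100;
  # "harvester-br0" is an assumed bridge name for illustration.
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "harvester-br0",
      "vlan": 100,
      "ipam": {}
    }
```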

Image Management

Harvester has a built-in image repository, allowing users to easily download/manage new images for the VMs inside the cluster.

The image repository is powered by MinIO.

Image 01

Install

To install Harvester, just load the Harvester ISO into your bare-metal machine and boot it up.

Image 02

For the first node where you install Harvester, select Create a new Harvester cluster.

Next, you will be prompted to set the password used to log in to the host console, as well as the cluster token. The cluster token is needed later by any other node that wants to join the same cluster.

Image 03

Then you will be prompted to choose the NIC that Harvester will use. The selected NIC will be used as the network for the management and storage traffic.

Image 04

Once everything has been configured, you will be prompted to confirm the installation of Harvester.

Image 05

Once installed, the host will be rebooted and boot into the Harvester console.

Image 06

Later, when you are adding a node to the cluster, you will be prompted to enter the management address (which is shown above) as well as the cluster token you’ve set when creating the cluster.

See here for a demo of the installation process.

Alternatively, you can install Harvester as a Helm chart on your existing Kubernetes cluster, if the nodes in your cluster have hardware virtualization support. See here for more details. And here is a demo using DigitalOcean, which supports nested virtualization.

Usage

Once installed, you can use the management URL shown in the Harvester console to access the Harvester UI.

The default user name/password is documented here.

Image 07

Once logged in, you will see the dashboard.

Image 08

The first step to create a virtual machine is to import an image into Harvester.

Select the Images page and click the Create button. Fill in the URL field, and the image name will be filled in automatically.

Image 09

Then click Create to confirm.

You will see the real-time progress of creating the image on the Images page.

Image 10

Once the image has been created, you can start creating VMs that use it.

Select the Virtual Machine page, and click Create.

Image 11

Fill in the parameters needed for creation, including volumes, networks, cloud-init, etc. Then click Create.

The VM will be created shortly.

Image 12

Once created, click the Console button to get access to the console of the VM.

Image 13

See here for a UI demo.

Current Status and Roadmap

Harvester is in the early stages. We’ve just released the v0.1.0 (alpha) release. Feel free to give it a try and let us know what you think.

We have the following items in our roadmap:

  1. Live migration support
  2. PXE support
  3. VM backup/restore
  4. Zero downtime upgrade

If you need any help with Harvester, please join us at either our Rancher forums or Slack, where our team hangs out.

If you have any feedback or questions, feel free to file an issue on our GitHub page.

Thank you and enjoy Harvester!

SUSE Virtualization – Enforcing Admission Resource Integrity With Validating Admission Policy

Wednesday, 26 March, 2025

With more enterprises using SUSE Virtualization (formerly Harvester) as the bedrock virtualization platform for their modern cloud-native AI and edge workloads, it’s important that the platform provides seamless built-in guardrails to validate and sanitize resources admitted into the environment. Invalid resource specifications can leave the platform, its guest clusters and user workloads in non-compliant, compromised states, with the potential for data corruption, data loss and unauthorized exposure.

This article explains how SUSE Virtualization utilizes Kubernetes’ dynamic admission webhooks[1] and validating admission policies[2] to ensure the integrity of workload resources admitted into your virtualization platform. It shows how SUSE Virtualization administrators can write custom validation policies to protect their heterogeneous multi-tenant environments, using familiar Kubernetes tools. It also reduces deployment and delivery friction for workload owners by giving them insight into the built-in validation policies.

Admission Resource Validation

Kubernetes’ dynamic admission webhooks allow platform administrators and workload owners to encode validation policies using modern programming languages, to interrogate resources ingested into the platform. For example, a validation policy might enforce rules that prohibit all admitted pods from exposing port 80 and binding to host path volumes.

The webhooks validate API requests that intend to modify the state of Kubernetes resources like pods, virtual machines, persistent volumes and storage classes. Inputs that fail the validation criteria are rejected by the platform, with human-readable error messages logged and rendered in the SUSE Virtualization UI.

Recently, the SUSE Virtualization team has taken admission resource validation a step further by bundling the platform with out-of-the-box Kubernetes validating admission policies. This recent Kubernetes feature provides SUSE Virtualization administrators and workload owners with powerful tools to express and maintain custom admission validation policies.

Before diving deeper into both the admission webhook and admission policy, let’s take a brief look at how Kubernetes’ admission control flow works.

About Admission Control

The admission control flow[3] is made up of a number of admission controllers: gatekeeping plugins within the Kubernetes API Server that intercept and check authenticated API requests sent to interact with Kubernetes resources. These controllers can mutate and validate Kubernetes resources, and allow or deny the API requests.

Kubernetes users can introduce custom validation policies into the admission control flow either by embedding them in validating admission webhooks or creating validating admission policy resources. Both the webhooks and policy resources run during the validating phase of the admission control flow.

The validating admission webhooks are implemented as 3rd party components that the Kubernetes API server communicates with. They serve HTTP callback endpoints that accept admission requests from the Kubernetes API server. These webhooks evaluate the admission object against a set of validation rules to confirm compliance. The outcome of the evaluation, which includes the decision to accept or reject the request, is encapsulated in admission responses sent back to the Kubernetes API server.

The validating admission policy is an alternative to admission webhooks. It is part of Kubernetes’ core admissionregistration.k8s.io/v1 API group. This API is implemented on top of Kubernetes’ built-in policy framework, where validation rules are expressed in the Common Expression Language[4] (CEL). The CEL validation rules are evaluated and run directly in the Kubernetes API Server. The CEL language framework is lightweight and safe, supported by a straightforward syntax and grammar, with built-in pre-parsing and type-checking mechanisms.
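To make the CEL framework concrete, the port-80/host-path example from earlier could be expressed as validation rules roughly like these (a sketch, not a policy shipped with the platform):

```yaml
# Hypothetical CEL rules rejecting pods that expose port 80
# or mount hostPath volumes.
validations:
  - expression: >-
      object.spec.containers.all(c,
        !has(c.ports) || c.ports.all(p, p.containerPort != 80))
    message: "pods must not expose port 80"
  - expression: >-
      !has(object.spec.volumes) ||
      object.spec.volumes.all(v, !has(v.hostPath))
    message: "pods must not use hostPath volumes"
```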

Both admission webhooks and admission policies offer a flexible and expressive framework for implementing validation rules that satisfy many admission validation use cases.

Validating Admission Webhook In SUSE Virtualization

SUSE Virtualization comes bundled with a collection of admission webhooks. Some of these webhooks are maintained by SUSE Virtualization maintainers while others are maintained by upstream open source projects like KubeVirt[5].

This section provides Linux commands to help SUSE Virtualization administrators examine these admission webhooks.

All subsequent commands are tested with:
  • SUSE Virtualization 1.4.1
  • kubectl v1.31.2
  • yq v4.45.1

 

From a terminal with the appropriate kubeconfig settings to access SUSE Virtualization v1.4, run the following command to see the list of validating webhook configuration resources:


$ kubectl get validatingwebhookconfiguration
NAME                                    WEBHOOKS   AGE
harvester-load-balancer-webhook         1          13d
harvester-network-webhook               1          13d
harvester-node-disk-manager-webhook     1          13d
harvester-node-manager-webhook          1          13d
harvester-snapshot-validation-webhook   1          13d
harvester-validator                     1          13d
longhorn-webhook-validator              1          13d
rancher.cattle.io                       7          13d
rke2-ingress-nginx-admission            1          13d
validating-webhook-configuration        12         13d
virt-api-validator                      19         13d
virt-operator-validator                 3          13d

These are resources that provide the necessary configuration to facilitate the admission validation exchanges between the Kubernetes API Server and SUSE Virtualization. 

Using the harvester-validator as an example, the following command shows that there is a webhook named harvester-webhook in the harvester-system namespace, listening at :443/v1/webhook/validation:

 

$ kubectl get validatingwebhookconfiguration harvester-validator -oyaml | yq '.webhooks[0].clientConfig.service'
name: harvester-webhook
namespace: harvester-system
path: /v1/webhook/validation
port: 443

The Kubernetes API Server uses this information to locate the webhook’s HTTPS endpoint. The .clientConfig.caBundle property (not shown) holds the CA certificate used to secure the TLS communication between the Kubernetes API Server and the platform.

The webhooks[0].rules section describes the admission events that the webhook watches. Each admission event is a tuple composed of the API version, operation and resource kind.

For example, the following command identifies the resources whose update events are forwarded to the harvester-webhook for admission validation:

$ kubectl get validatingwebhookconfiguration harvester-validator -oyaml | yq '.webhooks[0].rules[] | select(.operations[] == "UPDATE") | .resources'
- nodes
- persistentvolumeclaims
- keypairs
- virtualmachines
- virtualmachines/status
- virtualmachineimages
- virtualmachinebackups
- virtualmachinerestores
- settings
- virtualmachinetemplateversions
- storageclasses
- namespaces
- addons
- versions
- resourcequotas
- schedulevmbackups

Using KubeVirt’s virtualmachines.kubevirt.io API as an example, when an update operation is performed on an instance of the resource, the harvester-webhook ensures that:

  • The virtual machine (VM) specification is well-formed, with the termination grace period and reserved memory defined
  • The persistent volume claim (PVC) referenced by the VM’s volume claim isn’t already claimed by another VM
  • The resource requirements of the VM do not exceed the upper bounds defined by the owning namespace’s resource quota

These checks ensure that a VM update operation doesn’t put the platform and workloads into incoherent states, such as multiple virtual machines attempting to write to the same volume concurrently.

The implementations of the SUSE Virtualization validation webhooks can be found at https://github.com/harvester/harvester, made available under the open source Apache 2.0 license.

The webhook admission configuration includes other properties relevant to debugging and error handling scenarios:

  • failurePolicy – Defines how unrecognized errors from the admission endpoint are handled. Supported values are Ignore and Fail (the default).
  • timeoutSeconds – Specifies the timeout for the webhook call. After the timeout passes, the call is either ignored or the API request fails, depending on the failure policy. The value must be between 1 and 30 seconds; the default is 10 seconds.
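For example, an excerpt of a webhook configuration using these properties might look like the following (the webhook name and values are illustrative):

```yaml
# Illustrative ValidatingWebhookConfiguration excerpt showing the
# error-handling properties described above.
webhooks:
  - name: validator.example.harvesterhci.io   # hypothetical webhook name
    failurePolicy: Fail        # reject the request if the webhook errors out
    timeoutSeconds: 10         # fall back to failurePolicy after 10s
    clientConfig:
      service:
        name: harvester-webhook
        namespace: harvester-system
        path: /v1/webhook/validation
        port: 443
```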

For more information on the validating webhook configuration, see the Kubernetes API documentation[6].

Validating Admission Policy In SUSE Virtualization

This section describes features that are available only in SUSE Virtualization 1.5.

 

Admission webhooks normally come with non-negligible infrastructure, software and security maintenance overheads. Since the validation code is compiled into the webhooks, any attempt to fix bugs or address new security vulnerabilities requires releasing new versions of the webhooks. Moreover, there isn’t a way to temporarily disable a subset of the validation rules to accommodate upgrade exceptions; disabling the validation rules means bringing down the webhooks.

SUSE Virtualization 1.5 starts utilizing the validating admission policy as an alternative to admission webhooks in order to address these shortcomings.

The validating admission policy is composed of two core APIs:

  • ValidatingAdmissionPolicy – Defines a collection of CEL-based validation rules used to validate admission requests. It specifies the kinds of resources the policy applies to, the kinds of resources that can be used to parameterize the validation rules, and how rule violations are reported.
  • ValidatingAdmissionPolicyBinding – Enables the validation policy by defining the matching-resource criteria using namespace and object selectors. It also references the specific parameter resource instances used to interpret the parameters within the validation rules. Without the binding definition, the policy has no effect on the rest of the cluster.

The parameter resources can be represented by native types such as ConfigMap or custom CRD types, scoped to either the cluster level or namespace level.

In SUSE Virtualization 1.5, SUSE Virtualization administrators can modify the pod and service CIDR settings. An admission validation policy is added to ensure immutability of these CIDRs post-installation. Specifically, this policy prevents the CIDRs from being tampered with during node promotion[7], which may jeopardize the underlying pod and service networking.

 

From a terminal, run the following command to examine the harvester-immutable-promote-cidr validating admission policy resource:

$ kubectl get validatingadmissionpolicy
NAME                               VALIDATIONS   PARAMKIND   AGE
harvester-immutable-promote-cidr   1             <unset>     2m28s

The validation rules are defined within the .spec.validations[0].expression property of the harvester-immutable-promote-cidr policy resource:

$ kubectl get validatingadmissionpolicy harvester-immutable-promote-cidr -oyaml | yq '.spec.validations[0].expression'
(variables.oldPodCIDR == "" || variables.newPodCIDR == variables.oldPodCIDR) &&
(variables.oldServiceCIDR == "" || variables.newServiceCIDR == variables.oldServiceCIDR) &&
(variables.oldClusterDNS == "" || variables.newClusterDNS == variables.oldClusterDNS)

The rules are expressed using CEL. Administrators can easily examine these rules using familiar tools like kubectl. The rules simply say that during an update, the pod CIDR, service CIDR and DNS service IP must remain unchanged. If any one of these conditions fails, the update operation will be rejected.

Doing It Yourself

One of the main benefits of the validating admission policy is that it belongs to Kubernetes’ core admissionregistration.k8s.io/v1 API group. SUSE Virtualization users can use it to satisfy their admission validation requirements without any additional installation.

The new support for 3rd party storage solutions[8] enables SUSE Virtualization users to use other CSI providers besides Longhorn to provision storage for their VM images and root disks. SUSE Virtualization utilizes KubeVirt Containerized Data Importer (CDI) to provision PVCs that can be used as image disks for KubeVirt VMs.

Imagine a SUSE Virtualization platform managing two tenants which map to guest clusters gcluz-us-west and gcluz-us-east. The existing data centers’ setups provide cluster gcluz-us-west with the flexibility to choose from a list of 3rd party storage providers. Meanwhile, tenants of the gcluz-us-east cluster have access to the default Longhorn provisioner only.

The following validating admission policy would be a fitting solution for this scenario:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: vmimage-root-disk-provisioners
spec:
  failurePolicy: Fail
  paramKind:
    apiVersion: v1
    kind: ConfigMap
  matchConstraints:
    resourceRules:
    - apiGroups:   ["harvesterhci.io"]
      apiVersions: ["v1beta1"]
      operations:  ["CREATE","UPDATE"]
      resources:   ["virtualmachineimages"]
      scope: Namespaced
  validations:
  - expression: object.spec.backend in params.data.backends.split(",")
    messageExpression: "'Failed to create VM image. Unsupported backend storage: ' + object.spec.backend"
    reason: Invalid
  - expression: "!has(params.data.targetStorageClassNames) || !has(object.spec.targetStorageClassName) || object.spec.targetStorageClassName in params.data.targetStorageClassNames.split(',')"
    messageExpression: "'Failed to create VM image. Unsupported target storage class: ' + object.spec.targetStorageClassName"
    reason: Invalid

The important validation rules are defined within the .spec.validations property. They ensure all newly created or updated virtual machine images use only the backend and targetStorageClassName defined in the params resources.

The params resources are represented by two separate ConfigMaps which hold the respective configurations for the gcluz-us-west and gcluz-us-east clusters:

apiVersion: v1
kind: ConfigMap
metadata:
  name: policy-vmimage-root-disk-provisioners
  namespace: gcluz-us-west
data:
  backends: "cdi,backingimage"
  targetStorageClassNames: "v2-single-replica,lvm-striped,harvester-longhorn"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: policy-vmimage-root-disk-provisioners
  namespace: gcluz-us-east
data:
  backends: "backingimage"
  targetStorageClassNames: "harvester-longhorn"

Then we create two ValidatingAdmissionPolicyBinding resources to bind the namespace-scoped parameters to the policy:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: gcluz-us-west-vmimage-root-disk-provisioners
spec:
  policyName: vmimage-root-disk-provisioners
  validationActions: [Deny]
  paramRef:
    name: policy-vmimage-root-disk-provisioners
    namespace: gcluz-us-west
    parameterNotFoundAction: Deny
  matchResources:
    namespaceSelector:
      matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values: ["gcluz-us-west"]
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: gcluz-us-east-vmimage-root-disk-provisioners
spec:
  policyName: vmimage-root-disk-provisioners
  validationActions: [Deny]
  paramRef:
    name: policy-vmimage-root-disk-provisioners
    namespace: gcluz-us-east
    parameterNotFoundAction: Deny
  matchResources:
    namespaceSelector:
      matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: In
        values: ["gcluz-us-east"]

With these resources deployed, any attempts to create virtual machine images which violate the policies will automatically be rejected by SUSE Virtualization.

For example, an update request that modifies a virtual machine image in the gcluz-us-east namespace to use the unapproved csi-hostpath-sc target storage class will be rejected:

Error message showing policy violation

 

If the rancher-monitoring add-on is enabled[9], key metrics such as the total number of policy violations, policy evaluation rate and execution latencies can be observed using Prometheus:

Metrics on total policy evaluation occurrences

Metrics on rate of policy evaluations

Metrics on p95 policy evaluation latencies

Conclusion

This article describes how SUSE Virtualization utilizes Kubernetes’ dynamic admission webhook and validating admission policy to ensure the integrity of workload resources admitted into your virtualization platform.

Since its inception, SUSE Virtualization has come with a collection of built-in validating admission webhooks to ensure that critical resources like pods, virtual machines and persistent storage are protected against incoherent inputs that might destabilize the state of the platform and user workloads.

In SUSE Virtualization 1.5, users can utilize Kubernetes’ validating admission policy API to introduce custom validation policies into the admission control workflow, without additional installation. With the help of the rancher-monitoring add-on, meaningful metrics can be collected to show policy violation occurrences, policy evaluation rates and latencies.

With more exciting new features planned, SUSE Virtualization will continue to extend and improve both its admission webhooks and the admission policies used to safeguard the platform’s state. Platform administrators will gain more visibility into the scope, error handling and controls of these validation policies.

According to IDC, organizations using SUSE Rancher Prime with Virtualization achieve up to 258% ROI or $3.4 million in average benefits per year.

SUSE Virtualization is a modern, open, interoperable, hyperconverged infrastructure (HCI) solution built on Kubernetes. It is an open-source alternative designed for operators seeking a cloud-native HCI solution. You can get it up and running today by following the “Getting Started” instructions at https://docs.harvesterhci.io.

References

[1] https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/

[2] https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/

[3] https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/

[4] https://cel.dev/

[5] https://kubevirt.io/

[6] https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#validatingwebhook-v1-admissionregistration-k8s-io

[7] https://docs.harvesterhci.io/v1.5/host/ 

[8] https://github.com/harvester/harvester/pull/7640

[9] https://docs.harvesterhci.io/v1.4/monitoring/harvester-monitoring/

Introducing the Rancher CVE Portal: Enhanced Transparency and Security for Your Rancher Workloads

Friday, 7 March, 2025

At SUSE, we’re always looking for ways to make it easier for customers to maintain secure, enterprise-grade environments. The Rancher Security team is excited to announce the public beta launch of the Rancher CVE Portal, available now at scans.rancher.com. This new resource is a significant step forward in providing clear, actionable visibility into vulnerabilities affecting Rancher and its associated dependencies.

The portal represents our commitment to security and transparency, offering customers and users an up-to-date, centralized source of critical and high-severity Common Vulnerabilities and Exposures (CVEs) for Rancher-related images. This has been a longstanding customer request, and we’re thrilled to deliver a solution that streamlines access to this critical information.

What is the Rancher CVE Portal?

The Rancher CVE Portal provides a curated list of vulnerabilities for Rancher and related solutions, including but not limited to: Rancher, RKE2, Longhorn and Harvester.

The portal covers the latest stable versions, as well as development and head versions, for all supported release lines. CVEs are organized in tables by version, with raw CSV data also available for download.

This portal serves as the single source of truth for all internally identified critical and high-severity CVEs in our container images. Whether you’re a customer managing production workloads or an open-source user evaluating vulnerabilities, the portal makes it easy to stay informed.

Additionally, the public repository used to build the site is available on GitHub, ensuring full transparency and alignment with our broader community practices.

Enterprise-Grade Security for SUSE® Rancher Prime Customers

For SUSE® Rancher Prime customers, this portal is part of SUSE’s broader commitment to secure software supply chains. It’s not just about identifying vulnerabilities but ensuring they are addressed promptly and effectively:

  • Timely CVE Patching: SUSE® Rancher Prime customers benefit from rapid response to critical and high-severity CVEs, with patches provided on a priority basis to keep your infrastructure secure.
  • Streamlined Updates: Updates aligned with SUSE’s enterprise-grade release processes ensure minimal disruption to your operations.
  • Simplified Compliance: Having a clear list of CVEs makes it easier for customers to meet regulatory requirements and demonstrate adherence to security best practices.

Prime customers also gain exclusive access to enhanced features in the future, with a roadmap of premium tools and data integrations designed to provide even greater visibility and control over security vulnerabilities.

This portal consolidates all relevant Rancher-related CVE information into one location, ensuring you can quickly find the vulnerabilities affecting your environment and take action.

CVE Portal in action

Jane, a platform operator, is responsible for ensuring her company’s Kubernetes workloads run securely and reliably. Every morning, Jane sifts through security updates and CVE reports to identify vulnerabilities that could impact their Rancher-managed clusters. This process is time-consuming and often feels like piecing together a puzzle from scattered sources. Then, Jane hears about the Rancher CVE Portal, a centralized place where she can find up-to-date, actionable information on critical and high-severity vulnerabilities for Rancher-related images. Jane quickly bookmarks the portal, excited by how it simplifies her workflow and helps her address security risks proactively.

John, the head of infrastructure, oversees a large team tasked with maintaining secure, enterprise-grade environments for his organization. He’s always looking for tools that enhance his team’s efficiency and give him confidence in the security posture of their systems. When John learns about the public beta launch of the CVE Portal, he immediately sees its value. With scans.rancher.com, John’s team can now access a single source of truth for Rancher-related vulnerabilities, eliminating the guesswork and helping them respond faster to emerging threats.

Next Steps for the CVE Portal

The CVE portal is currently in public beta. Over the coming months, we’ll continue testing and gathering feedback from our users on the portal. Once the initial testing phase is complete, the portal will be moved to stable release status.

We’re also working on creating Knowledge Base (KB) articles that will provide detailed guidance on navigating the portal, interpreting the data, and leveraging it for operational decision-making.

The basic functionality of the CVE portal will remain free and open to all users, reflecting our commitment to the broader community. However, SUSE® Rancher Prime customers can expect exclusive enhancements as we expand the portal’s features in the future.

Your Feedback is Valued

We built this portal with you in mind, and we want to ensure it meets your needs. If you have feedback on the portal, please share it with your SUSE contact. Your insights will help us make this tool even more valuable for your organization and others in the Rancher community.

Get Started Today

Explore the Rancher CVE Portal at scans.rancher.com and see how we’re making it easier to secure your Rancher workloads. If you’re a SUSE® Rancher Prime customer, rest assured that our engineering and security teams are already addressing vulnerabilities with timely patches and priority updates.

At SUSE, we’re dedicated to providing enterprise-ready solutions that empower our customers to operate with confidence. This CVE portal is just one of many ways we’re helping you build a secure, resilient future for your Kubernetes ecosystems. If you would like to know more about how we triage CVEs in our dependencies, please read our knowledge base article: SUSE Rancher’s CVE Triage Workflow for Software Dependencies.

For more information on SUSE® Rancher Prime and our security solutions, contact your SUSE representative or visit our website.

Renewed Product Names, Familiar Benefits

Friday, 28 February, 2025

SUSE Virtualization, SUSE Security, SUSE Storage, SUSE Multi-Linux Support, SUSE Multi-Linux Manager: concise new names make it easier to navigate the portfolio.

SUSE has unified the product names in its portfolio so that they better reflect each piece of software’s key functions. The new names make it easier for decision-makers to choose the right IT solutions and help them navigate a rapidly changing technology landscape. With SUSE’s solutions, companies can keep pace with technological progress, manage their IT infrastructure flexibly and simply, and reduce operating costs.


So what exactly do the newly named products offer?

A central element of the portfolio is SUSE Rancher Prime (formerly Rancher Prime), which simplifies the management of modern cloud-native applications and Kubernetes environments. It enables organizations to manage containers and the services running on them uniformly across environments, including public clouds, data centers and edge systems.

Longhorn, the high-performance, cloud-native distributed storage platform optimized for Kubernetes, has been renamed SUSE Storage. Combined with SUSE Rancher Prime, the tool makes deploying and managing highly available block storage in Kubernetes environments simpler, faster and more reliable.

The SUSE Virtualization (formerly Harvester) platform, in turn, makes managing virtual machines easier. The integration of SUSE Virtualization and SUSE Rancher Prime solves the operational challenges of managing virtual machines and Kubernetes clusters, and offers a unified interface for managing an entire modern hybrid infrastructure.

Companies can take care of container security and regulatory compliance with SUSE Security (formerly NeuVector). The enterprise-grade container security platform continuously monitors containers throughout their lifecycle and helps organizations apply the strictest zero-trust policies.

Liberty Linux, designed for managing mixed environments, has also received a new name and now continues as SUSE Multi-Linux Support. The solution supports the operation of mixed Linux environments with unified, centralized support and automated management tools, providing support for Red Hat Enterprise Linux, CentOS and SUSE Linux Enterprise Server systems through a single platform. Companies can thus keep their preferred Linux operating system (whether CentOS or RHEL) while receiving security patches, maintenance updates and technical support from SUSE through a single interface. Large organizations such as Deutsche Bank use this product, managing their SUSE Linux Enterprise and Red Hat Enterprise Linux servers with it.