Getting Hands on with Harvester HCI

Monday, 2 May, 2022

When I left Red Hat to join SUSE as a Technical Marketing Manager at the end of 2021, I heard about Harvester, a new Hyperconverged Infrastructure (HCI) solution with Kubernetes under the hood. When I started looking at it, I immediately saw use cases where Harvester could really help IT operators and DevOps engineers. There are solutions that offer similar capabilities, but there’s nothing else on the market quite like Harvester. In this blog, I’ll give an overview of getting started with Harvester and what you need for a lab implementation.


First, let me bring you up to speed on Harvester. This HCI solution from SUSE takes advantage of your existing hardware using cutting-edge open source technology and, as always with SUSE, offers flexibility and freedom without locking you into expensive and complex solutions.

Figure 1 shows, at a glance, what Harvester is and the main technologies that compose it.


Fig. 1 – Harvester stack 


The base of the solution is the Linux operating system. Longhorn provides a lightweight, easy-to-use distributed block storage system for Kubernetes, in this case for the VMs running on the cluster. RKE2 provides the Kubernetes layer where KubeVirt runs, delivering virtualization capabilities on Kubernetes using KVM. The concept is simple: as in any Kubernetes cluster, there are pods running in the cluster. The big difference is that there are VMs inside those pods.

To learn more about the tech under the hood and technical specs, check out this blog post from Sheng Yang introducing Harvester technical details.

The lab

I set up a home lab based on a Slimbook One node with an AMD Ryzen 7 processor (8 cores and 16 threads), 64GB of RAM and a 1TB NVMe SSD, which is twice the minimum requirements for Harvester. In case you don’t know Slimbook, it is a brand focused on hardware for Linux and open source software. You’ll need an Ethernet connection for Harvester to boot, so if you don’t have a dedicated switch to connect your server to, just connect it to your ISP’s router.


Fig. 2 – Slimbook One 


The installation

The installation was smooth and easy since Harvester ships as an appliance. Download the ISO image and write it to a USB drive, or use PXE boot. During the installation, you’ll be asked some basic questions to configure Harvester.

Fig. 3 – ISO Install


As part of the initial setup you can create a token that can be used later to add nodes to the cluster. Adding more nodes is easy: you just start another node with the appliance and provide the token so the new node can join the Kubernetes cluster. This is similar to what you do with RKE2 and K3s when adding nodes to a cluster. After you provide all the information for the installation process, you’ll have to wait approximately 10 minutes for Harvester to finish the setup. The Harvester configuration is stored as a YAML file and can be sourced from a URL during the installation, which makes the installation repeatable and easy to keep in a Git repository.
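That installation config can be sketched roughly like this. This is a hedged example: the keys follow the general shape documented for Harvester’s install config, but treat the exact field names and every value as illustrative placeholders rather than an authoritative schema.

```yaml
# Illustrative Harvester install config, fetched from a URL during setup.
# Field names approximate the documented schema; check the Harvester docs
# for the authoritative reference.
scheme_version: 1
token: my-cluster-token            # shared secret that later nodes present to join
os:
  hostname: harvester-node-1
  ssh_authorized_keys:
    - ssh-ed25519 AAAA...placeholder user@example.com
install:
  mode: create                     # use "join" on additional nodes
  device: /dev/nvme0n1             # disk to install to
```

Keeping a file like this in Git and pointing the installer at its URL is what makes the install repeatable.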


Once the installation is finished, the screen shows the IP/DNS to connect to Harvester and whether Harvester is ready. Once ready, you can log into the UI using that IP/DNS. The UI is very similar to Rancher and prompts you to set a secure password on first login.


Fig. 4 – Harvester installation finished & ready screen 


The first login and dashboard

When you log in for the first time, you’ll see that it is easy to navigate. Harvester benefits from a clean UI; it’s easy to use and completely oriented toward virtualization users and operators. Harvester offers the same kind of experience that IT operators would expect of a virtualization platform like oVirt.


Fig. 5 – Harvester dashboard 


The first thing you’ll find once logged in is the dashboard, which shows all the basic information about your cluster, like hosts, VMs, images, cluster metrics and VM metrics. If you scroll down the dashboard, you’ll find an event manager that shows all events grouped by object type.


When you dig further into the UI, you’ll find not only the traditional virtualization items but also Kubernetes options, like managing namespaces. Investigating further, we find some namespaces already created, but we can create more to take advantage of Kubernetes isolation. We also find a fleet-local namespace, which gives us a clue about how Kubernetes objects are managed inside the local cluster. Fleet is a GitOps-based deployment engine created by Rancher to simplify and improve cluster control. In the Rancher UI it’s referred to as ‘Continuous Deployment.’

Creating your first VM

Before creating your first VM you need to upload the image you’ll create it from. Harvester can use qcow2, raw and ISO images, which can be uploaded from the Images tab using a URL or imported from your local machine. Before uploading images, you can select which namespace you want them in, and you can assign labels (yes, Kubernetes labels!) to use them from the Kubernetes cluster. Once you have images uploaded, you can create your first VM.
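Because images are ordinary Kubernetes objects under the hood, an uploaded image can be represented as a custom resource. A hedged sketch follows: the `VirtualMachineImage` kind and the `harvesterhci.io` API group match Harvester’s CRDs, but treat the exact fields and values as illustrative.

```yaml
# Illustrative manifest for a VM image registered with Harvester.
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: opensuse-leap
  namespace: default
  labels:
    os: opensuse                   # plain Kubernetes labels, usable for selection
spec:
  displayName: openSUSE Leap
  url: https://example.com/opensuse-leap.qcow2   # placeholder URL
```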

The VM assistant feels like any other virtualization platform out there: you select CPU, RAM, storage, networking options, etc. 


Fig. 6 – VM creation


However, there are some subtle differences. First, you must select a namespace in which to deploy the VM, and you can view all the VM options as YAML code. This means your VMs can be defined and managed as code and integrated with Fleet, a real differentiator from more traditional virtualization platforms. You can also select the node where the VM will run, let the Kubernetes scheduler place the VM on the best node, apply scheduling rules, or restrict the VM to specific nodes that do not support live migration. Finally, there is the option to run containers alongside VMs in the same pod; the container image you select acts as a sidecar for the VM and is added as a disk from the Harvester UI. Cloud config is supported out of the box to configure VMs on first launch, as you would expect from solutions like OpenStack or oVirt.
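Viewing a VM as YAML surfaces the underlying KubeVirt object. A minimal sketch of what such a definition looks like (this is the upstream `kubevirt.io/v1` VirtualMachine shape; Harvester layers its own annotations and defaults on top, which are omitted here, and the names are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: default
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: demo-vm-root   # volume created from the uploaded image
```

Because this is just a Kubernetes object, it can live in a Git repository and be applied by Fleet like any other manifest.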


Finding Kubernetes concepts in a virtualization solution might feel a little awkward at first. However, finding things like Grafana, namespace isolation and sidecar containers combined with a virtualization platform really helps you get the best of both worlds. As for use cases, Harvester is perfect for the edge, where it takes advantage of the physical servers you already have in your organization, since it doesn’t need a lot of resources to run. Another use case is as an on-prem HCI solution, offering a perfect way to integrate VMs and containers in one platform. The integration with Rancher offers even more capabilities: Rancher provides a unified management layer for hybrid cloud environments, with central RBAC management for multi-tenancy, a single pane of glass to manage VMs, containers and clusters, and the ability to deploy your Kubernetes clusters on Harvester or on most of the cloud providers in the market.

We may be in a cloud native world now, but VMs are not going anywhere. Solutions like Harvester ease the integration of both worlds, making your life easier. 

To get started with Harvester, head over to the quick start documentation. 

Join the SUSE & Rancher community to learn more about Harvester and other SUSE open source projects.




Harvester demo – manage virtualization and containers with Kubernetes

Wednesday, 12 January, 2022

The Polish premiere of an open, interoperable hyperconverged infrastructure (HCI). See how Harvester unifies the management of container and virtual machine environments.

Join us for our first SUSE technology meetup of 2022, where we will present Harvester, an open, interoperable and revolutionary HCI solution based on Kubernetes. Our engineers will show you how Harvester and SUSE Rancher can help your organization modernize its IT stack and solve the complexity of IT infrastructure when running virtualized and containerized workloads.

On the agenda

  • Why SUSE entered the hyperconverged infrastructure space,
  • How the integration between SUSE Rancher and Harvester works,
  • What consolidating virtualized and containerized workloads gives operations teams,
  • A live demo of Harvester’s capabilities,
  • Live Q&A and a discussion of the Harvester roadmap.

Join us on 19 January at 10:00!

The 19 January session will be run in Polish by SUSE Rancher engineers Jarosław Bieniek and Jarek Śliwiński on the ON24 webinar platform. All you need to take part is a web browser; there is no need to install additional software. We will email an individual participation link to the address provided during registration.

Register for the meeting:

Harvester: A Modern Infrastructure for a Modern Platform

Tuesday, 21 December, 2021

Cloud platforms are not new — they have been around for a few years. And containers have been around even longer. Together, they have changed the way we think about software. Since the creation of these technologies, we have focused on platforms and apps. And who could blame anyone? Containers and Kubernetes let us do things that were unheard of only a few years ago.

What about the software that runs the infrastructure to support all these advancements? Over the same period, we have seen advancements here too, some in open source but most in proprietary solutions. Sure, there is nothing wrong with running open source on top of a proprietary solution. These systems have become very good at what they do: running virtual machines, but not containers or container platforms, for that matter.

The vast majority of this infrastructure software is proprietary. This means you need two different skill sets to manage it all, one proprietary and one Kubernetes. That is a lot to put on one team; it’s almost unbearable to put on one individual. What if there were an open infrastructure that used the same concepts and management plane as Kubernetes? We could lower the learning curve by managing our hosts the same way we manage our clusters. We trust Kubernetes to manage clusters, so why not our hosts?

Harvester: Built on Open Cloud Native Technology

Harvester is a simple, elegant and lightweight hyperconverged infrastructure (HCI) solution built for running virtual machines and Kubernetes clusters on bare metal servers. With Harvester reaching General Availability, we can now manage our hosts with the same concepts and management plane as our clusters. Harvester is a modern infrastructure for a modern platform. Completely open source, it is built on Kubernetes and incorporates other cloud native solutions, including Longhorn and KubeVirt, leveraging all of these technologies transparently to deliver a modern hypervisor. This gives Harvester endless possibilities with all the other projects that integrate with Kubernetes.

This means operators and infrastructure engineers can leverage their existing skill sets and will find in Harvester a familiar HCI experience. Harvester integrates easily into cloud native environments and offers enterprise-grade, turnkey features without the costly overhead of proprietary alternatives, saving both time and money.

A Platform for the Edge

Harvester’s small footprint means it is a great choice for the unique demands of hardware at the edge. Harvester gives operators the ability to deploy and manage VMs and Kubernetes clusters on a single platform. And because it integrates into Rancher, Harvester clusters can be managed centrally using all the great tooling Rancher provides. Edge applications will also benefit from readily available enterprise-grade storage, without costly and specialized storage hardware required. This enables operators to keep compute and storage as close to the user as possible, without sacrificing management and security. Kubernetes is quickly becoming a standard for edge deployments, so an HCI that also speaks this language is beneficial.

Harvester is a great solution for data centers, which come in all shapes and sizes. Harvester’s fully integrated approach means you can use high-density hardware with low-cost local storage. This saves on equipment costs and the amount of rack space required. A Harvester cluster can be as small as three servers, or an entire rack. Yet it can run just as well in branch or small-office server rooms. And all of these locations can be centrally managed through Rancher.

A Platform for Modernizing Applications

Harvester isn’t just a platform for building cloud native applications but one that you can use to take applications from VMs to clusters. It allows operators to run VMs alongside clusters, giving developers the opportunity to start decomposing these monoliths into cloud native applications. With most applications, this takes months and sometimes years. With Harvester, there isn’t a rush. VMs and clusters live side by side with ease. It offers all of this in one platform with one management plane.

As cloud native technologies continue their trajectory as keys to digital transformation, next-gen HCI solutions need to offer functionality and simplicity with the capability to manage containerized and non-containerized workloads, storage and network requirements across any environment.


What’s unique about Harvester? You can use it to manage multiple clusters hosted on VMs or a Kubernetes distribution. It’s 100 percent open source and leverages proven technologies – so why not give it a try to simplify your infrastructure stack? You’ll get a feature-rich operational experience in a single management platform, with the support of the open source community behind it. We have seen the evolution of Harvester, from a fledgling open source project to a full-on enterprise-ready HCI solution.

We hope you take a moment to download and give Harvester a try.

JOIN US at the Harvester Global Online Meetup – January 19 at 10am PT. Our product team will be on hand to answer your questions. Register here.

SUSE news from KubeCon: the Harvester, Epinio, Kubewarden, Opni and Rancher Desktop projects

Monday, 11 October, 2021

At KubeCon North America, SUSE announced significant progress on the Harvester project and on open source systems released in beta. Harvester unifies the delivery of virtual machines and containerized workloads from within SUSE Rancher. To deliver production-grade Kubernetes anywhere, SUSE is also developing the innovative open source projects Epinio, Kubewarden, Opni and Rancher Desktop, all now available in beta.

The integration of SUSE Rancher with Harvester creates a comprehensive open source solution for building hyperconverged infrastructure (HCI). With it, companies can accelerate digital transformation by consolidating, simplifying and modernizing existing IT operations. Since acquiring Rancher Labs in December 2020, SUSE has strengthened its commitment to innovation across its cloud-native portfolio, investing in open source projects such as Harvester, Epinio, Kubewarden, Opni and Rancher Desktop. All of these platforms are being demonstrated at the SUSE booth at KubeCon North America.

Hyperconverged infrastructure (HCI) built on open source for deploying cloud-native solutions in your own data center and at the edge
The most important SUSE news presented at KubeCon North America is the integration of the SUSE Rancher Kubernetes management tool with Harvester. This solution helps deploy both virtual machines and containers in a unified way, without the added complexity, restrictions or extra costs that have come with traditional vendors’ solutions. Harvester was designed to take advantage of SUSE Rancher’s GitOps-based Continuous Delivery capabilities to manage potentially thousands of HCI clusters. With Harvester, these clusters can run a mix of virtual machines (VMs) and containerized workloads, whether in your own data center or at the edge. SUSE Rancher users can now create Kubernetes clusters on Harvester virtual machines, while Harvester can use SUSE Rancher for centralized user authentication and multi-cluster management.

Simplifying Kubernetes management and application delivery
SUSE also announced a number of further open source projects, including:

  • Rancher Desktop: Installing Kubernetes itself is designed to be simple. Additional knowledge is required when, for example, a company needs to reset a cluster to test an application against different Kubernetes versions. Rancher Desktop makes running Kubernetes and Docker on a local PC or Mac much easier and much faster to get started with.
  • Epinio: Epinio lets users take an application through the whole journey from source code to deployment. It is designed to let engineers write code that will be deployed on Kubernetes without wasting time or money teaching everyone a new platform. It achieves this by giving developers the right levels of abstraction while letting operators keep working in an environment they are comfortable with.
  • Opni: Observability data is part of every Kubernetes environment, but few use it effectively to gather the available information about the state of systems and potential cluster or application downtime. SUSE is well positioned to provide anomaly detection by applying AI within Kubernetes through Opni, which detects anomalies in Kubernetes cluster logs and metrics.
  • Kubewarden: Security remains a significant barrier to Kubernetes adoption, and SUSE’s newest project, Kubewarden, aims to remove that obstacle. It provides far more flexibility than other solutions currently on the market, because it allows policies to be written in any language that can be compiled to WebAssembly (WASM), including the Rego language used by OPA (Open Policy Agent). This lets operations and governance teams codify rules about what can and cannot run in their environments. Policies are distributed through container registries, so workloads and policies can be distributed and secured in the same way, ultimately removing the bottlenecks organizations face and reducing the time DevOps teams must spend reviewing policies.

More information about SUSE’s open source projects can be found on the website

Hyperconverged Infrastructure and Harvester

Monday, 2 August, 2021

Virtual machines (VMs) have transformed infrastructure deployment and management. VMs are so ubiquitous that I can’t think of a single instance where I deployed production code to a bare metal server in my many years as a professional software engineer.

VMs provide secure, isolated environments hosting your choice of operating system while sharing the resources of the underlying server. This allows resources to be allocated more efficiently, reducing the cost of over-provisioned hardware.

Given the power and flexibility provided by VMs, it is common to find many VMs deployed across many servers. However, managing VMs at this scale introduces challenges.

Managing VMs at Scale

Hypervisors provide comprehensive management of the VMs on a single server. The ability to create new VMs, start and stop them, clone them, and back them up are exposed through simple management consoles or command-line interfaces (CLIs).

But what happens when you need to manage two servers instead of one? Suddenly you find yourself having first to gain access to the appropriate server to interact with the hypervisor. You’ll also quickly find that you want to move VMs from one server to another, which means you’ll need to orchestrate a sequence of shutdown, backup, file copy, restore and boot operations.

Routine tasks performed on one server become just that little bit more difficult with two, and quickly become overwhelming with 10, 100 or 1,000 servers.

Clearly, administrators need a better way to manage VMs at scale.

Hyperconverged Infrastructure

This is where Hyperconverged Infrastructure (HCI) comes in. HCI is a marketing term rather than a strict definition. Still, it is typically used to describe a software layer that abstracts the compute, storage and network resources of multiple (often commodity or whitebox) servers to present a unified view of the underlying infrastructure. By building on top of the virtualization functionality included in all major operating systems, HCI allows many systems to be managed as a single, shared resource.

With HCI, administrators no longer need to think in terms of VMs running on individual servers. New hardware can be added and removed as needed. VMs can be provisioned wherever there is appropriate capacity, and operations that span servers, such as moving VMs, are as routine with 2 servers as they are with 100.


Harvester, created by Rancher, is open source HCI software built using Kubernetes.

While Kubernetes has become the de facto standard for container orchestration, it may seem like an odd choice as the foundation for managing VMs. However, when you think of Kubernetes as an extensible orchestration platform, this choice makes sense.

Kubernetes provides authentication, authorization, high availability, fault tolerance, CLIs, software development kits (SDKs), application programming interfaces (APIs), declarative state, node management, and flexible resource definitions. All of these features have been battle tested over the years with many large-scale clusters.

More importantly, Kubernetes orchestrates many kinds of resources beyond containers. Thanks to the use of custom resource definitions (CRDs), and custom operators, Kubernetes can describe and provision any kind of resource.
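For example, a minimal CRD is nothing more than a manifest teaching the API server a new resource type (the `Widget` kind below is invented purely for illustration):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
    singular: widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: string
```

Once applied, `kubectl get widgets` works just like it does for built-in resources, and a custom operator can reconcile them; this is the same mechanism that lets Harvester model VMs, images and volumes as Kubernetes resources.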

By building on Kubernetes, Harvester takes advantage of a well tested and actively developed platform. With the use of KubeVirt and Longhorn, Harvester extends Kubernetes to allow the management of bare metal servers and VMs.

Harvester is not the first time VM management has been built on top of Kubernetes; Rancher’s own RancherVM is one such example. But these solutions have not been as popular as hoped:

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators.

To address this, Harvester does not expose the underlying Kubernetes platform to the end user. Instead, it presents more familiar concepts like VMs, NICs, ISO images and disk volumes. This allows Harvester to take advantage of Kubernetes while giving administrators a more traditional view of their infrastructure.

Managing VMs at Scale

The fusion of Kubernetes and VMs provides the ability to perform common tasks such as VM creation, backups, restores, migrations, SSH key injection and more across multiple servers from one centralized administration console.

Consolidating virtualized resources like CPU, memory, network, and storage allows for greater resource utilization and simplified administration, allowing Harvester to satisfy the core premise of HCI.


HCI abstracts the resources exposed by many individual servers to provide administrators with a unified and seamless management interface, providing a single point to perform common tasks like VM provisioning, moving, cloning, and backups.

Harvester is an HCI solution leveraging popular open source projects like Kubernetes, KubeVirt, and Longhorn, but with the explicit goal of not exposing Kubernetes to the end user.

The end result is an HCI solution built on the best open source platforms available while still providing administrators with a familiar view of their infrastructure.

Download Harvester from the project website and learn more from the project documentation.

Meet the Harvester developer team! Join our free Summer is Open session on Harvester: Tuesday, July 27 at 12pm PT and on demand. Get details about the project, watch a demo, ask questions and get a challenge to complete offline.


Announcing Harvester Beta Availability

Friday, 28 May, 2021

It has been five months since we announced project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Since then, we’ve received a lot of feedback from the early adopters. This feedback has encouraged us and helped in shaping Harvester’s roadmap. Today, I am excited to announce the Harvester v0.2.0 release, along with the Beta availability of the project!

Let’s take a look at what’s new in Harvester v0.2.0.

Raw Block Device Support

We’ve added raw block device support in v0.2.0. Since it’s a change mostly under the hood, the updates might not be immediately obvious to end users. Let me explain in more detail:

In Harvester v0.1.0, the image to VM flow worked like this:

  1. Users added a new VM image.

  2. Harvester downloaded the image into the built-in MinIO object store.

  3. Users created a new VM using the image.

  4. Harvester created a new volume, and copied the image from the MinIO object store.

  5. The image was presented to the VM as a block device, but it was stored as a file in the volume created by Harvester.

This approach had a few issues:

  1. Read/write operations to the VM volume had to be translated into reads/writes of the image file, which performed worse than reading/writing a raw block device due to the overhead of the filesystem layer.

  2. If one VM image was used multiple times by different VMs, it was replicated many times in the cluster, because each VM had its own copy of the volume even though the majority of the content was likely the same, coming from the same image.

  3. The dependency on MinIO to store the images meant Harvester had to keep MinIO highly available and expandable. Those requirements placed an extra burden on the Harvester management plane.

In v0.2.0, we took another approach to tackle the problem, resulting in a simpler solution with better performance and less duplicated data:

  1. Instead of an image file on the filesystem, we now provide the VM with raw block devices, which allows better performance for the VM.

  2. We’ve taken advantage of a new feature called Backing Image in Longhorn v1.1.1 to reduce unnecessary copies of the VM image. The VM image is now served as a read-only layer for all the VMs using it, and Longhorn is responsible for creating a copy-on-write (COW) layer on top of the image for each VM to use.

  3. Since Longhorn now manages the VM image through the Backing Image feature, the dependency on MinIO can be removed.
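Conceptually, the backing image is itself a small custom resource that Longhorn reconciles. A hedged sketch follows: Longhorn does expose a `BackingImage` CRD, but the API version and parameter names below may differ between Longhorn releases, so treat them as illustrative.

```yaml
# Illustrative Longhorn backing image definition.
apiVersion: longhorn.io/v1beta2
kind: BackingImage
metadata:
  name: opensuse-leap
  namespace: longhorn-system
spec:
  sourceType: download               # pull the image once into the cluster
  sourceParameters:
    url: https://example.com/opensuse-leap.qcow2   # placeholder URL
```

Every VM volume based on this image then adds only its own copy-on-write layer on top.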

Image 02
A comprehensive view of images in Harvester

From the user experience perspective, you may have noticed that importing an image is now instantaneous. Starting a VM based on a new image takes a bit longer due to the image download process in Longhorn, but any other VMs using the same image will then take significantly less time to boot compared to the previous v0.1.0 release, and disk IO performance will be better as well.

VM Live Migration Support

In preparation for the future upgrade process, VM live migration is now supported in Harvester v0.2.0.

VM live migration allows a VM to migrate from one node to another, without any downtime. It’s mostly used when you want to perform maintenance work on one of the nodes or want to balance the workload across the nodes.

One thing worth noting: because the VM’s IP can change after migration when using the default management network, we highly recommend using the VLAN network instead. Otherwise, you might not be able to keep the same IP for the VM after it migrates to another node.

You can read more about live migration support here.

VM Backup Support

We’ve added VM backup support to Harvester v0.2.0.

The backup support provides a way for you to back up your VM images outside of the cluster.

To use the backup/restore feature, you need an S3-compatible endpoint or an NFS server; the destination of the backup is referred to as the backup target.

You can get more details on how to set up the backup target in Harvester here.
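For orientation, the backup target ends up as a cluster-wide Harvester setting. This is a hedged sketch: the `backup-target` setting exists, but the value schema shown here is illustrative, so consult the linked docs for the real field names.

```yaml
# Illustrative Harvester setting pointing backups at an S3 bucket.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: backup-target
value: '{"type":"s3","endpoint":"https://s3.example.com","bucketName":"harvester-backups","bucketRegion":"us-east-1"}'
```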

Image 03
Easily manage and operate your virtual machines in Harvester

In the meantime, we’re also working on a snapshot feature for VMs. In contrast to backups, a snapshot stores the image state inside the cluster, giving VMs the ability to revert to a previous snapshot. Unlike a backup, no data is copied outside the cluster for a snapshot, so it is a quick way to try something experimental, but not ideal for keeping data safe if the cluster goes down.

PXE Boot Installation Support

PXE boot installation is widely used in data centers to automatically provision bare-metal nodes with the desired operating system. We’ve added PXE boot installation in Harvester v0.2.0 to help users who have a large number of servers and want a fully automated installation process.

You can find more information regarding how to do the PXE boot installation in Harvester v0.2.0 here.

We’ve also provided a few examples of doing iPXE on public bare-metal cloud providers, including Equinix Metal. More information is available here.

Rancher Integration

Last but not least, Harvester v0.2.0 now ships with a built-in Rancher server for Kubernetes management.

This was one of the most requested features since we announced Harvester v0.1.0, and we’re very excited to deliver the first version of the Rancher integration in the v0.2.0 release.

For v0.2.0, you can use the built-in Rancher server to create Kubernetes clusters on top of your Harvester bare-metal clusters.

To start using the built-in Rancher in Harvester v0.2.0, go to Settings and set the rancher-enabled option to true. You should then see a Rancher button in the top right corner of the UI; clicking it takes you to the Rancher UI.

Harvester and Rancher share the authentication process, so once you’re logged in to Harvester, you don’t need to redo the login process in Rancher and vice versa.

If you want to create a new Kubernetes cluster using Rancher, you can follow the steps here. A reminder: VLAN networking needs to be enabled to create Kubernetes clusters on top of Harvester, since the default management network cannot guarantee a stable IP for the VMs, especially after reboot or migration.

What’s Next?

Now with v0.2.0 behind us, we’re working on the v0.3.0 release, which will be the last feature release before Harvester reaches GA.

We’re working on many things for v0.3.0 release. Here are some highlights:

  • Built-in load balancer
  • Rancher 2.6 integration
  • Replace K3OS with a small-footprint OS designed for container workloads
  • Multi-tenant support
  • Multi-disk support
  • VM snapshot support
  • Terraform provider
  • Guest Kubernetes cluster CSI driver
  • Enhanced monitoring

You can get started today and give Harvester v0.2.0 a try via our website.

Let us know what you think via the Rancher User Slack #harvester channel. And start contributing by filing issues and feature requests via our GitHub page.

Enjoy Harvester!

Announcing Harvester: Open Source Hyperconverged Infrastructure (HCI) Software

Wednesday, 16 December, 2020

Today, I am excited to announce project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Harvester provides fully integrated virtualization and storage capabilities on bare-metal servers. No Kubernetes knowledge is required to use Harvester.

Why Harvester?

In the past few years, we’ve seen many attempts to bring VM management into container platforms, including our own RancherVM, and other solutions like KubeVirt and Virtlet. We’ve seen some demand for solutions like this, mostly for running legacy software side by side with containers. But in the end, none of these solutions have come close to the popularity of industry-standard virtualization products like vSphere and Nutanix.

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators. They are familiar with concepts like ISO images, disk volumes, NICs and VLANs – not concepts like pods and PVCs.

Enter Harvester.

Project Harvester is an open source alternative to traditional proprietary hyperconverged infrastructure software. Harvester is built on top of cutting-edge open source technologies including Kubernetes, KubeVirt and Longhorn. We’ve designed Harvester to be easy to understand, install and operate. Users don’t need to understand anything about Kubernetes to use Harvester and enjoy all the benefits of Kubernetes.

Harvester v0.1.0

Harvester v0.1.0 has the following features:

Installation from ISO

You can download the ISO from the release page on GitHub and install it directly on bare-metal nodes. During the installation, you can choose to create a new cluster or add the current node to an existing cluster. Harvester will automatically create a cluster based on the information you provide.

Install as a Helm Chart on an Existing Kubernetes Cluster

For development purposes, you can install Harvester on an existing Kubernetes cluster. The nodes must be able to support KVM through either hardware virtualization (Intel VT-x or AMD-V) or nested virtualization.
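Before trying this, it is worth verifying that a node actually exposes those virtualization flags. This is a generic Linux check rather than anything Harvester-specific:

```shell
# Count the CPU flags that signal hardware virtualization support:
# vmx = Intel VT-x, svm = AMD-V. grep -c prints 0 when nothing matches.
count=$(grep -E -c 'vmx|svm' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
  echo "virtualization flags present"
else
  echo "no vmx/svm flags found"
fi
```

A count of 0 inside a VM usually means nested virtualization has not been enabled on the hypervisor.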

VM Lifecycle Management

Powered by KubeVirt, Harvester supports creating/deleting/updating operations for VMs, as well as SSH key injection and cloud-init.

Harvester also provides a graphical console and a serial port console for users to access the VM in the UI.
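As an illustration of the cloud-init support, the user data supplied at VM creation can handle the SSH key injection and package bootstrapping. The key and package names below are placeholders, not values Harvester requires:

```yaml
#cloud-config
# Placeholder values -- substitute your own key and package list.
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza... user@example.com
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```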

Storage Management

Harvester has a built-in highly available block storage system powered by Longhorn. It uses the storage space on the nodes to provide highly available storage to the VMs inside the cluster.

Networking Management

Harvester provides several different options for networking.

By default, each VM inside Harvester will have a management NIC, powered by Kubernetes overlay networking.

Users can also add additional NICs to the VMs. Currently, VLAN is supported.

The multi-network functionality in Harvester is powered by Multus.
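Behind the scenes, Multus models each extra NIC as a NetworkAttachmentDefinition. A VLAN attachment might look roughly like the sketch below; the resource name, bridge device and VLAN ID are illustrative, and Harvester creates the equivalent object for you when you configure a VLAN in the UI:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100          # illustrative name
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "harvester-br0",
      "promiscMode": true,
      "vlan": 100
    }
```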

Image Management

Harvester has a built-in image repository, allowing users to easily download/manage new images for the VMs inside the cluster.

The image repository is powered by MinIO.

Image 01


To install Harvester, just load the Harvester ISO into your bare-metal machine and boot it up.

Image 02

For the first node where you install Harvester, select Create a new Harvester cluster.

Later, you will be prompted to enter the password that will be used to access the console on the host, as well as the “Cluster Token.” The Cluster Token is needed later by other nodes that want to join the same cluster.
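The Cluster Token is just a shared secret string, so any hard-to-guess value will do. One convenient way to generate one (a tooling choice of ours, not a Harvester requirement) is:

```shell
# Generate a random 32-character hex string to use as the Cluster Token.
token=$(openssl rand -hex 16)
echo "$token"
```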

Image 03

Then you will be prompted to choose the NIC that Harvester will use. The selected NIC will be used as the network for the management and storage traffic.

Image 04

Once everything has been configured, you will be prompted to confirm the installation of Harvester.

Image 05

Once installed, the host will reboot into the Harvester console.

Image 06

Later, when you add a node to the cluster, you will be prompted to enter the management address (shown above) as well as the cluster token you set when creating the cluster.

See here for a demo of the installation process.

Alternatively, you can install Harvester as a Helm chart on your existing Kubernetes cluster, if the nodes in your cluster have hardware virtualization support. See here for more details. And here is a demo using DigitalOcean, which supports nested virtualization.
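For the Helm route, the flow is the usual one of fetching the chart and installing it into a dedicated namespace. The repository location, chart path and namespace below are assumptions for illustration – check the linked instructions for the current ones:

```shell
# Illustrative commands -- verify the chart location against the Harvester docs.
git clone https://github.com/harvester/harvester.git
kubectl create namespace harvester-system
helm install harvester ./harvester/deploy/charts/harvester \
  --namespace harvester-system
```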


Once installed, you can use the management URL shown in the Harvester console to access the Harvester UI.

The default user name/password is documented here.

Image 07

Once logged in, you will see the dashboard.

Image 08

The first step to create a virtual machine is to import an image into Harvester.

Select the Images page and click the Create button. Fill in the URL field; the image name will be filled in automatically.

Image 09

Then click Create to confirm.

You will see the real-time progress of creating the image on the Images page.

Image 10

Once the image has finished creating, you can start creating a VM from it.

Select the Virtual Machine page, and click Create.

Image 11

Fill in the parameters needed for creation, including volumes, networks, cloud-init, etc. Then click Create.

The VM will be created shortly.

Image 12

Once created, click the Console button to get access to the console of the VM.

Image 13

See here for a UI demo.

Current Status and Roadmap

Harvester is in the early stages. We’ve just released the v0.1.0 (alpha) release. Feel free to give it a try and let us know what you think.

We have the following items in our roadmap:

  1. Live migration support
  2. PXE support
  3. VM backup/restore
  4. Zero downtime upgrade

If you need any help with Harvester, please join us at either our Rancher forums or Slack, where our team hangs out.

If you have any feedback or questions, feel free to file an issue on our GitHub page.

Thank you and enjoy Harvester!

Centralized Management of Containers and Virtual Machines

Tuesday, 17 May, 2022

A consequence of adopting solutions based on virtual machines and containers – beyond obvious advantages such as scalability and the ability to deploy innovative cloud services – is the growing complexity of IT environments and the difficulty of managing an extensive technology stack.

A new answer to the problem of IT environment complexity is SUSE Harvester – an open source HCI (hyperconverged infrastructure) solution built on Kubernetes that follows the cloud native approach.

SUSE Harvester makes it possible to deploy and manage the entire pool of virtual machines and containers in a unified way, without introducing additional complexity, imposing restrictions or increasing costs.

This solves the problem of IT environment complexity when virtualization and containerization are used together. It allows optimal management of the available host, storage and network resources regardless of their location and type.

Join us on 26 May 2022 for a webinar organized by Linux Polska and SUSE. See how to manage an entire complex environment of containers and virtual machines with SUSE Harvester.


Why Do Customers Choose SUSE Rancher?

Monday, 4 April, 2022

Do you already have a preferred tool for managing Kubernetes? If you don't yet – or if your current tool doesn't meet all your expectations – see what our customers value in SUSE Rancher. The results of our survey are sometimes downright astonishing!

10 recurring answers from our customers:

  1. Customers love our free training – Rancher Academy. Almost every survey participant mentioned it. Managing Kubernetes is very complicated, so starting with training is always a good idea.
  2. The new version of SUSE Rancher (2.6) arrived with many new features – the new web interface is beautiful and intuitive. Customers love its looks and ease of use. The point-and-click user interface significantly shortens the learning curve.
  3. SUSE Rancher connects seamlessly with existing Kubernetes clusters, both on-premises and in the cloud, with Active Directory services for authentication, and so on. Rancher is a team player!
  4. SUSE Rancher is truly heterogeneous when it comes to the choice of operating system – it can be deployed on SUSE Linux Enterprise Server, Ubuntu, CentOS, Red Hat and more.
  5. SUSE Rancher handles the complexity of managing Kubernetes better than competing solutions. We spoke with an IT administrator who had struggled with one of our competitors' products. He then tried Rancher and... all the problems quickly disappeared.
  6. Our per-node pricing is as simple as it gets. We don't charge per vCore/vCPU and we impose no additional conditions. The price is per managed node – that's it.
  7. Customers have loved Rancher from the very beginning. After the merger with SUSE, a banking-sector customer we spoke with felt even more confident, because a larger organization with professional technical support now stands behind the product. Support is, after all, the foundation of any open source business.
  8. What's more, after the merger with SUSE, Rancher received a huge injection of engineering resources. Projects such as Harvester quickly matured into finished products.
  9. Our customers decided to buy SUSE Rancher because their organizations were already using it. It's simply a great product available on GitHub, and development teams love it!
  10. This is the last point, but what a point! Customers appreciate Rancher's openness and the fact that it doesn't deviate from "pure" Kubernetes. They told us that other vendors impose their own versions of Kubernetes, which they consider a major drawback!

Rancher Rodeo Returns to Poland! Hands-on Kubernetes Workshops

Thursday, 24 February, 2022

Join us on 17 March for the first Rancher Rodeo of the year, which for the first time will be conducted entirely in Polish. Rodeo is a free, intensive workshop designed to give DevOps and IT teams the practical knowledge and skills needed to deploy and manage Kubernetes in any environment.

If your IT team includes people interested in gaining hands-on knowledge of Kubernetes and containers, our workshop can be an excellent way to develop the skills needed to deploy highly available, well-monitored and secure Kubernetes clusters using Rancher.

Rancher Rodeo can again be attended remotely or in person at the Marriott hotel in Warsaw, where participants can get on-site support from the workshop leaders. Regardless of how you attend, registration is required to receive an individual link to the HobbyFarm workshop platform, where the hands-on exercises will be run.

The workshops will also include a briefing on the technology news that has already arrived this year. We will present the capabilities of Harvester for building open source hyperconverged infrastructure (HCI) and managing containers and virtual machines together from Kubernetes. You will also get to know Rancher Desktop, which gives developers a production-grade Kubernetes environment on their desktops.

There is one more reason to consider attending in person. We are holding the Rodeo on 17 March – St. Patrick's Day – for a reason. It is the greenest day of the year, all over the world! To mark the occasion, right after the workshops, from 2:00 to 4:00 p.m., we invite you to lunch and an after-party!

The Rancher Rodeo Workshop Agenda

  • Briefing on Rancher technology news (Harvester, Rancher Desktop)
  • Workshop introduction: architecture and concepts for deploying Docker and Kubernetes
  • Installing and configuring the Rancher server
  • Deploying a Kubernetes cluster, launching and exposing an application

Register now!