Build a lightweight private cloud with Harvester, K3s, and Traefik Proxy

Tuesday, 17 May, 2022

Cloud native technologies are so compelling they’re changing the landscape of computing everywhere – including on-premises. And while it would be convenient if you were deploying into a greenfield situation, that’s rarely reality.

Enter Harvester, the open source hyperconverged infrastructure (HCI) solution designed to easily unify your virtual machine (VM) and container infrastructure operations. With Harvester, K3s and Traefik Proxy (installed as the ingress controller with K3s), we want to show you how to build an on-premises, lightweight private cloud with ease.

Join us on Wednesday, May 25th for this Traefik Labs-hosted online meetup to explore Harvester, K3s, KubeVirt, Longhorn and Traefik Proxy as the building blocks of a modern, lightweight private cloud.

Register today!

Getting Hands on with Harvester HCI

Monday, 2 May, 2022

When I left Red Hat to join SUSE as a Technical Marketing Manager at the end of 2021, I heard about Harvester, a new Hyperconverged Infrastructure (HCI) solution with Kubernetes under the hood. When I started looking at it, I immediately saw use cases where Harvester could really help IT operators and DevOps engineers. There are solutions that offer similar capabilities, but there's nothing else on the market like Harvester. In this blog, I'll give an overview of getting started with Harvester and what you need for a lab implementation.

 

First, let me bring you up to speed on Harvester. This HCI solution from SUSE takes advantage of your existing hardware with cutting-edge open source technology and, as always with SUSE, offers flexibility and freedom without locking you into expensive and complex solutions.

Figure 1 shows, at a glance, what Harvester is and the main technologies that compose it.

 

Fig. 1 – Harvester stack 

 

The base of the solution is the Linux operating system. Longhorn provides a lightweight and easy-to-use distributed block storage system for Kubernetes — in this case for the VMs running on the cluster. RKE2 provides the Kubernetes layer where KubeVirt runs, providing virtualization capabilities using KVM on Kubernetes. The concept is simple: as in any Kubernetes cluster, there are pods running in the cluster; the big difference is that those pods have VMs inside them.

To learn more about the tech under the hood and technical specs, check out this blog post from Sheng Yang introducing Harvester technical details.

The lab

I set up a home lab based on a Slimbook One node with an AMD Ryzen 7 processor (8 cores and 16 threads), 64GB of RAM and a 1TB NVMe SSD — twice the minimum requirements for Harvester. In case you don't know Slimbook, it is a brand focused on hardware oriented toward Linux and open source software. You'll need an Ethernet connection for Harvester to boot, so if you don't have a dedicated switch to connect your server to, just connect it to the router from your ISP.

 

Fig. 2 – Slimbook One 

 

The installation

The installation was smooth and easy since Harvester ships as an appliance. Download the ISO image and write it to a USB drive, or use PXE for the startup. During the installation you'll be asked some basic questions to configure Harvester.

Fig. 3 – ISO Install

 

As part of the initial setup you can create a token that can be used later to add nodes to the cluster. Adding more nodes is easy: you just boot another node with the appliance and provide the token so the new node can join the Kubernetes cluster. This is similar to what you do with RKE2 and K3s when adding nodes to a cluster. After you provide all the information for the installation process, you'll have to wait approximately 10 minutes for Harvester to finish the setup. The Harvester configuration is stored as a YAML file and can be sourced from a URL during the installation, making the installation repeatable and easy to keep in a Git repository.
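As a rough sketch of what that configuration can look like (field names follow the Harvester installation docs and may vary between releases, so treat this as illustrative rather than authoritative):

token: my-cluster-token
os:
  hostname: harvester-node-0
  ssh_authorized_keys:
    - ssh-rsa AAAA... user@example        # placeholder key
install:
  mode: create                            # use "join" plus the token on additional nodes
  device: /dev/nvme0n1                    # disk to install Harvester on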

 

Once the installation is finished, the screen shows the IP/DNS you can use to connect to Harvester and whether Harvester is ready or not. Once ready, you can log into the UI using that IP/DNS. The UI is very similar to Rancher's and prompts you to set a secure password on the first login.

 

Fig. 4 – Harvester installation finished & ready screen 

 

The first login and dashboard

When you log in for the first time, you'll find the UI easy to navigate. Harvester benefits from a clean UI that is easy to use and completely oriented toward virtualization users and operators. It offers the same kind of experience IT operators would expect from a virtualization platform like oVirt.

 

Fig. 5 – Harvester dashboard 

 

The first thing you'll find once logged in is the dashboard, which shows all the basic information about your cluster, like hosts, VMs, images, cluster metrics and VM metrics. If you scroll down the dashboard, you'll find an event manager that shows all the events grouped by object type.

 

When you dig further into the UI, you'll find not only the traditional virtualization items but also Kubernetes options, like managing namespaces. Some namespaces are already created, but you can add more to take advantage of Kubernetes isolation. You'll also find a fleet-local namespace, which gives a clue about how Kubernetes objects are managed inside the local cluster. Fleet is a GitOps-based deployment engine created by Rancher to simplify and improve cluster control; in the Rancher UI it's referred to as 'Continuous Delivery.'

Creating your first VM

Before creating your first VM, you need to upload the image it will be created from. Harvester can use qcow2, raw and ISO images, which can be uploaded from the Images tab either by providing a URL or by importing them from your local machine. Before uploading an image, you can choose which namespace to put it in, and you can assign labels (yes, Kubernetes labels!) to use from the Kubernetes cluster. Once you have images uploaded, you can create your first VM.
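Under the hood, each uploaded image becomes a Kubernetes object, which is what makes those labels usable from the cluster. A minimal sketch of such an image resource, mirroring the VirtualMachineImage manifests shown in the tutorials further down this page (the API group and version differ between Harvester releases, and the URL is just an example):

apiVersion: harvester.cattle.io/v1alpha1
kind: VirtualMachineImage
metadata:
  name: image-ubuntu-2004
  namespace: default
  labels:
    os: ubuntu                # an example label you can select on later
spec:
  displayName: ubuntu-20.04
  url: https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img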

The VM assistant feels like any other virtualization platform out there: you select CPU, RAM, storage, networking options, etc. 

 

Fig. 6 – VM creation

 

However, there are some subtle differences. First, you must select a namespace in which to deploy the VM, and you can view all the VM options as YAML code. This means your VMs can be defined and managed as code and integrated with Fleet, which is a real differentiator from more traditional virtualization platforms. You can also select the node where the VM will run, let the Kubernetes scheduler place the VM on the best node, apply scheduling rules, or select specific nodes that do not support live migration. Finally, there is the option to run containers alongside VMs in the same pod; the container image you select acts as a sidecar for the VM and is added as a disk from the Harvester UI. Cloud config is supported out of the box to configure the VMs during first launch, as you would expect from solutions like OpenStack or oVirt.
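As an illustration of that "VMs as code" idea, scheduling hints end up in the same KubeVirt VirtualMachine YAML the UI shows you. A trimmed, hypothetical fragment (the API version depends on the KubeVirt release bundled with Harvester, and the hostname label value is made up):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: default
spec:
  running: true
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-1    # pin the VM to a specific node
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 2Gi
        devices: {}                       # disks and interfaces omitted for brevity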

Conclusion

Finding Kubernetes concepts in a virtualization solution might feel a little awkward at first. However, finding things like Grafana, namespace isolation and sidecar containers combined with a virtualization platform really helps you get the best of both worlds. As for use cases, Harvester is perfect for the edge, where it takes advantage of the physical servers you already have in your organization, since it doesn't need a lot of resources to run. Another use case is as an on-prem HCI solution, offering a perfect way to integrate VMs and containers in one platform. The integration with Rancher offers even more capabilities: Rancher provides a unified management layer for hybrid cloud environments, offering central RBAC management for multi-tenancy support, a single pane of glass to manage VMs, containers and clusters, and the ability to deploy your Kubernetes clusters on Harvester or on most of the cloud providers in the market.

We may be in a cloud native world now, but VMs are not going anywhere. Solutions like Harvester ease the integration of both worlds, making your life easier. 

To get started with Harvester, head over to the quick start documentation. 

Join the SUSE & Rancher community to learn more about Harvester and other SUSE open source projects.

    

 

 

Technical Insights of Harvester 1.0

Tuesday, 21 December, 2021

Exactly one year ago, we announced the alpha availability of project Harvester, an open source hyperconverged infrastructure solution. During the last year, the team has been working hard on developing the project, and we brought you the beta releases v0.2.0 and v0.3.0. Throughout the year, we've received many queries from our users and the community asking when Harvester would be production-ready.

Now finally, after a year, we’re excited to present Harvester v1.0, the first general availability release of Harvester!  

Why Harvester?

Harvester is an open source alternative to traditional proprietary hyperconverged infrastructure software. Harvester is built on top of cutting-edge open source technologies, including Kubernetes, KubeVirt and Longhorn.  

Even though Harvester is built on top of Kubernetes, we’ve designed Harvester to be easy to understand, install and operate. Users don’t need to understand anything about Kubernetes to start using Harvester and can experience all the benefits of Kubernetes by using a standalone Harvester cluster.  

If you're already familiar with Kubernetes and want to have a central place to manage all your Kubernetes and VM workloads, Harvester's unique value is its integration with Rancher. With Rancher v2.6.3, users can manage all the Harvester clusters, local or remote, by using the new Virtualization Management feature. It's also simple to provision new Kubernetes clusters on top of Harvester using Rancher. Harvester provides a built-in CSI driver and cloud provider for the clusters provisioned by Rancher, which makes Harvester an ideal solution for anyone who wants to run Kubernetes workloads on top of VMs in the data center.

What does Harvester do?

As an HCI solution, Harvester brings compute, storage and network management together. Here are some highlighted features in the Harvester v1.0 release.  

Environment 

  • Installation 
    • Via ISO 
    • Via PXE 
  • Air Gap environment support 
  • Proxy support 

Compute 

  • VM lifecycle management 
  • Built-in monitoring dashboard 
  • Cloud Config 
  • SSH key injection 
  • Graphical console via VNC and serial port console 
  • VM Template 
  • Live migration 
  • Export images from existing VMs 
  • Terraform Provider 

Storage 

  • High performance and efficient block storage 
  • Built-in highly-available image repository 
  • VM backup/restore to S3 
  • Hot plug disks 

Network 

  • Virtual IP for the cluster 
  • Multi-network 
  • VLAN 
  • Custom SSL certificate 

Integration with Rancher 

    • Virtualization Management via Rancher for multiple Harvester clusters 
    • Multi-tenancy support with RBAC 
    • Kubernetes cluster provisioning 
    • Built-in CSI driver 

What is Harvester made of? 

Operating System

Harvester is delivered as an appliance, with the operating system and everything needed to run included, and is designed to be installed on bare metal servers. The operating system is based on Linux, built on the widely used and trusted foundation of kernel development that SUSE has been known for over more than 29 years.

Kubernetes  

On top of the OS, Harvester uses Rancher Kubernetes Engine 2 (RKE2) to provide the Kubernetes experience. Built by the SUSE Rancher engineering team, RKE2 is a Kubernetes distribution created for enterprises with additional security features, and it's the sibling of the widely popular K3s distribution. By using RKE2, Harvester gets a solid foundation for its orchestration layer.

KubeVirt 

KubeVirt is a CNCF sandbox project that provides virtualization management on top of Kubernetes. Originally created by Red Hat, it's a virtualization management tool based on KVM, the most popular open source hypervisor. The Harvester team has worked closely with the KubeVirt team to add features like live migration with hot-plugged disks, enhancing the user experience of Harvester.

Longhorn 

Longhorn is a CNCF incubation project that provides highly available persistent storage support to Kubernetes. Longhorn was originally created by Rancher Labs and is now maintained by SUSE. It’s one of the most popular cloud native storage solutions out there. There are more than 40,000 nodes running Longhorn worldwide. The Harvester team has also worked closely with the Longhorn project on features like backing image and live migration support. 

Other Cloud Native projects  

Harvester also uses Multus to provide multiple networks for the VMs, and kube-vip to provide a floating IP for the Harvester cluster as well as load balancing services for guest clusters.
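For reference, the extra networks Multus attaches to VMs are described with the standard NetworkAttachmentDefinition resource. A hedged example of what a VLAN-backed definition can look like (Harvester normally creates this for you from the UI; the bridge name and VLAN ID here are placeholders):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "harvester-br0",
      "vlan": 100,
      "ipam": {}
    }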

Quick Start Harvester

Minimal requirements

  • CPU: x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum; 16-core or above preferred  
  • Memory: 32 GB minimum, 64 GB or above preferred  
  • Disk Capacity: 120 GB minimum, 500 GB or above preferred  
  • Disk Performance: 5,000+ minimal random IOPS per disk (SSD/NVMe). Management nodes (first 3 nodes) must be fast enough for Etcd.  
  • Network Card: 1 Gbps Ethernet minimum, 10Gbps Ethernet recommended  
  • Network Switch: Trunking of ports required for VLAN support  

Installation 

You can install Harvester via ISO or PXE onto your bare metal nodes. Make sure to configure the first node with `Create a Harvester cluster`; all other nodes should be configured with `Join a Harvester cluster`. Read the ISO Install or PXE Boot Install documentation for more detail.

Dashboard

Once you have installed Harvester, you will see the IP address of the dashboard in the bare metal node's terminal.

Enter that IP in your web browser to access the Harvester Dashboard.

Integration with Rancher 

One of the most exciting features in Harvester is the integration with Rancher. Now you can manage your container and virtualization workload in the same Rancher instance, which gives you a unified experience for all your workloads in the data center. 

Notice that one Rancher cluster can manage multiple Harvester clusters, though one Harvester cluster can only be imported into one Rancher cluster. You can now access the Harvester UI via the Rancher UI. Also, you can now easily provision new Kubernetes clusters using the managed Harvester cluster. You can learn more about why we chose to integrate Rancher and Harvester here. 

RKE1 and RKE2 clusters provisioned by Rancher on top of Harvester (which we will refer to as guest clusters) get load balancer and persistent volume support automatically. For more documentation on the integration, please read our docs.

Feedback

Harvester's product and engineering teams are always open to suggestions and feedback. Test out Harvester today and let us know what you think! You can reach us on our Slack channel, submit a request on GitHub, or contact us in the SUSE & Rancher Community. You can keep up to date with Harvester via our open source project page, where you can access our latest docs.

Also, join me and the SUSE & Rancher community team on the 19th of January 2022 at 10 am Pacific Time as we host our global community meetup introducing Harvester. You can also find out more about the GA release here. 

Enjoy Harvester! 

Harvester is now production-ready and generally available  

Tuesday, 21 December, 2021

2021 has been a memorable year for the Harvester team. In May, SUSE hosted the first virtual SUSECON, where we announced the beta release of Harvester, alongside a cast of new innovative open source projects from the SUSE Rancher engineering team. In October, for the first time in two years, we were able to meet our industry peers and the community face-to-face at KubeCon North America where we announced Harvester’s plans to integrate with our leading Kubernetes management platform SUSE Rancher.

Today, we're closing out the year with one more major announcement: Harvester is now production-ready and generally available for our customers and the open source community! Harvester's highly anticipated release marks a major milestone for SUSE: it is the first brand-new product release since SUSE's acquisition of Rancher Labs, and it expands SUSE's portfolio into the hyperconverged infrastructure space.

Why did SUSE build an HCI product?

This year, SUSE made a commitment to our customers and the community to help them 'Choose Open' and innovate across their business using open source solutions. Harvester is an integral piece of SUSE's portfolio, showcasing our commitment to enriching the open source landscape while providing our customers and the community valuable solutions that help them solve their infrastructure challenges.

Harvester is a natural extension to our existing strong background in container management. It takes an open, interoperable approach to hyperconverged infrastructure and addresses common challenges, including managing sprawl, siloing of teams and resource limitations faced by IT operators who need to manage modern environments comprised of both virtualized and containerized workloads.

What’s Harvester?

Harvester is a 100% free-to-use, open source, modern hyperconverged infrastructure solution built on a foundation of cloud native projects including Kubernetes, Longhorn and KubeVirt. It has been designed as an enterprise-ready turnkey solution that gives operators a familiar operating experience, like other proprietary HCI solutions in the market.

Though built on Kubernetes, it does not require any pre-existing knowledge to operate. Its integration with SUSE Rancher gives users the ability to operate their virtualized and container workloads all within the same platform while also creating an easy, low-risk pathway for organizations looking to adopt cloud native solutions into their infrastructure modernization strategy. Learn more about the technical capabilities of Harvester in this blog by Sheng Yang, Engineering Lead for Harvester.

Image 1. Harvester as part of SUSE Rancher Console

Harvester integrates with SUSE Rancher

With today’s GA, one of the biggest milestones the Harvester engineering team has achieved this year is the integration of Harvester into the SUSE Rancher console.

As organizations look to accelerate their IT modernization journey, complexity grows rapidly as teams adopt multiple solutions to manage their ever-expanding environments. Organizations now need tools that help them confidently scale their environments while efficiently managing and governing their stack. Harvester and SUSE Rancher together address these needs by consolidating the management of operations for virtualized and containerized workloads, all accessible in a single Rancher platform instance.

This means both Harvester and Rancher clusters can be managed side by side within a Rancher instance, reducing operators' need to use separate solutions for the two kinds of workloads. Users can access the Harvester UI directly from within the Rancher console. In addition, Harvester clusters have access to the same features available to Rancher clusters, including authentication, role-based access control and cluster provisioning.

Another opportunity with Harvester and Rancher is that organizations early in their modernization journey can use both open source solutions together as a low-risk pathway to adopting cloud native technology across their stack. Both solutions promote innovation by encouraging organizations to build confidence in integrating modern technology to develop cloud native applications. For extra peace of mind, customers who need an additional helping hand can access SUSE's support subscription for Harvester.

Harvester’s general availability extends further than its integration with SUSE Rancher and its ability to consolidate VM and container workloads. Learn more from Robert Sirchia, Senior Technical Evangelist at SUSE, as he explores how Harvester’s cloud-native lightweight nature can be applied at the edge and also used as a platform to modernize applications.

Don't miss the SUSE and Rancher community's Global Online Meetup introducing Harvester on the 19th of January 2022 at 10am Pacific Time, or find a local Harvester meetup near you. Learn more about Harvester here or get started today.

Harvester: A Modern Infrastructure for a Modern Platform

Tuesday, 21 December, 2021

Cloud platforms are not new — they have been around for a few years. And containers have been around even longer. Together, they have changed the way we think about software. Since the creation of these technologies, we have focused on platforms and apps. And who could blame anyone? Containers and Kubernetes let us do things that were unheard of only a few years ago.

What about the software that runs the infrastructure supporting all these advancements? Over the same period, we have seen advancements there too, some in open source but most in proprietary solutions. Sure, there is nothing wrong with running open source on top of a proprietary solution. These systems have become very good at what they do: running virtual machines. But they don't run containers, or container platforms for that matter.

The vast majority of this infrastructure software is proprietary. This means you need two different skill sets to manage each of these — one proprietary, one Kubernetes. That is a lot to put on one team; it's almost unbearable to put on one individual. What if there were an open infrastructure that used the same concepts and management plane as Kubernetes? We could lower the learning curve by managing our hosts the same way we manage our clusters. We trust Kubernetes to manage clusters — why not our hosts?

Harvester: Built on Open Cloud Native Technology

Harvester is a simple, elegant and lightweight hyperconverged infrastructure (HCI) solution built for running virtual machines and Kubernetes clusters on bare metal servers. With Harvester reaching General Availability, we can now manage our hosts with the same concepts and management plane as our clusters. Harvester is a modern infrastructure for a modern platform. Completely open source, it is built on Kubernetes and incorporates other cloud native solutions, including Longhorn and KubeVirt, leveraging all of these technologies transparently to deliver a modern hypervisor. This gives Harvester endless possibilities with all the other projects that integrate with Kubernetes.

This means operators and infrastructure engineers can leverage their existing skill sets and will find in Harvester a familiar HCI experience. Harvester integrates easily into cloud native environments and offers enterprise-grade, turnkey features without the costly overhead of proprietary alternatives — saving both time and money.

A Platform for the Edge

Harvester’s small footprint means it is a great choice for the unique demands of hardware at the edge. Harvester gives operators the ability to deploy and manage VMs and Kubernetes clusters on a single platform. And because it integrates into Rancher, Harvester clusters can be managed centrally using all the great tooling Rancher provides. Edge applications will also benefit from readily available enterprise-grade storage, without costly and specialized storage hardware required. This enables operators to keep compute and storage as close to the user as possible, without sacrificing management and security. Kubernetes is quickly becoming a standard for edge deployments, so an HCI that also speaks this language is beneficial.

Harvester is a great solution for data centers, which come in all shapes and sizes. Harvester’s fully integrated approach means you can use high-density hardware with low-cost local storage. This saves on equipment costs and the amount of rack space required. A Harvester cluster can be as small as three servers, or an entire rack. Yet it can run just as well in branch or small-office server rooms. And all of these locations can be centrally managed through Rancher.

A Platform for Modernizing Applications

Harvester isn't just a platform for building cloud native applications but one you can use to take applications from VMs to clusters. It allows operators to run VMs alongside clusters, giving developers the opportunity to start decomposing those monoliths into cloud native applications. With most applications, this takes months and sometimes years. With Harvester, there isn't a rush: VMs and clusters live side by side with ease. It offers all of this in one platform with one management plane.

As cloud native technologies continue their trajectory as keys to digital transformation, next-gen HCI solutions need to offer functionality and simplicity with the capability to manage containerized and non-containerized workloads, storage and network requirements across any environment.

Conclusion

What’s unique about Harvester? You can use it to manage multiple clusters hosted on VMs or a Kubernetes distribution. It’s 100 percent open source and leverages proven technologies – so why not give it a try to simplify your infrastructure stack?  You’ll get a feature-rich operational experience in a single management platform, with the support of the open-source community behind it. We have seen the evolution of Harvester, from a fledgling open-source project to a full-on enterprise-ready HCI solution.

We hope you take a moment to download and give Harvester a try.

JOIN US at the Harvester Global Online Meetup – January  19 at 10am PT. Our product team will be on hand to answer your questions. Register here.

Hyperconverged Infrastructure and Harvester

Monday, 2 August, 2021

Virtual machines (VMs) have transformed infrastructure deployment and management. VMs are so ubiquitous that I can’t think of a single instance where I deployed production code to a bare metal server in my many years as a professional software engineer.

VMs provide secure, isolated environments hosting your choice of operating system while sharing the resources of the underlying server. This allows resources to be allocated more efficiently, reducing the cost of over-provisioned hardware.

Given the power and flexibility provided by VMs, it is common to find many VMs deployed across many servers. However, managing VMs at this scale introduces challenges.

Managing VMs at Scale

Hypervisors provide comprehensive management of the VMs on a single server. The ability to create new VMs, start and stop them, clone them, and back them up are exposed through simple management consoles or command-line interfaces (CLIs).

But what happens when you need to manage two servers instead of one? Suddenly you find yourself having first to gain access to the appropriate server to interact with the hypervisor. You’ll also quickly find that you want to move VMs from one server to another, which means you’ll need to orchestrate a sequence of shutdown, backup, file copy, restore and boot operations.

Routine tasks performed on one server become just that little bit more difficult with two, and quickly become overwhelming with 10, 100 or 1,000 servers.

Clearly, administrators need a better way to manage VMs at scale.

Hyperconverged Infrastructure

This is where Hyperconverged Infrastructure (HCI) comes in. HCI is a marketing term rather than a strict definition. Still, it is typically used to describe a software layer that abstracts the compute, storage and network resources of multiple (often commodity or whitebox) servers to present a unified view of the underlying infrastructure. By building on top of the virtualization functionality included in all major operating systems, HCI allows many systems to be managed as a single, shared resource.

With HCI, administrators no longer need to think in terms of VMs running on individual servers. New hardware can be added and removed as needed. VMs can be provisioned wherever there is appropriate capacity, and operations that span servers, such as moving VMs, are as routine with 2 servers as they are with 100.

Harvester

Harvester, created by Rancher, is open source HCI software built using Kubernetes.

While Kubernetes has become the de facto standard for container orchestration, it may seem like an odd choice as the foundation for managing VMs. However, when you think of Kubernetes as an extensible orchestration platform, this choice makes sense.

Kubernetes provides authentication, authorization, high availability, fault tolerance, CLIs, software development kits (SDKs), application programming interfaces (APIs), declarative state, node management, and flexible resource definitions. All of these features have been battle tested over the years with many large-scale clusters.

More importantly, Kubernetes orchestrates many kinds of resources beyond containers. Thanks to the use of custom resource definitions (CRDs) and custom operators, Kubernetes can describe and provision any kind of resource.
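As a purely illustrative example (not Harvester's actual schema), a CRD is just another YAML object that teaches the API server a new resource kind, which a custom operator can then watch and act on:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualmachineimages.example.io
spec:
  group: example.io
  scope: Namespaced
  names:
    kind: VirtualMachineImage
    plural: virtualmachineimages
    singular: virtualmachineimage
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                displayName:
                  type: string
                url:
                  type: string

Once applied, `kubectl get virtualmachineimages` works like any built-in resource type.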

By building on Kubernetes, Harvester takes advantage of a well tested and actively developed platform. With the use of KubeVirt and Longhorn, Harvester extends Kubernetes to allow the management of bare metal servers and VMs.

Harvester is not the first time VM management has been built on top of Kubernetes; Rancher’s own RancherVM is one such example. But these solutions have not been as popular as hoped:

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators.

To address this, Harvester does not expose the underlying Kubernetes platform to the end user. Instead, it presents more familiar concepts like VMs, NICs, ISO images and disk volumes. This allows Harvester to take advantage of Kubernetes while giving administrators a more traditional view of their infrastructure.

Managing VMs at Scale

The fusion of Kubernetes and VMs provides the ability to perform common tasks such as VM creation, backups, restores, migrations, SSH-Key injection and more across multiple servers from one centralized administration console.

Consolidating virtualized resources like CPU, memory, network, and storage allows for greater resource utilization and simplified administration, allowing Harvester to satisfy the core premise of HCI.

Conclusion

HCI abstracts the resources exposed by many individual servers to provide administrators with a unified and seamless management interface, providing a single point to perform common tasks like VM provisioning, moving, cloning, and backups.

Harvester is an HCI solution leveraging popular open source projects like Kubernetes, KubeVirt, and Longhorn, but with the explicit goal of not exposing Kubernetes to the end user.

The end result is an HCI solution built on the best open source platforms available while still providing administrators with a familiar view of their infrastructure.

Download Harvester from the project website and learn more from the project documentation.

Meet the Harvester developer team! Join our free Summer is Open session on Harvester: Tuesday, July 27 at 12pm PT and on demand. Get details about the project, watch a demo, ask questions and get a challenge to complete offline.


Announcing Harvester Beta Availability

Friday, 28 May, 2021

It has been five months since we announced project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Since then, we’ve received a lot of feedback from the early adopters. This feedback has encouraged us and helped in shaping Harvester’s roadmap. Today, I am excited to announce the Harvester v0.2.0 release, along with the Beta availability of the project!

Let’s take a look at what’s new in Harvester v0.2.0.

Raw Block Device Support

We've added raw block device support in v0.2.0. Since it's a change that's mostly under the hood, the updates might not be immediately obvious to end users, so let me explain in more detail.

In Harvester v0.1.0, the image to VM flow worked like this:

  1. Users added a new VM image.

  2. Harvester downloaded the image into the built-in MinIO object store.

  3. Users created a new VM using the image.

  4. Harvester created a new volume, and copied the image from the MinIO object store.

  5. The image was presented to the VM as a block device, but it was stored as a file in the volume created by Harvester.

This approach had a few issues:

  1. Read/write operations to the VM volume needed to be translated into reading/writing the image file, which performed worse compared to reading/writing the raw block device, due to the overhead of the filesystem layer.

  2. If one VM image was used by multiple VMs, it was replicated many times in the cluster. This is because each VM had its own copy of the volume, even though the majority of the content was likely the same, since the volumes came from the same image.

  3. The dependency on MinIO to store the images meant Harvester had to keep MinIO highly available and expandable. Those requirements placed an extra burden on the Harvester management plane.

In v0.2.0, we took another approach to tackle the problem, which resulted in a simpler solution with better performance and less duplicated data:

  1. Instead of an image file on the filesystem, now we’re providing the VM with raw block devices, which allows for better performance for the VM.

  2. We've taken advantage of a new feature called Backing Image in Longhorn v1.1.1 to reduce unnecessary copies of the VM image. Now the VM image is served as a read-only layer for all the VMs using it, and Longhorn is responsible for creating a copy-on-write (COW) layer on top of that image for each VM to use.

  3. Since Longhorn now manages the VM image using the Backing Image feature, the dependency on MinIO can be removed.

Image 02
A comprehensive view of images in Harvester

From the user experience perspective, you may have noticed that importing an image is now instantaneous, while starting a VM based on a new image takes a bit longer due to the image download process in Longhorn. Later on, any other VMs using the same image will take significantly less time to boot up compared to the previous v0.1.0 release, and disk I/O performance will be better as well.

VM Live Migration Support

In preparation for the future upgrade process, VM live migration is now supported in Harvester v0.2.0.

VM live migration allows a VM to migrate from one node to another, without any downtime. It’s mostly used when you want to perform maintenance work on one of the nodes or want to balance the workload across the nodes.

One thing worth noting: because the VM's IP can change after migration when using the default management network, we highly recommend using the VLAN network instead. Otherwise, you might not be able to keep the same IP for the VM after it migrates to another node.

You can read more about live migration support here.

VM Backup Support

We’ve added VM backup support to Harvester v0.2.0.

The backup support provides a way for you to back up your VM images outside of the cluster.

To use the backup/restore feature, you need an S3-compatible endpoint or an NFS server; the destination of the backup is referred to as the backup target.

You can get more details on how to set up the backup target in Harvester here.

Image 03
Easily manage and operate your virtual machines in Harvester

In the meantime, we're also working on a snapshot feature for the VMs. In contrast to backups, snapshots store the image state inside the cluster, giving VMs the ability to revert to a previous snapshot without copying any data outside the cluster. That makes snapshots a quick way to try something experimental, but not ideal for keeping data safe if the cluster goes down.

PXE Boot Installation Support

PXE boot installation is widely used in the data center to automatically populate bare-metal nodes with desired operating systems. We’ve also added the PXE boot installation in Harvester v0.2.0 to help users that have a large number of servers and want a fully automated installation process.

You can find more information regarding how to do the PXE boot installation in Harvester v0.2.0 here.

We’ve also provided a few examples of doing iPXE on public bare-metal cloud providers, including Equinix Metal. More information is available here.

Rancher Integration

Last but not least, Harvester v0.2.0 now ships with a built-in Rancher server for Kubernetes management.

This was one of the most requested features since we announced Harvester v0.1.0, and we’re very excited to deliver the first version of the Rancher integration in the v0.2.0 release.

For v0.2.0, you can use the built-in Rancher server to create Kubernetes clusters on top of your Harvester bare-metal clusters.

To start using the built-in Rancher in Harvester v0.2.0, go to Settings, then set the rancher-enabled option to true. Now you should be able to see a Rancher button on the top right corner of the UI. Clicking the button takes you to the Rancher UI.

Harvester and Rancher share the authentication process, so once you’re logged in to Harvester, you don’t need to redo the login process in Rancher and vice versa.

If you want to create a new Kubernetes cluster using Rancher, you can follow the steps here. A reminder that VLAN networking needs to be enabled for creating Kubernetes clusters on top of Harvester, since the default management network cannot guarantee a stable IP for the VMs, especially after reboot or migration.

What’s Next?

Now with v0.2.0 behind us, we’re working on the v0.3.0 release, which will be the last feature release before Harvester reaches GA.

We’re working on many things for v0.3.0 release. Here are some highlights:

  • Built-in load balancer
  • Rancher 2.6 integration
  • Replace K3OS with a small footprint OS designed for the container workload
  • Multi-tenant support
  • Multi-disk support
  • VM snapshot support
  • Terraform provider
  • Guest Kubernetes cluster CSI driver
  • Enhanced monitoring

You can get started today and give Harvester v0.2.0 a try via our website.

Let us know what you think via the Rancher User Slack #harvester channel. And start contributing by filing issues and feature requests via our github page.

Enjoy Harvester!

Meet Harvester, an HCI Solution for the Edge

Tuesday, 6 April, 2021

About six months ago, I learned about a new project called Harvester, our open source hyperconverged infrastructure (HCI) software built using Kubernetes, libvirt, KubeVirt, Longhorn and MinIO. At first, the idea of managing VMs via Kubernetes did not seem very exciting. "Why would I not just containerize the workloads or orchestrate the VM natively via KVM, Xen or my hypervisor of choice?" That approach makes a lot of sense, except for one thing: the edge. At the edge, Harvester provides a solution for a nightmarish technical challenge: when one host must run the dreaded Windows legacy applications alongside modern containerized microservices. In this blog and the following tutorials, I'll map out an edge stack and set up and install Harvester. Later I'll use Fleet to orchestrate the entire host with OS and Kubernetes updates. We'll then deploy the whole thing with a bit of Terraform, completing the solution.

At the edge, we often lack the necessities such as a cloud or even spare hardware. Running Windows VMs alongside your Linux containers provides much-needed flexibility while using the Kubernetes API to manage the entire deployment brings welcome simplicity and control. With K3s and Harvester (in app mode), you can maximize your edge node’s utility by allowing it to run Linux containers and Windows VMs, down to the host OS orchestrated via Rancher’s Continuous Delivery (Fleet) GitOps deployment tool. 

At the host, we start with SLES and Ubuntu. The system-update operator can be customized for other Linux operating systems.

We’ll use K3s as our Kubernetes distribution. K3s’ advantage here is indisputable: small footprint, less chatty datastore (SQLite when used in single master mode), and removal of cloud-based bloat present in most Kubernetes distributions, including our RKE.

Harvester has two modes of operation: HCI mode, where it can attach to another cluster as a VM hosting node, and app mode, where it runs as a Helm application deployed into an existing Kubernetes cluster. The app mode can be installed and operated via a Helm chart and CRDs, providing our node with the greatest flexibility.

Later we'll orchestrate it via a Rancher 2.5 cluster and our Continuous Delivery functionality, powered by Fleet. Underneath, Harvester uses libvirt, KubeVirt, Multus and MinIO, installed by default with the Helm chart. We'll add a Windows image and deploy a VM via a CRD once we finish installing Harvester. At the end, I'll provide scripts to get the MVP deployed in Google Cloud Platform (GCP), so you can play along at home. Note that since nested VMs require special CPU functionality, we can currently only test in GCP or Digital Ocean.

In summary, Rancher Continuous Delivery (Fleet), Harvester, and K3s on top of Linux can provide a solid edge application hosting solution capable of scaling to many teams and millions of edge devices. While it’s not the only solution, and you can use each component individually with other open source components, this is one solution that you can implement today, complete with instructions and tutorials. Drop me a note if it is helpful for you, and as always, you can reach out to our consulting services for expert assistance and advice.  

 

Tutorial Sections:

Setting up the Host Image on GCP

Walks you through the extra work needed to enable nested virtualization in GCP.

Create Test Cluster with K3s

Deploy a cluster and get ready for Harvester.

Deploying Harvester with Helm

Deploy the Harvester app itself.

Setting Up and Starting a Linux VM

Testing with an openSUSE Leap 15 SP2 JeOS image

Setting Up and Starting a Windows VM

Testing with a pre-licensed Windows 10 Pro VM

Automating Harvester via Terraform

Orchestrating the entire rollout in Terraform.

Setting Up The Host Image on GCP

When choosing the host for Harvester, k3OS requires the least amount of customization for K3s. However, GCP (and Digital Ocean) requires some extra configuration to get nested virtualization working so we can run Harvester. I'll show the steps with SLES, k3OS and Ubuntu 20.04 LTS, since the k3OS image build itself uses Ubuntu. Both the SLES and Ubuntu images need a special license key passed in so GCP places them on hosts that support nested virtualization. You can find more info here.

Do not forget to initialize gcloud before starting. After deployment, be sure to open up port 22 to enable access; you can do so with the following gcloud command.

gcloud compute firewall-rules create allow-tcpssh --allow tcp:22

Build a Customized SUSE Linux Enterprise Server (SLES) Image

GCP publishes public images that can be used. SLES public images are stored in the project `suse-cloud`; we'll take a standard image and then recreate it with the needed VM license. We'll do this by first copying a public image onto a disk in our current project.

gcloud compute disks create sles-15-sp2 --image-project suse-cloud --image-family sles-15 --zone us-central1-b

That will create a disk called sles-15-sp2 based on the sles-15 family in zone us-central1-b. Next we'll create an image local to our project that uses that disk and includes the nested VM license.

gcloud compute images create sles-15-sp2 --source-disk sles-15-sp2 --family sles-15 --source-disk-zone us-central1-b --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

That’s it! (Until we get to K3s configuration.)

Build a Customized Ubuntu Image

The process to create the Ubuntu image is much the same.

First, we’ll create a new disk in our project and load the public ubuntu 20.04 image.

gcloud compute disks create ubuntu-2004-lts --image-project ubuntu-os-cloud --image-family ubuntu-2004-lts --zone us-central1-b

Then we’ll build a new image locally in our project by passing the special key.

gcloud compute images create ubuntu-2004-lts --source-disk ubuntu-2004-lts --family ubuntu-2004-lts --source-disk-zone us-central1-b --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

You can then move on to K3s setup unless you are configuring k3OS.

Create and Build a Custom k3OS Image

Since there is no k3OS image published currently for GCP’s nested virtualization, we’ll build our own.

Prerequisites

To do the build itself, you'll need a clean Ubuntu 20.04 installation with internet access. I used a new Multipass instance with standard specs and it worked great. You can get Multipass here:

multipass launch --name gcp-builder

You’ll also need a GCP account with an active project.

Set Up the Tools Inside Local Builder VM

First, we need to install the GCP CLI. For k3os, we’ll need Packer, which we’ll install later.

Add Google to your ubuntu source list.

echo "deb https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

Then we’ll need to add their key:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Update sources and install the cloud SDK:

sudo apt-get update && sudo apt-get install google-cloud-sdk

Last, we’ll initialize the SDK by logging in and selecting our project. Go ahead and select your zone. I used us-central1-b for all the instructions.

gcloud init

Create k3os Image (k3os)

We’ll need another tool called Packer to build the k3os image:

sudo apt-get install packer

The k3os GitHub repo comes with a handy GCP builder. Skip this part to use the Ubuntu image directly. Otherwise:

git clone https://github.com/rancher/k3os.git

And check out the latest non-RC release:

git checkout v0.11.1

For the rest, we’ll cd into the k3os/package/packer/gcp directory:

cd k3os/package/packer/gcp

Here I manually edited the template.json file and simplified the image name and region to match my default (us-central1-b):

vi template.json

Update builders[0].image_name to 'rancher-k3os' and the variables.region value to 'us-central1-b'. Then set builders[0].image_licenses to ["projects/vm-options/global/licenses/enable-vmx"], the GCP license needed for nested virtualization.
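The relevant fragment of template.json then looks roughly like this (only the fields mentioned above are shown; everything else in the file is left as generated):

{
  "variables": {
    "region": "us-central1-b"
  },
  "builders": [
    {
      "image_name": "rancher-k3os",
      "image_licenses": ["projects/vm-options/global/licenses/enable-vmx"]
    }
  ]
}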

Add your project’s ID to the environment as GCP_PROJECT_ID.

export GCP_PROJECT_ID=<<YOUR GCP PROJECT ID>>

Add SSH Public Key to Image

I'm not sure why this was required, but the standard Packer template does not provide SSH access. I added the local google_compute_engine public key from ~/.ssh to the ssh_authorized_keys list in config.yml.

You can find more configuration options in the configuration section of the installation docs: https://github.com/rancher/k3os#configuration
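A minimal config.yml fragment for the key injection (the field name follows the k3OS docs; the key itself is a placeholder for the contents of ~/.ssh/google_compute_engine.pub):

ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza... user@workstation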

Google Cloud Service Account

Create a new service account with Cloud Build Service Account permissions. You can do this via the UI or the CLI. Google Console: https://console.cloud.google.com/iam-admin/serviceaccounts

Packer also provides GCP service account creation instructions for the CLI: https://www.packer.io/docs/builders/googlecompute

Either way, save the resulting configuration file as account.json in the gcp directory.

Finally, let’s run the image builder.

packer build template.json

 

Create a Test Cluster with K3s

To create the VM in GCP based on our new image, we need to specify the minimum CPU family that includes the nested virtualization tech and enough drive space to run everything.

SUSE Enterprise Linux 15 SP2

https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances

gcloud compute instances create harvester --zone us-central1-b --min-cpu-platform "Intel Haswell" --image sles-15-sp2 --machine-type=n1-standard-8 --boot-disk-size=200GB --tags http-server,https-server --can-ip-forward

You will then need to install libvirt and qemu-kvm, and set up an AppArmor exception.

Ubuntu 20.04

The following command should create an ubuntu instance usable for our testing:

gcloud compute instances create harvester --zone us-central1-b --min-cpu-platform "Intel Haswell" --image ubuntu-2004-lts --machine-type=n1-standard-8 --boot-disk-size=200GB

Connecting

If you set the firewall rule from the previous step, you can connect using the SSH key that Google created (~/.ssh/google_compute_engine). When connecting to the Ubuntu instance, use the ubuntu username.
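The post doesn't show the K3s installation itself; assuming a plain SLES or Ubuntu host, a single-node K3s server is typically installed with the standard script before moving on to Harvester:

curl -sfL https://get.k3s.io | sh -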

 

Deploying Harvester with Helm

Deploying Harvester is a three-step process and requires you to install Helm on the new cluster. Connect via SSH, install Helm and copy the K3s kubeconfig into place:

export VERIFY_CHECKSUM=false
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

Then clone the Harvester repository. 

git clone https://github.com/rancher/harvester

Then browse to harvester/deploy/charts:

cd harvester/deploy/charts

Then install the chart itself. This will take a few minutes:

helm upgrade --install harvester ./harvester --namespace harvester-system --set longhorn.enabled=true,minio.persistence.storageClass=longhorn,service.harvester.type=NodePort,multus.enabled=true --create-namespace

However, this hides the complexity of what is happening underneath. It installs Longhorn, MinIO and Multus along with the KubeVirt components. You can install them separately by disabling them in the Helm installation.

Note: Currently, it is not possible to use another storage type. However, leaving the defaults would not install anything and would result in a broken system, so these values must be specified even though there is no real functional choice.

Setting Up and Starting a Linux VM

Currently, setting up a VM takes some effort. You must place a compatible image at an accessible URL so that Harvester can download it. For the Linux VM, we'll use the UI, switching to CRDs for the Windows VM.

A Working VM

Harvester does not support all formats, but kvm and compressed kvm images will work.

Upload to a URL

I uploaded mine to s3 to make it easy, but any web-accessible storage will do.

You can use the manifest below to import an openSUSE Leap 15.2 JeOS image:

apiVersion: harvester.cattle.io/v1alpha1
kind: VirtualMachineImage
metadata:
  name: image-openjeos-leap-15sp2
  generateName: image-openjeos-leap-15sp2
  namespace: default
  annotations: {}
  labels: {}
spec:
  displayName: opensles-leap-jeos-15-sp2
  url: >-
    https://thecrazyrussian.s3.amazonaws.com/openSUSE-Leap-15.2-JeOS.x86_64-15.2-kvm-and-xen-Build31.186.qcow2

Download Via Harvester

Browse over to the Images page, select "add a new", and input the URL; Harvester will download the image and store it in the included MinIO installation.

Create a VM from Image

Once the image is fully uploaded, we can create a VM based on the image.

Remote Connection: VNC

If everything is working properly, you should be able to VNC into the new VM.

 

Setting Up and Starting a Windows VM

The process for setting up a Windows VM is a bit more painful than for a Linux VM. We need to pass options into the underlying YAML, so we'll use these sample Windows CRDs and apply them to create our VM after the image download.

Upload CRD for Image

We can use this CRD to grab a Windows CD image that I have stored at a web-accessible location:

apiVersion: harvester.cattle.io/v1alpha1
kind: VirtualMachineImage
metadata:
  name: image-windows10
  generateName: image-windows10
  namespace: default
  annotations:
    field.cattle.io/description: windowsimage
  labels: {}
spec:
  displayName: windows
  url: 'https://thecrazyrussian.s3.amazonaws.com/Win10_2004_English_x64.iso'

Upload CRD for VM

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
  finalizers:
    - wrangler.cattle.io/VMController.UnsetOwnerOfDataVolumes
  labels: {}
  name: 'windows10'
  namespace: default
spec:
  dataVolumeTemplates:
    - apiVersion: cdi.kubevirt.io/v1alpha1
      kind: DataVolume
      metadata:
        annotations:
          cdi.kubevirt.io/storage.import.requiresScratch: 'true'
          harvester.cattle.io/imageId: default/image-windows10
        creationTimestamp: null
        name: windows-cdrom-disk-win
      spec:
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 20Gi
          storageClassName: longhorn
          volumeMode: Filesystem
        source:
          http:
            certConfigMap: importer-ca-none
            url: 'http://minio.harvester-system:9000/vm-images/image-windows10'
    - apiVersion: cdi.kubevirt.io/v1alpha1
      kind: DataVolume
      metadata:
        creationTimestamp: null
        name: windows-rootdisk-win
      spec:
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 32Gi
          storageClassName: longhorn
          volumeMode: Filesystem
        source:
          blank: {}
  running: true
  template:
    metadata:
      annotations:
        harvester.cattle.io/diskNames: >-
          ["windows-cdrom-disk-win","windows-rootdisk-win","windows-virtio-container-disk-win"]
        harvester.cattle.io/sshNames: '[]'
      creationTimestamp: null
      labels:
        harvester.cattle.io/creator: harvester
        harvester.cattle.io/vmName: windows
    spec:
      domain:
        cpu:
          cores: 4
        devices:
          disks:
            - bootOrder: 1
              cdrom:
                bus: sata
              name: cdrom-disk
            - disk:
                bus: virtio
              name: rootdisk
            - cdrom:
                bus: sata
              name: virtio-container-disk
            - disk:
                bus: virtio
              name: cloudinitdisk
          inputs:
            - bus: usb
              name: tablet
              type: tablet
          interfaces:
            - masquerade: {}
              model: e1000
              name: default
        machine:
          type: q35
        resources:
          requests:
            memory: 4Gi
      hostname: windows
      networks:
        - name: default
          pod: {}
      volumes:
        - dataVolume:
            name: windows-cdrom-disk-win
          name: cdrom-disk
        - dataVolume:
            name: windows-rootdisk-win
          name: rootdisk
        - containerDisk:
            image: kubevirt/virtio-container-disk
          name: virtio-container-disk
        - cloudInitNoCloud:
            userData: |
              #cloud-config
              ssh_authorized_keys: []
          name: cloudinitdisk

Once both CRDs finish processing, the VM should start booting and you will be able to run the normal Windows install process via the console or VNC.

When it is time to select which drive to install Windows on, you'll need to load the special VirtIO driver from the CD already included in the VM above.

Select “Install Now”

Then “I do not have a product key”

Select “Windows 10 Pro”

“Accept the license”

Install the Virtio Storage Driver

You will then have to select the disk. Instead, select “Load Driver”

Select the AMD64 driver (depending on your VM) and load it.

You should be able to select your (non-empty) hard disk and continue installing Windows.

Automating Harvester via Terraform

We can automate all these components. For example, we can script the VM creation via Bash or Terraform, and we can use Fleet or any other GitOps tool to push out individual VM and image CRDs and updates to the entire edge device, from the OS to the applications. Let's start with an integrated Terraform script responsible for deploying Rancher and Harvester on a single-node K3s cluster in GCP. You must complete the previous steps for this to work.

git clone https://github.com/thecrazyrussian/terraform-harvester.git
cd terraform-harvester

Next, we need our own terraform.tfvars. Copy it from the provided example file (infra/terraform.tfvars.example):

cp terraform.tfvars.example terraform.tfvars

And edit it in our favorite editor:

vi terraform.tfvars

Once you have set all the variables for your Route 53 zone and GCP account, and added your credentials.json for GCP to the infra/ directory, you should be ready to `apply`. That will stand up a single-node Rancher/Harvester cluster and deploy a Windows VM onto it for customization.
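
The Terraform workflow itself is the standard one. A minimal sketch, assuming the configuration lives in the infra/ directory of the cloned repository:

cd infra
terraform init    # downloads the providers and modules declared by the configuration
terraform plan    # review the resources that will be created in your GCP project and Route 53 zone
terraform apply   # stands up the single-node K3s/Rancher/Harvester node and the Windows VM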

Browse to the NodePort where Harvester made itself available, log in with admin/password and open a console to the VM to configure it as you normally would.
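
If you are unsure which NodePort that is, one way to find it is to list the services on the K3s cluster and look for the Harvester UI entry (the exact service name and namespace depend on how the chart was deployed, so treat this as a starting point):

kubectl get svc -A | grep -i harvester   # look for a service of type NodePort and note its port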

 

If you enjoyed this demo, head over to the SUSE & Rancher Community and let us know how you plan to use Harvester.

Announcing Harvester: Open Source Hyperconverged Infrastructure (HCI) Software

Wednesday, 16 December, 2020

Today, I am excited to announce project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Harvester provides fully integrated virtualization and storage capabilities on bare-metal servers. No Kubernetes knowledge is required to use Harvester.

Why Harvester?

In the past few years, we’ve seen many attempts to bring VM management into container platforms, including our own RancherVM, and other solutions like KubeVirt and Virtlet. We’ve seen some demand for solutions like this, mostly for running legacy software side by side with containers. But in the end, none of these solutions have come close to the popularity of industry-standard virtualization products like vSphere and Nutanix.

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators. They are familiar with concepts like ISO images, disk volumes, NICs and VLANs – not concepts like pods and PVCs.

Enter Harvester.

Project Harvester is an open source alternative to traditional proprietary hyperconverged infrastructure software. Harvester is built on top of cutting-edge open source technologies including Kubernetes, KubeVirt and Longhorn. We’ve designed Harvester to be easy to understand, install and operate. Users don’t need to understand anything about Kubernetes to use Harvester and enjoy all the benefits of Kubernetes.

Harvester v0.1.0

Harvester v0.1.0 has the following features:

Installation from ISO

You can download the ISO from the release page on GitHub and install it directly on bare-metal nodes. During the installation, you can choose to create a new cluster or add the current node to an existing cluster. Harvester will automatically set up the cluster based on the information you provide.

Install as a Helm Chart on an Existing Kubernetes Cluster

For development purposes, you can install Harvester on an existing Kubernetes cluster. The nodes must be able to support KVM through either hardware virtualization (Intel VT-x or AMD-V) or nested virtualization.
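
A quick way to verify that a node can support KVM is to check the CPU virtualization flags and kernel modules; a minimal sketch:

grep -cE '(vmx|svm)' /proc/cpuinfo   # non-zero means Intel VT-x or AMD-V is exposed to this node
lsmod | grep kvm                     # kvm plus kvm_intel or kvm_amd should be loaded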

VM Lifecycle Management

Powered by KubeVirt, Harvester supports creating/deleting/updating operations for VMs, as well as SSH key injection and cloud-init.
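
The UI is the intended interface, but since the VMs are regular KubeVirt objects, they can also be inspected and driven with kubectl if you have access to the underlying cluster. A rough sketch, assuming a VM named demo in the default namespace (the name is only an illustration):

kubectl get virtualmachines           # list VM definitions (shortname: vm)
kubectl get virtualmachineinstances   # list running instances (shortname: vmi)

# Stop and start a VM by toggling its running field
kubectl patch vm demo --type merge -p '{"spec":{"running":false}}'
kubectl patch vm demo --type merge -p '{"spec":{"running":true}}'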

Harvester also provides a graphical console and a serial port console for users to access the VM in the UI.

Storage Management

Harvester has a built-in, highly available block storage system powered by Longhorn. It uses the storage space on the cluster nodes to provide highly available storage to the VMs inside the cluster.
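
Under the hood this shows up as a regular StorageClass, so any PersistentVolumeClaim that references it is backed by a replicated Longhorn volume. A minimal sketch, assuming the class is named longhorn as in the VM manifest earlier:

kubectl get storageclass

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
EOF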

Networking Management

Harvester provides several different options for networking.

By default, each VM inside Harvester will have a management NIC, powered by Kubernetes overlay networking.

Users can also add additional NICs to the VMs. Currently, VLAN is supported.

The multi-network functionality in Harvester is powered by Multus.

Image Management

Harvester has a built-in image repository, allowing users to easily download/manage new images for the VMs inside the cluster.

The image repository is powered by MinIO.

Image 01

Install

To install Harvester, just load the Harvester ISO into your bare-metal machine and boot it up.

Image 02

For the first node where you install Harvester, select Create a new Harvester cluster.

Next, you will be prompted to set the password used to log in to the host console, as well as the “Cluster Token.” Other nodes will need this token later to join the same cluster.

Image 03

Then you will be prompted to choose the NIC that Harvester will use. The selected NIC will be used as the network for the management and storage traffic.

Image 04

Once everything has been configured, you will be prompted to confirm the installation of Harvester.

Image 05

Once installed, the host will be rebooted and boot into the Harvester console.

Image 06

Later, when you are adding a node to the cluster, you will be prompted to enter the management address (shown above) as well as the cluster token you set when creating the cluster.

See here for a demo of the installation process.

Alternatively, you can install Harvester as a Helm chart on your existing Kubernetes cluster, if the nodes in your cluster have hardware virtualization support. See here for more details. And here is a demo using Digital Ocean which supports nested virtualization.
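
The chart location and values are documented in the Harvester repository, so follow the linked docs for the exact procedure; the commands below are only an illustrative sketch, and the repository path, chart path and namespace are assumptions:

# Illustrative only: paths and namespace are assumptions; follow the official docs for your release
git clone https://github.com/harvester/harvester.git
cd harvester
helm install harvester ./deploy/charts/harvester --namespace harvester-system --create-namespace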

Usage

Once installed, you can use the management URL shown in the Harvester console to access the Harvester UI.

The default user name/password is documented here.

Image 07

Once logged in, you will see the dashboard.

Image 08

The first step to create a virtual machine is to import an image into Harvester.

Select the Images page and click the Create button. Fill in the URL field, and the image name will be filled in automatically for you.

Image 09

Then click Create to confirm.

You will see the real-time progress of creating the image on the Images page.

Image 10

Once the image is ready, you can create a VM from it.

Select the Virtual Machine page, and click Create.

Image 11

Fill in the parameters needed for creation, including volumes, networks, cloud-init, etc. Then click Create.

The VM will be created shortly.

Image 12

Once created, click the Console button to get access to the console of the VM.

Image 13

See here for a UI demo.

Current Status and Roadmap

Harvester is in the early stages. We’ve just released the v0.1.0 (alpha) release. Feel free to give it a try and let us know what you think.

We have the following items in our roadmap:

  1. Live migration support
  2. PXE support
  3. VM backup/restore
  4. Zero downtime upgrade

If you need any help with Harvester, please join us at either our Rancher forums or Slack, where our team hangs out.

If you have any feedback or questions, feel free to file an issue on our GitHub page.

Thank you and enjoy Harvester!

Innovation without Disruption: Introducing SUSE Linux Enterprise 15 SP4 and Agility

Monday, 20 June, 2022

In a production environment, where applications must be flexible at deployment, run and rollout time, agility should be one of the main considerations when building or evolving your platform.

SUSE Linux Enterprise Server is a modern, modular operating system for both multimodal and traditional IT. In this article, I’ll provide a high-level overview of the features, capabilities and limitations of SUSE Linux Enterprise Server 15 SP4 and highlight important product updates. SUSE Linux Enterprise Server brings security, agility and resiliency to your workloads and your wider ecosystem; here, I am going to focus on agility. SUSE Linux Enterprise Server also now supports KubeVirt.

Regarding agility, some relevant offerings from SUSE include:

  • Base Container Images (BCI): BCI brings all the SLES (SUSE Linux Enterprise Server) experience into container workloads, letting you build your applications in a secure, performant environment with multi-stage builds.
  • Harvester HCI (HyperConverged Infrastructure) (KubeVirt): Harvester is a modern HCI solution that bridges the gap between HCI software and the cloud-native ecosystem, using technologies like Longhorn and KubeVirt to provide storage and virtualization capabilities. It can attach multiple network interfaces to the virtual machines and provides isolation capabilities within the architecture. With Harvester and Kubernetes, you no longer need to manage traditional HCI infrastructure and cloud-native infrastructure separately.
  • SUSE Manager HUB: Scale your infrastructure and manage thousands of servers through a hub implementation of SUSE Manager.

Why SLE BCI?

While Alpine is the most widely used base image, for enterprise use cases you should weigh a few more variables before making a choice. Here are some of the reasons why SLE BCI (which I will shorten to simply BCI from here on) is potentially a great fit.

  • Maximum security: When it comes to developing applications, the world is moving and working in a cloud native ecosystem because of its emphasis on flexibility, agility and cost effectiveness. However, application security is often an afterthought in the initial stages of developing a new app. If developers do not choose their base image wisely, their application could be affected by security vulnerabilities, or it simply will not pass the required security certifications. When developing the SLE family of products, SUSE worked to ensure they meet the highest levels of security and compliance, including FIPS (Federal Information Processing Standard), EAL4+, FSTEC, USG, CIS (Center for Internet Security) and DISA/STIG. All this work flows downstream to SLE BCI, making it one of the industry’s most secure base images for enterprise developers or independent software vendors to leverage.
  • Available images: SUSE provides two sets of images through its registry: the base images (bci-base, bci-minimal, bci-micro, bci-init) and the language-specific images (Go, Rust, OpenJDK, Python, Ruby and more). Check out the registry! A quick pull-and-run sketch follows this list.
  • Supportability: One of the key factors that made me give BCI a try is the supportability matrix. For testing an application locally or for a proof of concept, an Alpine or a language/runtime image is fine, but when it comes to building an enterprise-grade application, sooner or later I will need to move to a supported base. SUSE fully supports bci-base, and customers with an active subscription agreement can open support cases or request new features through the official channels. Something else that captured my attention: BCI’s supportability is not tied to the underlying host where the application runs, which allows more flexibility and mixed ecosystems while keeping your application covered by the SUSE support umbrella.
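
To get a feel for the images, you can pull one from the SUSE registry and run it. A quick sketch with Podman (Docker works the same way); the tags shown are assumptions, so pick whatever tag matches your target release:

# Pull the SLE BCI base image and inspect it
podman pull registry.suse.com/bci/bci-base:15.4
podman run --rm -it registry.suse.com/bci/bci-base:15.4 cat /etc/os-release

# Language-specific images follow the same naming pattern
podman pull registry.suse.com/bci/golang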

SUSE Manager Hub

Ecosystems need to scale as required. Managing servers in a lab is not comparable to managing multiple production environments, where keeping servers healthy matters as much as complying with security standards. When it comes to managing an environment, whether it is pure SUSE or mixed, there are some aspects to take into consideration:

  • Compliance: through templates and the automation of new deployments, every new element or operating system follows the compliance definition for the ecosystem and for each of the environments you have defined.
  • Security: An agile environment requires new features to be tested and newly discovered vulnerabilities to be patched. Your ecosystem is only as secure as the weakest element you have deployed. With centralized patch, configuration and package management, you will be aware of the vulnerabilities affecting your entire ecosystem and can design your update or deployment strategy accordingly.
  • Health: as part of day-2 operations, SUSE Manager centralizes health monitoring, helping you manage the risk of business disruption and downtime.
  • Scalability: with new elements coming into the environment, it is also important to manage the infrastructure in a supported, feasible and performant manner. SUSE provides scalability up to one million clients in a hub-based architecture. Multiple SUSE Manager servers can be managed from a single hub node, aggregating clients and attaching them to a specific proxy server that is, in turn, managed by its own manager. This gives you a centralized reporting database, which is helpful because you do not have to query each server to monitor a specific environment or subset of clients; everything is managed from a central hub. This architecture also adds features for complex environments or specific compliance requirements. For example, for multi-tenancy you can use different managers to isolate server configurations. Check out the SUSE Manager product page for more information.
  • Monitoring: whether SUSE Manager is installed as a hub or standalone, each environment needs to be reported on in a way that lets you see the relevant information at a single glance. Ecosystems need to be agile and adaptable: deploying new servers, decommissioning the ones you no longer need and staying aware of new elements added from various sources. SUSE Manager can deploy multiple probes that you can configure to watch the most critical elements or the most relevant events for you. SUSE Manager uses Prometheus to monitor the elements and Grafana for the dashboards. You are not restricted to what comes with the product; you can create customized dashboards to organize and show the information in the way that is most relevant to you. In scenarios where monitoring comes from third-party software, SUSE Manager Monitoring can pull data from one or more external sources and use it. And no matter how you evolve your ecosystem, whether through the deployment templates or external deployers, SUSE Manager can use its Service Discovery features to look for potential monitoring targets and add dynamic definitions to a living environment.

Trento

SAP environments are complex systems designed to solve complex challenges. They consist of several pieces, including databases, high availability systems, application servers and workloads. No matter where you deploy, on premises or in the cloud, all those pieces need to integrate with each other, each with its own setup processes and configurations. This makes SAP environments hard to deploy, configure and manage. Usually, the initial deployment and configuration of SAP requires enterprise admins and third-party integrators to reference SAP notes, which is a time- and resource-consuming task.

The SAP setup process consists of several manual steps and configurations needed to deploy and maintain the software successfully. With so many elements to configure and handle, misconfigurations and human errors can lead to unexpected downtime. SUSE and SAP have been working together for the last 20 years to build a stable integration between SAP and SUSE Linux Enterprise Server for SAP Applications, creating an operating system designed and certified for running SAP systems, databases and workloads.

Deploying and maintaining SAP environments is not “fire and forget.” It requires ongoing maintenance and monitoring of the status of the hosts, systems, databases and high availability pieces, and that usually means finding someone who can handle an extremely specific system. This is where Trento comes to the table. Trento is a containerized solution that provides a single console to discover and manage all SAP system components: hosts, high availability setups, databases and HANA databases. Trento is the way to safeguard SAP ecosystems. The user is notified when a bad configuration or a missing setup step is detected on any system, and gets recommendations that cut down time-consuming tasks such as performing daily manual reviews of the systems or digging through the SAP documentation for a specific item. Trento is the centralized piece of the SAP infrastructure where the user can see the status of the ecosystem on a single dashboard, get recommendations on the best configuration for a specific environment and ensure the SAP ecosystem is deployed and running according to best practices, leveraging SUSE’s expertise with SAP. Within SUSE Linux Enterprise Server for SAP Applications, Trento is a first-class citizen that takes advantage of how well the operating system and the SAP ecosystem work together.

Conclusion

SUSE provides a stack to manage your infrastructure components, with a focus on agility without renouncing stability or security. This stack includes SUSE Manager, BCI images, Trento and Harvester. SUSE can manage multi-vendor ecosystems in which SUSE systems and other operating systems are managed, patched and analyzed, and SUSE solutions keep your entire environment in compliance with the highest security standards. To learn more, go to Business Critical Linux, SUSE Security, SUSE Linux Enterprise Base Container Images, SUSE Manager, and/or SUSE Linux Enterprise Server.

Thanks for reading!