Harvester Integrates with Rancher: What Does This Mean for You?

Thursday, 21 October, 2021

Thousands of new technology solutions are created each year, all designed to help serve the open source community of developers and engineers who build better applications. In 2014, Rancher was founded as a software solution aiming to help simplify the lives of engineers building applications in the new market of containers.

Today, Rancher is a market-leading, open source solution supported by a rich community helping thousands of global organizations deliver Kubernetes at scale.

Harvester is a new project from the SUSE Rancher team. It is a 100% open source, Hyperconverged Infrastructure (HCI) solution that offers the same expected integrations as traditional commercial solutions while also incorporating beneficial components of Kubernetes. Harvester is built on a foundation of cloud native technology to bridge the gap between traditional HCI and cloud native solutions.

Why Is This Significant? 

Harvester addresses the intersection of traditional HCI frameworks and modern containerization strategies. Developed by SUSE’s team of Rancher engineers, Harvester preserves the core values of Rancher. This includes enriching the open source community by creating Harvester as a 100% open, interoperable, and reliable HCI solution that fits any environment while retaining the traditional functions of HCI solutions. This helps users efficiently manage and operate their virtual machine workloads.

When Harvester is used with Rancher, it provides cloud native users with a holistic platform to manage new cloud native environments, including Kubernetes and containers alongside legacy Virtual Machine (VM) workloads. Rancher and Harvester together can help organizations modernize their IT environment by simplifying the operations and management of workloads across their infrastructure, reducing the amount of operational debt.

What Can We Expect in the Rancher & Harvester Integration?

There are a couple of significant updates in Harvester v0.3.0's integration with Rancher. The integration with Rancher v2.6.1 gives users extended usability across both platforms, including importing and managing multiple Harvester clusters using the Virtualization Management feature in Rancher v2.6.1. Users can also leverage Rancher's authentication mechanisms and RBAC controls for multi-tenancy support.

Harvester users can now provision RKE & RKE2 clusters within Rancher v2.6.1 using the built-in Harvester Node Driver. Additionally, Harvester can now provide built-in Load Balancer support and raw cluster persistent storage support to guest Kubernetes clusters.  

Harvester remains on track to hit its general availability v1.0.0 release later this year.

Learn more about the Rancher and Harvester integration here.  

You can also check out additional feature releases in v0.3.0 of Harvester on GitHub or at harvesterhci.io.


How to Manage Harvester 0.3.0 with Rancher 2.6.1 Running in a VM within Harvester

Wednesday, 20 October, 2021

What I liked about the release of Harvester 0.2.0 was the ease of enabling the embedded Rancher server, which allowed you to create Kubernetes clusters in the same Harvester cluster.

With the release of Harvester 0.3.0, this option was removed in favor of installing Rancher 2.6.1 separately and then importing your Harvester cluster into Rancher, where you could manage it. A Harvester node driver is provided with Rancher 2.6.1 to allow you to create Kubernetes clusters in the same Harvester 0.3.0 cluster.

I replicated my Harvester 0.2.0 plus the Rancher server experience using Harvester 0.3.0 and Rancher 2.6.1.

There’s no upgrade path from Harvester 0.2.0 to 0.3.0, so the first step was reinstalling my Intel NUC with Harvester 0.3.0 following the docs at: https://docs.harvesterhci.io/v0.3/install/iso-install/.

Given that my previous Harvester 0.2.0 install included Rancher, I figured I’d install Rancher in a VM running on my newly installed Harvester 0.3.0 node – but which OS would I use? With Rancher being deployed using a single Docker container, I was looking for a small, lightweight OS that included Docker. From past experience, I knew that openSUSE Leap had slimmed down images of its distribution available at https://get.opensuse.org/leap/ – click the alternative downloads link immediately under the initial downloads. Known as Just enough OS (JeOS), these are available for both Leap and Tumbleweed (their rolling release). I opted for Leap, so I created an image using the URL for the OpenStack Cloud image (trust me – the KVM and XEN image hangs on boot).

Knowing that I wanted to be able to access Rancher on the same network my Harvester node was attached to, I also enabled VLAN support (Advanced | Settings | vlan) and created a network using VLAN ID 1 (Advanced | Networks).

The next step is to install Rancher in a VM. While I could do this manually, I prefer automation and wanted to do something I could reliably repeat (something I did a lot while getting this working) and perhaps adapt when installing future versions. When creating a virtual machine, I was intrigued by the user data and network data sections in the advanced options tab, referenced in the docs at https://docs.harvesterhci.io/v0.3/vm/create-vm/, along with some basic examples. I knew from past experience that cloud-init could be used to initialize cloud instances, and with the openSUSE OpenStack Cloud images using cloud-init, I wondered if this could be used here. According to the examples in the cloud-init docs at https://cloudinit.readthedocs.io/en/latest/topics/examples.html, it can!

When creating the Rancher VM, I gave it 1 CPU – with a 4-core NUC and Harvester 0.3.0 not supporting CPU over-provisioning (it's a bug – phew!), I had to be frugal! Through trial and error, I also found that the minimum memory required for Rancher to work is 3 GB. I chose my openSUSE Leap 15.3 JeOS OpenStack Cloud image on the volumes tab, and on the networks tab, I chose my custom (VLAN 1) network.

The real work is done on the advanced options tab. I already knew JeOS didn’t include Docker, so that would need to be installed before I could launch the Docker container for Rancher. I also knew the keyboard wasn’t set up for me in the UK, so I wanted to fix that too. Plus, I’d like a message to indicate it was ready to use. I came up with the following User Data:

password: changeme
packages:
  - docker
runcmd:
  - localectl set-keymap uk
  - systemctl enable --now docker
  - docker run --name=rancher -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.1
  - until curl -sk https://127.0.0.1 -o /dev/null; do sleep 30s; done
final_message: Rancher is ready!

Let me go through the above lines:

  • Line 1 sets the password of the default opensuse user – you will be prompted to change this the first time you log in as this user, so don’t set it to anything secret!
  • Lines 2 & 3 install the docker package.
  • Line 4 says we’ll run some commands once it’s booted the first time.
  • Line 5 sets the UK keyboard.
  • Line 6 enables and starts the Docker service.
  • Line 7 pulls and runs the Docker container for Rancher 2.6.1 – this is the same line as in the Harvester docs, except I've added "--name=rancher" to make it easier when you need to find the Bootstrap Password later.
    NOTE: When you create the VM, this line will be split into two lines with an additional preceding line with “>-” – it will look a bit different, but it’s nothing to worry about!
  • Line 8 is a loop checking for the Rancher server to become available – I test localhost, so it works regardless of the assigned IP address.
  • Line 9 prints out a message saying it’s finished (which happens after the previous loop completes).

An extra couple of lines will be automatically added when you click the Create button – but don't click it yet, as we're not done!

This still left the problem of which IP address to use to access Rancher. With devices being assigned random IP addresses via DHCP, how do I control which address is used? Fortunately, the Network Data section allows us to set a static address (and not have to mess with config files or run custom scripting within the VM):

network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: static
          address: 192.168.144.190/24
          gateway: 192.168.144.254
    - type: nameserver
      address:
        - 192.168.144.254
      search:
        - example.com

I won’t go through all the lines above but will call out those you need to change for your own network:

  • Line 8 sets the IP address to use with the CIDR netmask (/24 means 255.255.255.0).
  • Line 9 sets the default gateway.
  • Line 12 sets the default DNS nameserver.
  • Line 14 sets the default DNS search domain.

See https://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v1.html# for information on the other lines.

Unless you unticked the "start virtual machine on creation" option, your VM should start booting once you click the Create button. If you open the Web Console in VNC, you'll be able to keep an eye on the progress of your VM. When you see the message Rancher is ready, you can try accessing Rancher in a web browser at the IP address you specified above. Depending on the web browser you're using and its configuration, you may see warning messages about the self-signed certificate Rancher is using.

The first time you log in to Rancher, you will be prompted for the random bootstrap password which was generated. To get this, you can SSH as the opensuse user to your Rancher VM, then run:

sudo docker logs rancher 2>&1 | grep "Bootstrap Password:"

Copy the password and paste it into the password field of the Rancher login screen, then click the login with Local User button.

You’re then prompted to set a password for the default admin user. Unless you can remember random strings or use a password manager, I’d set a specific password. You also need to agree to the terms and conditions for using Rancher!

Finally, you’re logged into Rancher, but we’re not entirely done yet as we need to add our Harvester cluster. To do this, click on the hamburger menu and then the Virtualization Management tab. Don’t panic if you see a failed whale error – just try reloading.

Clicking the Import Existing button will give you some registration commands to run on one of your Harvester node(s).

To do this, SSH to your Harvester node as the rancher user and then run the first kubectl command prefixed with sudo. Unless you’ve changed your Harvester installation, you’ll also need to run the curl command, again prefixing the kubectl command with sudo. The webpage should refresh, showing your Harvester cluster’s management page. If you click the Harvester Cluster link or tab, your Harvester cluster should be listed. Clicking on your cluster name should show something familiar!

Finally, we need to activate the Harvester node driver by clicking the hamburger menu and then the Cluster Management tab. Click Drivers, then Node Drivers, find Harvester in the list, and click Activate.

Now we have Harvester 0.3.0 integrated with Rancher 2.6.1, running similarly to Harvester 0.2.0, although sacrificing 1 CPU (which will be less of an issue once the CPU over-provisioning bug is fixed) and 3GB RAM.

Admittedly, running Rancher within a VM in the same Harvester you’re managing through Rancher doesn’t seem like the best plan, and you wouldn’t do it in production, but for the home lab, it’s fine. Just remember not to chop off the branch you’re standing on!


Harvester: Intro and Setup    

Tuesday, 17 August, 2021
I mentioned about a month back that I was using Harvester in my home lab. I didn’t go into much detail, so this post will bring some more depth. We will cover what Harvester does, as well as my hardware, installation, setup and how to deploy your first virtual machine. Let’s get started.

What is Harvester?

Harvester is Rancher’s open source answer to a hyperconverged infrastructure platform. Like most things Rancher is involved with, it is built on Kubernetes using tools like KubeVirt and Longhorn. KubeVirt is an exciting project that leverages KVM and libvirt to run virtual machines inside Kubernetes; this allows you to run both containers and VMs in your cluster. It reduces operational overhead and provides consistency. This combination of tried and tested technologies provides an open source solution in this space.

It is also designed to be used with bare metal, making it an excellent option for a home lab.

Hardware

If you check the hardware requirements, you will notice they focus more on business usage. So far, my personal experience says that you want at least a 4-core/8-thread CPU, 16GB of RAM, and a large SSD, preferably an NVMe drive. Anything less resource-wise doesn't leave enough capacity for running many containers or VMs. I will install it on an Intel NUC 8i5BEK, which has an Intel Core i5-8259U, 32GB of RAM, and a 512GB NVMe drive. It can handle running Harvester without any issues. Of course, this is just my experience; your experience may differ.

Installation

Harvester ships as an ISO, which you can download on the GitHub Releases page. You can pull it quickly using wget.

$ wget https://releases.rancher.com/harvester/v0.2.0/harvester-amd64.iso

Once you have it downloaded, you will need to create a bootable USB drive. I typically use Balena Etcher since it is cross-platform and intuitive. Once you have a bootable USB, place it in the machine you want to use and boot from the drive. This screen should greet you:
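If you prefer the command line over Etcher, `dd` can write the ISO as well. A minimal sketch – replace `/dev/sdX` with your actual USB device (check with `lsblk` first), as `dd` will overwrite whatever it is pointed at:

```shell
# Write the Harvester ISO to the USB stick (DESTRUCTIVE: verify /dev/sdX first!)
sudo dd if=harvester-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```
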

Select “New Cluster”:

Select the drive you want to use.

Enter your hostname, select your network interface, and make sure you use automatic DHCP.

You will then be prompted to enter your cluster token. This can be any phrase you want; I recommend using your password manager to generate one.

Set a password to use, and remember that the default user name is rancher.

The next several options are useful, especially if you want to pull in the SSH keys you use with GitHub. Since this is a home lab, I left the SSH keys, proxy and cloud-init setup blank. In an enterprise environment, this would be really useful. Now you will see the final screen before installation. Verify that everything is configured to your desires before proceeding.

If it all looks great, proceed with the installation. It will take a few minutes to complete; when it does, you will need to reboot.

After the reboot, the system will start up, and you will see a screen letting you know the URL for Harvester and the system's status. Wait until it reports that Harvester is ready before trying to connect.

Great! It is now reporting that it is up and running, so it’s now time to set up Harvester.

Initial Setup

We can navigate to the URL listed once the OS boots. Mine is https://harvest:30443. It uses a self-signed certificate by default, so you will see a warning in your browser. Just click on “advanced” to proceed, and accept it. Set a password for the default admin account.

Now you should see the dashboard and the health of the system.

I like to disable the default account and add my own account for authentication. Probably not necessary for a home lab, but a good habit to get into. First, you need to navigate to it.

Now log out and back in with your new account. Once that’s finished, we can create our first VM.

Deploying Your First VM

Harvester has native support for qcow2 images and can import them from a URL. Let's grab the URL for the openSUSE Leap 15.3 JeOS image.

https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2

The JeOS image for openSUSE is roughly 225MB, which is a perfect size for downloading and creating VMs quickly. Let’s make the image in Harvester.

Create a new image, and add the URL above as the image URL.
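Behind the UI, a Harvester image is just another Kubernetes resource, so the same step can be expressed declaratively. A hedged sketch – the API group and fields here match later Harvester releases and may differ in v0.2.0, so treat them as assumptions:

```yaml
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: opensuse-leap-15-3-jeos
  namespace: default
spec:
  displayName: openSUSE-Leap-15.3-JeOS
  url: https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2
```

Applying something like this with kubectl should produce the same entry you'd see after using the Create image form.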

You should now see it listed.

Now we can create a VM using that image. Navigate to the VM screen.

Once we’ve made our way to the VM screen, we’ll create a new VM.

When that is complete, the VM will show up in the list. Wait until it has been started, then you can start using it.

Wrapping Up

In this article, I wanted to show you how to set up VMs with Harvester, even starting from scratch! There are plenty of features to explore and plenty more on the roadmap. This project is still early in its life, so now is a great time to jump in and get involved with its direction.


Hyperconverged Infrastructure and Harvester

Monday, 2 August, 2021

Virtual machines (VMs) have transformed infrastructure deployment and management. VMs are so ubiquitous that I can’t think of a single instance where I deployed production code to a bare metal server in my many years as a professional software engineer.

VMs provide secure, isolated environments hosting your choice of operating system while sharing the resources of the underlying server. This allows resources to be allocated more efficiently, reducing the cost of over-provisioned hardware.

Given the power and flexibility provided by VMs, it is common to find many VMs deployed across many servers. However, managing VMs at this scale introduces challenges.

Managing VMs at Scale

Hypervisors provide comprehensive management of the VMs on a single server. The ability to create new VMs, start and stop them, clone them, and back them up are exposed through simple management consoles or command-line interfaces (CLIs).

But what happens when you need to manage two servers instead of one? Suddenly you find yourself having to first gain access to the appropriate server to interact with its hypervisor. You'll also quickly find that you want to move VMs from one server to another, which means orchestrating a sequence of shutdown, backup, file copy, restore and boot operations.
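To make the pain concrete, here's roughly what moving a single VM between two libvirt hosts looks like by hand (a sketch only; the VM name `myvm`, the host `host2` and the disk paths are placeholders):

```shell
# On the source host: shut the VM down and export its definition
virsh shutdown myvm
virsh dumpxml myvm > myvm.xml

# Copy the definition and the disk image to the destination host
scp myvm.xml host2:/tmp/myvm.xml
scp /var/lib/libvirt/images/myvm.qcow2 host2:/var/lib/libvirt/images/

# On the destination host: register the VM and boot it
virsh define /tmp/myvm.xml
virsh start myvm
```

Multiply that by hundreds of VMs and dozens of servers, and the need for a higher-level management layer becomes obvious.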

Routine tasks performed on one server become just that little bit more difficult with two, and quickly become overwhelming with 10, 100 or 1,000 servers.

Clearly, administrators need a better way to manage VMs at scale.

Hyperconverged Infrastructure

This is where Hyperconverged Infrastructure (HCI) comes in. HCI is a marketing term rather than a strict definition. Still, it is typically used to describe a software layer that abstracts the compute, storage and network resources of multiple (often commodity or whitebox) servers to present a unified view of the underlying infrastructure. By building on top of the virtualization functionality included in all major operating systems, HCI allows many systems to be managed as a single, shared resource.

With HCI, administrators no longer need to think in terms of VMs running on individual servers. New hardware can be added and removed as needed. VMs can be provisioned wherever there is appropriate capacity, and operations that span servers, such as moving VMs, are as routine with 2 servers as they are with 100.

Harvester

Harvester, created by Rancher, is open source HCI software built using Kubernetes.

While Kubernetes has become the de facto standard for container orchestration, it may seem like an odd choice as the foundation for managing VMs. However, when you think of Kubernetes as an extensible orchestration platform, this choice makes sense.

Kubernetes provides authentication, authorization, high availability, fault tolerance, CLIs, software development kits (SDKs), application programming interfaces (APIs), declarative state, node management, and flexible resource definitions. All of these features have been battle tested over the years with many large-scale clusters.

More importantly, Kubernetes orchestrates many kinds of resources beyond containers. Thanks to the use of custom resource definitions (CRDs), and custom operators, Kubernetes can describe and provision any kind of resource.
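As a small illustration of how that works (a generic sketch, not Harvester's actual definitions), a CRD that teaches Kubernetes about a new resource type looks like this:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualmachines.example.com
spec:
  group: example.com
  names:
    kind: VirtualMachine
    plural: virtualmachines
    singular: virtualmachine
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cpus:
                  type: integer
                memory:
                  type: string
```

Once a CRD like this is applied, `kubectl get virtualmachines` works just like it does for built-in resources – this is the mechanism projects like KubeVirt build on.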

By building on Kubernetes, Harvester takes advantage of a well tested and actively developed platform. With the use of KubeVirt and Longhorn, Harvester extends Kubernetes to allow the management of bare metal servers and VMs.

Harvester is not the first time VM management has been built on top of Kubernetes; Rancher’s own RancherVM is one such example. But these solutions have not been as popular as hoped:

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators.

To address this, Harvester does not expose the underlying Kubernetes platform to the end user. Instead, it presents more familiar concepts like VMs, NICs, ISO images and disk volumes. This allows Harvester to take advantage of Kubernetes while giving administrators a more traditional view of their infrastructure.

Managing VMs at Scale

The fusion of Kubernetes and VMs provides the ability to perform common tasks such as VM creation, backups, restores, migrations, SSH-Key injection and more across multiple servers from one centralized administration console.

Consolidating virtualized resources like CPU, memory, network, and storage allows for greater resource utilization and simplified administration, allowing Harvester to satisfy the core premise of HCI.

Conclusion

HCI abstracts the resources exposed by many individual servers to provide administrators with a unified and seamless management interface, providing a single point to perform common tasks like VM provisioning, moving, cloning, and backups.

Harvester is an HCI solution leveraging popular open source projects like Kubernetes, KubeVirt, and Longhorn, but with the explicit goal of not exposing Kubernetes to the end user.

The end result is an HCI solution built on the best open source platforms available while still providing administrators with a familiar view of their infrastructure.

Download Harvester from the project website and learn more from the project documentation.

Meet the Harvester developer team! Join our free Summer is Open session on Harvester: Tuesday, July 27 at 12pm PT and on demand. Get details about the project, watch a demo, ask questions and get a challenge to complete offline.

Announcing Harvester Beta Availability

Friday, 28 May, 2021

It has been five months since we announced project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Since then, we’ve received a lot of feedback from the early adopters. This feedback has encouraged us and helped in shaping Harvester’s roadmap. Today, I am excited to announce the Harvester v0.2.0 release, along with the Beta availability of the project!

Let’s take a look at what’s new in Harvester v0.2.0.

Raw Block Device Support

We've added raw block device support in v0.2.0. Since it's mostly a change under the hood, the updates might not be immediately obvious to end users. Let me explain in more detail:

In Harvester v0.1.0, the image to VM flow worked like this:

  1. Users added a new VM image.

  2. Harvester downloaded the image into the built-in MinIO object store.

  3. Users created a new VM using the image.

  4. Harvester created a new volume, and copied the image from the MinIO object store.

  5. The image was presented to the VM as a block device, but it was stored as a file in the volume created by Harvester.

This approach had a few issues:

  1. Read/write operations on the VM volume had to be translated into reads/writes of the image file, which performed worse than accessing a raw block device due to the overhead of the filesystem layer.

  2. If one VM image was used by multiple VMs, it was replicated many times in the cluster. This is because each VM had its own copy of the volume, even though most of the content was likely identical since it came from the same image.

  3. The dependency on MinIO to store the images meant Harvester had to keep MinIO highly available and expandable, which placed an extra burden on the Harvester management plane.

In v0.2.0, we took another approach to tackle the problem, resulting in a simpler solution with better performance and less duplicated data:

  1. Instead of an image file on a filesystem, we now present the VM with a raw block device, which allows for better performance for the VM.

  2. We've taken advantage of a new feature called Backing Image in Longhorn v1.1.1 to reduce unnecessary copies of the VM image. The VM image is now served as a read-only layer for all the VMs using it, and Longhorn creates a copy-on-write (COW) layer on top of that image for each VM to use.

  3. Since Longhorn now manages the VM image using the Backing Image feature, the dependency on MinIO can be removed.
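Longhorn's Backing Image is itself exposed as a custom resource. A hedged sketch of what one can look like (the schema shown follows newer Longhorn releases and the image URL is a placeholder, so treat the details as assumptions):

```yaml
apiVersion: longhorn.io/v1beta2
kind: BackingImage
metadata:
  name: opensuse-leap-15-3
  namespace: longhorn-system
spec:
  sourceType: download
  sourceParameters:
    url: https://example.com/images/openSUSE-Leap-15.3.qcow2
```

Harvester manages these for you; the resource is only shown here to illustrate where the read-only image layer lives.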

[Image: A comprehensive view of images in Harvester]

From the user experience perspective, you may have noticed that importing an image is now instantaneous, while starting the first VM based on a new image takes a bit longer due to the image download in Longhorn. Any subsequent VMs using the same image will boot significantly faster than in the previous v0.1.0 release, and disk IO performance will be better as well.

VM Live Migration Support

In preparation for the future upgrade process, VM live migration is now supported in Harvester v0.2.0.

VM live migration allows a VM to migrate from one node to another, without any downtime. It’s mostly used when you want to perform maintenance work on one of the nodes or want to balance the workload across the nodes.

One thing worth noting: because a VM's IP address can change after migration when using the default management network, we highly recommend using a VLAN network instead. Otherwise, the VM might not keep the same IP after migrating to another node.

You can read more about live migration support here.

VM Backup Support

We’ve added VM backup support to Harvester v0.2.0.

The backup support provides a way for you to back up your VMs outside of the cluster.

To use the backup/restore feature, you need an S3-compatible endpoint or an NFS server; the destination of the backup is referred to as the backup target.
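Since Harvester's backups are handled by Longhorn under the hood, the backup target follows Longhorn's URL conventions. A rough sketch (the bucket, region and server values are placeholders):

```
# S3-compatible target: s3://<bucket>@<region>/<optional path>
s3://harvester-backups@us-east-1/

# NFS target: nfs://<server>:<exported path>
nfs://192.168.1.10:/var/harvester/backupstore
```

For S3 targets you'll also need to supply credentials; the Harvester docs linked below cover the exact fields.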

You can get more details on how to set up the backup target in Harvester here.

[Image: Easily manage and operate your virtual machines in Harvester]

In the meantime, we're also working on a snapshot feature for VMs. In contrast to the backup feature, a snapshot stores the image state inside the cluster, giving VMs the ability to revert to a previous snapshot. Unlike a backup, no data is copied outside the cluster for a snapshot, so it's a quick way to try something experimental, but not ideal for keeping data safe if the cluster goes down.

PXE Boot Installation Support

PXE boot installation is widely used in data centers to automatically populate bare-metal nodes with the desired operating system. We've added PXE boot installation support in Harvester v0.2.0 to help users who have a large number of servers and want a fully automated installation process.

You can find more information regarding how to do the PXE boot installation in Harvester v0.2.0 here.

We’ve also provided a few examples of doing iPXE on public bare-metal cloud providers, including Equinix Metal. More information is available here.

Rancher Integration

Last but not least, Harvester v0.2.0 now ships with a built-in Rancher server for Kubernetes management.

This was one of the most requested features since we announced Harvester v0.1.0, and we’re very excited to deliver the first version of the Rancher integration in the v0.2.0 release.

For v0.2.0, you can use the built-in Rancher server to create Kubernetes clusters on top of your Harvester bare-metal clusters.

To start using the built-in Rancher in Harvester v0.2.0, go to Settings, then set the rancher-enabled option to true. Now you should be able to see a Rancher button on the top right corner of the UI. Clicking the button takes you to the Rancher UI.

Harvester and Rancher share the authentication process, so once you’re logged in to Harvester, you don’t need to redo the login process in Rancher and vice versa.

If you want to create a new Kubernetes cluster using Rancher, you can follow the steps here. A reminder that VLAN networking needs to be enabled for creating Kubernetes clusters on top of Harvester, since the default management network cannot guarantee a stable IP for the VMs, especially after a reboot or migration.

What’s Next?

Now with v0.2.0 behind us, we’re working on the v0.3.0 release, which will be the last feature release before Harvester reaches GA.

We’re working on many things for v0.3.0 release. Here are some highlights:

  • Built-in load balancer
  • Rancher 2.6 integration
  • Replace K3OS with a small-footprint OS designed for container workloads
  • Multi-tenant support
  • Multi-disk support
  • VM snapshot support
  • Terraform provider
  • Guest Kubernetes cluster CSI driver
  • Enhanced monitoring

You can get started today and give Harvester v0.2.0 a try via our website.

Let us know what you think via the Rancher User Slack #harvester channel. And start contributing by filing issues and feature requests via our GitHub page.

Enjoy Harvester!

Meet Harvester, an HCI Solution for the Edge

Tuesday, 6 April, 2021

About six months ago, I learned about a new project called Harvester, our open source hyperconverged infrastructure (HCI) software built using Kubernetes, libvirt, KubeVirt, Longhorn and MinIO. At first, the idea of managing VMs via Kubernetes did not seem very exciting. “Why would I not just containerize the workloads or orchestrate the VMs natively via KVM, Xen or my hypervisor of choice?” That approach makes a lot of sense except for one thing: the edge. At the edge, Harvester provides a solution for a nightmarish technical challenge – specifically, when one host must run both the dreaded Windows legacy applications and modern containerized microservices. In this blog and the following tutorials, I’ll map out an edge stack, then set up and install Harvester. Later I’ll use Fleet to orchestrate the entire host with OS and Kubernetes updates. We’ll then deploy the whole thing with a bit of Terraform, completing the solution.

At the edge, we often lack the necessities such as a cloud or even spare hardware. Running Windows VMs alongside your Linux containers provides much-needed flexibility while using the Kubernetes API to manage the entire deployment brings welcome simplicity and control. With K3s and Harvester (in app mode), you can maximize your edge node’s utility by allowing it to run Linux containers and Windows VMs, down to the host OS orchestrated via Rancher’s Continuous Delivery (Fleet) GitOps deployment tool. 

At the host, we start with SLES and Ubuntu. The system-update operator can be customized for other Linux operating systems.

We’ll use K3s as our Kubernetes distribution. K3s’ advantage here is indisputable: small footprint, less chatty datastore (SQLite when used in single master mode), and removal of cloud-based bloat present in most Kubernetes distributions, including our RKE.

Harvester has two modes of operation: HCI mode, where it can attach to another cluster as a VM hosting node, and app mode, where it is deployed as a Helm application into an existing Kubernetes cluster. The app can be installed and operated via a Helm chart and CRDs, providing our node with the greatest flexibility.
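In app mode the installation is a standard Helm workflow. A sketch assuming the chart ships inside a cloned Harvester repository – the repository URL, chart path and namespace here are assumptions, so check the current docs before running anything:

```shell
# Fetch the Harvester source, which carries the Helm chart (path is an assumption)
git clone https://github.com/harvester/harvester.git
cd harvester

# Install the chart into its own namespace
helm install harvester ./deploy/charts/harvester \
  --namespace harvester-system \
  --create-namespace
```
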

Later we’ll orchestrate it via a Rancher 2.5 cluster and our Continuous Delivery functionality, powered by Fleet. Underneath, Harvester uses libvirt, KubeVirt, Multus and MinIO, installed by default with the Helm chart. We’ll add a Windows image and deploy a VM via a CRD once we finish installing Harvester. At the end, I’ll provide scripts to get the MVP deployed in Google Cloud Platform (GCP), so you can play along at home. Note that since nested VMs require special CPU functionality, we can currently only test in GCP or Digital Ocean.

In summary, Rancher Continuous Delivery (Fleet), Harvester, and K3s on top of Linux can provide a solid edge application hosting solution capable of scaling to many teams and millions of edge devices. While it’s not the only solution, and you can use each component individually with other open source components, this is one solution that you can implement today, complete with instructions and tutorials. Drop me a note if it is helpful for you, and as always, you can reach out to our consulting services for expert assistance and advice.  

 

Tutorial Sections:

Setting up the Host Image on GCP

Walks you through the extra work needed to enable nested virtualization in GCP.

Create Test Cluster with K3s

Deploy a cluster and get ready for Harvester.

Deploying Harvester with Helm

Deploy the Harvester app itself.

Setting Up and Starting a Linux VM

Testing with an openSUSE JeOS Leap 15 SP2 image.

Setting Up and Starting a Windows VM

Testing with a pre-licensed Windows 10 Pro VM.

Automating Harvester via Terraform

Orchestrating the entire rollout in Terraform.

Setting Up The Host Image on GCP

When choosing the host for Harvester, k3OS requires the least customization for K3s. However, GCP (and Digital Ocean) requires some extra configuration to get nested virtualization working so we can run Harvester. I’ll show the steps for SLES, k3OS, and Ubuntu 20.04 LTS, since the k3OS build itself uses Ubuntu. All of the images need a special license key passed in so GCP places them on hosts that support nested virtualization. You can find more info here.

Do not forget to initialize gcloud before starting, and after deployment, be sure to open port 22 to enable access. You can open port 22 with the following gcloud command.

gcloud compute firewall-rules create allow-tcpssh --allow tcp:22

Build a Customized SUSE Linux Enterprise Server (SLES) Image

GCP publishes public images that can be used. SLES public images are stored in the project `suse-cloud`; we’ll grab a standard image and then recreate it with the required nested VM license. We’ll do this by first copying a public image onto a disk in the current project.

gcloud compute disks create sles-15-sp2 --image-project suse-cloud --image-family sles-15 --zone us-central1-b

That will create a disk called sles-15-sp2 based on the sles-15 family in zone us-central1-b. Next, we’ll create an image local to our project that uses that disk and includes the nested VM license.

gcloud compute images create sles-15-sp2 --source-disk sles-15-sp2 --family sles-15 --source-disk-zone us-central1-b --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

That’s it! (Until we get to K3s configuration.)
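The disk-then-image pattern above is the same for every base image, so it can be wrapped in a small helper. This is a sketch of my own (the `build_nested_image` function name is mine, not part of gcloud); it only prints the commands as a dry run so nothing is created until you review and run them yourself.

```shell
# A helper that composes the disk + image commands for any public base image.
# Dry run: the gcloud commands are printed, not executed.
VMX_LICENSE="https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

build_nested_image() {
  # $1=image/disk name, $2=source image project, $3=image family, $4=zone
  echo "gcloud compute disks create $1 --image-project $2 --image-family $3 --zone $4"
  echo "gcloud compute images create $1 --source-disk $1 --family $3 --source-disk-zone $4 --licenses $VMX_LICENSE"
}

# Reproduces the two SLES commands from this section:
build_nested_image sles-15-sp2 suse-cloud sles-15 us-central1-b
```

The same call with `ubuntu-2004-lts ubuntu-os-cloud ubuntu-2004-lts us-central1-b` reproduces the Ubuntu steps in the next section.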

Build a Customized Ubuntu Image

The process to create the Ubuntu image is much the same.

First, we’ll create a new disk in our project and load the public Ubuntu 20.04 image onto it.

gcloud compute disks create ubuntu-2004-lts --image-project ubuntu-os-cloud --image-family ubuntu-2004-lts --zone us-central1-b

Then we’ll build a new image locally in our project by passing the special key.

gcloud compute images create ubuntu-2004-lts --source-disk ubuntu-2004-lts --family ubuntu-2004-lts --source-disk-zone us-central1-b --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

You can then move on to K3s setup unless you are configuring k3OS.

Create and Build a Custom k3OS Image

Since there is no k3OS image published currently for GCP’s nested virtualization, we’ll build our own.

Prerequisites

To do the build itself, you’ll need a clean Ubuntu 20.04 installation with internet access. I used a new Multipass instance with standard specs, and it worked great. You can get it here:

multipass launch --name gcp-builder

You’ll also need a GCP account with an active project.

Set Up the Tools Inside Local Builder VM

First, we need to install the GCP CLI. For k3OS, we’ll also need Packer, which we’ll install later.

Add Google’s package source to your Ubuntu sources list.

echo "deb https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

Then we’ll need to add their key:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Update sources and install the cloud SDK:

sudo apt-get update && sudo apt-get install google-cloud-sdk

Last, we’ll initialize the SDK by logging in and selecting our project. Go ahead and select your zone. I used us-central1-b for all the instructions.

gcloud init

Create the k3OS Image

We’ll need another tool called Packer to build the k3os image:

sudo apt-get install packer

The k3OS GitHub repo comes with a handy GCP builder. If you’d rather use the Ubuntu image directly, skip this part. Otherwise:

git clone https://github.com/rancher/k3os.git

And check out the latest non-RC release:

git checkout v0.11.1

For the rest, we’ll cd into the k3os/package/packer/gcp directory:

cd k3os/package/packer/gcp

Here I manually edited the template.json file and simplified the image name and region to match my default (us-central1-b):

vi template.json

Update builders[0].image_name to 'rancher-k3os' and the variables.region value to 'us-central1-b'. Then pass the GCP license needed for nested virtualization by setting builders[0].image_licenses to ["projects/vm-options/global/licenses/enable-vmx"].
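For reference, after those edits the relevant fields of template.json should look roughly like this (a sketch showing only the edited keys; the rest of the generated file stays as it was):

```json
{
  "variables": {
    "region": "us-central1-b"
  },
  "builders": [
    {
      "image_name": "rancher-k3os",
      "image_licenses": ["projects/vm-options/global/licenses/enable-vmx"]
    }
  ]
}
```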

Add your project’s ID to the environment as GCP_PROJECT_ID.

export GCP_PROJECT_ID=<<YOUR GCP PROJECT ID>>

Add SSH Public Key to Image

I’m not sure why this was required, but the standard Packer template does not provide SSH access. I added the local google_compute_engine public key from ~/.ssh to authorized_ssh_keys in config.yml.

You can find more configuration options in the configuration section of the [installation docs.](https://github.com/rancher/k3os#configuration)

Google Cloud Service Account

Create a new service account with Cloud Build Service Account permissions. You can do this via the UI or CLI. Google Console: https://console.cloud.google.com/iam-admin/serviceaccounts

Packer also provides [GCP service account creation instructions](https://www.packer.io/docs/builders/googlecompute) for the cli.

Either way, save the resulting configuration file as account.json in the gcp directory.

Finally, let’s run the image builder.

packer build template.json

 

Create a Test Cluster with K3s

To create the VM in GCP based on our new image, we need to specify the minimum CPU family that includes the nested virtualization tech and enough drive space to run everything.

SUSE Enterprise Linux 15 SP2

https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances

gcloud compute instances create harvester --zone us-central1-b --min-cpu-platform "Intel Haswell" --image sles-15-sp2 --machine-type=n1-standard-8 --boot-disk-size=200GB --tags http-server,https-server --can-ip-forward

You will then need to install libvirt and qemu-kvm and set up an AppArmor exception.

Ubuntu 20.04

The following command creates an Ubuntu instance usable for our testing:

gcloud compute instances create harvester --zone us-central1-b --min-cpu-platform "Intel Haswell" --image ubuntu-2004-lts --machine-type=n1-standard-8 --boot-disk-size=200GB

Connecting

If you set the firewall rule from the previous step, you can connect using the SSH key that Google created (~/.ssh/google_compute_engine). When connecting to the Ubuntu instance, use the ubuntu username.
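The post doesn’t show the K3s install step itself; a simple way to connect is `gcloud compute ssh`, and on the instance the standard K3s install script from get.k3s.io does the rest. Shown as a dry run here (the helper function names are mine) since the commands target the remote GCP instance:

```shell
# Dry run: print the connect step and the standard K3s install command
# rather than executing them (they target the remote GCP instance).
connect_cmd() { echo "gcloud compute ssh $1 --zone $2"; }
k3s_install_cmd() { echo "curl -sfL https://get.k3s.io | sh -"; }

connect_cmd harvester us-central1-b
k3s_install_cmd   # run this on the instance itself
```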

 

Deploying Harvester with Helm

Deploying Harvester is a three-step process and requires you to install Helm on the new cluster. Connect via SSH and install the application:

export VERIFY_CHECKSUM=false
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
mkdir -p ~/.kube && cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

Then clone the Harvester repository. 

git clone https://github.com/rancher/harvester

Then browse to harvester/deploy/charts:

cd harvester/deploy/charts

Then install the chart itself. This will take a few minutes:

helm upgrade --install harvester ./harvester --namespace harvester-system --set longhorn.enabled=true,minio.persistence.storageClass=longhorn,service.harvester.type=NodePort,multus.enabled=true --create-namespace

However, this hides the complexity of what is happening underneath: it installs Longhorn, MinIO, and Multus along with the KubeVirt components. You can install them separately by disabling them in the Helm installation.
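The --set flags above can equivalently be kept in a values file, which makes the component toggles easier to see. This is a sketch reconstructed from the flags in the install command (the file name values.yaml is my choice):

```yaml
# values.yaml equivalent to the --set flags above; set a component's
# enabled flag to false if you want to install it separately
longhorn:
  enabled: true
minio:
  persistence:
    storageClass: longhorn
multus:
  enabled: true
service:
  harvester:
    type: NodePort
```

Install with `helm upgrade --install harvester ./harvester --namespace harvester-system --create-namespace -f values.yaml`.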

Note: Currently, it is not possible to use another storage type. The defaults would simply install nothing and leave a broken system, so these values must be specified despite the lack of a functional choice.

Setting Up and Starting a Linux VM

Currently, setting up a VM takes some effort. You must place the compatible image in an accessible URL so that Harvester can download it. For the Linux VM, we’ll use the UI, switching to CRDs for the Windows VM.

A Working VM

Harvester does not support all formats, but KVM images (including compressed KVM images) will work.
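If your image is in another format (VMDK, raw, and so on), qemu-img can convert it to a compressed qcow2 before you upload it. Shown as a dry run (the `convert_cmd` wrapper and the file names are mine) so nothing runs until you supply a real image:

```shell
# Dry run: compose a qemu-img command that converts another disk format
# into a compressed qcow2 suitable for KVM.
convert_cmd() {
  # $1=source format (e.g. vmdk, raw), $2=input file, $3=output file
  echo "qemu-img convert -c -f $1 -O qcow2 $2 $3"
}
convert_cmd vmdk disk.vmdk disk.qcow2
```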

Upload to a URL

I uploaded mine to S3 to make it easy, but any web-accessible storage will do.

You can use the manifest here to import an openSUSE JeOS Leap 15.2 image:

apiVersion: harvester.cattle.io/v1alpha1
kind: VirtualMachineImage
metadata:
  name: image-openjeos-leap-15sp2
  annotations: {}
  generateName: image-openjeos-leap-15sp2
  labels: {}
  namespace: default
spec:
  displayName: opensles-leap-jeos-15-sp2
  url: >-
    https://thecrazyrussian.s3.amazonaws.com/openSUSE-Leap-15.2-JeOS.x86_64-15.2-kvm-and-xen-Build31.186.qcow2

Download Via Harvester

Browse over to the Images page, select the option to add a new image, and input the URL to upload and store it in the included MinIO installation.

Create a VM from Image

Once the image is fully uploaded, we can create a VM based on the image.

Remote Connection: VNC

If everything is working properly, you should be able to vnc into the new image.

 

Setting Up and Starting a Windows VM

The process for setting up a Windows VM is a bit more painful than a Linux VM. We need to pass options into the underlying YAML, so we’ll use these sample Windows CRDs, applying them to create our VM after the image download.

Upload CRD for Image

We can use this CRD to grab a Windows CD image that I have stored in a web-accessible location:

apiVersion: harvester.cattle.io/v1alpha1
kind: VirtualMachineImage
metadata:
  name: image-windows10
  annotations:
    field.cattle.io/description: windowsimage
  generateName: image-windows10
  labels: {}
  namespace: default
spec:
  displayName: windows
  url: 'https://thecrazyrussian.s3.amazonaws.com/Win10_2004_English_x64.iso'

Upload CRD for VM

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
  finalizers:
    - wrangler.cattle.io/VMController.UnsetOwnerOfDataVolumes
  labels: {}
  name: 'windows10'
  namespace: default
spec:
  dataVolumeTemplates:
    - apiVersion: cdi.kubevirt.io/v1alpha1
      kind: DataVolume
      metadata:
        annotations:
          cdi.kubevirt.io/storage.import.requiresScratch: 'true'
          harvester.cattle.io/imageId: default/image-windows10
        creationTimestamp: null
        name: windows-cdrom-disk-win
      spec:
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 20Gi
          storageClassName: longhorn
          volumeMode: Filesystem
        source:
          http:
            certConfigMap: importer-ca-none
            url: 'http://minio.harvester-system:9000/vm-images/image-windows10'
    - apiVersion: cdi.kubevirt.io/v1alpha1
      kind: DataVolume
      metadata:
        creationTimestamp: null
        name: windows-rootdisk-win
      spec:
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 32Gi
          storageClassName: longhorn
          volumeMode: Filesystem
        source:
          blank: {}
  running: true
  template:
    metadata:
      annotations:
        harvester.cattle.io/diskNames: >-
          ["windows-cdrom-disk-win","windows-rootdisk-win","windows-virtio-container-disk-win"]
        harvester.cattle.io/sshNames: '[]'
      creationTimestamp: null
      labels:
        harvester.cattle.io/creator: harvester
        harvester.cattle.io/vmName: windows
    spec:
      domain:
        cpu:
          cores: 4
        devices:
          disks:
            - bootOrder: 1
              cdrom:
                bus: sata
              name: cdrom-disk
            - disk:
                bus: virtio
              name: rootdisk
            - cdrom:
                bus: sata
              name: virtio-container-disk
            - disk:
                bus: virtio
              name: cloudinitdisk
          inputs:
            - bus: usb
              name: tablet
              type: tablet
          interfaces:
            - masquerade: {}
              model: e1000
              name: default
        machine:
          type: q35
        resources:
          requests:
            memory: 4Gi
      hostname: windows
      networks:
        - name: default
          pod: {}
      volumes:
        - dataVolume:
            name: windows-cdrom-disk-win
          name: cdrom-disk
        - dataVolume:
            name: windows-rootdisk-win
          name: rootdisk
        - containerDisk:
            image: kubevirt/virtio-container-disk
          name: virtio-container-disk
        - cloudInitNoCloud:
            userData: |
              #cloud-config
              ssh_authorized_keys: []
          name: cloudinitdisk

Once both CRDs finish processing, the VM should start booting and you will be able to run the normal Windows install process via the console or VNC.

When it is time to select which drive to install Windows on, you’ll need to load the special virtio driver from the CD already included in the VM above.

Select “Install Now”

Then “I do not have a product key”

Select “Windows 10 Pro”

“Accept the license”

Install the Virtio Storage Driver

You will then have to select the disk. Instead, select “Load Driver”

Select the AMD64 driver (depending on your VM) and load it.

You should be able to select your (non-empty) hard disk and continue installing Windows.

Automating Harvester via Terraform

We can automate all of these components. For example, we can script the VM creation via Bash or Terraform, and we can use Fleet or any other GitOps tool to push out individual VM and image CRDs and updates to the entire edge device, from the OS to the applications. Let’s start with an integrated Terraform script responsible for deploying Rancher and Harvester on a single-node K3s cluster in GCP. You must complete the previous steps for this to work.

git clone https://github.com/thecrazyrussian/terraform-harvester.git
cd terraform-harvester

Next we need to create infra/terraform.tfvars. We can copy it from infra/terraform.tfvars.example:

cp terraform.tfvars.example terraform.tfvars

And edit it in our favorite editor:

vi terraform.tfvars

Once you’ve set all the variables for your Route 53 zone and GCP account, and you’ve added credentials.json for GCP to the infra/ directory, you should be ready to `apply`. That will stand up a single-node Rancher/Harvester cluster and deploy a Windows VM onto it for customization.
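For completeness, “ready to `apply`” means the standard Terraform workflow, run from the infra/ directory. Printed here as a dry run (the `tf_workflow` wrapper is mine) so nothing is provisioned until you run the commands yourself:

```shell
# Dry run: the usual Terraform workflow for the infra/ directory,
# printed rather than executed.
tf_workflow() {
  echo "terraform init"
  echo "terraform plan -out=tfplan"
  echo "terraform apply tfplan"
}
tf_workflow
```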

Browse to the NodePort where Harvester made itself available, log in with admin/password, and start a console to the VM to configure it as you normally would.

 

If you enjoyed this demo, head over to the SUSE & Rancher Community and let us know how you plan to use Harvester.

Announcing Harvester: Open Source Hyperconverged Infrastructure (HCI) Software

Wednesday, 16 December, 2020

Today, I am excited to announce project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Harvester provides fully integrated virtualization and storage capabilities on bare-metal servers. No Kubernetes knowledge is required to use Harvester.

Why Harvester?

In the past few years, we’ve seen many attempts to bring VM management into container platforms, including our own RancherVM, and other solutions like KubeVirt and Virtlet. We’ve seen some demand for solutions like this, mostly for running legacy software side by side with containers. But in the end, none of these solutions have come close to the popularity of industry-standard virtualization products like vSphere and Nutanix.

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators. They are familiar with concepts like ISO images, disk volumes, NICs and VLANS – not concepts like pods and PVCs.

Enter Harvester.

Project Harvester is an open source alternative to traditional proprietary hyperconverged infrastructure software. Harvester is built on top of cutting-edge open source technologies including Kubernetes, KubeVirt and Longhorn. We’ve designed Harvester to be easy to understand, install and operate. Users don’t need to understand anything about Kubernetes to use Harvester and enjoy all the benefits of Kubernetes.

Harvester v0.1.0

Harvester v0.1.0 has the following features:

Installation from ISO

You can download the ISO from the release page on GitHub and install it directly on bare-metal nodes. During the installation, you can choose to create a new cluster or add the current node to an existing cluster. Harvester will automatically create a cluster based on the information you provide.

Install as a Helm Chart on an Existing Kubernetes Cluster

For development purposes, you can install Harvester on an existing Kubernetes cluster. The nodes must be able to support KVM through either hardware virtualization (Intel VT-x or AMD-V) or nested virtualization.

VM Lifecycle Management

Powered by KubeVirt, Harvester supports creating/deleting/updating operations for VMs, as well as SSH key injection and cloud-init.

Harvester also provides a graphical console and a serial port console for users to access the VM in the UI.

Storage Management

Harvester has a built-in, highly available block storage system powered by Longhorn. It uses the storage space on the nodes to provide highly available storage to the VMs inside the cluster.

Networking Management

Harvester provides several different options for networking.

By default, each VM inside Harvester will have a management NIC, powered by Kubernetes overlay networking.

Users can also add additional NICs to the VMs. Currently, VLAN is supported.

The multi-network functionality in Harvester is powered by Multus.

Image Management

Harvester has a built-in image repository, allowing users to easily download/manage new images for the VMs inside the cluster.

The image repository is powered by MinIO.

Image 01

Install

To install Harvester, just load the Harvester ISO into your bare-metal machine and boot it up.

Image 02

For the first node where you install Harvester, select Create a new Harvester cluster.

Later, you will be prompted to enter the password that will be used to enter the console on the host, as well as “Cluster Token.” The Cluster Token is a token that’s needed later by other nodes that want to join the same cluster.

Image 03

Then you will be prompted to choose the NIC that Harvester will use. The selected NIC will be used as the network for the management and storage traffic.

Image 04

Once everything has been configured, you will be prompted to confirm the installation of Harvester.

Image 05

Once installed, the host will be rebooted and boot into the Harvester console.

Image 06

Later, when you are adding a node to the cluster, you will be prompted to enter the management address (which is shown above) as well as the cluster token you’ve set when creating the cluster.

See here for a demo of the installation process.

Alternatively, you can install Harvester as a Helm chart on your existing Kubernetes cluster, if the nodes in your cluster have hardware virtualization support. See here for more details. And here is a demo using Digital Ocean which supports nested virtualization.

Usage

Once installed, you can use the management URL shown in the Harvester console to access the Harvester UI.

The default user name/password is documented here.

Image 07

Once logged in, you will see the dashboard.

Image 08

The first step to create a virtual machine is to import an image into Harvester.

Select the Images page and click the Create button, fill in the URL field and the image name will be automatically filled for you.

Image 09

Then click Create to confirm.

You will see the real-time progress of creating the image on the Images page.

Image 10

Once the image is finished creating, you can then start creating the VM using the image.

Select the Virtual Machine page, and click Create.

Image 11

Fill in the parameters needed for creation, including volumes, networks, cloud-init, etc. Then click Create.

The VM will be created shortly.

Image 12

Once created, click the Console button to get access to the console of the VM.

Image 13

See here for a UI demo.

Current Status and Roadmap

Harvester is in the early stages. We’ve just released the v0.1.0 (alpha) release. Feel free to give it a try and let us know what you think.

We have the following items in our roadmap:

  1. Live migration support
  2. PXE support
  3. VM backup/restore
  4. Zero downtime upgrade

If you need any help with Harvester, please join us at either our Rancher forums or Slack, where our team hangs out.

If you have any feedback or questions, feel free to file an issue on our GitHub page.

Thank you and enjoy Harvester!

Understanding Hyperconverged Infrastructure at the Edge from Adoption to Acceleration

Thursday, 29 September, 2022

You may be tired of the regular three-tiered infrastructure and the management issues it can bring in distributed systems and maintenance. Or perhaps you’ve looked at your infrastructure and realized that you need to move away from its current configuration. If that’s the case, hyperconverged infrastructure (HCI) may be a good solution because it removes a lot of management overhead, acting like a hypervisor that can handle networking and storage.

There are some key principles behind HCI that bring to light the advantages it has. Particularly, it can help simplify the deployment of new nodes and new applications. Because everything inside your infrastructure runs on normal x86 servers, adding nodes is as simple as spinning up a server and joining it to your HCI cluster. From here, applications can easily move around on the nodes as needed to optimize performance.

Once you’ve gotten your nodes deployed and added to your cluster, everything inside an HCI can be managed by policies, making it possible for you to strictly define the behavior of your infrastructure. This is one of the key benefits of HCI — it uses a single management interface. You don’t need to configure your networking in one place, your storage in another, and your compute in a third place; everything can be managed cohesively.

This cohesive management is possible because an HCI relies heavily on virtualization, making it feasible to converge the typical three tiers (compute, networking and storage) into a single plane, offering you flexibility.

While HCI might be overkill for simple projects, it’s becoming a best practice for various enterprise use cases. In this article, you’ll see some of the main use cases for implementing HCI in your organization. We’ll also introduce Harvester as a modern way to get started more easily.

While reading through these use cases, remember that the use of HCI is not limited to them. To benefit most from this article, think about what principles of HCI make the use cases possible, and perhaps, you’ll be able to come up with additional use cases for yourself.

Why you need a hyperconverged infrastructure

There are many use cases when it comes to HCI, and most of them are based on the fact that HCI is highly scalable and, more importantly, it’s easy to scale HCI. The concept started getting momentum back in 2009, but it wasn’t until 2014 that it started gaining traction in the community at large. HCI is a proven and mature technology that, in its essence, has worked the same way for many years.

The past few decades have seen virtualization become the preferred method for users to optimize their resource usage and manage their infrastructure costs. However, introducing new technology, such as containers, has required operators to shift their existing virtualized-focused infrastructure to integrate with these modern cloud-based solutions, bringing new challenges for IT operators to tackle.

Managing virtualized resources (and specifically VMs) can be quite challenging. This is where HCI can help. By automating and simplifying the management of virtual resources, HCI makes it easy for developers and team leads to leverage virtualization to the fullest and reduce the time to market their product, a crucial factor in determining the success of a project.

Following are some of the most popular ways to use HCI currently:

Edge computing

Edge computing is the principle of running workloads outside the primary data centers of a company. While there’s no single reason for wanting to use edge computing, the most popular reason is to decrease customer latency.

In edge computing, you don’t always need an extensive fleet of servers, and the amount of power you need will likely change based on the location. You’ll need more servers to serve New York City, with a population of 8.3 million, than you’d need to serve the entire country of Denmark, with a population of 5.8 million. One of the most significant benefits of HCI is that it scales incredibly well, including down to very small footprints. You’d typically want multiple nodes for reasons like backup, redundancy and high availability, but theoretically, it’s possible to scale down to a single node.

Given that HCI runs on normal hardware, it’s also possible for you to optimize your nodes for the workload you need. If your edge computing use case is to provide a cache for users, then you’d likely need more storage. However, if you’re implementing edge workers that need to execute small scripts, you’re more likely to need processing power and memory. With HCI, you can adapt the implementation to your needs.

Migrating to a Hybrid Cloud Model

Over the past decade, the cloud has gotten more and more popular. Many companies move to the cloud and later realize their applications are better suited to run on-premises. You will also find companies that no longer want to run things in their data centers and instead want to move them to the cloud. In both these cases, HCI can be helpful.

If you want to leverage the cloud, HCI can provide a similar user experience on-premise. HCI is sometimes described as a “cloud in a box” because it can offer similar services one would expect in a public cloud. Examples of this include a consistent API for allocating compute resources dynamically, load balancers and storage services. Having a similar platform is a good foundation for being able to move applications between the public cloud and on-premise. You can even take advantage of tools like Rancher that can manage cloud infrastructure and on-prem HCI from a single pane of glass.

Modernization strategy

Many organizations view HCI as an enabler in their modernization processes. However, modernization is quite different from migration.

Modernization focuses on redesigning existing systems and architecture to make the most efficient use of the new environment and its offerings. With its particular focus on simplifying the complex management of data, orchestration and workflows, HCI is perfect for modernization.

HCI enables you to consolidate your complex server architecture with all its storage, compute and network resources into smaller, easy-to-manage nodes. You can easily transform a node from a storage-first resource to a compute-first resource, allowing you to design your infrastructure how you want it while retaining simplicity.

Modern HCI solutions like Harvester can help you to run your virtualized and containerized workloads side by side, simplifying the operational and management components of infrastructure management while also providing the capabilities to manage workloads across distributed environments. Regarding automation, Harvester provides a unique approach by using cloud native APIs. This allows the user to automate using the same tools they would use to manage cloud native applications. Not switching between two “toolboxes” can increase product development velocity and decrease the overhead of managing complex systems. That means users of this approach get their product to market sooner and with less cost.

Virtual Desktop Infrastructure (VDI)

Many organizations maintain fleets of virtual desktops that enable their employees to work remotely while maintaining standards of security and performance. Virtual desktops are desktop environments that are not limited to the hardware they’re hosted in; they can be accessed remotely via the use of software. Organizations prefer them over hardware since they’re easy to provision, scale, and destroy on demand.

Since compute and storage are two strongly connected and important resources in virtual desktops, HCI can easily manage virtual desktops. HCI’s enhanced reliability provides VDI with increased fault tolerance and efficient capacity consumption. HCI also helps cut down costs for VDI as there is no need for separate storage arrays, dedicated storage networks, and related hardware.

Remote office/Branch office

A remote office/branch office (ROBO) is one of the best reasons for using HCI. In case you’re not familiar, it’s typical for big enterprises to have a headquarters where they host their data and internal applications. Then the ROBOs will either have a direct connection to the headquarters to access the data and applications or have a replica in their own location. In both cases, you will introduce more management and maintenance and other factors, such as latency.

With HCI, you can spin up a few servers in the ROBOs and add them to an HCI cluster. Now, you’re managing all your infrastructure, even the infrastructure in remote locations, through a single interface. Not only can this result in a better experience for the employees, but depending on how much customer interaction they have, it can result in a better customer experience.

In addition, with HCI, you’re likely to lower your total cost of ownership. While you would typically have to put up an entire rack of hardware in a ROBO, you’re now expected to accomplish the same with just a few servers.

Conclusion

After reading this article, you now know more about how HCI can be used to support a variety of use cases, and hopefully, you’ve come up with a few use cases yourself. This is just the beginning of how HCI can be used. Over the next decade or two, HCI will continue to play an important role in any infrastructure strategy, as it can be used in both on-premises data centers and the public cloud. The fact that it uses commodity x86 systems to run makes it suitable for many different use cases.

If you’re ready to start using HCI for yourself, take a look at Harvester. Harvester is a solution developed by SUSE, built for bare-metal servers. It uses enterprise-grade technologies such as Kubernetes, KubeVirt and Longhorn.

What’s Next:

Want to learn more about how Harvester and Rancher are helping enterprises modernize their stack? Sign up here to join our Global Online Meetup: Harvester on October 26, 2022, at 11 AM EST.

SUSE Linux Enterprise Server ‘Leader’ in Virtualization Software

Thursday, 14 July, 2022

Markus Noga, General Manager, Business-critical Linux at SUSE

I’m excited to share that SLES was recognized by G2, the world’s largest and most trusted tech marketplace, as the Leader in its Server Virtualization Software category and the High Performer in the Infrastructure-as-a-Service category. 

Linux with virtualization and IaaS is the foundation of cloud computing. I am proud of our engineering and that SLES is the perfect guest, optimized for and supporting all leading hypervisor technologies and cloud platforms.

According to Gartner, the worldwide IaaS public cloud services market grew 41.4% in 2021. As our customers continue their cloud native and digital transformation journeys, they require solutions that allow them to effectively manage at scale and drive growth across their business. To do this, they have found SUSE solutions to offer the highest levels of security, availability and performance.

I am pleased to share a few testimonials from our customers:

“VMware and SUSE Linux Enterprise Server work very well together. When we originally went down the path of virtualization, SUSE Linux Enterprise Server was the first distribution to include drivers for our chosen virtualization technology as standard, which made things much easier. In general terms, we find that SUSE is often ahead of the other Linux vendors in introducing support for new technologies such as new file systems, for example. This means we can get the benefits of cutting-edge technology without the usual risks of being an early adopter.” 

Steven Mertens, Global Service Lead Infrastructure, NGA Human Resources

“Another thing that attracted us to SLES for SAP Applications is the fact that it has been specifically optimized to run on Microsoft Azure. There are ready-to-use cloud images of SLES for SAP Applications available on Microsoft Azure, which accelerates installation and deployment.” 

Hinrich Mielke, SAP Director, Alegri International Group

While SLES is leading the way, it doesn’t stop there, nor does our engineering work. For customers containerizing their workloads, SUSE Rancher is the most interoperable Kubernetes management solution across public cloud, edge and on-premises. It works well with SLES and leading third-party Linux distributions.

Separately, Harvester allows customers to unify their virtual machine and container workloads alongside Kubernetes clusters, working seamlessly with SUSE Rancher.  

Together, SUSE Linux Enterprise Server, SUSE Rancher, and Harvester help customers achieve unparalleled security, scalability, resilience and efficiency for all their infrastructure operations.

Watch out for more exciting news to come and see how you can innovate with SUSE.

At SUSECON Digital 2022 in June, we launched the latest release of our Linux code base, SUSE Linux Enterprise 15 Service Pack 4 (SLE 15 SP4), which provides you with the advantages of using one of the world’s most secure enterprise Linux platforms. You can revisit my SUSECON keynote, which provides a comprehensive overview, including several interviews with our partners and customers.