How to Manage Harvester 0.3.0 with Rancher 2.6.1 Running in a VM within Harvester

Wednesday, 20 October, 2021

What I liked about the release of Harvester 0.2.0 was the ease of enabling the embedded Rancher server, which allowed you to create Kubernetes clusters in the same Harvester cluster.

With the release of Harvester 0.3.0, this option was removed in favor of installing Rancher 2.6.1 separately and then importing your Harvester cluster into Rancher, where you could manage it. A Harvester node driver is provided with Rancher 2.6.1 to allow you to create Kubernetes clusters in the same Harvester 0.3.0 cluster.

I replicated my Harvester 0.2.0 plus the Rancher server experience using Harvester 0.3.0 and Rancher 2.6.1.

There’s no upgrade path from Harvester 0.2.0 to 0.3.0, so the first step was reinstalling my Intel NUC with Harvester 0.3.0 following the docs at: https://docs.harvesterhci.io/v0.3/install/iso-install/.

Given that my previous Harvester 0.2.0 install included Rancher, I figured I’d install Rancher in a VM running on my newly installed Harvester 0.3.0 node – but which OS would I use? With Rancher being deployed using a single Docker container, I was looking for a small, lightweight OS that included Docker. From past experience, I knew that openSUSE Leap had slimmed down images of its distribution available at https://get.opensuse.org/leap/ – click the alternative downloads link immediately under the initial downloads. Known as Just enough OS (JeOS), these are available for both Leap and Tumbleweed (their rolling release). I opted for Leap, so I created an image using the URL for the OpenStack Cloud image (trust me – the KVM and XEN image hangs on boot).

Knowing that I wanted to be able to access Rancher on the same network my Harvester node was attached to, I also enabled VLAN support (Advanced | Settings | vlan) and created a network using VLAN ID 1 (Advanced | Networks).

The next step is to install Rancher in a VM. While I could do this manually, I prefer automation and wanted to do something I could reliably repeat (something I did a lot while getting this working) and perhaps adapt when installing future versions. When creating a virtual machine, I was intrigued by the user data and network data sections in the advanced options tab, referenced in the docs at https://docs.harvesterhci.io/v0.3/vm/create-vm/, along with some basic examples. I knew from past experience that cloud-init could be used to initialize cloud instances, and with the openSUSE OpenStack Cloud images using cloud-init, I wondered if this could be used here. According to the examples in the cloud-init docs at https://cloudinit.readthedocs.io/en/latest/topics/examples.html, it can!

When creating the Rancher VM, I gave it 1 CPU: with a 4-core NUC and Harvester 0.3.0 not supporting over-provisioning (it's a bug – phew!), I had to be frugal! Through trial and error, I also found that the minimum memory required for Rancher to work is 3 GB. I chose my openSUSE Leap 15.3 JeOS OpenStack Cloud image on the volumes tab, and on the networks tab, I chose my custom (VLAN 1) network.

The real work is done on the advanced options tab. I already knew JeOS didn’t include Docker, so that would need to be installed before I could launch the Docker container for Rancher. I also knew the keyboard wasn’t set up for me in the UK, so I wanted to fix that too. Plus, I’d like a message to indicate it was ready to use. I came up with the following User Data:

password: changeme
packages:
  - docker
runcmd:
  - localectl set-keymap uk
  - systemctl enable --now docker
  - docker run --name=rancher -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.1
  - until curl -sk https://127.0.0.1 -o /dev/null; do sleep 30s; done
final_message: Rancher is ready!

Let me go through the above lines:

  • Line 1 sets the password of the default opensuse user – you will be prompted to change this the first time you log in as this user, so don’t set it to anything secret!
  • Lines 2 & 3 install the docker package.
  • Line 4 says we’ll run some commands once it’s booted the first time.
  • Line 5 sets the UK keyboard.
  • Line 6 enables and starts the Docker service.
  • Line 7 pulls and runs the Docker container for Rancher 2.6.1 – this is the same command as in the Harvester docs, except I've added "--name=rancher" to make it easier to find the Bootstrap Password later.
    NOTE: When you create the VM, this line will be split into two lines with an additional preceding line with “>-” – it will look a bit different, but it’s nothing to worry about!
  • Line 8 is a loop checking for the Rancher server to become available – I test localhost, so it works regardless of the assigned IP address.
  • Line 9 prints out a message saying it’s finished (which happens after the previous loop completes).

An extra couple of lines will be automatically added when you click the Create button, but don't click it yet as we're not done!
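The readiness loop on line 8 is worth a second look, since the same pattern is handy anywhere you need to wait for a service to come up. Here's a sketch of it as a reusable function – wait_for_url and its retry limit are my own additions for illustration, not part of the Harvester docs:

```shell
# Poll a URL until it responds, as the user data above does for Rancher.
# wait_for_url is a hypothetical helper, not part of Harvester or Rancher.
wait_for_url() {
  url="$1"
  tries="${2:-10}"   # give up after this many attempts
  i=0
  until curl -sk "$url" -o /dev/null; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

# e.g. wait_for_url "https://127.0.0.1" 20 && echo "Rancher is ready!"
```

Testing against localhost, as the user data does, keeps the check independent of whatever IP address DHCP hands out.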

This still left the problem of which IP address to use to access Rancher. With devices being assigned random IP addresses via DHCP, how do I control which address is used? Fortunately, the Network Data section allows us to set a static address (and not have to mess with config files or run custom scripting within the VM):

network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: static
          address: 192.168.144.190/24
          gateway: 192.168.144.254
    - type: nameserver
      address:
        - 192.168.144.254
      search:
        - example.com

I won’t go through all the lines above but will call out those you need to change for your own network:

  • Line 8 sets the IP address to use with the CIDR netmask (/24 means 255.255.255.0).
  • Line 9 sets the default gateway.
  • Line 12 sets the default DNS nameserver.
  • Line 14 sets the default DNS search domain.

See https://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v1.html for information on the other lines.
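As an aside on the /24 notation above: the prefix length is simply the number of leading 1-bits in the netmask. A small POSIX-shell helper (my own illustration, nothing Harvester requires) converts a prefix length to the dotted form:

```shell
# Convert a CIDR prefix length (0-32) to a dotted-quad netmask,
# e.g. 24 -> 255.255.255.0 as used in the network data above.
prefix_to_netmask() {
  p="$1"
  mask=""
  for i in 1 2 3 4; do
    if [ "$p" -ge 8 ]; then
      octet=255
      p=$((p - 8))
    else
      case "$p" in
        0) octet=0 ;;   1) octet=128 ;; 2) octet=192 ;; 3) octet=224 ;;
        4) octet=240 ;; 5) octet=248 ;; 6) octet=252 ;; 7) octet=254 ;;
      esac
      p=0
    fi
    mask="${mask}${mask:+.}${octet}"
  done
  echo "$mask"
}

prefix_to_netmask 24   # prints 255.255.255.0
```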

Unless you unticked "Start virtual machine on creation", your VM should start booting once you click the Create button. If you open the Web Console in VNC, you'll be able to keep an eye on the progress of your VM. When you see the message "Rancher is ready!", you can try accessing Rancher in a web browser at the IP address you specified above. Depending on the web browser you're using and its configuration, you may see warning messages about the self-signed certificate Rancher is using.

The first time you log in to Rancher, you will be prompted for the randomly generated bootstrap password. To get this, SSH to your Rancher VM as the opensuse user, then run:

sudo docker logs rancher 2>&1 | grep "Bootstrap Password:"

Copy the password and paste it into the password field of the Rancher login screen, then click the "Log in with Local User" button.

You’re then prompted to set a password for the default admin user. Unless you can remember random strings or use a password manager, I’d set a specific password. You also need to agree to the terms and conditions for using Rancher!

Finally, you’re logged into Rancher, but we’re not entirely done yet as we need to add our Harvester cluster. To do this, click on the hamburger menu and then the Virtualization Management tab. Don’t panic if you see a failed whale error – just try reloading.

Clicking the Import Existing button will give you some registration commands to run on one of your Harvester nodes.

To do this, SSH to your Harvester node as the rancher user and then run the first kubectl command prefixed with sudo. Unless you’ve changed your Harvester installation, you’ll also need to run the curl command, again prefixing the kubectl command with sudo. The webpage should refresh, showing your Harvester cluster’s management page. If you click the Harvester Cluster link or tab, your Harvester cluster should be listed. Clicking on your cluster name should show something familiar!

Finally, we need to activate the Harvester node driver by clicking the hamburger menu and then the Cluster Management tab. Click Drivers, then Node Drivers, find Harvester in the list, and click Activate.

Now we have Harvester 0.3.0 integrated with Rancher 2.6.1, running similarly to Harvester 0.2.0, although sacrificing 1 CPU (which will be less of an issue once the CPU over-provisioning bug is fixed) and 3GB RAM.

Admittedly, running Rancher within a VM in the same Harvester you’re managing through Rancher doesn’t seem like the best plan, and you wouldn’t do it in production, but for the home lab, it’s fine. Just remember not to chop off the branch you’re standing on!


Harvester: Intro and Setup    

Tuesday, 17 August, 2021
I mentioned about a month back that I was using Harvester in my home lab. I didn’t go into much detail, so this post will bring some more depth. We will cover what Harvester does, as well as my hardware, installation, setup and how to deploy your first virtual machine. Let’s get started.

What is Harvester?

Harvester is Rancher’s open source answer to a hyperconverged infrastructure platform. Like most things Rancher is involved with, it is built on Kubernetes using tools like KubeVirt and Longhorn. KubeVirt is an exciting project that leverages KVM and libvirt to run virtual machines inside Kubernetes; this allows you to run both containers and VMs in your cluster. It reduces operational overhead and provides consistency. This combination of tried and tested technologies provides an open source solution in this space.

It is also designed to be used with bare metal, making it an excellent option for a home lab.

Hardware

If you check the hardware requirements, you will notice they focus more on business usage. So far, my personal experience says that you want at least a 4-core/8-thread CPU, 16GB of RAM, and a large SSD, preferably an NVMe drive. Anything less resource-wise doesn't leave enough capacity for running many containers or VMs. I installed it on an Intel NUC 8i5BEK, which has an Intel Core i5-8259U, 32GB of RAM and a 512GB NVMe drive. It can handle running Harvester without any issues. Of course, this is just my experience; yours may differ.

Installation

Harvester ships as an ISO, which you can download on the GitHub Releases page. You can pull it quickly using wget.

$ wget https://releases.rancher.com/harvester/v0.2.0/harvester-amd64.iso

Once you have it downloaded, you will need to create a bootable USB. I typically use Balena Etcher since it is cross-platform and intuitive. Once you have a bootable USB, place it in the machine you want to use and boot from it. This screen should greet you:
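If you'd rather skip a GUI tool, dd can write the ISO too. Here's a guarded sketch – flash_iso is my own wrapper, and the device path is a placeholder; double-check with lsblk first, since dd will happily overwrite the wrong disk:

```shell
# Write an ISO to a USB device, refusing to run unless the ISO exists
# and the target really is a block device. flash_iso is a hypothetical
# helper; the device path in the example below is a placeholder.
flash_iso() {
  iso="$1"
  dev="$2"
  [ -f "$iso" ] || { echo "missing ISO: $iso" >&2; return 1; }
  [ -b "$dev" ] || { echo "not a block device: $dev" >&2; return 1; }
  sudo dd if="$iso" of="$dev" bs=4M status=progress conv=fsync
}

# e.g. flash_iso harvester-amd64.iso /dev/sdX
```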

Select “New Cluster”:

Select the drive you want to use.

Enter your hostname, select your network interface, and make sure you use automatic DHCP.

You will then be prompted to enter your cluster token. This can be any phrase you want; I recommend using your password manager to generate one.

Set a password to use, and remember that the default user name is rancher.

The next several options are worth a look, especially if you want to use the SSH keys you keep in GitHub. Since this is a home lab, I left the SSH keys, proxy and cloud-init setup blank. In an enterprise environment, these would be really useful. Now you will see the final screen before installation. Verify that everything is configured to your liking before proceeding.

If it all looks great, proceed with the installation. It will take a few minutes to complete; when it does, you will need to reboot.

After the reboot, the system will start up, and you will see a screen showing the URL for Harvester and the system's status. Wait until it reports that Harvester is ready before trying to connect.

Great! It is now reporting that it is up and running, so it’s now time to set up Harvester.

Initial Setup

We can navigate to the URL listed once the OS boots. Mine is https://harvest:30443. It uses a self-signed certificate by default, so you will see a warning in your browser. Just click on “advanced” to proceed, and accept it. Set a password for the default admin account.

Now you should see the dashboard and the health of the system.

I like to disable the default account and add my own account for authentication. It's probably not necessary for a home lab, but a good habit to get into. First, navigate to the user management screen.

Now log out and back in with your new account. Once that’s finished, we can create our first VM.

Deploying Your First VM

Harvester has native support for qcow2 images and can import them from a URL. Let's grab the URL for the openSUSE Leap 15.3 JeOS image.

https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2

The JeOS image for openSUSE is roughly 225MB, which is a perfect size for downloading and creating VMs quickly. Let’s make the image in Harvester.

Create a new image, and add the URL above as the image URL.

You should now see it listed.

Now we can create a VM using that image. Navigate to the VM screen.

Once we’ve made our way to the VM screen, we’ll create a new VM.

When that is complete, the VM will show up in the list. Wait until it has been started, then you can start using it.

Wrapping Up

In this article, I wanted to show you how to set up VMs with Harvester, even starting from scratch! There are plenty of features to explore and plenty more on the roadmap. This project is still early in its life, so now is a great time to jump in and get involved with its direction.


Hyperconverged Infrastructure and Harvester

Monday, 2 August, 2021

Virtual machines (VMs) have transformed infrastructure deployment and management. VMs are so ubiquitous that I can’t think of a single instance where I deployed production code to a bare metal server in my many years as a professional software engineer.

VMs provide secure, isolated environments hosting your choice of operating system while sharing the resources of the underlying server. This allows resources to be allocated more efficiently, reducing the cost of over-provisioned hardware.

Given the power and flexibility provided by VMs, it is common to find many VMs deployed across many servers. However, managing VMs at this scale introduces challenges.

Managing VMs at Scale

Hypervisors provide comprehensive management of the VMs on a single server. The ability to create new VMs, start and stop them, clone them, and back them up are exposed through simple management consoles or command-line interfaces (CLIs).

But what happens when you need to manage two servers instead of one? Suddenly you find yourself having first to gain access to the appropriate server to interact with the hypervisor. You’ll also quickly find that you want to move VMs from one server to another, which means you’ll need to orchestrate a sequence of shutdown, backup, file copy, restore and boot operations.

Routine tasks performed on one server become just that little bit more difficult with two, and quickly become overwhelming with 10, 100 or 1,000 servers.

Clearly, administrators need a better way to manage VMs at scale.

Hyperconverged Infrastructure

This is where Hyperconverged Infrastructure (HCI) comes in. HCI is a marketing term rather than a strict definition. Still, it is typically used to describe a software layer that abstracts the compute, storage and network resources of multiple (often commodity or whitebox) servers to present a unified view of the underlying infrastructure. By building on top of the virtualization functionality included in all major operating systems, HCI allows many systems to be managed as a single, shared resource.

With HCI, administrators no longer need to think in terms of VMs running on individual servers. New hardware can be added and removed as needed. VMs can be provisioned wherever there is appropriate capacity, and operations that span servers, such as moving VMs, are as routine with 2 servers as they are with 100.

Harvester

Harvester, created by Rancher, is open source HCI software built using Kubernetes.

While Kubernetes has become the defacto standard for container orchestration, it may seem like an odd choice as the foundation for managing VMs. However, when you think of Kubernetes as an extensible orchestration platform, this choice makes sense.

Kubernetes provides authentication, authorization, high availability, fault tolerance, CLIs, software development kits (SDKs), application programming interfaces (APIs), declarative state, node management, and flexible resource definitions. All of these features have been battle tested over the years with many large-scale clusters.

More importantly, Kubernetes orchestrates many kinds of resources beyond containers. Thanks to the use of custom resource definitions (CRDs), and custom operators, Kubernetes can describe and provision any kind of resource.
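For instance, a resource type like a virtual machine can be declared to the Kubernetes API with a few lines of YAML. This minimal CRD sketch uses a made-up group and kind purely for illustration (KubeVirt's real definitions are far more detailed):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualmachines.example.io   # hypothetical group, for illustration only
spec:
  group: example.io
  scope: Namespaced
  names:
    kind: VirtualMachine
    plural: virtualmachines
    singular: virtualmachine
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```

Once a definition like this is registered, a custom operator can watch for these resources and do the actual provisioning work.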

By building on Kubernetes, Harvester takes advantage of a well tested and actively developed platform. With the use of KubeVirt and Longhorn, Harvester extends Kubernetes to allow the management of bare metal servers and VMs.

Harvester is not the first time VM management has been built on top of Kubernetes; Rancher’s own RancherVM is one such example. But these solutions have not been as popular as hoped:

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators.

To address this, Harvester does not expose the underlying Kubernetes platform to the end user. Instead, it presents more familiar concepts like VMs, NICs, ISO images and disk volumes. This allows Harvester to take advantage of Kubernetes while giving administrators a more traditional view of their infrastructure.

Managing VMs at Scale

The fusion of Kubernetes and VMs provides the ability to perform common tasks such as VM creation, backups, restores, migrations, SSH-Key injection and more across multiple servers from one centralized administration console.

Consolidating virtualized resources like CPU, memory, network, and storage allows for greater resource utilization and simplified administration, allowing Harvester to satisfy the core premise of HCI.

Conclusion

HCI abstracts the resources exposed by many individual servers to provide administrators with a unified and seamless management interface, providing a single point to perform common tasks like VM provisioning, moving, cloning, and backups.

Harvester is an HCI solution leveraging popular open source projects like Kubernetes, KubeVirt, and Longhorn, but with the explicit goal of not exposing Kubernetes to the end user.

The end result is an HCI solution built on the best open source platforms available while still providing administrators with a familiar view of their infrastructure.

Download Harvester from the project website and learn more from the project documentation.

Meet the Harvester developer team! Join our free Summer is Open session on Harvester: Tuesday, July 27 at 12pm PT and on demand. Get details about the project, watch a demo, ask questions and get a challenge to complete offline.


Announcing Harvester Beta Availability

Friday, 28 May, 2021

It has been five months since we announced project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Since then, we’ve received a lot of feedback from the early adopters. This feedback has encouraged us and helped in shaping Harvester’s roadmap. Today, I am excited to announce the Harvester v0.2.0 release, along with the Beta availability of the project!

Let’s take a look at what’s new in Harvester v0.2.0.

Raw Block Device Support

We’ve added raw block device support in v0.2.0. Since it’s a change that’s mostly under the hood, the updates might not be immediately obvious to end users. Let me explain in more detail:

In Harvester v0.1.0, the image to VM flow worked like this:

  1. Users added a new VM image.

  2. Harvester downloaded the image into the built-in MinIO object store.

  3. Users created a new VM using the image.

  4. Harvester created a new volume, and copied the image from the MinIO object store.

  5. The image was presented to the VM as a block device, but it was stored as a file in the volume created by Harvester.

This approach had a few issues:

  1. Read/write operations to the VM volume needed to be translated into reading/writing the image file, which performed worse compared to reading/writing the raw block device, due to the overhead of the filesystem layer.

  2. If one VM image was used by multiple VMs, it was replicated many times in the cluster, because each VM had its own copy of the volume even though the content was largely the same, having come from the same image.

  3. The dependency on MinIO to store the images meant Harvester had to keep MinIO highly available and expandable. Those requirements placed an extra burden on the Harvester management plane.

In v0.2.0, we took another approach to tackle the problem, resulting in a simpler solution with better performance and less duplicated data:

  1. Instead of an image file on the filesystem, now we’re providing the VM with raw block devices, which allows for better performance for the VM.

  2. We’ve taken advantage of a new feature called Backing Image in Longhorn v1.1.1 to reduce unnecessary copies of the VM image. Now the VM image is served as a read-only layer shared by all the VMs using it, and Longhorn creates a copy-on-write (COW) layer on top of the image for each VM to use.

  3. Since Longhorn now manages the VM images via the Backing Image feature, the dependency on MinIO can be removed.

Image 02
A comprehensive view of images in Harvester

From the user experience perspective, you may have noticed that importing an image is now instantaneous, while starting the first VM based on a new image takes a bit longer due to the image download in Longhorn. Any subsequent VMs using the same image will boot significantly faster than in the previous v0.1.0 release, and disk IO performance will be better as well.

VM Live Migration Support

In preparation for the future upgrade process, VM live migration is now supported in Harvester v0.2.0.

VM live migration allows a VM to migrate from one node to another, without any downtime. It’s mostly used when you want to perform maintenance work on one of the nodes or want to balance the workload across the nodes.

One thing worth noting: because a VM's IP address can change after migration when using the default management network, we highly recommend using the VLAN network instead. Otherwise, you might not be able to keep the same IP for the VM after it migrates to another node.

You can read more about live migration support here.

VM Backup Support

We’ve added VM backup support to Harvester v0.2.0.

The backup support provides a way for you to back up your VMs outside of the cluster.

To use the backup/restore feature, you need an S3-compatible endpoint or an NFS server; the backup destination is referred to as the backup target.

You can get more details on how to set up the backup target in Harvester here.

Image 03
Easily manage and operate your virtual machines in Harvester

In the meantime, we’re also working on a snapshot feature for the VMs. In contrast to the backup feature, a snapshot stores the image state inside the cluster, giving VMs the ability to revert to a previous snapshot. Since no data is copied outside the cluster, a snapshot is a quick way to try something experimental, but not ideal for keeping data safe if the cluster goes down.

PXE Boot Installation Support

PXE boot installation is widely used in data centers to automatically provision bare-metal nodes with the desired operating system. We’ve added PXE boot installation support in Harvester v0.2.0 to help users who have a large number of servers and want a fully automated installation process.

You can find more information regarding how to do the PXE boot installation in Harvester v0.2.0 here.

We’ve also provided a few examples of doing iPXE on public bare-metal cloud providers, including Equinix Metal. More information is available here.

Rancher Integration

Last but not least, Harvester v0.2.0 now ships with a built-in Rancher server for Kubernetes management.

This was one of the most requested features since we announced Harvester v0.1.0, and we’re very excited to deliver the first version of the Rancher integration in the v0.2.0 release.

For v0.2.0, you can use the built-in Rancher server to create Kubernetes clusters on top of your Harvester bare-metal clusters.

To start using the built-in Rancher in Harvester v0.2.0, go to Settings, then set the rancher-enabled option to true. Now you should be able to see a Rancher button on the top right corner of the UI. Clicking the button takes you to the Rancher UI.

Harvester and Rancher share the authentication process, so once you’re logged in to Harvester, you don’t need to redo the login process in Rancher and vice versa.

If you want to create a new Kubernetes cluster using Rancher, you can follow the steps here. A reminder that VLAN networking needs to be enabled for creating Kubernetes clusters on top of Harvester, since the default management network cannot guarantee a stable IP for the VMs, especially after a reboot or migration.

What’s Next?

Now with v0.2.0 behind us, we’re working on the v0.3.0 release, which will be the last feature release before Harvester reaches GA.

We’re working on many things for v0.3.0 release. Here are some highlights:

  • Built-in load balancer
  • Rancher 2.6 integration
  • Replace K3OS with a small footprint OS designed for the container workload
  • Multi-tenant support
  • Multi-disk support
  • VM snapshot support
  • Terraform provider
  • Guest Kubernetes cluster CSI driver
  • Enhanced monitoring

You can get started today and give Harvester v0.2.0 a try via our website.

Let us know what you think via the Rancher User Slack #harvester channel. And start contributing by filing issues and feature requests via our GitHub page.

Enjoy Harvester!


Announcing Harvester: Open Source Hyperconverged Infrastructure (HCI) Software

Wednesday, 16 December, 2020

Today, I am excited to announce project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Harvester provides fully integrated virtualization and storage capabilities on bare-metal servers. No Kubernetes knowledge is required to use Harvester.

Why Harvester?

In the past few years, we’ve seen many attempts to bring VM management into container platforms, including our own RancherVM, and other solutions like KubeVirt and Virtlet. We’ve seen some demand for solutions like this, mostly for running legacy software side by side with containers. But in the end, none of these solutions have come close to the popularity of industry-standard virtualization products like vSphere and Nutanix.

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators. They are familiar with concepts like ISO images, disk volumes, NICs and VLANS – not concepts like pods and PVCs.

Enter Harvester.

Project Harvester is an open source alternative to traditional proprietary hyperconverged infrastructure software. Harvester is built on top of cutting-edge open source technologies including Kubernetes, KubeVirt and Longhorn. We’ve designed Harvester to be easy to understand, install and operate. Users don’t need to understand anything about Kubernetes to use Harvester and enjoy all the benefits of Kubernetes.

Harvester v0.1.0

Harvester v0.1.0 has the following features:

Installation from ISO

You can download the ISO from the release page on GitHub and install it directly on bare-metal nodes. During the installation, you can choose to create a new cluster or add the current node to an existing cluster. Harvester will automatically create a cluster based on the information you provide.

Install as a Helm Chart on an Existing Kubernetes Cluster

For development purposes, you can install Harvester on an existing Kubernetes cluster. The nodes must be able to support KVM through either hardware virtualization (Intel VT-x or AMD-V) or nested virtualization.
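A quick way to check whether a node's CPU advertises the required virtualization extensions (this is a generic Linux check, not a Harvester command):

```shell
# Count CPU threads advertising hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V); 0 means you'd need nested virtualization.
grep -cE 'vmx|svm' /proc/cpuinfo || echo "no hardware virtualization flags found"
```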

VM Lifecycle Management

Powered by KubeVirt, Harvester supports creating/deleting/updating operations for VMs, as well as SSH key injection and cloud-init.

Harvester also provides a graphical console and a serial port console for users to access the VM in the UI.
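Under the hood, each VM is just another Kubernetes resource managed by KubeVirt. As a rough illustration of what that looks like – the apiVersion and container disk image here are assumptions that vary by KubeVirt version, and Harvester generates definitions like this for you:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm        # hypothetical VM name
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # assumed example image
```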

Storage Management

Harvester has a built-in, highly available block storage system powered by Longhorn. It uses the storage space on the nodes to provide highly available storage to the VMs inside the cluster.

Networking Management

Harvester provides several different options for networking.

By default, each VM inside Harvester will have a management NIC, powered by Kubernetes overlay networking.

Users can also add additional NICs to the VMs. Currently, VLAN is supported.

The multi-network functionality in Harvester is powered by Multus.

Image Management

Harvester has a built-in image repository, allowing users to easily download/manage new images for the VMs inside the cluster.

The image repository is powered by MinIO.

Image 01

Install

To install Harvester, just load the Harvester ISO into your bare-metal machine and boot it up.

Image 02

For the first node where you install Harvester, select Create a new Harvester cluster.

Next, you will be prompted to set the password that will be used to log in to the console on the host, as well as the “Cluster Token.” The Cluster Token is needed later by other nodes that want to join the same cluster.

Image 03

Then you will be prompted to choose the NIC that Harvester will use. The selected NIC will be used as the network for the management and storage traffic.

Image 04

Once everything has been configured, you will be prompted to confirm the installation of Harvester.

Image 05

Once installed, the host will be rebooted and boot into the Harvester console.

Image 06

Later, when you are adding a node to the cluster, you will be prompted to enter the management address (which is shown above) as well as the cluster token you’ve set when creating the cluster.

See here for a demo of the installation process.

Alternatively, you can install Harvester as a Helm chart on your existing Kubernetes cluster, if the nodes in your cluster have hardware virtualization support. See here for more details. And here is a demo using DigitalOcean, which supports nested virtualization.

Usage

Once installed, you can use the management URL shown in the Harvester console to access the Harvester UI.

The default user name/password is documented here.

Image 07

Once logged in, you will see the dashboard.

Image 08

The first step to create a virtual machine is to import an image into Harvester.

Select the Images page and click the Create button. Fill in the URL field; the image name will be filled in automatically for you.

Image 09

Then click Create to confirm.

You will see the real-time progress of creating the image on the Images page.

Image 10

Once the image is finished creating, you can then start creating the VM using the image.
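Behind the scenes, Harvester tracks each image as a Kubernetes custom resource. The following is a hypothetical sketch only; the API group and version of this CRD have changed across Harvester releases, and the name and URL here are illustrative:

```yaml
# Illustrative sketch of an image custom resource.
# The apiVersion/group varies by Harvester release; verify against your cluster
# with `kubectl api-resources | grep -i image` before relying on this shape.
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: opensuse-leap          # hypothetical name
  namespace: default
spec:
  displayName: openSUSE-Leap
  url: https://example.com/openSUSE-Leap.qcow2   # placeholder URL
```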

Select the Virtual Machine page, and click Create.

Image 11

Fill in the parameters needed for creation, including volumes, networks, cloud-init, etc. Then click Create.
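For the cloud-init section, a minimal user-data sketch along the lines of the upstream cloud-init examples might look like this (all values are illustrative; pick your own password and packages):

```yaml
#cloud-config
# Minimal cloud-init user data: set a login password, allow SSH password
# authentication, and install the QEMU guest agent on first boot.
password: change-me          # illustrative only; use a real secret
chpasswd:
  expire: false
ssh_pwauth: true
packages:
  - qemu-guest-agent
```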

The VM will be created shortly.

Image 12

Once created, click the Console button to get access to the console of the VM.

Image 13

See here for a UI demo.

Current Status and Roadmap

Harvester is in the early stages. We’ve just published the v0.1.0 (alpha) release. Feel free to give it a try and let us know what you think.

We have the following items in our roadmap:

  1. Live migration support
  2. PXE support
  3. VM backup/restore
  4. Zero downtime upgrade

If you need any help with Harvester, please join us at either our Rancher forums or Slack, where our team hangs out.

If you have any feedback or questions, feel free to file an issue on our GitHub page.

Thank you and enjoy Harvester!

Getting Started with Cluster Autoscaling in Kubernetes

Tuesday, 12 September, 2023

Autoscaling the resources and services in your Kubernetes cluster is essential if your system is going to meet variable workloads. You can’t rely on manual scaling to help the cluster handle unexpected load changes.

While cluster autoscaling certainly allows for faster and more efficient deployment, the practice also reduces resource waste and helps decrease overall costs. When you can scale up or down quickly, your applications can be optimized for different workloads, making them more reliable. And a reliable system is always cheaper in the long run.

This tutorial introduces you to Kubernetes’s Cluster Autoscaler. You’ll learn how it differs from other types of autoscaling in Kubernetes, as well as how to implement Cluster Autoscaler using Rancher.

The differences between different types of Kubernetes autoscaling

By monitoring utilization and reacting to changes, Kubernetes autoscaling helps ensure that your applications and services are always running at their best. You can accomplish autoscaling through the use of a Vertical Pod Autoscaler (VPA), a Horizontal Pod Autoscaler (HPA) or a Cluster Autoscaler (CA).

VPA is a Kubernetes resource responsible for managing individual pods’ resource requests. It’s used to automatically adjust the resource requests and limits of individual pods, such as CPU and memory, to optimize resource utilization. VPA helps organizations maintain the performance of individual applications by scaling up or down based on usage patterns.

HPA is a Kubernetes resource that automatically scales the number of replicas of a particular application or service. HPA monitors the usage of the application or service and will scale the number of replicas up or down based on the usage levels. This helps organizations maintain the performance of their applications and services without the need for manual intervention.
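As a concrete illustration, a minimal HPA manifest looks like the sketch below. The target Deployment name `web` and the replica bounds are hypothetical; the `autoscaling/v2` API shape is standard Kubernetes:

```yaml
# Minimal HPA sketch: keep average CPU utilization around 80% by scaling
# a hypothetical "web" Deployment between 2 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # hypothetical workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```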

CA is a Kubernetes resource used to automatically scale the number of nodes in the cluster based on the usage levels. This helps organizations maintain the performance of the cluster and optimize resource utilization.

The main difference between VPA, HPA and CA is that VPA and HPA are responsible for managing the resource requests of individual pods and services, while CA is responsible for managing the overall resources of the cluster. VPA and HPA are used to scale up or down based on the usage patterns of individual applications or services, while CA is used to scale the number of nodes in the cluster to maintain the performance of the overall cluster.

Now that you understand how CA differs from VPA and HPA, you’re ready to begin implementing cluster autoscaling in Kubernetes.

Prerequisites

There are many ways to demonstrate how to implement CA. For instance, you could install Kubernetes on your local machine and set up everything manually using the kubectl command-line tool. Or you could set up a user with sufficient permissions on Amazon Web Services (AWS), Google Cloud Platform (GCP) or Azure to play with Kubernetes using your favorite managed cluster provider. Both options are valid; however, they involve a lot of configuration steps that can distract from the main topic: the Kubernetes Cluster Autoscaler.

An easier solution is one that allows the tutorial to focus on understanding the inner workings of CA and not on time-consuming platform configurations, which is what you’ll be learning about here. This solution involves only two requirements: a Linode account and Rancher.

For this tutorial, you’ll need a running Rancher Manager server. Rancher is perfect for demonstrating how CA works, as it allows you to deploy and manage Kubernetes clusters on any provider conveniently from its powerful UI. Moreover, you can deploy it using any of several popular providers.

If you are curious about a more advanced implementation, we suggest reading the Rancher documentation, which describes how to install Cluster Autoscaler on Rancher using Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling groups. However, please note that implementing CA is very similar on different platforms, as all solutions leverage the Kubernetes Cluster API for their purposes, something that will be addressed in more detail later.

What is Cluster API, and how does Kubernetes CA leverage it?

Cluster API is an open source project for building and managing Kubernetes clusters. It provides a declarative API to define the desired state of Kubernetes clusters. In other words, Cluster API can be used to extend the Kubernetes API to manage clusters across various cloud providers, bare metal installations and virtual machines.
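To make the declarative model tangible, here is a hypothetical sketch of a Cluster API `Cluster` object. The names and the referenced infrastructure kind are illustrative; real manifests depend entirely on the provider in use:

```yaml
# Illustrative Cluster API manifest: you declare the desired cluster,
# and provider controllers reconcile the infrastructure to match.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster            # hypothetical name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.244.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster         # provider-specific; illustrative only
    name: demo-infra
```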

In comparison, Kubernetes CA leverages Cluster API to enable the automatic scaling of Kubernetes clusters in response to changing application demands. CA detects when the capacity of a cluster is insufficient to accommodate the current workload and then requests additional nodes from the cloud provider. CA then provisions the new nodes using Cluster API and adds them to the cluster. In this way, the CA ensures that the cluster has the capacity needed to serve its applications.

Because Rancher supports CA, and RKE2 and K3s work with Cluster API, their combination offers the ideal solution for automated Kubernetes lifecycle management from a central dashboard. This is also true for any other cloud provider that offers support for Cluster API.

Link to the Cluster API blog

Implementing CA in Kubernetes

Now that you know what Cluster API and CA are, it’s time to get down to business. Your first task will be to deploy a new Kubernetes cluster using Rancher.

Deploying a new Kubernetes cluster using Rancher

Begin by navigating to your Rancher installation. Once logged in, click on the hamburger menu located at the top left and select Cluster Management:

Rancher's main dashboard

On the next screen, click on Drivers:

**Cluster Management | Drivers**

Rancher uses cluster drivers to create Kubernetes clusters in hosted cloud providers.

For Linode LKE, you need to activate the specific driver, which is simple. Just select the driver and press the Activate button. Once the driver is downloaded and installed, the status will change to Active, and you can click on Clusters in the side menu:

Activate LKE driver

With the cluster driver enabled, it’s time to create a new Kubernetes deployment by selecting Clusters | Create:

**Clusters | Create**

Then select Linode LKE from the list of hosted Kubernetes providers:

Create LKE cluster

Next, you’ll need to enter some basic information, including a name for the cluster and the personal access token used to authenticate with the Linode API. When you’ve finished, click Proceed to Cluster Configuration to continue:

**Add Cluster** screen

If the connection to the Linode API is successful, you’ll be directed to the next screen, where you will need to choose a region, Kubernetes version and, optionally, a tag for the new cluster. Once you’re ready, press Proceed to Node pool selection:

Cluster configuration

This is the final screen before creating the LKE cluster. In it, you decide how many node pools you want to create. While there are no limitations on the number of node pools you can create, the implementation of Cluster Autoscaler for Linode does impose two restrictions, which are listed here:

  1. Each LKE Node Pool must host a single node (called Linode).
  2. Each Linode must be of the same type (e.g., 2GB, 4GB or 6GB).

For this tutorial, you will use two node pools, one hosting 2GB RAM nodes and one hosting 4GB RAM nodes. Configuring node pools is easy; select the type from the drop-down list and the desired number of nodes, and then click the Add Node Pool button. Once your configuration looks like the following image, press Create:

Node pool selection

You’ll be taken back to the Clusters screen, where you should wait for the new cluster to be provisioned. Behind the scenes, Rancher is leveraging the Cluster API to configure the LKE cluster according to your requirements:

Cluster provisioning

Once the cluster status shows as active, you can review the new cluster details by clicking the Explore button on the right:

Explore new cluster

At this point, you’ve deployed an LKE cluster using Rancher. In the next section, you’ll learn how to implement CA on it.

Setting up CA

If you’re new to Kubernetes, implementing CA can seem complex. For instance, the Cluster Autoscaler on AWS documentation talks about how to set permissions using Identity and Access Management (IAM) policies, OpenID Connect (OIDC) Federated Authentication and AWS security credentials. Meanwhile, the Cluster Autoscaler on Azure documentation focuses on how to implement CA in Azure Kubernetes Service (AKS), Autoscale VMAS instances and Autoscale VMSS instances, for which you will also need to spend time setting up the correct credentials for your user.

The objective of this tutorial is to leave aside the specifics associated with the authentication and authorization mechanisms of each cloud provider and focus on what really matters: how to implement CA in Kubernetes. To this end, you should focus your attention on these three key points:

  1. CA introduces the concept of node groups, also called autoscaling groups by some vendors. You can think of these groups as the node pools managed by CA. This concept is important, as CA gives you the flexibility to set node groups that scale automatically according to your instructions while excluding other node groups that you scale manually.
  2. CA adds or removes Kubernetes nodes following certain parameters that you configure. These parameters include the previously mentioned node groups, their minimum size, maximum size and more.
  3. CA runs as a Kubernetes deployment, in which secrets, services, namespaces, roles and role bindings are defined.

The supported versions of CA and Kubernetes may vary from one vendor to another. The way node groups are identified (using flags, labels, environment variables, etc.) and the permissions needed for the deployment to run may also vary. However, at the end of the day, all implementations revolve around the principles listed previously: autoscaling node groups, CA configuration parameters and CA deployment.

With that said, let’s get back to business. After pressing the Explore button, you should be directed to the Cluster Dashboard. For now, you’re only interested in looking at the nodes and the cluster’s capacity.

The next steps consist of defining node groups and carrying out the corresponding CA deployment. Start with the simplest step and, following best practices, create a namespace for the components that make up CA. To do this, go to Projects/Namespaces:

Create a new namespace

On the next screen, you can manage Rancher Projects and namespaces. Under Projects: System, click Create Namespace to create a new namespace as part of the System project:

**Cluster Dashboard | Namespaces**

Give the namespace a name and select Create. Once the namespace is created, click on the icon shown here (i.e., import YAML):

Import YAML

One of the many advantages of Rancher is that it allows you to perform countless tasks from the UI. One such task is to import local YAML files or create them on the fly and deploy them to your Kubernetes cluster.

To take advantage of this useful feature, copy the following code. Remember to replace <PERSONAL_ACCESS_TOKEN> with the Linode token that you created for the tutorial:

---
apiVersion: v1
kind: Secret
metadata:
  name: cluster-autoscaler-cloud-config
  namespace: autoscaler
type: Opaque
stringData:
  cloud-config: |-
    [global]
    linode-token=<PERSONAL_ACCESS_TOKEN>
    lke-cluster-id=88612
    defaut-min-size-per-linode-type=1
    defaut-max-size-per-linode-type=5
    do-not-import-pool-id=88541

    [nodegroup "g6-standard-1"]
    min-size=1
    max-size=4

    [nodegroup "g6-standard-2"]
    min-size=1
    max-size=2

Next, select the namespace you just created, paste the code in Rancher and select Import:

Paste YAML

A pop-up window will appear, confirming that the resource has been created. Press Close to continue:

Confirmation

The secret you just created is how Linode implements the node group configuration that CA will use. This configuration defines several parameters, including the following:

  • linode-token: This is the same personal access token that you used to register LKE in Rancher.
  • lke-cluster-id: This is the unique identifier of the LKE cluster that you created with Rancher. You can get this value from the Linode console or by running the command curl -H "Authorization: Bearer $TOKEN" https://api.linode.com/v4/lke/clusters, where $TOKEN is your Linode personal access token. In the output, the first field, id, is the identifier of the cluster.
  • defaut-min-size-per-linode-type: This is a global parameter that defines the minimum number of nodes in each node group.
  • defaut-max-size-per-linode-type: This is also a global parameter that sets a limit to the number of nodes that Cluster Autoscaler can add to each node group.
  • do-not-import-pool-id: On Linode, each node pool has a unique ID. This parameter is used to exclude specific node pools so that CA does not scale them.
  • nodegroup (min-size and max-size): This parameter sets the minimum and maximum limits for each node group. The CA for Linode implementation forces each node group to use the same node type. To get a list of available node types, you can run the command curl https://api.linode.com/v4/linode/types.

This tutorial defines two node groups, one using g6-standard-1 linodes (2GB nodes) and one using g6-standard-2 linodes (4GB nodes). For the first group, CA can increase the number of nodes up to a maximum of four, while for the second group, CA can only increase the number of nodes to two.

With the node group configuration ready, you can deploy CA to the respective namespace using Rancher. Paste the following code into Rancher (click on the import YAML icon as before):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: autoscaler
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources:
      - "namespaces"
      - "pods"
      - "services"
      - "replicationcontrollers"
      - "persistentvolumeclaims"
      - "persistentvolumes"
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create","list","watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: autoscaler

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: autoscaler

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8085'
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/autoscaling/cluster-autoscaler-amd64:v1.26.1
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=2
            - --cloud-provider=linode
            - --cloud-config=/config/cloud-config
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
            - name: cloud-config
              mountPath: /config
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
        - name: cloud-config
          secret:
            secretName: cluster-autoscaler-cloud-config

In this code, you’re defining some labels; the namespace where you will deploy the CA; and the respective ClusterRole, Role, ClusterRoleBinding, RoleBinding, ServiceAccount and Cluster Autoscaler.

The part that differs between cloud providers is near the end of the file, in the command section. Several flags are specified here. The most relevant include the following:

  • v, which sets the log verbosity level.
  • cloud-provider; in this case, Linode.
  • cloud-config, which points to a file that uses the secret you just created in the previous step.

Again, a cloud provider that uses a minimum number of flags was intentionally chosen. For a complete list of available flags and options, read the Cluster Autoscaler FAQ.

Once you apply the deployment, a pop-up window will appear, listing the resources created:

CA deployment

You’ve just implemented CA on Kubernetes, and now, it’s time to test it.

CA in action

To check to see if CA works as expected, deploy the following dummy workload in the default namespace using Rancher:

Sample workload

Here’s a review of the code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-workload
  labels:
    app: busybox
spec:
  replicas: 600
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        
        command: ['sh', '-c', 'echo Demo Workload ; sleep 600']

As you can see, it’s a simple workload that generates 600 busybox replicas.

If you navigate to the Cluster Dashboard, you’ll notice that the initial capacity of the LKE cluster is 220 pods. This means CA should kick in and add nodes to cope with this demand:

**Cluster Dashboard**

If you now click on Nodes (side menu), you will see how the node-creation process unfolds:

Nodes

New nodes

If you wait a couple of minutes and go back to the Cluster Dashboard, you’ll notice that CA did its job because, now, the cluster is serving all 600 replicas:

Cluster at capacity

This proves that scaling up works, but you also need to test scaling down. Go to Workload (side menu) and click on the hamburger menu corresponding to busybox-workload. From the drop-down list, select Delete:

Deleting workload

A pop-up window will appear; confirm that you want to delete the deployment to continue:

Deleting workload pop-up

By deleting the deployment, the expected result is that CA starts removing nodes. Check this by going back to Nodes:

Scaling down

Keep in mind that by default, CA will start removing nodes after 10 minutes. Meanwhile, you will see taints on the Nodes screen indicating the nodes that are candidates for deletion. For more information about this behavior and how to modify it, read “Does CA respect GracefulTermination in scale-down?” in the Cluster Autoscaler FAQ.
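Assuming the Linode implementation honors the upstream scale-down flags (worth verifying against the FAQ for your version), the 10-minute window could be tuned by extending the container command in the deployment manifest. A hedged sketch:

```yaml
# Illustrative: extra upstream Cluster Autoscaler flags for tuning scale-down.
# Values shown are examples; the upstream default for both is 10m.
command:
  - ./cluster-autoscaler
  - --v=2
  - --cloud-provider=linode
  - --cloud-config=/config/cloud-config
  - --scale-down-unneeded-time=5m    # how long a node must be unneeded before removal
  - --scale-down-delay-after-add=5m  # cooldown after a scale-up before scale-down resumes
```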

After 10 minutes have elapsed, the LKE cluster will return to its original state with one 2GB node and one 4GB node:

Downscaling completed

Optionally, you can confirm the status of the cluster by returning to the Cluster Dashboard:

**Cluster Dashboard**

And now you have verified that Cluster Autoscaler can scale nodes up and down as required.

CA, Rancher and managed Kubernetes services

At this point, the power of Cluster Autoscaler is clear. It lets you automatically adjust the number of nodes in your cluster based on demand, minimizing the need for manual intervention.

Since Rancher fully supports the Kubernetes Cluster Autoscaler API, you can leverage this feature on major service providers like AKS, Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS). Let’s look at one more example to illustrate this point.

Create a new workload like the one shown here:

New workload

It’s the same code used previously, only in this case, with 1,000 busybox replicas instead of 600. After a few minutes, the cluster capacity will be exceeded. This is because the configuration you set specifies a maximum of four 2GB nodes (first node group) and two 4GB nodes (second node group); that is, six nodes in total:

**Cluster Dashboard**

Head over to the Linode Dashboard and manually add a new node pool:

**Linode Dashboard**

Add new node

The new node will be displayed along with the rest on Rancher’s Nodes screen:

**Nodes**

Better yet, since the new node has the same capacity as the first node group (2GB), it will be deleted by CA once the workload is reduced.

In other words, regardless of the underlying infrastructure, Rancher makes use of CA to track nodes as they are created or destroyed dynamically in response to load.

Overall, Rancher’s ability to support Cluster Autoscaler out of the box is good news; it reaffirms Rancher as the ideal Kubernetes multi-cluster management tool regardless of which cloud provider your organization uses. Add to that Rancher’s seamless integration with other tools and technologies like Longhorn and Harvester, and the result will be a convenient centralized dashboard to manage your entire hyper-converged infrastructure.

Conclusion

This tutorial introduced you to Kubernetes Cluster Autoscaler and how it differs from other types of autoscaling, such as Vertical Pod Autoscaler (VPA) and Horizontal Pod Autoscaler (HPA). In addition, you learned how to implement CA on Kubernetes and how it can scale up and down your cluster size.

Finally, you also got a brief glimpse of Rancher’s potential to manage Kubernetes clusters from the convenience of its intuitive UI. Rancher is part of the rich ecosystem of SUSE, the leading open Kubernetes management platform. To learn more about other solutions developed by SUSE, such as Edge 2.0 or NeuVector, visit their website.

Driving Innovation with Extensible Interoperability in Rancher’s Spring ’23 Release

Tuesday, 18 April, 2023

We’re on a mission to build the industry’s most open, secure and interoperable Kubernetes management platform. Over the past few months, the team has made significant advancements across the entire Rancher portfolio that we are excited to share today with our community and customers.

Introducing the Rancher UI Extensions Framework

In November last year, we announced the release of v2.7.0, where we took our first steps into developing Rancher into a truly interoperable, extensible platform with the introduction of our extensions catalog. With the release of Rancher v2.7.2 today, we’re proud to announce that we’ve expanded our extensibility capabilities by releasing our ‘User Interface (UI) Extensions Framework.’

Users can now customize their Kubernetes experience. They can build on their Rancher platform and manage their clusters using custom, peer-developed or Rancher-built extensions.

Image 1: Installation of Kubewarden Extension

Image 1: Installation of Kubewarden Extension

Supporting this, we’ve also now made three Rancher-built extensions available, including:

  1. Kubewarden Extension delivers a comprehensive method to manage the lifecycle of Kubernetes policies across Rancher clusters.
  2. Elemental Extension provides operators with the ability to manage their cloud native OS and Edge devices from within the Rancher console.
  3. Harvester Extension helps operators load their virtualized Harvester cluster into Rancher to aid in management.

Building a rich community remains our priority as we develop Rancher. That’s why as part of this release, the new UI Extensions Framework has also been implemented into the SUSE One Partner Program. Technology partners are key to our thriving ecosystem and by validating and supporting extensions, we’re eager to see the innovation from our partner community.

You can learn more about the new Rancher UI Extension Framework in this blog by Neil MacDougall, Director of UI/UX. Make sure to join him and our community team at our next Global Online Meetup as he deep dives into the UI Framework.

Adding more value to Rancher Prime and helping customers elevate their performance

In December 2022, we announced the launch of Rancher Prime – our new enterprise subscription, where we introduced the option to deploy Rancher from a trusted private registry. Today we announce the new components we’ve added to the subscription to help our customers improve their time-to-value across their teams, including:

  1. SLA-backed enterprise support for Policy and OS Management via the Kubewarden and Elemental Extensions
  2. The launch of the Rancher Prime Knowledgebase in our SUSE Collective customer loyalty program


Image 2: Rancher Prime Knowledgebase in SUSE Collective

We’ve added these elements to help our customers improve their resiliency and performance across their enterprise-grade container workloads. Read this blog from Utsav Sanghani, Director of Product – Enterprise Container Management, for a detailed overview of the upgrades we made in Rancher and Rancher Prime and the value it derives for customers.

Empowering a community of Kubernetes innovators

Image 3: Rancher Academy Courses

Our community team also launched our free online education platform, Rancher Academy, at KubeCon Europe 2023. The cloud native community can now access expert-led courses on demand, covering important topics including fundamentals in containers, Kubernetes and Rancher to help accelerate their Kubernetes journey. Check out this blog from Tom Callway, Vice President of Product Marketing, as he shares in detail the launch and future outlook for Rancher Academy.

Achieving milestones across our open source projects

Finally, we’ve also made milestones across our innovation projects, including these updates:

Rancher Desktop 1.8 now includes configurable application behaviors such as auto-start at login. All application settings are configurable from the command line and experimental settings give access to Apple’s Virtualization framework on macOS Ventura.

Kubewarden 1.6.0 now allows DevSecOps teams to write Policy as Code using both traditional programming languages and domain-specific languages.

Opni 0.9 has several observability feature updates as it approaches its planned GA later in the year.

S3GW (S3 Gateway) 0.14.0 has new features such as lifecycle management, object locking and holds and UI improvements.

Epinio 1.7 now has a UI with Dex integration, the identity service that uses OpenID Connect to drive authentication for other apps, and SUSE’s S3GW.

Keep up to date with all our product release cadences on GitHub, or connect with your peers and us via Slack.

Utilizing the New Rancher UI Extensions Framework

Tuesday, 18 April, 2023

What are Rancher Extensions?

The Rancher by SUSE team wants to accelerate the pace of development and open Rancher to partners, customers, developers and users, enabling them to build on top of it to extend its functionality and further integrate it into their environments.

With Rancher Extensions, you can develop your own extensions to the Rancher UI, completely independently of Rancher. The source code lives in your own repository. You develop, build and release it whenever you like. You can add your extension to Rancher at any time. Extensions are versioned by you and have their own independent release cycle.

Think Chrome browser extensions – but for Rancher.

Could this be the best innovation in Rancher for some time? It might just be!

What can you do?

Rancher defines several extension points which developers can take advantage of to provide extra functionality, for example:

  1. Add new UI screens to the top-level side navigation
  2. Add new UI screens to the navigation of the Cluster Explorer UI
  3. Add new UI for Kubernetes CRDs
  4. Extend existing views in Rancher Manager by adding panels, tabs and actions
  5. Customize the landing page

We’ll be adding more extension points over time.

Our goal is to enable deep integrations into Rancher. We know how important graphical user interfaces are to users, especially in helping users of all abilities to understand and manage complex technologies like Kubernetes. Being able to bring together data from different systems and visualize them within a single-pane-of-glass experience is extremely powerful for users.

With extensions, if you have a system that provides monitoring metadata, for example, we want to enable you to see that data in the context of where it is relevant – if you’re looking at a Kubernetes Pod, for example, we want you to be able to augment Rancher’s Pod view so you can see that data right alongside the Pod information.

Extensions, Extensions, Extensions

The Rancher by SUSE team is using the Extensions mechanism to develop and deliver our own additions to Rancher – initially with extensions for Kubewarden and Elemental. We also use Extensions for our Harvester integration. Over time we’ll be adding more.

Over the coming releases, we will be refactoring the Rancher UI itself to use the extensions mechanism. We plan to build out and use the very same extension mechanism and APIs internally as externally developed extensions will use. This will help ensure those extension points deliver on the needs of developers and are fully supported and maintained.

Elemental

Elemental is a software stack enabling centralized, full cloud native OS management with Kubernetes.

With the Elemental extension for Rancher, we add UI capability for Elemental right into the Rancher user interface.

Image 1: Elemental Extension

The Elemental extension is an example of an extension that provides a new top-level “product” experience. It adds a new “OS Management” navigation item to the top-level navigation menu which leads to a new experience for managing Elemental. It uses the Rancher component library to ensure a consistent look and feel. Learn more here or visit the Elemental Extension GitHub repository.

Kubewarden

Kubewarden is a policy engine for Kubernetes. Its mission is to simplify the adoption of policy-as-code.

The Kubewarden extension for Rancher makes it easy to install Kubewarden into a downstream cluster and manage Kubewarden and its policies right from within the Rancher Cluster Explorer user interface.

Image 2: Kubewarden Extension

The Kubewarden extension is a great example of an extension that adds to the Cluster Explorer experience. It also showcases how extensions can assist in simplifying the installation of additional components that are required to enable a feature in a cluster.

Unlike Helm charts, extensions have no parameters at install time – there’s nothing to configure – we want extensions to be super simple to install. Learn more here or visit the Kubewarden Extension GitHub repository.

Harvester

The Harvester integration into Rancher also leverages the UI Extensions framework. This enables the management of Harvester clusters right from within Rancher.

Because of the de-coupling that UI Extensions enables, the Harvester UI can be updated completely independently of Rancher. Learn more here or visit the Harvester UI GitHub repository.

Under the Hood

The diagram below shows a high-level overview of Rancher Extensions.

A lot of effort has gone into refactoring Rancher to modularize it and establish the API for extensions.

The end goal is to slim down the core of the Rancher Manager UI into a “Shell” into which Extensions are loaded. The functionality that is included by default will be split out into several “Built-in” extensions.

Image 3: Architecture for Rancher UI 

We are also in the process of splitting out and documenting our component library, so others can leverage it in their extensions to ensure a common look and feel.

A Rancher Extension is a packaged Vue library that provides functionality to extend and enhance the Rancher Manager UI. You’ll need to have some familiarity with Vue to build an extension, but anyone familiar with React or Angular should find it easy to get started.
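
As a sketch of what an extension's entry point can look like, the main file typically default-exports a function that receives a plugin object from the shell and registers the extension's metadata and products. The `IPlugin` interface below is a minimal local stand-in (in a real extension it comes from the Rancher shell packages), so the field names here are assumptions, not the authoritative API:

```typescript
// Minimal stand-in for the shell's plugin interface; in a real extension this
// is imported from the Rancher shell packages, and the fields below are assumptions.
interface IPlugin {
  metadata?: Record<string, unknown>;
  addProduct(product: { init: (plugin: IPlugin) => void }): void;
}

// The "product" contributes a top-level navigation experience; the shell calls init().
const myProduct = {
  init(_plugin: IPlugin): void {
    // In a real extension this would use the shell's DSL to register
    // routes, pages and side-navigation for the new product.
  },
};

// Entry point: the shell imports and invokes this when loading the extension.
export default function register(plugin: IPlugin): void {
  // Real extensions read this from their package.json.
  plugin.metadata = { name: 'my-extension', version: '0.1.0' };
  plugin.addProduct(myProduct);
}
```

In a real extension the entry file also registers Vue components and model classes via the shell's auto-import helpers; see the Rancher extensions developer documentation for the authoritative API.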

Once an extension has been authored, it can be packaged up into a simple Helm chart, added to a Helm repository, and then easily installed into a running Rancher system.

Extensions are installed and managed from the new “Extensions” UI available from the Rancher slide-in menu:

Image 4: Rancher Extensions Menu

Rancher shows all installed extensions as well as those available from the added Helm repositories. Extensions can be upgraded, rolled back and uninstalled. Developers can also enable loading extensions during development, without needing to build and publish them to a Helm repository.

Developers

To help developers get started with Rancher extensions, we’ve published developer documentation, and we’re building out a set of example extensions.

Over time, we will be enhancing and simplifying some of our APIs, extending the documentation, and adding even more examples to help developers get started.

We have also set up a Slack channel exclusively for extensions – check out the #extensions channel on the Rancher User’s Slack.

Join the Party

We’re only just getting started with Rancher Extensions. We introduced them in Rancher 2.7. You can use them today and get started developing your own!

We want to encourage as many users, developers, customers and partners out there as possible to take a look and give them a spin. Join me on the 3rd of May at 11 am US EST where I’ll be going through the Extension Framework live as part of the Rancher Global Online Meetup – you can sign up here.

As we look ahead, we’ll be augmenting the Rancher extensions repository with a partner repository and a community repository to make it easier to discover extensions. Reach out to us via Slack if you have an extension you’d like included in these repositories.

Fasten your seat belts. This is just the beginning. We can’t wait to see what others do with Rancher Extensions!

Driving Innovation with Extensible Interoperability in Rancher’s Spring ’23 Release

Tuesday, 18 April, 2023

We’re on a mission to build the industry’s most open, secure and interoperable Kubernetes management platform. Over the past few months, the team has made significant advancements across the entire Rancher portfolio that we are excited to share today with our community and customers.

Introducing the Rancher UI Extensions Framework

In November last year, we announced the release of v2.7.0, where we took our first steps into developing Rancher into a truly interoperable, extensible platform with the introduction of our extensions catalog. With the release of Rancher v2.7.2 today, we’re proud to announce that we’ve expanded our extensibility capabilities by releasing our ‘User Interface (UI) Extensions Framework.’

Users can now customize their Kubernetes experience. They can build on their Rancher platform and manage their clusters using custom, peer-developed or Rancher-built extensions.

Image 1: Installation of Kubewarden Extension

Supporting this, we’ve also now made three Rancher-built extensions available, including:

  1. Kubewarden Extension delivers a comprehensive method to manage the lifecycle of Kubernetes policies across Rancher clusters.
  2. Elemental Extension provides operators with the ability to manage their cloud native OS and Edge devices from within the Rancher console.
  3. Harvester Extension helps operators load their virtualized Harvester clusters into Rancher for easier management.

Building a rich community remains our priority as we develop Rancher. That’s why as part of this release, the new UI Extensions Framework has also been implemented into the SUSE One Partner Program. Technology partners are key to our thriving ecosystem and by validating and supporting extensions, we’re eager to see the innovation from our partner community.

You can learn more about the new Rancher UI Extension Framework in this blog by Neil MacDougall, Director of UI/UX. Make sure to join him and our community team at our next Global Online Meetup as he deep dives into the UI Framework.

Adding more value to Rancher Prime and helping customers elevate their performance

In December 2022, we announced the launch of Rancher Prime – our new enterprise subscription, where we introduced the option to deploy Rancher from a trusted private registry. Today we announce the new components we’ve added to the subscription to help our customers improve their time-to-value across their teams, including:

  1. SLA-backed enterprise support for Policy and OS Management via the Kubewarden and Elemental Extensions
  2. The launch of the Rancher Prime Knowledgebase in our SUSE Collective customer loyalty program

Image 2: Rancher Prime Knowledgebase in SUSE Collective

We’ve added these elements to help our customers improve their resiliency and performance across their enterprise-grade container workloads. Read this blog from Utsav Sanghani, Director of Product – Enterprise Container Management, for a detailed overview of the upgrades we made to Rancher and Rancher Prime and the value they deliver to customers.

Empowering a community of Kubernetes innovators

Image 3: Rancher Academy Courses

Our community team also launched our free online education platform, Rancher Academy, at KubeCon Europe 2023. The cloud native community can now access expert-led courses on demand, covering important topics including fundamentals in containers, Kubernetes and Rancher to help accelerate their Kubernetes journey. Check out this blog from Tom Callway, Vice President of Product Marketing, as he shares in detail the launch and future outlook for Rancher Academy.

Achieving milestones across our open source projects

Finally, we’ve also made milestones across our innovation projects, including these updates:

Rancher Desktop 1.8 now includes configurable application behaviors such as auto-start at login. All application settings are configurable from the command line and experimental settings give access to Apple’s Virtualization framework on macOS Ventura.

Kubewarden 1.6.0 now allows DevSecOps teams to write Policy as Code using both traditional programming languages and domain-specific languages.

Opni 0.9 has several observability feature updates as it approaches its planned GA later in the year.

S3GW (S3 Gateway) 0.14.0 has new features such as lifecycle management, object locking and holds and UI improvements.

Epinio 1.7 now has a UI with Dex integration, the identity service that uses OpenID Connect to drive authentication for other apps, and SUSE’s S3GW.

Keep up to date with all our product release cadences on GitHub, or connect with your peers and us via Slack.