Harvester: Intro and Setup    

Tuesday, 17 August, 2021
I mentioned about a month ago that I was using Harvester in my home lab. I didn't go into much detail then, so this post will go deeper. We will cover what Harvester does, the hardware I'm using, installation, initial setup and how to deploy your first virtual machine. Let's get started.

What is Harvester?

Harvester is Rancher's open source take on a hyperconverged infrastructure platform. Like most things Rancher is involved with, it is built on Kubernetes using tools like KubeVirt and Longhorn. KubeVirt is an exciting project that leverages KVM and libvirt to run virtual machines inside Kubernetes, which lets you run both containers and VMs in your cluster while reducing operational overhead and keeping things consistent. This combination of tried and tested technologies provides an open source alternative in a space traditionally dominated by proprietary products.

It is also designed to be used with bare metal, making it an excellent option for a home lab.

Hardware

If you check the hardware requirements, you will notice they focus more on business usage. In my experience so far, you want at least a 4-core/8-thread CPU, 16GB of RAM and a large SSD, preferably an NVMe drive. Anything less doesn't leave enough capacity for running many containers or VMs. I will install it on an Intel NUC 8i5BEK with an Intel Core i5-8259U, 32GB of RAM and a 512GB NVMe drive, which handles Harvester without any issues. Of course, this is just my experience; yours may differ.

Installation

Harvester ships as an ISO, which you can download from the GitHub releases page. You can pull it quickly using wget:

$ wget https://releases.rancher.com/harvester/v0.2.0/harvester-amd64.iso

Once you have it downloaded, you will need to create a bootable USB drive. I typically use Balena Etcher since it is cross-platform and intuitive. Once you have a bootable USB drive, place it in the machine you want to use and boot from it. This screen should greet you:
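If you prefer the command line to Etcher, dd works just as well. The device name below is a placeholder; double-check which device is your USB drive before running this, because it will overwrite the target.

# Write the ISO to the USB drive; replace /dev/sdX with your actual USB device
$ sudo dd if=harvester-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync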

Select “New Cluster”:

Select the drive you want to use.

Enter your hostname, select your network interface and make sure automatic DHCP is selected.

You will then be prompted to enter your cluster token. This can be any phrase you want; I recommend using your password manager to generate one.
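If you would rather generate the token from a terminal instead of a password manager, openssl works just as well:

# Generate a random, base64-encoded string to use as the cluster token
$ openssl rand -base64 32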

Set a password to use, and remember that the default user name is rancher.

The next several options are useful, especially if you want to leverage the SSH keys you already use with GitHub. Since this is a home lab, I left the SSH keys, proxy and cloud-init setup blank; in an enterprise environment, these would be really useful. Now you will see the final screen before installation. Verify that everything is configured the way you want before proceeding.

If it all looks great, proceed with the installation. It will take a few minutes to complete; when it does, you will need to reboot.

After the reboot, the system will start up, and you will see a screen showing the URL for Harvester and the system's status. Wait until it reports that Harvester is ready before trying to connect.

Great! It is now reporting that it is up and running, so it’s now time to set up Harvester.

Initial Setup

Once the OS boots, we can navigate to the URL it lists. Mine is https://harvest:30443. Harvester uses a self-signed certificate by default, so you will see a warning in your browser; click "Advanced" and accept the certificate to proceed. Then set a password for the default admin account.

Now you should see the dashboard and the health of the system.

I like to disable the default account and add my own account for authentication. This probably isn't necessary for a home lab, but it's a good habit to get into. First, navigate to the user management screen.

Now log out and back in with your new account. Once that’s finished, we can create our first VM.

Deploying Your First VM

Harvester has native support for qcow2 images and can import them from a URL. Let's grab the URL for the openSUSE Leap 15.3 JeOS image:

https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2

The JeOS image for openSUSE is roughly 225MB, which is a perfect size for downloading and creating VMs quickly. Let's create the image in Harvester.

Create a new image, and add the URL above as the image URL.

You should now see it listed.
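Behind the scenes, the image is stored as a custom resource, so it can also be created declaratively with kubectl. The snippet below is only a sketch: it assumes the harvesterhci.io/v1beta1 API group and field names used by current Harvester releases, which may differ from the early version shown in this post, so treat it as illustrative rather than authoritative.

# Sketch only: assumes the harvesterhci.io/v1beta1 API group of current Harvester releases
apiVersion: harvesterhci.io/v1beta1
kind: VirtualMachineImage
metadata:
  name: opensuse-leap-15-3-jeos
  namespace: default
spec:
  displayName: openSUSE Leap 15.3 JeOS
  sourceType: download
  url: https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2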

Now we can create a VM using that image. Navigate to the VM screen.

Once we’ve made our way to the VM screen, we’ll create a new VM.

When that is complete, the VM will show up in the list. Wait until it has started, then you can begin using it.

Wrapping Up

In this article, I wanted to show you how to set up VMs with Harvester, even starting from scratch! There are plenty of features to explore and plenty more on the roadmap. This project is still early in its life, so now is a great time to jump in and get involved with its direction.

Hyperconverged Infrastructure and Harvester

Monday, 2 August, 2021

Virtual machines (VMs) have transformed infrastructure deployment and management. VMs are so ubiquitous that I can’t think of a single instance where I deployed production code to a bare metal server in my many years as a professional software engineer.

VMs provide secure, isolated environments hosting your choice of operating system while sharing the resources of the underlying server. This allows resources to be allocated more efficiently, reducing the cost of over-provisioned hardware.

Given the power and flexibility provided by VMs, it is common to find many VMs deployed across many servers. However, managing VMs at this scale introduces challenges.

Managing VMs at Scale

Hypervisors provide comprehensive management of the VMs on a single server. The ability to create new VMs, start and stop them, clone them, and back them up are exposed through simple management consoles or command-line interfaces (CLIs).

But what happens when you need to manage two servers instead of one? Suddenly you find yourself having first to gain access to the appropriate server to interact with the hypervisor. You’ll also quickly find that you want to move VMs from one server to another, which means you’ll need to orchestrate a sequence of shutdown, backup, file copy, restore and boot operations.

Routine tasks performed on one server become just that little bit more difficult with two, and quickly become overwhelming with 10, 100 or 1,000 servers.

Clearly, administrators need a better way to manage VMs at scale.

Hyperconverged Infrastructure

This is where Hyperconverged Infrastructure (HCI) comes in. HCI is a marketing term rather than a strict definition. Still, it is typically used to describe a software layer that abstracts the compute, storage and network resources of multiple (often commodity or whitebox) servers to present a unified view of the underlying infrastructure. By building on top of the virtualization functionality included in all major operating systems, HCI allows many systems to be managed as a single, shared resource.

With HCI, administrators no longer need to think in terms of VMs running on individual servers. New hardware can be added and removed as needed. VMs can be provisioned wherever there is appropriate capacity, and operations that span servers, such as moving VMs, are as routine with 2 servers as they are with 100.

Harvester

Harvester, created by Rancher, is open source HCI software built using Kubernetes.

While Kubernetes has become the de facto standard for container orchestration, it may seem like an odd choice as the foundation for managing VMs. However, when you think of Kubernetes as an extensible orchestration platform, this choice makes sense.

Kubernetes provides authentication, authorization, high availability, fault tolerance, CLIs, software development kits (SDKs), application programming interfaces (APIs), declarative state, node management, and flexible resource definitions. All of these features have been battle tested over the years with many large-scale clusters.

More importantly, Kubernetes orchestrates many kinds of resources beyond containers. Thanks to the use of custom resource definitions (CRDs), and custom operators, Kubernetes can describe and provision any kind of resource.

By building on Kubernetes, Harvester takes advantage of a well tested and actively developed platform. With the use of KubeVirt and Longhorn, Harvester extends Kubernetes to allow the management of bare metal servers and VMs.
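To make the custom resource idea concrete, KubeVirt models a virtual machine as a Kubernetes resource. The minimal sketch below uses KubeVirt's public cirros demo container disk, and the VM name is a placeholder; Harvester generates the equivalent resources for you, so you never have to write this by hand.

# Minimal sketch of a KubeVirt VirtualMachine custom resource; names are placeholders
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo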

Harvester is not the first time VM management has been built on top of Kubernetes; Rancher’s own RancherVM is one such example. But these solutions have not been as popular as hoped:

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators.

To address this, Harvester does not expose the underlying Kubernetes platform to the end user. Instead, it presents more familiar concepts like VMs, NICs, ISO images and disk volumes. This allows Harvester to take advantage of Kubernetes while giving administrators a more traditional view of their infrastructure.

Managing VMs at Scale

The fusion of Kubernetes and VMs provides the ability to perform common tasks such as VM creation, backups, restores, migrations, SSH key injection and more across multiple servers from one centralized administration console.

Consolidating virtualized resources like CPU, memory, network, and storage allows for greater resource utilization and simplified administration, allowing Harvester to satisfy the core premise of HCI.

Conclusion

HCI abstracts the resources exposed by many individual servers to provide administrators with a unified and seamless management interface, providing a single point to perform common tasks like VM provisioning, moving, cloning, and backups.

Harvester is an HCI solution leveraging popular open source projects like Kubernetes, KubeVirt, and Longhorn, but with the explicit goal of not exposing Kubernetes to the end user.

The end result is an HCI solution built on the best open source platforms available while still providing administrators with a familiar view of their infrastructure.

Download Harvester from the project website and learn more from the project documentation.

Meet the Harvester developer team! Join our free Summer is Open session on Harvester: Tuesday, July 27 at 12pm PT and on demand. Get details about the project, watch a demo, ask questions and get a challenge to complete offline.


Announcing Harvester Beta Availability

Friday, 28 May, 2021

It has been five months since we announced project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Since then, we’ve received a lot of feedback from the early adopters. This feedback has encouraged us and helped in shaping Harvester’s roadmap. Today, I am excited to announce the Harvester v0.2.0 release, along with the Beta availability of the project!

Let’s take a look at what’s new in Harvester v0.2.0.

Raw Block Device Support

We've added raw block device support in v0.2.0. Since the change is mostly under the hood, it might not be immediately obvious to end users, so let me explain in more detail.

In Harvester v0.1.0, the image-to-VM flow worked like this:

  1. Users added a new VM image.

  2. Harvester downloaded the image into the built-in MinIO object store.

  3. Users created a new VM using the image.

  4. Harvester created a new volume, and copied the image from the MinIO object store.

  5. The image was presented to the VM as a block device, but it was stored as a file in the volume created by Harvester.

This approach had a few issues:

  1. Read/write operations on the VM volume had to be translated into reads/writes of the image file, which performed worse than accessing a raw block device due to the overhead of the filesystem layer.

  2. If one VM image was used by multiple VMs, it was replicated many times in the cluster, because each VM had its own copy of the volume even though most of the content was identical, coming from the same image.

  3. Depending on MinIO to store the images meant Harvester had to keep MinIO highly available and expandable, which placed an extra burden on the Harvester management plane.

In v0.2.0, we took a different approach to tackle the problem, resulting in a simpler solution with better performance and less duplicated data:

  1. Instead of an image file on a filesystem, we now present the VM with a raw block device, which allows for better performance.

  2. We've taken advantage of a new feature called Backing Image in Longhorn v1.1.1 to reduce unnecessary copies of the VM image. The VM image is now served as a single read-only layer shared by all the VMs using it, and Longhorn creates a copy-on-write (COW) layer on top of the image for each VM to use (see the sketch after this list).

  3. Since Longhorn now manages the VM image through the Backing Image feature, the dependency on MinIO can be removed.
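For reference, a Backing Image is itself a Longhorn custom resource. The sketch below uses the field names from the current longhorn.io/v1beta2 schema, which is an assumption on my part; early Longhorn releases used a different spec, so check the Longhorn documentation for your version.

# Rough sketch of a Longhorn Backing Image; longhorn.io/v1beta2 schema assumed
apiVersion: longhorn.io/v1beta2
kind: BackingImage
metadata:
  name: opensuse-leap-15-3-jeos
  namespace: longhorn-system
spec:
  sourceType: download               # download the image from a URL
  sourceParameters:
    url: https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2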

Image 02
A comprehensive view of images in Harvester

From a user experience perspective, you may notice that importing an image is now instantaneous, while starting the first VM from a new image takes a bit longer because Longhorn downloads the image at that point. Any subsequent VMs using the same image will boot significantly faster than in the previous v0.1.0 release, and disk I/O performance will be better as well.

VM Live Migration Support

In preparation for the future upgrade process, VM live migration is now supported in Harvester v0.2.0.

VM live migration allows a VM to migrate from one node to another, without any downtime. It’s mostly used when you want to perform maintenance work on one of the nodes or want to balance the workload across the nodes.

One thing worth noting: because a VM's IP address can change after migration when it uses the default management network, we highly recommend using a VLAN network instead of the default management network. Otherwise, the VM might not keep the same IP after migrating to another node.

You can read more about live migration support here.
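Harvester drives migrations from its UI, but KubeVirt does the work underneath. Purely as an illustration of the mechanism (and assuming direct access to the underlying cluster, which Harvester normally hides from you), a live migration is represented by a resource like this, where demo-vm is a hypothetical VM name:

# Illustration only: KubeVirt represents a live migration as a custom resource
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-demo-vm
  namespace: default
spec:
  vmiName: demo-vm                   # hypothetical name of the running VM instance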

VM Backup Support

We’ve added VM backup support to Harvester v0.2.0.

The backup support provides a way for you to back up your VM images outside of the cluster.

To use the backup/restore feature, you need an S3-compatible endpoint or an NFS server; the backup destination is referred to as the backup target.

You can get more details on how to set up the backup target in Harvester here.

Image 03
Easily manage and operate your virtual machines in Harvester

In the meantime, we're also working on a snapshot feature for VMs. In contrast to the backup feature, snapshots will store the image state inside the cluster, giving VMs the ability to revert to a previous snapshot. Unlike a backup, no data is copied outside the cluster for a snapshot, making it a quick way to try something experimental, but not ideal for keeping data safe if the cluster goes down.

PXE Boot Installation Support

PXE boot installation is widely used in the data center to automatically provision bare-metal nodes with the desired operating system. We've added PXE boot installation in Harvester v0.2.0 to help users that have a large number of servers and want a fully automated installation process.

You can find more information regarding how to do the PXE boot installation in Harvester v0.2.0 here.

We’ve also provided a few examples of doing iPXE on public bare-metal cloud providers, including Equinix Metal. More information is available here.

Rancher Integration

Last but not least, Harvester v0.2.0 now ships with a built-in Rancher server for Kubernetes management.

This was one of the most requested features since we announced Harvester v0.1.0, and we’re very excited to deliver the first version of the Rancher integration in the v0.2.0 release.

For v0.2.0, you can use the built-in Rancher server to create Kubernetes clusters on top of your Harvester bare-metal clusters.

To start using the built-in Rancher in Harvester v0.2.0, go to Settings, then set the rancher-enabled option to true. Now you should be able to see a Rancher button on the top right corner of the UI. Clicking the button takes you to the Rancher UI.

Harvester and Rancher share the authentication process, so once you’re logged in to Harvester, you don’t need to redo the login process in Rancher and vice versa.

If you want to create a new Kubernetes cluster using Rancher, you can follow the steps here. As a reminder, VLAN networking needs to be enabled for creating Kubernetes clusters on top of Harvester, since the default management network cannot guarantee a stable IP for the VMs, especially after a reboot or migration.

What’s Next?

Now with v0.2.0 behind us, we’re working on the v0.3.0 release, which will be the last feature release before Harvester reaches GA.

We’re working on many things for v0.3.0 release. Here are some highlights:

  • Built-in load balancer
  • Rancher 2.6 integration
  • Replace K3OS with a small footprint OS designed for the container workload
  • Multi-tenant support
  • Multi-disk support
  • VM snapshot support
  • Terraform provider
  • Guest Kubernetes cluster CSI driver
  • Enhanced monitoring

You can get started today and give Harvester v0.2.0 a try via our website.

Let us know what you think via the Rancher User Slack #harvester channel. And start contributing by filing issues and feature requests via our GitHub page.

Enjoy Harvester!


Announcing Harvester: Open Source Hyperconverged Infrastructure (HCI) Software

Wednesday, 16 December, 2020

Today, I am excited to announce project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Harvester provides fully integrated virtualization and storage capabilities on bare-metal servers. No Kubernetes knowledge is required to use Harvester.

Why Harvester?

In the past few years, we’ve seen many attempts to bring VM management into container platforms, including our own RancherVM, and other solutions like KubeVirt and Virtlet. We’ve seen some demand for solutions like this, mostly for running legacy software side by side with containers. But in the end, none of these solutions have come close to the popularity of industry-standard virtualization products like vSphere and Nutanix.

We believe the reason for this lack of popularity is that all efforts to date to manage VMs in container platforms require users to have substantial knowledge of container platforms. Despite Kubernetes becoming an industry standard, knowledge of it is not widespread among VM administrators. They are familiar with concepts like ISO images, disk volumes, NICs and VLANs – not concepts like pods and PVCs.

Enter Harvester.

Project Harvester is an open source alternative to traditional proprietary hyperconverged infrastructure software. Harvester is built on top of cutting-edge open source technologies including Kubernetes, KubeVirt and Longhorn. We’ve designed Harvester to be easy to understand, install and operate. Users don’t need to understand anything about Kubernetes to use Harvester and enjoy all the benefits of Kubernetes.

Harvester v0.1.0

Harvester v0.1.0 has the following features:

Installation from ISO

You can download the ISO from the release page on GitHub and install it directly on bare-metal nodes. During the installation, you can choose to create a new cluster or add the current node to an existing cluster. Harvester will automatically create a cluster based on the information you provide.

Install as a Helm Chart on an Existing Kubernetes Cluster

For development purposes, you can install Harvester on an existing Kubernetes cluster. The nodes must be able to support KVM through either hardware virtualization (Intel VT-x or AMD-V) or nested virtualization.
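Before trying this, it is worth confirming that the nodes actually expose the virtualization extensions. The Helm commands below are only a sketch of the general shape of a chart install; the repository URL and chart name are placeholders I made up, so use the values from the Harvester documentation instead.

# A non-zero count means the CPU exposes VT-x/AMD-V to this node
$ egrep -c '(vmx|svm)' /proc/cpuinfo

# Placeholder repository URL and chart name; consult the Harvester docs for the real ones
$ helm repo add harvester https://charts.example.com/harvester
$ helm repo update
$ helm install harvester harvester/harvester --namespace harvester-system --create-namespace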

VM Lifecycle Management

Powered by KubeVirt, Harvester supports creating/deleting/updating operations for VMs, as well as SSH key injection and cloud-init.

Harvester also provides a graphical console and a serial port console for users to access the VM in the UI.

Storage Management

Harvester has a built-in, highly available block storage system powered by Longhorn. It uses the storage space on the nodes to provide highly available storage to the VMs inside the cluster.

Networking Management

Harvester provides several different options for networking.

By default, each VM inside Harvester will have a management NIC, powered by Kubernetes overlay networking.

Users can also add additional NICs to the VMs. Currently, VLAN is supported.

The multi-network functionality in Harvester is powered by Multus.
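For context, Multus attaches additional networks to a workload through NetworkAttachmentDefinition resources. The sketch below is illustrative only: the bridge name and VLAN ID are assumptions, and Harvester creates the equivalent resources for you when you configure a VLAN network in the UI.

# Illustrative only: a Multus NetworkAttachmentDefinition for a VLAN-tagged bridge network
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100                      # placeholder name
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "harvester-br0",
      "vlan": 100,
      "ipam": {}
    }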

Image Management

Harvester has a built-in image repository, allowing users to easily download/manage new images for the VMs inside the cluster.

The image repository is powered by MinIO.

Image 01

Install

To install Harvester, just load the Harvester ISO into your bare-metal machine and boot it up.

Image 02

For the first node where you install Harvester, select Create a new Harvester cluster.

Later, you will be prompted to enter the password that will be used to access the console on the host, as well as the "Cluster Token." The cluster token is needed later by any other nodes that want to join the same cluster.

Image 03

Then you will be prompted to choose the NIC that Harvester will use. The selected NIC will be used as the network for the management and storage traffic.

Image 04

Once everything has been configured, you will be prompted to confirm the installation of Harvester.

Image 05

Once installed, the host will reboot into the Harvester console.

Image 06

Later, when you are adding a node to the cluster, you will be prompted to enter the management address (shown above) as well as the cluster token you set when creating the cluster.

See here for a demo of the installation process.

Alternatively, you can install Harvester as a Helm chart on your existing Kubernetes cluster, if the nodes in your cluster have hardware virtualization support. See here for more details. And here is a demo using Digital Ocean which supports nested virtualization.

Usage

Once installed, you can use the management URL shown in the Harvester console to access the Harvester UI.

The default user name/password is documented here.

Image 07

Once logged in, you will see the dashboard.

Image 08

The first step to create a virtual machine is to import an image into Harvester.

Select the Images page, click the Create button and fill in the URL field; the image name will be filled in automatically for you.

Image 09

Then click Create to confirm.

You will see the real-time progress of creating the image on the Images page.

Image 10

Once the image has been created, you can start creating a VM using it.

Select the Virtual Machine page, and click Create.

Image 11

Fill in the parameters needed for creation, including volumes, networks, cloud-init, etc. Then click Create.

The VM will be created shortly.
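As an aside on the cloud-init field mentioned above: it accepts standard cloud-config YAML. Here is a small illustrative example; the user name, key and package are placeholders rather than anything Harvester requires.

#cloud-config
# Illustrative cloud-config user data: create a user and install a package
users:
  - name: demo                       # placeholder user
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...replace-with-your-key demo@example
packages:
  - htop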

Image 12

Once created, click the Console button to get access to the console of the VM.

Image 13

See here for a UI demo.

Current Status and Roadmap

Harvester is in the early stages. We’ve just released the v0.1.0 (alpha) release. Feel free to give it a try and let us know what you think.

We have the following items in our roadmap:

  1. Live migration support
  2. PXE support
  3. VM backup/restore
  4. Zero downtime upgrade

If you need any help with Harvester, please join us at either our Rancher forums or Slack, where our team hangs out.

If you have any feedback or questions, feel free to file an issue on our GitHub page.

Thank you and enjoy Harvester!

Getting Started with Cluster Autoscaling in Kubernetes

Tuesday, 12 September, 2023

Autoscaling the resources and services in your Kubernetes cluster is essential if your system is going to meet variable workloads. You can’t rely on manual scaling to help the cluster handle unexpected load changes.

While cluster autoscaling certainly allows for faster and more efficient deployment, the practice also reduces resource waste and helps decrease overall costs. When you can scale up or down quickly, your applications can be optimized for different workloads, making them more reliable. And a reliable system is always cheaper in the long run.

This tutorial introduces you to Kubernetes’s Cluster Autoscaler. You’ll learn how it differs from other types of autoscaling in Kubernetes, as well as how to implement Cluster Autoscaler using Rancher.

The differences between different types of Kubernetes autoscaling

By monitoring utilization and reacting to changes, Kubernetes autoscaling helps ensure that your applications and services are always running at their best. You can accomplish autoscaling through the use of a Vertical Pod Autoscaler (VPA), Horizontal Pod Autoscaler (HPA) or Cluster Autoscaler (CA).

VPA is a Kubernetes resource responsible for managing individual pods’ resource requests. It’s used to automatically adjust the resource requests and limits of individual pods, such as CPU and memory, to optimize resource utilization. VPA helps organizations maintain the performance of individual applications by scaling up or down based on usage patterns.

HPA is a Kubernetes resource that automatically scales the number of replicas of a particular application or service. HPA monitors the usage of the application or service and will scale the number of replicas up or down based on the usage levels. This helps organizations maintain the performance of their applications and services without the need for manual intervention.

CA is a Kubernetes resource used to automatically scale the number of nodes in the cluster based on the usage levels. This helps organizations maintain the performance of the cluster and optimize resource utilization.

The main difference between VPA, HPA and CA is that VPA and HPA are responsible for managing the resource requests of individual pods and services, while CA is responsible for managing the overall resources of the cluster. VPA and HPA are used to scale up or down based on the usage patterns of individual applications or services, while CA is used to scale the number of nodes in the cluster to maintain the performance of the overall cluster.
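For contrast with the CA deployment you will apply later, a pod-level autoscaler is defined in just a few lines. The manifest below is a minimal illustration of an HPA; the target Deployment name is a placeholder.

# Minimal HPA illustration; "nginx" is a placeholder Deployment name
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70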

Now that you understand how CA differs from VPA and HPA, you’re ready to begin implementing cluster autoscaling in Kubernetes.

Prerequisites

There are many ways to demonstrate how to implement CA. For instance, you could install Kubernetes on your local machine and set up everything manually using the kubectl command-line tool. Or you could set up a user with sufficient permissions on Amazon Web Services (AWS), Google Cloud Platform (GCP) or Azure to play with Kubernetes using your favorite managed cluster provider. Both options are valid; however, they involve a lot of configuration steps that can distract from the main topic: the Kubernetes Cluster Autoscaler.

An easier solution is one that allows the tutorial to focus on understanding the inner workings of CA and not on time-consuming platform configurations, which is what you’ll be learning about here. This solution involves only two requirements: a Linode account and Rancher.

For this tutorial, you'll need a running Rancher Manager server. Rancher is perfect for demonstrating how CA works, as it allows you to deploy and manage Kubernetes clusters on any provider conveniently from its powerful UI. Moreover, Rancher itself can be deployed on several providers.

If you are curious about a more advanced implementation, we suggest reading the Rancher documentation, which describes how to install Cluster Autoscaler on Rancher using Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling groups. However, please note that implementing CA is very similar on different platforms, as all solutions leverage the Kubernetes Cluster API for their purposes, something that will be addressed in more detail later.

What is Cluster API, and how does Kubernetes CA leverage it

Cluster API is an open source project for building and managing Kubernetes clusters. It provides a declarative API to define the desired state of Kubernetes clusters. In other words, Cluster API can be used to extend the Kubernetes API to manage clusters across various cloud providers, bare metal installations and virtual machines.

Kubernetes CA, in turn, leverages Cluster API to enable the automatic scaling of Kubernetes clusters in response to changing application demands. CA detects when the capacity of a cluster is insufficient to accommodate the current workload and requests additional nodes from the cloud provider. CA then provisions the new nodes using Cluster API and adds them to the cluster. In this way, CA ensures that the cluster has the capacity needed to serve its applications.
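To make the declarative idea concrete, the core of Cluster API is the Cluster resource, which delegates to provider-specific infrastructure and control plane resources. The following is a minimal sketch that uses the Cluster API Docker provider purely as an example; the names and referenced kinds are placeholders for whatever provider you actually use.

# Minimal Cluster API sketch; names and referenced kinds are illustrative placeholders
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.42.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: demo-cluster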

Because Rancher supports CA, and both RKE2 and K3s work with Cluster API, the combination offers an ideal solution for automated Kubernetes lifecycle management from a central dashboard. This is also true for any other cloud provider that offers support for Cluster API.


Implementing CA in Kubernetes

Now that you know what Cluster API and CA are, it’s time to get down to business. Your first task will be to deploy a new Kubernetes cluster using Rancher.

Deploying a new Kubernetes cluster using Rancher

Begin by navigating to your Rancher installation. Once logged in, click on the hamburger menu located at the top left and select Cluster Management:

Rancher's main dashboard

On the next screen, click on Drivers:

**Cluster Management | Drivers**

Rancher uses cluster drivers to create Kubernetes clusters in hosted cloud providers.

For Linode LKE, you need to activate the specific driver, which is simple: just select the driver and press the Activate button. Once the driver is downloaded and installed, the status will change to Active, and you can click on Clusters in the side menu:

Activate LKE driver

With the cluster driver enabled, it’s time to create a new Kubernetes deployment by selecting Clusters | Create:

**Clusters | Create**

Then select Linode LKE from the list of hosted Kubernetes providers:

Create LKE cluster

Next, you’ll need to enter some basic information, including a name for the cluster and the personal access token used to authenticate with the Linode API. When you’ve finished, click Proceed to Cluster Configuration to continue:

**Add Cluster** screen

If the connection to the Linode API is successful, you’ll be directed to the next screen, where you will need to choose a region, Kubernetes version and, optionally, a tag for the new cluster. Once you’re ready, press Proceed to Node pool selection:

Cluster configuration

This is the final screen before creating the LKE cluster. In it, you decide how many node pools you want to create. While there are no limitations on the number of node pools you can create, the implementation of Cluster Autoscaler for Linode does impose two restrictions, which are listed here:

  1. Each LKE Node Pool must host a single node (called Linode).
  2. Each Linode must be of the same type (e.g., 2GB, 4GB or 6GB).

For this tutorial, you will use two node pools, one hosting 2GB RAM nodes and one hosting 4GB RAM nodes. Configuring node pools is easy; select the type from the drop-down list and the desired number of nodes, and then click the Add Node Pool button. Once your configuration looks like the following image, press Create:

Node pool selection

You’ll be taken back to the Clusters screen, where you should wait for the new cluster to be provisioned. Behind the scenes, Rancher is leveraging the Cluster API to configure the LKE cluster according to your requirements:

Cluster provisioning

Once the cluster status shows as active, you can review the new cluster details by clicking the Explore button on the right:

Explore new cluster

At this point, you’ve deployed an LKE cluster using Rancher. In the next section, you’ll learn how to implement CA on it.

Setting up CA

If you’re new to Kubernetes, implementing CA can seem complex. For instance, the Cluster Autoscaler on AWS documentation talks about how to set permissions using Identity and Access Management (IAM) policies, OpenID Connect (OIDC) Federated Authentication and AWS security credentials. Meanwhile, the Cluster Autoscaler on Azure documentation focuses on how to implement CA in Azure Kubernetes Service (AKS), Autoscale VMAS instances and Autoscale VMSS instances, for which you will also need to spend time setting up the correct credentials for your user.

The objective of this tutorial is to leave aside the specifics associated with the authentication and authorization mechanisms of each cloud provider and focus on what really matters: How to implement CA in Kubernetes. To this end, you should focus your attention on these three key points:

  1. CA introduces the concept of node groups, also called by some vendors autoscaling groups. You can think of these groups as the node pools managed by CA. This concept is important, as CA gives you the flexibility to set node groups that scale automatically according to your instructions while simultaneously excluding other node groups for manual scaling.
  2. CA adds or removes Kubernetes nodes following certain parameters that you configure. These parameters include the previously mentioned node groups, their minimum size, maximum size and more.
  3. CA runs as a Kubernetes deployment, in which secrets, services, namespaces, roles and role bindings are defined.

The supported versions of CA and Kubernetes may vary from one vendor to another. The way node groups are identified (using flags, labels, environment variables, etc.) and the permissions needed for the deployment to run may also vary. However, at the end of the day, all implementations revolve around the principles listed previously: autoscaling node groups, CA configuration parameters and the CA deployment.

With that said, let’s get back to business. After pressing the Explore button, you should be directed to the Cluster Dashboard. For now, you’re only interested in looking at the nodes and the cluster’s capacity.

The next steps consist of defining node groups and carrying out the corresponding CA deployment. Start with the simplest step and follow best practice by creating a namespace for the components that make up CA. To do this, go to Projects/Namespaces:

Create a new namespace

On the next screen, you can manage Rancher projects and namespaces. Under Projects: System, click Create Namespace to create a new namespace as part of the System project:

**Cluster Dashboard | Namespaces**

Give the namespace a name and select Create. Once the namespace is created, click on the icon shown here (i.e., Import YAML):

Import YAML

One of the many advantages of Rancher is that it allows you to perform countless tasks from the UI. One such task is to import local YAML files or create them on the fly and deploy them to your Kubernetes cluster.

To take advantage of this useful feature, copy the following code. Remember to replace <PERSONAL_ACCESS_TOKEN> with the Linode token that you created for the tutorial:

---
apiVersion: v1
kind: Secret
metadata:
  name: cluster-autoscaler-cloud-config
  namespace: autoscaler
type: Opaque
stringData:
  cloud-config: |-
    [global]
    linode-token=<PERSONAL_ACCESS_TOKEN>
    lke-cluster-id=88612
    defaut-min-size-per-linode-type=1
    defaut-max-size-per-linode-type=5
    do-not-import-pool-id=88541

    [nodegroup "g6-standard-1"]
    min-size=1
    max-size=4

    [nodegroup "g6-standard-2"]
    min-size=1
    max-size=2

Next, select the namespace you just created, paste the code in Rancher and select Import:

Paste YAML

A pop-up window will appear, confirming that the resource has been created. Press Close to continue:

Confirmation

The secret you just created is how Linode implements the node group configuration that CA will use. This configuration defines several parameters, including the following:

  • linode-token: This is the same personal access token that you used to register LKE in Rancher.
  • lke-cluster-id: This is the unique identifier of the LKE cluster that you created with Rancher. You can get this value from the Linode console or by running the command curl -H "Authorization: Bearer $TOKEN" https://api.linode.com/v4/lke/clusters, where $TOKEN is your Linode personal access token. In the output, the first field, id, is the identifier of the cluster.
  • defaut-min-size-per-linode-type: This is a global parameter that defines the minimum number of nodes in each node group.
  • defaut-max-size-per-linode-type: This is also a global parameter that sets a limit to the number of nodes that Cluster Autoscaler can add to each node group.
  • do-not-import-pool-id: On Linode, each node pool has a unique ID. This parameter is used to exclude specific node pools so that CA does not scale them.
  • nodegroup (min-size and max-size): This parameter sets the minimum and maximum limits for each node group. The CA for Linode implementation forces each node group to use the same node type. To get a list of available node types, you can run the command curl https://api.linode.com/v4/linode/types.

This tutorial defines two node groups, one using g6-standard-1 linodes (2GB nodes) and one using g6-standard-2 linodes (4GB nodes). For the first group, CA can increase the number of nodes up to a maximum of four, while for the second group, CA can only increase the number of nodes to two.
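If you want to double-check these values from a terminal before creating the secret, the Linode API makes that easy. The commands below assume jq is installed and that $TOKEN holds your personal access token.

# List your LKE clusters and print each cluster's id and label
$ curl -s -H "Authorization: Bearer $TOKEN" https://api.linode.com/v4/lke/clusters | jq '.data[] | {id, label}'

# List the available Linode instance types (these are the nodegroup names)
$ curl -s https://api.linode.com/v4/linode/types | jq -r '.data[].id'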

With the node group configuration ready, you can deploy CA to the respective namespace using Rancher. Paste the following code into Rancher (click on the import YAML icon as before):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: autoscaler
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources:
      - "namespaces"
      - "pods"
      - "services"
      - "replicationcontrollers"
      - "persistentvolumeclaims"
      - "persistentvolumes"
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create","list","watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: autoscaler

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: autoscaler

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8085'
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/autoscaling/cluster-autoscaler-amd64:v1.26.1
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=2
            - --cloud-provider=linode
            - --cloud-config=/config/cloud-config
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
            - name: cloud-config
              mountPath: /config
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
        - name: cloud-config
          secret:
            secretName: cluster-autoscaler-cloud-config

In this code, you're defining some labels; the namespace where you will deploy the CA; and the respective ClusterRole, Role, ClusterRoleBinding, RoleBinding, ServiceAccount and the Cluster Autoscaler Deployment itself.

The difference between cloud providers shows up near the end of the file, in the command section, where several flags are specified. The most relevant include the following:

  • --v, which sets the log verbosity level.
  • --cloud-provider, which in this case is linode.
  • --cloud-config, which points to a file mounted from the secret you created in the previous step.

Again, a cloud provider that needs a minimal number of flags was chosen intentionally. For a complete list of available flags and options, read the Cluster Autoscaler FAQ.

Once you apply the deployment, a pop-up window will appear, listing the resources created:

CA deployment

You’ve just implemented CA on Kubernetes, and now, it’s time to test it.

CA in action

To check to see if CA works as expected, deploy the following dummy workload in the default namespace using Rancher:

Sample workload

Here’s a review of the code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-workload
  labels:
    app: busybox
spec:
  replicas: 600
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        
        command: ['sh', '-c', 'echo Demo Workload ; sleep 600']

As you can see, it’s a simple workload that generates 600 busybox replicas.

If you navigate to the Cluster Dashboard, you’ll notice that the initial capacity of the LKE cluster is 220 pods. This means CA should kick in and add nodes to cope with this demand:

**Cluster Dashboard**

If you now click on Nodes (side menu), you will see how the node-creation process unfolds:

Nodes

New nodes

If you wait a couple of minutes and go back to the Cluster Dashboard, you’ll notice that CA did its job because, now, the cluster is serving all 600 replicas:

Cluster at capacity

This proves that scaling up works, but you also need to test scaling down. Go to Workload (side menu) and click on the hamburger menu corresponding to busybox-workload. From the drop-down list, select Delete:

Deleting workload

A pop-up window will appear; confirm that you want to delete the deployment to continue:

Deleting workload pop-up

By deleting the deployment, the expected result is that CA starts removing nodes. Check this by going back to Nodes:

Scaling down

Keep in mind that by default, CA will start removing nodes after 10 minutes. Meanwhile, you will see taints on the Nodes screen indicating the nodes that are candidates for deletion. For more information about this behavior and how to modify it, read “Does CA respect GracefulTermination in scale-down?” in the Cluster Autoscaler FAQ.

After 10 minutes have elapsed, the LKE cluster will return to its original state with one 2GB node and one 4GB node:

Downscaling completed

Optionally, you can confirm the status of the cluster by returning to the Cluster Dashboard:

**Cluster Dashboard**

And now you have verified that Cluster Autoscaler can scale nodes up and down as required.

CA, Rancher and managed Kubernetes services

At this point, the power of Cluster Autoscaler is clear. It lets you automatically adjust the number of nodes in your cluster based on demand, minimizing the need for manual intervention.

Since Rancher fully supports the Kubernetes Cluster Autoscaler API, you can leverage this feature on major service providers like AKS, Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS). Let’s look at one more example to illustrate this point.

Create a new workload like the one shown here:

New workload

It’s the same code used previously, only in this case, with 1,000 busybox replicas instead of 600. After a few minutes, the cluster capacity will be exceeded. This is because the configuration you set specifies a maximum of four 2GB nodes (first node group) and two 4GB nodes (second node group); that is, six nodes in total:

**Cluster Dashboard**

Head over to the Linode Dashboard and manually add a new node pool:

**Linode Dashboard**

Add new node

The new node will be displayed along with the rest on Rancher’s Nodes screen:

**Nodes**

Better yet, since the new node has the same capacity as the first node group (2GB), it will be deleted by CA once the workload is reduced.

In other words, regardless of the underlying infrastructure, Rancher uses CA to track nodes as they are created or destroyed dynamically in response to load.

Overall, Rancher’s ability to support Cluster Autoscaler out of the box is good news; it reaffirms Rancher as the ideal Kubernetes multi-cluster management tool regardless of which cloud provider your organization uses. Add to that Rancher’s seamless integration with other tools and technologies like Longhorn and Harvester, and the result will be a convenient centralized dashboard to manage your entire hyper-converged infrastructure.

Conclusion

This tutorial introduced you to Kubernetes Cluster Autoscaler and how it differs from other types of autoscaling, such as Vertical Pod Autoscaler (VPA) and Horizontal Pod Autoscaler (HPA). In addition, you learned how to implement CA on Kubernetes and how it can scale up and down your cluster size.

Finally, you also got a brief glimpse of Rancher’s potential to manage Kubernetes clusters from the convenience of its intuitive UI. Rancher is part of the rich ecosystem of SUSE, the leading open Kubernetes management platform. To learn more about other solutions developed by SUSE, such as Edge 2.0 or NeuVector, visit their website.

Driving Innovation with Extensible Interoperability in Rancher’s Spring ’23 Release

Tuesday, 18 April, 2023

We’re on a mission to build the industry’s most open, secure and interoperable Kubernetes management platform. Over the past few months, the team has made significant advancements across the entire Rancher portfolio that we are excited to share today with our community and customers.

Introducing the Rancher UI Extensions Framework

In November last year, we announced the release of v2.7.0, where we took our first steps into developing Rancher into a truly interoperable, extensible platform with the introduction of our extensions catalog. With the release of Rancher v2.7.2 today, we’re proud to announce that we’ve expanded our extensibility capabilities by releasing our ‘User Interface (UI) Extensions Framework.’

Users can now customize their Kubernetes experience. They can build on their Rancher platform and manage their clusters using custom, peer-developed or Rancher-built extensions.

Image 1: Installation of Kubewarden Extension

Image 1: Installation of Kubewarden Extension

Supporting this, we’ve also now made three Rancher-built extensions available, including:

  1. Kubewarden Extension delivers a comprehensive method to manage the lifecycle of Kubernetes policies across Rancher clusters.
  2. Elemental Extension provides operators with the ability to manage their cloud native OS and Edge devices from within the Rancher console.
  3. Harvester Extension helps operators load their virtualized Harvester cluster into Rancher to aid in management.

Building a rich community remains our priority as we develop Rancher. That’s why as part of this release, the new UI Extensions Framework has also been implemented into the SUSE One Partner Program. Technology partners are key to our thriving ecosystem and by validating and supporting extensions, we’re eager to see the innovation from our partner community.

You can learn more about the new Rancher UI Extension Framework in this blog by Neil MacDougall, Director of UI/UX. Make sure to join him and our community team at our next Global Online Meetup as he deep dives into the UI Framework.

Adding more value to Rancher Prime and helping customers elevate their performance

In December 2022, we announced the launch of Rancher Prime – our new enterprise subscription, where we introduced the option to deploy Rancher from a trusted private registry. Today we announce the new components we’ve added to the subscription to help our customers improve their time-to-value across their teams, including:

  1. SLA-backed enterprise support for Policy and OS Management via the Kubewarden and Elemental Extensions
  2. The launch of the Rancher Prime Knowledgebase in our SUSE Collective customer loyalty program

 

Image 2: Rancher Prime Knowledgebase in SUSE Collective

We’ve added these elements to help our customers improve their resiliency and performance across their enterprise-grade container workloads. Read this blog from Utsav Sanghani, Director of Product – Enterprise Container Management, for a detailed overview of the upgrades we made in Rancher and Rancher Prime and the value it derives for customers.

Empowering a community of Kubernetes innovators

Image 3: Rancher Academy Courses

Our community team also launched our free online education platform, Rancher Academy, at KubeCon Europe 2023. The cloud native community can now access expert-led courses on demand, covering important topics including fundamentals in containers, Kubernetes and Rancher to help accelerate their Kubernetes journey. Check out this blog from Tom Callway, Vice President of Product Marketing, as he shares in detail the launch and future outlook for Rancher Academy.

Achieving milestones across our open source projects

Finally, we’ve also made milestones across our innovation projects, including these updates:

Rancher Desktop 1.8 now includes configurable application behaviors such as auto-start at login. All application settings are configurable from the command line and experimental settings give access to Apple’s Virtualization framework on macOS Ventura.

Kubewarden 1.6.0 now allows DevSecOps teams to write Policy as Code using both traditional programming languages and domain-specific languages.

Opni 0.9 has several observability feature updates as it approaches its planned GA later in the year.

S3GW (S3 Gateway) 0.14.0 has new features such as lifecycle management, object locking and holds and UI improvements.

Epinio 1.7 now has a UI with Dex integration, the identity service that uses OpenID Connect to drive authentication for other apps, and SUSE’s S3GW.

Keep up to date with all our product release cadences on GitHub, or connect with your peers and us via Slack.

Utilizing the New Rancher UI Extensions Framework

Tuesday, 18 April, 2023

What are Rancher Extensions?

The Rancher by SUSE team wants to accelerate the pace of development and open Rancher to partners, customers, developers and users, enabling them to build on top of it to extend its functionality and further integrate it into their environments.

With Rancher Extensions, you can develop your own extensions to the Rancher UI, completely independently of Rancher. The source code lives in your own repository. You develop, build and release it whenever you like. You can add your extension to Rancher at any time. Extensions are versioned by you and have their own independent release cycle.

Think Chrome browser extensions – but for Rancher.

Could this be the best innovation in Rancher for some time? It might just be!

What can you do?

Rancher defines several extension points which developers can take advantage of to provide extra functionality, for example:

  1. Add new UI screens to the top-level side navigation
  2. Add new UI screens to the navigation of the Cluster Explorer UI
  3. Add new UI for Kubernetes CRDs
  4. Extend existing views in Rancher Manager by adding panels, tabs and actions
  5. Customize the landing page

We’ll be adding more extension points over time.

Our goal is to enable deep integrations into Rancher. We know how important graphical user interfaces are to users, especially in helping users of all abilities to understand and manage complex technologies like Kubernetes. Being able to bring together data from different systems and visualize them within a single-pane-of-glass experience is extremely powerful for users.

With extensions, if you have a system that provides monitoring metadata, for example, we want to enable you to see that data in the context of where it is relevant – if you’re looking at a Kubernetes Pod, for example, we want you to be able to augment Rancher’s Pod view so you can see that data right alongside the Pod information.

Extensions, Extensions, Extensions

The Rancher by SUSE team is using the Extensions mechanism to develop and deliver our own additions to Rancher – initially with extensions for Kubewarden and Elemental. We also use Extensions for our Harvester integration. Over time, we'll be adding more.

Over the coming releases, we will be refactoring the Rancher UI itself to use the extensions mechanism. We plan to build out and use the very same extension mechanism and APIs internally as externally developed extensions will use. This will help ensure those extension points deliver on the needs of developers and are fully supported and maintained.

Elemental

Elemental is a software stack enabling centralized, full cloud native OS management with Kubernetes.

With the Elemental extension for Rancher, we add UI capability for Elemental right into the Rancher user interface.

Image 1: Elemental Extension

The Elemental extension is an example of an extension that provides a new top-level "product" experience. It adds a new "OS Management" item to the top-level navigation menu, which leads to a new experience for managing Elemental. It uses the Rancher component library to ensure a consistent look and feel. Learn more here or visit the Elemental Extension GitHub repository.

Kubewarden

Kubewarden is a policy engine for Kubernetes. Its mission is to simplify the adoption of policy-as-code.

The Kubewarden extension for Rancher makes it easy to install Kubewarden into a downstream cluster and manage Kubewarden and its policies right from within the Rancher Cluster Explorer user interface.

Image 2: Kubewarden Extension

The Kubewarden extension is a great example of an extension that adds to the Cluster Explorer experience. It also showcases how extensions can assist in simplifying the installation of additional components that are required to enable a feature in a cluster.

Unlike Helm charts, extensions have no parameters at install time – there’s nothing to configure – we want extensions to be super simple to install. Learn more here or visit the Kubewarden Extension GitHub repository.

Harvester

The Harvester integration into Rancher also leverages the UI Extensions framework. This enables the management of Harvester clusters right from within Rancher.

Because of the de-coupling that UI Extensions enables, the Harvester UI can be updated completely independently of Rancher. Learn more here or visit the Harvester UI GitHub repository.

Under the Hood

The diagram below shows a high-level overview of Rancher Extensions.

A lot of effort has gone into refactoring Rancher to modularize it and establish the API for extensions.

The end goal is to slim down the core of the Rancher Manager UI into a “Shell” into which Extensions are loaded. The functionality that is included by default will be split out into several “Built-in” extensions.

Image 3: Architecture for Rancher UI 

We are also in the process of splitting out and documenting our component library, so others can leverage it in their extensions to ensure a common look and feel.

A Rancher Extension is a packaged Vue library that provides functionality to extend and enhance the Rancher Manager UI. You’ll need to have some familiarity with Vue to build an extension, but anyone familiar with React or Angular should find it easy to get started.

Once an extension has been authored, it can be packaged up into a simple Helm chart, added to a Helm repository, and then easily installed into a running Rancher system.
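
In practice, that packaging step looks like a standard Helm chart publishing workflow. The commands below are only a minimal sketch; the chart path and repository URL are placeholders rather than anything Rancher prescribes:

$ helm package ./charts/my-extension                  # bundle the built extension chart (path is illustrative)
$ helm repo index . --url https://charts.example.com  # create or update the repository index
# Host the packaged chart and index.yaml on any static web server, then add that URL
# as a Helm repository in Rancher so the extension appears in the Extensions UI.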

Extensions are installed and managed from the new “Extensions” UI available from the Rancher slide-in menu:

Image 4: Rancher Extensions Menu

Rancher shows all the installed Extensions and the extensions available from the Helm repositories you have added. Extensions can also be upgraded, rolled back and uninstalled. Developers can also enable the ability to load extensions directly during development, without the need to build and publish the extension to a Helm repository.

Developers

To help developers get started with Rancher extensions, we’ve published developer documentation, and we’re building out a set of example extensions.

Over time, we will be enhancing and simplifying some of our APIs, extending the documentation, and adding even more examples to help developers get started.

We have also set up a Slack channel exclusively for extensions – check out the #extensions channel on the Rancher User’s Slack.

Join the Party

We’re only just getting started with Rancher Extensions. We introduced them in Rancher 2.7. You can use them today and get started developing your own!

We want to encourage as many users, developers, customers and partners out there as possible to take a look and give them a spin. Join me on the 3rd of May at 11 am US EST where I’ll be going through the Extension Framework live as part of the Rancher Global Online Meetup – you can sign up here.

As we look ahead, we’ll be augmenting the Rancher extensions repository with a partner repository and a community repository to make it easier to discover extensions. Reach out to us via Slack if you have an extension you’d like included in these repositories.

Fasten your seat belts. This is just the beginning. We can’t wait to see what others do with Rancher Extensions!

Driving Innovation with Extensible Interoperability in Rancher’s Spring ’23 Release

Tuesday, 18 April, 2023

We’re on a mission to build the industry’s most open, secure and interoperable Kubernetes management platform. Over the past few months, the team has made significant advancements across the entire Rancher portfolio that we are excited to share today with our community and customers.

Introducing the Rancher UI Extensions Framework

In November last year, we announced the release of v2.7.0, where we took our first steps into developing Rancher into a truly interoperable, extensible platform with the introduction of our extensions catalog. With the release of Rancher v2.7.2 today, we’re proud to announce that we’ve expanded our extensibility capabilities by releasing our ‘User Interface (UI) Extensions Framework.’

Users can now customize their Kubernetes experience. They can build on their Rancher platform and manage their clusters using custom, peer-developed or Rancher-built extensions.

Image 1: Installation of Kubewarden Extension

Supporting this, we’ve also made three Rancher-built extensions available:

  1. Kubewarden Extension delivers a comprehensive method to manage the lifecycle of Kubernetes policies across Rancher clusters.
  2. Elemental Extension provides operators with the ability to manage their cloud native OS and Edge devices from within the Rancher console.
  3. Harvester Extension helps operators load their virtualized Harvester cluster into Rancher to aid in management.

Building a rich community remains our priority as we develop Rancher. That’s why, as part of this release, the new UI Extensions Framework has also been incorporated into the SUSE One Partner Program. Technology partners are key to our thriving ecosystem, and as they validate and support extensions, we’re eager to see the innovation from our partner community.

You can learn more about the new Rancher UI Extension Framework in this blog by Neil MacDougall, Director of UI/UX. Make sure to join him and our community team at our next Global Online Meetup as he deep dives into the UI Framework.

Adding more value to Rancher Prime and helping customers elevate their performance

In December 2022, we announced the launch of Rancher Prime – our new enterprise subscription, where we introduced the option to deploy Rancher from a trusted private registry. Today we announce the new components we’ve added to the subscription to help our customers improve their time-to-value across their teams, including:

  1. SLA-backed enterprise support for Policy and OS Management via the Kubewarden and Elemental Extensions
  2. The launch of the Rancher Prime Knowledgebase in our SUSE Collective customer loyalty program

Image 2: Rancher Prime Knowledgebase in SUSE Collective

We’ve added these elements to help our customers improve their resiliency and performance across their enterprise-grade container workloads. Read this blog from Utsav Sanghani, Director of Product – Enterprise Container Management, for a detailed overview of the upgrades we made in Rancher and Rancher Prime and the value they deliver for customers.

Empowering a community of Kubernetes innovators

Image 3: Rancher Academy Courses

Our community team also launched our free online education platform, Rancher Academy, at KubeCon Europe 2023. The cloud native community can now access expert-led courses on demand, covering important topics including fundamentals in containers, Kubernetes and Rancher to help accelerate their Kubernetes journey. Check out this blog from Tom Callway, Vice President of Product Marketing, as he shares in detail the launch and future outlook for Rancher Academy.

Achieving milestones across our open source projects

Finally, we’ve also made milestones across our innovation projects, including these updates:

Rancher Desktop 1.8 now includes configurable application behaviors such as auto-start at login. All application settings are configurable from the command line and experimental settings give access to Apple’s Virtualization framework on macOS Ventura.

Kubewarden 1.6.0 now allows DevSecOps teams to write Policy as Code using both traditional programming languages and domain-specific languages.

Opni 0.9 has several observability feature updates as it approaches its planned GA later in the year.

S3GW (S3 Gateway) 0.14.0 has new features such as lifecycle management, object locking and holds, and UI improvements.

Epinio 1.7 now has a UI with integrations for Dex, the identity service that uses OpenID Connect to drive authentication for other apps, and for SUSE’s S3GW.

Keep up to date with all our product release cadences on GitHub, or connect with your peers and us via Slack.

A Guide to Using Rancher for Multicloud Deployments

Wednesday, 8 March, 2023

Rancher is a Kubernetes management platform that creates a consistent environment for multicloud container operation. It solves several of the challenges around multicloud Kubernetes deployments, such as poor visibility into where workloads are running and the lack of centralized authentication and access control.

Multicloud improves resiliency by letting you distribute applications across providers. It can also be a competitive advantage since you’re able to utilize the benefits of every provider. Moreover, multicloud reduces vendor lock-in because you’re less dependent on any one platform.

However, these advantages are often negated by the difficulty in managing multicloud Kubernetes. Deploying multiple clusters, using them as one unit and monitoring the entire fleet are daunting tasks for team leaders. You need a way to consistently implement authorization, observability and security best practices.

In this article, you’ll learn how Rancher resolves these problems so you can confidently use Kubernetes in multicloud scenarios.

Rancher and multicloud

One of the benefits of Rancher is that it provides a consistent experience when you’re using several environments. You can manage the full lifecycle of all your clusters, whether they’re in the cloud or on-premises. It also abstracts away the differences between Kubernetes implementations, creating a single surface for monitoring your deployments.

Diagram showing how Rancher works with all Kubernetes distributions and cloud platforms, courtesy of James Walker

Rancher is flexible enough to work with both new and existing clusters, and there are three possible ways to connect your clusters:

  1. Provision a new cluster using a managed cloud Kubernetes service: Rancher can create new Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE) clusters for you. The process is fully automated within the Rancher UI. You can also import existing clusters.
  2. Provision a new cluster on standalone cloud infrastructure: Rancher can deploy an RKE, RKE2, or K3s cluster by provisioning new compute nodes from your cloud provider. This option supports Amazon Elastic Compute Cloud (EC2), Microsoft Azure, DigitalOcean, Harvester, Linode and VMware vSphere.
  3. Bring your own cluster: You can manually connect Kubernetes clusters running locally or in other cloud environments (a registration sketch follows this list). This gives you the versatility to combine on-premises and public cloud infrastructure in hybrid deployment situations.
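
For the third option, Rancher generates a registration command when you import a cluster; you run it with kubectl against the cluster you want to connect. A minimal sketch, with the URL standing in for the one Rancher displays:

$ kubectl apply -f https://rancher.example.com/v3/import/<token>.yaml
# If Rancher uses a self-signed certificate, the UI also offers a curl variant:
$ curl --insecure -sfL https://rancher.example.com/v3/import/<token>.yaml | kubectl apply -f -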

Screenshot of adding a cluster in Rancher

Once you’ve added your multicloud clusters, your single Rancher installation lets you seamlessly manage them all.

A unified dashboard

One of the biggest multicloud headaches is tracking what’s deployed, where it’s located and whether it’s running correctly. With Rancher, you get a unified dashboard that shows every cluster, including the cloud environment it’s hosted in and its resource utilization:

Screenshot of the Rancher dashboard showing multiple clusters

The Rancher home screen provides a centralized view of the clusters you’ve registered, covering both your cloud and on-premises deployments. Similarly, the sidebar integrates a shortcut list of clusters that helps you quickly move between environments.

After you’ve navigated to a specific cluster, the Cluster Dashboard page offers an at-a-glance view of capacity, utilization, events and deployments:

Screenshot of Rancher's Cluster Dashboard

Scrolling further down, you can view precise cluster metrics that help you analyze performance:

Screenshot of viewing cluster metrics in Rancher

Rancher lets you access vital monitoring data for all your Kubernetes environments within one tool, eliminating the need to log into individual cloud provider control panels.

Centralized authorization and access control

Kubernetes has built-in support for role-based access control (RBAC) to limit the actions that individual user accounts can take. However, this is insufficient for multicloud deployments because you have to manage and maintain your policies individually in each of your clusters.

Rancher improves multicloud Kubernetes usability by adding a centralized user authentication system. You can set up user accounts within Rancher or connect an external service using protocols such as LDAP, SAML and OAuth.

Once you’ve created your users, you can assign them specific access control rules to limit their rights within Rancher and your clusters. Global permissions define how users can manage your Rancher installation itself, such as whether they can create and modify cluster connections, while cluster- and project-level roles configure the actions available after selecting a cluster.

To create a new user, click the menu icon in the top-left to expand the sidebar, then select the Users & Authentication link. Press the Create button on the next screen, where your existing users are displayed:

Screenshot of the Rancher UI

Fill out your new user’s credentials on the following screen:

Screenshot of creating a new user in Rancher

Then scroll down the page to begin assigning permissions to the new user.

Set the user’s global permissions, which control their overall level of access within Rancher. Then you can add more fine-grained policies for specific actions from the roles at the bottom. Once you’ve finished, click the Create button on the bottom-right to add the account. The user can now log into Rancher:

Screenshot of assigning a user's global roles in Rancher

Next, navigate to one of your clusters and head to Cluster > Cluster Members in the sidebar. Click the Add button in the top-right to grant a user access to the cluster:

Screenshot of adding a cluster member in Rancher

Use the next screen to search for the user account, then set their role in the cluster. Once you press Create in the bottom-right, the user will be able to perform the cluster interactions you’ve assigned:

Screenshot of setting a cluster member's permissions in Rancher
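
Under the hood, Rancher stores cluster membership as a ClusterRoleTemplateBinding object in its management cluster. The following is purely an illustrative sketch: the names are made up, and the field layout is based on the management.cattle.io/v3 API, so verify it against your Rancher version before relying on it:

$ kubectl apply -f - <<EOF
apiVersion: management.cattle.io/v3
kind: ClusterRoleTemplateBinding
metadata:
  name: example-member-binding   # illustrative name
  namespace: c-m-abc123xyz       # the downstream cluster's ID acts as the namespace
clusterName: c-m-abc123xyz       # cluster the binding applies to
userName: u-example              # Rancher user ID
roleTemplateName: cluster-member # built-in or custom role template
EOF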

Adding a cluster role

For more precise access control, you can set up your own roles that build upon Kubernetes RBAC. These can apply at the global (Rancher) level or within a specific cluster or project/namespace. All three are created in a similar way.

To create a cluster role, expand the Rancher sidebar again and return to the Users & Authentication page. Select the Roles link from the menu on the left and then select Cluster from the tab strip. Press the Create Cluster Role button in the top-right:

Screenshot of Rancher's Cluster Roles interface

Give your role a name and enter an optional description. Next, use the Grant Resources interface to define the Kubernetes permissions the role includes. This example permits users to create and list pods in the cluster. Press the Create button to add your role:

Screenshot of defining a cluster role's permissions in Rancher
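
If you prefer to manage roles as code, the same definition can be expressed as a RoleTemplate resource. This is a rough sketch: the name is illustrative and the fields follow the management.cattle.io/v3 RoleTemplate type, so check them against your Rancher version:

$ kubectl apply -f - <<EOF
apiVersion: management.cattle.io/v3
kind: RoleTemplate
metadata:
  name: pod-creator              # illustrative name
displayName: Pod Creator
context: cluster                 # makes this a cluster role rather than a project role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "list"]    # matches the permissions granted in the UI example
EOF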

The role will now show up when you’re adding new members to your clusters:

Screenshot of selecting a custom cluster role for a cluster member in Rancher

Rancher and multicloud security

Rancher enhances multicloud security by providing active mechanisms for tightening your environments. Besides the security benefits of centralized authentication and RBAC, Rancher also integrates additional security measures that protect your clusters and cloud environments.

Rancher maintains a comprehensive hardening guide based on the Center for Internet Security (CIS) Benchmarks that help you implement best practices and identify vulnerabilities. You can scan a cluster against the benchmark from within the Rancher application.

To do so, navigate to your cluster, then expand Apps > Charts in the left sidebar. Select the CIS Benchmark chart from the list:

Screenshot of the CIS Benchmark app in Rancher's app list

Click the Install button on the next screen:

Screenshot of the CIS Benchmark app's details page in Rancher

Follow the steps to complete the installation in your cluster:

Screenshot of the CIS Benchmark app's installation screen in Rancher

It could take several minutes for the process to finish — you’ll see a “SUCCESS” message in the logs pane when it’s done:

Screenshot of the CIS Benchmark app's installation logs in Rancher

Now, navigate back to your cluster. You’ll find a new CIS Benchmark item in Rancher’s sidebar. Expand this menu and click the Scan link; then press the Create button on the page that appears:

Screenshot of the CIS Benchmark interface in Rancher

On the next screen, you’ll be prompted to select a scan profile. This defines the hardening checks that will be performed. You can change the default to choose a different benchmark or Kubernetes version. Press the Create button to start the scan:

Screenshot of creating a CIS Benchmark scan in Rancher
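
Behind the scenes, the UI creates a ClusterScan resource from the cis.cattle.io API group. As a rough declarative sketch (the scan name and profile name are placeholders; use a profile listed under CIS Benchmark > Profiles in your cluster):

$ kubectl apply -f - <<EOF
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: example-scan                       # illustrative name
spec:
  scanProfileName: rke-profile-permissive  # placeholder; pick a profile matching your distribution
EOF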

The scan run will then show in the Scans table back on the CIS Benchmark > Scan screen:

Screenshot of the CIS Benchmark Scans interface in Rancher, with a running scan displayed

Once it is finished, you can view the results in your browser by selecting the scan from the table:

Screenshot of viewing CIS Benchmark scan results in the Rancher UI

Rancher helps DevOps teams to scale multicloud environments

Multicloud is hard — more resources normally means higher overheads, a bigger attack surface and a rapidly swelling toolchain. These issues can impede you as you try to scale.

Rancher incorporates unique capabilities that help operators work effectively with different deployments, even when they’re distributed across several environments.

Automatic cluster backups provide safety

Rancher includes a backup system that you can install as an operator in your clusters. This operator backs up your Kubernetes API resources so you can recover from disasters.

You can add the operator by navigating to a cluster and choosing Apps > Charts from the side menu. Then find the Rancher Backups app and follow the prompts to install it:

Screenshot of the Rancher Backups app description in the Rancher interface

You’ll find a new Rancher Backups item in the navigation menu. Click the Create button to define a new one-time or recurring backup schedule:

Screenshot of the Backups interface in Rancher

Fill out the details to configure your backup routine:

Screenshot of configuring a backup in Rancher

Once you’ve created a backup, you can restore it in the future if data gets accidentally deleted or a disaster occurs. With Rancher, you can create backups for all your clusters with a single consistent procedure, which produces more resilient environments.
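
Behind the form, the operator works with Backup custom resources from the resources.cattle.io API group. Here is a minimal sketch of a recurring backup; the name, schedule and retention are illustrative, and the storage location falls back to whatever target the operator was installed with:

$ kubectl apply -f - <<EOF
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: nightly-rancher-backup            # illustrative name
spec:
  resourceSetName: rancher-resource-set   # resource set shipped with the rancher-backup operator
  schedule: "0 2 * * *"                   # run every night at 02:00
  retentionCount: 7                       # keep the last seven backups
EOF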

Rancher integrates with multicloud solutions

One of the benefits of Rancher is that it’s built as a single platform for managing Kubernetes in any cluster. But it gets even better when combined with other ecosystem tools. Rancher has integrations with adjacent components that provide more focused support for specific use cases, including the following:

  • Longhorn is distributed, cloud native block storage that runs anywhere and supports automated provisioning, security and backups. You can deploy Longhorn to your clusters from within the Rancher UI (or with plain Helm, as sketched after this list), enabling more reliable storage for your workloads.
  • Harvester is a solution for hyperconverged infrastructure on bare-metal servers. It provides a virtual machine (VM) management system that complements Rancher’s capabilities for Kubernetes clusters. By combining Harvester and Rancher, you can effectively manage your on-premises clusters and the infrastructure that hosts them.
  • Helm is the standard package manager for Kubernetes applications. It packages an application’s Kubernetes manifests into a collection called a chart, ready to deploy with a single command. Rancher natively supports Helm charts and provides a convenient interface for deploying them into your cluster via its apps system.
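
While the Rancher UI can install Longhorn for you, the same chart can also be deployed with plain Helm, which is convenient for scripted setups. A quick sketch using the public Longhorn chart repository:

$ helm repo add longhorn https://charts.longhorn.io
$ helm repo update
$ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace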

By utilizing Rancher alongside other common tools, you can make multicloud Kubernetes even more powerful. Automated storage, local infrastructure management and packaged applications allow you to scale up freely without the hassle of manually provisioning environments and creating your app’s resources.

Deploy to large-scale environments with Rancher Fleet

Rancher also helps you deploy applications using automated GitOps methodologies. Rancher Fleet is a dedicated GitOps solution for containerized workloads that offers transparent visibility, flexible control and support for large-scale deployments to multiple environments.

Rancher Fleet manages your Kubernetes manifests, Helm charts and Kustomize templates for you, converting them into Helm charts that can automatically deploy in your clusters. You can set up Fleet in your Rancher installation by clicking the menu icon in the top-left and then choosing Continuous Delivery from the slide-out main menu:

Screenshot of the Rancher Fleet landing screen

Click Get started to connect your first Git repository and deploy it to your clusters. Once again, Rancher permits you to use standardized delivery workflows in all your environments. You’re no longer restricted to a single cloud vendor, delivery channel or platform as a service (PaaS):

Screenshot of creating a new Rancher Fleet Git repository connection
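
The Get started flow ultimately creates a GitRepo resource from the fleet.cattle.io API group. A minimal sketch is shown below; the repository URL, path and target selector are placeholders for your own values:

$ kubectl apply -f - <<EOF
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-app                # illustrative name
  namespace: fleet-default         # default workspace for downstream clusters
spec:
  repo: https://github.com/example/fleet-examples   # placeholder repository
  branch: main
  paths:
    - simple                       # path in the repo containing manifests or a chart
  targets:
    - clusterSelector: {}          # empty selector targets every cluster in the workspace
EOF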

Conclusion

Multicloud presents new opportunities for more flexible and efficient deployments. Mixing solutions from several different cloud providers lets you select the best option for each of your components while avoiding the risk of vendor lock-in.

Nonetheless, organizations that use multicloud with containers and Kubernetes often experience operational challenges. It’s difficult to manage clusters that exist in several different environments, such as public clouds and on-premises servers. Moreover, implementing centralized monitoring, access control and security policies yourself is highly taxing.

Rancher solves these challenges by providing a single tool for provisioning infrastructure, installing Kubernetes and managing your deployments. It works with Google GKE, Amazon EKS, Azure AKS and your own clusters, making it the ultimate solution for achieving multicloud Kubernetes interoperability. Try Rancher today to provision and scale multicloud Kubernetes.