Rancher Desktop 1.9-Tech-Preview: With Support for Docker Extensions

Monday, 24 April, 2023

We are pleased to announce that a tech preview version of Rancher Desktop with support for Docker Extensions has just been released!   

Extensions

If you are a software developer or an operator dealing with containers, you know how crucial it is to have a versatile toolkit to support your diverse and ever-changing needs. Software extensions provide a great way to extend the capabilities of your core tool so you do not have to deal with multiple tools for different needs.  

The newly introduced Extensions feature enables you to install and use Docker Extensions within Rancher Desktop. In the tech preview release, we started with a catalog of three extensions you can install and try, and we will continue to expand the catalog in upcoming Rancher Desktop releases. Trying out the current extensions is super easy: you can install and manage them via the GUI or the command-line utility rdctl (for example, rdctl extension install docker/logs-explorer:0.2.2, as sketched below). Try it out today, and let us know what you think! Refer to the release notes for installation instructions for your OS.
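
For reference, here is a minimal sketch of the extension workflow from the command line. The image and tag come from the example above; the exact set of subcommands may differ per release, so check rdctl extension --help in your install:

rdctl extension install docker/logs-explorer:0.2.2
rdctl extension ls
rdctl extension uninstall docker/logs-explorer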

 

Next steps

There are several next steps you can take: 

  • Learn more about the changes in the 1.9.0-tech-preview release from the release notes. 
  • Star Rancher Desktop on GitHub to show that you like it. This lets us know that you want to see development continue. 
  • Provide feedback in the issue queue. 

Kubernetes in Docker Desktop Just Got Easier with Epinio

Tuesday, 10 May, 2022

 

Epinio takes developers from application to URL in one step. In this blog post, I’m going to tell you about the new Epinio extension for Docker Desktop, which allows you to run Epinio on your laptop. Operations teams care deeply about the details of containerized workloads, and Kubernetes especially, but for developers the abstraction layer provided by Kubernetes might not be very relevant to their daily work. And managing a Kubernetes cluster is certainly beyond what most application developers want to take on.

However, deploying an application with Epinio is possible with very little Kubernetes knowledge. On a real cluster, Epinio also offers several advantages to operators, but that is a story for another post.

Areas where Epinio can help

Figure 1: Where Epinio is useful

The above image illustrates that Epinio can be used as part of continuous delivery workflows, can help operators manage applications better and can support building self-service platforms. Developers run Kubernetes clusters locally to stay close to the production environment, and Epinio builds and deploys workloads there just like it would on a real cluster.

 

The feedback we get on Epinio shows that even running Kubernetes locally is not a one-liner. For example, the networking differs significantly between K3s, k3d, KinD, minikube, Rancher Desktop and others.

Kubernetes Checkbox in Docker Desktop

Figure 2: Install Kubernetes in Docker Desktop first

That’s why a Docker Desktop Extension is such a great opportunity for Epinio. The user can create a Kubernetes cluster by just clicking on a single checkbox. The result is always the same, which makes the Epinio installation predictable.

After choosing the Epinio extension from the marketplace (as shown in the above image), you can press the “Install” button, shown in the screenshot below, to add Epinio to Docker’s Kubernetes.

Figure 3: Epinio Extension

From here you can choose a folder with an application inside and send it off to Epinio. Epinio will run Cloud Native Buildpacks and create a deployment. The application can then be visited via the generated URL.
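
If you prefer the terminal, the same flow can be driven with the Epinio CLI once it is logged in to this cluster. A minimal sketch only; the application name and folder are placeholders, and the exact flags may vary by Epinio version:

epinio push --name my-app --path ./my-app
epinio app list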

The extension UI is at an early stage and, compared to the Epinio UI, a bit limited. However, the Epinio UI is installed alongside it, and you can click the “Open” button to access it with the default credentials.

The screenshot below shows the standalone UI, where you can delete and edit applications, configure environment variables and more.

Figure 4: Epinio’s UI

 

There are several next steps you can take:
  • Star Epinio on GitHub to show that you like the project.
  • Try out Epinio in Docker Desktop or install it onto your cluster.
  • Let us know if Buildpacks work for you, as they do not support all kinds of applications yet.
  • Get in touch with us; you can find us on GitHub and Slack.

Migrating Rancher (2.5.0+) Single Node Docker install to a (HA) Kubernetes/K3s cluster

Wednesday, 26 January, 2022

Introduction

This guide will show you, step by step, how to migrate your Rancher single-node Docker install to a (highly available) Kubernetes/K3s cluster.

Back when Rancher v2.0.0 was released, there were only two options for installation: either you ran it as a single-node Docker container or you installed it on an RKE Kubernetes cluster (an HA cluster being Rancher Labs’ recommendation for production workloads). With the release of Rancher 2.4.0, the options were expanded to include K3s clusters and, as of v2.5.0, any CNCF-certified Kubernetes distribution is supported.

Since I was still new to Kubernetes back then and had no idea how to set up a cluster, I went with a Single Node Docker install. Up until Rancher v2.5.0, you were actually stuck with this choice. There was no way to migrate your Single-Node Docker container to a Kubernetes cluster. However, with the release of v2.5.0, migrating your Rancher installation is now officially supported using the backup-restore-operator. In this guide, I will show you all the steps I’ve taken to do so and explain how you, too, can migrate your Rancher installation from a single node docker install to a single node or even a highly available Kubernetes cluster.

This guide will be written with new Kubernetes/Docker/Linux users in mind, so it will be rather extensive. However, it does include advice that even more experienced users should find helpful in executing this migration. I will also avoid using any cloud services (specifically S3 buckets) that you might not have available. Let’s get started!

Essential conditions

Right, let’s start with the things you’ll need to have available to execute this migration.

Time: 1 hour – 1.5 hours

Must have:

  • Rancher 2.5.0 or later running as a single-node Docker install (older versions are not supported for this method; you’ll have to update first).

Tip: If you’re not running Rancher 2.5.x yet, I highly recommend updating from the latest 2.4.x (2.4.15 as of writing) straight to v2.5.8 or higher, skipping 2.5.0-2.5.7. As of Rancher 2.5.0, Rancher installs Rancher Fleet into downstream clusters, and I’ve found that the earlier 2.5.x versions had some issues doing so reliably after updating. However, all my updates from v2.4.x to 2.5.8 (and up) have gone flawlessly.

  • Access to the DNS records of your Rancher domain. This is ONLY required IF you’re changing the IP/node. Your migrated installation must use the same domain as your current installation, as Rancher does not support changing it.
  • SSH access to the node Rancher is running on, either as root or as a user that can use sudo (without a password; more on that later).
  • Access to the local cluster through the UI. If you’re not running 2.5.x yet, you’ve probably never seen this before, but as of 2.5.0 Rancher lists the cluster on which it’s installed as local. This is the case even for a single-node Docker installation, as Rancher sets up a K3s cluster inside the Docker container in order to run. Pretty neat, huh?
  • kubectl (v1.20.x – v1.22.x at the time of writing) on your local machine (the one inside the Rancher interface won’t cut it here, as we’ll be using kubectl while Rancher is down for the migration) – this is the Kubernetes command-line interface.
  • helm v3.2.x or higher (lower versions cause issues) – package management for Kubernetes.
  • A node to install K3s on as a single-node cluster (this can be the node on which Rancher is currently installed).

Optionally:

  • An S3 bucket, preferably with a valid HTTPS certificate (Amazon S3 or MinIO both work). If you have a bucket you can use, it will make your life easier. If you have one without a valid certificate, you can still use it, but how to do so is not covered in this guide; I will, however, link to resources that explain it.

Note: Even though the backup-restore-operator supports StorageClasses other than S3 for its backups, the documentation on using them for restores is scarce. Unless you can download the backup manually to use it for the restore later, I wouldn’t recommend going down this route. If you do use a different StorageClass, make sure its Reclaim Policy is set to “Retain”. This will prevent the PV from being deleted if the PVC from the rancher-backup chart gets deleted.
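
If you do go the StorageClass route, one way to flip an existing PV's reclaim policy to Retain (the PV name is a placeholder) is:

kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'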

What we’ll be doing

  • Make a backup of the current single-node docker install as a last resort in case stuff goes wrong (in a way that we can restore to a single-node docker install if needed).
  • Install the Rancher backup-restore-operator (BRO).
  • Create a Persistent Volume for BRO to store the backup.
  • Extract the back-up from the Rancher container (Because Rancher in a single node docker container runs on what’s effectively a K3s cluster inside a Docker container, the notion of HostPath writing to the host is lost. As far as Rancher is concerned, the container is the host. Thus this extra step is required to obtain the backup).
  • Create a one-time backup using BRO that will be used for the migration.
  • Set up a new cluster to migrate Rancher to (optional if you already have one prepared). This can be your current host.
  • Prepare the new cluster for the restore.
  • Restore the BRO back-up, reinstall Rancher into the new cluster and verify everything works. Restoring the backup only restores the data from your Kubernetes data store, but not the actual workloads from the local cluster. Hence we have to reinstall Rancher after restoring the backup.
  • Clean up backup-restore-operator and the persistent volume we used.
  • Extra: Update Rancher to 2.6.3.

1. Creating a backup

Before we do anything else, let’s make a backup of our current single-node docker install. That way, in case anything goes wrong, we’ll have something to return to. Rancher has an excellent guide on how to do this:

https://rancher.com/docs/rancher/v2.x/en/backups/v2.5/docker-installs/docker-backups/
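
For reference, the documented procedure boils down to roughly the following; treat this as a sketch and follow the linked docs for the authoritative steps. The container name, image tag and file name are placeholders you need to adapt:

docker stop <RANCHER_CONTAINER_NAME>
docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data rancher/rancher:<RANCHER_CONTAINER_TAG>
docker run --volumes-from rancher-data -v $PWD:/backup busybox tar zcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
docker start <RANCHER_CONTAINER_NAME>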

Note: This backup will only serve as a last resort to restore your current single-node docker install if something goes wrong. This backup can not be used to migrate your SND install to a Kubernetes cluster.

Important: Copy the backup from your node to your local machine or secure it elsewhere. That way if something goes horribly wrong with the node, you won’t be locked out of your backup.

If you don’t know how to do so, you can use rsync. If you’ve only used the older scp before, the syntax is nearly identical. If you’ve used neither, just know that they’re both tools for transferring files over SSH, and there’s no need to set up anything on your server to use them. The syntax for rsync is as follows:

rsync [OPTIONS] user@serverIP_or_name:SourceDirectory_or_filePath Target

Example (filled in):

rsync -P vashiru@192.168.1.221:/home/vashiru/rancher-backup-2021-05-28.tar.gz .

This will copy the backup from the home directory of my server to the current directory my terminal is pointed at.

2. Creating a Persistent Volume for BRO

For this guide, I’ll assume you don’t have access to an S3-compatible object store that BRO can use to store and retrieve backups. If you do have access, you can skip this step if you prefer and move on to installing the backup-restore-operator.

If you don’t have S3-compatible storage, we’ll have to first create a Persistent Volume for BRO to store the backups.

  1. Go to your cluster overview and click on the local cluster. local is the cluster that Rancher uses under the hood and the one it is itself installed in.

Untitled

  2. You should see your dashboard; in the top bar, navigate to Storage → Persistent Volumes.

Untitled-1

  3. Click on Add Volume and give it the name rancher-backup. For the volume plugin, select HostPath; the default capacity of 10 GB is fine (the backup will only be a couple of MB). For the path on the node, enter /rancher-backup and make sure to select the option “A directory, or create if it does not exist”.

Migrate-rancher-2

Important: Because Rancher as a single-node Docker container runs a single-node K3s cluster inside the Docker container, the path on the node means something different here than you’d expect from the name. The node in this context is not your host system, but rather the container in which Rancher itself is running. This means that right now, anything BRO writes to it will be lost once the container gets removed. Because of this, we’ll have to copy the backup out of the Docker container later on in order to use it for the migration.

(Yes, alternatively you could bind-mount a folder on the host to the directory in this container. That way it’ll write directly to the hard disk of your node; feel free to do so).
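
For completeness, here is a hedged sketch of what that bind mount could look like when (re)creating the Rancher container; the ports, image tag and host directory are assumptions you should adapt to how your container is currently run:

docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher-backup:/rancher-backup --privileged rancher/rancher:v2.5.8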

3. Installing backup-restore-operator

Installing the backup-restore-operator is very straightforward. It comes as an app that you install into your local cluster through the marketplace in the cluster explorer.

  1. Go to your (global) cluster overview and click on Explorer next to the local cluster.

Migrate-rancher-3

  2. Open the menu on the top left and select Apps & Marketplace.

Migrate-rancher-4

  3. Locate the Rancher Backups chart and open it.

Migrate-rancher-5

  4. You’ll be greeted with the following screen (possibly a slightly newer chart version, which will still work with this version of Rancher). On the left side, select Chart Options.

Migrate-rancher-6

  5. If you want to use your S3-compatible storage, this is where you enter your credentials for it, as well as any CA certificates in case it’s self-signed. If you’re using MinIO, the default bucket region is us-east-1. More information about using an S3 bucket can be found in the Rancher docs.

If not, select the “use an existing persistent volume” option and select our previously created rancher-backup PV as follows:

Migrate-rancher-7

  6. Click Install and wait for the installation to finish. It installs two things: first rancher-backup-crd and then rancher-backup. CRDs are custom resource definitions, which are extensions to the Kubernetes API; Rancher Backups uses these to create backups and restore them.

Migrate-rancher-8

Migrate-rancher-9

4. Creating a one-time backup using BRO

After we’ve installed BRO, it’s time to create our backup. This is the backup we’ll be using to actually migrate the cluster in the following steps.

  1. Navigate to the cluster explorer of your local cluster like we did before and open the menu on the top left. You’ll notice it has a new option called Rancher backups – that’s the one we want. If you don’t see it, try refreshing the page.

Migrate-rancher-10

  2. You’ll see that there are currently no backups present, so let’s go ahead and make one using the Create button.

Migrate-rancher-11

  3. Let’s give our backup a name, rancher-migrate, and enter a description if you like. Set the schedule to One-Time Backup and set encryption to Store the contents of the backup unencrypted (we’ll keep it easy on ourselves here). You can leave the storage location set to Use the default storage location configured during installation. If you’re using an S3 bucket, there is no need to enter your credentials again here. After you’ve done all that, go ahead and click Create to generate the backup job. (If you prefer kubectl over the UI, a YAML sketch of the equivalent Backup object follows at the end of this section.)

Migrate-rancher-12

  4. After you hit Create, you should be returned to the backup overview. Your rancher-migrate backup job should be visible, and after a few moments the state should change to Completed and a filename should show up.

Important: Copy the filename to a safe location, you’ll need it later.

Migrate-rancher-13

Note: Despite the backup-restore-operator looking very straightforward, the rest of the procedure is actually a bit tricky. You can’t just go ahead, install a new cluster, add Rancher to it along with BRO, and trigger a restore from the UI on the new cluster. This will cause issues and your restoration will turn out unsuccessful. I’m speaking from experience here.
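
For the record, the one-time backup itself boils down to creating a Backup object. Here is a sketch for those who prefer kubectl; the field names are as I recall them from the rancher-backup chart, so double-check them against the CRDs installed in your cluster:

kubectl apply -f - <<EOF
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: rancher-migrate
spec:
  resourceSetName: rancher-resource-set
EOF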

5. Extracting the BRO backup from the container for migration

If you’ve used the persistent volume method from this guide to perform your backup, you’ll have to perform this step. If you’ve used an S3 bucket, you can skip this entirely. If you used a different StorageClass, go download your backup from there.

As mentioned earlier, Rancher uses a K3S cluster inside its docker container in order to run, causing the HostPath PV not to write to the host, but to the storage inside the container. This means we’ll have to extract the backup from the container, in order to use it for our migration.

Luckily docker has made it very easy to do so.

  1. SSH into your host.
  2. Find your container name or container ID using the command sudo docker ps. For me, that would be 228d9abea4f4 or distracted_ride.

Migrate-rancher-14

  3. Use the following command to copy the backup from inside the container to a directory on your host:

sudo docker cp <container-name>:<path_from_pv>/<filename_of_backup>.tar.gz .

In my case that would be:

sudo docker cp distracted_ride:/rancher-backup/rancher-migrate-ceee0baa-0c4a-4b25-b15e-e554aa28f705-2021-06-01T21-04-31Z.tar.gz .

  4. You can check that you’ve successfully extracted it using ls -lh.

Migrate-rancher-15

  5. Just like before, let’s copy this file over to our local machine using rsync, just to be safe. This step is also required if you’ll be using a new node rather than your current host for your new Rancher installation. Do so by exiting your current host and retrieving the file with rsync:

rsync -P vashiru@192.168.1.221:/home/vashiru/rancher-migrate-36b16a2b-7f44-4cb1-8a0e-370c3514f681-2021-05-28T08-50-29Z.tar.gz .

Migrate-rancher-16

Take note of this filename, as you’ll need it later to perform the restore.

6. Shutting down the current Rancher docker container

Now that you’ve created a backup of the current rancher installation and secured it to your local machine for the migration, we can shut down the current Rancher installation. Don’t worry, all workloads in downstream clusters will continue to run as normal.

  1. SSH into your host.
  2. Find your container name or container ID using the command sudo docker ps. For me, that would be 228d9abea4f4 or distracted_ride.

Migrate-rancher-17

  3. Shut down the current Rancher container using sudo docker stop <container-name>; for me that would be sudo docker stop distracted_ride. If you want, you can validate that it’s been shut down by running sudo docker ps -a. This will list all containers, including stopped ones, and you’ll see the status has changed to Exited.

Migrate-rancher-18

With the backup out of the way, we’re about halfway there. From here on it’s just a matter of setting up a Kubernetes cluster, preparing it for the restore, restoring the backup, and bringing Rancher back online. This might sound daunting at first, but I’ll walk you through it.

7. Picking an appropriate new Kubernetes cluster

In order to perform this migration, we’re going to need a cluster for our Rancher installation to run on. But before we create one, let’s take a step back for a moment. Up until now you’ve been running Rancher as a single-node Docker container. Whether this came to be due to budgetary reasons, hardware constraints, or just a short-term solution gone permanent, this is a good moment to reconsider whether a single node is still sufficient for your use case.

If Rancher being available for you and perhaps other people is vital to your day-to-day operation, I would recommend setting up a highly available cluster (as does Rancher Labs for production workloads). However, if 100% Rancher uptime isn’t a concern for you (in my experience a single-node K3s cluster is very robust on its own), a single-node cluster might be plenty.

The last thing to consider is the future of your use case. A single-node cluster might just work for you now, but you may want to turn that into a highly available cluster down the line. If you feel like that’s the case for you, you might want to look more closely at the various options available below. If the future is uncertain at this point, don’t worry, you can always migrate your cluster again using this guide.

I’ve listed a couple of options for Kubernetes clusters based on K3s and RKE, along with links to the documentation on how to set them up. Both K3s and RKE are fully CNCF-certified Kubernetes distributions, meaning they’re fully compatible with other Kubernetes distributions and support the full feature set. The main differences are how you set up your cluster and the fact that K3s has had some in-tree storage drivers and cloud provider drivers stripped out.

Single node cluster options

  • K3S cluster using the embedded SQLite database – Lightweight – can NOT be scaled to HA later; for the sake of simplicity, this will be the focus of this guide.
  • K3S cluster using the embedded etcd datastore – Resource heavy: requires a fast SSD and lots of memory due to etcd – can be scaled to a HA cluster later. If you still want this option, follow along with the installation below; you’ll only have to change 1 parameter.
  • K3S cluster using an external MySQL/Postgresql/etcd datastore – Lightweight using MySQL/Postgresql, resource-heavy with etcd, but can all be scaled to a HA cluster later.
  • RKE cluster using embedded etcd datastore – requires a very fast SSD and lots of memory due to etcd, can be scaled to a HA cluster later.

Highly available cluster options:

If you decide one of the K3s options is best suited for your use case, you can follow along with the next step, ‘Setting up a K3s cluster for Rancher’. Apart from setting up the MySQL/Postgres/etcd external databases themselves, I’ll cover which flags to change in order to use them.

8. Setting up a K3s cluster for Rancher

You’ve chosen to have your new Rancher installation run on a Kubernetes cluster based on K3s – an excellent choice! There are a couple of ways to install K3s.

It’s important to note that K3s out of the box does not use Docker. Instead, it uses the containerd container runtime. Containerd is actually fully compatible with Docker, as containerd is what Docker already uses under the hood, hidden behind the fancy Docker CLI. If you prefer to keep running your containers inside Docker, that’s possible with all 3 options, not just k3d.
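
If you're curious what that looks like once your K3s cluster is up (a few steps below), K3s bundles crictl, so you can list the containers running under containerd with:

sudo k3s crictl ps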

For this guide, we’ll be using k3sup to install K3s. It’s my preferred way of doing so, as it automatically downloads the kubeconfig file to our local machine for accessing the cluster, saving us a few steps.

Important: If you’re installing Rancher on a K3s cluster running on raspbian buster, you’ll have to follow these steps to switch to legacy IP tables. If your cluster is Alpine Linux based, please follow these extra steps.

  1. Prepare the node(s). If you’re not using the root account to log in over SSH and sudo for the account requires any form of authentication (usually a password), disable this using this guide. You can undo this afterward – it’s just required for k3sup to work.
  2. Download and install k3sup for your platform of choice.
  3. Install a single-node K3s cluster with just one command. This command will install K3s as a single-node Kubernetes cluster using containerd and the embedded SQLite datastore. It will also store the credentials of the cluster in the current directory as rancher-cluster.yaml with the context name rancher. If you’d rather use Docker, prepare for HA, or use an external database in general, check the tweaks below the command.

k3sup install --ip=<ip-of-first-node> --user=<username> --k3s-channel=v1.20 --local-path=rancher-cluster.yaml --context rancher

Important: Kubernetes v1.20.x is the latest version of Kubernetes supported by Rancher v2.5.0. This is why it’s important to tell the installation to use this version rather than the latest v1.22.5 release of K3s.

Tweaks

  • Q: I want to use Docker instead of containerd
    Add --k3s-extra-args='--docker' to the command. I would recommend using containerd though, as the k3s-uninstall.sh script works better with it, but if you want to use Docker you can. Just note that if you ever decide to uninstall K3s, you’ll have to manually stop and remove the workloads in Docker.
  • Q: I want to use an external Mysql/Postgresql/etcd database as a data store
    Add --datastore="sql-etcd-connection-string" to the command. Check the k3s documentation for the format of the string.
  • Q: I want to use the embedded etcd database for the HA setup
    Add --cluster to the command to replace the embedded SQLite with etcd and start K3s in clustering mode.
  4. After k3sup finishes installing our cluster, we want to check that everything went correctly. Because we stored our kubeconfig in the current directory, we can’t just run kubectl get nodes, since it would look for the config in ~/.kube/config. To fix that for this terminal session, run the following command: export KUBECONFIG=${PWD}/rancher-cluster.yaml.
  5. Now verify your cluster has been installed correctly using kubectl get nodes. You should get output like this:

Migrate-rancher-19

If it says “not ready”, wait a few moments and run kubectl get nodes again.

Congratulations, you’ve just installed a single-node Kubernetes cluster!

Important: If you plan on going with a HA setup, proceed with the rest of the guide first, then add the other nodes afterward.

9. Preparing the cluster for restore

Before we can proceed to restore the BRO backup (rancher-migrate....tar.gz), we need to prepare our cluster. Preparing the cluster will be done through the following steps:

  1. Installing cert-manager (required by Rancher).
  2. Creating a HostPath Persistent Volume and placing our backup in it.
  3. Installing backup-restore-operator.

For these steps, we’re going to need kubectl and helm v3.2.x or higher. If you haven’t installed helm yet, please do so now.

Important: Do NOT install Rancher itself at this point. If you install Rancher before performing the restore, it will fail. You’ll end up with duplicate resources inside the datastore that will cause all kinds of issues.

9.1 Installing cert-manager

  1. Make sure you use the same terminal session as before by validating that you still see the same nodes with kubectl get nodes. If not, run the export KUBECONFIG command from earlier to load the right config.
  2. Before we can install cert-manager, we need to tell helm where to find it. To do so, we’ll add the jetstack (cert-manager) helm chart repository to helm and let it download its definitions. We can do that with the following two commands:

helm repo add jetstack https://charts.jetstack.io

helm repo update

After that, you should see helm reload the charts from all the repositories you’ve added.

Migrate-rancher-20

  3. cert-manager requires a few CRDs to be installed before installing cert-manager itself. Technically there’s a way to do it by setting a flag on the helm install as well, but the last time I tried that, it didn’t work. So we’ll use kubectl for it:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml

Migrate-rancher-21

Ignore the slight version discrepancy in the picture; the screenshots are slightly older, and the guide has been updated to use the latest version.

  4. Then we install cert-manager itself using helm:

helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.6.1

Tip: If you add the --create-namespace flag when installing a helm chart, it will create the namespace for you, as the name suggests. This saves you from having to create it manually with kubectl first.

  5. Wait a few minutes and then check that cert-manager has been installed correctly using kubectl get pods -n cert-manager. All pods should have a Ready state of 1/1. If not, wait a couple of minutes and check again. If they’re still not starting, check the logs using kubectl logs <podname> -n cert-manager to figure out what’s wrong.

Migrate-rancher-22

And that’s the first part done; the cert-manager is now successfully installed.

9.2 Creating the persistent volume for our restore

Before we can install the backup-restore-operator, we first have to create a Persistent Volume containing our backup. This is because the backup-restore-operator can only restore from a persistent volume if it’s set as the default location upon installation; otherwise, it only supports S3 buckets. This is currently not stated very clearly in the documentation on the Rancher website.

If you actually used an S3 bucket for your backup and will be using that for the restore, you can skip these steps and move on with 9.3 Installing backup-restore-operator. If you used a different storage class, either grab your backup from there and use the PV method below or skip to 9.3 Installing the backup-restore-operator and tweak the installation of BRO yourself. For everyone else following this guide, just keep reading.

  1. First, we must create the persistent volume that will hold our backup. BRO will be installed into the cattle-resources-system namespace, so we’ll create that namespace and the PV up front. Below you can find the YAML file to do so.

Note: For the hostPath I used /home/vashiru/rancher-backup – this is the actual folder in which you’ll have to place your backup on the node. When we tell BRO to use this PV, it will mount that folder to /var/lib/backups inside the container. Due to this, you’ll want it to be a separate folder on your node, so tweak it to your needs. The path doesn’t have to exist on the node yet; we’ll create it in the next few steps.

For now, just tweak the path under hostPath to your needs and save the file as rancher-backup-ns-pv.yaml.

---
apiVersion: v1
kind: Namespace
metadata:
  name: cattle-resources-system
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rancher-backup
  namespace: cattle-resources-system
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /home/vashiru/rancher-backup
  2. Now apply this file to your cluster using kubectl apply -f rancher-backup-ns-pv.yaml.
  3. You can validate it’s been deployed using kubectl get pv.

Migrate-rancher-23

  4. SSH into your node.
  5. Create the directory using mkdir -p /home/vashiru/rancher-backup. In case you didn’t know, the -p flag will automatically create the entire path you specify, including subdirectories.
  6. If you are re-using the node you previously used to run the single-node Docker container on, just copy the rancher-migrate-<hash>-<timestamp>.tar.gz file into /home/vashiru/rancher-backup. Otherwise, use rsync to upload your backup to the new node.

Example:

rsync rancher-migrate-<hash>-<timestamp>.tar.gz 192.168.1.221:/home/vashiru/rancher-backup/

Important: Make sure you used the backup starting with rancher-migrate-<hash>... here. Do NOT use rancher-data-backup, as that’s only for restoring single-node docker installations.

  7. Once the backup is placed in the right directory, exit from SSH again.

9.3. Installing backup-restore-operator

  1. Add the rancher-charts helm chart repository to helm and update the definitions using:

helm repo add rancher-charts https://charts.rancher.io

helm repo update

  2. Install the CRDs for the backup-restore-operator using helm:

helm install rancher-backup-crd rancher-charts/rancher-backup-crd -n cattle-resources-system --create-namespace

Migrate-rancher-24

  3. Next, we’ll install the backup-restore-operator itself. If you’ve been following along with the guide using the persistent volume, use option A. If you’re using S3, use option B.

A. Persistent volume method:

helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system --set persistence.enabled=true --set persistence.storageClass=manual --set persistence.volumeName="" --set persistence.size=10Gi

This will change a few things in the helm chart values so that the backup-restore-operator will create a persistent volume claim that will bind our persistent volume.

Migrate-rancher-25

B. S3 bucket for restore:

helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system

Migrate-rancher-26

  4. Validate that the PVC has been able to claim the persistent volume using kubectl get pv. You should see the following:

Migrate-rancher-27

The status should be updated to Bound and the claim should be bound to cattle-resources-system/rancher-backup-1. If this is not the case, go back to 9.2 and check your steps. If needed, uninstall the helm chart and delete the PV. Deleting the PV will not delete your data inside because this is a hostPath PV.

10. Restoring the BRO-back-up

There are only a couple of steps left! First comes the restoration itself. The backup-restore-operator watches the local cluster for newly created objects of the Restore kind with apiVersion resources.cattle.io/v1. When it detects that such an object has been created, it immediately triggers a restore.

So how do we create this Restore object? After all, it’s not like we have the Rancher GUI to manage them. Well, once again, we’ll use a YAML file to create it and trigger the restore.

  1. Pick one of the options below, depending on whether you’ll be restoring from a PV or an S3 bucket, and follow the steps there. If you followed this guide and used the persistent volume, go for option A. If you want to restore from an S3 bucket, pick option B.

A. Restore from the persistent volume

Replace the filename with the one that’s applicable to you and save it as restore.yaml.

apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: rancher-migrate
spec:
  backupFilename: rancher-migrate-ceee0baa-0c4a-4b25-b15e-e554aa28f705-2021-06-01T21-04-31Z.tar.gz

B. Restore using an S3 bucket

The S3 restore requires a secret to be present for the S3 credentials, so the sample below contains both the secret and the Restore object. Replace all <> placeholders with the values applicable to you (removing the <> of course) and save it as restore.yaml.

Note: The secret is specified as stringData, which means you can enter your accessKey and secretKey as plaintext. Normally you’d use data instead of stringData, in which case they’d have to be base64 encoded.

Friendly reminder: the default region for buckets created by Minio is us-east-1.

---
apiVersion: v1
kind: Secret
metadata:
  name: s3-creds
  namespace: cattle-resources-system
type: Opaque
stringData:
  accessKey: <your-access-key>
  secretKey: <your-secret-key>
---
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: rancher-migrate
spec:
  backupFilename: rancher-migrate-ceee0baa-0c4a-4b25-b15e-e554aa28f705-2021-06-01T21-04-31Z.tar.gz
  prune: false
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: cattle-resources-system
      bucketName: <your-bucket-name>
      folder: <folder-on-bucket>
      region: <bucket-s3-region>
      endpoint: <hostname-to-bucket-so-without-http>
  2. Apply your YAML file to the cluster using kubectl apply -f restore.yaml.
  3. Use kubectl get pods -n cattle-resources-system to find out the name of your rancher-backup pod.

Migrate-rancher-28

  4. Use kubectl logs <pod-name> -n cattle-resources-system -f to monitor the logs of the container for errors. You’ll see it picks up the restore and starts to process it:

Migrate-rancher-29

  5. After a few minutes, it will finish with the message Done Restoring. Just before the end you might see WARN[2021/06/01 21:46:33] Error getting object for controllerRef rancher, skipping it – you can ignore this.

Migrate-rancher-30

Great! Our datastore has been restored, so it’s time to open our browser and… Nope, not yet. Two more steps to go. The datastore has been restored, but this doesn’t restore our deployment of Rancher itself; we still have to install Rancher manually. Once installed, it will automatically detect the data already present in the datastore and use that instead of triggering the first-time setup procedure. Also, there’s currently a bug that causes the local-path-provisioner to fail, so we’ll have to fix that.

11. Fixing local-path-provisioner

There’s currently an issue where, after you restore Rancher, the local-path-provisioner starts to fail. Luckily this is easily fixed. It might already be resolved by the time you use this guide, so we’ll first validate whether you’re affected.

  1. Let your cluster settle for a bit.
  2. Use kubectl get pods -n kube-system to check on our local-path-provisioner pod. Your output will look something like this:

Migrate-rancher-31

  3. Check the logs of that pod using kubectl logs <pod-name> -n kube-system. If it returns the error below, you’ve been affected by kubernetes-sigs/kubespray #7321:

time="2021-06-01T22:51:44Z" level=fatal msg="Error starting daemon: invalid empty flag helper-pod-file and it also does not exist at ConfigMap kube-system/local-path-config with err: configmaps \"local-path-config\" is forbidden: User \"system:serviceaccount:kube-system:local-path-provisioner-service-account\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: clusterrole.rbac.authorization.k8s.io \"local-path-provisioner-role\" not found"

  4. If you’re affected, save the following YAML (credits to ledroide) to your disk as role-local-path.yaml:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: local-path-provisioner-workaround
  namespace: kube-system
rules:
- apiGroups:
    - ''
  resources:
    - configmaps
  verbs:
    - get
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-path-provisioner-workaround
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: local-path-provisioner-workaround
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: kube-system
  5. Apply it to your cluster using kubectl apply -f role-local-path.yaml.
  6. Restart the deployment using kubectl rollout restart deployment/local-path-provisioner -n kube-system.

Migrate-rancher-32

  7. Run kubectl get pods -n kube-system to validate that the issue has been resolved.

Migrate-rancher-33

12. Installing Rancher on your new cluster

Almost done. This is our last step before having our beloved Rancher installation up and running, ready to manage our clusters again. We’ll do some housekeeping afterward, but this is the last step before your Rancher is back online.

  1. If you are using a new node for your cluster (i.e. you switched IP addresses), update your DNS records now and have them point to your new node. If you’re repurposing your old node, you can skip this.
  2. Add the rancher-stable helm repo to helm. This is different from the rancher-charts repository you added earlier. You can add the rancher-stable repo using:

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

helm repo update

  3. Before we install Rancher, we have one more decision to make: how is the HTTPS certificate going to be handled? Will you let Rancher generate a self-signed TLS certificate, use Let’s Encrypt for a valid certificate, or bring your own? Pick your option below and follow the required steps:

IMPORTANT: your hostname MUST be the same as it was on the single node docker installation. This is what your downstream clusters will be looking for to (re)connect with Rancher.

A. Rancher generated (self-signed) certificate

Replace the hostname (including the brackets) with the one of your Rancher installation (without protocol etc.) and run the following command:

helm install rancher rancher-stable/rancher --version=2.5.8 --namespace cattle-system --create-namespace --set hostname=<replace-with-your-hostname> --set replicas=1 --set ingress.tls.source=rancher

Note: We’re setting replicas to 1 because we only have 1 node in our cluster right now. If you’re switching to a HA cluster later on, you can update the replica count later to reflect the number of nodes.

B. Use Let’s Encrypt (cert-manager) to generate valid certificates (easiest)

Replace the hostname and e-mail (including the brackets) with the ones applicable to your rancher cluster and run the following command:

helm install rancher rancher-stable/rancher --version=2.5.8 --namespace cattle-system --create-namespace --set hostname=<replace-with-your-hostname>  --set replicas=1 --set ingress.tls.source=letsEncrypt --set letsEncrypt.email=<replace-with-your-email>

C. Bring your own certificate

Replace the hostname (including the brackets) with the one of your Rancher installation and run the command below. If you’re using a private CA, append --set privateCA=true to the command. After running it, follow “Adding TLS Certificates” to upload your certificates to Rancher.

helm install rancher rancher-stable/rancher --version=2.5.8 --namespace cattle-system --create-namespace --set hostname=<replace-with-your-hostname> --set replicas=1 --set ingress.tls.source=secret

Note: We’re setting replicas to one because we only have 1 node in our cluster right now. If you’re switching to a HA cluster later on, you can update the replica count later to reflect the number of nodes.

Install finished

Once you’ve run the helm command, whichever option you picked, the finished output should look something like this:

Migrate-rancher-34

  4. While Rancher is deploying, take note of all the --set options you used in the previous step. You’ll need those in the future when you upgrade your Rancher installation. (If you ever need to retrieve them, you can use helm get values rancher -n cattle-system > rancher-values.yaml and they’ll be exported to rancher-values.yaml.)
  5. Check up on the status of the installation using kubectl -n cattle-system rollout status deploy/rancher. After a couple of minutes (depending on your host, this could be 15-20 minutes) it should start informing you that the replicas are becoming available. If you see error: deployment "rancher" exceeded its progress deadline, you can check the status again using kubectl -n cattle-system get deploy rancher.

Migrate-rancher-35

  6. Once it’s finished, open your browser and navigate to your Rancher URL. You should see the Rancher GUI appear. In Firefox you might get an SSL error first if you used self-signed certificates and they changed; just refresh, and it’ll go through. If you’re prompted to log in, the credentials are the same as they were before. Once signed in, you should see your clusters.

NOTE: It’s perfectly normal if you see some of your clusters switch a couple of times between active and unavailable at this point. After a few minutes, they should settle down, and all show up as available.

Migrate-rancher-36

  7. After migrating a cluster like this, I’ve noticed that the rancher-webhook pod might get into an infinite restart loop, simply logging “unauthorized”. The solution is to delete the pod; doing so triggers a clean restart, after which it will work. You can easily remove this pod with the following command:

kubectl get pods -n cattle-system | grep 'rancher-webhook' | awk '{print $1}' | xargs kubectl delete pod -n cattle-system

And with that out of the way, congratulations! You’ve just successfully migrated your Rancher single-node docker install to a single node K3S cluster.

13. Cleaning up

During this guide, you’ve created a PV and installed the backup-restore-operator into your cluster in a way you’re probably not going to keep using. To remove them, run a few simple commands:

  1. helm uninstall rancher-backup -n cattle-resources-system
  2. helm uninstall rancher-backup-crd -n cattle-resources-system
  3. kubectl delete pv rancher-backup

14. Extra: Update Rancher to 2.6.3

So we’ve migrated Rancher to our shiny new Kubernetes cluster, but you can’t help noticing that the versions we’ve migrated to are slightly outdated. Well, you’d be correct. This guide was actually written a little while ago, and I hadn’t gotten around to publishing it until now. Fortunately, this gives me the opportunity to show you how to update and maintain your Rancher and K3s installation. So without further ado, let’s get updating.

What we’ll be doing

  • Re-install Backup Restore Operator to take a back-up; this time it will correctly function with hostPath.
  • Take a backup.
  • Update Rancher using Helm along with the values we used to install it.
  • Update K3s.

Important: If you manage any downstream clusters that run on K3s (clusters you control from inside Rancher): There’s an issue updating the Kubernetes version of those in Rancher 2.6.0-2.6.2 when upgrading Rancher from 2.5.x. It will fail because it cannot find the service account it’s looking for. This should be fixed in Rancher 2.6.3 (which was released 4 weeks ago), but if you happen to run into this issue as I did, the solution can be found in this ticket.

Steps:

  1. Re-install Rancher backup restore operator and take a back-up as we did in steps two through four.
  2. SSH into your Rancher host machine and confirm the backup is present on the host in the directory that you specified, in this guide /rancher-backup.
  3. When upgrading a deployment via helm, you always have to (re-)specify the values you passed during installation, because an upgrade could mean adjusting the config or changing the version. The easiest way to do this is to retrieve the old values and save them to a YAML file using helm get values rancher -n cattle-system > rancher-values.yaml. Note: Helm uses the context from your kubeconfig file. If you’re updating later down the line, ensure you’ve loaded the right context. If you’ve closed your terminal since the earlier procedure, check out section eight for a refresher. (And perhaps consider merging your kubeconfig. I can also recommend the plugins kubectx and kubens for switching between contexts and namespaces.)
  4. In order to update the Rancher deployment, we first have to make sure our helm repos are actually up to date. We can update these using helm repo update.
  5. Once you’ve done that, you can upgrade your Rancher deployment using helm upgrade rancher rancher-stable/rancher -n cattle-system -f rancher-values.yaml --version 2.6.3. This will upgrade your Rancher deployment to v2.6.3 using the values you provided during the original installation. If you want to test your command before running it, you can add --dry-run to make sure it’s all valid.

Steps to take next

There are a couple of things I would suggest you do next:

Take the K3s class if you’re new to K3s

If you just want to keep using your Rancher installation as a single-node cluster, you can simply do so. If this is your first time dealing with K3s and you want to learn more about it (including backup and restore), I can recommend the free “Up and Running: K3s” class in the SUSE & Rancher community.

Set up a cronjob to restart K3s every 14 days to keep its certificates fresh

K3s uses certificates internally that are valid for 365 days. K3s renews those certificates on restart when they’re within 90 days of expiration. The easiest way to keep them renewed is to create a cronjob that restarts K3s. Don’t worry: your Rancher workload will keep running while you do this, because the K3s control plane can restart independently of its workloads.

To create a cronjob that restarts K3s every 14 days (so there are multiple attempts within the 90-day window in case one fails), use the following steps:

  1. SSH into your cluster.
  2. Open crontab using sudo crontab -e; if it asks for an editor and you don’t know what to pick, choose nano.
  3. At the bottom of the file, add the following line:

0 3 */14 * * sudo service k3s restart

  4. For nano: hit Ctrl+X, then press y followed by Enter to save the changes.
  5. Your K3s will now restart every 14 days at 3 AM.

Add your other nodes and make your setup fully HA

If you’ve followed this guide with the intention of migrating to a HA setup, now is the time to start adding your nodes to the cluster. Depending on your option, pick one of the following links for the next steps:

Once you’ve done that, you have a multi-master setup, but your kubeconfig is still pointing at only one node at a time. Also, your downstream clusters are currently getting round-robin DNS, which will fail if one of the nodes actually goes down. Depending on where you’ve chosen to host the Rancher installation, there are a few solutions for this:

  • Create a cloud-based TCP load balancer that redirects traffic to the individual nodes and detects when they’re down.
  • Install kube-vip as a service type load-balancer and have that target your nodes using a floating IP.
  • Install MetalLB, create a service type load-balancer and use that to distribute the workloads.

Summary

You’ve just migrated your Rancher single-node Docker installation into a Kubernetes cluster! In doing so, you’ve gained the benefits of Kubernetes’ excellent lifecycle management, as well as the other benefits that come with Kubernetes. You’ve also future-proofed your Rancher installation, since the Docker installation will eventually be deprecated.

I hope you found this guide helpful in successfully migrating your Rancher installation to a Kubernetes cluster. This is the first time I’ve ever written a guide on anything Rancher / Kubernetes / K3s related so feedback is welcome.

I’ve also published this guide on my blog if you want to link to it somewhere public: https://vashiru.tech/migrating-rancher-single-node-docker-to-a-ha-kubernetes-cluster/.

Rancher Desktop 0.7 – Now with Docker CLI Support, The Ability To Run On Apple Silicon, and More

Thursday, 16 December, 2021

The latest release of Rancher Desktop brings two new features along with numerous bug fixes and other changes.

Docker CLI / Moby / dockerd

Many people are familiar with the Docker CLI. It has features that other container management CLI tools have not yet implemented, and we wanted to support them. Starting with Rancher Desktop 0.7, there is support for the Docker CLI.

You choose your runtime, which determines whether you use nerdctl or docker as your CLI, when you start Rancher Desktop for the very first time. If you want to change it later, you can do so in the Kubernetes Settings.

The Docker CLI communicates over a socket to a container runtime other than containerd. This meant that Rancher Desktop needed to support a second container runtime. This happens through the use of Moby and is similar to the way many popular Linux distributions provide Docker CLI support. Moby provides dockerd (the container runtime). Only one container runtime is in use at a time.

The socket needed for the runtime and CLI to communicate can be used by other tools, such as k3d. That means you can use k3d to manage multiple k3s environments.
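
For example, with the dockerd (Moby) runtime selected, you can point k3d at Rancher Desktop's Docker socket the same way you would on a standalone Docker install; the cluster name here is arbitrary:

k3d cluster create demo
kubectl config use-context k3d-demo
kubectl get nodes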

This second image shows the updated Kubernetes Settings screen with the new Container Runtime toggle. The first screenshot shows the styling in dark mode while this second screenshot shows the settings in light mode.

Apple Silicon Support

Alongside the most recent round of updates to the MacBook Pro laptops, Apple stopped shipping Intel-based MacBook Pros. Apple is well into its transition to Apple Silicon based computers. To support that, Rancher Desktop now ships a build that runs on Apple Silicon.

This is the first version with support, so there are some caveats. First, Rosetta 2 is needed to run this version: some components that Rancher Desktop uses aren’t available as Apple Silicon builds. A completely native environment is on the roadmap. Second, cross-building between amd64 and ARM isn’t supported yet; running Rancher Desktop on Apple Silicon is an ARM-based experience.

Networking On Mac

There have been requests on Mac for a routable IP address to the virtual machine where the containers and Kubernetes are running. As of v0.7.0, that is now possible.

Linux Repositories

When we first released Linux support for Rancher Desktop, the RPMs and Debian packages were attached to the release on GitHub. This didn’t provide a good means for installing and managing software using the typical methods on Linux. Starting with v0.7.0, there are now repositories that you can use to install and update Rancher Desktop from.

Some Linux desktop environments may have trouble with these repositories. In addition to the repositories, an AppImage of Rancher Desktop is available. This version currently has some limitations, such as the lack of a self-update feature; we are working on it.

Other Updates

In addition to these large changes there are a number of other small changes including:

  • Helm has been updated to v3.7.2
  • nerdctl, which has seen active development to add many new features, has been updated to v0.15
  • Many small bugs have been fixed

Next Steps

There are several next steps:

Deploying and Serving a Web Application on Kubernetes with Docker, K3s and Knative

Monday, 14 June, 2021

This article takes a working TODO application written in Flask and JavaScript, backed by a MongoDB database, and shows how to deploy it onto Kubernetes. This post is geared toward beginners: if you do not have access to a Kubernetes cluster, fear not!

We’ll use K3s, a lightweight Kubernetes distribution that is excellent for getting started quickly. But first, let’s talk about what we want to achieve.

First, I’ll introduce the example application. It is kept intentionally simple, but it illustrates a common use case. Then we’ll go through the process of containerizing the application. Before we move on, I’ll talk about how containers can ease development, especially for shortening developer ramp-up time in a team or when working in a fresh environment.

Once we have containerized the applications, the next step is deploying them onto Kubernetes. While we can create Services, Ingresses and Gateways manually, we can use Knative to stand up our application in no time at all.

Setting Up the App

We will work with a simple TODO application that demonstrates a front end, REST API back end and MongoDB working in concert. Credits go to Prashant Shahi for coming up with the example application. I have made some minor changes purely for pedagogical purposes.

First, git clone the repository:

git clone https://github.com/benjamintanweihao/Flask-MongoDB-K3s-KNative-TodoApp

Next, let’s inspect the directory to get the lay of the land:

% cd Flask-MongoDB-K3s-KNative-TodoApp
% tree

The folder structure is a typical Flask application. The entry point is app.py, which also contains the REST APIs. The templates folder consists of the files that would be rendered as HTML.

├── app.py
├── requirements.txt
├── static
│   ├── assets
│   │   ├── style.css
│   │   ├── twemoji.js
│   │   └── twemoji.min.js
└── templates
    ├── index.html
    └── update.html

Open app.py and we can see all the major pieces:

mongodb_host = os.environ.get('MONGO_HOST', 'localhost')
mongodb_port = int(os.environ.get('MONGO_PORT', '27017'))
client = MongoClient(mongodb_host, mongodb_port)
db = client.camp2016
todos = db.todo 

app = Flask(__name__)
title = "TODO with Flask"

@app.route("/list")
def lists ():
    #Display the all Tasks
    todos_l = todos.find()
    a1="active"
    return render_template('index.html',a1=a1,todos=todos_l,t=title,h=heading)

if __name__ == "__main__":
    env = os.environ.get('APP_ENV', 'development')
    port = int(os.environ.get('PORT', 5000))
    debug = False if env == 'production' else True
    app.run(host='0.0.0.0', port=port, debug=debug)

From the above code snippet, you can see that the application requires MongoDB as the database. With the lists() method, you can then see an example of how a route is defined (i.e. @app.route("/list")), how data is fetched from MongoDB, and finally, how the template is rendered.

Another thing to notice here is the use of environment variables for MONGO_HOST and MONGO_PORT, as well as Flask-related environment variables. The most important is debug: when set to True, the Flask server automatically reloads when it detects any changes. This is especially handy during development and is something we’ll exploit.

Developing with Docker Containers

When working on applications, I used to spend a lot of time setting up my environment and installing all the dependencies before I could get up and running and add new features. And that only describes the ideal scenario, right?

How often have you gone back to an application that you developed (say six months ago), only to find out that you are slowly descending into dependency hell? Dependencies are often a moving target; unless you lock things down, your application might not work properly. One way to get around this is to package all the dependencies into Docker containers.

Another nice thing that Docker brings is automation. That means no more copying and pasting commands and setting up things like databases.

Dockerizing the Flask Application

Here’s the Dockerfile:

FROM alpine:3.7
COPY . /app
WORKDIR /app

RUN apk add --no-cache bash git nginx uwsgi uwsgi-python py2-pip \
    && pip2 install --upgrade pip \
    && pip2 install -r requirements.txt \
    && rm -rf /var/cache/apk/*

EXPOSE 5000
ENTRYPOINT ["python"]

We start with a minimal (in terms of size and functionality) base image. Then, the application’s contents go into the container’s directory. Next, we execute a series of commands to install Python, the Nginx web server and all the Flask application’s requirements. These are exactly the steps needed to set up the application on a fresh system.

You can build the Docker container like so:

% docker build -t <yourusername>/todo-app .

You should see something like this:

# ...
Successfully built c650af8b7942
Successfully tagged benjamintanweihao/todo-app:latest

What about MongoDB?

Should you go through the same process of creating a Dockerfile for MongoDB? The good news is that, more often than not, someone else has already done it. In our case: https://hub.docker.com/_/mongo. However, you now have two containers, with the Flask container depending on the MongoDB one.

One way is to start the MongoDB container first, followed by the Flask one. However, let’s say you want to add caching and decide to bring in a Redis container. The process of starting each container by hand gets old fast. The solution is Docker Compose, a tool that lets you define and run multiple Docker containers together, which is exactly the situation we have here.

Docker Compose

Here’s the Docker compose file, docker-compose.yaml:

services:
  flaskapp:
    build: .
    image: benjamintanweihao/todo-app:latest
    ports:
      - 5000:5000
    container_name: flask-app
    environment:
      - MONGO_HOST=mongo
      - MONGO_PORT=27017
    networks:
      - todo-net
    depends_on:
      - mongo
    volumes:
      - .:/app # <--- 
  mongo:
    image: mvertes/alpine-mongo
    ports:
      - 27017:27017
    networks:
      - todo-net

networks:
  todo-net:
    driver: bridge

Even if you’re unfamiliar with Docker Compose, the YAML file presented here isn’t complicated. Let’s go through the important bits.

At the highest level, this file defines services (composed of flaskapp and mongo) and networks. Specifying a bridged network creates a connection so that the containers defined in services can communicate with each other.

Each service defines the image, along with the port mappings, and the network defined earlier. Environment variables have also been defined in flaskapp (look at app.py to see that they are indeed the same ones.)

I want to call your attention to the volumes specified in flaskapp. What we are doing here is mapping the current directory of the host (which should be the project directory containing app.py) to the /app directory of the container. Why are we doing this? Recall that in the Dockerfile, we copied the app into the /app directory like so:

COPY . /app

Now imagine that you want to make a change to the app. You wouldn’t be able to easily change app.py in the container. By mapping over the local directory, you are essentially overwriting the app.py in the container with the local copy in your directory. So assuming that the Flask application is in debug mode (it is if you have not changed anything at this point), when you launch the containers and make a change, the rendered output reflects the change.

However, it is important to realize that the app.py baked into the image is still the old version, so you will still need to remember to build a new image. (Hopefully, you have CI/CD set up to do this automatically!)
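For example, once you are happy with a change, rebuilding and publishing the image might look like this (a small sketch, reusing the <yourusername> placeholder from earlier):

# bake the updated app.py into a fresh image and publish it
docker build -t <yourusername>/todo-app:latest .
docker push <yourusername>/todo-app:latest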

Enough talk; let’s see this in action. Run the following command:

docker-compose up

This is what you should see:

Creating network "flask-mongodb-k3s-knative-todoapp_my-net" with driver "bridge"
Creating flask-mongodb-k3s-knative-todoapp_mongo_1 ... done
Creating flask-app                                 ... done
Attaching to flask-mongodb-k3s-knative-todoapp_mongo_1, flask-app
# ... more output truncated
flask-app   |  * Serving Flask app "app" (lazy loading)
flask-app   |  * Environment: production
flask-app   |    WARNING: Do not use the development server in a production environment.
flask-app   |    Use a production WSGI server instead.
flask-app   |  * Debug mode: on
flask-app   |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask-app   |  * Restarting with stat
mongo_1     | 2021-05-15T15:41:37.993+0000 I NETWORK  [listener] connection accepted from 172.23.0.1:48844 #2 (2 connections now open)
mongo_1     | 2021-05-15T15:41:37.993+0000 I NETWORK  [conn2] received client metadata from 172.23.0.1:48844 conn2: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "", architecture: "x86_64", version: "5.8.0-53-generic" }, platform: "CPython 2.7.15.final.0" }
flask-app   |  * Debugger is active!
flask-app   |  * Debugger PIN: 183-021-098

Now head to http://localhost:5000 in your browser.

If the TODO page loads, congratulations! Flask and Mongo are working properly together. Feel free to play around with the application to get a feel for it.
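If you prefer the command line, you can also hit the /list route defined in app.py to confirm the API responds; this is just a quick sanity check:

curl -s http://localhost:5000/list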

Now let’s make a tiny change to app.py, updating the heading of the application:

index d322672..1c447ba 100644
--- a/app.py
+++ b/app.py
-heading = "tOdO Reminder"
+heading = "TODO Reminder!!!!!"

Save the file and reload the app in your browser; the new heading should appear.

Once you are done, you can issue the following command:

docker-compose down

Getting the Application onto Kubernetes

Now comes the fun part. Up to this point, we have containerized our application and its supporting services (just MongoDB for now). How can we start to deploy our application onto Kubernetes?

Before that, let’s install Kubernetes. For this, I’m picking K3s because it’s the easiest way to install Kubernetes and get up and running quickly.

% curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --no-deploy=traefik"  sh -s -

In a few moments, you will have Kubernetes installed:

[INFO]  Finding release for channel stable
[INFO]  Using v1.20.6+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.20.6+k3s1/sha256sum-amd64.txt
# truncated ...
[INFO]  systemd: Starting k3s

Verify that K3s has been set up properly:

% kubectl get no
NAME      STATUS   ROLES                  AGE     VERSION
artemis   Ready    control-plane,master   2m53s   v1.20.6+k3s1

MongoDB

There are multiple ways of deploying MongoDB onto the cluster. You could use the image we created, a MongoDB operator or Helm; here, we will use the Bitnami Helm chart.
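If the Bitnami chart repository is not already configured on your machine, you will likely need to add it first (a small setup step; the URL below is the public Bitnami charts repository):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

With the repository in place, install the chart: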

helm install mongodb-release bitnami/mongodb --set architecture=standalone --set auth.enabled=false
** Please be patient while the chart is being deployed **

MongoDB(R) can be accessed on the following DNS name(s) and ports from within your cluster:

    mongodb-release.default.svc.cluster.local

To connect to your database, create a MongoDB(R) client container:

    kubectl run --namespace default mongodb-release-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.6-debian-10-r0 --command -- bash

Then, run the following command:
    mongo admin --host "mongodb-release"

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/mongodb-release 27017:27017 &
    mongo --host 127.0.0.1

Install Knative and Istio

In this post, we will be using Knative. Knative builds on Kubernetes, making it easy for developers to deploy and run applications without knowing many of the gnarly details of Kubernetes.

Knative is made up of two parts: Serving and Eventing. In this section, we will deal with the Serving portion. With Knative Serving, you can create scalable, secure, and stateless services in a matter of seconds, and that is what we will do with our TODO app! Before that, let’s install Knative:

The following instructions are based on https://knative.dev/docs/install/install-serving-with-yaml/:

kubectl apply -f https://github.com/knative/serving/releases/download/v0.22.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.22.0/serving-core.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/v0.22.0/istio.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/v0.22.0/net-istio.yaml

This sets up Knative and Istio. You might be wondering why we need Istio. The reason is that Knative requires an Ingress controller to perform things like traffic splitting (for example, version 1 and version 2 of the TODO app running concurrently) and automatic HTTP request retries.

Are there alternatives to Istio? At this point, I am only aware of one: Gloo. Traefik is not supported now, so we had to disable it when installing K3s. Since Istio is the default and the most supported, we’ll go with it.

Now wait till all the knative-serving pods are running:

kubectl get pods --namespace knative-serving -w
NAME                                READY   STATUS    RESTARTS   AGE
controller-57956677cf-2rqqd         1/1     Running   0          3m39s
webhook-ff79fddb7-mkcrv             1/1     Running   0          3m39s
autoscaler-75895c6c95-2vv5b         1/1     Running   0          3m39s
activator-799bbf59dc-t6v8k          1/1     Running   0          3m39s
istio-webhook-5f876d5c85-2hnvc      1/1     Running   0          44s
networking-istio-6bbc6b9664-shtd2   1/1     Running   0          44s

Setting up a Custom Domain

By default, Knative Serving uses example.com as its domain. If you have set up K3s as per the instructions, you should have a load balancer installed. This means that with a little setup, you can create a custom domain using a “magic” DNS service like sslip.io.

sslip.io is a DNS service that, when queried with a hostname containing an embedded IP address, returns that IP address. For example, a URL such as 192.168.0.1.sslip.io will point to 192.168.0.1. This is excellent for experimenting, since you don’t have to go buy your own domain name.
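You can convince yourself with any DNS lookup tool; this query is only an illustration and is independent of the rest of the tutorial:

# the answer should point back to 192.168.0.1
nslookup 192.168.0.1.sslip.io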

Go ahead and apply the following manifest:

kubectl apply -f https://storage.googleapis.com/knative-nightly/serving/latest/serving-default-domain.yaml

If you open serving-default-domain.yaml, you will notice the following in the spec:

# other parts truncated
spec:
  serviceAccountName: controller
  containers:
    - name: default-domain
      image: ko://knative.dev/serving/cmd/default-domain
      args: ["-magic-dns=sslip.io"]

This enables the “magic” DNS that you will use in the next step.
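If you want to double-check the result, Knative keeps the configured domain in the config-domain ConfigMap in the knative-serving namespace, so a quick (purely optional) verification could be:

kubectl get configmap config-domain --namespace knative-serving -o yaml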

Testing that Everything Works

Download the kn binary. You can find the links here: https://knative.dev/development/client/install-kn/. Be sure to rename the binary to kn and place it somewhere in your $PATH. Once you get that sorted out, go ahead and create the sample Hello World service. I have already pushed the benjamintanweihao/helloworld-python image to Docker Hub:

% kn service create helloworld-python --image=docker.io/benjamintanweihao/helloworld-python --env TARGET="Python Sample v1"

This results in the following output:

Creating service 'helloworld-python' in namespace 'default':

  0.037s The Route is still working to reflect the latest desired specification.
  0.099s Configuration "helloworld-python" is waiting for a Revision to become ready.
 29.277s ...
 29.314s Ingress has not yet been reconciled.
 29.446s Waiting for load balancer to be ready
 29.605s Ready to serve.

Service 'helloworld-python' created to latest revision 'helloworld-python-00001' is available at URL:
http://helloworld-python.default.192.168.86.26.sslip.io

To list all the deployed Knative services in all namespaces, you can do:

% kn service list -A

With kubectl, this becomes:

% kubectl get ksvc -A

To delete the service, it is as simple as:

kn service delete helloworld-python # or kubectl delete ksvc helloworld-python

If you haven’t done so, ensure the todo-app image has been pushed to DockerHub. (If you are unfamiliar with pushing images to DockerHub, the DockerHub Quickstart is a great place to start.) Remember to replace {username} with your DockerHub ID:

% docker push {username}/todo-app:latest

Once the image has been pushed, you can then use the kn command to create the TODO service. Remember to replace {username} with your DockerHub ID:

kn service create todo-app --image=docker.io/{username}/todo-app --env MONGO_HOST="mongodb-release.default.svc.cluster.local" 

If everything went well, you will see this:

Creating service 'todo-app' in namespace 'default':

  0.022s The Route is still working to reflect the latest desired specification.
  0.085s Configuration "todo-app" is waiting for a Revision to become ready.
  4.586s ...
  4.608s Ingress has not yet been reconciled.
  4.675s Waiting for load balancer to be ready
  4.974s Ready to serve.

Service 'todo-app' created to latest revision 'todo-app-00001' is available at URL:
http://todo-app.default.192.168.86.26.sslip.io

Now head over to http://todo-app.default.192.168.86.26.sslip.io (or whatever has been printed on the last line of the previous output) and you should see the application! Now take a step back and consider what Knative has done for you: with a single command, it spun up a service and gave you a URL you can use to reach it.
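You can also exercise the service from the command line; the URL below is the one from my cluster, so substitute whatever kn printed for yours:

curl -s http://todo-app.default.192.168.86.26.sslip.io/list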

I’ve barely scratched the surface with Knative, but I hope this motivates you to learn more about it! When I started looking at Knative, I didn’t quite understand what it did. Hopefully, the example sheds some light on the awesomeness of Knative and its convenience.

Conclusion

In this article, we took a whirlwind tour: starting with a web application built in Python and backed by MongoDB, we learned how to:

  1. Containerize the TODO application using Docker
  2. Use Docker to alleviate dependency hell
  3. Use Docker for development
  4. Use Docker Compose to package multiple containers
  5. Install K3s
  6. Install Knative (Serving) and Istio
  7. Use Helm to deploy MongoDB
  8. Use Knative to deploy the TODO application in a single line

While migrating an application to Kubernetes is certainly not a trivial task, containerizing your application usually gets you halfway there. Of course, there are still many things that weren’t covered, such as security and scaling.

K3s is an excellent platform to test and run Kubernetes workloads and is especially useful when running on a laptop/desktop.

One of the highlights of Knative is to “Stand up a scalable, secure, stateless service in seconds,” and as you saw, it delivers on that promise.

I will cover more about Knative and go deeper into its core features in a future article. I hope you can take what you have read here and adapt it to your applications!


Beyond Docker: A Look at Alternatives to Container Management

Monday, 24 May, 2021

A deep dive into container stacks and the choices the ecosystem provides

Docker appeared in 2013 and popularized the idea of containers to the point that most people still equate the notion of a container to a “Docker container.”

Being first in its category, Docker set some standards that newcomers must adhere to. For example, there is a large repository of Docker system images. All of the alternatives had to use the same image format while trying, at the same time, to change one or more parts of the entire stack on which Docker was based.

In the meantime, new container standards appeared, and the container ecosystem grew in different directions. Now there are many ways to work with containers besides Docker.

In this blog post, I will

  • introduce chroot, cgroups and namespaces as the technical foundation of containers
  • define the software stack that Docker is based upon
  • state the standards that Docker and Kubernetes adhere to and then
  • describe alternative solutions which try to replace the original Docker containers with better and more secure components.

Software Stack for Containers

Linux features such as chroot calls, cgroups and namespaces help containers run in isolation from all other processes and thus guarantee safety during runtime.

Chroot

All Docker-like technologies have their roots in the root directory of a Unix-like operating system (OS). The root directory sits at the top of the file system, and all other directories branch off from it.

On Linux, the root directory is both the basis of the file system and the starting point of all other directories. This is dangerous in the long term, as any unwanted deletion in the root directory affects the entire OS. That’s why the chroot() system call exists: it creates additional root directories, such as one to run legacy software, another to contain databases, and so on.

To the processes inside those environments, the chroot directory appears to be the true root directory, but in reality the kernel simply prepends the new root’s pathname to any name starting with /. The real root directory still exists, and a process can potentially refer to locations beyond its designated root.
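If you want to experiment, the call is exposed through the chroot command. This is only a sketch and assumes /srv/jail already contains a minimal root filesystem (a shell binary plus the libraries it needs):

# start a shell whose view of "/" is /srv/jail; paths outside it become invisible
sudo chroot /srv/jail /bin/sh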

Linux cgroups

Control groups (cgroups) have been a feature of the Linux kernel since version 2.6.24 in 2008. A cgroup will limit, isolate and measure usage of system resources (memory, CPU, network and I/O) for several processes at once.

Let’s say we want to prevent our users from sending too many email messages from the server. We create a cgroup with a memory limit of 1GB and 50 percent of the CPU, and add the application’s process ID to the group. The system will throttle the email-sending process when these limits are reached. It may even kill the process, depending on the hosting strategy.
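On a modern systemd-based distribution, you could sketch such a limit without touching the cgroup filesystem directly; the mail job path below is hypothetical:

# run the (hypothetical) mail job in a transient cgroup capped at 1GB of RAM and half a CPU
sudo systemd-run --scope -p MemoryMax=1G -p CPUQuota=50% /usr/local/bin/send-newsletter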

Namespaces

A Linux namespace is another useful abstraction layer. A namespace allows us to have many process hierarchies, each with its own nested “subtree.” A namespace can use a global resource and present it to its members as if it were their own.

Here’s an example. A Linux system starts with a process identifier (PID) of 1, and all other processes are contained in its tree. The PID namespace allows us to spawn a new tree with its own PID 1 process. There are then two PIDs with the value of 1. Each namespace can spawn its own namespaces, and the same process can have several PIDs attached to it.

A process in a child namespace has no idea that the parent’s processes exist, while the parent namespace has access to the entire child namespace.
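You can watch a PID namespace in action with the unshare utility from util-linux; a minimal sketch:

# start a shell in a new PID namespace with its own /proc
sudo unshare --pid --fork --mount-proc /bin/bash
ps aux   # inside the new namespace, the shell shows up as PID 1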

There are seven types of namespaces: cgroup, IPC, network, mount, PID, user and UTS.

Network Namespace

Some resources are scarce. By convention, some ports have predefined roles and should not be used for anything else: port 80 only serves HTTP calls, port 443 only serves HTTPS calls and so on. In a shared hosting environment, two or more sites may want to listen for HTTP requests on port 80, but only the one that grabs it first gets to; it would not let any other app access that port. The first app would be visible on the internet, while all the others would be invisible.

The solution is to use network namespaces, with which inner processes will see different network interfaces.

In one network namespace, the same port can be open, while in another, it may be shut down. For this to work, we must adopt additional “virtual” network interfaces, which belong to several namespaces simultaneously. There also must be a router process somewhere in the middle, to connect requests coming to a physical device to the appropriate namespace and the process in it.
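The ip command lets you poke at network namespaces directly, independent of any container engine; a small sketch:

# create a namespace, list its (almost empty) interfaces, then bind port 80 inside it
sudo ip netns add demo
sudo ip netns exec demo ip link list
sudo ip netns exec demo python3 -m http.server 80   # does not clash with port 80 on the host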

Complicated? Yes! That’s why Docker and similar tools are so popular. Let’s now introduce Docker and see how it compares to its alternatives.

Docker: Containers Everyone!

Before containers came to rule the world of cloud computing, virtual machines were quite popular. If you have a Windows machine but want to develop mobile apps for iOS, you can either buy a new Mac (an expensive but excellent solution) or run a macOS virtual machine on the Windows hardware (a cheap but slow and unreliable solution). VMs can also be clumsy: they often gobble up resources they do not need and are usually slow to start (up to a minute).

Enter containers.

Containers are standard units of software that have everything needed for the program to run: the operating system, databases, images, icons, software libraries, code and everything else. A container also runs in isolation from all other containers and even from the OS itself. Containers are lightweight compared to VMs, so they can start fast and are easily replaced.

To run isolated and protected, containers are based on chroot, cgroups and namespaces.

The image of a container is a template from which the application is formed on the actual machine. Creating as many containers as needed from a single image is possible. A text document called Dockerfile contains all the information needed to assemble an image.

The true revolution that Docker brought was the creation of a registry of Docker images and the development of the Docker Engine, with which those images ran everywhere in the same manner. Because Docker was first and widely adopted, an implicit world standard for container images was formed, and all eventual competitors had to pay attention to it.

CRI and OCI

The Open Container Initiative (OCI) publishes specifications for images and containers. It was started in 2015 by Docker and was embraced by Microsoft, Facebook, Intel, VMware, Oracle and many other industry giants.

OCI also maintains a reference implementation of its runtime specification. It is called runc, and it deals directly with containers: it creates them, runs them and so on.

The Container Runtime Interface (CRI) is a Kubernetes API that defines how Kubernetes interacts with container runtimes. Because it too is standardized, you can choose which CRI implementation to adopt.

Software Stack for Containers with CRI and OCI

The software stack that runs containers has Linux as its most basic part; above it sits an OCI runtime such as runc, then a CRI implementation such as containerd or CRI-O, and finally an orchestrator such as Kubernetes.

Note that containerd and CRI-O both adhere to the CRI and OCI specifications. For Kubernetes, it means that it can use either containerd or CRI-O without the user ever noticing the difference. It can also use any of the other alternatives that we are now going to mention – which was exactly the goal when software standards such as OCI and CRI were created and adopted.

Docker Software Stack

The software stack for Docker is:

  • docker-cli, the Docker command line interface for developers
  • containerd, originally written by Docker, later spun off as an independent project; it implements the CRI specification
  • runc, which implements the OCI spec
  • containers (using chroot, cgroups, namespaces, etc.)

The software stack for Kubernetes is almost the same; instead of containerd, Kubernetes uses CRI-O, a CRI implementation created by Red Hat/IBM and others.

containerd

containerd runs as a daemon on Linux and Windows. It loads images, executes them as containers, supervises low-level storage and takes care of the entire container runtime and lifecycle.

Containerd started as a part of Docker in 2014 and in 2017 became a part of the Cloud Native Computing Foundation (CNCF). The CNCF is a vendor-neutral home for Kubernetes, Prometheus, Envoy, containerd, CRI-O, Podman and other cloud-based software.

runc

runc is a reference implementation for the OCI specification. It creates and runs containers and the processes within them. It uses lower-level Linux features, such as cgroups and namespaces.

Alternatives to runc include kata-runtime and gVisor; CRI-O, covered below, sits one level higher as a CRI implementation.

kata-runtime implements the OCI specification by running containers as individual lightweight VMs using hardware virtualization. Its runtime is compatible with OCI, CRI-O and containerd, so it works seamlessly with Docker and Kubernetes.

gVisor from Google creates containers that have their own kernel. It implements OCI through a program called runsc, which integrates with Docker and Kubernetes. A container with its own kernel is more secure than without, but it is not a panacea, and there is a penalty to pay in resource usage with that approach.

CRI-O, a container stack designed purely for Kubernetes, was the first implementation of the CRI standard. It pulls images from any container registry and serves as a lightweight alternative to using Docker.

Today it supports runc and Kata Containers as the container runtimes, but any other OCI-compatible runtime can also be plugged in (at least, in theory).

It is a CNCF incubating project.

Podman

Podman is a daemonless Docker alternative. Its commands are intentionally as compatible with Docker as possible, so you can create an alias and keep typing “docker” on the command line while Podman does the work.
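In practice that can be as simple as the following (a sketch; the image is just an example):

alias docker=podman
docker pull docker.io/library/alpine
docker run --rm docker.io/library/alpine echo "hello from podman"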

Podman aims to replace Docker, so sticking to the same set of commands makes sense. Podman tries to improve on two problems in Docker.

First, Docker always executes through an internal daemon, a single process running in the background. If it fails, the whole system fails.

Second, Docker runs as a background process with root privileges, so when you give access to a new user, you are actually giving access to the entire server.

Podman is a remote Linux client that runs containers directly from the operating system. You can also run them completely rootless.

It downloads images from DockerHub and runs them in exactly the same way as Docker, with exactly the same commands.

Podman runs the commands and the images as a user other than root, so it is more secure than Docker.

On the other hand, many tools developed for Docker are not available on Podman, such as Portainer and Watchtower. Moving away from Docker means sacrificing your established workflow.

Podman has a directory structure similar to buildah, skopeo and CRI-O. Its pods are also very similar to Kubernetes pods.

Developed by Red Hat, Podman is a player to watch in this space.

Honorable Mention: LXC/LXD

Introduced in 2008, the LXC (LinuX Containers) stack was the first upstream-kernel container technology on Linux. The first version of Docker used LXC, but Docker later moved away from it after implementing runc.

The goal of LXC is to run multiple isolated Linux virtual environments on a control host using a single Linux kernel. To that end, it uses cgroups functionality without needing to start any virtual machines; it also uses namespaces to completely isolate the application from the underlying system.

LXC aims to create system containers, almost like you would have in a virtual machine – but without the overhead that comes from trying to emulate the entire virtualized hardware.

LXC does not emulate hardware or bundle unnecessary packages; it contains only the needed applications, so it executes at almost bare-metal speed. In contrast, virtual machines contain the entire OS and then emulate hardware such as hard drives, virtual processors and network interfaces.

So, LXC is small and fast while VMs are big and slow. On the other hand, LXC environments cannot be packaged into ready-made, quickly deployable machines and are difficult to manage through GUI management consoles. LXC requires strong technical skills, and the result may be an optimized machine that is incompatible with other environments.

LXC vs Docker Approach

LXC is like a supercharged chroot on Linux and produces “small” servers that boot faster and need less RAM. Docker, however, offers much more:

  • Portable deployment across machines: the object that you create with one version of Docker can be transferred and installed onto any other Docker-enabled Linux host.
  • Versioning: Docker can track versions in a git-like manner – you can create new versions of a container, roll them back and so on.
  • Reusing components: With Docker, you can stack already created packages into new packages. If you want a LAMP environment, you can install its components once and then reuse them as an already pre-made LAMP image.
  • Docker image archive: hundreds of thousands of Docker images can be downloaded from dedicated sites, and it is very easy to upload a new image to one such repository.

Finally, LXC is geared toward system admins while Docker is more geared to developers. That’s why Docker is more popular.

LXD

LXD has a privileged daemon that exposes a REST API over a local UNIX socket and over the network (if enabled). You can access it through a command line tool, but it always communicates with REST API calls. It will always function the same whether the client is on your local machine or somewhere on a remote server.

LXD can scale from one local machine to several thousand remote machines. Like Docker, it is image-based, with images available for the more popular Linux distributions. Canonical, the company that owns Ubuntu, is financing the development of LXD, so it will always run on the latest versions of Ubuntu and other similar Linux operating systems.

LXD integrates seamlessly with OpenNebula and OpenStack standards.

Technically, LXD is written “on top” of LXC (both are using the same liblxc library and Go language to create containers) but the goal of LXD is to improve user experience compared to LXC.

Docker Forever or Not?

Docker boasts 11 million developers, 7 million applications and 13 billion monthly image downloads. To say that Docker is still the leader would be an understatement. However, this article shows that replacing one or more parts of the Docker software stack is possible, often without compatibility problems. Alternatives do exist, many with security as their main improvement over what Docker offers.

Introduction to k3d: Run K3s in Docker

Wednesday, 3 March, 2021

In this blog post, we’re going to talk about k3d, a tool that allows you to run throwaway Kubernetes clusters anywhere you have Docker installed. I’ve anticipated your questions…so let’s go!

What is k3d?

k3d is a small program made for running a K3s cluster in Docker. K3s is a lightweight, CNCF-certified Kubernetes distribution and Sandbox project. Designed for low-resource environments, K3s is distributed as a single binary that uses under 512MB of RAM. To learn more about K3s, head over to the documentation or check out some of our blog posts and videos.

k3d uses a Docker image built from the K3s repository to spin up multiple K3s nodes in Docker containers on any machine with Docker installed. That way, a single physical (or virtual) machine (let’s call it Docker Host) can run multiple K3s clusters, with multiple server and agent nodes each, simultaneously.

What Can k3d Do?

As of k3d version v4.0.0, released in January 2021, k3d’s abilities boil down to the following features:

  • create/stop/start/delete/grow/shrink K3s clusters (and individual nodes)
    • via command line flags
    • via configuration file
  • manage and interact with container registries that can be used with the cluster
  • manage Kubeconfigs for the clusters
  • import images from your local Docker daemon into the container runtime running in the cluster

Obviously, there’s way more to it and you can tweak everything in great detail.

What is k3d Used For?

The main use case for k3d is local development on Kubernetes with little hassle and resource usage. The intention behind the initial development of k3d was to provide developers with an easy tool that allowed them to run a lightweight Kubernetes cluster on their development machine, giving them fast iteration times in a production-like environment (as opposed to running docker-compose locally vs. Kubernetes in production).

Over time, k3d also evolved into a tool used by operations to test some Kubernetes (or, specifically K3s) features in an isolated environment. For example, with k3d you can easily create multi-node clusters, deploy something on top of it, simply stop a node and see how Kubernetes reacts and possibly reschedules your app to other nodes.

Additionally, you can use k3d in your continuous integration system to quickly spin up a cluster, deploy your test stack on top of it and run integration tests. Once you’re finished, you can simply decommission the cluster as a whole. No need to worry about proper cleanup and possible leftovers.

We also provide a k3d-dind image (similar to dreams within dreams in the movie Inception, we’ve got containers within containers within containers.) With that, you can create a docker-in-docker environment where you run k3d, which spawns a K3s cluster in Docker. That means that you only have a single container (k3d-dind) running on your Docker host, which in turn runs a whole K3s/Kubernetes cluster inside.

How Do I Use k3d?

  1. Install k3d (and kubectl, if you want to use it)
    • Note: to follow along with this post, use at least k3d v4.1.1
  2. Try one of the following examples or use the documentation or the CLI help text to find your own way (k3d [command] --help)

The “Simple” Way

k3d cluster create

This single command spawns a K3s cluster with two containers: A Kubernetes control-plane node (server) and a load balancer (serverlb) in front of it. It puts both of them in a dedicated Docker network and exposes the Kubernetes API on a randomly chosen free port on the Docker host. It also creates a named Docker volume in the background as a preparation for image imports.

By default, if you don’t provide a name argument, the cluster will be named k3s-default and the containers will show up as k3d-<cluster-name>-<role>-<#>, so in this case k3d-k3s-default-serverlb and k3d-k3s-default-server-0.

k3d waits until everything is ready, pulls the Kubeconfig from the cluster and merges it with your default Kubeconfig (usually it’s in $HOME/.kube/config or whatever path your KUBECONFIG environment variable points to).

No worries, you can tweak that behavior as well.
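For example, you could opt out of the automatic merge and pull the Kubeconfig yourself when you need it. The flag and subcommand below are taken from k3d v4 and may differ in other versions:

# create a cluster without touching the default Kubeconfig, then fetch its config explicitly
k3d cluster create mycluster --kubeconfig-update-default=false
k3d kubeconfig get mycluster > /tmp/mycluster-kubeconfig.yaml
KUBECONFIG=/tmp/mycluster-kubeconfig.yaml kubectl get nodes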

Check out what you’ve just created using kubectl to show you the nodes: kubectl get nodes.

k3d also gives you some commands to list your creations: k3d cluster|node|registry list.

The “Simple but Sophisticated” Way

k3d cluster create mycluster --api-port 127.0.0.1:6445 --servers 3 --agents 2 --volume '/home/me/mycode:/code@agent[*]' --port '8080:80@loadbalancer'

This single command spawns a K3s cluster with six containers:
* 1 load balancer
* 3 servers (control-plane nodes)
* 2 agents (formerly worker nodes)

With the --api-port 127.0.0.1:6445, you tell k3d to map the Kubernetes API Port (6443 internally) to 127.0.0.1/localhost’s port 6445. That means that you will have this connection string in your Kubeconfig: server: https://127.0.0.1:6445 to connect to this cluster.

This port is mapped from the load balancer to your host system. From there, requests are proxied to your server nodes, effectively simulating a production setup where server nodes can go down and you want to fail over to another server.

The --volume /home/me/mycode:/code@agent[*] bind mounts your local directory /home/me/mycode to the path /code inside all ([*]) of your agent nodes. Replace * with an index (here: 0 or 1) to mount it into only one of them.

The specification telling k3d which nodes it should mount the volume to is called “node filter” and it’s also used for other flags, like the --port flag for port mappings.

That said, --port '8080:80@loadbalancer' maps your local host’s port 8080 to port 80 on the load balancer (serverlb), which can be used to forward HTTP ingress traffic to your cluster. For example, you can now deploy a web app into the cluster (Deployment), which is exposed (Service) externally via an Ingress such as myapp.k3d.localhost.

Then (provided that everything is set up to resolve that domain to your local host IP), you can point your browser to http://myapp.k3d.localhost:8080 to access your app. Traffic then flows from your host through the Docker bridge interface to the load balancer. From there, it’s proxied to the cluster, where it passes via Ingress and Service to your application Pod.
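A minimal sketch of that flow, using kubectl's imperative create commands (the app name and image are placeholders, and kubectl create ingress requires a reasonably recent kubectl), might look like this:

# deploy a web app and expose it inside the cluster
kubectl create deployment myapp --image=nginx
kubectl create service clusterip myapp --tcp=80:80

# route myapp.k3d.localhost to that Service via an Ingress
kubectl create ingress myapp --rule="myapp.k3d.localhost/*=myapp:80"

# with the 8080:80 mapping from above, http://myapp.k3d.localhost:8080 should now reach the app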

Note: You have to have some mechanism set up to resolve myapp.k3d.localhost to your local host IP (127.0.0.1).
The most common way is adding an entry of the form 127.0.0.1 myapp.k3d.localhost to your /etc/hosts file (C:\Windows\System32\drivers\etc\hosts on Windows).
However, hosts files don't allow wildcard entries (*.localhost), so this can become a bit cumbersome after a while; you may want to have a look at tools like dnsmasq (macOS/UNIX) or Acrylic (Windows) to ease the burden.

Tip: On some systems (at least Linux operating systems including SUSE Linux and openSUSE), you can install the package libnss-myhostname to auto-resolve *.localhost domains to 127.0.0.1. That way you don't have to fiddle around with /etc/hosts at all if you prefer to test via Ingress, where you need to set a domain.
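For example, a single dnsmasq rule is enough to resolve every *.k3d.localhost subdomain to your local machine (a sketch; where the config file lives depends on your setup):

# e.g. in /etc/dnsmasq.d/k3d.conf
address=/k3d.localhost/127.0.0.1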

One interesting thing to note here: if you create more than one server node, K3s will be given the --cluster-init flag, which means that it swaps its internal datastore (by default that’s SQLite) for etcd.

The “Configuration as Code” Way

As of k3d v4.0.0 (January 2021), we support config files, so you can define as code everything that you previously had to set via command line flags (and soon possibly even more than that).
As of this writing, the JSON-Schema used to validate the configuration file can be found in the repository.

Here’s an example config file:

# k3d configuration file, saved as e.g. /home/me/myk3dcluster.yaml
apiVersion: k3d.io/v1alpha2  # this will change in the future as we make everything more stable
kind: Simple  # internally, we also have a Cluster config, which is not yet available externally
name: mycluster  # name that you want to give to your cluster (will still be prefixed with `k3d-`)
servers: 1  # same as `--servers 1`
agents: 2  # same as `--agents 2`
kubeAPI:  # same as `--api-port 127.0.0.1:6445`
  hostIP: "127.0.0.1"
  hostPort: "6445"
ports:
  - port: 8080:80  # same as `--port 8080:80@loadbalancer`
    nodeFilters:
      - loadbalancer
options:
  k3d:  # k3d runtime settings
    wait: true  # wait for cluster to be usable before returning; same as `--wait` (default: true)
    timeout: "60s"  # wait timeout before aborting; same as `--timeout 60s`
  k3s:  # options passed on to K3s itself
    extraServerArgs:  # additional arguments passed to the `k3s server` command
      - --tls-san=my.host.domain
    extraAgentArgs: []  # additional arguments passed to the `k3s agent` command
  kubeconfig:
    updateDefaultKubeconfig: true  # add new cluster to your default Kubeconfig; same as `--kubeconfig-update-default` (default: true)
    switchCurrentContext: true  # also set current-context to the new cluster's context; same as `--kubeconfig-switch-context` (default: true)

Assuming that we saved this as /home/me/myk3dcluster.yaml, we can use it to configure a new cluster:

k3d cluster create --config /home/me/myk3dcluster.yaml

Note that you can still set additional arguments or flags, which will then take precedence over (or be merged with) whatever you have defined in the config file.
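For example, you could reuse the same config file but override a single value on the command line:

# everything from the config file, but with 3 agents instead of 2
k3d cluster create --config /home/me/myk3dcluster.yaml --agents 3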

What More Can I Do with k3d?

You can use k3d in even more ways, including:

  • Create a cluster together with a k3d-managed container registry (see the sketch after this list)
  • Use the cluster for fast development with hot code reloading
  • Use k3d in combination with other development tools like Tilt or Skaffold
    • both can leverage the power of importing images via k3d image import
    • both can alternatively make use of a k3d-managed registry to speed up your development loop
  • Use k3d in your CI system (we have a PoC for that)
  • Integrate it in your vscode workflow using the awesome new community-maintained vscode extension
  • Use it to set up K3s high availability
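For the first item, combining a k3d-managed registry with a cluster might look roughly like this (registry name, port and image are placeholders; check the k3d registry documentation for details):

# create a registry and a cluster that is configured to use it
k3d registry create myregistry --port 5555
k3d cluster create demo --registry-use k3d-myregistry:5555

# alternatively, import locally built images straight into the cluster
k3d image import myapp:latest --cluster demo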

You can try all of these yourself using the prepared scripts in this demo repository, or watch us show them off in one of our meetups.

Other than that, remember that k3d is a community-driven project, so we’re always happy to hear from you on Issues, Pull-Requests, Discussions and Slack Chats!