Managing Sensitive Data in Kubernetes with Sealed Secrets and External Secrets Operator (ESO)

Thursday, 31 March, 2022

Having multiple environments that can be dynamically configured has become a staple of modern software development. This is especially true in an enterprise context where software release cycles typically consist of separate compute environments like dev, stage and production. These environments are usually distinguished by the data that drives the specific behavior of the application.

For example, an application may have three different sets of database credentials for authentication (AuthN) purposes. Each set of credentials would be respective to an instance for a particular environment. This approach essentially allows software developers to interact with a developer-friendly database when carrying out their day-to-day coding. Similarly, QA testers can have an isolated stage database for testing purposes. As you would expect, the production database environment would be the real-world data store for end-users or clients.

To accomplish application configuration in Kubernetes, you can either use ConfigMaps or Secrets. Both serve the same purpose, except Secrets, as the name implies, are used to store very sensitive data in your Kubernetes cluster. Secrets are native Kubernetes resources saved in the cluster data store (i.e., etcd database) and can be made available to your containers at runtime.

However, using Secrets optimally isn’t so straightforward. Some inherent risks exist around Secrets, most of which stem from the fact that, by default, Secrets are stored in a non-encrypted format (base64 encoding) in the etcd datastore. This introduces the challenge of safely storing Secret manifests in private or public repositories. Some security measures that can be taken include: encrypting Secrets, using centralized secrets managers, limiting administrative access to the cluster, enabling encryption of data at rest in the cluster datastore and enabling TLS/SSL between the datastore and Pods.

In this post, you’ll learn how to use Sealed Secrets for “one-way” encryption of your Kubernetes Secrets and how to securely access and expose sensitive data as Secrets from centralized secret management systems with the External Secrets Operator (ESO).

 

Using Sealed Secrets for one-way encryption

One of the key advantages of Infrastructure as Code (IaC) is that it allows teams to store their configuration scripts and manifests in git repositories. However, because of the nature of Kubernetes Secrets, this is a huge risk because the original sensitive credentials and values can easily be derived from the base64 encoding format.

``` yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: dXNlcg==
  password: cGFzc3dvcmQ=
```

Therefore, as a secure workaround, you can use Sealed Secrets. As stated above, Sealed Secrets allow for “one-way” encryption of your Kubernetes Secrets and can only be decrypted by the Sealed Secrets controller running in your target cluster. This mechanism is based on public-key encryption, a form of cryptography consisting of a public key and a private key pair: one can be used for encryption, and only the other can decrypt what was encrypted. The controller generates the key pair, publishes the public key certificate to its logs and exposes it over an HTTP API.

To use Sealed Secrets, you have to deploy the controller to your target cluster and download the kubeseal CLI tool.

  • Sealed Secrets Controller – This component extends the Kubernetes API and enables lifecycle operations of Sealed Secrets in your cluster.
  • kubeseal CLI Tool – This tool uses the generated public key certificate to encrypt your Secret into a Sealed Secret.

Once generated, the Sealed Secret manifests can be stored in a git repository or shared publicly without any ramifications. When you create these Sealed Secrets in your cluster, the controller decrypts them and retrieves the original Secrets, making them available in your cluster as usual. Below is a step-by-step guide on how to accomplish this.

To carry out this tutorial, you will need to be connected to a Kubernetes cluster. For a lightweight solution on your local machine, you can use Rancher Desktop.

To download kubeseal, you can select the binary for your respective OS (Linux, Windows, or Mac) from the GitHub releases page. Below is an example for Linux.

``` bash
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.17.3/kubeseal-linux-amd64 -O kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
```
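To confirm the CLI was installed correctly, you can print its version:

``` bash
kubeseal --version
```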

The Sealed Secrets controller can be installed either via Helm or kubectl; this example will use the latter. This will install the Custom Resource Definitions (CRDs), RBAC resources and the controller.

``` bash
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/controller.yaml
kubectl apply -f controller.yaml
```

You can ensure that the relevant Pod is running as expected by executing the following command:

``` bash
kubectl get pods -n kube-system | grep sealed-secrets-controller
```

Once it is running, you can retrieve the generated public key certificate using kubeseal and store it on your local disk.

``` bash
kubeseal --fetch-cert > public-key-cert.pem
```

You can then create a Secret and seal it with kubeseal. This example uses the manifest detailed at the start of this section, saved as secret.yaml, but you can change the key-value pairs under the data field as you see fit.

``` bash
kubeseal --cert=public-key-cert.pem --format=yaml < secret.yaml > sealed-secret.yaml
```

The generated output will look something like this:

``` yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: my-secret
  namespace: default
spec:
  encryptedData:
    password: AgBvA5WMunIZ5rF9...
    username: AgCCo8eSORsCbeJSoRs/...
  template:
    data: null
    metadata:
      creationTimestamp: null
      name: my-secret
      namespace: default
    type: Opaque
```

This manifest can be used to create the Sealed Secret in your cluster with kubectl and afterward stored in a git repository without the concern of any individual accessing the original values.

``` bash
kubectl create -f sealed-secret.yaml
```

You can then proceed to review the secret and fetch its values.

``` bash
kubectl get secret my-secret -o jsonpath="{.data.username}" | base64 --decode
kubectl get secret my-secret -o jsonpath="{.data.password}" | base64 --decode
```

 

Using External Secrets Operator (ESO) to access Centralized Secrets Managers

Another good practice for managing your Secrets in Kubernetes is to use centralized secrets managers. Secrets managers are hosted third-party platforms used to store sensitive data securely. These platforms typically offer encryption of your data at rest and expose an API for lifecycle management operations such as creating, reading, updating, deleting, or rotating secrets. In addition, they have audit logs for trails and visibility and fine-grained access control for operations on stored secrets. Examples of secrets managers include HashiCorp Vault, AWS Secrets Manager, IBM Secrets Manager, Azure Key Vault, Akeyless, Google Secrets Manager, etc. Such systems put organizations in a better position to centralize the management, auditing and security of their secrets. The next question is, “How do you get secrets from your secrets manager to Kubernetes?” The answer to that question is the External Secrets Operator (ESO).

The External Secrets Operator is a Kubernetes operator that enables you to integrate with and read values from your external secrets management system and insert them as Secrets in your cluster. ESO extends the Kubernetes API with the following main API resources:

  • SecretStore – This is a namespaced resource that determines how your external Secret will be accessed from an authentication perspective. It contains references to Secrets that have the credentials to access the external API.
  • ClusterSecretStore – As the name implies, this is a global or cluster-wide SecretStore that can be referenced from all namespaces to provide a central gateway to your secrets manager.
  • ExternalSecret – This resource declares the data you want to fetch from the external secrets manager. It will reference the SecretStore to know how to access sensitive data.

Below is an example of how to access data from AWS Secrets Manager and make it available in your K8s cluster as a Secret. As a prerequisite, you will need to create an AWS account. A free-tier account will suffice for this demonstration.

You can create a secret in AWS Secrets Manager as the first step. If you’ve got the AWS CLI installed and configured with your AWS profile, you can use the CLI tool to create the relevant Secret.

``` bash
aws secretsmanager create-secret --name <name-of-secret> --description <secret-description> --secret-string <secret-value> --region <aws-region>
```
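For example, a command along these lines would create the Secret used in the rest of this section (the name alias, the JSON value and the eu-west-1 region match the values assumed later in this walk-through; the description is arbitrary):

``` bash
aws secretsmanager create-secret \
  --name alias \
  --description "Demo secret for the ESO walk-through" \
  --secret-string '{"first":"alpha","second":"beta"}' \
  --region eu-west-1
```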

Alternatively, you can create the Secret using the AWS Management Console.

In either case, the Secret in this example is named "alias" and has the following values:

``` json
{
  "first": "alpha",
  "second": "beta"
}
```

After you’ve created the Secret, create an IAM user with programmatic access and safely store the generated AWS credentials (access key ID and a secret access key). Make sure to limit this user’s service and resource permissions in a custom IAM Policy.

``` json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": [
        "arn:aws:secretsmanager:<aws-region>:<aws-account-id>:secret:<secret-name>"
      ]
    }
  ]
}
```

Once that is done, you can install the ESO with Helm.

``` bash
helm repo add external-secrets https://charts.external-secrets.io

helm install external-secrets \
  external-secrets/external-secrets \
  -n external-secrets \
  --create-namespace
```
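It's worth verifying that the operator's Pod has started successfully in the namespace created above:

``` bash
kubectl get pods -n external-secrets
```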

Next, you can create the Secret that the SecretStore resource will reference for authentication. You can optionally seal this Secret using the approach demonstrated in the previous section that deals with encrypting Secrets with kubeseal.

``` yaml
apiVersion: v1
kind: Secret
metadata:
  name: awssm-secret
type: Opaque
data:
  accessKeyID: PUtJQTl11NKTE5...
  secretAccessKey: MklVpWFl6f2FxoTGhid3BXRU1lb1...
```
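Sealing it follows the same pattern as before. Assuming the manifest above is saved as awssm-secret.yaml and the public key certificate is still on disk, the command would be:

``` bash
kubeseal --cert=public-key-cert.pem --format=yaml < awssm-secret.yaml > sealed-secret.yaml
```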

If you seal your Secret, you should get output like the code block below.

``` yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: awssm-secret
  namespace: default
spec:
  encryptedData:
    accessKeyID: Jcl1bC6LImu5u0khVkPcNa==...
    secretAccessKey: AgBVMUQfSOjTdyUoeNu...
  template:
    data: null
    metadata:
      creationTimestamp: null
      name: awssm-secret
      namespace: default
    type: Opaque
```

Next, you need to create the SecretStore.

``` yaml
apiVersion: external-secrets.io/v1alpha1
kind: SecretStore
metadata:
  name: awssm-secretstore
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: awssm-secret
            key: accessKeyID
          secretAccessKeySecretRef:
            name: awssm-secret
            key: secretAccessKey
```

The last resource to be created is the ExternalSecret.

``` yaml
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: awssm-external-secret
spec:
  refreshInterval: 1440m
  secretStoreRef:
    name: awssm-secretstore
    kind: SecretStore
  target:
    name: alias-secret
    creationPolicy: Owner
  data:
  - secretKey: first
    remoteRef:
      key: alias
      property: first
  - secretKey: second
    remoteRef:
      key: alias
      property: second
```

You can then chain the creation of these resources in your cluster with the following command:

``` bash
kubectl create -f sealed-secret.yaml,secret-store.yaml,external-secret.yaml
```

After this execution, you can review the results using any of the approaches below.

``` bash
kubectl get secret alias-secret -o jsonpath="{.data.first}" | base64 --decode
kubectl get secret alias-secret -o jsonpath="{.data.second}" | base64 --decode
```
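You can also inspect the ExternalSecret resource itself; its status should indicate whether the sync from AWS Secrets Manager succeeded:

``` bash
kubectl get externalsecret awssm-external-secret
```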

You can also create a basic Job to test its access to these external secrets values as environment variables. In a real-world scenario, make sure to apply fine-grained RBAC rules to Service Accounts used by Pods. This will limit the access that Pods have to the external secrets injected into your cluster.

``` yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-secret
spec:
  template:
    spec:
      containers:
        - name: busybox
          image: busybox
          command: ['sh', '-c', 'echo "First comes $ALIAS_SECRET_FIRST, then comes $ALIAS_SECRET_SECOND"']
          env:
            - name: ALIAS_SECRET_FIRST
              valueFrom:
                secretKeyRef:
                  name: alias-secret
                  key: first
            - name: ALIAS_SECRET_SECOND
              valueFrom:
                secretKeyRef:
                  name: alias-secret
                  key: second
      restartPolicy: Never
  backoffLimit: 3
```

You can then view the logs when the Job has been completed.
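Assuming the Job manifest above is saved as job.yaml, you can create it and then read its logs once the Pod has completed:

``` bash
kubectl create -f job.yaml
kubectl logs job/job-with-secret
```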

Conclusion

In this post, you learned that using Secrets in Kubernetes introduces risks that can be mitigated with encryption and centralized secrets managers. Furthermore, we covered how Sealed Secrets and the External Secrets Operator can be used as tools for managing your sensitive data. Alternative solutions that you can consider for encryption and management of your Secrets in Kubernetes are Mozilla SOPS and Helm Secrets. If you’re interested in a video walk-through of this post, you can watch the video below.

Let’s continue the conversation! Join the SUSE & Rancher Community, where you can further your Kubernetes knowledge and share your experience.

Running Serverless Applications on Kubernetes with Knative

Friday, 11 March, 2022

Kubernetes provides a set of primitives to run resilient, distributed applications. It takes care of scaling and automatic failover for your application and it provides deployment patterns and APIs that allow you to automate resource management and provision new workloads.

One of the main challenges that developers face is how to focus more on the details of the code rather than the infrastructure where that code runs. Serverless is one of the leading architectural paradigms for addressing this challenge. There are various platforms that allow you to run serverless applications either deployed as single functions or running inside containers, such as AWS Lambda, AWS Fargate, and Azure Functions. These managed platforms come with some drawbacks, like:

-Vendor lock-in

-Constraints on the size of the application binary/artifacts

-Cold start performance

You could be in a situation where you’re only allowed to run applications within a private data center, or you may be using Kubernetes but you’d like to harness the benefits of serverless. There are different open source platforms, such as Knative and OpenFaaS, that use Kubernetes to abstract the infrastructure from the developer, allowing you to deploy and manage your applications using serverless architecture and patterns. Using any of those platforms takes away the problems mentioned in the previous paragraph.

This article will show you how to deploy and manage serverless applications using Knative and Kubernetes.

Serverless Landscape

Serverless computing is a development model that allows you to build and run applications without having to manage servers. It describes a model where a cloud provider handles the routine work of provisioning, maintaining, and scaling the server infrastructure, while the developers can simply package and upload their code for deployment. Serverless apps can automatically scale up and down as needed, without any extra configuration by the developer.

As stated in a white paper by the CNCF serverless working group, there are two primary serverless personas:

-Developer: Writes code for and benefits from the serverless platform that provides them with the point of view that there are no servers and that their code is always running.

-Provider: Deploys the serverless platform for an external or internal customer.

The provider needs to manage servers (or containers) and will have some cost for running the platform, even when idle. A self-hosted system can still be considered serverless: Typically, one team acts as the provider and another as the developer.

In the Kubernetes landscape, there are various ways to run serverless apps. It can be through managed serverless platforms like IBM Cloud Code and Google Cloud Run, or open source alternatives that you can self-host, such as OpenFaaS and Knative.

Introduction to Knative

Knative is a set of Kubernetes components that provides serverless capabilities. It provides an event-driven platform that can be used to deploy and run applications and services that can auto-scale based on demand, with out-of-the-box support for monitoring, automatic renewal of TLS certificates, and more.

Knative is used by a lot of companies. In fact, it powers the Google Cloud Run platform, IBM Cloud Code Engine, and Scaleway serverless functions.

The basic deployment unit for Knative is a container that can receive incoming traffic. You give it a container image to run and Knative handles every other component needed to run and scale the application. The deployment and management of the containerized apps are handled by one of the core components of Knative, called Knative Serving. Knative Serving is the component in Knative that manages the deployment and rollout of stateless services, plus its networking and autoscaling requirements.

The other core component of Knative is called Knative Eventing. This component provides an abstract way to consume Cloud Events from internal and external sources without writing extra code for different event sources. This article focuses on Knative Serving but you will learn about how to use and configure Knative Eventing for different use-cases in a future article.

Development Set Up

In order to install Knative and deploy your application, you’ll need a Kubernetes cluster and the following tools installed:

-Docker

-kubectl, the Kubernetes command-line tool

-kn CLI, the CLI for managing Knative application and configuration

Installing Docker

To install Docker, go to the URL docs.docker.com/get-docker and download the appropriate binary for your OS.

Installing kubectl

The Kubernetes command-line tool kubectl allows you to run commands against Kubernetes clusters. Docker Desktop installs kubectl for you, so if you followed the previous section in installing Docker Desktop, you should already have kubectl installed and you can skip this step. If you don’t have kubectl installed, follow the instructions below to install it.

If you’re on Linux or macOS, you can install kubectl using Homebrew by running the command brew install kubectl. Ensure that the version you installed is up to date by running the command kubectl version --client.

If you’re on Windows, run the command curl -LO https://dl.k8s.io/release/v1.21.0/bin/windows/amd64/kubectl.exe to install kubectl, and then add the binary to your PATH. Ensure that the version you installed is up to date by running the command kubectl version --client. You should have version 1.20.x or 1.21.x because in a later section, you’re going to create a cluster with Kubernetes version 1.21.x.

Installing kn CLI

The kn CLI provides a quick and easy interface for creating Knative resources, such as services and event sources, without the need to create or modify YAML files directly. kn also simplifies completion of otherwise complex procedures, such as autoscaling and traffic splitting.

To install kn on macOS or Linux, run the command brew install kn.

To install kn on Windows, download and install a stable binary from https://mirror.openshift.com/pub/openshift-v4/clients/serverless/latest. Afterward, add the binary to the system PATH.

Creating a Kubernetes Cluster

You need a Kubernetes cluster to run Knative. For this article, you’re going to work with a local Kubernetes cluster running on Docker. You should have Docker Desktop installed.

Create a Cluster with Docker Desktop

Docker Desktop includes a standalone Kubernetes server and client. This is a single-node cluster that runs within a Docker container on your local system and should be used only for local testing.

To enable Kubernetes support and install a standalone instance of Kubernetes running as a Docker container, go to Preferences > Kubernetes and then click Enable Kubernetes.

Click Apply & Restart to save the settings and then click Install to confirm, as shown in the image below.

Figure 1: Enable Kubernetes on Docker Desktop

This instantiates the images required to run the Kubernetes server as containers.

The status of Kubernetes shows in the Docker menu and the context points to docker-desktop, as shown in the image below.

Figure 2 : kube context

Alternatively, Create a Cluster with Kind

You can also create a cluster using kind, a tool for running local Kubernetes clusters using Docker container nodes. If you have kind installed, you can run the following command to create your kind cluster and set the kubectl context.

curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/01-kind.sh | sh

Install Knative Serving

Knative Serving manages service deployments, revisions, networking, and scaling. The Knative Serving component exposes your service via an HTTP URL and has safe defaults for its configurations.

For kind users, follow these instructions to install Knative Serving:

-Run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/02-serving.sh | sh to install Knative Serving.

-When that’s done, run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/02-kourier.sh | sh to install and configure Kourier.

For Docker Desktop users, run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-docker-desktop/main/demo.sh | sh.

Deploying Your First Application

Next, you’ll deploy a basic Hello World application so that you can learn how to deploy and configure an application on Knative. You can deploy an application using a YAML file and the kubectl command, or using the kn command and passing the right options. For this article, I’ll be using the kn command. The sample container image you’ll use is hosted on gcr.io/knative-samples/helloworld-go.

To deploy an application, you use the kn service create command, and you need to specify the name of the application and the container image to use.

Run the following command to create a service called hello using the image gcr.io/knative-samples/helloworld-go.

kn service create hello \
--image gcr.io/knative-samples/helloworld-go \
--port 8080 \
--env TARGET=World \
--revision-name=world

The command creates and starts a new service using the specified image and port. An environment variable is set using the --env option.

The revision name is set to world using the --revision-name option. Knative uses revisions to maintain the history of each change to a service. Each time a service is updated, a new revision is created and promoted as the current version of the application. This feature allows you to roll back to a previous version of the service when needed. Specifying a name for each revision makes it easy to identify them.

When the service is created and ready, you should get the following output printed in the console.

Service hello created to latest revision 'hello-world'
is available at URL: http://hello.default.127.0.0.1.nip.io

Confirm that the application is running by running the command curl http://hello.default.127.0.0.1.nip.io. You should get the output Hello World! printed in the console.

Update the Service

Suppose you want to update the service; you can use the kn service update command to make any changes to the service. Each change creates a new revision and directs all traffic to the new revision once it’s started and is healthy.

Update the TARGET environment variable by running the command:

kn service update hello \
--env TARGET=Coder \
--revision-name=coder

You should get the following output when the command has been completed.

Service 'hello' updated to latest revision
'hello-coder' is available at
URL: http://hello.default.127.0.0.1.nip.io

Run the curl command again and you should get Hello Coder! printed out.

~ curl http://hello.default.127.0.0.1.nip.io
~ Hello Coder!

Traffic Splitting and Revisions

Knative Revision is similar to a version control tag or label and it’s immutable. Every Knative Revision has a corresponding Kubernetes Deployment associated with it; it allows the application to be rolled back to any of the previous revisions. You can see the list of available revisions by running the command kn revisions list. This should print out a list of available revisions for every service, with information on how much traffic each revision gets, as shown in the image below. By default, each new revision gets routed 100% of traffic when created.

Figure 5 : Revision list

With revisions, you may wish to deploy applications using common deployment patterns such as canary or blue-green. You need more than one revision of a service in order to use these patterns. The hello service you deployed in the previous section already has two revisions, named hello-world and hello-coder respectively. You can split traffic 50/50 between the two revisions using the following command:

kn service update hello \
--traffic hello-world=50 \
--traffic hello-coder=50

Run the curl http://hello.default.127.0.0.1.nip.io command a few times to see that you get Hello World! sometimes, and Hello Coder! other times.
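To make the split easier to observe, a quick shell loop (a rough sketch) sends several requests in a row:

for i in $(seq 1 10); do curl -s http://hello.default.127.0.0.1.nip.io; done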

Figure 6 : Traffic Splitting

Autoscaling Services

One of the benefits of serverless is the ability to scale up and down to meet demand. When there’s no traffic coming in, it should scale down, and when it peaks, it should scale up to meet demand. Knative scales out the pods for a Knative Service based on inbound HTTP traffic. After a period of idleness (by default, 60 seconds), Knative terminates all of the pods for that service. In other words, it scales down to zero. This autoscaling capability is managed by the Knative Pod Autoscaler (KPA), which can also work in conjunction with the Horizontal Pod Autoscaler built into Kubernetes.

If you’ve not accessed the hello service for more than one minute, the pods should have already been terminated. Running the command kubectl get pod -l serving.knative.dev/service=hello -w should show you an empty result. To see the autoscaling in action, open the service URL in the browser and check back to see the pods started and responding to the request. You should get an output similar to what’s shown below.

Scaling Up

Scaling Down

There you have the awesome autoscaling capability of serverless.

If you have an application that is badly affected by the cold-start performance, and you’d like to keep at least one instance of the application running, you can do so by running the command kn service update <SERVICE_NAME> --scale-min <VALUE>. For example, to keep at least one instance of the hello service running at all times, you can use the command kn service update hello --scale-min 1.

What’s Next?

Kubernetes has become a standard tool for managing container workloads. A lot of companies rely on it to build and scale cloud native applications, and it powers many of the products and services you use today. Although companies are adopting Kubernetes and reaping some benefits, developers aren’t interested in the low-level details of Kubernetes and therefore want to focus on their code without worrying about the infrastructure bits of running the application.

Knative provides a set of tools and CLI that developers can use to deploy their code and have Knative manage the infrastructure requirement of the application. In this article, you saw how to install the Knative Serving component and deploy services to run on it. You also learned how to deploy services and manage their configuration using the kn CLI. If you want to learn more about how to use the kn CLI, check out this free cheat sheet I made at cheatsheet.pmbanugo.me/knative-serving.

In a future article, I’ll show you how to work with Knative Eventing and how your application can respond to Cloud Events in and out of your cluster.

In the meantime, you can get my book How to build a serverless app platform on Kubernetes. It will teach you how to build a platform to deploy and manage web apps and services using Cloud Native technologies. You will learn about serverless, Knative, Tekton, GitHub Apps, Cloud Native Buildpacks, and more!

Get your copy at books.pmbanugo.me/serverless-app-platform

Let’s continue the conversation! Join the SUSE & Rancher Community where you can further your Kubernetes knowledge and share your experience.

Automate Deployments to Amazon EKS with Skaffold and GitHub Actions

Monday, 28 February, 2022

Creating a DevOps workflow to optimize application deployments to your Kubernetes cluster can be a complex journey. I recently demonstrated how to optimize your local K8s development workflow with Rancher Desktop and Skaffold. If you haven’t seen it yet, you can watch it by viewing the video below.

You might be wondering, “What happens next?” How do you extend this solution beyond a local setup to a real-world pipeline with a remote cluster? This tutorial responds to that question and will walk you through how to create a CI/CD pipeline for a Node.js application using Skaffold and GitHub Actions to an EKS cluster.

All the source code for this tutorial can be found in this repository.

Objectives

By the end of this tutorial, you’ll be able to:

1. Configure your application to work with Skaffold

2. Configure a CI stage for automated testing and building with GitHub Actions

3. Connect GitHub Actions CI with Amazon EKS cluster

4. Automate application testing, building, and deploying to an Amazon EKS cluster.

Prerequisites

To follow this tutorial, you’ll need the following:

-An AWS account.

-AWS CLI is installed on your local machine.

-AWS profile configured with the AWS CLI. You will also use this profile for the CI stage in GitHub Actions.

-A DockerHub account.

-Node.js version 10 or higher installed on your local machine.

-kubectl is installed on your local machine.

-Have a basic understanding of JavaScript.

-Have a basic understanding of IaC (Infrastructure as Code).

-Have a basic understanding of Kubernetes.

-A free GitHub account, with git installed on your local machine.

-An Amazon EKS cluster. You can clone this repository that contains a Terraform module to provision an EKS cluster in AWS. The repository README.md file contains a guide on how to use the module for cluster creation. Alternatively, you can use `eksctl` to create a cluster automatically. Running an Amazon EKS cluster will cost you $0.10 per hour. Remember to destroy your infrastructure once you are done with this tutorial to avoid additional operational charges.

Understanding CI/CD Process

Getting your CI/CD process right is a crucial step in your team’s DevOps lifecycle. The CI step is essentially automating the ongoing process of integrating the software from the different contributors in a project’s version control system, in this case, GitHub. The CI automatically tests the source code for quality checks and makes sure the application builds as expected.

The continuous deployment step picks up from there and automates the deployment of your application using the successful build from the CI stage.

Create Amazon EKS cluster

As mentioned above, you can clone or fork this repository that contains the relevant Terraform source code to automate the provisioning of an EKS cluster in your AWS account. To follow this approach, ensure that you have Terraform installed on your local machine. Alternatively, you can also use eksctl to provision your cluster. The AWS profile you use for this step will have full administrative access to the cluster by default. To communicate with the created cluster via kubectl, ensure your AWS CLI is configured with the same AWS profile.

You can view and confirm the AWS profile in use by running the following command:

aws sts get-caller-identity

Once your K8s cluster is up and running, you can verify the connection to the cluster by running `kubectl cluster-info` or `kubectl config current-context`.

Application Overview and Dockerfile

The next step is to create a directory on your local machine for the application source code. This directory should have the following folder structure (in the code block below). Ensure that the folder is a git repository by running the `git init` command.
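Based on the files you will create throughout this tutorial, the structure will look roughly like this:

.
├── .github
│   └── workflows
│       └── main.yml
├── src
│   ├── app.js
│   ├── index.js
│   └── test
│       └── index.js
├── Dockerfile
├── manifests.yaml
├── package.json
└── skaffold.yaml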

Application Source Code

To create a package.json file from scratch, you can run the `npm init` command in the root directory and respond to the relevant questions you are prompted with. You can then proceed to install the following dependencies required for this project.

npm install body-parser cors express 
npm install -D chai mocha supertest nodemon

After that, add the following scripts to the generated package.json:

"scripts": {
  "start": "node src/index.js",
  "dev": "nodemon src/index.js",
  "test": "mocha 'src/test/**/*.js'"
},

Your final package.json file should look like the one below.

{
  "name": "nodejs-express-test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node src/index.js",
    "dev": "nodemon src/index.js",
    "test": "mocha 'src/test/**/*.js'"
  },
  "repository": {
    "type": "git",
    "url": "git+<your-github-uri>"
  },
  "author": "<Your Name>",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.19.0",
    "cors": "^2.8.5",
    "express": "^4.17.1"
  },
  "devDependencies": {
    "chai": "^4.3.4",
    "mocha": "^9.0.2",
    "nodemon": "^2.0.12",
    "supertest": "^6.1.3"
  }
}

Update the app.js file to initialize the Express web framework and add a single route for the application.

// Express App Setup
const express = require('express');
const http = require('http');
const bodyParser = require('body-parser');
const cors = require('cors');


// Initialization
const app = express();
app.use(cors());
app.use(bodyParser.json());


// Express route handlers
app.get('/test', (req, res) => {
  res.status(200).send({ text: 'Simple Node App Is Working As Expected!' });
});


module.exports = app;

Next, update the index.js in the root of the src directory with the following code to start the webserver and configure it to listen for traffic on port `8080`.

const http = require('http');
const app = require('./app');


// Server
const port = process.env.PORT || 8080;
const server = http.createServer(app);
server.listen(port, () => console.log(`Server running on port ${port}`));

The last step related to the application is the test folder, which will contain an index.js file with code to test the single route you added to the application.

const { expect } = require('chai');
const { agent } = require('supertest');
const app = require('../app');


const request = agent;


describe('Some controller', () => {
  it('Get request to /test returns some text', async () => {
    const res = await request(app).get('/test');
    const textResponse = res.body;
    expect(res.status).to.equal(200);
    expect(textResponse.text).to.be.a('string');
    expect(textResponse.text).to.equal('Simple Node App Is Working As Expected!');
  });
});

Application Dockerfile

Later on, we will configure Skaffold to use Docker to build our container image. You can proceed to create a Dockerfile with the following content:

FROM node:14-alpine
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install 
COPY . .
EXPOSE 8080
RUN chown -R node /usr/src/app
USER node
CMD ["npm", "start"]
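Skaffold will handle the image build in the CI stage, but you can optionally sanity-check the Dockerfile locally first (the image name below mirrors the placeholder used in the manifests):

docker build -t <your-docker-hub-account-id>/express-test .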

Kubernetes Manifest Files for Application

The next step is to add the manifest files with the resources that Skaffold will deploy to your Kubernetes cluster. These files will be deployed continuously based on the integrated changes from the CI stage of the pipeline. You will be deploying a Deployment with three replicas and a LoadBalancer service to proxy traffic to the running Pods. These resources can be added to a single file called manifests.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
 name: express-test
spec:
 replicas: 3
 selector:
   matchLabels:
     app: express-test
 template:
   metadata:
     labels:
       app: express-test
   spec:
     containers:
     - name: express-test
       image: <your-docker-hub-account-id>/express-test
       resources:
          limits:
            memory: 128Mi
            cpu: 500m
       ports:
       - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: express-test-svc
spec:
  selector:
    app: express-test
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

Skaffold Configuration File

In this section, you’ll populate your Skaffold configuration file (skaffold.yaml). This file determines how your application is built and deployed by the Skaffold CLI tool in the CI stage of your pipeline. The file specifies Docker as the image builder, with the Dockerfile you created earlier defining how the image should be built. By default, Skaffold will use the git commit to tag the image and render the Deployment manifest with this image tag.

This configuration file will also contain a step for testing the application’s container image by executing the `npm run test` command that we added to the scripts section of the package.json file. Once the image has been successfully built and tested, it will be pushed to your Docker Hub account in the repository that you specify in the tag prefix.

Finally, we’ll specify that we want Skaffold to use kubectl to deploy the resources defined in the manifests.yaml file.

The complete configuration file will look like this:

apiVersion: skaffold/v2beta26
kind: Config
metadata:
  name: nodejs-express-test
build:
  artifacts:
  - image: <your-docker-hub-account-id>/express-test
    docker:
      dockerfile: Dockerfile
test:
  - context: .
    image: <your-docker-hub-account-id>/express-test
    custom:
      - command: npm run test
deploy:
  kubectl:
    manifests:
    - manifests.yaml

GitHub Secrets and GitHub Actions YAML File

In this section, you will create a remote repository for your project in GitHub. In addition to this, you will add secrets for your CI environment and a configuration file for the GitHub Actions CI stage.

Proceed to create a repository in GitHub and complete the fields you will be presented with. This will be the remote repository for the local one you created in an earlier step.

After you’ve created your repository, go to the repo Settings page. Under Security, select Secrets > Actions. In this section, you can create sensitive configuration data that will be exposed during the CI runtime as environment variables.

Proceed to create the following secrets:

-AWS_ACCESS_KEY_ID – This is the AWS-generated Access Key ID for the profile you used to provision your cluster earlier.

-AWS_SECRET_ACCESS_KEY – This is the AWS-generated Secret Access Key for the profile you used to provision your cluster earlier.

-DOCKER_ID – This is the Docker ID for your DockerHub account.

-DOCKER_PW – This is the password for your DockerHub account.

-EKS_CLUSTER – This is the name you gave to your EKS cluster.

-EKS_REGION – This is the region where your EKS cluster has been provisioned.

Lastly, you are going to create a configuration file (main.yml) that will declare how the pipeline will be triggered, the branch to be used, and the steps that your CI/CD process should follow. As outlined at the start, this file will live in the .github/workflows folder and will be used by GitHub Actions.

The steps that we want to define are as follows:

-Expose our Repository Secrets as environment variables

-Install Node.js dependencies for the application

-Log in to Docker registry

-Install kubectl

-Install Skaffold

-Cache skaffold image builds & config

-Check that the AWS CLI is installed and configure your profile

-Connect to the EKS cluster

-Build and deploy to the EKS cluster with Skaffold

-Verify deployment

You can proceed to update the main.yml file with the following content.

name: 'Build & Deploy to EKS'
on:
  push:
    branches:
      - main
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  EKS_CLUSTER: ${{ secrets.EKS_CLUSTER }}
  EKS_REGION: ${{ secrets.EKS_REGION }}
  DOCKER_ID: ${{ secrets.DOCKER_ID }}
  DOCKER_PW: ${{ secrets.DOCKER_PW }}
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    env:
      ACTIONS_ALLOW_UNSECURE_COMMANDS: 'true'
    steps:
      # Install Node.js dependencies
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm install
      - run: npm test
      # Login to Docker registry
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_ID }}
          password: ${{ secrets.DOCKER_PW }}
      # Install kubectl
      - name: Install kubectl
        run: |
          curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
          curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
          echo "$(<kubectl.sha256) kubectl" | sha256sum --check


          sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
          kubectl version --client
      # Install Skaffold
      - name: Install Skaffold
        run: |
          curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && \
          sudo install skaffold /usr/local/bin/
          skaffold version
      # Cache skaffold image builds & config
      - name: Cache skaffold image builds & config
        uses: actions/cache@v2
        with:
          path: ~/.skaffold/
          key: fixed-${{ github.sha }}
      # Check AWS version and configure profile
      - name: Check AWS version
        run: |
          aws --version
          aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
          aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
          aws configure set region $EKS_REGION
          aws sts get-caller-identity
      # Connect to EKS cluster
      - name: Connect to EKS cluster 
        run: aws eks --region $EKS_REGION update-kubeconfig --name $EKS_CLUSTER
      # Build and deploy to EKS cluster
      - name: Build and then deploy to EKS cluster with Skaffold
        run: skaffold run
      # Verify deployment
      - name: Verify the deployment
        run: kubectl get pods

Once you’ve updated this file, you can commit all the changes in your local repository and push them to the remote repository you created.

git add .
git commit -m "other: initial commit"
git remote add origin <your-remote-repository>
git push -u origin <main-branch-name>

Reviewing Pipeline Success

After pushing your changes, you can track the deployment in the Actions page of the remote repository you set up in your GitHub profile.

Conclusion

This tutorial taught you how to create automated deployments to an Amazon EKS cluster using Skaffold and GitHub Actions. As mentioned in the introduction, all the source code for this tutorial can be found in this repository. If you’re interested in a video walk-through of this post, you can watch the video below.

Make sure to destroy the following infrastructure provisioned in your AWS account:

-Load Balancer created by service resource in Kubernetes.

-Amazon EKS cluster

-VPC and all networking infrastructure created to support EKS cluster

Let’s continue the conversation! Join the SUSE & Rancher Community where you can further your Kubernetes knowledge and share your experience.

Stupid Simple Kubernetes: Service Mesh

Wednesday, 16 February, 2022

We covered the what, when and why of Service Mesh in a previous post. Now I’d like to talk about why they are critical in Kubernetes. 

To understand the importance of using service meshes when working with microservices-based applications, let’s start with a story.  

Suppose that you are working on a big microservices-based banking application, where any mistake can have serious impacts. One day the development team receives a feature request to add a rating functionality to the application. The solution is obvious: create a new microservice that can handle user ratings. Now comes the hard part. The team must come up with a reasonable time estimate to add this new service.  

The team estimates that the rating system can be finished in 4 sprints. The manager is angry. He cannot understand why it is so hard to add a simple rating functionality to the app.  

To understand the estimate, let’s understand what we need to do in order to have a functional rating microservice. The CRUD (Create, Read, Update, Delete) part is easy — just simple coding. But adding this new project to our microservices-based application is not trivial. First, we have to implement authentication and authorization, then we need some kind of tracing to understand what is happening in our application. Because the network is not reliable (unstable connections can result in data loss), we have to think about solutions for retries, circuit breakers, timeouts, etc.  

We also need to think about deployment strategies. Maybe we want to use shadow deployments to test our code in production without impacting the users. Maybe we want to add A/B testing capabilities or canary deployments. So even if we create just a simple microservice, there are lots of cross-cutting concerns that we have to keep in mind.  

Sometimes it is much easier to add new functionality to an existing service than create a new service and add it to our infrastructure. It can take a lot of time to deploy a new service, add authentication and authorization, configure tracing, create CI/CD pipelines, implement retry mechanisms and more. But adding the new feature to an existing service will make the service too big. It will also break the rule of single responsibility, and like many existing microservices projects, it will be transformed into a set of connected macroservices or monoliths. 

We call this the cross-cutting concerns burden — the fact that in each microservice you must reimplement the cross-cutting concerns, such as authentication, authorization, retry mechanisms and rate limiting. 

What is the solution to this burden? Is there a way to implement all these concerns once and inject them into every microservice, so the development team can focus on producing business value? The answer is Istio.  

Set Up a Service Mesh in Kubernetes Using Istio  

Istio solves these issues using sidecars, which it automatically injects into your pods. Your services won’t communicate directly with each other — they’ll communicate through sidecars. The sidecars will handle all the cross-cutting concerns. You define the rules once, and these rules will be injected automatically into all of your pods.   

Sample Application 

Let’s put this idea into practice. We’ll build a sample application to explain the basic functionalities and structure of Istio.  

In the previous post, we created a service mesh by hand, using envoy proxies. In this tutorial, we will use the same services, but we will configure our Service Mesh using Istio and Kubernetes.  

The image below depicts that application architecture.  

 

To follow along, you will need:

  1. Kubernetes (we used version 1.21.3 in this tutorial)
  2. Helm (we used v2)
  3. Istio (we used 1.1.17) – setup tutorial
  4. Minikube, K3s or a Kubernetes cluster enabled in Docker

Git Repository 

My Stupid Simple Service Mesh in Kubernetes repository contains all the scripts for this tutorial. Based on these scripts you can configure any project. 

Running Our Microservices-Based Project Using Istio and Kubernetes 

As I mentioned above, step one is to configure Istio to inject the sidecars into each of your pods from a namespace. We will use the default namespace. This can be done using the following command: 

kubectl label namespace default istio-injection=enabled 

In the second step, we navigate into the /kubernetes folder from the downloaded repository, and we apply the configuration files for our services: 

kubectl apply -f service1.yaml 
kubectl apply -f service2.yaml 
kubectl apply -f service3.yaml 
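To confirm that the sidecar injection worked, you can list the Pods; each one should report two containers (the application container plus the istio-proxy sidecar):

kubectl get pods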

After these steps, we will have the green part up and running: 

 

For now, we can’t access our services from the browser. In the next step, we will configure the Istio Ingress and Gateway, allowing traffic from the exterior. 

The gateway configuration is as follows: 

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
    name: http-gateway
spec:
    selector:
        istio: ingressgateway
    servers:
        - port:
              number: 80
              name: http
              protocol: HTTP
          hosts:
              - "*"

Using the selector istio: ingressgateway, we specify that we would like to use the default ingress gateway controller, which was automatically added when we installed Istio. As you can see, the gateway allows traffic on port 80, but it doesn’t know where to route the requests. To define the routes, we need a so-called VirtualService, which is another custom Kubernetes resource defined by Istio. 

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
    name: sssm-virtual-services
spec:
    hosts:
        - "*"
    gateways:
        - http-gateway
    http:
        - match:
              - uri:
                    prefix: /service1
          route:
              - destination:
                    host: service1
                    port:
                        number: 80
        - match:
              - uri:
                    prefix: /service2
          route:
              - destination:
                    host: service2
                    port:
                        number: 80

The code above shows an example configuration for the VirtualService. The gateways field specifies that this virtual service applies to requests coming from the gateway called http-gateway, and the http section defines the rules that match requests to the services they should be sent to. Every request with the /service1 prefix will be routed to the service1 container, while every request with the /service2 prefix will be routed to the service2 container.
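You can then apply the Gateway and VirtualService manifests like any other Kubernetes resources (the file names below are placeholders; use whatever names you saved the manifests under):

kubectl apply -f gateway.yaml
kubectl apply -f virtual-service.yaml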

At this step, we have a working application. Until now there is nothing special about Istio — you can get the same architecture with a simple Kubernetes Ingress controller, without the burden of sidecars and gateway configuration.  

Now let’s see what we can do using Istio rules. 

Security in Istio 

Without Istio, every microservice must implement authentication and authorization. Istio removes the responsibility of adding authentication and authorization from the main container (so developers can focus on providing business value) and moves these responsibilities into its sidecars. The sidecars can be configured to request the access token at each call, making sure that only authenticated requests can reach our services. 

apiVersion: authentication.istio.io/v1beta1 
kind: Policy 
metadata: 
    name: auth-policy 
spec:   
    targets:   
        - name: service1   
        - name: service2   
        - name: service3  
        - name: service4   
        - name: service5   
    origins:  
    - jwt:       
        issuer: "{YOUR_DOMAIN}"      
        jwksUri: "{YOUR_JWT_URI}"   
    principalBinding: USE_ORIGIN 

As an identity and access management server, you can use Auth0, Okta or other OAuth providers. You can learn more about authentication and authorization using Auth0 with Istio in this article. 

Traffic Management Using Destination Rules 

Istio’s official documentation says that the DestinationRule “defines policies that apply to traffic intended for a service after routing has occurred.” This means that the DestinationRule resource is situated somewhere between the Ingress controller and our services. Using DestinationRules, we can define policies for load balancing, rate limiting or even outlier detection to detect unhealthy hosts.

Shadowing 

Shadowing, also called Mirroring, is useful when you want to test your changes in production silently, without affecting end users. All the requests sent to the main service are mirrored (a copy of the request) to the secondary service that you want to test. 

Shadowing is easily achieved by defining a destination rule using subsets and a virtual service defining the mirroring route.  

The destination rule will be defined as follows: 

apiVersion: networking.istio.io/v1beta1 
kind: DestinationRule 
metadata:   
    name: service2 
spec:   
    host: service2 
    subsets:   
    - name: v1      
      labels:       
          version: v1 
    - name: v2     
      labels:       
          version: v2 

As we can see above, we defined two subsets for the two versions.  

Now we define the virtual service with mirroring configuration, like in the script below: 

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
    name: service2
spec:
    hosts:
        - service2
    http:
        - route:
              - destination:
                    host: service2
                    subset: v1
          mirror:
              host: service2
              subset: v2

In this virtual service, we defined the main destination route for service2 version v1. The mirroring service will be the same service but with the v2 version tag. This way, the end user interacts with the v1 service, while a copy of each request is also sent to the v2 service for testing.

Traffic Splitting 

Traffic splitting is a technique used to test a new version of a service by letting only a small part (a subset) of users interact with the new service. This way, if there is a bug in the new service, only a small subset of end users will be affected.

This can be achieved by modifying our virtual service as follows: 

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
    name: service2
spec:
    hosts:
        - service2
    http:
        - route:
              - destination:
                    host: service2
                    subset: v1
                weight: 90
              - destination:
                    host: service2
                    subset: v2
                weight: 10

The most important part of the script is the weight tag, which defines the percentage of requests that will reach that specific service instance. In our case, 90 percent of the requests will go to the v1 service, while only 10 percent will go to the v2 service.

Canary Deployments 

In canary deployments, newer versions of services are incrementally rolled out to users to minimize the risk and impact of any bugs introduced by the newer version. 

This can be achieved by gradually decreasing the weight of the old version while increasing the weight of the new version. 

A/B Testing 

This technique is used when we have two or more different user interfaces and we would like to test which one offers a better user experience. We deploy all the different versions and we collect metrics about the user interaction. A/B testing can be configured using a load balancer based on consistent hashing or by using subsets. 

In the first approach, we define the load balancer like in the following script: 

apiVersion: networking.istio.io/v1alpha3 
kind: DestinationRule 
metadata:   
    name: service2 
spec:   
    host: service2 
    trafficPolicy:     
        loadBalancer:       
            consistentHash:         
                httpHeaderName: version 

As you can see, the consistent hashing is based on the version tag, so this tag must be added to our service called “service2”, like this (in the repository you will find two files called service2_v1 and service2_v2 for the two different versions that we use): 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service2-v2
  labels:
    app: service2
spec:
  selector:
    matchLabels:
      app: service2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: service2
        version: v2
    spec:
      containers:
      - image: zoliczako/sssm-service2:1.0.0
        imagePullPolicy: Always
        name: service2
        ports:
        - containerPort: 5002
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"

The most important part to notice is spec -> template -> metadata -> labels -> version: v2. The other service has the version: v1 label.

The other solution is based on subsets. 
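
As a rough sketch of the subset-based approach, the virtual service can match on a request header and pin each user group to a fixed subset; the x-user-group header below is a hypothetical example:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
  - service2
  http:
  - match:                # users in test group B (hypothetical header)
    - headers:
        x-user-group:
          exact: "B"
    route:
    - destination:
        host: service2
        subset: v2
  - route:                # everyone else gets the A variant
    - destination:
        host: service2
        subset: v1
```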

Retry Management 

Using Istio, we can easily define the maximum number of attempts to connect to a service if the initial attempt fails (for example, because of an overloaded service or a network error).

The retry strategy can be defined by adding the following lines to the end of our virtual service: 

retries:
  attempts: 5
  perTryTimeout: 10s

With this configuration, calls to service2 will be retried up to five times in case of failure, and each attempt will time out after 10 seconds.
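
For context, here is a sketch of how that retries block fits into the virtual service route defined earlier; the values are only examples:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
  - service2
  http:
  - route:
    - destination:
        host: service2
        subset: v1
    retries:            # retry failed requests up to 5 times,
      attempts: 5       # each attempt timing out after 10 seconds
      perTryTimeout: 10s
```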

Learn more about traffic management in this article. You’ll find a great workshop to configure an end-to-end service mesh using Istio here. 

Conclusion 

In this chapter, we learned how to set up and configure a service mesh in Kubernetes using Istio. First, we configured an ingress controller and gateway and then we learned about traffic management using destination rules and virtual services.  

Want to Learn More from our Stupid Simple Series?

Read our eBook: Stupid Simple Kubernetes. Download it here!

Kubernetes 1.23: The Next Frontier    

Tuesday, 7 December, 2021
I had the honor and privilege to lead the Kubernetes 1.23 release on December 7, 2021.

Including myself, 41 people on the 1.23 release team managed the day-to-day work required to release Kubernetes. The release team is part of the Kubernetes Special Interest Group (SIG) Release.

The 1.23 release cycle started on August 23, 2021, and ran for 16 weeks. For 1.23, there were contributions from 1,084 contributors.

Since Kubernetes 1.10, each release has a theme and logo.
“The Next Frontier” theme represents the new and graduated enhancements in 1.23, Kubernetes’ history of Star Trek references, and the growth of community members in the release team.

Kubernetes has a history of Star Trek references. The original code name for Kubernetes within Google was Project 7, a reference to Seven of Nine from Star Trek: Voyager and to the seven spokes in the Kubernetes logo. And, of course, there is Borg, the predecessor to Kubernetes. “The Next Frontier” is a fusion of two Star Trek titles, Star Trek V: The Final Frontier and Star Trek: The Next Generation. Many new Kubernetes contributors apprentice on a release team in shadow roles; for many, this is their first contribution to their respective open source frontier.

What’s New in Kubernetes 1.23?

The 1.23 release consists of 47 enhancements, with 11 enhancements graduating to stable, 17 enhancements moving to beta, 19 enhancements entered as alpha, and one deprecated feature.

Here are some of my favorites:

Dual-stack IPv4/IPv6 Networking Graduates to Stable
Dual-stack was introduced as alpha in 1.15 and refactored in 1.20. Before 1.20, you had to have a service for each IP family model to implement dual-stack. In 1.20, the Service API supports dual-stack. In 1.21, clusters enabled dual-stack by default. In 1.23 the final move to graduate to stable is removing the IPv6DualStack feature flag.
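
As an illustration (not taken from the release notes), a Service can opt into dual-stack behavior through the ipFamilyPolicy and ipFamilies fields; the Service name and selector below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-service      # hypothetical Service name
spec:
  ipFamilyPolicy: PreferDualStack  # also: SingleStack, RequireDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: my-app
  ports:
  - port: 80
```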

PodSecurity Admission Graduates to Beta
If you haven’t heard, PodSecurityPolicy (PSP) is deprecated as of 1.21, and the plan is to remove PSP in 1.25. PodSecurity Admission replaces PSP. PodSecurity is an admission controller that evaluates Pods against a predefined set of Pod Security Standards to either admit or deny the Pod from running. The Pod Security Standards (PSS) define privileged, baseline, and restricted policies. There are three modes that PodSecurity can be set to: enforce, audit, and warn.
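
As a sketch, PodSecurity admission is typically configured per namespace with labels; the namespace name below is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # hypothetical namespace
  labels:
    # enforce the baseline standard, but only warn and audit
    # on violations of the stricter restricted standard
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```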

Supply-chain Levels for Software Artifacts (SLSA) Level 1 Compliance
SLSA is an end-to-end framework for ensuring the integrity of software artifacts. Kubernetes 1.23 meets SLSA Level 1 compliance, meaning that the build is scripted and the release provides provenance attestation files that describe the staging and release phases of the release process. The artifacts are verified as they are handed over from one phase to the next.

Defend Against Logging Secrets via Static Analysis Graduates to Stable
The 2019 third-party security audit for Kubernetes (I am currently the lead of the third-party audit subproject for Kubernetes) revealed that secrets were exposed to logs or execution environments. The Kubernetes project uses the go-flow-levee taint propagation analysis tool for Go to fix this. Taint propagation analysis inspects how data is spread and consumed in a program, which is used to harden boundaries for the data. This enhancement graduating to stable means that the analysis runs as a blocking pre-submit test. When this enhancement was in beta, the analysis was validated to run at scale with no false positives, test failures, or other issues.

HorizontalPodAutoscaler (HPA) v2 API Graduates to Stable
The HorizontalPodAutoscaler autoscaling/v2 stable API is GA, which supports multiple and custom metrics used by HPA. This means that the autoscaling/v2beta2 API is deprecated. There are no plans to deprecate the autoscaling/v1 API. There is no current plan to remove the autoscaling/v2beta1 and autoscaling/v2beta2 API, but the earliest they can be removed is in 1.24 and 1.27, respectively.
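
A minimal autoscaling/v2 manifest might look like the following; the target Deployment and the utilization threshold are only examples:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa       # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app         # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```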

TTL “After Finished” Controller Graduates to Stable
There’s a new stable controller: the TTL Controller cleans up Jobs and Pods after they finish. If a Job or Pod isn’t controlled by a higher-level resource (e.g., a CronJob for Jobs or a Job for Pods), it can be hard for users to clean it up over time. Finished Jobs and Pods can accumulate and fill up resource quotas. To use this feature, set a Job’s .spec.ttlSecondsAfterFinished field to the number of seconds to wait before cleanup. The TTL Controller watches all Jobs. If a Job is finished, the TTL Controller checks whether its .spec.ttlSecondsAfterFinished is set; if it’s not set, the TTL Controller does nothing. If it is set, the TTL Controller adds .spec.ttlSecondsAfterFinished to the Job’s finish time (.status.conditions.lastTransitionTime) and deletes the Job once that time is earlier than the current time.
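
For example, a Job that should be cleaned up roughly two minutes after finishing could set the field like this (Job name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task                # hypothetical Job
spec:
  ttlSecondsAfterFinished: 120      # delete ~2 minutes after the Job finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo done"]
```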

Kubelet CRI API Moves to Beta
This move is essential because Dockershim is targeted to be removed in 1.24. For Dockershim to be removed in 1.24, the Kubelet CRI API needs to be in beta in 1.23, so it can graduate to stable in 1.24 when Dockershim is removed. So a CRI-compliant container runtime (e.g. containerd, cri-o, Docker with cri-dockerd ) is required for 1.24. Users that use RKE, RKE2, and K3s are not affected. RKE2 and K3s use containerd, while RKE uses Docker with cri-dockerd, not Dockershim.

Ephemeral Containers Graduates to Beta
With the kubectl debug command, an ephemeral container is launched in a running Pod to troubleshoot or observe the containers of the Pod.

Topology Aware Hints Graduates to Beta
The EndpointSlice controller can help keep network traffic in the same zone for better performance and, in some cases, lower costs by reducing cross-zone networking traffic. The EndpointSlice controller reads the topology.kubernetes.io/zone label on Nodes to determine which zone a Pod is running in. A service.kubernetes.io/topology-aware-routing: Auto annotation on a Service is required to enable Topology Aware Routing. The EndpointSlice controller then provides zone hints for each endpoint.

Auto Remove PersistentVolumeClaims (PVCs) from StatefulSets is Introduced as Alpha
PVCs from StatefulSets can be auto-deleted if the StatefulSet is deleted or scaled down. There are new fields in the StatefulSet spec:
– .spec.PersistentVolumeClaimPolicy.OnSetDeletion specifies whether the PVC is deleted when the StatefulSet is deleted: the value Delete removes it, and the other option is Retain
– .spec.PersistentVolumeClaimPolicy.OnScaleDown specifies whether the PVC is deleted when the StatefulSet is scaled down: the value Delete removes it, and the other option is Retain

The kubectl events Command is Introduced as Alpha
The existing way of listing events with kubectl has limitations around sorting and the --watch option; the new kubectl events command addresses these and enhances the events functionality. The output is sorted by default, events can be sorted by other criteria and listed as a timeline of the last n minutes, and the --watch output is also sorted.

OpenAPI v3 is Introduced as Alpha
There’s a new endpoint that publishes an OpenAPI v3.0 spec for all Kubernetes types. OpenAPI v2 strips several fields, while OpenAPI v3 is more transparent. A separate spec is published per Kubernetes group version at the $cluster/openapi/v3/apis/<group>/<version> endpoint for improved performance, and all group versions can be discovered at $cluster/openapi/v3. OpenAPI v3 is more expressive than v2.

Custom Buildpacks with Epinio

Wednesday, 17 November, 2021

Epinio is a build and application hosting platform running on Kubernetes. Developers push their source code to Epinio with a simple CLI command, Epinio builds an Open Container Initiative (OCI) image using Buildpacks, and the image is deployed within the Kubernetes cluster.

The process is mostly transparent thanks to the wide range of languages supported by the default Paketo buildpacks utilized by Epinio. But not every codebase can be built by Paketo, or you may want to build your code with your own custom buildpacks, which requires using a custom builder.

In this post, you’ll learn how to build a custom builder and use it with Epinio to build your application.

Buildpacks and Builders

In a previous post, I documented the process of building a custom buildpack with the end result being a custom Java buildpack able to build Maven projects.

The source code to the custom buildpack and builder is available on GitHub.

In order to use this buildpack in Epinio, it is important to understand the distinction between a buildpack and a builder.

A buildpack contains the logic required to detect and build a specific language. Popular projects, like Paketo, include a number of buildpacks supporting popular languages such as Java, PHP, .NET Core, Node.js, Ruby, etc.

Buildpacks are then combined into a single builder. A builder gives each of its child buildpacks an opportunity to scan the application source code, and selects the first buildpack to report that it is compatible with the code to complete the build.

If you followed the previous post, you would have created a single buildpack. To use this buildpack with Epinio, you must package the buildpack as a builder. Creating a builder is simple, requiring only two new files.

Creating a Builder

The first step is to add a package.toml file, which allows the pack command to package the buildpack. The contents of the file are shown below:

[buildpack]
uri = "."

The URI property specifies the directory containing the buildpack code. The URI is set to the current directory because the package.toml file has been saved alongside the buildpack files.

The next file you must add is builder.toml:

[[buildpacks]]
uri = "."

[[order]]
[[order.group]]
id = "mcasperson/java"
version = "0.0.1"

[stack]
id = "heroku-20"
run-image = "heroku/pack:20"
build-image = "heroku/pack:20-build"

The buildpacks array references one or more buildpacks to include in the builder. You include the buildpack from the same directory by setting the URI property to a period.

The order array defines the order in which the builder executes buildpacks while looking for one that is able to build the supplied source code. As this builder only has one buildpack, there is only one buildpack defined in this array. The id and version properties must match the values from the buildpack’s buildpack.toml file.

The stack section defines the stack this builder uses to compile and run applications. Each stack has two OCI images: one called build-image, used while building the applications, and one called run-image, used to execute the built application.

You can find stacks by running the command:

pack stack suggest

The following is a snippet from the command output. Note that the id, run-image, and build-image values from the builder.toml file match the values returned by the pack command:

Stack ID: heroku-20
Description: The official Heroku stack based on Ubuntu 20.04
Maintainer: Heroku
Build Image: heroku/pack:20-build
Run Image: heroku/pack:20

To create the builder image, run the following command, replacing mcasperson with your Docker Hub username:

pack builder create mcasperson/my-builder:latest --config ./builder.toml

The resulting image is then pushed to Docker Hub with the command:

docker push mcasperson/my-builder:latest

You now have a custom builder published to Docker Hub ready to use with your Epinio builds.

Custom Builders with Epinio

The ability to use a custom builder with Epinio was recently added, allowing the builder to be specified with the --builder-image argument:

epinio push --name my-java-app --builder-image mcasperson/my-builder

This command instructs Epinio to download and use the custom builder when compiling your application code. You can also supply any other publicly available builders, such as those provided by Heroku or Google.

Conclusion

By creating your own custom buildpacks and builders, you gain complete control over how your code is compiled. And thanks to Epinio’s ability to utilize custom builders, integrating your own builder into established build and deployment workflows is as simple as a single command line argument.

Learn more about Epinio! Read Mario Manno‘s post,  New Ideas on How to Install Epinio.


Run Your First CI/CD Pipeline with Rancher

Wednesday, 17 November, 2021
This article describes how to run a very simple CI/CD toolkit based on the Rancher platform. We will use tools like Gitea, Drone, and Keel to ensure our application’s continuous integration and delivery. We will pre-configure one host to run Rancher. We will only do simple manual operations in the Rancher interface.

Prepare Host

Host specification: 8-core CPU, 12 GB RAM (the bigger, the better). Your host must support hardware virtualization; search for and enable the corresponding extension in the BIOS: VT-x, VT-d, or AMD-V. Install Ubuntu 20.04 Server on the host (I have not had time to get acquainted with SUSE, but you can do it on SUSE and add how to do it there in the comments) and update it:

sudo apt update && sudo apt upgrade && sudo shutdown -r now

After the reboot (occasional host reboots help confirm that everything still comes up cleanly, so it is good to do them sometimes):

sudo apt install ubuntu-desktop && sudo shutdown -r now

You will then be able to work in the GUI and copy and paste commands in the Terminal.

You do not need to disable the swap file: if you have little RAM, disabling swap will lead to frequent host freezes, and Rancher can work with swap enabled. Example VM (Rancher M&C) and host (cluster):

We will change the network interface configuration to use a bridge:

sudo vi /etc/netplan/00-installer-config.yaml 

# This is the network config written by 'subiquity'
network:
  ethernets:
    ens5:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [ens5]
      addresses: [172.16.77.28/24]
      gateway4: 172.16.77.1
      mtu: 1500
      nameservers:
        addresses: [172.16.77.1]
      parameters:
        stp: true
        forward-delay: 4
      dhcp4: false
      dhcp6: false
  version: 2

To save and exit the vi editor, use the keyboard shortcut Shift+ZZ.

sudo netplan apply

Install virt-manager for host and create VM

Install virt-manager:

sudo apt install qemu qemu-kvm libvirt-daemon libvirt-clients bridge-utils virt-manager
sudo virt-manager

Create a VM for the Rancher M&C server with 3 GB RAM, 2 vCPUs, and a 25 GB disk. For the network, use the shared device br0. I used the same ISO as when installing the host. Set the VM to auto-start in the virt-manager GUI. Prepare the VM:

sudo apt update && sudo apt upgrade && sudo shutdown -r now

Check the IP address in the VM:

ip a

Add a record to /etc/hosts on the host:

sudo vi /etc/hosts
<remote ip-address VM>         rancher.lan

Create ssh-key on host and connect VM

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub rancher@rancher.lan
ssh rancher@rancher.lan

Install docker-ce on host and VM

sudo apt install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

Install utils: rke, kubectl, and helm on host

rke

Go to https://github.com/rancher/rke/releases and choose the latest release:

wget https://github.com/rancher/rke/releases/download/v1.3.2/rke_linux-amd64
mkdir rancher
mv rke_linux-amd64 rancher/rke
cd rancher
chmod +x rke
./rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
[+] Number of Hosts [1]:
[+] SSH Address of host (1) [none]: 172.16.77.32 <-- this is the remote IP address of the Rancher M&C server
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (172.16.77.32) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (172.16.77.32) [none]: ~/.ssh/id_rsa
[+] SSH User of host (172.16.77.32) [ubuntu]: rancher
[+] Is host (172.16.77.32) a Control Plane host (y/n)? [y]:
[+] Is host (172.16.77.32) a Worker host (y/n)? [n]: y
[+] Is host (172.16.77.32) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (172.16.77.32) [none]:
[+] Internal IP of host (172.16.77.32) [none]:
[+] Docker socket path on host (172.16.77.32) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal, aci) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.21.5-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:

Bring up the RKE cluster on the VM:

./rke up

kubectl

sudo apt update && sudo apt install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubectl
cp kube_config_cluster.yml ~/.kube/config
kubectl get all

helm

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt update
sudo apt install helm

Reboot the host and make sure everything works:

kubectl get all

It takes time to start the cluster, so it will not be available immediately.

Run Rancher M&C-server

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace --version v1.6.1 \
--set installCRDs=true --wait --debug

Install Rancher:

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm install rancher rancher-latest/rancher \
--wait --debug --namespace cattle-system \
--create-namespace --set hostname=rancher.lan \
--set replicas=1

Open https://rancher.lan in a browser and follow the login instructions on the first screen.

kubectl get secret --namespace cattle-system \
bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}'

Run Rancher-cluster

Go to the Menu and click Clusters, click Create and select Custom, set the cluster name to sandbox, click Next and check all the Node role checkboxes: etcd, Control Plane, and Worker. Copy the registration command.

Run the registration command on the host; we will use our host as a cluster node.

sudo docker run -d --privileged --restart=unless-stopped \
  --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.6.2 \
  --server https://rancher.lan --token <token> \
  --ca-checksum <checksum> \
  --etcd --controlplane --worker

Run MetalLB

Prepare host

sudo apt install ipvsadm
sudo vi /etc/modules
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh

In the shell, load the modules:

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh

Go to Rancher at https://rancher.lan, open Cluster Management, then Clusters, edit the config of the "sandbox" cluster (not local), choose Edit as YAML and add a kube-proxy section:

    kubeproxy:
      extra_args:
        proxy-mode: ipvs

Wait for the cluster to update, then check that IPVS is working:

ip a | grep ipvs
ipvsadm -Ln

Install MetalLB from helm chart

Explore the sandbox cluster, click App&Marketplace, click Create, set the name metallb, and add the repo URL: https://metallb.github.io/metallb
Create a namespace in the System Project: metallb-system
Click Charts, click the metallb Helm chart, click Install, select the metallb-system namespace and set the Name to metallb, and optionally check Customize Helm options before install (I used the defaults). Create the config in Menu — Storage — ConfigMaps — Create — Edit as YAML (use the metallb-system namespace):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.77.111-172.16.77.222 #<--set your ip LAN subnet

Redeploy the MetalLB controller Deployment and the speaker DaemonSet.

Run Longhorn

Go to Cluster Tools, select Longhorn, and in Longhorn Storage Class Settings set the replica count for the Longhorn StorageClass to 1, then click Next and Install.

Run Docker Registry

Create namespace docker-registry in Default project, App&Marketplace — Repositories — Create — https://helm.twun.io

In values set:

persistence:
  accessMode: ReadWriteOnce
  enabled: true
  size: 20Gi

and

service:
  annotations: {}
  name: registry
  port: 5000
  type: LoadBalancer

On the host, add the insecure registry:

sudo vi /etc/docker/daemon.json

{
"insecure-registries": ["172.16.77.111:5000"]
}

and reboot host.

Run Gitea

Create namespace gitea in Default project, App&Marketplace — Repositories — Create — https://dl.gitea.io/charts/

In values set:

  config:
    APP_NAME: Git Local
    server:
      DOMAIN: 172.16.77.112:3000
      ROOT_URL: http://172.16.77.112:3000/
      SSH_DOMAIN: 172.16.77.113
persistence:
  accessModes:
    - ReadWriteOnce
  enabled: true
  size: 20Gi
postgresql:
  persistence:
    size: 20Gi
service:
  http:
    annotations: null
    clusterIP: None
    loadBalancerSourceRanges: []
    port: 3000
    type: LoadBalancer
  ssh:
    annotations: null
    clusterIP: None
    loadBalancerSourceRanges: []
    port: 22
    type: LoadBalancer

Go to http://172.16.77.112:3000/ and add an Application in Settings with the name drone, copy the Client ID and Client Secret, and set the Redirect URI to http://172.16.77.114/login

Run Drone

Drone server

Create namespace drone in Default project, App&Marketplace — Repositories — Create — https://charts.drone.io

In values set:

env:
  DRONE_SERVER_HOST: 172.16.77.114
  DRONE_SERVER_PROTO: http
  DRONE_GITEA_CLIENT_ID: <> <<-- add from Gitea
  DRONE_GITEA_CLIENT_SECRET: <> <<-- add from Gitea
  DRONE_GITEA_SERVER: http://172.16.77.112:3000/
  DRONE_GIT_ALWAYS_AUTH: true
  DRONE_RPC_SECRET: <> <<-- set a random 16-byte hex string
  DRONE_USER_CREATE: username:octocat,machine:false,admin:true,token:<> <<-- set a random 16-byte hex string
persistentVolume:
  accessModes:
    - ReadWriteOnce
  annotations: {}
  enabled: true
  existingClaim: ''
  mountPath: /data
  size: 8Gi
service:
  port: 80
  type: LoadBalancer

Drone Kubernetes runner

Install drone-runner-kube from helm: App&Marketplace — Charts — use namespace drone, in values set:

env:
  DRONE_NAMESPACE_DEFAULT: drone
  DRONE_RPC_HOST: drone.drone.svc.cluster.local
  DRONE_RPC_PROTO: http
  DRONE_RPC_SECRET: <> <<--from drone
  DRONE_UI_PASSWORD: root
  DRONE_UI_USERNAME: root

Run Keel

Create namespace keel in Default project, App&Marketplace — Repositories — Create — https://charts.keel.sh

In values set:

basicauth:
  enabled: true
  password: admin
  user: admin
persistence:
  enabled: true
  size: 8Gi
service:
  clusterIP: ''
  enabled: true
  externalPort: 9300
  type: LoadBalancer

The Keel dashboard is available at http://172.16.77.115:9300/.

Run SonarQube

Install sonarqube from helm, create namespace sonarqube: App&Marketplace — Repositories — Create — https://SonarSource.github.io/helm-chart-sonarqube

In values set:

service:
  annotations: {}
  externalPort: 9000
  internalPort: 9000
  labels: null
  type: LoadBalancer

Go to http://172.16.77.116:9000/account/security/ (admin:admin), generate and save Token.

Run Athens

Create the namespace athens in the Default project, App&Marketplace — Repositories — Create — https://athens.blob.core.windows.net/charts

In values set:

service:
  annotations: {}
  nodePort:
    port: 30080
  servicePort: 80
  type: LoadBalancer
storage:
  disk:
    persistence:
      accessMode: ReadWriteOnce
      enabled: true
      size: 20Gi
    storageRoot: /var/lib/athens

Create Your First CI/CD Pipeline for Go App

Dockerfile.multistage:

## Build


FROM golang:1.16-buster AS build

WORKDIR /app

COPY go.mod .
COPY go.sum .
RUN GOPROXY=http://172.16.77.117 go mod download

COPY *.go ./

RUN go build -o /my-app


## Deploy


FROM gcr.io/distroless/base-debian10

WORKDIR /

COPY --from=build /my-app /my-app

EXPOSE 8080

USER nonroot:nonroot

ENTRYPOINT ["/my-app"]

.drone.yml (change the IP addresses in the steps):

kind: pipeline
type: kubernetes
name: default

steps:

- name: greeting
  image: golang:1.16
  commands:
  - go mod download
  - go build -v ./...
  environment:
    GOPROXY: http://172.16.77.117

- name: code-analysis
  image: aosapps/drone-sonar-plugin
  settings:
    sonar_host: http://172.16.77.116:9000
    sonar_token: <sonar_token>
  when:
    branch:
    - master
    event:
    - pull_request

- name: publish-feature
  image: plugins/docker
  settings:
    repo: 172.16.77.111:5000/test2
    registry: 172.16.77.111:5000
    insecure: true
    dockerfile: Dockerfile.multistage
    tags:
    - ${DRONE_BRANCH//\//-}-${DRONE_COMMIT_SHA:0:8}
  when:
    branch:
    - feature/*

- name: deploy-feature
  image: plugins/webhook
  settings:
    username: admin
    password: admin
    urls: http://172.16.77.115:9300/v1/webhooks/native
    debug: true
    content_type: application/json
    template: |
      {
        "name": "172.16.77.111:5000/test2",
        "tag": "${DRONE_BRANCH//\//-}-${DRONE_COMMIT_SHA:0:8}"
      }
  when:
    branch:
    - feature/*

- name: publish-master
  image: plugins/docker
  settings:
    repo: 172.16.77.111:5000/test2
    registry: 172.16.77.111:5000
    insecure: true
    dockerfile: Dockerfile.multistage
    tags:
    - ${DRONE_BRANCH//\//-}-${DRONE_COMMIT_SHA:0:8}
  when:
    branch:
    - master
    event:
    - pull_request

- name: deploy-master
  image: plugins/webhook
  settings:
    username: admin
    password: admin
    urls: http://172.16.77.115:9300/v1/webhooks/native
    debug: true
    content_type: application/json
    template: |
      {
        "name": "172.16.77.111:5000/test2",
        "tag": "${DRONE_BRANCH//\//-}-${DRONE_COMMIT_SHA:0:8}"
      }
  when:
    branch:
    - master
    event:
    - pull_request

- name: publish-release
  image: plugins/docker
  settings:
    repo: 172.16.77.111:5000/test2
    registry: 172.16.77.111:5000
    insecure: true
    dockerfile: Dockerfile.multistage
    tags:
    - latest
    - ${DRONE_TAG##v}
  when:
    event:
    - tag

- name: deploy-release
  image: plugins/webhook
  settings:
    username: admin
    password: admin
    urls: http://172.16.77.115:9300/v1/webhooks/native
    debug: true
    content_type: application/json
    template: |
      {
        "name": "172.16.77.111:5000/test2",
        "tag": "${DRONE_TAG##v}"
      }
  when:
    event:
    - tag

You can add a promote stage to upload the release to Docker Hub:

- name: promote-release
  image: plugins/docker
  settings:
    repo: myrepo/test2


    dockerfile: Dockerfile.multistage
    tags:
    - latest
    - ${DRONE_TAG##v}
  when:
    event:
    - promote
    target:
    - production

Create a repo in Gitea and add the Go app source code. In the Drone interface, activate the repo.

git branch feature/feature-1
git add .
git commit -m "feature/feature-1"

[master 686fb73] feature/feature-1
 1 file changed, 1 insertion(+)

git push origin feature/feature-1

remote: 
remote: Create a new pull request for 'feature/feature-1':
remote:   http://172.16.77.112:3000/gitea_admin/test_drone/compare/master...feature/feature-1
remote: 
remote: . Processing 1 references
remote: Processed 1 references in total
To 172.16.77.113:gitea_admin/test_drone.git
 * [new branch]      feature/feature-1 -> feature/feature-1

In drone interface:

In gitea interface:

In docker-registry an image with tag “feature-feature-1-cbebf353” will be added:

Create PR in Gitea:

Drone PR:

SonarQube PR:

Release:

Continuous Delivery

Create three deployments: test2-dev, test2-stage, and test2-release, and use Keel annotations to set the update policy. You only need to add annotations to the deployments; nothing needs to be configured on the Keel side, and everything you add as annotations will be displayed in the Keel dashboard. A minimal example is sketched below:
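
A rough sketch of one such deployment, using the keel.sh/policy annotation; the names, image, and chosen policy are placeholders to adapt to your setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test2-dev
  annotations:
    keel.sh/policy: force   # redeploy whenever Keel receives a matching webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test2-dev
  template:
    metadata:
      labels:
        app: test2-dev
    spec:
      containers:
      - name: test2
        image: 172.16.77.111:5000/test2:latest   # image from the local registry
```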

Cleanup

Delete the VM from virt-manager. Check the docs at https://rancher.com/docs/rancher/v2.5/en/cluster-admin/cleaning-cluster-nodes/ for cleaning up the host.

Learn more about Rancher! Check out our free class replay, Up and Running: Rancher. Register here.


Managing Rancher Resources Using Pulumi as an Infrastructure as Code Tool

Tuesday, 19 October, 2021

Using an Infrastructure as Code (IaC) solution to automate the management of your cloud resources is one of the recommended ways to reduce the toil that results from repetitive operations to manage your infrastructure.

In this article, you will learn about Infrastructure as Code and how to reduce toil by using IaC to provision your Rancher resources.

Prerequisites

This article contains a demo where Rancher resources were provisioned using Pulumi. You will need the following tools installed on your computer to follow along with the demo:

Introduction To Infrastructure As Code

Infrastructure as code (IaC) refers to managing resources within an infrastructure through a reusable definition file. Resources managed can include virtual machines, networks and storage units.

With IaC, your cloud resources are defined within a configuration file, and you can use an IaC tool to create the defined resources on your behalf.

IaC tools are split into two categories: native IaC tools, which are built for and used with a single public cloud provider (such as ARM Templates on Azure), and multi-cloud IaC tools (such as Terraform and Pulumi) that provision resources across different infrastructure providers, for example Google Cloud, AWS and other platforms.

This article focuses on learning how to provision resources on a Rancher server using Pulumi.

Introducing Pulumi

Pulumi is an open source project that provides an SDK for you to manage cloud resources using one of the four supported programming languages. Similar to Terraform, Pulumi provides the free Pulumi cloud service by default to better manage the state of your provisioned resources across a team.

Note: The Windows Containers With Rancher and Terraform post on the rancher blog explains how to provision an RKE cluster using Terraform.

Used Pulumi Concepts

Before you move further, it would be helpful to have a simplified understanding of the following Pulumi concepts that are frequently used within this article.

Note: The Concepts & Architecture section within the Pulumi documentation contains all explanations of Pulumi’s concepts.

  • Stack: A stack within Pulumi is an independent instance containing your project. You can also liken a Pulumi stack to environments. Similar to the way you have development and production environments for your projects, you can also have multiple stacks containing various phases of your Pulumi project. This tutorial will use the default stack created when you bootstrap a new Pulumi project.

  • Inputs: Inputs are the arguments passed into a resource before creation. These arguments can be of various data values such as strings, arrays, or even numbers.

  • Outputs: Pulumi outputs are special values obtained from resources after they have been created. An example output could be the ID of the K3 cluster after it was created.

Creating Configuration Variables

Rancher API Keys

Your Rancher Management server will use an API token to authenticate the API requests made by Pulumi to create Rancher resources. API tokens are managed in the account page within the Rancher Management Console.

To create an API token, open your Rancher Management Console and click the profile avatar to reveal a dropdown. Click the Account & API Keys item from the dropdown to navigate to the account page.

From the account page, click the Create API Key button to create a new API Key.

 

Provide a preferred description for the API Key within the description text input on the next page. You can also set an expiry date for the API key by clicking the radio button for a preferred expiry period. Since the API key will create new resources, leave the scope dropdown at its default No Scope selection.

Click the Create button to save and exit the current page.

An Access Key, Secret Key, and Bearer Token will be displayed on the next page. Ensure you note the Bearer Token within a secure notepad as you will reference it when working with Pulumi.

Azure Service Principal Variables

Execute the Azure Active Directory (az ad) command below to create a service principal to be used with Rancher:

az ad sp create-for-rbac -n "rancherK3Pulumi" --sdk-auth

As shown in the image below, the az command above returns a JSON response containing Client ID, Subscription ID, and Client Secret fields. Note the value of these fields in a secure location, as you will use them in the next step when storing credentials with Pulumi.

Setting Configuration Secrets

With the configuration variables created in the last section, let's store them using Pulumi secrets. The secrets feature of Pulumi enables you to encrypt sensitive values used within your stack without writing the plaintext values into the stack's state file.

Note: Replace the placeholders with the corresponding values obtained from the previous section.

Execute the series of commands below to store the environment variables used by Pulumi to provision Rancher resources.

The --secret flag passed to some of the commands below ensures the values are encrypted before being stored in the Pulumi.dev.yaml stack configuration file.

# Rancher API Keys
pulumi config set rancher2:apiUrl <RANCHER_API_URL>
pulumi config set rancher2:accessToken <RANCHER_ACCESS_TOKEN> --secret
pulumi config set PROJECT_ID <PROJECT_ID_TEXT>

# Azure Service Principal Credentials
pulumi config set SUBSCRIPTION_ID <SUBSCRIPTION_ID_TEXT>
pulumi config set --secret CLIENT_ID <CLIENT_ID_TEXT>
pulumi config set --secret CLIENT_SECRET <CLIENT_SECRET_TEXT>

Creating A Pulumi Project

Now that you understand what Pulumi is, we will use Pulumi to provision a Rancher Kubernetes cluster on a Rancher management server.

At a bird’s eye view, the image below shows a graph of all Rancher resources that will be provisioned within a Pulumi stack in this article.

Within the steps listed out below, you will gradually put together the resources for an RKE cluster.

  1. Execute the two commands below from your local terminal to create an empty directory (rancher-pulumi-js) and change your working directory into the new directory.

# create new directory
mkdir rancher-pulumi-js

# move into new directory
cd rancher-pulumi-js

  2. Execute the Pulumi command below to initialize a new Pulumi project using the javascript template within the empty directory.

The -n (name) flag passed to the command below will specify the Pulumi project name as rancher-pulumi-js.

pulumi new -n rancher-pulumi-js javascript

Using the JavaScript template specified in the command above, the Pulumi CLI will generate the boilerplate files needed to build a stack using the Node.js library for Pulumi.

Execute the NPM command below to install the @pulumi/rancher2 Rancher resource provider.

npm install @pulumi/rancher2 dotenv

Provisioning Rancher Resources

The code definition for the resources to be created will be stored in the generated index.js file. The steps below will guide you on creating a new Rancher namespace and provisioning an EKS cluster on AWS using Rancher.

  1. Add the code block’s content below into the index.js file to create a namespace within your Rancher project. You can liken a namespace to an isolated environment that contains resources within your project.

The code block contains a RANCHER_PREFIX variable that holds an identifier string. In the following code blocks, you will prefix this variable to the names of the other resources created to indicate that they were created using Pulumi.

The pulumiConfig variable within the code block stores an instance of the Pulumi Config class. Its require method is executed later to retrieve the configuration values that were stored as secrets.

"use strict";
const pulumi = require("@pulumi/pulumi");
const rancher2 = require("@pulumi/rancher2");

const RANCHER_PREFIX = "rancher-pulumi"
const pulumiConfig = new pulumi.Config();

// Create a new rancher2 Namespace
new rancher2.Namespace(`${RANCHER_PREFIX}-namespace`, {
   containerResourceLimit: {
       limitsCpu: "20m",
       limitsMemory: "20Mi",
       requestsCpu: "1m",
       requestsMemory: "1Mi",
   },
   description: `Namespace to store resources created within ${RANCHER_PREFIX} project`,
   projectId: pulumiConfig.require('PROJECT_ID'),
   resourceQuota: {
       limit: {
           limitsCpu: "100m",
           limitsMemory: "100Mi",
           requestsStorage: "1Gi",
       }
   }
});

Similar to the Terraform Plan command, you can also use the Pulumi preview command to view the changes to your stack before they are applied.

The image below shows the diff log of the changes caused by adding the Rancher namespace resource.

  2. Next, add the cluster resource within the code block below into the index.js file to provision an RKE cluster within the new namespace.

const rkeCluster = new rancher2.Cluster(`${RANCHER_PREFIX}-rke-cluster`, {
   description: `RKE cluster created within ${RANCHER_PREFIX} project`,
   rkeConfig: {
       network: {
           plugin: "canal",
       },
   },

   clusterMonitoringInput: {
       answers: {
           "exporter-kubelets.https": true,
           "exporter-node.enabled": true,
           "exporter-node.ports.metrics.port": 9796,
           "exporter-node.resources.limits.cpu": "200m",
           "exporter-node.resources.limits.memory": "200Mi",
           "prometheus.persistence.enabled": "false",
           "prometheus.persistence.size": "50Gi",
           "prometheus.persistence.storageClass": "default",
           "prometheus.persistent.useReleaseName": "true",
           "prometheus.resources.core.limits.cpu": "1000m",
           "prometheus.resources.core.limits.memory": "1500Mi",
           "prometheus.resources.core.requests.cpu": "750m",
           "prometheus.resources.core.requests.memory": "750Mi",
           "prometheus.retention": "12h"
       },
       version: "0.1.0",
   }
})

The next step will be registering a node for the created cluster before it can be fully provisioned.

  3. Next, add the CloudCredential resource within the code block below to create a cloud credential containing your Azure Service Principal configuration. Cloud credentials are a feature of Rancher that help you securely store the credentials of the infrastructure provider needed to provision a cluster.

Without automation, you would need to use the Rancher Management dashboard to create a cloud credential. Visit the Rancher Documentation on Cloud Credentials to learn how to manage the credentials from your Rancher Management dashboard.

// Create a new rancher2 Cloud Credential
const rancherCloudCredential = new rancher2.CloudCredential("rancherCloudCredential", {
   description: `Cloud credential for ${RANCHER_PREFIX} project.`,
   azureCredentialConfig: {
       subscriptionId: pulumiConfig.require('SUBSCRIPTION_ID'),
       clientId: pulumiConfig.require('CLIENT_ID'),
       clientSecret: pulumiConfig.require('CLIENT_SECRET')
   }
});

  4. Add the NodeTemplate resource within the code block below into the index.js file to create a NodeTemplate that uses Azure as the infrastructure provider. The NodeTemplate resource defines, as a reusable template, the settings for the operating system and machines running the nodes of the RKE cluster.

The Pulumi Rancher2 provider will specify most of the default settings for the NodeTemplate; however, the fields within the code block customize the NodeTemplate created.

// create a rancher node template
const rancherNodeTemplate = new rancher2.NodeTemplate("rancherNodeTemplate", {
   description: `NodeTemplate created by ${RANCHER_PREFIX} project using Azure`,
   cloudCredentialId: rancherCloudCredential.id,
   azureConfig: {
       storageType: "Standard_RAGRS",
       size: "Standard_B2s"
   }
});

  5. Add the NodePool resource into the index.js file to create a node pool for the RKE cluster created in step two.

// Create a new rancher2 Node Pool
const rancherNodePool = new rancher2.NodePool(`${RANCHER_PREFIX}-cluster-pool`, {
   clusterId: rkeCluster.id,
   hostnamePrefix: `${RANCHER_PREFIX}-cluster-0`,
   nodeTemplateId: rancherNodeTemplate.id,
   quantity: 1,
   controlPlane: true,
   etcd: true,
   worker: true,
});

At this point, you have added all the Pulumi objects to build a Rancher Kubernetes Engine cluster using Azure as an infrastructure provider. Now we can proceed to build the resources that have been defined in the index.js file.

Execute the Pulumi up command to generate an interactive plan of the changes within the stack. Select the yes option to apply the changes and create the Rancher Kubernetes Cluster.

The RKE cluster will take some minutes before it is fully provisioned and active. In the meantime, you can view the underlying resources created by viewing them through your Rancher management dashboard and the Azure portal.

Summary

Pulumi is a great tool that brings Infrastructure as Code a step closer to you by enabling the provisioning of resources using your preferred programming languages.

Within this blog post, we used the Rancher2 provider for Pulumi to provision a Kubernetes cluster using the Rancher Kubernetes Engine on Azure. The Rancher2 Provider API documentation for Pulumi provides details of other provider APIs you can use to create Rancher resources.

Overall, IaC helps you build consistent cloud infrastructure across multiple environments. As long as the providers used by the IaC tools remain the same, you can always provision new resources by executing a command.


Are You Building Apps “For” or “In” the Cloud

Thursday, 26 August, 2021

There are a few paths to cloud development. You can even write code to create your own cloud. When we look at what it takes to develop an app in this new cloud world, we should look at the type of app. We first need to break down what apps we will build: are we building an app for the cloud or in it? In this short post, I'll break down what I mean by apps for the cloud and apps in the cloud.

First, what are apps in the cloud? Well, these are apps we write to run within a platform, and for most of us they are the vast majority of what we'll be writing. Much of cloud modernization is about modernizing apps into the cloud. This development is in languages like .NET, Java, Node.js, or Python. Since most companies are building apps in the cloud, enablement needs to focus on this: getting your developers trained on best practices in the cloud, such as the concepts outlined in the 12 or 15 factors. Managing data in the cloud is another area of enablement that companies should look into.

On the other side of this discussion, we have writing apps for the cloud. For most of us, this isn't where we work, unless you are Google or contribute to OSS on the weekends. This is development of features and functionality around a platform itself; think of Helm or RabbitMQ. There's nothing wrong with developing software for these purposes, but is it at the core of what your company does? If it's not, then why are you building solutions for it? Individuals developing at this level often have a completely different skill set.

In this context, we're looking at developing apps for infrastructure rather than for a business. There is little business value in building solutions to these problems yourself; if you have these problems, chances are good that other companies do too, and solutions already exist. Some of these solutions are open source and free to use, while others are closed solutions. Then there are my personal favorites: productized open source solutions. They're the best of both worlds, and they come with a few extra features.

If you are in the business of developing solutions for cloud infrastructure, go for it. However, if this is not your business, then reevaluate your options and consider other solutions. Know that others have had the same issues you are having. Using an open source or closed solution can get you back to focusing on your business.