Introduction


SUSE CaaS Platform significantly reduces the effort needed to manage the fleet of nodes powering a Kubernetes cluster; however, there is always room for improvement. By taking advantage of the flexibility provided by modern Infrastructure as a Service platforms (also known as IaaS or “clouds”), it is possible to automate even more operational aspects.

To facilitate this automation, Kubernetes integrates with different clouds through the “Cloud Provider Interface” (CPI).
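Under the hood, an in-tree cloud provider is switched on through generic upstream Kubernetes flags passed to kube-apiserver, kube-controller-manager and the kubelet. SUSE CaaS Platform configures this for you; the flags are shown here purely for illustration:

--cloud-provider=openstack
--cloud-config=/etc/kubernetes/cloud.conf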

SUSE CaaS Platform Cloud Provider Integration


SUSE CaaS Platform 3 leverages the Kubernetes Cloud Provider modules to provide seamless integration with different cloud solutions.

This allows SUSE CaaS Platform users to take advantage of the underlying IaaS to automatically manage resources like load balancers, nodes (instances), network routes and storage services, in both private and public clouds.

What are the benefits provided by this integration? Let’s assume we want to deploy a simple web application that makes use of a database to store its data, something like a WordPress-powered website.

The simplified “Kubernetes bill of materials” would consist of the following items:

  • a number of pods running the web frontend,
  • a number of pods running the database,
  • a persistent volume to store the database data.
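As a minimal sketch of how these items could be created (the image names, replica counts and file name below are illustrative assumptions, not part of an actual WordPress deployment guide):

$ kubectl run wordpress --image=wordpress --replicas=3 --port=80
$ kubectl run wordpress-db --image=mysql:5.6 --replicas=1 --env=MYSQL_ROOT_PASSWORD=changeme
$ kubectl create -f db-pvc.yaml   # a PersistentVolumeClaim like the one shown later in this post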

With the CPI integration enabled, SUSE CaaS Platform can allocate the persistent storage using the cloud’s native storage. In the case of a deployment on top of OpenStack, it would seamlessly use a Cinder Block Storage volume.

Running our application is not enough, as we still have to find a way to expose it to our end users. To do that, the cloud integration allows us to seamlessly use a Load Balancer as a Service provided by the underlying IaaS. Going back to the OpenStack example, we would be using an OpenStack Neutron load balancer. Kubernetes also takes care of keeping the configuration of this load balancer up to date.

As you might have noted, the cloud provider integration shields the end user from the implementation details of the underlying platform. It also provides an abstraction layer that allows seamless migrations from one cloud provider to another, allowing users to avoid vendor lock-in.

SUSE CaaS Platform on OpenStack with CPI


SUSE CaaS Platform has always been capable of running on top of OpenStack. However, starting with version 3.0, it is possible to have full integration with this cloud infrastructure.

To deploy SUSE CaaS Platform on OpenStack you must first download the image called SUSE-CaaS-Platform-3.0-for-OpenStack-Cloud.x86_64.qcow2 (click “Free Downloads” on the SUSE web site, www.suse.com).

Next, upload the image to the OpenStack Glance service. Then you can deploy SUSE CaaS Platform using the Heat templates available in this GitHub repository: https://github.com/SUSE/caasp-openstack-heat-templates.
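For example, the image upload can be performed with the standard OpenStack client (the image name caasp-3.0 is just a label of your choosing, and the template and environment file names below are placeholders; the actual ones are documented in the repository itself):

$ openstack image create --disk-format qcow2 --container-format bare \
    --file SUSE-CaaS-Platform-3.0-for-OpenStack-Cloud.x86_64.qcow2 caasp-3.0
$ openstack stack create -t caasp-stack.yaml -e caasp-environment.yaml caasp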

For more details about deploying SUSE CaaS Platform on OpenStack, please refer to the official documentation at https://www.suse.com/documentation/suse-caasp-3.

Once all the nodes are created, open the web UI running on the admin node and initiate the cluster configuration wizard. On the first configuration screen you will see a new “Cloud provider integration” box. Enable it and fill in the values requested by the form.


The settings requested by the OpenStack Cloud Provider Integration include the following (a sketch of the resulting configuration file follows the list):

  • OpenStack Keystone API URL: URL of the Keystone API used to authenticate the user. This value can be found in the OpenStack dashboard under “Access and Security” > “API Access” > “Credentials”.
  • Domain ID or Domain name (optional): the ID or name of the domain your user belongs to.
  • Project ID or Project name (optional): the ID or name of the project where you want to create your resources.
  • Region name: the identifier of the region to use when running on a multi-region OpenStack cloud. A region is a general division of an OpenStack deployment.
  • Username: the username of a valid user set in Keystone.
  • Password: the password of a valid user set in Keystone.
  • Subnet UUID for the CaaS Platform private network: the identifier of the subnet you want to create your load balancer on. This value can be found in the OpenStack dashboard under “Network” > “Networks” – click on the respective network to get its subnets.
  • Floating network UUID (optional): when specified, a floating IP will be created for the load balancer.
  • Load balancer monitor max retries: the number of ping failures before changing the load balancer member’s status to INACTIVE.
  • Cinder Block Storage API version: overrides automatic version detection. Valid values are v1, v2, v3 and auto. When auto is specified, automatic detection selects the highest version supported by the OpenStack cloud.
  • Ignore Cinder availability zone: influences availability zone use when attaching Cinder volumes. When Nova and Cinder have different availability zones, this should be set to “True”.
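Under the hood, these form fields map to entries in the Kubernetes OpenStack cloud provider configuration file. A minimal sketch with placeholder values, assuming the standard upstream key names, might look like this:

# /etc/kubernetes/cloud.conf – illustrative values only
[Global]
auth-url=https://keystone.example.net:5000/v3
domain-name=default
tenant-name=my-project
username=my-user
password=my-password
region=RegionOne

[LoadBalancer]
subnet-id=<subnet-uuid>
floating-network-id=<floating-network-uuid>
monitor-max-retries=3

[BlockStorage]
bs-version=auto
ignore-volume-az=true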

The integration with the OpenStack cloud will vary depending on the services offered by the cloud and on the list of extensions published by Neutron.
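You can check which extensions your Neutron endpoint advertises with the standard OpenStack client:

$ openstack extension list --network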

OpenStack integration in action, persistent volumes


The next paragraphs illustrate how to take advantage of the OpenStack integration to have Kubernetes automatically create persistent volumes backed by Cinder Block Storage.

First we need to create a Cinder StorageClass:

$ cat > cinder-sc.yaml <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cinder
provisioner: kubernetes.io/cinder
EOF
$ kubectl create -f cinder-sc.yaml
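To confirm that the StorageClass was registered:

$ kubectl get storageclass cinder

Optionally (this step is not required for the example below), you can mark it as the cluster default so that claims which do not name a StorageClass are also served by Cinder:

$ kubectl patch storageclass cinder -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'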

Next, we will need to create a PersistentVolumeClaim with that StorageClass:

$ cat > cinder-pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-vol
spec:
  storageClassName: cinder
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
$ kubectl create -f cinder-pvc.yaml
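Shortly afterwards the claim should be bound to a freshly provisioned Cinder volume; the STATUS column should read Bound:

$ kubectl get pvc cinder-vol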

Now we can deploy a simple container that makes use of the persistent storage we just allocated:

$ cat > busybox-with-cinder.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: busybox
      imagePullPolicy: IfNotPresent
      command:
        - sleep
        - "3600"
      volumeMounts:
        - mountPath: "/data"
          name: data
  restartPolicy: Always
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: cinder-vol
EOF
$ kubectl create -f busybox-with-cinder.yaml
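One way to watch the pod come up:

$ kubectl get pod busybox -w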

Wait until the pod is up and running, then write some example data to /data, where the Cinder volume was mounted:

$ kubectl exec -it busybox -- /bin/sh -c 'echo "Hello World!" > /data/test'
$ kubectl delete pod busybox --now

Wait for the pod to be deleted and then create it once again.

$ kubectl create -f busybox-with-cinder.yaml

As you will see, the previously created file is still available inside the freshly started container, and its contents are exactly what we expect them to be:

$ kubectl exec -it busybox -- /bin/sh -c 'cat /data/test'
Hello World!


OpenStack integration in action, load balancer as a service

In the next paragraphs we will deploy a simple web application and expose it to external users via a Kubernetes service.

Let’s start by deploying a simple “Hello World” example application made by Google:

$ kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
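You can check that all five replicas are running by selecting on the label we just applied:

$ kubectl get pods -l run=load-balancer-example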

Then expose the application by declaring a Kubernetes service of type “LoadBalancer”:

$ kubectl expose deployment hello-world --type=LoadBalancer --name=hello

After some time, check the status of the service:

$ kubectl get svc -o wide

The output should look like the following:

$ kubectl get svc -o wide
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE   SELECTOR
hello        LoadBalancer   172.24.39.28   <External_IP>   8080:31869/TCP   23m   run=load-balancer-example
kubernetes   ClusterIP      172.24.0.1     <none>          443/TCP          6d    <none>

The web application can be accessed from your web browser by visiting the EXTERNAL-IP (10.84.73.198 in this case) on port 8080.


Let’s take a look at what happened behind the scenes. When we exposed the application with a service of type “LoadBalancer”, Kubernetes interacted with OpenStack and created a load balancer using Neutron. It then took care of requesting a floating IP address for it (10.84.73.198) and reserved port 8080 for our application.
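If you are curious, the load balancer is also visible from the OpenStack side. Depending on whether your cloud uses Neutron LBaaS v2 or Octavia, one of the following generic client commands should list it:

$ neutron lbaas-loadbalancer-list
$ openstack loadbalancer list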


The Kubernetes service is exposing our application on all the nodes of the cluster on a randomly assigned port (31869 in this case). The configuration of the load balancer has to be updated whenever a worker node is added to or removed from the cluster. Thanks to the Kubernetes CPI integration there’s no need for the cluster operator to perform this task; the configuration is always kept in sync by Kubernetes.
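The assigned node port can be read directly from the service object:

$ kubectl get svc hello -o jsonpath='{.spec.ports[0].nodePort}'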

SUSE CaaS Platform integration with Public Clouds


As mentioned before, SUSE CaaS Platform 3 also features integration with public cloud offerings like Amazon AWS, Google GCE, and Microsoft Azure. The integration is similar to the one we illustrated with OpenStack; we will cover it more in depth in another blog post.


