If you run a larger application that consists of several containers, you want to handle all these containers together – launch them, update them, monitor them, and so on.

While you can develop and run such a containerized application on your single workstation, once the application is ready for production, you might want to distribute it over multiple machines for load balancing and availability. Container orchestration takes care of provisioning each container on a machine that has capacity for it.

Kubernetes (often abbreviated as “k8s”) is open source software for container orchestration. It allows you to deploy a group of containers together, scale them if the load increases, restart containers when they crash, and so on.

Kubernetes was initially written by Google engineers as the third iteration of a container orchestration engine – building upon their knowledge of how to run containerized applications at scale.

Kubernetes is now developed by an active Open Source community and is part of the Cloud Native Computing Foundation, a Linux Foundation project.

Running at scale means operating highly available services that serve users globally and can be updated several times an hour to roll out new features – all without service disruption. Kubernetes also takes care of networking – connecting containers with each other – and of providing storage for containers. Kubernetes uses a cluster of worker nodes to run these services.

The word Kubernetes is of Greek origin and means helmsman or pilot. So, you can think of Kubernetes as the pilot of a ship of containers.

Let’s take a look at the Kubernetes architecture.

Kubernetes consists of several components – an API server, a scheduler, and a controller.

A Kubernetes cluster typically consists of the following:

  • A master node that runs the control plane components.
  • A set of worker nodes that run the containers.
  • A scheduler that places containers on the worker nodes – running on the master node.
  • An API server, running on the master node.
  • A persistence layer backed by etcd, a distributed key-value store that Kubernetes uses to store its state information.
  • A controller that reconciles the actual state of the cluster with the desired state.

Kubernetes itself can run on bare-metal machines, in virtual machines, and in private or public clouds.
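
Once a cluster is up, you can take a first look at it with kubectl, the Kubernetes command line client – a minimal sketch, assuming kubectl is already configured to talk to your cluster:

  kubectl cluster-info   # shows where the API server and other cluster services run
  kubectl get nodes      # lists the cluster's nodes and their status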

[Figure: SUSE CaaS Platform Architectural Overview]

So, Kubernetes will track the state of the cluster, manage networking, schedule running containers on worker nodes, and monitor the containers and worker nodes.

As Carla Schroder writes, it is called “Production-Grade Container Orchestration” because Kubernetes is like the conductor of a manic orchestra, with a large cast of players that constantly come and go.

Now, let’s look at some basic concepts and cool features of Kubernetes:

Pods: These are the smallest deployable units in Kubernetes. A pod is a group of containers that are deployed together on the same host and share network and storage. A pod can also consist of just a single container.
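
As an illustration, a minimal pod manifest could look like the following – the pod name, container names, and images are hypothetical examples, not from this article:

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-pod        # illustrative name
  spec:
    containers:
    - name: web
      image: nginx           # any web server image would do
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "sleep 3600"]   # keep the sidecar running

Because both containers share the pod's network namespace, the sidecar can reach the web server at localhost:80. You would create the pod with kubectl apply -f pod.yaml.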

Scale out: If the load on your application increases, you can start more pods – instead of allocating additional memory and CPU to the existing ones. Kubernetes can observe CPU usage or an application-provided metric to scale out pods automatically. You can also scale out manually.

This means your application needs to be written so that it can handle scale out and balance work across several pods.
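
As a sketch of both variants – assuming your application runs as a Deployment named example-app, which is a hypothetical name:

  # manual scale out to five pods
  kubectl scale deployment example-app --replicas=5

  # automatic scale out between 2 and 10 pods, based on CPU usage
  kubectl autoscale deployment example-app --min=2 --max=10 --cpu-percent=80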

Rolling updates: If you have a scaled-out application, it is easy to do updates as rolling updates: you start a new pod using the updated container image, then stop an old one, start another new one – until all containers running the old image have been stopped and only containers running the new image remain. Kubernetes will do this rolling update automatically for you with a simple command. And if the new version turns out to be broken, you can easily go back to the previous one.
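
For example, with the same hypothetical Deployment example-app, a rolling update and a rollback come down to:

  # roll out a new image, pod by pod
  kubectl set image deployment/example-app web=nginx:1.13

  # watch the rollout progress
  kubectl rollout status deployment/example-app

  # go back to the previous version if the new one is broken
  kubectl rollout undo deployment/example-app

Here web is the container name from the pod template, and nginx:1.13 stands in for whatever updated image you want to roll out.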

All these features of Kubernetes will help you in running and managing your containerized applications.

Our goal at SUSE is to make it easy for you to set up and manage a cluster that runs Kubernetes and then run your containerized applications on this cluster. For this we are currently developing the SUSE Containers as a Service Platform.

