Skuba on SUSE CaaS Platform 4
With SUSE CaaS Platform 4 we listened to our customers' feedback and decided to change what the lifecycle of the platform looks like.
Previous versions of SUSE CaaS Platform included an administrator node that, while useful for managing the whole platform, was another component to take care of and an extra machine to account for when deploying the platform.
This administrator node used Salt to set up and maintain the Kubernetes cluster among the different nodes comprising your cluster.
Your feedback during that time was that more flexibility in deployment would be appreciated, so you could experiment with slightly different setups, even just as proofs of concept while fleshing out the details of your production clusters.
This is why we decided to leverage upstream tools like `kubeadm` to perform the heavy lifting of cluster deployment.
Kubeadm is a great upstream tool, but setting up a fully featured cluster with it can be complicated for many users.
For this reason, we created skuba.
Skuba is a wrapper around kubeadm, which makes it easier for our customers to deploy Kubernetes clusters on different machines. Once your infrastructure is ready, skuba will SSH into your nodes and set them up one by one.
Once you have a machine ready, you can use it to create a new cluster or join it to an existing one. From there, kubeadm makes sure that all prerequisites are met, performs the required actions to start the control plane components (if the machine is going to be a control plane node), and gives the kubelet on the machine the right configuration.
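As a rough sketch of what this looks like in practice (node names, IP addresses, and the SSH user are placeholders, and exact flags may differ between releases; consult the SUSE CaaS Platform documentation for your version):

```shell
# Create a local cluster definition, pointing at the load balancer
# or first control plane node
skuba cluster init --control-plane 10.0.0.10 my-cluster
cd my-cluster

# Bootstrap the first control plane node over SSH
skuba node bootstrap --user sles --sudo --target 10.0.0.10 master-0
```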
Kubeadm’s scope is very well defined. It targets the machine it’s running on; while it performs some cluster-wide changes when a Kubernetes cluster is already running, the changes it makes to the node happen exclusively locally.
Additionally, skuba pulls all control plane components from container images built by SUSE. Doing so ensures the integrity of the source-to-binary transformation, along with the QA provided by SUSE’s Build Service, as part of our proven methodology for building and releasing software.
Along with the control plane components released to our registry (`registry.suse.com`), there is a small number of regular packages that skuba installs automatically on all the nodes that will form the cluster.
To run skuba, your infrastructure should already be created and accepting SSH connections, and the machines should have the required repositories configured so skuba can install its prerequisites. Skuba uses an existing ssh-agent to authenticate against the different machines that form part of your cluster.
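Setting up the ssh-agent is standard OpenSSH practice; the key path below is a placeholder for whichever key your nodes accept:

```shell
# Start an ssh-agent for the current shell session, if one
# is not already running
eval "$(ssh-agent -s)"

# Add the private key that the SSH user on the cluster nodes accepts
ssh-add ~/.ssh/id_rsa

# Verify the key is loaded; skuba will use this agent when
# connecting to the nodes
ssh-add -l
```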
No long-lived administration node
With this new iteration, SUSE CaaS Platform 4 does not require a long-lived administration node as previous versions did.
Instead, you simply need a machine to bootstrap the cluster and, afterwards, to perform lifecycle management: a machine that can reach the nodes you want to join or upgrade. Any machine will do as long as skuba can be installed on it, and it’s only required while join or upgrade operations are taking place.
Bootstrap is just a small part of the lifecycle management story. As mentioned before, skuba needs to SSH into your nodes in order to set them up, but this is only required when a machine has not yet joined the Kubernetes cluster (that is, when we are bootstrapping the machine to form or join a cluster).
Whenever possible, skuba uses the Kubernetes cluster itself to manage its lifecycle.
Adding nodes to the cluster
Skuba handles adding nodes to the cluster. It generates a join configuration for the new node, then calls kubeadm on that node to join it to the existing cluster.
Since the joining node is not yet part of the cluster, this operation requires SSH access to the new node.
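A sketch of joining nodes, run from within the cluster definition directory (names and IPs are placeholders; check the documentation for your release for the exact flags):

```shell
# Join an additional control plane node
skuba node join --role master --user sles --sudo --target 10.0.0.11 master-1

# Join a worker node
skuba node join --role worker --user sles --sudo --target 10.0.0.20 worker-0
```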
Removing nodes from the cluster
Node removal is a final operation, issued only when we don’t plan to reuse a node on any cluster in the future. If you do plan to reuse a node later, reinstalling it is recommended, so it joins in a clean state.
Node removal uses Kubernetes to remove the node from the existing cluster and makes sure that certain cleanup operations happen on the target node.
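Since removal goes through the Kubernetes API rather than SSH, the command only needs the node’s name (a placeholder here), not its address:

```shell
# Drain and delete the node through Kubernetes, then perform
# cleanup on the target machine
skuba node remove worker-0
```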
Upgrading Kubernetes platform versions
SUSE CaaS Platform will release Kubernetes platform upgrades in a timely manner, and skuba takes care of upgrading the nodes to the new platform version, respecting the supported upgrade path when more than one platform minor upgrade is available.
Skuba fetches the control plane components from our registry, and a number of other components as regular packages, on the nodes being upgraded.
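An upgrade might look roughly like the following sketch, one node at a time, control plane first (node name and IP are placeholders, and the exact subcommands may vary by release):

```shell
# Check which platform upgrades are available for the cluster
skuba cluster upgrade plan

# Check what an individual node would receive, then apply it over SSH
skuba node upgrade plan master-0
skuba node upgrade apply --user sles --sudo --target 10.0.0.10
```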
Upgrading essential addons and cluster configurations
We distribute a number of essential addons and default cluster configurations with SUSE CaaS Platform, such as Cilium, Gangway, and Dex. Skuba takes care of upgrading these whenever we release new addon or cluster configuration versions.
What does the future look like for Skuba?
We have forward-looking plans for SUSE CaaS Platform and skuba, such as a complete declarative definition of clusters within SUSE CaaS Platform, along with being able to manage the lifecycle of those clusters in a declarative way.
Along the way there are a number of features and improvements we want to add to skuba, and we are always happy to hear feedback from our customers in order to build the tool that best suits them.
Our priority is our customers. We heard your feedback and created SUSE CaaS Platform 4 as a result. This new version is much more configurable, leaving the door open for your own experimentation and closing the feedback loop that helps make our product an even better fit for you and your needs.
We have plans for the future and for where SUSE CaaS Platform is heading, but in the meantime we’d love to hear your feedback and how we can make your experience better.