Thanks to Sascha Grunert for the technical content of this post. In addition to being a member of the Containers Squad of the SUSE CaaS Platform team, Sascha is Technical Lead in the Kubernetes Release Engineering Subproject, which is part of SIG Release. He participated in many Kubernetes release cycles from different roles and is thrilled to give you an update about the next version.
SUSE congratulates the Kubernetes Project on another evolution of the most popular container orchestration and management platform, which forms the basis of our SUSE CaaS Platform. You can expect to see Kubernetes 1.19 supported in a future SUSE release.
Now let’s turn the keyboard over to Sascha…
On August 27th the Special Interest Group (SIG) Release published the next milestone in container orchestration software. This time we decided to stretch the release cycle from the usual three months to more than four. The general idea behind this decision was to slow down development and reduce the stress for the contributors and consumers of Kubernetes, particularly at a time when so many of us are making work process adjustments for COVID-19. Let’s have a look at whether we managed to achieve that goal.
At the time of writing this blog post (a few days before the release), there are 3757 commits that made it into the master branch between v1.18.0 and the latest release candidate (RC) v1.19.0-rc.4. But not all of those commits contain user-facing changes: only 395 release notes are part of the latest RC. That’s about 100 release notes more than usual, which maps directly to the increased release cycle time span.
A good way to get an overview of what is currently going on in Kubernetes development is the current Release Notes site. Let’s look at some highlights of the release.
Two Common Vulnerabilities and Exposures (CVEs) are fixed in Kubernetes v1.19.0.
The first one is CVE-2020-8559, which allows a privilege escalation from a node inside the cluster. This means if it is possible to intercept certain requests to the Kubelet, then an attacker could send a redirect response that may be followed by a client request using the credentials from the original request. This can lead to compromise of other nodes inside the cluster.
The other fixed vulnerability is CVE-2020-8557, which allows a Denial of Service (DoS) via a mounted /etc/hosts file inside a container. If a container writes a huge amount of data to the /etc/hosts file, it can fill the storage space of the node and cause the node to fail. The root cause of this issue was that the kubelet did not evict this part of the ephemeral storage.
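With the fix in place, writes to the pod-managed /etc/hosts count against a pod’s ephemeral storage, so the usual limits apply. As a sketch (pod name and image are illustrative), a bounded ephemeral-storage limit lets the kubelet evict a runaway writer before the node fills up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bounded-storage          # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.32          # illustrative image
    command: ["sleep", "3600"]
    resources:
      limits:
        ephemeral-storage: "100Mi"  # node-local scratch space; exceeding it triggers eviction
```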
Seccomp, or Secure Computing Mode, has graduated to General Availability (GA). Seccomp makes it possible for DevSecOps teams to create “guard rails” around containers or pods by guaranteeing that they are permitted to execute only the system calls their code requires. This prevents many privilege escalation exploits. A new seccompProfile field has been added to the pod and container securityContext objects. (Support for the container.seccomp.security.alpha.kubernetes.io/... annotations is now deprecated and will be removed in Kubernetes v1.22.0. For now, automatic version skew handling converts the new field into the annotations and vice versa.)
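A minimal sketch of the GA field (pod name and image are illustrative) applies the container runtime’s default seccomp profile to all containers in the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo             # illustrative name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault       # use the container runtime's default profile
  containers:
  - name: app
    image: busybox:1.32          # illustrative image
    command: ["sleep", "3600"]
```

Instead of RuntimeDefault, the type can also be Localhost (with a localhostProfile path relative to the kubelet’s seccomp directory) or Unconfined.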
The Kube Scheduler configuration API kubescheduler.config.k8s.io has graduated to v1beta1. This API makes it easier for developers of alternate scheduler plugins to configure their code without tinkering with command line options. Some manual action is required when moving to the new API, but the corresponding Kubernetes Enhancement Proposal (KEP) should provide all the necessary details about the change.
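As a sketch of what such a configuration looks like (the kubeconfig path and the disabled plugin are illustrative), a file like this is passed to the scheduler via its --config flag:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf   # illustrative path
profiles:
- schedulerName: default-scheduler
  plugins:
    score:
      disabled:
      - name: NodeResourcesLeastAllocated      # example: switch off one default scoring plugin
```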
The Kube API Server’s componentstatus API is now deprecated. This API previously provided the status of control plane components like the kube-scheduler and the kube-controller-manager. A main drawback of this API was that it only worked when those components were local to the API server and exposed unsecured health endpoints. Instead of this API, etcd health is now included in the kube-apiserver health check, and kube-scheduler and kube-controller-manager health checks can be made directly against those components’ health endpoints. This provides mechanisms for health checks that are both externally accessible and more secure.
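A quick sketch of what these checks look like in practice (ports and hosts are illustrative; the second command assumes you are on the control plane node):

```shell
# Aggregated readiness of the API server, including the etcd check
kubectl get --raw='/readyz?verbose'

# Query a component's own secure health endpoint directly
# (default secure ports: kube-controller-manager 10257, kube-scheduler 10259)
curl -k https://127.0.0.1:10257/healthz
```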
CustomResourceDefinitions now feature support for marking versions as deprecated by setting spec.versions[*].deprecated to true, and for optionally overriding the default deprecation warning with a spec.versions[*].deprecationWarning field. This makes it possible for developers of Custom Resources and Custom Resource Definitions (CRDs) to prepare consumers of their enhancements for the need to upgrade their code.
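A minimal sketch of the new fields (the group, kind, and warning text are illustrative) marks an old version as deprecated while a newer one serves as storage version; clients that use the deprecated version receive the warning:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com            # illustrative CRD
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
    singular: widget
  versions:
  - name: v1alpha1
    served: true
    storage: false
    deprecated: true                   # clients get a warning when using this version
    deprecationWarning: "example.com/v1alpha1 Widget is deprecated; use example.com/v1"
    schema:
      openAPIV3Schema:
        type: object
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```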
setHostnameAsFQDN is a new field in the PodSpec. When it is set to true, the fully qualified domain name (FQDN) of a pod is set as the hostname of its containers. This helps eliminate confusion in the management of hostnames.
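A minimal sketch of the field (names are illustrative; the subdomain assumes a matching headless Service exists for pod DNS):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo                # illustrative name
spec:
  hostname: demo
  subdomain: my-subdomain        # assumes a headless Service named my-subdomain
  setHostnameAsFQDN: true        # hostname inside the containers becomes the pod's FQDN
  containers:
  - name: app
    image: busybox:1.32          # illustrative image
    command: ["sleep", "3600"]
```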
CoreDNS has been updated to v1.7.0. This is a backwards-incompatible release; for example, the metrics names have been changed, and the federation plugin has been removed.
The Kubelet now features initial support for cgroups v2. This means that it is now possible to run the kubelet on hosts which use cgroups v2 unified or standalone mode.
Kubernetes now uses Golang version 1.15.0 for its builds. Besides that, Kubernetes no longer supports building hyperkube images. If you’ve been relying on those in the past, please switch over to the new Debian distroless container images.
There have been many changes in conjunction with kubeadm. For example, the kubeadm config view command has been deprecated and will be removed in a future release. Kubeadm now distinguishes between generated and user-supplied component configs, regenerating the former if a config upgrade is required.
Another change to highlight is that kubeadm now respects user-specified etcd versions in the ClusterConfiguration and properly uses them. Kubeadm also respects the resolvConf value set by the user even if the systemd-resolved service is active. Finally, the kubeadm config upload command has been removed after its deprecation. If you still rely on it, please use kubeadm init phase upload-config instead.
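The migration is a straight command swap; a sketch (the config file name is illustrative):

```shell
# Removed in v1.19:
#   kubeadm config upload from-file --config=kubeadm.yaml
# Replacement:
kubeadm init phase upload-config kubeadm --config=kubeadm.yaml
```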
Some API versions have been deprecated, like apiextensions.k8s.io/v1beta1 (in favor of apiextensions.k8s.io/v1), apiregistration.k8s.io/v1beta1 (in favor of apiregistration.k8s.io/v1) and autoscaling/v2beta1 (in favor of autoscaling/v2beta2).
Endpoints are now mirrored to EndpointSlices by a new EndpointSliceMirroring controller. The Kube Proxy now consumes EndpointSlices instead of Endpoints by default on Linux.
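You can inspect the mirrored resources directly; a sketch (the service name is illustrative), using the well-known label that ties a slice back to its Service:

```shell
# List the EndpointSlices backing a given Service
kubectl get endpointslices -l kubernetes.io/service-name=my-service
```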
The CertificateSigningRequest API has been promoted to certificates.k8s.io/v1. This implies some changes to the API; for example, spec.signerName is now required, and requests for the kubernetes.io/legacy-unknown signer are not allowed to be created via the certificates.k8s.io/v1 API.
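A minimal sketch of a v1 CSR object (the name is illustrative and the request value is a placeholder, not real CSR data) showing the now-mandatory signerName:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-csr                                      # illustrative name
spec:
  request: "<base64-encoded PKCS#10 CSR data>"      # placeholder
  signerName: kubernetes.io/kube-apiserver-client   # required in v1
  usages:
  - client auth
```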
We have many changes to kubectl as well. For example, it is now supported to create a deployment with a specified number of replicas and a specified container port. kubectl annotate now has a --list option, whereas the deprecated --server-dry-run flag has been removed from kubectl apply. kubectl config view now redacts bearer tokens by default, in the same way it already did for client certificates. The kubectl alpha debug command now supports debugging pods by creating modified copies of the original ones. It also supports debugging nodes by creating a debugging container running in the host namespaces of the node.
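A sketch of the two debugging workflows (pod, node, and image names are illustrative; both assume a running cluster):

```shell
# Debug a running pod by creating a modified copy with an extra container
kubectl alpha debug mypod -it --image=busybox --copy-to=mypod-debug

# Debug a node via an interactive container sharing the node's host namespaces
kubectl alpha debug node/mynode -it --image=busybox
```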
This is by no means a complete list of the changes in Kubernetes v1.19.0. As usual, every Kubernetes release comes with a huge bunch of fixes, features, deprecations and enhancements. If you’d like to dive deeper into the recent changes, I recommend taking a look at the current release notes.