What’s new in Kubernetes 1.12
Kubernetes 1.12 will be released this week, on Thursday, September 27, 2018. Version 1.12 ships just three months after Kubernetes 1.11 and marks the third major release of this year. The short cycle is in line with the quarterly release cadence the project has followed since its GA in 2015.
Kubernetes releases 2018
| Kubernetes Release | Date               |
|--------------------|--------------------|
| 1.10               | March 26, 2018     |
| 1.11               | June 27, 2018      |
| 1.12               | September 27, 2018 |
Whether you are a developer using Kubernetes or an admin operating clusters, it’s worth getting an idea about the new features and fixes that you can expect in Kubernetes 1.12.
A total of 38 features are included in this milestone. Let’s have a look at some of the highlights.
Kubelet certificate rotation
Kubelet certificate rotation has been promoted to beta status. This functionality allows for automated renewal of the key and certificate for the kubelet API server as the current certificate approaches expiration. Until the official 1.12 docs have been published, you can read the beta documentation on this feature here.
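On clusters where the relevant feature gates are enabled, rotation can be requested in the kubelet configuration file. The snippet below is a minimal sketch, not a default configuration:

```yaml
# KubeletConfiguration sketch: enables client certificate rotation and
# serving certificate bootstrap/rotation. The RotateKubeletServerCertificate
# feature gate must be enabled for the serving certificate to be rotated.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
rotateCertificates: true   # renew the kubelet client certificate before expiry
serverTLSBootstrap: true   # request and rotate the kubelet serving certificate
featureGates:
  RotateKubeletServerCertificate: true
```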
Network Policies: CIDR selector and egress rules
Two formerly beta features have now reached stable status. The first is the `ipBlock` selector, which allows specifying ingress/egress rules based on network addresses in CIDR notation. The second adds support for filtering the traffic leaving pods by specifying `egress` rules. The example below illustrates the use of both features:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
  # (...)
```
As previously beta features, both `ipBlock` and `egress` are already described in the official network policies documentation.
Mount namespace propagation
Mount namespace propagation, i.e. the ability to mount a volume `rshared` so that any mounts from inside the container are reflected in the root (= host) mount namespace, has been promoted to stable. You can read more about this feature in the Kubernetes volumes docs.
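For illustration, a container can request bidirectional (rshared) propagation through the `mountPropagation` field of a volume mount. The names below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: propagation-demo   # hypothetical name
spec:
  containers:
  - name: mounter
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true     # Bidirectional propagation requires a privileged container
    volumeMounts:
    - name: host-root
      mountPath: /host
      mountPropagation: Bidirectional   # mounts created under /host become visible on the host
  volumes:
  - name: host-root
    hostPath:
      path: /mnt
```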
Taint nodes by condition
This feature, introduced as an early alpha in 1.8, has been promoted to beta. Enabling its feature flag causes the node controller to create taints based on node conditions, and the scheduler to filter nodes based on taints instead of conditions. The official documentation is available here.
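With the feature enabled, the node controller adds taints such as `node.kubernetes.io/unreachable` for the corresponding node conditions, and pods can opt in to tolerating them. A hedged sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  # Tolerate the condition-based taint for five minutes before being evicted.
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300
```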
Horizontal pod autoscaler with custom metrics
While support for custom metrics in HPA continues to be in beta status, version 1.12 adds various enhancements, like the ability to select metrics based on the labels available in your monitoring pipeline. If you are interested in autoscaling pods based on application-level metrics provided by monitoring systems such as Prometheus, Sysdig or Datadog, I recommend checking out the design proposal for external metrics in HPA.
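As a sketch, the new `autoscaling/v2beta2` API lets a metric be narrowed with a label selector; the metric and label names here are assumptions, not values your monitoring pipeline necessarily exposes:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # assumed custom metric
        selector:
          matchLabels:
            verb: GET                    # label filter evaluated by the metrics pipeline
      target:
        type: AverageValue
        averageValue: "100"
```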
If you are just getting underway with Kubernetes, read the Introduction to Kubernetes Monitoring for a good primer; it will help you get the most out of the rest of this article.
RuntimeClass
RuntimeClass is a new cluster-scoped resource “that surfaces container runtime properties to the control plane”. In other words, this early alpha feature will enable users to select and configure (per pod) a specific container runtime (such as Docker, rkt or Virtlet) by setting the `runtimeClassName` field in the `PodSpec`. You can read more about it in these docs.
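A minimal sketch of the alpha API; the handler and class names are assumptions and depend on what is configured on your nodes:

```yaml
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: gvisor              # hypothetical class name
spec:
  runtimeHandler: runsc     # CRI handler assumed to be configured on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod       # hypothetical name
spec:
  runtimeClassName: gvisor  # selects the RuntimeClass defined above
  containers:
  - name: app
    image: nginx
```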
Resource Quota by priority
Resource quotas allow administrators to limit the resource consumption in namespaces. This is especially practical in scenarios where the available compute and storage resources in a cluster are shared by several tenants (users, teams). The beta feature Resource quota by priority allows admins to fine-tune resource allocation within the namespace by scoping quotas based on the PriorityClass of pods. You can find more details here.
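For example, a quota can be scoped so it only counts pods of a given priority class; the quota limits and class name below are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota   # hypothetical name
  namespace: default
spec:
  hard:
    cpu: "10"
    memory: 20Gi
    pods: "10"
  # Only pods whose priorityClassName is "high" count against this quota.
  scopeSelector:
    matchExpressions:
    - scopeName: PriorityClass
      operator: In
      values: ["high"]
```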
Volume snapshots
One of the most exciting new 1.12 features for storage is the early alpha implementation of persistent volume snapshots. This feature allows users to create and restore snapshots of a volume at a particular point in time, backed by any CSI storage provider. As part of this implementation, three new API resources have been added:
`VolumeSnapshotClass` defines how snapshots for existing volumes are provisioned, `VolumeSnapshotContent` represents existing snapshots, and `VolumeSnapshot` allows users to request a new snapshot of a persistent volume like so:
```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-test
spec:
  snapshotClassName: csi-hostpath-snapclass
  source:
    name: pvc-test
    kind: PersistentVolumeClaim
```
For the nitty-gritty details, take a look at the 1.12 documentation branch on GitHub.
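Restoring works in the opposite direction: a new PersistentVolumeClaim can reference a snapshot as its data source (also alpha, behind the `VolumeSnapshotDataSource` feature gate). The PVC and storage class names below are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc                  # hypothetical name
spec:
  storageClassName: csi-hostpath-sc   # assumed CSI storage class
  dataSource:
    name: new-snapshot-test           # the VolumeSnapshot to restore from
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```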
Topology aware dynamic provisioning
Another storage related feature, topology aware dynamic provisioning, was introduced in v1.11 and has been promoted to beta in 1.12. It addresses some limitations with dynamic provisioning of volumes in clusters spread across multiple zones where single-zone storage backends are not globally accessible from all nodes.
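In practice this is driven by the StorageClass: binding and provisioning are delayed until a consuming pod is scheduled, and provisioning can be restricted to specific zones. The provisioner and zone values below are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware              # hypothetical name
provisioner: kubernetes.io/gce-pd   # assumed provisioner
# Delay volume binding until a pod using the claim is scheduled,
# so the volume is created in a zone reachable from that node.
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b
```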
Enhancements for Azure Cloud provider
These two improvements regarding running Kubernetes in Azure are shipping in 1.12:
Cluster autoscaler support
Azure availability zone support
Kubernetes v1.12 adds alpha support for Azure availability zones (AZ). Nodes in an availability zone will be added with the label `failure-domain.beta.kubernetes.io/zone=<region>-<AZ>`, and topology-aware provisioning is added for the Azure managed disks storage class.
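Since the zone is exposed as an ordinary node label, workloads can be pinned to a zone with node affinity; the zone value here is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zoned-pod         # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - eastus-1    # <region>-<AZ>, illustrative value
  containers:
  - name: app
    image: nginx
```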
Kubernetes 1.12 contains many bug fixes and improvements to internal components, clearly focusing on stabilising the core, maturing existing beta features and improving release velocity by adding more automated tests to the project's CI pipeline. A noteworthy example of the latter is the addition of CI e2e conformance tests for the arm, arm64, ppc64, s390x and Windows platforms to the project's test harness.
For a full list of changes in 1.12 see the release notes.
Rancher will support Kubernetes 1.12 on hosted clusters as soon as it becomes available on the respective provider. For RKE-provisioned clusters, it will be supported starting with Rancher 2.2.