SUSE CaaS Platform 4.0.3 Release Notes
- 1 About the Release Notes
- 2 Changes in 4.0.3
- 3 Changes in 4.0.2
- 4 Changes in 4.0.1
- 5 Known Issues
- 6 Supported Platforms
- 7 What Is New
- 7.1 Base Operating System Is Now SLES 15 SP1
- 7.2 Software Now Shipped As Packages Instead Of Disk Image
- 7.3 More Containerized Components
- 7.4 New Deployment Methods
- 7.5 Updates Using Kured
- 7.6 Automatic Installation Of Packages For Storage Backends Discontinued
- 7.7 Changes to the Kubernetes Stack
- 7.8 Centralized Logging
- 7.9 Obsolete Components
- 8 Known Issues
- 9 Support and Life Cycle
- 10 Support Statement for SUSE CaaS Platform
- 11 Documentation and Other Information
- 12 Obtaining Source Code
- 13 Legal Notices
SUSE CaaS Platform is an enterprise-ready Kubernetes-based container management solution.
1 About the Release Notes #
The most recent version of the Release Notes is available online at https://www.suse.com/releasenotes or https://documentation.suse.com/suse-caasp/4/.
Entries can be listed multiple times if they are important and belong to multiple sections.
Release notes usually only list changes that happened between two subsequent releases. Certain important entries from the release notes documents of previous product versions may be repeated. To make such entries easier to identify, they contain a note to that effect.
2 Changes in 4.0.3 #
Prometheus and Grafana: official monitoring solution for SUSE CaaS Platform
Airgap: format change of https://documentation.suse.com/external-tree/en-us/suse-caasp/4/skuba-cluster-images.txt
389-ds fixes (see below)
skuba fixes (see below)
2.1 Prometheus and Grafana: official monitoring solution for SUSE CaaS Platform #
Prometheus and Grafana were already documented, but the documentation was based on upstream helm charts and containers.
In version 4.0.3, official SUSE helm charts and containers are available in the helm chart repository (kubernetes-charts.suse.com) and the container registry (registry.suse.com).
2.2 Airgap: Format change #
The format of https://documentation.suse.com/external-tree/en-us/suse-caasp/4/skuba-cluster-images.txt was changed to express more data, specifically the skuba and SUSE CaaS Platform versions, so that the images to be pulled can be matched with the respective version.
This way, you can run air gapped production and staging clusters with different SUSE CaaS Platform versions.
2.3 Required Actions #
2.3.1 Prometheus and Grafana installation instructions #
You will need to use helm and kubectl to deploy Prometheus and Grafana.
Refer to: Monitoring chapter in the SUSE CaaS Platform admin guide
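As a rough sketch, the deployment follows the usual helm workflow. The chart names (suse/prometheus, suse/grafana), the monitoring namespace, and the helm 2 `--name` syntax below are assumptions; consult the Monitoring chapter for the exact chart names and configuration values:

```shell
# Sketch only: chart names, namespace, and helm 2 syntax are assumptions;
# the Monitoring chapter of the Admin Guide is authoritative.
CHART_REPO_URL="https://kubernetes-charts.suse.com"
NAMESPACE="monitoring"   # hypothetical namespace for the monitoring stack

if command -v helm >/dev/null 2>&1 && command -v kubectl >/dev/null 2>&1; then
    {
        # Register the SUSE chart repository and refresh the chart index.
        helm repo add suse "$CHART_REPO_URL" &&
        helm repo update &&
        # Create a namespace and install both charts into it.
        kubectl create namespace "$NAMESPACE" &&
        helm install --name prometheus --namespace "$NAMESPACE" suse/prometheus &&
        helm install --name grafana --namespace "$NAMESPACE" suse/grafana
    } || echo "installation did not complete; check cluster access and chart names" >&2
else
    echo "helm and kubectl are required; skipping" >&2
fi
```

In practice you would also pass `--values` files to both installs to configure storage, ingress, and credentials, as described in the Monitoring chapter.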
2.3.2 389-ds update instructions #
389-ds containers have been updated in registry.suse.com (see Bugs fixed below).
To deploy your 389-ds container, see Configuring an External LDAP Server in the SUSE CaaS Platform admin guide
2.3.3 skuba update instructions #
Update skuba on your management workstation as you would any other package.
Refer to: SUSE Linux Enterprise Server 15 SP1 Admin Guide: Updating Software with Zypper
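This is the standard zypper workflow; a minimal sketch (assuming the management workstation uses zypper and the update repositories are configured):

```shell
# Update only the skuba package on the management workstation.
PKG="skuba"

if command -v zypper >/dev/null 2>&1 && command -v sudo >/dev/null 2>&1; then
    {
        sudo zypper refresh &&        # refresh repository metadata
        sudo zypper update "$PKG"     # update just the skuba package
    } || echo "update did not complete; check root privileges and repositories" >&2
else
    echo "zypper not available on this system; skipping" >&2
fi
```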
2.4 Documentation changes #
Added/updated information about 389-ds deployment and configuration
Added information about subnet sizing to the deployment guide system requirements
Added information on using a cluster-wide root CA to the admin guide
Added a note about the NTP client requirement for the management workstation
Unified use of placeholders in code examples to the <PLACEHOLDER> format
Various minor formatting and wording fixes
2.5 Bugs fixed in 4.0.3 since 4.0.2 #
bsc#1156667 [Prometheus and Grafana] - User "system:serviceaccount:monitoring:prometheus-kube-state-metrics" cannot list resource
bsc#1140533 [Prometheus and Grafana] - Prometheus and grafana images and helm charts on registry.suse.com
bsc#1155173 [skuba] - skuba node upgrade does not really upgrade node successfully
bsc#1151689 [skuba] - Default verbosity hides most errors
bsc#1151340 [389-ds] - ERR - add_new_slapd_process - Unable to start slapd because it is already running as process 8
bsc#1151343 [389-ds] - The config /etc/dirsrv/slapd-*/dse.ldif can not be accessed. Attempting restore
bsc#1151414 [389-ds] - NOTICE - dblayer_start - Detected Disorderly Shutdown last time Directory Server was running, recovering database.
bsc#1157332 [patterns-caasp] - caasp-release rpm not installed - probably should be included in the patterns?
3 Changes in 4.0.2 #
Note
Core addons are addons deployed automatically by skuba when you bootstrap a cluster. Namely:
Cilium
Dex
Gangway
Kured
Default Pod Security Policies (PSPs)
The skuba addon command has been introduced to handle core addons:
skuba addon upgrade plan will inform you about which core addons will be upgraded
skuba addon upgrade apply will upgrade core addons in the current cluster
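The two subcommands above can be sketched as follows; run them from the cluster definition directory on the management workstation:

```shell
# Sketch: inspect, then apply, core addon upgrades with skuba.
PLAN_CMD="skuba addon upgrade plan"
APPLY_CMD="skuba addon upgrade apply"

if command -v skuba >/dev/null 2>&1; then
    $PLAN_CMD      # list which core addons would be upgraded
    $APPLY_CMD     # upgrade core addons in the current cluster
else
    echo "skuba not installed; skipping" >&2
fi
```

Note the warning in Required Actions below: applying addon upgrades reverts addon settings to their defaults.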
3.1 Required Actions #
When using skuba addon upgrade apply, all settings of all addons will be reverted to the defaults. If you have modified the default settings of core addons, make sure to reapply your changes after running skuba addon upgrade apply.
Warning
If you have not applied 4.0.1, the gangway configuration will be reverted to the defaults when applying this update; you will have to reapply your changes to addons/gangway/gangway.yaml after running skuba addon upgrade apply.
3.2 Bugs fixed in 4.0.2 since 4.0.1 #
bsc#1145568 [remove-node] failed disarming kubelet due to 63 character limitation
bsc#1145907 LB dies when removing a master node in VMWare
bsc#1146774 AWS: pod to service connectivity broken in certain cases
bsc#1148090 Multinode cluster upgrade fails on 2nd master due to TLS handshake timeout
bsc#1148412 Gangway uses CSS stylesheet from cloudflare.com
bsc#1148524 Allow easy recovery from bootstrap failed during add-ons deployment phase
bsc#1148700 worker node upgrade needs to use kubeletVersion in nodeVersionInfoUpdate type
bsc#1149637 Misspelling of bootstrapping in a common error message
bsc#1153913 Cannot bootstrap a new cluster if a valid kubectl config is present
bsc#1153928 Reboot can be triggered before skuba-update finishes
bsc#1154085 skuba node upgrade shows component downgrade
bsc#1154754 oauth2: cannot fetch token after 24 hours
4 Changes in 4.0.1 #
Updated Gangway container image (see Section 4.1, “Required Actions”)
Various bug fixes and improvements
4.1 Required Actions #
4.1.1 Update the gangway image #
The gangway image that shipped with SUSE CaaS Platform 4.0 must be updated manually by performing the following steps:
Delete the gangway deployment completely:
kubectl delete -f addons/gangway/gangway.yaml
Delete the original image from the node where gangway is running:
sudo crictl rmi registry.suse.com/caasp/v4/gangway:3.1.0
Re-apply the gangway deployment:
kubectl apply -f addons/gangway/gangway.yaml
5 Known Issues #
You must update the gangway container image manually after the update (see Section 4.1, “Required Actions”).
For a full list of Known Issues refer to: Bugzilla.
6 Supported Platforms #
This release supports deployment on:
SUSE OpenStack Cloud 8
VMWare ESXi 6.7.0.20000
Bare metal
(SUSE CaaS Platform 4.0.3 supports hardware that is certified for SLES through the YES certification program. You will find a database of certified hardware at https://www.suse.com/yessearch/.)
7 What Is New #
7.1 Base Operating System Is Now SLES 15 SP1 #
The previous version used a minimal OS image called MicroOS. SUSE CaaS Platform 4 uses standard SLES 15 SP1 as the base platform OS. SUSE CaaS Platform can be installed as an extension on top of that. Because SLES 15 is designed to address both cloud-native and legacy workloads, these changes make it easier for customers who want to modernize their infrastructure by moving existing workloads to a Kubernetes framework.
Transactional updates are available in SLES 15 SP1 as a technical preview, but SUSE CaaS Platform 4 will initially ship without the transactional-update mechanism enabled. The regular zypper workflow allows interruption-free node updates and reboots. The SLES update process should help customers integrate a Kubernetes platform into their existing operational infrastructure more easily; nevertheless, transactional updates remain the preferred process for some customers, which is why we provide both options.
7.2 Software Now Shipped As Packages Instead Of Disk Image #
In the previous version, the deployment of the software was done by downloading and installing a disk image with a pre-baked version of the product. In SUSE CaaS Platform 4, the software is distributed as RPM packages from an extension module in SLES 15 SP1. This move to containers and standard SUSE Linux Enterprise Server packaging mainly gives customers more deployment flexibility.
7.3 More Containerized Components #
We moved more of the components into containers, namely all the control plane components: etcd, kube-apiserver, kube-controller-manager, and kube-scheduler.
The only pieces that now run uncontainerized are CRI-O, kubelet, and kubeadm.
7.4 New Deployment Methods #
We are using a combination of skuba (a custom wrapper around kubeadm) and HashiCorp Terraform to deploy SUSE CaaS Platform machines and clusters.
We provide Terraform state examples that you can modify to roll out clusters.
Deployment on bare metal using AutoYaST has now also been tested and documented: https://documentation.suse.com/suse-caasp/4/single-html/caasp-deployment/#deployment_bare_metal
Note
You must deploy a load balancer manually. This is currently not possible using Terraform. Find example load balancer configurations based on SUSE Linux Enterprise 15 SP1 and Nginx or HAProxy in the SUSE CaaS Platform Deployment Guide: https://documentation.suse.com/suse-caasp/4/single-html/caasp-deployment/#_load_balancer
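As a rough sketch, an HAProxy load balancer for the Kubernetes API could look like the following fragment; the master node addresses are placeholders, and the configuration in the Deployment Guide should be treated as authoritative (it also covers the Dex and Gangway ports):

```
# /etc/haproxy/haproxy.cfg (fragment) - sketch only; node IPs are placeholders
frontend kube_api
    bind *:6443
    mode tcp
    default_backend kube_api_masters

backend kube_api_masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-0 <MASTER_0_IP>:6443 check
    server master-1 <MASTER_1_IP>:6443 check
```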
7.5 Updates Using Kured #
Updates are implemented with the skuba-update tool, which glues together zypper and the kured tool (https://github.com/weaveworks/kured).
Kured (KUbernetes REboot Daemon) is a Kubernetes daemonset that performs safe
automatic node reboots when the need to do so is indicated by the package
management system of the underlying OS. Automatic updates can be manually
disabled and configured: https://documentation.suse.com/suse-caasp/4/single-html/caasp-admin/#_cluster_updates
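Kured decides to reboot a node by watching for a sentinel file. A quick way to check what it would see on a given node (assuming the skuba-update integration uses kured's default sentinel path) is:

```shell
# Check whether this node has a pending reboot, as kured would see it.
# /var/run/reboot-required is kured's default sentinel file; it is an
# assumption here that skuba-update uses this default path.
SENTINEL="/var/run/reboot-required"

if [ -f "$SENTINEL" ]; then
    echo "reboot pending: kured will reboot this node at the next safe window"
else
    echo "no reboot pending"
fi
```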
7.6 Automatic Installation Of Packages For Storage Backends Discontinued #
In previous versions, SUSE CaaS Platform shipped with packages to support all available storage backends.
This negated the minimal install size approach and has been discontinued. If you require a specific software
package for your storage backend, please install it using AutoYaST, Terraform, or zypper.
Refer to: https://documentation.suse.com/suse-caasp/4/single-html/caasp-admin/#_software_management
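With zypper, this amounts to a one-line install on the affected nodes; the package name below (ceph-common, for a Ceph-backed storage class) is only an illustrative assumption:

```shell
# Example: install a storage backend client package with zypper.
# "ceph-common" is an illustrative package name; substitute the package
# your storage backend actually requires.
PKG="ceph-common"

if command -v zypper >/dev/null 2>&1 && command -v sudo >/dev/null 2>&1; then
    sudo zypper install "$PKG" ||
        echo "install did not complete; check root privileges and repositories" >&2
else
    echo "zypper not available on this system; skipping" >&2
fi
```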
7.7 Changes to the Kubernetes Stack #
7.7.1 Updated Kubernetes #
SUSE CaaS Platform 4.0.3 ships with Kubernetes 1.15.2.
This latest version mainly contains enhancements to core Kubernetes APIs: CustomResourceDefinition pruning, defaulting, and OpenAPI publishing. Cluster life cycle stability and usability have been enhanced (kubeadm init and kubeadm join can now be used to configure and deploy an HA control plane), and new functionality of the Container Storage Interface (volume cloning) is available.
Read up about the details of the new features of Kubernetes 1.15.2 here:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#115-whats-new
7.7.2 CRI-O Replaces Docker #
SUSE CaaS Platform now uses CRI-O 1.15.0 as the default container runtime. CRI-O is a container runtime interface based on the OCI standard technology. The choice of CRI-O allows us to pursue our open source agenda better than competing technologies would.
CRI-O’s simplified architecture is tailored explicitly for Kubernetes and has a reduced footprint, but it also guarantees full compatibility with existing customer images thanks to its adherence to OCI standards. Unlike Docker, CRI-O allows the container runtime to be updated without stopping workloads, providing improved flexibility and maintainability to all SUSE CaaS Platform users.
We will strive to maintain SUSE CaaS Platform’s compatibility with the Docker Engine in the future.
7.7.3 Cilium Replaces Flannel #
SUSE CaaS Platform now uses Cilium 1.5.3 as the Container Networking Interface enabling networking policy support.
7.8 Centralized Logging #
The deployment of a Centralized Logging node is now supported for aggregating logs from all the nodes in the Kubernetes cluster.
Centralized Logging forwards system and Kubernetes cluster logs to a
specified external logging service, specifically an Rsyslog server,
using the Kubernetes Metadata Module (mmkubernetes).
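On the receiving rsyslog side this is ordinary rsyslog configuration; a fragment along these lines loads mmkubernetes to enrich records with Kubernetes metadata. All parameter values are placeholders, and the exact options used by the Centralized Logging deployment are assumptions:

```
# rsyslog fragment - sketch only; URL and file paths are placeholders
module(load="mmkubernetes")

action(type="mmkubernetes"
       kubernetesurl="https://<APISERVER>:6443"
       tls.cacert="/etc/rsyslog.d/<CA_CERT>"
       tokenfile="/etc/rsyslog.d/<TOKEN_FILE>")
```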
7.9 Obsolete Components #
7.9.1 Salt #
Orchestration of the cluster no longer relies on Salt.
Orchestration is instead achieved with kubeadm and skuba.
7.9.2 Admin Node / Velum #
The admin node is no longer necessary. The cluster is now controlled
by the master nodes and through the API with skuba, which can run on
any SUSE Linux Enterprise system, such as a local workstation.
This also means the Velum dashboard is no longer available.
8 Known Issues #
8.1 Updating to SUSE CaaS Platform 4 #
In-place upgrades from earlier versions or from the Beta 4 version to the generally available release are not supported. We recommend standing up a new cluster and redeploying workloads. Customers with production servers that cannot be redeployed should contact SUSE Consulting Services or their account team for further information.
8.2 Parallel Deployment #
To avoid failures, avoid deploying nodes in parallel. Joining master or worker nodes to an existing cluster must be done serially, meaning the nodes have to be added separately one after another. This issue will be fixed in the next release.
9 Support and Life Cycle #
SUSE CaaS Platform is backed by award-winning support from SUSE, an established technology leader with a proven history of delivering enterprise-quality support services.
SUSE CaaS Platform 4 has a two-year life cycle. Each version will receive updates while it is current, and will be subject to critical updates for the remainder of its life cycle.
For more information, check our Support Policy page https://www.suse.com/support/policy.html.
10 Support Statement for SUSE CaaS Platform #
To receive support, you need an appropriate subscription with SUSE. For more information, see https://www.suse.com/support/programs/subscriptions/?id=SUSE_CaaS_Platform.
The following definitions apply:
- L1
Problem determination, which means technical support designed to provide compatibility information, usage support, ongoing maintenance, information gathering and basic troubleshooting using available documentation.
- L2
Problem isolation, which means technical support designed to analyze data, reproduce customer problems, isolate problem area and provide a resolution for problems not resolved by Level 1 or prepare for Level 3.
- L3
Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.
For contracted customers and partners, SUSE CaaS Platform 4 is delivered with L3 support for all packages, except for the following:
Technology Previews
Packages that require an additional customer contract
Packages with names ending in -devel (containing header files and similar developer resources) will only be supported together with their main packages.
SUSE will only support the usage of original packages. That is, packages that are unchanged and not recompiled.
11 Documentation and Other Information #
11.1 Available on the Product Media #
Get the detailed change log information about a particular package from the RPM (where FILENAME.rpm is the name of the RPM):
rpm --changelog -qp FILENAME.rpm
11.2 Externally Provided Documentation #
For the most up-to-date version of the documentation for SUSE CaaS Platform 4, see https://documentation.suse.com/suse-caasp/4/
Find a collection of resources in the SUSE CaaS Platform Resource Library: https://www.suse.com/products/caas-platform/#resources
12 Obtaining Source Code #
This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at http://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to sle_source_request@suse.com or as otherwise instructed at http://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.
13 Legal Notices #
SUSE makes no representations or warranties with regard to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to revise this publication and to make changes to its content, at any time, without the obligation to notify any person or entity of such revisions or changes.
Further, SUSE makes no representations or warranties with regard to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to make changes to any and all parts of SUSE software, at any time, without any obligation to notify any person or entity of such changes.
Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classifications to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical/biological weaponry end uses. Refer to https://www.suse.com/company/legal/ for more information on exporting SUSE software. SUSE assumes no responsibility for your failure to obtain any necessary export approvals.
Copyright © 2010-2019 SUSE LLC.
This release notes document is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA-4.0). You should have received a copy of the license along with this document. If not, see https://creativecommons.org/licenses/by-sa/4.0/.
SUSE has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at https://www.suse.com/company/legal/ and one or more additional patents or pending patent applications in the U.S. and other countries.
For SUSE trademarks, see SUSE Trademark and Service Mark list (https://www.suse.com/company/legal/). All third-party trademarks are the property of their respective owners.