
Release Notes

SUSE Cloud Application Platform 1.5.2

Publication Date: 2021-06-15

This document provides guidance and an overview to high-level general features and updates for SUSE Cloud Application Platform 1.5.2. It also describes capabilities and limitations of SUSE Cloud Application Platform 1.5.2. For detailed information about deploying this product, see the Deployment Guide at https://documentation.suse.com/suse-cap/1/html/cap-guides/part-cap-deployment.html.

These release notes are updated periodically. The latest version of these release notes is always available at https://www.suse.com/releasenotes. General documentation can be found at https://documentation.suse.com/suse-cap/1.

1 About SUSE Cloud Application Platform

SUSE Cloud Application Platform is a modern application delivery platform used to bring an advanced cloud native developer experience to Kubernetes, the de facto standard for enterprise container orchestration. SUSE Cloud Application Platform eliminates manual IT configuration and helps accelerate innovation by getting applications to market faster. Developers can serve themselves and get apps to the cloud in minutes instead of weeks, while staying within IT guidelines, and without relying on scarce IT resources to perform manual configuration each step of the way. Streamlining application delivery opens a clear path to increased business agility, led by enterprise development, operations, and DevOps teams.

SUSE Cloud Application Platform increases business agility by helping enterprises to:

  • Boost developer productivity with easy, one-step deployment of cloud native applications, using the language and framework most appropriate for the task.

  • Reduce complexity and increase IT efficiency with a single, lean platform that brings together proven open source technologies for rapid application delivery at scale.

  • Maximize return on investment with industry-leading open source technologies that leverage your existing investments.

2 Support Statement for SUSE Cloud Application Platform

To receive support, you need an appropriate subscription with SUSE. For more information, see https://www.suse.com/support/?id=SUSE_Cloud_Application_Platform.

The following definitions apply:

2.1 Version Support

Technical Support and Troubleshooting (L1 - L2): Current and previous major versions (n-1). For example: SUSE will provide technical support and troubleshooting for versions 1.0, 1.1, 1.2, 1.3 (and all 2.x point releases) until the release of 3.0.

Patches and updates (L3): On the latest or last minor release of each major release. For example: SUSE will provide patches and updates for 1.3 (and 2.latest) until the release of 3.0.

SUSE Cloud Application Platform closely follows upstream Cloud Foundry releases, which may implement fixes and changes that are not backward compatible with previous releases. SUSE will backport patches for critical bugs and security issues on a best-effort basis.

2.2 Platform Support

SUSE Cloud Application Platform is fully supported on Amazon EKS, Microsoft Azure AKS and Google GKE. Each release is tested by SUSE Cloud Application Platform QA on these platforms.

SUSE Cloud Application Platform is fully supported on SUSE CaaS Platform, wherever it is installed. If SUSE CaaS Platform is supported on a particular cloud service provider (CSP), the customer can get support for SUSE Cloud Application Platform in that context.

SUSE can provide support for SUSE Cloud Application Platform on third-party/generic Kubernetes on a case-by-case basis provided:

  1. The Kubernetes cluster satisfies the requirements listed at https://documentation.suse.com/suse-cap/1.5.1/html/cap-guides/cha-cap-depl-kube-requirements.html#sec-cap-changes-kube-reqs

  2. The kube-ready-state-check.sh script has been run on the target Kubernetes cluster and does not show any configuration problems.

  3. A SUSE Services or Sales Engineer has verified that SUSE Cloud Application Platform works correctly on the target Kubernetes cluster.

Any incident with SUSE Cloud Application Platform is also fully supported as long as the problem can be replicated on SUSE CaaS Platform, AKS, Amazon EKS, or GKE. Bugs identified on third-party/generic Kubernetes which are unique to that platform and cannot be replicated on the core supported platforms are fixed on a best-effort basis. SUSE will not replicate the deployed Kubernetes environment internally in order to reproduce errors.

SUSE will only support the usage of original packages. That is, packages that are unchanged and not recompiled.

3 Major Changes

3.1 Release 1.5.2, February 2020

3.1.1 What Is New?

  • SUSE Cloud Foundry has been updated to version 2.20.3:

    • cf-deployment has been updated to 12.17

    • Optimized startup time for various roles

    • Added/fixed podAntiAffinity rules for various roles

  • Stratos Console has been updated to version 2.7.0.

  • Stratos Metrics has been updated to version 1.1.2.

3.1.2 Features and Fixes

  • Set topologyKey for podAntiAffinity rule to kubernetes.io/hostname

  • Inform operator of the need for CSR approval for Eirini internal registry in Helm notes

  • Pod restarts for many roles during UAA and SCF startup lessened for faster deployment

  • Expanded GoRouter ciphersuite to be compatible with a wider range of clients

  • Made Helm chart compatible with Kubernetes 1.16

  • Use system cert store if UAA_CA_CERT is not specified

  • Includes these Cloud Foundry component versions:

    • app-autoscaler: 1.2.4

    • bits-services: 2.28.0

    • bpm: 1.1.6

    • capi: 1.89.0

    • cf-acceptance-tests: 12.17

    • cf-deployment: 12.17

    • cf-mysql: 36.15.0

    • cf-routing: 0.196.0

    • cf-sle12: 1.81.65

    • cf-smoke-tests: 40.0.123

    • cf-syslog-drain: 10.2.5

    • cf-usb: 1.0.1

    • cflinuxfs3: 0.153.0

    • credhub: 2.5.9

    • diego: 2.41.0

    • eirini: 0.0.26

    • garden-runc: 1.19.9

    • groot-btrfs: 1.0.5

    • loggregator: 105.6.3

    • loggregator-agent: 5.2.4

    • log-cache: 2.6.4

    • mapfs: 1.1.0

    • nats: 30

    • nfs-volume: 1.5.7

    • postgres-release: 26

    • pxc: 0.21.0

    • scf-helper: 1.0.13

    • sle15: 10.84

    • statsd-injector: 1.11.8

    • uaa: 74.12

  • Buildpacks:

    • binary-buildpack: 1.0.35

    • dotnetcore-buildpack: 2.3.0

    • go-buildpack: 1.9.4

    • java-buildpack: 4.27.0

    • nginx-buildpack: 1.1.2

    • nodejs-buildpack: 1.7.7

    • php-buildpack: 4.4.2

    • python-buildpack: 1.7.3

    • staticfile-buildpack: 1.5.2

    • ruby-buildpack: 1.8.3

3.1.3 Known Issues

Important
Important: Mitigating Gorouter DoS Attacks (CVE-2020-15586)

The current release of SUSE Cloud Application Platform is affected by CVE-2020-15586 whereby the Gorouter is vulnerable to a Denial-of-Service (DoS) attack via requests with the "Expect: 100-continue" header. For details regarding this vulnerability, see https://www.cloudfoundry.org/blog/cve-2020-15586/.

If available, operators are advised to upgrade to a SUSE Cloud Application Platform release that is not affected by this vulnerability. Always review the release notes (https://suse.com/releasenotes/) to verify whether a given SUSE Cloud Application Platform release is affected. If it is not possible to upgrade immediately, we recommend operators follow the mitigations from Cloud Foundry's security update (see https://www.cloudfoundry.org/blog/cve-2020-15586/):

  • Configure an HTTP load balancer in front of the Gorouters to drop the Expect: 100-continue header completely. This may cause delays in HTTP clients that utilize the Expect: 100-continue behavior. However, this should not affect the correctness of HTTP applications.

  • Configure an HTTP load balancer in front of the Gorouters to drop the Expect: 100-continue header and immediately respond with "100 Continue". This may cause HTTP clients to send the request body unnecessarily in some cases where the server would have responded with a final status code before requesting the body. However, this should not affect the correctness of HTTP applications.

If you are using a TCP/L4 load balancer for your Gorouters instead of an HTTP load balancer, consider the following:

  • Add firewall rules to prevent traffic from any source making requests that are causing this panic.

    • You may use the extra_headers_to_log property to enable logging of the "Expect" request header to help identify sources of this malicious traffic.

Important
Important
  • With SUSE Cloud Application Platform 1.5.2, the database seeder, UAA, and CredHub use TLS by default to connect to an external database. Other SCF components use the unencrypted TCP protocol. If you would like to configure the seeder, UAA, and CredHub services to connect via insecure means in a trusted network, use the following parameters (see the sketch after the parameter descriptions):

env.DB_EXTERNAL_SSL_MODE - this is set to true by default and is used by the database seeder to communicate over TLS to the external database. Set it to false to have the seeder communicate over plain TCP.

env.UAADB_TLS - this is set to enabled by default and is used by UAA to communicate over TLS to the external database. Set it to disabled to skip TLS for UAA connections to the external database. Please check the external database section in the documentation for other usages of this flag.

env.CREDHUB_DB_REQUIRE_TLS - this is set to true by default and is used by credhub to communicate over TLS to the external database. Set it to false to disable TLS connections to the external database server from credhub.

Note that when the external database server is configured to use TLS, it must support both TLS and unencrypted connections. If the external database server only accepts TLS connections, some SCF components will not be able to communicate with the database server.
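
A minimal sketch of disabling these TLS connections at upgrade time, assuming the susecf-uaa/susecf-scf release names and suse/uaa/suse/cf chart names used elsewhere in these notes; whether env.UAADB_TLS is set on the uaa chart or the scf chart depends on whether UAA is deployed separately:

    helm upgrade susecf-uaa --namespace uaa suse/uaa -f <values.yaml> \
      --set env.UAADB_TLS=disabled
    helm upgrade susecf-scf --namespace scf suse/cf -f <values.yaml> \
      --set env.DB_EXTERNAL_SSL_MODE=false \
      --set env.CREDHUB_DB_REQUIRE_TLS=false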

  • SUSE Cloud Application Platform 1.5.2 has been tested successfully with external database instances hosted on Amazon RDS MariaDB and Azure MariaDB.

  • The cf-usb service broker is not compatible with an external database hosted on Azure. We recommend using the service brokers that are provided by Microsoft Azure for OSBAPI connections to their hosted database services.

  • With SUSE Cloud Application Platform 1.5.2, if you are using an external UAA with an external database, you must set up two separate database instances: one for UAA and one for SCF. Using one external database instance for both an external UAA and an SCF setup is not supported and will cause data conflicts, resulting in deployment failures.

  • If you are upgrading SUSE Cloud Application Platform to 1.5.2 and use an external database in 1.5.1, please contact support for further instructions.

Please refer to the external database section of the documentation for more configuration options.

  • If you are upgrading SUSE Cloud Application Platform to 1.5.2, already use Minibroker to connect to external databases, and are using Kubernetes 1.16 or higher (which is the case with SUSE Containers as a Service Platform 4.1), you will need to update the database to a compatible version and migrate your data over via the database's suggested mechanism. This may require a database export/import.

  • Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.
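
    When a certificate generated by the secret-generator or a self-signed certificate is used, it can be passed to scf following the same pattern used elsewhere in these notes (a sketch; susecf-scf and suse/cf are assumed release and chart names, and CA_CERT is retrieved as shown in Section 3.3.3):

      helm upgrade susecf-scf --namespace scf suse/cf -f <values.yaml> \
        --set "secrets.UAA_CA_CERT=${CA_CERT}"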

  • Prior to SUSE Cloud Application Platform 1.5.2, the documentation that covered rotating the CCDB secret keys was incorrect. This has been fixed with this version: please consult https://documentation.suse.com/suse-cap/1.5.2/html/cap-guides/cha-cap-ccdb-secret-rotation.html

3.2 Release 1.5.1, December 2019

3.2.1 What Is New?

3.2.2 Features and Fixes

  • Fixed Eirini on GKE, EKS and Kubernetes clusters using CRI-O

  • Eirini will use SLE15 as its default stack

  • eirini-cert-copier no longer appears when scheduler is diego

  • Enforced odd number of mysql replicas for HA scenarios to improve consistency with PXC

  • Improved mysql-proxy active/passive handling

  • Fixed apiVersion in Chart yaml(s) to point to Helm API version (v1)

  • Turned binlog on for pxc config to enable transaction recovery

  • Moved to stack-associated (or stackful) buildpacks, away from multi-stack

  • Enabled BPM for bits-service for reliability

  • garden.disable_swap_limit set to "true" to remove the need for swap accounting

  • Includes these Cloud Foundry component versions:

    • app-autoscaler: 1.2.1

    • bits-services: 2.28.0

    • bpm: 1.1.0

    • capi: 1.83.0

    • cf-acceptance-tests: 9.5

    • cf-deployment: 9.5

    • cf-mysql: 36.15.0

    • cf-routing: 0.188.0

    • cf-sle12: 1.81.61

    • cf-sle15: 10.70

    • cf-smoke-tests: 40.0.112

    • cf-syslog-drain: 10.2

    • cf-usb: 1.0.1

    • cflinuxfs3: 0.141.0

    • credhub: 2.4.0

    • diego: 2.34.0

    • eirini: 0.0.23

    • garden-runc: 1.19.3

    • groot-btrfs: 1.0.5

    • log-cache: 2.2.2

    • loggregator: 105.5

    • loggregator-agent: 3.9

    • mapfs: 1.1.0

    • nats: 27

    • nfs-volume: 1.5.2

    • postgres-release: 26

    • pxc: 0.18.0

    • scf-helper: 1.0.7

    • statsd-injector: 1.10.0

    • uaa: 72.0

  • Buildpacks:

    • binary-buildpack: 1.0.35

    • dotnetcore-buildpack: 2.3.0

    • go-buildpack: 1.9.2

    • java-buildpack: 4.24.0

    • nginx-buildpack: 1.1.0

    • nodejs-buildpack: 1.7.1

    • php-buildpack: 4.4.0

    • python-buildpack: 1.6.37

    • ruby-buildpack: 1.8.1

    • staticfile-buildpack: 1.5.0

3.2.3 Known Issues

Important
Important
  • If the uaa pod fails to start due to database migration failures, manual intervention is required: track the last completed transaction in the uaadb database, update the schema_version table with the record of the last completed transaction, then restart the migration. Please contact support for further instructions.

3.3 Release 1.5, September 2019

3.3.1 What Is New?

  • SUSE Cloud Foundry has been updated to version 2.18.0:

    • PXC (Percona XtraDB Cluster) replaces cf-mysql for database management — please read the Known Issues section for this version on deployment and upgrade changes

    • Ability to set config.HA_strict=false in combination with config.HA=true to allow lowering the sizing count for a role below what is required for HA

    • UAA can now be deployed as embedded in the cf namespace, allowing for a single step deployment — please read the Known Issues section for this version on deployment limitations

  • The Stratos UI has been updated to version 2.5.1:

    • Tech preview for helm feature

    • Added custom welcome message on endpoints page

    • Added support for connecting SUSE Containers as a Service Platform V4 endpoints

    • Refinements to the Autoscaler UI

    • For a full list of features and fixes see https://github.com/SUSE/stratos/releases/tag/2.5.1.

For information about deploying and administering SUSE Cloud Application Platform, see the product manuals at https://documentation.suse.com/suse-cap/1.

3.3.2 Features and Fixes

  • cf-deployment has been updated to version 9.5.0.

  • Eirini updated to 0.0.14

  • Removed the cluster-admin role binding for the eirini service account

  • Removed deprecated cflinuxfs2 — please read the Known Issues section for this version as to why

  • Switched over to PXC from cf-mysql for database management

  • Includes these Cloud Foundry component versions:

    • app-autoscaler: 1.2.1

    • bits-service: 2.28.0

    • bpm: 1.1.0

    • capi: 1.83.0

    • cf-deployment: 9.5

    • cf-mysql: 36.15.0

    • cf-routing: 0.188.0

    • cf-sle12: 1.81.26

    • cf-sle15: 10.28

    • cf-smoke-tests: 40.0.112

    • cf-syslog-drain: 10.2

    • cf-usb: 1.0.1

    • cflinuxfs3: 0.118.0

    • credhub: 2.4.0

    • diego: 2.34.0

    • eirini: 0.0.14

    • garden-runc: 1.19.3

    • groot-btrfs: 1.0.5

    • log-cache: 2.2.2

    • loggregator: 105.5

    • loggregator-agent: 3.9

    • mapfs: 1.1.0

    • nats: 27

    • nfs-volume: 1.5.2

    • postgres-release: 26

    • pxc: 0.18.0

    • scf-helper: 1.0.3

    • statsd-injector: 1.10.0

    • uaa: 72.0

  • Buildpacks:

    • binary-buildpack: 1.0.33

    • dotnet-core-buildpack: 2.2.13

    • go-buildpack: 1.8.42

    • java-buildpack: 4.20.0

    • nginx-buildpack: 1.0.15

    • nodejs-buildpack: 1.6.53

    • php-buildpack: 4.3.80

    • python-buildpack: 1.6.36

    • ruby-buildpack: 1.7.42

    • staticfile-buildpack: 1.4.43

3.3.3 Known Issues

Important
Important

In order to deploy SUSE Cloud Application Platform 1.5, or upgrade from SUSE Cloud Application Platform 1.4.1, in an HA configuration, you will first need to start the mysql role with 1 instance to be able to migrate from cf-mysql to PXC. This is based on upstream instructions; in addition, based on what we have seen with other components that rely on the database, such as CredHub, scaling all database roles to single availability helps with a stable migration and deployment.

Steps for a fresh install of SUSE Cloud Application Platform 1.5 in the default HA configuration:

  1. Install HA UAA, but start the mysql role with a count of 1 as a transition step. (In the commands below, susecf-uaa and susecf-scf are assumed to be the release names, and suse/uaa and suse/cf are the chart names in the repository. Adjust the release names to suit your configuration.) By specifying config.HA=true, the instance count of all roles is set to the minimum required for HA mode, otherwise referred to as the default HA configuration. Additionally, specify config.HA_strict=false along with sizing.mysql.count=1 so that there is only a single mysql role.

    helm install --name susecf-uaa --namespace uaa suse/uaa -f <values.yaml> --set config.HA=true \
    --set config.HA_strict=false --set sizing.mysql.count=1 --version 2.18.0
  2. Set the value of the secrets.UAA_CA_CERT to pass your uaa secret and certificate to scf.

    SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  3. Similarly, install HA SCF but start the mysql role with a count of 1 as a transition step. By specifying config.HA=true the instance count of all roles will be set to the minimum required for HA mode, otherwise referred to as the default HA configuration. Additionally, specify config.HA_strict=false along with sizing.mysql.count=1 so that there is only a single mysql role.

    helm install --name susecf-scf --namespace scf suse/cf -f <values.yaml> --set config.HA=true \
    --set config.HA_strict=false --set sizing.mysql.count=1 --set "secrets.UAA_CA_CERT=${CA_CERT}" \
    --version 2.18.0
  4. Scale the mysql role up to the default HA configuration.

    helm upgrade susecf-uaa --namespace uaa suse/uaa -f <values.yaml> --set config.HA_strict=true \
    --set config.HA=true --version 2.18.0
    helm upgrade susecf-scf --namespace scf suse/cf -f <values.yaml> --set config.HA_strict=true \
    --set config.HA=true --set "secrets.UAA_CA_CERT=${CA_CERT}" --version 2.18.0

Steps to upgrade from SUSE Cloud Application Platform 1.4.1 to 1.5 will depend on the configuration of your current deployment. If the mysql roles of your deployment are:

Important
Important

If you are using a buildpack that uses the same name as a shipped buildpack, you will need to rename it to a unique name. Based on our existing model of stackless buildpacks, any buildpack name already in use is considered reserved.

Important
Important

As of SUSE Cloud Foundry 2.18.0, because our cf-deployment version is 9.5, the cflinuxfs2 stack is no longer supported, as was advised with SUSE Cloud Foundry 2.17.1 / SUSE Cloud Application Platform 1.4.1. The cflinuxfs2 buildpack is no longer shipped, but if you are upgrading from an earlier version, cflinuxfs2 will not be removed. However, for migration purposes, we encourage all admins to move to cflinuxfs3 or sle15, as newer buildpacks will not work with the deprecated cflinuxfs2. If you still want to use the older stack, you will need to build an older version of a buildpack for the app to keep working, but this configuration is unsupported. (If you are running on sle12, note that we will be retiring that stack in a future version, so start planning your migration to sle15.)

Important
Important

As of SUSE Cloud Foundry 2.18.0, cf push with Eirini does not work on Amazon Elastic Container Service for Kubernetes and Google Kubernetes Engine (GKE) by default. To get cf push to work with Amazon Elastic Container Service for Kubernetes and GKE, apply the workaround of deleting a webhook by running the following:

kubectl delete mutatingwebhookconfigurations eirini-x-mutating-hook-eirini

Deleting the webhook means that the eirini-persi service will not be available. Note that this workaround is not needed on Azure Kubernetes Service.

  • When deploying SUSE Cloud Foundry with Eirini, the cflinuxfs3 stack is the only one that works as part of this tech preview.

  • If you are using the uaa embedded in the suse/cf chart, note that automatic ingress creation via helm will not work at present. Therefore, the ingress controller will not work with embedded uaa, but the chart can be deployed with Kubernetes LoadBalancer services.

  • On occasion, the credhub pod may fail to start due to database migration failures; this has been observed intermittently on Azure Kubernetes Service and, to a lesser extent, on other public clouds. In these situations, manual intervention is required to track the last completed transaction in the credhub_user database and update the flyway schema history table with the record of the last completed transaction. Please contact support for further instructions.

  • In some situations, the autoscaler-metrics pod may fail to reach a fully ready state due to a Liquibase error: liquibase.exception.LockException: Could not acquire change log lock. When this occurs, refer to Part V of the SUSE Cloud Application Platform Deployment Guide to troubleshoot and resolve this issue at https://documentation.suse.com/suse-cap/1.

3.4 Release 1.4.1, July 2019

3.4.1 What Is New?

  • SUSE Cloud Foundry has been updated to version 2.17.1.

3.4.2 Features and Fixes

  • Set the default value of AZ_LABEL_NAME to failure-domain.beta.kubernetes.io/zone.

  • Simplified service accounts and pod security policies.

  • Switched to log-cache for container metrics.

  • Implemented a patch to squash Cloud Controller database migrations.

  • Fixed version and SHA1 of cf-mysql-release tied to version 36.15.0.

  • Fixed TLS issues in log-cache.

  • Includes these Cloud Foundry component versions:

    • app-autoscaler: 1.2.1

    • bits-service: 2.26.0

    • bpm: 1.0.0

    • capi: 1.79.0

    • cats: 7.11

    • cf-deployment: 7.11

    • cf-mysql: 36.15.0

    • cf-routing: 0.187.0

    • cf-sle12: 1.75.11

    • cf-smoke-tests: 40.0.51

    • cf-syslog-drain: 10.0

    • cf-usb: 1.0.1

    • cflinuxfs2: 1.281.0

    • cflinuxfs3: 0.108.0

    • credhub: 2.1.2

    • diego: 2.30.0

    • eirini: 0.0.4

    • garden-runc: 1.19.1

    • groot-btrfs: 1.0.4

    • kubectl: 1.9.6

    • loggregator: 105.2

    • loggregator-agent: 3.9

    • nats: 26

    • nfs-volume: 1.5.2

    • postgres-release: 26

    • scf-helper: 1.0.2

    • statsd-injector: 1.9.0

    • uaa: 68.0

  • Buildpacks:

    • binary-buildpack: 1.0.32

    • dotnet-core-buildpack: 2.2.12

    • go-buildpack: 1.8.41

    • java-buildpack: 4.19.1

    • nginx-buildpack: 1.0.14

    • nodejs-buildpack: 1.6.51

    • php-buildpack: 4.3.77

    • python-buildpack: 1.6.34

    • ruby-buildpack: 1.7.40

    • staticfile-buildpack: 1.4.43

3.4.3 Known Issues

  • cf-deployment 7.11 is the last Cloud Foundry version that supports the cflinuxfs2 stack. The cflinuxfs2 and sle12 stacks are deprecated in favor of cflinuxfs3 and sle15 respectively. Start planning to migrate applications to the newer stacks for futureproofing, as the older stacks will be removed in a future release. The Stack Auditor plugin for cf can help with this migration (see https://docs.cloudfoundry.org/adminguide/stack-auditor.html).
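
    For example, after installing the Stack Auditor plugin, auditing and migrating an application might look like the following sketch (the command names are those documented for the plugin; <app-name> is a placeholder):

      cf audit-stack
      cf change-stack <app-name> cflinuxfs3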

3.5 Release 1.4, May 2019

3.5.1 What Is New?

  • SUSE Cloud Foundry has been updated to version 2.16.4:

    • A tech preview of Eirini is available. To enable Eirini, follow the instructions from https://github.com/SUSE/scf/wiki/Eirini.

    • Added SLE15 stack.

    • Added feature flags to enable roles such as autoscaler, cf-usb, credhub and eirini.

    • Added Sync Integration Test Suite (SITS).

    • Added support for NGINX Ingress Controller with customizable Ingress via user supplied annotations.

    • Added .net-core buildpack (2.2.7).

  • The Stratos UI has been updated to version 2.4.

For information about deploying and administering SUSE Cloud Application Platform, see the product manuals at https://documentation.suse.com/suse-cap/1.

3.5.2 Features and Fixes

  • cf-mysql-release has been pinned at version 36.15.0 to avoid intermittent database connectivity errors in HA setup.

  • Changed app autoscaler-postgres to a non-HA setup due to a known limitation - see https://github.com/cloudfoundry/postgres-release/#known-limitations.

  • The app autoscaler services are no longer deployed as Kubernetes services of type LoadBalancer and therefore, are not exposed on public IP addresses or hostnames.

  • Fixed autoscaler to perform SSL validation.

  • Fixed autoscaler to listen to cluster internal CF API endpoint.

  • The default nproc limits for the vcap user for all SCF roles have been bumped to 1024/2048 (soft/hard). You can use different limits by setting kube.limits.nproc.soft and kube.limits.nproc.hard in the Helm chart values.
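
    For example, higher limits could be set at upgrade time as in the following sketch (the release and chart names, and the 2048/4096 values, are assumptions rather than recommendations):

      helm upgrade susecf-scf --namespace scf suse/cf -f <values.yaml> \
        --set kube.limits.nproc.soft=2048 \
        --set kube.limits.nproc.hard=4096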

  • Cleaned up role readiness probe outputs.

  • Fixed the test for an insecure Docker registry (uses tcpdomain for the route).

  • Changed Doppler to use port 443 to allow for SSL communication and passthrough to Ingress controller.

  • Includes these Cloud Foundry component versions:

    • app-autoscaler: 1.0.0

    • bits-service: 2.26.0

    • bpm: 1.0.0

    • capi: 1.79.0

    • cf-deployment: 6.10

    • cf-mysql: 36.15.0

    • cf-routing: 0.184.0

    • cf-sle12: 1.75.11

    • cf-smoke-tests: 40.0.44

    • cf-syslog-drain: 8.1

    • cf-usb: 1.0.1

    • cflinuxfs2: 1.281.0

    • cflinuxfs3: 0.81.0

    • credhub: 2.1.2

    • diego: 2.25.0

    • eirini: 0.0.4

    • garden-runc: 1.17.2

    • groot-btrfs: 1.0.4

    • kubectl: 1.9.6

    • loggregator: 104.4

    • loggregator-agent: 3.2

    • nats: 26

    • nfs-volume: 1.5.2

    • postgres-release: 26

    • scf-helper: 1.0.2

    • cf-acceptance-tests:

    • statsd-injector: 1.5.0

    • uaa: 68.0

  • Buildpacks:

    • binary-buildpack: 1.0.32

    • dotnet-core-buildpack: 2.2.10

    • go-buildpack: 1.8.36

    • java-buildpack: 4.19.1

    • nginx-buildpack: 1.0.11

    • nodejs-buildpack: 1.6.49

    • php-buildpack: 4.3.75

    • python-buildpack: 1.6.32

    • ruby-buildpack: 1.7.38

    • staticfile-buildpack: 1.4.42

3.5.3 Known Issues

  • The instructions for enabling Eirini can be found at https://github.com/SUSE/scf/wiki/Eirini.

  • Currently, Eirini does not work on Kubernetes environments running cri-o. To make Eirini work, use the Docker runtime.

  • Resuming a past practice, with SUSE Cloud Application Platform 1.4 use the complete command helm upgrade --force --recreate-pods for an upgrade, as in the sketch below. This will reintroduce downtime for apps, but without --recreate-pods, multiple versions of statefulsets may co-exist, which can cause incompatibilities between dependent statefulsets and result in a broken upgrade. This applies to Stratos pods as well.
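
    A sketch of the complete command, assuming the susecf-scf release name, the suse/cf chart, and scf-config-values.yaml as the values file (2.16.4 is the SUSE Cloud Foundry chart version for this release):

      helm upgrade --force --recreate-pods susecf-scf suse/cf \
        --values scf-config-values.yaml --version 2.16.4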

  • With the introduction of feature flags, setting sizing.<role>.count to enable or disable a feature is no longer supported. You must explicitly set enable.<feature> to true or false to enable or disable a feature. As an example, if you had enabled credhub or autoscaler in SUSE Cloud Application Platform 1.3.1, then you must add enable.credhub=true or enable.autoscaler=true during the helm upgrade (see the sketch below). If you had previously set sizing.<role>.count to 1, you can remove that, as the new minimum setting is 1. Conversely, if you had disabled a feature in SUSE Cloud Application Platform 1.3.1, you should remove the corresponding sizing setting and instead explicitly set enable.<feature>=false during the upgrade. If you would like to deploy more than 1 instance of an optional role, you need to use an appropriate value for sizing.<role>.count in addition to using the feature flag.
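
    A minimal sketch of enabling optional features during the upgrade (the release and chart names are assumptions; add a sizing.<role>.count setting only if more than one instance of an optional role is wanted):

      helm upgrade susecf-scf --namespace scf suse/cf -f <values.yaml> \
        --set enable.credhub=true \
        --set enable.autoscaler=true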

  • If autoscaler was enabled in SUSE Cloud Application Platform 1.3.1, you must specify sizing.autoscaler_postgres.disk_sizes.postgres_data=100 during the helm upgrade to avoid upgrade errors. Alternatively, you can disable the autoscaler before the upgrade and re-enable it after the upgrade is finished. Without one of these workarounds, the upgrade fails with the message Error: UPGRADE FAILED: StatefulSet.apps "autoscaler-postgres" is invalid.

  • If you are using the NGINX Ingress Controller and seeing Request Entity Too Large errors, you should bump up the ingress proxy body size to an appropriate value by setting the ingress.annotations key in helm chart values as in the following:

      ingress:
         annotations:
           nginx.ingress.kubernetes.io/proxy-body-size: 64m
  • If during an upgrade the post-deployment job does not complete, re-apply the helm upgrade.

  • On GKE, the swap accounting related kernel boot parameter changes on the worker nodes may not be retained as GCP may automatically re-provision nodes to perform upgrades or repairs. One option you may want to consider is to set up the GKE cluster with auto-repair and auto-upgrade set to false to reduce the ephemeral nature of the GKE nodes. See https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#modifications for more details.

  • On GKE you should set up the Kubernetes storage class to be backed by an SSD instead of a standard disk.
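
    A minimal sketch of such a storage class using the GCE persistent disk provisioner (the class name persistent is only an example and should match the storage class referenced in your deployment configuration):

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: persistent
      provisioner: kubernetes.io/gce-pd
      parameters:
        type: pd-ssd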

3.6 Release 1.3.1, February 2019

3.6.1 What Is New?

  • SUSE Cloud Foundry has been updated to version 2.15.2:

    • Default PodSecurityPolicies (PSPs) come with the helm charts

    • cflinuxfs3 now available as a stack

    • Added nginx buildpack

    • Support added for placement zones & isolation segments

  • The Stratos UI has been updated to version 2.3.

For information about deploying and administering SUSE Cloud Application Platform, see the product manuals at https://documentation.suse.com/suse-cap/1.

3.6.2 Features and Fixes

  • App-AutoScaler no longer depends on hairpin

  • CredHub on Microsoft Azure is now supported

  • Corrected service name to work with syslog drains

  • Certificates rely on correct FQDN for UAA

  • Removed obsolete key and diego-cell readiness probe from role-manifest.yml

  • Changed one variable name to align with upstream practices; this may require changes to sizing:

    • cf-routing replaces routing

  • Includes these Cloud Foundry component versions:

    • app-autoscaler: 1.0.0

    • bpm: 1.0.0

    • capi: 1.66.0

    • cf-deployment: 3.6.0

    • cf-mysql: 36.15.0

    • cf-routing: 0.180.0

    • cf-sle12: 1.52.6

    • cf-smoke-tests: 40.0.6

    • cf-syslog-drain: 7.0

    • cf-usb: 1.0.1

    • cflinuxfs2: 1.266.0

    • cflinuxfs3: 0.60.0

    • credhub: 2.0.2

    • diego: 2.16.0

    • garden-runc: 1.16.3

    • groot-btrfs: 1.0.4

    • kubectl: 1.9.6

    • loggregator: 103.1

    • loggregator-agent: 2.0

    • nats: 25

    • nfs-volume: 1.2.0

    • opensuse42: 1.8.6

    • postgres-release: 26

    • scf-helper: 1.0.1

    • cf-acceptance-tests: 2.8

    • statsd-injector: 1.3.0

    • uaa: 60.2

    • uaa-fissile: c9edf895

  • Buildpacks:

    • binary-buildpack: 1.0.30

    • dotnet-core-buildpack: 2.0.3

    • go-buildpack: 1.8.33

    • java-buildpack: 4.17.2

    • nginx-buildpack: 1.0.8

    • nodejs-buildpack: 1.6.43

    • php-buildpack: 4.3.70

    • python-buildpack: 1.6.27

    • ruby-buildpack: 1.7.31

    • staticfile-buildpack: 1.4.39

3.6.3 Known Issues

  • For SUSE Cloud Application Platform 1.3.1, during the helm upgrade from 1.3.0, the --recreate-pods flag is not required, as the recent change to the active/passive model allows previously Unready pods to be upgraded. This allows for zero app downtime from the previous version.

  • For deployments on Amazon EKS: the AWS Service Broker (https://aws.amazon.com/partners/servicebroker/) should now be used instead of the deprecated cf-brokers wrapper.

  • For custom PSPs, SYS_RESOURCE no longer needs to be specified under added capabilities in the scf-config-values.yml

  • During an upgrade from 2.14 to 2.15.2, the GoRouter and the applications it routes to will be unavailable until the new GoRouter pods are ready. You can work around this by setting the following labels on the existing GoRouter pod specs:

    labels:
      app.kubernetes.io/component: "router"
      skiff-role-name: "router"
  • The App-AutoScaler services are exposed as Kubernetes services of type LoadBalancer, but they should only be accessed via the GoRouter. Therefore, do not rely on the public IPs for these services on the load balancer and do not create separate DNS entries for them; use the DNS entries associated with the GoRouter public service instead.

  • Deletion of MariaDB instances created with Minibroker can fail with timeouts. If an error appears, wait one minute and retry. If the cf delete-service command fails but the instance pods are removed from Kubernetes, the service instance data can safely be removed with a cf purge-service-instance command.

  • On Microsoft Azure it is recommended to run on instance types Standard_DS4_v2 or larger due to the introduction of the cflinuxfs3 stack. It’s also recommended to use Premium SSD for the storage class.

  • If you notice application instances (long-running processes or "LRPs") improperly persisting and accepting traffic after update or scaling actions, there may be an instance of the cc-clock role that did not come up properly due to an incorrect internal protocol setting. To address this:

  1. Create a file called cc-clock-patch.yml with the following contents:

    bosh:
       instance_groups:
       - name: cc-clock
         jobs:
         - name: cloud_controller_clock
           properties:
             cc:
               external_protocol: http
  2. Rerun the upgrade of the CAP deployment via a Helm command with this syntax: helm upgrade scf suse/cf --reuse-values --namespace scf -f cc-clock-patch.yml --version 2.15.2

  3. For high-availability (HA) deployments, manually restart the cc-clock-N pods by deleting them one at a time to avoid app downtime; newer updated pods will be created automatically:

    kubectl delete pod -n scf cc-clock-0
    kubectl delete pod -n scf cc-clock-1
    kubectl delete pod -n scf cc-clock-2
  4. For single availability deployments, since there’s only one cc-clock pod, app downtime is unavoidable.

  • The URL of the internal cf-usb broker endpoint has been corrected from the duplicate name from the previous version. To reconnect with SUSE Cloud Foundry/SUSE Cloud Application Platform, brokers for PostgreSQL and MySQL that use cf-usb will require the following manual fix after the upgrade:

  1. Run kubectl get secret --namespace scf and copy the name of the secret (for example, secrets-2.15.2-1)

  2. Run cf service-brokers to get the URL for the cf-usb host (for example, https://cf-usb-cf-usb.scf.svc.cluster.local:24054)

  3. Get the current CF_USB password by running:

    kubectl get secret --namespace scf <SECRET_NAME> -o yaml | \
      grep \\scf-usb-password: | cut -d: -f2 | base64 -id

    Replace <SECRET_NAME> with the name from the first step.

  4. Finally, update the service broker:

    cf update-service-broker usb broker-admin <PASSWORD> \
      https://cf-usb.scf.svc.cluster.local:24054

    Replace <PASSWORD> with the password from step 3. The URL is a modified version of the URL from step 2; however, as the subdomain name, use cf-usb instead of cf-usb-cf-usb.

3.7 Release 1.3, November 2018

3.7.1 What Is New?

  • SUSE Cloud Foundry has been updated to version 2.14.5:

    • Includes support for AWS Service Broker

    • Centralized credential management with CredHub is now available to Cloud Foundry apps and compatible brokers (disabled by default)

    • Automatically scaling resource with App-AutoScaler is now supported for Azure Kubernetes Service and Amazon Elastic Container Service for Kubernetes (disabled by default)

    • Minibroker has gained support for Redis, MongoDB, MySQL, PostgreSQL, and MariaDB

  • The Stratos UI has been updated to version 2.2:

    • There is a new metrics endpoint for keeping and exposing Cloud Foundry application and Kubernetes metrics

    • There are new views for Kubernetes application, pod, and node metrics

    • For a more detailed list of new features and fixes, see https://github.com/SUSE/stratos/releases/tag/2.2.0.

For information about deploying and administering SUSE Cloud Application Platform, see the product manuals at https://documentation.suse.com/suse-cap/1.

3.7.2 Features and Fixes

  • One Kubernetes service per job. The service names will include both the instance group (previously the role) and job name, which impacts the role manifest YAML

  • Changed two variable names to align with upstream practices; this may require changes to sizing:

    • diego-ssh replaces diego-access

    • api-group replaces api

  • UAA charts now have affinity/antiaffinity logic

  • Exposed the SMTP_HOST and SMTP_FROM_ADDRESS variables to allow for account creation and password reset

  • consul role removed due to redundancy

  • Kubernetes readiness check no longer looks for hyperkube explicitly

  • Updated cluster role names to ensure no namespace conflicts in Kubernetes

  • Includes these Cloud Foundry component versions:

    • UAA: v60.2

    • cf-deployment: 2.7.0

    • kubectl: 1.9.6

    • capi-release: 1.61.0

    • cflinuxfs2-release: v1.227.0

    • cf-mysql-release: v36.15.0

    • cf-opensuse42-release: 1.7.87

    • cf-sle12-release: 1.51.115

    • cf-smoke-tests-release: 40.0.5

    • cf-syslog-drain-release: v7.0

    • cf-usb: 7a45076

    • diego-release: v2.12.1

    • garden-runc-release: v1.15.1

    • groot-btrfs: 305b068d

    • loggregator-agent-release: v2.0

    • loggregator-release: v103.0

    • nats-release: v24

    • nfs-volume-release: v1.2.0

    • postgres-release: v26

    • routing-release: 0.179.0

    • scf-helper-release: b9fa59d

    • cf-acceptance-tests: c83c97b9

    • testbrain: 1.0.0-61-ga172cf9

    • statsd-injector-release: v1.3.0

    • uaa-fissile-release: 0.0.1-321-g6c32268

  • Buildpacks:

    • binary-buildpack-release: 1.0.27.1

    • dotnet-core-buildpack-release: 1.0.26-14-gf951834

    • go-buildpack-release: 1.8.28.1

    • java-buildpack-release: 4.16.1-3-g3cf9321

    • nodejs-buildpack-release: 1.6.34.1

    • php-buildpack-release: 4.3.63.1

    • python-buildpack-release: 1.6.23.1

    • ruby-buildpack-release: 1.7.26.1

    • staticfile-buildpack-release: 1.4.34.1

3.7.3 Known Issues

  • App-AutoScaler will not work on SUSE Containers as a Service Platform without Hairpin enabled.

  • Enabling new feature roles, such as CredHub and App-AutoScaler, requires more memory and CPU resources in minimal installations (at least 22 GB in total for single instances that have all roles enabled). If these new feature pods are enabled, for example, on Microsoft Azure instances, move to the tier Standard_D4_v2 or larger.

  • CredHub on Microsoft Azure is considered experimental.

  • Minibroker with MariaDB will see timeout issues upon deletion. If an error appears, wait one minute and retry. If the cf delete-service command fails but the instance pods are removed from Kubernetes, the service instance data can safely be removed with a cf purge-service-instance command.

  • The AWS Service Broker has changed with the recent release of v1.0. The Helm chart from SUSE will be updated in the near future to include these changes.

  • The URL of the internal cf-usb broker endpoint has changed. To reconnect with SUSE Cloud Foundry/SUSE Cloud Application Platform, brokers for PostgreSQL and MySQL that use cf-usb will require the following manual fix after the upgrade:

    1. Run kubectl get secret --namespace scf and copy the name of the secret (for example, secrets-2.14.5-1)

    2. Run cf service-brokers to get the URL for the cf-usb host (for example, https://cf-usb.scf.svc.cluster.local:24054)

    3. Get the current CF_USB password by running:

      kubectl get secret --namespace scf <SECRET_NAME> -o yaml | \
        grep \\scf-usb-password: | cut -d: -f2 | base64 -id

      Replace <SECRET_NAME> with the name from the first step.

    4. Finally, update the service broker:

      cf update-service-broker usb broker-admin <PASSWORD> \
        https://cf-usb-cf-usb.scf.svc.cluster.local:24054

      Replace <PASSWORD> with the password from step 3. The URL is a modified version of the URL from step 2; however, as the subdomain name, use cf-usb-cf-usb instead of cf-usb.

3.8 Release 1.2.1, September 2018

3.8.1 Features and Fixes

  • Updated Stratos UI to v2.1

  • Updated SUSE Cloud Foundry to v2.13.3

  • Introduction of App-AutoScaler (experimental, off by default)

  • Introduction of Minibroker for Redis (experimental)

  • Support for Microsoft Azure service brokers

  • Cloud Foundry deployment bumped to 2.7.0

  • Groot-btrfs now available

  • HA for nfs-broker, cc-clock and syslog-scheduler roles

  • Enabled cloud controller security events

  • Exposed broker_client_timeout_seconds as a router parameter

  • Realigned Cloud Foundry role composition to be more in line with upstream, which includes these changes:

    • mysql-proxy has been merged into the mysql role

    • diego-locket has been merged into diego-api

    • log-api roles now combines loggregator and syslog-rlp

    • syslog-adapter renamed as adapter

  • Removed process list from all roles

  • Removed duplicate routing_api.locket.api_location property

  • syslog-adapter added to syslog adapter certificate

  • INTERNAL_CA_KEY not included in every pod by default

  • Better mechanism for waiting on mysql included

  • Includes these Cloud Foundry component versions:

    • UAA: v60.2

    • cf-deployment: 2.7.0

    • ruby-buildpack: 1.7.21.1

    • go-buildpack: 1.8.22.1

    • kubectl: 1.9.6

    • capi-release: 1.61.0

    • cflinuxfs2-release: v1.227.0

    • cf-mysql-release: v36.15.0

    • cf-opensuse42-release: 648e8f1

    • cf-sle12-release: c585efc

    • cf-smoke-tests-release: 40.0.5

    • cf-syslog-drain-release: v7.0

    • cf-usb: 7a45076

    • consul-release: v195

    • diego-release: v2.12.1

    • garden-runc-release: v1.15.1

    • loggregator-release: v103.0

    • nats-release: v24

    • nfs-volume-release: v1.2.0

    • postgres-release: v26

    • routing-release: 0.179.0

    • scf-helper-release: b276460

    • cf-acceptance-tests: c83c97b9

    • testbrain: 1.0.0-61-ga172cf9

    • statsd-injector-release: v1.3.0

    • uaa-fissile-release: 0.0.1-299-gdd37ec6

  • Buildpacks:

    • binary-buildpack-release: 1.0.17

    • dotnet-core-buildpack-release: 1.0.26-14-gf951834

    • go-buildpack-release: 1.7.19-21-g0897183

    • java-buildpack-release: 3.16-18-gfeab2b6

    • nodejs-buildpack-release: 1.5.30-13-g584d686

    • php-buildpack-release: 3dc85f9

    • python-buildpack-release: 1.5.16-14-ga2bbb4c

    • ruby-buildpack-release: bd1f612

    • staticfile-buildpack-release: 1.4.0-12-gdfc6c09

3.8.2 Known Issues

  • Starting with SUSE Cloud Application Platform 1.2.1, during a helm upgrade Kubernetes will, by default, not upgrade pods that are not ready. To upgrade all pods, use the complete command: helm upgrade --force --recreate-pods --version 2.13.3

  • Similar to SUSE CaaS Platform 3, Microsoft Azure now mandates a stricter security policy via PodSecurityPolicy (PSP), which is included as part of the SUSE Cloud Application Platform Deployment Guide. Any namespace tied to SUSE Cloud Application Platform that requires privileged ports to be accessible needs to have a PSP set appropriately for access. This would include the default conventions of scf, uaa, stratos-ui, mysql-sidecar and postgres-sidecar as per our documentation tied to SUSE CaaS Platform 3: https://documentation.suse.com/suse-cap/1/html/cap-guides/cha-cap-depl-caasp.html#sec-cap-psps

  • Microsoft Azure users who previously had a Kubernetes policy without RBAC, but now have Azure Kubernetes Service (AKS) with RBAC (which is the new default with AKS), will need to modify their scf-config-values.yaml files so that auth: rbac replaces auth: none. If you remain in an AKS policy without RBAC, then you can ignore this change.
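
    The relevant fragment of scf-config-values.yaml would then contain (a sketch; the surrounding kube: section follows the example shown in Section 3.11.3):

      kube:
          auth: rbac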

  • If you are using Microsoft Azure, ensure that the root partition has enough space for the installation and potential upgrades. To do so, add the parameter --node-osdisk-size=60 to the command that creates the AKS instance: az aks create. For the complete command, see the SUSE Cloud Application Platform Deployment Guide, section AKS, subsection Create Resource Group and AKS Instance (https://documentation.suse.com/suse-cap/1/html/cap-guides/cha-cap-depl-aks.html#sec-cap-create-aks-instance).
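
    For example (a sketch; the resource group and cluster name are placeholders, and the remaining options should follow the Deployment Guide):

      az aks create --resource-group <resource-group> --name <cluster-name> \
        --node-osdisk-size 60 <other options per the Deployment Guide>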

3.9 Release 1.2, August 2018

3.9.1 Features and Fixes

  • Updated Stratos UI to v2

  • Updated SUSE Cloud Foundry to v2.11.0

  • Support for Amazon Elastic Container Service for Kubernetes and SUSE CaaS Platform v3

  • Support for Microsoft Azure load balancer enabled

  • Updated backup/restore plugin (v1.0.8)

  • New active/passive role management for pods, whereby the past model of using Ready and Not Ready as states has been retired. Pods will now be labeled as Active or Passive and rely on stateful sets to be managed, allowing for more high availability. Details are available here: https://github.com/SUSE/fissile/wiki/Pod-Management-using-Role-Manifest-Tags

  • All roles aside from UAA can now be HA

  • Certificate expiration now configurable

  • Added support for manual rotation of cloud controller database keys

  • Exposed the router.client_cert_validation property on the router

  • Use namespace for helm install name

  • Updated the role manifest validation to let the secrets generator use KUBE_SERVICE_DOMAIN_SUFFIX without having to configure HA itself

  • SCF_LOG_PORT now set to default port of 514

  • Fixed an issue during upgrade whereby USB sidecars did not receive updated password info, ensuring they will properly communicate with previously registered services

  • Patched an issue with the timestamp for monit_rsyslogd

  • cf-backup-restore restores security groups properly now

  • cf-backup-restore now relies on statically linked Linux binaries

  • Includes these Cloud Foundry component versions:

    • UAA: v59

    • cf-deployment: 1.36

    • ruby-buildpack: 1.7.18.2

    • go-buildpack: 1.8.22.1

    • kubectl: 1.8.2

    • capi-release: 1.58.0

    • cflinuxfs2-release: v1.209.0

    • cf-mysql-release: v36.14.0

    • cf-opensuse42-release: 054a0ca

    • cf-sle12-release: faf946c

    • cf-smoke-tests-release: 40.0.5

    • cf-syslog-drain-release: v6.5

    • cf-usb: 7a45076

    • consul-release: v192

    • diego-release: v2.8.0-24-gad85f06a

    • garden-runc-release: v1.11.1

    • loggregator-release: v102.1

    • nats-release: v24

    • nfs-volume-release: v1.2.0

    • postgres-release: v26

    • routing-release: 0.178.0

    • scf-helper-release: b276460

    • cf-acceptance-tests: 22c36ddc

    • testbrain: 1.0.0-61-ga172cf9

    • statsd-injector-release: v1.3.0

    • uaa-fissile-release: 0.0.1-289-g571836a

  • Buildpacks:

    • binary-buildpack-release: 1.0.17

    • dotnet-core-buildpack-release: 1.0.26-14-gf951834

    • go-buildpack-release: 1.7.19-17-g9dbf944

    • java-buildpack-release: 3.16-18-gfeab2b6

    • nodejs-buildpack-release: 1.5.30-13-g584d686

    • php-buildpack-release: 3dc85f9

    • python-buildpack-release: 1.5.16-14-ga2bbb4c

    • ruby-buildpack-release: ffffb58

    • staticfile-buildpack-release: 1.4.0-12-gdfc6c09

3.9.2 Known Issues

  • Upgrading to SUSE Cloud Application Platform 1.2 introduces a new active/passive model that will result in a longer-than-usual app instance downtime for upgrades to this new version. As part of this change, you will need to run the helm upgrade command with two additional parameters: helm upgrade --force --recreate-pods --version 2.11.0. This will be noticeable as Kubernetes pods marked Unready; Unready pods will not be upgraded.

  • SUSE CaaS Platform 3 uses an updated version of Kubernetes that mandates a stricter security policy via PodSecurityPolicy (PSP), which is included as part of the SUSE Cloud Application Platform Deployment Guide. This was optional in SUSE CaaS Platform 2, but it works the same way. Any namespace tied to SUSE Cloud Application Platform that requires privileged ports to be accessible needs to have a PSP set appropriately for access. This would include the default conventions of scf, uaa, stratos-ui, mysql-sidecar and postgres-sidecar as per our documentation.

  • UAA should be left as single availability and not high availability (HA).

3.10 Release 1.1.1, May 2018

3.10.1 Features and Fixes

  • Includes SCF v2.10.1

  • Enabled router.forwarded_client_cert variable for router

  • New syslog roles can have anti-affinity

  • MySQL-proxy healthcheck timeouts are configurable

  • cfdot added to all diego roles

  • Removed time stamp check for rsyslog

  • Upgrades will handle certificates better by having the required SAN metadata

  • Rotatable secrets are now immutable

  • Immutable config variables will not be generated

  • For high availability (HA) configurations, upgrades no longer require the api role to be scaled down

  • cf-backup-restore handles Docker apps properly now

  • cf-backup-restore returns a useful error if invalid JSON is parsed

  • PHP buildpack has been bumped to v4.3.53.1 to address MS-ISAC ADVISORY NUMBER 2018-046

  • Updated sidecars for MySQL and PostgreSQL

  • Includes these Cloud Foundry component versions:

    • uaa: v56.0

    • cf-deployment: v.1.21

    • loggregator-release: v102.1

    • cf-opensuse42-release: 459ef9f

    • cf-syslog-drain-release: v6.0

    • cf-usb: 79b1a8c

    • cf-mysql-release: v36.11.0

    • routing-release: 0.174.0

    • cf-sle12-release: b96cbc2

    • diego-release: v2.1.0

    • uaa-fissile-release: 0.0.1-243-ge11bf8d

    • cflinuxfs2-release: v1.194.0

    • cf-smoke-tests-release: 40.0.1

    • nats-release: v23

    • scf-helper-release/src/github.com/cloudfoundry/cf-acceptance-tests: 3beb6ed

    • capi-release: 1.52.0

3.10.2 Known Issues

  • Upgrading now rotates all internal passwords and certificates which may cause some downtime (for example, users will be unable to push applications) as the roles are restarted. This should not impact the availability of hosted applications running multiple instances.

  • If you are using the bundled UAA release, upgrade this first and pass the new certificate to the SUSE Cloud Foundry upgrade command as outlined in the upgrade instructions below.

  • When upgrading, existing deployments of the cf-usb-sidecar-mysql or cf-usb-sidecar-postgres brokers may subsequently be unable to delete service instances. The following commands fix this problem by updating the internal cf-usb password:

    CF_NAMESPACE=scf
    SECRET=$(kubectl get --namespace $CF_NAMESPACE deploy -o json \
      | jq -r '[.items[].spec.template.spec.containers[].env[]
      | select(.name == "INTERNAL_CA_CERT").valueFrom.secretKeyRef.name]
      | unique[]')
    USB_PASSWORD=$(kubectl get -n scf secret $SECRET -o jsonpath='{@.data.cf-usb-password}' \
      | base64 -d)
    USB_ENDPOINT=$(cf curl /v2/service_brokers \
      | jq -r '.resources[] | select(.entity.name=="usb").entity.broker_url')
    cf update-service-broker usb broker-admin "$USB_PASSWORD" "$USB_ENDPOINT"
  • If after upgrading:

    • the diego-api role is not fully functional (i.e. appearing as (0/1))

    • the bbs job in the pod is not starting (as per monit summary)

    • the bbs stdout log /var/vcap/sys/log/bbs/bbs.stdout.log contains Error 1062: Duplicate entry 'version' for key 'PRIMARY'

      Do the following to unblock the upgrade:

    • kubectl exec into (one of) the mysql pod(s)

      kubectl exec -it mysql-0 --namespace cf -- env TERM=xterm /bin/bash
    • Use mysql to connect to the diego database

      mysql --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf diego
    • Remove the offending entry

      DELETE FROM configurations WHERE id='version';
  • Do not set the mysql-proxy, routing-api, tcp-router, blobstore or diego_access roles to more than one instance each. Doing so can cause problems with subsequent upgrades which could lead to loss of data. Scalability of these roles will be enabled in an upcoming maintenance release.

  • The diego-api, diego-brain and routing-api roles are configured as active/passive, and passive pods can appear as Not Ready. This is expected behavior.

  • Microsoft Azure operators may not be able to connect to Microsoft Azure Database for MySQL/PostgreSQL databases with the current brokers.

3.11 Release 1.1, April 2018

3.11.1 What Is New?

  • Now supported on Microsoft Azure Container Services (AKS)

  • Cloud Foundry component and buildpack updates (see Section 3.11.2, “Features and Fixes”)

  • PostgreSQL and MySQL service broker sidecars, configured and deployed via Helm

  • cf backup CLI plugin for saving, restoring, or migrating CF data and applications

For more information about deploying SUSE Cloud Application Platform, see the Deployment Guide at https://documentation.suse.com/suse-cap/1/html/cap-guides/part-cap-deployment.html.

3.11.2 Features and Fixes

  • Includes SCF v2.8.0

  • Ability to specify multiple external IP addresses (see Section 3.11.4, “Known Issues” below on impact to upgrades)

  • MySQL now a clustered role

  • MySQL-proxy enabled for UAA

  • UAA has more logging enabled, so SCF_LOG_HOST, SCF_LOG_PORT and SCF_LOG_PROTOCOL have been exposed

  • TCP routing ports are configurable and can be templatized

  • CPU limits can be set for pods.

  • Memory limits for pods now properly enforced.

  • Kubernetes annotations enabled so operators can specify what nodes particular roles can be run on

  • Fixed cloud controller clock so that it will wait until API is ready

  • Overhauled secret rotation for upgrades

  • Includes these CF component versions:

    • diego-release 1.35

    • cf-mysql-release 36.10.0

    • cflinuxfs2-release 1.187.0

    • routing-release 0.172.0

    • garden-runc-release 1.11.1

    • nats-release 22

    • capi-release 1.49.0

  • Includes these Cloud Foundry buildpack versions:

    • go-buildpack-release 1.7.19-16-g37cc6b4

    • binary-buildpack-release 1.0.17

    • nodejs-buildpack-release 1.5.30-13-g584d686

    • ruby-buildpack-release 9adff61

    • php-buildpack-release ea8acd0

    • python-buildpack-release 1.5.16-14-ga2bbb4c

    • staticfile-buildpack-release 1.4.0-12-gdfc6c09

    • dotnet-core-buildpack-release 1.0.26-14-gf951834

    • java-buildpack-release 3.16-18-gfeab2b6

3.11.3 Configuration Changes

Changes to the format of values.yaml for SCF and UAA require special handling when upgrading from SUSE Cloud Application Platform 1.0 to 1.1 if you are reusing configuration files (for example, scf-config-values.yaml):

  • All secrets formerly set under env: are now set under secrets:. Any _PASSWORD, _SECRET, _CERT, or _KEY value explicitly set in values.yaml for SUSE Cloud Application Platform 1.0 should be moved into the secrets: section before running helm upgrade with the revised values.yaml. Find a sample configuration in Section 8, “Appendix: Configuration with secrets: Section”.

  • These secrets must be resupplied on each upgrade (for example, the CLUSTER_ADMIN_PASSWORD, UAA_ADMIN_CLIENT_SECRET) as they will not be carried forward automatically. We recommend always using a values file.

  • To rotate secrets, increment the kube.secrets_generation_counter (immutable generated secrets will not be reset).
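
    For example (a sketch; the counter value 2 assumes the previous value was 1, and the release name placeholder matches the upgrade commands below):

    helm upgrade --recreate-pods <scf-helm-release-name> suse/cf \
      --values scf-config-values.yaml --set kube.secrets_generation_counter=2 --version 2.8.0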

  • The kube.external_ip variable has been changed to kube.external_ips, allowing for services to be exposed on multiple Kubernetes worker nodes (for example, behind a TCP load balancer). Before upgrading, change the setting or add a new setting specified as an array. For example:

    kube.external_ip=10.1.1.1
    kube.external_ips=["10.1.1.1"]
  • Both variables can exist at the same time and be set to the same value in mixed-version environments. To specify multiple addresses, use:

    kube.external_ips=["1.1.1.1", "2.2.2.2"]
  • Upgrading from SUSE Cloud Application Platform 1.0.1 to 1.1

    An example scf-config-values.yaml for SUSE Cloud Application Platform 1.1 would look like this:

    env:
        # Domain for SCF. DNS for *.DOMAIN must point to a kube node's (not master)
        # external ip address.
        DOMAIN: cf-dev.io
    
    kube:
        # The IP address assigned to the kube node pointed to by the domain.
        #### the external_ip setting changed to accept a list of IPs, and was
        #### renamed to external_ips
        external_ips: ["192.168.77.77"]
        storage_class:
            # Make sure to change the value in here to whatever storage class you use
            persistent: "persistent"
            shared: "shared"
    
        # The registry the images will be fetched from. The values below should work for
        # a default installation from the suse registry.
        registry:
           hostname: "registry.suse.com"
           username: ""
           password: ""
        organization: "cap"
    
        auth: rbac
    
    secrets:
        # Password for user 'admin' in the cluster
        CLUSTER_ADMIN_PASSWORD: changeme
    
        # Password for SCF to authenticate with UAA
        UAA_ADMIN_CLIENT_SECRET: uaa-admin-client-secret

    To upgrade from SUSE Cloud Application Platform 1.0.1 to 1.1, run the following commands:

    $ helm repo update
    $ helm upgrade --recreate-pods <uaa-helm-release-name> suse/uaa --values scf-config-values.yaml --version 2.8.0
    $ SECRET=$(kubectl get pods --namespace uaa -o jsonpath='{.items[*].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    $ CA_CERT="$(kubectl get secret $SECRET --namespace uaa -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
    $ helm upgrade --recreate-pods <scf-helm-release-name> suse/cf --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}" --version 2.8.0
    $ helm upgrade --recreate-pods <console-helm-release-name> suse/console --values scf-config-values.yaml --version 1.1.0
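
To rotate the generated secrets on a later upgrade (see the secrets note above), increment the counter in the same values file and rerun the helm upgrade commands with it. A minimal sketch, assuming the scf-config-values.yaml shown above:

    kube:
      # was 1; incrementing this regenerates the generated secrets
      # (immutable generated secrets are not reset)
      secrets_generation_counter: 2

Then rerun the helm upgrade commands shown above unchanged; only the values file needs to be updated.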

3.11.4 Known Issues

Important

You will need Stratos UI 1.1 when running SUSE Cloud Application Platform 1.1 if you share the scf-values.yaml configuration file between them. Prior versions of the Stratos UI will not work.

Important

If you have used a configuration file from a version prior to 1.1, you will need to update it. See details below.

  • The variable kube.external_ip has now been renamed to kube.external_ips, meaning upgrades from older versions will fail unless the latter variable exists in the scf-values.yaml file used to deploy SUSE Cloud Application Platform. Both variables can exist at the same time and be set to the same value in mixed-version environments:

    kube.external_ip=1.1.1.1
    kube.external_ips=["1.1.1.1"]
    • Going forward, kube.external_ips is an array, hence it can be used as reproduced below:

      kube.external_ips=["1.1.1.1", "2.2.2.2"]
    • Also as a result of this change, the helm command line client must be version 2.6.0 or higher.

    • All the secrets have been renamed from env.FOO to secrets.FOO, so all the appropriate entries in scf-values.yaml need to be modified to align with that change.

    • You need to keep specifying all your secrets on each upgrade (for example, the CLUSTER_ADMIN_PASSWORD) as they will not be carried forward automatically.

    • To rotate secrets, increment the kube.secrets_generation_counter. Note that immutable generated secrets will not be reset.

  • In HA environments, upgrades can run into an issue whereby the API pods do not all come up post-migration. To work around this issue, scale down the API role to 1 before the upgrade. After completing the upgrade, scale the API role up again to 2 or more (see the sketch at the end of this list).

    • Some roles (like diego-api, diego-brain and routing-api) are configured as active/passive, so passive pods can appear as Not Ready.

    • Other roles (tcp-router and blobstore) cannot be scaled.

  • Cloud Application Platform v1.1 requires that Stratos UI use version 1.1. Older versions of the UI will not work due to the change in variable names.

  • Azure operators may not be able to connect to SQL databases with the sidecar.

  • Restores performed by the Backup CLI may leave docker apps in a stopped state. The workaround is to restart the affected applications.

  • A proper JSON file generated by the Backup CLI needs to be provided in order to do a restore, otherwise an unhelpful error is returned.

  • Do not set the mysql-proxy, routing-api, tcp-router, blobstore or diego_access roles to more than one instance each. Doing so can cause problems with subsequent upgrades which could lead to loss of data. Scalability of these roles will be enabled in an upcoming maintenance release.

  • To upgrade high availability (HA) configurations, scale down the api role count to 1. Then upon completing the upgrade, scale api up again to 2 or more.

    • The diego-api, diego-brain and routing-api roles are configured as active/passive, and passive pods can appear as Not Ready. This is expected behavior.

  • Azure operators may not be able to connect to Azure Database for MySQL/PostgreSQL databases with the current brokers.

  • cf backup-restore may leave Docker apps in a stopped state. These can be started manually.

  • cf backup-restore produces an unhelpful error if the file is not valid JSON.
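
A minimal sketch of the HA upgrade workaround mentioned above, assuming the values-file approach used throughout these notes: set the api count to 1 before the upgrade, then restore it afterwards.

    # scf-config-values.yaml, before running helm upgrade
    sizing:
      api:
        count: 1

Run the helm upgrade commands from Section 3.11.3, “Configuration Changes”, with this file. Once the upgrade has completed, set sizing.api.count back to 2 (or more) and run helm upgrade again with the same file.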

3.12 Release 1.0.1, February 2018

3.12.1 Features and Fixes

  • Upgrading from SUSE Cloud Application Platform 1.0 to 1.0.1 (scf 2.6.11 to 2.7.0) with the helm upgrade command requires the use of --force to drop an unnecessary persistent volume. Note that helm upgrade only works for multi-node clusters when running with a proper HA storage class. For example, hostpath will not work, as required stateful data can be lost.

  • Bump to Cloud Foundry Deployment 1.9.0; Cloud Foundry Deployment is used instead of Cloud Foundry Release from now on

  • Bump UAA to v53.3

  • Add ability to rename immutable secrets

  • Update CATS to be closer to what upstream is using

  • Make RBAC the default in the values.yaml (no need to specify anymore)

  • Increase test brain timeouts to stop randomly failing tests

  • Remove unused SANs from the generated TLS certificates

  • Remove the dependency on jq from stemcells

  • Fix duplicate buildpack ids when starting Cloud Foundry

  • Fix an issue in the vagrant box where compilation would fail due to old versions of docker.

  • Fix an issue where diego cell could not be mounted on NFS-backed Kubernetes storage class

  • Fix an issue where diego cell could not mount NFS in persi

  • Fix several problems reported with the syslog-forwarding implementation

3.12.2 Known Issues

  • Do not set the mysql or diego_access roles to more than one instance each in HA configurations. Doing so can cause problems with subsequent upgrades which could lead to loss of data. Scalability of these roles will be enabled in an upcoming maintenance release.

  • A helm upgrade command from 1.0 to 1.0.1 (scf 2.6.11 to 2.7.0) requires the use of --force to drop an unnecessary persistent volume (see the sketch below). Note that helm upgrade only works for multi-node clusters when running with a proper HA storage class (for example, hostpath will not work, as required stateful data can be lost).
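
A minimal sketch of such an upgrade, assuming the release name and values file used elsewhere in these notes:

    $ helm repo update
    $ helm upgrade --force <scf-helm-release-name> suse/cf --values scf-config-values.yaml

With Helm 2, --force updates resources through a delete-and-recreate strategy, which is what allows the obsolete persistent volume to be dropped here.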

3.13 Release 1.0, January 2018

  • Initial product release

4 Deploying SUSE Cloud Application Platform on SUSE Containers as a Service Platform

For supported ways to deploy SUSE Cloud Application Platform on SUSE Containers as a Service Platform, follow the deployment guides for your chosen infrastructure platform:

Following the above guides will provide you with a fully supported SUSE Containers as a Service Platform environment that you can subsequently use to install SUSE Cloud Application Platform using the SUSE Cloud Application Platform Deployment Guide.

5 Installing SUSE Cloud Application Platform on Microsoft Azure

For instructions on installing SUSE Cloud Application Platform on Microsoft Azure, see the official documentation at https://documentation.suse.com/suse-cap/1/html/cap-guides/cha-cap-depl-aks.html.

6 Obtaining Source Code

This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at https://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to sle_source_request@suse.com or as otherwise instructed at https://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.

7 Legal Notices

SUSE makes no representations or warranties with regard to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to revise this publication and to make changes to its content, at any time, without the obligation to notify any person or entity of such revisions or changes.

Further, SUSE makes no representations or warranties with regard to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to make changes to any and all parts of SUSE software, at any time, without any obligation to notify any person or entity of such changes.

Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classifications to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical/biological weaponry end uses. Refer to https://www.suse.com/company/legal/ for more information on exporting SUSE software. SUSE assumes no responsibility for your failure to obtain any necessary export approvals.

Copyright © 2017-2020 SUSE LLC.

This release notes document is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA-4.0). You should have received a copy of the license along with this document. If not, see https://creativecommons.org/licenses/by-nd/4.0/.

SUSE has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at https://www.suse.com/company/legal/ and one or more additional patents or pending patent applications in the U.S. and other countries.

For SUSE trademarks, see SUSE Trademark and Service Mark list (https://www.suse.com/company/legal/). All third-party trademarks are the property of their respective owners.

8 Appendix: Configuration with secrets: Section

---
env:
  # List of domains (including scheme) from which Cross-Origin requests will be
  # accepted, a * can be used as a wildcard for any part of a domain.
  ALLOWED_CORS_DOMAINS: "[]"

  # Allow users to change the value of the app-level allow_ssh attribute.
  ALLOW_APP_SSH_ACCESS: "true"

  # Extra token expiry time while uploading big apps, in seconds.
  APP_TOKEN_UPLOAD_GRACE_PERIOD: "1200"

  # List of allow / deny rules for the blobstore internal server. Will be
  # followed by 'deny all'. Each entry must be followed by a semicolon.
  BLOBSTORE_ACCESS_RULES: "allow 10.0.0.0/8; allow 172.16.0.0/12; allow 192.168.0.0/16;"

  # Maximal allowed file size for upload to blobstore, in megabytes.
  BLOBSTORE_MAX_UPLOAD_SIZE: "5000"

  # The set of CAT test suites to run. If not specified it falls back to a
  # hardwired set of suites.
  CATS_SUITES: ~

  # URI for a CDN to use for buildpack downloads.
  CDN_URI: ""

  # The OAuth2 authorities available to the cluster administrator.
  CLUSTER_ADMIN_AUTHORITIES: "scim.write,scim.read,openid,cloud_controller.admin,clients.read,clients.write,doppler.firehose,routing.router_groups.read,routing.router_groups.write"

  # 'build' attribute in the /v2/info endpoint
  CLUSTER_BUILD: "2.0.2"

  # 'description' attribute in the /v2/info endpoint
  CLUSTER_DESCRIPTION: "SUSE Cloud Foundry"

  # 'name' attribute in the /v2/info endpoint
  CLUSTER_NAME: "SCF"

  # 'version' attribute in the /v2/info endpoint
  CLUSTER_VERSION: "2"

  # The standard amount of disk (in MB) given to an application when not
  # overridden by the user via manifest, command line, etc.
  DEFAULT_APP_DISK_IN_MB: "1024"

  # The standard amount of memory (in MB) given to an application when not
  # overridden by the user via manifest, command line, etc.
  DEFAULT_APP_MEMORY: "1024"

  # If set apps pushed to spaces that allow SSH access will have SSH enabled by
  # default.
  DEFAULT_APP_SSH_ACCESS: "true"

  # The default stack to use if no custom stack is specified by an app.
  DEFAULT_STACK: "sle12"

  # The container disk capacity the cell should manage. If this capacity is
  # larger than the actual disk quota of the cell component, over-provisioning
  # will occur.
  DIEGO_CELL_DISK_CAPACITY_MB: "auto"

  # The memory capacity the cell should manage. If this capacity is larger than
  # the actual memory of the cell component, over-provisioning will occur.
  DIEGO_CELL_MEMORY_CAPACITY_MB: "auto"

  # Maximum network transmission unit length in bytes for application
  # containers.
  DIEGO_CELL_NETWORK_MTU: "1400"

  # A CIDR subnet mask specifying the range of subnets available to be assigned
  # to containers.
  DIEGO_CELL_SUBNET: "10.38.0.0/16"

  # Disable external buildpacks. Only admin buildpacks and system buildpacks
  # will be available to users.
  DISABLE_CUSTOM_BUILDPACKS: "false"

  # The host to ping for confirmation of DNS resolution.
  DNS_HEALTH_CHECK_HOST: "127.0.0.1"

  # Base domain of the SCF cluster.
  # Example: my-scf-cluster.com
  DOMAIN: ~

  # The number of versions of an application to keep. You will be able to
  # roll back to this many versions.
  DROPLET_MAX_STAGED_STORED: "5"

  # Enables setting the X-Forwarded-Proto header if SSL termination happened
  # upstream and the header value was set incorrectly. When this property is set
  # to true, the gorouter sets the header X-Forwarded-Proto to https. When this
  # value is set to false, the gorouter sets the header X-Forwarded-Proto to the
  # protocol of the incoming request.
  FORCE_FORWARDED_PROTO_AS_HTTPS: "false"

  # URL pointing to the Docker registry used for fetching Docker images. If not
  # set, the Docker service default is used.
  GARDEN_DOCKER_REGISTRY: "registry-1.docker.io"

  # Whitelist of IP:PORT tuples and CIDR subnet masks. Pulling from docker
  # registries with self signed certificates will not be permitted if the
  # registry's address is not listed here.
  GARDEN_INSECURE_DOCKER_REGISTRIES: ""

  # Override DNS servers to be used in containers; defaults to the same as the
  # host.
  GARDEN_LINUX_DNS_SERVER: ""

  # The filesystem driver to use (btrfs or overlay-xfs).
  GARDEN_ROOTFS_DRIVER: "btrfs"

  # Location of the proxy to use for secure web access.
  HTTPS_PROXY: ~

  # Location of the proxy to use for regular web access.
  HTTP_PROXY: ~

  KUBE_SERVICE_DOMAIN_SUFFIX: ~

  # The cluster's log level: off, fatal, error, warn, info, debug, debug1,
  # debug2.
  LOG_LEVEL: "info"

  # The maximum amount of disk a user can request for an application via
  # manifest, command line, etc., in MB. See also DEFAULT_APP_DISK_IN_MB for the
  # standard amount.
  MAX_APP_DISK_IN_MB: "2048"

  # Maximum health check timeout that can be set for an app, in seconds.
  MAX_HEALTH_CHECK_TIMEOUT: "180"

  # Sets the maximum allowed size of the client request body, specified in the
  # “Content-Length” request header field, in megabytes. If the size in a
  # request exceeds the configured value, the 413 (Request Entity Too Large)
  # error is returned to the client. Please be aware that browsers cannot
  # correctly display this error. Setting size to 0 disables checking of client
  # request body size. This limits application uploads, buildpack uploads, etc.
  NGINX_MAX_REQUEST_BODY_SIZE: "2048"

  # Comma separated list of IP addresses and domains which should not be
  # directed through a proxy, if any.
  NO_PROXY: ~

  # Comma separated list of white-listed options that may be set during create
  # or bind operations.
  # Example:
  # uid,gid,allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,fsname,username,password
  PERSI_NFS_ALLOWED_OPTIONS: "uid,gid,auto_cache,username,password"

  # Comma separated list of default values for nfs mount options. If a default
  # is specified with an option not included in PERSI_NFS_ALLOWED_OPTIONS, then
  # this default value will be set and it won't be overridable.
  PERSI_NFS_DEFAULT_OPTIONS: ~

  # Comma separated list of white-listed options that may be accepted in the
  # mount_config options. Note a specific 'sloppy_mount:true' volume option
  # tells the driver to ignore non-white-listed options, while a
  # 'sloppy_mount:false' tells the driver to fail fast instead when receiving a
  # non-white-listed option.
  #
  # Example:
  # allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,sloppy_mount,fsname
  PERSI_NFS_DRIVER_ALLOWED_IN_MOUNT: "auto_cache"

  # Comma separated list of white-listed options that may be configured in
  # the mount_config.source URL query params.
  #
  # Example: uid,gid,auto-traverse-mounts,dircache
  PERSI_NFS_DRIVER_ALLOWED_IN_SOURCE: "uid,gid"

  # Comma separated list of default values for options that may be configured in
  # the mount_config options, formatted as 'option:default'. If an option is not
  # specified in the volume mount, or the option is not white-listed, then the
  # specified default value will be used instead.
  #
  # Example:
  # allow_root:false,nfs_uid:2000,nfs_gid:2000,auto_cache:true,sloppy_mount:true
  PERSI_NFS_DRIVER_DEFAULT_IN_MOUNT: "auto_cache:true"

  # Comma separated list of default values for options in the source URL query
  # params, formatted as 'option:default'. If an option is not specified in the
  # volume mount, or the option is not white-listed, then the specified default
  # value will be applied.
  PERSI_NFS_DRIVER_DEFAULT_IN_SOURCE: ~

  # Disable Persi NFS driver
  PERSI_NFS_DRIVER_DISABLE: "false"

  # LDAP server host name or ip address (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_HOST: ""

  # LDAP server port (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PORT: "389"

  # LDAP server protocol (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PROTOCOL: "tcp"

  # LDAP service account user name (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_USER: ""

  # LDAP fqdn for user records we will search against when looking up user uids
  # (required for LDAP integration only)
  # Example: cn=Users,dc=corp,dc=test,dc=com
  PERSI_NFS_DRIVER_LDAP_USER_FQDN: ""

  # Certificates to add to the rootfs trust store. Multiple certs are possible by
  # concatenating their definitions into one big block of text.
  ROOTFS_TRUSTED_CERTS: ""

  # The algorithm used by the router to distribute requests for a route across
  # backends. Supported values are round-robin and least-connection.
  ROUTER_BALANCING_ALGORITHM: "round-robin"

  # The log destination to talk to. This has to point to a syslog server.
  SCF_LOG_HOST: ~

  # The port used by rsyslog to talk to the log destination. If not set it
  # defaults to 514, the standard port of syslog.
  SCF_LOG_PORT: ~

  # The protocol used by rsyslog to talk to the log destination. The allowed
  # values are tcp and udp. The default is tcp.
  SCF_LOG_PROTOCOL: "tcp"

  # A comma-separated list of insecure Docker registries in the form of
  # '<HOSTNAME|IP>:PORT'. Each registry must be quoted separately.
  #
  # Example: "docker-registry.example.com:80", "hello.example.org:443"
  STAGER_INSECURE_DOCKER_REGISTRIES: ""

  # Timeout for staging an app, in seconds.
  STAGING_TIMEOUT: "900"

  # Support contact information for the cluster
  SUPPORT_ADDRESS: "support@example.com"

  # TCP routing domain of the SCF cluster; only used for testing;
  # Example: tcp.my-scf-cluster.com
  TCP_DOMAIN: ~

  # Concatenation of trusted CA certificates to be made available on the cell.
  TRUSTED_CERTS: ~

  # The host name of the UAA server (root zone)
  UAA_HOST: ~

  # The tcp port the UAA server (root zone) listens on for requests.
  UAA_PORT: "2793"

  # Whether or not to use privileged containers for buildpack based
  # applications. Containers with a docker-image-based rootfs will continue to
  # always be unprivileged.
  USE_DIEGO_PRIVILEGED_CONTAINERS: "false"

  # Whether or not to use privileged containers for staging tasks.
  USE_STAGER_PRIVILEGED_CONTAINERS: "false"

sizing:
  # Flag to activate high-availability mode
  HA: false

  # The api role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # - cloud_controller_ng: The Cloud Controller provides the primary Cloud Foundry
  #   API that is used by the CF CLI. The Cloud Controller uses a database to keep
  #   tables for organizations, spaces, apps, services, service instances, user
  #   roles, and more. Typically multiple instances of Cloud Controller are load
  #   balanced.
  #
  # - route_registrar: Used for registering routes
  #
  # Also: metron_agent, statsd_injector, go-buildpack, binary-buildpack,
  # nodejs-buildpack, ruby-buildpack, php-buildpack, python-buildpack,
  # staticfile-buildpack, java-buildpack, dotnet-core-buildpack
  api:
    # Node affinity rules can be specified here
    affinity: {}

    # The api role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2421
      limit: ~

  # The blobstore role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - route_registrar: Used for registering routes
  #
  # Also: blobstore, metron_agent
  blobstore:
    # Node affinity rules can be specified here
    affinity: {}

    # The blobstore role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      blobstore_data: 50

    # Unit [MiB]
    memory:
      request: 420
      limit: ~

  # The cc-clock role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - cloud_controller_clock: The Cloud Controller clock periodically schedules
  #   Cloud Controller clean up tasks for app usage events, audit events, failed
  #   jobs, and more. Only a single instance of this job is necessary.
  #
  # Also: metron_agent, statsd_injector
  cc_clock:
    # Node affinity rules can be specified here
    affinity: {}

    # The cc-clock role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 789
      limit: ~

  # The cc-uploader role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: tps, cc_uploader, metron_agent
  cc_uploader:
    # Node affinity rules can be specified here
    affinity: {}

    # The cc-uploader role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 129
      limit: ~

  # The cc-worker role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - cloud_controller_worker: Cloud Controller worker processes background
  #   tasks submitted via the API.
  #
  # Also: metron_agent
  cc_worker:
    # Node affinity rules can be specified here
    affinity: {}

    # The cc-worker role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 753
      limit: ~

  # The cf-usb role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: cf-usb
  cf_usb:
    # Node affinity rules can be specified here
    affinity: {}

    # The cf-usb role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 117
      limit: ~

  # Global CPU configuration
  cpu:
    # Flag to activate cpu requests
    requests: false

    # Flag to activate cpu limits
    limits: false

  # The diego-access role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: ssh_proxy, metron_agent, file_server
  diego_access:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego-access role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 123
      limit: ~

  # The diego-api role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: bbs, metron_agent
  diego_api:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego-api role can scale between 1 and 3 instances.
    # The instance count must be an odd number (not divisible by 2).
    # For high availability it needs at least 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 138
      limit: ~

  # The diego-brain role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: auctioneer, metron_agent
  diego_brain:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego-brain role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 99
      limit: ~

  # The diego-cell role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - wait-for-uaa: Wait for UAA to be ready before starting any jobs
  #
  # Also: rep, route_emitter, garden, cflinuxfs2-rootfs-setup,
  # opensuse42-rootfs-setup, cf-sle12-setup, metron_agent, nfsv3driver
  diego_cell:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego-cell role can scale between 1 and 254 instances.
    # For high availability it needs at least 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    disk_sizes:
      grootfs_data: 50

    # Unit [MiB]
    memory:
      request: 4677
      limit: ~

  # The diego-locket role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: locket, metron_agent
  diego_locket:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego-locket role can scale between 1 and 3 instances.
    # For high availability it needs at least 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 90
      limit: ~

  # The doppler role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: doppler, metron_agent
  doppler:
    # Node affinity rules can be specified here
    affinity: {}

    # The doppler role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 390
      limit: ~

  # The loggregator role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: loggregator_trafficcontroller, metron_agent
  loggregator:
    # Node affinity rules can be specified here
    affinity: {}

    # The loggregator role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 153
      limit: ~

  # Global memory configuration
  memory:
    # Flag to activate memory requests
    requests: false

    # Flag to activate memory limits
    limits: false

  # The mysql role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: mysql
  mysql:
    # Node affinity rules can be specified here
    affinity: {}

    # The mysql role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      mysql_data: 20

    # Unit [MiB]
    memory:
      request: 2841
      limit: ~

  # The mysql-proxy role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: proxy
  mysql_proxy:
    # Node affinity rules can be specified here
    affinity: {}

    # The mysql-proxy role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 63
      limit: ~

  # The nats role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - nats: The NATS server provides a publish-subscribe messaging system for the
  #   Cloud Controller, the DEA, HM9000, and other Cloud Foundry components.
  #
  # Also: metron_agent
  nats:
    # Node affinity rules can be specified here
    affinity: {}

    # The nats role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 60
      limit: ~

  # The nfs-broker role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: metron_agent, nfsbroker
  nfs_broker:
    # Node affinity rules can be specified here
    affinity: {}

    # The nfs-broker role can scale between 1 and 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 63
      limit: ~

  # The post-deployment-setup role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - uaa-create-user: Create the initial user in UAA
  #
  # - configure-scf: Uses the cf CLI to configure SCF once it's online (things
  #   like proxy settings, service brokers, etc.)
  post_deployment_setup:
    # Node affinity rules can be specified here
    affinity: {}

    # The post-deployment-setup role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The router role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - gorouter: Gorouter maintains a dynamic routing table based on updates
  #   received from NATS and (when enabled) the Routing API. This routing table
  #   maps URLs to backends. The router finds the URL in the routing table that
  #   most closely matches the host header of the request and load balances
  #   across the associated backends.
  #
  # Also: metron_agent
  router:
    # Node affinity rules can be specified here
    affinity: {}

    # The router role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 135
      limit: ~

  # The routing-api role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: metron_agent, routing-api
  routing_api:
    # Node affinity rules can be specified here
    affinity: {}

    # The routing-api role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 114
      limit: ~

  # The secret-generation role contains the following jobs:
  #
  # - generate-secrets: This job will generate the secrets for the cluster
  secret_generation:
    # Node affinity rules can be specified here
    affinity: {}

    # The secret-generation role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The syslog-adapter role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: adapter, metron_agent
  syslog_adapter:
    # Node affinity rules can be specified here
    affinity: {}

    # The syslog-adapter role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 78
      limit: ~

  # The syslog-rlp role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: metron_agent, reverse_log_proxy
  syslog_rlp:
    # Node affinity rules can be specified here
    affinity: {}

    # The syslog-rlp role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 93
      limit: ~

  # The syslog-scheduler role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: scheduler, metron_agent
  syslog_scheduler:
    # Node affinity rules can be specified here
    affinity: {}

    # The syslog-scheduler role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 69
      limit: ~

  # The tcp-router role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - wait-for-uaa: Wait for UAA to be ready before starting any jobs
  #
  # Also: tcp_router, metron_agent
  tcp_router:
    # Node affinity rules can be specified here
    affinity: {}

    # The tcp-router role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 99
      limit: ~

    ports:
      tcp_route:
        count: 9

secrets:
  # The password for the cluster administrator.
  # This value is immutable and must not be changed once set.
  CLUSTER_ADMIN_PASSWORD: ~

  # LDAP service account password (required for LDAP integration only)
  # This value is immutable and must not be changed once set.
  PERSI_NFS_DRIVER_LDAP_PASSWORD: "-"

  # The password of the admin client - a client named admin with uaa.admin as an
  # authority.
  # This value is immutable and must not be changed once set.
  UAA_ADMIN_CLIENT_SECRET: ~

  # The CA certificate for UAA
  UAA_CA_CERT: ~

  # PEM encoded RSA private key used to identify host.
  # This value uses a generated default.
  APP_SSH_KEY: ~

  # MD5 fingerprint of the host key of the SSH proxy that brokers connections to
  # application instances.
  # This value uses a generated default.
  APP_SSH_KEY_FINGERPRINT: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  AUCTIONEER_REP_CERT: ~

  # PEM-encoded key
  # This value uses a generated default.
  AUCTIONEER_REP_KEY: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  AUCTIONEER_SERVER_CERT: ~

  # PEM-encoded server key
  # This value uses a generated default.
  AUCTIONEER_SERVER_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_AUCTIONEER_CERT: ~

  # PEM-encoded key
  # This value uses a generated default.
  BBS_AUCTIONEER_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_CLIENT_CRT: ~

  # PEM-encoded client key.
  # This value uses a generated default.
  BBS_CLIENT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_REP_CERT: ~

  # PEM-encoded key
  # This value uses a generated default.
  BBS_REP_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_SERVER_CRT: ~

  # PEM-encoded client key.
  # This value uses a generated default.
  BBS_SERVER_KEY: ~

  # The PEM-encoded certificate (optionally as a certificate chain) for serving
  # blobs over TLS/SSL.
  # This value uses a generated default.
  BLOBSTORE_TLS_CERT: ~

  # The PEM-encoded private key for signing TLS/SSL traffic.
  # This value uses a generated default.
  BLOBSTORE_TLS_KEY: ~

  # The PEM-encoded certificate for internal cloud controller traffic.
  # This value uses a generated default.
  CC_SERVER_CRT: ~

  # The PEM-encoded private key for internal cloud controller traffic.
  # This value uses a generated default.
  CC_SERVER_KEY: ~

  # The PEM-encoded certificate for internal cloud controller uploader traffic.
  # This value uses a generated default.
  CC_UPLOADER_CRT: ~

  # The PEM-encoded private key for internal cloud controller uploader traffic.
  # This value uses a generated default.
  CC_UPLOADER_KEY: ~

  # PEM-encoded broker server certificate.
  # This value uses a generated default.
  CF_USB_BROKER_SERVER_CERT: ~

  # PEM-encoded broker server key.
  # This value uses a generated default.
  CF_USB_BROKER_SERVER_KEY: ~

  # PEM-encoded client certificate
  # This value uses a generated default.
  DIEGO_CLIENT_CERT: ~

  # PEM-encoded client key
  # This value uses a generated default.
  DIEGO_CLIENT_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  DOPPLER_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  DOPPLER_KEY: ~

  # PEM-encoded CA certificate used to sign the TLS certificate used by all
  # components to secure their communications.
  # This value uses a generated default.
  INTERNAL_CA_CERT: ~

  # PEM-encoded CA key.
  # This value uses a generated default.
  INTERNAL_CA_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  METRON_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  METRON_KEY: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  REP_SERVER_CERT: ~

  # PEM-encoded server key
  # This value uses a generated default.
  REP_SERVER_KEY: ~

  # The public ssl cert for ssl termination.
  # This value uses a generated default.
  ROUTER_SSL_CERT: ~

  # The private ssl key for ssl termination.
  # This value uses a generated default.
  ROUTER_SSL_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_ADAPT_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  SYSLOG_ADAPT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_RLP_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  SYSLOG_RLP_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_SCHED_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  SYSLOG_SCHED_KEY: ~

  # PEM-encoded client certificate for internal communication between the cloud
  # controller and TPS.
  # This value uses a generated default.
  TPS_CC_CLIENT_CRT: ~

  # PEM-encoded client key for internal communication between the cloud
  # controller and TPS.
  # This value uses a generated default.
  TPS_CC_CLIENT_KEY: ~

  # PEM-encoded certificate for communication with the traffic controller of the
  # log infrastructure.
  # This value uses a generated default.
  TRAFFICCONTROLLER_CERT: ~

  # PEM-encoded key for communication with the traffic controller of the log
  # infrastructure.
  # This value uses a generated default.
  TRAFFICCONTROLLER_KEY: ~

services:
  loadbalanced: false
kube:
  external_ips: []
  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 1
  storage_class:
    persistent: "persistent"
    shared: "shared"
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"
  auth: "rbac"