SUSE Cloud Application Platform 1.4.1

Deployment, Administration, and User Guides

Introducing SUSE Cloud Application Platform, a software platform for cloud-native application deployment based on SUSE Cloud Foundry and Kubernetes.

Authors: Carla Schroder, Billy Tat, and Claudia-Amelia Marin
Publication Date: August 02, 2019
About This Guide
Required Background
Available Documentation
Feedback
Documentation Conventions
About the Making of This Documentation
I Overview of SUSE Cloud Application Platform
1 About SUSE Cloud Application Platform
1.1 New in Version 1.4.1
1.2 SUSE Cloud Application Platform Overview
1.3 Minimum Requirements
1.4 SUSE Cloud Application Platform Architecture
2 Running SUSE Cloud Application Platform on non-SUSE CaaS Platform Kubernetes Systems
2.1 Kubernetes Requirements
II Deploying SUSE Cloud Application Platform
3 Deployment and Administration Notes
3.1 README First
3.2 Important Changes
3.3 Usage of Helm Chart Fields in Cloud Application Platform
3.4 Helm Values in scf-config-values.yaml
3.5 Status of Pods during Deployment
3.6 Namespaces
3.7 DNS Management
3.8 Releases and Helm Chart Versions
4 Using an Ingress Controller with Cloud Application Platform
4.1 Deploying NGINX Ingress Controller
4.2 Changing the Max Body Size
5 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform
5.1 Prerequisites
5.2 Pod Security Policy
5.3 Choose Storage Class
5.4 Test Storage Class
5.5 Configure the SUSE Cloud Application Platform Production Deployment
5.6 Deploy with Helm
5.7 Add the Kubernetes Charts Repository
5.8 Copy SUSE Enterprise Storage Secret
5.9 Deploy uaa
5.10 Deploy scf
5.11 Expanding Capacity of a Cloud Application Platform Deployment on SUSE® CaaS Platform
6 Installing the Stratos Web Console
6.1 Deploy Stratos on SUSE® CaaS Platform
6.2 Deploy Stratos on Amazon EKS
6.3 Deploy Stratos on Microsoft AKS
6.4 Deploy Stratos on Google GKE
6.5 Upgrading Stratos
6.6 Stratos Metrics
7 SUSE Cloud Application Platform High Availability
7.1 Configuring Cloud Application Platform for High Availability
7.2 Availability Zones
8 LDAP Integration
8.1 Prerequisites
8.2 Example LDAP Integration
9 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)
9.1 Prerequisites
9.2 Create Resource Group and AKS Instance
9.3 Install Helm Client and Tiller
9.4 Pod Security Policies
9.5 Enable Swap Accounting
9.6 Default Storage Class
9.7 DNS Configuration
9.8 Deployment Configuration
9.9 Add the Kubernetes Charts Repository
9.10 Deploying SUSE Cloud Application Platform
9.11 Configuring and Testing the Native Microsoft AKS Service Broker
9.12 Upgrading An AKS Cluster and Additional Considerations
9.13 Resizing Persistent Volumes
9.14 Expanding Capacity of a Cloud Application Platform Deployment on Microsoft AKS
10 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)
10.1 Prerequisites
10.2 IAM Requirements for EKS
10.3 Install Helm Client and Tiller
10.4 Default Storage Class
10.5 Security Group rules
10.6 DNS Configuration
10.7 Deployment Configuration
10.8 Deploying Cloud Application Platform
10.9 Add the Kubernetes Charts Repository
10.10 Deploy uaa
10.11 Deploy scf
10.12 Deploying and Using the AWS Service Broker
10.13 Resizing Persistent Volumes
11 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)
11.1 Prerequisites
11.2 Creating a GKE cluster
11.3 Enable Swap Accounting
11.4 Get kubeconfig File
11.5 Install Helm Client and Tiller
11.6 Default Storage Class
11.7 DNS Configuration
11.8 Deployment Configuration
11.9 Add the Kubernetes charts repository
11.10 Deploying SUSE Cloud Application Platform
11.11 Resizing Persistent Volumes
11.12 Expanding Capacity of a Cloud Application Platform Deployment on Google GKE
12 Installing SUSE Cloud Application Platform on OpenStack
12.1 Prerequisites
12.2 Create a New OpenStack Project
12.3 Deploy SUSE Cloud Application Platform
12.4 Bootstrapping SUSE Cloud Application Platform
12.5 Growing the Root Filesystem
13 Setting Up a Registry for an Air Gapped Environment
13.1 Prerequisites
13.2 Mirror Images to Registry
III SUSE Cloud Application Platform Administration
14 Upgrading SUSE Cloud Application Platform
14.1 Important Considerations
14.2 Upgrading SUSE Cloud Application Platform
14.3 Installing Skipped Releases
15 Configuration Changes
15.1 Configuration Change Example
15.2 Other Examples
16 Creating Admin Users
16.1 Prerequisites
16.2 Creating an Example Cloud Application Platform Cluster Administrator
17 Managing Passwords
17.1 Password Management with the Cloud Foundry Client
17.2 Changing User Passwords with Stratos
18 Cloud Controller Database Secret Rotation
18.1 Tables with Encrypted Information
19 Backup and Restore
19.1 Backup and Restore Using cf-plugin-backup
19.2 Disaster Recovery in scf through Raw Data Backup and Restore
20 Provisioning Services with Minibroker
20.1 Deploy Minibroker
20.2 Setting Up the Environment for Minibroker Usage
20.3 Using Minibroker with Applications
21 Setting Up and Using a Service Broker
21.1 Enabling and Disabling Service Brokers
21.2 Prerequisites
21.3 Deploying on CaaS Platform 3
21.4 Configuring the MySQL Deployment
21.5 Deploying the MySQL Chart
21.6 Create and Bind a MySQL Service
21.7 Deploying the PostgreSQL Chart
21.8 Removing Service Broker Sidecar Deployments
21.9 Upgrade Notes
22 App-AutoScaler
22.1 Prerequisites
22.2 Enabling and Disabling the App-AutoScaler Service
22.3 Upgrade Considerations
22.4 Using the App-AutoScaler Service
22.5 Policies
23 Logging
23.1 Logging to an External Syslog Server
23.2 Log Levels
24 Managing Certificates
24.1 Certificate Characteristics
24.2 Deployment Configuration
24.3 Deploying SUSE Cloud Application Platform with Certificates
24.4 Rotating Automatically Generated Secrets
25 Integrating CredHub with SUSE Cloud Application Platform
25.1 Installing the CredHub Client
25.2 Enabling and Disabling CredHub
25.3 Upgrade Considerations
25.4 Connecting to the CredHub Service
26 Offline Buildpacks
26.1 Creating an Offline Buildpack
27 Custom Application Domains
27.1 Customizing Application Domains
28 Managing Nproc Limits of Pods
28.1 Configuring and Applying Nproc Limits
IV SUSE Cloud Application Platform User Guide
29 Deploying and Managing Applications with the Cloud Foundry Client
29.1 Using the cf CLI with SUSE Cloud Application Platform
V Troubleshooting
30 Troubleshooting
30.1 Using Supportconfig
30.2 Deployment Is Taking Too Long
30.3 Deleting and Rebuilding a Deployment
30.4 Querying with Kubectl
A Appendix
A.1 Manual Configuration of Pod Security Policies
A.2 Complete suse/uaa values.yaml File
A.3 Complete suse/scf values.yaml File
B GNU Licenses
B.1 GNU Free Documentation License

Copyright © 2006– 2019 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

SUSE Cloud Application Platform is a software platform for cloud-native application development, based on Cloud Foundry, with additional supporting services and components. The core of the platform is SUSE Cloud Foundry, a Cloud Foundry distribution for Kubernetes which runs on SUSE Linux Enterprise containers.

Cloud Application Platform is designed to run on any Kubernetes cluster. This guide describes how to deploy it on SUSE CaaS Platform, Microsoft Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and OpenStack.

1 Required Background

To keep the scope of these guidelines manageable, certain technical assumptions have been made:

  • You have some computer experience and are familiar with common technical terms.

  • You are familiar with the documentation for your system and the network on which it runs.

  • You have a basic understanding of Linux systems.

2 Available Documentation

We provide HTML and PDF versions of our books in different languages. Documentation for our products is available at http://www.suse.com/documentation/, where you can also find the latest updates and browse or download the documentation in various formats.

The following documentation is available for this product:

Deployment, Administration, and User Guides

This comprehensive guide covers deployment, administration, and use of SUSE Cloud Application Platform, as well as its architecture and minimum system requirements.

3 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

4 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, Alt-F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • x86_64 This paragraph is only relevant for the AMD64/Intel 64 architecture. The arrows mark the beginning and the end of the text block.

    System z, POWER This paragraph is only relevant for the architectures z Systems and POWER. The arrows mark the beginning and the end of the text block.

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

5 About the Making of This Documentation

This documentation is written in SUSEDoc, a subset of DocBook 5. The XML source files were validated by jing (see https://code.google.com/p/jing-trang/), processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted through FOP from Apache Software Foundation. The open source tools and the environment used to build this documentation are provided by the DocBook Authoring and Publishing Suite (DAPS). The project's home page can be found at https://github.com/openSUSE/daps.

The XML source code of this documentation can be found at https://github.com/SUSE/doc-cap.

Part I Overview of SUSE Cloud Application Platform

1 About SUSE Cloud Application Platform

1.1 New in Version 1.4.1

  • SUSE Cloud Foundry has been updated to version 2.17.1:

See all product manuals for SUSE Cloud Application Platform 1.x at SUSE Cloud Application Platform 1.

Tip: Read the Release Notes

Make sure to review the release notes for SUSE Cloud Application Platform published at https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/.

1.2 SUSE Cloud Application Platform Overview

SUSE Cloud Application Platform is a software platform for cloud-native application deployment based on SUSE Cloud Foundry and Kubernetes.

SUSE Cloud Application Platform describes the complete software stack, including the operating system, Kubernetes, and SUSE Cloud Foundry.

SUSE Cloud Application Platform consists of the SUSE Linux Enterprise builds of the uaa (User Account and Authentication) server, SUSE Cloud Foundry, the Stratos Web user interface, and Stratos Metrics.

The Cloud Foundry code base provides the basic functionality. SUSE Cloud Foundry differentiates itself from other Cloud Foundry distributions by running in Linux containers managed by Kubernetes, rather than virtual machines managed with BOSH, for greater fault tolerance and lower memory use.

All Docker images for the SUSE Linux Enterprise builds are hosted on registry.suse.com. These are the commercially-supported images. (Community-supported images for openSUSE are hosted on Docker Hub.) Product manuals on SUSE Doc: SUSE Cloud Application Platform 1 refer to the commercially-supported SUSE Linux Enterprise version.

Cloud Application Platform is designed to run on any Kubernetes cluster. This guide describes how to deploy it on SUSE CaaS Platform, Microsoft Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and OpenStack.

SUSE Cloud Application Platform serves different but complementary purposes for operators and application developers.

For operators, the platform is:

  • Easy to install, manage, and maintain

  • Secure by design

  • Fault tolerant and self-healing

  • Offers high availability for critical components

  • Uses industry-standard components

  • Avoids single vendor lock-in

For developers, the platform:

  • Allocates computing resources on demand via API or Web interface

  • Offers users a choice of language and Web framework

  • Gives access to databases and other data services

  • Emits and aggregates application log streams

  • Tracks resource usage for users and groups

  • Makes the software development workflow more efficient

The principal interface and API for deploying applications to SUSE Cloud Application Platform is SUSE Cloud Foundry. Most Cloud Foundry distributions run on virtual machines managed by BOSH. SUSE Cloud Foundry runs in SUSE Linux Enterprise containers managed by Kubernetes. Containerizing the components of the platform itself has these advantages:

  • Improves fault tolerance. Kubernetes monitors the health of all containers, and automatically restarts faulty containers faster than virtual machines can be restarted or replaced.

  • Reduces physical memory overhead. SUSE Cloud Foundry components deployed in containers consume substantially less memory, as host-level operations are shared between containers by Kubernetes.

SUSE Cloud Foundry packages upstream Cloud Foundry BOSH releases to produce containers and configurations which are deployed to Kubernetes clusters using Helm.

1.3 Minimum Requirements

This guide details the steps for deploying SUSE Cloud Foundry on SUSE CaaS Platform, and on supported Kubernetes environments such as Microsoft Azure Kubernetes Service (AKS) and Amazon Elastic Kubernetes Service (EKS). SUSE CaaS Platform is a specialized application development and hosting platform built on the SUSE MicroOS container host operating system, container orchestration with Kubernetes, and Salt for automating installation and configuration.

Important: Required Knowledge

Installing and administering SUSE Cloud Application Platform requires knowledge of Linux, Docker, Kubernetes, and your Kubernetes platform (for example SUSE CaaS Platform, AKS, EKS, OpenStack). You must plan resource allocation and network architecture by taking into account the requirements of your Kubernetes platform in addition to SUSE Cloud Foundry requirements. SUSE Cloud Foundry is a discrete component in your cloud stack, but it still requires knowledge of administering and troubleshooting the underlying stack.

You may create a minimal deployment on four Kubernetes nodes for testing. However, this is insufficient for a production deployment. A supported deployment includes SUSE Cloud Foundry installed on SUSE CaaS Platform, Amazon EKS, or Azure AKS. You also need a storage back-end such as SUSE Enterprise Storage or NFS, a DNS/DHCP server, and an Internet connection: additional packages are downloaded during installation, and each Kubernetes worker downloads ~10 GB of Docker images after installation. (See Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.)

A production deployment requires considerable resources. SUSE Cloud Application Platform includes entitlements for SUSE CaaS Platform and SUSE Enterprise Storage. SUSE Enterprise Storage alone has substantial requirements; see the Tech Specs for details. SUSE CaaS Platform requires a minimum of four hosts: one admin and three Kubernetes nodes. SUSE Cloud Foundry is then deployed on the Kubernetes nodes. Four CaaS Platform nodes are not sufficient for a production deployment. Figure 1.1, “Minimal Example Production Deployment” describes a minimal production deployment with SUSE Cloud Foundry deployed on a Kubernetes cluster containing three Kubernetes masters and three workers, plus an ingress controller, administration workstation, DNS/DHCP server, and a SUSE Enterprise Storage cluster.

Figure 1.1: Minimal Example Production Deployment

Note that after you have deployed your cluster and started building and running applications, your applications may depend on buildpacks that are not bundled in the container images that ship with SUSE Cloud Foundry. These will be downloaded at runtime, when you are pushing applications to the platform. Some of these buildpacks may include components with proprietary licenses. (See Customizing and Developing Buildpacks to learn more about buildpacks, and creating and managing your own.)

1.4 SUSE Cloud Application Platform Architecture

The following figures illustrate the main structural concepts of SUSE Cloud Application Platform. Figure 1.2, “Cloud Platform Comparisons” shows a comparison of the basic cloud platforms:

  • Infrastructure as a Service (IaaS)

  • Container as a Service (CaaS)

  • Platform as a Service (PaaS)

  • Software as a Service (SaaS)

SUSE CaaS Platform is a Container as a Service platform, and SUSE Cloud Application Platform is a PaaS.

Figure 1.2: Cloud Platform Comparisons

Figure 1.3, “Containerized Platforms” illustrates how SUSE CaaS Platform and SUSE Cloud Application Platform containerize the platform itself.

Figure 1.3: Containerized Platforms

Figure 1.4, “SUSE Cloud Application Platform Stack” shows the relationships of the major components of the software stack. SUSE Cloud Application Platform runs on Kubernetes, which in turn runs on multiple platforms, from bare metal to various cloud stacks. Your applications run on SUSE Cloud Application Platform and provide services.

Figure 1.4: SUSE Cloud Application Platform Stack

1.4.1 SUSE Cloud Foundry Components

SUSE Cloud Foundry consists of developer and administrator clients, trusted download sites, transient and long-running components, APIs, and authentication:

  • Clients for developers and admins to interact with SUSE Cloud Foundry: the cf CLI, which provides the cf command, Stratos Web interface, IDE plugins.

  • Docker Trusted Registry owned by SUSE.

  • SUSE Helm chart repository.

  • Helm, the Kubernetes package manager, which includes Tiller, the Helm server, and the helm command line client.

  • kubectl, the command line client for Kubernetes.

  • Long-running SUSE Cloud Foundry components.

  • SUSE Cloud Foundry post-deployment components: Transient SUSE Cloud Foundry components that start after all SUSE Cloud Foundry components are started, perform their tasks, and then exit.

  • SUSE Cloud Foundry Linux cell, an elastic runtime component that runs Linux applications.

  • uaa, a Cloud Application Platform service for authentication and authorization.

  • The Kubernetes API.

1.4.2 SUSE Cloud Foundry Containers

Figure 1.5, “SUSE Cloud Foundry Containers, Grouped by Function” provides a look at SUSE Cloud Foundry's containers.

Figure 1.5: SUSE Cloud Foundry Containers, Grouped by Function
List of SUSE Cloud Foundry Containers
adapter

Part of the logging system, manages connections to user application syslog drains.

api-group

Contains the SUSE Cloud Foundry Cloud Controller, which implements the CF API. It is exposed via the router.

blobstore

A WebDAV blobstore for storing application bits, buildpacks, and stacks.

cc-clock

Sidekick to the Cloud Controller, periodically performing maintenance tasks such as resource cleanup.

cc-uploader

Assists droplet upload from Diego.

cc-worker

Sidekick to the Cloud Controller, processes background tasks.

cf-usb

Universal Service Broker; SUSE's own component for managing and publishing service brokers.

diego-api

API for the Diego scheduler.

diego-brain

Contains the Diego auctioning system that schedules user applications across the elastic layer.

diego-cell (privileged)

The elastic layer of SUSE Cloud Foundry, where applications live.

diego-ssh

Provides SSH access to user applications, exposed via a Kubernetes service.

doppler

Routes log messages from applications and components.

log-api

Part of the logging system; exposes log streams to users using web sockets and proxies user application log messages to syslog drains. Exposed using the router.

mysql

A MariaDB server and component to route requests to replicas. (A separate copy is deployed for uaa.)

nats

A pub-sub messaging queue for the routing system.

nfs-broker (privileged)

A service broker for enabling NFS-based application persistent storage.

post-deployment-setup

Used as a Kubernetes job, performs cluster setup after installation has completed.

router

Routes application and API traffic. Exposed using a Kubernetes service.

routing-api

API for the routing system.

secret-generation

Used as a Kubernetes job to create secrets (certificates) when the cluster is installed.

syslog-scheduler

Part of the logging system that allows user applications to be bound to a syslog drain.

tcp-router

Routes TCP traffic for your applications.

1.4.3 SUSE Cloud Foundry Service Diagram

This simple service diagram illustrates how SUSE Cloud Foundry components communicate with each other (Figure 1.6, “Simple Services Diagram”). See Figure 1.7, “Detailed Services Diagram” for a more detailed view.

Figure 1.6: Simple Services Diagram

This table describes how these services operate.

Interface | Network Name | Network Protocol | Requestor | Request | Request Credentials | Request Authorization | Listener | Response | Response Credentials | Description of Operation
1 | External | HTTPS | Helm Client | Deploy Cloud Application Platform | OAuth2 Bearer token | Deployment of Cloud Application Platform Services on Kubernetes | Helm/Kubernetes API | Operation ack and handle | TLS certificate on external endpoint | Operator deploys Cloud Application Platform on Kubernetes
2 | External | HTTPS | Internal Kubernetes components | Download Docker Images | Refer to registry.suse.com | Refer to registry.suse.com | registry.suse.com | Docker images | None | Docker images that make up Cloud Application Platform are downloaded
3 | Tenant | HTTPS | Cloud Application Platform components | Get tokens | OAuth2 client secret | Varies, based on configured OAuth2 client scopes | uaa | An OAuth2 refresh token used to interact with other service | TLS certificate | SUSE Cloud Foundry components ask uaa for tokens so they can talk to each other
4 | External | HTTPS | SUSE Cloud Foundry clients | SUSE Cloud Foundry API Requests | OAuth2 Bearer token | SUSE Cloud Foundry application management | Cloud Application Platform components | JSON object and HTTP Status code | TLS certificate on external endpoint | Cloud Application Platform Clients interact with the SUSE Cloud Foundry API (for example users deploying apps)
5 | External | WSS | SUSE Cloud Foundry clients | Log streaming | OAuth2 Bearer token | SUSE Cloud Foundry application management | Cloud Application Platform components | A stream of SUSE Cloud Foundry logs | TLS certificate on external endpoint | SUSE Cloud Foundry clients ask for logs (for example user looking at application logs or administrator viewing system logs)
6 | External | SSH | SUSE Cloud Foundry clients, SSH clients | SSH Access to Application | OAuth2 bearer token | SUSE Cloud Foundry application management | Cloud Application Platform components | A duplex connection is created allowing the user to interact with a shell | RSA SSH Key on external endpoint | SUSE Cloud Foundry Clients open an SSH connection to an application's container (for example users debugging their applications)
7 | External | HTTPS | Helm | Download charts | Refer to kubernetes-charts.suse.com | Refer to kubernetes-charts.suse.com | kubernetes-charts.suse.com | Helm charts | TLS certificate on external endpoint | Helm charts for Cloud Application Platform are downloaded

1.4.4 Detailed Services Diagram

Figure 1.7, “Detailed Services Diagram” presents a more detailed view of SUSE Cloud Foundry services and how they interact with each other. Services labeled in red are unencrypted, while services labeled in green run over HTTPS.

Figure 1.7: Detailed Services Diagram

2 Running SUSE Cloud Application Platform on non-SUSE CaaS Platform Kubernetes Systems

2.1 Kubernetes Requirements

SUSE Cloud Application Platform is designed to run on any Kubernetes system that meets the following requirements:

  • Kubernetes API version 1.8+

  • Nodes use a minimum kernel version of 3.19.

  • Kernel parameter swapaccount=1

  • docker info must not show aufs as the storage driver

  • The Kubernetes cluster must have a storage class for SUSE Cloud Application Platform to use. The default storage class is persistent. You may specify a different storage class in your deployment's values.yaml file (which is called scf-config-values.yaml in the examples in this guide), or as a helm command option, for example --set kube.storage_class.persistent=my_storage_class.

  • kube-dns must be running

  • Either ntp or systemd-timesyncd must be installed and active

  • Docker must be configured to allow privileged containers

  • Privileged containers must be enabled in kube-apiserver. See kube-apiserver.

  • Privileged containers must be enabled in kubelet

  • The TasksMax property of the containerd service definition must be set to infinity

  • Helm's Tiller has to be installed and active, with Tiller on the Kubernetes cluster and Helm on your remote administration machine
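The following commands give a quick, non-exhaustive spot check of several of the requirements above. This is only an illustrative sketch; run the node-level checks (storage driver, swap accounting, TasksMax) directly on a Kubernetes node, and the kubectl and helm checks from your administration workstation:

tux > kubectl version --short
tux > kubectl get storageclass
tux > helm version --short

# On a Kubernetes node:
tux > sudo docker info --format '{{.Driver}}'        # must not be aufs
tux > grep swapaccount /proc/cmdline                 # expect swapaccount=1
tux > systemctl show containerd --property=TasksMax  # expect infinity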

Part II Deploying SUSE Cloud Application Platform

3 Deployment and Administration Notes

Important things to know before deploying SUSE Cloud Application Platform.

4 Using an Ingress Controller with Cloud Application Platform

An Ingress controller (see https://kubernetes.io/docs/concepts/services-networking/ingress/) is a Kubernetes resource that manages traffic to services in a Kubernetes cluster.

5 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

You may set up a minimal deployment on four Kubernetes nodes for testing. This is not sufficient for a production deployment. A basic SUSE Cloud Application Platform production deployment requires at least eight hosts plus a storage back-end: one SUSE CaaS Platform admin server, three Kubernetes mas…

6 Installing the Stratos Web Console

The Stratos user interface (UI) is a modern web-based management application for Cloud Foundry. It provides a graphical management console for both developers and system administrators. Install Stratos with Helm after all of the uaa and scf pods are running.

7 SUSE Cloud Application Platform High Availability
8 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components an…

9 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)

SUSE Cloud Application Platform supports deployment on Microsoft Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes service. This chapter describes the steps for preparing Azure for a SUSE Cloud Application Platform deployment, with a basic Azure load balancer. Note that you will not cre…

10 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)

This chapter describes how to deploy SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS), using Amazon's Elastic Load Balancer to provide fault-tolerant access to your cluster.

11 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)

SUSE Cloud Application Platform supports deployment on Google Kubernetes Engine (GKE). This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on GKE using its integrated network load balancers. See https://cloud.google.com/kubernetes-engine/ for more information on …

12 Installing SUSE Cloud Application Platform on OpenStack

You can deploy a SUSE Cloud Application Platform on CaaS Platform stack on OpenStack. This chapter describes how to deploy a small testing and development instance with one Kubernetes master and two worker nodes, using Terraform to automate the deployment. This does not create a production deploymen…

13 Setting Up a Registry for an Air Gapped Environment

Cloud Application Platform, which consists of Docker images, is deployed to a Kubernetes cluster through Helm. These images are hosted on a Docker registry at registry.suse.com. In an air gapped environment, registry.suse.com will not be accessible. You will need to create a registry, and populate i…

3 Deployment and Administration Notes

Important things to know before deploying SUSE Cloud Application Platform.

3.1 README First

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

3.2 Important Changes

Warning: Deprecation of cflinuxfs2 and sle12 Stacks

cf-deployment 7.11, part of Cloud Application Platform 1.4.1, is the final Cloud Foundry version that supports the cflinuxfs2 stack. The cflinuxfs2 and sle12 stacks are deprecated in favor of cflinuxfs3 and sle15 respectively. Start planning the migration of your applications to the new stacks now, as the deprecated stacks will be removed in a future release. The migration procedure is described below.

  • Migrate applications to the new stack using one of the methods listed. Note that both methods will cause application downtime. Downtime can be avoided by following a Blue-Green Deployment strategy. See https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html for details.

    Note that stack association support is available as of cf CLI v6.39.0.

    • Option 1 - Migrating applications using the Stack Auditor plugin.

      Stack Auditor rebuilds the application onto the new stack without a change in the application source code. If you want to move to a new stack with updated code, please follow Option 2 below. For additional information about the Stack Auditor plugin, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html.

      1. Install the Stack Auditor plugin for the cf CLI. For instructions, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html#install.

      2. Identify the stack applications are using. The audit lists all applications in orgs you have access to. To list all applications in your Cloud Application Platform deployment, ensure you are logged in as a user with access to all orgs.

        tux > cf audit-stack

        For each application requiring migration, perform the steps below.

      3. If necessary, switch to the org and space the application is deployed to.

        tux > cf target ORG SPACE
      4. Change the stack to sle15.

        tux > cf change-stack APP_NAME sle15
      5. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf buildpacks
      6. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf delete-buildpack BUILDPACK -s sle12
        
        tux > cf delete-buildpack BUILDPACK -s cflinuxfs2
      7. Remove the sle12 and cflinuxfs2 stacks.

        tux > cf delete-stack sle12
        
        tux > cf delete-stack cflinuxfs2
    • Option 2 - Migrating applications using the cf CLI.

      Perform the following for all orgs and spaces in your Cloud Application Platform deployment. Ensure you are logged in as a user with access to all orgs.

      1. Target an org and space.

        tux > cf target ORG SPACE
      2. Identify the stack each application in the org and space is using.

        tux > cf app APP_NAME
      3. Re-push the app with the sle15 stack using one of the following methods.

      4. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf buildpacks
      5. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf delete-buildpack BUILDPACK -s sle12
        
        tux > cf delete-buildpack BUILDPACK -s cflinuxfs2
      6. Remove the sle12 and cflinuxfs2 stacks using the CF API. See https://apidocs.cloudfoundry.org/7.11.0/#stacks for details.

        List all stacks, then find the GUIDs of the sle12 and cflinuxfs2 stacks.

        tux > cf curl /v2/stacks

        Delete the sle12 and cflinuxfs2 stacks.

        tux > cf curl -X DELETE /v2/stacks/SLE12_STACK_GUID
        
        tux > cf curl -X DELETE /v2/stacks/CFLINUXFS2_STACK_GUID

3.3 Usage of Helm Chart Fields in Cloud Application Platform

Cloud Application Platform uses some Helm chart fields slightly differently than defined in https://helm.sh/docs/developing_charts. Take note of the following fields:

APP VERSION (appVersion in Chart.yaml)

In Cloud Application Platform, the APP VERSION field indicates the Cloud Application Platform release that a Helm chart belongs to. This is in contrast to indicating the version of the application as defined in https://helm.sh/docs/developing_charts/#the-appversion-field. For example, in the suse/uaa Helm chart, an APP VERSION of 1.4 is in reference to Cloud Application Platform release 1.4 and does not indicate uaa is version 1.4.

CHART VERSION (version in Chart.yaml)

In Cloud Application Platform, the CHART VERSION field indicates the Helm chart version, the same as defined in https://helm.sh/docs/developing_charts/#charts-and-versioning. For Cloud Application Platform Helm charts, the chart version is also the release number of the corresponding component. For example, in the suse/uaa Helm chart, a CHART VERSION of 2.16.4 also indicates uaa is release 2.16.4.
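For example, once the SUSE Helm chart repository has been added (see the deployment chapters), both fields can be inspected with helm search. The output below is only illustrative; the versions and descriptions you see depend on the charts available in your repository:

tux > helm search suse
NAME        CHART VERSION   APP VERSION   DESCRIPTION
suse/cf     2.17.1          1.4.1         [...]
suse/uaa    2.17.1          1.4.1         [...]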

3.4 Helm Values in scf-config-values.yaml

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used. This is the default.

For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.
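As a sketch, assuming the env block used in this guide's scf-config-values.yaml examples, the driver is set as follows; use only the value that matches your platform:

env:
  # SUSE CaaS Platform and other SUSE Linux Enterprise based nodes (default)
  GARDEN_ROOTFS_DRIVER: "btrfs"
  # For Microsoft AKS, Amazon EKS, Google GKE, and other non-SUSE Linux
  # Enterprise nodes, use instead:
  # GARDEN_ROOTFS_DRIVER: "overlay-xfs"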

3.5 Status of Pods during Deployment

Some Pods Show as Not Running

Some uaa and scf pods perform only deployment tasks, and it is normal for them to show as unready and Completed after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz   0/1       Completed

tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1       Completed
post-deployment-setup-1-hnpln   0/1       Completed
Some Pods Terminate and Restart during Deployment

When monitoring the status of a deployment, pods can be observed transitioning from a Running state to a Terminating state, then returning to a Running state again.

If a RESTARTS count of 0 is maintained during this process, this is normal behavior and not due to failing pods. It is not necessary to stop the deployment. During deployment, pods modify annotations on themselves via the StatefulSet pod spec. To get the correct annotations on the running pod, it is stopped and restarted. Under normal circumstances, this behavior should only result in a pod restarting once.
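To observe this behavior, monitor the pod status and the RESTARTS count during deployment, for example:

tux > watch --color 'kubectl get pods --namespace scf'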

3.6 Namespaces

Length of release names

Release names (for example, when you run helm install --name) have a maximum length of 36 characters.

Always install to a fresh namespace

If you are not creating a fresh SUSE Cloud Application Platform installation, but have deleted a previous deployment and are starting over, you must create new namespaces. Do not re-use your old namespaces. The helm delete command does not remove generated secrets from the scf and uaa namespaces as it is not aware of them. These leftover secrets may cause deployment failures. See Section 30.3, “Deleting and Rebuilding a Deployment” for more information.
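A minimal sketch of starting over cleanly, assuming the previous deployment used the uaa and scf namespaces and its Helm releases have already been removed with helm delete --purge; the replacement namespace name uaa2 is only an example:

# Deleting the old namespaces also removes the leftover generated secrets
tux > kubectl delete namespace uaa scf

# Install into fresh namespaces on the next attempt
tux > helm install suse/uaa \
 --name susecf-uaa \
 --namespace uaa2 \
 --values scf-config-values.yaml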

3.7 DNS Management

The following tables list the minimum DNS requirements to run SUSE Cloud Application Platform, using example.com as the example domain. Your DNS management is platform-dependent, for example Microsoft AKS assigns IP addresses to your services, which you will map to A records. Amazon EKS assigns host names, which you will use to create CNAMEs. SUSE CaaS Platform provides the flexibility to manage your name services in nearly any way you wish. The chapters for each platform in this guide provide the relevant DNS instructions.

Domains | Services
uaa.example.com | uaa-uaa-public
*.uaa.example.com | uaa-uaa-public
example.com | router-gorouter-public
*.example.com | router-gorouter-public
tcp.example.com | tcp-router-tcp-router-public
ssh.example.com | diego-ssh-ssh-proxy-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions | Kubernetes service names
User Account and Authentication (uaa) | uaa-uaa-public
Cloud Foundry (CF) TCP routing service | tcp-router-tcp-router-public
CF application SSH access | diego-ssh-ssh-proxy-public
CF router | router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.
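As an illustration only, on a platform where these services receive IP addresses, the matching A records could look like the following; the addresses are documentation placeholders, and each should point at the external address of the corresponding service:

uaa.example.com.     IN A   203.0.113.10
*.uaa.example.com.   IN A   203.0.113.10
example.com.         IN A   203.0.113.20
*.example.com.       IN A   203.0.113.20
tcp.example.com.     IN A   203.0.113.30
ssh.example.com.     IN A   203.0.113.40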

3.8 Releases and Helm Chart Versions

The supported upgrade method is to install all upgrades, in order. Skipping releases is not supported. This table matches the Helm chart versions to each release:

CAP Release | SCF and UAA Helm Chart Version | Stratos Helm Chart Version
1.4.1 (current release) | 2.17.1 | 2.4.0
1.4 | 2.16.4 | 2.4.0
1.3.1 | 2.15.2 | 2.3.0
1.3 | 2.14.5 | 2.2.0
1.2.1 | 2.13.3 | 2.1.0
1.2.0 | 2.11.0 | 2.0.0
1.1.1 | 2.10.1 | 1.1.0
1.1.0 | 2.8.0 | 1.1.0
1.0.1 | 2.7.0 | 1.0.2
1.0 | 2.6.11 | 1.0.0
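Before upgrading, you can check which chart versions are currently deployed by listing your Helm releases. The release names, namespaces, and versions below are illustrative, and some columns are omitted:

tux > helm ls
NAME          REVISION   STATUS     CHART         NAMESPACE
susecf-uaa    1          DEPLOYED   uaa-2.17.1    uaa
susecf-scf    1          DEPLOYED   cf-2.17.1     scf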

4 Using an Ingress Controller with Cloud Application Platform

An Ingress controller (see https://kubernetes.io/docs/concepts/services-networking/ingress/) is a Kubernetes resource that manages traffic to services in a Kubernetes cluster.

Using an Ingress controller provides the following benefits:

  • Only one load balancer is needed.

  • SSL can be terminated on the controller.

  • All traffic can be routed through ports 80 and 443 on the controller. Tracking different ports (for example, port 2793 for UAA, port (4)443 for the Gorouter) is no longer needed. The Ingress routing rules then manage the traffic flow to the appropriate back-end services.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other Ingress controllers may work, but they are not supported for use with Cloud Application Platform.

4.1 Deploying NGINX Ingress Controller

  1. Prepare your Kubernetes cluster according to the documentation for your platform. Proceed to the next step when you reach the uaa deployment phase for your platform. Note that the DNS sections in the platform-specific documentation can be omitted.

  2. Install the NGINX Ingress Controller.

    tux > helm install suse/nginx-ingress \
    --name nginx-ingress \
    --namespace ingress \
    --set rbac.create=true
  3. Monitor the progress of the deployment:

    tux > watch --color 'kubectl get pods --namespace ingress'
  4. After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname. For CaaS Platform, Microsoft AKS, and Google GKE, this will be an IP and for Amazon EKS, it will be a hostname. Find the external IP or hostname.

    tux > kubectl get services nginx-ingress-controller --namespace ingress

    You will get output similar to the following.

    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
    nginx-ingress-controller   LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP
  5. Set up appropriate DNS records (CNAME for Amazon EKS, A records for CaaS Platform, Microsoft AKS, and Google GKE) corresponding to the controller service IP or hostname with the following entries. Replace example.com with your actual domain.

    • example.com

    • *.example.com

    • uaa.example.com

    • *.uaa.example.com

  6. Obtain a PEM formatted certificate and ensure it includes Subject Alternative Names (SAN) for uaa and scf listed in the previous step.

  7. Add the following to your configuration file, scf-config-values.yaml, to trigger the creation of the Ingress objects. Ensure crt and key are encoded in the PEM format. Note the port changes; this ensures all communications to uaa are routed through the Ingress controller.

    The nginx.ingress.kubernetes.io/proxy-body-size value indicates the maximum client request body size. Actions such as pushing an application through cf push can result in larger request body sizes depending on the application type you work with. This value will need to be adapted to your workflow.

    UAA_PORT: 443
    UAA_PUBLIC_PORT: 443
    ...
    ingress:
      enabled: true
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: 1024m
      tls:
        crt: |
          -----BEGIN CERTIFICATE-----
          MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
          [...]
          xC8x/+zB7XlvcRJRio6kk670+25ABP==
          -----END CERTIFICATE-----
        key: |
          -----BEGIN RSA PRIVATE KEY-----
          MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
          [...]
          to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
          -----END RSA PRIVATE KEY-----
  8. Deploy uaa.

    tux > helm install suse/uaa \
    --name susecf-uaa \
    --namespace uaa \
    --values scf-config-values.yaml

    Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

    tux > watch --color 'kubectl get pods --namespace uaa'

    When uaa is successfully deployed, the following is observed:

    • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

    • All other pods have a Running STATUS and a READY value of n/n.

    Press Ctrl-C to exit the watch command.

  9. When all uaa pods are up and ready, verify uaa is working. Pass the CA certificate used to sign the Ingress controller certificate as the value of --cacert.

    tux > curl --cacert INGRESS_CONTROLLER_CA_CERT https://uaa.example.com/.well-known/openid-configuration
  10. Update the Ingress controller to set up TCP forwarding.

    tux > helm upgrade nginx-ingress suse/nginx-ingress \
      --reuse-values \
      --set "tcp.20000=scf/tcp-router-tcp-router-public:20000" \
      --set "tcp.20001=scf/tcp-router-tcp-router-public:20001" \
      --set "tcp.20002=scf/tcp-router-tcp-router-public:20002" \
      --set "tcp.20003=scf/tcp-router-tcp-router-public:20003" \
      --set "tcp.20004=scf/tcp-router-tcp-router-public:20004" \
      --set "tcp.20005=scf/tcp-router-tcp-router-public:20005" \
      --set "tcp.20006=scf/tcp-router-tcp-router-public:20006" \
      --set "tcp.20007=scf/tcp-router-tcp-router-public:20007" \
      --set "tcp.20008=scf/tcp-router-tcp-router-public:20008" \
      --set "tcp.2222=scf/diego-ssh-ssh-proxy-public:2222"
  11. Set up the scf deployment to trust the CA certificate that signed the certificate of the Ingress Controller.

    tux > export INGRESS_CA_CERT=$(cat ingress-ca-cert.pem)
  12. Deploy scf.

    tux > helm install suse/cf \
    --name susecf-scf \
    --namespace scf \
    --values scf-config-values.yaml \
    --set "secrets.UAA_CA_CERT=${INGRESS_CA_CERT}"

    Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

    tux > watch --color 'kubectl get pods --namespace scf'

    When scf is successfully deployed, the following is observed:

    • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

    • All other pods have a Running STATUS and a READY value of n/n.

    Press Ctrl-C to exit the watch command.

  13. When the deployment completes, verify you are able to login using the cf CLI.

    tux > cf login https://api.example.com -u username -p password

4.2 Changing the Max Body Size

The nginx.ingress.kubernetes.io/proxy-body-size value indicates the maximum client request body size. Actions such as pushing an application through cf push can result in larger request body sizes depending on the application type you work with. If your current setting is insufficient, you may encounter a 413 Request Entity Too Large error.

The maximum client request body size can be changed to adapt to your workflow using the following.

  1. Add nginx.ingress.kubernetes.io/proxy-body-size to your scf-config-values.yaml and specify a value.

    ingress:
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: 1024m
  2. Set up the scf deployment to trust the CA certificate that signed the certificate of the Ingress Controller.

    tux > export INGRESS_CA_CERT=$(cat ingress-ca-cert.pem)
  3. Use helm upgrade to apply the change.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${INGRESS_CA_CERT}"
  4. Monitor the deployment progress using the watch command.

    tux > watch --color 'kubectl get pods --namespace scf'

5 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

You may set up a minimal deployment on four Kubernetes nodes for testing. This is not sufficient for a production deployment. A basic SUSE Cloud Application Platform production deployment requires at least eight hosts plus a storage back-end: one SUSE CaaS Platform admin server, three Kubernetes masters, three Kubernetes workers, a DNS/DHCP server, and a storage back-end such as SUSE Enterprise Storage or NFS. This is a bare minimum, and actual requirements are likely to be much larger, depending on your workloads. You also need an external workstation for administering your cluster. (See Section 1.3, “Minimum Requirements”.) You may optionally make your SUSE Cloud Application Platform instance highly-available.

Note: Remote Administration

You will run most of the commands in this chapter from a remote workstation, rather than directly on any of the SUSE Cloud Application Platform nodes. Remote commands are indicated by the unprivileged user prompt tux >, while root prompts indicate commands run directly on a cluster node. Only a few tasks need to be performed directly on the cluster hosts.

The optional High Availability example in this chapter provides HA only for the SUSE Cloud Application Platform cluster, and not for CaaS Platform or SUSE Enterprise Storage. See Section 7.1, “Configuring Cloud Application Platform for High Availability”.

5.1 Prerequisites

Calculating hardware requirements is best done with an analysis of your expected workloads, traffic patterns, storage needs, and application requirements. The following examples are bare minimums to deploy a running cluster, and any production deployment will require more.

Minimum Hardware Requirements

8 GB of memory per CaaS Platform dashboard and Kubernetes master node.

16 GB of memory per Kubernetes worker.

40 GB disk space per CaaS Platform dashboard and Kubernetes master node.

80 GB disk space per Kubernetes worker.

Network Requirements

To enable the SUSE Cloud Foundry nodes to interact with each other, make sure your network setup fulfills the following requirements:

  • The Kubernetes cluster has a dedicated domain and required subdomains (see Chapter 3, Deployment and Administration Notes).

  • Each node is able to resolve its host name and fully-qualified domain name (FQDN).

  • SUSE Cloud Application Platform has a dedicated domain or subdomain separate from the Kubernetes cluster.

  • Traffic inside of SUSE Cloud Application Platform resolves to a Kubernetes master node in that dedicated domain.

Typically, a Kubernetes cluster sits behind a load balancer, which also provides external access to the cluster. Another option is to use DNS round-robin to the Kubernetes workers to provide external access. It is also a common practice to create a wildcard DNS entry pointing to the domain, for example *.example.com, so that applications can be deployed without creating DNS entries for each application. This guide does not describe how to set up a load balancer or name services, as these depend on your requirements and existing network architectures.

For information about network and name services configurations, see https://www.suse.com/documentation/suse-caasp-3/book_caasp_deployment/data/sec_deploy_requirements_system.html#sec_deploy_requirements_network.

Install SUSE CaaS Platform

SUSE Cloud Application Platform is supported on SUSE CaaS Platform 3.x.

Ensure nodes use a minimum kernel version of 3.19.

After installing CaaS Platform 2 or CaaS Platform 3 and logging into the Velum Web interface, check the box to install Tiller (Helm's server component).

Figure 5.1: Install Tiller

Take note of the Overlay network settings. These define the networks that are exclusive to the internal Kubernetes cluster communications. They are not externally accessible. You may assign different networks to avoid address collisions.

There is also a form for proxy settings; if you're not using a proxy then leave it empty.

The easiest way to create the Kubernetes nodes, after you create the admin node, is to use AutoYaST; see Installation with AutoYaST. Set up CaaS Platform with one admin node and at least three Kubernetes masters and three Kubernetes workers. You also need an Internet connection, as the installer downloads additional packages, and the Kubernetes workers will each download ~10 GB of Docker images.

Figure 5.2: Assigning Roles to Nodes

When you have completed Bootstrapping the Cluster, click the kubectl config button to download your new cluster's kubeconfig file. This takes you to a login screen; use the login you created to access Velum. Save the file as ~/.kube/config on your workstation. This file enables the remote administration of your cluster.
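For example, assuming the file was downloaded as ~/Downloads/kubeconfig (the download path is only an example), move it into place and restrict its permissions:

tux > mkdir -p ~/.kube
tux > cp ~/Downloads/kubeconfig ~/.kube/config
tux > chmod 600 ~/.kube/config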

Install kubectl

To install kubectl on a SLE 12 SP3 or 15 workstation, install the package kubernetes-client from the Public Cloud module. For other operating systems, follow the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/. After installation, run this command to verify that it is installed, and that it is communicating correctly with your cluster:

tux > kubectl version --short
Client Version: v1.10.7
Server Version: v1.10.11

As the client is on your workstation, and the server is on your cluster, reporting the server version verifies that kubectl is using ~/.kube/config and is communicating with your cluster.

The following kubectl examples query the cluster configuration and node status:

tux > kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://11.100.10.10:6443
  name: local
contexts:
[...]

tux > kubectl get nodes
NAME                  STATUS   ROLES     AGE  VERSION
ef254d3.example.com   Ready    Master    4h   v1.10.11
b70748d.example.com   Ready    Master    4h   v1.10.11
cb77881.example.com   Ready    Master    4h   v1.10.11
d028551.example.com   Ready    <none>    4h   v1.10.11
[...]
Install Helm

Deploying SUSE Cloud Application Platform is different than the usual method of installing software. Rather than installing packages in the usual way with YaST or Zypper, you will install the Helm client on your workstation to install the required Kubernetes applications to set up SUSE Cloud Application Platform, and to administer your cluster remotely. Helm is the Kubernetes package manager. The Helm client goes on your remote administration computer, and Tiller is Helm's server, which is installed on your Kubernetes cluster.

Helm client version 2.9 or higher is required. Compatibility between Cloud Application Platform and Helm 3 is not supported.

Warning: Initialize Only the Helm Client

When you initialize Helm on your workstation be sure to initialize only the client, as the server, Tiller, was installed during the CaaS Platform installation. You do not want two Tiller instances.

If the Linux distribution on your workstation does not provide the correct Helm version, or you are using some other platform, see the Helm Quickstart Guide for installation instructions and basic usage examples. Download the Helm binary into any directory that is in your PATH on your workstation, such as your ~/bin directory. Then initialize the client only:

tux > helm init --client-only
Creating /home/tux/.helm
Creating /home/tux/.helm/repository
Creating /home/tux/.helm/repository/cache
Creating /home/tux/.helm/repository/local
Creating /home/tux/.helm/plugins
Creating /home/tux/.helm/starters
Creating /home/tux/.helm/cache/archive
Creating /home/tux/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/tux/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!

5.2 Pod Security Policy

SUSE CaaS Platform 3 includes Pod Security Policy (PSP) support. This change adds two new PSPs to CaaS Platform 3:

  • unprivileged, which is the default assigned to all users. The unprivileged Pod Security Policy is intended as a reasonable compromise between the reality of Kubernetes workloads and the suse:caasp:psp:privileged role. By default, this PSP is granted to all users and service accounts.

  • privileged, which is intended to be assigned only to trusted workloads. It applies few restrictions, and should only be assigned to highly trusted users.

SUSE Cloud Application Platform 1.4.1 includes the necessary PSP configurations in its Helm charts to run on SUSE CaaS Platform; these are set up automatically and require no manual configuration. See Section A.1, “Manual Configuration of Pod Security Policies” for instructions on applying the necessary PSPs manually on older Cloud Application Platform releases.

5.3 Choose Storage Class

The Kubernetes cluster requires a persistent storage class so that the databases can store persistent data. Your available storage classes depend on which storage cluster you are using (SUSE Enterprise Storage users, see SUSE CaaS Platform Integration with SES). After connecting your storage back-end, use kubectl to see your available storage classes. This example is for an NFS storage class:

tux > kubectl get storageclass
NAME         PROVISIONER   AGE
persistent   nfs           10d

Creating a default storage class is useful for a number of scenarios, such as using Minibroker. Once your storage class has been created, run the following command (substituting the name of your storage class) to make it the default:

tux > kubectl patch storageclass persistent \
 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

tux > kubectl get storageclass
NAME                   PROVISIONER   AGE
persistent (default)   nfs           10d

See Section 5.5, “Configure the SUSE Cloud Application Platform Production Deployment” to learn where to configure your storage class for SUSE Cloud Application Platform. See the Kubernetes document Persistent Volumes for detailed information on storage classes.

5.4 Test Storage Class

You may test that your storage class is properly configured before deploying SUSE Cloud Application Platform by creating a persistent volume claim on your storage class, then verifying that the status of the claim is Bound and that a volume has been created.

First copy the following configuration file, which in this example is named test-storage-class.yaml, substituting the name of your storageClassName:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-sc-persistent
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: persistent

Create your persistent volume claim:

tux > kubectl create --filename test-storage-class.yaml
persistentvolumeclaim "test-sc-persistent" created

Check that the claim has been created, and that the status is bound:

tux > kubectl get pv,pvc
NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                        STORAGECLASS   REASON    AGE
pv/pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c   1Gi        RWO            Delete           Bound     default/test-sc-persistent   persistent               2m

NAME                     STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/test-sc-persistent   Bound     pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c   1Gi        RWO            persistent     2m

This verifies that your storage class is correctly configured. Delete your volume claims when you're finished:

tux > kubectl delete pv/pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c
persistentvolume "pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c" deleted

tux > kubectl delete pvc/test-sc-persistent
persistentvolumeclaim "test-sc-persistent" deleted

If something goes wrong and your volume claims get stuck in pending status, you can force deletion with the --grace-period=0 option:

tux > kubectl delete pvc/test-sc-persistent --grace-period=0
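
Before forcing deletion, it can help to inspect the events of the stuck claim to find the cause, for example a missing or misconfigured provisioner. Using the example claim name from above:

tux > kubectl describe pvc test-sc-persistent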

5.5 Configure the SUSE Cloud Application Platform Production Deployment

Create a configuration file on your workstation for Helm to use. In this example it is called scf-config-values.yaml. (See Section A.1, “Manual Configuration of Pod Security Policies” for configuration instructions for releases older than 1.3.1.)

The example scf-config-values.yaml file is for a simple deployment without an ingress controller or external load balancer, two network resources that are usually relevant for production systems. Instead, assign the master node an external IP address and map it to your domain name to provide external access to the cluster. The external_ips parameter needs this external address to provide access to the Stratos Web interface and other public services, and it also needs the internal IP address of the master node to provide access to internal services.
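
If you are unsure of your master node's addresses, you can list the internal and external IP addresses of all nodes with kubectl:

tux > kubectl get nodes --output wide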

### example deployment configuration file
### scf-config-values.yaml

env:
  # Enter the domain you created for your CAP cluster
  DOMAIN: example.com

  # uaa host and port
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793

kube:
  # Specify the master node's external IP and internal IP
  external_ips: ["11.100.10.10", "192.168.1"]

  storage_class:
    persistent: "persistent"
    shared: "shared"

  # The registry the images will be fetched from.
  # The values below should work for
  # a default installation from the SUSE registry.
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used. This is the default.

For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.
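
The driver is set with the GARDEN_ROOTFS_DRIVER Helm value. A minimal sketch, assuming it is placed in the env: section of your scf-config-values.yaml alongside the other env values shown above:

env:
  # "btrfs" for SUSE Linux Enterprise-based nodes,
  # "overlay-xfs" for nodes based on other operating systems
  GARDEN_ROOTFS_DRIVER: "overlay-xfs"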

Important
Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.
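
One way to generate a suitably strong random value on your workstation is with openssl; this is only an example, and any password generator works:

tux > openssl rand -base64 32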

5.6 Deploy with Helm

The following list provides an overview of Helm commands to complete the deployment. Included are links to detailed descriptions.

  1. Download the SUSE Kubernetes charts repository (Section 5.7, “Add the Kubernetes Charts Repository”)

  2. Copy the storage secret of your storage cluster to the uaa and scf namespaces (Section 5.8, “Copy SUSE Enterprise Storage Secret”)

  3. Deploy uaa (Section 5.9, “Deploy uaa”)

  4. Copy the uaa secret and certificate to the scf namespace, then deploy scf (Section 5.10, “Deploy scf”)

5.7 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME            URL
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts
suse            https://kubernetes-charts.suse.com/
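
If the repository was added some time ago, refresh the chart information before searching or installing:

tux > helm repo update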

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.17.1          1.4             A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    2.4.0           2.4.0           A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0           1.0.0           A Helm chart for Stratos Metrics
suse/minibroker                 0.2.0                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.17.1          1.4             A Helm chart for SUSE UAA

5.8 Copy SUSE Enterprise Storage Secret

If you are using SUSE Enterprise Storage you must copy the Ceph admin secret to the uaa and scf namespaces:

tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "uaa"/' | kubectl create --filename -

tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "scf"/' | kubectl create --filename -
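
You can verify that the secret now exists in both namespaces:

tux > kubectl get secret ceph-secret-admin --namespace uaa

tux > kubectl get secret ceph-secret-admin --namespace scf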

5.9 Deploy uaa

Use Helm to deploy the uaa (User Account and Authentication) server. You may specify your own release name with the --name option:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps. You can monitor deployment progress with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

  • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

When the uaa deployment completes, proceed to deploying SUSE Cloud Foundry.
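
You can also confirm the state of the uaa release with Helm before proceeding:

tux > helm status susecf-uaa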

5.10 Deploy scf

First pass your uaa secret and certificate to scf, then use Helm to install SUSE Cloud Foundry:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps. You can monitor deployment progress with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

  • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

When the deployment completes, use the Cloud Foundry command line interface to log in to SUSE Cloud Foundry to deploy and manage your applications. (See Section 29.1, “Using the cf CLI with SUSE Cloud Application Platform”)

5.11 Expanding Capacity of a Cloud Application Platform Deployment on SUSE® CaaS Platform

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform and have a running Cloud Application Platform deployment on SUSE® CaaS Platform.

  1. Add additional nodes to your SUSE® CaaS Platform cluster as described in https://www.suse.com/documentation/suse-caasp-3/singlehtml/book_caasp_admin/book_caasp_admin.html#sec.admin.nodes.add.

  2. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  3. Add or update the following in your scf-config-values.yaml file to increase the number of diego-cell instances in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

    sizing:
      diego_cell:
        count: 4
  4. Pass your uaa secret and certificate to scf.

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  5. Perform a helm upgrade to apply the change.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}"
  6. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace scf'

6 Installing the Stratos Web Console

The Stratos user interface (UI) is a modern web-based management application for Cloud Foundry. It provides a graphical management console for both developers and system administrators. Install Stratos with Helm after all of the uaa and scf pods are running.

6.1 Deploy Stratos on SUSE® CaaS Platform

The steps in this section describe how to install Stratos on SUSE® CaaS Platform without an external load balancer, instead mapping the master node to your SUSE Cloud Application Platform domain as described in Section 5.5, “Configure the SUSE Cloud Application Platform Production Deployment”. These instructions assume you have followed the procedure in Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform, have deployed uaa and scf successfully, and have created a default storage class.

If you are using SUSE Enterprise Storage as your storage back-end, copy the secret into the Stratos namespace:

tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "stratos"/' | kubectl create --filename -

You should already have the Stratos charts when you downloaded the SUSE charts repository (see Section 5.7, “Add the Kubernetes Charts Repository”). Search your Helm repository to verify that you have the suse/console chart:

tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.17.1          1.4             A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    2.4.0           2.4.0           A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0           1.0.0           A Helm chart for Stratos Metrics
suse/minibroker                 0.2.0                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.17.1          1.4             A Helm chart for SUSE UAA

Use Helm to install Stratos, using the same scf-config-values.yaml configuration file you used to deploy uaa and scf:

tux > helm install suse/console \
    --name susecf-console \
    --namespace stratos \
    --values scf-config-values.yaml

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When stratos is successfully deployed, the following is observed:

  • For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

When the stratos deployment completes, query with Helm to view your release information:

tux > helm status susecf-console
LAST DEPLOYED: Wed Mar 27 06:51:36 2019
NAMESPACE: stratos
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                           TYPE    DATA  AGE
susecf-console-secret          Opaque  2     3h
susecf-console-mariadb-secret  Opaque  2     3h

==> v1/PersistentVolumeClaim
NAME                                  STATUS  VOLUME                                    CAPACITY  ACCESSMODES  STORAGECLASS  AGE
susecf-console-upgrade-volume         Bound   pvc-711380d4-5097-11e9-89eb-fa163e15acf0  20Mi      RWO          persistent    3h
susecf-console-encryption-key-volume  Bound   pvc-711b5275-5097-11e9-89eb-fa163e15acf0  20Mi      RWO          persistent    3h
console-mariadb                       Bound   pvc-7122200c-5097-11e9-89eb-fa163e15acf0  1Gi       RWO          persistent    3h

==> v1/Service
NAME                    CLUSTER-IP      EXTERNAL-IP                                                PORT(S)   AGE
susecf-console-mariadb  172.24.137.195  <none>                                                     3306/TCP  3h
susecf-console-ui-ext   172.24.80.22    10.86.101.115,172.28.0.31,172.28.0.36,172.28.0.7,172.28.0.22  8443/TCP  3h

==> v1beta1/Deployment
NAME        DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
stratos-db  1        1        1           1          3h

==> v1beta1/StatefulSet
NAME     DESIRED  CURRENT  AGE
stratos  1        1        3h

Find the external IP address with kubectl get service susecf-console-ui-ext --namespace stratos to access your new Stratos Web console, for example https://10.86.101.115:8443, or use the domain you created for it, and its port, for example https://example.com:8443. Wade through the nag screens about the self-signed certificates and log in as admin with the password you created in scf-config-values.yaml.

Stratos UI Cloud Foundry Console
Figure 6.1: Stratos UI Cloud Foundry Console

6.1.1 Connecting SUSE® CaaS Platform to Stratos

Stratos can show information from your SUSE® CaaS Platform environment.

To enable this, you must register and connect your SUSE® CaaS Platform environment with Stratos.

  1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

  2. On the Register a new Endpoint view, click the SUSE CaaS Platform button.

  3. Enter a memorable name for your SUSE® CaaS Platform environment in the Name field. For example, my-endpoint.

  4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

    tux > kubectl cluster-info
  5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

  6. Click Register.

  7. Activate the Connect to my-endpoint now (optional) check box.

  8. Provide a valid kubeconfig file for your SUSE® CaaS Platform environment.

  9. Click Connect.

  10. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your SUSE® CaaS Platform environment should now be displayed.

Kubernetes Environment Information on Stratos
Figure 6.2: Kubernetes Environment Information on Stratos

6.2 Deploy Stratos on Amazon EKS

Before deploying Stratos, ensure uaa and scf have been successfully deployed on Amazon EKS (see Chapter 10, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)).

Configure a scoped storage class for your Stratos deployment. Create a configuration file, called scoped-storage-class.yaml in this example, using the following as a template. Specify the region you are using as the zone, and be sure to include the availability zone letter identifier (for example, the a in us-west-2a):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2scoped
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: "us-west-2a"
reclaimPolicy: Retain
mountOptions:
  - debug

Create the storage class using the scoped-storage-class.yaml configuration file:

tux > kubectl create --filename scoped-storage-class.yaml

Verify the storage class has been created:

tux > kubectl get storageclass
NAME            PROVISIONER             AGE
gp2 (default)   kubernetes.io/aws-ebs   1d
gp2scoped       kubernetes.io/aws-ebs   1d

Use Helm to install Stratos:

tux > helm install suse/console \
    --name susecf-console \
    --namespace stratos \
    --values scf-config-values.yaml \
    --set kube.storage_class.persistent=gp2scoped \
    --set services.loadbalanced=true \
    --set console.service.http.nodePort=8080

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When stratos is successfully deployed, the following is observed:

  • For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

Obtain the host name of the service exposed through the public load balancer:

tux > kubectl get service susecf-console-ui-ext --namespace stratos

Use this host name to create a CNAME record.

Stratos UI Cloud Foundry Console
Figure 6.3: Stratos UI Cloud Foundry Console

6.2.1 Connecting Amazon EKS to Stratos

Stratos can show information from your Amazon EKS environment.

To enable this, you must register and connect your Amazon EKS environment with Stratos.

  1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

  2. On the Register a new Endpoint view, click the Amazon EKS button.

  3. Enter a memorable name for your Amazon EKS environment in the Name field. For example, my-endpoint.

  4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

    tux > kubectl cluster-info
  5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

  6. Click Register.

  7. Activate the Connect to my-endpoint now (optional) check box.

  8. Enter the name of your Amazon EKS cluster in the Cluster field.

  9. Enter your AWS Access Key ID in the Access Key ID field.

  10. Enter your AWS Secret Access Key in the Secret Access Key field.

  11. Click Connect.

  12. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your Amazon EKS environment should now be displayed.

Kubernetes Environment Information on Stratos
Figure 6.4: Kubernetes Environment Information on Stratos

6.3 Deploy Stratos on Microsoft AKS

Before deploying Stratos, ensure uaa and scf have been successfully deployed on Microsoft AKS (see Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)).

Use Helm to install Stratos:

tux > helm install suse/console \
    --name susecf-console \
    --namespace stratos \
    --values scf-config-values.yaml \
    --set services.loadbalanced=true \
    --set console.service.http.nodePort=8080

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When stratos is successfully deployed, the following is observed:

  • For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

Obtain the IP address of the service exposed through the public load balancer:

tux > kubectl get service susecf-console-ui-ext --namespace stratos

Use this IP address to create an A record.

Stratos UI Cloud Foundry Console
Figure 6.5: Stratos UI Cloud Foundry Console

6.3.1 Connecting Microsoft AKS to Stratos

Stratos can show information from your Microsoft AKS environment.

To enable this, you must register and connect your Microsoft AKS environment with Stratos.

  1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

  2. On the Register a new Endpoint view, click the Azure AKS button.

  3. Enter a memorable name for your Microsoft AKS environment in the Name field. For example, my-endpoint.

  4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

    tux > kubectl cluster-info
  5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

  6. Click Register.

  7. Activate the Connect to my-endpoint now (optional) check box.

  8. Provide a valid kubeconfig file for your Microsoft AKS environment.

  9. Click Connect.

  10. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your Microsoft AKS environment should now be displayed.

Kubernetes Environment Information on Stratos
Figure 6.6: Kubernetes Environment Information on Stratos

6.4 Deploy Stratos on Google GKE

Before deploying Stratos, ensure uaa and scf have been successfully deployed on Google GKE (see Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)).

Use Helm to install Stratos:

tux > helm install suse/console \
    --name susecf-console \
    --namespace stratos \
    --values scf-config-values.yaml \
    --set services.loadbalanced=true \
    --set console.service.http.nodePort=8080

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When stratos is successfully deployed, the following is observed:

  • For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

Obtain the IP address of the service exposed through the public load balancer:

tux > kubectl get service susecf-console-ui-ext --namespace stratos

Use this IP address to create an A record.

Stratos UI Cloud Foundry Console
Figure 6.7: Stratos UI Cloud Foundry Console

6.4.1 Connecting Google GKE to Stratos

Stratos can show information from your Google GKE environment.

To enable this, you must register and connect your Google GKE environment with Stratos.

  1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

  2. On the Register a new Endpoint view, click the Google Kubernetes Engine button.

  3. Enter a memorable name for your Google GKE environment in the Name field. For example, my-endpoint.

  4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

    tux > kubectl cluster-info
  5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

  6. Click Register.

  7. Activate the Connect to my-endpoint now (optional) check box.

  8. Provide a valid Application Default Credentials file for your Google GKE environment. Generate the file using the command below. The command saves the credentials to a file named application_default_credentials.json and outputs the path of the file.

    tux > gcloud auth application-default login
  9. Click Connect.

  10. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your Google GKE environment should now be displayed.

Kubernetes Environment Information on Stratos
Figure 6.8: Kubernetes Environment Information on Stratos

6.5 Upgrading Stratos

For instructions to upgrade Stratos, follow the process described in Chapter 14, Upgrading SUSE Cloud Application Platform. Note that uaa and scf, the other components that together with Stratos make up a Cloud Application Platform release, are upgraded before Stratos.

6.6 Stratos Metrics

Stratos can show metrics data from Prometheus for both Cloud Foundry and Kubernetes.

6.6.1 Install Stratos Metrics with Helm

In order to display metrics data with Stratos, you need to deploy the stratos-metrics Helm chart. This deploys Prometheus with the necessary exporters that collect data from Cloud Foundry and Kubernetes, and wraps Prometheus with an nginx server to provide authentication.

As with deploying Stratos, you should deploy the metrics Helm chart using the same scf-config-values.yaml file that was used for deploying scf and uaa.

Create a new yaml file named stratos-metrics-values.yaml, with the following contents:

env:
  DOPPLER_PORT: 443
kubernetes:
  authEndpoint: kube_server_address.example.com
prometheus:
  kubeStateMetrics:
    enabled: true
nginx:
  username: username
  password: password
useLb: true

where:

  • authEndpoint is the URL of your Kubernetes API server, the same URL that you used when registering your Kubernetes environment with Stratos

  • username is the user name you will use when connecting to Stratos Metrics

  • password is the password you will use when connecting to Stratos Metrics. Be sure to choose a secure password

  • useLb is set to true if your Kubernetes deployment supports automatic configuration of a load balancer (for example, AKS, EKS, and GKE)
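
If you need to look up the Kubernetes API server URL for authEndpoint, run kubectl cluster-info and use the Kubernetes master address, just as when registering the endpoint in Stratos:

tux > kubectl cluster-info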

If you are using SUSE Enterprise Storage, you must copy the Ceph admin secret to the metrics namespace:

tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "metrics"/' | kubectl create --filename -

Install Metrics with:

tux > helm install suse/metrics \
    --name susecf-metrics \
    --namespace metrics \
    --values scf-config-values.yaml \
    --values stratos-metrics-values.yaml

Monitor progress:

tux > watch --color 'kubectl get pods --namespace metrics'

When all statuses show Ready, press Ctrl+C to exit and to view your release information.

6.6.2 Connecting Stratos Metrics

When Stratos Metrics is connected to Stratos, additional views are enabled that show metrics metadata that has been ingested into the Stratos Metrics Prometheus server.

To enable this, you must register and connect your Stratos Metrics instance with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view. You should be shown the "Register new Endpoint" view. Next:

  1. Select Metrics from the Endpoint Type dropdown.

  2. Enter a memorable name for your environment in the Name field.

  3. Enter the Endpoint Address. Use the following to find the endpoint value.

    tux > kubectl get service susecf-metrics-metrics-nginx --namespace metrics
    • For Microsoft AKS, Amazon EKS, and Google GKE deployments which use a load balancer, the output will be similar to the following:

      NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)         AGE
      susecf-metrics-metrics-nginx   LoadBalancer   10.0.202.180   52.170.253.229   443:30263/TCP   21h

      Prepend https:// to the public IP of the load balancer, and enter it into the Endpoint Address field. Using the values from the example above, https://52.170.253.229 is entered as the endpoint address.

    • For CaaS Platform deployments which do not use a load balancer, the output will be similar to the following:

      NAME                           TYPE       CLUSTER-IP       EXTERNAL-IP               PORT(S)         AGE
      susecf-metrics-metrics-nginx   NodePort   172.28.107.209   10.86.101.115,172.28.0.31 443:30685/TCP   21h

      Prepend https:// to the external IP of your node, followed by the nodePort, and enter it into the Endpoint Address field. Using the values from the example above, https://10.86.101.115:30685 is entered as the endpoint address.

  4. Check the Skip SSL validation for the endpoint checkbox if using self-signed certificates.

  5. Click Finish.

The view will refresh to show the new endpoint in the disconnected state. Next you will need to connect to this endpoint.

In the table of endpoints, click the overflow menu icon alongside the endpoint that you added above, then:

  1. Click on Connect in the dropdown menu.

  2. Enter the username for your Stratos Metrics instance. This will be the nginx.username defined in your stratos-metrics-values.yaml file.

  3. Enter the password for your Stratos Metrics instance. This will be the nginx.password defined in your stratos-metrics-values.yaml file.

  4. Click Connect.

Once connected, you should see that the name of your Metrics endpoint is a hyperlink and clicking on it should show basic metadata about the Stratos Metrics endpoint.

Metrics data and views should now be available in the Stratos UI, for example:

  • On the Instances tab for an Application, the table should show an additional Cell column to indicate which Diego Cell the instance is running on. This should be clickable to navigate to a Cell view showing Cell information and metrics.

    Cell Column on Application Instance Tab after Connecting Stratos Metrics
    Figure 6.9: Cell Column on Application Instance Tab after Connecting Stratos Metrics
  • On the view for an Application there should be a new Metrics tab that shows Application metrics.

    Application Metrics Tab after Connecting Stratos Metrics
    Figure 6.10: Application Metrics Tab after Connecting Stratos Metrics
  • On the Kubernetes views, views such as the Node view should show an additional Metrics tab with metric information.

    Node Metrics on the Stratos Kubernetes View
    Figure 6.11: Node Metrics on the Stratos Kubernetes View

7 SUSE Cloud Application Platform High Availability

7.1 Configuring Cloud Application Platform for High Availability

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The simplest method is to set the HA parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own custom values.

7.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for each namespace describes which roles can be scaled, and the scaling options for each role. You may use helm inspect to read the sizing: section in the Helm charts:

tux > helm inspect suse/uaa | less +/sizing:
tux > helm inspect suse/cf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section. The following example is for the uaa namespace.

tux > helm inspect values suse/uaa | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'
151:    # The mysql instance group can scale between 1 and 3 instances.
152:    # For high availability it needs at least 2 instances.
153:    count: 1
178:    # The secret-generation instance group cannot be scaled.
179:    count: 1
207:  #   for managing user accounts and for registering OAuth2 clients, as well as
216:    # The uaa instance group can scale between 1 and 65535 instances.
217:    # For high availability it needs at least 2 instances.
218:    count: 1
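
The same approach works for the scf chart; the output is longer, as scf has many more roles:

tux > helm inspect values suse/cf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'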

The default values.yaml files are also included in this guide at Section A.2, “Complete suse/uaa values.yaml File” and Section A.3, “Complete suse/scf values.yaml File”.

7.1.2 Simple High Availability Configuration

Important
Important
Always install to a fresh namespace

If you are not creating a fresh SUSE Cloud Application Platform installation, but have deleted a previous deployment and are starting over, you must create new namespaces. Do not re-use your old namespaces. The helm delete command does not remove generated secrets from the scf and uaa namespaces as it is not aware of them. These leftover secrets may cause deployment failures. See Section 30.3, “Deleting and Rebuilding a Deployment” for more information.

The simplest way to make your SUSE Cloud Application Platform deployment highly available is to set HA to true in your deployment configuration file, for example scf-config-values.yaml:

config:
  # Flag to activate high-availability mode
  HA: true

Or, you may pass it as a command line option when you are deploying with Helm, for example:

tux > helm install suse/uaa \
 --name susecf-uaa \
 --namespace uaa \
 --values scf-config-values.yaml \
 --set config.HA=true

This changes all roles with a default size of 1 to the minimum required for a High Availability deployment. It is not possible to customize any of the sizing values.

7.1.3 Example Custom High Availability Configurations

The following two example High Availability configuration files are for the uaa and scf namespaces. The example values are not meant to be copied, as these depend on your particular deployment and requirements. When using custom sizing files, do not also set the config.HA flag to true (see Section 7.1.2, “Simple High Availability Configuration”).

The first example is for the uaa namespace, uaa-sizing.yaml. The values specified are the minimum required for a High Availability deployment (that is equivalent to setting config.HA to true):

sizing:
  mysql:
    count: 2
  uaa:
    count: 2

The second example is for scf, scf-sizing.yaml. The values specified are the minimum required for a High Availability deployment (that is equivalent to setting config.HA to true), except for diego-cell which includes additional instances:

sizing:
  adapter:
    count: 2
  api_group:
    count: 2
  cc_clock:
    count: 2
  cc_uploader:
    count: 2
  cc_worker:
    count: 2
  cf_usb:
    count: 2
  diego_api:
    count: 2
  diego_brain:
    count: 2
  diego_cell:
    count: 6
  diego_ssh:
    count: 2
  doppler:
    count: 2
  log-api:
    count: 2
  mysql:
    count: 2
  nats:
    count: 2
  nfs_broker:
    count: 2
  router:
    count: 2
  routing_api:
    count: 2
  syslog_scheduler:
    count: 2
  tcp_router:
    count: 2
Important
Important
Always install to a fresh namespace

If you are not creating a fresh SUSE Cloud Application Platform installation, but have deleted a previous deployment and are starting over, you must create new namespaces. Do not re-use your old namespaces. The helm delete command does not remove generated secrets from the scf and uaa namespaces as it is not aware of them. These leftover secrets may cause deployment failures. See Section 30.3, “Deleting and Rebuilding a Deployment” for more information.

After creating your configuration files, follow the steps in Section 5.5, “Configure the SUSE Cloud Application Platform Production Deployment” until you get to Section 5.9, “Deploy uaa”. Then deploy uaa with this command:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml \
--values uaa-sizing.yaml

Wait until you have a successful uaa deployment before going to the next steps. You can monitor deployment progress with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

  • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

When the uaa deployment completes, deploy SCF with these commands:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--values scf-sizing.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

The HA pods with the following roles will be in both passive and ready states; there should always be at least one pod in each role that is ready.

  • diego-brain

  • diego-database

  • routing-api

You can confirm this by looking at the logs inside the container. Look for .consul-lock.acquiring-lock.

Some roles follow an active/passive scaling model, meaning all pods except the active one will be shown as NOT READY by Kubernetes. This is appropriate and expected behavior.
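
To check how many instances of each role are running, you can also list the stateful sets in the scf namespace; the DESIRED and CURRENT columns reflect your sizing values:

tux > kubectl get statefulsets --namespace scf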

7.1.4 Upgrading a non-High Availability Deployment to High Availability

You may make a non-High Availability deployment highly available by upgrading with Helm:

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--values uaa-sizing.yaml

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values scf-sizing.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

This may take a long time, and your cluster will be unavailable until the upgrade is complete.
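
As with the initial deployment, you can monitor the upgrade with the watch command until all pods are ready again:

tux > watch --color 'kubectl get pods --namespace scf'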

7.2 Availability Zones

Availability Zones (AZ) are logical arrangements of compute nodes within a region that provide isolation from each other. A deployment that is distributed across multiple AZs can use this separation to increase resiliency against downtime in the event a given zone experiences issues.

Refer to your platform's documentation for platform-specific information about availability zones.

7.2.1 Availability Zone Information Handling

In Cloud Application Platform, availability zone handling is done using the AZ_LABEL_NAME Helm chart value. By default, AZ_LABEL_NAME is set to failure-domain.beta.kubernetes.io/zone, which is the predefined Kubernetes label for availability zones. On most public cloud providers, nodes will already have this label set and availability zone support will work without further configuration. For on-premise installations, it is recommended that nodes are labeled with the same label.

Run the following to see the labels on your nodes.

tux > kubectl get nodes --show-labels

To label a node, use kubectl label nodes NODE_NAME LABEL_KEY=LABEL_VALUE. For example:

tux > kubectl label nodes cap-worker-1 failure-domain.beta.kubernetes.io/zone=zone-1
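
To confirm the label has been applied, list your nodes with the zone label displayed as a column:

tux > kubectl get nodes --label-columns failure-domain.beta.kubernetes.io/zone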

To see which node and availability zone a given diego-cell pod is assigned to, refer to the following example:

tux > kubectl logs diego-cell-0 --namespace scf | grep ^AZ

For more information on the failure-domain.beta.kubernetes.io/zone label, see https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone.

Note that due to a bug in Cloud Application Platform 1.4 and earlier, this label did not work with AZ_LABEL_NAME.

Important
Important: Performance with Availability Zones

For the best performance, all availability zones should have a similar number of nodes because app instances will be evenly distributed, so that each zone has about the same number of instances.

8 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

8.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

  • The Cloud Foundry command line interface (cf CLI). See https://docs.cloudfoundry.org/cf-cli/ for installation instructions and documentation.

  • The Cloud Foundry uaa command line interface (UAAC).

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem (see the example after this list).

    tux > sudo zypper install ruby-devel gcc-c++
  • An LDAP server and the credentials for a user/service account with permissions to search the directory.
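
With the build prerequisites in place, UAAC is typically installed as a Ruby gem. A minimal example, assuming a system-wide Ruby installation:

tux > sudo gem install cf-uaac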

8.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server. In this example, scf has been deployed to a namespace named scf.

  1. Use UAAC to target your uaa server:

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your scf-config-values.yaml file:

    tux > uaac token client get admin --secret password
  3. Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference and Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in the bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true \
        --request POST \
        --insecure \
        --header 'Content-Type: application/json' \
        --header 'X-Identity-Zone-Subdomain: scf' \
        --data '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      },
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
      }'
  4. Verify the LDAP identity provider has been created in the scf zone. The output should now contain an entry for the ldap type.

    tux > uaac curl /identity-providers --insecure --header "X-Identity-Zone-Id: scf"
  5. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  6. Log in as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> admin
    
    Password>
    Authenticating...
    OK
  7. Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    OK
    
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  8. Assign the user a role. Roles define the permissions a user has for a given org or space and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    OK
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
    OK
  9. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> username
    
    Password>
    Authenticating...
    OK
    
    
    
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

If the LDAP identity provider is no longer needed, it can be removed with the following steps.

  1. Obtain the ID of your identity provider.

    tux > uaac curl /identity-providers \
        --insecure \
        --header "Content-Type:application/json" \
        --header "Accept:application/json" \
        --header"X-Identity-Zone-Id:scf"
  2. Delete the identity provider.

    tux > uaac curl /identity-providers/IDENTITY_PROVIDER_ID \
        --request DELETE \
        --insecure \
        --header "X-Identity-Zone-Id:scf"

9 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)

Important
Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Microsoft Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes service. This chapter describes the steps for preparing Azure for a SUSE Cloud Application Platform deployment, with a basic Azure load balancer. Note that you will not create any DNS records until after uaa is deployed. (See Azure Kubernetes Service (AKS) for more information.)

In Kubernetes terminology a node used to be a minion, which was the name for a worker node. Now the correct term is simply node (see https://kubernetes.io/docs/concepts/architecture/nodes/). This can be confusing, as computing nodes have traditionally been defined as any device in a network that has an IP address. In Azure they are called agent nodes. In this chapter we call them agent nodes or Kubernetes nodes.

9.1 Prerequisites

Install az, the Azure command line client, on your remote administration machine. See Install Azure CLI 2.0 for instructions.

See the Azure CLI 2.0 Reference for a complete az command reference.

You also need the kubectl, curl, sed, and jq commands, Helm 2.9 or newer, and the name of the SSH key that is attached to your Azure account. (Get Helm from Helm Releases.)

Ensure nodes use a minimum kernel version of 3.19.

Note that 24 vCPUs are required for the minimal installation described in this guide. If your account only has the default 10 vCPUs available, you can request a quota increase by going to https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request.

Log in to your Azure Account:

tux > az login

Your Azure user needs the User Access Administrator role. Check your assigned roles with the az command:

tux > az role assignment list --assignee login-name
[...]
"roleDefinitionName": "User Access Administrator",

If you do not have this role, then you must request it from your Azure administrator.

You need your Azure subscription ID. Extract it with az:

tux > az account show --query "{ subscription_id: id }"
{
"subscription_id": "a900cdi2-5983-0376-s7je-d4jdmsif84ca"
}

Replace the example subscription ID in the next command with your own. Then export it as an environment variable and set it as the current subscription:

tux > export SUBSCRIPTION_ID="a900cdi2-5983-0376-s7je-d4jdmsif84ca"

tux > az account set --subscription $SUBSCRIPTION_ID

Verify that the Microsoft.Network, Microsoft.Storage, Microsoft.Compute, and Microsoft.ContainerService providers are enabled:

tux > az provider list | egrep --word-regexp 'Microsoft.Network|Microsoft.Storage|Microsoft.Compute|Microsoft.ContainerService'

If any of these are missing, enable them with the az provider register --namespace PROVIDER command.
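
For example, to enable the Microsoft.ContainerService provider:

tux > az provider register --namespace Microsoft.ContainerService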

9.2 Create Resource Group and AKS Instance

Now you can create a new Azure resource group and AKS instance. Define the required variables as environment variables in a file called env.sh. This helps to speed up the setup and to reduce errors. Verify your environment variables at any time by using source to load the file, then running echo $VARNAME, for example:

tux > source ./env.sh

tux > echo $RG_NAME
cap-aks

This is especially useful when you run long compound commands to extract and set environment variables.

Tip
Tip: Use Different Names

Ensure each of your AKS clusters uses unique resource group and managed cluster names, and do not copy the examples, especially when your Azure subscription supports multiple users. Azure has no tools for sorting resources by user, so creating unique names and putting everything in your deployment in a single resource group helps you keep track, and lets you delete the whole deployment by deleting the resource group.

In env.sh, define the environment variables below. Replace the example values with your own.

  • Set a resource group name.

    tux > export RG_NAME="cap-aks"
  • Set an AKS managed cluster name. Azure's default is to use the resource group name, then prepend it with MC and append the location, for example MC_cap-aks_cap-aks_eastus. This example uses the creator's initials for the AKS_NAME environment variable, which will be mapped to the az command's --name option. The --name option is for creating arbitrary names for your AKS resources. This example will create a managed cluster named MC_cap-aks_cjs_eastus:

    tux > export AKS_NAME=cjs
  • Set the Azure location. See Quotas and region availability for Azure Kubernetes Service (AKS) for supported locations. Run az account list-locations to verify the correct way to spell your location name, for example East US is eastus in your az commands:

    tux > export REGION="eastus"
  • Set the Kubernetes agent node count. (Cloud Application Platform requires a minimum of 3.)

    tux > export NODE_COUNT="3"
  • Set the virtual machine size (see General purpose virtual machine sizes). A virtual machine size of at least Standard_DS4_v2 using premium storage (see Built in storage classes) is recommended. Note managed-premium has been specified in the example scf-config-values.yaml used (see Section 9.10, “Deploying SUSE Cloud Application Platform”):

    tux > export NODE_VM_SIZE="Standard_DS4_v2"
  • Set the public SSH key name associated with your Azure account:

    tux > export SSH_KEY_VALUE="~/.ssh/id_rsa.pub"
  • Set a new admin username:

    tux > export ADMIN_USERNAME="scf-admin"
  • Create a unique nodepool name. The default is aks-nodepool followed by an auto-generated number, for example aks-nodepool1-39318075-2. You have the option to change nodepool1 and create your own unique identifier. For example, mypool results in aks-mypool-39318075-2. Note that uppercase characters are considered invalid in a nodepool name and should not be used.

    tux > export NODEPOOL_NAME="mypool"

The below is an example env.sh file after all the environment variables have been defined.

### example environment variable definition file
### env.sh

export RG_NAME="cap-aks"
export AKS_NAME="cjs"
export REGION="eastus"
export NODE_COUNT="3"
export NODE_VM_SIZE="Standard_DS4_v2"
export SSH_KEY_VALUE="~/.ssh/id_rsa.pub"
export ADMIN_USERNAME="scf-admin"
export NODEPOOL_NAME="mypool"

Now that your environment variables are in place, load the file:

tux > source ./env.sh

Create a new resource group:

tux > az group create --name $RG_NAME --location $REGION

List the Kubernetes versions currently supported by AKS (see https://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions for more information on the AKS version support policy):

tux > az aks get-versions --location $REGION --output table

Create a new AKS managed cluster, and specify the Kubernetes version for consistent deployments:

tux > az aks create --resource-group $RG_NAME --name $AKS_NAME \
 --node-count $NODE_COUNT --admin-username $ADMIN_USERNAME \
 --ssh-key-value $SSH_KEY_VALUE --node-vm-size $NODE_VM_SIZE \
 --node-osdisk-size=80 --nodepool-name $NODEPOOL_NAME \
 --kubernetes-version 1.11.9
Note
Note

An OS disk size of at least 80 GB must be specified using the --node-osdisk-size flag.

This takes a few minutes. When it has completed, fetch your kubectl credentials. The default behavior for az aks get-credentials is to merge the new credentials with the existing default configuration, and to set the new credentials as the current Kubernetes context. The context name is your AKS_NAME value. You should first back up your current configuration, or move it to a different location.
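
For example, a minimal way to back up the existing configuration before merging (the backup file name is an arbitrary choice):

tux > cp ~/.kube/config ~/.kube/config.bak

Then fetch the new credentials: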

tux > az aks get-credentials --resource-group $RG_NAME --name $AKS_NAME
 Merged "cap-aks" as current context in /home/tux/.kube/config

Verify that you can connect to your cluster:

tux > kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
aks-mypool-47788232-0   Ready     agent     5m        v1.11.9
aks-mypool-47788232-1   Ready     agent     6m        v1.11.9
aks-mypool-47788232-2   Ready     agent     6m        v1.11.9

tux > kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY  STATUS    RESTARTS   AGE
kube-system   azureproxy-79c5db744-fwqcx          1/1    Running   2          6m
kube-system   heapster-55f855b47-c4mf9            2/2    Running   0          5m
kube-system   kube-dns-v20-7c556f89c5-spgbf       3/3    Running   0          6m
kube-system   kube-dns-v20-7c556f89c5-z2g7b       3/3    Running   0          6m
kube-system   kube-proxy-g9zpk                    1/1    Running   0          6m
kube-system   kube-proxy-kph4v                    1/1    Running   0          6m
kube-system   kube-proxy-xfngh                    1/1    Running   0          6m
kube-system   kube-svc-redirect-2knsj             1/1    Running   0          6m
kube-system   kube-svc-redirect-5nz2p             1/1    Running   0          6m
kube-system   kube-svc-redirect-hlh22             1/1    Running   0          6m
kube-system   kubernetes-dashboard-546686-mr9hz   1/1    Running   1          6m
kube-system   tunnelfront-595565bc78-j8msn        1/1    Running   0          6m

When all nodes are in a ready state and all pods are running, proceed to the next steps.

9.3 Install Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm, can be installed on your remote administration computer by referring to the documentation at https://docs.helm.sh/using_helm/#installing-helm. Usage with Cloud Application Platform requires Helm 2, including any of its minor releases. Compatibility between Cloud Application Platform and Helm 3 is not supported.

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. Follow the instructions at https://helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account and ensure your installation is appropriately secured according to your requirements as described in https://helm.sh/docs/using_helm/#securing-your-helm-installation.

9.4 Pod Security Policies

Role-based access control (RBAC) is enabled by default on AKS. SUSE Cloud Application Platform 1.3.1 and later do not require manual configuration of Pod Security Policies (PSPs). Older Cloud Application Platform releases require manual PSP configuration; see Section A.1, “Manual Configuration of Pod Security Policies” for instructions.

9.5 Enable Swap Accounting

Identify and set the cluster resource group, then enable kernel swap accounting. Swap accounting is required by Cloud Application Platform, but it is not the default in AKS nodes. The following commands use the az command to modify the GRUB configuration on each node, and then reboot the virtual machines.

  1. tux > export MC_RG_NAME=$(az aks show --resource-group $RG_NAME --name $AKS_NAME --query nodeResourceGroup --output json | jq -r '.')
  2. tux > export VM_NODES=$(az vm list --resource-group $MC_RG_NAME --output json | jq -r '.[] | select (.tags.poolName | contains("'$NODEPOOL_NAME'")) | .name')
  3. tux > for i in $VM_NODES
     do
       az vm run-command invoke --resource-group $MC_RG_NAME --name $i --command-id RunShellScript --scripts \
       "sudo sed --in-place --regexp-extended 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)\"(.*.)\"|\1\"\2 swapaccount=1\"|' \
       /etc/default/grub.d/50-cloudimg-settings.cfg && sudo update-grub"
       az vm restart --resource-group $MC_RG_NAME --name $i
    done
  4. Verify that all nodes are in state "Ready" again, before you continue.

    tux > kubectl get nodes
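
Optionally, as an additional check, you can confirm the kernel change took effect by inspecting each node's kernel command line for swapaccount=1. This is a sketch that reuses the MC_RG_NAME and VM_NODES variables set in the steps above:

tux > for i in $VM_NODES
 do
   az vm run-command invoke --resource-group $MC_RG_NAME --name $i \
   --command-id RunShellScript --scripts "cat /proc/cmdline"
 done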

9.6 Default Storage Class

This example creates a managed-premium (see https://docs.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv) storage class for your cluster using the manifest defined in storage-class.yaml below:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
  name: persistent
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
allowVolumeExpansion: true

Then apply the new storage class configuration with this command:

tux > kubectl create --filename storage-class.yaml

Specify the newly created storage class, called persistent, as the value for kube.storage_class.persistent in your deployment configuration file, like this example:

kube:
  storage_class:
    persistent: "persistent"
    shared: "persistent"

See Section 9.8, “Deployment Configuration” for a complete example deployment configuration file, scf-config-values.yaml.

9.7 DNS Configuration

This section provides an overview of the domain and sub-domains that require A records. The process is described in more detail in the deployment section.

The following table lists the required domain and sub-domains, using example.com as the example domain:

Domains                 Services
uaa.example.com         uaa-uaa-public
*.uaa.example.com       uaa-uaa-public
example.com             router-gorouter-public
*.example.com           router-gorouter-public
tcp.example.com         tcp-router-tcp-router-public
ssh.example.com         diego-ssh-ssh-proxy-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions           Kubernetes service names
User Account and Authentication (uaa)     uaa-uaa-public
Cloud Foundry (CF) TCP routing service    tcp-router-tcp-router-public
CF application SSH access                 diego-ssh-ssh-proxy-public
CF router                                 router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.

9.8 Deployment Configuration

It is not necessary to create any DNS records before deploying uaa. Instead, after uaa is running you will find the load balancer IP address that was automatically created during deployment, and then create the necessary records.

The following file, scf-config-values.yaml, provides a complete example deployment configuration. Enter the fully-qualified domain name (FQDN) that you intend to use for DOMAIN and UAA_HOST.

### example deployment configuration file
### scf-config-values.yaml

env:
  # the FQDN of your domain
  DOMAIN: example.com
  # the UAA prefix is required
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: "overlay-xfs"

kube:
  storage_class:
    persistent: "persistent"
    shared: "persistent"

  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

services:
  loadbalanced: true

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used; it is selected by default.

For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important
Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.

9.9 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.17.1          1.4             A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    2.4.0           2.4.0           A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0           1.0.0           A Helm chart for Stratos Metrics
suse/minibroker                 0.2.0                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.17.1          1.4             A Helm chart for SUSE UAA

9.10 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform with a basic AKS load balancer and how to configure your DNS records.

9.10.1 Deploy uaa

Use Helm to deploy the uaa server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

  • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press CtrlC to exit the watch command.

After the deployment completes, a Kubernetes service for uaa will be exposed on an Azure load balancer that is automatically set up by AKS (named kubernetes in the resource group that hosts the worker node VMs).
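
If you want to inspect this load balancer from the command line, the following is a hedged example that uses the node resource group variable (MC_RG_NAME) as set in Section 9.5, “Enable Swap Accounting”:

tux > az network lb list --resource-group $MC_RG_NAME --output table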

List the services that have been exposed on the load balancer public IP. The names of these services end in -public. For example, the uaa service is exposed on 40.85.188.67 and port 2793:

tux > kubectl get services --namespace uaa | grep public
uaa-uaa-public    LoadBalancer   10.0.67.56     40.85.188.67   2793:32034/TCP

Use the DNS service of your choice to set up DNS A records for the service from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the uaa service, map the following domains:

    uaa.DOMAIN

    Using the example values, an A record for uaa.example.com that points to 40.85.188.67 would be created.

    *.uaa.DOMAIN

    Using the example values, an A record for *.uaa.example.com that points to 40.85.188.67 would be created.

If you wish to use the DNS service provided by Azure, see the Azure DNS Documentation to learn more.
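
As an illustration only, assuming an Azure DNS zone for example.com already exists in a resource group named DNS_RG_NAME (both names are placeholders), the uaa records could be created like this:

tux > az network dns record-set a add-record --resource-group DNS_RG_NAME \
 --zone-name example.com --record-set-name uaa --ipv4-address 40.85.188.67

tux > az network dns record-set a add-record --resource-group DNS_RG_NAME \
 --zone-name example.com --record-set-name '*.uaa' --ipv4-address 40.85.188.67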

Use curl to verify you are able to connect to the uaa OAuth server on the DNS name configured:

tux > curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{"issuer":"https://uaa.example.com:2793/oauth/token",
"authorization_endpoint":"https://uaa.example.com:2793
/oauth/authorize","token_endpoint":"https://uaa.example.com:2793/oauth/token"

9.10.2 Deploy scf

Before deploying scf, ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, pass your uaa secret and certificate to scf, then use Helm to deploy scf:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

  • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press CtrlC to exit the watch command.

After the deployment completes, a number of public services will be set up using a load balancer that has been configured with the corresponding load-balancing rules and health probes, and with the correct ports opened in the Network Security Group.
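
If you want to review what was configured, the following is a hedged example that lists the load-balancing rules and health probes on the AKS-managed load balancer (named kubernetes), again using the MC_RG_NAME variable from Section 9.5:

tux > az network lb rule list --resource-group $MC_RG_NAME --lb-name kubernetes --output table

tux > az network lb probe list --resource-group $MC_RG_NAME --lb-name kubernetes --output table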

List the services that have been exposed on the load balancer public IPs. The names of these services end in -public. For example, the gorouter service is exposed on 23.96.32.205:

tux > kubectl get services --namespace scf | grep public
diego-ssh-ssh-proxy-public                  LoadBalancer   10.0.44.118    40.71.187.83   2222:32412/TCP                                                                                                                                    1d
router-gorouter-public                      LoadBalancer   10.0.116.78    23.96.32.205   80:32136/TCP,443:32527/TCP,4443:31541/TCP                                                                                                         1d
tcp-router-tcp-router-public                LoadBalancer   10.0.132.203   23.96.46.98    20000:30337/TCP,20001:31530/TCP,20002:32118/TCP,20003:30750/TCP,20004:31014/TCP,20005:32678/TCP,20006:31528/TCP,20007:31325/TCP,20008:30060/TCP   1d

Use the DNS service of your choice to set up DNS A records for the services from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the gorouter service, map the following domains:

    DOMAIN

    Using the example values, an A record for example.com that points to 23.96.32.205 would be created.

    *.DOMAIN

    Using the example values, an A record for *.example.com that points to 23.96.32.205 would be created.

  • For the diego-ssh service, map the following domain:

    ssh.DOMAIN

    Using the example values, an A record for ssh.example.com that points to 40.71.187.83 would be created.

  • For the tcp-router service, map the following domain:

    tcp.DOMAIN

    Using the example values, an A record for tcp.example.com that points to 23.96.46.98 would be created.

If you wish to use the DNS service provided by Azure, see the Azure DNS Documentation to learn more.

Your load balanced deployment of Cloud Application Platform is now complete. Verify you can access the API endpoint:

tux > cf api --skip-ssl-validation https://api.example.com

9.11 Configuring and Testing the Native Microsoft AKS Service Broker

Microsoft Azure Kubernetes Service provides a service broker called the Open Service Broker for Azure (see https://github.com/Azure/open-service-broker-azure). This section describes how to use it with your SUSE Cloud Application Platform deployment.

Start by extracting and setting a batch of environment variables:

tux > SBRG_NAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 8)-service-broker

tux > REGION=eastus

tux > export SUBSCRIPTION_ID=$(az account show | jq -r '.id')

tux > az group create --name ${SBRG_NAME} --location ${REGION}

tux > SERVICE_PRINCIPAL_INFO=$(az ad sp create-for-rbac --name ${SBRG_NAME})

tux > TENANT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.tenant')

tux > CLIENT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.appId')

tux > CLIENT_SECRET=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.password')

tux > echo SBRG_NAME=${SBRG_NAME}

tux > echo REGION=${REGION}

tux > echo SUBSCRIPTION_ID=${SUBSCRIPTION_ID} \; TENANT_ID=${TENANT_ID}\; CLIENT_ID=${CLIENT_ID}\; CLIENT_SECRET=${CLIENT_SECRET}

Add the necessary Helm repositories and download the charts:

tux > helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com

tux > helm repo update

tux > helm install svc-cat/catalog --name catalog \
 --namespace catalog \
 --set controllerManager.healthcheck.enabled=false \
 --set apiserver.healthcheck.enabled=false

tux > kubectl get apiservice

tux > helm repo add azure https://kubernetescharts.blob.core.windows.net/azure

tux > helm repo update

Set up the service broker with your variables:

Warning
Warning: Service Broker Installation Fails on AKS Cluster Running Kubernetes 1.11.8

If installation of open-service-broker-azure is unsuccessful due to Failed to pull image "osbapublicacr.azurecr.io/microsoft/azure-service-broker:v1.6.0", upgrade your AKS cluster to Kubernetes 1.11.9. See Section 9.12, “Upgrading An AKS Cluster and Additional Considerations” for instructions.

tux > helm install azure/open-service-broker-azure \
--name osba \
--namespace osba \
--set azure.subscriptionId=${SUBSCRIPTION_ID} \
--set azure.tenantId=${TENANT_ID} \
--set azure.clientId=${CLIENT_ID} \
--set azure.clientSecret=${CLIENT_SECRET} \
--set azure.defaultLocation=${REGION} \
--set redis.persistence.storageClass=default \
--set basicAuth.username=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set basicAuth.password=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set tls.enabled=false

Monitor the progress:

tux > watch --color 'kubectl get pods --namespace osba'

When all pods are running, create the service broker in SUSE Cloud Foundry using the cf CLI:

tux > cf login

tux > cf create-service-broker azure $(kubectl get deployment osba-open-service-broker-azure \
--namespace osba --output jsonpath='{.spec.template.spec.containers[0].env[?(@.name == "BASIC_AUTH_USERNAME")].value}') $(kubectl get secret --namespace osba osba-open-service-broker-azure --output jsonpath='{.data.basic-auth-password}' | base64 --decode) http://osba-open-service-broker-azure.osba

List the available service plans. For more information about the services supported see https://github.com/Azure/open-service-broker-azure#supported-services:

tux > cf service-access -b azure

Use cf enable-service-access to enable access to a service plan. This example enables all basic plans:

tux > cf service-access -b azure | \
awk '($2 ~ /basic/) { system("cf enable-service-access " $1 " -p " $2)}'

Test your new service broker with an example PHP application. First create an organization and space to deploy your test application to:

tux > cf create-org testorg

tux > cf create-space scftest -o testorg

tux > cf target -o "testorg" -s "scftest"

tux > cf create-service azure-mysql-5-7 basic question2answer-db \
-c "{ \"location\": \"${REGION}\", \"resourceGroup\": \"${SBRG_NAME}\", \"firewallRules\": [{\"name\": \
\"AllowAll\", \"startIPAddress\":\"0.0.0.0\",\"endIPAddress\":\"255.255.255.255\"}]}"

tux > cf service question2answer-db | grep status

Find your new service and optionally disable TLS. You should not disable TLS on a production deployment, but doing so simplifies testing. If you keep TLS enabled, the mysql2 gem must be configured to use TLS; see brianmario/mysql2/SSL options on GitHub:

tux > az mysql server list --resource-group $SBRG_NAME

tux > az mysql server update --resource-group $SBRG_NAME \
--name scftest --ssl-enforcement Disabled

Look in your Azure portal to find your database --name.
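
As an alternative to the portal, the following is a sketch of capturing the generated server name from the CLI. It assumes jq is installed and that the resource group contains only this one server; DB_SERVER_NAME is a hypothetical variable name:

tux > DB_SERVER_NAME=$(az mysql server list --resource-group $SBRG_NAME --output json | jq -r '.[0].name')

tux > echo $DB_SERVER_NAME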

Build and push the example PHP application:

tux > git clone https://github.com/scf-samples/question2answer

tux > cd question2answer

tux > cf push

tux > cf service question2answer-db # => bound apps

When the application has finished deploying, use your browser and navigate to the URL specified in the routes field displayed at the end of the staging logs. For example, the application route could be question2answer.example.com.

Press the button to prepare the database. When the database is ready, further verify by creating an initial user and posting some test questions.

9.12 Upgrading An AKS Cluster and Additional Considerations

When upgrading the Kubernetes version of your AKS cluster, be sure to enable swap accounting as it is required by Cloud Application Platform. As part of the initial AKS cluster creation process earlier in this chapter, swap accounting was enabled on all nodes. During the AKS cluster upgrade process, these existing nodes are deleted and replaced by new nodes, running a newer version of Kubernetes, but without swap accounting enabled. The procedure below describes how to enable swap accounting on the new nodes added during an upgrade.

  • Upgrade your AKS cluster by following the process described in https://docs.microsoft.com/en-us/azure/aks/upgrade-cluster.

    Modify the GRUB configuration of each node to enable swap accounting and then reboot all nodes. RG_NAME and AKS_NAME are the values set earlier in this chapter and can also be obtained through the AKS web dashboard.

    tux > export MC_RG_NAME=$(az aks show --resource-group $RG_NAME --name $AKS_NAME --query nodeResourceGroup --output json | jq -r '.')
    
    tux > export VM_NODES=$(az vm list --resource-group $MC_RG_NAME --output json | jq -r '.[] | select (.tags.poolName | contains("'$NODEPOOL_NAME'")) | .name')
    
    tux > for i in $VM_NODES
     do
       az vm run-command invoke --resource-group $MC_RG_NAME --name $i --command-id RunShellScript --scripts \
       "sudo sed --in-place --regexp-extended 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)\"(.*.)\"|\1\"\2 swapaccount=1\"|' \
       /etc/default/grub.d/50-cloudimg-settings.cfg && sudo update-grub"
       az vm restart --resource-group $MC_RG_NAME --name $i
    done

    After the nodes are rebooted, expect 15-60 minutes of additional Cloud Application Platform downtime as Cloud Application Platform components resume.

9.13 Resizing Persistent Volumes

Depending on your workloads, the default persistent volume (PV) sizes of your Cloud Application Platform deployment may be insufficient. This section describes the process to resize a persistent volume in your Cloud Application Platform deployment by modifying the persistent volume claim (PVC) object.

Note that PVs can only be expanded; they cannot be shrunk.

9.13.1 Prerequisites

The following are required in order to use the process below to resize a PV.

9.13.2 Example Procedure

The following describes the process required to resize a PV, using the PV and PVC associated with uaa's mysql as an example.

  1. Find the storage class and PVC associated with the PV being expanded. In this example, the storage class is called persistent and the PVC is called mysql-data-mysql-0.

    tux > kubectl get persistentvolume
  2. Verify whether the storage class has allowVolumeExpansion set to true.

    tux > kubectl get storageclass persistent --output json

    If it does not, run the following command to update the storage class.

    tux > kubectl patch storageclass persistent \
    --patch '{"allowVolumeExpansion": true}'
  3. Cordon all nodes in your cluster.

    1. tux > export VM_NODES=$(kubectl get nodes -o name)
    2. tux > for i in $VM_NODES
       do
        kubectl cordon `echo "${i//node\/}"`
      done
  4. Increase the storage size of the PVC object associated with the PV being expanded.

    tux > kubectl patch persistentvolumeclaim --namespace uaa mysql-data-mysql-0 \
    --patch '{"spec": {"resources": {"requests": {"storage": "25Gi"}}}}'
  5. List all pods that use the PVC, in any namespace.

    tux > kubectl get pods --all-namespaces --output=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec |  select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'
  6. Restart all pods that use the PVC.

    tux > kubectl delete pod mysql-0 --namespace uaa
  7. Run kubectl get persistentvolumeclaim and monitor the status.conditions field.

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'

    When the following is observed, press CtrlC to exit the watch command and proceed to the next step.

    • status.conditions.message is

      message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
    • status.conditions.type is

      type: FileSystemResizePending
  8. Uncordon all nodes in your cluster.

    tux > for i in $VM_NODES
     do
      kubectl uncordon `echo "${i//node\/}"`
    done
  9. Wait for the resize to finish. Verify the storage size values match for status.capacity.storage and spec.resources.requests.storage.

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'
  10. Also verify the storage size in the pod itself is updated.

    tux > kubectl --namespace uaa exec mysql-0 -- df --human-readable

9.14 Expanding Capacity of a Cloud Application Platform Deployment on Microsoft AKS

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS) and have a running Cloud Application Platform deployment on Microsoft AKS. The instructions below will use environment variables defined in Section 9.2, “Create Resource Group and AKS Instance”.

  1. Get the current number of Kubernetes nodes in the cluster.

    tux > export OLD_NODE_COUNT=$(kubectl get nodes --output json | jq '.items | length')
  2. Set the number of Kubernetes nodes the cluster will be expanded to. Replace the example value with the number of nodes required for your workload.

    tux > export NEW_NODE_COUNT=4
  3. Increase the Kubernetes node count in the cluster.

    tux > az aks scale --resource-group $RG_NAME --name $AKS_NAME \
    --node-count $NEW_NODE_COUNT \
    --nodepool-name $NODEPOOL_NAME
  4. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  5. Enable swap accounting on the new nodes.

    1. tux > export MC_RG_NAME=$(az aks show --resource-group $RG_NAME --name $AKS_NAME --query nodeResourceGroup --output json | jq -r '.')
    2. tux > export NEW_VM_NODES=$(az vm list --resource-group $MC_RG_NAME --output json | jq -r '.[] | select (.tags.poolName | contains("'$NODEPOOL_NAME'")) |.name | if .[-1:] | tonumber >= '$OLD_NODE_COUNT' then . else empty end ')
    3. tux > for i in $NEW_VM_NODES
       do
         az vm run-command invoke --resource-group $MC_RG_NAME --name $i --command-id RunShellScript --scripts \
         "sudo sed --in-place --regexp-extended 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)\"(.*.)\"|\1\"\2 swapaccount=1\"|' \
         /etc/default/grub.d/50-cloudimg-settings.cfg && sudo update-grub"
         az vm restart --resource-group $MC_RG_NAME --name $i
      done
    4. Verify the new nodes are in a Ready state before proceeding.

      tux > kubectl get nodes
  6. Add or update the following in your scf-config-values.yaml file to increase the number of diego-cell pods in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

    sizing:
      diego_cell:
        count: 4
  7. Pass your uaa secret and certificate to scf.

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  8. Perform a helm upgrade to apply the change.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}"
  9. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace scf'

10 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)

Important
Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

This chapter describes how to deploy SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS), using Amazon's Elastic Load Balancer to provide fault-tolerant access to your cluster.

10.1 Prerequisites

You need an Amazon EKS account. See Getting Started with Amazon EKS for instructions on creating a Kubernetes cluster for your SUSE Cloud Application Platform deployment.

When you create your cluster, use node sizes that are at least t2.large. The NodeVolumeSize must be a minimum of 80 GB.

Ensure nodes use a minimum kernel version of 3.19.

Take note of special configurations that are required to successfully deploy SUSE Cloud Application Platform on EKS in Section 10.11, “Deploy scf”.

Section 10.2, “IAM Requirements for EKS” provides guidance on configuring Identity and Access Management (IAM) for your users.

10.2 IAM Requirements for EKS

These IAM policies provide sufficient access to use EKS.

10.2.1 Unscoped Operations

Some of these permissions are very broad. They are difficult to scope effectively, in part because many resources are created and named dynamically when deploying an EKS cluster using the CloudFormation console. It may be helpful to enforce certain naming conventions, such as prefixing cluster names with ${aws:username} for pattern-matching in Conditions. However, this requires special consideration beyond the EKS deployment guide, and should be evaluated in the broader context of organizational IAM policies.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UnscopedOperations",
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateUploadBucket",
                "cloudformation:EstimateTemplateCost",
                "cloudformation:ListExports",
                "cloudformation:ListStacks",
                "cloudformation:ListImports",
                "cloudformation:DescribeAccountLimits",
                "eks:ListClusters",
                "cloudformation:ValidateTemplate",
                "cloudformation:GetTemplateSummary",
                "eks:CreateCluster"
            ],
            "Resource": "*"
        },
        {
            "Sid": "EffectivelyUnscopedOperations",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:GetRole",
                "iam:DetachRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "cloudformation:*",
                "iam:CreateRole",
                "iam:DeleteRole",
                "eks:*"
            ],
            "Resource": [
                "arn:aws:eks:*:*:cluster/*",
                "arn:aws:cloudformation:*:*:stack/*/*",
                "arn:aws:cloudformation:*:*:stackset/*:*",
                "arn:aws:iam::*:instance-profile/*",
                "arn:aws:iam::*:role/*"
            ]
        }
    ]
}
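
As a hedged illustration of the naming-convention idea above (prefixing cluster names with ${aws:username}), a statement along the following lines could replace the unscoped eks:CreateCluster permission. The Sid and the prefix pattern are assumptions for this sketch, not part of the policy above:

{
    "Sid": "CreateOnlyOwnPrefixedClusters",
    "Effect": "Allow",
    "Action": "eks:CreateCluster",
    "Resource": "arn:aws:eks:*:*:cluster/${aws:username}-*"
}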

10.2.2 Scoped Operations

These policies deal with sensitive access controls, such as passing roles and attaching/detaching policies from roles.

This policy, as written, allows unrestricted use of only customer-managed policies, and not Amazon-managed policies. This prevents potential security holes such as attaching the IAMFullAccess policy to a role. If you are using roles in a way that would be undermined by this, you should strongly consider integrating a Permissions Boundary before using this policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UseCustomPoliciesWithCustomRoles",
            "Effect": "Allow",
            "Action": [
                "iam:DetachRolePolicy",
                "iam:AttachRolePolicy"
            ],
            "Resource": [
                "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*",
                "arn:aws:iam::<YOUR_ACCOUNT_ID>:policy/*"
            ],
            "Condition": {
                "ForAllValues:ArnNotLike": {
                    "iam:PolicyARN": "arn:aws:iam::aws:policy/*"
                }
            }
        },
        {
            "Sid": "AllowPassingRoles",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*"
        },
        {
            "Sid": "AddCustomRolesToInstanceProfiles",
            "Effect": "Allow",
            "Action": "iam:AddRoleToInstanceProfile",
            "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:instance-profile/*"
        },
        {
            "Sid": "AssumeServiceRoleForEKS",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<EKS_SERVICE_ROLE_NAME>"
        },
        {
            "Sid": "DenyUsingAmazonManagedPoliciesUnlessNeededForEKS",
            "Effect": "Deny",
            "Action": "iam:*",
            "Resource": "arn:aws:iam::aws:policy/*",
            "Condition": {
                "ArnNotEquals": {
                    "iam:PolicyARN": [
                        "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
                        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
                        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAttachingSpecificAmazonManagedPoliciesForEKS",
            "Effect": "Allow",
            "Action": [
                "iam:DetachRolePolicy",
                "iam:AttachRolePolicy"
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "iam:PolicyARN": [
                        "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
                        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
                        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
                    ]
                }
            }
        }
    ]
}
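
Regarding the Permissions Boundary suggestion above, the following is a hedged sketch of a statement that only allows creating roles which carry a specific boundary. The Sid, the action scope, and the boundary policy name are assumptions and placeholders, not part of this guide's policy:

{
    "Sid": "RequireBoundaryOnCreatedRoles",
    "Effect": "Allow",
    "Action": "iam:CreateRole",
    "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*",
    "Condition": {
        "StringEquals": {
            "iam:PermissionsBoundary": "arn:aws:iam::<YOUR_ACCOUNT_ID>:policy/<BOUNDARY_POLICY_NAME>"
        }
    }
}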

10.3 Install Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm, can be installed on your remote administration computer by referring to the documentation at https://docs.helm.sh/using_helm/#installing-helm. Usage with Cloud Application Platform requires Helm 2, including any of its minor releases. Compatibility between Cloud Application Platform and Helm 3 is not supported.

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. Follow the instructions at https://helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account and ensure your installation is appropriately secured according to your requirements as described in https://helm.sh/docs/using_helm/#securing-your-helm-installation.

10.4 Default Storage Class

This example creates a simple storage class for your cluster in storage-class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true

Then apply the new storage class configuration with this command:

tux > kubectl create --filename storage-class.yaml

10.5 Security Group Rules

In your EC2 virtual machine list, add the following rules to the security group of any one of your nodes:

Type               Protocol     Port Range      Source          Description
HTTP               TCP          80              0.0.0.0/0       CAP HTTP
Custom TCP Rule    TCP          2793            0.0.0.0/0       CAP UAA
Custom TCP Rule    TCP          2222            0.0.0.0/0       CAP SSH
Custom TCP Rule    TCP          4443            0.0.0.0/0       CAP WSS
Custom TCP Rule    TCP          443             0.0.0.0/0       CAP HTTPS
Custom TCP Rule    TCP          20000-20009     0.0.0.0/0       TCP Routing
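
If you prefer the CLI over the EC2 console, the following is a hedged example of adding one of the rules from the table above with the aws command; the security group ID is a placeholder, and the ranged rule uses --port 20000-20009:

tux > aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
 --protocol tcp --port 2793 --cidr 0.0.0.0/0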

10.6 DNS Configuration

Creation of the Elastic Load Balancer is triggered by a setting in the Cloud Application Platform deployment configuration file. First deploy uaa, then create CNAMEs for your domain and uaa subdomains (see the table below). Then deploy scf, and create the appropriate scf CNAMEs. This is described in more detail in the deployment sections.

The following table lists the required domain and sub-domains, using example.com as the example domain:

Domains                 Services
uaa.example.com         uaa-uaa-public
*.uaa.example.com       uaa-uaa-public
example.com             router-gorouter-public
*.example.com           router-gorouter-public
tcp.example.com         tcp-router-tcp-router-public
ssh.example.com         diego-ssh-ssh-proxy-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions           Kubernetes service names
User Account and Authentication (uaa)     uaa-uaa-public
Cloud Foundry (CF) TCP routing service    tcp-router-tcp-router-public
CF application SSH access                 diego-ssh-ssh-proxy-public
CF router                                 router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.

10.7 Deployment Configuration

Use this example scf-config-values.yaml as a template for your configuration.

### example deployment configuration file
### scf-config-values.yaml

env:
  DOMAIN: example.com
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793

  GARDEN_ROOTFS_DRIVER: overlay-xfs
  GARDEN_APPARMOR_PROFILE: ""

services:
  loadbalanced: true

kube:
  storage_class:
    # Change the value to the storage class you use
    persistent: "gp2"
    shared: "gp2"

  # The default registry images are fetched from
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used; it is selected by default.

For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important
Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.

10.8 Deploying Cloud Application Platform

The following list provides an overview of Helm commands to complete the deployment. Included are links to detailed descriptions.

  1. Download the SUSE Kubernetes charts repository (Section 10.9, “Add the Kubernetes Charts Repository”)

  2. Deploy uaa, then create appropriate CNAMEs (Section 10.10, “Deploy uaa”)

  3. Copy the uaa secret and certificate to the scf namespace, deploy scf, create CNAMEs (Section 10.11, “Deploy scf”)

10.9 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME            URL
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts
suse            https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.17.1          1.4             A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    2.4.0           2.4.0           A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0           1.0.0           A Helm chart for Stratos Metrics
suse/minibroker                 0.2.0                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.17.1          1.4             A Helm chart for SUSE UAA

10.10 Deploy uaa

Use Helm to deploy the uaa (User Account and Authentication) server. You may create your own release --name:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

  • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press CtrlC to exit the watch command.

When uaa is finished deploying, create CNAMEs for the required domains. (See Section 10.6, “DNS Configuration”.) Use kubectl to find the service host names. These host names include the elb sub-domain, so use this to get the correct results:

tux > kubectl get services --namespace uaa | grep elb
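
If your domain is hosted in Amazon Route 53, the following is a hedged sketch of creating one of the required CNAMEs from the ELB host name returned above; the hosted zone ID and ELB host name are placeholders:

tux > aws route53 change-resource-record-sets --hosted-zone-id HOSTED_ZONE_ID \
 --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
 "Name": "uaa.example.com", "Type": "CNAME", "TTL": 300,
 "ResourceRecords": [{"Value": "ELB_HOST_NAME"}]}}]}'

Repeat for *.uaa.example.com and, after deploying scf, for the scf domains listed in Section 10.6, “DNS Configuration”.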

10.11 Deploy scf

Before deploying scf, ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, pass your uaa secret and certificate to scf, then use Helm to deploy scf:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

  • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press CtrlC to exit the watch command.

Before traffic is routed to EKS load balancers, health checks are performed on them. These health checks require access to ports on the tcp-router service, which are only opened by request via cf map-route and other associated commands.

Use kubectl patch to patch the tcp-router service to expose a port to the EKS load balancer so the health checks can be completed.

tux > kubectl patch service tcp-router-tcp-router-public --namespace scf \
--type strategic \
--patch '{"spec": {"ports": [{"name": "healthcheck", "port": 8080}]}}'

Remove port 8080 from the load balancer's listeners list. This ensures the port is not exposed to external traffic while still allowing it to serve as the port for health checks internally.

tux > aws elb delete-load-balancer-listeners \
--load-balancer-name  healthcheck   \
--load-balancer-ports 8080

The health checks should now operate correctly. When the status shows RUNNING for all of the scf pods, create CNAMEs for the required domains. (See Section 10.6, “DNS Configuration”) Use kubectl to find the service host names. These host names include the elb sub-domain, so use this to get the correct results:

tux > kubectl get services --namespace scf | grep elb

10.12 Deploying and Using the AWS Service Broker

The AWS Service Broker provides integration of native AWS services with SUSE Cloud Application Platform.

10.12.1 Prerequisites

Deploying and using the AWS Service Broker requires the following:

10.12.2 Setup

  1. Create the required DynamoDB table where the AWS service broker will store its data. This example creates a table named awssb:

    tux > aws dynamodb create-table \
    		--attribute-definitions \
    			AttributeName=id,AttributeType=S \
    			AttributeName=userid,AttributeType=S \
    			AttributeName=type,AttributeType=S \
    		--key-schema \
    			AttributeName=id,KeyType=HASH \
    			AttributeName=userid,KeyType=RANGE \
    		--global-secondary-indexes \
    			'IndexName=type-userid-index,KeySchema=[{AttributeName=type,KeyType=HASH},{AttributeName=userid,KeyType=RANGE}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[id,userid,type,locked]},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' \
    		--provisioned-throughput \
    			ReadCapacityUnits=5,WriteCapacityUnits=5 \
    		--region ${AWS_REGION} --table-name awssb
  2. Wait until the table has been created. When it is ready, the TableStatus will change to ACTIVE. Check the status using the describe-table command:

    tux > aws dynamodb describe-table --table-name awssb

    (For more information about the describe-table command, see https://docs.aws.amazon.com/cli/latest/reference/dynamodb/describe-table.html.)

  3. Set a name for the Kubernetes namespace you will install the service broker to. This name will also be used in the service broker URL:

    tux > BROKER_NAMESPACE=aws-sb
  4. Create a server certificate for the service broker:

    1. Create and use a separate directory to avoid conflicts with other CA files:

      tux > mkdir /tmp/aws-service-broker-certificates && cd $_
    2. Get the CA certificate:

      tux > kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert}' | base64 -di > ca.pem
    3. Get the CA private key:

      tux > kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert-key}' | base64 -di > ca.key
    4. Create a signing request. Replace BROKER_NAMESPACE with the namespace assigned in Step 3:

      tux > openssl req -newkey rsa:4096 -keyout tls.key.encrypted -out tls.req -days 365 \
        -passout pass:1234 \
        -subj '/CN=aws-servicebroker.'${BROKER_NAMESPACE} -batch \
        </dev/null
    5. Decrypt the generated broker private key:

      tux > openssl rsa -in tls.key.encrypted -passin pass:1234 -out tls.key
    6. Sign the request with the CA certificate:

      tux > openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in tls.req -out tls.pem
  5. Install the AWS service broker as documented at https://github.com/awslabs/aws-servicebroker/blob/master/docs/getting-started-k8s.md. Skip the installation of the Kubernetes Service Catalog. While installing the AWS Service Broker, make sure to update the Helm chart version (the version as of this writing is 1.0.0-beta.3). For the broker install, pass in a value indicating the Cluster Service Broker should not be installed (for example --set deployClusterServiceBroker=false). Ensure an account and role with adequate IAM rights is chosen (see Section 10.12.1, “Prerequisites”):

    tux > helm install aws-sb/aws-servicebroker \
    	     --name aws-servicebroker \
    	     --namespace ${BROKER_NAMESPACE} \
    	     --version 1.0.0-beta.3 \
    	     --set aws.secretkey=$aws_access_key \
    	     --set aws.accesskeyid=$aws_key_id \
    	     --set deployClusterServiceBroker=false \
    	     --set tls.cert="$(base64 -w0 tls.pem)" \
    	     --set tls.key="$(base64 -w0 tls.key)" \
    	     --set-string aws.targetaccountid=${AWS_TARGET_ACCOUNT_ID} \
    	     --set aws.targetrolename=${AWS_TARGET_ROLE_NAME} \
    	     --set aws.tablename=awssb \
    	     --set aws.vpcid=$vpcid \
    	     --set aws.region=$aws_region \
    	     --set authenticate=false
  6. Log into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
    tux > cf login -u admin -p password
    tux > cf create-org org
    tux > cf create-space space
    tux > cf target -o org -s space
  7. Create a service broker in scf. Note the name of the service broker should be the same as the one specified for the --name flag in the helm install step (for example aws-servicebroker). Note that the username and password parameters are only used as dummy values to pass to the cf command:

    tux > cf create-service-broker aws-servicebroker username password  https://aws-servicebroker.${BROKER_NAMESPACE}
  8. Verify the service broker has been registered:

    tux > cf service-brokers
  9. List the available service plans:

    tux > cf service-access
  10. Enable access to a service. This example uses the -p to enable access to a specific service plan. See https://github.com/awslabs/aws-servicebroker/blob/master/templates/rdsmysql/template.yaml for information about all available services and their associated plans:

    tux > cf enable-service-access rdsmysql -p custom
  11. Create a service instance. As an example, a custom MySQL instance can be created as:

    tux > cf create-service rdsmysql custom mysql-instance-name -c '{
      "AccessCidr": "192.0.2.24/32",
      "BackupRetentionPeriod": 0,
      "MasterUsername": "master",
      "DBInstanceClass": "db.t2.micro",
      "PubliclyAccessible": "true",
      "region": "${AWS_REGION}",
      "StorageEncrypted": "false",
      "VpcId": "${AWS_VPC}",
      "target_account_id": "${AWS_TARGET_ACCOUNT_ID}",
      "target_role_name": "${AWS_TARGET_ROLE_NAME}"
    }'

10.12.3 Cleanup

When the AWS Service Broker and its services are no longer required, perform the following steps:

  1. Unbind any applications using any service instances then delete the service instance:

    tux > cf unbind-service my_app mysql-instance-name
    tux > cf delete-service mysql-instance-name
  2. Delete the service broker in scf:

    tux > cf delete-service-broker aws-servicebroker
  3. Delete the deployed Helm chart and the namespace:

    tux > helm delete --purge aws-servicebroker
    tux > kubectl delete namespace ${BROKER_NAMESPACE}
  4. The manually created DynamoDB table will need to be deleted as well:

    tux > aws dynamodb delete-table --table-name awssb --region ${AWS_REGION}

10.13 Resizing Persistent Volumes

Depending on your workloads, the default persistent volume (PV) sizes of your Cloud Application Platform deployment may be insufficient. This section describes the process to resize a persistent volume in your Cloud Application Platform deployment by modifying the persistent volume claim (PVC) object.

Note that PVs can only be expanded; they cannot be shrunk.

10.13.1 Prerequisites

The following are required in order to use the process below to resize a PV.

10.13.2 Example Procedure

The following describes the process required to resize a PV, using the PV and PVC associated with uaa's mysql as an example.

  1. Find the storage class and PVC associated with the PV being expanded. In this example, the storage class is called persistent and the PVC is called mysql-data-mysql-0.

    tux > kubectl get persistentvolume
  2. Verify whether the storage class has allowVolumeExpansion set to true.

    tux > kubectl get storageclass persistent --output json

    If it does not, run the following command to update the storage class.

    tux > kubectl patch storageclass persistent \
    --patch '{"allowVolumeExpansion": true}'
  3. Cordon all nodes in your cluster.

    1. tux > export VM_NODES=$(kubectl get nodes -o name)
    2. tux > for i in $VM_NODES
       do
        kubectl cordon `echo "${i//node\/}"`
      done
  4. Increase the storage size of the PVC object associated with the PV being expanded.

    tux > kubectl patch persistentvolumeclaim --namespace uaa mysql-data-mysql-0 \
    --patch '{"spec": {"resources": {"requests": {"storage": "25Gi"}}}}'
  5. List all pods that use the PVC, in any namespace.

    tux > kubectl get pods --all-namespaces --output=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec |  select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'
  6. Restart all pods that use the PVC.

    tux > kubectl delete pod mysql-0 --namespace uaa
  7. Run kubectl get persistentvolumeclaim and monitor the status.conditions field.

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'

    When the following is observed, press CtrlC to exit the watch command and proceed to the next step.

    • status.conditions.message is

      message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
    • status.conditions.type is

      type: FileSystemResizePending
  8. Uncordon all nodes in your cluster.

    tux > for i in $VM_NODES
     do
      kubectl uncordon `echo "${i//node\/}"`
    done
  9. Wait for the resize to finish. Verify the storage size values match for status.capacity.storage and spec.resources.requests.storage.

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'
  10. Also verify the storage size in the pod itself is updated.

    tux > kubectl --namespace uaa exec mysql-0 -- df --human-readable

11 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)

Important
Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Google Kubernetes Engine (GKE). This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on GKE using its integrated network load balancers. See https://cloud.google.com/kubernetes-engine/ for more information on GKE.

11.1 Prerequisites

The following are required to complete the deployment of SUSE Cloud Application Platform on GKE:

11.2 Creating a GKE cluster

In order to deploy SUSE Cloud Application Platform, create a cluster that:

  • Is a Zonal, Regional, or Private type. Do not use an Alpha cluster.

  • Uses Ubuntu as the host operating system. If using the gcloud CLI, include --image-type=UBUNTU during the cluster creation.

  • Allows access to all Cloud APIs (in order for storage to work correctly).

  • Has at least 3 nodes of machine type n1-standard-4. If using the gcloud CLI, include --machine-type=n1-standard-4 and --num-nodes=3 during the cluster creation. For details, see https://cloud.google.com/compute/docs/machine-types#standard_machine_types.

  • Uses a minimum kernel version of 3.19 on its nodes.

  • Has at least 80 GB local storage per node.

  • (Optional) Uses preemptible nodes to keep costs low. For details, see https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms.

  1. Set a name for your cluster:

    tux > export CLUSTER_NAME="cap"
  2. Set the zone for your cluster:

    tux > export CLUSTER_ZONE="us-west1-a"
  3. Set the number of nodes for your cluster:

    tux > export NODE_COUNT=3
  4. Create the cluster:

    tux > gcloud container clusters create ${CLUSTER_NAME} \
    --image-type=UBUNTU \
    --machine-type=n1-standard-4 \
    --zone ${CLUSTER_ZONE} \
    --num-nodes=$NODE_COUNT \
    --no-enable-basic-auth \
    --no-issue-client-certificate \
    --no-enable-autoupgrade \
    --metadata disable-legacy-endpoints=true
    • Specify the --no-enable-basic-auth and --no-issue-client-certificate flags so that kubectl does not use basic or client certificate authentication, but uses OAuth Bearer Tokens instead. Configure the flags to suit your desired authentication mechanism.

    • Specify --no-enable-autoupgrade to disable automatic node upgrades, as automatically upgraded nodes would lose the swap accounting changes made in Section 11.3, “Enable Swap Accounting”.

    • Disable legacy metadata server endpoints using --metadata disable-legacy-endpoints=true as a best practice as indicated in https://cloud.google.com/compute/docs/storing-retrieving-metadata#default.

11.3 Enable Swap Accounting

Swap accounting is required by SUSE Cloud Application Platform, but is not enabled by default. After your cluster has been created, enable swap accounting using the steps below. Ensure your gcloud CLI is configured correctly.

  1. Get the node instance names:

    tux > instance_names=$(gcloud compute instances list --filter=name~${CLUSTER_NAME:?required} --format json | jq --raw-output '.[].name')
  2. Set the correct zone:

    tux > gcloud config set compute/zone ${CLUSTER_ZONE:?required}
  3. Update the kernel command line and GRUB then restart the virtual machines:

    tux > echo "$instance_names" | xargs -i{} gcloud compute ssh {} -- "sudo sed --in-place 's/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0\"/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0 cgroup_enable=memory swapaccount=1\"/g' /etc/default/grub.d/50-cloudimg-settings.cfg && sudo update-grub && sudo systemctl reboot -i"

In the event swap accounting is not successfully enabled on all nodes using the previous procedure, perform the following on each node that requires swap accounting to be enabled:

  1. ssh into the node.

    tux > gcloud compute ssh INSTANCE
  2. When inside the node, run the command to update the kernel command line and GRUB, then restart the node.

    tux > sudo sed --in-place 's/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0\"/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0 cgroup_enable=memory swapaccount=1\"/g' /etc/default/grub.d/50-cloudimg-settings.cfg && sudo update-grub && sudo systemctl reboot -i
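
To confirm swap accounting is active on a node after the reboot, you can check the kernel command line. This is an optional verification step, not part of the official procedure, and assumes the node is reachable with gcloud compute ssh:

tux > gcloud compute ssh INSTANCE -- "grep --only-matching swapaccount=1 /proc/cmdline"

If the command prints swapaccount=1, the node booted with swap accounting enabled.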

11.4 Get kubeconfig File

Get the kubeconfig file for your cluster.

tux > gcloud container clusters get-credentials --zone ${CLUSTER_ZONE:?required} ${CLUSTER_NAME:?required}

11.5 Install Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm, can be installed on your remote administration computer by referring to the documentation at https://docs.helm.sh/using_helm/#installing-helm. Cloud Application Platform requires Helm 2 (including its minor releases). Helm 3 is not supported.

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. Follow the instructions at https://helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account and ensure your installation is appropriately secured according to your requirements as described in https://helm.sh/docs/using_helm/#securing-your-helm-installation.
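
As a minimal sketch based on the upstream Helm 2 instructions, Tiller can be installed with a dedicated service account as shown below. Binding the cluster-admin role is convenient but broad; restrict it according to your security requirements:

tux > kubectl create serviceaccount tiller --namespace kube-system

tux > kubectl create clusterrolebinding tiller-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller

tux > helm init --service-account tiller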

11.6 Default Storage Class

This example creates a pd-ssd storage class for your cluster. Create a file named storage-class.yaml with the following:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: persistent
parameters:
  type: pd-ssd
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
allowVolumeExpansion: true

Create the new storage class using the manifest defined:

tux > kubectl create --filename storage-class.yaml
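
Optionally, verify that the storage class exists and is marked as the default; this check is not part of the original procedure:

tux > kubectl get storageclass

The output should list persistent with a (default) marker next to its name.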

Specify the newly created storage class, called persistent, as the value for kube.storage_class.persistent in your deployment configuration file, like this example:

kube:
  storage_class:
    persistent: "persistent"

See Section 11.8, “Deployment Configuration” for a complete example deployment configuration file, scf-config-values.yaml.

11.7 DNS Configuration

This section provides an overview of the domain and sub-domains that require A records. The process is described in more detail in the deployment section.

The following table lists the required domain and sub-domains, using example.com as the example domain:

Domains              Services
uaa.example.com      uaa-uaa-public
*.uaa.example.com    uaa-uaa-public
example.com          router-gorouter-public
*.example.com        router-gorouter-public
tcp.example.com      tcp-router-tcp-router-public
ssh.example.com      diego-ssh-ssh-proxy-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions           Kubernetes service names
User Account and Authentication (uaa)     uaa-uaa-public
Cloud Foundry (CF) TCP routing service    tcp-router-tcp-router-public
CF application SSH access                 diego-ssh-ssh-proxy-public
CF router                                 router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.
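
Once uaa and scf are deployed (see Section 11.10), all four services and their load balancer IP addresses can be listed in a single step. This is an optional shortcut, not part of the original procedure:

tux > kubectl get services --all-namespaces | grep public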

11.8 Deployment Configuration

It is not necessary to create any DNS records before deploying uaa. Instead, after uaa is running you will find the load balancer IP address that was automatically created during deployment, and then create the necessary records.

The following file, scf-config-values.yaml, provides a complete example deployment configuration. Enter the fully-qualified domain name (FQDN) that you intend to use for DOMAIN and UAA_HOST.

### example deployment configuration file
### scf-config-values.yaml

env:
  # the FQDN of your domain
  DOMAIN: example.com
  # the UAA prefix is required
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: "overlay-xfs"

kube:
  storage_class:
    persistent: "persistent"
  auth: rbac

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

services:
  loadbalanced: true

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used; it is selected by default.

For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.
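
One possible way to generate strong values for CLUSTER_ADMIN_PASSWORD and UAA_ADMIN_CLIENT_SECRET is with openssl. This is only a suggestion; any method that produces strong, random secrets is acceptable:

tux > openssl rand -base64 32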

11.9 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.17.1          1.4             A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    2.4.0           2.4.0           A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0           1.0.0           A Helm chart for Stratos Metrics
suse/minibroker                 0.2.0                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.17.1          1.4             A Helm chart for SUSE UAA

11.10 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform on Google GKE, and how to configure your DNS records.

11.10.1 Deploy uaa

Use Helm to deploy the uaa server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

  • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl-C to exit the watch command.

Once the uaa deployment completes, a uaa service will be exposed on a load balancer public IP. The name of the service ends with -public. In the following example, the uaa-uaa-public service is exposed on 35.197.11.229 and port 2793.

tux > kubectl get services --namespace uaa | grep public
uaa-uaa-public    LoadBalancer   10.0.67.56     35.197.11.229  2793:30206/TCP

Use the DNS service of your choice to set up DNS A records for the service from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the uaa-uaa-public service, map the following domains:

    uaa.DOMAIN

    Using the example values, an A record for uaa.example.com that points to 35.197.11.229 would be created.

    *.uaa.DOMAIN

    Using the example values, an A record for *.uaa.example.com that points to 35.197.11.229 would be created.

Use curl to verify you are able to connect to the uaa OAuth server on the DNS name configured:

tux > curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{"issuer":"https://uaa.example.com:2793/oauth/token",
"authorization_endpoint":"https://uaa.example.com:2793
/oauth/authorize","token_endpoint":"https://uaa.example.com:2793/oauth/token"

11.10.2 Deploy scf

Before deploying scf, ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, pass your uaa secret and certificate to scf, then use Helm to deploy scf:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

  • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl-C to exit the watch command.

Once the deployment completes, a number of public services will be set up using load balancers that have been configured with corresponding load balancing rules and probes, and with the correct ports opened in the firewall settings.

List the services that have been exposed on the load balancer public IP. The names of these services end in -public:

tux > kubectl get services --namespace scf | grep public
diego-ssh-ssh-proxy-public                  LoadBalancer   10.23.249.196  35.197.32.244  2222:31626/TCP                                                                                                                                    1d
router-gorouter-public                      LoadBalancer   10.23.248.85   35.197.18.22   80:31213/TCP,443:30823/TCP,4443:32200/TCP                                                                                                         1d
tcp-router-tcp-router-public                LoadBalancer   10.23.241.17   35.197.53.74   20000:30307/TCP,20001:30630/TCP,20002:32524/TCP,20003:32344/TCP,20004:31514/TCP,20005:30917/TCP,20006:31568/TCP,20007:30701/TCP,20008:31810/TCP   1d

Use the DNS service of your choice to set up DNS A records for the services from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the router-gorouter-public service, map the following domains:

    DOMAIN

    Using the example values, an A record for example.com that points to 35.197.18.22 would be created.

    *.DOMAIN

    Using the example values, an A record for *.example.com that points to 35.197.18.22 would be created.

  • For the diego-ssh-ssh-proxy-public service, map the following domain:

    ssh.DOMAIN

    Using the example values, an A record for ssh.example.com that points to 35.197.32.244 would be created.

  • For the tcp-router-tcp-router-public service, map the following domain:

    tcp.DOMAIN

    Using the example values, an A record for tcp.example.com that points to 35.197.53.74 would be created.

Your load balanced deployment of Cloud Application Platform is now complete. Verify you can access the API endpoint:

tux > cf api --skip-ssl-validation https://api.example.com
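
After targeting the API endpoint, you would typically log in as the cluster administrator using the CLUSTER_ADMIN_PASSWORD from your scf-config-values.yaml. The following assumes the example credentials used in this chapter:

tux > cf login -u admin -p password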

11.11 Resizing Persistent Volumes

Depending on your workloads, the default persistent volume (PV) sizes of your Cloud Application Platform deployment may be insufficient. This section describes the process to resize a persistent volume in your Cloud Application Platform deployment, by modifying the persistent volumes claim (PVC) object.

Note that PVs can only be expanded; they cannot be shrunk.

11.11.1 Prerequisites

The following are required in order to use the process below to resize a PV.

11.11.2 Example Procedure

The following describes the process required to resize a PV, using the PV and PVC associated with uaa's mysql as an example.

  1. Find the storage class and PVC associated with the PV being expanded. In this example, the storage class is called persistent and the PVC is called mysql-data-mysql-0.

    tux > kubectl get persistentvolume
  2. Verify whether the storage class has allowVolumeExpansion set to true.

    tux > kubectl get storageclass persistent --output json

    If it does not, run the below command to update the storage class.

    tux > kubectl patch storageclass persistent \
    --patch '{"allowVolumeExpansion": true}'
  3. Cordon all nodes in your cluster.

    1. tux > export VM_NODES=$(kubectl get nodes -o name)
    2. tux > for i in $VM_NODES
       do
        kubectl cordon `echo "${i//node\/}"`
      done
  4. Increase the storage size of the PVC object associated with the PV being expanded.

    tux > kubectl patch persistentvolumeclaim --namespace uaa mysql-data-mysql-0 \
    --patch '{"spec": {"resources": {"requests": {"storage": "25Gi"}}}}'
  5. List all pods that use the PVC, in any namespace.

    tux > kubectl get pods --all-namespaces --output=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec |  select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'
  6. Restart all pods that use the PVC.

    tux > kubectl delete pod mysql-0 --namespace uaa
  7. Run kubectl describe persistentvolumeclaim or watch the PVC's JSON output, and monitor the status.conditions field.

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'

    When the following is observed, press Ctrl-C to exit the watch command and proceed to the next step.

    • status.conditions.message is

      message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
    • status.conditions.type is

      type: FileSystemResizePending
  8. Uncordon all nodes in your cluster.

    tux > for i in $VM_NODES
     do
      kubectl uncordon `echo "${i//node\/}"`
    done
  9. Wait for the resize to finish. Verify the storage size values match for status.capacity.storage and spec.resources.requests.storage.

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'
  10. Also verify the storage size in the pod itself is updated.

    tux > kubectl --namespace uaa exec mysql-0 -- df --human-readable

11.12 Expanding Capacity of a Cloud Application Platform Deployment on Google GKE

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE) and have a running Cloud Application Platform deployment on Google GKE. The instructions below will use environment variables defined in Section 11.2, “Creating a GKE cluster”.

  1. Get the creation timestamp of the most recently created node in the cluster.

    tux > RECENT_VM_NODE=$(gcloud compute instances list --filter=name~${CLUSTER_NAME:?required} --format json | jq --raw-output '[sort_by(.creationTimestamp) | .[].creationTimestamp ] | last | .[0:19] | strptime("%Y-%m-%dT%H:%M:%S") | mktime')
  2. Increase the Kubernetes node count in the cluster. Replace the example value with the number of nodes required for your workload.

    tux > gcloud container clusters resize $CLUSTER_NAME \
    --num-nodes 4
  3. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  4. Enable swap accounting on the new nodes.

    1. Get the names of the new node instances.

      tux > export NEW_VM_NODES=$(gcloud compute instances list --filter=name~${CLUSTER_NAME:?required} --format json | jq --raw-output 'sort_by(.creationTimestamp) | .[] | if (.creationTimestamp | .[0:19] | strptime("%Y-%m-%dT%H:%M:%S") | mktime) > '$RECENT_VM_NODE' then .name else empty end')
    2. Update the kernel command line and GRUB then restart the virtual machines.

      tux > echo "$NEW_VM_NODES" | xargs -i{} gcloud compute ssh {} -- "sudo sed --in-place 's/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0\"/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0 cgroup_enable=memory swapaccount=1\"/g' /etc/default/grub.d/50-cloudimg-settings.cfg && sudo update-grub && sudo systemctl reboot -i"
    3. In the event swap accounting is not successfully enabled on all nodes using the previous step, perform the following on each node that requires swap accounting to be enabled.

      1. ssh into the node.

        tux > gcloud compute ssh INSTANCE
      2. When inside the node, run the command to update the kernel command line and GRUB, then restart the node.

        tux > sudo sed --in-place 's/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0\"/GRUB_CMDLINE_LINUX_DEFAULT=\"console=ttyS0 net.ifnames=0 cgroup_enable=memory swapaccount=1\"/g' /etc/default/grub.d/50-cloudimg-settings.cfg && sudo update-grub && sudo systemctl reboot -i
    4. Verify the new nodes are in a Ready state before proceeding.

      tux > kubectl get nodes
  5. Add or update the following in your scf-config-values.yaml file to increase the number of diego-cell in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

    sizing:
      diego_cell:
        count: 4
  6. Pass your uaa secret and certificate to scf.

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  7. Perform a helm upgrade to apply the change.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}"
  8. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace scf'
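
Once the upgrade has settled, you can confirm the new diego-cell instance count with a quick check. This assumes the cells follow the default diego-cell-N naming:

tux > kubectl get pods --namespace scf | grep diego-cell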

12 Installing SUSE Cloud Application Platform on OpenStack

You can deploy a SUSE Cloud Application Platform on SUSE CaaS Platform stack on OpenStack. This chapter describes how to deploy a small testing and development instance with one Kubernetes master and two worker nodes, using Terraform to automate the deployment. This does not create a production deployment; for best performance, production deployments should run on bare metal.

Important: README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

12.1 Prerequisites

The following prerequisites should be met before attempting to deploy SUSE Cloud Application Platform on OpenStack. The memory and disk space requirements are minimums, and may need to be larger according to your workloads.

  • 8 GB of memory per CaaS Platform dashboard and Kubernetes master nodes

  • 16 GB of memory per Kubernetes worker

  • 40 GB disk space per CaaS Platform dashboard and Kubernetes master nodes

  • 80 GB disk space per Kubernetes worker

  • Ensure nodes use a minimum kernel version of 3.19.

  • A SUSE Customer Center account for downloading CaaS Platform. Get SUSE-CaaS-Platform-2.0-KVM-and-Xen.x86_64-1.0.0-GM.qcow2, which has been tested on OpenStack.

  • Download the openrc.sh file for your OpenStack account

12.2 Create a New OpenStack Project

You may use an existing OpenStack project, or run the following commands to create a new project with the necessary configuration for SUSE Cloud Application Platform.

tux > openstack project create --domain default --description "CaaS Platform Project" caasp
tux > openstack role add --project caasp --user admin admin

Create an OpenStack network plus a subnet for CaaS Platform (for example, caasp-net), and add a router to the external (floating) network:

tux > openstack network create caasp-net
tux > openstack subnet create caasp_subnet --network caasp-net \
--subnet-range 10.0.2.0/24
tux > openstack router create caasp-net-router
tux > openstack router set caasp-net-router --external-gateway floating
tux > openstack router add subnet caasp-net-router caasp_subnet
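
Optionally, verify that the network and router were created as expected before continuing; these checks are not part of the original procedure:

tux > openstack network list
tux > openstack router show caasp-net-router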

Upload your CaaS Platform image to your OpenStack account:

tux > openstack image create \
  --file SUSE-CaaS-Platform-2.0-KVM-and-Xen.x86_64-1.0.0-GM.qcow2 \
  SUSE-CaaS-Platform-2.0

Create a security group with the rules needed for CaaS Platform:

tux > openstack security group create cap --description "Allow CAP traffic"
tux > openstack security group rule create cap --protocol any --dst-port any --ethertype IPv4 --egress
tux > openstack security group rule create cap --protocol any --dst-port any --ethertype IPv6 --egress
tux > openstack security group rule create cap --protocol tcp --dst-port 20000:20008 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 443:443 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 2793:2793 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 4443:4443 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 80:80 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 2222:2222 --remote-ip 0.0.0.0/0

Clone the Terraform script from GitHub:

tux > git clone git@github.com:kubic-project/automation.git
tux > cd automation/caasp-openstack-terraform

Edit the openstack.tfvars file. Use the names of your OpenStack objects, for example:

image_name = "SUSE-CaaS-Platform-2.0"
internal_net = "caasp-net"
external_net = "floating"
admin_size = "m1.large"
master_size = "m1.large"
masters = 1
worker_size = "m1.xlarge"
workers = 2

Initialize Terraform:

tux > terraform init

12.3 Deploy SUSE Cloud Application Platform

Source your openrc.sh file, set the project, and deploy CaaS Platform:

tux > . openrc.sh
tux > export OS_PROJECT_NAME='caasp'
tux > ./caasp-openstack apply

Wait for a few minutes until all systems are up and running, then view your installation:

tux > openstack server list

Add your cap security group to all CaaS Platform workers:

tux > openstack server add security group caasp-worker0 cap
tux > openstack server add security group caasp-worker1 cap

If you need to log into your new nodes, log in as root using the SSH key in the automation/caasp-openstack-terraform/ssh directory.

12.4 Bootstrapping SUSE Cloud Application Platform

The following examples use the xip.io wildcard DNS service. You may use your own DNS/DHCP services that you have set up in OpenStack in place of xip.io.

For more information about networking requirements, see Section 5.1, “Prerequisites”.

  • Point your browser to the IP address of the CaaS Platform admin node, and create a new admin user login

  • Replace the default IP address or domain name of the Internal Dashboard FQDN/IP on the Initial CaaS Platform configuration screen with the internal IP address of the CaaS Platform admin node

  • Check the Install Tiller checkbox, then click the Next button

  • Terraform automatically creates all of your worker nodes, according to the number you configured in openstack.tfvars, so click Next to skip Bootstrap your CaaS Platform

  • On the Select nodes and roles screen click Accept all nodes, click to define your master and worker nodes, then click Next

  • For the External Kubernetes API FQDN, use the public (floating) IP address of the CaaS Platform master and append the .xip.io domain suffix

  • For the External Dashboard FQDN use the public (floating) IP address of the CaaS Platform admin node, and append the .xip.io domain suffix

12.5 Growing the Root Filesystem

If the root filesystem on your worker nodes is smaller than the OpenStack virtual disk, use these commands on the worker nodes to grow the filesystems to match:

tux > growpart /dev/vda 3
tux > btrfs filesystem resize max /.snapshots
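
Afterwards, you can verify that the root filesystem now spans the full virtual disk; this optional check assumes the commands above completed without errors:

tux > df --human-readable /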

13 Setting Up a Registry for an Air Gapped Environment

Important: README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

Cloud Application Platform, which consists of Docker images, is deployed to a Kubernetes cluster through Helm. These images are hosted on a Docker registry at registry.suse.com. In an air gapped environment, registry.suse.com will not be accessible. You will need to create a registry and populate it with the images used by Cloud Application Platform.

This chapter describes how to load your registry with the necessary images to deploy Cloud Application Platform in an air gapped environment.

13.1 Prerequisites

The following prerequisites are required:

13.2 Mirror Images to Registry

All the Cloud Application Platform Helm charts include an imagelist.txt file that lists all images from the registry.suse.com registry under the cap organization. These images can be mirrored to a local registry with the following script.

Replace the value of MIRROR with your registry's domain.

#!/bin/bash

MIRROR=registry.home

set -ex

function mirror {
    CHART=$1
    CHARTDIR=$(mktemp -d)
    helm fetch suse/$1 --untar --untardir=${CHARTDIR}
    IMAGES=$(cat ${CHARTDIR}/**/imagelist.txt)
    for IMAGE in ${IMAGES}; do
        echo $IMAGE
        docker pull registry.suse.com/cap/$IMAGE
        docker tag registry.suse.com/cap/$IMAGE $MIRROR/cap/$IMAGE
        docker push $MIRROR/cap/$IMAGE
    done
    docker save -o ${CHART}-images.tar.gz \
           $(perl -E "say qq(registry.suse.com/cap/\$_) for @ARGV" ${IMAGES})
    rm -r ${CHARTDIR}
}

mirror cf
mirror uaa
mirror console
mirror metrics
mirror cf-usb-sidecar-mysql
mirror cf-usb-sidecar-postgres

The script above both mirrors the images to a local registry and saves them in local tarballs that can be restored with docker load --input CHART-images.tar.gz (for example, cf-images.tar.gz). In general, only one of these mechanisms is needed.

Also take note of the following regarding the script provided above.

  • The minibroker chart is currently not supported as it does not use a tagged image, but minibroker:latest. It will use a tagged image in the next release and will be supported at that time.

  • The nginx-ingress chart is not supported by this mechanism because it is not part of the cap organization (and cannot be configured with the kube.registry.hostname setting at deploy time either).

    Instead, manually parse the Helm chart for the image names and perform the docker pull, docker tag, and docker push steps on them yourself, as sketched below.
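
    The following is a rough sketch of that manual handling. The UPSTREAM_IMAGE, IMAGE, and TAG values are placeholders that must be taken from the chart you downloaded, and registry.home is the example mirror registry used above:

    tux > helm fetch suse/nginx-ingress --untar --untardir=/tmp/nginx-ingress
    tux > grep --recursive "image:" /tmp/nginx-ingress
    tux > docker pull UPSTREAM_IMAGE:TAG
    tux > docker tag UPSTREAM_IMAGE:TAG registry.home/IMAGE:TAG
    tux > docker push registry.home/IMAGE:TAG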

Before deploying Cloud Application Platform using helm install, ensure the following section of your scf-config-values.yaml has been updated to point to your registry instead of registry.suse.com.

kube:
  registry:
    # example registry domain
    hostname: "registry.home"
    username: ""
    password: ""
  organization: "cap"

Part III SUSE Cloud Application Platform Administration

14 Upgrading SUSE Cloud Application Platform

uaa, scf, and Stratos together make up a SUSE Cloud Application Platform release. Maintenance updates are delivered as container images from the SUSE registry and applied with Helm.

15 Configuration Changes

After the initial deployment of Cloud Application Platform, any changes made to your Helm chart values, whether through your scf-config-values.yaml file or directly using Helm's --set flag, are applied using the helm upgrade command.

16 Creating Admin Users

This chapter provides an overview on how to create additional administrators for your Cloud Application Platform cluster.

17 Managing Passwords

The various components of SUSE Cloud Application Platform authenticate to each other using passwords that are automatically managed by the Cloud Application Platform secrets-generator. The only passwords managed by the cluster administrator are passwords for human users. The administrator may create…

18 Cloud Controller Database Secret Rotation

The Cloud Controller Database (CCDB) encrypts sensitive information like passwords. By default, the encryption key is generated by SCF, for details see https://github.com/SUSE/scf/blob/2d095a71008c33a23ca39d2ab9664e5602f8707e/container-host-files/etc/scf/config/role-manifest.yml#L1656-L1662. If it i…

19 Backup and Restore
20 Provisioning Services with Minibroker

Minibroker is an OSBAPI compliant broker created by members of the Microsoft Azure team. It provides a simple method to provision service brokers on Kubernetes clusters.

21 Setting Up and Using a Service Broker

The Open Service Broker API provides your SUSE Cloud Application Platform applications with access to external dependencies and platform-level capabilities, such as databases, filesystems, external repositories, and messaging systems. These resources are called services. Services are created, used, …

22 App-AutoScaler

The App-AutoScaler service is used for automatically managing an application's instance count when deployed on SUSE Cloud Foundry. The scaling behavior is determined by a set of criteria defined in a policy (See Section 22.5, “Policies”).

23 Logging

There are two types of logs in a deployment of Cloud Application Platform, applications logs and component logs.

24 Managing Certificates

This chapter describes the process to deploy SUSE Cloud Application Platform installed with certificates signed by an external Certificate Authority.

25 Integrating CredHub with SUSE Cloud Application Platform

SUSE Cloud Application Platform supports CredHub integration. You should already have a working CredHub instance and a CredHub service on your cluster; then apply the steps in this chapter to connect SUSE Cloud Application Platform.

26 Offline Buildpacks

Buildpacks are used to construct the environment needed to run your applications, including any required runtimes or frameworks as well as other dependencies. When you deploy an application, a buildpack can be specified or automatically detected by cycling through all available buildpacks to find on…

27 Custom Application Domains

In a standard SUSE Cloud Foundry deployment, applications will use the same domain as the one configured in your scf-config-values.yaml for SCF. For example, if DOMAIN is set as example.com in your scf-config-values.yaml and you deploy an application called myapp then the application's URL will be m…

28 Managing Nproc Limits of Pods

Nproc is the maximum number of processes allowed per user. In the case of scf, the nproc value applies to the vcap user. In scf, there are parameters, kube.limits.nproc.soft and kube.limits.nproc.hard, to configure a soft nproc limit and a hard nproc limit for processes spawned by the vcap user in s…

14 Upgrading SUSE Cloud Application Platform

uaa, scf, and Stratos together make up a SUSE Cloud Application Platform release. Maintenance updates are delivered as container images from the SUSE registry and applied with Helm.

For additional upgrade information, always review the release notes published at https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/.

14.1 Important Considerations

Before performing an upgrade, be sure to take note of the following:

Perform Upgrades in Sequence

Cloud Application Platform only supports upgrading releases in sequential order. If there are any intermediate releases between your current release and your target release, they must be installed. Skipping releases is not supported. See Section 14.3, “Installing Skipped Releases” for more information.

Preserve Helm Value Changes during Upgrades

During a helm upgrade, always ensure your scf-config-values.yaml file is passed. This preserves any previously set Helm values while allowing additional Helm value changes to be made.

Use --recreate-pods during a helm upgrade

Note that using --recreate-pods will cause downtime for applications, but is required as multiple versions of statefulsets may co-exist which can cause incompatibilities between dependent statefulsets, and result in a broken upgrade.

When upgrading from SUSE Cloud Application Platform 1.3.0 to 1.3.1, running helm upgrade does not require the --recreate-pods option to be used. A change to the active/passive model has allowed for previously unready pods to be upgraded, which allows for zero app downtime during the upgrade process.

Upgrades between other versions will require the --recreate-pods option when using the helm upgrade command.

helm rollback Is Not Supported

helm rollback is not supported in SUSE Cloud Application Platform or in upstream Cloud Foundry, and may break your cluster completely, because database migrations only run forward and cannot be reversed. Database schema can change over time. During upgrades both pods of the current and the next release may run concurrently, so the schema must stay compatible with the immediately previous release. But there is no way to guarantee such compatibility for future upgrades. One way to address this is to perform a full raw data backup and restore. (See Section 19.2, “Disaster Recovery in scf through Raw Data Backup and Restore”)

Do Not Make Changes to Pod Counts During an Upgrade

If sizing changes need to be made, make them either before or after an upgrade. See Section 7.1, “Configuring Cloud Application Platform for High Availability”.

14.2 Upgrading SUSE Cloud Application Platform

The supported upgrade method is to install all upgrades, in order. Skipping releases is not supported. This table matches the Helm chart versions to each release:

CAP Release               SCF and UAA Helm Chart Version   Stratos Helm Chart Version
1.4.1 (current release)   2.17.1                           2.4.0
1.4                       2.16.4                           2.4.0
1.3.1                     2.15.2                           2.3.0
1.3                       2.14.5                           2.2.0
1.2.1                     2.13.3                           2.1.0
1.2.0                     2.11.0                           2.0.0
1.1.1                     2.10.1                           1.1.0
1.1.0                     2.8.0                            1.1.0
1.0.1                     2.7.0                            1.0.2
1.0                       2.6.11                           1.0.0

Use Helm to check for updates:

tux > helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "suse" chart repository
Update Complete. ⎈ Happy Helming!⎈

Get your currently-installed release versions and chart names (your releases may have different names than the examples), and then view the upgrade versions:

tux > helm list
NAME            REVISION  UPDATED                  STATUS    CHART           NAMESPACE
susecf-console  1         Tue Aug 14 11:53:28 2018 DEPLOYED  console-2.4.0   stratos
susecf-scf      1         Tue Aug 14 10:58:16 2018 DEPLOYED  cf-2.16.4       scf
susecf-uaa      1         Tue Aug 14 10:49:30 2018 DEPLOYED  uaa-2.16.4      uaa
tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.17.1          1.4             A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    2.4.0           2.4.0           A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0           1.0.0           A Helm chart for Stratos Metrics
suse/minibroker                 0.2.0                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.17.1          1.4             A Helm chart for SUSE UAA

View all charts in a release, and their versions:

tux > helm search suse/uaa --versions
NAME            CHART VERSION   APP VERSION     DESCRIPTION
suse/uaa        2.17.1          1.4.1           A Helm chart for SUSE UAA
suse/uaa        2.16.4          1.4             A Helm chart for SUSE UAA
suse/uaa        2.15.2          1.3.1           A Helm chart for SUSE UAA
suse/uaa        2.14.5                          A Helm chart for SUSE UAA
suse/uaa        2.13.3                          A Helm chart for SUSE UAA
suse/uaa        2.11.0                          A Helm chart for SUSE UAA
suse/uaa        2.10.1                          A Helm chart for SUSE UAA
suse/uaa        2.8.0                           A Helm chart for SUSE UAA
suse/uaa        2.7.0                           A Helm chart for SUSE UAA
suse/uaa        2.6.11                          A Helm chart for SUSE UAA
...

Verify the latest release is the next sequential release from your currently-installed release. If it is, proceed with the commands below to perform the upgrade. If any releases have been missed, see Section 14.3, “Installing Skipped Releases”.

Just like your initial installation, wait for each command to complete before running the next command. Monitor progress with the watch command for each namespace, for example watch --color 'kubectl get pods --namespace uaa'. First upgrade uaa:

tux > helm upgrade --force --recreate-pods susecf-uaa suse/uaa \
--values scf-config-values.yaml

Then extract the uaa secret for scf to use:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

Upgrade scf, and note that if you see an error message like lost connection to pod Error: UPGRADE FAILED: transport is closing, this is normal. If you can run watch --color 'kubectl get pods --namespace scf' then everything is all right.

tux > helm upgrade --force --recreate-pods susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Then upgrade Stratos:

tux > helm upgrade --force --recreate-pods susecf-console suse/console \
--values scf-config-values.yaml

14.2.1 Change in URL of Internal cf-usb Broker Endpoint

This change is only applicable for upgrades from Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 and upgrades from Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1. The URL of the internal cf-usb broker endpoint has changed. Brokers for PostgreSQL and MySQL that use cf-usb will require the following manual fix after upgrading to reconnect with SCF/CAP:

For Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 upgrades:

  1. Get the name of the secret (for example secrets-2.14.5-1):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf secrets-2.14.5-1 --output yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb part doubled with a dash separator

    tux > cf update-service-broker usb broker-admin password https://cf-usb-cf-usb.scf.svc.cluster.local:24054

For Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1 upgrades:

  1. Get the name of the secret (for example secrets-2.15.2-1):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb-cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf secrets-2.15.2-1 --output yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb- part removed:

    tux > cf update-service-broker usb broker-admin password https://cf-usb.scf.svc.cluster.local:24054

14.3 Installing Skipped Releases

By default, Helm always installs the latest release. What if you accidentally skipped a release, and need to apply it before upgrading to the current release? Install the missing release by specifying the Helm chart version number. For example, your current uaa and scf versions are 2.10.1. Consult the table at the beginning of this chapter to see which releases you have missed. In this example, the missing Helm chart version for uaa and scf is 2.11.0. Use the --version option to install a specific version:

tux > helm upgrade --recreate-pods --version 2.11.0 susecf-uaa suse/uaa \
--values scf-config-values.yaml

Be sure to install the corresponding versions for scf and Stratos.

15 Configuration Changes

After the initial deployment of Cloud Application Platform, any changes made to your Helm chart values, whether through your scf-config-values.yaml file or directly using Helm's --set flag, are applied using the helm upgrade command.

Warning: Do Not Make Changes to Pod Counts During a Version Upgrade

The helm upgrade command can be used to apply configuration changes as well as perform version upgrades to Cloud Application Platform. A change to the pod count configuration should not be applied simultaneously with a version upgrade. Sizing changes should be made separately, either before or after, from a version upgrade (see Section 7.1, “Configuring Cloud Application Platform for High Availability”).

15.1 Configuration Change Example

Consider an example where more granular log entries are required than those provided by your default deployment of uaa (the default is LOG_LEVEL: "info").

You would then add an entry for LOG_LEVEL to the env section of your scf-config-values.yaml used to deploy uaa:

env:
  LOG_LEVEL: "debug2"

Then apply the change with the helm upgrade command. This example assumes the suse/uaa Helm chart deployed was named susecf-uaa:

tux > helm upgrade susecf-uaa suse/uaa --values scf-config-values.yaml

When all pods are in a READY state, the configuration change will also be reflected. If the chart was deployed to the uaa namespace, progress can be monitored with:

tux > watch --color 'kubectl get pods --namespace uaa'

15.2 Other Examples

The following are other examples of using helm upgrade to make configuration changes:

16 Creating Admin Users

This chapter provides an overview on how to create additional administrators for your Cloud Application Platform cluster.

16.1 Prerequisites

The following prerequisites are required in order to create additional Cloud Application Platform cluster administrators:

16.2 Creating an Example Cloud Application Platform Cluster Administrator

The following example demonstrates the steps required to create a new administrator user for your Cloud Application Platform cluster. Note that creating administrator accounts must be done using the UAAC and cannot be done using the cf CLI.

  1. Use UAAC to target your uaa server:

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your scf-config-values.yaml file:

    tux > uaac token client get admin --secret password
  3. Create a new user:

    tux > uaac user add new-admin --password password --emails new-admin@example.com --zone scf
  4. Add the new user to the following groups to grant administrator privileges to the cluster (see https://docs.cloudfoundry.org/concepts/architecture/uaa.html#uaa-scopes for information on privileges provided by each group):

    tux > uaac member add scim.write new-admin --zone scf
    
    tux > uaac member add scim.read new-admin --zone scf
    
    tux > uaac member add cloud_controller.admin new-admin --zone scf
    
    tux > uaac member add clients.read new-admin --zone scf
    
    tux > uaac member add clients.write new-admin --zone scf
    
    tux > uaac member add doppler.firehose new-admin --zone scf
    
    tux > uaac member add routing.router_groups.read new-admin --zone scf
    
    tux > uaac member add routing.router_groups.write new-admin --zone scf
  5. Log into your Cloud Application Platform deployment as the newly created administrator:

    tux > cf api --skip-ssl-validation https://api.example.com
    
    tux > cf login -u new-admin
  6. The following commands can be used to verify the new administrator account has sufficient permissions:

    tux > cf create-shared-domain test-domain.com
    
    tux > cf set-org-role new-admin org OrgManager
    
    tux > cf create-buildpack test_buildpack /tmp/ruby_buildpack-cached-sle15-v1.7.30.1.zip 1

    If the account has sufficient permissions, you should not receive an authorization error message similar to the following:

    FAILED
    Server error, status code: 403, error code: 10003, message: You are not authorized to perform the requested action

    See https://docs.cloudfoundry.org/cf-cli/cf-help.html for other administrator-specific commands that can be run to confirm sufficient permissions are provided.

17 Managing Passwords

The various components of SUSE Cloud Application Platform authenticate to each other using passwords that are automatically managed by the Cloud Application Platform secrets-generator. The only passwords managed by the cluster administrator are passwords for human users. The administrator may create and remove user logins, but cannot change user passwords.

  • The cluster administrator password is initially defined in the deployment's values.yaml file with CLUSTER_ADMIN_PASSWORD

  • The Stratos Web UI provides a form for users, including the administrator, to change their own passwords

  • User logins are created (and removed) with the Cloud Foundry Client, cf CLI

17.1 Password Management with the Cloud Foundry Client

The administrator cannot change other users' passwords. Only users may change their own passwords, and password changes require the current password:

tux > cf passwd
Current Password>
New Password>
Verify Password>
Changing password...
OK
Please log in again

The administrator can create a new user:

tux > cf create-user username password

and delete a user:

tux > cf delete-user username

Use the cf CLI to assign space and org roles. Run cf help -a for a complete command listing, or see Creating and Managing Users with the cf CLI.
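
For example, org and space roles can be granted with commands like the following; the user, org, and space names are illustrative:

tux > cf set-org-role username org-name OrgManager

tux > cf set-space-role username org-name space-name SpaceDeveloper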

17.2 Changing User Passwords with Stratos

The Stratos Web UI provides a form for changing passwords on your profile page. Click the overflow menu button on the top right to access your profile, then click the edit button on your profile page. You can manage your password and username on this page.

Stratos Profile Page
Figure 17.1: Stratos Profile Page
Stratos Edit Profile Page
Figure 17.2: Stratos Edit Profile Page

18 Cloud Controller Database Secret Rotation

The Cloud Controller Database (CCDB) encrypts sensitive information like passwords. By default, the encryption key is generated by SCF, for details see https://github.com/SUSE/scf/blob/2d095a71008c33a23ca39d2ab9664e5602f8707e/container-host-files/etc/scf/config/role-manifest.yml#L1656-L1662. If it is compromised and needs to be rotated, new keys can be added. Note that existing encrypted information will not be updated. The encrypted information must be set again to have them re-encrypted with the new key. The old key cannot be dropped until all references to it are removed from the database.

Updating these secrets is a manual process. The following procedure outlines how this is done.

  1. Create a file called new-key-values.yaml with content of the form:

    env:
      CC_DB_CURRENT_KEY_LABEL: new_key
    
    secrets:
      CC_DB_ENCRYPTION_KEYS:
        new_key: "new_key_value"
  2. Pass your uaa secret and certificate to scf:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  3. Use the helm upgrade command to import the above data into the cluster. This restarts relevant pods with the new information from the previous step:

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --values new-key-values.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}"
  4. Perform the rotation:

    1. Change the encryption key in the config file. No output should be produced:

      tux > kubectl exec --namespace scf api-group-0 -- bash -c 'sed --in-place "/db_encryption_key:/c\\db_encryption_key: \"$(echo $CC_DB_ENCRYPTION_KEYS | jq -r .new_key)\"" /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml'
    2. Run the rotation for the encryption keys. A series of JSON-formatted log entries describing the key rotation progress for various Cloud Controller models will be displayed:

      tux > kubectl exec --namespace scf api-group-0 -- bash -c 'export PATH=/var/vcap/packages/ruby-2.4/bin:$PATH ; export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml ; cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng ; /var/vcap/packages/ruby-2.4/bin/bundle exec rake rotate_cc_database_key:perform'

    Note that keys should be appended to the existing secret to ensure existing environment variables can still be decoded. Any operator can check which keys are in use by accessing the CCDB. If the encryption_key_label is empty, the default generated key is still being used:

    tux > kubectl exec --stdin --tty mysql-0 --namespace scf -- /bin/bash -c 'mysql -p${MYSQL_ADMIN_PASSWORD}'
    MariaDB [(none)]> select name, encrypted_environment_variables, encryption_key_label from ccdb.apps;
    +--------+--------------------------------------------------------------------------------------------------------------+----------------------+
    | name   | encrypted_environment_variables                                                                              | encryption_key_label |
    +--------+--------------------------------------------------------------------------------------------------------------+----------------------+
    | go-env | XF08q9HFfDkfxTvzgRoAGp+oci2l4xDeosSlfHJUkZzn5yvr0U/+s5LrbQ2qKtET0ssbMm3L3OuSkBnudZLlaCpFWtEe5MhUe2kUn3A6rUY= | key0                 |
    +--------+--------------------------------------------------------------------------------------------------------------+----------------------+
    1 row in set (0.00 sec)

    For example, if keys were being rotated again, the secret would become:

    SECRET_DATA=$(echo "{key0: abc-123, key1: def-456}" | base64)

    and the CC_DB_CURRENT_KEY_LABEL would be updated to match the new key.
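
    Continuing that example, the new-key-values.yaml for the second rotation might look like the following. The key names and values are hypothetical, and the old key stays in the list until no data references it:

    env:
      CC_DB_CURRENT_KEY_LABEL: key1

    secrets:
      CC_DB_ENCRYPTION_KEYS:
        key0: "abc-123"
        key1: "def-456"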

18.1 Tables with Encrypted Information

The CCDB contains several tables with encrypted information as follows:

apps

Environment variables

buildpack_lifecycle_buildpacks

Buildpack URLs may contain passwords

buildpack_lifecycle_data

Buildpack URLs may contain passwords

droplets

May contain Docker registry passwords

env_groups

Environment variables

packages

May contain Docker registry passwords

service_bindings

Contains service credentials

service_brokers

Contains service credentials

service_instances

Contains service credentials

service_keys

Contains service credentials

tasks

Environment variables

18.1.1 Update Existing Data with New Encryption Key

To ensure the encryption key is updated for existing data, run the original command (or its update- equivalent) again with the same parameters. Some objects need to be deleted and recreated to update the label, as listed below.

apps

Run cf set-env again

buildpack_lifecycle_buildpacks, buildpack_lifecycle_data, droplets

cf restage the app

packages

cf delete, then cf push the app (Docker apps with registry password)

env_groups

Run cf set-staging-environment-variable-group or cf set-running-environment-variable-group again

service_bindings

Run cf unbind-service and cf bind-service again

service_brokers

Run cf update-service-broker with the appropriate credentials

service_instances

Run cf update-service with the appropriate credentials

service_keys

Run cf delete-service-key and cf create-service-key again

tasks

While tasks have an encryption key label, they are generally meant to be a one-off event, and left to run to completion. If there is a task still running, it could be stopped with cf terminate-task, then run again with cf run-task.

19 Backup and Restore

19.1 Backup and Restore Using cf-plugin-backup

cf-plugin-backup backs up and restores your Cloud Controller Database (CCDB), using the Cloud Foundry command line interface (cf CLI). (See Section 29.1, “Using the cf CLI with SUSE Cloud Application Platform”.)

cf-plugin-backup is not a general-purpose backup and restore plugin. It is designed to save the state of a SUSE Cloud Foundry instance before making changes to it. If the changes cause problems, use cf-plugin-backup to restore the instance from scratch. Do not use it to restore to a non-pristine SUSE Cloud Foundry instance. Some of the limitations for applying the backup to a non-pristine SUSE Cloud Foundry instance are:

  • Application configuration is not restored to running applications, as the plugin does not have the ability to determine which applications should be restarted to load the restored configurations.

  • User information is managed by the User Account and Authentication (uaa) server, not the Cloud Controller (CC). As the plugin talks only to the CC it cannot save full user information, nor restore users. Saving and restoring users must be performed separately, and user restoration must be performed before the backup plugin is invoked.

  • The set of available stacks is part of the SUSE Cloud Foundry instance setup, and is not part of the CC configuration. Trying to restore applications using stacks not available on the target SUSE Cloud Foundry instance will fail. Setting up the necessary stacks must be performed separately before the backup plugin is invoked.

  • Buildpacks are not saved. Applications using custom buildpacks not available on the target SUSE Cloud Foundry instance will not be restored. Custom buildpacks must be managed separately, and relevant buildpacks must be in place before the affected applications are restored.

19.1.1 Installing the cf-plugin-backup

Download the plugin from https://github.com/SUSE/cf-plugin-backup/releases.

Then install it with cf, using the name of the plugin binary that you downloaded:

tux > cf install-plugin cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64
 Attention: Plugins are binaries written by potentially untrusted authors.
 Install and use plugins at your own risk.
 Do you want to install the plugin
 backup-plugin/cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64? [yN]: y
 Installing plugin backup...
 OK
 Plugin backup 1.0.8 successfully installed.

Verify installation by listing installed plugins:

tux > cf plugins
 Listing installed plugins...

 plugin   version   command name      command help
 backup   1.0.8     backup-info       Show information about the current snapshot
 backup   1.0.8     backup-restore    Restore the CloudFoundry state from a
  backup created with the snapshot command
 backup   1.0.8     backup-snapshot   Create a new CloudFoundry backup snapshot
  to a local file

 Use 'cf repo-plugins' to list plugins in registered repos available to install.

19.1.2 Using cf-plugin-backup

The plugin has three commands:

  • backup-info

  • backup-snapshot

  • backup-restore

View the online help for any command, like this example:

tux >  cf backup-info --help
 NAME:
   backup-info - Show information about the current snapshot

 USAGE:
   cf backup-info

Create a backup of your SUSE Cloud Application Platform data and applications. The command outputs progress messages until it is completed:

tux > cf backup-snapshot
 2018/08/18 12:48:27 Retrieving resource /v2/quota_definitions
 2018/08/18 12:48:30 org quota definitions done
 2018/08/18 12:48:30 Retrieving resource /v2/space_quota_definitions
 2018/08/18 12:48:32 space quota definitions done
 2018/08/18 12:48:32 Retrieving resource /v2/organizations
 [...]

Your Cloud Application Platform data is saved in the current directory in cf-backup.json, and application data in the app-bits/ directory.

View the current backup:

tux > cf backup-info
 - Org  system

Restore from backup:

tux > cf backup-restore

There are two additional restore options: --include-security-groups and --include-quota-definitions.

19.1.3 Scope of Backup

The following table lists the scope of the cf-plugin-backup backup. Organization and space users are backed up at the SUSE Cloud Application Platform level. The user accounts in uaa/LDAP, the service instances and their application bindings, and buildpacks are not backed up. The sections following the table go into more detail.

Scope                 Restore
Orgs                  Yes
Org auditors          Yes
Org billing-manager   Yes
Quota definitions     Optional
Spaces                Yes
Space developers      Yes
Space auditors        Yes
Space managers        Yes
Apps                  Yes
App binaries          Yes
Routes                Yes
Route mappings        Yes
Domains               Yes
Private domains       Yes
Stacks                not available
Feature flags         Yes
Security groups       Optional
Custom buildpacks     No

cf backup-info reads the cf-backup.json snapshot file found in the current working directory, and reports summary statistics on the content.

cf backup-snapshot extracts and saves the following information from the CC into a cf-backup.json snapshot file. Note that it does not save user information, but only the references needed for the roles. The full user information is handled by the uaa server, and the plugin talks only to the CC. The following list summarizes the information that is saved:

  • Org Quota Definitions

  • Space Quota Definitions

  • Shared Domains

  • Security Groups

  • Feature Flags

  • Application droplets (zip files holding the staged app)

  • Orgs

    • Spaces

      • Applications

      • Users' references (role in the space)

cf backup-restore reads the cf-backup.json snapshot file found in the current working directory, and then talks to the targeted SUSE Cloud Foundry instance to upload the following information, in the specified order:

  • Shared domains

  • Feature flags

  • Quota Definitions (iff --include-quota-definitions)

  • Orgs

    • Space Quotas (iff --include-quota-definitions)

    • UserRoles

    • (private) Domains

    • Spaces

      • UserRoles

      • Applications (+ droplet)

        • Bound Routes

      • Security Groups (iff --include-security-groups)

The following list provides more details of each action.

Shared Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Feature Flags

Attempts to update flags from the backup.

Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

Orgs

Attempts to create orgs from the backup. Attempts to update existing orgs from the backup.

Space Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

User roles

Expects the referenced user to exist. Fails when the user is already associated with the space in the given role.

(private) Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Spaces

Attempts to create spaces from the backup. Attempts to update existing spaces from the backup.

User roles

Expects the referenced user to exist. Fails when the user is already associated with the space in the given role.

Apps

Attempts to create apps from the backup. Attempts to update existing apps from the backup (memory, instances, buildpack, state, and so on).

Security groups

Existing groups are overwritten from the backup.

19.2 Disaster Recovery in scf through Raw Data Backup and Restore

A backup and restore of an existing scf deployment's raw data can be used to migrate all data to a new scf deployment. This procedure is applicable to deployments running on any Kubernetes cluster (for example, SUSE CaaS Platform, Amazon EKS, or Azure AKS as described in this guide) and can be included in your disaster recovery solution.

19.2.1 Prerequisites

To complete a raw data backup and restore, the following are required:

19.2.2 Scope of Raw Data Backup and Restore

The following lists the data that is included as part of the backup (and restore) procedure:

19.2.3 Performing a Raw Data Backup

Note
Note: Restore to the Same Version

This process is intended for backing up and restoring to a target deployment with the same version as the source deployment.

Perform the following steps to create a backup of your source scf deployment.

  1. Connect to the blobstore pod:

    tux > kubectl exec --stdin --tty blobstore-0 --namespace scf -- env /bin/bash
  2. Create an archive of the blobstore directory to preserve all needed files (see the Cloud Controller Blobstore content of Section 19.2.2, “Scope of Raw Data Backup and Restore”) then disconnect from the pod:

    tux > tar cfvz blobstore-src.tgz /var/vcap/store/shared
    tux > exit
  3. Copy the archive to a location outside of the pod:

    tux > kubectl cp scf/blobstore-0:blobstore-src.tgz /tmp/blobstore-src.tgz
  4. Export the Cloud Controller Database (CCDB) into a file:

    tux > kubectl exec mysql-0 --namespace scf -- bash -c \
      '/var/vcap/packages/mariadb/bin/mysqldump \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      ccdb' > /tmp/ccdb-src.sql
  5. Next, obtain the CCDB encryption key(s). The method used to capture the key will depend on whether current_key_label has been defined on the source cluster. This value is defined in /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml of the api-group-0 pod and also found in various tables of the MySQL database.

    Begin by examining the configuration file for the current_key_label setting:

    tux > kubectl exec --stdin --tty --namespace scf api-group-0 -- bash -c "cat /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml | grep -A 3 database_encryption"
    • If the output contains the current_key_label setting, save the output for the restoration process. Adjust the -A flag as needed to include all keys.

    • If the output does not contain the current_key_label setting, run the following command and save the output for the restoration process:

      tux > kubectl exec api-group-0 --namespace scf -- bash -c 'echo $DB_ENCRYPTION_KEY'

19.2.4 Performing a Raw Data Restore

Perform the following steps to restore your backed up data to the target scf deployment.

Important
Important: Ensure Access to the Correct scf Deployment

Working with multiple Kubernetes clusters simultaneously can be confusing. Ensure you are communicating with the desired cluster by setting $KUBECONFIG correctly.
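
For example, to point kubectl at the target cluster before running the restore commands (the kubeconfig path shown is a placeholder for your own file):

tux > export KUBECONFIG=/path/to/target-cluster-kubeconfig    # select the target cluster
tux > kubectl config current-context                          # confirm the active context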

  1. The target scf cluster needs to be deployed with the correct database encryption key(s) set in your scf-config-values.yaml before data can be restored. How the encryption key(s) will be prepared in your scf-config-values.yaml depends on the result of Step 5 in Section 19.2.3, “Performing a Raw Data Backup”.

    • If current_key_label was set, use the current_key_label obtained as the value of CC_DB_CURRENT_KEY_LABEL, and define all the keys listed under the keys section in CC_DB_ENCRYPTION_KEYS. See the following example scf-config-values.yaml:

      env:
        CC_DB_CURRENT_KEY_LABEL: migrated_key_1
      
      secrets:
        CC_DB_ENCRYPTION_KEYS:
          migrated_key_1: "<key_goes_here>"
          migrated_key_2: "<key_goes_here>"
    • If current_key_label was not set, create one for the new cluster through scf-config-values.yaml and set it to the $DB_ENCRYPTION_KEY value from the old cluster. In this example, migrated_key is the new current_key_label created:

      env:
        CC_DB_CURRENT_KEY_LABEL: migrated_key
      
      secrets:
        CC_DB_ENCRYPTION_KEYS:
          migrated_key: "OLD_CLUSTER_DB_ENCRYPTION_KEY"
  2. Deploy a non-high-availability configuration of scf (see Section 5.10, “Deploy scf”) and wait until all pods are ready before proceeding.

  3. In the ccdb-src.sql file created earlier, replace the domain name of the source deployment with the domain name of the target deployment.

    tux > sed --in-place 's/old-example.com/new-example.com/g' /tmp/ccdb-src.sql
  4. Stop the monit services on the api-group-0, cc-worker-0, and cc-clock-0 pods:

    tux > for n in api-group-0 cc-worker-0 cc-clock-0; do
      kubectl exec --stdin --tty --namespace scf $n -- bash -l -c 'monit stop all'
    done
  5. Copy the blobstore-src.tgz archive to the blobstore pod:

    tux > kubectl cp /tmp/blobstore-src.tgz scf/blobstore-0:/.
  6. Restore the contents of the archive created during the backup process to the blobstore pod:

    tux > kubectl exec --stdin --tty --namespace scf blobstore-0 -- bash -l -c 'monit stop all && sleep 10 && rm -rf /var/vcap/store/shared/* && tar xvf blobstore-src.tgz && monit start all && rm blobstore-src.tgz'
  7. Recreate the CCDB on the mysql pod:

    tux > kubectl exec mysql-0 --namespace scf -- bash -c \
        "/var/vcap/packages/mariadb/bin/mysql \
        --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
        -e 'drop database ccdb; create database ccdb;'"
  8. Restore the CCDB on the mysql pod:

    tux > kubectl exec --stdin mysql-0 --namespace scf -- bash -c '/var/vcap/packages/mariadb/bin/mysql --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf ccdb' < /tmp/ccdb-src.sql
  9. Start the monit services on the api-group-0, cc-worker-0, and cc-clock-0 pods

    tux > for n in api-group-0 cc-worker-0 cc-clock-0; do
      kubectl exec --stdin --tty --namespace scf $n -- bash -l -c 'monit start all'
    done
  10. If your old cluster did not have current_key_label defined, perform a key rotation. Otherwise, a key rotation is not necessary.

    1. Change the encryption key in the config file:

      tux > kubectl exec --namespace scf api-group-0 -- bash -c 'sed --in-place "/db_encryption_key:/c\\db_encryption_key: \"$(echo $CC_DB_ENCRYPTION_KEYS | jq -r .migrated_key)\"" /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml'
    2. Run the rotation for the encryption keys:

      tux > kubectl exec --namespace scf api-group-0 -- bash -c 'export PATH=/var/vcap/packages/ruby-2.4/bin:$PATH ; export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml ; cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng ; /var/vcap/packages/ruby-2.4/bin/bundle exec rake rotate_cc_database_key:perform'
  11. Perform a cf restage appname for existing applications to ensure their existing data is updated with the new encryption key.

  12. The data restore is now complete. Run some cf commands, such as cf apps, cf marketplace, or cf services, and verify data from the old cluster is returned.

20 Provisioning Services with Minibroker

Minibroker is an OSBAPI compliant broker created by members of the Microsoft Azure team. It provides a simple method to provision service brokers on Kubernetes clusters.

Important
Important: Minibroker Upstream Services

The services deployed by Minibroker are sourced from the stable upstream charts repository, see https://github.com/helm/charts/tree/master/stable, and maintained by contributors to the Helm project. Though SUSE supports Minibroker itself, it does not support the service charts it deploys. Operators should inspect the charts and images exposed by the service plans before deciding to use them in a production environment.

20.1 Deploy Minibroker

  1. Minibroker is deployed using a Helm chart. Ensure your SUSE Helm chart repository contains the most recent Minibroker chart:

    tux > helm repo update
    Hang tight while we grab the latest from your chart repositories...
    ...Skip local chart repository
    ...Successfully got an update from the "stable" chart repository
    ...Successfully got an update from the "suse" chart repository
    Update Complete. ⎈ Happy Helming!⎈
    
    tux > helm search suse
    NAME                            CHART VERSION   APP VERSION     DESCRIPTION
    ...
    suse/minibroker                 0.2.0                           A minibroker for your minikube
    ...
  2. Use Helm to deploy Minibroker:

    tux > helm install suse/minibroker --namespace minibroker --name minibroker --set "defaultNamespace=minibroker"

    The repository currently contains charts for the following services:

      • MariaDB

      • MongoDB

      • PostgreSQL

      • Redis

  3. Monitor the deployment progress. Wait until all pods are in a ready state before proceeding:

    tux > watch --color 'kubectl get pods --namespace minibroker'

20.2 Setting Up the Environment for Minibroker Usage

  1. Begin by logging into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
    tux > cf login -u admin -p password
    tux > cf create-org org
    tux > cf create-space space -o org
    tux > cf target -o org -s space
  2. Create the service broker. Note that Minibroker does not require authentication and the username and password parameters act as dummy values to pass to the cf command. These parameters do not need to be customized for the Cloud Application Platform installation:

    tux > cf create-service-broker minibroker username password http://minibroker-minibroker.minibroker.svc.cluster.local

    After the service broker is ready, it can be seen on your deployment:

    tux > cf service-brokers
    Getting service brokers as admin...
    
    name               url
    minibroker         http://minibroker-minibroker.minibroker.svc.cluster.local
  3. List the services and their associated plans the Minibroker has access to:

    tux > cf service-access -b minibroker
  4. Enable access to a service. Services that can be enabled are mariadb, mongodb, postgresql, and redis. The example below uses Redis as the service:

    tux > cf enable-service-access redis

    Use cf marketplace to verify the service has been enabled:

    tux > cf marketplace
    Getting services from marketplace in org org / space space as admin...
    OK
    
    service      plans     description
    redis        4-0-10    Helm Chart for redis
    
    TIP:  Use 'cf marketplace -s SERVICE' to view descriptions of individual plans of a given service.
  5. Define your Application Security Group (ASG) rules in a JSON file. Using the defined rules, create an ASG and bind it to an organization and space:

    tux > echo > redis.json '[{ "protocol": "tcp", "destination": "10.0.0.0/8", "ports": "6379", "description": "Allow Redis traffic" }]'
    tux > cf create-security-group redis_networking redis.json
    tux > cf bind-security-group redis_networking org space

    Use the following ports to define your ASG for a given service (a MariaDB example following the same pattern is shown after this list):

    Service        Port
    MariaDB        3306
    MongoDB        27017
    PostgreSQL     5432
    Redis          6379
  6. Create an instance of the Redis service. The cf marketplace or cf marketplace -s redis commands can be used to see the available plans for the service:

    tux > cf create-service redis 4-0-10 redis-example-service

    Monitor the progress of the pods and wait until all pods are in a ready state. The example below shows the additional redis pods, with randomly generated names, that have been created in the minibroker namespace:

    tux > watch --color 'kubectl get pods --namespace minibroker'
    NAME                                            READY     STATUS             RESTARTS   AGE
    alternating-frog-redis-master-0                 1/1       Running            2          1h
    alternating-frog-redis-slave-7f7444978d-z86nr   1/1       Running            0          1h
    minibroker-minibroker-5865f66bb8-6dxm7          2/2       Running            0          1h
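
In addition to the Redis rules shown in the steps above, an equivalent ASG for a MariaDB instance uses port 3306 from the port table in the previous steps. The following is a sketch of the same pattern; the destination range is an example and should match your cluster network:

tux > echo > mariadb.json '[{ "protocol": "tcp", "destination": "10.0.0.0/8", "ports": "3306", "description": "Allow MariaDB traffic" }]'
tux > cf create-security-group mariadb_networking mariadb.json
tux > cf bind-security-group mariadb_networking org space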

20.3 Using Minibroker with Applications

This section demonstrates how to use Minibroker services with your applications. The example below uses the Redis service instance created in the previous section.

  1. Obtain the demo application from Github and use cf push with the --no-start flag to deploy the application without starting it:

    tux > git clone https://github.com/scf-samples/cf-redis-example-app
    tux > cd cf-redis-example-app
    tux > cf push --no-start
  2. Bind the service to your application and start the application:

    tux > cf bind-service redis-example-app redis-example-service
    tux > cf start redis-example-app
  3. When the application is ready, it can be tested by storing a value into the Redis service:

    tux > export APP=redis-example-app.example.com
    tux > curl --request GET $APP/foo
    tux > curl --request PUT $APP/foo --data 'data=bar'
    tux > curl --request GET $APP/foo

    The first GET will return key not present. After storing a value, it will return bar.

Important
Important: Database Names for PostgreSQL and MariaDB Instances

By default, Minibroker creates PostgreSQL and MariaDB server instances without a named database. A named database is required for normal usage with these services and needs to be added during the cf create-service step using the -c flag. For example:

tux > cf create-service postgresql 9-6-2 djangocms-db -c '{"postgresDatabase":"mydjango"}'
tux > cf create-service mariadb 10-1-34 my-db  -c '{"mariadbDatabase":"mydb"}'

Other options can be set too, but vary by service type.

21 Setting Up and Using a Service Broker

The Open Service Broker API provides your SUSE Cloud Application Platform applications with access to external dependencies and platform-level capabilities, such as databases, filesystems, external repositories, and messaging systems. These resources are called services. Services are created, used, and deleted as needed, and provisioned on demand.

21.1 Enabling and Disabling Service Brokers

The service broker feature is enabled as part of a default SUSE Cloud Foundry deployment. To disable it, use the --set "enable.cf_usb=false" flag when running helm install or helm upgrade.

First fetch the uaa secret and certificate:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

If disabling the feature on an initial deployment, run the following command:

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.cf_usb=false"

If disabling the feature on an existing deployment, run the following command:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.cf_usb=false"

To enable the feature again, run helm upgrade with the --set "enable.cf_usb=true" flag. Be sure to pass your uaa secret and certificate to scf first:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.cf_usb=true"

21.2 Prerequisites

The following examples demonstrate how to deploy service brokers for MySQL and PostgreSQL with Helm, using charts from the SUSE repository. You must have the following prerequisites:

  • A working SUSE Cloud Application Platform deployment with Helm and the Cloud Foundry command line interface (cf CLI).

  • An Application Security Group (ASG) for applications to reach external databases. (See Understanding Application Security Groups.)

  • An external MySQL or PostgreSQL installation with account credentials that allow creating and deleting databases and users.

For testing purposes you may create an insecure security group:

tux > echo > "internal-services.json" '[{ "destination": "0.0.0.0/0", "protocol": "all" }]'
tux > cf create-security-group internal-services-test internal-services.json
tux > cf bind-running-security-group internal-services-test
tux > cf bind-staging-security-group internal-services-test

You may apply an ASG later, after testing. All running applications must be restarted to use the new security group.

21.3 Deploying on CaaS Platform 3

If you are deploying SUSE Cloud Application Platform on CaaS Platform 3, see Section 5.2, “Pod Security Policy” for important information on applying the required Pod Security Policy (PSP) to your deployment. You must also apply the PSP to your new service brokers.

Take the example configuration file, cap-psp-rbac.yaml, from that section, and append these lines to the end, using your own namespace name for your new service broker:

- kind: ServiceAccount
  name: default
  namespace: mysql-sidecar

Then apply the updated PSP configuration, before you deploy your new service broker, with this command:

tux > kubectl apply -f cap-psp-rbac.yaml

kubectl apply updates an existing deployment. After applying the PSP, proceed to configuring and deploying your service broker.

21.4 Configuring the MySQL Deployment

Start by extracting the name of the secret in the uaa namespace, and the internal certificates from the uaa and scf namespaces, with the following commands. These output the complete certificates. Substitute your secret's name if it is different from the example:

tux > kubectl get pods --namespace uaa \
--output jsonpath='{.items[*].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}'
 secrets-2.8.0-1

tux > kubectl get secret --namespace scf secrets-2.8.0-1 --output jsonpath='{.data.internal-ca-cert}' | base64 --decode
 -----BEGIN CERTIFICATE-----
 MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
 [...]
 xC8x/+zT0QkvcRJBio5gg670+25KJQ==
 -----END CERTIFICATE-----

tux > kubectl get secret --namespace uaa secrets-2.8.0-1 --output jsonpath='{.data.internal-ca-cert}' | base64 --decode
 -----BEGIN CERTIFICATE-----
 MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
 [...]
 to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
 -----END CERTIFICATE-----

You will copy these certificates into your configuration file as shown below.

Create a values.yaml file. The following example is called usb-config-values.yaml. Modify the values to suit your SUSE Cloud Application Platform installation.

env:
  # Database access credentials
  SERVICE_MYSQL_HOST: mysql.example.com
  SERVICE_MYSQL_PORT: 3306
  SERVICE_MYSQL_USER: mysql-admin-user
  SERVICE_MYSQL_PASS: mysql-admin-password

  # CAP access credentials, from your original deployment configuration
  # (see Section 5.5, “Configure the SUSE Cloud Application Platform Production Deployment”)
  CF_ADMIN_USER: admin
  CF_ADMIN_PASSWORD: password
  CF_DOMAIN: example.com

  # Copy the certificates you extracted above, as shown in these
  # abbreviated examples, prefaced with the pipe character

  # SCF cert
  CF_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
    [...]
    xC8x/+zT0QkvcRJBio5gg670+25KJQ==
    -----END CERTIFICATE-----

  # UAA cert
  UAA_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
    [...]
    to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
    -----END CERTIFICATE-----

kube:
  organization: "cap"
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

21.5 Deploying the MySQL Chart

SUSE Cloud Application Platform includes charts for MySQL and PostgreSQL (see Section 5.7, “Add the Kubernetes Charts Repository” for information on managing your Helm repository):

tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.17.1          1.4             A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    2.4.0           2.4.0           A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0           1.0.0           A Helm chart for Stratos Metrics
suse/minibroker                 0.2.0                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.17.1          1.4             A Helm chart for SUSE UAA

Create a namespace for your MySQL sidecar:

tux > kubectl create namespace mysql-sidecar

Install the MySQL Helm chart:

tux > helm install suse/cf-usb-sidecar-mysql \
  --devel \
  --name mysql-service \
  --namespace mysql-sidecar \
  --set "env.SERVICE_LOCATION=http://cf-usb-sidecar-mysql.mysql-sidecar:8081" \
  --set default-auth=mysql_native_password \
  --values usb-config-values.yaml \
  --wait

Wait for the new pods to become ready:

tux > watch kubectl get pods --namespace=mysql-sidecar

Confirm that the new service has been added to your SUSE Cloud Application Platform installation:

tux > cf marketplace
Warning
Warning: MySQL Requires mysql_native_password

The MySQL sidecar works only with deployments that use mysql_native_password as their authentication plugin. This is the default for MySQL versions 8.0.3 and earlier, but later versions must be started with --default-auth=mysql_native_password before any user creation. (See https://github.com/go-sql-driver/mysql/issues/785.)

21.6 Create and Bind a MySQL Service

To create a new service instance, use the Cloud Foundry command line client:

tux > cf create-service mysql default service_instance_name

You may replace service_instance_name with any name you prefer.

Bind the service instance to an application:

tux > cf bind-service my_application service_instance_name
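
To confirm the binding and inspect the credentials injected into the application, the standard cf CLI commands can be used; my_application is the placeholder name used above:

tux > cf services             # lists service instances and the applications bound to them
tux > cf env my_application   # shows the application environment, including the injected service credentials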

21.7 Deploying the PostgreSQL Chart

The PostgreSQL configuration is slightly different from the MySQL configuration. The database-specific keys are named differently, and it requires the SERVICE_POSTGRESQL_SSLMODE key.

env:
  # Database access credentials
  SERVICE_POSTGRESQL_HOST: postgres.example.com
  SERVICE_POSTGRESQL_PORT: 5432
  SERVICE_POSTGRESQL_USER: pgsql-admin-user
  SERVICE_POSTGRESQL_PASS: pgsql-admin-password

  # The SSL connection mode when connecting to the database.  For a list of
  # valid values, please see https://godoc.org/github.com/lib/pq
  SERVICE_POSTGRESQL_SSLMODE: disable

  # CAP access credentials, from your original deployment configuration
  # (see Section 5.5, “Configure the SUSE Cloud Application Platform Production Deployment”)
  CF_ADMIN_USER: admin
  CF_ADMIN_PASSWORD: password
  CF_DOMAIN: example.com

  # Copy the certificates you extracted above, as shown in these
  # abbreviated examples, prefaced with the pipe character

  # SCF certificate
  CF_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
    [...]
    xC8x/+zT0QkvcRJBio5gg670+25KJQ==
    -----END CERTIFICATE-----

  # UAA certificate
  UAA_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
    [...]
    to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
    -----END CERTIFICATE-----

  SERVICE_TYPE: postgres

kube:
  organization: "cap"
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

Create a namespace and install the chart:

tux > kubectl create namespace postgres-sidecar

tux > helm install suse/cf-usb-sidecar-postgres \
  --devel \
  --name postgres-service \
  --namespace postgres-sidecar \
  --set "env.SERVICE_LOCATION=http://cf-usb-sidecar-postgres.postgres-sidecar:8081" \
  --values usb-config-values.yaml \
  --wait

Then follow the same steps as for the MySQL chart.

21.8 Removing Service Broker Sidecar Deployments

To correctly remove sidecar deployments, perform the following steps in order.

  • Unbind any applications using instances of the service, and then delete those instances:

    tux > cf unbind-service my_app my_service_instance
    tux > cf delete-service my_service_instance
  • Install the CF-USB CLI plugin for the Cloud Foundry CLI from https://github.com/SUSE/cf-usb-plugin/releases/, for example:

    tux > cf install-plugin \
     https://github.com/SUSE/cf-usb-plugin/releases/download/1.0.0/cf-usb-plugin-1.0.0.0.g47b49cd-linux-amd64
  • Configure the Cloud Foundry USB CLI plugin, using the domain you created for your SUSE Cloud Foundry deployment:

    tux > cf usb-target https://usb.example.com
  • List the current sidecar deployments and take note of the names:

    tux > cf usb-driver-endpoints
  • Remove the service by specifying its name:

    tux > cf usb-delete-driver-endpoint mysql-service
  • Find your release name, then delete the release:

    tux > helm list
    NAME           REVISION UPDATED                   STATUS    CHART                      NAMESPACE
    susecf-console 1        Wed Aug 14 08:35:58 2018  DEPLOYED  console-2.4.0              stratos
    susecf-scf     1        Tue Aug 14 12:24:36 2018  DEPLOYED  cf-2.17.1                  scf
    susecf-uaa     1        Tue Aug 14 12:01:17 2018  DEPLOYED  uaa-2.17.1                 uaa
    mysql-service  1        Mon May 21 11:40:11 2018  DEPLOYED  cf-usb-sidecar-mysql-1.0.1 mysql-sidecar
    
    tux > helm delete --purge mysql-service

21.9 Upgrade Notes

21.9.1 Change in URL of Internal cf-usb Broker Endpoint

This change is only applicable for upgrades from Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 and upgrades from Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1. The URL of the internal cf-usb broker endpoint has changed. Brokers for PostgreSQL and MySQL that use cf-usb will require the following manual fix after upgrading to reconnect with SCF/CAP:

For Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 upgrades:

  1. Get the name of the secret (for example secrets-2.14.5-1):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf secrets-2.14.5-1 --output yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb part doubled with a dash separator:

    tux > cf update-service-broker usb broker-admin password https://cf-usb-cf-usb.scf.svc.cluster.local:24054

For Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1 upgrades:

  1. Get the name of the secret (for example secrets-2.15.2-1):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb-cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf secrets-2.15.2-1 --output yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb- part removed:

    tux > cf update-service-broker usb broker-admin password https://cf-usb.scf.svc.cluster.local:24054

22 App-AutoScaler

The App-AutoScaler service is used for automatically managing an application's instance count when deployed on SUSE Cloud Foundry. The scaling behavior is determined by a set of criteria defined in a policy (See Section 22.5, “Policies”).

22.1 Prerequisites

Using the App-AutoScaler service requires:

22.2 Enabling and Disabling the App-AutoScaler Service

By default, the App-AutoScaler service is not enabled as part of a SUSE Cloud Foundry deployment. To enable it, run helm upgrade with the --set "enable.autoscaler=true" flag on an existing scf deployment. Be sure to pass your uaa secret and certificate to scf first:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.autoscaler=true"

To disable the App-AutoScaler service, run helm upgrade with the --set "enable.autoscaler=false" flag. Be sure to pass your uaa secret and certificate to scf first:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.autoscaler=false"

22.3 Upgrade Considerations

In order to upgrade from a SUSE Cloud Application Platform 1.3.1 deployment with the App-AutoScaler enabled to SUSE Cloud Application Platform 1.4, perform one of the two methods listed below. Both methods require that uaa is first upgraded.

  1. Enabling App-AutoScaler no longer requires sizing values to be set to a count of 1, as this is now the minimum setting. The following values in your scf-config-values.yaml file can be removed:

    sizing:
      autoscaler_api:
        count: 1
      autoscaler_eventgenerator:
        count: 1
      autoscaler_metrics:
        count: 1
      autoscaler_operator:
        count: 1
      autoscaler_postgres:
        count: 1
      autoscaler_scalingengine:
        count: 1
      autoscaler_scheduler:
        count: 1
      autoscaler_servicebroker:
        count: 1
  2. Get the most recent Helm charts:

    tux > helm repo update
  3. Upgrade uaa:

    tux > helm upgrade susecf-uaa suse/uaa \
    --values scf-config-values.yaml
  4. Extract the uaa secret for scf to use:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

You are now ready to upgrade scf using one of the two following methods:

The first method is to disable the App-AutoScaler during the initial SUSE Cloud Application Platform upgrade, then when the upgrade completes, enable the App-AutoScaler again.

  1. Upgrade scf with the App-AutoScaler disabled:

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}" \
    --set "enable.autoscaler=false"
  2. Monitor the deployment progress. Wait until all pods are in a ready state before proceeding:

    tux > watch --color 'kubectl get pods --namespace scf'
  3. Enable the App-AutoScaler again:

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}" \
    --set "enable.autoscaler=true"
  4. Monitor the deployment progress and wait until all pods are in a ready state:

    tux > watch --color 'kubectl get pods --namespace scf'

The second method is to pass the --set sizing.autoscaler_postgres.disk_sizes.postgres_data=100 option as part of the upgrade.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.autoscaler=true" \
--set sizing.autoscaler_postgres.disk_sizes.postgres_data=100

22.4 Using the App-AutoScaler Service

Create the Service Broker for App-AutoScaler, replacing example.com with the DOMAIN set in your scf-config-values.yaml file:

tux > SECRET=$(kubectl get pods --namespace scf \
--output jsonpath='{.items[?(.metadata.name=="api-group-0")].spec.containers[?(.name=="api-group")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > AS_PASSWORD="$(kubectl get secret $SECRET --namespace scf --output jsonpath="{.data['autoscaler-service-broker-password']}" | base64 --decode)"

tux > cf create-service-broker autoscaler username $AS_PASSWORD https://autoscalerservicebroker.example.com

Enable access to the service:

tux > cf enable-service-access autoscaler -p autoscaler-free-plan

Create a new instance of the App-AutoScaler service:

tux > cf create-service autoscaler autoscaler-free-plan service_instance_name

A name of your choice may be used to replace service_instance_name.

Bind the service instance to an application and attach a policy (See Section 22.5, “Policies”):

tux > cf bind-service my_application service_instance_name
tux > cf attach-autoscaling-policy my_application my-policy.json

If a policy has already been defined and is available for use, you can attach the policy as part of the binding process instead:

tux > cf bind-service my_application service_instance_name -c '{
    "instance_min_count": 1,
    "instance_max_count": 4,
    "scaling_rules": [{
        "metric_type": "memoryused",
        "stat_window_secs": 60,
        "breach_duration_secs": 60,
        "threshold": 10,
        "operator": ">=",
        "cool_down_secs": 300,
        "adjustment": "+1"
    }]
}'

Note that attaching a policy in this manner requires passing the policy directly rather than specifying the path to the policy file.

Once an instance of the App-AutoScaler service has been created and bound to an app, it can be managed using the cf CLI with the App-AutoScaler plugin (See Section 22.4.1, “The App-AutoScaler cf CLI Plugin”) or using the App-AutoScaler API (See Section 22.4.2, “App-AutoScaler API”).

22.4.1 The App-AutoScaler cf CLI Plugin

The App-AutoScaler plugin is used for managing the service with your applications and provides the following commands; a short usage sketch follows the list. Refer to https://github.com/cloudfoundry-incubator/app-autoscaler-cli-plugin#command-list for details about each command:

autoscaling-api

Set or view AutoScaler service API endpoint. See https://github.com/cloudfoundry-incubator/app-autoscaler-cli-plugin#cf-autoscaling-api for more information.

autoscaling-policy

Retrieve the scaling policy of an application. See https://github.com/cloudfoundry-incubator/app-autoscaler-cli-plugin#cf-autoscaling-policy for more information.

attach-autoscaling-policy

Attach a scaling policy to an application. See https://github.com/cloudfoundry-incubator/app-autoscaler-cli-plugin#cf-attach-autoscaling-policy for more information.

detach-autoscaling-policy

Detach the scaling policy from an application. See https://github.com/cloudfoundry-incubator/app-autoscaler-cli-plugin#cf-detach-autoscaling-policy for more information.

autoscaling-metrics

Retrieve the metrics of an application. See https://github.com/cloudfoundry-incubator/app-autoscaler-cli-plugin#cf-autoscaling-metrics for more information.

autoscaling-history

Retrieve the scaling history of an application. See https://github.com/cloudfoundry-incubator/app-autoscaler-cli-plugin#cf-autoscaling-history for more information.
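
The following is a short usage sketch, assuming the plugin is installed and my_application is bound to an App-AutoScaler service instance as shown above. The API endpoint URL is an assumption here and depends on the DOMAIN of your deployment:

tux > cf autoscaling-api https://autoscaler.example.com   # point the plugin at the AutoScaler API endpoint
tux > cf autoscaling-policy my_application                # show the policy currently attached to the app
tux > cf autoscaling-history my_application               # show past scaling events for the app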

22.5 Policies

A policy identifies characteristics including minimum instance count, maximum instance count, and the rules used to determine when the number of application instances is scaled up or down. These rules are categorized into two types, scheduled scaling and dynamic scaling. (See Section 22.5.1, “Scaling Types”). Multiple scaling rules can be specified in a policy, but App-AutoScaler does not detect or handle conflicts that may occur. Ensure there are no conflicting rules to avoid unintended scaling behavior.

Policies are defined using the JSON format and can be attached to an application either by passing the path to the policy file or directly as a parameter.

The following is an example of a policy file, called my-policy.json.

{
    "instance_min_count": 1,
    "instance_max_count": 4,
    "scaling_rules": [{
        "metric_type": "memoryused",
        "stat_window_secs": 60,
        "breach_duration_secs": 60,
        "threshold": 10,
        "operator": ">=",
        "cool_down_secs": 300,
        "adjustment": "+1"
    }]
}

For an example that demonstrates defining multiple scaling rules in a single policy, refer to the sample of a policy file at https://github.com/cloudfoundry-incubator/app-autoscaler/blob/develop/src/integration/fakePolicyWithSchedule.json. The complete list of configurable policy values can be found at https://github.com/cloudfoundry-incubator/app-autoscaler/blob/master/docs/Policy_definition.rst.

22.5.1 Scaling Types

Scheduled Scaling

Modifies an application's instance count at a predetermined time. This option is suitable for workloads with predictable resource usage.

Dynamic Scaling

Modifies an application's instance count based on metrics criteria. This option is suitable for workloads with dynamic resource usage. The following metrics are available:

  • memoryused

  • memoryutil

  • responsetime

  • throughput

See https://github.com/cloudfoundry-incubator/app-autoscaler/tree/develop/docs#scaling-type for additional details.

23 Logging

There are two types of logs in a deployment of Cloud Application Platform: application logs and component logs.

  • Application logs provide information specific to a given application that has been deployed to your Cloud Application Platform cluster and can be accessed through:

    • The cf CLI, using the cf logs command (see the example after this list)

    • The application's log stream within the Stratos console

  • Access to logs for a given component of your Cloud Application Platform deployment can be obtained through:

    • The kubectl logs command

      The following example retrieves the logs of the router container of the router-0 pod in the scf namespace:

      tux > kubectl logs --namespace scf router-0 router
    • Direct access to the log files using the following:

      1. Open a shell to the container of the component using the kubectl exec command

      2. Navigate to the logs directory at /var/vcap/sys/log, which contains subdirectories with the log files.

        tux > kubectl exec --stdin --tty --namespace scf router-0 /bin/bash
        
        router/0:/# cd /var/vcap/sys/log
        
        router/0:/var/vcap/sys/log# ls -R
        .:
        gorouter  loggregator_agent
        
        ./gorouter:
        access.log  gorouter.err.log  gorouter.log  post-start.err.log	post-start.log
        
        ./loggregator_agent:
        agent.log
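
For application logs, a minimal example using the cf CLI is shown below; my-app is a placeholder application name, and the --recent flag prints the buffered recent logs instead of streaming:

tux > cf logs my-app --recent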

23.1 Logging to an External Syslog Server

Cloud Application Platform supports sending the cluster's log data to external logging services where additional processing and analysis can be performed.

23.1.1 Configuring Cloud Application Platform

In your scf-config-values.yaml file add the following configuration values to the env: section. The example values below are configured for an external ELK stack.

env:
  SCF_LOG_HOST: elk.example.com
  SCF_LOG_PORT: 5001
  SCF_LOG_PROTOCOL: "tcp"

23.1.2 Example using the ELK Stack

The ELK stack is an example of an external syslog server to which log data can be sent for log management. The ELK stack consists of:

Elasticsearch

A tool for search and analytics. For more information, refer to https://www.elastic.co/products/elasticsearch.

Logstash

A tool for data processing. For more information, refer to https://www.elastic.co/products/logstash.

Kibana

A tool for data visualization. For more information, refer to https://www.elastic.co/products/kibana.

23.1.2.1 Prerequisites

Java 8 is required by:

23.1.2.2 Installing and Configuring Elasticsearch

For methods of installing Elasticsearch, refer to https://www.elastic.co/guide/en/elasticsearch/reference/7.1/install-elasticsearch.html.

After installation, modify the config file /etc/elasticsearch/elasticsearch.yml to set the following value.

network.host: localhost
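
To verify that Elasticsearch is up and listening locally, a quick check can be made against the default port 9200, which the Logstash output configuration below also assumes:

tux > curl http://localhost:9200   # returns the node and cluster information as JSON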

23.1.2.3 Installing and Configuring Logstash

For methods of installing Logstash, refer to http://www.elastic.co/guide/en/logstash/7.1/installing-logstash.html.

After installation, create a configuration file in /etc/logstash/conf.d/. In this example, it is named 00-scf.conf. Add the following to the file, taking note of the port used in the input section: this value needs to match the value of the SCF_LOG_PORT property in your scf-config-values.yaml file.

input {
  tcp {
    port => 5001
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "scf-%{+YYYY.MM.dd}"
  }
}

Additional input plug-ins can be found at https://www.elastic.co/guide/en/logstash/current/input-plugins.html and output plug-ins can be found at https://www.elastic.co/guide/en/logstash/current/output-plugins.html. For this example, we will demonstrate the flow of data through the stack, but filter plugins can also be specified to perform processing of the log data. For more details about filter plug-ins, refer to https://www.elastic.co/guide/en/logstash/current/filter-plugins.html.
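
As a simple smoke test of the pipeline, a line can be written to the TCP input and then searched for in the daily index. This sketch assumes nc is available on the host and that the scf-* index pattern matches the output configuration above:

tux > echo "logstash smoke test" | nc localhost 5001              # send a test line to the Logstash tcp input
tux > curl 'http://localhost:9200/scf-*/_search?q=smoke&pretty'   # search the scf-* indices for the test line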

23.1.2.4 Installing and Configuring Kibana

For methods of installing Kibana, refer to https://www.elastic.co/guide/en/kibana/7.1/install.html.

No configuration changes are required at this point. Refer to https://www.elastic.co/guide/en/kibana/current/settings.html for additional properties that can be configured through the kibana.yml file.

23.2 Log Levels

The log level is configured through the scf-config-values.yaml file by using the LOG_LEVEL property found in the env: section. The LOG_LEVEL property is mapped to component-specific levels. Components have differing technology compositions (for example, languages and frameworks), so each component determines for itself what content to provide at each level, and this may vary between components.
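
For example, to set the log level to info, add the following to the env: section of your scf-config-values.yaml file; the value is applied with helm upgrade in the same way as the other configuration changes shown in this guide:

env:
  LOG_LEVEL: "info"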

The following are the log levels available along with examples of log entries at the given level.

  • off: disable log messages

  • fatal: fatal conditions

  • error: error conditions

    <11>1 2018-08-21T17:59:48.321059+00:00 api-group-0 vcap.cloud_controller_ng
    - - -
    {"timestamp":1534874388.3206334,"message":"Mysql2::Error: MySQL
    server has gone away: SELECT count(*) AS `count` FROM `tasks` WHERE
    (`state` = 'RUNNING') LIMIT 1","log_level":"error","source":"cc.db","data":
    {},"thread_id":47367387197280,"fiber_id":47367404488760,"process_id":3400,"file":"/
    var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/
    gems/sequel-4.49.0/lib/sequel/database/logging.rb","lineno":88,"method":"block in
    log_each"}
  • warn: warning conditions

    <12>1 2018-08-21T18:49:37.651186+00:00 api-group-0 vcap.cloud_controller_ng
    - - -
    {"timestamp":1534877377.6507676,"message":"Invalid bearer token:
    #<CF::UAA::InvalidSignature: Signature verification failed> [\"/var/vcap/
    packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/
    cf-uaa-lib-3.14.3/lib/uaa/token_coder.rb:118:in `decode'\", \"/var/vcap/packages/
    cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/cf-uaa-
    lib-3.14.3/lib/uaa/token_coder.rb:212:in `decode_at_reference_time'\", \"/var/
    vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/
    lib/cloud_controller/uaa/uaa_token_decoder.rb:70:in `decode_token_with_key'\",
    \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/
    cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:58:in
    `block in decode_token_with_asymmetric_key'\", \"/var/vcap/packages-
    src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/
    lib/cloud_controller/uaa/uaa_token_decoder.rb:56:in `each'\", \"/
    var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/
    cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:56:in
    `decode_token_with_asymmetric_key'\", \"/var/vcap/packages-
    src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/
    cloud_controller/uaa/uaa_token_decoder.rb:29:in `decode_token'\", \"/var/vcap/
    packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/
    cloud_controller/security/security_context_configurer.rb:22:in `decode_token'\", \"/
    var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/
    lib/cloud_controller/security/security_context_configurer.rb:10:in `configure'\",
    \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/
    security_context_setter.rb:12:in `call'\", \"/var/vcap/packages/cloud_controller_ng/
    cloud_controller_ng/middleware/vcap_request_id.rb:15:in `call'\", \"/var/vcap/
    packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:49:in
    `call_app'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/
    middleware/cors.rb:14:in `call'\", \"/var/vcap/packages/cloud_controller_ng/
    cloud_controller_ng/middleware/request_metrics.rb:12:in `call'\", \"/
    var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/
    ruby/2.4.0/gems/rack-1.6.9/lib/rack/builder.rb:153:in `call'\", \"/var/vcap/
    packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/
    gems/thin-1.7.0/lib/thin/connection.rb:86:in `block in pre_process'\", \"/
    var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/
    ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:84:in `catch'\", \"/var/
    vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/
    gems/thin-1.7.0/lib/thin/connection.rb:84:in `pre_process'\", \"/var/vcap/
    packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/
    gems/thin-1.7.0/lib/thin/connection.rb:50:in `block in process'\", \"/
    var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/
    ruby/2.4.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:1067:in `block in
    spawn_threadpool'\"]","log_level":"warn","source":"cc.uaa_token_decoder","data":
    {"request_guid":"f3e25c45-a94a-4748-7ccf-5a72600fbb17::774bdb79-5d6a-4ccb-a9b8-
    f4022afa3bdd"},"thread_id":47339751566100,"fiber_id":47339769104800,"process_id":3245,"file":"/
    var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/
    lib/cloud_controller/uaa/uaa_token_decoder.rb","lineno":35,"method":"rescue in
    decode_token"}
  • info: informational messages

    <14>1 2018-08-21T22:42:54.324023+00:00 api-group-0 vcap.cloud_controller_ng
    - - -
    {"timestamp":1534891374.3237739,"message":"Started GET
    \"/v2/info\" for user: , ip: 127.0.0.1 with vcap-request-id:
    45e00b66-e0b7-4b10-b1e0-2657f43284e7 at 2018-08-21 22:42:54
    UTC","log_level":"info","source":"cc.api","data":{"request_guid":"45e00b66-
    e0b7-4b10-
    b1e0-2657f43284e7"},"thread_id":47420077354840,"fiber_id":47420124921300,"process_id":3200,"file":
    var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/
    request_logs.rb","lineno":12,"method":"call"}
  • debug: debugging messages

    <15>1 2018-08-21T22:45:15.146838+00:00 api-group-0 vcap.cloud_controller_ng
    - - -
    {"timestamp":1534891515.1463814,"message":"dispatch
    VCAP::CloudController::InfoController get /v2/
    info","log_level":"debug","source":"cc.api","data":{"request_guid":"b228ef6d-
    af5e-4808-
    af0b-791a37f51154"},"thread_id":47420125585200,"fiber_id":47420098783620,"process_id":3200,"file":
    var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/
    lib/cloud_controller/rest_controller/routes.rb","lineno":12,"method":"block in
    define_route"}
  • debug1: lower-level debugging messages

  • debug2: lowest-level debugging message

    <15>1 2018-08-21T22:46:02.173445+00:00 api-group-0 vcap.cloud_controller_ng - - -
    {"timestamp":1534891562.1731355,"message":"(0.006130s) SELECT * FROM `delayed_jobs`
    WHERE ((((`run_at` <= '2018-08-21 22:46:02') AND (`locked_at` IS NULL)) OR
    (`locked_at` < '2018-08-21 18:46:02') OR (`locked_by` = 'cc_api_worker.api.0.1'))
    AND (`failed_at` IS NULL) AND (`queue` IN ('cc-api-0'))) ORDER BY `priority`
    ASC, `run_at` ASC LIMIT 5","log_level":"debug2","source":"cc.background","data":
    {},"thread_id":47194852110160,"fiber_id":47194886034680,"process_id":3296,"file":"/
    var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/
    gems/sequel-4.49.0/lib/sequel/database/logging.rb","lineno":88,"method":"block in
    log_each"}

24 Managing Certificates

This chapter describes the process to deploy SUSE Cloud Application Platform installed with certificates signed by an external Certificate Authority.

24.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics; a command to verify them is shown after the list:

  • The certificate is encoded in the PEM format.

  • In order to secure uaa-related traffic, the certificate's Subject Alternative Name (SAN) should include the domains uaa.example.com and *.uaa.example.com, where example.com is replaced with the DOMAIN set in your scf-config-values.yaml.

  • In order to secure scf-related traffic, the certificate's Subject Alternative Name (SAN) should include the domain *.example.com, where example.com is replaced with the DOMAIN in your scf-config-values.yaml.
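
To check that a certificate meets these requirements before deploying, its Subject Alternative Name entries can be inspected with openssl; cert.pem is a placeholder for your certificate file:

tux > openssl x509 -in cert.pem -noout -text | grep -A 1 "Subject Alternative Name"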

24.2 Deployment Configuration

Certificates used in SUSE Cloud Application Platform are installed through a configuration file, called scf-config-values.yaml. To specify a certificate, set the value of the certificate and its corresponding private key under the secrets: section.

Note
Note

Note the use of the "|" character which indicates the use of a literal scalar. See the http://yaml.org/spec/1.2/spec.html#id2795688 for more information.

Certificates are installed to the uaa component by setting the values UAA_SERVER_CERT and UAA_SERVER_CERT_KEY. For example:

secrets:
  UAA_SERVER_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIFnzCCA4egAwIBAgICEAMwDQYJKoZIhvcNAQENBQAwXDELMAkGA1UEBhMCQ0Ex
    CzAJBgNVBAgMAkJDMRIwEAYDVQQHDAlWYW5jb3V2ZXIxETAPBgNVBAoMCE15Q2Fw
    T3JnMRkwFwYDVQQDDBBNeUNhcE9yZyBSb290IENBMB4XDTE4MDkxNDIyNDMzNVoX
    ...
    IqhPRKYBFHPw6RxVTjG/ClMsFvOIAO3QsK+MwTRIGVu/MNs0wjMu34B/zApLP+hQ
    3ZxAt/z5Dvdd0y78voCWumXYPfDw9T94B4o58FvzcM0eR3V+nVtahLGD2r+DqJB0
    3xoI
    -----END CERTIFICATE-----

  UAA_SERVER_CERT_KEY: |
    -----BEGIN PRIVATE KEY-----
    MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDhRlcoZAVwUkg0
    sdExkBnPenhLG5FzQM3wm9t4erbSQulKjeFlBa9b0+RH6gbYDHh5+NyiL0L89txO
    JHNRGEmt+4zy+9bY7e2syU18z1orOrgdNq+8QhsSoKHJV2w+0QZkSHTLdWmAetrA
    ...
    ZP5BpgjrT2lGC1ElW/8AFM5TxkkOPMzDCe8HRXPUUw+2YDzyKY1YgkwOMpHlk8Cs
    wPQYJsrcObenRwsGy2+A6NiIg2AVJwHASFG65taoV+1A061P3oPDtyIH/UPhRUoC
    OULPS8fbHefNiSvZTNVKwj8=
    -----END PRIVATE KEY-----

Certificates are installed to the scf component by setting the values ROUTER_SSL_CERT and ROUTER_SSL_KEY. In addition, set UAA_CA_CERT with the root certificate of the Certificate Authority used to sign your certificate. For example:

secrets:
  ROUTER_SSL_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
    QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
    CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
    ...
    xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
    M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
    1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
    -----END CERTIFICATE-----

  ROUTER_SSL_KEY: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
    T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
    G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
    ...
    GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
    M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
    MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
    -----END RSA PRIVATE KEY-----

  UAA_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIDaSjCAjKgAwIBAgIQRK+wgNajJ7qJMDmGLvhAazANBgkqhkiG9w0BAQUFADA/
    MTQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT
    DkITVCBSb290IENBIFg7MB4XDTAwMDkzMDIxMTIxOVoXDTIxMDkzMDE0MDExNVow
    ...
    R8srzJmwN0jP41ZL9c8PDHIyh8bwRLtTcm1D9SZImlJnt1ir/md2cXjbDaJWFBM5
    NvaEkqgCWjBH4d1QB7wCCZAA62RjYJsWvIjJEubSfZGL+T0yjWW06XyxV3bqxbYo
    NTQVZRzI9neWagqNdwvYkQsEjgfbKbYK7p2MEIAL
    -----END CERTIFICATE-----

24.2.1 Configuring Multiple Certificates

Cloud Application Platform supports configurations that use multiple certificates. To specify multiple certificates with their associated keys, replace the ROUTER_SSL_CERT and ROUTER_SSL_KEY properties with the ROUTER_TLS_PEM property in your scf-config-values.yaml file.

secrets:
  ROUTER_TLS_PEM: |
    - cert_chain: |
        -----BEGIN CERTIFICATE-----
        MIIEDzCCAfcCCQCWC4NErLzy9DANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        opR9hW2YNrMYQYfhVu4KTkpXIr4iBrt2L+aq2Rk4NBaprH+0X6CPlYg+3edC7Jc+
        ...
        ooXNKOrpbSUncflZYrAfYiBfnZGIC99EaXShRdavStKJukLZqb3iHBZWNLYnugGh
        jyoKpGgceU1lwcUkUeRIOXI8qs6jCqsePM6vak3EO5rSiMpXMvLO8WMaWsXEfcBL
        dglVTMCit9ORAbVZryXk8Xxiham83SjG+fOVO4pd0R8UuCE=
        -----END CERTIFICATE-----
      private_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEA0HZ/aF64ITOrwtzlRlDkxf0b4V6MFaaTx/9UIQKQZLKT0d7u
        3Rz+egrsZ90Jk683Oz9fUZKtgMXt72CMYUn13TTYwnh5fJrDM1JXx6yHJyiIp0rf
        3G6wh4zzgBosIFiadWPQgL4iAJxmP14KMg4z7tNERu6VXa+0OnYT0DBrf5IJhbn6
        ...
        ja0CsQKBgQCNrhKuxLgmQKp409y36Lh4VtIgT400jFOsMWFH1hTtODTgZ/AOnBZd
        bYFffmdjVxBPl4wEdVSXHEBrokIw+Z+ZhI2jf2jJkge9vsSPqX5cTd2X146sMUSy
        o+J1ZbzMp423AvWB7imsPTA+t9vfYPSlf+Is0MhBsnGE7XL4fAcVFQ==
        -----END RSA PRIVATE KEY-----
    - cert_chain: |
        -----BEGIN CERTIFICATE-----
        MIIEPzCCAiegAwIBAgIJAJYLg0SsvPL1MA0GCSqGSIb3DQEBCwUAMEYxCzAJBgNV
        BAYTAkNBMRMwEQYDVQQIDApTb21lLVN0YXRlMQ4wDAYDVQQKDAVNeU9yZzESMBAG
        A1UEAwwJTXlDQS5zaXRlMB4XDTE4MDkxNzE1MjQyMVoXDTIwMDEzMDE1MjQyMVow
        ...
        FXrgM9jVBGXeL7T/DNfJp5QfRnrQq1/NFWafjORXEo9EPbAGVbPh8LiaEqwraR/K
        cDuNI7supZ33I82VOrI4+5mSMxj+jzSGd2fRAvWEo8E+MpHSpHJt6trGa5ON57vV
        duCWD+f1swpuuzW+rNinrNZZxUQ77j9Vk4oUeVUfL91ZK4k=
        -----END CERTIFICATE-----
      private_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEowIBAAKCAQEA5kNN9ZZK/UssdUeYSajG6xFcjyJDhnPvVHYA0VtgVOq8S/rb
        irVvkI1s00rj+WypHqP4+l/0dDHTiclOpUU5c3pn3vbGaaSGyonOyr5Cbx1X+JZ5
        17b+ah+oEnI5pUDn7chGI1rk56UI5oV1Qps0+bYTetEYTE1DVjGOHl5ERMv2QqZM
        ...
        rMMhAoGBAMmge/JWThffCaponeakJu63DHKz87e2qxcqu25fbo9il1ZpllOD61Zi
        xd0GATICOuPeOUoVUjSuiMtS7B5zjWnmk5+siGeXF1SNJCZ9spgp9rWA/dXqXJRi
        55w7eGyYZSmOg6I7eWvpYpkRll4iFVApMt6KPM72XlyhQOigbGdJ
        -----END RSA PRIVATE KEY-----

24.3 Deploying SUSE Cloud Application Platform with Certificates

Once the certificate-related values have been set in your scf-config-values.yaml, deploy SUSE Cloud Application Platform.

If this is an initial deployment, use helm install to deploy uaa and scf:

  1. Deploy uaa:

    tux > helm install suse/uaa \
    --name susecf-uaa \
    --namespace uaa \
    --values scf-config-values.yaml

    Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

    tux > watch --color 'kubectl get pods --namespace uaa'

    When uaa is successfully deployed, the following is observed:

    • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

    • All other pods have a Running STATUS and a READY value of n/n.

    Press Ctrl-C to exit the watch command.

  2. Pass your uaa secret and certificate to scf:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  3. Deploy scf:

    tux > helm install suse/cf \
    --name susecf-scf \
    --namespace scf \
    --values scf-config-values.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}"

    Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

    tux > watch --color 'kubectl get pods --namespace scf'

    When scf is successfully deployed, the following is observed:

    • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

    • All other pods have a Running STATUS and a READY value of n/n.

    Press Ctrl-C to exit the watch command.

If this is an existing deployment, use helm upgrade to apply the changes to uaa and scf:

  1. Upgrade uaa:

    tux > helm upgrade susecf-uaa suse/uaa \
    --values scf-config-values.yaml
  2. Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

    tux > watch --color 'kubectl get pods --namespace uaa'
  3. Pass your uaa secret and certificate to scf:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  4. Upgrade scf:

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}"
  5. Monitor the deployment progress using the watch command:

    tux > watch --color 'kubectl get pods --namespace scf'

Once all pods are up and running, verify you can successfully set your cluster as the target API endpoint by running the cf api command without using the --skip-ssl-validation option.

tux > cf api https://api.example.com

24.4 Rotating Automatically Generated Secrets

Cloud Application Platform uses a number of automatically generated secrets for internal use. These secrets have a default expiration of 10950 days, which is set through the CERT_EXPIRATION property in the env: section of the scf-config-values.yaml file. If rotation of the secrets is required, increment the value of secrets_generation_counter in the kube: section of the scf-config-values.yaml configuration file (for example, the example scf-config-values.yaml used in this guide), then run helm upgrade.
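
For example, to change the default expiration of generated certificates (the value is in days), the env: section of your scf-config-values.yaml could include:

env:
  # Expiration for generated certificates (in days)
  CERT_EXPIRATION: "10950"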

This example demonstrates rotating the secrets of the scf deployment.

First, update the scf-config-values.yaml file.

kube:
  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 2

Next, perform a helm upgrade to apply the change.

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"

25 Integrating CredHub with SUSE Cloud Application Platform

SUSE Cloud Application Platform supports CredHub integration. You should already have a working CredHub instance and a CredHub service on your cluster; then apply the steps in this chapter to connect SUSE Cloud Application Platform to it.

25.1 Installing the CredHub Client

Start by creating a new directory for the CredHub client on your local workstation, then download and unpack the CredHub client. The following example is for the 2.2.0 Linux release. For other platforms and current releases, see cloudfoundry-incubator/credhub-cli at https://github.com/cloudfoundry-incubator/credhub-cli/releases.

tux > mkdir chclient
tux > cd chclient
tux > wget https://github.com/cloudfoundry-incubator/credhub-cli/releases/download/2.2.0/credhub-linux-2.2.0.tgz
tux > tar zxf credhub-linux-2.2.0.tgz
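
Optionally, verify that the client runs before continuing. This assumes the archive unpacked a credhub binary into the current directory:

tux > ./credhub --version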

25.2 Enabling and Disabling CredHub

Enable CredHub for your deployment using the --set "enable.credhub=true" flag when running helm install or helm upgrade.

First fetch the uaa credentials:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

If this is an initial deployment, use helm install to deploy scf with CredHub enabled:

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.credhub=true"

If this is an existing deployment, use helm upgrade to enable CredHub:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.credhub=true"

To disable CredHub, run helm upgrade with the --set "enable.credhub=false" flag. Be sure to pass your uaa secret and certificate to scf first:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}" \
--set "enable.credhub=false"

25.3 Upgrade Considerations

The following applies to upgrades from SUSE Cloud Application Platform 1.3.1 to SUSE Cloud Application Platform 1.4.0:

If CredHub is enabled on your deployment of SUSE Cloud Application Platform 1.3.1, then you must specify --set "enable.credhub=true" during the upgrade to keep the feature installed. Sizing values that were previously set to a count of 1 to enable the service no longer need to be set explicitly, because 1 is now the minimum setting. The following values in your scf-config-values.yaml file can be removed:

sizing:
  credhub_user:
    count: 1

25.4 Connecting to the CredHub Service

Set environment variables for the CredHub client, your CredHub service location, and Cloud Application Platform namespace. In these guides the example namespace is scf:

tux > CH_CLI=~/.chclient/credhub
tux > CH_SERVICE=https://credhub.example.com
tux > NAMESPACE=scf

Set up the CredHub service location:

tux > SECRET="$(kubectl get secrets --namespace "${NAMESPACE}" | awk '/^secrets-/ { print $1 }')"
tux > CH_SECRET="$(kubectl get secrets --namespace "${NAMESPACE}" "${SECRET}" --output jsonpath="{.data['uaa-clients-credhub-user-cli-secret']}"|base64 --decode)"
tux > CH_CLIENT=credhub_user_cli
tux > echo Service ......@ $CH_SERVICE
tux > echo CH cli Secret @ $CH_SECRET

Set the CredHub target through its Kubernetes service, then log into CredHub:

tux > "${CH_CLI}" api --skip-tls-validation --server "${CH_SERVICE}"
tux > "${CH_CLI}" login --client-name="${CH_CLIENT}" --client-secret="${CH_SECRET}"

Test your new connection by inserting and retrieving some fake credentials:

tux > "${CH_CLI}" set --name FOX --type value --value 'fox over lazy dog'
tux > "${CH_CLI}" set --name DOG --type user --username dog --password fox
tux > "${CH_CLI}" get --name FOX
tux > "${CH_CLI}" get --name DOG

26 Offline Buildpacks

Buildpacks are used to construct the environment needed to run your applications, including any required runtimes or frameworks as well as other dependencies. When you deploy an application, a buildpack can be specified or automatically detected by cycling through all available buildpacks to find one that is applicable. When there is a suitable buildpack for your application, the buildpack will then download any necessary dependencies during the staging process.

An offline, or cached, buildpack packages the runtimes, frameworks, and dependencies needed to run your applications into an archive that is then uploaded to your Cloud Application Platform deployment. When an application is deployed using an offline buildpack, access to the Internet to download dependencies is no longer required. This has the benefit of providing improved staging performance and allows for staging to take place on air-gapped environments.

26.1 Creating an Offline Buildpack

Offline buildpacks can be created using the cf-buildpack-packager-docker tool, which is available as a Docker image. The only requirement to use this tool is a system with Docker support.

Important
Important: Disclaimer

Some Cloud Foundry buildpacks can reference binaries with proprietary or mutually incompatible open source licenses which cannot be distributed together as offline/cached buildpack archives. Operators who wish to package and maintain offline buildpacks will be responsible for any required licensing or export compliance obligations.

For automation purposes, you can use the --accept-external-binaries option to accept this disclaimer without the interactive prompt.

Usage of the tool is as follows:

package [--accept-external-binaries] org [all [stack] | language [tag] [stack]]

Where:

  • org is the GitHub organization hosting the buildpack repositories, such as "cloudfoundry" or "SUSE"

  • A tag cannot be specified when using all as the language because the tag is different for each language

  • tag is not optional if a stack is specified. To specify the latest release, use "" as the tag

  • A maximum of one stack can be specified
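
For example, following the usage above, packaging all SUSE buildpacks for the SUSE Linux Enterprise 15 stack could be done with a command such as the following (using the same Docker image as the Ruby example below):

tux > docker run --interactive --tty --rm -v $PWD:/out splatform/cf-buildpack-packager SUSE all sle15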

The following example demonstrates packaging an offline Ruby buildpack and uploading it to your Cloud Application Platform deployment to use. The packaged buildpack will be a Zip file placed in the current working directory, $PWD.

  1. Build the latest released SUSE Ruby buildpack for the SUSE Linux Enterprise 15 stack:

    tux > docker run --interactive --tty --rm -v $PWD:/out splatform/cf-buildpack-packager SUSE ruby "" sle15
  2. Verify the archive has been created in your current working directory:

    tux > ls
    ruby_buildpack-cached-sle15-v1.7.30.1.zip
  3. Log into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
    tux > cf login -u admin -p password
    tux > cf create-org org
    tux > cf create-space space -o org
    tux > cf target -o org -s space
  4. List the currently available buildpacks:

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack               position   enabled   locked   filename
    staticfile_buildpack    1          true      false    staticfile_buildpack-v1.4.34.1-1.1-1dd6386a.zip
    java_buildpack          2          true      false    java-buildpack-v4.16.1-e638145.zip
    ruby_buildpack          3          true      false    ruby_buildpack-v1.7.26.1-1.1-c2218d66.zip
    nodejs_buildpack        4          true      false    nodejs_buildpack-v1.6.34.1-3.1-c794e433.zip
    go_buildpack            5          true      false    go_buildpack-v1.8.28.1-1.1-7508400b.zip
    python_buildpack        6          true      false    python_buildpack-v1.6.23.1-1.1-99388428.zip
    php_buildpack           7          true      false    php_buildpack-v4.3.63.1-1.1-2515c4f4.zip
    binary_buildpack        8          true      false    binary_buildpack-v1.0.27.1-3.1-dc23dfe2.zip
    dotnet-core_buildpack   9          true      false    dotnet-core-buildpack-v2.0.3.zip
  5. Upload your packaged offline buildpack to your Cloud Application Platform deployment:

    tux > cf create-buildpack ruby_buildpack_cached /tmp/ruby_buildpack-cached-sle15-v1.7.30.1.zip 1 --enable
    Creating buildpack ruby_buildpack_cached...
    OK
    
    Uploading buildpack ruby_buildpack_cached...
    Done uploading
    OK
  6. Verify your buildpack is available:

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack               position   enabled   locked   filename
    ruby_buildpack_cached   1          true      false    ruby_buildpack-cached-sle15-v1.7.30.1.zip
    staticfile_buildpack    2          true      false    staticfile_buildpack-v1.4.34.1-1.1-1dd6386a.zip
    java_buildpack          3          true      false    java-buildpack-v4.16.1-e638145.zip
    ruby_buildpack          4          true      false    ruby_buildpack-v1.7.26.1-1.1-c2218d66.zip
    nodejs_buildpack        5          true      false    nodejs_buildpack-v1.6.34.1-3.1-c794e433.zip
    go_buildpack            6          true      false    go_buildpack-v1.8.28.1-1.1-7508400b.zip
    python_buildpack        7          true      false    python_buildpack-v1.6.23.1-1.1-99388428.zip
    php_buildpack           8          true      false    php_buildpack-v4.3.63.1-1.1-2515c4f4.zip
    binary_buildpack        9          true      false    binary_buildpack-v1.0.27.1-3.1-dc23dfe2.zip
    dotnet-core_buildpack   10         true      false    dotnet-core-buildpack-v2.0.3.zip
  7. Deploy a sample Rails app using the new buildpack:

    tux > git clone https://github.com/scf-samples/12factor
    tux > cd 12factor
    tux > cf push 12factor -b ruby_buildpack_cached
    Note
    Note: Specifying a Buildpack to Use with Your Application

    You can specify which buildpack is used to deploy your application through two methods:

    • Using the -b option during cf push, for example:

      tux > cf push my_application -b my_buildpack
    • Using the buildpacks in your application's manifest.yml:

      ---
      applications:
      - name: my_application
        buildpacks:
          - my_buildpack
Warning
Warning: Deprecation of cflinuxfs2 and sle12 Stacks

cf-deployment 7.11, part of Cloud Application Platform 1.4.1, is the final Cloud Foundry version that supports the cflinuxfs2 stack. The cflinuxfs2 and sle12 stacks are deprecated in favor of cflinuxfs3 and sle15 respectively. Start planning to migrate applications to those stacks for futureproofing, as these stacks will be removed in a future release. The migration procedure is described below.

  • Migrate applications to the new stack using one of the methods listed. Note that both methods will cause application downtime. Downtime can be avoided by following a Blue-Green Deployment strategy. See https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html for details.

    Note that stack association support is available as of cf CLI v6.39.0.

    • Option 1 - Migrating applications using the Stack Auditor plugin.

      Stack Auditor rebuilds the application onto the new stack without a change in the application source code. If you want to move to a new stack with updated code, please follow Option 2 below. For additional information about the Stack Auditor plugin, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html.

      1. Install the Stack Auditor plugin for the cf CLI. For instructions, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html#install.

      2. Identify the stack applications are using. The audit lists all applications in orgs you have access to. To list all applications in your Cloud Application Platform deployment, ensure you are logged in as a user with access to all orgs.

        tux > cf audit-stack

        For each application requiring migration, perform the steps below.

      3. If necessary, switch to the org and space the application is deployed to.

        tux > cf target ORG SPACE
      4. Change the stack to sle15.

        tux > cf change-stack APP_NAME sle15
      5. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf buildpacks
      6. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf delete-buildpack BUILDPACK -s sle12
        
        tux > cf delete-buildpack BUILDPACK -s cflinuxfs2
      7. Remove the sle12 and cflinuxfs2 stacks.

        tux > cf delete-stack sle12
        
        tux > cf delete-stack cflinuxfs2
    • Option 2 - Migrating applications using the cf CLI.

      Perform the following for all orgs and spaces in your Cloud Application Platform deployment. Ensure you are logged in as a user with access to all orgs.

      1. Target an org and space.

        tux > cf target ORG SPACE
      2. Identify the stack each application in the org and space is using.

        tux > cf app APP_NAME
      3. Re-push the app with the sle15 stack, for example by running cf push APP_NAME -s sle15 or by specifying the stack in the application's manifest.

      4. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf buildpacks
      5. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf delete-buildpack BUILDPACK -s sle12
        
        tux > cf delete-buildpack BUILDPACK -s cflinuxfs2
      6. Remove the sle12 and cflinuxfs2 stacks using the CF API. See https://apidocs.cloudfoundry.org/7.11.0/#stacks for details.

        List all stacks, then find the GUIDs of the sle12 and cflinuxfs2 stacks.

        tux > cf curl /v2/stacks

        Delete the sle12 and cflinuxfs2 stacks.

        tux > cf curl -X DELETE /v2/stacks/SLE12_STACK_GUID
        
        tux > cf curl -X DELETE /v2/stacks/CFLINUXFS2_STACK_GUID

27 Custom Application Domains

In a standard SUSE Cloud Foundry deployment, applications will use the same domain as the one configured in your scf-config-values.yaml for SCF. For example, if DOMAIN is set as example.com in your scf-config-values.yaml and you deploy an application called myapp then the application's URL will be myapp.example.com.

This chapter describes the changes required to allow applications to use a separate domain.

27.1 Customizing Application Domains

Begin by adding the following to your scf-config-values.yaml. Replace appdomain.com with the domain to use with your applications:

bosh:
  instance_groups:
  - name: api-group
    jobs:
    - name: cloud_controller_ng
      properties:
        app_domains:
        - appdomain.com

If uaa is deployed, pass your uaa secret and certificate to scf. Otherwise deploy uaa first (see Section 5.9, “Deploy uaa”), then proceed with this step:

tux > SECRET=$(kubectl get pods --namespace uaa \
--output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
--output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

If this is an initial deployment, use helm install to deploy scf:

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

If this is an existing deployment, use helm upgrade to apply the change:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

  • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl-C to exit the watch command.

When the scf deployment is complete, do the following to confirm that custom application domains have been configured correctly.

Run cf curl /v2/info and verify the SCF domain is not appdomain.com:

tux > cf api --skip-ssl-validation https://api.example.com
tux > cf curl /v2/info | grep endpoint

Deploy an application and examine the routes field to verify appdomain.com is being used:

tux > cf login
tux > cf create-org org
tux > cf create-space space -o org
tux > cf target -o org -s space
tux > cf push myapp
Pushing app myapp to org org / space space as admin...
Getting app info...
Creating app with these attributes...
  name:       myapp
  path:       /path/to/myapp
  routes:
+   myapp.appdomain.com

Creating app myapp...
Mapping routes...

...

Waiting for app to start...

name:              myapp
requested state:   started
instances:         1/1
usage:             1G x 1 instances
routes:            myapp.appdomain.com
last uploaded:     Mon 14 Jan 11:08:02 PST 2019
stack:             sle15
buildpack:         ruby
start command:     bundle exec rackup config.ru -p $PORT

     state     since                  cpu    memory       disk          details
#0   running   2019-01-14T19:09:42Z   0.0%   2.7M of 1G   80.6M of 1G

28 Managing Nproc Limits of Pods

Warning
Warning: Do Not Adjust Without Guidance

It is not recommended to change these values without the guidance of SUSE Cloud Application Platform developers. Please contact support for assistance.

Nproc is the maximum number of processes allowed per user. In the case of scf, the nproc value applies to the vcap user. In scf, there are parameters, kube.limits.nproc.soft and kube.limits.nproc.hard, to configure a soft nproc limit and a hard nproc limit for processes spawned by the vcap user in scf pods. By default, the soft limit is 1024 while the hard limit is 2048. The soft and hard limits can be changed to suit your workloads. Note that the limits are applied to all pods.

When configuring the nproc limits, take note that:

  • If the soft limit is set, the hard limit must be set as well.

  • If the hard limit is set, the soft limit must be set as well.

  • The soft limit cannot be greater than the hard limit.

28.1 Configuring and Applying Nproc Limits

To configure the nproc limits, add the following to your scf-config-values.yaml. Replace the example values with limits suitable for your workloads:

kube:
  limits:
    nproc:
      hard: 3072
      soft: 2048

28.1.1 New Deployments

For new SUSE Cloud Application Platform deployments, follow the steps below to deploy SUSE Cloud Application Platform with nproc limits configured:

  1. Deploy uaa:

    tux > helm install suse/uaa \
    --name susecf-uaa \
    --namespace uaa \
    --values scf-config-values.yaml

    Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

    tux > watch --color 'kubectl get pods --namespace uaa'

    When uaa is successfully deployed, the following is observed:

    • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

    • All other pods have a Running STATUS and a READY value of n/n.

    Press Ctrl-C to exit the watch command.

  2. Pass your uaa secret and certificate to scf:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  3. Deploy scf:

    tux > helm install suse/cf \
    --name susecf-scf \
    --namespace scf \
    --values scf-config-values.yaml \
    --set "secrets.UAA_CA_CERT=${CA_CERT}"
  4. Monitor the deployment progress using the watch command:

    tux > watch --color 'kubectl get pods --namespace scf'
  5. Open a shell into any container. The command below opens a shell to the default container in the blobstore-0 pod:

    tux > kubectl exec --stdin --tty blobstore-0 --namespace scf -- env /bin/bash
  6. Use the vcap user identity:

    tux > su vcap
  7. Verify the maximum number of processes for the vcap user matches the limits you set:

    tux > ulimit -u
    
    tux > cat /etc/security/limits.conf | grep nproc

28.1.2 Existing Deployments

For existing SUSE Cloud Application Platform deployments, follow the steps below to redeploy SUSE Cloud Application Platform with nproc limits configured:

  1. Pass your uaa secret and certificate to scf:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  2. Use helm upgrade to apply the change:

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"
  3. Monitor the deployment progress using the watch command:

    tux > watch --color 'kubectl get pods --namespace scf'
  4. Open a shell into any container. The command below opens a shell to the default container in the blobstore-0 pod:

    tux > kubectl exec --stdin --tty blobstore-0 --namespace scf -- env /bin/bash
  5. Use the vcap user identity:

    tux > su vcap
  6. Verify the maximum number of processes for the vcap user matches the limits you set:

    tux > ulimit -u
    
    tux > cat /etc/security/limits.conf | grep nproc

Part IV SUSE Cloud Application Platform User Guide

29 Deploying and Managing Applications with the Cloud Foundry Client

29.1 Using the cf CLI with SUSE Cloud Application Platform

The Cloud Foundry command line interface (cf CLI) is for deploying and managing your applications. You may use it for all the orgs and spaces that you are a member of. Install the client on a workstation for remote administration of your SUSE Cloud Foundry instances.

The complete guide is at Using the Cloud Foundry Command Line Interface, and source code with a demo video is on GitHub at Cloud Foundry CLI.

The following examples demonstrate some of the commonly-used commands. The first task is to log into your new SUSE Cloud Foundry instance. When your installation completes it prints a welcome screen with the information you need to access it.

       NOTES:
    Welcome to your new deployment of SCF.

    The endpoint for use by the `cf` client is
        https://api.example.com

    To target this endpoint run
        cf api --skip-ssl-validation https://api.example.com

    Your administrative credentials are:
        Username: admin
        Password: password

    Please remember, it may take some time for everything to come online.

    You can use
        kubectl get pods --namespace scf

    to spot-check if everything is up and running, or
        watch --color 'kubectl get pods --namespace scf'

    to monitor continuously.

You can display this message anytime with this command:

tux > helm status $(helm list | awk '/cf-([0-9]).([0-9]).*/{print$1}') | \
sed --quiet --expression '/NOTES/,$p'

You need to provide the API endpoint of your SUSE Cloud Application Platform instance to log in. The API endpoint is the DOMAIN value you provided in scf-config-values.yaml, plus the api. prefix, as shown in the welcome screen above. Set your endpoint, and use --skip-ssl-validation when you have self-signed SSL certificates. The login prompt asks for an e-mail address, but you must enter admin instead (you cannot change this to a different username, though you may create additional users). The password is the one you created in scf-config-values.yaml:

tux > cf login --skip-ssl-validation -a https://api.example.com
API endpoint: https://api.example.com

Email> admin

Password>
Authenticating...
OK

Targeted org system

API endpoint:   https://api.example.com (API version: 2.134.0)
User:           admin
Org:            system
Space:          No space targeted, use 'cf target -s SPACE'

cf help displays a list of commands and options. cf help [command] provides information on specific commands.
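
For example, to see the options for cf push:

tux > cf help push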

You may pass in your credentials and set the API endpoint in a single command:

tux > cf login -u admin -p password --skip-ssl-validation -a https://api.example.com

Log out with cf logout.

Change the admin password:

tux > cf passwd
Current Password>
New Password>
Verify Password>
Changing password...
OK
Please log in again

View your current API endpoint, user, org, and space:

tux > cf target

Switch to a different org or space:

tux > cf target -o org
tux > cf target -s space

List all apps in the current space:

tux > cf apps

Query the health and status of a particular app:

tux > cf app appname

View app logs. The first example tails the log of a running app. The --recent option dumps recent logs instead of tailing, which is useful for stopped and crashed apps:

tux > cf logs appname
tux > cf logs --recent appname

Restart all instances of an app:

tux > cf restart appname

Restart a single instance of an app, identified by its index number. The instance is restarted with the same index number:

tux > cf restart-app-instance appname index

After you have set up a service broker (see Chapter 20, Provisioning Services with Minibroker and Chapter 21, Setting Up and Using a Service Broker), create new services:

tux > cf create-service service-name default mydb

Then you may bind a service instance to an app:

tux > cf bind-service appname service-instance

The most-used command is cf push, for pushing new apps and changes to existing apps.

tux > cf push new-app -b buildpack

If you need to debug your application or run one-off tasks, start an SSH session into your application container.

tux > cf ssh appname

When the SSH connection is established, run the following to have the environment match that of the application and its associated buildpack.

tux > /tmp/lifecycle/shell

Part V Troubleshooting

30 Troubleshooting

Cloud stacks are complex, and debugging deployment issues often requires digging through multiple layers to find the information you need. Remember that the SUSE Cloud Foundry releases must be deployed in the correct order, and that each release must deploy successfully, with no failed pods, before deploying the next release.

30.1 Using Supportconfig

If you ever need to request support, or just want to generate detailed system information and logs, use the supportconfig utility. Run it with no options to collect basic system information, and also cluster logs including Docker, etcd, flannel, and Velum. supportconfig may give you all the information you need.

supportconfig -h prints the options. Read the "Gathering System Information for Support" chapter in any SUSE Linux Enterprise Administration Guide to learn more.
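
As a minimal example, run the utility with no options as root to collect the default data set:

tux > sudo supportconfig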

30.2 Deployment Is Taking Too Long

A deployment step seems to take too long, or you see that some pods are not in a ready state hours after all the others are ready, or a pod shows a lot of restarts. This example shows not-ready pods many hours after the others have become ready:

tux > kubectl get pods --namespace scf
NAME                     READY STATUS    RESTARTS  AGE
router-3137013061-wlhxb  0/1   Running   0         16h
routing-api-0            0/1   Running   0         16h

The Running status means the pod is bound to a node and all of its containers have been created. However, it is not Ready, which means it is not ready to service requests. Use kubectl to print a detailed description of pod events and status:

tux > kubectl describe pod --namespace scf router-3137013061-wlhxb

This prints a lot of information, including IP addresses, routine events, warnings, and errors. You should find the reason for the failure in this output.
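
If the describe output is long, it can also help to list recent events in the namespace, sorted by creation time. This is a supplementary check, not part of the procedure above:

tux > kubectl get events --namespace scf --sort-by='.metadata.creationTimestamp'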

Important
Important
Some Pods Show as Not Running

Some uaa and scf pods perform only deployment tasks, and it is normal for them to show as unready and Completed after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz   0/1       Completed

tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1       Completed
post-deployment-setup-1-hnpln   0/1       Completed
Some Pods Terminate and Restart during Deployment

When monitoring the status of a deployment, pods can be observed transitioning from a Running state to a Terminating state, then returning to a Running state again.

If a RESTARTS count of 0 is maintained during this process, this is normal behavior and not due to failing pods. It is not necessary to stop the deployment. During deployment, pods modify annotations on themselves via the StatefulSet pod spec. In order to get the correct annotations on the running pod, it is stopped and restarted. Under normal circumstances, this behavior should only result in a pod restarting once.
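
To confirm this, keep an eye on the RESTARTS column while the deployment progresses, for example:

tux > watch --color 'kubectl get pods --namespace scf'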

30.3 Deleting and Rebuilding a Deployment

There may be times when you want to delete and rebuild a deployment, for example when there are errors in your scf-config-values.yaml file, you wish to test configuration changes, or a deployment fails and you want to try again. This has five steps: delete the StatefulSets in the namespace associated with the release or releases you want to re-deploy, delete the release or releases, delete the namespace, then re-create the namespace and re-deploy the release.

The namespace is also deleted as part of the process because the SCF and UAA namespaces contain generated secrets which Helm is not aware of and will not remove when a release is deleted. When deleting a release, busy systems may encounter timeouts. Deleting the StatefulSets first makes the delete operation more likely to succeed. Using the delete statefulsets command requires kubectl v1.9.6 or newer.

Use helm to see your releases:

tux > helm list
NAME            REVISION  UPDATED                  STATUS    CHART           NAMESPACE
susecf-console  1         Tue Aug 14 11:53:28 2018 DEPLOYED  console-2.4.0   stratos
susecf-scf      1         Tue Aug 14 10:58:16 2018 DEPLOYED  cf-2.17.1       scf
susecf-uaa      1         Tue Aug 14 10:49:30 2018 DEPLOYED  uaa-2.17.1      uaa

This example deletes the susecf-uaa release and uaa namespace:

tux > kubectl delete statefulsets --all --namespace uaa
statefulset "mysql" deleted
statefulset "uaa" deleted

tux > helm delete --purge susecf-uaa
release "susecf-uaa" deleted

tux > kubectl delete namespace uaa
namespace "uaa" deleted

Repeat the same process for the susecf-scf release and scf namespace:

tux > kubectl delete statefulsets --all --namespace scf
statefulset "adapter" deleted
statefulset "api-group" deleted
...

tux > helm delete --purge susecf-scf
release "susecf-scf" deleted

tux > kubectl delete namespace scf
namespace "scf" deleted

Then you can start over. Be sure to create new release and namespace names.

30.4 Querying with Kubectl

You can safely query with kubectl to get information about resources inside your Kubernetes cluster. kubectl cluster-info dump | tee clusterinfo.txt outputs a large amount of information about the Kubernetes master and cluster services to a text file.

The following commands give more targeted information about your cluster.

  • List all cluster resources:

    tux > kubectl get all --all-namespaces
  • List all of your running pods:

    tux > kubectl get pods --all-namespaces
  • List all of your running pods, their internal IP addresses, and which Kubernetes nodes they are running on:

    tux > kubectl get pods --all-namespaces --output wide
  • See all pods, including those with Completed or Failed statuses:

    tux > kubectl get pods --show-all --all-namespaces
  • List pods in one namespace:

    tux > kubectl get pods --namespace scf
  • Get detailed information about one pod:

    tux > kubectl describe --namespace scf po/diego-cell-0
  • Read the log file of a pod:

    tux > kubectl logs --namespace scf po/diego-cell-0
  • List all Kubernetes nodes, then print detailed information about a single node:

    tux > kubectl get nodes
    tux > kubectl describe node 6a2752b6fab54bb889029f60de6fa4d5.infra.caasp.local
  • List all containers in all namespaces, formatted for readability:

    tux > kubectl get pods --all-namespaces --output jsonpath="{..image}" |\
    tr -s '[[:space:]]' '\n' |\
    sort |\
    uniq -c
  • These two commands check node capacities, to verify that there are enough resources for the pods:

    tux > kubectl get nodes --output yaml | grep '\sname\|cpu\|memory'
    tux > kubectl get nodes --output json | \
    jq '.items[] | {name: .metadata.name, cap: .status.capacity}'

A Appendix

A.1 Manual Configuration of Pod Security Policies

SUSE Cloud Application Platform 1.3.1 introduces built-in support for Pod Security Policies (PSPs), which are provided via Helm charts and are set up automatically, unlike older releases which require manual PSP setup. SUSE CaaS Platform and Microsoft AKS both require PSPs for Cloud Application Platform to operate correctly. This section provides instructions for configuring and applying the appropriate PSPs to older Cloud Application Platform releases.

See the upstream documentation at https://kubernetes.io/docs/concepts/policy/pod-security-policy/, https://docs.cloudfoundry.org/concepts/roles.html, and https://docs.cloudfoundry.org/uaa/identity-providers.html#id-flow for more information on understanding and using PSPs.

Copy the following example into cap-psp-rbac.yaml:

---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: suse.cap.psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  # Privileged
  #default in suse.caasp.psp.unprivileged
  #privileged: false
  privileged: true
  # Volumes and File Systems
  volumes:
    # Kubernetes Pseudo Volume Types
    - configMap
    - secret
    - emptyDir
    - downwardAPI
    - projected
    - persistentVolumeClaim
    # Networked Storage
    - nfs
    - rbd
    - cephFS
    - glusterfs
    - fc
    - iscsi
    # Cloud Volumes
    - cinder
    - gcePersistentDisk
    - awsElasticBlockStore
    - azureDisk
    - azureFile
    - vsphereVolume
  allowedFlexVolumes: []
  # hostPath volumes are not allowed; pathPrefix must still be specified
  allowedHostPaths:
    - pathPrefix: /opt/kubernetes-hostpath-volumes
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  #default in suse.caasp.psp.unprivileged
  #allowPrivilegeEscalation: false
  allowPrivilegeEscalation: true
  #default in suse.caasp.psp.unprivileged
  #defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities:
  - SYS_RESOURCE
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: false
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: suse:cap:psp
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['suse.cap.psp']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cap:clusterrole
roleRef:
  kind: ClusterRole
  name: suse:cap:psp
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: default
  namespace: uaa
- kind: ServiceAccount
  name: default
  namespace: scf
- kind: ServiceAccount
  name: default
  namespace: stratos
- kind: ServiceAccount
  name: default-privileged
  namespace: scf
- kind: ServiceAccount
  name: node-reader
  namespace: scf

Apply it to your cluster with kubectl:

tux > kubectl create --filename cap-psp-rbac.yaml
podsecuritypolicy.extensions "suse.cap.psp" created
clusterrole.rbac.authorization.k8s.io "suse:cap:psp" created
clusterrolebinding.rbac.authorization.k8s.io "cap:clusterrole" created

Verify that the new PSPs exist by running the kubectl get psp command to list them. Then continue by deploying UAA and SCF. Ensure that your scf-config-values.yaml file specifies the name of your PSP in the kube: section. These settings grant privileged status only to a limited subset of roles.

kube:
  psp:
    privileged: "suse.cap.psp"
Tip
Tip

Note that the example cap-psp-rbac.yaml file sets the name of the PSPs, which in the previous examples is suse.cap.psp.

A.1.1 Using Custom Pod Security Policies

When using a custom PSP, your scf-config-values.yaml file requires the SYS_RESOURCE capability to be added to the following roles:

sizing:
  cc_uploader:
    capabilities: ["SYS_RESOURCE"]
  diego_api:
    capabilities: ["SYS_RESOURCE"]
  diego_brain:
    capabilities: ["SYS_RESOURCE"]
  diego_ssh:
    capabilities: ["SYS_RESOURCE"]
  nats:
    capabilities: ["SYS_RESOURCE"]
  router:
    capabilities: ["SYS_RESOURCE"]
  routing_api:
    capabilities: ["SYS_RESOURCE"]

A.2 Complete suse/uaa values.yaml File

This is the complete output of helm inspect suse/uaa for the current SUSE Cloud Application Platform 1.4.1 release.

apiVersion: 2.17.1+cf7.11.0.4.ga6c993eb
appVersion: 1.4.1
description: A Helm chart for SUSE UAA
name: uaa
version: 2.17.1

---
---
kube:
  auth: "rbac"
  external_ips: []

  # Whether HostPath volume mounts are available
  hostpath_available: false

  limits:
    nproc:
      hard: ""
      soft: ""
  organization: "cap"
  psp:
    default: ~
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 1

  storage_class:
    persistent: "persistent"
    shared: "shared"
config:
  # Flag to activate high-availability mode
  HA: false

  # Global memory configuration
  memory:
    # Flag to activate memory requests
    requests: false

    # Flag to activate memory limits
    limits: false

  # Global CPU configuration
  cpu:
    # Flag to activate cpu requests
    requests: false

    # Flag to activate cpu limits
    limits: false

  # Flag to specify whether to add Istio related annotations and labels
  use_istio: false

bosh:
  instance_groups: []
services:
  loadbalanced: false
secrets:
  # PEM-encoded CA certificate used to sign the TLS certificate used by all
  # components to secure their communications.
  # This value uses a generated default.
  INTERNAL_CA_CERT: ~

  # PEM-encoded CA key.
  INTERNAL_CA_CERT_KEY: ~

  # PEM-encoded JWT certificate.
  # This value uses a generated default.
  JWT_SIGNING_CERT: ~

  # PEM-encoded JWT signing key.
  JWT_SIGNING_CERT_KEY: ~

  # Password used for the monit API.
  # This value uses a generated default.
  MONIT_PASSWORD: ~

  # The password for the MySQL server admin user.
  # This value uses a generated default.
  MYSQL_ADMIN_PASSWORD: ~

  # The password for the cluster logger health user.
  # This value uses a generated default.
  MYSQL_CLUSTER_HEALTH_PASSWORD: ~

  # The password used to contact the sidecar endpoints via Basic Auth.
  # This value uses a generated default.
  MYSQL_GALERA_HEALTHCHECK_ENDPOINT_PASSWORD: ~

  # The password for Basic Auth used to secure the MySQL proxy API.
  # This value uses a generated default.
  MYSQL_PROXY_ADMIN_PASSWORD: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SAML_SERVICEPROVIDER_CERT: ~

  # PEM-encoded key.
  SAML_SERVICEPROVIDER_CERT_KEY: ~

  # The password for access to the UAA database.
  # This value uses a generated default.
  UAADB_PASSWORD: ~

  # The password of the admin client - a client named admin with uaa.admin as an
  # authority.
  UAA_ADMIN_CLIENT_SECRET: ~

  # The server's ssl certificate. The default is a self-signed certificate and
  # should always be replaced for production deployments.
  # This value uses a generated default.
  UAA_SERVER_CERT: ~

  # The server's ssl private key. Only passphrase-less keys are supported.
  UAA_SERVER_CERT_KEY: ~

env:
  # Expiration for generated certificates (in days)
  CERT_EXPIRATION: "10950"

  # Base domain name of the UAA endpoint; `uaa.${DOMAIN}` must be correctly
  # configured to point to this UAA instance.
  DOMAIN: ~

  KUBERNETES_CLUSTER_DOMAIN: ~

  # The cluster's log level: off, fatal, error, warn, info, debug, debug1,
  # debug2.
  LOG_LEVEL: "info"

  # The log destination to talk to. This has to point to a syslog server.
  SCF_LOG_HOST: ~

  # The port used by rsyslog to talk to the log destination. It defaults to 514,
  # the standard port of syslog.
  SCF_LOG_PORT: "514"

  # The protocol used by rsyslog to talk to the log destination. The allowed
  # values are tcp, and udp. The default is tcp.
  SCF_LOG_PROTOCOL: "tcp"

  # If true, authenticate against the SMTP server using AUTH command. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_AUTH: "false"

  # SMTP from address, for password reset emails etc.
  SMTP_FROM_ADDRESS: ~

  # SMTP server host address, for password reset emails etc.
  SMTP_HOST: ~

  # SMTP server password, for password reset emails etc.
  SMTP_PASSWORD: ~

  # SMTP server port, for password reset emails etc.
  SMTP_PORT: "25"

  # If true, send STARTTLS command before logging in to SMTP server. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_STARTTLS: "false"

  # SMTP server username, for password reset emails etc.
  SMTP_USER: ~

  # The TCP port to report as the public port for the UAA server (root zone).
  UAA_PUBLIC_PORT: "2793"

# The sizing section contains configuration to change each individual instance
# group. Due to limitations on the allowable names, any dashes ("-") in the
# instance group names are replaced with underscores ("_").
sizing:
  # The mysql instance group contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: mysql and proxy
  mysql:
    # Node affinity rules can be specified here
    affinity: {}

    # The mysql instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      mysql_data: 20

    # Unit [MiB]
    memory:
      request: 1400
      limit: ~

  # The secret-generation instance group contains the following jobs:
  #
  # - generate-secrets: This job will generate the secrets for the cluster
  secret_generation:
    # Node affinity rules can be specified here
    affinity: {}

    # The secret_generation instance group cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The uaa instance group contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - wait-for-database: This is a pre-start job to delay starting the rest of
  #   the role until a database connection is ready. Currently it only checks
  #   that a response can be obtained from the server, and not that it responds
  #   intelligently.
  #
  #
  # - uaa: The UAA is the identity management service for Cloud Foundry. It's
  #   primary role is as an OAuth2 provider, issuing tokens for client
  #   applications to use when they act on behalf of Cloud Foundry users. It can
  #   also authenticate users with their Cloud Foundry credentials, and can act
  #   as an SSO service using those credentials (or others). It has endpoints
  #   for managing user accounts and for registering OAuth2 clients, as well as
  #   various other management functions.
  uaa:
    # Node affinity rules can be specified here
    affinity: {}

    # The uaa instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2100
      limit: ~

enable: {}
ingress:
  # ingress.annotations allows specifying custom ingress annotations that gets
  # merged to the default annotations.
  annotations: {}

  # ingress.enabled enables ingress support - working ingress controller
  # necessary.
  enabled: false

  # ingress.tls.crt and ingress.tls.key, when specified, are used by the TLS
  # secret for the Ingress resource.
  tls: {}

A.3 Complete suse/scf values.yaml File

This is the complete output of helm inspect suse/cf for the current SUSE Cloud Application Platform 1.4.1 release.

apiVersion: 2.17.1+cf7.11.0.4.ga6c993eb
appVersion: 1.4.1
description: A Helm chart for SUSE Cloud Foundry
name: cf
version: 2.17.1

---
---
kube:
  auth: "rbac"
  external_ips: []

  # Whether HostPath volume mounts are available
  hostpath_available: false

  limits:
    nproc:
      hard: ""
      soft: ""
  organization: "cap"
  psp:
    default: ~
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 1

  storage_class:
    persistent: "persistent"
    shared: "shared"
config:
  # Flag to activate high-availability mode
  HA: false

  # Global memory configuration
  memory:
    # Flag to activate memory requests
    requests: false

    # Flag to activate memory limits
    limits: false

  # Global CPU configuration
  cpu:
    # Flag to activate cpu requests
    requests: false

    # Flag to activate cpu limits
    limits: false

  # Flag to specify whether to add Istio related annotations and labels
  use_istio: false

bosh:
  instance_groups: []
services:
  loadbalanced: false
secrets:
  # PEM encoded RSA private key used to identify host.
  # This value uses a generated default.
  APP_SSH_KEY: ~

  # MD5 fingerprint of the host key of the SSH proxy that brokers connections to
  # application instances.
  APP_SSH_KEY_FINGERPRINT: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  AUCTIONEER_REP_CERT: ~

  # PEM-encoded key
  AUCTIONEER_REP_CERT_KEY: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  AUCTIONEER_SERVER_CERT: ~

  # PEM-encoded server key
  AUCTIONEER_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler API public https server.
  # This includes the Autoscaler ApiServer and the Service Broker.
  # This value uses a generated default.
  AUTOSCALER_ASAPI_PUBLIC_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler API public https server. This
  # includes the Autoscaler ApiServer and the Service Broker.
  AUTOSCALER_ASAPI_PUBLIC_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler API https server. This
  # includes the Autoscaler ApiServer and the Service Broker.
  # This value uses a generated default.
  AUTOSCALER_ASAPI_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler API https server. This includes the
  # Autoscaler ApiServer and the Service Broker.
  AUTOSCALER_ASAPI_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate for clients to connect to the Autoscaler
  # Metrics. This includes the Autoscaler Metrics Collector and Event Generator.
  # This value uses a generated default.
  AUTOSCALER_ASMETRICS_CLIENT_CERT: ~

  # A PEM-encoded TLS key for clients to connect to the Autoscaler Metrics. This
  # includes the Autoscaler Metrics Collector and Event Generator.
  AUTOSCALER_ASMETRICS_CLIENT_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler Metrics https server. This
  # includes the Autoscaler Metrics Collector.
  # This value uses a generated default.
  AUTOSCALER_ASMETRICS_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler Metrics https server. This includes
  # the Autoscaler Metrics Collector.
  AUTOSCALER_ASMETRICS_SERVER_CERT_KEY: ~

  # The password for the Autoscaler postgres database.
  # This value uses a generated default.
  AUTOSCALER_DB_PASSWORD: ~

  # A PEM-encoded TLS certificate for clients to connect to the Autoscaler
  # Scaling Engine.
  # This value uses a generated default.
  AUTOSCALER_SCALING_ENGINE_CLIENT_CERT: ~

  # A PEM-encoded TLS key for clients to connect to the Autoscaler Scaling
  # Engine.
  AUTOSCALER_SCALING_ENGINE_CLIENT_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler Scaling Engine https server.
  # This value uses a generated default.
  AUTOSCALER_SCALING_ENGINE_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler Scaling Engine https server.
  AUTOSCALER_SCALING_ENGINE_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate for clients to connect to the Autoscaler
  # Scheduler.
  # This value uses a generated default.
  AUTOSCALER_SCHEDULER_CLIENT_CERT: ~

  # A PEM-encoded TLS key for clients to connect to the Autoscaler Scheduler.
  AUTOSCALER_SCHEDULER_CLIENT_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler Scheduler https server.
  # This value uses a generated default.
  AUTOSCALER_SCHEDULER_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler Scheduler https server.
  AUTOSCALER_SCHEDULER_SERVER_CERT_KEY: ~

  # the uaa client secret used by Autoscaler.
  # This value uses a generated default.
  AUTOSCALER_UAA_CLIENT_SECRET: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_AUCTIONEER_CERT: ~

  # PEM-encoded key
  BBS_AUCTIONEER_CERT_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_CLIENT_CRT: ~

  # PEM-encoded client key.
  BBS_CLIENT_CRT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_REP_CERT: ~

  # PEM-encoded key
  BBS_REP_CERT_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_SERVER_CRT: ~

  # PEM-encoded client key.
  BBS_SERVER_CRT_KEY: ~

  # The basic auth password that the Cloud Controller uses to connect to the
  # admin endpoint on webdav.
  # This value uses a generated default.
  BITS_ADMIN_USERS_PASSWORD: ~

  # This is the key secret Bits-Service uses and clients should use to generate
  # signed URLs.
  # This value uses a generated default.
  BITS_SERVICE_SECRET: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BITS_SERVICE_SSL_CERT: ~

  # PEM-encoded client key.
  BITS_SERVICE_SSL_CERT_KEY: ~

  # The basic auth password that Cloud Controller uses to connect to the
  # blobstore server. Auto-generated if not provided. Passwords must be
  # alphanumeric (URL-safe).
  # This value uses a generated default.
  BLOBSTORE_PASSWORD: ~

  # The secret used for signing URLs between Cloud Controller and blobstore.
  # This value uses a generated default.
  BLOBSTORE_SECURE_LINK: ~

  # The PEM-encoded certificate (optionally as a certificate chain) for serving
  # blobs over TLS/SSL.
  # This value uses a generated default.
  BLOBSTORE_TLS_CERT: ~

  # The PEM-encoded private key for signing TLS/SSL traffic.
  BLOBSTORE_TLS_CERT_KEY: ~

  # The password for the bulk api.
  # This value uses a generated default.
  BULK_API_PASSWORD: ~

  # A map of labels and encryption keys
  CC_DB_ENCRYPTION_KEYS: "~"

  # The PEM-encoded certificate for secure TLS communication over external
  # endpoints.
  # This value uses a generated default.
  CC_PUBLIC_TLS_CERT: ~

  # The PEM-encoded key for secure TLS communication over external endpoints.
  CC_PUBLIC_TLS_CERT_KEY: ~

  # The PEM-encoded certificate for internal cloud controller traffic.
  # This value uses a generated default.
  CC_SERVER_CRT: ~

  # The PEM-encoded private key for internal cloud controller traffic.
  CC_SERVER_CRT_KEY: ~

  # The PEM-encoded certificate for internal cloud controller uploader traffic.
  # This value uses a generated default.
  CC_UPLOADER_CRT: ~

  # The PEM-encoded private key for internal cloud controller uploader traffic.
  CC_UPLOADER_CRT_KEY: ~

  # PEM-encoded broker server certificate.
  # This value uses a generated default.
  CF_USB_BROKER_SERVER_CERT: ~

  # PEM-encoded broker server key.
  CF_USB_BROKER_SERVER_CERT_KEY: ~

  # The password for access to the Universal Service Broker.
  # This value uses a generated default.
  # Example: "password"
  CF_USB_PASSWORD: ~

  # The password for the cluster administrator.
  CLUSTER_ADMIN_PASSWORD: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  CREDHUB_SERVER_CERT: ~

  # PEM-encoded server key
  CREDHUB_SERVER_CERT_KEY: ~

  # PEM-encoded client certificate
  # This value uses a generated default.
  DIEGO_CLIENT_CERT: ~

  # PEM-encoded client key
  DIEGO_CLIENT_CERT_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  DOPPLER_CERT: ~

  # PEM-encoded key.
  DOPPLER_CERT_KEY: ~

  # Basic auth password for access to the Cloud Controller's internal API.
  # This value uses a generated default.
  INTERNAL_API_PASSWORD: ~

  # PEM-encoded CA certificate used to sign the TLS certificate used by all
  # components to secure their communications.
  # This value uses a generated default.
  INTERNAL_CA_CERT: ~

  # PEM-encoded CA key.
  INTERNAL_CA_CERT_KEY: ~

  # PEM-encoded JWT certificate.
  # This value uses a generated default.
  JWT_SIGNING_CERT: ~

  # PEM-encoded JWT signing key.
  JWT_SIGNING_CERT_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  LOGGREGATOR_AGENT_CERT: ~

  # PEM-encoded key.
  LOGGREGATOR_AGENT_CERT_KEY: ~

  # PEM-encoded client certificate for loggregator mutual authentication
  # This value uses a generated default.
  LOGGREGATOR_CLIENT_CERT: ~

  # PEM-encoded client key for loggregator mutual authentication
  LOGGREGATOR_CLIENT_CERT_KEY: ~

  # PEM-encoded client certificate for loggregator forwarder authentication
  # This value uses a generated default.
  LOGGREGATOR_FORWARD_CERT: ~

  # PEM-encoded client key for loggregator forwarder authentication
  LOGGREGATOR_FORWARD_CERT_KEY: ~

  # TLS cert for outgoing dropsonde connection
  # This value uses a generated default.
  LOGGREGATOR_OUTGOING_CERT: ~

  # TLS key for outgoing dropsonde connection
  LOGGREGATOR_OUTGOING_CERT_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  LOG_CACHE_CERT: ~

  # PEM-encoded key.
  LOG_CACHE_CERT_KEY: ~

  # The TLS cert for the auth proxy.
  # This value uses a generated default.
  LOG_CACHE_CF_AUTH_PROXY_EXTERNAL_CERT: ~

  # The TLS key for the auth proxy.
  LOG_CACHE_CF_AUTH_PROXY_EXTERNAL_CERT_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  LOG_CACHE_TO_LOGGREGATOR_AGENT_CERT: ~

  # PEM-encoded key.
  LOG_CACHE_TO_LOGGREGATOR_AGENT_CERT_KEY: ~

  # Password used for the monit API.
  # This value uses a generated default.
  MONIT_PASSWORD: ~

  # The password for the MySQL server admin user.
  # This value uses a generated default.
  MYSQL_ADMIN_PASSWORD: ~

  # The password for access to the Cloud Controller database.
  # This value uses a generated default.
  MYSQL_CCDB_ROLE_PASSWORD: ~

  # The password for access to the usb config database.
  # This value uses a generated default.
  # Example: "password"
  MYSQL_CF_USB_PASSWORD: ~

  # The password for the cluster logger health user.
  # This value uses a generated default.
  MYSQL_CLUSTER_HEALTH_PASSWORD: ~

  # The password for access to the credhub-user database.
  # This value uses a generated default.
  MYSQL_CREDHUB_USER_PASSWORD: ~

  # Database password for the diego locket service.
  # This value uses a generated default.
  MYSQL_DIEGO_LOCKET_PASSWORD: ~

  # The password for access to MySQL by diego.
  # This value uses a generated default.
  MYSQL_DIEGO_PASSWORD: ~

  # Password used to authenticate to the MySQL Galera healthcheck endpoint.
  # This value uses a generated default.
  MYSQL_GALERA_HEALTHCHECK_ENDPOINT_PASSWORD: ~

  # Database password for storing broker state for the Persi NFS Broker
  # This value uses a generated default.
  MYSQL_PERSI_NFS_PASSWORD: ~

  # The password for Basic Auth used to secure the MySQL proxy API.
  # This value uses a generated default.
  MYSQL_PROXY_ADMIN_PASSWORD: ~

  # The password for access to MySQL by the routing-api
  # This value uses a generated default.
  MYSQL_ROUTING_API_PASSWORD: ~

  # The password for access to NATS.
  # This value uses a generated default.
  NATS_PASSWORD: ~

  # Basic auth password to verify on incoming Service Broker requests
  # This value uses a generated default.
  PERSI_NFS_BROKER_PASSWORD: ~

  # LDAP service account password (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PASSWORD: "-"

  # PEM-encoded server certificate
  # This value uses a generated default.
  REP_SERVER_CERT: ~

  # PEM-encoded server key
  REP_SERVER_CERT_KEY: ~

  # Support for route services is disabled when no value is configured. A robust
  # passphrase is recommended.
  # This value uses a generated default.
  ROUTER_SERVICES_SECRET: ~

  # The public ssl cert for ssl termination. Will be ignored if ROUTER_TLS_PEM
  # is set.
  # This value uses a generated default.
  ROUTER_SSL_CERT: ~

  # The private ssl key for ssl termination. Will be ignored if ROUTER_TLS_PEM
  # is set.
  ROUTER_SSL_CERT_KEY: ~

  # Password for HTTP basic auth to the varz/status endpoint.
  # This value uses a generated default.
  ROUTER_STATUS_PASSWORD: ~

  # Array of private keys and certificates used for TLS handshakes with
  # downstream clients. Each element in the array is an object containing fields
  # 'private_key' and 'cert_chain', each of which supports a PEM block. This
  # setting overrides ROUTER_SSL_CERT and ROUTER_SSL_KEY.
  # Example:
  #   - cert_chain: |
  #       -----BEGIN CERTIFICATE-----
  #       -----END CERTIFICATE-----
  #       -----BEGIN CERTIFICATE-----
  #       -----END CERTIFICATE-----
  #     private_key: |
  #       -----BEGIN RSA PRIVATE KEY-----
  #       -----END RSA PRIVATE KEY-----
  ROUTER_TLS_PEM: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SAML_SERVICEPROVIDER_CERT: ~

  # PEM-encoded key.
  SAML_SERVICEPROVIDER_CERT_KEY: ~

  # The password for access to the uploader of staged droplets.
  # This value uses a generated default.
  STAGING_UPLOAD_PASSWORD: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_ADAPT_CERT: ~

  # PEM-encoded key.
  SYSLOG_ADAPT_CERT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_RLP_CERT: ~

  # PEM-encoded key.
  SYSLOG_RLP_CERT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_SCHED_CERT: ~

  # PEM-encoded key.
  SYSLOG_SCHED_CERT_KEY: ~

  # PEM-encoded client certificate for internal communication between the cloud
  # controller and TPS.
  # This value uses a generated default.
  TPS_CC_CLIENT_CRT: ~

  # PEM-encoded client key for internal communication between the cloud
  # controller and TPS.
  TPS_CC_CLIENT_CRT_KEY: ~

  # PEM-encoded certificate for communication with the traffic controller of the
  # log infrastructure.
  # This value uses a generated default.
  TRAFFICCONTROLLER_CERT: ~

  # PEM-encoded key for communication with the traffic controller of the log
  # infrastructure.
  TRAFFICCONTROLLER_CERT_KEY: ~

  # The password for access to the UAA database.
  # This value uses a generated default.
  UAADB_PASSWORD: ~

  # The password of the admin client - a client named admin with uaa.admin as an
  # authority.
  UAA_ADMIN_CLIENT_SECRET: ~

  # The CA certificate for UAA
  UAA_CA_CERT: ~

  # The password for UAA access by the Routing API.
  # This value uses a generated default.
  UAA_CLIENTS_CC_ROUTING_SECRET: ~

  # Used for third party service dashboard SSO.
  # This value uses a generated default.
  UAA_CLIENTS_CC_SERVICE_DASHBOARDS_CLIENT_SECRET: ~

  # Used for fetching service key values from CredHub.
  # This value uses a generated default.
  UAA_CLIENTS_CC_SERVICE_KEY_CLIENT_SECRET: ~

  # Client secret for the CF smoke tests job
  # This value uses a generated default.
  UAA_CLIENTS_CF_SMOKE_TESTS_CLIENT_SECRET: ~

  # The password for UAA access by the Universal Service Broker.
  # This value uses a generated default.
  UAA_CLIENTS_CF_USB_SECRET: ~

  # The password for UAA access by the Cloud Controller for fetching usernames.
  # This value uses a generated default.
  UAA_CLIENTS_CLOUD_CONTROLLER_USERNAME_LOOKUP_SECRET: ~

  # The password for UAA access by the client for the user-accessible credhub
  # This value uses a generated default.
  UAA_CLIENTS_CREDHUB_USER_CLI_SECRET: ~

  # The password for UAA access by the SSH proxy.
  # This value uses a generated default.
  UAA_CLIENTS_DIEGO_SSH_PROXY_SECRET: ~

  # The password for UAA access by doppler.
  # This value uses a generated default.
  UAA_CLIENTS_DOPPLER_SECRET: ~

  # The password for UAA access by the gorouter.
  # This value uses a generated default.
  UAA_CLIENTS_GOROUTER_SECRET: ~

  # The OAuth client secret used by the routing-api.
  # This value uses a generated default.
  UAA_CLIENTS_ROUTING_API_CLIENT_SECRET: ~

  # The password for UAA access by the task creating the cluster administrator
  # user
  # This value uses a generated default.
  UAA_CLIENTS_SCF_AUTO_CONFIG_SECRET: ~

  # The password for UAA access by the TCP emitter.
  # This value uses a generated default.
  UAA_CLIENTS_TCP_EMITTER_SECRET: ~

  # The password for UAA access by the TCP router.
  # This value uses a generated default.
  UAA_CLIENTS_TCP_ROUTER_SECRET: ~

  # The server's ssl certificate. The default is a self-signed certificate and
  # should always be replaced for production deployments.
  # This value uses a generated default.
  UAA_SERVER_CERT: ~

  # The server's ssl private key. Only passphrase-less keys are supported.
  UAA_SERVER_CERT_KEY: ~

env:
  # The number of times Ginkgo will run a CATS test before treating it as a
  # failure. Individual failed runs will still be reported in the test output.
  ACCEPTANCE_TEST_FLAKE_ATTEMPTS: "3"

  # The number of parallel test executors to spawn for Cloud Foundry acceptance
  # tests. The larger the number the higher the stress on the system.
  ACCEPTANCE_TEST_NODES: "4"

  # List of domains (including scheme) from which Cross-Origin requests will be
  # accepted, a * can be used as a wildcard for any part of a domain.
  ALLOWED_CORS_DOMAINS: "[]"

  # Allow users to change the value of the app-level allow_ssh attribute.
  ALLOW_APP_SSH_ACCESS: "true"

  # Extra token expiry time while uploading big apps, in seconds.
  APP_TOKEN_UPLOAD_GRACE_PERIOD: "1200"

  # The db address for the Autoscaler postgres database.
  AUTOSCALER_DB_ADDRESS: "autoscaler-postgres-postgres.((KUBERNETES_NAMESPACE)).svc.((KUBERNETES_CLUSTER_DOMAIN))"

  # The tcp port of postgres database serves on
  AUTOSCALER_DB_PORT: "5432"

  # The role name of autoscaler postgres database
  AUTOSCALER_DB_ROLE_NAME: "postgres"

  # The name of the metadata label to query on worker nodes to get AZ
  # information.
  AZ_LABEL_NAME: "failure-domain.beta.kubernetes.io/zone"

  # List of allow / deny rules for the blobstore internal server. Will be
  # followed by 'deny all'. Each entry must be followed by a semicolon.
  BLOBSTORE_ACCESS_RULES: "allow 10.0.0.0/8; allow 172.16.0.0/12; allow 192.168.0.0/16;"

  # Maximal allowed file size for upload to blobstore, in megabytes.
  BLOBSTORE_MAX_UPLOAD_SIZE: "5000"

  # For requests to service brokers, this is the HTTP (open and read) timeout
  # setting, in seconds.
  BROKER_CLIENT_TIMEOUT_SECONDS: "70"

  # The set of CAT test suites to run. If not specified it falls back to a
  # hardwired set of suites.
  CATS_SUITES: ~

  # The key used to encrypt entries in the CC database
  CC_DB_CURRENT_KEY_LABEL: ""

  # URI for a CDN to use for buildpack downloads.
  CDN_URI: ""

  # Expiration for generated certificates (in days)
  CERT_EXPIRATION: "10950"

  # The Oauth2 authorities available to the cluster administrator.
  CLUSTER_ADMIN_AUTHORITIES: "scim.write,scim.read,openid,cloud_controller.admin,clients.read,clients.write,doppler.firehose,routing.router_groups.read,routing.router_groups.write"

  # 'build' attribute in the /v2/info endpoint
  CLUSTER_BUILD: "2.17.1"

  # 'description' attribute in the /v2/info endpoint
  CLUSTER_DESCRIPTION: "SUSE Cloud Foundry"

  # 'name' attribute in the /v2/info endpoint
  CLUSTER_NAME: "SCF"

  # 'version' attribute in the /v2/info endpoint
  CLUSTER_VERSION: "2"

  # The standard amount of disk (in MB) given to an application when not
  # overridden by the user via manifest, command line, etc.
  DEFAULT_APP_DISK_IN_MB: "1024"

  # The standard amount of memory (in MB) given to an application when not
  # overridden by the user via manifest, command line, etc.
  DEFAULT_APP_MEMORY: "1024"

  # If set apps pushed to spaces that allow SSH access will have SSH enabled by
  # default.
  DEFAULT_APP_SSH_ACCESS: "true"

  # The default stack to use if no custom stack is specified by an app.
  DEFAULT_STACK: "sle15"

  # The container disk capacity the cell should manage. If this capacity is
  # larger than the actual disk quota of the cell component, over-provisioning
  # will occur.
  DIEGO_CELL_DISK_CAPACITY_MB: "auto"

  # The memory capacity the cell should manage. If this capacity is larger than
  # the actual memory of the cell component, over-provisioning will occur.
  DIEGO_CELL_MEMORY_CAPACITY_MB: "auto"

  # Maximum network transmission unit length in bytes for application
  # containers.
  DIEGO_CELL_NETWORK_MTU: "1400"

  # A CIDR subnet mask specifying the range of subnets available to be assigned
  # to containers.
  DIEGO_CELL_SUBNET: "10.38.0.0/16"

  # Disable external buildpacks. Only admin buildpacks and system buildpacks
  # will be available to users.
  DISABLE_CUSTOM_BUILDPACKS: "false"

  # Base domain of the SCF cluster.
  # Example: "my-scf-cluster.com"
  DOMAIN: ~

  # The number of versions of an application to keep. You will be able to
  # rollback to this amount of versions.
  DROPLET_MAX_STAGED_STORED: "5"

  # The docker image used by Eirini to register the image registry CA cert with
  # Docker, on each Kubernetes node
  EIRINI_CERT_COPIER_IMAGE: "splatform/eirini-cert-copier:1.0.0.4.gd8e7208"

  # The docker image used by Eirini to consume Kubernetes container logs
  EIRINI_FLUENTD_IMAGE: "eirini/loggregator-fluentd:0.1.0"

  # Docker Image used for staging apps deployed using Eirini
  EIRINI_IMAGE: "eirini/recipe:ci-24.0.0"

  # Address of Kubernetes' Heapster installation, used for reading Cloud Foundry
  # app metrics.
  EIRINI_KUBE_HEAPSTER_ADDRESS: "http://heapster.kube-system/apis/metrics/v1alpha1"

  # The namespace used by Eirini for deploying applications.
  EIRINI_KUBE_NAMESPACE: "eirini"

  # By default, Cloud Foundry does not enable Cloud Controller request logging.
  # To enable this feature, you must set this property to "true". You can learn
  # more about the format of the logs here
  # https://docs.cloudfoundry.org/loggregator/cc-uaa-logging.html#cc
  ENABLE_SECURITY_EVENT_LOGGING: "false"

  # Enables setting the X-Forwarded-Proto header if SSL termination happened
  # upstream and the header value was set incorrectly. When this property is set
  # to true, the gorouter sets the header X-Forwarded-Proto to https. When this
  # value set to false, the gorouter sets the header X-Forwarded-Proto to the
  # protocol of the incoming request.
  FORCE_FORWARDED_PROTO_AS_HTTPS: "false"

  # AppArmor profile name for garden-runc; set this to empty string to disable
  # AppArmor support
  GARDEN_APPARMOR_PROFILE: "garden-default"

  # URL pointing to the Docker registry used for fetching Docker images. If not
  # set, the Docker service default is used.
  GARDEN_DOCKER_REGISTRY: "registry-1.docker.io"

  # Override DNS servers to be used in containers; defaults to the same as the
  # host.
  GARDEN_LINUX_DNS_SERVER: ""

  # The filesystem driver to use (btrfs or overlay-xfs).
  GARDEN_ROOTFS_DRIVER: "btrfs"

  # Location of the proxy to use for secure web access.
  HTTPS_PROXY: ~

  # Location of the proxy to use for regular web access.
  HTTP_PROXY: ~

  # A comma-separated whitelist of insecure Docker registries in the form of
  # '<HOSTNAME|IP>:PORT'. Each registry must be quoted separately.
  #
  # Example: "\"docker-registry.example.com:80\", \"hello.example.org:443\""
  INSECURE_DOCKER_REGISTRIES: ""

  KUBERNETES_CLUSTER_DOMAIN: ~

  # The cluster's log level: off, fatal, error, warn, info, debug, debug1,
  # debug2.
  LOG_LEVEL: "info"

  # The maximum amount of disk a user can request for an application via
  # manifest, command line, etc., in MB. See also DEFAULT_APP_DISK_IN_MB for the
  # standard amount.
  MAX_APP_DISK_IN_MB: "2048"

  # Maximum health check timeout that can be set for an app, in seconds.
  MAX_HEALTH_CHECK_TIMEOUT: "180"

  # The time allowed for the MySQL server to respond to healthcheck queries, in
  # milliseconds.
  MYSQL_PROXY_HEALTHCHECK_TIMEOUT: "30000"

  # Sets the maximum allowed size of the client request body, specified in the
  # “Content-Length” request header field, in megabytes. If the size in a
  # request exceeds the configured value, the 413 (Request Entity Too Large)
  # error is returned to the client. Please be aware that browsers cannot
  # correctly display this error. Setting size to 0 disables checking of client
  # request body size. This limits application uploads, buildpack uploads, etc.
  NGINX_MAX_REQUEST_BODY_SIZE: "2048"

  # Comma separated list of IP addresses and domains which should not be
  # directed through a proxy, if any.
  NO_PROXY: ~

  # Comma separated list of white-listed options that may be set during create
  # or bind operations.
  # Example:
  # "uid,gid,allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,fsname,username,password"
  PERSI_NFS_ALLOWED_OPTIONS: "uid,gid,auto_cache,username,password"

  # Comma separated list of default values for nfs mount options. If a default
  # is specified with an option not included in PERSI_NFS_ALLOWED_OPTIONS, then
  # this default value will be set and it won't be overridable.
  PERSI_NFS_DEFAULT_OPTIONS: ~

  # Comma separated list of white-listed options that may be accepted in the
  # mount_config options. Note a specific 'sloppy_mount:true' volume option
  # tells the driver to ignore non-white-listed options, while a
  # 'sloppy_mount:false' tells the driver to fail fast instead when receiving a
  # non-white-listed option.
  #
  # Example:
  # "allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,sloppy_mount,fsname"
  PERSI_NFS_DRIVER_ALLOWED_IN_MOUNT: "auto_cache"

  # Comma separated list of white-listed options that may be configured in
  # the mount_config.source URL query params.
  # Example: "uid,gid,auto-traverse-mounts,dircache"
  PERSI_NFS_DRIVER_ALLOWED_IN_SOURCE: "uid,gid"

  # Comma separated list of default values for options that may be configured in
  # the mount_config options, formatted as 'option:default'. If an option is not
  # specified in the volume mount, or the option is not white-listed, then the
  # specified default value will be used instead.
  #
  # Example:
  # "allow_root:false,nfs_uid:2000,nfs_gid:2000,auto_cache:true,sloppy_mount:true"
  PERSI_NFS_DRIVER_DEFAULT_IN_MOUNT: "auto_cache:true"

  # Comma separated list of default values for options in the source URL query
  # params, formatted as 'option:default'. If an option is not specified in the
  # volume mount, or the option is not white-listed, then the specified default
  # value will be applied.
  PERSI_NFS_DRIVER_DEFAULT_IN_SOURCE: ~

  # Disable Persi NFS driver
  PERSI_NFS_DRIVER_DISABLE: "false"

  # LDAP server host name or ip address (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_HOST: ""

  # LDAP server port (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PORT: "389"

  # LDAP server protocol (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PROTOCOL: "tcp"

  # LDAP service account user name (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_USER: ""

  # LDAP fqdn for user records we will search against when looking up user uids
  # (required for LDAP integration only)
  # Example: "cn=Users,dc=corp,dc=test,dc=com"
  PERSI_NFS_DRIVER_LDAP_USER_FQDN: ""

  # The name of the metadata label to query on worker nodes to get placement tag
  # information, also known as isolation segments. When set, the cells will
  # query their worker node for placement information and inject the result into
  # cloudfoundry via the KUBE_PZ parameter. When left at the default, no custom
  # placement processing is done.
  PZ_LABEL_NAME: ""

  # Certificates to add to the rootfs trust store. Multiple certs are possible by
  # concatenating their definitions into one big block of text.
  ROOTFS_TRUSTED_CERTS: ""

  # The algorithm used by the router to distribute requests for a route across
  # backends. Supported values are round-robin and least-connection.
  ROUTER_BALANCING_ALGORITHM: "round-robin"

  # How to handle client certificates. Supported values are none, request, or
  # require. See
  # https://docs.cloudfoundry.org/adminguide/securing-traffic.html#gorouter_mutual_auth
  # for more information.
  ROUTER_CLIENT_CERT_VALIDATION: "request"

  # How to handle the x-forwarded-client-cert (XFCC) HTTP header. Supported
  # values are always_forward, forward, and sanitize_set. See
  # https://docs.cloudfoundry.org/concepts/http-routing.html for more
  # information.
  ROUTER_FORWARDED_CLIENT_CERT: "always_forward"

  # The log destination to talk to. This has to point to a syslog server.
  SCF_LOG_HOST: ~

  # The port used by rsyslog to talk to the log destination. It defaults to 514,
  # the standard port of syslog.
  SCF_LOG_PORT: "514"

  # The protocol used by rsyslog to talk to the log destination. The allowed
  # values are tcp, and udp. The default is tcp.
  SCF_LOG_PROTOCOL: "tcp"

  # If true, authenticate against the SMTP server using AUTH command. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_AUTH: "false"

  # SMTP from address, for password reset emails etc.
  SMTP_FROM_ADDRESS: ~

  # SMTP server host address, for password reset emails etc.
  SMTP_HOST: ~

  # SMTP server password, for password reset emails etc.
  SMTP_PASSWORD: ~

  # SMTP server port, for password reset emails etc.
  SMTP_PORT: "25"

  # If true, send STARTTLS command before logging in to SMTP server. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_STARTTLS: "false"

  # SMTP server username, for password reset emails etc.
  SMTP_USER: ~

  # Timeout for staging an app, in seconds.
  STAGING_TIMEOUT: "900"

  # Support contact information for the cluster
  SUPPORT_ADDRESS: "https://scc.suse.com"

  # The number of times Ginkgo will run a SITS test before treating it as a
  # failure. Individual failed runs will still be reported in the test output.
  SYNC_INTEGRATION_TESTS_FLAKE_ATTEMPTS: "3"

  # Regex for which SITS tests the test runner should focus on executing.
  SYNC_INTEGRATION_TESTS_FOCUS: ~

  # The number of parallel test executors to spawn for Cloud Foundry sync
  # integration tests.
  SYNC_INTEGRATION_TESTS_NODES: "4"

  # Regex for which SITS tests the test runner should skip.
  SYNC_INTEGRATION_TESTS_SKIP: ~

  # Whether the output of the sync integration tests should be verbose or not.
  SYNC_INTEGRATION_TESTS_VERBOSE: "false"

  # TCP routing domain of the SCF cluster; only used for testing;
  # Example: "tcp.my-scf-cluster.com"
  TCP_DOMAIN: ~

  # Concatenation of trusted CA certificates to be made available on the cell.
  TRUSTED_CERTS: ~

  # The host name of the UAA server (root zone)
  UAA_HOST: ~

  # The tcp port the UAA server (root zone) listens on for requests.
  UAA_PORT: "2793"

  # The TCP port to report as the public port for the UAA server (root zone).
  UAA_PUBLIC_PORT: "2793"

  # Whether or not to use privileged containers for buildpack based
  # applications. Containers with a docker-image-based rootfs will continue to
  # always be unprivileged.
  USE_DIEGO_PRIVILEGED_CONTAINERS: "false"

  # Whether or not to use privileged containers for staging tasks.
  USE_STAGER_PRIVILEGED_CONTAINERS: "false"

# The sizing section contains configuration to change each individual instance
# group. Due to limitations on the allowable names, any dashes ("-") in the
# instance group names are replaced with underscores ("_").
sizing:
  # The adapter instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: adapter and bpm
  adapter:
    # Node affinity rules can be specified here
    affinity: {}

    # The adapter instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The api-group instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # - cloud_controller_ng: The Cloud Controller provides the primary Cloud Foundry
  #   API that is used by the CF CLI. The Cloud Controller uses a database to keep
  #   tables for organizations, spaces, apps, services, service instances, user
  #   roles, and more. Typically multiple instances of Cloud Controller are load
  #   balanced.
  #
  # - route_registrar: Used for registering routes
  #
  # Also: bpm, statsd_injector, go-buildpack, binary-buildpack,
  # nodejs-buildpack, ruby-buildpack, php-buildpack, python-buildpack,
  # staticfile-buildpack, nginx-buildpack, java-buildpack, and
  # dotnet-core-buildpack
  api_group:
    # Node affinity rules can be specified here
    affinity: {}

    # The api_group instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 3800
      limit: ~

  # The autoscaler-actors instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: scheduler, scalingengine, operator, and bpm
  autoscaler_actors:
    # Node affinity rules can be specified here
    affinity: {}

    # The autoscaler_actors instance group can be enabled by the autoscaler
    # feature.
    # It can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2350
      limit: ~

  # The autoscaler-api instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: apiserver and bpm
  autoscaler_api:
    # Node affinity rules can be specified here
    affinity: {}

    # The autoscaler_api instance group can be enabled by the autoscaler
    # feature.
    # It can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The autoscaler-metrics instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: metricscollector, eventgenerator, and bpm
  autoscaler_metrics:
    # Node affinity rules can be specified here
    affinity: {}

    # The autoscaler_metrics instance group can be enabled by the autoscaler
    # feature.
    # It can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 1024
      limit: ~

  # The autoscaler-postgres instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - postgres: The Postgres server provides a single instance Postgres database
  #   that can be used with the Cloud Controller or the UAA. It does not provide
  #   highly-available configuration.
  autoscaler_postgres:
    # Node affinity rules can be specified here
    affinity: {}

    # The autoscaler_postgres instance group can be enabled by the autoscaler
    # feature.
    # It cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      postgres_data: 5

    # Unit [MiB]
    memory:
      request: 1024
      limit: ~

  # The bits instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - route_registrar: Used for registering routes
  #
  # - eirinifs: This job copies the eirinifs to a desired location
  #
  # Also: statsd_injector, bpm, and bits-service
  bits:
    # Node affinity rules can be specified here
    affinity: {}

    # The bits instance group cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The blobstore instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - route_registrar: Used for registering routes
  #
  # Also: blobstore and bpm
  blobstore:
    # Node affinity rules can be specified here
    affinity: {}

    # The blobstore instance group cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      blobstore_data: 50

    # Unit [MiB]
    memory:
      request: 500
      limit: ~

  # The cc-clock instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - wait-for-api: Wait for API to be ready before starting any jobs
  #
  # - cloud_controller_clock: The Cloud Controller Clock runs the Diego Sync job
  #   to keep the actual state of running processes in Diego in sync with Cloud
  #   Controller's desired state. Additionally, the Clock schedules periodic
  #   clean up jobs to prune app usage events, audit events, failed jobs, and
  #   more.
  #
  # Also: statsd_injector and bpm
  cc_clock:
    # Node affinity rules can be specified here
    affinity: {}

    # The cc_clock instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 750
      limit: ~

  # The cc-uploader instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: tps, cc_uploader, and bpm
  cc_uploader:
    # Node affinity rules can be specified here
    affinity: {}

    # The cc_uploader instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The cc-worker instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - cloud_controller_worker: Cloud Controller worker processes background
  #   tasks submitted via the API.
  #
  # Also: bpm
  cc_worker:
    # Node affinity rules can be specified here
    affinity: {}

    # The cc_worker instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 750
      limit: ~

  # The cf-usb-group instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: cf-usb and bpm
  cf_usb_group:
    # Node affinity rules can be specified here
    affinity: {}

    # The cf_usb_group instance group is enabled by the cf_usb feature.
    # It can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The credhub-user instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - route_registrar: Used for registering routes
  #
  # Also: credhub and bpm
  credhub_user:
    # Node affinity rules can be specified here
    affinity: {}

    # The credhub_user instance group can be enabled by the credhub feature.
    # It cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2000
      limit: ~

  # The diego-api instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: bbs and cfdot
  diego_api:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego_api instance group can be disabled by the eirini feature.
    # It can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The diego-brain instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: auctioneer and cfdot
  diego_brain:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego_brain instance group can be disabled by the eirini feature.
    # It can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The diego-cell instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - get-kubectl: This job exists only to ensure the presence of the kubectl
  #   binary in the role referencing it.
  #
  # - wait-for-uaa: Wait for UAA to be ready before starting any jobs
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: rep, cfdot, route_emitter, garden, groot-btrfs,
  # cflinuxfs2-rootfs-setup, cflinuxfs3-rootfs-setup, cf-sle12-setup,
  # sle15-rootfs-setup, nfsv3driver, and mapfs
  diego_cell:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego_cell instance group can be disabled by the eirini feature.
    # It can scale between 1 and 254 instances.
    # For high availability it needs at least 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    disk_sizes:
      grootfs_data: 50

    # Unit [MiB]
    memory:
      request: 2800
      limit: ~

  # The diego-ssh instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: ssh_proxy and file_server
  diego_ssh:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego_ssh instance group can be disabled by the eirini feature.
    # It can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The doppler instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: log-cache-gateway, log-cache-nozzle, log-cache-cf-auth-proxy,
  # log-cache-expvar-forwarder, log-cache, doppler, and bpm
  doppler:
    # Node affinity rules can be specified here
    affinity: {}

    # The doppler instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 410
      limit: ~

  # The eirini instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: opi
  eirini:
    # Node affinity rules can be specified here
    affinity: {}

    # The eirini instance group can be enabled by the eirini feature.
    # It cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The locket instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: locket
  locket:
    # Node affinity rules can be specified here
    affinity: {}

    # The locket instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The log-api instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: loggregator_trafficcontroller, reverse_log_proxy, and bpm
  log_api:
    # Node affinity rules can be specified here
    affinity: {}

    # The log_api instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The log-cache-scheduler instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - log-cache-scheduler-properties: Dummy BOSH job used to host parameters
  #   that are used in SCF patches
  #
  # Also: log-cache-scheduler, log-cache-expvar-forwarder, and bpm
  log_cache_scheduler:
    # Node affinity rules can be specified here
    affinity: {}

    # The log_cache_scheduler instance group can scale between 1 and 65535
    # instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 410
      limit: ~

  # The loggregator-agent instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: loggr-expvar-forwarder, loggregator_agent, and bpm
  loggregator_agent:
    # Node affinity rules can be specified here
    affinity: {}

    # The loggregator_agent instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: ~
      limit: ~

    # Unit [MiB]
    memory:
      request: ~
      limit: ~

  # The mysql instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: mysql and proxy
  mysql:
    # Node affinity rules can be specified here
    affinity: {}

    # The mysql instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      mysql_data: 20

    # Unit [MiB]
    memory:
      request: 2500
      limit: ~

  # The nats instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - nats: The NATS server provides a publish-subscribe messaging system for
  #   the Cloud Controller, the DEA, HM9000, and other Cloud Foundry components.
  #
  # Also: bpm
  nats:
    # Node affinity rules can be specified here
    affinity: {}

    # The nats instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The nfs-broker instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: nfsbroker
  nfs_broker:
    # Node affinity rules can be specified here
    affinity: {}

    # The nfs_broker instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The post-deployment-setup instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - uaa-create-user: Create the initial user in UAA
  #
  # - configure-scf: Uses the cf CLI to configure SCF once it's online (things
  #   like proxy settings, service brokers, etc.)
  post_deployment_setup:
    # Node affinity rules can be specified here
    affinity: {}

    # The post_deployment_setup instance group cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The router instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - gorouter: Gorouter maintains a dynamic routing table based on updates
  #   received from NATS and (when enabled) the Routing API. This routing table
  #   maps URLs to backends. The router finds the URL in the routing table that
  #   most closely matches the host header of the request and load balances
  #   across the associated backends.
  #
  # Also: bpm
  router:
    # Node affinity rules can be specified here
    affinity: {}

    # The router instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The routing-api instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: bpm and routing-api
  routing_api:
    # Node affinity rules can be specified here
    affinity: {}

    # The routing_api instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The secret-generation instance group contains the following jobs:
  #
  # - generate-secrets: This job will generate the secrets for the cluster
  secret_generation:
    # Node affinity rules can be specified here
    affinity: {}

    # The secret_generation instance group cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The syslog-scheduler instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: scheduler and bpm
  syslog_scheduler:
    # Node affinity rules can be specified here
    affinity: {}

    # The syslog_scheduler instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The tcp-router instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - wait-for-uaa: Wait for UAA to be ready before starting any jobs
  #
  # Also: tcp_router and bpm
  tcp_router:
    # Node affinity rules can be specified here
    affinity: {}

    # The tcp_router instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

    ports:
      tcp_route:
        count: 9

  # The uaa instance group contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - wait-for-database: This is a pre-start job to delay starting the rest of
  #   the role until a database connection is ready. Currently it only checks
  #   that a response can be obtained from the server, and not that it responds
  #   intelligently.
  #
  #
  # - uaa: The UAA is the identity management service for Cloud Foundry. Its
  #   primary role is as an OAuth2 provider, issuing tokens for client
  #   applications to use when they act on behalf of Cloud Foundry users. It can
  #   also authenticate users with their Cloud Foundry credentials, and can act
  #   as an SSO service using those credentials (or others). It has endpoints
  #   for managing user accounts and for registering OAuth2 clients, as well as
  #   various other management functions.
  uaa:
    # Node affinity rules can be specified here
    affinity: {}

    # The uaa instance group can be enabled by the uaa feature.
    # It can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2100
      limit: ~

enable:
  # The autoscaler feature enables these instance groups: autoscaler_postgres,
  # autoscaler_api, autoscaler_metrics, and autoscaler_actors
  autoscaler: false

  # The cf_usb feature enables these instance groups: cf_usb_group
  cf_usb: true

  # The credhub feature enables these instance groups: credhub_user
  credhub: false

  # The eirini feature enables these instance groups: eirini
  # It disables these instance groups: diego_api, diego_brain, diego_ssh, and
  # diego_cell
  eirini: false

  # The uaa feature enables these instance groups: uaa
  uaa: false

ingress:
  # ingress.annotations allows specifying custom ingress annotations that get
  # merged into the default annotations.
  annotations: {}

  # ingress.enabled enables ingress support - working ingress controller
  # necessary.
  enabled: false

  # ingress.tls.crt and ingress.tls.key, when specified, are used by the TLS
  # secret for the Ingress resource.
  tls: {}
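
As with the uaa chart, a production deployment overrides only a subset of these defaults through scf-config-values.yaml rather than editing the chart. The fragment below is a sketch, not a recommended production configuration; the release name susecf-scf is an assumption, and the exact install and upgrade commands are covered in the deployment chapters:

# Hypothetical excerpt of scf-config-values.yaml
config:
  HA: true                  # activate high-availability mode
sizing:
  diego_cell:
    count: 3                # diego_cell needs at least 3 instances for HA
enable:
  autoscaler: true          # adds the autoscaler_* instance groups

# Apply to an existing deployment (release name is an assumption):
#   helm upgrade susecf-scf suse/cf --values scf-config-values.yaml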