SUSE Cloud Application Platform 1.3.1

Deployment, Administration, and User Guides

Introducing SUSE Cloud Application Platform, a software platform for cloud-native application deployment based on SUSE Cloud Foundry and Kubernetes.

Authors: Carla Schroder and Billy Tat
Publication Date: March 08, 2019
About This Guide
Required Background
Available Documentation
Feedback
Documentation Conventions
About the Making of This Documentation
I Overview of SUSE Cloud Application Platform
1 About SUSE Cloud Application Platform
1.1 New in Version 1.3.1
1.2 SUSE Cloud Application Platform Overview
1.3 Minimum Requirements
1.4 SUSE Cloud Application Platform Architecture
2 Running SUSE Cloud Application Platform on non-SUSE CaaS Platform Kubernetes Systems
2.1 Kubernetes Requirements
II Deploying SUSE Cloud Application Platform
3 Deployment and Administration Notes
3.1 README first
3.2 Not running/Completed Pods
3.3 Namespaces
3.4 DNS management
3.5 Releases and Helm chart versions
4 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform
4.1 Prerequisites
4.2 Pod Security Policy
4.3 Choose Storage Class
4.4 Test Storage Class
4.5 Configure the SUSE Cloud Application Platform Production Deployment
4.6 Deploy with Helm
4.7 Add the Kubernetes charts repository
4.8 Copy SUSE Enterprise Storage Secret
4.9 Deploy uaa
4.10 Deploy scf
5 Installing the Stratos Web Console
5.1 Install Stratos with Helm
5.2 Connecting Kubernetes
5.3 Stratos Metrics
6 SUSE Cloud Application Platform High Availability
6.1 Example High Availability Configuration
6.2 Handling Custom Availability Zone Information
7 LDAP Integration
7.1 Prerequisites
7.2 Example LDAP Integration
8 Preparing Microsoft Azure for SUSE Cloud Application Platform
8.1 Prerequisites
8.2 Create Resource Group and AKS Instance
8.3 Create Tiller Service Account
8.4 Pod Security Policies
8.5 Enable Swap Accounting
8.6 Deploy SUSE Cloud Application Platform with a Load Balancer
8.7 Configuring and Testing the Native Microsoft AKS Service Broker
9 Deploying SUSE Cloud Application Platform on Amazon EKS
9.1 Prerequisites
9.2 IAM Requirements for EKS
9.3 The Helm CLI and Tiller
9.4 Default Storage Class
9.5 Security Group rules
9.6 DNS Configuration
9.7 Deployment Configuration
9.8 Deploying Cloud Application Platform
9.9 Add the Kubernetes charts repository
9.10 Deploy uaa
9.11 Deploy scf
9.12 Deploying and Using the AWS Service Broker
10 Installing SUSE Cloud Application Platform on OpenStack
10.1 Prerequisites
10.2 Create a New OpenStack Project
10.3 Deploy SUSE Cloud Application Platform
10.4 Bootstrap SUSE Cloud Application Platform
10.5 Growing the Root Filesystem
III SUSE Cloud Application Platform Administration
11 Upgrading SUSE Cloud Application Platform
11.1 Upgrading SUSE Cloud Application Platform
11.2 Installing Skipped Releases
12 Configuration Changes
12.1 Configuration Change Example
12.2 Other Examples
13 Managing Passwords
13.1 Password Management with the Cloud Foundry Client
13.2 Changing User Passwords with Stratos
14 Cloud Controller Database Secret Rotation
14.1 Tables with Encrypted Information
15 Backup and Restore
15.1 Backup and restore using cf-plugin-backup
15.2 Disaster recovery in scf through raw data backup and restore
16 Provisioning Services with Minibroker
16.1 Deploy Minibroker
16.2 Setting up the environment for Minibroker usage
16.3 Using Minibroker with Applications
17 Setting up and Using a Service Broker
17.1 Prerequisites
17.2 Deploying on CaaS Platform 3
17.3 Configuring the MySQL Deployment
17.4 Deploying the MySQL Chart
17.5 Create and Bind a MySQL Service
17.6 Deploying the PostgreSQL Chart
17.7 Removing Service Broker Sidecar Deployments
17.8 Upgrade Notes
18 App-AutoScaler
18.1 Prerequisites
18.2 Enabling the App-AutoScaler Service
18.3 Using the App-AutoScaler Service
18.4 Policies
19 Logging
19.1 Logging to an External Syslog Server
19.2 Log Levels
20 Managing Certificates
20.1 Certificate Characteristics
20.2 Deploying Custom Certificates
20.3 Rotating Automatically Generated Secrets
21 Integrating CredHub with SUSE Cloud Application Platform
21.1 Installing the CredHub Client
21.2 Enabling Credhub
21.3 Connecting to the CredHub Service
22 Offline Buildpacks
22.1 Creating an Offline Buildpack
23 Custom Application Domains
23.1 Customizing Application Domains
IV SUSE Cloud Application Platform User Guide
24 Deploying and Managing Applications with the Cloud Foundry Client
24.1 Using the cf CLI with SUSE Cloud Application Platform
V Troubleshooting
25 Troubleshooting
25.1 Using Supportconfig
25.2 Deployment is Taking Too Long
25.3 Deleting and Rebuilding a Deployment
25.4 Querying with Kubectl
A Appendix
A.1 Manual Configuration of Pod Security Policies
A.2 Complete suse/uaa values.yaml file
A.3 Complete suse/scf values.yaml file
B GNU Licenses
B.1 GNU Free Documentation License

Copyright © 2006– 2019 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

SUSE Cloud Application Platform is a software platform for cloud-native application development, based on Cloud Foundry, with additional supporting services and components. The core of the platform is SUSE Cloud Foundry, a Cloud Foundry distribution for Kubernetes which runs on SUSE Linux Enterprise containers.

SUSE Cloud Foundry is designed to run on any Kubernetes cluster. This guide describes how to deploy it on SUSE Container as a Service (CaaS) Platform 3.0, Microsoft AKS, and Amazon EKS.

1 Required Background

To keep the scope of these guidelines manageable, certain technical assumptions have been made:

  • You have some computer experience and are familiar with common technical terms.

  • You are familiar with the documentation for your system and the network on which it runs.

  • You have a basic understanding of Linux systems.

2 Available Documentation

We provide HTML and PDF versions of our books in different languages. Documentation for our products is available at http://www.suse.com/documentation/, where you can also find the latest updates and browse or download the documentation in various formats.

The following documentation is available for this product:

Deployment, Administration, and User Guides

The SUSE Cloud Application Platform guide is a comprehensive guide that provides deployment, administration, and user instructions, along with the architecture and minimum system requirements.

3 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

4 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • x86_64 This paragraph is only relevant for the AMD64/Intel 64 architecture. The arrows mark the beginning and the end of the text block.

    System z, POWER This paragraph is only relevant for the architectures z Systems and POWER. The arrows mark the beginning and the end of the text block.

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

5 About the Making of This Documentation

This documentation is written in SUSEDoc, a subset of DocBook 5. The XML source files were validated by jing (see https://code.google.com/p/jing-trang/), processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted through FOP from Apache Software Foundation. The open source tools and the environment used to build this documentation are provided by the DocBook Authoring and Publishing Suite (DAPS). The project's home page can be found at https://github.com/openSUSE/daps.

The XML source code of this documentation can be found at https://github.com/SUSE/doc-cap.

Part I Overview of SUSE Cloud Application Platform

1 About SUSE Cloud Application Platform

1.1 New in Version 1.3.1

All product manuals for SUSE Cloud Application Platform 1.x are available at the SUSE Cloud Application Platform 1 documentation page.

Tip
Tip: Read the Release Notes

Make sure to review the release notes for SUSE Cloud Application Platform published at https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/.

1.2 SUSE Cloud Application Platform Overview

SUSE Cloud Application Platform is a software platform for cloud-native application deployment based on SUSE Cloud Foundry and Kubernetes.

SUSE Cloud Application Platform describes the complete software stack, including the operating system, Kubernetes, and SUSE Cloud Foundry.

SUSE Cloud Application Platform is comprised of the SUSE Linux Enterprise builds of the uaa (User Account and Authentication) server, SUSE Cloud Foundry, the Stratos Web user interface, and Stratos Metrics (scheduled for the SUSE Cloud Application Platform 1.3 release).

The Cloud Foundry code base provides the basic functionality. SUSE Cloud Foundry differentiates itself from other Cloud Foundry distributions by running in Linux containers managed by Kubernetes, rather than virtual machines managed with BOSH, for greater fault tolerance and lower memory use.

All Docker images for the SUSE Linux Enterprise builds are hosted on registry.suse.com. These are the commercially-supported images. (Community-supported images for openSUSE are hosted on Docker Hub.) Product manuals on SUSE Doc: SUSE Cloud Application Platform 1 refer to the commercially-supported SUSE Linux Enterprise version.

SUSE Cloud Foundry is designed to run on any Kubernetes cluster. This guide describes how to deploy it on SUSE Container as a Service (CaaS) Platform 3.0, Microsoft Azure, and Amazon EKS.

SUSE Cloud Application Platform serves different but complementary purposes for operators and application developers.

For operators, the platform:

  • Is easy to install, manage, and maintain

  • Is secure by design

  • Is fault tolerant and self-healing

  • Offers high availability for critical components

  • Uses industry-standard components

  • Avoids single vendor lock-in

For developers, the platform:

  • Allocates computing resources on demand via API or Web interface

  • Offers users a choice of language and Web framework

  • Gives access to databases and other data services

  • Emits and aggregates application log streams

  • Tracks resource usage for users and groups

  • Makes the software development workflow more efficient

The principal interface and API for deploying applications to SUSE Cloud Application Platform is SUSE Cloud Foundry. Most Cloud Foundry distributions run on virtual machines managed by BOSH. SUSE Cloud Foundry runs in SUSE Linux Enterprise containers managed by Kubernetes. Containerizing the components of the platform itself has these advantages:

  • Improves fault tolerance. Kubernetes monitors the health of all containers, and automatically restarts faulty containers faster than virtual machines can be restarted or replaced.

  • Reduces physical memory overhead. SUSE Cloud Foundry components deployed in containers consume substantially less memory, as host-level operations are shared between containers by Kubernetes.

SUSE Cloud Foundry packages upstream Cloud Foundry BOSH releases to produce containers and configurations which are deployed to Kubernetes clusters using Helm.

1.3 Minimum Requirements

This guide details the steps for deploying SUSE Cloud Foundry on SUSE CaaS Platform, and on supported Kubernetes environments such as Microsoft Azure Kubernetes Service (AKS), and Amazon Elastic Container Service for Kubernetes (EKS). SUSE CaaS Platform is a specialized application development and hosting platform built on the SUSE MicroOS container host operating system, container orchestration with Kubernetes, and Salt for automating installation and configuration.

Important
Important: Required Knowledge

Installing and administering SUSE Cloud Application Platform requires knowledge of Linux, Docker, Kubernetes, and your Kubernetes platform (for example SUSE CaaS Platform, AKS, EKS, OpenStack). You must plan resource allocation and network architecture by taking into account the requirements of your Kubernetes platform in addition to SUSE Cloud Foundry requirements. SUSE Cloud Foundry is a discrete component in your cloud stack, but it still requires knowledge of administering and troubleshooting the underlying stack.

You may create a minimal deployment on four Kubernetes nodes for testing. However, this is insufficient for a production deployment. A supported deployment includes SUSE Cloud Foundry installed on SUSE CaaS Platform, Amazon EKS, or Azure AKS. You also need a storage backend such as SUSE Enterprise Storage or NFS, a DNS/DHCP server, and an Internet connection: additional packages are downloaded during installation, and each Kubernetes worker downloads ~10GB of Docker images after installation. (See Chapter 4, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.)

A production deployment requires considerable resources. SUSE Cloud Application Platform includes an entitlement of SUSE CaaS Platform and SUSE Enterprise Storage. SUSE Enterprise Storage alone has substantial requirements; see the Tech Specs for details. SUSE CaaS Platform requires a minimum of four hosts: one admin and three Kubernetes nodes. SUSE Cloud Foundry is then deployed on the Kubernetes nodes. Four CaaS Platform nodes are not sufficient for a production deployment. Figure 1.1, “Minimal Example Production Deployment” describes a minimal production deployment with SUSE Cloud Foundry deployed on a Kubernetes cluster containing three Kubernetes masters and three workers, plus an ingress controller, administration workstation, DNS/DHCP server, and a SUSE Enterprise Storage cluster.

network architecture of minimal production setup
Figure 1.1: Minimal Example Production Deployment

Note that after you have deployed your cluster and start building and running applications, your applications may depend on buildpacks that are not bundled in the container images that ship with SUSE Cloud Foundry. These will be downloaded at runtime, when you are pushing applications to the platform. Some of these buildpacks may include components with proprietary licenses. (See Customizing and Developing Buildpacks to learn more about buildpacks, and creating and managing your own.)

1.4 SUSE Cloud Application Platform Architecture

The following figures illustrate the main structural concepts of SUSE Cloud Application Platform. Figure 1.2, “Cloud Platform Comparisons” shows a comparison of the basic cloud platforms:

  • Infrastructure as a Service (IaaS)

  • Container as a Service (CaaS)

  • Platform as a Service (PaaS)

  • Software as a Service (SaaS)

SUSE CaaS Platform is a Container as a Service platform, and SUSE Cloud Application Platform is a PaaS.

Comparison of cloud platforms.
Figure 1.2: Cloud Platform Comparisons

Figure 1.3, “Containerized Platforms” illustrates how SUSE CaaS Platform and SUSE Cloud Application Platform containerize the platform itself.

SUSE CaaS Platform and SUSE Cloud Application Platform containerize the platform itself.
Figure 1.3: Containerized Platforms

Figure 1.4, “SUSE Cloud Application Platform Stack” shows the relationships of the major components of the software stack. SUSE Cloud Application Platform runs on Kubernetes, which in turn runs on multiple platforms, from bare metal to various cloud stacks. Your applications run on SUSE Cloud Application Platform and provide services.

Relationships of the main Cloud Application Platform components.
Figure 1.4: SUSE Cloud Application Platform Stack

1.4.1 SUSE Cloud Foundry Components

SUSE Cloud Foundry is comprised of developer and administrator clients, trusted download sites, transient and long-running components, APIs, and authentication:

  • Clients for developers and admins to interact with SUSE Cloud Foundry: the cf CLI, which provides the cf command, Stratos Web interface, IDE plugins.

  • Docker Trusted Registry owned by SUSE.

  • SUSE Helm chart repository.

  • Helm, the Kubernetes package manager, which includes Tiller, the Helm server, and the helm command-line client.

  • kubectl, the command-line client for Kubernetes.

  • Long-running SUSE Cloud Foundry components.

  • SUSE Cloud Foundry post-deployment components: Transient SUSE Cloud Foundry components that start after all SUSE Cloud Foundry components are started, perform their tasks, and then exit.

  • SUSE Cloud Foundry Linux cell, an elastic runtime component that runs Linux applications.

  • uaa, a Cloud Application Platform service for authentication and authorization.

  • The Kubernetes API.

1.4.2 SUSE Cloud Foundry containers

Figure 1.5, “SUSE Cloud Foundry Containers, Grouped by Function” provides a look at SUSE Cloud Foundry's containers.

SUSE Cloud Foundry's containers, grouped by functionality.
Figure 1.5: SUSE Cloud Foundry Containers, Grouped by Function
List of SUSE Cloud Foundry Containers
adapter

Part of the logging system, manages connections to user application syslog drains.

api-group

Contains the SUSE Cloud Foundry Cloud Controller, which implements the CF API. It is exposed via the router.

blobstore

A WebDAV blobstore for storing application bits, buildpacks, and stacks.

cc-clock

Sidekick to the Cloud Controller, periodically performing maintenance tasks such as resource cleanup.

cc-uploader

Assists droplet upload from Diego.

cc-worker

Sidekick to the Cloud Controller, processes background tasks.

cf-usb

Universal Service Broker; SUSE's own component for managing and publishing service brokers.

diego-api

API for the Diego scheduler.

diego-brain

Contains the Diego auctioning system that schedules user applications across the elastic layer.

diego-cell (privileged)

The elastic layer of SUSE Cloud Foundry, where applications live.

diego-ssh

Provides SSH access to user applications, exposed via a Kubernetes service.

doppler

Routes log messages from applications and components.

log-api

Part of the logging system; exposes log streams to users using web sockets and proxies user application log messages to syslog drains. Exposed using the router.

mysql

A MariaDB server and component to route requests to replicas. (A separate copy is deployed for uaa.)

nats

A pub-sub messaging queue for the routing system.

nfs-broker (privileged)

A service broker for enabling NFS-based application persistent storage.

post-deployment-setup

Used as a Kubernetes job, performs cluster setup after installation has completed.

router

Routes application and API traffic. Exposed using a Kubernetes service.

routing-api

API for the routing system.

secret-generation

Used as a Kubernetes job to create secrets (certificates) when the cluster is installed.

syslog-scheduler

Part of the logging system that allows user applications to be bound to a syslog drain.

tcp-router

Routes TCP traffic for your applications.

1.4.3 SUSE Cloud Foundry service diagram

This simple service diagram illustrates how SUSE Cloud Foundry components communicate with each other (Figure 1.6, “Simple Services Diagram”). See Figure 1.7, “Detailed Services Diagram” for a more detailed view.

Simple Services Diagram
Figure 1.6: Simple Services Diagram

This table describes how these services operate.

Interface 1
  Network Name: External
  Network Protocol: HTTPS
  Requestor: Helm Client
  Request: Deploy Cloud Application Platform
  Request Credentials: OAuth2 Bearer token
  Request Authorization: Deployment of Cloud Application Platform Services on Kubernetes
  Listener: Helm/Kubernetes API
  Response: Operation ack and handle
  Response Credentials: TLS certificate on external endpoint
  Description of Operation: Operator deploys Cloud Application Platform on Kubernetes

Interface 2
  Network Name: External
  Network Protocol: HTTPS
  Requestor: Internal Kubernetes components
  Request: Download Docker Images
  Request Credentials: Refer to registry.suse.com
  Request Authorization: Refer to registry.suse.com
  Listener: registry.suse.com
  Response: Docker images
  Response Credentials: None
  Description of Operation: Docker images that make up Cloud Application Platform are downloaded

Interface 3
  Network Name: Tenant
  Network Protocol: HTTPS
  Requestor: Cloud Application Platform components
  Request: Get tokens
  Request Credentials: OAuth2 client secret
  Request Authorization: Varies, based on configured OAuth2 client scopes
  Listener: uaa
  Response: An OAuth2 refresh token used to interact with other services
  Response Credentials: TLS certificate
  Description of Operation: SUSE Cloud Foundry components ask uaa for tokens so they can talk to each other

Interface 4
  Network Name: External
  Network Protocol: HTTPS
  Requestor: SUSE Cloud Foundry clients
  Request: SUSE Cloud Foundry API Requests
  Request Credentials: OAuth2 Bearer token
  Request Authorization: SUSE Cloud Foundry application management
  Listener: Cloud Application Platform components
  Response: JSON object and HTTP Status code
  Response Credentials: TLS certificate on external endpoint
  Description of Operation: Cloud Application Platform clients interact with the SUSE Cloud Foundry API (for example, users deploying apps)

Interface 5
  Network Name: External
  Network Protocol: WSS
  Requestor: SUSE Cloud Foundry clients
  Request: Log streaming
  Request Credentials: OAuth2 Bearer token
  Request Authorization: SUSE Cloud Foundry application management
  Listener: Cloud Application Platform components
  Response: A stream of SUSE Cloud Foundry logs
  Response Credentials: TLS certificate on external endpoint
  Description of Operation: SUSE Cloud Foundry clients ask for logs (for example, a user looking at application logs or an administrator viewing system logs)

Interface 6
  Network Name: External
  Network Protocol: SSH
  Requestor: SUSE Cloud Foundry clients, SSH clients
  Request: SSH Access to Application
  Request Credentials: OAuth2 bearer token
  Request Authorization: SUSE Cloud Foundry application management
  Listener: Cloud Application Platform components
  Response: A duplex connection is created allowing the user to interact with a shell
  Response Credentials: RSA SSH Key on external endpoint
  Description of Operation: SUSE Cloud Foundry clients open an SSH connection to an application's container (for example, users debugging their applications)

Interface 7
  Network Name: External
  Network Protocol: HTTPS
  Requestor: Helm
  Request: Download charts
  Request Credentials: Refer to kubernetes-charts.suse.com
  Request Authorization: Refer to kubernetes-charts.suse.com
  Listener: kubernetes-charts.suse.com
  Response: Helm charts
  Response Credentials: TLS certificate on external endpoint
  Description of Operation: Helm charts for Cloud Application Platform are downloaded

1.4.4 Detailed Services Diagram

Figure 1.7, “Detailed Services Diagram” presents a more detailed view of SUSE Cloud Foundry services and how they interact with each other. Services labeled in red are unencrypted, while services labeled in green run over HTTPS.

Detailed Services Diagram
Figure 1.7: Detailed Services Diagram

2 Running SUSE Cloud Application Platform on non-SUSE CaaS Platform Kubernetes Systems

2.1 Kubernetes Requirements

SUSE Cloud Application Platform is designed to run on any Kubernetes system that meets the following requirements:

  • Kubernetes API version 1.8+

  • Kernel parameter swapaccount=1

  • docker info must not show aufs as the storage driver

  • The Kubernetes cluster must have a storage class for SUSE Cloud Application Platform to use. The default storage class is persistent. You may specify a different storage class in your deployment's values.yaml file (which is called scf-config-values.yaml in the examples in this guide), or as a helm command option, for example --set kube.storage_class.persistent=my_storage_class.

  • kube-dns must be running

  • Either ntp or systemd-timesyncd must be installed and active

  • Docker must be configured to allow privileged containers

  • Privileged containers must be enabled in kube-apiserver. See kube-apiserver.

  • Privileged containers must be enabled in kubelet

  • The TasksMax property of the containerd service definition must be set to infinity

  • Helm's Tiller has to be installed and active, with Tiller on the Kubernetes cluster and Helm on your remote administration machine
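
Several of these requirements can be spot-checked before you deploy. The following is a minimal verification sketch; the kubectl commands run from your administration workstation, the remaining checks run on each Kubernetes node, and the exact unit and driver names may vary with your platform:

tux > kubectl version --short                                    # API version must be 1.8 or higher
tux > kubectl get pods --namespace kube-system | grep kube-dns   # kube-dns must be running

root # grep swapaccount /proc/cmdline                    # expects swapaccount=1
root # docker info 2>/dev/null | grep "Storage Driver"   # must not be aufs
root # systemctl show containerd --property TasksMax     # expects TasksMax=infinity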

Part II Deploying SUSE Cloud Application Platform

3 Deployment and Administration Notes

Important things to know before deploying SUSE Cloud Application Platform.

4 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

You may set up a minimal deployment on four Kubernetes nodes for testing. This is not sufficient for a production deployment. A basic SUSE Cloud Application Platform production deployment requires at least eight hosts plus a storage backend: one SUSE CaaS Platform admin server, three Kubernetes mast…

5 Installing the Stratos Web Console
6 SUSE Cloud Application Platform High Availability
7 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components an…

8 Preparing Microsoft Azure for SUSE Cloud Application Platform

SUSE Cloud Application Platform supports deployment on Microsoft Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes service. This chapter describes the steps for preparing Azure for a SUSE Cloud Application Platform deployment, with a basic Azure load balancer. (See Azure Kubernetes Serv…

9 Deploying SUSE Cloud Application Platform on Amazon EKS

This chapter describes how to deploy SUSE Cloud Application Platform on Amazon EKS, using Amazon's Elastic Load Balancer to provide fault-tolerant access to your cluster.

10 Installing SUSE Cloud Application Platform on OpenStack

You can deploy a SUSE Cloud Application Platform on CaaS Platform stack on OpenStack. This chapter describes how to deploy a small testing and development instance with one Kubernetes master and two worker nodes, using Terraform to automate the deployment. This does not create a production deploymen…

3 Deployment and Administration Notes

Important things to know before deploying SUSE Cloud Application Platform.

3.1 README first

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/

Read Chapter 3, Deployment and Administration Notes

3.2 Not running/Completed Pods

Some pods show not running

Some uaa and scf pods perform only deployment tasks, and it is normal for them to show as unready and Completed after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz   0/1       Completed
          
tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1       Completed
post-deployment-setup-1-hnpln   0/1       Completed

3.3 Namespaces

Length of release names

Release names (for example, when you run helm install --name) have a maximum length of 36 characters.

Always install to a fresh namespace

If you are not creating a fresh SUSE Cloud Application Platform installation, but have deleted a previous deployment and are starting over, you must create new namespaces. Do not re-use your old namespaces. The helm delete command does not remove generated secrets from the scf and uaa namespaces as it is not aware of them. These leftover secrets may cause deployment failures. See Section 25.3, “Deleting and Rebuilding a Deployment” for more information.
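
Before reinstalling, you can check whether namespaces or generated secrets from a previous deployment are still present. This is a minimal sketch, assuming the previous deployment used the uaa and scf namespaces:

tux > kubectl get namespaces
tux > kubectl get secrets --namespace uaa
tux > kubectl get secrets --namespace scf

If leftover secrets are listed, delete the old namespaces or choose new namespace names before redeploying.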

3.4 DNS management

You must have control of your own DNS management, and set up all the necessary domains and subdomains for SUSE Cloud Application Platform and your applications.
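
Once your records are in place, you can verify that they resolve before starting a deployment. A minimal sketch, assuming the example domain example.com used throughout this guide:

tux > host uaa.example.com
tux > host app1.example.com     # any name matching a wildcard entry such as *.example.com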

3.5 Releases and Helm chart versions

The supported upgrade method is to install all upgrades, in order. Skipping releases is not supported. This table matches the Helm chart versions to each release:

CAP Release                SCF and UAA Helm Chart Version    Stratos Helm Chart Version
1.3.1 (current release)    2.15.2                            2.3.0
1.3                        2.14.5                            2.2.0
1.2.1                      2.13.3                            2.1.0
1.2.0                      2.11.0                            2.0.0
1.1.1                      2.10.1                            1.1.0
1.1.0                      2.8.0                             1.1.0
1.0.1                      2.7.0                             1.0.2
1.0                        2.6.11                            1.0.0
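
To see which chart versions are currently deployed on your cluster, and which versions are available, you can query Helm directly once the SUSE chart repository has been added (see Section 4.7, “Add the Kubernetes charts repository”); this is a minimal sketch:

tux > helm list                        # the CHART column shows the deployed chart versions
tux > helm search suse/cf --versions   # lists the chart versions available in the repository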

4 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

Important
Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/

Read Chapter 3, Deployment and Administration Notes

You may set up a minimal deployment on four Kubernetes nodes for testing. This is not sufficient for a production deployment. A basic SUSE Cloud Application Platform production deployment requires at least eight hosts plus a storage backend: one SUSE CaaS Platform admin server, three Kubernetes masters, three Kubernetes workers, a DNS/DHCP server, and a storage backend such as SUSE Enterprise Storage or NFS. This is a bare minimum, and actual requirements are likely to be much larger, depending on your workloads. You also need an external workstation for administering your cluster. (See Section 1.3, “Minimum Requirements”.) You may optionally make your SUSE Cloud Application Platform instance highly-available.

Note
Note: Remote Administration

You will run most of the commands in this chapter from a remote workstation, rather than directly on any of the SUSE Cloud Application Platform nodes. Commands shown with the unprivileged tux > prompt are run from the workstation, while root # prompts indicate commands run directly on a cluster node. Only a few tasks need to be performed directly on the cluster hosts.

The optional High Availability example in this chapter provides HA only for the SUSE Cloud Application Platform cluster, and not for CaaS Platform or SUSE Enterprise Storage. See Section 6.1, “Example High Availability Configuration”.

4.1 Prerequisites

Calculating hardware requirements is best done with an analysis of your expected workloads, traffic patterns, storage needs, and application requirements. The following examples are bare minimums to deploy a running cluster, and any production deployment will require more.

Minimum Hardware Requirements

8GB of memory per CaaS Platform dashboard and Kubernetes master node.

16GB of memory per Kubernetes worker.

40GB disk space per CaaS Platform dashboard and Kubernetes master node.

60GB disk space per Kubernetes worker.

Network Requirements

Your Kubernetes cluster needs its own domain and network. Each node should resolve to its hostname, and to its fully-qualified domain name. Typically, a Kubernetes cluster sits behind a load balancer, which also provides external access to the cluster. Another option is to use DNS round-robin to the Kubernetes workers to provide external access. It is also a common practice to create a wildcard DNS entry pointing to the domain, for example *.example.com, so that applications can be deployed without creating DNS entries for each application. This guide does not describe how to set up a load balancer or name services, as these depend on customer requirements and existing network architectures.

SUSE CaaS Platform Deployment Guide: Network Requirements provides guidance on network and name services configurations.
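
The zone file fragment below sketches what such records might look like, assuming the example domain example.com and a single externally reachable worker at 11.100.10.10 (the address used in the configuration example later in this chapter). Adapt the names and addresses to your own environment:

example.com.        IN A    11.100.10.10
*.example.com.      IN A    11.100.10.10
uaa.example.com.    IN A    11.100.10.10
*.uaa.example.com.  IN A    11.100.10.10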

Install SUSE CaaS Platform

SUSE Cloud Application Platform is supported on SUSE CaaS Platform 3.x.

After installing SUSE CaaS Platform 3 and logging in to the Velum Web interface, check the box to install Tiller (Helm's server component).

Install Tiller
Figure 4.1: Install Tiller

Take note of the Overlay network settings. These define the networks that are exclusive to the internal Kubernetes cluster communications. They are not externally accessible. You may assign different networks to avoid address collisions.

There is also a form for proxy settings; if you're not using a proxy then leave it empty.

The easiest way to create the Kubernetes nodes, after you create the admin node, is to use AutoYaST; see Installation with AutoYaST. Set up CaaS Platform with one admin node and at least three Kubernetes masters and three Kubernetes workers. You also need an Internet connection, as the installer downloads additional packages, and the Kubernetes workers will each download ~10GB of Docker images.

Assigning Roles to Nodes
Figure 4.2: Assigning Roles to Nodes

When you have completed Bootstrapping the Cluster, click the kubectl config button to download your new cluster's kubeconfig file. This takes you to a login screen; use the login you created to access Velum. Save the file as ~/.kube/config on your workstation. This file enables the remote administration of your cluster.

Download kubeconfig
Figure 4.3: Download kubeconfig
Install kubectl

To install kubectl on a SLE 12 SP3 or 15 workstation, install the package kubernetes-client from the Public Cloud module. For other operating systems, follow the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/. After installation, run this command to verify that it is installed, and that it is communicating correctly with your cluster:

tux > kubectl version --short
Client Version: v1.10.7
Server Version: v1.10.11

As the client is on your workstation, and the server is on your cluster, reporting the server version verifies that kubectl is using ~/.kube/config and is communicating with your cluster.

The following kubectl examples query the cluster configuration and node status:

tux > kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://11.100.10.10:6443
  name: local
contexts:
[...]

tux > kubectl get nodes
NAME                  STATUS   ROLES     AGE  VERSION
ef254d3.example.com   Ready    Master    4h   v1.10.11
b70748d.example.com   Ready    <none>    4h   v1.10.11
cb77881.example.com   Ready    <none>    4h   v1.10.11
d028551.example.com   Ready    <none>    4h   v1.10.11
[...]
Install Helm

Deploying SUSE Cloud Application Platform is different from the usual method of installing software. Rather than installing packages with YaST or Zypper, you install the Helm client on your workstation and use it to install the required Kubernetes applications that make up SUSE Cloud Application Platform, and to administer your cluster remotely. Helm is the Kubernetes package manager. The Helm client goes on your remote administration computer, and Tiller, Helm's server component, is installed on your Kubernetes cluster.

Helm client version 2.9 or higher is required.

Warning
Warning: Initialize Only the Helm Client

When you initialize Helm on your workstation, be sure to initialize only the client, as the server, Tiller, was installed during the CaaS Platform installation. You do not want two Tiller instances.

If the Linux distribution on your workstation doesn't provide the correct Helm version, or you are using some other platform, see the Helm Quickstart Guide for installation instructions and basic usage examples. Download the Helm binary into any directory that is in your PATH on your workstation, such as your ~/bin directory. Then initialize the client only:

tux > helm init --client-only
Creating /home/tux/.helm 
Creating /home/tux/.helm/repository 
Creating /home/tux/.helm/repository/cache 
Creating /home/tux/.helm/repository/local 
Creating /home/tux/.helm/plugins 
Creating /home/tux/.helm/starters 
Creating /home/tux/.helm/cache/archive 
Creating /home/tux/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/tux/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
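
To confirm that the installed client meets the version requirement noted above, query it directly:

tux > helm version --client --short    # must report version 2.9 or higher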

4.2 Pod Security Policy

SUSE CaaS Platform 3 includes Pod Security Policy (PSP) support. This change adds two new PSPs to CaaS Platform 3:

  • unprivileged, which is the default assigned to all users. The unprivileged Pod Security Policy is intended as a reasonable compromise between the reality of Kubernetes workloads, and the suse:caasp:psp:privileged role. By default, this PSP is granted to all users and service accounts.

  • privileged, which is intended to be assigned only to trusted workloads. It applies few restrictions, and should only be assigned to highly trusted users.

SUSE Cloud Application Platform 1.3.1 includes the necessary PSP configurations in its Helm charts to run on SUSE CaaS Platform; they are set up automatically, without requiring manual configuration. See Section A.1, “Manual Configuration of Pod Security Policies” for instructions on applying the necessary PSPs manually on older Cloud Application Platform releases.
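
You can list the Pod Security Policies that are active on your cluster to confirm they are in place; the exact policy names may differ between CaaS Platform releases:

tux > kubectl get psp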

4.3 Choose Storage Class

The Kubernetes cluster requires a persistent storage class for the databases to store persistent data. Your available storage classes depend on which storage cluster you are using (SUSE Enterprise Storage users, see SUSE CaaS Platform Integration with SES). After connecting your storage backend use kubectl to see your available storage classes. This example is for an NFS storage class:

tux > kubectl get storageclass
NAME         PROVISIONER   AGE
persistent   nfs           10d

Creating a default storage class is useful for a number of scenarios, such as using Minibroker. Once your storage class has been created, run the following command (substituting the name of your storage class) to make it the default:

tux > kubectl patch storageclass persistent \
 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    
tux > kubectl get storageclass
NAME                   PROVISIONER   AGE
persistent (default)   nfs           10d

See Section 4.5, “Configure the SUSE Cloud Application Platform Production Deployment” to learn where to configure your storage class for SUSE Cloud Application Platform. See the Kubernetes document Persistent Volumes for detailed information on storage classes.

4.4 Test Storage Class

You may test that your storage class is properly configured before deploying SUSE Cloud Application Platform by creating a persistent volume claim on your storage class, then verifying that the status of the claim is Bound and that a volume has been created.

First copy the following configuration file, which in this example is named test-storage-class.yaml, substituting your own storage class name for storageClassName:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-sc-persistent
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: persistent

Create your persistent volume claim:

tux > kubectl create -f test-storage-class.yaml
persistentvolumeclaim "test-sc-persistent" created

Check that the claim has been created, and that the status is bound:

tux > kubectl get pv,pvc
NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                        STORAGECLASS   REASON    AGE
pv/pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c   1Gi        RWO            Delete           Bound     default/test-sc-persistent   persistent               2m

NAME                     STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/test-sc-persistent   Bound     pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c   1Gi        RWO            persistent     2m

This verifies that your storage class is correctly configured. Delete your volume claims when you're finished:

tux > kubectl delete pv/pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c
persistentvolume "pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c" deleted

tux > kubectl delete pvc/test-sc-persistent
persistentvolumeclaim "test-sc-persistent" deleted

If something goes wrong and your volume claims get stuck in pending status, you can force deletion with the --grace-period=0 option:

tux > kubectl delete pvc/test-sc-persistent --grace-period=0

4.5 Configure the SUSE Cloud Application Platform Production Deployment

Create a configuration file on your workstation for Helm to use. In this example it is called scf-config-values.yaml. (See Section A.1, “Manual Configuration of Pod Security Policies” for configuration instructions for releases older than 1.3.1.)

The example scf-config-values.yaml file is for a simple deployment without an ingress controller or load balancer. Instead, assign one worker node an external IP address and map it to the domain name to provide external access to the cluster. The external_ips list needs this address to provide access to the Stratos Web interface, and it also needs the internal IP addresses of the worker nodes to provide access to services.

env:
  # Enter the domain you created for your CAP cluster
  DOMAIN: example.com
    
  # uaa host and port
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793

kube:
  external_ips: ["11.100.10.10", "192.168.1.1", "192.168.1.2", "192.168.1.3"]

  storage_class:
    persistent: "persistent"
    shared: "shared"
        
  # The registry the images will be fetched from.
  # The values below should work for
  # a default installation from the SUSE registry.
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"

secrets:
  # Create a password for your CAP cluster
  CLUSTER_ADMIN_PASSWORD: password
    
  # Create a password for your uaa client secret
  UAA_ADMIN_CLIENT_SECRET: password
Important
Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.
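
One way to generate strong random values for CLUSTER_ADMIN_PASSWORD and UAA_ADMIN_CLIENT_SECRET is with openssl; this is only a suggestion, and any strong password generator will do:

tux > openssl rand -base64 32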

4.6 Deploy with Helm

The following list provides an overview of Helm commands to complete the deployment. Included are links to detailed descriptions.

  1. Download the SUSE Kubernetes charts repository (Section 4.7, “Add the Kubernetes charts repository”)

  2. Copy the storage secret of your storage cluster to the uaa and scf namespaces (Section 4.8, “Copy SUSE Enterprise Storage Secret”)

  3. Deploy uaa (Section 4.9, “Deploy uaa)

  4. Copy the uaa secret and certificate to the scf namespace, deploy scf (Section 4.10, “Deploy scf)

4.7 Add the Kubernetes charts repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME            URL                                             
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts                    
suse            https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                        	VERSION	DESCRIPTION
suse/cf-opensuse            	2.15.2  A Helm chart for SUSE Cloud Foundry
suse/uaa-opensuse           	2.15.2  A Helm chart for SUSE UAA
suse/cf                     	2.15.2  A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql   	1.0.1  	A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres	1.0.1  	A Helm chart for SUSE Universal Service Broker ...
suse/console                	2.3.0   A Helm chart for deploying Stratos UI Console
suse/metrics                	1.0.0  	A Helm chart for Stratos Metrics
suse/nginx-ingress          	0.28.3 	An nginx Ingress controller that uses ConfigMap...
suse/uaa                    	2.15.2  A Helm chart for SUSE uaa

4.8 Copy SUSE Enterprise Storage Secret

If you are using SUSE Enterprise Storage you must copy the Ceph admin secret to the uaa and scf namespaces:

tux > kubectl get secret ceph-secret-admin -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "uaa"/' | kubectl create -f -

tux > kubectl get secret ceph-secret-admin -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "scf"/' | kubectl create -f -
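
You can verify that the secret was copied into both namespaces before continuing:

tux > kubectl get secret ceph-secret-admin --namespace uaa
tux > kubectl get secret ceph-secret-admin --namespace scf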

4.9 Deploy uaa

Use Helm to deploy the uaa (User Account and Authentication) server. You may create your own release --name:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps. You can monitor the deployment with the watch command:

tux > watch -c 'kubectl get pods --namespace uaa'

When the status shows RUNNING for all of the uaa pods, proceed to deploying SUSE Cloud Foundry. Pressing Ctrl–C stops the watch command.

Important
Important
Some pods show not running

Some uaa and scf pods perform only deployment tasks, and it is normal for them to show as unready and Completed after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz   0/1       Completed
          
tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1       Completed
post-deployment-setup-1-hnpln   0/1       Completed

4.10 Deploy scf

First pass your uaa secret and certificate to scf, then use Helm to install SUSE Cloud Foundry:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Now sit back and wait for the pods to come online:

tux > watch -c 'kubectl get pods --namespace scf'

When all services are running use the Cloud Foundry command-line interface to log in to SUSE Cloud Foundry to deploy and manage your applications. (See Section 24.1, “Using the cf CLI with SUSE Cloud Application Platform”)
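
The following is a brief login sketch, assuming the example domain example.com and the CLUSTER_ADMIN_PASSWORD set in scf-config-values.yaml; adjust the API endpoint and credentials to your own deployment:

tux > cf api --skip-ssl-validation https://api.example.com
tux > cf login -u admin -p password
tux > cf orgs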

5 Installing the Stratos Web Console

5.1 Install Stratos with Helm

Stratos UI is a modern web-based management application for Cloud Foundry. It provides a graphical management console for both developers and system administrators. Install Stratos with Helm after all of the uaa and scf pods are running.

If you are using SUSE Enterprise Storage as your storage backend, copy the secret into the Stratos namespace:

tux > kubectl get secret ceph-secret-admin -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "stratos"/' | \
kubectl create -f -

You should already have the Stratos charts when you downloaded the SUSE charts repository (see Section 4.7, “Add the Kubernetes charts repository”). Search your Helm repository:

tux > helm search suse                                  
NAME                            VERSION DESCRIPTION
suse/cf-opensuse                2.15.2  A Helm chart for SUSE Cloud Foundry
suse/uaa-opensuse               2.15.2  A Helm chart for SUSE UAA
suse/cf                         2.15.2  A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres    1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/console                    2.3.0   A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0   A Helm chart for Stratos Metrics
suse/nginx-ingress              0.28.3  An nginx Ingress controller that uses ConfigMap...
suse/uaa                        2.15.2  A Helm chart for SUSE UAA

Use Helm to install Stratos:

tux > helm install suse/console \
    --name susecf-console \
    --namespace stratos \
    --values scf-config-values.yaml

Monitor progress:

tux > watch -c 'kubectl get pods --namespace stratos'
 Every 2.0s: kubectl get pods --namespace stratos
 
NAME                               READY     STATUS    RESTARTS   AGE
console-0                          3/3       Running   0          30m
console-mariadb-3697248891-5drf5   1/1       Running   0          30m

When all statuses show Ready, press Ctrl–C to exit. Query with Helm to view your release information:

tux > helm status susecf-console
LAST DEPLOYED: Wed Jan  2 10:25:22 2019
NAMESPACE: stratos
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                           TYPE    DATA  AGE
susecf-console-secret          Opaque  2     1h
susecf-console-mariadb-secret  Opaque  2     1h

==> v1/PersistentVolumeClaim
NAME                                  STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
console-mariadb                       Bound   pvc-c41fb6be-0ebb-11e9-9b6c-fa163e0b27ed  1Gi       RWO           persistent    1h
susecf-console-upgrade-volume         Bound   pvc-c420356f-0ebb-11e9-9b6c-fa163e0b27ed  20Mi      RWO           persistent    1h
susecf-console-encryption-key-volume  Bound   pvc-c4292f69-0ebb-11e9-9b6c-fa163e0b27ed  20Mi      RWO           persistent    1h

==> v1/Service
NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP                                        PORT(S)                      AGE
susecf-console-mariadb  ClusterIP  172.24.60.105   <none>                                             3306/TCP                     1h
susecf-console-ui-ext   NodePort   172.24.164.199  10.86.1.15,172.24.10.12,172.24.10.16,172.24.10.14  80:31104/TCP,8443:31528/TCP  1h

==> v1beta1/Deployment
NAME             DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
console-mariadb  1        1        1           1          1h

==> v1beta1/StatefulSet
NAME     DESIRED  CURRENT  AGE
console  1        1        1h

==> v1/Pod(related)
NAME                              READY  STATUS   RESTARTS  AGE
console-mariadb-66fc57b5b5-fzq7w  1/1    Running  0         1h
console-0                         3/3    Running  0         1h

Point your web browser to https://example.com:8443, or https://10.86.1.15:8443 (the external IP from the helm status output above), to see the Stratos console. Wade through the nag screens about the self-signed certificates and log in as admin with the password you created in scf-config-values.yaml. If you see an upgrade message, wait a few minutes and try again.

Stratos UI Cloud Foundry Console
Figure 5.1: Stratos UI Cloud Foundry Console

5.2 Connecting Kubernetes

Stratos can show information from your Kubernetes environment.

To enable this, you must register and connect your Kubernetes environment with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view.

  1. Select Kubernetes from the Endpoint Type drop-down

  2. Enter a memorable name for your environment in the Name field

  3. Enter the URL of the API server for your Kubernetes environment

  4. Check the Skip SSL validation for the endpoint checkbox if using self-signed certificates

  5. Click Finish
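
If you are unsure of the API server URL requested in step 3, you can read it from your kubeconfig; this is a minimal sketch:

tux > kubectl cluster-info | head -1
tux > kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'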

The view will refresh to show the new endpoint in the disconnected state.

Next, you will need to connect to this endpoint. In the table of endpoints, click the overflow menu icon alongside the endpoint that you added above. Click on Connect in the drop-down menu.

You will need to select the appropriate Auth Type for your Kubernetes environment and provide the required credentials:

  • For CaaSP, use the Auth Type CAASP (OIDC) and provide a valid kubeconfig file for your environment

  • For Amazon EKS, use the Auth Type AWS IAM (EKS) and provide the name of your EKS cluster and your AWS Access Key ID and Secret Access Key

  • For Azure AKS, use the Auth Type Azure AKS and provide a valid kubeconfig file for your environment

  • For Minikube, use the Auth Type Kubernetes Cert Auth and provide the Certificate and Certificate Key files

Finally, click Connect to connect the endpoint with the authentication information that you have provided. The endpoint list should update to show that your Kubernetes endpoint is connected.

Once connected, you should see a Kubernetes menu item in the left-hand side navigation - click on this to access Kubernetes views.

5.3 Stratos Metrics

Stratos can show metrics data from Prometheus for both Cloud Foundry and Kubernetes.

5.3.1 Install Stratos Metrics with Helm

In order to display metrics data with Stratos, you need to deploy the stratos-metrics Helm chart - this deploys Prometheus with the necessary exporters that collect data from Cloud Foundry and Kubernetes. It also wraps Prometheus with an nginx server to provide authentication.

As with deploying Stratos, you should deploy the metrics Helm chart using the same scf-config-values.yaml file that was used for deploying scf and uaa.

Create a new yaml file named stratos-metrics-values.yaml, with the following contents:

kubernetes:
  authEndpoint: kube_server_address.example.com
prometheus:
  kubeStateMetrics:
    enabled: true
nginx:
  username: username
  password: password

where:

  • authEndpoint is the same URL that you used when registering your Kubernetes environment with Stratos (the Kubernetes API Server URL)

  • username should be chosen by you as the username that you will use when connecting to Stratos Metrics

  • password should be chosen by you as the password that you will use when connecting to Stratos Metrics

Install Metrics with:

tux > helm install suse/metrics \
    --name susecf-metrics \
    --namespace metrics \
    --values scf-config-values.yaml \
    --values stratos-metrics-values.yaml

Monitor progress:

tux > watch -c 'kubectl get pods --namespace metrics'

When all statuses show Ready, press Ctrl–C to exit and to view your release information.

You can locate the IP and port that Stratos Metrics is running on with:

tux > kubectl get service susecf-metrics-metrics-nginx --namespace=metrics

This will give output similar to:

NAME                         TYPE     CLUSTER-IP     EXTERNAL-IP PORT(S)       AGE
susecf-metrics-metrics-nginx NodePort 172.24.218.219 10.17.3.1   443:31173/TCP 13s

5.3.2 Connecting Stratos Metrics

When Stratos Metrics is connected to Stratos, additional views are enabled that show metrics data that has been ingested into the Stratos Metrics Prometheus server.

To enable this, you must register and connect your Stratos Metrics instance with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view. Next:

  1. Select Metrics from the Endpoint Type dropdown

  2. Enter a memorable name for your environment in the Name field

  3. Check the Skip SSL validation for the endpoint checkbox if using self-signed certificates

  4. Click Finish

The view will refresh to show the new endpoint in the disconnected state. Next you will need to connect to this endpoint.

In the table of endpoints, click the overflow menu icon alongside the endpoint that you added above, then:

  1. Click on Connect in the dropdown menu

  2. Enter the username for your Stratos Metrics instance

  3. Enter the password for your Stratos Metrics instance

  4. Click Connect

Once connected, you should see that the name of your Metrics endpoint is a hyperlink and clicking on it should show basic metadata about the Stratos Metrics endpoint.

Metrics data and views should now be available in the Stratos UI, for example:

  • On the Instances tab for an Application, the table should show an additional Cell column to indicate which Diego Cell the instance is running on. This should be clickable to navigate to a Cell view showing Cell information and metrics

  • On the view for an Application there should be a new Metrics tab that shows Application metrics

  • On the Kubernetes views, views such as the Node view should show an additional Metrics tab with metric information

6 SUSE Cloud Application Platform High Availability

6.1 Example High Availability Configuration

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The simplest method is to set the HA parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own custom values.

6.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for each namespace describes which roles can be scaled, and the scaling options for each role. You may use helm inspect to read the sizing: section in the Helm charts:

tux > helm inspect suse/uaa | less +/sizing:
tux > helm inspect suse/cf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section. The following example is for the uaa namespace.

tux > helm inspect values suse/uaa | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'
151:    # The mysql instance group can scale between 1 and 3 instances.
152:    # For high availability it needs at least 2 instances.
153:    count: 1
178:    # The secret-generation instance group cannot be scaled.
179:    count: 1
207:  #   for managing user accounts and for registering OAuth2 clients, as well as
216:    # The uaa instance group can scale between 1 and 65535 instances.
217:    # For high availability it needs at least 2 instances.
218:    count: 1

The default values.yaml files are also included in this guide at Section A.2, “Complete suse/uaa values.yaml file” and Section A.3, “Complete suse/scf values.yaml file”.

6.1.2 Simple High Availability Configuration

Important: Always install to a fresh namespace

If you are not creating a fresh SUSE Cloud Application Platform installation, but have deleted a previous deployment and are starting over, you must create new namespaces. Do not re-use your old namespaces. The helm delete command does not remove generated secrets from the scf and uaa namespaces as it is not aware of them. These leftover secrets may cause deployment failures. See Section 25.3, “Deleting and Rebuilding a Deployment” for more information.

The simplest way to make your SUSE Cloud Application Platform deployment highly available is to set HA to true in your deployment configuration file, for example scf-config-values.yaml:

config:
  # Flag to activate high-availability mode
  HA: true

Or, you may pass it as a command-line option when you are deploying with Helm, for example:

tux > helm install suse/uaa \
 --name susecf-uaa \
 --namespace uaa \
 --values scf-config-values.yaml \
 --set config.HA=true

This changes all roles with a default size of 1 to the minimum required for a High Availability deployment. It is not possible to customize any of the sizing values.

6.1.3 Example Custom High Availability Configurations

The following two example High Availability configuration files are for the uaa and scf namespaces. The example values are not meant to be copied, as these depend on your particular deployment and requirements. Do not change the config.HA flag to true (see Section 6.1.2, “Simple High Availability Configuration”).

The first example is for the uaa namespace, uaa-sizing.yaml. The values specified are the minimum required for a High Availability deployment (that is equivalent to setting config.HA to true):

sizing:
  mysql:
    count: 2
  uaa:
    count: 2

The second example is for scf, scf-sizing.yaml. The values specified are the minimum required for a High Availability deployment (that is equivalent to setting config.HA to true), except for diego-cell which includes additional instances:

sizing:
  adapter:
    count: 2
  api_group:
    count: 2
  cc_clock:
    count: 2
  cc_uploader:
    count: 2
  cc_worker:
    count: 2
  cf_usb:
    count: 2
  diego_api:
    count: 2
  diego_brain:
    count: 2
  diego_cell:
    count: 6
  diego_ssh:
    count: 2
  doppler:
    count: 2
  log-api:
    count: 2
  mysql:
    count: 2
  nats:
    count: 2
  nfs_broker:
    count: 2
  router:
    count: 2
  routing_api:
    count: 2
  syslog_scheduler:
    count: 2
  tcp_router:
    count: 2
Important: Always install to a fresh namespace

If you are not creating a fresh SUSE Cloud Application Platform installation, but have deleted a previous deployment and are starting over, you must create new namespaces. Do not re-use your old namespaces. The helm delete command does not remove generated secrets from the scf and uaa namespaces as it is not aware of them. These leftover secrets may cause deployment failures. See Section 25.3, “Deleting and Rebuilding a Deployment” for more information.

After creating your configuration files, follow the steps in Section 4.5, “Configure the SUSE Cloud Application Platform Production Deployment” until you get to Section 4.9, “Deploy uaa”. Then deploy uaa with this command:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml \
--values uaa-sizing.yaml

When the status shows RUNNING for all of the uaa nodes, deploy SCF with these commands:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"    

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--values scf-sizing.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

HA pods with the following roles will enter both passive and ready states; there should always be at least one ready pod for each of these roles.

  • diego-brain

  • diego-database

  • routing-api

You can confirm this by looking at the logs inside the container. Look for .consul-lock.acquiring-lock.
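A hedged way to check for these lock messages, assuming a pod named diego-brain-0 in the scf namespace:

tux > kubectl logs diego-brain-0 --namespace scf | grep consul-lock.acquiring-lock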

Some roles follow an active/passive scaling model, meaning all pods except the active one will be shown as NOT READY by Kubernetes. This is appropriate and expected behavior.

6.1.4 Upgrading a non-High Availability Deployment to High Availability

You may make a non-High Availability deployment highly available by upgrading with Helm:

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--values uaa-sizing.yaml 

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"    

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values scf-sizing.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

This may take a long time, and your cluster will be unavailable until the upgrade is complete.

6.2 Handling Custom Availability Zone Information

6.2.1 Availability Zones

Availability Zones (AZ) are logical arrangements of compute nodes within a region that provide isolation from each other. A deployment that is distributed across multiple AZs can use this separation to increase resiliency against downtime in the event a given zone experiences issues. See Availability Zones for more information.

6.2.2 Enable Custom Availability Zone Information Handling

By default, handling of custom AZ information is disabled. To enable it, provide a label name as a string to the AZ_LABEL_NAME field in the env: section of your scf-config-values.yaml:

env:
  AZ_LABEL_NAME: "zone-label"

If uaa is deployed, pass your uaa secret and certificate to scf. Otherwise deploy uaa first (see Section 4.9, “Deploy uaa”), then proceed with this step:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

If this is an initial deployment, use helm install to deploy scf:

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

If this is an existing deployment, use helm upgrade to apply the change:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"

Enabling this feature requires the node-reader service account to have sufficient permissions. These can be granted through a configuration file applied with kubectl.

Create the configuration file, which in this example will be node-reader.yaml:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader-clusterrole
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: node-reader-node-reader-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader-clusterrole
subjects:
  - kind: ServiceAccount
    name: node-reader
    namespace: scf

Apply it to your cluster:

tux > kubectl create -f node-reader.yaml

Next, get the names of the nodes in your cluster:

tux > kubectl get nodes
NAME                         STATUS  ROLES     AGE  VERSION
4a10db2c.infra.caasp.local   Ready   Master    4h   v1.9.8
87c9e8ff.infra.caasp.local   Ready   <none>    4h   v1.9.8
34ce7eb0.infra.caasp.local   Ready   <none>    4h   v1.9.8

Set your chosen label on the worker nodes and specify an AZ:

tux > kubectl label nodes 34ce7eb0.infra.caasp.local zone-label=availability-zone-1
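If your cluster spans more than one zone, label the remaining worker nodes with their zones as well. A hedged example using the second worker node from the output above and an assumed second zone name:

tux > kubectl label nodes 87c9e8ff.infra.caasp.local zone-label=availability-zone-2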

Restart all diego-cell pods:

tux > kubectl delete pod diego-cell-0 --namespace scf
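If your deployment runs more than one diego-cell instance, the following hedged sketch restarts all of them by deleting each diego-cell pod in turn:

tux > kubectl get pods --namespace scf --no-headers | awk '/^diego-cell-/ {print $1}' | \
xargs -n 1 kubectl delete pod --namespace scf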

Run the following and verify that similar output is received:

tux > kubectl logs diego-cell-0 --namespace scf | grep ^AZ
AZ: Configured zone-label
AZ: Node...... 34ce7eb0.infra.caasp.local
AZ: Found..... availability-zone-1

7 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

7.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

7.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.

  • Use UAAC to target your uaa server.

    tux > uaac target https://uaa.example.com:2793
  • Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your scf-config-values.yaml file.

    tux > uaac token client get admin -s password
  • Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference and Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in the bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true -X POST \
        -k \
        -H 'Content-Type: application/json' \
        -H 'X-Identity-Zone-Subdomain: scf' \
        -d '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      },
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
      }'
  • Verify the LDAP identity provider has been created in the scf zone. The output should now contain an entry for the ldap type.

    tux > uaac curl /identity-providers -k -H "X-Identity-Zone-Id: scf"
  • Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  • Login as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> admin
    
    Password>
    Authenticating...
    OK
  • Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    OK
    
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  • Assign the user a role. Roles define the permissions a user has for a given org or space and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    OK
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
    OK
  • Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> username
    
    Password>
    Authenticating...
    OK
    
    
    
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

8 Preparing Microsoft Azure for SUSE Cloud Application Platform

Important: README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Microsoft Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes service. This chapter describes the steps for preparing Azure for a SUSE Cloud Application Platform deployment, with a basic Azure load balancer. (See Azure Kubernetes Service (AKS) for more information.)

In Kubernetes terminology a node used to be a minion, which was the name for a worker node. Now the correct term is simply node (see https://kubernetes.io/docs/concepts/architecture/nodes/). This can be confusing, as computing nodes have traditionally been defined as any device in a network that has an IP address. In Azure they are called agent nodes. In this chapter we call them agent nodes or Kubernetes nodes.

8.1 Prerequisites

Install az, the Azure command-line client, on your remote administration machine. See Install Azure CLI 2.0 for instructions.

See the Azure CLI 2.0 Reference for a complete az command reference.

You also need the kubectl, curl, sed, and jq commands, Helm 2.9 or newer, and the name of the SSH key that is attached to your Azure account. (Get Helm from Helm Releases.)
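A quick, hedged way to confirm the Helm and kubectl client versions before proceeding:

tux > helm version --client --short

tux > kubectl version --client --short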

Log in to your Azure Account:

tux > az login

Your Azure user needs the User Access Administrator role. Check your assigned roles with the az command:

tux > az role assignment list --assignee login-name
[...]
"roleDefinitionName": "User Access Administrator",

If you do not have this role, then you must request it from your Azure administrator.

You need your Azure subscription ID. Extract it with az:

tux > az account show --query "{ subscription_id: id }"
{
"subscription_id": "a900cdi2-5983-0376-s7je-d4jdmsif84ca"
}

Replace the example subscription-id in the next command with your subscription-id. Then export it as an environment variable and set it as the current subscription:

tux > export SUBSCRIPTION_ID="a900cdi2-5983-0376-s7je-d4jdmsif84ca"

tux > az account set --subscription $SUBSCRIPTION_ID

Verify that the Microsoft.Network, Microsoft.Storage, Microsoft.Compute, and Microsoft.ContainerService providers are enabled:

tux > az provider list | egrep -w 'Microsoft.Network|Microsoft.Storage|Microsoft.Compute|Microsoft.ContainerService'

If any of these are missing, enable them with the az provider register --name provider command.
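For example, a hedged command to register one of the providers (repeat for each one that is missing):

tux > az provider register --name Microsoft.ContainerService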

8.2 Create Resource Group and AKS Instance

Now you can create a new Azure resource group and AKS instance. Set the required variables as environment variables, which helps to speed up the setup, and to reduce errors. Verify your environment variables at any time with echo $VARNAME, for example:

tux > echo $RGNAME
cap-aks

This is especially useful when you run long compound commands to extract and set environment variables.

Note: Use different names

Ensure that each of your AKS clusters uses unique resource group and managed cluster names; do not copy the examples, especially when your Azure subscription supports multiple users. Azure has no tools for sorting resources by user, so creating unique names and putting everything in your deployment in a single resource group helps you keep track, and you can delete the whole deployment by deleting the resource group.

  1. Create the resource group name:

    tux > export RGNAME="cap-aks"
  2. Create the AKS managed cluster name. Azure's default is to use the resource group name, then prepend it with MC and append the location, for example MC_cap-aks_cap-aks_eastus. This example uses the creator's initials for the AKSNAME environment variable, which will be mapped to the az command's --name option. The --name option is for creating arbitrary names for your AKS resources. This example will create a managed cluster named MC_cap-aks_cjs_eastus:

    tux > export AKSNAME=cjs
  3. Set the Azure location. (See Quotas and region availability for Azure Kubernetes Service (AKS) for supported locations.)

    tux > export REGION="eastus"
  4. Set the Kubernetes agent node count. (Cloud Application Platform requires a minimum of 3.)

    tux > export NODECOUNT="3"
  5. Set the virtual machine size (see General purpose virtual machine sizes). A virtual machine size of at least Standard_DS4_v2 using premium storage (see High-performance Premium Storage and managed disks for VMs) is recommended. Note managed-premium has been specified in the example scf-azure-values.yaml used (see Section 8.6, “Deploy SUSE Cloud Application Platform with a Load Balancer”):

    tux > export NODEVMSIZE="Standard_DS4_v2"
  6. Set the public SSH key name associated with your Azure account:

    tux > export SSHKEYVALUE="~/.ssh/id_rsa.pub"
  7. Create and set a new admin username:

    tux > export ADMINUSERNAME="scf-admin"
  8. Create a unique nodepool name. The default is aks-nodepool followed by an auto-generated number, for example aks-nodepool1-39318075-2. You have the option to change nodepool1 and create your own unique identifier, for example, mypool:

    tux > export NODEPOOLNAME="mypool"

    Which results in something like aks-mypool-39318075-2.

Now that your environment variables are in place, create a new resource group:

tux > az group create --name $RGNAME --location $REGION

List the Kubernetes versions currently supported by AKS (see https://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions for more information on the AKS version support policy). When creating your cluster in the next step, specify a Kubernetes version for consistent deployments:

tux > az aks get-versions --location $REGION --output table

Now you can create a new AKS managed cluster:

tux > az aks create --resource-group $RGNAME --name $AKSNAME \
 --node-count $NODECOUNT --admin-username $ADMINUSERNAME \
 --ssh-key-value $SSHKEYVALUE --node-vm-size $NODEVMSIZE \
 --node-osdisk-size=60 --nodepool-name $NODEPOOLNAME \
 --kubernetes-version 1.11.8
Note

An OS disk size of at least 60GB must be specified using the --node-osdisk-size flag.

This takes a few minutes. When it is completed, fetch your kubectl credentials. The default behavior for az aks get-credentials is to merge the new credentials with the existing default configuration, and to set the new credentials as the current Kubernetes context. The context name is your AKSNAME value. You should first back up your current configuration, or move it to a different location.
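A hedged example of such a backup, assuming the default kubectl configuration location:

tux > cp ~/.kube/config ~/.kube/config.backup

Then fetch the new credentials: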

tux > az aks get-credentials --resource-group $RGNAME --name $AKSNAME
 Merged "cap-aks" as current context in /home/tux/.kube/config

Verify that you can connect to your cluster:

tux > kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
aks-mypool-47788232-0   Ready     agent     5m        v1.11.6
aks-mypool-47788232-1   Ready     agent     6m        v1.11.6
aks-mypool-47788232-2   Ready     agent     6m        v1.11.6

tux > kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY  STATUS    RESTARTS   AGE
kube-system   azureproxy-79c5db744-fwqcx          1/1    Running   2          6m
kube-system   heapster-55f855b47-c4mf9            2/2    Running   0          5m
kube-system   kube-dns-v20-7c556f89c5-spgbf       3/3    Running   0          6m
kube-system   kube-dns-v20-7c556f89c5-z2g7b       3/3    Running   0          6m
kube-system   kube-proxy-g9zpk                    1/1    Running   0          6m
kube-system   kube-proxy-kph4v                    1/1    Running   0          6m
kube-system   kube-proxy-xfngh                    1/1    Running   0          6m
kube-system   kube-svc-redirect-2knsj             1/1    Running   0          6m
kube-system   kube-svc-redirect-5nz2p             1/1    Running   0          6m
kube-system   kube-svc-redirect-hlh22             1/1    Running   0          6m
kube-system   kubernetes-dashboard-546686-mr9hz   1/1    Running   1          6m
kube-system   tunnelfront-595565bc78-j8msn        1/1    Running   0          6m

When all nodes are in a ready state and all pods are running, proceed to the next steps.

8.3 Create Tiller Service Account

You must create a Tiller service account in order to give Tiller sufficient permissions to make changes on your cluster. There are two ways to create a service account: from the command line with kubectl, or applying a configuration file with kubectl and helm.

Run these kubectl commands to create your Tiller service account:

tux > kubectl create serviceaccount tiller --namespace kube-system
 serviceaccount "tiller" created

tux > kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller
 clusterrolebinding.rbac.authorization.k8s.io "tiller" created

tux > helm init --upgrade --service-account tiller
 $HELM_HOME has been configured at /home/tux/.helm.

 Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
 Happy Helming!

Creating your Tiller service account with a configuration file requires Helm version 2.9 or newer (see https://github.com/helm/helm/releases).

First create the configuration file, which in this example is rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Apply it to your cluster with these commands:

tux > kubectl create -f rbac-config.yaml
tux > helm init --service-account tiller

8.4 Pod Security Policies

Role-based access control (RBAC) is enabled by default on AKS. SUSE Cloud Application Platform 1.3.1 and later do not need to be configured manually. Older Cloud Application Platform releases require manual PSP configuration; see Section A.1, “Manual Configuration of Pod Security Policies” for instructions.

8.5 Enable Swap Accounting

Identify and set the cluster resource group, then enable kernel swap accounting. Swap accounting is required by Cloud Application Platform, but it is not the default in AKS nodes. The following commands use the az command to modify the GRUB configuration on each node, and then reboot the virtual machines.

  1. tux > export MCRGNAME=$(az aks show --resource-group $RGNAME --name $AKSNAME --query nodeResourceGroup -o json | jq -r '.')
  2. tux > export VMNODES=$(az vm list --resource-group $MCRGNAME -o json | jq -r '.[] | select (.tags.poolName | contains("'$NODEPOOLNAME'")) | .name')
  3. tux > for i in $VMNODES
     do
       az vm run-command invoke -g $MCRGNAME -n $i --command-id RunShellScript --scripts \
       "sudo sed -i -r 's|^(GRUB_CMDLINE_LINUX_DEFAULT=)\"(.*.)\"|\1\"\2 swapaccount=1\"|' \
       /etc/default/grub.d/50-cloudimg-settings.cfg && sudo update-grub"
       az vm restart -g $MCRGNAME -n $i
    done
  4. Verify that all nodes are in state "Ready" again, before you continue.

    tux > kubectl get nodes

8.6 Deploy SUSE Cloud Application Platform with a Load Balancer

In SUSE Cloud Application Platform, load balancing is enabled by setting the services.loadbalanced parameter to true and specifying a fully qualified domain name (FQDN) for the DOMAIN parameter in your configuration file. When services.loadbalanced is set to true, the ServiceType of public-facing services will change to Type LoadBalancer and a load balancer will be provisioned for the service. In Azure Kubernetes Service, this provisions a Basic Load Balancer.

The following is an example configuration file, called scf-azure-values.yaml, used to deploy Cloud Application Platform on Azure Kubernetes Service:

env:
  # Use a FQDN
  DOMAIN: autolb.example.com

  # uaa prefix is required
  UAA_HOST: uaa.autolb.example.com
  UAA_PORT: 2793

  # Azure deployment requires overlay
  GARDEN_ROOTFS_DRIVER: "overlay-xfs"

kube:
  storage_class:
    # Azure supports only "default" or "managed-premium"
    persistent: "managed-premium"
    shared: "shared"

  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"

secrets:
  # Password for user 'admin' in the cluster
  CLUSTER_ADMIN_PASSWORD: password

  # Password for scf to authenticate with uaa
  UAA_ADMIN_CLIENT_SECRET: password

services:
  loadbalanced: true

8.6.1 Deploy uaa

Use Helm to deploy the uaa server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-azure-values.yaml

Wait until you have a successful uaa deployment before going to the next steps. Monitor the progress using the watch command:

tux > watch -c 'kubectl get pods --namespace uaa'

Once the deployment completes, a Kubernetes service for uaa will be exposed on an Azure load balancer that is automatically set up by AKS (named kubernetes in the resource group that hosts the worker node VMs).

List the services that have been exposed on the load balancer public IP. The names of these services end in -public. For example, the uaa service is exposed on 40.85.188.67 and port 2793.

tux > kubectl get services --namespace uaa | grep public
uaa-uaa-public    LoadBalancer   10.0.67.56     40.85.188.67   2793:32034/TCP

Use the DNS service of your choice to set up DNS A records for the service from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the uaa service, map the following domains:

    uaa.DOMAIN

    Using the example values, an A record for uaa.autolb.example.com that points to 40.85.188.67 would be created.

    *.uaa.DOMAIN

    Using the example values, an A record for *.uaa.autolb.example.com that points to 40.85.188.67 would be created.

If you wish to use the DNS service provided by Azure, see the Azure DNS Documentation to learn more.
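If you do use Azure DNS, the following is a hedged sketch of creating the A records with the az CLI. It assumes a DNS zone for autolb.example.com already exists in the resource group $RGNAME, and uses the example IP address from above:

tux > az network dns record-set a add-record --resource-group $RGNAME \
 --zone-name autolb.example.com --record-set-name uaa --ipv4-address 40.85.188.67

tux > az network dns record-set a add-record --resource-group $RGNAME \
 --zone-name autolb.example.com --record-set-name '*.uaa' --ipv4-address 40.85.188.67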

Use curl to verify you are able to connect to the uaa OAuth server on the DNS name configured:

tux > curl -k https://uaa.autolb.example.com:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{"issuer":"https://uaa.autolb.example.com:2793/oauth/token",
"authorization_endpoint":"https://uaa.autolb.example.com:2793
/oauth/authorize","token_endpoint":"https://uaa.autolb.example.com:2793/oauth/token"

8.6.2 Deploy scf

Before deploying scf, ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, pass your uaa secret and certificate to scf, then use Helm to deploy scf:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-azure-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Monitor the deployment progress using the watch command:

tux > watch -c 'kubectl get pods --namespace scf'

Once the deployment completes, a number of public services will be set up using a load balancer that has been configured with the corresponding load balancing rules and probes, and with the correct ports opened in the Network Security Group.

List the services that have been exposed on the load balancer public IP. The names of these services end in -public. For example, the gorouter service is exposed on 23.96.32.205:

tux > kubectl get services --namespace scf | grep public
diego-ssh-ssh-proxy-public                  LoadBalancer   10.0.44.118    40.71.187.83   2222:32412/TCP                                                                                                                                    1d
router-gorouter-public                      LoadBalancer   10.0.116.78    23.96.32.205   80:32136/TCP,443:32527/TCP,4443:31541/TCP                                                                                                         1d
tcp-router-tcp-router-public                LoadBalancer   10.0.132.203   23.96.46.98    20000:30337/TCP,20001:31530/TCP,20002:32118/TCP,20003:30750/TCP,20004:31014/TCP,20005:32678/TCP,20006:31528/TCP,20007:31325/TCP,20008:30060/TCP   1d

Use the DNS service of your choice to set up DNS A records for the services from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the gorouter service, map the following domains:

    DOMAIN

    Using the example values, an A record for autolb.example.com that points to 23.96.32.205 would be created.

    *.DOMAIN

    Using the example values, an A record for *.autolb.example.com that points to 23.96.32.205 would be created.

  • For the diego-ssh service, map the following domain:

    ssh.DOMAIN

    Using the example values, an A record for ssh.autolb.example.com that points to 40.71.187.83 would be created.

  • For the tcp-router service, map the following domain:

    tcp.DOMAIN

    Using the example values, an A record for tcp.autolb.example.com that points to 23.96.46.98 would be created.

If you wish to use the DNS service provided by Azure, see the Azure DNS Documentation to learn more.

Your load balanced deployment of Cloud Application Platform is now complete. Verify you can access the API endpoint:

tux > cf api --skip-ssl-validation https://api.autolb.example.com

8.7 Configuring and Testing the Native Microsoft AKS Service Broker

Microsoft Azure Kubernetes Service provides a service broker. This section describes how to use it with your SUSE Cloud Application Platform deployment.

Start by extracting and setting a batch of environment variables:

tux > SBRGNAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 8)-service-broker
      
tux > REGION=eastus

tux > export SUBSCRIPTION_ID=$(az account show | jq -r '.id')

tux > az group create --name ${SBRGNAME} --location ${REGION}

tux > SERVICE_PRINCIPAL_INFO=$(az ad sp create-for-rbac --name ${SBRGNAME})

tux > TENANT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.tenant')

tux > CLIENT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.appId')

tux > CLIENT_SECRET=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.password')

tux > echo SBRGNAME=${SBRGNAME}

tux > echo REGION=${REGION}

tux > echo SUBSCRIPTION_ID=${SUBSCRIPTION_ID} \; TENANT_ID=${TENANT_ID}\; CLIENT_ID=${CLIENT_ID}\; CLIENT_SECRET=${CLIENT_SECRET}

Add the necessary Helm repositories and download the charts:

tux > helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
        
tux > helm repo update

tux > helm install svc-cat/catalog --name catalog \
 --namespace catalog \
 --set controllerManager.healthcheck.enabled=false \
 --set apiserver.healthcheck.enabled=false

tux > kubectl get apiservice

tux > helm repo add azure https://kubernetescharts.blob.core.windows.net/azure

tux > helm repo update

Set up the service broker with your variables, then create it:

tux > helm install azure/open-service-broker-azure \
--name osba \
--namespace osba \
--set azure.subscriptionId=${SUBSCRIPTION_ID} \
--set azure.tenantId=${TENANT_ID} \
--set azure.clientId=${CLIENT_ID} \
--set azure.clientSecret=${CLIENT_SECRET} \
--set azure.defaultLocation=${REGION} \
--set redis.persistence.storageClass=default \
--set basicAuth.username=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set basicAuth.password=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set tls.enabled=false

tux > cf login

tux > cf create-service-broker azure $(kubectl get deployment osba-open-service-broker-azure \
--namespace osba -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name == "BASIC_AUTH_USERNAME")].value}') $(kubectl get secret --namespace osba osba-open-service-broker-azure -o jsonpath='{.data.basic-auth-password}' | base64 -d) http://osba-open-service-broker-azure.osba

Register the service broker in SUSE Cloud Foundry:

tux > cf service-access -b azure | \
awk '($2 ~ /basic/) { system("cf enable-service-access " $1 " -p " $2)}'

Test your new service broker with an example Rails app. First create a space and org for your test app:

tux > cf create-org testorg

tux > cf create-space scftest -o testorg

tux > cf target -o "testorg" -s "scftest"

tux > cf create-service azure-mysql-5-7 basic scf-rails-example-db \
-c "{ \"location\": \"${REGION}\", \"resourceGroup\": \"${SBRGNAME}\", \"firewallRules\": [{\"name\": \
\"AllowAll\", \"startIPAddress\":\"0.0.0.0\",\"endIPAddress\":\"255.255.255.255\"}]}"

tux > cf service scf-rails-example-db | grep status

Find your new service and optionally disable TLS. You should not disable TLS on a production deployment, but it simplifies testing. Otherwise, the mysql2 gem must be configured to use TLS; see the brianmario/mysql2 SSL options on GitHub:

tux > az mysql server list --resource-group $SBRGNAME

tux > az mysql server update --resource-group $SBRGNAME \
--name scftest --ssl-enforcement Disabled

Look in your Azure portal to find your database --name.
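Alternatively, a hedged way to list the server names from the command line instead of the portal (jq is assumed to be installed, as in the prerequisites):

tux > az mysql server list --resource-group $SBRGNAME -o json | jq -r '.[].name'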

Build and push the test Rails app:

tux > git clone https://github.com/scf-samples/rails-example

tux > cd rails-example

tux > cf push

tux > cf ssh scf-rails-example -c "export PATH=/home/vcap/deps/0/bin:/usr/local/bin:/usr/bin:/bin && \
export BUNDLE_PATH=/home/vcap/deps/0/vendor_bundle/ruby/2.5.0 && \
export BUNDLE_GEMFILE=/home/vcap/app/Gemfile && cd app && bundle exec rake db:seed"

tux > cf service scf-rails-example-db # => bound apps

Test your new service deployment with curl, replacing the example IP address with your own IP address:

tux > curl -k https://scf-rails-example.40.101.3.25

A successful deployment returns output like this abbreviated example:

 <h1>Hello from Rails on SCF!</h1>
2018-12-19T12:53:30+00:00

<ul>
  <li>#1: Drink coffee</li>
  <li>#2: Go to work</li>
  <li>#3: Have some rest</li>
</ul>

9 Deploying SUSE Cloud Application Platform on Amazon EKS

Important: README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/

Read Chapter 3, Deployment and Administration Notes

This chapter describes how to deploy SUSE Cloud Application Platform on Amazon EKS, using Amazon's Elastic Load Balancer to provide fault-tolerant access to your cluster.

9.1 Prerequisites

You need an Amazon EKS account. See Getting Started with Amazon EKS for instructions on creating a Kubernetes cluster for your SUSE Cloud Application Platform deployment.

When you create your cluster, use node sizes that are at least t2.large. The NodeVolumeSize must be a minimum of 60GB.

Take note of special configurations that are required to successfully deploy SUSE Cloud Application Platform on EKS in Section 9.11, “Deploy scf”.

Section 9.2, “IAM Requirements for EKS” provides guidance on configuring Identity and Access Management (IAM) for your users.

9.2 IAM Requirements for EKS

These IAM policies provide sufficient access to use EKS.

9.2.1 Unscoped Operations

Some of these permissions are very broad. They are difficult to scope effectively, in part because many resources are created and named dynamically when deploying an EKS cluster using the CloudFormation console. It may be helpful to enforce certain naming conventions, such as prefixing cluster names with ${aws:username} for pattern-matching in Conditions. However, this requires special consideration beyond the EKS deployment guide, and should be evaluated in the broader context of organizational IAM policies.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UnscopedOperations",
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateUploadBucket",
                "cloudformation:EstimateTemplateCost",
                "cloudformation:ListExports",
                "cloudformation:ListStacks",
                "cloudformation:ListImports",
                "cloudformation:DescribeAccountLimits",
                "eks:ListClusters",
                "cloudformation:ValidateTemplate",
                "cloudformation:GetTemplateSummary",
                "eks:CreateCluster"
            ],
            "Resource": "*"
        },
        {
            "Sid": "EffectivelyUnscopedOperations",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:GetRole",
                "iam:DetachRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "cloudformation:*",
                "iam:CreateRole",
                "iam:DeleteRole",
                "eks:*"
            ],
            "Resource": [
                "arn:aws:eks:*:*:cluster/*",
                "arn:aws:cloudformation:*:*:stack/*/*",
                "arn:aws:cloudformation:*:*:stackset/*:*",
                "arn:aws:iam::*:instance-profile/*",
                "arn:aws:iam::*:role/*"
            ]
        }
    ]
}

9.2.2 Scoped Operations

These policies deal with sensitive access controls, such as passing roles and attaching/detaching policies from roles.

This policy, as written, allows unrestricted use of only customer-managed policies, and not Amazon-managed policies. This prevents potential security holes such as attaching the IAMFullAccess policy to a role. If you are using roles in a way that would be undermined by this, you should strongly consider integrating a Permissions Boundary before using this policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UseCustomPoliciesWithCustomRoles",
            "Effect": "Allow",
            "Action": [
                "iam:DetachRolePolicy",
                "iam:AttachRolePolicy"
            ],
            "Resource": [
                "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*",
                "arn:aws:iam::<YOUR_ACCOUNT_ID>:policy/*"
            ],
            "Condition": {
                "ForAllValues:ArnNotLike": {
                    "iam:PolicyARN": "arn:aws:iam::aws:policy/*"
                }
            }
        },
        {
            "Sid": "AllowPassingRoles",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*"
        },
        {
            "Sid": "AddCustomRolesToInstanceProfiles",
            "Effect": "Allow",
            "Action": "iam:AddRoleToInstanceProfile",
            "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:instance-profile/*"
        },
        {
            "Sid": "AssumeServiceRoleForEKS",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<EKS_SERVICE_ROLE_NAME>"
        },
        {
            "Sid": "DenyUsingAmazonManagedPoliciesUnlessNeededForEKS",
            "Effect": "Deny",
            "Action": "iam:*",
            "Resource": "arn:aws:iam::aws:policy/*",
            "Condition": {
                "ArnNotEquals": {
                    "iam:PolicyARN": [
                        "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
                        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
                        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAttachingSpecificAmazonManagedPoliciesForEKS",
            "Effect": "Allow",
            "Action": [
                "iam:DetachRolePolicy",
                "iam:AttachRolePolicy"
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "iam:PolicyARN": [
                        "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
                        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
                        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
                    ]
                }
            }
        }
    ]
}

9.3 The Helm CLI and Tiller

Get the latest version of Helm, and installation instructions, from the Helm Quickstart Guide. Then create a Tiller service account to give Tiller sufficient permissions to make changes on your cluster. Create a configuration file, which in this example is rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Apply it to your cluster with these commands:

tux > kubectl create -f rbac-config.yaml
tux > helm init --service-account tiller

9.4 Default Storage Class

This example creates a simple storage class for your cluster in storage-class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

Then apply the new storage class configuration with this command:

tux > kubectl create -f storage-class.yaml
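You can confirm that the class was created and is marked as the default with:

tux > kubectl get storageclass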

9.5 Security Group rules

In your EC2 virtual machine list, add the following rules to the security group of any one of your nodes:

Type		   Protocol     Port Range      Source          Description
HTTP               TCP          80              0.0.0.0/0       CAP HTTP
Custom TCP Rule    TCP          2793            0.0.0.0/0       CAP UAA
Custom TCP Rule    TCP          2222            0.0.0.0/0       CAP SSH
Custom TCP Rule    TCP          4443            0.0.0.0/0       CAP WSS
Custom TCP Rule    TCP          443             0.0.0.0/0       CAP HTTPS
Custom TCP Rule    TCP          20000-20009     0.0.0.0/0       TCP Routing
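A hedged example of adding one of these rules with the AWS CLI; the security group ID is a placeholder that you replace with the ID of the group attached to your nodes:

tux > aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
 --protocol tcp --port 2793 --cidr 0.0.0.0/0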

9.6 DNS Configuration

Creation of the Elastic Load Balancer is triggered by a setting in the Cloud Application Platform deployment configuration file. First deploy uaa, then create CNAMEs for your domain and uaa subdomains (see the table below). Then deploy scf, and create the appropriate scf CNAMEs. This is described in more detail in the deployment sections.

The following table lists the required domain and sub-domains, using example.com as the example domain:

Domains                  Services
uaa.example.com          uaa/uaa-public
*.uaa.example.com        uaa/uaa-public
example.com              scf/router-public
*.example.com            scf/router-public
tcp.example.com          scf/tcp-router-public
ssh.example.com          scf/diego-access-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions             Kubernetes service names
User Account and Authentication (uaa)       uaa-uaa-public
Cloud Foundry (CF) TCP routing service      tcp-router-tcp-router-public
CF application SSH access                   diego-ssh-ssh-proxy-public
CF router                                   router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.

9.7 Deployment Configuration

Use this example scf-config-values.yaml as a template for your configuration.

env:
  DOMAIN: example.com
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  
  GARDEN_ROOTFS_DRIVER: overlay-xfs
  GARDEN_APPARMOR_PROFILE: ""
  
services:
  loadbalanced: true
  
kube:
  storage_class:
    # Change the value to the storage class you use
    persistent: "gp2"
    shared: "gp2"

  # The default registry images are fetched from
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"
  
secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

9.8 Deploying Cloud Application Platform

The following list provides an overview of Helm commands to complete the deployment. Included are links to detailed descriptions.

  1. Download the SUSE Kubernetes charts repository (Section 9.9, “Add the Kubernetes charts repository”)

  2. Deploy uaa, then create appropriate CNAMEs (Section 9.10, “Deploy uaa”)

  3. Copy the uaa secret and certificate to the scf namespace, deploy scf, create CNAMEs (Section 9.11, “Deploy scf”)

9.9 Add the Kubernetes charts repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME            URL                                             
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts                    
suse            https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                        	VERSION	DESCRIPTION
suse/cf-opensuse            	2.15.2  A Helm chart for SUSE Cloud Foundry
suse/uaa-opensuse           	2.15.2  A Helm chart for SUSE UAA
suse/cf                     	2.15.2  A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql   	1.0.1  	A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres	1.0.1  	A Helm chart for SUSE Universal Service Broker ...
suse/console                	2.3.0   A Helm chart for deploying Stratos UI Console
suse/metrics                	1.0.0  	A Helm chart for Stratos Metrics
suse/nginx-ingress          	0.28.3 	An nginx Ingress controller that uses ConfigMap...
suse/uaa                    	2.15.2  A Helm chart for SUSE UAA

9.10 Deploy uaa

Use Helm to deploy the uaa (User Account and Authentication) server. You may create your own release --name:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch -c 'kubectl get pods --namespace uaa'

When the status shows RUNNING for all of the uaa pods, create CNAMEs for the required domains. (See Section 9.6, “DNS Configuration”.) Use kubectl to find the service hostnames. These hostnames include the elb sub-domain, so use this to get the correct results:

tux > kubectl get services --namespace uaa | grep elb
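Once you have the ELB hostname, the following is a hedged sketch of creating one of the CNAMEs with Amazon Route 53; the hosted zone ID and ELB hostname are placeholders:

tux > aws route53 change-resource-record-sets --hosted-zone-id Z0000000000000 \
 --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {"Name": "uaa.example.com",
 "Type": "CNAME", "TTL": 300, "ResourceRecords": [{"Value": "example-elb-hostname.us-east-1.elb.amazonaws.com"}]}}]}'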
Important: Some pods show not running

Some uaa and scf pods perform only deployment tasks, and it is normal for them to show as unready and Completed after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz   0/1       Completed
          
tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1       Completed
post-deployment-setup-1-hnpln   0/1       Completed

9.11 Deploy scf

Because of the way EKS runs health checks, Cloud Application Platform requires an edit to one of the scf Helm charts, and then, after a successful scf deployment, the removal of a listening port from the Elastic Load Balancer's listeners list.

On your remote admin machine, look in ~/.helm/cache/archive/cf-2.15.2.tgz. Extract the cf-2.15.2.tgz archive (a sketch of this step follows the example below), and add this to the ports section of templates/tcp-router.yaml:

- name: "healthcheck"
  port: 8080
  protocol: "TCP"
  targetPort: "healthcheck"

The section should look like this:

    ports:
    - name: "healthcheck"
      port: 8080
      protocol: "TCP"
      targetPort: "healthcheck"
    {{- range $port := until (int .Values.sizing.tcp_router.ports.tcp_route.count) }}
    - name: "tcp-route-{{ $port }}"
      port: {{ add 20000 $port }}
      protocol: "TCP"
      targetPort: "tcp-route-{{ $port }}"
    {{- end }}
    selector:
      app.kubernetes.io/component: "tcp-router"
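A hedged sketch of extracting the cached chart before making the edit above; the extracted directory name cf is an assumption based on the chart name:

tux > cd ~/.helm/cache/archive

tux > tar xzf cf-2.15.2.tgz

tux > vi cf/templates/tcp-router.yaml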

Now pass your uaa secret and certificate to scf, then use Helm to install SUSE Cloud Foundry:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Sit back and wait for the pods to come online:

tux > watch -c 'kubectl get pods --namespace scf'

Remove port 8080 from the load balancer's listeners list:

tux > aws elb delete-load-balancer-listeners \
--load-balancer-name  healthcheck   \
--load-balancer-ports 8080
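If you are unsure of the load balancer name to use, a hedged way to list the names of your classic load balancers:

tux > aws elb describe-load-balancers \
 --query 'LoadBalancerDescriptions[].LoadBalancerName'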

Now health checks should operate correctly. When the status shows RUNNING for all of the scf pods, create CNAMEs for the required domains. (See Section 9.6, “DNS Configuration”.) Use kubectl to find the service hostnames. These hostnames include the elb sub-domain, so use this to get the correct results:

tux > kubectl get services --namespace scf | grep elb

9.12 Deploying and Using the AWS Service Broker

The AWS Service Broker provides integration of native AWS services with SUSE Cloud Application Platform.

9.12.1 Prerequisites

Deploying and using the AWS Service Broker requires the following:

9.12.2 Setup

  1. Create the required DynamoDB table where the AWS service broker will store its data. This example creates a table named awssb:

    tux > aws dynamodb create-table \
    		--attribute-definitions \
    			AttributeName=id,AttributeType=S \
    			AttributeName=userid,AttributeType=S \
    			AttributeName=type,AttributeType=S \
    		--key-schema \
    			AttributeName=id,KeyType=HASH \
    			AttributeName=userid,KeyType=RANGE \
    		--global-secondary-indexes \
    			'IndexName=type-userid-index,KeySchema=[{AttributeName=type,KeyType=HASH},{AttributeName=userid,KeyType=RANGE}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[id,userid,type,locked]},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' \
    		--provisioned-throughput \
    			ReadCapacityUnits=5,WriteCapacityUnits=5 \
    		--region ${AWS_REGION} --table-name awssb
  2. Wait until the table has been created. When it is ready, the TableStatus will change to ACTIVE. Check the status using the describe-table command:

    aws dynamodb describe-table --table-name awssb

    (For more information about the describe-table command, see https://docs.aws.amazon.com/cli/latest/reference/dynamodb/describe-table.html.)

  3. Set a name for the Kubernetes namespace you will install the service broker to. This name will also be used in the service broker URL:

    tux > BROKER_NAMESPACE=aws-sb
  4. Create a server certificate for the service broker:

    1. Create and use a separate directory to avoid conflicts with other CA files:

      tux > mkdir /tmp/aws-service-broker-certificates && cd $_
    2. Get the CA certificate:

      tux > kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert}' | base64 -di > ca.pem
    3. Get the CA private key:

      tux > kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert-key}' | base64 -di > ca.key
    4. Create a signing request:

      tux > openssl req -newkey rsa:4096 -keyout tls.key.encrypted -out tls.req -days 365 \
        -passout pass:1234 \
        -subj "/CN=aws-servicebroker.${BROKER_NAMESPACE}" -batch \
        </dev/null
    5. Decrypt the generated broker private key:

      tux > openssl rsa -in tls.key.encrypted -passin pass:1234 -out tls.key
    6. Sign the request with the CA certificate:

      tux > openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in tls.req -out tls.pem
  5. Install the AWS service broker as documented at https://github.com/awslabs/aws-servicebroker/blob/master/docs/getting-started-k8s.md. Skip the installation of the Kubernetes Service Catalog. While installing the AWS Service Broker, make sure to update the Helm chart version (the version as of this writing is 1.0.0-beta.3). For the broker install, pass in a value indicating the Cluster Service Broker should not be installed (for example --set deployClusterServiceBroker=false). Ensure an account and role with adequate IAM rights is chosen (see Section 9.12.1, “Prerequisites”):

    tux > helm install aws-sb/aws-servicebroker \
    	     --name aws-servicebroker \
    	     --namespace ${BROKER_NAMESPACE} \
    	     --version 1.0.0-beta.3 \
    	     --set aws.secretkey=$aws_access_key \
    	     --set aws.accesskeyid=$aws_key_id \
    	     --set deployClusterServiceBroker=false \
    	     --set tls.cert="$(base64 -w0 tls.pem)" \
    	     --set tls.key="$(base64 -w0 tls.key)" \
    	     --set-string aws.targetaccountid=${AWS_TARGET_ACCOUNT_ID} \
    	     --set aws.targetrolename=${AWS_TARGET_ROLE_NAME} \
    	     --set aws.tablename=awssb \
    	     --set aws.vpcid=$vpcid \
    	     --set aws.region=$aws_region \
    	     --set authenticate=false
  6. Log into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
    tux > cf login -u admin -p password
    tux > cf create-org org
    tux > cf create-space space
    tux > cf target -o org -s space
  7. Create a service broker in scf. Note the name of the service broker should be the same as the one specified for the --name flag in the helm install step (for example aws-servicebroker). Note that the username and password parameters are only used as dummy values to pass to the cf command:

    tux > cf create-service-broker aws-servicebroker username password  https://aws-servicebroker.${BROKER_NAMESPACE}
  8. Verify the service broker has been registered:

    tux > cf service-brokers
  9. List the available service plans:

    tux > cf service-access
  10. Enable access to a service. This example uses the -p flag to enable access to a specific service plan. See https://github.com/awslabs/aws-servicebroker/blob/master/templates/rdsmysql/template.yaml for information about all available services and their associated plans:

    tux > cf enable-service-access rdsmysql -p custom
  11. Create a service instance. As an example, a custom MySQL instance can be created as:

    tux > cf create-service rdsmysql custom mysql-instance-name -c '{
      "AccessCidr": "192.0.2.24/32",
      "BackupRetentionPeriod": 0,
      "MasterUsername": "master",
      "DBInstanceClass": "db.t2.micro",
      "PubliclyAccessible": "true",
      "region": "${AWS_REGION}",
      "StorageEncrypted": "false",
      "VpcId": "${AWS_VPC}",
      "target_account_id": "${AWS_TARGET_ACCOUNT_ID}",
      "target_role_name": "${AWS_TARGET_ROLE_NAME}"
    }'
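
Once created, the service instance can be bound to an application so the application receives its credentials. A brief sketch, using the hypothetical application name my_app (the instance is unbound again during cleanup below):

tux > cf bind-service my_app mysql-instance-name
tux > cf restage my_app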

9.12.3 Cleanup

When the AWS Service Broker and its services are no longer required, perform the following steps:

  1. Unbind any applications using any service instances then delete the service instance:

    tux > cf unbind-service my_app mysql-instance-name
    tux > cf delete-service mysql-instance-name
  2. Delete the service broker in scf:

    tux > cf delete-service-broker aws-servicebroker
  3. Delete the deployed Helm chart and the namespace:

    tux > helm delete --purge aws-servicebroker
    tux > kubectl delete namespace ${BROKER_NAMESPACE}
  4. The manually created DynamoDB table will need to be deleted as well:

    tux > aws dynamodb delete-table --table-name awssb --region ${AWS_REGION}

10 Installing SUSE Cloud Application Platform on OpenStack

You can deploy a SUSE Cloud Application Platform on CaaS Platform stack on OpenStack. This chapter describes how to deploy a small testing and development instance with one Kubernetes master and two worker nodes, using Terraform to automate the deployment. This does not create a production deployment, which should be deployed on bare metal for best performance.

Important
Important: README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/

Read Chapter 3, Deployment and Administration Notes

10.1 Prerequisites

The following prerequisites should be met before attempting to deploy SUSE Cloud Application Platform on OpenStack. The memory and disk space requirements are minimums, and may need to be larger according to your workloads.

  • 8GB of memory per CaaS Platform dashboard and Kubernetes master node

  • 16GB of memory per Kubernetes worker

  • 40GB disk space per CaaS Platform dashboard and Kubernetes master node

  • 60GB disk space per Kubernetes worker

  • A SUSE Customer Center account for downloading CaaS Platform. Get SUSE-CaaS-Platform-2.0-KVM-and-Xen.x86_64-1.0.0-GM.qcow2, which has been tested on OpenStack.

  • Download the openrc.sh file for your OpenStack account

10.2 Create a New OpenStack Project

You may use an existing OpenStack project, or run the following commands to create a new project with the necessary configuration for SUSE Cloud Application Platform.

tux > openstack project create --domain default --description "CaaS Platform Project" caasp
tux > openstack role add --project caasp --user admin admin

Create an OpenStack network plus a subnet for CaaS Platform (for example, caasp-net), and add a router to the external (floating) network:

tux > openstack network create caasp-net
tux > openstack subnet create caasp_subnet --network caasp-net \
--subnet-range 10.0.2.0/24
tux > openstack router create caasp-net-router
tux > openstack router set caasp-net-router --external-gateway floating
tux > openstack router add subnet caasp-net-router caasp_subnet

Upload your CaaS Platform image to your OpenStack account:

tux > openstack image create \
  --file SUSE-CaaS-Platform-2.0-KVM-and-Xen.x86_64-1.0.0-GM.qcow2 \
  SUSE-CaaS-Platform-2.0

Create a security group with the rules needed for CaaS Platform:

tux > openstack security group create cap --description "Allow CAP traffic"
tux > openstack security group rule create cap --protocol any --dst-port any --ethertype IPv4 --egress
tux > openstack security group rule create cap --protocol any --dst-port any --ethertype IPv6 --egress
tux > openstack security group rule create cap --protocol tcp --dst-port 20000:20008 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 443:443 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 2793:2793 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 4443:4443 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 80:80 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 2222:2222 --remote-ip 0.0.0.0/0

Clone the Terraform script from GitHub:

tux > git clone git@github.com:kubic-project/automation.git
tux > cd automation/caasp-openstack-terraform

Edit the openstack.tfvars file. Use the names of your OpenStack objects, for example:

image_name = "SUSE-CaaS-Platform-2.0"
internal_net = "caasp-net"
external_net = "floating"
admin_size = "m1.large"
master_size = "m1.large"
masters = 1
worker_size = "m1.xlarge"
workers = 2

Initialize Terraform:

tux > terraform init

10.3 Deploy SUSE Cloud Application Platform

Source your openrc.sh file, set the project, and deploy CaaS Platform:

tux > . openrc.sh
tux > export OS_PROJECT_NAME='caasp'
tux > ./caasp-openstack apply

Wait for a few minutes until all systems are up and running, then view your installation:

tux > openstack server list

Add your cap security group to all CaaS Platform workers:

tux > openstack server add security group caasp-worker0 cap
tux > openstack server add security group caasp-worker1 cap

If you need to log into your new nodes, log in as root using the SSH key in the automation/caasp-openstack-terraform/ssh directory.
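
For example, assuming the private key file in that directory is named id_caasp and a worker has the floating IP address 198.51.100.10, a login might look like:

tux > ssh -i automation/caasp-openstack-terraform/ssh/id_caasp root@198.51.100.10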

10.4 Bootstrap SUSE Cloud Application Platform

The following examples use the xip.io wildcard DNS service. You may use your own DNS/DHCP services that you have set up in OpenStack in place of xip.io.

  • Point your browser to the IP address of the CaaS Platform admin node, and create a new admin user login

  • Replace the default IP address or domain name of the Internal Dashboard FQDN/IP on the Initial CaaS Platform configuration screen with the internal IP address of the CaaS Platform admin node

  • Check the Install Tiller checkbox, then click the Next button

  • Terraform automatically creates all of your worker nodes, according to the number you configured in openstack.tfvars, so click Next to skip Bootstrap your CaaS Platform

  • On the Select nodes and roles screen click Accept all nodes, click to define your master and worker nodes, then click Next

  • For the External Kubernetes API FQDN, use the public (floating) IP address of the CaaS Platform master and append the .xip.io domain suffix

  • For the External Dashboard FQDN use the public (floating) IP address of the CaaS Platform admin node, and append the .xip.io domain suffix

10.5 Growing the Root Filesystem

If the root filesystem on your worker nodes is smaller than the OpenStack virtual disk, use these commands on the worker nodes to grow the filesystems to match:

tux > growpart /dev/vda 3
tux > btrfs filesystem resize max /.snapshots
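
Afterward, a quick check confirms that the root filesystem now reports the full size of the virtual disk:

tux > df -h /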

Part III SUSE Cloud Application Platform Administration

11 Upgrading SUSE Cloud Application Platform
12 Configuration Changes

After the initial deployment of Cloud Application Platform, any changes made to your Helm chart values, whether through your scf-config-values.yaml file or directly using Helm's --set flag, are applied using the helm upgrade command.

13 Managing Passwords

The various components of SUSE Cloud Application Platform authenticate to each other using passwords that are automatically managed by the Cloud Application Platform secrets-generator. The only passwords managed by the cluster administrator are passwords for human users. The administrator may create…

14 Cloud Controller Database Secret Rotation

The Cloud Controller Database (CCDB) encrypts sensitive information like passwords. By default, the encryption key is generated by SCF. If it is compromised and needs to be rotated, new keys can be added. Note that existing encrypted information will not be updated. The encrypted information must be…

15 Backup and Restore
16 Provisioning Services with Minibroker

Minibroker is an OSBAPI compliant broker created by members of the Microsoft Azure team. It provides a simple method to provision service brokers on Kubernetes clusters.

17 Setting up and Using a Service Broker

The Open Service Broker API provides your SUSE Cloud Application Platform applications with access to external dependencies and platform-level capabilities, such as databases, filesystems, external repositories, and messaging systems. These resources are called services. Services are created, used, …

18 App-AutoScaler

The App-AutoScaler service is used for automatically managing an application's instance count when deployed on SUSE Cloud Foundry. The scaling behavior is determined by a set of criteria defined in a policy (See Section 18.4, “Policies”).

19 Logging

There are two types of logs in a deployment of Cloud Application Platform, applications logs and component logs.

20 Managing Certificates

The traffic of your SUSE Cloud Application Platform deployment can be made more secure through the use of TLS certificates.

21 Integrating CredHub with SUSE Cloud Application Platform

SUSE Cloud Application Platform supports CredHub integration. You should already have a working CredHub instance and a CredHub service on your cluster; then apply the steps in this chapter to connect SUSE Cloud Application Platform.

22 Offline Buildpacks

Buildpacks are used to construct the environment needed to run your applications, including any required runtimes or frameworks as well as other dependencies. When you deploy an application, a buildpack can be specified or automatically detected by cycling through all available buildpacks to find on…

23 Custom Application Domains

In a standard SUSE Cloud Foundry deployment, applications will use the same domain as the one configured in your scf-config-values.yaml for SCF. For example, if DOMAIN is set as example.com in your scf-config-values.yaml and you deploy an application called myapp then the application's URL will be m…

11 Upgrading SUSE Cloud Application Platform

11.1 Upgrading SUSE Cloud Application Platform

uaa, scf, and Stratos together make up a SUSE Cloud Application Platform release. Maintenance updates are delivered as container images from the SUSE registry and applied with Helm.

For additional upgrade information, always review the release notes published at https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/.

Warning
Warning: helm rollback is not supported

helm rollback is not supported in SUSE Cloud Application Platform or in upstream Cloud Foundry, and may break your cluster completely, because database migrations only run forward and cannot be reversed. The database schema can change over time. During an upgrade, pods of both the current release and the next release may run concurrently, so the schema must stay compatible with the immediately previous release. But there is no way to guarantee such compatibility for future upgrades. One way to address this is to perform a full raw data backup and restore. (See Section 15.2, “Disaster recovery in scf through raw data backup and restore”.)

Do not make changes to pod counts when you upgrade. You may make sizing changes before or after an upgrade (see Section 6.1, “Example High Availability Configuration”).

Use Helm to check for updates:

tux > helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "suse" chart repository
Update Complete. ⎈ Happy Helming!⎈

Get your currently-installed release versions and chart names (your releases may have different names than the examples), and then view the upgrade versions:

tux > helm list
NAME            REVISION  UPDATED                  STATUS    CHART           NAMESPACE
susecf-console  1         Tue Aug 14 11:53:28 2018 DEPLOYED  console-2.0.0   stratos  
susecf-scf      1         Tue Aug 14 10:58:16 2018 DEPLOYED  cf-2.11.0       scf      
susecf-uaa      1         Tue Aug 14 10:49:30 2018 DEPLOYED  uaa-2.11.0      uaa

tux > helm search suse
NAME                            VERSION DESCRIPTION
suse/cf-opensuse                2.15.2  A Helm chart for SUSE Cloud Foundry
suse/uaa-opensuse               2.15.2  A Helm chart for SUSE UAA
suse/cf                         2.15.2  A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres    1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/console                    2.3.0   A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0   A Helm chart for Stratos Metrics
suse/nginx-ingress              0.28.3  An nginx Ingress controller that uses ConfigMap...
suse/uaa                        2.15.2  A Helm chart for SUSE UAA

View all charts in a release, and their versions:

tux > helm search suse/uaa -l
NAME            VERSION     DESCRIPTION              
suse/uaa        2.15.2      A Helm chart for SUSE UAA
suse/uaa        2.14.5      A Helm chart for SUSE UAA
suse/uaa        2.13.3      A Helm chart for SUSE UAA
suse/uaa        2.11.0      A Helm chart for SUSE UAA
suse/uaa        2.10.1      A Helm chart for SUSE UAA
suse/uaa        2.8.0       A Helm chart for SUSE UAA
suse/uaa        2.7.0       A Helm chart for SUSE UAA
...

When you have verified that the upgrade is the next release from your current release, run the following commands to perform the upgrade. What if you missed a release? See Section 11.2, “Installing Skipped Releases”.

Important
Important: Changes to Commands

Take note of the new commands for extracting and using secrets and certificates. If you are still on the SUSE Cloud Application Platform 1.1 release, update your scf-config-values.yaml file with the changes for secrets handling and external IP addresses. (See Section 4.5, “Configure the SUSE Cloud Application Platform Production Deployment” for an example.)

Just like your initial installation, wait for each command to complete before running the next command. Monitor progress with the watch command for each namespace, for example watch -c 'kubectl get pods --namespace uaa'. First upgrade uaa:

tux > helm upgrade susecf-uaa suse/uaa \
    --values scf-config-values.yaml
Important
Important: Using --recreate-pods during a helm upgrade

When upgrading from SUSE Cloud Application Platform 1.3.0 to 1.3.1, running helm upgrade does not require the --recreate-pods option to be used. A change to the active/passive model has allowed for previously unready pods to be upgraded, which allows for zero app downtime during the upgrade process.

Upgrades between other versions will require the --recreate-pods option when using the helm upgrade command. For example, the command to upgrade uaa from SUSE Cloud Application Platform 1.2.1 to 1.3.0 will be as follows:

tux > helm upgrade --recreate-pods susecf-uaa suse/uaa \
    --values scf-config-values.yaml

Then extract the uaa secret for scf to use:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

Upgrade scf, and note that if you see an error message like lost connection to pod Error: UPGRADE FAILED: transport is closing, this is normal. If you can run watch -c 'kubectl get pods --namespace scf' then everything is all right.

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"
Important
Important: Using --recreate-pods during a helm upgrade

When upgrading from SUSE Cloud Application Platform 1.3.0 to 1.3.1, running helm upgrade does not require the --recreate-pods option to be used. A change to the active/passive model has allowed for previously unready pods to be upgraded, which allows for zero app downtime during the upgrade process.

Upgrades between other versions will require the --recreate-pods option when using the helm upgrade command. For example, the command to upgrade scf from SUSE Cloud Application Platform 1.2.1 to 1.3.0 will be as follows:

tux > helm upgrade --recreate-pods susecf-scf suse/cf \
    --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"

Then upgrade Stratos:

tux > helm upgrade susecf-console suse/console \
 --values scf-config-values.yaml
Important
Important: Using --recreate-pods during a helm upgrade

When upgrading from SUSE Cloud Application Platform 1.3.0 to 1.3.1, running helm upgrade does not require the --recreate-pods option to be used. A change to the active/passive model has allowed for previously unready pods to be upgraded, which allows for zero app downtime during the upgrade process.

Upgrades between other versions will require the --recreate-pods option when using the helm upgrade command. For example, the command to upgrade Stratos from SUSE Cloud Application Platform 1.2.1 to 1.3.0 will be as follows:

tux > helm upgrade --recreate-pods susecf-console suse/console \
    --values scf-config-values.yaml

11.1.1 Change in URL of internal cf-usb broker endpoint

This change is only applicable for upgrades from Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 and upgrades from Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1. The URL of the internal cf-usb broker endpoint has changed. Brokers for PostgreSQL and MySQL that use cf-usb will require the following manual fix after upgrading to reconnect with SCF/CAP:

For Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 upgrades:

  1. Get the name of the secret (for example secrets-2.14.5-1):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf secrets-2.14.5-1 -o yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the URL from the second step with the leading cf-usb part doubled, separated by a dash (cf-usb-cf-usb):

    tux > cf update-service-broker usb broker-admin password https://cf-usb-cf-usb.scf.svc.cluster.local:24054

For Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1 upgrades:

  1. Get the name of the secret (for example 2.15.2):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb-cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf 2.15.2 -o yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb- part removed:

    tux > cf update-service-broker usb broker-admin password https://cf-usb.scf.svc.cluster.local:24054

11.2 Installing Skipped Releases

By default, Helm always installs the latest release. What if you accidentally skipped a release, and need to apply it before upgrading to the current release? Install the missing release by specifying the Helm chart version number. For example, your current uaa and scf versions are 2.10.1. Consult the table at the beginning of this chapter to see which releases you have missed. In this example, the missing Helm chart version for uaa and scf is 2.11.0. Use the --version option to install a specific version:

tux > helm upgrade --recreate-pods --version 2.11.0 susecf-uaa suse/uaa \
--values scf-config-values.yaml

Be sure to install the corresponding versions for scf and Stratos.
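
For example, assuming the same missed chart version applies to scf, the corresponding upgrade would look like this (with CA_CERT extracted from the uaa secret as shown in Section 11.1):

tux > helm upgrade --recreate-pods --version 2.11.0 susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"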

12 Configuration Changes

After the initial deployment of Cloud Application Platform, any changes made to your Helm chart values, whether through your scf-config-values.yaml file or directly using Helm's --set flag, are applied using the helm upgrade command.

Warning
Warning: Do not make changes to pod counts during a version upgrade

The helm upgrade command can be used to apply configuration changes as well as perform version upgrades to Cloud Application Platform. A change to the pod count configuration should not be applied simultaneously with a version upgrade. Sizing changes should be made separately, either before or after, from a version upgrade (see Section 6.1, “Example High Availability Configuration”).

12.1 Configuration Change Example

Consider an example where more granular log entries are required than those provided by your default deployment of uaa (the default is LOG_LEVEL: "info").

You would then add an entry for LOG_LEVEL to the env section of your scf-config-values.yaml used to deploy uaa:

env:
  LOG_LEVEL: "debug2"

Then apply the change with the helm upgrade command. This example assumes the suse/uaa Helm chart deployed was named susecf-uaa:

tux > helm upgrade susecf-uaa suse/uaa --values scf-config-values.yaml

Once all pods are back in a READY state, the configuration change is in effect. If the chart was deployed to the uaa namespace, progress can be monitored with:

tux > watch -c 'kubectl get pods --namespace uaa'

12.2 Other Examples

The following are other examples of using helm upgrade to make configuration changes:
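
For instance, the LOG_LEVEL change from Section 12.1 could also be applied directly with Helm's --set flag instead of editing scf-config-values.yaml. This is a sketch assuming the release is named susecf-uaa:

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--set "env.LOG_LEVEL=debug2"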

13 Managing Passwords

The various components of SUSE Cloud Application Platform authenticate to each other using passwords that are automatically managed by the Cloud Application Platform secrets-generator. The only passwords managed by the cluster administrator are passwords for human users. The administrator may create and remove user logins, but cannot change user passwords.

  • The cluster administrator password is initially defined in the deployment's values.yaml file with CLUSTER_ADMIN_PASSWORD

  • The Stratos Web UI provides a form for users, including the administrator, to change their own passwords

  • User logins are created (and removed) with the Cloud Foundry Client, cf CLI

13.1 Password Management with the Cloud Foundry Client

The administrator cannot change other users' passwords. Only users may change their own passwords, and password changes require the current password:

tux > cf passwd
Current Password>
New Password> 
Verify Password> 
Changing password...
OK
Please log in again

The administrator can create a new user:

tux > cf create-user username password

and delete a user:

tux > cf delete-user username

Use the cf CLI to assign space and org roles. Run cf help -a for a complete command listing, or see Creating and Managing Users with the cf CLI.
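
For example, a newly created user can be granted org and space roles as follows (using the example org and space names from this guide):

tux > cf set-org-role username org OrgManager
tux > cf set-space-role username org space SpaceDeveloper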

13.2 Changing User Passwords with Stratos

The Stratos Web UI provides a form for changing passwords on your profile page. Click the overflow menu button on the top right to access your profile, then click the edit button on your profile page. You can manage your password and username on this page.

Stratos Profile Page
Figure 13.1: Stratos Profile Page
Stratos Edit Profile Page
Figure 13.2: Stratos Edit Profile Page

14 Cloud Controller Database Secret Rotation

The Cloud Controller Database (CCDB) encrypts sensitive information like passwords. By default, the encryption key is generated by SCF. If it is compromised and needs to be rotated, new keys can be added. Note that existing encrypted information will not be updated. The encrypted information must be set again to have them re-encrypted with the new key. The old key cannot be dropped until all references to it are removed from the database.

Updating these secrets is a manual process. The following procedure outlines how this is done.

  1. Create a file called new-key-values.yaml with content of the form:

    env:
      CC_DB_CURRENT_KEY_LABEL: new_key
    
    secrets:
      CC_DB_ENCRYPTION_KEYS:
        new_key: "new_key_value"
  2. Pass your uaa secret and certificate to scf:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    -o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  3. Use the helm upgrade command to import the above data into the cluster. This restarts relevant pods with the new information from the previous step:

    tux > helm upgrade susecf-scf suse/cf \
      --values scf-config-values.yaml \
      --values new-key-values.yaml \
      --set "secrets.UAA_CA_CERT=${CA_CERT}"
  4. Perform the rotation:

    1. Change the encryption key in the config file. No output should be produced:

      tux > kubectl exec --namespace scf api-group-0 -- bash -c 'sed -i "/db_encryption_key:/c\\db_encryption_key: \"$(echo $CC_DB_ENCRYPTION_KEYS | jq -r .new_key)\"" /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml'
    2. Run the rotation for the encryption keys. A series of JSON-formatted log entries describing the key rotation progress for various Cloud Controller models will be displayed:

      tux > kubectl exec --namespace scf api-group-0 -- bash -c 'export PATH=/var/vcap/packages/ruby-2.4/bin:$PATH ; export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml ; cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng ; /var/vcap/packages/ruby-2.4/bin/bundle exec rake rotate_cc_database_key:perform'

    Note that keys should be appended to the existing secret to ensure existing environment variables can be decoded. An operator can check which keys are in use by accessing the CCDB. If the encryption_key_label is empty, the default generated key is still being used:

    tux > kubectl exec -it mysql-0 --namespace scf -- /bin/bash -c 'mysql -p${MYSQL_ADMIN_PASSWORD}'
    MariaDB [(none)]> select name, encrypted_environment_variables, encryption_key_label from ccdb.apps;
    +--------+--------------------------------------------------------------------------------------------------------------+----------------------+
    | name   | encrypted_environment_variables                                                                              | encryption_key_label |
    +--------+--------------------------------------------------------------------------------------------------------------+----------------------+
    | go-env | XF08q9HFfDkfxTvzgRoAGp+oci2l4xDeosSlfHJUkZzn5yvr0U/+s5LrbQ2qKtET0ssbMm3L3OuSkBnudZLlaCpFWtEe5MhUe2kUn3A6rUY= | key0                 |
    +--------+--------------------------------------------------------------------------------------------------------------+----------------------+
    1 row in set (0.00 sec)

    For example, if keys were being rotated again, the secret would become:

    SECRET_DATA=$(echo "{key0: abc-123, key1: def-456}" | base64)

    and the CC_DB_CURRENT_KEY_LABEL would be updated to match the new key.

14.1 Tables with Encrypted Information

The CCDB contains several tables with encrypted information as follows:

apps

Environment variables

buildpack_lifecycle_buildpacks

Buildpack URLs may contain passwords

buildpack_lifecycle_data

Buildpack URLs may contain passwords

droplets

May contain Docker registry passwords

env_groups

Environment variables

packages

May contain Docker registry passwords

service_bindings

Contains service credentials

service_brokers

Contains service credentials

service_instances

Contains service credentials

service_keys

Contains service credentials

tasks

Environment variables

14.1.1 Update existing data with new encryption key

To ensure the encryption key is updated for existing data, the command originally used to set the data (or its update- equivalent) can be run again with the same parameters (see the sketch after this list). Some resources must instead be deleted and recreated to update the key label.

apps

Run cf set-env again

buildpack_lifecycle_buildpacks, buildpack_lifecycle_data, droplets

cf restage the app

packages

cf delete, then cf push the app (Docker apps with registry password)

env_groups

Run cf set-staging-environment-variable-group or cf set-running-environment-variable-group again

service_bindings

Run cf unbind-service and cf bind-service again

service_brokers

Run cf update-service-broker with the appropriate credentials

service_instances

Run cf update-service with the appropriate credentials

service_keys

Run cf delete-service-key and cf create-service-key again

tasks

While tasks have an encryption key label, they are generally meant to be a one-off event, and left to run to completion. If there is a task still running, it could be stopped with cf terminate-task, then run again with cf run-task.
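
For example, re-setting an application environment variable stores it again, encrypted with the current key. This is a sketch with a hypothetical application and variable name:

tux > cf set-env my-app MY_SETTING my-value
tux > cf restage my-app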

15 Backup and Restore

15.1 Backup and restore using cf-plugin-backup

cf-plugin-backup backs up and restores your Cloud Controller Database (CCDB), using the Cloud Foundry command line interface (cf CLI). (See Section 24.1, “Using the cf CLI with SUSE Cloud Application Platform”.)

cf-plugin-backup is not a general-purpose backup and restore plugin. It is designed to save the state of a SUSE Cloud Foundry instance before making changes to it. If the changes cause problems, use cf-plugin-backup to restore the instance from scratch. Do not use it to restore to a non-pristine SUSE Cloud Foundry instance. Some of the limitations for applying the backup to a non-pristine SUSE Cloud Foundry instance are:

  • Application configuration is not restored to running applications, as the plugin does not have the ability to determine which applications should be restarted to load the restored configurations.

  • User information is managed by the User Account and Authentication (uaa) server, not the Cloud Controller (CC). As the plugin talks only to the CC it cannot save full user information, nor restore users. Saving and restoring users must be performed separately, and user restoration must be performed before the backup plugin is invoked.

  • The set of available stacks is part of the SUSE Cloud Foundry instance setup, and is not part of the CC configuration. Trying to restore applications using stacks not available on the target SUSE Cloud Foundry instance will fail. Setting up the necessary stacks must be performed separately before the backup plugin is invoked.

  • Buildpacks are not saved. Applications using custom buildpacks not available on the target SUSE Cloud Foundry instance will not be restored. Custom buildpacks must be managed separately, and relevant buildpacks must be in place before the affected applications are restored.

15.1.1 Installing the cf-plugin-backup

Download the plugin from cf-plugin-backup/releases.

Then install it with cf, using the name of the plugin binary that you downloaded:

tux > cf install-plugin cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64
 Attention: Plugins are binaries written by potentially untrusted authors.
 Install and use plugins at your own risk.
 Do you want to install the plugin 
 backup-plugin/cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64? [yN]: y
 Installing plugin backup...
 OK
 Plugin backup 1.0.8 successfully installed.

Verify installation by listing installed plugins:

tux > cf plugins
 Listing installed plugins...

 plugin   version   command name      command help
 backup   1.0.8     backup-info       Show information about the current snapshot
 backup   1.0.8     backup-restore    Restore the CloudFoundry state from a 
  backup created with the snapshot command
 backup   1.0.8     backup-snapshot   Create a new CloudFoundry backup snapshot 
  to a local file

 Use 'cf repo-plugins' to list plugins in registered repos available to install.

15.1.2 Using cf-plugin-backup

The plugin has three commands:

  • backup-info

  • backup-snapshot

  • backup-restore

View the online help for any command, like this example:

tux >  cf backup-info --help
 NAME:
   backup-info - Show information about the current snapshot

 USAGE:
   cf backup-info

Create a backup of your SUSE Cloud Application Platform data and applications. The command outputs progress messages until it is completed:

tux > cf backup-snapshot   
 2018/08/18 12:48:27 Retrieving resource /v2/quota_definitions
 2018/08/18 12:48:30 org quota definitions done
 2018/08/18 12:48:30 Retrieving resource /v2/space_quota_definitions
 2018/08/18 12:48:32 space quota definitions done
 2018/08/18 12:48:32 Retrieving resource /v2/organizations
 [...]

Your Cloud Application Platform data is saved in the current directory in cf-backup.json, and application data in the app-bits/ directory.

View the current backup:

tux > cf backup-info
 - Org  system

Restore from backup:

tux > cf backup-restore

There are two additional restore options: --include-security-groups and --include-quota-definitions.
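
For example, to restore security groups and quota definitions along with the rest of the snapshot:

tux > cf backup-restore --include-security-groups --include-quota-definitions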

15.1.3 Scope of Backup

The following table lists the scope of the cf-plugin-backup backup. Organization and space users are backed up at the SUSE Cloud Application Platform level. The user account in uaa/LDAP, the service instances and their application bindings, and buildpacks are not backed up. The sections following the table go into more detail.

Scope                 Restore
Orgs                  Yes
Org auditors          Yes
Org billing-manager   Yes
Quota definitions     Optional
Spaces                Yes
Space developers      Yes
Space auditors        Yes
Space managers        Yes
Apps                  Yes
App binaries          Yes
Routes                Yes
Route mappings        Yes
Domains               Yes
Private domains       Yes
Stacks                not available
Feature flags         Yes
Security groups       Optional
Custom buildpacks     No

cf backup-info reads the cf-backup.json snapshot file found in the current working directory, and reports summary statistics on the content.

cf backup-snapshot extracts and saves the following information from the CC into a cf-backup.json snapshot file. Note that it does not save user information, but only the references needed for the roles. The full user information is handled by the uaa server, and the plugin talks only to the CC. The following information is saved:

  • Org Quota Definitions

  • Space Quota Definitions

  • Shared Domains

  • Security Groups

  • Feature Flags

  • Application droplets (zip files holding the staged app)

  • Orgs

    • Spaces

      • Applications

      • Users' references (role in the space)

cf backup-restore reads the cf-backup.json snapshot file found in the current working directory, and then talks to the targeted SUSE Cloud Foundry instance to upload the following information, in the specified order:

  • Shared domains

  • Feature flags

  • Quota Definitions (iff --include-quota-definitions)

  • Orgs

    • Space Quotas (iff --include-quota-definitions)

    • UserRoles

    • (private) Domains

    • Spaces

      • UserRoles

      • Applications (+ droplet)

        • Bound Routes

      • Security Groups (iff --include-security-groups)

The following list provides more details of each action.

Shared Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Feature Flags

Attempts to update flags from the backup.

Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

Orgs

Attempts to create orgs from the backup. Attempts to update existing orgs from the backup.

Space Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

User roles

Expects the referenced user to exist. This will fail when the user is already associated with the space, in the given role.

(private) Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Spaces

Attempts to create spaces from the backup. Attempts to update existing spaces from the backup.

User roles

Expects the referenced user to exist. This will fail when the user is already associated with the space, in the given role.

Apps

Attempts to create apps from the backup. Attempts to update existing apps from the backup (memory, instances, buildpack, state, ...)

Security groups

Existing groups are overwritten from the backup.

15.2 Disaster recovery in scf through raw data backup and restore

A backup and restore of an existing scf deployment's raw data can be used to migrate all data to a new scf deployment. This procedure is applicable to deployments running on any Kubernetes cluster (for example SUSE CaaS Platform, Amazon EKS, and Azure AKS described in this guide) and can be included in your disaster recovery solution.

In order to complete a raw data backup and restore it is required to have:

The following lists the data that is included as part of the backup (and restore) procedure:

  • The Cloud Controller Database (CCDB). In addition to what is encompassed by the CCDB listed in Section 15.1.3, “Scope of Backup”, this will include service data as well.

  • The Cloud Controller Blobstore, which includes the types of binary large object (blob) files listed below. (See Blobstore to learn more about each blob type.)

    • App Packages

    • Buildpacks

    • Resource Cache

    • Buildpack Cache

    • Droplets

Note
Note: Restore to the same version

This process is intended for backing up and restoring to a target deployment with the same version as the source deployment.

15.2.1 Performing a raw data backup

Perform the following steps to create a backup of your source scf deployment.

  1. Connect to the blobstore pod:

    tux > kubectl exec -it blobstore-0 --namespace scf -- env /bin/bash
  2. Create an archive of the blobstore directory to preserve all needed files, then disconnect from the pod:

    tux > tar cfvz blobstore-src.tgz /var/vcap/store/shared
    tux > exit
  3. Copy the archive to a location outside of the pod:

    tux > kubectl cp scf/blobstore-0:blobstore-src.tgz /tmp/blobstore-src.tgz
  4. Export the Cloud Controller Database (CCDB) into a file:

    tux > kubectl exec mysql-0 --namespace scf -- bash -c \
      '/var/vcap/packages/mariadb/bin/mysqldump \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      ccdb' > /tmp/ccdb-src.sql
  5. Next, obtain the CCDB encryption key(s). The method used to capture the key will depend on whether current_key_label has been defined on the source cluster. This value is defined in /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml of the api-group-0 pod and also found in various tables of the MySQL database.

    Begin by examining the configuration file for the current_key_label setting:

    tux > kubectl exec -it --namespace scf api-group-0 -- bash -c "cat /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml | grep -A 3 database_encryption"
    • If the output contains the current_key_label setting, save the output for the restoration process. Adjust the -A flag as needed to include all keys.

    • If the output does not contain the current_key_label setting, run the following command and save the output for the restoration process:

      tux > kubectl exec api-group-0 --namespace scf -- bash -c 'echo $DB_ENCRYPTION_KEY'

15.2.2 Performing a raw data restore

Perform the following steps to restore your backed up data to the target scf deployment.

Important
Important: Ensure access to the correct scf deployment

Working with multiple Kubernetes clusters simultaneously can be confusing. Ensure you are communicating with the desired cluster by setting $KUBECONFIG correctly.

  1. The target scf cluster needs to be deployed with the correct database encryption key(s) set in your scf-config-values.yaml before data can be restored. How the encryption key(s) will be prepared in your scf-config-values.yaml depends on the result of Step 5 in Section 15.2.1, “Performing a raw data backup”

    • If current_key_label was set, use the current_key_label obtained as the value of CC_DB_CURRENT_KEY_LABEL, and define each key from the keys section under CC_DB_ENCRYPTION_KEYS. See the following example scf-config-values.yaml:

      env:
        CC_DB_CURRENT_KEY_LABEL: migrated_key_1
      
      secrets:
        CC_DB_ENCRYPTION_KEYS:
          migrated_key_1: "<key_goes_here>"
          migrated_key_2: "<key_goes_here>"
    • If current_key_label was not set, create one for the new cluster through scf-config-values.yaml and set it to the $DB_ENCRYPTION_KEY value from the old cluster. In this example, migrated_key is the new current_key_label created:

      env:
        CC_DB_CURRENT_KEY_LABEL: migrated_key
      
      secrets:
        CC_DB_ENCRYPTION_KEYS:
          migrated_key: "OLD_CLUSTER_DB_ENCRYPTION_KEY"
  2. Deploy a non-high-availability configuration of scf (see Section 4.10, “Deploy scf”) and wait until all pods are ready before proceeding.

  3. In the ccdb-src.sql file created earlier, replace the domain name of the source deployment with the domain name of the target deployment:

    tux > sed -i 's/old-example.com/new-example.com/g' /tmp/ccdb-src.sql
  4. Stop the monit services on the api-group-0, cc-worker-0, and cc-clock-0 pods:

    tux > for n in api-group-0 cc-worker-0 cc-clock-0; do
      kubectl exec -it --namespace scf $n -- bash -l -c 'monit stop all'
    done
  5. Copy the blobstore-src.tgz archive to the blobstore pod:

    tux > kubectl cp /tmp/blobstore-src.tgz scf/blobstore-0:/
  6. Restore the contents of the archive created during the backup process to the blobstore pod:

    tux > kubectl exec -it --namespace scf blobstore-0 -- bash -l -c 'monit stop all && sleep 10 && rm -rf /var/vcap/store/shared/* && tar xvf blobstore-src.tgz && monit start all && rm blobstore-src.tgz'
  7. Recreate the CCDB on the mysql pod:

    tux > kubectl exec mysql-0 --namespace scf -- bash -c \
        "/var/vcap/packages/mariadb/bin/mysql \
        --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
        -e 'drop database ccdb; create database ccdb;'"
  8. Restore the CCDB on the mysql pod:

    tux > kubectl exec -i mysql-0 --namespace scf -- bash -c '/var/vcap/packages/mariadb/bin/mysql --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf ccdb' < /tmp/ccdb-src.sql
  9. Start the monit services on the api-group-0, cc-worker-0, and cc-clock-0 pods

    tux > for n in api-group-0 cc-worker-0 cc-clock-0; do
      kubectl exec -it --namespace scf $n -- bash -l -c 'monit start all'
    done
  10. If your old cluster did not have current_key_label defined, perform a key rotation. Otherwise, a key rotation is not necessary.

    1. Change the encryption key in the config file:

      tux > kubectl exec --namespace scf api-group-0 -- bash -c 'sed -i "/db_encryption_key:/c\\db_encryption_key: \"$(echo $CC_DB_ENCRYPTION_KEYS | jq -r .migrated_key)\"" /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml'
    2. Run the rotation for the encryption keys:

      tux > kubectl exec --namespace scf api-group-0 -- bash -c 'export PATH=/var/vcap/packages/ruby-2.4/bin:$PATH ; export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml ; cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng ; /var/vcap/packages/ruby-2.4/bin/bundle exec rake rotate_cc_database_key:perform'
  11. The data restore is now complete. Run some cf commands, such as cf apps, cf marketplace, or cf services, and verify data from the old cluster is returned.

16 Provisioning Services with Minibroker

Minibroker is an OSBAPI compliant broker created by members of the Microsoft Azure team. It provides a simple method to provision service brokers on Kubernetes clusters.

Warning
Warning: Do not use Minibroker on production systems

Minibroker is a technology preview that is useful for development and testing purposes. It is currently not production-ready and not recommended for use with production systems.

16.1 Deploy Minibroker

  1. Minibroker is deployed using a Helm chart. Ensure your SUSE Helm chart repository contains the most recent Minibroker chart:

    tux > helm repo update
    Hang tight while we grab the latest from your chart repositories...
    ...Skip local chart repository
    ...Successfully got an update from the "stable" chart repository
    ...Successfully got an update from the "suse" chart repository
    Update Complete. ⎈ Happy Helming!⎈
    
    tux > helm search suse
    NAME                        	VERSION	DESCRIPTION                                       
    ...
    suse/minibroker             	0.2.0  	A minibroker for your minikube                    
    ...
  2. Use Helm to deploy Minibroker:

    tux > helm install suse/minibroker --namespace minibroker --name minibroker --set "defaultNamespace=minibroker"

    The repository currently contains charts for the MariaDB, MongoDB, PostgreSQL, and Redis services.

  3. Monitor the deployment progress. Wait until all pods are in a ready state before proceeding:

    tux > watch -c 'kubectl get pods --namespace minibroker'

16.2 Setting up the environment for Minibroker usage

  1. Begin by logging into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
    tux > cf login -u admin -p password
    tux > cf create-org org
    tux > cf create-space space -o org
    tux > cf target -o org -s space
  2. Create the service broker. Note that Minibroker does not require authentication and the username and password parameters act as dummy values to pass to the cf command. These parameters do not need to be customized for the Cloud Application Platform installation:

    tux > cf create-service-broker minibroker username password http://minibroker-minibroker.minibroker.svc.cluster.local

    Once the service broker is ready, it can be seen on your deployment:

    tux > cf service-brokers
    Getting service brokers as admin...
    
    name               url
    minibroker         http://minibroker-minibroker.minibroker.svc.cluster.local
  3. List the services and their associated plans the Minibroker has access to:

    tux > cf service-access -b minibroker
  4. Enable access to a service. Services that can be enabled are mariadb, mongodb, postgresql, and redis. The example below uses Redis as the service:

    tux > cf enable-service-access redis

    Use cf marketplace to verify the service has been enabled:

    tux > cf marketplace
    Getting services from marketplace in org org / space space as admin...
    OK
    
    service      plans     description
    redis        4-0-10    Helm Chart for redis
    
    TIP:  Use 'cf marketplace -s SERVICE' to view descriptions of individual plans of a given service.
  5. Define your Application Security Group (ASG) rules in a JSON file. Using the defined rules, create an ASG and bind it to an organization and space:

    tux > echo > redis.json '[{ "protocol": "tcp", "destination": "10.0.0.0/8", "ports": "6379", "description": "Allow Redis traffic" }]'
    tux > cf create-security-group redis_networking redis.json
    tux > cf bind-security-group redis_networking org space

    Use the following ports to define your ASG for a given service:

    Service       Port
    MariaDB       3306
    MongoDB       27017
    PostgreSQL    5432
    Redis         6379
  6. Create an instance of the Redis service. The cf marketplace or cf marketplace -s redis commands can be used to see the available plans for the service:

    tux > cf create-service redis 4-0-10 redis-example-service

    Monitor the progress of the pods and wait until all pods are in a ready state. The example below shows the additional redis pods with a randomly generated name that have been created in the minibroker namespace:

    tux > watch -c 'kubectl get pods --namespace minibroker'
    NAME                                            READY     STATUS             RESTARTS   AGE
    alternating-frog-redis-master-0                 1/1       Running            2          1h
    alternating-frog-redis-slave-7f7444978d-z86nr   1/1       Running            0          1h
    minibroker-minibroker-5865f66bb8-6dxm7          2/2       Running            0          1h

16.3 Using Minibroker with Applications

This section demonstrates how to use Minibroker services with your applications. The example below uses the Redis service instance created in the previous section.

  1. Obtain the demo application from Github and use cf push with the --no-start flag to deploy the application without starting it:

    tux > git clone https://github.com/scf-samples/cf-redis-example-app
    tux > cd cf-redis-example-app
    tux > cf push --no-start
  2. Bind the service to your application and start the application:

    tux > cf bind-service redis-example-app redis-example-service
    tux > cf start redis-example-app
  3. When the application is ready, it can be tested by storing a value into the Redis service:

    tux > export APP=redis-example-app.example.com
    tux > curl -X GET $APP/foo
    tux > curl -X PUT $APP/foo -d 'data=bar'
    tux > curl -X GET $APP/foo

    The first GET will return key not present. After storing a value, it will return bar.

Important
Important: Database names for PostgreSQL and MariaDB instances

By default, Minibroker creates PostgreSQL and MariaDB server instances without a named database. A named database is required for normal usage with these services and will need to be added during the cf create-service step using the -c flag. For example:

tux > cf create-service postgresql 9-6-2 djangocms-db -c '{"postgresDatabase":"mydjango"}'
tux > cf create-service mariadb 10-1-34 my-db  -c '{"mariadbDatabase":"mydb"}'

Other options can be set too, but vary by service type.

17 Setting up and Using a Service Broker

The Open Service Broker API provides your SUSE Cloud Application Platform applications with access to external dependencies and platform-level capabilities, such as databases, filesystems, external repositories, and messaging systems. These resources are called services. Services are created, used, and deleted as needed, and provisioned on demand.

17.1 Prerequisites

The following examples demonstrate how to deploy service brokers for MySQL and PostgreSQL with Helm, using charts from the SUSE repository. You must have the following prerequisites:

  • A working SUSE Cloud Application Platform deployment with Helm and the Cloud Foundry command line interface (cf CLI).

  • An Application Security Group (ASG) for applications to reach external databases. (See Understanding Application Security Groups.)

  • An external MySQL or PostgreSQL installation with account credentials that allow creating and deleting databases and users.

For testing purposes you may create an insecure security group:

tux > echo > "internal-services.json" '[{ "destination": "0.0.0.0/0", "protocol": "all" }]'
tux > cf create-security-group internal-services-test internal-services.json
tux > cf bind-running-security-group internal-services-test
tux > cf bind-staging-security-group internal-services-test

You may apply an ASG later, after testing. All running applications must be restarted to use the new security group.
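
For example, an already running application (hypothetical name my_application) picks up the new security group after a restart:

tux > cf restart my_application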

17.2 Deploying on CaaS Platform 3

If you are deploying SUSE Cloud Application Platform on CaaS Platform 3, see Section 4.2, “Pod Security Policy” for important information on applying the required Pod Security Policy (PSP) to your deployment. You must also apply the PSP to your new service brokers.

Take the example configuration file, cap-psp-rbac.yaml, in Section 4.2, “Pod Security Policy”, and append these lines to the end, using your own namespace name for your new service broker:

- kind: ServiceAccount
  name: default
  namespace: mysql-sidecar

Then apply the updated PSP configuration, before you deploy your new service broker, with this command:

tux > kubectl apply -f cap-psp-rbac.yaml

kubectl apply updates an existing deployment. After applying the PSP, proceed to configuring and deploying your service broker.
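
To confirm the policy objects were created, the cluster's pod security policies can be listed, for example:

tux > kubectl get psp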

17.3 Configuring the MySQL Deployment

Start by extracting the uaa namespace secrets name, and the internal certificates of the uaa and scf namespaces, with these commands. These will output the complete certificates. Substitute your secrets name if it is different from the example:

tux > kubectl get pods --namespace uaa \
 -o jsonpath='{.items[*].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}'
 secrets-2.8.0-1

tux > kubectl get secret -n scf secrets-2.8.0-1 -o jsonpath='{.data.internal-ca-cert}' | base64 -d
 -----BEGIN CERTIFICATE-----
 MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
 [...]
 xC8x/+zT0QkvcRJBio5gg670+25KJQ==
 -----END CERTIFICATE-----
 
tux > kubectl get secret -n uaa secrets-2.8.0-1 -o jsonpath='{.data.internal-ca-cert}' | base64 -d
 -----BEGIN CERTIFICATE-----
 MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
 [...]
 to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
 -----END CERTIFICATE-----

You will copy these certificates into your configuration file as shown below.

Create a values.yaml file. The following example is called usb-config-values.yaml. Modify the values to suit your SUSE Cloud Application Platform installation.

env:
  # Database access credentials
  SERVICE_MYSQL_HOST: mysql.example.com
  SERVICE_MYSQL_PORT: 3306
  SERVICE_MYSQL_USER: mysql-admin-user
  SERVICE_MYSQL_PASS: mysql-admin-password

  # CAP access credentials, from your original deployment configuration 
  # (see Section 4.5, “Configure the SUSE Cloud Application Platform Production Deployment”)
  CF_ADMIN_USER: admin
  CF_ADMIN_PASSWORD: password
  CF_DOMAIN: example.com
  
  # Copy the certificates you extracted above, as shown in these
  # abbreviated examples, prefaced with the pipe character
  
  # SCF cert
  CF_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
    [...]
    xC8x/+zT0QkvcRJBio5gg670+25KJQ==
    -----END CERTIFICATE-----
   
  # UAA cert
  UAA_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
    [...]
    to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
    -----END CERTIFICATE-----
    
kube:
  organization: "cap"
  registry: 
    hostname: "registry.suse.com"
    username: ""
    password: ""

17.4 Deploying the MySQL Chart

SUSE Cloud Application Platform includes charts for MySQL and PostgreSQL (see Section 4.7, “Add the Kubernetes charts repository” for information on managing your Helm repository):

tux > helm search suse
NAME                            VERSION DESCRIPTION
suse/cf-opensuse                2.15.2  A Helm chart for SUSE Cloud Foundry
suse/uaa-opensuse               2.15.2  A Helm chart for SUSE UAA
suse/cf                         2.15.2  A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres    1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/console                    2.3.0   A Helm chart for deploying Stratos UI Console
suse/metrics                    1.0.0   A Helm chart for Stratos Metrics
suse/nginx-ingress              0.28.3  An nginx Ingress controller that uses ConfigMap...
suse/uaa                        2.15.2  A Helm chart for SUSE UAA

Create a namespace for your MySQL sidecar:

tux > kubectl create namespace mysql-sidecar

Install the MySQL Helm chart:

tux > helm install suse/cf-usb-sidecar-mysql \
  --devel \
  --name mysql-service \
  --namespace mysql-sidecar \
  --set "env.SERVICE_LOCATION=http://cf-usb-sidecar-mysql.mysql-sidecar:8081" \
  --set default-auth=mysql_native_password \
  --values usb-config-values.yaml \
  --wait

Wait for the new pods to become ready:

tux > watch kubectl get pods --namespace=mysql-sidecar

Confirm that the new service has been added to your SUSE Cloud Application Platform installation:

tux > cf marketplace
Warning
Warning: MySQL Requires mysql_native_password

The MySQL sidecar works only with deployments that use mysql_native_password as their authentication plugin. This is the default for MySQL versions 8.0.3 and earlier, but later versions must be started with --default-auth=mysql_native_password before any user creation. (See https://github.com/go-sql-driver/mysql/issues/785.)

17.5 Create and Bind a MySQL Service

To create a new service instance, use the Cloud Foundry command line client:

tux > cf create-service mysql default service_instance_name

You may replace service_instance_name with any name you prefer.

Bind the service instance to an application:

tux > cf bind-service my_application service_instance_name
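
After binding, the service credentials appear under VCAP_SERVICES in the application's environment; restage the application so running instances pick them up. A quick check, assuming the example names above:

tux > cf restage my_application
tux > cf env my_application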

17.6 Deploying the PostgreSQL Chart

The PostgreSQL configuration is slightly different from the MySQL configuration. The database-specific keys are named differently, and it requires the SERVICE_POSTGRESQL_SSLMODE key.

env:
  # Database access credentials
  SERVICE_POSTGRESQL_HOST: postgres.example.com
  SERVICE_POSTGRESQL_PORT: 5432
  SERVICE_POSTGRESQL_USER: pgsql-admin-user
  SERVICE_POSTGRESQL_PASS: pgsql-admin-password
  
  # The SSL connection mode when connecting to the database.  For a list of
  # valid values, please see https://godoc.org/github.com/lib/pq
  SERVICE_POSTGRESQL_SSLMODE: disable
  
  # CAP access credentials, from your original deployment configuration 
  # (see Section 4.5, “Configure the SUSE Cloud Application Platform Production Deployment”)
  CF_ADMIN_USER: admin
  CF_ADMIN_PASSWORD: password
  CF_DOMAIN: example.com
  
  # Copy the certificates you extracted above, as shown in these
  # abbreviated examples, prefaced with the pipe character
  
  # SCF certificate
  CF_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
    [...]
    xC8x/+zT0QkvcRJBio5gg670+25KJQ==
    -----END CERTIFICATE-----
   
  # UAA certificate
  UAA_CA_CERT: |
    -----BEGIN CERTIFICATE-----
    MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
    [...]
    to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
    -----END CERTIFICATE-----
   
  SERVICE_TYPE: postgres   
    
kube:
  organization: "cap"
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

Create a namespace and install the chart:

tux > kubectl create namespace postgres-sidecar

tux > helm install suse/cf-usb-sidecar-postgres \
  --devel \
  --name postgres-service \
  --namespace postgres-sidecar \
  --set "env.SERVICE_LOCATION=http://cf-usb-sidecar-postgres.postgres-sidecar:8081" \
  --values usb-config-values.yaml \
  --wait

Then follow the same steps as for the MySQL chart.

17.7 Removing Service Broker Sidecar Deployments

To correctly remove sidecar deployments, perform the following steps in order.

  • Unbind any applications using instances of the service, and then delete those instances:

    tux > cf unbind-service my_app my_service_instance
    tux > cf delete-service my_service_instance
  • Install the CF-USB CLI plugin for the Cloud Foundry CLI from https://github.com/SUSE/cf-usb-plugin/releases/, for example:

    tux > cf install-plugin \
     https://github.com/SUSE/cf-usb-plugin/releases/download/1.0.0/cf-usb-plugin-1.0.0.0.g47b49cd-linux-amd64
  • Configure the Cloud Foundry USB CLI plugin, using the domain you created for your SUSE Cloud Foundry deployment:

    tux > cf usb-target https://usb.example.com
  • List the current sidecar deployments and take note of the names:

    tux > cf usb-driver-endpoints
  • Remove the service by specifying its name:

    tux > cf usb-delete-driver-endpoint mysql-service
  • Find your release name, then delete the release:

    tux > helm list
    NAME           REVISION UPDATED                   STATUS    CHART                      NAMESPACE
    susecf-console 1        Wed Aug 14 08:35:58 2018  DEPLOYED  console-2.3.0              stratos
    susecf-scf     1        Tue Aug 14 12:24:36 2018  DEPLOYED  cf-2.15.2                  scf
    susecf-uaa     1        Tue Aug 14 12:01:17 2018  DEPLOYED  uaa-2.15.2                 uaa
    mysql-service  1        Mon May 21 11:40:11 2018  DEPLOYED  cf-usb-sidecar-mysql-1.0.1 mysql-sidecar
    
    tux > helm delete --purge mysql-service

17.8 Upgrade Notes

17.8.1 Change in URL of internal cf-usb broker endpoint

This change is only applicable for upgrades from Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 and upgrades from Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1. The URL of the internal cf-usb broker endpoint has changed. Brokers for PostgreSQL and MySQL that use cf-usb will require the following manual fix after upgrading to reconnect with SCF/CAP:

For Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 upgrades:

  1. Get the name of the secret (for example secrets-2.14.5-1):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf secrets-2.14.5-1 -o yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb part doubled with a dash separator:

    tux > cf update-service-broker usb broker-admin password https://cf-usb-cf-usb.scf.svc.cluster.local:24054

For Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1 upgrades:

  1. Get the name of the secret (for example 2.15.2):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb-cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf 2.15.2 -o yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb- part removed:

    tux > cf update-service-broker usb broker-admin password https://cf-usb.scf.svc.cluster.local:24054
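
After either upgrade path, you can confirm that the broker now points at the new URL by listing the service brokers again:

tux > cf service-brokers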

18 App-AutoScaler

The App-AutoScaler service is used for automatically managing an application's instance count when deployed on SUSE Cloud Foundry. The scaling behavior is determined by a set of criteria defined in a policy (See Section 18.4, “Policies”).

18.1 Prerequisites

Using the App-AutoScaler service requires:

  • A SUSE Cloud Foundry deployment with the App-AutoScaler service enabled (See Section 18.2, “Enabling the App-AutoScaler Service”)

  • The Cloud Foundry command line interface (cf CLI) with the App-AutoScaler plugin installed (See Section 18.3.1, “The App-AutoScaler cf CLI Plugin”)

18.2 Enabling the App-AutoScaler Service

By default, the App-AutoScaler service is not enabled as part of a SUSE Cloud Foundry deployment. To enable it, add the following values to your scf-config-values.yaml file and deploy scf:

sizing:
  autoscaler_api:
    count: 1
  autoscaler_eventgenerator:
    count: 1
  autoscaler_metrics:
    count: 1
  autoscaler_operator:
    count: 1
  autoscaler_postgres:
    count: 1
  autoscaler_scalingengine:
    count: 1
  autoscaler_scheduler:
    count: 1
  autoscaler_servicebroker:
    count: 1

See helm inspect suse/cf for the full list of available values that can be used to configure the App-AutoScaler.

If uaa is deployed, pass your uaa secret and certificate to scf. Otherwise deploy uaa first (See Section 4.9, “Deploy uaa”), then proceed with this step:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

If this is an initial deployment, use helm install to deploy scf:

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

If this is an existing deployment, use helm upgrade to apply the change:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"

18.3 Using the App-AutoScaler Service

Create the Service Broker for App-AutoScaler, replacing example.com with the DOMAIN set in your scf-config-values.yaml file:

tux > SECRET=$(kubectl get pods --namespace scf \
-o jsonpath='{.items[?(.metadata.name=="api-group-0")].spec.containers[?(.name=="api-group")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > AS_PASSWORD="$(kubectl get secret $SECRET --namespace scf -o jsonpath="{.data['autoscaler-service-broker-password']}" | base64 --decode)"

tux > cf create-service-broker autoscaler username $AS_PASSWORD https://autoscalerservicebroker.example.com

Enable access to the service:

tux > cf enable-service-access autoscaler -p autoscaler-free-plan

Create a new instance of the App-AutoScaler service:

tux > cf create-service autoscaler autoscaler-free-plan service_instance_name

A name of your choice may be used to replace service_instance_name.

Bind the service instance to an application and attach a policy (See Section 18.4, “Policies”):

tux > cf bind-service my_application service_instance_name
tux > cf attach-autoscaling-policy my_application my-policy.json

If a policy has already been defined and is available for use, you can attach the policy as part of the binding process instead:

tux > cf bind-service my_application service_instance_name -c '{
    "instance_min_count": 1,
    "instance_max_count": 4,
    "scaling_rules": [{
        "metric_type": "memoryused",
        "stat_window_secs": 60,
        "breach_duration_secs": 60,
        "threshold": 10,
        "operator": ">=",
        "cool_down_secs": 300,
        "adjustment": "+1"
    }]
}'

Note that attaching a policy in this manner requires passing the policy directly rather than specifying the path to the policy file.

Once an instance of the App-AutoScaler service has been created and bound to an app, it can be managed using the cf CLI with the App-AutoScaler plugin (See Section 18.3.1, “The App-AutoScaler cf CLI Plugin”) or using the App-AutoScaler API (See Section 18.3.2, “App-AutoScaler API”).

18.3.1 The App-AutoScaler cf CLI Plugin

The App-AutoScaler plugin is used for managing the service with your applications and provides the following commands. Refer to the command list for details about each command:

autoscaling-api

Set or view AutoScaler service API endpoint

autoscaling-policy

Retrieve the scaling policy of an application

attach-autoscaling-policy

Attach a scaling policy to an application

detach-autoscaling-policy

Detach the scaling policy from an application

autoscaling-metrics

Retrieve the metrics of an application

autoscaling-history

Retrieve the scaling history of an application
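
For example, after binding an application and attaching a policy, you can review the active policy and recent scaling activity with the commands listed above, using the example application name from Section 18.3:

tux > cf autoscaling-policy my_application
tux > cf autoscaling-history my_application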

18.3.2 App-AutoScaler API

The App-AutoScaler service provides a Public API with detailed usage information. It includes requests to manage policies and to retrieve application metrics and scaling history.

18.4 Policies

A policy identifies characteristics including minimum instance count, maximum instance count, and the rules used to determine when the number of application instances is scaled up or down. These rules are categorized into two types, scheduled scaling and dynamic scaling. (See Section 18.4.1, “Scaling Types”). Multiple scaling rules can be specified in a policy, but App-AutoScaler does not detect or handle conflicts that may occur. Ensure there are no conflicting rules to avoid unintended scaling behavior.

Policies are defined using the JSON format and can be attached to an application either by passing the path to the policy file or directly as a parameter.

The following is an example of a policy file, called my-policy.json.

{
    "instance_min_count": 1,
    "instance_max_count": 4,
    "scaling_rules": [{
        "metric_type": "memoryused",
        "stat_window_secs": 60,
        "breach_duration_secs": 60,
        "threshold": 10,
        "operator": ">=",
        "cool_down_secs": 300,
        "adjustment": "+1"
    }]
}

For an example that demonstrates defining multiple scaling rules in a single policy, refer to this sample policy file. The complete list of configurable policy values can be found at App-AutoScaler Policy Definition.

18.4.1 Scaling Types

Scheduled Scaling

Modifies an application's instance count at a predetermined time. This option is suitable for workloads with predictable resource usage.

Dynamic Scaling

Modifies an application's instance count based on metrics criteria. This option is suitable for workloads with dynamic resource usage. The following metrics are available:

  • memoryused

  • memoryutil

  • responsetime

  • throughput

See Scaling type for additional details.
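
The following sketch combines the dynamic rule from the example policy above with a recurring schedule. The schedules field names follow the upstream App-AutoScaler policy definition and should be verified against the App-AutoScaler Policy Definition referenced in Section 18.4 before use:

{
    "instance_min_count": 1,
    "instance_max_count": 4,
    "scaling_rules": [{
        "metric_type": "memoryused",
        "stat_window_secs": 60,
        "breach_duration_secs": 60,
        "threshold": 10,
        "operator": ">=",
        "cool_down_secs": 300,
        "adjustment": "+1"
    }],
    "schedules": {
        "timezone": "UTC",
        "recurring_schedule": [{
            "start_time": "08:00",
            "end_time": "20:00",
            "days_of_week": [1, 2, 3, 4, 5],
            "instance_min_count": 2,
            "instance_max_count": 8,
            "initial_min_instance_count": 2
        }]
    }
}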

19 Logging

There are two types of logs in a deployment of Cloud Application Platform: application logs and component logs.

  • Application logs provide information specific to a given application that has been deployed to your Cloud Application Platform cluster and can be accessed through:

    • The cf CLI using the cf logs command

    • The application's log stream within the Stratos console

  • Access to logs for a given component of your Cloud Application Platform deployment can be obtained by:

    • The kubectl logs command

      The following example retrieves the logs of the router-0 pod in the scf namespace (a variation that streams the log continuously is shown after this list):

      tux > kubectl logs --namespace scf router-0
    • Direct access to the log files using the following:

      1. Open a shell to the container of the component using the kubectl exec command.

      2. Navigate to the log directory at /var/vcap/sys/log, which contains subdirectories with the log files.

        tux > kubectl exec -it --namespace scf router-0 /bin/bash
        
        router/0:/# cd /var/vcap/sys/log
        
        router/0:/var/vcap/sys/log# ls -R
        .:
        gorouter  loggregator_agent
        
        ./gorouter:
        access.log  gorouter.err.log  gorouter.log  post-start.err.log	post-start.log
        
        ./loggregator_agent:
        agent.log
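
The kubectl logs command also accepts the --follow option to stream a component's log continuously, for example:

tux > kubectl logs --follow --namespace scf router-0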

19.1 Logging to an External Syslog Server

Cloud Application Platform supports sending the cluster's log data to external logging services where additional processing and analysis can be performed.

19.1.1 Configuring Cloud Application Platform

In your scf-config-values.yaml file add the following configuration values to the env: section. The example values below are configured for an external ELK stack.

env:
  SCF_LOG_HOST: elk.example.com
  SCF_LOG_PORT: 5001
  SCF_LOG_PROTOCOL: "tcp"
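
To apply this change to an existing scf deployment, perform a helm upgrade with the updated configuration file. This is a minimal sketch, assuming the susecf-scf release name used throughout this guide and a CA_CERT variable populated from the uaa secret as shown in Section 18.2, “Enabling the App-AutoScaler Service”:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"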

19.1.2 Example using the ELK Stack

The ELK stack is an example of an external syslog server to which log data can be sent for log management. The ELK stack consists of Elasticsearch, Logstash, and Kibana.

19.1.2.1 Prerequisites

Java 8 is required by both Elasticsearch and Logstash.

19.1.2.2 Installing and Configuring Elasticsearch

See installing Elasticsearch to find available installation methods.

After installation, modify the config file /etc/elasticsearch/elasticsearch.yml to set the following value.

network.host: localhost

19.1.2.3 Installing and Configuring Logstash

See installing Logstash to find available installation methods.

After installation, create a configuration file in /etc/logstash/conf.d/; in this example it is named 00-scf.conf. Add the following to the file. Take note of the port used in the input section: this value must match the value of the SCF_LOG_PORT property in your scf-config-values.yaml file.

input {
  tcp {
    port => 5001
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "scf-%{+YYYY.MM.dd}"
  }
}

See input plugins and output plugins for additional configuration options as well as other plugins available. For this example, we will demonstrate the flow of data through the stack, but filter plugins can also be specified to perform processing of the log data.
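
Once log data is flowing, one way to confirm that Logstash is writing to Elasticsearch is to list the indices it creates, assuming Elasticsearch is listening on localhost:9200 as configured above:

tux > curl 'localhost:9200/_cat/indices/scf-*?v'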

19.1.2.4 Installing and Configuring Kibana

See installing Kibana to find available installation methods.

No configuration changes are required at this point. Refer to the Kibana configuration settings for additional properties that you can specify in your kibana.yml file.

19.2 Log Levels

The log level is configured through the scf-config-values.yaml file using the LOG_LEVEL property found in the env: section. The LOG_LEVEL property is mapped to component-specific levels. Because components are built with differing technologies (for example, languages and frameworks), each component determines for itself what content to provide at each level, so the output may vary between components.
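
For example, to increase verbosity you might set the following in the env: section of scf-config-values.yaml and apply it with a helm upgrade of your scf release. This is a sketch; debug is one of the levels listed below:

env:
  LOG_LEVEL: "debug"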

The following are the log levels available along with examples of log entries at the given level.

  • off: disable log messages

  • fatal: fatal conditions

  • error: error conditions

    <11>1 2018-08-21T17:59:48.321059+00:00 api-group-0 vcap.cloud_controller_ng - - -  {"timestamp":1534874388.3206334,"message":"Mysql2::Error: MySQL server has gone away: SELECT count(*) AS `count` FROM `tasks` WHERE (`state` = 'RUNNING') LIMIT 1","log_level":"error","source":"cc.db","data":{},"thread_id":47367387197280,"fiber_id":47367404488760,"process_id":3400,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/sequel-4.49.0/lib/sequel/database/logging.rb","lineno":88,"method":"block in log_each"}
  • warn: warning conditions

    <12>1 2018-08-21T18:49:37.651186+00:00 api-group-0 vcap.cloud_controller_ng - - -  {"timestamp":1534877377.6507676,"message":"Invalid bearer token: #<CF::UAA::InvalidSignature: Signature verification failed> [\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/cf-uaa-lib-3.14.3/lib/uaa/token_coder.rb:118:in `decode'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/cf-uaa-lib-3.14.3/lib/uaa/token_coder.rb:212:in `decode_at_reference_time'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:70:in `decode_token_with_key'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:58:in `block in decode_token_with_asymmetric_key'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:56:in `each'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:56:in `decode_token_with_asymmetric_key'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:29:in `decode_token'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/security/security_context_configurer.rb:22:in `decode_token'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/security/security_context_configurer.rb:10:in `configure'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/security_context_setter.rb:12:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/vcap_request_id.rb:15:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:49:in `call_app'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:14:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_metrics.rb:12:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/rack-1.6.9/lib/rack/builder.rb:153:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:86:in `block in pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:84:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:84:in `pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:50:in `block in process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:1067:in `block in spawn_threadpool'\"]","log_level":"warn","source":"cc.uaa_token_decoder","data":{"request_guid":"f3e25c45-a94a-4748-7ccf-5a72600fbb17::774bdb79-5d6a-4ccb-a9b8-f4022afa3bdd"},"thread_id":47339751566100,"fiber_id":47339769104800,"process_id":3245,"file":"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb","lineno":35,"method":"rescue in decode_token"}
  • info: informational messages

    <14>1 2018-08-21T22:42:54.324023+00:00 api-group-0 vcap.cloud_controller_ng - - -  {"timestamp":1534891374.3237739,"message":"Started GET \"/v2/info\" for user: , ip: 127.0.0.1 with vcap-request-id: 45e00b66-e0b7-4b10-b1e0-2657f43284e7 at 2018-08-21 22:42:54 UTC","log_level":"info","source":"cc.api","data":{"request_guid":"45e00b66-e0b7-4b10-b1e0-2657f43284e7"},"thread_id":47420077354840,"fiber_id":47420124921300,"process_id":3200,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb","lineno":12,"method":"call"}
  • debug: debugging messages

    <15>1 2018-08-21T22:45:15.146838+00:00 api-group-0 vcap.cloud_controller_ng - - -  {"timestamp":1534891515.1463814,"message":"dispatch VCAP::CloudController::InfoController get /v2/info","log_level":"debug","source":"cc.api","data":{"request_guid":"b228ef6d-af5e-4808-af0b-791a37f51154"},"thread_id":47420125585200,"fiber_id":47420098783620,"process_id":3200,"file":"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb","lineno":12,"method":"block in define_route"}
  • debug1: lower-level debugging messages

  • debug2: lowest-level debugging message

    <15>1 2018-08-21T22:46:02.173445+00:00 api-group-0 vcap.cloud_controller_ng - - -  {"timestamp":1534891562.1731355,"message":"(0.006130s) SELECT * FROM `delayed_jobs` WHERE ((((`run_at` <= '2018-08-21 22:46:02') AND (`locked_at` IS NULL)) OR (`locked_at` < '2018-08-21 18:46:02') OR (`locked_by` = 'cc_api_worker.api.0.1')) AND (`failed_at` IS NULL) AND (`queue` IN ('cc-api-0'))) ORDER BY `priority` ASC, `run_at` ASC LIMIT 5","log_level":"debug2","source":"cc.background","data":{},"thread_id":47194852110160,"fiber_id":47194886034680,"process_id":3296,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/sequel-4.49.0/lib/sequel/database/logging.rb","lineno":88,"method":"block in log_each"}

20 Managing Certificates

The traffic of your SUSE Cloud Application Platform deployment can be made more secure through the use of TLS certificates.

20.1 Certificate Characteristics

When obtaining or generating your certificates, ensure that they are encoded in the PEM format. The appropriate Subject Alternative Names (SAN) should also be included as part of the certificate.

  • Certificates for the scf router should include:

    • *.DOMAIN. A wildcard certificate is suggested, as applications deployed on the Cloud Application Platform cluster will have URLs in the form of APP_NAME.DOMAIN

  • Certificates for the uaa server should include:

    • uaa.DOMAIN

    • *.uaa.DOMAIN

    • uaa
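
For testing purposes, a self-signed certificate covering the router names can be generated with OpenSSL. This is a minimal sketch, assuming OpenSSL 1.1.1 or newer (for the -addext option) and example.com as your domain; production deployments should use certificates issued by a trusted certificate authority:

tux > openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
 -subj "/CN=*.example.com" \
 -addext "subjectAltName = DNS:*.example.com" \
 -keyout router_ssl.key -out router_ssl.crt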

20.2 Deploying Custom Certificates

Certificates used in Cloud Application Platform are configurable through the values.yaml files for the deployment of scf and uaa respectively. To specify a certificate, set the value for the certificate and its corresponding private key under the secrets: section using the following properties.

  • In the values.yaml for scf specify the ROUTER_SSL_CERT property and the corresponding ROUTER_SSL_KEY.

    Note

    Note the use of the "|" character which indicates the use of a literal scalar. See the YAML spec for more information.

    secrets:
      ROUTER_SSL_CERT: |
        -----BEGIN CERTIFICATE-----
        MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
        ...
        xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
        M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
        1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
        -----END CERTIFICATE-----
    
      ROUTER_SSL_KEY: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
        T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
        G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
        ...
        GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
        M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
        MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
        -----END RSA PRIVATE KEY-----
  • With custom certificates in place, the --skip-ssl-validation option is no longer used when setting a target API endpoint or logging in with the cf CLI. As a result, a certificate needs to be specified for the uaa component as well. In the values.yaml for uaa, specify the UAA_SERVER_CERT property and the corresponding UAA_SERVER_KEY. If a self-signed certificate is used, the INTERNAL_CA_CERT property and its associated INTERNAL_CA_KEY need to be set as well.

    secrets:
      UAA_SERVER_CERT: |
        -----BEGIN CERTIFICATE-----
        MIIFnzCCA4egAwIBAgICEAMwDQYJKoZIhvcNAQENBQAwXDELMAkGA1UEBhMCQ0Ex
        CzAJBgNVBAgMAkJDMRIwEAYDVQQHDAlWYW5jb3V2ZXIxETAPBgNVBAoMCE15Q2Fw
        T3JnMRkwFwYDVQQDDBBNeUNhcE9yZyBSb290IENBMB4XDTE4MDkxNDIyNDMzNVoX
        ...
        IqhPRKYBFHPw6RxVTjG/ClMsFvOIAO3QsK+MwTRIGVu/MNs0wjMu34B/zApLP+hQ
        3ZxAt/z5Dvdd0y78voCWumXYPfDw9T94B4o58FvzcM0eR3V+nVtahLGD2r+DqJB0
        3xoI
        -----END CERTIFICATE-----
    
      UAA_SERVER_KEY: |
        -----BEGIN PRIVATE KEY-----
        MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDhRlcoZAVwUkg0
        sdExkBnPenhLG5FzQM3wm9t4erbSQulKjeFlBa9b0+RH6gbYDHh5+NyiL0L89txO
        JHNRGEmt+4zy+9bY7e2syU18z1orOrgdNq+8QhsSoKHJV2w+0QZkSHTLdWmAetrA
        ...
        ZP5BpgjrT2lGC1ElW/8AFM5TxkkOPMzDCe8HRXPUUw+2YDzyKY1YgkwOMpHlk8Cs
        wPQYJsrcObenRwsGy2+A6NiIg2AVJwHASFG65taoV+1A061P3oPDtyIH/UPhRUoC
        OULPS8fbHefNiSvZTNVKwj8=
        -----END PRIVATE KEY-----
    
      INTERNAL_CA_CERT: |
        -----BEGIN CERTIFICATE-----
        MIIFljCCA36gAwIBAgIBADANBgkqhkiG9w0BAQ0FADBcMQswCQYDVQQGEwJDQTEL
        MAkGA1UECAwCQkMxEjAQBgNVBAcMCVZhbmNvdXZlcjERMA8GA1UECgwITXlDYXBP
        cmcxGTAXBgNVBAMMEE15Q2FwT3JnIFJvb3QgQ0EwHhcNMTgwOTE0MjA1MzU5WhcN
        ...
        PlezSFbDGGIc1beUs1gNMwJki7fs/jDjpA7TKuUDzoGSqDiJXeQAluBILHHQ4q2B
        KuLcZc6LbPsaADmtTbx+Ww/ZzIlF3ENVVvtrWTl5MOV3VhoJwsKmFiNLtkMuppBY
        bhbFkKwtW9xnUzXwjUCy87WPLx84xdBuL/nvJhoMUN75JklvtVkzyX/X
        -----END CERTIFICATE-----
    
      INTERNAL_CA_KEY: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIJKQIBAAKCAgEA/kK6Hw1da9aBwdbP6+wjiR/pSLv6ilNAxtOcKfaNKtc71nwO
        Hjw62ZLBkS2ZtwdNpt5QuueIsUXvFiy7xz4TzyAATXVLR0GBkaHl/PwlwSN5nTMC
        JT3T+89tg4UDFhcdGSZXjQyGZINLK6dHivuAcL3zgEZQwr6UeZINFb27WhsTZEMC
        ...
        0qmnlGxjAdwan+PrarR6ztyp/bYcAvQhgEwc9oF2hj9wBhkdWVNVQ4LaxGtUfV4S
        yhbc7dZNw17fXhgVMZPDTRBfwwrcJ6KcF7g1PCsaGcuOPZWxroemvn28ytYBt1IG
        tfIdEIQIUTDVM4K2wiE6bwslIYwv5pEBLAdWG0gw8KCZl+ffTNOv+8PkdaiD
        -----END RSA PRIVATE KEY-----

Once all pods are up and running, verify your certificates by running the cf api command followed by the cf login command and entering your credentials. Both commands should succeed without using the --skip-ssl-validation option.

tux > cf api https://api.example.com
tux > cf login

20.2.1 Configuring Multiple Certificates

Cloud Application Platform supports configurations that use multiple certificates. To specify multiple certificates with their associated keys, replace the ROUTER_SSL_CERT and ROUTER_SSL_KEY properties with the ROUTER_TLS_PEM property in your values.yaml file.

secrets:
  ROUTER_TLS_PEM: |
    - cert_chain: |
        -----BEGIN CERTIFICATE-----
        MIIEDzCCAfcCCQCWC4NErLzy9DANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        opR9hW2YNrMYQYfhVu4KTkpXIr4iBrt2L+aq2Rk4NBaprH+0X6CPlYg+3edC7Jc+
        ...
        ooXNKOrpbSUncflZYrAfYiBfnZGIC99EaXShRdavStKJukLZqb3iHBZWNLYnugGh
        jyoKpGgceU1lwcUkUeRIOXI8qs6jCqsePM6vak3EO5rSiMpXMvLO8WMaWsXEfcBL
        dglVTMCit9ORAbVZryXk8Xxiham83SjG+fOVO4pd0R8UuCE=
        -----END CERTIFICATE-----
      private_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEA0HZ/aF64ITOrwtzlRlDkxf0b4V6MFaaTx/9UIQKQZLKT0d7u
        3Rz+egrsZ90Jk683Oz9fUZKtgMXt72CMYUn13TTYwnh5fJrDM1JXx6yHJyiIp0rf
        3G6wh4zzgBosIFiadWPQgL4iAJxmP14KMg4z7tNERu6VXa+0OnYT0DBrf5IJhbn6
        ...
        ja0CsQKBgQCNrhKuxLgmQKp409y36Lh4VtIgT400jFOsMWFH1hTtODTgZ/AOnBZd
        bYFffmdjVxBPl4wEdVSXHEBrokIw+Z+ZhI2jf2jJkge9vsSPqX5cTd2X146sMUSy
        o+J1ZbzMp423AvWB7imsPTA+t9vfYPSlf+Is0MhBsnGE7XL4fAcVFQ==
        -----END RSA PRIVATE KEY-----
    - cert_chain: |
        -----BEGIN CERTIFICATE-----
        MIIEPzCCAiegAwIBAgIJAJYLg0SsvPL1MA0GCSqGSIb3DQEBCwUAMEYxCzAJBgNV
        BAYTAkNBMRMwEQYDVQQIDApTb21lLVN0YXRlMQ4wDAYDVQQKDAVNeU9yZzESMBAG
        A1UEAwwJTXlDQS5zaXRlMB4XDTE4MDkxNzE1MjQyMVoXDTIwMDEzMDE1MjQyMVow
        ...
        FXrgM9jVBGXeL7T/DNfJp5QfRnrQq1/NFWafjORXEo9EPbAGVbPh8LiaEqwraR/K
        cDuNI7supZ33I82VOrI4+5mSMxj+jzSGd2fRAvWEo8E+MpHSpHJt6trGa5ON57vV
        duCWD+f1swpuuzW+rNinrNZZxUQ77j9Vk4oUeVUfL91ZK4k=
        -----END CERTIFICATE-----
      private_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEowIBAAKCAQEA5kNN9ZZK/UssdUeYSajG6xFcjyJDhnPvVHYA0VtgVOq8S/rb
        irVvkI1s00rj+WypHqP4+l/0dDHTiclOpUU5c3pn3vbGaaSGyonOyr5Cbx1X+JZ5
        17b+ah+oEnI5pUDn7chGI1rk56UI5oV1Qps0+bYTetEYTE1DVjGOHl5ERMv2QqZM
        ...
        rMMhAoGBAMmge/JWThffCaponeakJu63DHKz87e2qxcqu25fbo9il1ZpllOD61Zi
        xd0GATICOuPeOUoVUjSuiMtS7B5zjWnmk5+siGeXF1SNJCZ9spgp9rWA/dXqXJRi
        55w7eGyYZSmOg6I7eWvpYpkRll4iFVApMt6KPM72XlyhQOigbGdJ
        -----END RSA PRIVATE KEY-----

20.3 Rotating Automatically Generated Secrets

Cloud Application Platform uses a number of automatically generated secrets for internal use. These secrets have a default expiration of 10950 days, set through the CERT_EXPIRATION property in the env: section of the values.yaml file. If rotation of the secrets is required, increment the value of secrets_generation_counter in the kube: section of the values.yaml configuration file (for example, the scf-config-values.yaml used in this guide), then run helm upgrade.

This example demonstrates rotating the secrets of the scf deployment.

First, update the scf-config-values.yaml file.

kube:
  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 2

Next, perform a helm upgrade to apply the change.

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade susecf-scf suse/cf \
 --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"

21 Integrating CredHub with SUSE Cloud Application Platform

SUSE Cloud Application Platform supports CredHub integration. You should already have a working CredHub instance and a CredHub service on your cluster; then apply the steps in this chapter to connect SUSE Cloud Application Platform.

21.1 Installing the CredHub Client

Start by creating a new directory for the CredHub client on your local workstation, then download and unpack the CredHub client. The following example is for the 2.2.0 Linux release; see cloudfoundry-incubator/credhub-cli for other platforms and current releases:

tux > mkdir chclient
tux > cd chclient
tux > wget https://github.com/cloudfoundry-incubator/credhub-cli/releases/download/2.2.0/credhub-linux-2.2.0.tgz
tux > tar zxf credhub-linux-2.2.0.tgz
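
Verify that the client runs by printing its version from the directory where you unpacked it:

tux > ./credhub --version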

21.2 Enabling CredHub

Enable CredHub for your deployment in the sizing: section of your deployment configuration file, which in this guide is scf-config-values.yaml:

sizing:   
  credhub_user:
    count: 1

Then apply the change with helm. First fetch the uaa credentials, then apply the updated configuration:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

21.3 Connecting to the CredHub Service

Set environment variables for the CredHub client, your CredHub service location, and Cloud Application Platform namespace. In these guides the example namespace is scf:

tux > CH_CLI=~/chclient/credhub
tux > CH_SERVICE=https://credhub.example.com    
tux > NAMESPACE=scf

Set up the CredHub service location:

tux > SECRET="$(kubectl get secrets --namespace "${NAMESPACE}" | awk '/^secrets-/ { print $1 }')"
tux > CH_SECRET="$(kubectl get secrets --namespace "${NAMESPACE}" "${SECRET}" -o jsonpath="{.data['uaa-clients-credhub-user-cli-secret']}"|base64 -d)"
tux > CH_CLIENT=credhub_user_cli
tux > echo Service ......@ $CH_SERVICE
tux > echo CH cli Secret @ $CH_SECRET

Set the CredHub target through its Kubernetes service, then log into CredHub:

tux > "${CH_CLI}" api --skip-tls-validation --server "${CH_SERVICE}"
tux > "${CH_CLI}" login --client-name="${CH_CLIENT}" --client-secret="${CH_SECRET}"

Test your new connection by inserting and retrieving some fake credentials:

tux > "${CH_CLI}" set -n FOX -t value -v 'fox over lazy dog'   
tux > "${CH_CLI}" set -n DOG -t user -z dog -w fox
tux > "${CH_CLI}" get -n FOX
tux > "${CH_CLI}" get -n DOG

22 Offline Buildpacks

Buildpacks are used to construct the environment needed to run your applications, including any required runtimes or frameworks as well as other dependencies. When you deploy an application, a buildpack can be specified or automatically detected by cycling through all available buildpacks to find one that is applicable. When there is a suitable buildpack for your application, the buildpack will then download any necessary dependencies during the staging process.

An offline, or cached, buildpack packages the runtimes, frameworks, and dependencies needed to run your applications into an archive that is then uploaded to your Cloud Application Platform deployment. When an application is deployed using an offline buildpack, access to the Internet to download dependencies is no longer required. This has the benefit of providing improved staging performance and allows for staging to take place on air-gapped environments.

22.1 Creating an Offline Buildpack

Offline buildpacks can be created using the cf-buildpack-packager-docker tool, which is available as a Docker image. The only requirement to use this tool is a system with Docker support.

Important: Disclaimer

Some Cloud Foundry buildpacks can reference binaries with proprietary or mutually incompatible open source licenses which cannot be distributed together as offline/cached buildpack archives. Operators who wish to package and maintain offline buildpacks will be responsible for any required licensing or export compliance obligations.

For automation purposes, you can use the --accept-external-binaries option to accept this disclaimer without the interactive prompt.

Usage of the tool is as follows:

package [--accept-external-binaries] org [all [stack] | language [tag] [stack]]

Where:

  • org is the GitHub organization hosting the buildpack repositories, such as "cloudfoundry" or "SUSE"

  • A tag cannot be specified when using all as the language because the tag is different for each language

  • tag is not optional if a stack is specified. To specify the latest release, use "" as the tag

  • A maximum of one stack can be specified
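
For example, following the usage above, all SUSE buildpacks for the SUSE Linux Enterprise 12 stack could be packaged in one run. This is a sketch using the same Docker image as the step-by-step example below, with --accept-external-binaries accepting the disclaimer non-interactively:

tux > docker run -it --rm -v $PWD:/out splatform/cf-buildpack-packager \
 --accept-external-binaries SUSE all sle12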

The following example demonstrates packaging an offline Ruby buildpack and uploading it to your Cloud Application Platform deployment to use. The packaged buildpack will be a Zip file placed in the current working directory, $PWD.

  1. Build the latest released SUSE Ruby buildpack for the SUSE Linux Enterprise 12 stack:

    tux > docker run -it --rm -v $PWD:/out splatform/cf-buildpack-packager SUSE ruby "" sle12
  2. Verify the archive has been created in your current working directory:

    tux > ls
    ruby_buildpack-cached-sle12-v1.7.30.1.zip
  3. Log into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
    tux > cf login -u admin -p password
    tux > cf create-org org
    tux > cf create-space space -o org
    tux > cf target -o org -s space
  4. List the currently available buildpacks:

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack               position   enabled   locked   filename
    staticfile_buildpack    1          true      false    staticfile_buildpack-v1.4.34.1-1.1-1dd6386a.zip
    java_buildpack          2          true      false    java-buildpack-v4.16.1-e638145.zip
    ruby_buildpack          3          true      false    ruby_buildpack-v1.7.26.1-1.1-c2218d66.zip
    nodejs_buildpack        4          true      false    nodejs_buildpack-v1.6.34.1-3.1-c794e433.zip
    go_buildpack            5          true      false    go_buildpack-v1.8.28.1-1.1-7508400b.zip
    python_buildpack        6          true      false    python_buildpack-v1.6.23.1-1.1-99388428.zip
    php_buildpack           7          true      false    php_buildpack-v4.3.63.1-1.1-2515c4f4.zip
    binary_buildpack        8          true      false    binary_buildpack-v1.0.27.1-3.1-dc23dfe2.zip
    dotnet-core_buildpack   9          true      false    dotnet-core-buildpack-v2.0.3.zip
  5. Upload your packaged offline buildpack to your Cloud Application Platform deployment:

    tux > cf create-buildpack ruby_buildpack_cached $PWD/ruby_buildpack-cached-sle12-v1.7.30.1.zip 1 --enable
    Creating buildpack ruby_buildpack_cached...
    OK
    
    Uploading buildpack ruby_buildpack_cached...
    Done uploading               
    OK
  6. Verify your buildpack is available:

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack               position   enabled   locked   filename
    ruby_buildpack_cached   1          true      false    ruby_buildpack-cached-sle12-v1.7.30.1.zip
    staticfile_buildpack    2          true      false    staticfile_buildpack-v1.4.34.1-1.1-1dd6386a.zip
    java_buildpack          3          true      false    java-buildpack-v4.16.1-e638145.zip
    ruby_buildpack          4          true      false    ruby_buildpack-v1.7.26.1-1.1-c2218d66.zip
    nodejs_buildpack        5          true      false    nodejs_buildpack-v1.6.34.1-3.1-c794e433.zip
    go_buildpack            6          true      false    go_buildpack-v1.8.28.1-1.1-7508400b.zip
    python_buildpack        7          true      false    python_buildpack-v1.6.23.1-1.1-99388428.zip
    php_buildpack           8          true      false    php_buildpack-v4.3.63.1-1.1-2515c4f4.zip
    binary_buildpack        9          true      false    binary_buildpack-v1.0.27.1-3.1-dc23dfe2.zip
    dotnet-core_buildpack   10         true      false    dotnet-core-buildpack-v2.0.3.zip
  7. Deploy a sample Rails app using the new buildpack:

    tux > git clone https://github.com/scf-samples/12factor
    tux > cd 12factor
    tux > cf push 12factor -b ruby_buildpack_cached
    Note: Specifying a buildpack to use with your application

    You can specify which buildpack is used to deploy your application through two methods:

    • Using the -b option during cf push, for example:

      tux > cf push my_application -b my_buildpack
    • Using the buildpacks attribute in your application's manifest.yml:

      ---
      applications:
      - name: my_application
        buildpacks:
          - my_buildpack

23 Custom Application Domains

In a standard SUSE Cloud Foundry deployment, applications will use the same domain as the one configured in your scf-config-values.yaml for SCF. For example, if DOMAIN is set as example.com in your scf-config-values.yaml and you deploy an application called myapp, then the application's URL will be myapp.example.com.

This chapter describes the changes required to allow applications to use a separate domain.

23.1 Customizing Application Domains

Begin by adding the following to your scf-config-values.yaml. Replace appdomain.com with the domain to use with your applications:

bosh:
  instance_groups:
  - name: api-group
    jobs:
    - name: cloud_controller_ng
      properties:
        app_domains:
        - appdomain.com

If uaa is deployed, pass your uaa secret and certificate to scf. Otherwise deploy uaa first (See Section 4.9, “Deploy uaa”), then proceed with this step:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

If this is an initial deployment, use helm install to deploy scf:

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

If this is an existing deployment, use helm upgrade to apply the change:

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"

Monitor the progress of the deployment:

tux > watch -c 'kubectl get pods --namespace scf'

When all pods are in a ready state, do the following to confirm custom application domains have been configured correctly.

Run cf curl /v2/info and verify that the SCF system endpoints still use your DOMAIN (example.com in this example) rather than appdomain.com:

tux > cf api --skip-ssl-validation https://api.example.com
tux > cf curl /v2/info | grep endpoint

Deploy an application and examine the routes field to verify appdomain.com is being used:

tux > cf login
tux > cf create-org org
tux > cf create-space space -o org
tux > cf target -o org -s space
tux > cf push myapp
Pushing app myapp to org org / space space as admin...
Getting app info...
Creating app with these attributes...
  name:       myapp
  path:       /path/to/myapp
  routes:
+   myapp.appdomain.com

Creating app myapp...
Mapping routes...

...

Waiting for app to start...

name:              myapp
requested state:   started
instances:         1/1
usage:             1G x 1 instances
routes:            myapp.appdomain.com
last uploaded:     Mon 14 Jan 11:08:02 PST 2019
stack:             sle12
buildpack:         ruby
start command:     bundle exec rackup config.ru -p $PORT

     state     since                  cpu    memory       disk          details
#0   running   2019-01-14T19:09:42Z   0.0%   2.7M of 1G   80.6M of 1G

Part IV SUSE Cloud Application Platform User Guide

24 Deploying and Managing Applications with the Cloud Foundry Client

24.1 Using the cf CLI with SUSE Cloud Application Platform

The Cloud Foundry command line interface (cf CLI) is for deploying and managing your applications. You may use it for all the orgs and spaces that you are a member of. Install the client on a workstation for remote administration of your SUSE Cloud Foundry instances.

The complete guide is at Using the Cloud Foundry Command Line Interface, and source code with a demo video is on GitHub at Cloud Foundry CLI.

The following examples demonstrate some of the commonly-used commands. The first task is to log into your new SUSE Cloud Foundry instance. When your installation completes it prints a welcome screen with the information you need to access it.

       NOTES:
    Welcome to your new deployment of SCF.

    The endpoint for use by the `cf` client is
        https://api.example.com

    To target this endpoint run
        cf api --skip-ssl-validation https://api.example.com

    Your administrative credentials are:
        Username: admin
        Password: password

    Please remember, it may take some time for everything to come online.

    You can use
        kubectl get pods --namespace scf

    to spot-check if everything is up and running, or
        watch -c 'kubectl get pods --namespace scf'

    to monitor continuously.

You can display this message anytime with this command:

tux > helm status $(helm list | awk '/cf-([0-9]).([0-9]).*/{print$1}') | \
sed -n -e '/NOTES/,$p'

You need to provide the API endpoint of your SUSE Cloud Application Platform instance to log in. The API endpoint is the DOMAIN value you provided in scf-config-values.yaml, plus the api. prefix, as it shows in the above welcome screen. Set your endpoint, and use --skip-ssl-validation when you have self-signed SSL certificates. It asks for an email address, but you must enter admin instead (you cannot change this to a different username, though you may create additional users), and the password is the one you created in scf-config-values.yaml:

tux > cf login --skip-ssl-validation  -a https://api.example.com 
API endpoint: https://api.example.com

Email> admin

Password> 
Authenticating...
OK

Targeted org system

API endpoint:   https://api.example.com (API version: 2.101.0)
User:           admin
Org:            system
Space:          No space targeted, use 'cf target -s SPACE'

cf help displays a list of commands and options. cf help [command] provides information on specific commands.

You may pass in your credentials and set the API endpoint in a single command:

tux > cf login -u admin -p password --skip-ssl-validation -a https://api.example.com

Log out with cf logout.

Change the admin password:

tux > cf passwd
Current Password>
New Password> 
Verify Password> 
Changing password...
OK
Please log in again

View your current API endpoint, user, org, and space:

tux > cf target

Switch to a different org or space:

tux > cf target -o org
tux > cf target -s space

List all apps in the current space:

tux > cf apps

Query the health and status of a particular app:

tux > cf app appname

View app logs. The first example tails the log of a running app. The --recent option dumps recent logs instead of tailing, which is useful for stopped and crashed apps:

tux > cf logs appname
tux > cf logs --recent appname

Restart all instances of an app:

tux > cf restart appname

Restart a single instance of an app, identified by its index number. The instance is restarted with the same index number:

tux > cf restart-app-instance appname index

After you have set up a service broker (see Chapter 17, Setting up and Using a Service Broker), create new services:

tux > cf create-service service-name default mydb

Then you may bind a service instance to an app:

tux > cf bind-service appname service-instance

The most-used command is cf push, for pushing new apps and changes to existing apps.

 tux > cf push new-app -b buildpack

Part V Troubleshooting

25 Troubleshooting

Cloud stacks are complex, and debugging deployment issues often requires digging through multiple layers to find the information you need. Remember that the SUSE Cloud Foundry releases must be deployed in the correct order, and that each release must deploy successfully, with no failed pods, before deploying the next release.

25.1 Using Supportconfig

If you ever need to request support, or just want to generate detailed system information and logs, use the supportconfig utility. Run it with no options to collect basic system information, and also cluster logs including Docker, etcd, flannel, and Velum. supportconfig may give you all the information you need.

supportconfig -h prints the options. Read the "Gathering System Information for Support" chapter in any SUSE Linux Enterprise Administration Guide to learn more.

25.2 Deployment is Taking Too Long

A deployment step seems to take too long, or you see that some pods are not in a ready state hours after all the others are ready, or a pod shows a lot of restarts. This example shows not-ready pods many hours after the others have become ready:

tux > kubectl get pods --namespace scf
NAME                     READY STATUS    RESTARTS  AGE
router-3137013061-wlhxb  0/1   Running   0         16h
routing-api-0            0/1   Running   0         16h

The Running status means the pod is bound to a node and all of its containers have been created. However, it is not Ready, which means it is not ready to service requests. Use kubectl to print a detailed description of pod events and status:

tux > kubectl describe pod --namespace scf router-3137013061-wlhxb

This prints a lot of information, including IP addresses, routine events, warnings, and errors. You should find the reason for the failure in this output.

Important: Some pods show not running

Some uaa and scf pods perform only deployment tasks, and it is normal for them to show as unready and Completed after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz   0/1       Completed
          
tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1       Completed
post-deployment-setup-1-hnpln   0/1       Completed

25.3 Deleting and Rebuilding a Deployment

There may be times when you want to delete and rebuild a deployment, for example when there are errors in your scf-config-values.yaml file, you wish to test configuration changes, or a deployment fails and you want to try it again. This has five steps: first delete the StatefulSets of the namespace associated with the release or releases you want to re-deploy, then delete the release or releases, delete its namespace, then re-create the namespace and re-deploy the release.

The namespace is also deleted as part of the process because the SCF and UAA namespaces contain generated secrets which Helm is not aware of and will not remove when a release is deleted. When deleting a release, busy systems may encounter timeouts. Deleting the StatefulSets first makes the operation more likely to succeed. Using the delete statefulsets command requires kubectl v1.9.6 or newer.

Use helm to see your releases:

tux > helm list
NAME            REVISION  UPDATED                  STATUS    CHART           NAMESPACE
susecf-console  1         Tue Aug 14 11:53:28 2018 DEPLOYED  console-2.3.0   stratos
susecf-scf      1         Tue Aug 14 10:58:16 2018 DEPLOYED  cf-2.15.2       scf
susecf-uaa      1         Tue Aug 14 10:49:30 2018 DEPLOYED  uaa-2.15.2      uaa

This example deletes the susecf-uaa release and namespace:

tux > kubectl delete statefulsets --all --namespace uaa
statefulset "mysql" deleted
statefulset "uaa" deleted
  
tux > helm delete --purge susecf-uaa
release "susecf-uaa" deleted

tux > kubectl delete namespace uaa
namespace "uaa" deleted

Then you can start over.

25.4 Querying with Kubectl

You can safely query with kubectl to get information about resources inside your Kubernetes cluster. kubectl cluster-info dump | tee clusterinfo.txt outputs a large amount of information about the Kubernetes master and cluster services to a text file.

The following commands give more targeted information about your cluster.

  • List all cluster resources:

    tux > kubectl get all --all-namespaces
  • List all of your running pods:

    tux > kubectl get pods --all-namespaces
  • List all of your running pods, their internal IP addresses, and which Kubernetes nodes they are running on:

    tux > kubectl get pods --all-namespaces -o wide
  • See all pods, including those with Completed or Failed statuses:

    tux > kubectl get pods --show-all --all-namespaces
  • List pods in one namespace:

    tux > kubectl get pods --namespace scf
  • Get detailed information about one pod:

    tux > kubectl describe --namespace scf po/diego-cell-0
  • Read the log file of a pod:

    tux > kubectl logs --namespace scf po/diego-cell-0
  • List all Kubernetes nodes, then print detailed information about a single node:

    tux > kubectl get nodes
    tux > kubectl describe node 6a2752b6fab54bb889029f60de6fa4d5.infra.caasp.local
  • List all containers in all namespaces, formatted for readability:

    tux > kubectl get pods --all-namespaces -o jsonpath="{..image}" |\
    tr -s '[[:space:]]' '\n' |\
    sort |\
    uniq -c
  • These two commands check node capacities, to verify that there are enough resources for the pods:

    tux > kubectl get nodes -o yaml | grep '\sname\|cpu\|memory'
    tux > kubectl get nodes -o json | \
    jq '.items[] | {name: .metadata.name, cap: .status.capacity}'

A Appendix

A.1 Manual Configuration of Pod Security Policies

SUSE Cloud Application Platform 1.3.1 introduces built-in support for Pod Security Policies (PSPs), which are provided via Helm charts and are set up automatically, unlike older releases which require manual PSP setup. SUSE CaaS Platform and Microsoft AKS both require PSPs for Cloud Application Platform to operate correctly. This section provides instructions for configuring and applying the appropriate PSPs to older Cloud Application Platform releases.

See the upstream documentation at Pod Security Policies, Orgs, Spaces, Roles, and Permissions, and Identity Provider Workflow for more information on understanding and using PSPs.

Copy the following example into cap-psp-rbac.yaml:

---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: suse.cap.psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  # Privileged
  #default in suse.caasp.psp.unprivileged
  #privileged: false
  privileged: true
  # Volumes and File Systems
  volumes:
    # Kubernetes Pseudo Volume Types
    - configMap
    - secret
    - emptyDir
    - downwardAPI
    - projected
    - persistentVolumeClaim
    # Networked Storage
    - nfs
    - rbd
    - cephFS
    - glusterfs
    - fc
    - iscsi
    # Cloud Volumes
    - cinder
    - gcePersistentDisk
    - awsElasticBlockStore
    - azureDisk
    - azureFile
    - vsphereVolume
  allowedFlexVolumes: []
  # hostPath volumes are not allowed; pathPrefix must still be specified
  allowedHostPaths:      
    - pathPrefix: /opt/kubernetes-hostpath-volumes
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  #default in suse.caasp.psp.unprivileged
  #allowPrivilegeEscalation: false
  allowPrivilegeEscalation: true
  #default in suse.caasp.psp.unprivileged
  #defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities:
  - SYS_RESOURCE
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: false
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is not used in CaaSP
    rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: suse:cap:psp
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['suse.cap.psp']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cap:clusterrole
roleRef:
  kind: ClusterRole
  name: suse:cap:psp
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: default
  namespace: uaa
- kind: ServiceAccount
  name: default
  namespace: scf
- kind: ServiceAccount
  name: default
  namespace: stratos
- kind: ServiceAccount
  name: default-privileged
  namespace: scf
- kind: ServiceAccount
  name: node-reader
  namespace: scf

Apply it to your cluster with kubectl:

tux > kubectl create -f cap-psp-rbac.yaml
podsecuritypolicy.extensions "suse.cap.psp" created
clusterrole.rbac.authorization.k8s.io "suse:cap:psp" created
clusterrolebinding.rbac.authorization.k8s.io "cap:clusterrole" created

Verify that the new PSPs exist by running the kubectl get psp command to list them. Then continue by deploying UAA and SCF. Ensure that your scf-config-values.yaml file specifies the name of your PSP in the kube: section. These settings grant privileged status to only a limited subset of roles.

kube:
  psp:
    privileged: "suse.cap.psp"
Tip

Note that the example cap-psp-rbac.yaml file sets the name of the PSP, which in the examples above is suse.cap.psp.

A.1.1 Using Custom Pod Security Policies

When using a custom PSP, your scf-config-values.yaml file requires the SYS_RESOURCE capability to be added to the following roles:

sizing:
  cc_uploader:
    capabilities: ["SYS_RESOURCE"]
  diego_api:
    capabilities: ["SYS_RESOURCE"]
  diego_brain:
    capabilities: ["SYS_RESOURCE"]
  diego_ssh:
    capabilities: ["SYS_RESOURCE"]
  nats:
    capabilities: ["SYS_RESOURCE"]
  router:
    capabilities: ["SYS_RESOURCE"]
  routing_api:
    capabilities: ["SYS_RESOURCE"]

A.2 Complete suse/uaa values.yaml file

This is the complete output of helm inspect suse/uaa for the current SUSE Cloud Application Platform 1.3 release.
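
To keep a local copy for reference, the same listing can be regenerated at any time, assuming the suse charts repository has already been added to Helm, by redirecting the command output into a file of your choice, for example:

tux > helm inspect suse/uaa > uaa-values-reference.yaml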

apiVersion: 2.15.2+cf3.6.0.0.gde1bd02f
description: A Helm chart for SUSE UAA
name: uaa
version: 2.15.2

---
---
kube:
  auth: "rbac"
  external_ips: []

  # Whether HostPath volume mounts are available
  hostpath_available: false

  organization: "cap"
  psp:
    nonprivileged: ~
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 1

  storage_class:
    persistent: "persistent"
    shared: "shared"
config:
  # Flag to activate high-availability mode
  HA: false

  # Global memory configuration
  memory:
    # Flag to activate memory requests
    requests: false

    # Flag to activate memory limits
    limits: false

  # Global CPU configuration
  cpu:
    # Flag to activate cpu requests
    requests: false

    # Flag to activate cpu limits
    limits: false

  # Flag to specify whether to add Istio related annotations and labels
  use_istio: false

bosh:
  instance_groups: []
services:
  loadbalanced: false
  ingress: ~
secrets:
  # PEM-encoded CA certificate used to sign the TLS certificate used by all
  # components to secure their communications.
  # This value uses a generated default.
  INTERNAL_CA_CERT: ~

  # PEM-encoded CA key.
  INTERNAL_CA_CERT_KEY: ~

  # PEM-encoded JWT certificate.
  # This value uses a generated default.
  JWT_SIGNING_CERT: ~

  # PEM-encoded JWT signing key.
  JWT_SIGNING_CERT_KEY: ~

  # Password used for the monit API.
  # This value uses a generated default.
  MONIT_PASSWORD: ~

  # The password for the MySQL server admin user.
  # This value uses a generated default.
  MYSQL_ADMIN_PASSWORD: ~

  # The password for the cluster logger health user.
  # This value uses a generated default.
  MYSQL_CLUSTER_HEALTH_PASSWORD: ~

  # The password used to contact the sidecar endpoints via Basic Auth.
  # This value uses a generated default.
  MYSQL_GALERA_HEALTHCHECK_ENDPOINT_PASSWORD: ~

  # The password for Basic Auth used to secure the MySQL proxy API.
  # This value uses a generated default.
  MYSQL_PROXY_ADMIN_PASSWORD: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SAML_SERVICEPROVIDER_CERT: ~

  # PEM-encoded key.
  SAML_SERVICEPROVIDER_CERT_KEY: ~

  # The password for access to the UAA database.
  # This value uses a generated default.
  UAADB_PASSWORD: ~

  # The password of the admin client - a client named admin with uaa.admin as an
  # authority.
  UAA_ADMIN_CLIENT_SECRET: ~

  # The server's ssl certificate. The default is a self-signed certificate and
  # should always be replaced for production deployments.
  # This value uses a generated default.
  UAA_SERVER_CERT: ~

  # The server's ssl private key. Only passphrase-less keys are supported.
  UAA_SERVER_CERT_KEY: ~

env:
  # Expiration for generated certificates (in days)
  CERT_EXPIRATION: "10950"

  # Base domain name of the UAA endpoint; `uaa.${DOMAIN}` must be correctly
  # configured to point to this UAA instance
  DOMAIN: ~

  KUBERNETES_CLUSTER_DOMAIN: ~

  # The cluster's log level: off, fatal, error, warn, info, debug, debug1,
  # debug2.
  LOG_LEVEL: "info"

  # The log destination to talk to. This has to point to a syslog server.
  SCF_LOG_HOST: ~

  # The port used by rsyslog to talk to the log destination. It defaults to 514,
  # the standard port of syslog.
  SCF_LOG_PORT: "514"

  # The protocol used by rsyslog to talk to the log destination. The allowed
  # values are tcp, and udp. The default is tcp.
  SCF_LOG_PROTOCOL: "tcp"

  # If true, authenticate against the SMTP server using AUTH command. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_AUTH: "false"

  # SMTP from address, for password reset emails etc.
  SMTP_FROM_ADDRESS: ~

  # SMTP server host address, for password reset emails etc.
  SMTP_HOST: ~

  # SMTP server password, for password reset emails etc.
  SMTP_PASSWORD: ~

  # SMTP server port, for password reset emails etc.
  SMTP_PORT: "25"

  # If true, send STARTTLS command before logging in to SMTP server. See
  # https://javamail.java.net/nonav/docs/api/com/sun/mail/smtp/package-summary.html
  SMTP_STARTTLS: "false"

  # SMTP server username, for password reset emails etc.
  SMTP_USER: ~

# The sizing section contains configuration to change each individual instance
# group. Due to limitations on the allowable names, any dashes ("-") in the
# instance group names are replaced with underscores ("_").
sizing:
  # The mysql instance group contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: mysql, proxy
  mysql:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The mysql instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      mysql_data: 20

    # Unit [MiB]
    memory:
      request: 1400
      limit: ~

  # The secret-generation instance group contains the following jobs:
  #
  # - generate-secrets: This job will generate the secrets for the cluster
  secret_generation:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The secret-generation instance group cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The uaa instance group contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - wait-for-database: This is a pre-start job to delay starting the rest of
  #   the role until a database connection is ready. Currently it only checks
  #   that a response can be obtained from the server, and not that it responds
  #   intelligently.
  #
  #
  # - uaa: The UAA is the identity management service for Cloud Foundry. Its
  #   primary role is as an OAuth2 provider, issuing tokens for client
  #   applications to use when they act on behalf of Cloud Foundry users. It can
  #   also authenticate users with their Cloud Foundry credentials, and can act
  #   as an SSO service using those credentials (or others). It has endpoints
  #   for managing user accounts and for registering OAuth2 clients, as well as
  #   various other management functions.
  uaa:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The uaa instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2100
      limit: ~

A.3 Complete suse/scf values.yaml file

This is the complete output of helm inspect suse/cf for the current SUSE Cloud Application Platform 1.3 release.
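
These defaults are not edited in place. Instead, individual values are overridden in a scf-config-values.yaml file that uses the same nesting as the listing. The following sketch uses placeholder values only, and illustrates how entries from the kube:, env:, and secrets: sections map onto an override file:

kube:
  storage_class:
    persistent: "persistent"
env:
  DOMAIN: example.com
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
secrets:
  CLUSTER_ADMIN_PASSWORD: password
  UAA_ADMIN_CLIENT_SECRET: password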

apiVersion: 2.15.2+cf3.6.0.0.gde1bd02f
appVersion: 1.3.1
description: A Helm chart for SUSE Cloud Foundry
name: cf
version: 2.15.2

---
---
kube:
  external_ips: []

  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 1

  storage_class:
    persistent: "persistent"
    shared: "shared"
  psp:
    nonprivileged: ~
    privileged: ~

  # Whether HostPath volume mounts are available
  hostpath_available: false

  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"
  auth: "rbac"
config:
  # Flag to activate high-availability mode
  HA: false

  # Global memory configuration
  memory:
    # Flag to activate memory requests
    requests: false

    # Flag to activate memory limits
    limits: false

  # Global CPU configuration
  cpu:
    # Flag to activate cpu requests
    requests: false

    # Flag to activate cpu limits
    limits: false

services:
  loadbalanced: false
secrets:
  # PEM encoded RSA private key used to identify host.
  # This value uses a generated default.
  APP_SSH_KEY: ~

  # MD5 fingerprint of the host key of the SSH proxy that brokers connections to
  # application instances.
  APP_SSH_KEY_FINGERPRINT: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  AUCTIONEER_REP_CERT: ~

  # PEM-encoded key
  AUCTIONEER_REP_CERT_KEY: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  AUCTIONEER_SERVER_CERT: ~

  # PEM-encoded server key
  AUCTIONEER_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate for clients to connect to the Autoscaler
  # Actors. This includes the Autoscaler Scheduler and the Scaling Engine.
  # This value uses a generated default.
  AUTOSCALER_ASACTORS_CLIENT_CERT: ~

  # A PEM-encoded TLS key for clients to connect to the Autoscaler Actors. This
  # includes the Autoscaler Scheduler and the Scaling Engine.
  AUTOSCALER_ASACTORS_CLIENT_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler Actors https server. This
  # includes the Autoscaler Scheduler and the Scaling Engine.
  # This value uses a generated default.
  AUTOSCALER_ASACTORS_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler Actors https server. This includes
  # the Autoscaler Scheduler and the Scaling Engine.
  AUTOSCALER_ASACTORS_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate for clients to connect to the Autoscaler API.
  # This includes the Autoscaler ApiServer and the Service Broker.
  # This value uses a generated default.
  AUTOSCALER_ASAPI_CLIENT_CERT: ~

  # A PEM-encoded TLS key for clients to connect to the Autoscaler API. This
  # includes the Autoscaler ApiServer and the Service Broker.
  AUTOSCALER_ASAPI_CLIENT_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler API public https server.
  # This includes the Autoscaler ApiServer and the Service Broker.
  # This value uses a generated default.
  AUTOSCALER_ASAPI_PUBLIC_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler API public https server. This
  # includes the Autoscaler ApiServer and the Service Broker.
  AUTOSCALER_ASAPI_PUBLIC_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler API https server. This
  # includes the Autoscaler ApiServer and the Service Broker.
  # This value uses a generated default.
  AUTOSCALER_ASAPI_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler API https server. This includes the
  # Autoscaler ApiServer and the Service Broker.
  AUTOSCALER_ASAPI_SERVER_CERT_KEY: ~

  # A PEM-encoded TLS certificate for clients to connect to the Autoscaler
  # Metrics. This includes the Autoscaler Metrics Collector and Event Generator.
  # This value uses a generated default.
  AUTOSCALER_ASMETRICS_CLIENT_CERT: ~

  # A PEM-encoded TLS key for clients to connect to the Autoscaler Metrics. This
  # includes the Autoscaler Metrics Collector and Event Generator.
  AUTOSCALER_ASMETRICS_CLIENT_CERT_KEY: ~

  # A PEM-encoded TLS certificate of the Autoscaler Metrics https server. This
  # includes the Autoscaler Metrics Collector and Event Generator.
  # This value uses a generated default.
  AUTOSCALER_ASMETRICS_SERVER_CERT: ~

  # A PEM-encoded TLS key of the Autoscaler Metrics https server. This includes
  # the Autoscaler Metrics Collector and Event Generator.
  AUTOSCALER_ASMETRICS_SERVER_CERT_KEY: ~

  # The password for the Autoscaler postgres database.
  # This value uses a generated default.
  AUTOSCALER_DB_PASSWORD: ~

  # The password for the Autoscaler Service Broker.
  # This value uses a generated default.
  AUTOSCALER_SERVICE_BROKER_PASSWORD: ~

  # the uaa client secret used by Autoscaler.
  # This value uses a generated default.
  AUTOSCALER_UAA_CLIENT_SECRET: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_AUCTIONEER_CERT: ~

  # PEM-encoded key
  BBS_AUCTIONEER_CERT_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_CLIENT_CRT: ~

  # PEM-encoded client key.
  BBS_CLIENT_CRT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_REP_CERT: ~

  # PEM-encoded key
  BBS_REP_CERT_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_SERVER_CRT: ~

  # PEM-encoded client key.
  BBS_SERVER_CRT_KEY: ~

  # The basic auth password that Cloud Controller uses to connect to the
  # blobstore server. Auto-generated if not provided. Passwords must be
  # alphanumeric (URL-safe).
  # This value uses a generated default.
  BLOBSTORE_PASSWORD: ~

  # The secret used for signing URLs between Cloud Controller and blobstore.
  # This value uses a generated default.
  BLOBSTORE_SECURE_LINK: ~

  # The PEM-encoded certificate (optionally as a certificate chain) for serving
  # blobs over TLS/SSL.
  # This value uses a generated default.
  BLOBSTORE_TLS_CERT: ~

  # The PEM-encoded private key for signing TLS/SSL traffic.
  BLOBSTORE_TLS_CERT_KEY: ~

  # The password for the bulk api.
  # This value uses a generated default.
  BULK_API_PASSWORD: ~

  # A map of labels and encryption keys
  CC_DB_ENCRYPTION_KEYS: "~"

  # The PEM-encoded certificate for internal cloud controller traffic.
  # This value uses a generated default.
  CC_SERVER_CRT: ~

  # The PEM-encoded private key for internal cloud controller traffic.
  CC_SERVER_CRT_KEY: ~

  # The PEM-encoded certificate for internal cloud controller uploader traffic.
  # This value uses a generated default.
  CC_UPLOADER_CRT: ~

  # The PEM-encoded private key for internal cloud controller uploader traffic.
  CC_UPLOADER_CRT_KEY: ~

  # PEM-encoded broker server certificate.
  # This value uses a generated default.
  CF_USB_BROKER_SERVER_CERT: ~

  # PEM-encoded broker server key.
  CF_USB_BROKER_SERVER_CERT_KEY: ~

  # The password for access to the Universal Service Broker.
  # This value uses a generated default.
  # Example: "password"
  CF_USB_PASSWORD: ~

  # The password for the cluster administrator.
  CLUSTER_ADMIN_PASSWORD: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  CREDHUB_SERVER_CERT: ~

  # PEM-encoded server key
  CREDHUB_SERVER_CERT_KEY: ~

  # PEM-encoded client certificate
  # This value uses a generated default.
  DIEGO_CLIENT_CERT: ~

  # PEM-encoded client key
  DIEGO_CLIENT_CERT_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  DOPPLER_CERT: ~

  # PEM-encoded key.
  DOPPLER_CERT_KEY: ~

  # Basic auth password for access to the Cloud Controller's internal API.
  # This value uses a generated default.
  INTERNAL_API_PASSWORD: ~

  # PEM-encoded CA certificate used to sign the TLS certificate used by all
  # components to secure their communications.
  # This value uses a generated default.
  INTERNAL_CA_CERT: ~

  # PEM-encoded CA key.
  INTERNAL_CA_CERT_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  LOGGREGATOR_AGENT_CERT: ~

  # PEM-encoded key.
  LOGGREGATOR_AGENT_CERT_KEY: ~

  # PEM-encoded client certificate for loggregator mutual authentication
  # This value uses a generated default.
  LOGGREGATOR_CLIENT_CERT: ~

  # PEM-encoded client key for loggregator mutual authentication
  LOGGREGATOR_CLIENT_CERT_KEY: ~

  # Password used for the monit API.
  # This value uses a generated default.
  MONIT_PASSWORD: ~

  # The password for the MySQL server admin user.
  # This value uses a generated default.
  MYSQL_ADMIN_PASSWORD: ~

  # The password for access to the Cloud Controller database.
  # This value uses a generated default.
  MYSQL_CCDB_ROLE_PASSWORD: ~

  # The password for access to the usb config database.
  # This value uses a generated default.
  # Example: "password"
  MYSQL_CF_USB_PASSWORD: ~

  # The password for the cluster logger health user.
  # This value uses a generated default.
  MYSQL_CLUSTER_HEALTH_PASSWORD: ~

  # The password for access to the credhub-user database.
  # This value uses a generated default.
  MYSQL_CREDHUB_USER_PASSWORD: ~

  # Database password for the diego locket service.
  # This value uses a generated default.
  MYSQL_DIEGO_LOCKET_PASSWORD: ~

  # The password for access to MySQL by diego.
  # This value uses a generated default.
  MYSQL_DIEGO_PASSWORD: ~

  # Password used to authenticate to the MySQL Galera healthcheck endpoint.
  # This value uses a generated default.
  MYSQL_GALERA_HEALTHCHECK_ENDPOINT_PASSWORD: ~

  # Database password for storing broker state for the Persi NFS Broker
  # This value uses a generated default.
  MYSQL_PERSI_NFS_PASSWORD: ~

  # The password for Basic Auth used to secure the MySQL proxy API.
  # This value uses a generated default.
  MYSQL_PROXY_ADMIN_PASSWORD: ~

  # The password for access to MySQL by the routing-api
  # This value uses a generated default.
  MYSQL_ROUTING_API_PASSWORD: ~

  # The password for access to NATS.
  # This value uses a generated default.
  NATS_PASSWORD: ~

  # Basic auth password to verify on incoming Service Broker requests
  # This value uses a generated default.
  PERSI_NFS_BROKER_PASSWORD: ~

  # LDAP service account password (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PASSWORD: "-"

  # PEM-encoded server certificate
  # This value uses a generated default.
  REP_SERVER_CERT: ~

  # PEM-encoded server key
  REP_SERVER_CERT_KEY: ~

  # Support for route services is disabled when no value is configured. A robust
  # passphrase is recommended.
  # This value uses a generated default.
  ROUTER_SERVICES_SECRET: ~

  # The public ssl cert for ssl termination. Will be ignored if ROUTER_TLS_PEM
  # is set.
  # This value uses a generated default.
  ROUTER_SSL_CERT: ~

  # The private ssl key for ssl termination. Will be ignored if ROUTER_TLS_PEM
  # is set.
  ROUTER_SSL_CERT_KEY: ~

  # Password for HTTP basic auth to the varz/status endpoint.
  # This value uses a generated default.
  ROUTER_STATUS_PASSWORD: ~

  # Array of private keys and certificates used for TLS handshakes with
  # downstream clients. Each element in the array is an object containing fields
  # 'private_key' and 'cert_chain', each of which supports a PEM block. This
  # setting overrides ROUTER_SSL_CERT and ROUTER_SSL_KEY.
  # Example:
  #   - cert_chain: |
  #       -----BEGIN CERTIFICATE-----
  #       -----END CERTIFICATE-----
  #       -----BEGIN CERTIFICATE-----
  #       -----END CERTIFICATE-----
  #     private_key: |
  #       -----BEGIN RSA PRIVATE KEY-----
  #       -----END RSA PRIVATE KEY-----
  ROUTER_TLS_PEM: ~

  # The password for access to the uploader of staged droplets.
  # This value uses a generated default.
  STAGING_UPLOAD_PASSWORD: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_ADAPT_CERT: ~

  # PEM-encoded key.
  SYSLOG_ADAPT_CERT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_RLP_CERT: ~

  # PEM-encoded key.
  SYSLOG_RLP_CERT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_SCHED_CERT: ~

  # PEM-encoded key.
  SYSLOG_SCHED_CERT_KEY: ~

  # PEM-encoded client certificate for internal communication between the cloud
  # controller and TPS.
  # This value uses a generated default.
  TPS_CC_CLIENT_CRT: ~

  # PEM-encoded client key for internal communication between the cloud
  # controller and TPS.
  TPS_CC_CLIENT_CRT_KEY: ~

  # PEM-encoded certificate for communication with the traffic controller of the
  # log infrastructure.
  # This value uses a generated default.
  TRAFFICCONTROLLER_CERT: ~

  # PEM-encoded key for communication with the traffic controller of the log
  # infrastructure.
  TRAFFICCONTROLLER_CERT_KEY: ~

  # The password of the admin client - a client named admin with uaa.admin as an
  # authority.
  UAA_ADMIN_CLIENT_SECRET: ~

  # The CA certificate for UAA
  UAA_CA_CERT: ~

  # The password for UAA access by the Cloud Controller.
  # This value uses a generated default.
  UAA_CC_CLIENT_SECRET: ~

  # The password for UAA access by the Routing API.
  # This value uses a generated default.
  UAA_CLIENTS_CC_ROUTING_SECRET: ~

  # Used for third party service dashboard SSO.
  # This value uses a generated default.
  UAA_CLIENTS_CC_SERVICE_DASHBOARDS_CLIENT_SECRET: ~

  # Used for fetching service key values from CredHub.
  # This value uses a generated default.
  UAA_CLIENTS_CC_SERVICE_KEY_CLIENT_SECRET: ~

  # The password for UAA access by the Universal Service Broker.
  # This value uses a generated default.
  UAA_CLIENTS_CF_USB_SECRET: ~

  # The password for UAA access by the Cloud Controller for fetching usernames.
  # This value uses a generated default.
  UAA_CLIENTS_CLOUD_CONTROLLER_USERNAME_LOOKUP_SECRET: ~

  # The password for UAA access by the client for the user-accessible credhub
  # This value uses a generated default.
  UAA_CLIENTS_CREDHUB_USER_CLI_SECRET: ~

  # The password for UAA access by the SSH proxy.
  # This value uses a generated default.
  UAA_CLIENTS_DIEGO_SSH_PROXY_SECRET: ~

  # The password for UAA access by doppler.
  # This value uses a generated default.
  UAA_CLIENTS_DOPPLER_SECRET: ~

  # The password for UAA access by the gorouter.
  # This value uses a generated default.
  UAA_CLIENTS_GOROUTER_SECRET: ~

  # The password for UAA access by the login client.
  # This value uses a generated default.
  UAA_CLIENTS_LOGIN_SECRET: ~

  # The password for UAA access by the task creating the cluster administrator
  # user
  # This value uses a generated default.
  UAA_CLIENTS_SCF_AUTO_CONFIG_SECRET: ~

  # The password for UAA access by the TCP emitter.
  # This value uses a generated default.
  UAA_CLIENTS_TCP_EMITTER_SECRET: ~

  # The password for UAA access by the TCP router.
  # This value uses a generated default.
  UAA_CLIENTS_TCP_ROUTER_SECRET: ~

env:
  # The number of parallel test executors to spawn for Cloud Foundry acceptance
  # tests. The larger the number the higher the stress on the system.
  ACCEPTANCE_TEST_NODES: "4"

  # List of domains (including scheme) from which Cross-Origin requests will be
  # accepted, a * can be used as a wildcard for any part of a domain.
  ALLOWED_CORS_DOMAINS: "[]"

  # Allow users to change the value of the app-level allow_ssh attribute.
  ALLOW_APP_SSH_ACCESS: "true"

  # Extra token expiry time while uploading big apps, in seconds.
  APP_TOKEN_UPLOAD_GRACE_PERIOD: "1200"

  # The TTL of credential cache for the Autoscaler custom metrics, in seconds
  AUTOSCALER_API_SERVER_CACHE_TTL: "600"

  # Idle connection timeout for the Autoscaler API Server database, in seconds.
  AUTOSCALER_API_SERVER_DB_CONFIG_IDLE_TIMEOUT: "100"

  # The maximum number of connections to the Autoscaler API Server database.
  AUTOSCALER_API_SERVER_DB_CONFIG_MAX_CONNECTIONS: "10"

  # The minimum number of connections to the Autoscaler API Server database.
  AUTOSCALER_API_SERVER_DB_CONFIG_MIN_CONNECTIONS: "0"

  # The build version of Autoscaler. It should be defined when it is deployed to
  # a product environment.
  AUTOSCALER_API_SERVER_INFO_BUILD: "beta"

  # The description of Autoscaler. It should be defined when it is deployed to a
  # product environment.
  AUTOSCALER_API_SERVER_INFO_DESCRIPTION: "autoscaler"

  # The name of the Autoscaler API Server. It should be defined when it is
  # deployed to a product environment.
  AUTOSCALER_API_SERVER_INFO_NAME: "autoscalerapiserver"

  # The support url of autoscaler where the users could find support
  # information. It should be defined when it is deployed to a product
  # environment.
  AUTOSCALER_API_SERVER_INFO_SUPPORT_URL: ""

  # The maximum age of a connection that can be reused for the Autoscaler
  # appmetrics database.
  AUTOSCALER_APPMETRICS_DB_CONNECTION_CONFIG_CONNECTION_MAX_LIFETIME: "60s"

  # The maximum number of connections to the Autoscaler appmetrics database.
  AUTOSCALER_APPMETRICS_DB_CONNECTION_CONFIG_MAX_IDLE_CONNECTIONS: "10"

  # The minimum number of connections to the Autoscaler appmetrics database.
  AUTOSCALER_APPMETRICS_DB_CONNECTION_CONFIG_MAX_OPEN_CONNECTIONS: "100"

  # Whether to skip ssl validation when Autoscaler components to communicate
  # with CloudFoundry components.
  AUTOSCALER_CF_SKIP_SSL_VALIDATION: "true"

  # The maximum number of connections the Autoscaler postgres database server
  # could serve.
  AUTOSCALER_DATABASE_MAX_CONNECTIONS: "1000"

  # The default breach duration for the Autoscaler Event Generator, in
  # seconds. The breach duration means the App Autoscaler won’t take the
  # desired adjustment until the application has been breaching the Autoscaler
  # policy for a period longer than breach_duration_secs setting.
  AUTOSCALER_DEFAULT_BREACH_DURATION_SECS: "300"

  # The default cooldown duration between two scaling action for the Autoscaler
  # Event Generator, in seconds.
  AUTOSCALER_DEFAULT_COOLDOWN_SECS: "300"

  # The default statistic window duration between two scaling action for the
  # Autoscaler Event Generator, in seconds.
  AUTOSCALER_DEFAULT_STAT_WINDOW_SECS: "300"

  # The size of golang channel used by the Autoscaler Event Generator as a
  # buffer to save metrics by batch.
  AUTOSCALER_EVENT_GENERATOR_AGGREGATOR_APP_METRIC_CHANNEL_SIZE: "1000"

  # The size of golang channel used by the Autoscaler Event Generator to
  # exchange data.
  AUTOSCALER_EVENT_GENERATOR_AGGREGATOR_APP_MONITOR_CHANNEL_SIZE: "200"

  # The duration defines how long does Autoscaler Event Generator aggregate
  # appmetrics.
  AUTOSCALER_EVENT_GENERATOR_AGGREGATOR_EXECUTE_INTERVAL: "40s"

  # The number of metricpollers in Autoscaler Event Generator.
  AUTOSCALER_EVENT_GENERATOR_AGGREGATOR_METRIC_POLLER_COUNT: "20"

  # The duration defines how long does the Autoscaler Event Generator reload
  # data from policy database.
  AUTOSCALER_EVENT_GENERATOR_AGGREGATOR_POLICY_POLLER_INTERVAL: "40s"

  # The duration defines how long does the Autoscaler Event Generator save
  # appmetrics to database by batch.
  AUTOSCALER_EVENT_GENERATOR_AGGREGATOR_SAVE_INTERVAL: "5s"

  # The initial exponential back off interval for the circuit breaker of the
  # Autoscaler Event Generator.
  AUTOSCALER_EVENT_GENERATOR_CIRCUIT_BREAKER_BACK_OFF_INITIAL_INTERVAL: "5m"

  # The maximum exponential back off interval for the circuit breaker of the
  # Autoscaler Event Generator.
  AUTOSCALER_EVENT_GENERATOR_CIRCUIT_BREAKER_BACK_OFF_MAX_INTERVAL: "120m"

  # The number of consecutive failure to trip the circuit down for the circuit
  # breaker of the Autoscaler Event Generator.
  AUTOSCALER_EVENT_GENERATOR_CIRCUIT_BREAKER_CONSECUTIVE_FAILURE_COUNT: "3"

  # The duration defines how long does the Autoscaler Event Generator evaluate
  # appmetrics.
  AUTOSCALER_EVENT_GENERATOR_EVALUATOR_EVALUATION_MANAGER_EXECUTE_INTERVAL: "40s"

  # The number of evaluators in the Autoscaler Event Generator.
  AUTOSCALER_EVENT_GENERATOR_EVALUATOR_EVALUATOR_COUNT: "20"

  # The size of golang channel used by the Autoscaler Event Generator App
  # Evaluation Manager to pass triggers to evaluators.
  AUTOSCALER_EVENT_GENERATOR_EVALUATOR_TRIGGER_ARRAY_CHANNEL_SIZE: "200"

  # The maximum age of a connection that can be reused for the Autoscaler
  # instance metrics database.
  AUTOSCALER_INSTANCE_METRICS_DB_CONNECTION_CONFIG_CONNECTION_MAX_LIFETIME: "60s"

  # The maximum number of idle connections to the Autoscaler instance metrics
  # database.
  AUTOSCALER_INSTANCE_METRICS_DB_CONNECTION_CONFIG_MAX_IDLE_CONNECTIONS: "10"

  # The maximum number of connections to the Autoscaler instance metrics
  # database.
  AUTOSCALER_INSTANCE_METRICS_DB_CONNECTION_CONFIG_MAX_OPEN_CONNECTIONS: "100"

  # The maximum age of a connection that can be reused for the Autoscaler lock
  # database.
  AUTOSCALER_LOCK_DB_CONNECTION_CONFIG_CONNECTION_MAX_LIFETIME: "60s"

  # The maximum number of idle connections to the Autoscaler lock database.
  AUTOSCALER_LOCK_DB_CONNECTION_CONFIG_MAX_IDLE_CONNECTIONS: "10"

  # The maximum number of connections to the Autoscaler lock database.
  AUTOSCALER_LOCK_DB_CONNECTION_CONFIG_MAX_OPEN_CONNECTIONS: "100"

  # The duration defines how long does the Autoscaler Metrics Collector
  # aggregate collected instance metrics.
  AUTOSCALER_METRICS_COLLECTOR_COLLECTOR_COLLECT_INTERVAL: "30s"

  # The duration defines how long does the Autoscaler Metrics Collector reload
  # data from policy database.
  AUTOSCALER_METRICS_COLLECTOR_COLLECTOR_REFRESH_INTERVAL: "60s"

  # The duration defines how long does the Autoscaler Metrics Collector save
  # instance metrics to database by batch.
  AUTOSCALER_METRICS_COLLECTOR_COLLECTOR_SAVE_INTERVAL: "30s"

  # The method type of the Autoscaler Metrics Collector to fetch metrics from
  # cloudfoundry: polling or streaming.
  AUTOSCALER_METRICS_COLLECTOR_COLLECT_METHOD: "streaming"

  # The duration defines how many days of appmetrics data the Autoscaler
  # Operator keeps for each pruning.
  AUTOSCALER_OPERATOR_APP_METRICS_DB_CUTOFF_DAYS: "30"

  # The duration between two pruning operations for appmetrics database in the
  # Autoscaler Operator.
  AUTOSCALER_OPERATOR_APP_METRICS_DB_REFRESH_INTERVAL: "24h"

  # The interval of synchronizing applications information between AutoScaler
  # policy database and Cloudfoundry.
  AUTOSCALER_OPERATOR_APP_SYNC_INTERVAL: "24h"

  # Whether to enable the Autoscaler Operator's db lock. If there is more than
  # one operator, the value should be true
  AUTOSCALER_OPERATOR_ENABLE_DB_LOCK: "true"

  # The duration defines how many days of instance metrics data the
  # Autoscaler Operator keeps for each pruning.
  AUTOSCALER_OPERATOR_INSTANCE_METRICS_DB_CUTOFF_DAYS: "30"

  # The duration between two pruning operations for instance metrics database in
  # the Autoscaler Operator.
  AUTOSCALER_OPERATOR_INSTANCE_METRICS_DB_REFRESH_INTERVAL: "24h"

  # The interval for each Autoscaler Operator instance to retry to get database
  # lock.
  AUTOSCALER_OPERATOR_LOCK_RETRY_INTERVAL: "10s"

  # The maximum duration for which an Autoscaler Operator instance can hold the
  # database lock.
  AUTOSCALER_OPERATOR_LOCK_TTL: "15s"

  # The duration defines how many days of the scaling history data the
  # Autoscaler Operator keeps for each pruning.
  AUTOSCALER_OPERATOR_SCALING_ENGINE_DB_CUTOFF_DAYS: "30"

  # The duration between two pruning operations for scalingengine database in the
  # Autoscaler Operator.
  AUTOSCALER_OPERATOR_SCALING_ENGINE_DB_FRESH_INTERVAL: "24h"

  # The interval of synchronizing the Autoscaler Scaling Engine active schedules
  # with Autoscaler Scheduler.
  AUTOSCALER_OPERATOR_SCALING_ENGINE_SYNC_INTERVAL: "600s"

  # The interval of synchronizing the Autoscaler Scheduler schedules with policy
  # database.
  AUTOSCALER_OPERATOR_SCHEDULER_SYNC_INTERVAL: "600s"

  # The maximum age of a connection that can be reused for the Autoscaler policy
  # database.
  AUTOSCALER_POLICY_DB_CONNECTION_CONFIG_CONNECTION_MAX_LIFETIME: "60s"

  # The maximum number of idle connections to the Autoscaler policy database.
  AUTOSCALER_POLICY_DB_CONNECTION_CONFIG_MAX_IDLE_CONNECTIONS: "10"

  # The maximum number of connections to the Autoscaler policy database.
  AUTOSCALER_POLICY_DB_CONNECTION_CONFIG_MAX_OPEN_CONNECTIONS: "100"

  # The maximum age of a connection that can be reused for the Autoscaler
  # Scaling Engine database.
  AUTOSCALER_SCALING_ENGINE_DB_CONNECTION_CONFIG_CONNECTION_MAX_LIFETIME: "60s"

  # The maximum number of idle connections to the Autoscaler Scaling Engine
  # database.
  AUTOSCALER_SCALING_ENGINE_DB_CONNECTION_CONFIG_MAX_IDLE_CONNECTIONS: "10"

  # The maximum number of connections to the Autoscaler Scaling Engine database.
  AUTOSCALER_SCALING_ENGINE_DB_CONNECTION_CONFIG_MAX_OPEN_CONNECTIONS: "100"

  # The time interval to emit health metrics for the Autoscaler Scaling Engine.
  AUTOSCALER_SCALING_ENGINE_HEALTH_EMIT_INTERVAL: "15s"

  # The lock number of the Autoscaler Scaling Engine to do synchronization for
  # multiple scaling requests.
  AUTOSCALER_SCALING_ENGINE_LOCK_SIZE: "32"

  # The maximum age of a connection that can be reused for the Autoscaler
  # Scheduler database.
  AUTOSCALER_SCHEDULER_DB_CONNECTION_CONFIG_CONNECTION_MAX_LIFETIME: "60s"

  # The maximum number of idle connections to the Autoscaler Scheduler database.
  AUTOSCALER_SCHEDULER_DB_CONNECTION_CONFIG_MAX_IDLE_CONNECTIONS: "10"

  # The maximum number of connections to the Autoscaler Scheduler database.
  AUTOSCALER_SCHEDULER_DB_CONNECTION_CONFIG_MAX_OPEN_CONNECTIONS: "100"

  # Rescheduling interval for quartz job for the Autoscaler Scheduler, in
  # milliseconds.
  AUTOSCALER_SCHEDULER_JOB_RESCHEDULE_INTERVAL_MILISECOND: "100"

  # Maximum number of jobs can be re-scheduled in the Autoscaler Scheduler.
  AUTOSCALER_SCHEDULER_JOB_RESCHEDULE_MAXCOUNT: "6"

  # Maximum number of notification sent to Autoscaler Scaling Engine for job
  # re-schedule.
  AUTOSCALER_SCHEDULER_NOTIFICATION_RESCHEDULE_MAXCOUNT: "3"

  # Idle connection timeout for the Autoscaler Service Broker database, in
  # seconds.
  AUTOSCALER_SERVICE_BROKER_DB_CONFIG_IDLE_TIMEOUT: "1000"

  # The maximum number of connections to the Autoscaler Service Broker database.
  AUTOSCALER_SERVICE_BROKER_DB_CONFIG_MAX_CONNECTIONS: "10"

  # The minimum number of connections to the Autoscaler Service Broker database.
  AUTOSCALER_SERVICE_BROKER_DB_CONFIG_MIN_CONNECTIONS: "0"

  # The timeout in milliseconds for http request from Autoscaler Service Broker
  # to other Autoscaler components.
  AUTOSCALER_SERVICE_BROKER_HTTP_REQUEST_TIMEOUT: "5000"

  # Whether Autoscaler is provided as a cloudfoundry service, default is false.
  AUTOSCALER_SERVICE_OFFERING_ENABLED: "false"

  # The name of the metadata label to query on worker nodes to get AZ
  # information. When set, the cells will query their worker node for AZ
  # information and inject the result into cloudfoundry via the KUBE_AZ
  # parameter. When left to the default no custom AZ processing is done.
  AZ_LABEL_NAME: ""

  # List of allow / deny rules for the blobstore internal server. Will be
  # followed by 'deny all'. Each entry must be followed by a semicolon.
  BLOBSTORE_ACCESS_RULES: "allow 10.0.0.0/8; allow 172.16.0.0/12; allow 192.168.0.0/16;"

  # Maximal allowed file size for upload to blobstore, in megabytes.
  BLOBSTORE_MAX_UPLOAD_SIZE: "5000"

  # For requests to service brokers, this is the HTTP (open and read) timeout
  # setting, in seconds.
  BROKER_CLIENT_TIMEOUT_SECONDS: "70"

  # The set of CAT test suites to run. If not specified it falls back to a
  # hardwired set of suites.
  CATS_SUITES: ~

  # The key used to encrypt entries in the CC database
  CC_DB_CURRENT_KEY_LABEL: ""

  # URI for a CDN to use for buildpack downloads.
  CDN_URI: ""

  # Expiration for generated certificates (in days)
  CERT_EXPIRATION: "10950"

  # The Oauth2 authorities available to the cluster administrator.
  CLUSTER_ADMIN_AUTHORITIES: "scim.write,scim.read,openid,cloud_controller.admin,clients.read,clients.write,doppler.firehose,routing.router_groups.read,routing.router_groups.write"

  # 'build' attribute in the /v2/info endpoint
  CLUSTER_BUILD: "2.0.2"

  # 'description' attribute in the /v2/info endpoint
  CLUSTER_DESCRIPTION: "SUSE Cloud Foundry"

  # 'name' attribute in the /v2/info endpoint
  CLUSTER_NAME: "SCF"

  # 'version' attribute in the /v2/info endpoint
  CLUSTER_VERSION: "2"

  # The standard amount of disk (in MB) given to an application when not
  # overridden by the user via manifest, command line, etc.
  DEFAULT_APP_DISK_IN_MB: "1024"

  # The standard amount of memory (in MB) given to an application when not
  # overridden by the user via manifest, command line, etc.
  DEFAULT_APP_MEMORY: "1024"

  # If set apps pushed to spaces that allow SSH access will have SSH enabled by
  # default.
  DEFAULT_APP_SSH_ACCESS: "true"

  # The default stack to use if no custom stack is specified by an app.
  DEFAULT_STACK: "sle12"

  # The container disk capacity the cell should manage. If this capacity is
  # larger than the actual disk quota of the cell component, over-provisioning
  # will occur.
  DIEGO_CELL_DISK_CAPACITY_MB: "auto"

  # The memory capacity the cell should manage. If this capacity is larger than
  # the actual memory of the cell component, over-provisioning will occur.
  DIEGO_CELL_MEMORY_CAPACITY_MB: "auto"

  # Maximum network transmission unit length in bytes for application
  # containers.
  DIEGO_CELL_NETWORK_MTU: "1400"

  # A CIDR subnet mask specifying the range of subnets available to be assigned
  # to containers.
  DIEGO_CELL_SUBNET: "10.38.0.0/16"

  # Disable external buildpacks. Only admin buildpacks and system buildpacks
  # will be available to users.
  DISABLE_CUSTOM_BUILDPACKS: "false"

  # Base domain of the SCF cluster.
  # Example: "my-scf-cluster.com"
  DOMAIN: ~

  # The number of versions of an application to keep. You will be able to
  # rollback to this amount of versions.
  DROPLET_MAX_STAGED_STORED: "5"

  # By default, Cloud Foundry does not enable Cloud Controller request logging.
  # To enable this feature, you must set this property to "true". You can learn
  # more about the format of the logs here
  # https://docs.cloudfoundry.org/loggregator/cc-uaa-logging.html#cc
  ENABLE_SECURITY_EVENT_LOGGING: "false"

  # Enables setting the X-Forwarded-Proto header if SSL termination happened
  # upstream and the header value was set incorrectly. When this property is set
  # to true, the gorouter sets the header X-Forwarded-Proto to https. When this
  # value set to false, the gorouter sets the header X-Forwarded-Proto to the
  # protocol of the incoming request.
  FORCE_FORWARDED_PROTO_AS_HTTPS: "false"

  # AppArmor profile name for garden-runc; set this to empty string to disable
  # AppArmor support
  GARDEN_APPARMOR_PROFILE: "garden-default"

  # URL pointing to the Docker registry used for fetching Docker images. If not
  # set, the Docker service default is used.
  GARDEN_DOCKER_REGISTRY: "registry-1.docker.io"

  # Override DNS servers to be used in containers; defaults to the same as the
  # host.
  GARDEN_LINUX_DNS_SERVER: ""

  # The filesystem driver to use (btrfs or overlay-xfs).
  GARDEN_ROOTFS_DRIVER: "btrfs"

  # Location of the proxy to use for secure web access.
  HTTPS_PROXY: ~

  # Location of the proxy to use for regular web access.
  HTTP_PROXY: ~

  # A comma-separated whitelist of insecure Docker registries in the form of
  # '<HOSTNAME|IP>:PORT'. Each registry must be quoted separately.
  #
  # Example: "\"docker-registry.example.com:80\", \"hello.example.org:443\""
  INSECURE_DOCKER_REGISTRIES: ""

  KUBERNETES_CLUSTER_DOMAIN: ~

  # The cluster's log level: off, fatal, error, warn, info, debug, debug1,
  # debug2.
  LOG_LEVEL: "info"

  # The maximum amount of disk a user can request for an application via
  # manifest, command line, etc., in MB. See also DEFAULT_APP_DISK_IN_MB for the
  # standard amount.
  MAX_APP_DISK_IN_MB: "2048"

  # Maximum health check timeout that can be set for an app, in seconds.
  MAX_HEALTH_CHECK_TIMEOUT: "180"

  # The time allowed for the MySQL server to respond to healthcheck queries, in
  # milliseconds.
  MYSQL_PROXY_HEALTHCHECK_TIMEOUT: "30000"

  # Sets the maximum allowed size of the client request body, specified in the
  # “Content-Length” request header field, in megabytes. If the size in a
  # request exceeds the configured value, the 413 (Request Entity Too Large)
  # error is returned to the client. Please be aware that browsers cannot
  # correctly display this error. Setting size to 0 disables checking of client
  # request body size. This limits application uploads, buildpack uploads, etc.
  NGINX_MAX_REQUEST_BODY_SIZE: "2048"

  # Comma separated list of IP addresses and domains which should not be
  # directed through a proxy, if any.
  NO_PROXY: ~

  # Comma separated list of white-listed options that may be set during create
  # or bind operations.
  # Example:
  # "uid,gid,allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,fsname,username,password"
  PERSI_NFS_ALLOWED_OPTIONS: "uid,gid,auto_cache,username,password"

  # Comma separated list of default values for nfs mount options. If a default
  # is specified with an option not included in PERSI_NFS_ALLOWED_OPTIONS, then
  # this default value will be set and it won't be overridable.
  PERSI_NFS_DEFAULT_OPTIONS: ~

  # Comma separated list of white-listed options that may be accepted in the
  # mount_config options. Note a specific 'sloppy_mount:true' volume option
  # tells the driver to ignore non-white-listed options, while a
  # 'sloppy_mount:false' tells the driver to fail fast instead when receiving a
  # non-white-listed option.
  #
  # Example:
  # "allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,sloppy_mount,fsname"
  PERSI_NFS_DRIVER_ALLOWED_IN_MOUNT: "auto_cache"

  # Comma separated list of white-listed options that may be configured in the
  # mount_config.source URL query params.
  # Example: "uid,gid,auto-traverse-mounts,dircache"
  PERSI_NFS_DRIVER_ALLOWED_IN_SOURCE: "uid,gid"

  # Comma separated list default values for options that may be configured in
  # the mount_config options, formatted as 'option:default'. If an option is not
  # specified in the volume mount, or the option is not white-listed, then the
  # specified default value will be used instead.
  #
  # Example:
  # "allow_root:false,nfs_uid:2000,nfs_gid:2000,auto_cache:true,sloppy_mount:true"
  PERSI_NFS_DRIVER_DEFAULT_IN_MOUNT: "auto_cache:true"

  # Comma separated list of default values for options in the source URL query
  # params, formatted as 'option:default'. If an option is not specified in the
  # volume mount, or the option is not white-listed, then the specified default
  # value will be applied.
  PERSI_NFS_DRIVER_DEFAULT_IN_SOURCE: ~

  # Disable Persi NFS driver
  PERSI_NFS_DRIVER_DISABLE: "false"

  # LDAP server host name or ip address (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_HOST: ""

  # LDAP server port (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PORT: "389"

  # LDAP server protocol (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PROTOCOL: "tcp"

  # LDAP service account user name (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_USER: ""

  # LDAP fqdn for user records we will search against when looking up user uids
  # (required for LDAP integration only)
  # Example: "cn=Users,dc=corp,dc=test,dc=com"
  PERSI_NFS_DRIVER_LDAP_USER_FQDN: ""

  # Certificates to add to the rootfs trust store. Multiple certs are possible by
  # concatenating their definitions into one big block of text.
  ROOTFS_TRUSTED_CERTS: ""

  # The algorithm used by the router to distribute requests for a route across
  # backends. Supported values are round-robin and least-connection.
  ROUTER_BALANCING_ALGORITHM: "round-robin"

  # How to handle client certificates. Supported values are none, request, or
  # require. See
  # https://docs.cloudfoundry.org/adminguide/securing-traffic.html#gorouter_mutual_auth
  # for more information.
  ROUTER_CLIENT_CERT_VALIDATION: "request"

  # How to handle the x-forwarded-client-cert (XFCC) HTTP header. Supported
  # values are always_forward, forward, and sanitize_set. See
  # https://docs.cloudfoundry.org/concepts/http-routing.html for more
  # information.
  ROUTER_FORWARDED_CLIENT_CERT: "always_forward"

  # The log destination to talk to. This has to point to a syslog server.
  SCF_LOG_HOST: ~

  # The port used by rsyslog to talk to the log destination. It defaults to 514,
  # the standard port of syslog.
  SCF_LOG_PORT: "514"

  # The protocol used by rsyslog to talk to the log destination. The allowed
  # values are tcp, and udp. The default is tcp.
  SCF_LOG_PROTOCOL: "tcp"

  # Timeout for staging an app, in seconds.
  STAGING_TIMEOUT: "900"

  # Support contact information for the cluster
  SUPPORT_ADDRESS: "support@example.com"

  # TCP routing domain of the SCF cluster; only used for testing;
  # Example: "tcp.my-scf-cluster.com"
  TCP_DOMAIN: ~

  # Concatenation of trusted CA certificates to be made available on the cell.
  TRUSTED_CERTS: ~

  # The host name of the UAA server (root zone)
  UAA_HOST: ~

  # The tcp port the UAA server (root zone) listens on for requests.
  UAA_PORT: "2793"

  # Whether or not to use privileged containers for buildpack based
  # applications. Containers with a docker-image-based rootfs will continue to
  # always be unprivileged.
  USE_DIEGO_PRIVILEGED_CONTAINERS: "false"

  # Whether or not to use privileged containers for staging tasks.
  USE_STAGER_PRIVILEGED_CONTAINERS: "false"

# The sizing section contains configuration to change each individual instance
# group. Due to limitations on the allowable names, any dashes ("-") in the
# instance group names are replaced with underscores ("_").
sizing:
  # The adapter instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: adapter, loggregator_agent
  adapter:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The adapter instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The api-group instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # - cloud_controller_ng: The Cloud Controller provides primary Cloud Foundry
  #   API that is used by the CF CLI. The Cloud Controller uses a database to keep
  #   tables for organizations, spaces, apps, services, service instances, user
  #   roles, and more. Typically multiple instances of Cloud Controller are load
  #   balanced.
  #
  # - route_registrar: Used for registering routes
  #
  # Also: loggregator_agent, statsd_injector, go-buildpack, binary-buildpack,
  # nodejs-buildpack, ruby-buildpack, php-buildpack, python-buildpack,
  # staticfile-buildpack, java-buildpack, dotnet-core-buildpack
  api_group:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The api-group instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 3800
      limit: ~

  # The autoscaler-actors instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: scalingengine, scheduler, operator
  autoscaler_actors:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The autoscaler-actors instance group can scale between 0 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 0

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2350
      limit: ~

  # The autoscaler-api instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - route_registrar: Used for registering routes
  #
  # Also: apiserver, servicebroker
  autoscaler_api:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The autoscaler-api instance group can scale between 0 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 0

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The autoscaler-metrics instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: metricscollector, eventgenerator
  autoscaler_metrics:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The autoscaler-metrics instance group can scale between 0 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 0

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The autoscaler-postgres instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - postgres: The Postgres server provides a single instance Postgres database
  #   that can be used with the Cloud Controller or the UAA. It does not provide
  #   highly-available configuration.
  autoscaler_postgres:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The autoscaler-postgres instance group can scale between 0 and 3
    # instances.
    # For high availability it needs at least 2 instances.
    count: 0

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      postgres_data: 100

    # Unit [MiB]
    memory:
      request: 1024
      limit: ~

  # The blobstore instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - route_registrar: Used for registering routes
  #
  # Also: blobstore, loggregator_agent
  blobstore:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The blobstore instance group cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      blobstore_data: 50

    # Unit [MiB]
    memory:
      request: 500
      limit: ~

  # The cc-clock instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - cloud_controller_clock: The Cloud Controller clock periodically schedules
  #   Cloud Controller clean up tasks for app usage events, audit events, failed
  #   jobs, and more. Only a single instance of this job is necessary.
  #
  # Also: loggregator_agent, statsd_injector
  cc_clock:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The cc-clock instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 750
      limit: ~

  # The cc-uploader instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: tps, cc_uploader, loggregator_agent
  cc_uploader:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The cc-uploader instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The cc-worker instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - cloud_controller_worker: Cloud Controller worker processes background
  #   tasks submitted via the API.
  #
  # Also: loggregator_agent
  cc_worker:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The cc-worker instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 750
      limit: ~

  # The cf-usb instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: cf-usb
  cf_usb:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The cf-usb instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The credhub-user instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - route_registrar: Used for registering routes
  #
  # Also: credhub
  credhub_user:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The credhub-user instance group can scale between 0 and 1 instances.
    # For high availability it needs at least 1 instance.
    count: 0

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2000
      limit: ~

  # The diego-api instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: bbs, cfdot, loggregator_agent, locket
  diego_api:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The diego-api instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The diego-brain instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: auctioneer, cfdot, loggregator_agent
  diego_brain:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The diego-brain instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The diego-cell instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - get-kubectl: This job exists only to ensure the presence of the kubectl
  #   binary in the role referencing it.
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - wait-for-uaa: Wait for UAA to be ready before starting any jobs
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: rep, cfdot, route_emitter, garden, groot-btrfs,
  # cflinuxfs2-rootfs-setup, opensuse42-rootfs-setup, cf-sle12-setup,
  # loggregator_agent, nfsv3driver
  diego_cell:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego-cell instance group can scale between 1 and 254 instances.
    # For high availability it needs at least 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    disk_sizes:
      grootfs_data: 50

    # Unit [MiB]
    memory:
      request: 2800
      limit: ~

  # The diego-ssh instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: ssh_proxy, loggregator_agent, file_server
  diego_ssh:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The diego-ssh instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The doppler instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: doppler, loggregator_agent
  doppler:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The doppler instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 410
      limit: ~

  # The log-api instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: loggregator_trafficcontroller, loggregator_agent, reverse_log_proxy
  log_api:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The log-api instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The mysql instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: mysql, proxy
  mysql:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The mysql instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      mysql_data: 20

    # Unit [MiB]
    memory:
      request: 2500
      limit: ~

  # The nats instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - nats: The NATS server provides a publish-subscribe messaging system for
  #   the Cloud Controller, the DEA, HM9000, and other Cloud Foundry components.
  #
  # Also: loggregator_agent
  nats:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The nats instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The nfs-broker instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: loggregator_agent, nfsbroker
  nfs_broker:
    # Node affinity rules can be specified here
    affinity: {}

    # The nfs-broker instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The post-deployment-setup instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - uaa-create-user: Create the initial user in UAA
  #
  # - configure-scf: Uses the cf CLI to configure SCF once it's online (things
  #   like proxy settings, service brokers, etc.)
  post_deployment_setup:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The post-deployment-setup instance group cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The router instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - gorouter: Gorouter maintains a dynamic routing table based on updates
  #   received from NATS and (when enabled) the Routing API. This routing table
  #   maps URLs to backends. The router finds the URL in the routing table that
  #   most closely matches the host header of the request and load balances
  #   across the associated backends.
  #
  # Also: loggregator_agent
  router:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The router instance group can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The routing-api instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: loggregator_agent, routing-api
  routing_api:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The routing-api instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The secret-generation instance group contains the following jobs:
  #
  # - generate-secrets: This job will generate the secrets for the cluster
  secret_generation:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The secret-generation instance group cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The syslog-scheduler instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: scheduler, loggregator_agent
  syslog_scheduler:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The syslog-scheduler instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

  # The tcp-router instance group contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - wait-for-uaa: Wait for UAA to be ready before starting any jobs
  #
  # Also: tcp_router, loggregator_agent
  tcp_router:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The tcp-router instance group can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 128
      limit: ~

    ports:
      tcp_route:
        count: 9
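
The sizing values above are defaults and do not need to be edited in the chart itself. To scale an instance group or adjust its resource requests, override the relevant keys in a separate values file and pass it to Helm together with your deployment configuration file; when several --values files are supplied, later files take precedence. The following is a minimal sketch, not a definitive configuration: it assumes a hypothetical override file named scf-sizing-values.yaml and the susecf-scf release of the suse/cf chart with the scf-config-values.yaml file used in the deployment examples in this guide, and it only raises a few instance groups to the high availability minimums noted in the comments above.

# scf-sizing-values.yaml (hypothetical example override file)
sizing:
  diego_cell:
    count: 3   # at least 3 for high availability
  diego_api:
    count: 2   # at least 2 for high availability
  router:
    count: 2   # at least 2 for high availability

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values scf-sizing-values.yaml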