SUSE Cloud Application Platform 1.2

Deployment Guide

Authors: Carla Schroder and Billy Tat
Publication Date: October 04, 2018
About This Guide
Required Background
Available Documentation
Feedback
Documentation Conventions
About the Making of This Documentation
1 About SUSE Cloud Application Platform
1.1 New in Version 1.2
1.2 SUSE Cloud Application Platform Overview
1.3 SUSE Cloud Application Platform Architecture
2 Production Installation
2.1 Prerequisites
2.2 Choose Storage Class
2.3 Test Storage Class
2.4 Configure the SUSE Cloud Application Platform Production Deployment
2.5 Deploy with Helm
2.6 Install the Kubernetes charts repository
2.7 Create Namespaces
2.8 Copy SUSE Enterprise Storage Secret
2.9 Deploy UAA
2.10 Deploy SCF
3 Installing the Stratos Web Console
3.1 Install Stratos with Helm
4 Upgrading SUSE Cloud Application Platform
4.1 Upgrading SCF, UAA, and Stratos
5 SUSE Cloud Application Platform High Availability
5.1 Example High Availability Configuration
6 Deploying and Managing Applications with the Cloud Foundry Client
6.1 Using the cf CLI with SUSE Cloud Application Platform
7 Managing Passwords
7.1 Password Management with the Cloud Foundry Client
7.2 Changing User Passwords with Stratos
8 Setting up and Using a Service Broker Sidecar
8.1 Prerequisites
8.2 Deploying on CaaS Platform 3
8.3 Configuring the MySQL Deployment
8.4 Deploying the MySQL Chart
8.5 Create and Bind a MySQL Service
8.6 Deploying the PostgreSQL Chart
8.7 Removing Service Broker Sidecar Deployments
9 Backup and Restore
9.1 Installing the cf-plugin-backup
9.2 Using cf-plugin-backup
9.3 Scope of Backup
10 Logging
10.1 Logging to an External Syslog Server
10.2 Log Levels
11 Managing Certificates
11.1 Certificate Characteristics
11.2 Deploying Custom Certificates
11.3 Rotating Automatically Generated Secrets
12 Preparing Microsoft Azure for SUSE Cloud Application Platform
12.1 Prerequisites
12.2 Create Resource Group and AKS Instance
12.3 Apply Pod Security Policies
12.4 Enable Swap Accounting
12.5 Create a Basic Load Balancer and Public IP Address
12.6 Configure Load Balancing and Network Security Rules
12.7 Example SUSE Cloud Application Platform Configuration File
13 Deploying SUSE Cloud Application Platform on Amazon EKS
13.1 Prerequisites
13.2 IAM Requirements for EKS
13.3 Disk Space
13.4 The Helm CLI and Tiller
13.5 Default Storage Class
13.6 Security Group rules
13.7 Find your kube.external_ips
13.8 Configuring and Deploying SUSE Cloud Application Platform
14 Installing SUSE Cloud Application Platform on OpenStack
14.1 Prerequisites
14.2 Create a New OpenStack Project
14.3 Deploy SUSE Cloud Application Platform
14.4 Bootstrap SUSE Cloud Application Platform
14.5 Growing the Root Filesystem
15 Running SUSE Cloud Application Platform on non-SUSE CaaS Platform Kubernetes Systems
15.1 Kubernetes Requirements
16 Minimal Installation for Testing
16.1 Prerequisites
16.2 Create hostpath Storage Class
16.3 Test Storage Class
16.4 Configuring the Minimal Test Deployment
16.5 Deploy with Helm
16.6 Install the Stratos Console
16.7 Updating SUSE Cloud Foundry, UAA, and Stratos
17 Troubleshooting
17.1 Using Supportconfig
17.2 Deployment is Taking Too Long
17.3 Deleting and Rebuilding a Deployment
17.4 Querying with Kubectl
A Appendix
A.1 Complete SCF values.yaml file
A.2 Complete UAA values.yaml file
B GNU Licenses
B.1 GNU Free Documentation License

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

SUSE Cloud Application Platform is a software platform for cloud-native application development, based on Cloud Foundry, with additional supporting services and components. The core of the platform is SUSE Cloud Foundry, a Cloud Foundry distribution for Kubernetes which runs on SUSE Linux Enterprise containers.

The Cloud Foundry code base provides the basic functionality. SUSE Cloud Foundry differentiates itself from other Cloud Foundry distributions by running in Linux containers managed by Kubernetes, rather than virtual machines managed with BOSH, for greater fault tolerance and lower memory use.

SUSE Cloud Foundry is designed to run on any Kubernetes cluster. This guide describes how to deploy it on SUSE Container as a Service (CaaS) Platform 2 and 3, and on other supported Kubernetes environments.

1 Required Background

To keep the scope of these guidelines manageable, certain technical assumptions have been made:

  • You have some computer experience and are familiar with common technical terms.

  • You are familiar with the documentation for your system and the network on which it runs.

  • You have a basic understanding of Linux systems.

2 Available Documentation

We provide HTML and PDF versions of our books in different languages. Documentation for our products is available at http://www.suse.com/documentation/, where you can also find the latest updates and browse or download the documentation in various formats.

The following documentation is available for this product:

Deployment Guide

The SUSE Cloud Application Platform deployment guide gives you details about installation and configuration of SUSE Cloud Application Platform along with a description of architecture and minimum system requirements.

3 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

4 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, Alt+F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • x86_64 This paragraph is only relevant for the AMD64/Intel 64 architecture. The arrows mark the beginning and the end of the text block.

    System z, POWER This paragraph is only relevant for the architectures z Systems and POWER. The arrows mark the beginning and the end of the text block.

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

5 About the Making of This Documentation

This documentation is written in SUSEDoc, a subset of DocBook 5. The XML source files were validated by jing (see https://code.google.com/p/jing-trang/), processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted through FOP from Apache Software Foundation. The open source tools and the environment used to build this documentation are provided by the DocBook Authoring and Publishing Suite (DAPS). The project's home page can be found at https://github.com/openSUSE/daps.

The XML source code of this documentation can be found at https://github.com/SUSE/doc-cap.

1 About SUSE Cloud Application Platform

1.1 New in Version 1.2

See the Release Notes for a list of changes, upgrade instructions, and known issues.

Note
Note

Reviewing the Release Notes should be a priority; it will prevent a lot of frustration.

1.2 SUSE Cloud Application Platform Overview

SUSE Cloud Application Platform is a software platform for cloud-native application deployment based on SUSE Cloud Foundry and Kubernetes. It serves different but complementary purposes for operators and application developers.

For operators, the platform is:

  • Easy to install, manage, and maintain

  • Secure by design

  • Fault tolerant and self-healing

  • Highly available for critical components

  • Built on industry-standard components

  • Free of single-vendor lock-in

For developers, the platform:

  • Allocates computing resources on demand via API or Web interface

  • Offers users a choice of language and Web framework

  • Gives access to databases and other data services

  • Emits and aggregates application log streams

  • Tracks resource usage for users and groups

  • Makes the software development workflow more efficient

The principal interface and API for deploying applications to SUSE Cloud Application Platform is SUSE Cloud Foundry. Most Cloud Foundry distributions run on virtual machines managed by BOSH. SUSE Cloud Foundry runs in SUSE Linux Enterprise containers managed by Kubernetes. Containerizing the components of the platform itself has these advantages:

  • Improves fault tolerance. Kubernetes monitors the health of all containers, and automatically restarts faulty containers faster than virtual machines can be restarted or replaced.

  • Reduces physical memory overhead. SUSE Cloud Foundry components deployed in containers consume substantially less memory, as host-level operations are shared between containers by Kubernetes.

SUSE Cloud Foundry packages upstream Cloud Foundry BOSH releases to produce containers and configurations which are deployed to Kubernetes clusters using Helm.

1.3 SUSE Cloud Application Platform Architecture

This guide details the steps for deploying SUSE Cloud Foundry on SUSE CaaS Platform, and on supported Kubernetes environments such as Microsoft Azure Kubernetes Service (AKS), and Amazon Elastic Container Service for Kubernetes (EKS).

Important
Important: Required Knowledge

Installing and administering SUSE Cloud Application Platform requires knowledge of Linux, Docker, Kubernetes, and your Kubernetes platform (e.g. SUSE CaaS Platform, AKS, EKS, OpenStack). You must plan resource allocation and network architecture by taking into account the requirements of your Kubernetes platform in addition to Cloud Application Platform requirements. Cloud Application Platform is a discrete component in your cloud stack, but it still requires knowledge of administering and troubleshooting the underlying stack.

SUSE CaaS Platform is a specialized application development and hosting platform built on the SUSE MicroOS container host operating system, container orchestration with Kubernetes, and Salt for automating installation and configuration.

A supported deployment includes SUSE Cloud Foundry installed on CaaS Platform, Amazon EKS, or Azure AKS. You also need a storage backend such as SUSE Enterprise Storage, a DNS/DHCP server, and an Internet connection: the installer downloads additional packages, and each Kubernetes worker downloads approximately 10GB of Docker images after installation.

A production deployment requires considerable resources. SUSE Cloud Application Platform includes an entitlement of SUSE CaaS Platform and SUSE Enterprise Storage. SUSE Enterprise Storage alone has substantial requirements; see the Tech Specs for details. SUSE CaaS Platform requires a minimum of four hosts: one admin and three Kubernetes nodes. SUSE Cloud Foundry is then deployed on the Kubernetes nodes. Four CaaS Platform nodes are not sufficient for a production deployment. Figure 1.1, “Minimal Example Production Deployment” describes a minimal production deployment with SUSE Cloud Foundry deployed on a Kubernetes cluster containing three Kubernetes masters and three workers, plus an ingress controller, administration workstation, DNS/DHCP server, and a SUSE Enterprise Storage cluster.

network architecture of minimal production setup
Figure 1.1: Minimal Example Production Deployment

The minimum 4-node deployment is sufficient for a compact test deployment, which you can run virtualized on a single workstation or laptop. Chapter 2, Production Installation details a basic production deployment, and Chapter 16, Minimal Installation for Testing describes a minimal test deployment.

Note that after you have deployed your cluster and start building and running applications, your applications may depend on buildpacks that are not bundled in the container images that ship with SUSE Cloud Foundry. These will be downloaded at runtime, when you are pushing applications to the platform. Some of these buildpacks may include components with proprietary licenses. (See Customizing and Developing Buildpacks to learn more about buildpacks, and creating and managing your own.)

The following figures illustrate the main components of SUSE Cloud Application Platform. Figure 1.2, “Cloud Platform Comparisons” shows a comparison of the basic cloud platforms: Infrastructure as a Service (IaaS), Container as a Service (CaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). SUSE CaaS Platform is a Container as a Service platform, and SUSE Cloud Application Platform is a PaaS.

Comparison of cloud platforms.
Figure 1.2: Cloud Platform Comparisons

Figure 1.3, “Containerized Platforms” illustrates how SUSE CaaS Platform and SUSE Cloud Application Platform containerize the platform itself.

SUSE CaaS Platform and SUSE Cloud Application Platform containerize the platform itself.
Figure 1.3: Containerized Platforms

Figure 1.4, “Main SUSE Cloud Application Platform Components” shows the relationships of the major SUSE Cloud Application Platform components. Cloud Application Platform runs on Kubernetes, which in turn runs on multiple platforms, from bare metal to various cloud stacks. Your applications run on Cloud Application Platform and provide services.

Relationships of the main Cloud Application Platform components.
Figure 1.4: Main SUSE Cloud Application Platform Components

Figure 1.5, “SUSE Cloud Application Platform Internal Services” provides a look at Cloud Application Platform's internal services and their functions.

Cloud Application Platform's internal services and their functions.
Figure 1.5: SUSE Cloud Application Platform Internal Services

2 Production Installation

A basic SUSE Cloud Application Platform production deployment requires at least eight hosts: one SUSE CaaS Platform admin server, three Kubernetes masters, three Kubernetes workers, and a DNS/DHCP server, plus a storage backend such as SUSE Enterprise Storage. This is a bare minimum, and actual requirements are likely to be much larger, depending on your workloads. You also need an external workstation for administering your cluster. You may optionally make your SUSE Cloud Application Platform instance highly available.

Note
Note: Remote Administration

You will run most of the commands in this chapter from a remote workstation rather than directly on the SUSE Cloud Application Platform nodes. Commands run from a workstation are shown with the unprivileged tux > prompt, while the root # prompt indicates a command run directly on a cluster node. Only a few tasks need to be performed directly on the cluster hosts.

The optional High Availability example in this chapter provides HA only for the SUSE Cloud Application Platform cluster, and not for CaaS Platform or SUSE Enterprise Storage. See Section 5.1, “Example High Availability Configuration”.

2.1 Prerequisites

Calculating hardware requirements is best done with an analysis of your expected workloads, traffic patterns, storage needs, and application requirements. The following examples are bare minimums to deploy a running cluster, and any production deployment will require more.

Minimum Hardware Requirements

8GB of memory for the CaaS Platform dashboard node and for each Kubernetes master node.

16GB of memory for each Kubernetes worker node.

40GB of disk space for the CaaS Platform dashboard node and for each Kubernetes master node.

60GB of disk space for each Kubernetes worker node.

Network Requirements

Your Kubernetes cluster needs its own domain and network. Each node should resolve to its hostname, and to its fully-qualified domain name. Typically, a Kubernetes cluster sits behind a load balancer, which also provides external access to the cluster. Another option is to use DNS round-robin to the Kubernetes workers to provide external access. It is also a common practice to create a wildcard DNS entry pointing to the domain, e.g. *.example.com, so that applications can be deployed without creating DNS entries for each application. This guide does not describe how to set up a load balancer or name services, as these depend on customer requirements and existing network architectures.
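
After your DNS is configured, a quick way to confirm that both the explicit and the wildcard entries resolve from your workstation is the host command; the hostnames below are placeholders based on the example.com domain used throughout this guide:

tux > host api.example.com
tux > host some-app.example.com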

SUSE CaaS Platform Deployment Guide: Network Requirements provides guidance on network and name services configurations.

Install SUSE CaaS Platform

SUSE Cloud Application Platform is supported on SUSE CaaS Platform 2 and 3. Installation is similar for both; see Section 2.1.1, “Installation on SUSE CaaS Platform 3” for special notes on setting up SUSE CaaS Platform 3.

After installing CaaS Platform 2 or CaaS Platform 3 and logging into the Velum Web interface, check the box to install Tiller (Helm's server component).

Install Tiller
Figure 2.1: Install Tiller

Take note of the Overlay network settings. These define the networks that are exclusive to the internal Kubernetes cluster communications. They are not externally accessible. You may assign different networks to avoid address collisions.

There is also a form for proxy settings; if you're not using a proxy then leave it empty.

The easiest way to create the Kubernetes nodes, after you create the admin node, is to use AutoYaST; see Installation with AutoYaST. Set up CaaS Platform with one admin node and at least three Kubernetes masters and three Kubernetes workers. You also need an Internet connection, as the installer downloads additional packages, and the Kubernetes workers will each download ~10GB of Docker images.

Assigning Roles to Nodes
Figure 2.2: Assigning Roles to Nodes

When you have completed Bootstrapping the Cluster, click the kubectl config button to download your new cluster's kubeconfig file. This takes you to a login screen; use the login you created to access Velum. Save the file as ~/.kube/config on your workstation. This file enables remote administration of your cluster.

Download kubeconfig
Figure 2.3: Download kubeconfig
Install kubectl

To install kubectl on a SLE 12 SP3 or 15 workstation, install the package kubernetes-client from the Public Cloud module. For other operating systems, follow the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/. After installation, run this command to verify that it is installed, and that it is communicating correctly with your cluster:

tux > kubectl version --short
Client Version: v1.9.1
Server Version: v1.9.8

As the client is on your workstation, and the server is on your cluster, reporting the server version verifies that kubectl is using ~/.kube/config and is communicating with your cluster.

The following kubectl examples query the cluster configuration and node status:

tux > kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://11.100.10.10:6443
  name: local
contexts:
[...]

tux > kubectl get nodes
NAME                  STATUS   ROLES     AGE  VERSION
ef254d3.example.com   Ready    Master    4h   v1.9.8
b70748d.example.com   Ready    <none>    4h   v1.9.8
cb77881.example.com   Ready    <none>    4h   v1.9.8
d028551.example.com   Ready    <none>    4h   v1.9.8
[...]
Install Helm

Deploying SUSE Cloud Application Platform differs from the usual method of installing software. Rather than installing packages with YaST or Zypper, you install the Helm client on your workstation, then use it to install the Kubernetes applications that make up SUSE Cloud Application Platform and to administer your cluster remotely. Helm is the Kubernetes package manager. The Helm client runs on your remote administration computer, and Tiller, Helm's server component, runs on your Kubernetes cluster.

Helm client version 2.6 or higher is required.

Warning
Warning: Initialize Only the Helm Client

When you initialize Helm on your workstation be sure to initialize only the client, as the server, Tiller, was installed during the CaaS Platform installation. You do not want two Tiller instances.

If the Linux distribution on your workstation doesn't provide the correct Helm version, or you are using some other platform, see the Helm Quickstart Guide for installation instructions and basic usage examples. Download the Helm binary into any directory that is in your PATH on your workstation, such as your ~/bin directory. Then initialize the client only:

tux > helm init --client-only
Creating /home/tux/.helm 
Creating /home/tux/.helm/repository 
Creating /home/tux/.helm/repository/cache 
Creating /home/tux/.helm/repository/local 
Creating /home/tux/.helm/plugins 
Creating /home/tux/.helm/starters 
Creating /home/tux/.helm/cache/archive 
Creating /home/tux/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/tux/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
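
You can optionally confirm that the installed client meets the minimum version requirement; with the --client flag, helm does not attempt to contact Tiller:

tux > helm version --client --short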

2.1.1 Installation on SUSE CaaS Platform 3

SUSE CaaS Platform 3 introduces PodSecurityPolicy (PSP) support. This change adds two new PSPs to CaaS Platform 3:

  • unprivileged, the default PSP, which is granted to all users and service accounts. It is intended as a reasonable compromise between the reality of Kubernetes workloads and the suse:caasp:psp:privileged role.

  • privileged, which applies few restrictions and is intended to be assigned only to highly trusted users and workloads.

See Pod Security Policies, Orgs, Spaces, Roles, and Permissions, and Identity Provider Workflow for more information.

Currently, all pods are created using the default service account in their namespaces, so the unprivileged PSP applies to them. Consequently, some pods cannot be created, because privileged mode and privilege escalation are disabled by default (error: cannot set allowPrivilegeEscalation to false and privileged to true).

To work around this, create a configuration file called cap-psp-rbac.yaml that enables both privileged mode and privilege escalation. You need a running CaaS Platform cluster, with kubectl configured and working. Apply this file before you deploy UAA and SCF.

Copy the following example into cap-psp-rbac.yaml:

---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: suse.cap.psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  # Privileged
  #default in suse.caasp.psp.unprivileged
  #privileged: false
  privileged: true
  # Volumes and File Systems
  volumes:
    # Kubernetes Pseudo Volume Types
    - configMap
    - secret
    - emptyDir
    - downwardAPI
    - projected
    - persistentVolumeClaim
    # Networked Storage
    - nfs
    - rbd
    - cephFS
    - glusterfs
    - fc
    - iscsi
    # Cloud Volumes
    - cinder
    - gcePersistentDisk
    - awsElasticBlockStore
    - azureDisk
    - azureFile
    - vsphereVolume
  allowedFlexVolumes: []
  # hostPath volumes are not allowed; pathPrefix must still be specified
  allowedHostPaths:      
    - pathPrefix: /opt/kubernetes-hostpath-volumes
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  #default in suse.caasp.psp.unprivileged
  #allowPrivilegeEscalation: false
  allowPrivilegeEscalation: true
  #default in suse.caasp.psp.unprivileged
  #defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: []
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: false
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is not used in CaaS Platform
    rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: suse:cap:psp
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['suse.cap.psp']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cap:clusterrole
roleRef:
  kind: ClusterRole
  name: suse:cap:psp
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: default
  namespace: uaa
- kind: ServiceAccount
  name: default
  namespace: scf
- kind: ServiceAccount
  name: default
  namespace: stratos

Apply it to your cluster with kubectl:

tux > kubectl create -f cap-psp-rbac.yaml
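
Optionally, verify that the new PodSecurityPolicy and RBAC objects exist before continuing; these are standard kubectl queries using the resource names from the example file above:

tux > kubectl get podsecuritypolicy suse.cap.psp
tux > kubectl get clusterrole suse:cap:psp
tux > kubectl get clusterrolebinding cap:clusterrole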

Then continue by deploying UAA and SCF.

2.2 Choose Storage Class

The Kubernetes cluster requires a persistent storage class for the databases to store persistent data. Your available storage classes depend on which storage cluster you are using (SUSE Enterprise Storage users, see SUSE CaaS Platform Integration with SES). After connecting your storage backend, use kubectl to list your available storage classes:

tux > kubectl get storageclasses

See Section 2.4, “Configure the SUSE Cloud Application Platform Production Deployment” to learn where to configure your storage class for SUSE Cloud Application Platform. See the Kubernetes document Persistent Volumes for detailed information on storage classes.

2.3 Test Storage Class

You may test that your storage class is properly configured before deploying SUSE Cloud Application Platform: create a persistent volume claim on your storage class, then verify that the claim is bound and that a volume has been created.

First copy the following configuration file, which in this example is named test-storage-class.yaml, substituting your own storageClassName:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-sc-persistent
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: persistent

Create your persistent volume claim:

tux > kubectl create -f test-storage-class.yaml
persistentvolumeclaim "test-sc-persistent" created

Check that the claim has been created, and that the status is bound:

tux > kubectl get pv,pvc
NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                        STORAGECLASS   REASON    AGE
pv/pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c   1Gi        RWO            Delete           Bound     default/test-sc-persistent   persistent               2m

NAME                     STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/test-sc-persistent   Bound     pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c   1Gi        RWO            persistent     2m

This verifies that your storage class is correctly configured. Delete your volume claims when you're finished:

tux > kubectl delete pv/pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c
persistentvolume "pvc-c464ed6a-3852-11e8-bd10-90b8d0c59f1c" deleted
tux > kubectl delete pvc/test-sc-persistent
persistentvolumeclaim "test-sc-persistent" deleted

If something goes wrong and your volume claims get stuck in pending status, you can force deletion with the --grace-period=0 option:

tux > kubectl delete pvc/test-sc-persistent --grace-period=0

2.4 Configure the SUSE Cloud Application Platform Production Deployment

Create a configuration file on your workstation for Helm to use. In this example it is called scf-config-values.yaml. (See the Release Notes for information on configuration changes.)

env:    
    # Enter the domain you created for your CAP cluster
    DOMAIN: example.com
    
    # UAA host and port
    UAA_HOST: uaa.example.com
    UAA_PORT: 2793

kube:
    # The IP address assigned to the kube node pointed to by the domain.
    external_ips: ["11.100.10.10"]
    
    # Run kubectl get storageclasses
    # to view your available storage classes
    storage_class: 
        persistent: "persistent"
        shared: "shared"
        
    # The registry the images will be fetched from. 
    # The values below should work for
    # a default installation from the SUSE registry.
    registry: 
        hostname: "registry.suse.com"
        username: ""
        password: ""
    organization: "cap"

    auth: rbac 

secrets:
    # Create a password for your CAP cluster
    CLUSTER_ADMIN_PASSWORD: password 
    
    # Create a password for your UAA client secret
    UAA_ADMIN_CLIENT_SECRET: password

Note
Note: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.
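
One way to generate strong random values for these passwords is with openssl; this is an optional suggestion, not a requirement of the installer:

tux > openssl rand -base64 32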

2.5 Deploy with Helm

The following list provides an overview of Helm commands to complete the deployment. Included are links to detailed descriptions.

  1. Download the SUSE Kubernetes charts repository (Section 2.6, “Install the Kubernetes charts repository”)

  2. Create the UAA and SCF namespaces (Section 2.7, “Create Namespaces”)

  3. Copy the storage secret of your storage cluster to the UAA and SCF namespaces (Section 2.8, “Copy SUSE Enterprise Storage Secret”)

  4. Deploy UAA (Section 2.9, “Deploy UAA”)

  5. Copy the UAA secret and certificate to the SCF namespace, deploy SCF (Section 2.10, “Deploy SCF”)

2.6 Install the Kubernetes charts repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME            URL                                             
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts                    
suse            https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                         VERSION DESCRIPTION
suse/cf                      2.13.3  A Helm chart for SUSE Cloud Foundry               
suse/cf-usb-sidecar-mysql    1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres 1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/console                 2.1.0   A Helm chart for deploying Stratos UI Console     
suse/uaa                     2.13.3  A Helm chart for SUSE UAA

2.7 Create Namespaces

Create the UAA (User Account and Authentication) and SCF (SUSE Cloud Foundry) namespaces:

tux > kubectl create namespace uaa
tux > kubectl create namespace scf
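
Verify that both namespaces were created:

tux > kubectl get namespace uaa scf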

2.8 Copy SUSE Enterprise Storage Secret

If you are using SUSE Enterprise Storage you must copy the Ceph admin secret to the UAA and SCF namespaces:

tux > kubectl get secret ceph-secret-admin -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "uaa"/' | kubectl create -f -

tux > kubectl get secret ceph-secret-admin -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "scf"/' | kubectl create -f -
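
To confirm that the secret was copied into both namespaces, query each one:

tux > kubectl get secret ceph-secret-admin --namespace uaa
tux > kubectl get secret ceph-secret-admin --namespace scf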

2.9 Deploy UAA

Use Helm to deploy the UAA (User Account and Authentication) server. You may create your own release --name:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful UAA deployment before going on to the next steps. You can monitor deployment progress with the watch command:

tux > watch -c 'kubectl get pods --all-namespaces'

When the status shows RUNNING for all of the UAA nodes, proceed to deploying SUSE Cloud Foundry. Press Ctrl+C to stop the watch command.

Important
Important: Some Pods Show Not Running

Some UAA and SCF pods perform only deployment tasks, and it is normal for them to show as unready after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz   0/1       Completed
          
tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1       Completed
post-deployment-setup-1-hnpln   0/1       Completed

2.10 Deploy SCF

First pass your UAA secret and certificate to SCF, then use Helm to install SUSE Cloud Foundry:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Now sit back and wait for the pods to come online:

tux > watch -c 'kubectl get pods --all-namespaces'

When all services are running use the Cloud Foundry command-line interface to log in to SUSE Cloud Foundry to deploy and manage your applications. (See Section 6.1, “Using the cf CLI with SUSE Cloud Application Platform”)

3 Installing the Stratos Web Console

3.1 Install Stratos with Helm

Stratos UI is a modern web-based management application for Cloud Foundry. It provides a graphical management console for both developers and system administrators. Install Stratos with Helm after all of the UAA and SCF pods are running. Start by preparing the environment:

tux > kubectl create namespace stratos

If you are using SUSE Enterprise Storage as your storage backend, copy the secret into the Stratos namespace.

tux > kubectl get secret ceph-secret-admin -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "stratos"/' | \
kubectl create -f -

You should already have the Stratos charts when you downloaded the SUSE charts repository (see Section 2.6, “Install the Kubernetes charts repository”). Search your Helm repository:

tux > helm search suse                                  
NAME                         VERSION DESCRIPTION
suse/cf                      2.13.3  A Helm chart for SUSE Cloud Foundry               
suse/cf-usb-sidecar-mysql    1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres 1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/console                 2.1.0   A Helm chart for deploying Stratos UI Console     
suse/uaa                     2.13.3  A Helm chart for SUSE UAA

Install Stratos. If you have not set a default storage class, you must specify it:

tux > helm install suse/console \
    --name susecf-console \
    --namespace stratos \
    --values scf-config-values.yaml \
    --set storageClass=persistent

Monitor progress:

tux > watch -c 'kubectl get pods --namespace stratos'
 Every 2.0s: kubectl get pods --namespace stratos
 
NAME                               READY     STATUS    RESTARTS   AGE
console-0                          3/3       Running   0          30m
console-mariadb-3697248891-5drf5   1/1       Running   0          30m

When all statuses show Ready, press Ctrl+C to exit and view your release information:

NAME:   susecf-console
LAST DEPLOYED: Tue Aug 14 11:53:28 2018
NAMESPACE: stratos
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                           TYPE    DATA  AGE
susecf-console-mariadb-secret  Opaque  2     2s
susecf-console-secret          Opaque  2     2s

==> v1/PersistentVolumeClaim
NAME                                  STATUS  VOLUME                                    CAPACITY  ACCESSMODES  STORAGECLASS    AGE
console-mariadb                       Bound   pvc-ef3a120d-3e76-11e8-946a-90b8d00d625f  1Gi       RWO          persistent      2s
susecf-console-upgrade-volume         Bound   pvc-ef409e41-3e76-11e8-946a-90b8d00d625f  20Mi      RWO          persistent      2s
susecf-console-encryption-key-volume  Bound   pvc-ef49b860-3e76-11e8-946a-90b8d00d625f  20Mi      RWO          persistent      2s

==> v1/Service
NAME                    CLUSTER-IP      EXTERNAL-IP    PORT(S)         AGE
susecf-console-mariadb  172.24.181.255  <none>         3306/TCP        2s
susecf-console-ui-ext   172.24.84.50    10.10.100.82   8443:32511/TCP  1s

==> v1beta1/Deployment
NAME             DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
console-mariadb  1        1        1           0          1s

==> v1beta1/StatefulSet
NAME     DESIRED  CURRENT  AGE
console  1        1        1s

In this example, pointing your web browser to https://example.com:8443 opens the console. Accept the warnings about the self-signed certificates and log in as admin with the password you created in scf-config-values.yaml. If you see an upgrade message, wait a few minutes and try again.

Stratos UI Cloud Foundry Console
Figure 3.1: Stratos UI Cloud Foundry Console

Another way to get the release name is with the helm list command, then query the release name to get its IP address and port number:

tux > helm list
NAME            REVISION  UPDATED                  STATUS   CHART           NAMESPACE
susecf-console  1         Tue Aug 14 11:53:28 2018 DEPLOYED console-2.0.0   stratos  
susecf-scf      1         Tue Aug 14 10:58:16 2018 DEPLOYED cf-2.11.0       scf      
susecf-uaa      1         Tue Aug 14 10:49:30 2018 DEPLOYED uaa-2.11.0      uaa 

tux > helm status susecf-console
LAST DEPLOYED: Tue Aug 14 11:53:28 2018
NAMESPACE: stratos
STATUS: DEPLOYED
[...]
==> v1/Service
NAME                    CLUSTER-IP      EXTERNAL-IP    PORT(S)         AGE
susecf-console-mariadb  172.24.181.255  <none>         3306/TCP        19m
susecf-console-ui-ext   172.24.84.50    10.10.100.82   8443:32511/TCP  19m
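
You can also query the console service directly with kubectl to find its external IP address and port; the service name susecf-console-ui-ext comes from the helm status output above:

tux > kubectl get service susecf-console-ui-ext --namespace stratos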

4 Upgrading SUSE Cloud Application Platform

4.1 Upgrading SCF, UAA, and Stratos

Maintenance updates are delivered as container images from the SUSE registry and applied with Helm. (See the Release Notes for additional upgrade information.) Check for available updates:

tux > helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "suse" chart repository
Update Complete. ⎈ Happy Helming!⎈

For the SUSE Cloud Application Platform 1.1 release, update your scf-config-values.yaml file with the changes for secrets handling and external IP addresses. (See Section 2.4, “Configure the SUSE Cloud Application Platform Production Deployment” for an example.)

Get your release and chart names (your releases may have different names than the examples), and then apply the updates:

tux > helm list
NAME            REVISION  UPDATED                  STATUS    CHART           NAMESPACE
susecf-console  1         Tue Aug 14 11:53:28 2018 DEPLOYED  console-2.0.0   stratos  
susecf-scf      1         Tue Aug 14 10:58:16 2018 DEPLOYED  cf-2.11.0       scf      
susecf-uaa      1         Tue Aug 14 10:49:30 2018 DEPLOYED  uaa-2.11.0      uaa

tux > helm repo list
NAME            URL                                             
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts                    
suse            https://kubernetes-charts.suse.com/             

tux > helm search suse
NAME                         VERSION DESCRIPTION
suse/cf                      2.13.3  A Helm chart for SUSE Cloud Foundry               
suse/cf-usb-sidecar-mysql    1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres 1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/console                 2.1.0   A Helm chart for deploying Stratos UI Console     
suse/uaa                     2.13.3  A Helm chart for SUSE UAA

Run the following commands to perform the upgrade. Wait for each command to complete before running the next command.

Note
Note: Important Changes

Take note of the new commands for extracting and using secrets and certificates.

tux > helm upgrade --recreate-pods susecf-uaa suse/uaa \
 --values scf-config-values.yaml

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade --recreate-pods susecf-scf suse/cf \
 --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"

tux > helm upgrade --recreate-pods susecf-console suse/console \
 --values scf-config-values.yaml
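
When the upgrades have finished, helm list should show an incremented REVISION and the new CHART version for each release:

tux > helm list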

5 SUSE Cloud Application Platform High Availability

5.1 Example High Availability Configuration

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The simplest method is to set the HA parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own sizing values.

5.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for each namespace describes which roles can be scaled, and the scaling options for each role. You may use helm inspect to read the sizing: section in the Helm charts:

tux > helm inspect suse/uaa | less +/sizing:
tux > helm inspect suse/cf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section. The following example is for the UAA namespace.

Note
Note: mysql_proxy does not scale

Currently mysql_proxy does not scale. This will change in the SUSE Cloud Application Platform 1.2 release, and then scaling will be supported.

tux > helm inspect values suse/uaa | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'
 60:    # The mysql role can scale between 1 and 3 instances.
 61:    # For high availability it needs at least 2 instances.
 62:    count: 1
 93:    # The mysql-proxy role can scale between 1 and 3 instances.
 94:    # For high availability it needs at least 2 instances.
 95:    count: 1
 117:    # The secret-generation role cannot be scaled.
 118:    count: 1
 140:  #   for managing user accounts and for registering OAuth2 clients, as well as
 149:    # The uaa role can scale between 1 and 65535 instances.
 150:    count: 1
 230:  # Increment this counter to rotate all generated secrets
 231:  secrets_generation_counter: 1

The default values.yaml files are also included in this guide at Appendix A, Appendix.

5.1.2 Simple High Availability Configuration

The simplest way to make your SUSE Cloud Application Platform deployment highly available is to set HA to true in your deployment configuration file, e.g. scf-config-values.yaml:

config:
  # Flag to activate high-availability mode
  HA: true

Or, you may pass it as a command-line option when you are deploying with Helm, for example:

tux > helm install suse/uaa \
 --name susecf-uaa \
 --namespace uaa \
 --values scf-config-values.yaml \
 --set config.HA=true

This changes all roles with a default size of 1 to the minimum required for a High Availability deployment. With this method it is not possible to customize individual sizing values.

5.1.3 Example Custom High Availability Configurations

The following two example High Availability configuration files are for the UAA and SCF namespaces. The example values are not meant to be copied, as these depend on your particular deployment and requirements. When using custom sizing files, do not also set the config.HA flag to true (see Section 5.1.2, “Simple High Availability Configuration”).

The first example is for the UAA namespace, uaa-sizing.yaml:

sizing:
  mysql:
    count: 3
  uaa:
    count: 2

The second example is for SCF, scf-sizing.yaml.

sizing:
  api:
    count: 6
  cc_clock:
    count: 3
  cc_uploader:
    count: 3
  cc_worker:
    count: 6
  cf_usb:
    count: 3
  consul:
    count: 1
  diego_access:
    count: 3
  diego_api:
    count: 3
  diego_brain:
    count: 2
  diego_cell:
    count: 3
  diego_locket:
    count: 3
  doppler:
    count: 4
  loggregator:
    count: 7
  mysql:
    count: 3
  nats:
    count: 2
  nfs_broker:
    count: 3
  postgres:
    count: 3
  router:
    count: 13
  routing_api:
    count: 3
  syslog_adapter:
    count: 7
  syslog_rlp:
    count: 7
  syslog_scheduler:
    count: 3
  tcp_router:
    count: 3

After creating your configuration files, follow the steps in Section 2.4, “Configure the SUSE Cloud Application Platform Production Deployment” until you get to Section 2.9, “Deploy UAA”. Then deploy UAA with this command:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml \
--values uaa-sizing.yaml

When the status shows RUNNING for all of the UAA nodes, deploy SCF with these commands:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"    

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--values scf-sizing.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

In an HA deployment, pods with the following roles run in both passive and ready states; there should always be at least one pod in each role that is ready.

  • diego-brain

  • diego-database

  • routing-api

You can confirm this by looking at the logs inside the container. Look for .consul-lock.acquiring-lock.
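
For example, a hypothetical spot check on one of the diego-brain pods; the pod name diego-brain-0 is an assumption, so list the pods first to find the actual names:

tux > kubectl get pods --namespace scf | grep diego-brain
tux > kubectl logs diego-brain-0 --namespace scf | grep consul-lock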

Some roles follow an active/passive scaling model, meaning all pods except the active one will be shown as NOT READY by Kubernetes. This is appropriate and expected behavior.

5.1.4 Upgrading a non-High Availability Deployment to High Availability

You may make a non-High Availability deployment highly available by upgrading with Helm:

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--values uaa-sizing.yaml

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"    

tux > helm upgrade susecf-scf suse/cf \
--values scf-config-values.yaml \
--values scf-sizing.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

This may take a long time, and your cluster will be unavailable until the upgrade is complete.

6 Deploying and Managing Applications with the Cloud Foundry Client

6.1 Using the cf CLI with SUSE Cloud Application Platform

The Cloud Foundry command line interface (cf CLI) is for deploying and managing your applications. You may use it for all the orgs and spaces that you are a member of. Install the client on a workstation for remote administration of your SUSE Cloud Foundry instances.

The complete guide is at Using the Cloud Foundry Command Line Interface, and source code with a demo video is on GitHub at Cloud Foundry CLI.

The following examples demonstrate some of the commonly-used commands. The first task is to log into your new SUSE Cloud Foundry instance. When your installation completes it prints a welcome screen with the information you need to access it.

       NOTES:
    Welcome to your new deployment of SCF.

    The endpoint for use by the `cf` client is
        https://api.example.com

    To target this endpoint run
        cf api --skip-ssl-validation https://api.example.com

    Your administrative credentials are:
        Username: admin
        Password: password

    Please remember, it may take some time for everything to come online.

    You can use
        kubectl get pods --namespace scf

    to spot-check if everything is up and running, or
        watch -c 'kubectl get pods --namespace scf'

    to monitor continuously.

You can display this message anytime with this command:

tux > helm status $(helm list | awk '/cf-([0-9]).([0-9]).*/{print$1}') | \
sed -n -e '/NOTES/,$p'

You need to provide the API endpoint of your SUSE Cloud Application Platform instance to log in. The API endpoint is the DOMAIN value you provided in scf-config-values.yaml, with the api. prefix, as shown in the welcome screen above. Set your endpoint, and use --skip-ssl-validation if you have self-signed SSL certificates. The login prompt asks for an email address, but you must enter admin instead (this cannot be changed to a different username, though you may create additional users). The password is the one you created in scf-config-values.yaml:

tux > cf login --skip-ssl-validation  -a https://api.example.com 
API endpoint: https://api.example.com

Email> admin

Password> 
Authenticating...
OK

Targeted org system

API endpoint:   https://api.example.com (API version: 2.101.0)
User:           admin
Org:            system
Space:          No space targeted, use 'cf target -s SPACE'

cf help displays a list of commands and options. cf help [command] provides information on specific commands.

You may pass in your credentials and set the API endpoint in a single command:

tux > cf login -u admin -p password --skip-ssl-validation -a https://api.example.com

Log out with cf logout.

Change the admin password:

tux > cf passwd
Current Password>
New Password> 
Verify Password> 
Changing password...
OK
Please log in again

View your current API endpoint, user, org, and space:

tux > cf target

Switch to a different org or space:

tux > cf target -o org
tux > cf target -s space

List all apps in the current space:

tux > cf apps

Query the health and status of a particular app:

tux > cf app appname

View app logs. The first example tails the log of a running app. The --recent option dumps recent logs instead of tailing, which is useful for stopped and crashed apps:

tux > cf logs appname
tux > cf logs --recent appname

Restart all instances of an app:

tux > cf restart appname

Restart a single instance of an app, identified by its index number; the instance restarts with the same index number:

tux > cf restart-app-instance appname index

After you have set up a service broker (see Chapter 8, Setting up and Using a Service Broker Sidecar), create new services:

tux > cf create-service service-name default mydb

Then you may bind a service instance to an app:

tux > cf bind-service appname service-instance

The most-used command is cf push, for pushing new apps and changes to existing apps.

tux > cf push new-app -b buildpack
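
Pushes are often driven by a manifest file. The following is a minimal, hypothetical manifest.yml sketch; the application name, memory limit, instance count, and buildpack are placeholders you would adjust for your application:

applications:
- name: my-app
  memory: 256M
  instances: 1
  buildpack: ruby_buildpack

With a manifest in place, point cf push at it explicitly:

tux > cf push -f manifest.yml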

7 Managing Passwords

The various components of SUSE Cloud Application Platform authenticate to each other using passwords that are automatically managed by the Cloud Application Platform secrets-generator. The only passwords managed by the cluster administrator are passwords for human users. The administrator may create and remove user logins, but cannot change user passwords.

  • The cluster administrator password is initially defined in the deployment's values.yaml file with CLUSTER_ADMIN_PASSWORD

  • The Stratos Web UI provides a form for users, including the administrator, to change their own passwords

  • User logins are created (and removed) with the Cloud Foundry Client, cf CLI

7.1 Password Management with the Cloud Foundry Client

The administrator cannot change other users' passwords. Only users may change their own passwords, and password changes require the current password:

tux > cf passwd
Current Password>
New Password> 
Verify Password> 
Changing password...
OK
Please log in again

The administrator can create a new user:

tux > cf create-user username password

and delete a user:

tux > cf delete-user username

Use the cf CLI to assign space and org roles. Run cf help -a for a complete command listing, or see Creating and Managing Users with the cf CLI.
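
For example, these hypothetical commands grant a user an org role and a space role; the user, org, and space names are placeholders:

tux > cf set-org-role username example-org OrgManager
tux > cf set-space-role username example-org example-space SpaceDeveloper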

7.2 Changing User Passwords with Stratos

The Stratos Web UI provides a form for changing passwords on your profile page. Click the overflow menu button on the top right to access your profile, then click the edit button on your profile page. You can manage your password and username on this page.

Stratos Profile Page
Figure 7.1: Stratos Profile Page
Stratos Edit Profile Page
Figure 7.2: Stratos Edit Profile Page

8 Setting up and Using a Service Broker Sidecar

The Open Service Broker API provides your SUSE Cloud Foundry applications with access to external dependencies and platform-level capabilities, such as databases, filesystems, external repositories, and messaging systems. These resources are called services. Services are created, used, and deleted as needed, and provisioned on demand.

8.1 Prerequisites

The following examples demonstrate how to deploy service brokers for MySQL and PostgreSQL with Helm, using charts from the SUSE repository. You must have the following prerequisites:

  • A working SUSE Cloud Application Platform deployment with Helm and the Cloud Foundry command line interface (cf CLI).

  • An Application Security Group (ASG) for applications to reach external databases. (See Understanding Application Security Groups.)

  • An external MySQL or PostgreSQL installation with account credentials that allow creating and deleting databases and users.

For testing purposes you may create an insecure security group:

tux > echo > "internal-services.json" '[{ "destination": "0.0.0.0/0", "protocol": "all" }]'
tux > cf create-security-group internal-services-test internal-services.json
tux > cf bind-running-security-group internal-services-test
tux > cf bind-staging-security-group internal-services-test

You may apply an ASG later, after testing. All running applications must be restarted to use the new security group.
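
To review the security groups in your deployment, and to inspect the test group created above, use the cf CLI:

tux > cf security-groups
tux > cf security-group internal-services-test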

8.2 Deploying on CaaS Platform 3

If you are deploying SUSE Cloud Application Platform on CaaS Platform 3, see Section 2.1.1, “Installation on SUSE CaaS Platform 3” for important information on applying the required PodSecurityPolicy (PSP) to your deployment. You must also apply the PSP to your new service brokers.

Take the example configuration file, cap-psp-rbac.yaml, in Section 2.1.1, “Installation on SUSE CaaS Platform 3”, and append these lines to the end, using your own namespace name for your new service broker:

- kind: ServiceAccount
  name: default
  namespace: mysql-sidecar

Then apply the updated PSP configuration, before you deploy your new service broker, with this command:

tux > kubectl apply -f cap-psp-rbac.yaml

kubectl apply updates an existing deployment. After applying the PSP, proceed to configuring and deploying your service broker.

8.3 Configuring the MySQL Deployment

Start by extracting the name of the secrets object in the uaa namespace, and the internal certificates from the uaa and scf namespaces, with these commands. These output the complete certificates. Substitute your secrets name if it is different from the example:

tux > kubectl get pods --namespace uaa \
 -o jsonpath='{.items[*].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}'
 secrets-2.8.0-1

tux > kubectl get secret -n scf secrets-2.8.0-1 -o jsonpath='{.data.internal-ca-cert}' | base64 -d
 -----BEGIN CERTIFICATE-----
 MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
 [...]
 xC8x/+zT0QkvcRJBio5gg670+25KJQ==
 -----END CERTIFICATE-----
 
tux > kubectl get secret -n uaa secrets-2.8.0-1 -o jsonpath='{.data.internal-ca-cert}' | base64 -d
 -----BEGIN CERTIFICATE-----
 MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
 [...]
 to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
 -----END CERTIFICATE-----

You will copy these certificates into your configuration file as shown below.
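
If you prefer not to copy the certificate text by hand, you can first capture it into shell variables and print the variables while filling in the file below. This is only a convenience sketch, reusing the example secrets name secrets-2.8.0-1; SCF_CA_CERT and UAA_CA_CERT are arbitrary variable names:

tux > SCF_CA_CERT="$(kubectl get secret -n scf secrets-2.8.0-1 -o jsonpath='{.data.internal-ca-cert}' | base64 -d)"
tux > UAA_CA_CERT="$(kubectl get secret -n uaa secrets-2.8.0-1 -o jsonpath='{.data.internal-ca-cert}' | base64 -d)"

Remember to indent the pasted certificate lines consistently, as YAML literal blocks are whitespace-sensitive.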

Create a values.yaml file. The following example is called usb-config-values.yaml. Modify the values to suit your SUSE Cloud Application Platform installation.

env:
  # Database access credentials
  SERVICE_MYSQL_HOST: mysql.example.com
  SERVICE_MYSQL_PORT: 3306
  SERVICE_MYSQL_USER: mysql-admin-user
  SERVICE_MYSQL_PASS: mysql-admin-password

  # CAP access credentials, from your original deployment configuration 
  # (see Section 2.4, “Configure the SUSE Cloud Application Platform Production Deployment”)
  CF_ADMIN_USER: admin
  CF_ADMIN_PASSWORD: password
  CF_DOMAIN: example.com
  
  # Copy the certificates you extracted above, as shown in these
  # abbreviated examples, prefaced with the pipe character
  
  # SCF cert
  CF_CA_CERT: |
   -----BEGIN CERTIFICATE-----
   MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
   [...]
   xC8x/+zT0QkvcRJBio5gg670+25KJQ==
   -----END CERTIFICATE-----
   
  # UAA cert
  UAA_CA_CERT: |
   -----BEGIN CERTIFICATE-----
   MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
   [...]
   to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
   -----END CERTIFICATE-----

kube:
  organization: cap
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

8.4 Deploying the MySQL Chart

The 1.1 release of SUSE Cloud Application Platform includes charts for MySQL and PostgreSQL (see Section 2.6, “Install the Kubernetes charts repository” for information on managing your Helm repository):

tux > helm search suse
NAME                         VERSION DESCRIPTION
suse/cf                      2.13.3  A Helm chart for SUSE Cloud Foundry               
suse/cf-usb-sidecar-mysql    1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres 1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/console                 2.1.0   A Helm chart for deploying Stratos UI Console     
suse/uaa                     2.13.3  A Helm chart for SUSE UAA

Create a namespace for your MySQL sidecar:

tux > kubectl create namespace mysql-sidecar

Install the MySQL Helm chart:

tux > helm install suse/cf-usb-sidecar-mysql \
  --devel \
  --name mysql-service \
  --namespace mysql-sidecar \
  --set "env.SERVICE_LOCATION=http://cf-usb-sidecar-mysql.mysql-sidecar:8081" \
  --values usb-config-values.yaml \
  --wait

Wait for the new pods to become ready:

tux > watch kubectl get pods --namespace=mysql-sidecar

Confirm that the new service has been added to your SUSE Cloud Application Platform installation:

tux > cf marketplace

8.5 Create and Bind a MySQL Service

To create a new service instance, use the Cloud Foundry command line client:

tux > cf create-service mysql default service_instance_name

You may replace service_instance_name with any name you prefer.

Bind the service instance to an application:

tux > cf bind-service my_application service_instance_name
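
Applications normally read their service credentials from the environment at startup, so restage (or restart) the application after binding; for example:

tux > cf restage my_application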

8.6 Deploying the PostgreSQL Chart

The PostgreSQL configuration is slightly different from the MySQL configuration. The database-specific keys are named differently, and the PostgreSQL configuration additionally requires the SERVICE_POSTGRESQL_SSLMODE key.

env:
  # Database access credentials
  SERVICE_POSTGRESQL_HOST: postgres.example.com
  SERVICE_POSTGRESQL_PORT: 5432
  SERVICE_POSTGRESQL_USER: pgsql-admin-user
  SERVICE_POSTGRESQL_PASS: pgsql-admin-password

  # The SSL connection mode when connecting to the database. For a list of
  # valid values, please see https://godoc.org/github.com/lib/pq
  SERVICE_POSTGRESQL_SSLMODE: disable

  # CAP access credentials, from your original deployment configuration
  # (see Section 2.4, “Configure the SUSE Cloud Application Platform Production Deployment”)
  CF_ADMIN_USER: admin
  CF_ADMIN_PASSWORD: password
  CF_DOMAIN: example.com

  # Copy the certificates you extracted above, as shown in these
  # abbreviated examples, prefaced with the pipe character

  # SCF certificate
  CF_CA_CERT: |
   -----BEGIN CERTIFICATE-----
   MIIE8jCCAtqgAwIBAgIUT/Yu/Sv4UHl5zHZYZKCy5RKJqmYwDQYJKoZIhvcNAQEN
   [...]
   xC8x/+zT0QkvcRJBio5gg670+25KJQ==
   -----END CERTIFICATE-----

  # UAA certificate
  UAA_CA_CERT: |
   -----BEGIN CERTIFICATE-----
   MIIE8jCCAtqgAwIBAgIUSI02lj0a0InLb/zMrjNgW5d8EygwDQYJKoZIhvcNAQEN
   [...]
   to2GI8rPMb9W9fd2WwUXGEHTc+PqTg==
   -----END CERTIFICATE-----

  SERVICE_TYPE: postgres

kube:
  organization: cap
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""

Create a namespace and install the chart:

tux > kubectl create namespace postgres-sidecar

tux > helm install suse/cf-usb-sidecar-postgres \
  --devel \
  --name postgres-service \
  --namespace postgres-sidecar \
  --set "env.SERVICE_LOCATION=http://cf-usb-sidecar-postgres.postgres-sidecar:8081" \
  --values usb-config-values.yaml \
  --wait

Then follow the same steps as for the MySQL chart.

8.7 Removing Service Broker Sidecar Deployments

To correctly remove sidecar deployments, perform the following steps in order.

  • Unbind any applications using instances of the service, and then delete those instances:

    tux > cf unbind-service my_app my_service_instance
    tux > cf delete-service my_service_instance
  • Install the CF-USB CLI plugin for the Cloud Foundry CLI from https://github.com/SUSE/cf-usb-plugin/releases/, for example:

    tux > cf install-plugin \
     https://github.com/SUSE/cf-usb-plugin/releases/download/1.0.0/cf-usb-plugin-1.0.0.0.g47b49cd-linux-amd64
  • Configure the Cloud Foundry USB CLI plugin, using the domain you created for your SUSE Cloud Foundry deployment:

    tux > cf usb-target https://usb.example.com
  • Remove the services:

    tux > cf usb delete-driver-endpoint "http://cf-usb-sidecar-mysql.mysql-sidecar:8081"
  • Find your release name, then delete the release:

    tux > helm list
    NAME           REVISION UPDATED                   STATUS    CHART                      NAMESPACE
    susecf-console 1        Wed Aug 14 08:35:58 2018  DEPLOYED  console-2.0.0              stratos  
    susecf-scf     1        Tue Aug 14 12:24:36 2018  DEPLOYED  cf-2.11.0                  scf      
    susecf-uaa     1        Tue Aug 14 12:01:17 2018  DEPLOYED  uaa-2.11.0                 uaa
    mysql-service  1        Mon May 21 11:40:11 2018  DEPLOYED  cf-usb-sidecar-mysql-1.0.1 mysql-sidecar
    
    tux > helm delete --purge mysql-service

9 Backup and Restore

cf-plugin-backup backs up and restores your cloud controller database (CCDB), using the Cloud Foundry command line interface (cf CLI). (See Section 6.1, “Using the cf CLI with SUSE Cloud Application Platform”.)

cf-plugin-backup is not a general-purpose backup and restore plugin. It is designed to save the state of a SUSE Cloud Foundry instance before making changes to it. If the changes cause problems, use cf-plugin-backup to restore the instance from scratch. Do not use it to restore to a non-pristine SUSE Cloud Foundry instance. Some of the limitations for applying the backup to a non-pristine SUSE Cloud Foundry instance are:

  • Application configuration is not restored to running applications, as the plugin does not have the ability to determine which applications should be restarted to load the restored configurations.

  • User information is managed by the User Account and Authentication (UAA) server, not the cloud controller (CC). As the plugin talks only to the CC it cannot save full user information, nor restore users. Saving and restoring users must be performed separately, and user restoration must be performed before the backup plugin is invoked.

  • The set of available stacks is part of the SUSE Cloud Foundry instance setup, and is not part of the CC configuration. Trying to restore applications using stacks not available on the target SUSE Cloud Foundry instance will fail. Setting up the necessary stacks must be performed separately before the backup plugin is invoked.

  • Buildpacks are not saved. Applications using custom buildpacks not available on the target SUSE Cloud Foundry instance will not be restored. Custom buildpacks must be managed separately, and relevant buildpacks must be in place before the affected applications are restored.

9.1 Installing the cf-plugin-backup

Download the plugin from cf-plugin-backup/releases.

Then install it with cf, using the name of the plugin binary that you downloaded:

tux > cf install-plugin cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64
 Attention: Plugins are binaries written by potentially untrusted authors.
 Install and use plugins at your own risk.
 Do you want to install the plugin 
 backup-plugin/cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64? [yN]: y
 Installing plugin backup...
 OK
 Plugin backup 1.0.8 successfully installed.

Verify installation by listing installed plugins:

tux > cf plugins
 Listing installed plugins...

 plugin   version   command name      command help
 backup   1.0.8     backup-info       Show information about the current snapshot
 backup   1.0.8     backup-restore    Restore the CloudFoundry state from a 
  backup created with the snapshot command
 backup   1.0.8     backup-snapshot   Create a new CloudFoundry backup snapshot 
  to a local file

 Use 'cf repo-plugins' to list plugins in registered repos available to install.

9.2 Using cf-plugin-backup

The plugin has three commands:

  • backup-info

  • backup-snapshot

  • backup-restore

View the online help for any command, like this example:

tux >  cf backup-info --help
 NAME:
   backup-info - Show information about the current snapshot

 USAGE:
   cf backup-info

Create a backup of your SUSE Cloud Application Platform data and applications. The command outputs progress messages until it is completed:

tux > cf backup-snapshot   
 2018/08/18 12:48:27 Retrieving resource /v2/quota_definitions
 2018/08/18 12:48:30 org quota definitions done
 2018/08/18 12:48:30 Retrieving resource /v2/space_quota_definitions
 2018/08/18 12:48:32 space quota definitions done
 2018/08/18 12:48:32 Retrieving resource /v2/organizations
 [...]

Your Cloud Application Platform data is saved in the current directory in cf-backup.json, and application data in the app-bits/ directory.

View the current backup:

tux > cf backup-info
 - Org  system

Restore from backup:

tux > cf backup-restore

There are two additional restore options: --include-security-groups and --include-quota-definitions.
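
For example, to restore security groups and quota definitions along with the rest of the snapshot:

tux > cf backup-restore --include-security-groups --include-quota-definitions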

9.3 Scope of Backup

The following table lists the scope of the cf-plugin-backup backup. Organization and space users are backed up at the SUSE Cloud Application Platform level. The user account in UAA/LDAP, the service instances and their application bindings, and buildpacks are not backed up. The sections following the table go into more detail.

Scope                 Restore
Orgs                  Yes
Org auditors          Yes
Org billing-manager   Yes
Quota definitions     Optional
Spaces                Yes
Space developers      Yes
Space auditors        Yes
Space managers        Yes
Apps                  Yes
App binaries          Yes
Routes                Yes
Route mappings        Yes
Domains               Yes
Private domains       Yes
Stacks                not available
Feature flags         Yes
Security groups       Optional
Custom buildpacks     No

cf backup-info reads the cf-backup.json snapshot file found in the current working directory, and reports summary statistics on the content.

cf backup-snapshot extracts and saves the following information from the CC into a cf-backup.json snapshot file. Note that it does not save user information, but only the references needed for the roles. The full user information is handled by the UAA server, and the plugin talks only to the CC. The following list summarizes the data that the snapshot contains.

  • Org Quota Definitions

  • Space Quota Definitions

  • Shared Domains

  • Security Groups

  • Feature Flags

  • Application droplets (zip files holding the staged app)

  • Orgs

    • Spaces

      • Applications

      • Users' references (role in the space)

cf backup-restore reads the cf-backup.json snapshot file found in the current working directory, and then talks to the targeted SUSE Cloud Foundry instance to upload the following information, in the specified order:

  • Shared domains

  • Feature flags

  • Quota Definitions (iff --include-quota-definitions)

  • Orgs

    • Space Quotas (iff --include-quota-definitions)

    • UserRoles

    • (private) Domains

    • Spaces

      • UserRoles

      • Applications (+ droplet)

        • Bound Routes

      • Security Groups (iff --include-security-groups)

The following list provides more details of each action.

Shared Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Feature Flags

Attempts to update flags from the backup.

Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

Orgs

Attempts to create orgs from the backup. Attempts to update existing orgs from the backup.

Space Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

User roles

Expects the referenced users to exist. Restoring a role fails when the user is already associated with the org in the given role.

(private) Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Spaces

Attempts to create spaces from the backup. Attempts to update existing spaces from the backup.

User roles

Expects the referenced users to exist. Restoring a role fails when the user is already associated with the space in the given role.

Apps

Attempts to create apps from the backup, and to update existing apps from the backup (memory, instances, buildpack, state, and so on).

Security groups

Existing groups are overwritten from the backup.

10 Logging

There are two types of logs in a deployment of Cloud Application Platform: application logs and component logs.

  • Application logs provide information specific to a given application that has been deployed to your Cloud Application Platform cluster and can be accessed through:

    • The cf CLI using the cf logs command (see the example following this list)

    • The application's log stream within the Stratos console

  • Access to logs for a given component of your Cloud Application Platform deployment can be obtained by:

    • The kubectl logs command

      The following example retrieves the logs of the router-0 pod in the scf namespace:

      tux > kubectl logs --namespace scf router-0
    • Direct access to the log files using the following:

      1. Open a shell to the container of the component using the kubectl exec command

      2. Navigate to the log directory at /var/vcap/sys/log, where subdirectories contain the log files.

        tux > kubectl exec -it --namespace scf router-0 /bin/bash
        
        router/0:/# cd /var/vcap/sys/log
        
        router/0:/var/vcap/sys/log# ls -R
        .:
        gorouter  metron_agent
        
        ./gorouter:
        access.log  gorouter.err.log  gorouter.log  post-start.err.log	post-start.log
        
        ./metron_agent:
        metron.log
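
The application log stream mentioned in the list above can be viewed with the cf CLI; my_app is a placeholder application name. The first command follows the live stream, the second dumps recent log lines:

tux > cf logs my_app
tux > cf logs my_app --recent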

10.1 Logging to an External Syslog Server

Cloud Application Platform supports sending the cluster's log data to external logging services where additional processing and analysis can be performed.

10.1.1 Configuring Cloud Application Platform

In your scf-config-values.yaml file add the following configuration values to the env: section. The example values below are configured for an external ELK stack.

env:
    SCF_LOG_HOST: elk.example.com
    SCF_LOG_PORT: 5001
    SCF_LOG_PROTOCOL: "tcp"
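
To apply the change to an existing deployment, run helm upgrade; a sketch, assuming the release name susecf-scf and the CA_CERT variable populated as shown in Section 11.3:

tux > helm upgrade susecf-scf suse/cf \
 --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"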

10.1.2 Example using the ELK Stack

The ELK stack is an example of an external syslog server where log data can be sent to for log management. The ELK stack consists of Elasticsearch, Logstash, and Kibana.

10.1.2.1 Prerequisites

Java 8 is required by both Elasticsearch and Logstash.

10.1.2.2 Installing and Configuring Elasticsearch

See installing Elasticsearch to find available installation methods.

After installation, modify the config file /etc/elasticsearch/elasticsearch.yml to set the following value.

network.host: localhost

10.1.2.3 Installing and Configuring Logstash

See installing Logstash to find available installation methods.

After installation, create the configuration file /etc/logstash/conf.d/00-scf.conf and add the following content. Take note of the port used in the input section. This value needs to match the value of the SCF_LOG_PORT property in your scf-config-values.yaml file.

input {
  tcp {
    port => 5001
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "scf-%{+YYYY.MM.dd}"
  }
}

See input plugins and output plugins for additional configuration options as well as other plugins available. For this example, we will demonstrate the flow of data through the stack, but filter plugins can also be specified to perform processing of the log data.
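
To verify the pipeline end to end, you can send a test line to the Logstash input port and then list the Elasticsearch indices; a sketch, assuming Logstash and Elasticsearch run on the local host and the nc utility is installed:

tux > echo "logstash test message" | nc localhost 5001
tux > curl 'http://localhost:9200/_cat/indices?v'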

10.1.2.4 Installing and Configuring Kibana

See installing Kibana to find available installation methods.

No configuration changes are required at this point. Refer to the Kibana configuration settings documentation for additional properties that you can specify in your kibana.yml file.

10.2 Log Levels

The log level is configured through the scf-config-values.yaml file by using the LOG_LEVEL property found in the env: section. The following are the log levels available along with examples of log entries at the given level.

  • off

  • fatal

  • error

    <11>1 2018-08-21T17:59:48.321059+00:00 api-0 vcap.cloud_controller_ng - - -  {"timestamp":1534874388.3206334,"message":"Mysql2::Error: MySQL server has gone away: SELECT count(*) AS `count` FROM `tasks` WHERE (`state` = 'RUNNING') LIMIT 1","log_level":"error","source":"cc.db","data":{},"thread_id":47367387197280,"fiber_id":47367404488760,"process_id":3400,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/sequel-4.49.0/lib/sequel/database/logging.rb","lineno":88,"method":"block in log_each"}
  • warn

    <12>1 2018-08-21T18:49:37.651186+00:00 api-0 vcap.cloud_controller_ng - - -  {"timestamp":1534877377.6507676,"message":"Invalid bearer token: #<CF::UAA::InvalidSignature: Signature verification failed> [\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/cf-uaa-lib-3.14.3/lib/uaa/token_coder.rb:118:in `decode'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/cf-uaa-lib-3.14.3/lib/uaa/token_coder.rb:212:in `decode_at_reference_time'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:70:in `decode_token_with_key'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:58:in `block in decode_token_with_asymmetric_key'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:56:in `each'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:56:in `decode_token_with_asymmetric_key'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb:29:in `decode_token'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/security/security_context_configurer.rb:22:in `decode_token'\", \"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/security/security_context_configurer.rb:10:in `configure'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/security_context_setter.rb:12:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/vcap_request_id.rb:15:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:49:in `call_app'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:14:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_metrics.rb:12:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/rack-1.6.9/lib/rack/builder.rb:153:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:86:in `block in pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:84:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:84:in `pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/thin-1.7.0/lib/thin/connection.rb:50:in `block in process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:1067:in `block in spawn_threadpool'\"]","log_level":"warn","source":"cc.uaa_token_decoder","data":{"request_guid":"f3e25c45-a94a-4748-7ccf-5a72600fbb17::774bdb79-5d6a-4ccb-a9b8-f4022afa3bdd"},"thread_id":47339751566100,"fiber_id":47339769104800,"process_id":3245,"file":"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/uaa/uaa_token_decoder.rb","lineno":35,"method":"rescue in decode_token"}
  • info

    <14>1 2018-08-21T22:42:54.324023+00:00 api-0 vcap.cloud_controller_ng - - -  {"timestamp":1534891374.3237739,"message":"Started GET \"/v2/info\" for user: , ip: 127.0.0.1 with vcap-request-id: 45e00b66-e0b7-4b10-b1e0-2657f43284e7 at 2018-08-21 22:42:54 UTC","log_level":"info","source":"cc.api","data":{"request_guid":"45e00b66-e0b7-4b10-b1e0-2657f43284e7"},"thread_id":47420077354840,"fiber_id":47420124921300,"process_id":3200,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb","lineno":12,"method":"call"}
  • debug

    <15>1 2018-08-21T22:45:15.146838+00:00 api-0 vcap.cloud_controller_ng - - -  {"timestamp":1534891515.1463814,"message":"dispatch VCAP::CloudController::InfoController get /v2/info","log_level":"debug","source":"cc.api","data":{"request_guid":"b228ef6d-af5e-4808-af0b-791a37f51154"},"thread_id":47420125585200,"fiber_id":47420098783620,"process_id":3200,"file":"/var/vcap/packages-src/8d7a6cd54ff4180c0094fc9aefbe3e5f43169e13/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb","lineno":12,"method":"block in define_route"}
  • debug1

  • debug2

    <15>1 2018-08-21T22:46:02.173445+00:00 api-0 vcap.cloud_controller_ng - - -  {"timestamp":1534891562.1731355,"message":"(0.006130s) SELECT * FROM `delayed_jobs` WHERE ((((`run_at` <= '2018-08-21 22:46:02') AND (`locked_at` IS NULL)) OR (`locked_at` < '2018-08-21 18:46:02') OR (`locked_by` = 'cc_api_worker.api.0.1')) AND (`failed_at` IS NULL) AND (`queue` IN ('cc-api-0'))) ORDER BY `priority` ASC, `run_at` ASC LIMIT 5","log_level":"debug2","source":"cc.background","data":{},"thread_id":47194852110160,"fiber_id":47194886034680,"process_id":3296,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.4.0/gems/sequel-4.49.0/lib/sequel/database/logging.rb","lineno":88,"method":"block in log_each"}
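
For example, to raise an existing deployment to the debug level, set the property in the env: section of your scf-config-values.yaml and apply it with helm upgrade as described in Section 10.1:

env:
    LOG_LEVEL: "debug"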

11 Managing Certificates

The traffic of your SUSE Cloud Application Platform deployment can be made more secure through the use of TLS certificates.

11.1 Certificate Characteristics

When obtaining or generating your certificates, ensure that they are encoded in the PEM format. The appropriate Subject Alternative Names (SAN) should also be included as part of the certificate.

  • Certificates for the SCF router should include:

    • *.DOMAIN. A wildcard certificate is suggested, as applications deployed on the Cloud Application Platform cluster will have URLs in the form of APP_NAME.DOMAIN

  • Certificates for the UAA server should include:

    • uaa.DOMAIN

    • *.uaa.DOMAIN

    • uaa
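
You can check the Subject Alternative Names of a certificate before deploying it with openssl; router.pem is a placeholder file name:

tux > openssl x509 -in router.pem -noout -text | grep -A 1 "Subject Alternative Name"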

11.2 Deploying Custom Certificates

Certificates used in Cloud Application Platform are configurable through the values.yaml files for the deployment of SCF and UAA respectively. To specify a certificate, set the value for the certificate and its corresponding private key under the secrets: section using the following properties.

  • In the values.yaml for SCF specify the ROUTER_SSL_CERT property and the corresponding ROUTER_SSL_KEY.

    Note

    Note the use of the "|" character which indicates the use of a literal scalar. See the YAML spec for more information.

    secrets:
      ROUTER_SSL_CERT: |
        -----BEGIN CERTIFICATE-----
        MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
        ...
        xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
        M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
        1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
        -----END CERTIFICATE-----
    
      ROUTER_SSL_KEY: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
        T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
        G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
        ...
        GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
        M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
        MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
        -----END RSA PRIVATE KEY-----
  • Because the --skip-ssl-validation option will no longer be used when setting a target API endpoint or logging in with the cf CLI, a certificate needs to be specified for the UAA component as well. In the values.yaml for UAA, specify the UAA_SERVER_CERT property and the corresponding UAA_SERVER_KEY. If a self-signed certificate is used, then the INTERNAL_CA_CERT property and its associated INTERNAL_CA_KEY need to be set as well.

    secrets:
      UAA_SERVER_CERT: |
        -----BEGIN CERTIFICATE-----
        MIIFnzCCA4egAwIBAgICEAMwDQYJKoZIhvcNAQENBQAwXDELMAkGA1UEBhMCQ0Ex
        CzAJBgNVBAgMAkJDMRIwEAYDVQQHDAlWYW5jb3V2ZXIxETAPBgNVBAoMCE15Q2Fw
        T3JnMRkwFwYDVQQDDBBNeUNhcE9yZyBSb290IENBMB4XDTE4MDkxNDIyNDMzNVoX
        ...
        IqhPRKYBFHPw6RxVTjG/ClMsFvOIAO3QsK+MwTRIGVu/MNs0wjMu34B/zApLP+hQ
        3ZxAt/z5Dvdd0y78voCWumXYPfDw9T94B4o58FvzcM0eR3V+nVtahLGD2r+DqJB0
        3xoI
        -----END CERTIFICATE-----
    
      UAA_SERVER_KEY: |
        -----BEGIN PRIVATE KEY-----
        MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDhRlcoZAVwUkg0
        sdExkBnPenhLG5FzQM3wm9t4erbSQulKjeFlBa9b0+RH6gbYDHh5+NyiL0L89txO
        JHNRGEmt+4zy+9bY7e2syU18z1orOrgdNq+8QhsSoKHJV2w+0QZkSHTLdWmAetrA
        ...
        ZP5BpgjrT2lGC1ElW/8AFM5TxkkOPMzDCe8HRXPUUw+2YDzyKY1YgkwOMpHlk8Cs
        wPQYJsrcObenRwsGy2+A6NiIg2AVJwHASFG65taoV+1A061P3oPDtyIH/UPhRUoC
        OULPS8fbHefNiSvZTNVKwj8=
        -----END PRIVATE KEY-----
    
      INTERNAL_CA_CERT: |
        -----BEGIN CERTIFICATE-----
        MIIFljCCA36gAwIBAgIBADANBgkqhkiG9w0BAQ0FADBcMQswCQYDVQQGEwJDQTEL
        MAkGA1UECAwCQkMxEjAQBgNVBAcMCVZhbmNvdXZlcjERMA8GA1UECgwITXlDYXBP
        cmcxGTAXBgNVBAMMEE15Q2FwT3JnIFJvb3QgQ0EwHhcNMTgwOTE0MjA1MzU5WhcN
        ...
        PlezSFbDGGIc1beUs1gNMwJki7fs/jDjpA7TKuUDzoGSqDiJXeQAluBILHHQ4q2B
        KuLcZc6LbPsaADmtTbx+Ww/ZzIlF3ENVVvtrWTl5MOV3VhoJwsKmFiNLtkMuppBY
        bhbFkKwtW9xnUzXwjUCy87WPLx84xdBuL/nvJhoMUN75JklvtVkzyX/X
        -----END CERTIFICATE-----
    
      INTERNAL_CA_KEY: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIJKQIBAAKCAgEA/kK6Hw1da9aBwdbP6+wjiR/pSLv6ilNAxtOcKfaNKtc71nwO
        Hjw62ZLBkS2ZtwdNpt5QuueIsUXvFiy7xz4TzyAATXVLR0GBkaHl/PwlwSN5nTMC
        JT3T+89tg4UDFhcdGSZXjQyGZINLK6dHivuAcL3zgEZQwr6UeZINFb27WhsTZEMC
        ...
        0qmnlGxjAdwan+PrarR6ztyp/bYcAvQhgEwc9oF2hj9wBhkdWVNVQ4LaxGtUfV4S
        yhbc7dZNw17fXhgVMZPDTRBfwwrcJ6KcF7g1PCsaGcuOPZWxroemvn28ytYBt1IG
        tfIdEIQIUTDVM4K2wiE6bwslIYwv5pEBLAdWG0gw8KCZl+ffTNOv+8PkdaiD
        -----END RSA PRIVATE KEY-----

Once all pods are up and running, verify by running the cf api command followed by the cf login command and entering your credentials. Both commands should be executed without using the --skip-ssl-validation option.

tux > cf api https://api.example.com
tux > cf login

11.2.1 Configuring Multiple Certificates

Cloud Application Platform supports configurations that use multiple certificates. To specify multiple certificates with their associated keys, replace the ROUTER_SSL_CERT and ROUTER_SSL_KEY properties with the ROUTER_TLS_PEM property in your values.yaml file.

secrets:
  ROUTER_TLS_PEM: |
    - cert_chain: |
        -----BEGIN CERTIFICATE-----
        MIIEDzCCAfcCCQCWC4NErLzy9DANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        opR9hW2YNrMYQYfhVu4KTkpXIr4iBrt2L+aq2Rk4NBaprH+0X6CPlYg+3edC7Jc+
	...
        ooXNKOrpbSUncflZYrAfYiBfnZGIC99EaXShRdavStKJukLZqb3iHBZWNLYnugGh
        jyoKpGgceU1lwcUkUeRIOXI8qs6jCqsePM6vak3EO5rSiMpXMvLO8WMaWsXEfcBL
        dglVTMCit9ORAbVZryXk8Xxiham83SjG+fOVO4pd0R8UuCE=
        -----END CERTIFICATE-----
      private_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEA0HZ/aF64ITOrwtzlRlDkxf0b4V6MFaaTx/9UIQKQZLKT0d7u
        3Rz+egrsZ90Jk683Oz9fUZKtgMXt72CMYUn13TTYwnh5fJrDM1JXx6yHJyiIp0rf
        3G6wh4zzgBosIFiadWPQgL4iAJxmP14KMg4z7tNERu6VXa+0OnYT0DBrf5IJhbn6
	...
        ja0CsQKBgQCNrhKuxLgmQKp409y36Lh4VtIgT400jFOsMWFH1hTtODTgZ/AOnBZd
        bYFffmdjVxBPl4wEdVSXHEBrokIw+Z+ZhI2jf2jJkge9vsSPqX5cTd2X146sMUSy
        o+J1ZbzMp423AvWB7imsPTA+t9vfYPSlf+Is0MhBsnGE7XL4fAcVFQ==
        -----END RSA PRIVATE KEY-----
    - cert_chain: |
        -----BEGIN CERTIFICATE-----
        MIIEPzCCAiegAwIBAgIJAJYLg0SsvPL1MA0GCSqGSIb3DQEBCwUAMEYxCzAJBgNV
        BAYTAkNBMRMwEQYDVQQIDApTb21lLVN0YXRlMQ4wDAYDVQQKDAVNeU9yZzESMBAG
        A1UEAwwJTXlDQS5zaXRlMB4XDTE4MDkxNzE1MjQyMVoXDTIwMDEzMDE1MjQyMVow
	...
        FXrgM9jVBGXeL7T/DNfJp5QfRnrQq1/NFWafjORXEo9EPbAGVbPh8LiaEqwraR/K
        cDuNI7supZ33I82VOrI4+5mSMxj+jzSGd2fRAvWEo8E+MpHSpHJt6trGa5ON57vV
        duCWD+f1swpuuzW+rNinrNZZxUQ77j9Vk4oUeVUfL91ZK4k=
        -----END CERTIFICATE-----
      private_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEowIBAAKCAQEA5kNN9ZZK/UssdUeYSajG6xFcjyJDhnPvVHYA0VtgVOq8S/rb
        irVvkI1s00rj+WypHqP4+l/0dDHTiclOpUU5c3pn3vbGaaSGyonOyr5Cbx1X+JZ5
        17b+ah+oEnI5pUDn7chGI1rk56UI5oV1Qps0+bYTetEYTE1DVjGOHl5ERMv2QqZM
	...
        rMMhAoGBAMmge/JWThffCaponeakJu63DHKz87e2qxcqu25fbo9il1ZpllOD61Zi
        xd0GATICOuPeOUoVUjSuiMtS7B5zjWnmk5+siGeXF1SNJCZ9spgp9rWA/dXqXJRi
        55w7eGyYZSmOg6I7eWvpYpkRll4iFVApMt6KPM72XlyhQOigbGdJ
        -----END RSA PRIVATE KEY-----

11.3 Rotating Automatically Generated Secrets

Cloud Application Platform uses a number of automatically generated secrets for use internally. These secrets have a default expiration of 10950 days and are set through the CERT_EXPIRATION property in the env: section of the values.yaml file. If rotation of the secrets is required, increment the value of secrets_generation_counter in the kube: section of the values.yaml configuration file (e.g. the example scf-config-values.yaml used in this guide) then run helm upgrade.

This example demonstrates rotating the secrets of the scf deployment.

First, update the scf-config-values.yaml file.

kube:
    # Increment this counter to rotate all generated secrets
    secrets_generation_counter: 2

Next, perform a helm upgrade to apply the change.

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
 -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm upgrade --recreate-pods susecf-scf suse/cf \
 --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"

12 Preparing Microsoft Azure for SUSE Cloud Application Platform

SUSE Cloud Application Platform version 1.1 and up supports deployment on Microsoft Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes service. This chapter describes the steps for preparing Azure for a SUSE Cloud Application Platform deployment, with a basic Azure load balancer. (See Azure Kubernetes Service (AKS) for more information.)

In Kubernetes terminology a node used to be a minion, which was the name for a worker node. Now the correct term is simply node (see https://kubernetes.io/docs/concepts/architecture/nodes/). This can be confusing, as computing nodes have traditionally been defined as any device in a network that has an IP address. In Azure they are called agent nodes. In this chapter we call them agent nodes or Kubernetes nodes.

12.1 Prerequisites

Install az, the Azure command-line client, on your remote administration machine. See Install Azure CLI 2.0 for instructions.

See the Azure CLI 2.0 Reference for a complete az command reference.

You also need the kubectl, curl, sed, and jq commands, and the name of the SSH key that is attached to your Azure account.

Log in to your Azure Account:

tux > az login

Your Azure user needs the User Access Administrator role. Check your assigned roles with the az command:

tux > az role assignment list --assignee login-name
[...]
"roleDefinitionName": "User Access Administrator",

If you do not have this role, then you must request it from your Azure administrator.

You need your Azure subscription ID. Extract it with az:

tux > az account show --query "{ subscription_id: id }"
{
"subscription_id": "a900cdi2-5983-0376-s7je-d4jdmsif84ca"
}

Replace the example value in the next command with your own subscription ID. Then export it as an environment variable and set it as the current subscription:

tux > export SUBSCRIPTION_ID="a900cdi2-5983-0376-s7je-d4jdmsif84ca"

tux > az account set --subscription $SUBSCRIPTION_ID

Verify that the Microsoft.Network, Microsoft.Storage, Microsoft.Compute, and Microsoft.ContainerService providers are enabled:

tux > az provider list | egrep -w 'Microsoft.Network|Microsoft.Storage|Microsoft.Compute|Microsoft.ContainerService'

If any of these are missing, enable them with the az provider register -n provider command.
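
For example, to register a missing provider (shown here for Microsoft.Network):

tux > az provider register --namespace Microsoft.Network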

12.2 Create Resource Group and AKS Instance

Now you can create a new Azure resource group and AKS instance. Set the required variables as environment variables, which helps to speed up the setup, and to reduce errors.

Note: Use different names

It is better to use unique resource group and cluster names, and not copy the examples, especially when your Azure subscription supports multiple users.

  1. Create and set the resource group name:

    tux > export RGNAME="cap-aks"
  2. Create and set the AKS managed cluster name. Azure's default is to use the resource group name, then prepend it with MC and append the location, e.g. MC_cap-aks_cap-aks_eastus. This example command gives it the same name as the resource group; you may give it a different name.

    tux > export AKSNAME=$RGNAME
  3. Set the Azure location. See Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster for supported locations. The currently supported Azure locations are eastus, westeurope, centralus, canadacentral, and canadaeast.

    tux > export REGION="eastus"
  4. Set the Kubernetes agent node count. (Cloud Application Platform requires a minimum of 3.)

    tux > export NODECOUNT="3"
  5. Set the virtual machine size (see Sizes for Cloud Services):

    tux > export NODEVMSIZE="Standard_D2_v2"
  6. Set the public SSH key name associated with your Azure account:

    tux > export SSHKEYVALUE="~/.ssh/id_rsa.pub"
  7. Create and set a new admin username:

    tux > export ADMINUSERNAME="scf-admin"

Now that your environment variables are in place, create a new resource group:

tux > az group create --name $RGNAME --location $REGION

Create a new AKS managed cluster:

tux > az aks create --resource-group $RGNAME --name $AKSNAME \
 --node-count $NODECOUNT --admin-username $ADMINUSERNAME \
 --ssh-key-value $SSHKEYVALUE --node-vm-size $NODEVMSIZE \
 --node-osdisk-size=60
Note

An OS disk size of at least 60GB must be specified using the --node-osdisk-size flag.

This takes a few minutes. When it is completed, fetch your kubectl credentials. The default behavior for az aks get-credentials is to merge the new credentials with the existing default configuration, and to set the new credentials as the current Kubernetes context. You should first back up your current configuration, or move it to a different location, then fetch the new credentials:

tux > az aks get-credentials --resource-group $RGNAME --name $AKSNAME
 Merged "cap-aks" as current context in /home/tux/.kube/config

Verify that you can connect to your cluster:

tux > kubectl get nodes
NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-47788232-0   Ready     agent     5m        v1.9.6
aks-nodepool1-47788232-1   Ready     agent     6m        v1.9.6
aks-nodepool1-47788232-2   Ready     agent     6m        v1.9.6

tux > kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY  STATUS    RESTARTS   AGE
kube-system   azureproxy-79c5db744-fwqcx          1/1    Running   2          6m
kube-system   heapster-55f855b47-c4mf9            2/2    Running   0          5m
kube-system   kube-dns-v20-7c556f89c5-spgbf       3/3    Running   0          6m
kube-system   kube-dns-v20-7c556f89c5-z2g7b       3/3    Running   0          6m
kube-system   kube-proxy-g9zpk                    1/1    Running   0          6m
kube-system   kube-proxy-kph4v                    1/1    Running   0          6m
kube-system   kube-proxy-xfngh                    1/1    Running   0          6m
kube-system   kube-svc-redirect-2knsj             1/1    Running   0          6m
kube-system   kube-svc-redirect-5nz2p             1/1    Running   0          6m
kube-system   kube-svc-redirect-hlh22             1/1    Running   0          6m
kube-system   kubernetes-dashboard-546686-mr9hz   1/1    Running   1          6m
kube-system   tunnelfront-595565bc78-j8msn        1/1    Running   0          6m

When all nodes are in a ready state and all pods are running, proceed to the next steps.

12.3 Apply Pod Security Policies

Role-based access control (RBAC) is enabled by default on AKS. Follow the instructions in Section 2.1.1, “Installation on SUSE CaaS Platform 3” to apply pod security policies (PSPs) to your new AKS cluster. Also note that when you create your Cloud Application Platform configuration file (e.g. scf-config-values.yaml), you must set auth: rbac.

You may disable RBAC during cluster creation with the --disable-rbac option, and then set auth: none. In that case, do not apply the PSPs. This is less secure, but may be useful for testing.

12.4 Enable Swap Accounting

Identify and set the cluster resource group, then enable kernel swap accounting. Swap accounting is required by Cloud Application Platform, but it is not the default in AKS nodes. The following commands use the az command to modify the GRUB configuration on each node, and then reboot the virtual machines.

  1. tux > export MCRGNAME=$(az group list -o table | grep MC_"$RGNAME"_ | awk '{print$1}')
  2. tux > vmnodes=$(az vm list -g $MCRGNAME -o json | jq -r '.[] | select (.tags.poolName | contains("node")) | .name')
  3. tux > for i in $vmnodes
     do
       az vm run-command invoke -g $MCRGNAME -n $i --command-id RunShellScript \
       --scripts "sudo sed -i 's|linux.*./boot/vmlinuz-.*|& swapaccount=1|' /boot/grub/grub.cfg"
    done
  4. tux > for i in $vmnodes
    do
       az vm restart -g $MCRGNAME -n $i
    done

When this runs correctly, you will see multiple "status": "Succeeded" messages for all of your virtual machines.
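
You can verify that the option is active by checking the kernel command line on a node, reusing the run-command pattern above; replace node-name with one of the names in $vmnodes and look for swapaccount=1 in the output:

tux > az vm run-command invoke -g $MCRGNAME -n node-name \
 --command-id RunShellScript --scripts "cat /proc/cmdline"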

12.5 Create a Basic Load Balancer and Public IP Address

Azure offers two load balancers, Basic and Standard. Currently Basic is free, while you have to pay for Standard. (See Load Balancer.) The following steps create a Basic load balancer (Basic is the default.) Look for "provisioningState": "Succeeded" messages in the command output to verify that the commands succeeded.

  1. Create a static public IPv4 address:

    tux > az network public-ip create \
     --resource-group $MCRGNAME \
     --name $AKSNAME-public-ip \
     --allocation-method Static
  2. Create the load balancer:

    tux > az network lb create \
     --resource-group $MCRGNAME \
     --name $AKSNAME-lb \
     --public-ip-address $AKSNAME-public-ip \
     --frontend-ip-name $AKSNAME-lb-front \
     --backend-pool-name $AKSNAME-lb-back
  3. Set the virtual machine network interfaces, then add them to the load balancer:

    tux > NICNAMES=$(az network nic list --resource-group $MCRGNAME -o json | jq -r '.[].name')
    
    tux > for i in $NICNAMES
    do
        az network nic ip-config address-pool add \
        --resource-group $MCRGNAME \
        --nic-name $i \
        --ip-config-name ipconfig1 \
        --lb-name $AKSNAME-lb \
        --address-pool $AKSNAME-lb-back
    done

12.6 Configure Load Balancing and Network Security Rules

  1. Set the required ports to allow access to SUSE Cloud Application Platform. Port 8443 is optional for the Stratos Web Console.

    tux > export CAPPORTS="80 443 4443 2222 2793 8443"
  2. Create network and load balancer rules:

    tux > for i in $CAPPORTS
    do
        az network lb probe create \
        --resource-group $MCRGNAME \
        --lb-name $AKSNAME-lb \
        --name probe-$i \
        --protocol tcp \
        --port $i 
        
        az network lb rule create \
        --resource-group $MCRGNAME \
        --lb-name $AKSNAME-lb \
        --name rule-$i \
        --protocol Tcp \
        --frontend-ip-name $AKSNAME-lb-front \
        --backend-pool-name $AKSNAME-lb-back \
        --frontend-port $i \
        --backend-port $i \
        --probe probe-$i 
    done
  3. Verify port setup:

    tux > az network lb rule list -g $MCRGNAME --lb-name $AKSNAME-lb|grep -i port
        
        "backendPort": 8443,
        "frontendPort": 8443,
        "backendPort": 80,
        "frontendPort": 80,
        "backendPort": 443,
        "frontendPort": 443,
        "backendPort": 4443,
        "frontendPort": 4443,
        "backendPort": 2222,
        "frontendPort": 2222,
        "backendPort": 2793,
        "frontendPort": 2793,
  4. Set the network security group name and priority level. The priority levels range from 100-4096, with 100 the highest priority. Each rule must have a unique priority level:

    tux > nsg=$(az network nsg list --resource-group=$MCRGNAME -o json | jq -r '.[].name')
    tux > pri=200
  5. Create the network security rule:

    tux > for i in $CAPPORTS
    do
        az network nsg rule create \
        --resource-group $MCRGNAME \
        --priority $pri \
        --nsg-name $nsg \
        --name $AKSNAME-$i \
        --direction Inbound \
        --destination-port-ranges $i \
        --access Allow
        pri=$(expr $pri + 1)
    done
  6. Print the public and private IP addresses for later use:

    tux > echo -e "\n Resource Group:\t$RGNAME\n \
    Public IP:\t\t$(az network public-ip show --resource-group $MCRGNAME --name $AKSNAME-public-ip --query ipAddress)\n \
    Private IPs:\t\t\"$(az network nic list --resource-group $MCRGNAME -o json | jq -r '.[].ipConfigurations[].privateIpAddress' | paste -s -d " " | sed -e 's/ /", "/g')\"\n"
    
     Resource Group:        cap-aks
     Public IP:             "40.101.3.25"
     Private IPs:           "10.240.0.4", "10.240.0.6", "10.240.0.5"

12.7 Example SUSE Cloud Application Platform Configuration File

The following example scf-config-values.yaml contains parameters particular to running SUSE Cloud Application Platform on Azure Kubernetes Service. You need the IP addresses from the last command in the previous section. This is a simplified example that does not use Azure's DNS services. For quick testing and proof of concept, you can use the free wildcard DNS services, xip.io or nip.io. See Azure DNS Documentation to learn more about Azure's name services.

Warning: Do not use xip.io or nip.io on production systems

Never use xip.io or nip.io on production systems! You must provide proper DNS and DHCP services on production clusters.

secrets:
    # Password for user 'admin' in the cluster
    CLUSTER_ADMIN_PASSWORD: password

    # Password for SCF to authenticate with UAA
    UAA_ADMIN_CLIENT_SECRET: password

env:
    # Use the public IP address
    DOMAIN: 40.101.3.25.xip.io
            
    # uaa prefix is required        
    UAA_HOST: uaa.40.101.3.25.xip.io
    UAA_PORT: 2793
    
    #Azure deployment requires overlay
    GARDEN_ROOTFS_DRIVER: "overlay-xfs"
    
kube:
    # List the private IP addresses 
    external_ips: ["10.240.0.5", "10.240.0.6", "10.240.0.4"]
    storage_class:
        # Azure supports only "default" or "managed-premium"
        persistent: "default"
        shared: "shared"
    
    registry:
       hostname: "registry.suse.com"
       username: ""
       password: ""
    organization: "cap"
    
    auth: rbac

Now Azure is ready, and you can deploy SUSE Cloud Application Platform on it. Note that you will not install SUSE CaaS Platform, which provides a Kubernetes cluster, because AKS provides a managed Kubernetes cluster. Start with the "Helm Init" sections of Chapter 2, Production Installation or Chapter 16, Minimal Installation for Testing.

When your UAA deployment has completed, test that it is operating correctly by running curl on the DNS name that you configured for your UAA_HOST:

tux > curl -k https://uaa.40.101.3.25.xip.io:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{"issuer":"https://uaa.40.101.3.25.xip.io:2793/oauth/token",
"authorization_endpoint":"https://uaa.40.101.3.25.xip.io:2793
/oauth/authorize","token_endpoint":"https://uaa.40.101.3.25.
xip.io:2793/oauth/token",

13 Deploying SUSE Cloud Application Platform on Amazon EKS

Starting with the 1.2 release, SUSE Cloud Application Platform supports deployment on Amazon EKS, Amazon's managed Kubernetes service.

13.1 Prerequisites

You need an Amazon EKS account. See Getting Started with Amazon EKS for instructions on creating a Kubernetes cluster for your SUSE Cloud Application Platform deployment.

When you create your cluster, use node sizes that are at least a t2.large.

See Section 13.2, “IAM Requirements for EKS” for guidance on configuring Identity and Access Management (IAM) for your users.

13.2 IAM Requirements for EKS

These IAM policies provide sufficient access to use EKS.

13.2.1 Unscoped Operations

Some of these permissions are very broad. They are difficult to scope effectively, in part because many resources are created and named dynamically when deploying an EKS cluster using the CloudFormation console. It may be helpful to enforce certain naming conventions, such as prefixing cluster names with ${aws:username} for pattern-matching in Conditions. However, this requires special consideration beyond the EKS deployment guide, and should be evaluated in the broader context of organizational IAM policies.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UnscopedOperations",
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateUploadBucket",
                "cloudformation:EstimateTemplateCost",
                "cloudformation:ListExports",
                "cloudformation:ListStacks",
                "cloudformation:ListImports",
                "cloudformation:DescribeAccountLimits",
                "eks:ListClusters",
                "cloudformation:ValidateTemplate",
                "cloudformation:GetTemplateSummary",
                "eks:CreateCluster"
            ],
            "Resource": "*"
        },
        {
            "Sid": "EffectivelyUnscopedOperations",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:DeleteInstanceProfile",
                "iam:GetRole",
                "iam:DetachRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "cloudformation:*",
                "iam:CreateRole",
                "iam:DeleteRole",
                "eks:*"
            ],
            "Resource": [
                "arn:aws:eks:*:*:cluster/*",
                "arn:aws:cloudformation:*:*:stack/*/*",
                "arn:aws:cloudformation:*:*:stackset/*:*",
                "arn:aws:iam::*:instance-profile/*",
                "arn:aws:iam::*:role/*"
            ]
        }
    ]
}

13.2.2 Scoped Operations

These policies deal with sensitive access controls, such as passing roles and attaching/detaching policies from roles.

This policy, as written, allows unrestricted use of only customer-managed policies, and not Amazon-managed policies. This prevents potential security holes such as attaching the IAMFullAccess policy to a role. If you are using roles in a way that would be undermined by this, you should strongly consider integrating a Permissions Boundary before using this policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UseCustomPoliciesWithCustomRoles",
            "Effect": "Allow",
            "Action": [
                "iam:DetachRolePolicy",
                "iam:AttachRolePolicy"
            ],
            "Resource": [
                "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*",
                "arn:aws:iam::<YOUR_ACCOUNT_ID>:policy/*"
            ],
            "Condition": {
                "ForAllValues:ArnNotLike": {
                    "iam:PolicyARN": "arn:aws:iam::aws:policy/*"
                }
            }
        },
        {
            "Sid": "AllowPassingRoles",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/*"
        },
        {
            "Sid": "AddCustomRolesToInstanceProfiles",
            "Effect": "Allow",
            "Action": "iam:AddRoleToInstanceProfile",
            "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:instance-profile/*"
        },
        {
            "Sid": "AssumeServiceRoleForEKS",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<EKS_SERVICE_ROLE_NAME>"
        },
        {
            "Sid": "DenyUsingAmazonManagedPoliciesUnlessNeededForEKS",
            "Effect": "Deny",
            "Action": "iam:*",
            "Resource": "arn:aws:iam::aws:policy/*",
            "Condition": {
                "ArnNotEquals": {
                    "iam:PolicyARN": [
                        "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
                        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
                        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAttachingSpecificAmazonManagedPoliciesForEKS",
            "Effect": "Allow",
            "Action": [
                "iam:DetachRolePolicy",
                "iam:AttachRolePolicy"
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "iam:PolicyARN": [
                        "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
                        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
                        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
                    ]
                }
            }
        }
    ]
}

13.3 Disk Space

Usually EC2 nodes come with 20GB of disk space. This is insufficient for a SUSE Cloud Application Platform deployment. Make sure to increase the disk space for each node after creating the Kubernetes cluster. First enable SSH to the nodes:

  1. Go to Services -> EC2

  2. Select Security Groups

  3. Click on the Node Security Group for your nodes

  4. Click Actions -> Edit inbound rules

  5. Click Add Rule

  6. Choose "SSH" in the Type column and "Anywhere" in the Source column

  7. Click "Save"

Next, grow the volumes in AWS:

  1. Go to Services -> EC2

  2. Select Volumes

  3. Increase the size for all volumes attached to your nodes from 20GB to at least 60GB.

Finally, make the additional space available to the filesystems on each node.
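
A minimal sketch of growing the root filesystem from within one node; the device name /dev/xvda, the partition number, and the ext4 filesystem type are assumptions, so verify them first with lsblk and df -T, and use xfs_growfs instead of resize2fs on XFS filesystems:

tux > lsblk
tux > sudo growpart /dev/xvda 1
tux > sudo resize2fs /dev/xvda1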

13.4 The Helm CLI and Tiller

Use this version of Helm, or newer: https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-rc4-linux-amd64.tar.gz
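
One way to install this release on your remote administration machine; /usr/local/bin is an assumed install location:

tux > curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-rc4-linux-amd64.tar.gz
tux > tar xzf helm-v2.9.0-rc4-linux-amd64.tar.gz
tux > sudo install linux-amd64/helm /usr/local/bin/helm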

In rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Apply it to your cluster with these commands:

tux > kubectl create -f rbac-config.yaml
tux > helm init --service-account tiller

13.5 Default Storage Class

This example creates a simple storage class for your cluster in storage-class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

Then apply the new storage class configuration with this command:

tux > kubectl create -f storage-class.yaml
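
Verify that the new class exists and is marked as the default:

tux > kubectl get storageclass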

13.6 Security Group rules

In your EC2 virtual machine list, add the following rules to the security group of any one of your nodes:

Type               Protocol    Port Range     Source       Description
HTTP               TCP         80             0.0.0.0/0    CAP HTTP
Custom TCP Rule    TCP         2793           0.0.0.0/0    CAP UAA
Custom TCP Rule    TCP         2222           0.0.0.0/0    CAP SSH
Custom TCP Rule    TCP         4443           0.0.0.0/0    CAP WSS
Custom TCP Rule    TCP         443            0.0.0.0/0    CAP HTTPS
Custom TCP Rule    TCP         20000-20009    0.0.0.0/0    TCP Routing

13.7 Find your kube.external_ips

In your EC2 VM List, look up the private IP addresses of one of the nodes, and note the address that is used in its private DNS, which looks like ip-address.us-west-2.compute.internal. Also note the public IP address, which you will need for the DOMAIN of the cluster in your SUSE Cloud Application Platform configuration file.
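
As an alternative to the EC2 console, kubectl can also list the node addresses:

tux > kubectl get nodes -o wide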

13.8 Configuring and Deploying SUSE Cloud Application Platform

Follow the instructions in Section 2.2, “Choose Storage Class” to deploy SUSE Cloud Application Platform. When you create your scf-config-values.yaml file, make the following changes:

  • Use overlay-xfs for env.GARDEN_ROOTFS_DRIVER

  • Use "" for env.GARDEN_APPARMOR_PROFILE

  • Set kube.storage_class.persistent and kube.storage_class.shared to gp2

The following roles must have all capabilities:

  • cc_uploader

  • diego_locket

  • diego_access

  • diego_brain

  • diego_api

  • nats

  • routing_api

Use this example scf-config-values.yaml as a template for your configuration.

env:
    # Domain for SCF. DNS for *.DOMAIN must point to a kube node's (not master)
    # external IP address.
    DOMAIN: public_ip_of_node-VM.example.com
    #### The UAA hostname is hardcoded to uaa.$DOMAIN, so shouldn't be
    #### specified when deploying
    # UAA host/port that SCF will talk to. If you have a custom UAA
    # provide its host and port here.
    UAA_HOST: uaa.public_ip_of_node-VM.example.com
    UAA_PORT: 2793
    GARDEN_ROOTFS_DRIVER: overlay-xfs
    GARDEN_APPARMOR_PROFILE: ""
sizing:
  cc_uploader:
    capabilities: ["ALL"]
  nats:
    capabilities: ["ALL"]
  routing_api:
    capabilities: ["ALL"]
  router:
    capabilities: ["ALL"]
  diego_locket:
    capabilities: ["ALL"]
  diego_access:
    capabilities: ["ALL"]
  diego_brain:
    capabilities: ["ALL"]
  diego_api:
    capabilities: ["ALL"]
kube:
    # The IP address assigned to the kube node pointed to by the domain.
    #### the external_ip setting changed to accept a list of IPs, and was
    #### renamed to external_ips
    external_ips:
    - private_ip_of_node-VM
    storage_class:
        # Make sure to change the value in here to whatever storage class you use
        persistent: "gp2"
        shared: "gp2"
    # The registry the images will be fetched from. The values below should work for
    # a default installation from the suse registry.
    registry:
      hostname: "registry.suse.com"
      username: ""
      password: ""
    organization: "cap"
    organization: "splatform"
    auth: rbac
secrets:
    # Password for user 'admin' in the cluster
    CLUSTER_ADMIN_PASSWORD: password
    # Password for SCF to authenticate with UAA
    UAA_ADMIN_CLIENT_SECRET: password
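
With this scf-config-values.yaml in place, the deployment itself uses the same Helm workflow as Chapter 2. A condensed sketch of the first steps looks like this; see Section 2.9, “Deploy UAA” and Section 2.10, “Deploy SCF” for the complete procedure:

tux > helm repo add suse https://kubernetes-charts.suse.com/
tux > kubectl create namespace uaa
tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml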

14 Installing SUSE Cloud Application Platform on OpenStack

You can deploy a SUSE Cloud Application Platform on CaaS Platform stack on OpenStack. This chapter describes how to deploy a small testing and development instance with one Kubernetes master and two worker nodes, using Terraform to automate the deployment. This does not create a production deployment; for best performance, production deployments should run on bare metal.

14.1 Prerequisites

The following prerequisites should be met before attempting to deploy SUSE Cloud Application Platform on OpenStack. The memory and disk space requirements are minimums, and may need to be larger according to your workloads.

  • 8GB of memory per CaaS Platform dashboard and Kubernetes master node

  • 16GB of memory per Kubernetes worker

  • 40GB disk space per CaaS Platform dashboard and Kubernetes master node

  • 60GB disk space per Kubernetes worker

  • A SUSE Customer Center account for downloading CaaS Platform. Get SUSE-CaaS-Platform-2.0-KVM-and-Xen.x86_64-1.0.0-GM.qcow2, which has been tested on OpenStack.

  • Download the openrc.sh file for your OpenStack account

14.2 Create a New OpenStack Project

You may use an existing OpenStack project, or run the following commands to create a new project with the necessary configuration for SUSE Cloud Application Platform.

tux > openstack project create --domain default --description "CaaS Platform Project" caasp
tux > openstack role add --project caasp --user admin admin

Create an OpenStack network and subnet for CaaS Platform (for example, caasp-net), then create a router that connects the subnet to the external (for example, floating) network:

tux > openstack network create caasp-net
tux > openstack subnet create caasp_subnet --network caasp-net \
--subnet-range 10.0.2.0/24
tux > openstack router create caasp-net-router
tux > openstack router set caasp-net-router --external-gateway floating
tux > openstack router add subnet caasp-net-router caasp_subnet
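
You can verify that the network, subnet, and router were created correctly, for example:

tux > openstack network list
tux > openstack subnet list --network caasp-net
tux > openstack router show caasp-net-router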

Upload your CaaS Platform image to your OpenStack account:

tux > openstack image create \
  --file SUSE-CaaS-Platform-2.0-KVM-and-Xen.x86_64-1.0.0-GM.qcow2 \
  SUSE-CaaS-Platform-2.0

Create a security group with the rules needed for CaaS Platform:

tux > openstack security group create cap --description "Allow CAP traffic"
tux > openstack security group rule create cap --protocol any --dst-port any --ethertype IPv4 --egress
tux > openstack security group rule create cap --protocol any --dst-port any --ethertype IPv6 --egress
tux > openstack security group rule create cap --protocol tcp --dst-port 20000:20008 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 443:443 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 2793:2793 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 4443:4443 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 80:80 --remote-ip 0.0.0.0/0
tux > openstack security group rule create cap --protocol tcp --dst-port 2222:2222 --remote-ip 0.0.0.0/0
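
You can review the resulting rules with:

tux > openstack security group rule list cap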

Clone the Terraform script from GitHub:

tux > git clone git@github.com:kubic-project/automation.git
tux > cd automation/caasp-openstack-terraform

Edit the openstack.tfvars file. Use the names of your OpenStack objects, for example:

image_name = "SUSE-CaaS-Platform-2.0"
internal_net = "caasp-net"
external_net = "floating"
admin_size = "m1.large"
master_size = "m1.large"
masters = 1
worker_size = "m1.xlarge"
workers = 2

Initialize Terraform:

tux > terraform init

14.3 Deploy SUSE Cloud Application Platform

Source your openrc.sh file, set the project, and deploy CaaS Platform:

tux > . openrc.sh
tux > export OS_PROJECT_NAME='caasp'
tux > ./caasp-openstack apply

Wait for a few minutes until all systems are up and running, then view your installation:

tux > openstack server list

Add your cap security group to all CaaS Platform workers:

tux > openstack server add security group caasp-worker0 cap
tux > openstack server add security group caasp-worker1 cap

If you need to log into your new nodes, log in as root using the SSH key in the automation/caasp-openstack-terraform/ssh directory.

14.4 Bootstrap SUSE Cloud Application Platform

The following examples use the xip.io wildcard DNS service. You may use your own DNS/DHCP services that you have set up in OpenStack in place of xip.io.

  • Point your browser to the IP address of the CaaS Platform admin node, and create a new admin user login

  • Replace the default IP address or domain name of the Internal Dashboard FQDN/IP on the Initial CaaS Platform configuration screen with the internal IP address of the CaaS Platform admin node

  • Check the Install Tiller checkbox, then click the Next button

  • Terraform automatically creates all of your worker nodes, according to the number you configured in openstack.tfvars, so click Next to skip Bootstrap your CaaS Platform

  • On the Select nodes and roles screen click Accept all nodes, click to define your master and worker nodes, then click Next

  • For the External Kubernetes API FQDN, use the public (floating) IP address of the CaaS Platform master and append the .xip.io domain suffix

  • For the External Dashboard FQDN use the public (floating) IP address of the CaaS Platform admin node, and append the .xip.io domain suffix

14.5 Growing the Root Filesystem

If the root filesystem on your worker nodes is smaller than the OpenStack virtual disk, use these commands on the worker nodes to grow the filesystems to match:

tux > growpart /dev/vda 3
tux > btrfs filesystem resize max /.snapshots
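
You can verify the new sizes, for example:

tux > lsblk /dev/vda
tux > df -h /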

15 Running SUSE Cloud Application Platform on non-SUSE CaaS Platform Kubernetes Systems

15.1 Kubernetes Requirements

SUSE Cloud Application Platform is designed to run on any Kubernetes system that meets the following requirements:

  • Kubernetes API version 1.8+

  • Kernel parameter swapaccount=1

  • docker info must not show aufs as the storage driver

  • The Kubernetes cluster must have a storage class for SUSE Cloud Application Platform to use. The default storage class is persistent. You may specify a different storage class in your deployment's values.yaml file (which is called scf-config-values.yaml in the examples in this guide), or as a helm command option, e.g. --set kube.storage_class.persistent=my_storage_class.

  • kube-dns must be running

  • Either ntp or systemd-timesyncd must be installed and active

  • Docker must be configured to allow privileged containers

  • Privileged containers must be enabled in kube-apiserver. See kube-apiserver.

  • Privileged containers must be enabled in kubelet

  • The TasksMax property of the containerd service definition must be set to infinity

  • Helm's Tiller has to be installed and active, with Tiller on the Kubernetes cluster and Helm on your remote administration machine
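
Many of these requirements can be checked from the command line. The following checks are examples only; run the kubectl and helm commands on your remote administration machine, and the others on each Kubernetes node:

tux > kubectl version --short
tux > helm version

root # docker info | grep 'Storage Driver'
root # grep swapaccount=1 /proc/cmdline
root # systemctl show containerd --property TasksMax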

16 Minimal Installation for Testing

A production deployment of SUSE Cloud Application Platform requires a significant number of physical or virtual hosts. For testing and learning, you can set up a minimal four-host deployment of SUSE Cloud Foundry on SUSE CaaS Platform on a single workstation in a hypervisor such as KVM or VirtualBox. This extremely minimal deployment uses Kubernetes' hostpath storage type instead of a storage server, such as SUSE Enterprise Storage. You must also provide DNS, DHCP, and a network space for your cluster. KVM and VirtualBox include name services and network management. Figure 16.1, “Minimal Network Architecture” illustrates the layout of a physical minimal test installation with an external administration workstation and DNS/DHCP server. Access to the cluster is provided by the UAA (User Account and Authentication) server on worker 1.

network architecture of minimal test setup
Figure 16.1: Minimal Network Architecture

This minimal four-node deployment will run on a minimum of 32GB host system memory, though more memory is better. 32GB is enough to test setting up and configuring SUSE CaaS Platform and SUSE Cloud Foundry, and to run a few lightweight workloads. You may also test connecting external servers with your cluster, such as a separate name server, a storage server (e.g. SUSE Enterprise Storage), SUSE Customer Center, or Subscription Management Tool. You must be familiar with installing and configuring CaaS Platform (see the SUSE CaaS Platform 3 Deployment Guide).

After you have installed CaaS Platform you will install and administer SUSE Cloud Foundry remotely from your host workstation, using tools such as the Helm package manager for Kubernetes, and the Kubernetes command-line tool kubectl.

Warning
Warning: Limitations of minimal test environment

This is a limited deployment that is useful for testing basic deployment and functionality, but it is NOT a production system, and cannot be upgraded to a production system. Its reduced complexity allows basic testing, it is portable (on laptops with enough memory), and is useful in environments that have resource constraints.

16.1 Prerequisites

Important
Important: You must be familiar with SUSE CaaS Platform

Setting up SUSE CaaS Platform correctly, and knowledge of basic administration is essential to a successful SUSE Cloud Application Platform deployment. See the SUSE CaaS Platform 3 Deployment Guide.

CaaS Platform requires a minimum of four physical or virtual hosts: one admin, one Kubernetes master, and two Kubernetes workers. You also need an Internet connection, as the installer has an option to download updates during installation, and the Kubernetes workers will each download ~10GB of Docker images.

Hardware requirements

Any AMD64/Intel EM64T processor with at least 8 virtual or physical cores. This table describes the minimum requirements per node.

Node                     CPU  RAM   Disk
CaaS Platform Dashboard  1    8GB   40GB
CaaS Platform Master     2    8GB   40GB
CaaS Platform Workers    2    16GB  60GB

Network and Name Services

You must provide DNS and DHCP services, either via your hypervisor, or with a separate name server. Your cluster needs its own domain. Every node needs a hostname and a fully-qualified domain name, and should all be on the same network. By default, the CaaS Platform installer requests a hostname from any available DHCP server. When you install the admin server you may adjust its network settings manually, and should give it a hostname, a static IP address, and specify which name server to use if there is more than one.

CaaS Platform supports multiple methods for installing the Kubernetes workers. We recommend using AutoYaST, and then when you deploy the Kubernetes workers you will create their hostnames with a kernel boot option.

After your Kubernetes nodes are running select one Kubernetes worker to act as the external access point for your cluster and map your domain name to it. On production clusters it is a common practice to use wildcard DNS, rather than trying to manage DNS for hundreds or thousands of applications. Map your domain wildcard to the IP address of the Kubernetes worker you selected as the external access point to your cluster.
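
For example, if the worker you selected has the IP address 192.168.100.30 and your cluster domain is example.com, a BIND-style zone would contain entries like the following (an illustration only; the domain and address are placeholders):

example.com.      IN A   192.168.100.30
*.example.com.    IN A   192.168.100.30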

Install SUSE CaaS Platform 3

Install SUSE CaaS Platform 3. When you reach the step where you log into the Velum Web interface, check the box to install Tiller (Helm's server component).

Install Tiller
Figure 16.2: Install Tiller

Take note of the Overlay network settings. These define the cluster and services networks that are exclusive to the internal cluster communications. They are not accessible outside of the cluster. You may change the default overlay network assignments to avoid address collisions with your existing network.

There is also a form for proxy settings; if you're not using a proxy then leave it empty.

The easiest way to create the Kubernetes nodes is to use AutoYaST; see Installation with AutoYaST. Pass these kernel boot options to each worker: hostname, netsetup, and the AutoYaST path, which you can find in Velum on the "Bootstrap your CaaS Platform" page.

Kernel boot options
Figure 16.3: Kernel boot options

When you have completed Bootstrapping the Cluster, open a Web browser to the Velum Web interface. If you see a "site not available" or "We're sorry, but something went wrong" error, wait a few minutes, then try again. Click the kubectl config button to download your new cluster's kubeconfig file. This takes you to a login screen; use the login you created to access Velum. Save the file as ~/.kube/config on your host workstation. This file enables remote administration of your cluster.

Download kubeconfig
Figure 16.4: Download kubeconfig
Install kubectl
Note
Note: Remote Cluster Administration

You will administer your cluster from your host workstation, rather than directly on any of your cluster nodes. The remote environment is indicated by the unprivileged user tux prompt, while root # prompts are on a cluster host. Only a few tasks need to be performed directly on the cluster hosts.

Follow the instructions at Install and Set Up kubectl to install kubectl on your host workstation. After installation, run this command to verify that it is installed, and that it is communicating correctly with your cluster:

tux > kubectl version --short
Client Version: v1.9.1
Server Version: v1.9.8

As the client is on your workstation, and the server is on your cluster, reporting the server version verifies that kubectl is using ~/.kube/config and is communicating with your cluster.

The following examples query the cluster configuration and node status:

tux > kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://11.100.10.10:6443
  name: local
contexts:
[...]

tux > kubectl get nodes
NAME                         STATUS  ROLES     AGE  VERSION
4a10db2c.infra.caasp.local   Ready   Master    4h   v1.9.8
87c9e8ff.infra.caasp.local   Ready   <none>    4h   v1.9.8
34ce7eb0.infra.caasp.local   Ready   <none>    4h   v1.9.8
Install Helm

Deploying SUSE Cloud Foundry is different from the usual method of installing software. Rather than installing packages in the usual way with YaST or Zypper, you will install the Helm client on your workstation and use it to deploy the Kubernetes applications that make up SUSE Cloud Foundry, and to administer your cluster remotely.

Helm client version 2.6 or higher is required.

Warning
Warning: Initialize Only the Helm Client

When you initialize Helm on your workstation be sure to initialize only the client, as the server, Tiller, was installed during the CaaS Platform installation. You do not want two Tiller instances.

If the Linux distribution on your workstation doesn't provide the correct Helm version, or you are using some other platform, see the Helm Quickstart Guide for installation instructions and basic usage examples. Download the Helm binary into any directory that is in your PATH on your workstation, such as your ~/bin directory. Then initialize the client only:

tux > helm init --client-only
Creating /home/tux/.helm 
Creating /home/tux/.helm/repository 
Creating /home/tux/.helm/repository/cache 
Creating /home/tux/.helm/repository/local 
Creating /home/tux/.helm/plugins 
Creating /home/tux/.helm/starters 
Creating /home/tux/.helm/cache/archive 
Creating /home/tux/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/tux/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!

16.2 Create hostpath Storage Class

The Kubernetes cluster requires a persistent storage class for the databases to store persistent data. You can provide this with your own storage (e.g. SUSE Enterprise Storage), or use the built-in hostpath storage type. hostpath is NOT suitable for a production deployment, but it is an easy option for a minimal test deployment.

Warning
Warning: Using the hostpath storage type on CaaS Platform

CaaS Platform is configured as a multi-node Kubernetes setup with a minimum of one master and two workers. Hostpath provisioning on CaaS Platform uses local storage on each of these nodes, therefore persistent data stored will only be available locally on the Kubernetes nodes. This impacts use cases where SUSE Cloud Foundry containers restart on a different Kubernetes worker, for example in high availability setups or update tests. If a container starts on a different worker than before it will miss its persistent data, leading to various other side effects. In addition, hostpath-provisioner uses the local root filesystem of the Kubernetes node. If it runs out of disk space your Kubernetes node won't work anymore.

Open an SSH session to your Kubernetes master node and add the argument --enable-hostpath-provisioner to /etc/kubernetes/controller-manager:

root # vim /etc/kubernetes/controller-manager 
    KUBE_CONTROLLER_MANAGER_ARGS="\
        --enable-hostpath-provisioner \
        "

Restart the Kubernetes controller-manager:

root # systemctl restart kube-controller-manager

Create a persistent storage class named hostpath:

root # echo '{"kind":"StorageClass","apiVersion":"storage.k8s.io/v1", "metadata":{"name":"hostpath"},"provisioner":"kubernetes.io/host-path"}' | \
kubectl create -f -

storageclass "hostpath" created

Verify that your new storage class has been created:

root # kubectl get storageclass
NAME       TYPE
hostpath   kubernetes.io/host-path

Log into all of your Kubernetes nodes and create the /tmp/hostpath_pv directory, then set its permissions to read/write/execute:

root # mkdir /tmp/hostpath_pv  
root # chmod -R 0777 /tmp/hostpath_pv

See the Kubernetes document Storage Classes for detailed information on storage classes.

Tip
Tip: Log in Directly to Kubernetes Nodes

By default, SUSE CaaS Platform allows logging into the Kubernetes nodes only from the admin node. You can set up direct logins to your Kubernetes nodes from your workstation by copying the SSH keys from your admin node to your Kubernetes nodes, and then you will have password-less SSH logins. This is not a best practice for a production deployment, but will make running a test deployment a little easier.

16.3 Test Storage Class

See Section 2.3, “Test Storage Class” to learn how to test that your storage class is correctly configured before you deploy SUSE Cloud Foundry.

16.4 Configuring the Minimal Test Deployment

Create a configuration file on your workstation for Helm to use. In this example it is called scf-config-values.yaml. (See the Release Notes for information on configuration changes.)

env:    
    # Enter the domain you created for your CAP cluster
    DOMAIN: example.com
    
    # UAA host and port
    UAA_HOST: uaa.example.com
    UAA_PORT: 2793

kube:
    # The IP address assigned to the kube node pointed to by the domain.
    external_ips: ["11.100.10.10"]
    
    # Run kubectl get storageclasses
    # to view your available storage classes
    storage_class: 
        persistent: "hostpath"
        shared: "shared"
        
    # The registry the images will be fetched from. 
    # The values below should work for
    # a default installation from the SUSE registry.
    registry: 
        hostname: "registry.suse.com"
        username: ""
        password: ""
    organization: "cap"

    # Required for CaaSP 2
    auth: rbac 

secrets:
    # Create a password for your CAP cluster
    CLUSTER_ADMIN_PASSWORD: password 
    
    # Create a password for your UAA client secret
    UAA_ADMIN_CLIENT_SECRET: password
Note
Note: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.
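
One way to generate a suitably strong value (an example only; any strong password generator will do) is:

tux > openssl rand -base64 32

Paste the generated value into the UAA_ADMIN_CLIENT_SECRET entry of your scf-config-values.yaml.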

16.5 Deploy with Helm

Run the following Helm commands to complete the deployment. There are six steps, and they must be run in this order:

  • Download the SUSE Kubernetes charts repository

  • Create namespaces

  • If you are using SUSE Enterprise Storage, copy the storage secret to the UAA and SCF namespaces

  • Install UAA

  • Copy UAA secret and certificate to SCF namespace

  • Install SCF

16.5.1 Install the Kubernetes charts repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME        URL                                             
stable      https://kubernetes-charts.storage.googleapis.com
local       http://127.0.0.1:8879/charts                    
suse        https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                         VERSION DESCRIPTION
suse/cf                      2.13.3  A Helm chart for SUSE Cloud Foundry               
suse/cf-usb-sidecar-mysql    1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/cf-usb-sidecar-postgres 1.0.1   A Helm chart for SUSE Universal Service Broker ...
suse/console                 2.1.0   A Helm chart for deploying Stratos UI Console     
suse/uaa                     2.13.3  A Helm chart for SUSE UAA

16.5.2 Create Namespaces

Use kubectl on your host workstation to create and verify the UAA (User Account and Authentication) and SCF (SUSE Cloud Foundry) namespaces:

tux > kubectl create namespace uaa
 namespace "uaa" created
 
tux > kubectl create namespace scf
 namespace "scf" created
 
tux > kubectl get namespaces
NAME          STATUS    AGE
default       Active    27m
kube-public   Active    27m
kube-system   Active    27m
scf           Active    1m
uaa           Active    1m

16.5.3 Copy SUSE Enterprise Storage Secret

If you are using the hostpath storage class (see Section 16.2, “Create hostpath Storage Class”), there is no secret, so skip this step.

If you are using SUSE Enterprise Storage you must copy the Ceph admin secret to the UAA and SCF namespaces:

tux > kubectl get secret ceph-secret-admin -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "uaa"/' | kubectl create -f -

tux > kubectl get secret ceph-secret-admin -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "scf"/' | kubectl create -f -

16.5.4 Install UAA

Use Helm to install the UAA (User Account and Authentication) server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful UAA deployment before going to the next steps. You can monitor the deployment with the watch command. This will take time, possibly an hour or two, depending on your hardware resources:

tux > watch -c 'kubectl get pods --all-namespaces'

When the status shows RUNNING for all of the UAA nodes, then proceed to the next step.

Important
Important: Some Pods Show Not Running

Some UAA and SCF pods perform only deployment tasks, and it is normal for them to show as unready after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz   0/1       Completed
          
tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1       Completed
post-deployment-setup-1-hnpln   0/1       Completed

16.5.5 Install SUSE Cloud Foundry

First pass your UAA secret and certificate to SCF, then use Helm to install SUSE Cloud Foundry:

tux > SECRET=$(kubectl get pods --namespace uaa \
-o jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')

tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
-o jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml \
--set "secrets.UAA_CA_CERT=${CA_CERT}"

Now sit back and wait for the pods to come online:

tux > watch -c 'kubectl get pods --all-namespaces'

When all services are running you can use the Cloud Foundry command-line interface to log in to SUSE Cloud Foundry. (See Section 2.10, “Deploy SCF”.)
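
As a brief sketch (the API endpoint depends on the DOMAIN in your scf-config-values.yaml; example.com is a placeholder here, and Section 2.10, “Deploy SCF” describes the full procedure), logging in looks like this:

tux > cf api --skip-ssl-validation https://api.example.com
tux > cf login -u admin -p password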

16.6 Install the Stratos Console

Stratos UI is a modern, web-based management application for Cloud Foundry. It provides a graphical management console for both developers and system administrators. (See Section 3.1, “Install Stratos with Helm”).

16.7 Updating SUSE Cloud Foundry, UAA, and Stratos

Maintenance updates are delivered as container images from the SUSE registry and applied with Helm. See Section 4.1, “Upgrading SCF, UAA, and Stratos”.

Note
Note: No Upgrades with Hostpath

Upgrades do not work with the hostpath storage type, as the required stateful data may not be preserved.

17 Troubleshooting

Cloud stacks are complex, and debugging deployment issues often requires digging through multiple layers to find the information you need. Remember that the SUSE Cloud Foundry releases must be deployed in the correct order, and that each release must deploy successfully, with no failed pods, before deploying the next release.

17.1 Using Supportconfig

If you ever need to request support, or just want to generate detailed system information and logs, use the supportconfig utility. Run it with no options to collect basic system information, and also cluster logs including Docker, etcd, flannel, and Velum. supportconfig may give you all the information you need.

supportconfig -h prints the options. Read the "Gathering System Information for Support" chapter in any SUSE Linux Enterprise Administration Guide to learn more.

17.2 Deployment is Taking Too Long

A deployment step seems to take too long, or you see that some pods are not in a ready state hours after all the others are ready, or a pod shows a lot of restarts. This example shows not-ready pods many hours after the others have become ready:

tux > kubectl get pods --namespace scf
NAME                     READY STATUS    RESTARTS  AGE
router-3137013061-wlhxb  0/1   Running   0         16h
routing-api-0            0/1   Running   0         16h

The Running status means the pod is bound to a node and all of its containers have been created. However, it is not Ready, which means it is not ready to service requests. Use kubectl to print a detailed description of pod events and status:

tux > kubectl describe pod --namespace scf router-3137013061-wlhxb

This prints a lot of information, including IP addresses, routine events, warnings, and errors. You should find the reason for the failure in this output.

Important
Important: Some Pods Correctly Show Not Running

Some UAA and SCF pods perform only deployment tasks, and it is normal for them to show as unready after they have completed their tasks, as these examples show:

tux > kubectl get pods --namespace uaa
secret-generation-1-z4nlz   0/1       Completed
          
tux > kubectl get pods --namespace scf
secret-generation-1-m6k2h       0/1       Completed
post-deployment-setup-1-hnpln   0/1       Completed

17.3 Deleting and Rebuilding a Deployment

There may be times when you want to delete and rebuild a deployment, for example when there are errors in your scf-config-values.yaml file, you wish to test configuration changes, or a deployment fails and you want to try it again. This has five steps: first delete the StatefulSets of the namespace associated with the release or releases you want to re-deploy, then delete the release or releases, delete its namespace, then re-create the namespace and re-deploy the release.

The namespace is also deleted as part of the process because the SCF and UAA namespaces contain generated secrets which Helm is not aware of, and which it will not remove when a release is deleted. When deleting a release, busy systems may encounter timeouts; deleting the StatefulSets first makes this operation more likely to succeed. Using the delete statefulsets command requires kubectl v1.9.6 or newer.

Use helm to see your releases:

tux > helm list
NAME            REVISION  UPDATED                  STATUS    CHART           NAMESPACE
susecf-console  1         Tue Aug 14 11:53:28 2018 DEPLOYED  console-2.0.0   stratos  
susecf-scf      1         Tue Aug 14 10:58:16 2018 DEPLOYED  cf-2.11.0       scf      
susecf-uaa      1         Tue Aug 14 10:49:30 2018 DEPLOYED  uaa-2.11.0      uaa

This example deletes the susecf-uaa release and namespace:

tux > kubectl delete statefulsets --all --namespace uaa
statefulset "mysql" deleted
statefulset "mysql-proxy" deleted
statefulset "uaa" deleted
tux > helm delete --purge susecf-uaa
release "susecf-uaa" deleted
tux > kubectl delete namespace uaa
namespace "uaa" deleted

Then you can start over.

17.4 Querying with Kubectl

You can safely query with kubectl to get information about resources inside your Kubernetes cluster. kubectl cluster-info dump | tee clusterinfo.txt outputs a large amount of information about the Kubernetes master and cluster services to a text file.

The following commands give more targeted information about your cluster.

  • List all cluster resources:

    tux > kubectl get all --all-namespaces
  • List all of your running pods:

    tux > kubectl get pods --all-namespaces
  • See all pods, including those with Completed or Failed statuses:

    tux > kubectl get pods --show-all --all-namespaces
  • List pods in one namespace:

    tux > kubectl get pods --namespace scf
  • Get detailed information about one pod:

    tux > kubectl describe --namespace scf po/diego-cell-0
  • Read the log file of a pod:

    tux > kubectl logs --namespace scf po/diego-cell-0
  • List all Kubernetes nodes, then print detailed information about a single node:

    tux > kubectl get nodes
    tux > kubectl describe node 6a2752b6fab54bb889029f60de6fa4d5.infra.caasp.local
  • List all containers in all namespaces, formatted for readability:

    tux > kubectl get pods --all-namespaces -o jsonpath="{..image}" |\
    tr -s '[[:space:]]' '\n' |\
    sort |\
    uniq -c
  • These two commands check node capacities, to verify that there are enough resources for the pods:

    tux > kubectl get nodes -o yaml | grep '\sname\|cpu\|memory'
    tux > kubectl get nodes -o json | \
    jq '.items[] | {name: .metadata.name, cap: .status.capacity}'

A Appendix

This appendix contains copies of the complete values.yaml files that are included in the Helm charts for the SCF and UAA namespaces. These are useful references as they provide a complete listing of configuration options and their default settings. (See Section 5.1.1, “Finding Default and Allowable Sizing Values” to learn how to find and read these files from the Helm charts.)

A.1 Complete SCF values.yaml file

This is the values.yaml configuration file that is shipped with the Helm charts for the SCF namespace.

---
env:
  # List of domains (including scheme) from which Cross-Origin requests will be
  # accepted, a * can be used as a wildcard for any part of a domain.
  ALLOWED_CORS_DOMAINS: "[]"

  # Allow users to change the value of the app-level allow_ssh attribute.
  ALLOW_APP_SSH_ACCESS: "true"

  # Extra token expiry time while uploading big apps, in seconds.
  APP_TOKEN_UPLOAD_GRACE_PERIOD: "1200"

  # List of allow / deny rules for the blobstore internal server. Will be
  # followed by 'deny all'. Each entry must be follow by a semicolon.
  BLOBSTORE_ACCESS_RULES: "allow 10.0.0.0/8; allow 172.16.0.0/12; allow 192.168.0.0/16;"

  # Maximal allowed file size for upload to blobstore, in megabytes.
  BLOBSTORE_MAX_UPLOAD_SIZE: "5000"

  # The set of CAT test suites to run. If not specified it falls back to a
  # hardwired set of suites. This is for SUSE internal testing, and not
  # for production deployments
  CATS_SUITES: ~

  # URI for a CDN to use for buildpack downloads.
  CDN_URI: ""

  # The Oauth2 authorities available to the cluster administrator.
  CLUSTER_ADMIN_AUTHORITIES: "scim.write,scim.read,openid,cloud_controller.admin,clients.read,clients.write,doppler.firehose,routing.router_groups.read,routing.router_groups.write"

  # 'build' attribute in the /v2/info endpoint
  CLUSTER_BUILD: "2.0.2"

  # 'description' attribute in the /v2/info endpoint
  CLUSTER_DESCRIPTION: "SUSE Cloud Foundry"

  # 'name' attribute in the /v2/info endpoint
  CLUSTER_NAME: "SCF"

  # 'version' attribute in the /v2/info endpoint
  CLUSTER_VERSION: "2"

  # The standard amount of disk (in MB) given to an application when not
  # overriden by the user via manifest, command line, etc.
  DEFAULT_APP_DISK_IN_MB: "1024"

  # The standard amount of memory (in MB) given to an application when not
  # overriden by the user via manifest, command line, etc.
  DEFAULT_APP_MEMORY: "1024"

  # If set apps pushed to spaces that allow SSH access will have SSH enabled by
  # default.
  DEFAULT_APP_SSH_ACCESS: "true"

  # The default stack to use if no custom stack is specified by an app.
  DEFAULT_STACK: "sle12"

  # The container disk capacity the cell should manage. If this capacity is
  # larger than the actual disk quota of the cell component, over-provisioning
  # will occur.
  DIEGO_CELL_DISK_CAPACITY_MB: "auto"

  # The memory capacity the cell should manage. If this capacity is larger than
  # the actual memory of the cell component, over-provisioning will occur.
  DIEGO_CELL_MEMORY_CAPACITY_MB: "auto"

  # Maximum network transmission unit length in bytes for application
  # containers.
  DIEGO_CELL_NETWORK_MTU: "1400"

  # A CIDR subnet mask specifying the range of subnets available to be assigned
  # to containers.
  DIEGO_CELL_SUBNET: "10.38.0.0/16"

  # Disable external buildpacks. Only admin buildpacks and system buildpacks
  # will be available to users.
  DISABLE_CUSTOM_BUILDPACKS: "false"

  # The host to ping for confirmation of DNS resolution.
  DNS_HEALTH_CHECK_HOST: "127.0.0.1"

  # Base domain of the SCF cluster.
  # Example: my-scf-cluster.com
  DOMAIN: ~

  # The number of versions of an application to keep. You will be able to
  # rollback to this amount of versions.
  DROPLET_MAX_STAGED_STORED: "5"

  # Enables setting the X-Forwarded-Proto header if SSL termination happened
  # upstream and the header value was set incorrectly. When this property is set
  # to true, the gorouter sets the header X-Forwarded-Proto to https. When this
  # value set to false, the gorouter sets the header X-Forwarded-Proto to the
  # protocol of the incoming request.
  FORCE_FORWARDED_PROTO_AS_HTTPS: "false"

  # AppArmor profile name for garden-runc; set this to empty string to disable
  # AppArmor support
  GARDEN_APPARMOR_PROFILE: "garden-default"

  # URL pointing to the Docker registry used for fetching Docker images. If not
  # set, the Docker service default is used.
  GARDEN_DOCKER_REGISTRY: "registry-1.docker.io"

  # Whitelist of IP:PORT tuples and CIDR subnet masks. Pulling from docker
  # registries with self signed certificates will not be permitted if the
  # registry's address is not listed here.
  GARDEN_INSECURE_DOCKER_REGISTRIES: ""

  # Override DNS servers to be used in containers; defaults to the same as the
  # host.
  GARDEN_LINUX_DNS_SERVER: ""

  # The filesystem driver to use (btrfs or overlay-xfs).
  GARDEN_ROOTFS_DRIVER: "btrfs"

  # Location of the proxy to use for secure web access.
  HTTPS_PROXY: ~

  # Location of the proxy to use for regular web access.
  HTTP_PROXY: ~

  KUBE_SERVICE_DOMAIN_SUFFIX: ~

  # The cluster's log level: off, fatal, error, warn, info, debug, debug1,
  # debug2.
  LOG_LEVEL: "info"

  # The maximum amount of disk a user can request for an application via
  # manifest, command line, etc., in MB. See also DEFAULT_APP_DISK_IN_MB for the
  # standard amount.
  MAX_APP_DISK_IN_MB: "2048"

  # Maximum health check timeout that can be set for an app, in seconds.
  MAX_HEALTH_CHECK_TIMEOUT: "180"

  # The time allowed for the MySQL server to respond to healthcheck queries, in
  # milliseconds.
  MYSQL_PROXY_HEALTHCHECK_TIMEOUT: "30000"

  # Sets the maximum allowed size of the client request body, specified in the
  # “Content-Length” request header field, in megabytes. If the size in a
  # request exceeds the configured value, the 413 (Request Entity Too Large)
  # error is returned to the client. Please be aware that browsers cannot
  # correctly display this error. Setting size to 0 disables checking of client
  # request body size. This limits application uploads, buildpack uploads, etc.
  NGINX_MAX_REQUEST_BODY_SIZE: "2048"

  # Comma separated list of IP addresses and domains which should not be
  # directoed through a proxy, if any.
  NO_PROXY: ~

  # Comma separated list of white-listed options that may be set during create
  # or bind operations.
  # Example:
  # uid,gid,allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,fsname,username,password
  PERSI_NFS_ALLOWED_OPTIONS: "uid,gid,auto_cache,username,password"

  # Comma separated list of default values for nfs mount options. If a default
  # is specified with an option not included in PERSI_NFS_ALLOWED_OPTIONS, then
  # this default value will be set and it won't be overridable.
  PERSI_NFS_DEFAULT_OPTIONS: ~

  # Comma separated list of white-listed options that may be accepted in the
  # mount_config options. Note a specific 'sloppy_mount:true' volume option
  # tells the driver to ignore non-white-listed options, while a
  # 'sloppy_mount:false' tells the driver to fail fast instead when receiving a
  # non-white-listed option."
  #
  # Example:
  # allow_root,allow_other,nfs_uid,nfs_gid,auto_cache,sloppy_mount,fsname
  PERSI_NFS_DRIVER_ALLOWED_IN_MOUNT: "auto_cache"

  # Comma separated list of white-listed options that may be configured in
  # supported in the mount_config.source URL query params
  #
  # Example: uid,gid,auto-traverse-mounts,dircache
  PERSI_NFS_DRIVER_ALLOWED_IN_SOURCE: "uid,gid"

  # Comma separated list default values for options that may be configured in
  # the mount_config options, formatted as 'option:default'. If an option is not
  # specified in the volume mount, or the option is not white-listed, then the
  # specified default value will be used instead.
  #
  # Example:
  # allow_root:false,nfs_uid:2000,nfs_gid:2000,auto_cache:true,sloppy_mount:true
  PERSI_NFS_DRIVER_DEFAULT_IN_MOUNT: "auto_cache:true"

  # Comma separated list of default values for options in the source URL query
  # params, formatted as 'option:default'. If an option is not specified in the
  # volume mount, or the option is not white-listed, then the specified default
  # value will be applied.
  PERSI_NFS_DRIVER_DEFAULT_IN_SOURCE: ~

  # Disable Persi NFS driver
  PERSI_NFS_DRIVER_DISABLE: "false"

  # LDAP server host name or ip address (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_HOST: ""

  # LDAP server port (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PORT: "389"

  # LDAP server protocol (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PROTOCOL: "tcp"

  # LDAP service account user name (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_USER: ""

  # LDAP fqdn for user records we will search against when looking up user uids
  # (required for LDAP integration only)
  # Example: cn=Users,dc=corp,dc=test,dc=com
  PERSI_NFS_DRIVER_LDAP_USER_FQDN: ""

  # Certficates to add to the rootfs trust store. Multiple certs are possible by
  # concatenating their definitions into one big block of text.
  ROOTFS_TRUSTED_CERTS: ""

  # The algorithm used by the router to distribute requests for a route across
  # backends. Supported values are round-robin and least-connection.
  ROUTER_BALANCING_ALGORITHM: "round-robin"

  # How to handle the x-forwarded-client-cert (XFCC) HTTP header. Supported
  # values are always_forward, forward, and sanitize_set. See
  # https://docs.cloudfoundry.org/concepts/http-routing.html for more
  # information.
  ROUTER_FORWARDED_CLIENT_CERT: "always_forward"

  # The log destination to talk to. This has to point to a syslog server.
  SCF_LOG_HOST: ~

  # The port used by rsyslog to talk to the log destination. If not set it
  # defaults to 514, the standard port of syslog.
  SCF_LOG_PORT: ~

  # The protocol used by rsyslog to talk to the log destination. The allowed
  # values are tcp, and udp. The default is tcp.
  SCF_LOG_PROTOCOL: "tcp"

  # A comma-separated list of insecure Docker registries in the form of
  # '<HOSTNAME|IP>:PORT'. Each registry must be quoted separately.
  #
  # Example: "docker-registry.example.com:80", "hello.example.org:443"
  STAGER_INSECURE_DOCKER_REGISTRIES: ""

  # Timeout for staging an app, in seconds.
  STAGING_TIMEOUT: "900"

  # Support contact information for the cluster
  SUPPORT_ADDRESS: "support@example.com"

  # TCP routing domain of the SCF cluster; only used for testing;
  # Example: tcp.my-scf-cluster.com
  TCP_DOMAIN: ~

  # Concatenation of trusted CA certificates to be made available on the cell.
  TRUSTED_CERTS: ~

  # The host name of the UAA server (root zone)
  UAA_HOST: ~

  # The tcp port the UAA server (root zone) listens on for requests.
  UAA_PORT: "2793"

  # Whether or not to use privileged containers for buildpack based
  # applications. Containers with a docker-image-based rootfs will continue to
  # always be unprivileged.
  USE_DIEGO_PRIVILEGED_CONTAINERS: "false"

  # Whether or not to use privileged containers for staging tasks.
  USE_STAGER_PRIVILEGED_CONTAINERS: "false"

sizing:
  # Flag to activate high-availability mode
  HA: false

  # The api role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # - cloud_controller_ng: The Cloud Controller provides primary Cloud Foundry
  #   API that is by the cf CLI. The Cloud Controller uses a database to keep
  #   tables for organizations, spaces, apps, services, service instances, user
  #   roles, and more. Typically multiple instances of Cloud Controller are load
  #   balanced.
  #
  # - route_registrar: Used for registering routes
  #
  # Also: metron_agent, statsd_injector, go-buildpack, binary-buildpack,
  # nodejs-buildpack, ruby-buildpack, php-buildpack, python-buildpack,
  # staticfile-buildpack, java-buildpack, dotnet-core-buildpack
  api:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The api role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2421
      limit: ~

  # The blobstore role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - route_registrar: Used for registering routes
  #
  # Also: blobstore, metron_agent
  blobstore:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The blobstore role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      blobstore_data: 50

    # Unit [MiB]
    memory:
      request: 420
      limit: ~

  # The cc-clock role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - cloud_controller_clock: The Cloud Controller clock periodically schedules
  #   Cloud Controller clean up tasks for app usage events, audit events, failed
  #   jobs, and more. Only single instance of this job is necessary.
  #
  # Also: metron_agent, statsd_injector
  cc_clock:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The cc-clock role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 789
      limit: ~

  # The cc-uploader role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: tps, cc_uploader, metron_agent
  cc_uploader:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The cc-uploader role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 129
      limit: ~

  # The cc-worker role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - cloud_controller_worker: Cloud Controller worker processes background
  #   tasks submitted via the.
  #
  # Also: metron_agent
  cc_worker:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The cc-worker role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 753
      limit: ~

  # The cf-usb role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: cf-usb
  cf_usb:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The cf-usb role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 117
      limit: ~

  # The consul role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: consul_agent
  consul:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The consul role can scale between 0 and 1 instances.
    # The instance count must be an odd number (not divisible by 2).
    # For high availability it needs at least 1 instances.
    count: 0

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # Global CPU configuration
  cpu:
    # Flag to activate cpu requests
    requests: false

    # Flag to activate cpu limits
    limits: false

  # The diego-access role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: ssh_proxy, metron_agent, file_server
  diego_access:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The diego-access role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 123
      limit: ~

  # The diego-api role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: bbs, cfdot, metron_agent
  diego_api:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The diego-api role can scale between 1 and 3 instances.
    # The instance count must be an odd number (not divisible by 2).
    # For high availability it needs at least 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 138
      limit: ~

  # The diego-brain role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: auctioneer, cfdot, metron_agent
  diego_brain:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The diego-brain role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 99
      limit: ~

  # The diego-cell role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - wait-for-uaa: Wait for UAA to be ready before starting any jobs
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: rep, cfdot, route_emitter, garden, cflinuxfs2-rootfs-setup,
  # opensuse42-rootfs-setup, cf-sle12-setup, metron_agent, nfsv3driver
  diego_cell:
    # Node affinity rules can be specified here
    affinity: {}

    # The diego-cell role can scale between 1 and 254 instances.
    # For high availability it needs at least 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    disk_sizes:
      grootfs_data: 50

    # Unit [MiB]
    memory:
      request: 4677
      limit: ~

  # The diego-locket role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: locket, metron_agent
  diego_locket:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The diego-locket role can scale between 1 and 3 instances.
    # For high availability it needs at least 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 90
      limit: ~

  # The doppler role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: doppler, metron_agent
  doppler:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The doppler role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 390
      limit: ~

  # The loggregator role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - route_registrar: Used for registering routes
  #
  # Also: loggregator_trafficcontroller, metron_agent
  loggregator:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The loggregator role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 153
      limit: ~

  # Global memory configuration
  memory:
    # Flag to activate memory requests
    requests: false

    # Flag to activate memory limits
    limits: false

  # The mysql role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: mysql
  mysql:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The mysql role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      mysql_data: 20

    # Unit [MiB]
    memory:
      request: 2841
      limit: ~

  # The mysql-proxy role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: proxy
  mysql_proxy:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The mysql-proxy role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 63
      limit: ~

  # The nats role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - nats: The NATS server provides publish-subscribe messaging system for the
  #   Cloud Controller, the DEA , HM9000, and other Cloud Foundry components.
  #
  # Also: metron_agent
  nats:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The nats role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 60
      limit: ~

  # The nfs-broker role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: metron_agent, nfsbroker
  nfs_broker:
    # Node affinity rules can be specified here
    affinity: {}

    # The nfs-broker role can scale between 1 and 3 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 63
      limit: ~

  # The post-deployment-setup role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - uaa-create-user: Create the initial user in UAA
  #
  # - configure-scf: Uses the cf CLI to configure SCF once it's online (things
  #   like proxy settings, service brokers, etc.)
  post_deployment_setup:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The post-deployment-setup role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The postgres role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - postgres: The Postgres server provides a single-instance Postgres database
  #   that can be used with the Cloud Controller or the UAA. It does not provide
  #   a highly available configuration.
  postgres:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The postgres role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      postgres_data: 20

    # Unit [MiB]
    memory:
      request: 3072
      limit: ~

  # The router role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - gorouter: Gorouter maintains a dynamic routing table based on updates
  #   received from NATS and (when enabled) the Routing API. This routing table
  #   maps URLs to backends. The router finds the URL in the routing table that
  #   most closely matches the host header of the request and load balances
  #   across the associated backends.
  #
  # Also: metron_agent
  router:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The router role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 135
      limit: ~

  # The routing-api role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # Also: metron_agent, routing-api
  routing_api:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The routing-api role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 4000
      limit: ~

    # Unit [MiB]
    memory:
      request: 114
      limit: ~

  # The secret-generation role contains the following jobs:
  #
  # - generate-secrets: This job will generate the secrets for the cluster
  secret_generation:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The secret-generation role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The syslog-adapter role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: adapter, metron_agent
  syslog_adapter:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The syslog-adapter role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 78
      limit: ~

  # The syslog-rlp role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: metron_agent, reverse_log_proxy
  syslog_rlp:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The syslog-rlp role can scale between 1 and 65535 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 93
      limit: ~

  # The syslog-scheduler role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # Also: scheduler, metron_agent
  syslog_scheduler:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The syslog-scheduler role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 69
      limit: ~

  # The tcp-router role contains the following jobs:
  #
  # - global-properties: Dummy BOSH job used to host global parameters that are
  #   required to configure SCF
  #
  # - authorize-internal-ca: Install both internal and UAA CA certificates
  #
  # - wait-for-uaa: Wait for UAA to be ready before starting any jobs
  #
  # Also: tcp_router, metron_agent
  tcp_router:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The tcp-router role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 99
      limit: ~

    ports:
      tcp_route:
        count: 9

secrets:
  # The password for the cluster administrator.
  CLUSTER_ADMIN_PASSWORD: ~

  # LDAP service account password (required for LDAP integration only)
  PERSI_NFS_DRIVER_LDAP_PASSWORD: "-"

  # The password of the admin client - a client named admin with uaa.admin as an
  # authority.
  UAA_ADMIN_CLIENT_SECRET: ~

  # The CA certificate for UAA
  UAA_CA_CERT: ~

  # PEM encoded RSA private key used to identify host.
  # This value uses a generated default.
  APP_SSH_KEY: ~

  # MD5 fingerprint of the host key of the SSH proxy that brokers connections to
  # application instances.
  # This value uses a generated default.
  APP_SSH_KEY_FINGERPRINT: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  AUCTIONEER_REP_CERT: ~

  # PEM-encoded key
  # This value uses a generated default.
  AUCTIONEER_REP_KEY: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  AUCTIONEER_SERVER_CERT: ~

  # PEM-encoded server key
  # This value uses a generated default.
  AUCTIONEER_SERVER_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_AUCTIONEER_CERT: ~

  # PEM-encoded key
  # This value uses a generated default.
  BBS_AUCTIONEER_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_CLIENT_CRT: ~

  # PEM-encoded client key.
  # This value uses a generated default.
  BBS_CLIENT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  BBS_REP_CERT: ~

  # PEM-encoded key
  # This value uses a generated default.
  BBS_REP_KEY: ~

  # PEM-encoded client certificate.
  # This value uses a generated default.
  BBS_SERVER_CRT: ~

  # PEM-encoded client key.
  # This value uses a generated default.
  BBS_SERVER_KEY: ~

  # The basic auth password that Cloud Controller uses to connect to the
  # blobstore server. Auto-generated if not provided. Passwords must be
  # alphanumeric (URL-safe).
  # This value uses a generated default.
  BLOBSTORE_PASSWORD: ~

  # The secret used for signing URLs between Cloud Controller and blobstore.
  # This value uses a generated default.
  BLOBSTORE_SECURE_LINK: ~

  # The PEM-encoded certificate (optionally as a certificate chain) for serving
  # blobs over TLS/SSL.
  # This value uses a generated default.
  BLOBSTORE_TLS_CERT: ~

  # The PEM-encoded private key for signing TLS/SSL traffic.
  # This value uses a generated default.
  BLOBSTORE_TLS_KEY: ~

  # The password for the bulk api.
  # This value uses a generated default.
  BULK_API_PASSWORD: ~

  # The PEM-encoded certificate for internal cloud controller traffic.
  # This value uses a generated default.
  CC_SERVER_CRT: ~

  # The PEM-encoded private key for internal cloud controller traffic.
  # This value uses a generated default.
  CC_SERVER_KEY: ~

  # The PEM-encoded certificate for internal cloud controller uploader traffic.
  # This value uses a generated default.
  CC_UPLOADER_CRT: ~

  # The PEM-encoded private key for internal cloud controller uploader traffic.
  # This value uses a generated default.
  CC_UPLOADER_KEY: ~

  # PEM-encoded broker server certificate.
  # This value uses a generated default.
  CF_USB_BROKER_SERVER_CERT: ~

  # PEM-encoded broker server key.
  # This value uses a generated default.
  CF_USB_BROKER_SERVER_KEY: ~

  # The password for access to the Universal Service Broker.
  # Example: password
  # This value uses a generated default.
  CF_USB_PASSWORD: ~

  # PEM-encoded consul client certificate
  # This value uses a generated default.
  CONSUL_CLIENT_CERT: ~

  # PEM-encoded consul client key
  # This value uses a generated default.
  CONSUL_CLIENT_KEY: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  CONSUL_SERVER_CERT: ~

  # PEM-encoded server key
  # This value uses a generated default.
  CONSUL_SERVER_KEY: ~

  # PEM-encoded client certificate
  # This value uses a generated default.
  DIEGO_CLIENT_CERT: ~

  # PEM-encoded client key
  # This value uses a generated default.
  DIEGO_CLIENT_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  DOPPLER_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  DOPPLER_KEY: ~

  # Basic auth password for access to the Cloud Controller's internal API.
  # This value uses a generated default.
  INTERNAL_API_PASSWORD: ~

  # PEM-encoded CA certificate used to sign the TLS certificate used by all
  # components to secure their communications.
  # This value uses a generated default.
  INTERNAL_CA_CERT: ~

  # PEM-encoded CA key.
  # This value uses a generated default.
  INTERNAL_CA_KEY: ~

  # PEM-encoded client certificate for loggregator mutual authentication
  # This value uses a generated default.
  LOGGREGATOR_CLIENT_CERT: ~

  # PEM-encoded client key for loggregator mutual authentication
  # This value uses a generated default.
  LOGGREGATOR_CLIENT_KEY: ~

  # PEM-encoded certificate.
  # This value uses a generated default.
  METRON_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  METRON_KEY: ~

  # Password used for the monit API.
  # This value uses a generated default.
  MONIT_PASSWORD: ~

  # The password for the MySQL server admin user.
  # This value uses a generated default.
  MYSQL_ADMIN_PASSWORD: ~

  # The password for access to the Cloud Controller database.
  # This value uses a generated default.
  MYSQL_CCDB_ROLE_PASSWORD: ~

  # The password for access to the usb config database.
  # Example: password
  # This value uses a generated default.
  MYSQL_CF_USB_PASSWORD: ~

  # The password for the cluster logger health user.
  # This value uses a generated default.
  MYSQL_CLUSTER_HEALTH_PASSWORD: ~

  # Database password for the diego locket service.
  # This value uses a generated default.
  MYSQL_DIEGO_LOCKET_PASSWORD: ~

  # The password for access to MySQL by diego.
  # This value uses a generated default.
  MYSQL_DIEGO_PASSWORD: ~

  # Password used to authenticate to the MySQL Galera healthcheck endpoint.
  # This value uses a generated default.
  MYSQL_GALERA_HEALTHCHECK_ENDPOINT_PASSWORD: ~

  # Database password for storing broker state for the Persi NFS Broker
  # This value uses a generated default.
  MYSQL_PERSI_NFS_PASSWORD: ~

  # The password for Basic Auth used to secure the MySQL proxy API.
  # This value uses a generated default.
  MYSQL_PROXY_ADMIN_PASSWORD: ~

  # The password for access to MySQL by the routing-api
  # This value uses a generated default.
  MYSQL_ROUTING_API_PASSWORD: ~

  # The password for access to NATS.
  # This value uses a generated default.
  NATS_PASSWORD: ~

  # Basic auth password to verify on incoming Service Broker requests
  # This value uses a generated default.
  PERSI_NFS_BROKER_PASSWORD: ~

  # PEM-encoded server certificate
  # This value uses a generated default.
  REP_SERVER_CERT: ~

  # PEM-encoded server key
  # This value uses a generated default.
  REP_SERVER_KEY: ~

  # Support for route services is disabled when no value is configured. A robust
  # passphrase is recommended.
  # This value uses a generated default.
  ROUTER_SERVICES_SECRET: ~

  # The public ssl cert for ssl termination.
  # This value uses a generated default.
  ROUTER_SSL_CERT: ~

  # The private ssl key for ssl termination.
  # This value uses a generated default.
  ROUTER_SSL_KEY: ~

  # Password for HTTP basic auth to the varz/status endpoint.
  # This value uses a generated default.
  ROUTER_STATUS_PASSWORD: ~

  # The password for access to the uploader of staged droplets.
  # This value uses a generated default.
  STAGING_UPLOAD_PASSWORD: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_ADAPT_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  SYSLOG_ADAPT_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_RLP_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  SYSLOG_RLP_KEY: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SYSLOG_SCHED_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  SYSLOG_SCHED_KEY: ~

  # PEM-encoded client certificate for internal communication between the cloud
  # controller and TPS.
  # This value uses a generated default.
  TPS_CC_CLIENT_CRT: ~

  # PEM-encoded client key for internal communication between the cloud
  # controller and TPS.
  # This value uses a generated default.
  TPS_CC_CLIENT_KEY: ~

  # PEM-encoded certificate for communication with the traffic controller of the
  # log infrastructure.
  # This value uses a generated default.
  TRAFFICCONTROLLER_CERT: ~

  # PEM-encoded key for communication with the traffic controller of the log
  # infrastructure.
  # This value uses a generated default.
  TRAFFICCONTROLLER_KEY: ~

  # The password for UAA access by the Cloud Controller.
  # This value uses a generated default.
  UAA_CC_CLIENT_SECRET: ~

  # The password for UAA access by the Routing API.
  # This value uses a generated default.
  UAA_CLIENTS_CC_ROUTING_SECRET: ~

  # Used for third party service dashboard SSO.
  # This value uses a generated default.
  UAA_CLIENTS_CC_SERVICE_DASHBOARDS_CLIENT_SECRET: ~

  # Used for fetching service key values from CredHub.
  # This value uses a generated default.
  UAA_CLIENTS_CC_SERVICE_KEY_CLIENT_SECRET: ~

  # The password for UAA access by the Universal Service Broker.
  # This value uses a generated default.
  UAA_CLIENTS_CF_USB_SECRET: ~

  # The password for UAA access by the Cloud Controller for fetching usernames.
  # This value uses a generated default.
  UAA_CLIENTS_CLOUD_CONTROLLER_USERNAME_LOOKUP_SECRET: ~

  # The password for UAA access by the SSH proxy.
  # This value uses a generated default.
  UAA_CLIENTS_DIEGO_SSH_PROXY_SECRET: ~

  # The password for UAA access by doppler.
  # This value uses a generated default.
  UAA_CLIENTS_DOPPLER_SECRET: ~

  # The password for UAA access by the gorouter.
  # This value uses a generated default.
  UAA_CLIENTS_GOROUTER_SECRET: ~

  # The password for UAA access by the login client.
  # This value uses a generated default.
  UAA_CLIENTS_LOGIN_SECRET: ~

  # The password for UAA access by the task creating the cluster administrator
  # user
  # This value uses a generated default.
  UAA_CLIENTS_SCF_AUTO_CONFIG_SECRET: ~

  # The password for UAA access by the TCP emitter.
  # This value uses a generated default.
  UAA_CLIENTS_TCP_EMITTER_SECRET: ~

  # The password for UAA access by the TCP router.
  # This value uses a generated default.
  UAA_CLIENTS_TCP_ROUTER_SECRET: ~

services:
  loadbalanced: false
kube:
  external_ips: []

  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 1

  storage_class:
    persistent: "persistent"
    shared: "shared"

  # Whether HostPath volume mounts are available
  hostpath_available: false

  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"
  auth: "rbac"

A.2 Complete UAA values.yaml file

This is the UAA values.yaml configuration file that is shipped with the Helm charts. It provides a complete reference of the configuration options for UAA and their default settings.
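
As with SCF, only a few of these values are normally set for a particular deployment, in a configuration file passed to Helm rather than by editing the shipped defaults. The following minimal sketch is illustrative only: the domain, IP address, secret, release name susecf-uaa, chart name suse/uaa, namespace uaa, and file name scf-config-values.yaml are assumptions to be replaced with values from your environment. The complete shipped file, with its defaults, follows.

# Hypothetical override file (scf-config-values.yaml)
env:
  DOMAIN: example.com
kube:
  external_ips: ["192.168.100.10"]
  storage_class:
    persistent: "persistent"
secrets:
  UAA_ADMIN_CLIENT_SECRET: "password"

helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml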

---
env:
  # Base domain name of the UAA endpoint; `uaa.${DOMAIN}` must be correctly
  # configured to point to this UAA instance
  DOMAIN: ~

  KUBE_SERVICE_DOMAIN_SUFFIX: ~

  # The cluster's log level: off, fatal, error, warn, info, debug, debug1,
  # debug2.
  LOG_LEVEL: "info"

  # The log destination to talk to. This has to point to a syslog server.
  SCF_LOG_HOST: ~

  # The port used by rsyslog to talk to the log destination. If not set it
  # defaults to 514, the standard port of syslog.
  SCF_LOG_PORT: ~

  # The protocol used by rsyslog to talk to the log destination. The allowed
  # values are tcp and udp. The default is tcp.
  SCF_LOG_PROTOCOL: "tcp"

sizing:
  # Flag to activate high-availability mode
  HA: false

  # Global CPU configuration
  cpu:
    # Flag to activate cpu requests
    requests: false

    # Flag to activate cpu limits
    limits: false

  # Global memory configuration
  memory:
    # Flag to activate memory requests
    requests: false

    # Flag to activate memory limits
    limits: false

  # The mysql role contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: mysql
  mysql:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The mysql role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    disk_sizes:
      mysql_data: 20

    # Unit [MiB]
    memory:
      request: 1779
      limit: ~

  # The mysql-proxy role contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - patch-properties: Dummy BOSH job used to host parameters that are used in
  #   SCF patches for upstream bugs
  #
  # Also: proxy
  mysql_proxy:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The mysql-proxy role can scale between 1 and 3 instances.
    # For high availability it needs at least 2 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 63
      limit: ~

  # The secret-generation role contains the following jobs:
  #
  # - generate-uaa-secrets: This job will generate the secrets for the cluster
  secret_generation:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The secret-generation role cannot be scaled.
    count: 1

    # Unit [millicore]
    cpu:
      request: 1000
      limit: ~

    # Unit [MiB]
    memory:
      request: 256
      limit: ~

  # The uaa role contains the following jobs:
  #
  # - global-uaa-properties: Dummy BOSH job used to host global parameters that
  #   are required to configure SCF / fissile
  #
  # - uaa: The UAA is the identity management service for Cloud Foundry. Its
  #   primary role is as an OAuth2 provider, issuing tokens for client
  #   applications to use when they act on behalf of Cloud Foundry users. It can
  #   also authenticate users with their Cloud Foundry credentials, and can act
  #   as an SSO service using those credentials (or others). It has endpoints
  #   for managing user accounts and for registering OAuth2 clients, as well as
  #   various other management functions.
  uaa:
    # Node affinity rules can be specified here
    affinity: {}

    # Additional privileges can be specified here
    capabilities: []

    # The uaa role can scale between 1 and 65535 instances.
    count: 1

    # Unit [millicore]
    cpu:
      request: 2000
      limit: ~

    # Unit [MiB]
    memory:
      request: 2205
      limit: ~

secrets:
  # The password of the admin client - a client named admin with uaa.admin as an
  # authority.
  UAA_ADMIN_CLIENT_SECRET: ~

  # PEM-encoded CA certificate used to sign the TLS certificate used by all
  # components to secure their communications.
  # This value uses a generated default.
  INTERNAL_CA_CERT: ~

  # PEM-encoded CA key.
  # This value uses a generated default.
  INTERNAL_CA_KEY: ~

  # PEM-encoded JWT certificate.
  # This value uses a generated default.
  JWT_SIGNING_CERT: ~

  # PEM-encoded JWT signing key.
  # This value uses a generated default.
  JWT_SIGNING_KEY: ~

  # Password used for the monit API.
  # This value uses a generated default.
  MONIT_PASSWORD: ~

  # The password for the MySQL server admin user.
  # This value uses a generated default.
  MYSQL_ADMIN_PASSWORD: ~

  # The password for the cluster logger health user.
  # This value uses a generated default.
  MYSQL_CLUSTER_HEALTH_PASSWORD: ~

  # The password used to contact the sidecar endpoints via Basic Auth.
  # This value uses a generated default.
  MYSQL_GALERA_HEALTHCHECK_ENDPOINT_PASSWORD: ~

  # The password for Basic Auth used to secure the MySQL proxy API.
  # This value uses a generated default.
  MYSQL_PROXY_ADMIN_PASSWORD: ~

  # PEM-encoded certificate
  # This value uses a generated default.
  SAML_SERVICEPROVIDER_CERT: ~

  # PEM-encoded key.
  # This value uses a generated default.
  SAML_SERVICEPROVIDER_KEY: ~

  # The password for access to the UAA database.
  # This value uses a generated default.
  UAADB_PASSWORD: ~

  # The server's ssl certificate. The default is a self-signed certificate and
  # should always be replaced for production deployments.
  # This value uses a generated default.
  UAA_SERVER_CERT: ~

  # The server's ssl private key. Only passphrase-less keys are supported.
  # This value uses a generated default.
  UAA_SERVER_KEY: ~

services:
  loadbalanced: false
kube:
  external_ips: []

  # Increment this counter to rotate all generated secrets
  secrets_generation_counter: 1

  storage_class:
    persistent: "persistent"
    shared: "shared"

  # Whether HostPath volume mounts are available
  hostpath_available: false

  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"
  auth: "rbac"