SUSE CaaS Platform 3

Deployment Guide

Shows how to deploy a SUSE® CaaS Platform cluster onto bare metal or virtual machines. Describes multiple deployment methods: installing each node individually from ISO images, deploying automatically using AutoYaST, and building a cluster of Xen or KVM VMs using pre-installed virtual disk images.

Authors: Liam Proven, Christoph Wickert, Markus Napp, Sven Seeberg-Elverfeldt, and Jana Halačková
Publication Date: November 16, 2018

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

1 Required Background

To keep the scope of these guidelines manageable, certain technical assumptions have been made:

  • You have some computer experience and are familiar with common technical terms.

  • You are familiar with the documentation for your system and the network on which it runs.

  • You have a basic understanding of Linux systems.

2 Available Documentation

We provide HTML and PDF versions of our books in different languages. Documentation for our products is available at, where you can also find the latest updates and browse or download the documentation in various formats.

The following documentation is available for this product:

Deployment Guide

The SUSE CaaS Platform deployment guide gives you details about installation and configuration of SUSE CaaS Platform along with a description of architecture and minimum system requirements.

Book “Installation Quick Start”

The SUSE CaaS Platform quick start guides you through the installation of a minimal cluster as quickly as possible.

Book “Administration Guide”

The SUSE CaaS Platform Administration Guide discusses authorization, updating clusters and individual nodes, monitoring, the use of Helm and Tiller, the Kubernetes dashboard, and integration with SUSE Enterprise Storage.

3 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to

To report bugs for a product component, go to, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to and enter your comments there.


For feedback on the documentation of this product, you can also send a mail to Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

4 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • x86_64 This paragraph is only relevant for the AMD64/Intel 64 architecture. The arrows mark the beginning and the end of the text block.

    System z, POWER This paragraph is only relevant for the architectures z Systems and POWER. The arrows mark the beginning and the end of the text block.

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

5 About the Making of This Documentation

This documentation is written in SUSEDoc, a subset of DocBook 5. The XML source files were validated by jing, processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted with Apache FOP. The open source tools and the environment used to build this documentation are provided by the DocBook Authoring and Publishing Suite (DAPS). The project's home page can be found at

The XML source code of this documentation can be found at

1 About SUSE CaaS Platform

SUSE CaaS Platform is a Cloud Native Computing Foundation (CNCF) certified Kubernetes distribution built on top of SUSE MicroOS. SUSE MicroOS is a minimalist operating system based on SUSE Linux Enterprise, dedicated to hosting containers. SUSE MicroOS inherits the benefits of SUSE Linux Enterprise in the form of a smaller, simpler, and more robust operating system, optimized for large, clustered deployments. It also features an atomic, transactional update mechanism, making the system more resilient against software-update-related problems.

SUSE CaaS Platform automates the orchestration and management of containerized applications and services with powerful Kubernetes capabilities, including:

  • Workload scheduling optimizes hardware utilization while taking the container requirements into account.

  • Service proxies provide single IP addresses for services and distribute the load between containers.

  • Application scaling up and down accommodates changing loads.

  • Non-disruptive rollout/rollback of new applications and updates enables frequent changes without downtime.

  • Health monitoring and management supports application self-healing and ensures application availability.

In addition, SUSE CaaS Platform simplifies the platform operator’s experience, with everything you need to get up and running quickly, and to manage the environment effectively in production. It provides:

  • A complete container execution environment, including a purpose-built container host operating system (SUSE MicroOS), container runtime, and container image registries.

  • Enhanced datacenter integration features that enable you to plug Kubernetes into new or existing infrastructure, systems, and processes.

  • Application ecosystem support with SUSE Linux Enterprise container base images, and access to tools and services offered by SUSE Ready for CaaS Platform partners and the Kubernetes community.

  • End-to-End security, implemented holistically across the full stack.

  • Advanced platform management that simplifies platform installation, configuration, re-configuration, monitoring, maintenance, updates, and recovery.

  • Enterprise hardening including comprehensive interoperability testing, support for thousands of platforms, and world-class platform maintenance and technical support.

You can deploy SUSE CaaS Platform onto physical servers or use it on virtual machines. After deployment, it is immediately ready to run and provides a highly-scalable cluster.

While SUSE CaaS Platform inherits benefits of SUSE Linux Enterprise and uses tools and technologies well-known to system administrators such as cloud-init and Salt, the main innovation (compared to SUSE Linux Enterprise Server) comes with transactional updates. A transactional update is an update that can be installed when the system is running without any down-time. A transactional update can be rolled back, so if the update fails or is not compatible with your infrastructure, you can restore the previous system state.

SUSE CaaS Platform uses the Btrfs file system with the following characteristics:

  • The root filesystem and its snapshots are read-only.

  • Sub-volumes for data sharing are read-write.

  • SUSE CaaS Platform introduces overlays of the /etc directories used by cloud-init and Salt.

For more information, including a list of the various components which make up SUSE CaaS Platform, please refer to the Release Notes on

1.1 Architectural Overview

A typical SUSE CaaS Platform cluster consists of several types of nodes:

  • The administration node is a Salt master which assigns roles to Salt minions. This node runs Velum, the Web-based dashboard for managing the whole cluster. For details, refer to Section 1.3, “The Administration Node”.

  • Each cluster node is a Salt minion which can have one of the following roles:

    • Kubernetes master: master nodes manage the worker nodes.

    • Kubernetes worker: worker nodes run the application containers with the main workload of the cluster.

In large-scale clusters, there are other types of nodes that can help you to manage and run the cluster:

  • a local SMT server that manages subscriptions for workers and so decreases the traffic to the SUSE Customer Center.

  • a log server that stores the cluster nodes' logs.

The following figure illustrates the interactions of the nodes.

SUSE CaaS Platform Nodes Architecture
Figure 1.1: SUSE CaaS Platform Nodes Architecture

1.2 Software Components

To run the whole cluster, SUSE CaaS Platform uses various technologies such as Salt, Flannel networking, the etcd distributed key-value store, a controller manager, a scheduler, kubelet, the Kubernetes API server, and a choice of two container runtime engines, Docker or CRI-O.

  • Docker.  This is the leading open-source format for application containers. It is fully supported by SUSE.

    For more information, see

  • CRI-O.  Designed specifically for Kubernetes, CRI-O is an implementation of CRI, the Container Runtime Interface. A lightweight alternative to Docker or Moby, it supports Open Container Initiative (OCI) images.

    For more information, see

    Note: Container Engine Support Status

    CRI-O is included as an unsupported technology preview, to allow customers to evaluate the new technology. It is not supported for use in production deployments.

Salt is used to manage deployment and administration of the cluster. Salt-api is used to distribute commands from Velum to the salt-master daemon. The salt-master daemon stores events in MariaDB, which is also used to store Velum data. The salt-minion daemon on the administration node generates the required certificates, and Salt minions on the worker nodes communicate with the administration node.

As there can be several containers running on each host machine, each container is assigned an IP address that is used for communication with other containers on the same host. Containers might need to have a unique IP address exposed for network communications, thus Flannel networking is used. Flannel gives each host an IP subnet from which the container engine can allocate IP addresses to containers. The mapping of IP addresses is stored by etcd. The flanneld daemon manages routing of packets and mapping of IP addresses.
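Flannel reads its network configuration from etcd as a small JSON document. The following is an illustrative sketch only: the key names are Flannel's standard configuration keys, but the address range shown is Flannel's common default and may differ from the values used by SUSE CaaS Platform.

```json
{
  "Network": "10.244.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan"
  }
}
```

From this pool, each host receives a /24 subnet for its containers, and the vxlan backend carries the inter-host traffic.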

Within the cluster there are several instances of etcd, each with a different purpose. The etcd discovery daemon running on the administration node is used to bootstrap instances of etcd on other nodes and is not part of the etcd cluster on the other nodes. The etcd instance on the master node stores events from the Kubernetes API Server. The etcd instance on worker nodes runs as a proxy that forwards clients to the etcd on the master node.

Kubernetes is used to manage container orchestration. The following services and daemons are used by Kubernetes:

kubelet

An agent that runs on each node to monitor and control all the containers in a pod, ensuring that they are running and healthy.

kube-apiserver

This daemon exposes a REST API used to manage pods. The API server performs authentication and authorization.

kube-scheduler

The scheduler assigns pods onto nodes. It does not run them itself; that is kubelet's job.

kube-controller-manager

The controller manager monitors the shared state of the cluster through the API server and handles pod replication, deployments, and similar tasks.

kube-proxy

This runs on each node and is used to distribute loads and reach services.
The following figure provides a more detailed view of the cluster, including the services running on each node type.

Services on nodes
Figure 1.2: Services on nodes

1.3 The Administration Node

The administration node manages the cluster and runs several applications required for proper functioning of the cluster. Because it is integral to the operation of SUSE CaaS Platform, the administration node must have a fully-qualified domain name (FQDN) which can be resolved from outside the cluster.

The administration node runs Velum, the administration dashboard; the MariaDB database; the etcd discovery server, salt-api, salt-master and salt-minion. The dashboard, database, and daemons all run in separate containers.

Velum is a Web application that enables you to deploy, manage, and monitor the cluster. The dashboard manages the cluster using salt-api to interact with the underlying Salt technology.

The containers on the administration node are managed by kubelet as a static pod. Bear in mind that this kubelet does not manage the cluster nodes. Each cluster node has its own running instance of kubelet.

1.4 Master Nodes

SUSE CaaS Platform master nodes monitor and control the worker nodes. They make global decisions about the cluster, such as starting and scheduling pods of containers on the worker nodes. They run kube-apiserver but do not host application containers.

Each cluster must have at least one master node. For larger clusters, more master nodes can be added, but there must always be an odd number.
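One common rationale for the odd-number rule is quorum-based consensus, as used by etcd: a cluster of n members tolerates the loss of (n - 1) / 2 of them (integer division), so an even member count adds hardware but no extra failure tolerance. A small shell illustration:

```shell
# Failure tolerance of a quorum-based cluster (integer division):
# n members tolerate (n - 1) / 2 failures, so 4 is no better than 3.
for n in 3 4 5; do
  echo "$n members tolerate $(( (n - 1) / 2 )) failure(s)"
done
```

For example, three masters tolerate one failure, and five tolerate two; a fourth master would not raise the tolerance beyond one.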

Like the administration node, the master node must have a resolvable FQDN. For Velum to function correctly, it must always be able to resolve the IP address of a master node, so if there are multiple master nodes, they must all share the same FQDN, meaning that load-balancing should be configured.

1.5 Worker Nodes

The worker nodes are the machines in the cluster which host application containers. Each runs its own instance of kubelet which controls the pods on that machine. Earlier versions of Kubernetes referred to worker nodes as "minions".

Each worker node runs a container runtime engine (either Docker or CRI-O) and an instance of kube-proxy.

The worker nodes do not require individual FQDNs, although it may help in troubleshooting network problems.

2 System Requirements

This chapter specifies the requirements to install and operate SUSE CaaS Platform. Before you begin the installation, please make sure your system meets all requirements listed below.

2.1 Cluster Size Requirements

SUSE CaaS Platform is a dedicated cluster operating system and only functions in a multi-node configuration. It requires a connected group of four or more physical or virtual machines.

The minimum supported cluster size is four nodes: a single administration node, one master node, and two worker nodes.

Note: Test and Proof-of-Concept Clusters

It is possible to provision a three-node cluster with only a single worker node, but this is not a supported configuration for deployment.

For improved performance, multiple master nodes are supported, but there must always be an odd number. For cluster reliability, when using multiple master nodes, some form of DNS load-balancing should be used.

Any number of worker nodes may be added up to the maximum cluster size. For the current maximum supported number of nodes, please refer to the Release Notes on

2.1.1 Salt Cluster Sizing

SUSE CaaS Platform relies on SaltStack (Salt for short) to automate various tasks. The actions are performed by so-called Salt minions, which are controlled by a Salt master. A Salt master can only serve a limited number of minions at the same time: it handles their requests with a configured number of worker threads, and the number of available worker threads limits the overall size of the cluster.

Up until a total cluster size of 40 nodes, no adjustments to the default configuration of SUSE CaaS Platform need to be made.

For clusters that consist of more than 40 nodes the number of Salt worker threads must be adjusted as follows:

Table 2.1: Salt Cluster Sizing
Cluster Size (nodes)        Salt worker threads

As a rule of thumb, if the cluster grows beyond 100 nodes, the number of worker threads should be set to about two thirds of the total number of cluster nodes.
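The rule of thumb above can be computed directly; this is an informal sketch, not part of the supported sizing procedure:

```shell
# Recommended Salt worker threads for clusters of more than 100 nodes:
# roughly two thirds of the node count (integer arithmetic).
nodes=150
threads=$(( nodes * 2 / 3 ))
echo "Recommended worker_threads for $nodes nodes: $threads"
```

For 150 nodes this yields 100 worker threads; for 120 nodes, 80.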

To adjust the number of Salt worker threads, refer to: Book “Administration Guide”, Chapter 2 “Cluster Management”, Section 2.2.1 “Adjusting The Number Of Salt Worker Threads”.
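For orientation only, the Salt master option involved is worker_threads. A hypothetical drop-in fragment is shown below; the file name is illustrative, and the supported procedure is the one in the Administration Guide:

```yaml
# /etc/salt/master.d/worker-threads.conf  (hypothetical file name)
worker_threads: 80
```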

2.2 Supported Environments

Regarding deployment scenarios, SUSE supports SUSE CaaS Platform running in the following environments:

  • SUSE CaaS Platform only supports x86_64 hardware.

  • Aside from this, the same hardware and virtualization platforms as SUSE Linux Enterprise 12 SP3 are supported. For a list of certified hardware, see

  • Virtualized, running under the following hypervisors:

    • KVM

      • on SUSE Linux Enterprise 11 SP4

      • on SUSE Linux Enterprise 12 SP1

      • on SUSE Linux Enterprise 12 SP2

      • on SUSE Linux Enterprise 12 SP3

    • Xen

      • same host platforms as for KVM

      • full virtualization

      • paravirtualization

      • Citrix XenServer 6.5

    • VMware

      Important: Disable VMware Memory Ballooning

      When installing SUSE CaaS Platform on VMware you must disable VMware's memory ballooning feature. VMware has instructions on how to do this here:

      When using pre-installed disk images, read Section 3.2.2, “Converting Images For VMware ESX and ESXi”. After bootstrapping the cluster, install the VMware tools. For details, see Section 4.6, “Installing VMware Tools”.

      • ESX 5.5

      • ESXi 6.0

      • ESXi 6.5+

    • Hyper-V

      • Windows Server 2008 SP2+

      • Windows Server 2008 R2 SP1+

      • Windows Server 2012+

      • Windows Server 2012 R2+

      • Windows Server 2016

    • Oracle VM 3.3

  • Private and Public Cloud Environments

    • SUSE OpenStack Cloud 7

    • Amazon AWS*

    • Microsoft Azure*

    • Google Compute Engine*

2.3 Container Data Storage

Storage can be provided using:

  • SUSE Enterprise Storage

  • NFS

  • hostpath

    Note: hostpath Storage

    Storage using hostpath is still supported, but by default it is disabled by PodSecurityPolicies.
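As an illustration, a PodSecurityPolicy that re-enables hostPath volumes might look like the following sketch. The policy name is hypothetical, the API group version may vary with the Kubernetes release, and such a policy should only be applied after considering the security implications:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-hostpath        # hypothetical name
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - hostPath                # permit hostPath in addition to other volume types
```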

2.4 Minimum Node Specification

Each node in the cluster must meet the following minimum specifications. All these specifications must be adjusted according to the expected load and type of deployments.

CPU
  • 4 Core AMD64/Intel* EM64T processor

  • 32-bit processors are not supported

Memory
  • 8 GB of RAM

    Although it may be possible to install SUSE CaaS Platform with less memory than recommended, there is a high risk that the operating system will run out of memory and subsequently cause a cluster failure.

    Note: Swap partitions

    Kubernetes does not support swap.

    For technical reasons, an administration node installed from an ISO image will have a small swap partition which will be disabled after installation. Nodes built using AutoYaST do not have a swap partition.

Storage Size
  • 40 GB for the root file system with Btrfs and enabled snapshots.

    Note: Cloud default root volume size

    In some Public Cloud frameworks the default root volume size of the images is smaller than 40 GB. You must resize the root volume before instance launch, using the command line tools or the web interface of the framework of your choice.
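For example, on Amazon EC2 the root volume size can be overridden at launch with a block device mapping passed to aws ec2 run-instances --block-device-mappings. The following fragment is a sketch: the device name depends on the image, and 40 GB is the minimum from this guide:

```json
[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": {
      "VolumeSize": 40
    }
  }
]
```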

Storage Performance
  • IOPS: 500 sequential IOPS

  • Write Performance: 10 MB/s

    Note: etcd Storage requirements

    Storage performance requirements are tied closely to the etcd hardware recommendations

2.5 Network Requirements

  • All the nodes in the cluster must be on the same network and be able to communicate directly with one another.

    Important: Reliable Networking

    Please make sure all nodes can communicate without interruptions.

  • The admin node and the Kubernetes API master must have valid Fully-Qualified Domain Names (FQDNs), which can be resolved both by all other nodes and from other networks which need to access the cluster.

    Admin node and Kubernetes API master node should be configured as CNAME records in the local DNS. This improves portability for disaster recovery.

  • A DHCP server to dynamically provide IP addresses and host names for the nodes in your cluster (unless you configure all nodes statically).

  • A DNS server to resolve host names. If you are using host names to specify nodes, please make sure you have reliable DNS resolution at all times, especially in combination with DHCP.

    Important: Unique Host Names

    Host names must be unique. It is recommended to let the DHCP server provide not only IP addresses but also host names of the cluster nodes.

  • On the same network, a separate computer with a Web browser is required in order to complete bootstrap of the cluster.

  • We recommend that SUSE CaaS Platform is set up to run in two subnets in one network segment, also referred to as a VPC or VNET. The administration node should run in a subnet that is not accessible to the outside world and should be connected to your network via VPN or other means. Consider a security group/firewall that only allows ingress traffic on ports 22 (SSH) and 443 (HTTPS) for the administration node from outside the VPC. All nodes must have access to the Internet through some route in order to connect to SUSE Customer Center and receive updates, or must otherwise be configured to receive updates, for example through SMT.

    Depending on the applications running in your cluster you may consider exposing the subnet for the cluster nodes to the outside world. Use a security group/firewall that only allows incoming traffic on ports served by your workload. For example, a containerized application providing the backend for REST based services with content served over https should only allow ingress traffic on port 443.

  • In a SUSE CaaS Platform cluster, internal TCP/IP ports are managed using iptables controlled by Salt and so need not be manually configured. However, for reference and for environments where there are existing security policies, the following are the standard ports in use.

    Table 2.2: Node types and open ports

    All nodes

      • 22: SSH (required in public clouds)

    Administration node

      • 80: HTTP (only used for AutoYaST)

      • 389: LDAP (user management)

      • etcd discovery

      • 4505 - 4506: Salt

    Master nodes

      • 2379 - 2380: etcd (peer-to-peer traffic)

      • 6443 - 6444: Kubernetes API server

      • 8471 - 8472: VXLAN traffic (used by Flannel)

      • 10250, 10255: kubelet

      • Dex (OpenID Connect)

    Worker nodes

      • 2379 - 2380: etcd (peer-to-peer traffic)

      • 8471 - 8472: VXLAN traffic (used by Flannel)

      • 10250, 10255: kubelet

      • Dex (OpenID Connect)

When an additional ingress mechanism is used, further ports may also be open.

2.6 Public Cloud Requirements

Amazon AWS*

The administration node instance must be launched with an IAM role that allows full access to the EC2 API.

Microsoft Azure*

All security credentials will be collected during setup.

Microsoft Azure does not provide a time protocol service. Please refer to the SUSE Linux Enterprise Server documentation for more information about NTP configuration. No manual NTP configuration is required on the cluster nodes; they synchronize time with the administration node.

Google Compute Engine*

The instance must be launched with an IAM role including Compute Admin and Service Account Actor scopes.

2.7 Limitations

  • SUSE CaaS Platform 3 does not support remote installations with Virtual Network Computing (VNC).

  • SUSE CaaS Platform is a dedicated cluster operating system and does not support dual-booting with other operating systems. Ensure that all drives in all cluster nodes are empty and contain no other operating systems before beginning installation.

3 Preparing The Installation

This chapter describes how to prepare the installation of SUSE CaaS Platform on physical machines, on manually configured virtual machines, or in private cloud environments.

There are several ways to install SUSE CaaS Platform:

Section 3.1, “Installing From USB, DVD Or ISO Images”

This method requires a lot of manual interaction with the physical or virtual machines. It is suitable for small to medium cluster sizes.

Section 3.2, “Installing From Virtual Disk Images”

You can use prepared disk images ready to deploy on virtual machines. This increases the deployment speed.

Section 3.3, “Installing From Network Source”

With this method, the nodes boot into the installation from a network installation server. This reduces the need to interact with every single node. The method is suitable for medium cluster sizes.

Section 3.4, “Installing In SUSE OpenStack Cloud”

You can deploy SUSE CaaS Platform on SUSE OpenStack Cloud. This method is suitable for large cluster sizes.

Section 3.5, “Installing In Public Cloud”

This section is about installing SUSE CaaS Platform in a public cloud, for example Microsoft Azure*, Amazon AWS* and Google Compute Engine*.

To customize the setup of the nodes, use cloud-init. For details, see Section 5.2, “Customizing Configuration with cloud-init”.

3.1 Installing From USB, DVD Or ISO Images

This procedure provides an overview of the steps for the cluster deployment with classical boot devices like DVD drives. This method is suitable for small to medium cluster sizes.

  1. Choose an installation medium. You can install from DVD or USB sticks. On virtual machines, you can install from ISO images.

  2. Boot the machine designated for the administration node from your selected medium. Then follow the installation and configuration instructions detailed in Section 4.1, “Installing the Administration Node” and Section 4.3, “Configuring the Administration Node”.

  3. Boot the machines designated to be the Master and Worker Nodes from your selected medium. Then follow the installation and configuration instructions detailed in Section 4.4, “Installing Master and Worker Nodes”. To automate the installation, you can use an AutoYaST file provided by the administration node. We recommend to use AutoYaST to speed up the node deployment. For details, see Section 4.4.3, “Automatic Installation Using AutoYaST”.

  4. Finish the installation by bootstrapping the cluster. For details, see Section 4.5, “Bootstrapping the Cluster”.

  5. For VMware ESX/ESXi: install the open-vm-tools package. See Section 4.6, “Installing VMware Tools”.

3.2 Installing From Virtual Disk Images

For building clusters from virtual machines on supported hypervisors, it is not necessary to individually install each node. SUSE offers pre-installed VM disk images in the following formats:

KVM and Xen

In QCOW2 format, for KVM and for Xen using full virtualization.


Xen (paravirtualized)

In QCOW2 format, for Xen using paravirtualization.

VMware

In VMDK format, for VMware ESXi.

Hyper-V

For Microsoft Hyper-V.

3.2.1 Overview

When deploying a cluster node from pre-installed disk images, the setup program never runs. Therefore, the configuration must happen while the node is starting up. For this purpose, SUSE CaaS Platform includes cloud-init. The described process is very similar for all hypervisors. The examples in this manual use KVM running on SUSE Linux Enterprise. The following procedure outlines the overall process.

  1. For VMware ESX/ESXi: Convert VMDK images. See Section 3.2.2, “Converting Images For VMware ESX and ESXi”.

  2. Write cloud-init configuration files. See Section 3.2.3, “Configuration Files”.

  3. Prepare ISO images containing the cloud-init configuration files. See Section 3.2.4, “Preparing An ISO image”.

  4. Create a copy of the downloaded disk image for each virtual machine, naming them appropriately: for example, caas-admin, caas-master, caas-worker1, caas-worker2 and so on.

  5. Boot and configure the administration node. See Section 3.2.5, “Bringing Up An Administration Node”.

  6. Boot and configure the worker nodes. See Section 3.2.6, “Bringing Up A Worker Node”.

3.2.2 Converting Images For VMware ESX and ESXi

Downloaded disk images need to be converted for usage with VMware ESX and ESXi. On the ESX/ESXi host, run the following command on the downloaded disk images:

root # vmkfstools -i DOWNLOADED_IMAGE.vmdk CONVERTED_IMAGE.vmdk

3.2.3 Configuration Files

There are two separate configuration files: user-data and meta-data. Each node needs both. Thus, you need to prepare at a minimum one pair of files for the administration node, and another pair of files for all the worker nodes.

Place the files into subdirectories named cc-admin for the administration node and cc-worker for the worker nodes.

So, for instance, if your working directory is ~/cloud-config, then for the admin node, you need these two files:

~/cloud-config/cc-admin/meta-data
~/cloud-config/cc-admin/user-data

For a worker node, you need:

~/cloud-config/cc-worker/meta-data
~/cloud-config/cc-worker/user-data

The same meta-data file can be used for both node types. Here is a sample meta-data file:

instance-id: iid-CAAS01
network-interfaces: |
   auto eth0
   iface eth0 inet dhcp
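If no DHCP server is available, a static address can be configured in the same file instead. The addresses below are illustrative examples:

```yaml
instance-id: iid-CAAS01
network-interfaces: |
   auto eth0
   iface eth0 inet static
   address 192.168.1.10
   netmask 255.255.255.0
   gateway 192.168.1.1
```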

The user-data file contains settings such as time servers, the root password, and the node type.

Here is an example cc-admin/user-data file for an administration node. Replace PASSWORD with the actual root password:

#cloud-config
debug: True
disable_root: False
ssh_deletekeys: False
ssh_pwauth: True
chpasswd:
  list: |
    root:PASSWORD
  expire: False
runcmd:
  - /usr/bin/systemctl enable --now ntpd
suse_caasp:
  role: admin

Here is an example cc-worker/user-data for a worker node. Rather than providing the root password in clear text, you can use a hash instead; replace HASHED_PASSWORD with a password hash, for example one generated with SHA-256:

#cloud-config
debug: True
disable_root: False
ssh_deletekeys: False
ssh_pwauth: True
chpasswd:
  list: |
    root:HASHED_PASSWORD
  expire: False
suse_caasp:
  role: cluster
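One way to generate such a hash is with OpenSSL, assuming OpenSSL 1.1.1 or later, where the -5 option selects the SHA-256 based crypt scheme:

```shell
# Generate a salted, SHA-256-based crypt hash for the chpasswd list.
# The output starts with the $5$ prefix that marks the SHA-256 scheme.
openssl passwd -5 'MySecretPassword'
```

Paste the resulting string (including the $5$ prefix) in place of the hash in the user-data file.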

For more information, also refer to Section 5.2, “Customizing Configuration with cloud-init”.

3.2.4 Preparing An ISO image

First, edit the configuration files as described in Section 3.2.3, “Configuration Files”. Then create an ISO image with the volume label cidata containing only the subdirectory for that node type.

On SUSE Linux Enterprise 12 or openSUSE 42, use genisoimage to do this. On SUSE Linux Enterprise 15 or openSUSE 15, use mkisofs. The parameters are the same for both commands.

For example, to create the ISO image for an admin node on a computer running openSUSE 42:

tux > sudo genisoimage -output cc-admin.iso -volid cidata -joliet -rock cc-admin

To create the ISO image for a worker node on a computer running openSUSE 15, substitute the name of the directory containing the configuration files for a worker node (cc-worker):

tux > sudo mkisofs -output cc-worker.iso -volid cidata -joliet -rock cc-worker

3.2.5 Bringing Up An Administration Node

  1. Create a new VM for the administration node.

  2. Attach a copy of the downloaded disk image as its main hard disk.

  3. Attach the cc-admin.iso image as a virtual DVD unit. For details about preparing the image, see Section 3.2.4, “Preparing An ISO image”

  4. Start the VM.

  5. Configure the new administration node as in step Section 4.3, “Configuring the Administration Node”.

3.2.6 Bringing Up A Worker Node

Repeat the following steps for each worker node:

  1. Create a new VM for the worker node.

  2. Attach a copy of the downloaded disk image as its main hard disk.

  3. Attach the cc-worker.iso disk image as a virtual DVD unit. For details about preparing the image, see Section 3.2.4, “Preparing An ISO image”. The ISO image can be reused for multiple worker nodes.

  4. Start the VM.

Once you have brought up as many worker nodes as you need, proceed to bootstrap the cluster using the Velum dashboard.

3.3 Installing From Network Source

This procedure provides an overview of the steps for cluster deployment from a network installation server. A PXE environment is used to provide the nodes with the data required for installation.

  1. Install an installation server that provides DHCP, PXE and TFTP services. Additionally, you can provide the installation data on an HTTP or FTP server. For details, refer to the SUSE Linux Enterprise 12 Deployment Guide.

    You can directly use the initrd and linux files from your installation media, or install the package tftpboot-installation-CAASP-3.0 onto your TFTP server. The package provides the required initrd and linux files in the /srv/tftpboot/ directory. You will need to modify the paths used in the SUSE Linux Enterprise 12 Deployment Guide to correctly point to the files provided by the package.

  2. PXE boot the machine designated for the administration node. Then follow the installation and configuration instructions detailed in Section 4.1, “Installing the Administration Node” and Section 4.3, “Configuring the Administration Node”.

  3. PXE boot the machines designated to be the master and worker nodes. Then follow the installation and configuration instructions detailed in Section 4.4, “Installing Master and Worker Nodes”. To automate the installation, you can use an AutoYaST file provided by the administration node. For details, see Section 4.4.3, “Automatic Installation Using AutoYaST”.

  4. Finish the installation by bootstrapping the cluster. For details, see Section 4.5, “Bootstrapping the Cluster”.

  5. For VMware ESX/ESXi: install the open-vm-tools package. See Section 4.6, “Installing VMware Tools”.
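The PXE setup from step 1 can be sketched as a pxelinux configuration entry. All paths and the install URL below are assumptions for illustration; adapt them to the locations your TFTP and HTTP servers actually serve:

```
# /srv/tftpboot/pxelinux.cfg/default (hypothetical example)
default caasp
prompt 0
label caasp
  kernel boot/x86_64/loader/linux
  append initrd=boot/x86_64/loader/initrd install=http://192.168.1.1/caasp
```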

3.4 Installing In SUSE OpenStack Cloud

You can deploy a SUSE CaaS Platform on SUSE OpenStack Cloud using OpenStack. You will need a SUSE CaaS Platform machine image and OpenStack Heat templates. Once you have created a stack, you will continue with the SUSE CaaS Platform setup.

Note: SUSE CaaS Platform Machine Image For SUSE OpenStack Cloud

Download the latest SUSE CaaS Platform for OpenStack image (for example, SUSE-CaaS-Platform-3.0-OpenStack-Cloud.x86_64-1.0.0-GM.qcow2).

Note: OpenStack Heat Templates Repository

SUSE CaaS Platform Heat templates are available from GitHub.

3.4.1 Using The Horizon Dashboard

  1. Go to Project → Compute → Images and click on Create Image.

    Give your image a name (for example: CaaSP-3); you will need to use this later to find the image.

  2. Go to Project → Orchestration and click on Stacks.

  3. Click on Launch Stack and provide the stack templates. Either upload the files, provide the URL to the raw files directly (this applies only to the stack template), or copy and paste the contents into the Direct Input fields.

    Warning: Replace The Default root_password

    Do not use the caasp-environment.yaml directly from the GitHub repository.

    You must make sure to replace the value for root_password with a secure password. This will become the password for the root account on all nodes in the stack.

  4. Click Next.

  5. Now you need to define more information about the stack.

    Stack Name

    Give your stack a name


    Your SUSE OpenStack Cloud password


    Select the image your machines will be created from


    Set the root password for your cluster machines


    Select the machine flavor for your nodes


    Number of worker nodes to be launched


    Select an external network that your cluster will be reachable from


    The internal network range to be used inside the cluster


    Internal name server for the cluster

  6. Click Launch.

  7. After the cluster has been started and the cluster overview shows Create Complete, find the external IP address for the admin node of your cluster. Visit that IP address in your browser. You should see the Velum login page and can continue with Section 4.3, “Configuring the Administration Node”.

3.4.2 Using The OpenStack CLI


You need to have access to the OpenStack command-line tools. You can either access those via ssh on your SUSE OpenStack Cloud admin server or install a local openstack client.

To use the local client, you need to access Project → Compute → Access & Security in the Horizon Dashboard and click on the Download OpenStack RC File v3.

The downloaded file is a script that you then need to load using the source command. The script will ask you for your SUSE OpenStack Cloud password.

tux > source DOWNLOADED_RC_FILE
  1. Upload the container image to OpenStack Glance (Image service). This example uses the name CaaSP-3 as the name of the image that is created in SUSE OpenStack Cloud.

    tux > openstack image create --public --disk-format qcow2 \
    --container-format bare \
    --file SUSE-CaaS-Platform-3.0-OpenStack-Cloud.x86_64-1.0.0-GM.qcow2 \
    CaaSP-3
  2. Warning: Replace The Default root_password

    Do not use the caasp-environment.yaml directly from the GitHub repository.

    You must make sure to replace the value for root_password with a secure password. This will become the password for the root account on all nodes in the stack.

    Download the caasp-stack.yaml and caasp-environment.yaml Heat templates to your workstation and then run the openstack stack create command.

    tux > openstack stack create \
    -t caasp-stack.yaml \
    -e caasp-environment.yaml \
    --parameter image=CaaSP-3 caasp3-stack
  3. Find out which (external) IP address was assigned to the admin node of your SUSE CaaS Platform cluster:

    tux > openstack server list --name "admin" | awk 'FNR > 3 {print $4 $5 $9}'
  4. Visit the external IP address in your browser. You should see the Velum login page and can continue with Section 4.3, “Configuring the Administration Node”.

3.5 Installing In Public Cloud

3.5.1 Overview

The SUSE CaaS Platform images published by SUSE in selected Public Cloud environments are provided as Bring Your Own Subscription (BYOS) images. SUSE CaaS Platform instances need to be registered with the SUSE Customer Center in order to receive bugfix and security updates. Images labeled with the cluster designation in the name are not intended to be started directly; they are deployed by the Administrative node. Administrative node images contain the admin designation in the image name.

The following procedure outlines the deployment process:

  1. Read the special system requirements for public cloud installations in Section 2.6, “Public Cloud Requirements”.

  2. Provision the cluster nodes. For details, see Section 3.5.2, “Provisioning Cluster Nodes”.

  3. Deploy the admin node with caasp-admin-setup. For details, see Section 4.2, “Installing the Administration Node with Command Line Interface”.

  4. Finish bootstrapping your cluster. The provisioned worker nodes are ready to be consumed into the cluster. For details, see Section 4.5, “Bootstrapping the Cluster”.

3.5.2 Provisioning Cluster Nodes

Amazon Web Services EC2

You may select one of the predefined instance types, hand-selected for general container workloads, or choose Other types... and enter any instance type, as defined in the Amazon EC2 documentation.

Two configuration options are required in EC2:

Subnet ID

The subnet within which cluster nodes will be attached to the network, in the form subnet-xxxxxxxx.

Security Group ID

The security group defining network access rules for the cluster nodes, in the form sg-xxxxxxxx.

The defaults for those two options are preset to the subnet ID of the administration host and the security group ID that was automatically created by caasp-admin-setup. You may choose to place the cluster nodes in a different subnet, and you can also use a custom security group, but bear in mind that traffic must be allowed between the individual cluster nodes and between the administration node and the cluster nodes.

See the Amazon Virtual Private Cloud Documentation for more information.

Microsoft Azure

You need to configure credentials for access to the Azure framework so instances can be created, as well as parameters for the cluster node instances themselves. The credentials refer to authentication via a service principal. See the Microsoft Azure documentation for more information on how to create a service principal.

Subscription ID

The subscription ID of your Azure account.

Tenant ID

The tenant ID of your service principal, also known as the Active Directory ID.

Application ID

The application ID of your service principal.

Client Secret

The key value or password of your service principal.

Below the Service Principal Authentication box you will find the Instance Type configuration. You may select one of the predefined instance types, hand-selected for general container workloads, or choose Other types... and enter any size defined by Azure. Set the cluster size using the slider.

The parameters in Resource Scopes define attributes of the cluster instances, as required for Azure Resource Manager:

Resource Group

The Resource Group in which all cluster nodes will be created.

Storage Account

The Storage Account that will be used for storing the cluster node OS disks. See the Azure documentation for more information about Storage Accounts.


Virtual Network

The virtual network the cluster nodes will be connected to.

Subnet

A subnet in the previously defined virtual network. See the Azure documentation for more information about Virtual Networks.

Google Compute Engine

You may select one of the predefined instance types, hand-selected for general container workloads, or choose Other types... and enter any machine type, as defined in the Google Compute Engine documentation.

Two configuration options are required in GCE:


Network

The name of the virtual network the cluster nodes will run within.

Subnetwork

If you created a custom network, you must specify the name of the subnet within which the cluster nodes will run.

See the GCE Network Documentation for more information.

4 Installing and Configuring Nodes

This chapter details the procedures for installing the administration node, master nodes and worker nodes. Make sure that you prepared the installation according to Chapter 3, Preparing The Installation.

4.1 Installing the Administration Node

The procedure for installing the administration node is identical whether or not you use AutoYaST for the rest of the cluster.

  1. Connect or insert the SUSE CaaS Platform installation media, then reboot the computer to start the installation program. On machines with a traditional BIOS, you will see the graphical boot screen shown below. The boot screen on machines equipped with UEFI is slightly different.

    SecureBoot on UEFI machines is supported.

    Use F2 to change the language for the installer. A corresponding keyboard layout is chosen automatically. For more information about changing boot options, refer to the SUSE Linux Enterprise documentation.

  2. Select Installation on the boot screen, then press Enter. This boots the system and loads the SUSE CaaS Platform installer.

  3. Configure the following mandatory settings on the Installation Overview screen.

    Note: Help and Release Notes

    From this point on, a brief help document and the Release Notes can be viewed from any screen during the installation process by selecting Help or Release Notes respectively.

    Keyboard Layout

    The Keyboard Layout is initialized with the language settings you have chosen on the boot screen. Change it here, if necessary.

    Password for root

    Type a password for the system administrator account (called the root user) and confirm it.

    Warning: Do not forget the root Password

    You must not lose the root password! After you enter it here, the password cannot be retrieved. For more information, refer to the SUSE Linux Enterprise documentation.

    Registration Code or SMT Server URL

    Enter the Registration Code or SMT Server URL. SMT Server URLs must use https or http; other protocols are not supported. Fill in this field to enable installing current updates during the installation process. Alternatively, the machine can be registered at the SUSE Customer Center or an SMT server at any later point in time with SUSEConnect. For details about registering with SUSEConnect and using an authenticated proxy server for registration, see Section 5.3, “Registering Node at SUSE Customer Center, SMT or RMT”.

    System Role

    From the System Role menu, choose Administration Node (Dashboard).

    NTP Servers

    Enter the host names or IP addresses of one or more NTP Servers for the node, separated by colons or white space. A single time server is sufficient, but for optimal precision and reliability, nodes should use at least three.

    For more information about NTP, refer to the SUSE Linux Enterprise documentation.

    Optionally, you can customize the following settings. If you do not make any changes, defaults are used. A brief summary of the settings is displayed below the respective settings option.


    Partitioning

    Review the partition setup proposed by the system and change it if necessary. You have the following options:

    Select a hard disk

    Select a disk onto which to install SUSE CaaS Platform with the recommended partitioning scheme.

    Custom Partitioning (for Experts)

    Opens the Expert Partitioner described in the SUSE Linux Enterprise documentation.

    Warning: For Experts only

    As the name suggests, the Expert Partitioner is for experts only. Custom partitioning schemes that do not meet the requirements of SUSE CaaS Platform are not supported.

    Requirements for custom partitioning schemes
    • SUSE CaaS Platform only supports the Btrfs file system with OverlayFS. A read-only Btrfs file system is used for the root file system, which enables transactional updates.

    • For snapshots, partitions should have a capacity of at least 11 GB.

    • Depending on the number and size of your containers, you will need sufficient space under the /var mount point.

    To accept the proposed setup without any changes, choose Next to proceed.


    Booting

    This section shows the boot loader configuration. Changing the defaults is only recommended if really needed. For details, refer to the SUSE Linux Enterprise documentation.

    Network Configuration

    If the network could not be configured automatically while starting the installation system, you should manually configure the Network Settings. Please make sure at least one network interface is connected to the Internet in order to register your product.

    By default, the installer requests a host name from the DHCP server. If you set a custom name in the Hostname/DNS tab, make sure that it is unique.

    For more information on configuring network connections, refer to the SUSE Linux Enterprise documentation.


    Kdump

    Kdump saves the memory image (core dump) to the file system in case the kernel crashes. This enables you to find the cause of the crash by debugging the dump file. For more information, see the SUSE Linux Enterprise documentation.

    Warning: Kdump with large amounts of RAM

    If you have a system with large amounts of RAM or a small hard drive, core dumps may not be able to fit on the disk. If the installer warns you about this, there are two options:

    1. Enter the Expert Partitioner and increase the size of the root partition so that it can accommodate the size of the core dump. In this case, you will need to decrease the size of the data partition accordingly. Remember to keep all other parameters of the partitioning (e.g. the root file system, mount point of data partition) when doing these changes.

    2. Disable Kdump completely.

    System Information

    View detailed hardware information by clicking System Information. In this screen you can also change Kernel Settings. For more information, see the SUSE Linux Enterprise documentation.

    Proceed with Next.

    Tip: Installing Product Patches at Installation Time

    If SUSE CaaS Platform has been successfully registered at the SUSE Customer Center, you are asked whether to install the latest available online updates during the installation. If you choose Yes, the system will be installed with the most current packages without having to apply the updates after installation. Activating this option is recommended.

  4. After you have finalized the system configuration on the Installation Overview screen, click Install. Up to this point no changes have been made to your system.

    Click Install a second time to start the installation process.

  5. During the installation, the progress is shown in detail on the Details tab.

  6. After the installation routine has finished, the computer will reboot into the installed system.

4.2 Installing the Administration Node with Command Line Interface

Important: Do not use this for datacenter installations

This procedure is intended to be used with public cloud installations only.

Use SSH to log into the admin node and run the caasp-admin-setup executable as the root user.

By default, the caasp-admin-setup executable operates in wizard mode, walking you through the necessary steps. During this process, your SUSE Customer Center credentials will be requested. Registration with SUSE Customer Center can be skipped; however, if it is skipped during setup, neither the admin node nor the cluster nodes will receive any updates. While registration with SUSE Customer Center can be performed after the initial setup with SUSEConnect, registering during setup has the advantage that cluster nodes will automatically be registered with SUSE Customer Center as well. If you prefer not to run the wizard, use caasp-admin-setup --help to obtain a list of the available command-line arguments.

Once the caasp-admin-setup process is complete all SUSE CaaS Platform containers will be launched on the admin node instance. Use your web browser to access the Velum dashboard via https. If you did not provide your own certificate, a certificate was generated for you and the fingerprint was written to the terminal in which caasp-admin-setup was executed. You can compare this fingerprint in your browser to establish the chain of trust.

4.2.1 caasp-admin-setup Details

The general purpose of caasp-admin-setup is to collect all information needed to successfully start the SUSE CaaS Platform containers.

When caasp-admin-setup is executed it determines which cluster node image to use according to the cloud framework. For this operation to succeed outgoing traffic on port 443 to the Internet must be permitted. The code will access the Public Cloud Information Tracker service operated by SUSE. This service provides information about all images ever released to the Public Cloud by SUSE. The latest available cluster node image for this version of SUSE CaaS Platform will be used. This initial outreach and image filtering introduces a small startup delay before the command line options are processed or the wizard mode starts.

When all information is collected, accept your selections/input with y to complete the initial setup.

4.2.2 Providing SSL Certificate and Key

You may choose to supply your own SSL certificate and key for initial access to the dashboard, either with the --ssl-crt and --ssl-key options or by answering the question Would you like to use your own certificate from a known (public or self signed) Certificate Authority? with y.

In order to use your own SSL certificate and key you must upload the files to the admin node into a location of your choice. This location is then provided to the setup code. For example, if your certificate is called my-velum.crt and you uploaded it to /tmp then the caasp-admin-setup code expects /tmp/my-velum.crt as the location for the SSL certificate. The same concept applies to the SSL key. The certificate and key will be placed in the appropriate locations on the admin node.
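For testing purposes, a self-signed certificate and key matching the example file names above can be generated with OpenSSL. This is a sketch only; the subject name velum.example.com is a placeholder, and production deployments should use a certificate from a trusted CA:

```shell
# Create a throwaway self-signed certificate/key pair for the Velum dashboard
# (testing only; replace the CN placeholder with your admin node's name).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/my-velum.key -out /tmp/my-velum.crt \
  -days 365 -subj "/CN=velum.example.com"
```

Upload both files to the admin node and pass their paths to caasp-admin-setup via --ssl-key and --ssl-crt.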

4.2.3 Velum Administrator Credentials

Velum is the name of the administrative dashboard web interface. The setup code will ask for an e-mail address and a password if not supplied with the --admin-email and --admin-password arguments. These are the administrative credentials to log into the Velum dashboard. The e-mail used does not have to be an e-mail associated with your SUSE Customer Center account. Please do not forget the values you enter, as they cannot be recovered.

4.2.4 Registering with SUSE Customer Center

To register all cluster nodes with SUSE Customer Center, provide your e-mail address and the registration code. The registration process requires access to the Internet on port 443. Alternatively you may use the --reg-email and --reg-code arguments. Registration with SUSE Customer Center is optional. However, without registration the system will not receive any updates unless specifically setup to receive updates via a different route such as a private SMT server. Registration after the initial setup also requires an explicit registration of each node in the cluster.

For registering your nodes after the installation, refer to Section 5.3, “Registering Node at SUSE Customer Center, SMT or RMT”.

4.3 Configuring the Administration Node

Before installing the other nodes, it is necessary to configure the administration node.

  1. After the administration node has finished booting and you see the login prompt, point a web browser to https://ADMIN_NODE, where ADMIN_NODE is the host name or IP address of the administration node. The host name and IP address are both shown on the administration node console, above the login prompt.

  2. To create an Administrator account, click Create an account and provide an e-mail address and a password. Confirm the password and click Create Admin. You will be logged into the dashboard automatically.

  3. Fill in the value for Internal Dashboard Location. If necessary, configure the other settings.

    Note: Host Name, FQDN or IP Address

    Generally, FQDNs are preferable to host names.

    For test deployments, you can use IP addresses instead of names for both the dashboard and API server, but this is not recommended for use in production.

    Internal Dashboard Location

    FQDN or IP of the node running the Velum dashboard (reachable from inside the cluster).

    Install Tiller (Helm's Server Component)

    If you intend to deploy SUSE Cloud Foundry on SUSE CaaS Platform, or any other software that is installed with Helm (the Kubernetes package manager), check the box to install Tiller.

    Overlay network settings

    Describes the settings used by flannel to create the overlay network used by all the Kubernetes pods and services. The default settings are exposed here for fine-tuning. The most common reason to change them is to avoid a clash between the default subnetwork and an already existing one.

    Networks are described in CIDR notation.

    Warning: Adjust overlay network to avoid collision with existing services

    The overlay network settings have to be verified and adjusted so that they do not collide with any services / addresses in the infrastructure that potentially need to be reached from any node or service running within the SUSE CaaS Platform cluster.

    For example, if an Oracle database is running in the existing infrastructure and a pod in the cluster needs to contact it, the defaults must be adjusted to provide a different overlay network. Another example would be an NFS server or a SES/Ceph cluster elsewhere on the network that provides persistent storage to the cluster.

    Cluster CIDR

    Classless Inter-Domain Routing subnet used for the cluster

    Cluster CIDR (lower bound)

    Lower boundary for CIDR notation

    Cluster CIDR (upper bound)

    Upper boundary for CIDR notation

    Node allocation size (CIDR length per worker node)

    Length of the CIDR prefix allocated to each worker node, in bits (default: 23)

    Service CIDR

    Classless Inter-Domain Routing subnet used for services

    API IP address

    IP address in the CIDR network for the Kubernetes API

    DNS IP address

    IP address in the CIDR network for the DNS service
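    As an illustration, a mutually consistent set of values might look like the following. These are example values, not confirmed defaults; verify that none of the ranges overlap with networks already in use in your infrastructure:

    ```
    Cluster CIDR:
    Cluster CIDR (lower bound):
    Cluster CIDR (upper bound):
    Node allocation size:         23
    Service CIDR:
    API IP address:
    DNS IP address:
    ```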

    Proxy Settings

    If enabled, you can set proxy servers for HTTP and HTTPS. You may also configure exceptions and choose whether to apply the settings only to the container engine or to all processes running on the cluster nodes.

    HTTP Proxy

    HTTP Proxy to be used.

    HTTPS Proxy

    HTTPS Proxy to be used.


    No Proxy

    Comma-separated list of host names/IP addresses whose traffic should not be routed through the configured proxy.

    Use proxy systemwide

    Select if the proxy settings will be applied for the Container engine only or for the Entire node communication.

    SUSE registry mirror

    Configure a mirror for the SUSE container registry.


    Mirror URL

    URL where the local registry mirror can be reached.


    Certificate

    Select No/Yes to indicate whether you wish to provide the certificate used to protect your registry mirror. Copy the body of the certificate into the text field.

    Cloud provider integration

    Cloud provider integration enables you to deploy SUSE CaaS Platform on OpenStack/SUSE OpenStack Cloud.

    Keystone API URL

    Specifies the URL of the Keystone API used to authenticate the user. This value can be found in Horizon (the OpenStack control panel) under Project → Access and Security → API Access → Credentials.

    Domain name

    (Optional) Used to specify the name of the domain your user belongs to.

    Domain ID

    (Optional) Used to specify the ID of the domain your user belongs to.

    Project name

    (Optional) Used to specify the name of the project where you want to create your resources.

    Project ID

    (Optional) Used to specify the ID of the project where you want to create your resources.

    Region name

    Used to specify the identifier of the region to use when running on a multi-region OpenStack cloud. A region is a general division of an OpenStack deployment.


    Username

    Refers to the username of a valid user set in Keystone.


    Password

    Refers to the password of a valid user set in Keystone.

    Subnet UUID for CaaS Platform private network

    Used to specify the identifier of the subnet you want to create your load balancer on. This value can be found on the OpenStack control panels, under Project → Network → Networks. Click on the respective network to see its subnets.

    Floating network UUID

    (Optional) When specified, will lead to the creation of a floating IP for the load balancer.

    Load balancer monitor max retries

    Number of permissible ping failures before changing the load balancer member's status to INACTIVE. Must be a number between 1 and 10. (Default: 3)

    Cinder Block Storage API version

    Specifies the API version to be used when talking to Cinder. Currently: v2

    Ignore Cinder availability zone

    Influences availability zone use when attaching Cinder volumes. When Nova and Cinder have different availability zones, this should be set to True.

    Container runtime

    Allows choice between Docker and CRI-O as the main container runtime.

    Please note that CRI-O is currently only a technology preview. It will work, but is not officially supported.

    System wide certificate

    Specify a system wide trusted certificate.

  4. Click Next.

  5. You will be shown an information screen about AutoYaST.

    This is now the time for you to install the master/worker nodes for the cluster.

    Continue with Section 4.4, “Installing Master and Worker Nodes”.

4.4 Installing Master and Worker Nodes


Before you can install the worker nodes of your new cluster, you need to install and configure the administration node. Ensure that you have completed the steps in Section 4.1, “Installing the Administration Node” and Section 4.3, “Configuring the Administration Node”.

4.4.1 Installation on Cloud services

If you are installing on an OpenStack-based cloud using Heat templates or using a public cloud service (Azure, EC2, GCE), your machines will be set up automatically.

Important: Adjust Salt Worker Threads For More Than 40 Nodes

If you are deploying a cluster with more than 40 overall nodes, you must adjust the number of available Salt worker threads before you continue.

Refer to: Section 2.1.1, “Salt Cluster Sizing”.

You can continue directly to Section 4.5, “Bootstrapping the Cluster”.

4.4.2 Manual Installation

  1. Follow the same procedure as for installing the administration node in Section 4.1, “Installing the Administration Node”, up until selection of the System Role.

  2. Select Cluster Node as System Role and enter the host name or IP address of the administration node.

    Note: Plain System

    It is also possible to select a third node type, "plain node". These can be used for testing and debugging purposes, but are not usually needed.

  3. After you have finalized the system configuration on the Installation Overview screen, click Install. Up to this point no changes have been made to your system. After you click Install a second time, the installation process starts.

    After a reboot, the new node should appear in the dashboard and can be added to your cluster.

    Repeat this procedure at least twice more to add a minimum of three nodes: one master node and two worker nodes. This is the minimum supported size for a SUSE CaaS Platform cluster.

  4. Once you have installed all desired machines, continue with Section 4.5, “Bootstrapping the Cluster”.

    Important: Adjust Salt Worker Threads For More Than 40 Nodes

    If you are deploying a cluster with more than 40 overall nodes, you must adjust the number of available Salt worker threads before you continue.

    Refer to: Section 2.1.1, “Salt Cluster Sizing”.

4.4.3 Automatic Installation Using AutoYaST

Before installing worker nodes with AutoYaST, you need to obtain the URL that points to the AutoYaST file on the administration node. Generally, this will be supplied by the Velum dashboard on the administration node.

Note: root Password

When nodes are installed using AutoYaST, there is no opportunity to specify the password for root. However, each node will have ssh keys for root on the master node pre-installed. Thus it is possible to access the worker nodes by opening an ssh session from the master node.

  1. Insert the SUSE CaaS Platform DVD into the drive, then reboot the computer to start the installation program.

  2. Select Installation on the boot screen, but do not press Enter.

    Before proceeding to boot the machine, you should enter the necessary Boot Options for AutoYaST and networking.

    The most important options are:


    autoyast

    Path to the AutoYaST file. It is a URL built from the FQDN of the administration node, followed by the path to the AutoYaST file.

    For more information, refer to the AutoYaST documentation.


    netsetup

    Network configuration. If you are using DHCP, you can simply enter netsetup=dhcp. For manual configuration, refer to the SUSE Linux Enterprise documentation.


    hostname

    The host name for the node, if not provided by DHCP. If you manually specify a host name, make sure that it is unique.

    Press Enter. This boots the system and loads the SUSE CaaS Platform installer.

  3. So long as there are no errors, the rest of the installation should complete automatically. After a reboot, the new node should appear in the dashboard and can be added to your cluster.

  4. Once you have installed all desired machines, continue with Section 4.5, “Bootstrapping the Cluster”.

    Important: Adjust Salt Worker Threads For More Than 40 Nodes

    If you are deploying a cluster with more than 40 overall nodes, you must adjust the number of available Salt worker threads before you continue.

    Refer to: Section 2.1.1, “Salt Cluster Sizing”.
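Put together, the boot options line entered in step 2 of the procedure above might read as follows (the host names here are placeholders for illustration):

```
autoyast=http://admin.example.com/autoyast netsetup=dhcp hostname=worker1.example.com
```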

4.5 Bootstrapping the Cluster

To complete the installation of your SUSE CaaS Platform cluster, it is necessary to bootstrap at least three additional nodes; those will be the Kubernetes master and workers.

In case of problems, refer to Book “Administration Guide”, Chapter 8 “Troubleshooting”, Section 8.2 “Debugging Failed Bootstrap”.

  1. Return to your administration node; the AutoYaST instructions screen should still be open from before.

  2. Click Next.

  3. On the screen Select nodes and roles, you will see a list of salt-minion IDs under Pending Nodes. These are internal IDs for the master/worker nodes you have just set up and which have automatically registered with the admin node in the background.

  4. Accept individual nodes into the cluster or click Accept All Nodes.

  5. Assign the roles of the added nodes.

    By clicking on Select remaining nodes, all nodes without a selected role will be assigned the Worker role.

    Important: Minimum cluster size

    You must designate at least 1 master node and 2 worker nodes.

    Tip: Assign Unused Nodes Later

    Nodes that you do not wish to designate for a role now can be assigned one later on the Velum status page.

  6. Once you have assigned all desired nodes a role, click Next.

  7. The last step is to configure the external FQDNs for dashboard and Kubernetes API.

    These values will determine where the nodes in the cluster will attempt to communicate.

    Note: Master Node Loadbalancer FQDN

    If you are planning a larger cluster with multiple master nodes, they must all be accessible from a single host name. Otherwise, Velum functionality will degrade if the original master node is removed.

    Therefore, you should ensure that there is some form of load-balancing or reverse proxy configured at the location you enter here.

    External Kubernetes API FQDN

    Name used to reach the node running the Kubernetes API server.

    In a simple deployment with a single master node, this will be the name of the node that was selected as the master node during bootstrapping of the cluster.

    External Dashboard FQDN

    Name used to reach the admin node running Velum.

  8. Click on Bootstrap cluster to finalize the initial setup and start the bootstrapping process.

    The status overview will be shown while the nodes are bootstrapped for their respective roles in the background.

4.6 Installing VMware Tools

This section is only relevant for deployments on VMware ESX and ESXi environments. This step is not required if you are using virtual disk images as described in Section 3.2, “Installing From Virtual Disk Images”, because open-vm-tools is already installed.

After the bootstrapping of the cluster is finished, install the VMware tools (contained in the package open-vm-tools) on all nodes. Log in on the administration node and execute the following steps:

  1. Install open-vm-tools on all nodes.

    root@admin # docker exec $(docker ps -q --filter name=salt-master) \
    salt -P "roles:admin|kube-master|kube-minion" \
    'transactional-update pkg install --no-confirm open-vm-tools'
  2. Reboot all nodes using salt. If you are already running a workload, also see Book “Administration Guide”, Chapter 2 “Cluster Management”, Section 2.4 “Graceful Shutdown and Startup”.

    root@admin # docker exec -it $(docker ps -q --filter name=salt-master) salt '*' system.reboot
  3. Check status of VMware Tools in the ESX / ESXi user interface.
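If you prefer to verify from the command line as well, you can query the service state on all nodes through Salt; a sketch, assuming the open-vm-tools service is named vmtoolsd:

```
root@admin # docker exec $(docker ps -q --filter name=salt-master) \
salt '*' cmd.run 'systemctl is-active vmtoolsd'
```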

5 Node Configuration

SUSE CaaS Platform is typically configured in two stages: first during the installation process, and then after installation by using cloud-init. The first-stage configuration of SUSE CaaS Platform comes as preconfigured as possible. The second stage is typically used for large-scale clusters; if you only have a few machines, cloud-init is not necessary.

Each configuration stage is described in the following sections.

5.1 Default Configuration Values

The defaults for the first stage configuration are the following:


Timezone

The timezone is set to UTC by default, but can be changed by cloud-init.

Keyboard Layout

The keyboard layout is set to US by default, but can be changed during the installation process.

Locale

The locale is set to en_US.utf8 by default, but can be changed by cloud-init.

Note that SUSE CaaS Platform does not support the full range of locales that are available in SUSE Linux Enterprise. Because of this, we recommend not changing the locale.

5.2 Customizing Configuration with cloud-init

cloud-init is a tool that helps customize an operating system at boot time. It can set environment variables, configure the host name, SSH keys, mount points, and network devices.

The customization information is usually read from three files during boot: meta-data, user-data, and optionally vendor-data. These files can be loaded from different datasources. In SUSE CaaS Platform, the following datasources are preconfigured, and cloud-init searches them in this order:


  1. First, the configuration is searched for in the local directory /cloud-init-config. For details, see Section 5.2.1, “The LocalDisk Datasource”.

  2. Second, cloud-init tries to read the configuration from a local block device, for example a USB stick, DVD, or virtual disk. For details, see Section 5.2.2, “The NoCloud Datasource”.

  3. Third, if there is a running OpenStack service, this datasource can be used. The configuration then depends on the particular setup and is not covered by this manual.

For details about the configuration files, see:

More information about cloud-init can be found at

5.2.1 The LocalDisk Datasource

To provide the cloud-init configuration on the local file system, create a directory /cloud-init-config/. Then create the files meta-data, user-data and optionally vendor-data. For details about the content of the files, see the following sections.
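A minimal sketch of creating this directory and its files (the instance ID and host name are placeholder values):

```
root # mkdir -p /cloud-init-config
root # echo "instance-id: iid-admin001" > /cloud-init-config/meta-data
root # printf '#cloud-config\nhostname: admin.example.com\n' > /cloud-init-config/user-data
```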

5.2.2 The NoCloud Datasource

The NoCloud datasource enables you to read the cloud-init configuration without running a network service. cloud-init searches for the configuration files meta-data, user-data, and (optionally) vendor-data in the root directory of a local file system formatted as vfat or iso9660 with the label cidata. Typically this is an unpartitioned USB stick or disk, or a DVD ISO.
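A minimal sketch of preparing such a USB stick, assuming the stick is /dev/sdb and the configuration files are in the current directory:

```
root # mkfs.vfat -n cidata /dev/sdb
root # mount /dev/sdb /mnt
root # cp meta-data user-data /mnt/
root # umount /mnt
```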

Alternatively, you can specify a remote location for the cloud.cfg file, but you have to configure the network first, for example by using local configuration files. The URL is specified as a boot parameter and must be in the format cloud-config-url=http://hostname.domain/cloud.cfg. The content of the passed URL is copied to /etc/cloud/cloud.cfg.d/91_kernel_cmdline_url.cfg and is not overwritten even if the URL changes. For details about boot parameters, see

Generating cloud-init images for NoCloud datasource

Note: mkisofs Dependency

You need the tool mkisofs to produce the images. It should be preinstalled on openSUSE Leap and Tumbleweed installations. For other operating systems, refer to the instructions for the respective distribution.

If mkisofs is not available, you can use the nearly identical genisoimage.

You can use a local ISO image generated from cloud-init data to help configure a cluster that is not hosted in the cloud.

You will need to create two image files: one to initialize the administration node and one to initialize the master and worker nodes. You need to provide a meta-data and a user-data file per node type.

Procedure 5.1: Generate cloud-init ISO files
  1. Create two directories: cc-admin and cc-worker.

  2. Create your configuration files as described in:

    The result should be three or more files:

    • cc-admin/meta-data (This file will be reused across all nodes)

    • cc-admin/user-data

    • cc-admin/vendor-data (Optional)

    • cc-worker/user-data

    • cc-worker/vendor-data (Optional)

  3. Copy cc-admin/meta-data to cc-worker/.

  4. Finally, you need to package the respective configurations into two ISO files. The result of the following commands will be two iso9660 ISO files with the volume label cidata and additional Joliet metadata.

    tux > sudo mkisofs -output cc-admin.iso -volid cidata -joliet -rock cc-admin
    tux > sudo mkisofs -output cc-worker.iso -volid cidata -joliet -rock cc-worker

    The files will be called cc-admin.iso and cc-worker.iso. You need to attach these files to your respective VM as a block device before boot.

5.2.3 The cloud.cfg Configuration File

The /etc/cloud/cloud.cfg file is used to define a datasource and the locations of the other required configuration files. Use the #cloud-config syntax when defining the content.

An example with NoCloud datasource follows:

     datasource:
       NoCloud:
         # default seedfrom is None
         # if found, then it should contain a url with:
         #    <url>user-data and <url>meta-data
         # seedfrom: <path>/

5.2.4 The meta-data Configuration File

The meta-data file is a YAML file intended to configure system items such as the network, instance ID, etc. It typically contains the instance-id and network-interfaces options. Each is described below.

Important: Network Configuration Priority

If you are deploying SUSE CaaS Platform nodes using AutoYaST, the network settings from cloud-init will be ignored and the settings made in the AutoYaST process are applied.


instance-id

Defines the instance. If you perform any changes to the configuration (with either user-data or meta-data), you must update this option with another value. This way, cloud-init can recognize whether this is the first boot of that particular host instance.

instance-id: iid-example001

network-interfaces

Here you can define the following options:

  • auto to start the network in that configuration automatically during the boot phase.

  • iface that defines the configured interfaces.

A static network configuration could then look as follows (the addresses are placeholders):

network-interfaces: |
  auto eth0
  iface eth0 inet static
    address 192.168.100.10
    netmask 255.255.255.0
    gateway 192.168.100.1
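For comparison, a minimal meta-data file using DHCP instead of a static address could look like this (the instance ID is a placeholder):

```yaml
instance-id: iid-worker001
network-interfaces: |
  auto eth0
  iface eth0 inet dhcp
```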

5.2.5 The user-data Configuration File

The configuration file user-data is a YAML file used to configure users, SSH keys, the time zone, etc. Each part of the file is described in the following sections.

user-data Header

Each user-data file must start with #cloud-config, which indicates the cloud-config format. The snippet below enables debugging output and disables passwordless authentication for root, so you must log in with the root credentials.

debug: True
disable_root: False

runcmd Statements

In the user-data file you can use the runcmd statement to run various commands in your system. The user-data file can contain only a single runcmd statement, so if you must run several commands, group them into one statement:

    runcmd:
      - /usr/bin/systemctl enable --now ntpd
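For example, a single grouped statement that sets the keyboard layout and enables the NTP service might look like this sketch:

```yaml
runcmd:
  - /usr/bin/localectl set-keymap de-latin1-nodeadkeys
  - /usr/bin/systemctl enable --now ntpd
```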

By using the runcmd statement, you can perform the following in your system:

Configure keyboard layout

for example, configure the German keyboard layout with nodeadkeys:

  - /usr/bin/localectl set-keymap de-latin1-nodeadkeys
Start services

for example, start the NTP server as described in Section, “NTP Server Configuration”.

SSH Keys Management

You can configure the behavior of adding SSH keys to authorized_keys and the SSH login policy.

ssh_deletekeys: False
ssh_pwauth: True
ssh_authorized_keys:
  - ssh-rsa XXXKEY

The option ssh_deletekeys enables or disables automatic deletion of the host's old private and public SSH keys. The default value is true: the keys are deleted and new keys are generated. We do not recommend using the default value, because after the cloud-init configuration has been changed, SSH may report that the host keys are incorrect or have changed.

The option ssh_pwauth: true allows you to log in by using SSH with a password, if a password is set.

The option ssh_authorized_keys defines whether the SSH key will be added to the authorized_keys file of the user. If not specified otherwise, the default user is root.

Setting Password

The user-data file enables you to set default passwords by using the chpasswd option:

chpasswd:
  list: |
    root:linux
  expire: True

In the example above, the password for root is set to "linux". The expire option defines whether the user will be prompted to change the default password at the first login.

For additional security, password hashes may be used instead of plain text. A hashed entry takes the form root:$X$SALT$HASH.

The value "X" in $X$ can be any of 1, 2a, 2y, 5, or 6. For more information, see the HASHING METHODS section in the output of the command man 3 crypt.

For example, you can generate a safe hash with the following command:

mkpasswd --method=SHA-512 --rounds=4096

This command creates an SHA-512 password hash with 4096 salt rounds, reading the password from stdin.

This could be specified in the file using $6$, as follows:

root:$6$j212wezy$7H/1LT4f9/N3wpgNunhsIqtMj62OKiS3nyNwuizouQc3u7MbYCarYeAHWYPYb2FT.lbioDm2RrkJPb9BZMN1O/

Adding Custom Repository

You can add a custom software repository to your system by using the zypp_repos option:

zypp_repos:
  - id: opensuse-oss
    name: os-oss
    enabled: 1
    autorefresh: 1
  - id: opensuse-oss-update
    name: os-oss-up
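A fuller entry that uses the optional fields might look like this sketch (the repository ID, name, and URL are placeholders):

```yaml
zypp_repos:
  - id: example-repo
    name: Example Repository
    baseurl: http://download.example.com/repo
    type: rpm-md
    gpgcheck: 1
    enabled: 1
    autorefresh: 1
    priority: 99
```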

The options available are:


id

The local unique ID of the repository, also known as its alias. (Mandatory.)

name

A more descriptive string describing the repository, used in the UI. (Mandatory.)

baseurl

URL of the directory where the repository's repodata directory lives. (Mandatory.)

type

Zypper is able to work with three types of repository: yast2 and rpm-md (yum) repositories, as well as plaindir, plain directories containing .rpm files.

path

This is relative to the baseurl; the default is /.

gpgcheck

Defines whether the source signatures should be checked using GPG.

gpgkey

Defines the URL of a GPG key.

enabled

Defaults to 1 (on). Set to 0 to disable the repository: it will be known and listed, but not used.

autorefresh

Defaults to 1 (on). When on, the local package cache will be updated to the remote version whenever package management actions are performed.

priority

Defines a source priority, from 1 (highest priority) to 200 (lowest). The default is 99.

Setting Timezone

You can set a default timezone. Bear in mind that the configured value must exist in /usr/share/zoneinfo:

timezone: Europe/Berlin

Setting Host name

You can set either a host name or, preferably, a fully-qualified domain name for the machine:

hostname: myhost

fqdn: myhost.example.com

The option preserve_hostname specifies whether any existing host name (for example, from the kernel command line) should be retained. Enter true or false as required:

preserve_hostname: true

Configuring Name server

You can have cloud-init manage the resolv.conf file and thus set its values:

manage_resolv_conf: true
resolv_conf:
  nameservers: ['', '']
  options:
    rotate: true
    timeout: 1

NTP Server Configuration

You can also configure the NTP service. The following snippet configures three NTP servers during the first boot, and the NTP service is enabled and started:

ntp:
  enabled: true
  servers:
    - NTP_SERVER_1
    - NTP_SERVER_2
    - NTP_SERVER_3
runcmd:
  - /usr/bin/systemctl enable --now ntpd

Salt minion Configuration

You can use the user-data file to set up the Salt minion and its communication with the Salt master:

salt_minion:
  conf:
    master: MASTER_FQDN

  public_key: |
    -----BEGIN PUBLIC KEY-----
    -----END PUBLIC KEY-----

  private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----

Assigning Roles to the Cluster Nodes

You need to specify which node of your cluster will be used as the administration node and which nodes will be used as regular cluster nodes.

To assign the administration node role to a cluster node, add the following to the configuration file:

suse_caasp:
  role: admin

If the cluster node is assigned the administration node role, all required containers are imported and started. Bear in mind that an NTP server must be configured on that machine.

To the other cluster nodes you assign the role cluster. The machine then registers itself as a Salt minion on the administration node and configures a timesync service with the administration node as a reference. You do not have to install any NTP server, but if you want to use one, you need to disable systemd-timesyncd first. An example of the cluster role assignment follows:

suse_caasp:
  role: cluster
  admin_node: ADMIN_NODE_FQDN

where ADMIN_NODE_FQDN is the host name of the administration node.
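Putting the pieces together, a minimal user-data file for a regular cluster node might look like the following sketch (the host names are placeholders, and the exact key layout is an assumption based on the role examples in this section):

```yaml
#cloud-config
hostname: worker-1.example.com
suse_caasp:
  role: cluster
  admin_node: admin.example.com
```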

5.2.6 The vendor-data Configuration File

The vendor-data is an optional configuration file that typically stores data related to the cloud you use. The data are provided by the entity that launches the cloud instance.

The format is the same as used for user-data.

5.3 Registering Node at SUSE Customer Center, SMT or RMT

To register a node at the SUSE Customer Center, Repository Management Tool (RMT) or Subscription Management Tool (SMT), use SUSEConnect. This can be necessary if you want to install updates on your node but did not register during the installation as described in Section 4.1, “Installing the Administration Node”.

You can also use SUSEConnect to switch from SUSE Customer Center to a local RMT or SMT server.

To register a node at the SUSE Customer Center and to list the available products, use:

root # SUSEConnect --list-extensions

Use the displayed commands to enable the required SUSE CaaS Platform repositories.
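For example, to point a node at a local SMT or RMT server instead of the SUSE Customer Center, you can pass the server's URL (a placeholder here) with the --url option:

```
root # SUSEConnect --url https://smt.example.com
```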

If you want to register your node at an SMT server, refer to the SMT Guide at

If you want to register your node at an RMT server, refer to the RMT Guide at

Note: Using a Proxy Server with Authentication

Create the file /root/.curlrc with the following content:

--proxy https://PROXY_FQDN:PROXY_PORT
--proxy-user "USER:PASSWORD"

Replace PROXY_FQDN with the fully qualified domain name of the proxy server and PROXY_PORT with its port. Replace USER and PASSWORD with the credentials of an allowed user for the proxy server.

A Appendix

A.1 Installing an Administration Node using AutoYaST

To assist with automating the installation of SUSE CaaS Platform clusters as much as possible, it is possible to automatically install the administration node with AutoYaST, similarly to the process used for worker nodes.

Be aware, though, that this requires considerable customization of the autoyast.xml file.

Here is a sample file to create an administration node.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE profile>
<profile xmlns="" xmlns:config="">
      <chroot-scripts config:type="list">
            <chrooted config:type="boolean">true</chrooted>
            <chrooted config:type="boolean">true</chrooted>
mkdir -p /root/.ssh
chmod 600 /root/.ssh
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2G7k0zGAjd+0LzhbPcGLkdJrJ/LbLrFxtXe+LPAkrphizfRxdZpSC7Dvr5Vewrkd/kfYObiDc6v23DHxzcilVC2HGLQUNeUer/YE1mL4lnXC1M3cb4eU+vJ/Gyr9XVOOReDRDBCwouaL7IzgYNCsm0O5v2z/w9ugnRLryUY180/oIGeE/aOI1HRh6YOsIn7R3Rv55y8CYSqsbmlHWiDC6iZICZtvYLYmUmCgPX2Fg2eT+aRbAStUcUERm8h246fs1KxywdHHI/6o3E1NNIPIQ0LdzIn5aWvTCd6D511L4rf/k5zbdw/Gql0AygHBR/wnngB5gSDERLKfigzeIlCKf Unsafe Shared Key" >> /root/.ssh/authorized_keys
         <timeout config:type="integer">8</timeout>
         <suse_btrfs config:type="boolean">true</suse_btrfs>
      <ask-list config:type="list" />
         <confirm config:type="boolean">false</confirm>
         <second_stage config:type="boolean">false</second_stage>
         <self_update config:type="boolean">false</self_update>
      <proposals config:type="list" />
         <partition_alignment config:type="symbol">align_optimal</partition_alignment>
         <start_multipath config:type="boolean">false</start_multipath>
          <accept_file_without_checksum config:type="boolean">true</accept_file_without_checksum>
          <accept_non_trusted_gpg_key config:type="boolean">true</accept_non_trusted_gpg_key>
          <accept_unknown_gpg_key config:type="boolean">true</accept_unknown_gpg_key>
          <accept_unsigned_file config:type="boolean">true</accept_unsigned_file>
          <accept_verification_failed config:type="boolean">false</accept_verification_failed>
          <import_gpg_key config:type="boolean">true</import_gpg_key>
   <partitioning config:type="list">
         <initialize config:type="boolean">true</initialize>
      <copy_config config:type="boolean">false</copy_config>
      <import config:type="boolean">false</import>
      <languages />
         <dhclient_client_id />
         <dhcp_hostname config:type="boolean">true</dhcp_hostname>
         <write_hostname config:type="boolean">false</write_hostname>
      <interfaces config:type="list">
      <ipv6 config:type="boolean">true</ipv6>
      <keep_install_network config:type="boolean">true</keep_install_network>
      <setup_before_proposal config:type="boolean">true</setup_before_proposal>
      <managed config:type="boolean">false</managed>
         <ipv4_forward config:type="boolean">false</ipv4_forward>
         <ipv6_forward config:type="boolean">false</ipv6_forward>
      <image />
      <install_recommended config:type="boolean">false</install_recommended>
      <instsource />
      <patterns config:type="list">
      <patterns config:type="list">
         <disable config:type="list">
         <enable config:type="list">


   <users config:type="list">
         <encrypted config:type="boolean">false</encrypted>
         <authorized_keys config:type="list">
             <listentry>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2G7k0zGAjd+0LzhbPcGLkdJrJ/LbLrFxtXe+LPAkrphizfRxdZpSC7Dvr5Vewrkd/kfYObiDc6v23DHxzcilVC2HGLQUNeUer/YE1mL4lnXC1M3cb4eU+vJ/Gyr9XVOOReDRDBCwouaL7IzgYNCsm0O5v2z/w9ugnRLryUY180/oIGeE/aOI1HRh6YOsIn7R3Rv55y8CYSqsbmlHWiDC6iZICZtvYLYmUmCgPX2Fg2eT+aRbAStUcUERm8h246fs1KxywdHHI/6o3E1NNIPIQ0LdzIn5aWvTCd6D511L4rf/k5zbdw/Gql0AygHBR/wnngB5gSDERLKfigzeIlCKf Unsafe Shared Key</listentry>
      <do_registration config:type="boolean">false</do_registration>
      <install_updates config:type="boolean">true</install_updates>
      <slp_discovery config:type="boolean">false</slp_discovery>
      <configure_dhcp config:type="boolean">false</configure_dhcp>
      <peers config:type="list">
          <initial_sync config:type="boolean">true</initial_sync>
          <initial_sync config:type="boolean">true</initial_sync>
          <initial_sync config:type="boolean">true</initial_sync>
      <start_at_boot config:type="boolean">true</start_at_boot>
      <start_in_chroot config:type="boolean">true</start_in_chroot>

Copy the above and paste it into a file named autoyast.xml, then edit it as appropriate for your configuration. After you have prepared the file, you will need to put it on a Web server that is accessible to the SUSE CaaS Platform cluster.
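For a quick test, any static file server is sufficient; for example, you could serve the directory containing autoyast.xml with Python's built-in HTTP server (a sketch; use your regular web server for production):

```
tux > python3 -m http.server 8000
```

The AutoYaST URL would then be http://<server>:8000/autoyast.xml.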

After this, install the admin node by following the same procedure as for a worker node in Section 4.4.3, “Automatic Installation Using AutoYaST”.

For more information about using and customizing AutoYaST, refer to

For more information about using pre-hashed passwords, refer to Section, “Setting Password”.
