SUSE Linux Enterprise Server 12 SP2

Docker Guide

This guide introduces Docker, a lightweight virtualization solution for running multiple virtual units (containers) simultaneously on a single control host.

Publication Date: August 15, 2017

Copyright © 2006–2017 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

1 Docker Overview

Docker is a lightweight virtualization solution for running multiple virtual units (containers) simultaneously on a single control host. Containers are isolated with kernel control groups (cgroups) and kernel namespaces.

Full virtualization solutions such as Xen, KVM, or libvirt are based on the processor simulating a complete hardware environment and controlling the virtual machines. Docker, in contrast, provides operating system-level virtualization, where the Linux kernel controls isolated containers.

Before going into detail about Docker, let's define some of the terms used:

Docker engine

The Docker engine is a client-server application that performs all tasks related to containers. The Docker engine comprises the following:

  • daemon - the server side of the Docker engine that manages all Docker objects (images, containers, networks used by containers, and so on)

  • REST API - applications can use this API to communicate directly with the daemon

  • a CLI client - enables you to communicate with the daemon. If the daemon is running on a different machine than the CLI client, the CLI client communicates over network sockets using the REST API provided by the Docker engine.

Image

An image is a read-only template used to create containers on the host server. A Docker image is made up of a series of layers built one on top of the other. Each layer corresponds to a permanent change, for example an update of an application. The instructions that produce these changes are stored in a file called a Dockerfile. For more details see the official Docker documentation.

Dockerfile

A Dockerfile stores the changes to be made on top of a base image. The Docker engine reads the instructions in the Dockerfile and builds a new image accordingly.
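As an illustration, a minimal Dockerfile could look like the following sketch (the base image and the installed package are only examples; Chapter 5 shows SLES-specific Dockerfiles in detail):

    FROM suse/sles12sp2:latest
    # Each instruction adds a new layer on top of the base image
    RUN zypper -n in vim
    # Default command executed when a container starts from this image
    CMD ["/bin/bash"]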

Container

A container is a running instance based on a particular Docker image. Each container is distinguished by a unique container ID.

Registry

A registry is storage for created images. It typically contains several repositories. There are two types of registry:

  • public registry - where everyone (usually after registering) can download and use images. A typical public registry is Docker Hub.

  • private registry - these are accessible for particular users or from a particular private network.

Repository

A repository is storage in a registry that stores different versions of a particular image. You can pull images from, or push images to, a repository.
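For example, assuming a hypothetical registry at registry.example.com:5000, pulling and pushing image versions could look as follows (the image names and tags are examples):

    # Pull a particular version (tag) of an image from a repository
    docker pull registry.example.com:5000/sles12sp2:latest

    # Tag a local image into the same repository and push it back
    docker tag sles12sp2:latest registry.example.com:5000/sles12sp2:v2
    docker push registry.example.com:5000/sles12sp2:v2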

Control groups

Control groups, also called cgroups, is a Linux kernel feature that allows aggregating or partitioning tasks (processes) and all their children into hierarchically organized groups to isolate resources.

Namespace

Docker uses namespaces for its containers to isolate the resources reserved for a particular container.

Docker is a platform that allows developers and system administrators to manage the complete life cycle of images. Docker makes it easy to build, ship and run images containing applications.

Docker provides you with the following advantages:

  • Isolation of applications and operating systems through containers.

  • Near native performance, as Docker manages allocation of resources in real time.

  • Controls network interfaces and resources available inside containers through cgroups.

  • Versioning of images.

  • Allows building new images based on existing ones.

  • Provides you with container orchestration.

On the other hand, Docker has the following limitations:

Limitations of Docker
  • Containers run inside the host system's kernel and cannot use a different kernel.

  • Only allows Linux guest operating systems.

  • Docker is not a full virtualization stack like Xen, KVM, or libvirt.

  • Security depends on the host system. Refer to the official security documentation for more details.

1.1 Docker Architecture

Docker uses a client/server architecture. You can use the CLI client to communicate with the daemon. The daemon then performs operations with containers and manages images locally or in a registry. The CLI client can run on the same server as the host daemon or on a different machine. The CLI client communicates with the daemon by using network sockets. The architecture is depicted in Figure 1.1, “The docker architecture”.

The docker architecture
Figure 1.1: The docker architecture

1.2 Docker Drivers

1.2.1 Container Drivers

Docker uses libcontainer as the back-end driver to handle containers.

1.2.2 Storage Drivers

Docker supports different storage drivers:

  • vfs: this driver is automatically used when the Docker host file system does not support copy-on-write. This is a simple driver that does not offer some of the advantages of Docker (such as sharing layers; more on that in the next sections). It is highly reliable but also slow.

  • devicemapper: this driver relies on the device-mapper thin provisioning module. It supports copy-on-write, hence it offers all the advantages of Docker.

  • btrfs: this driver relies on Btrfs to provide all the features required by Docker. To use this driver the /var/lib/docker directory must be on a Btrfs file system.

  • AUFS: this driver relies on the AUFS union file system. Neither the upstream kernel nor the SUSE one supports this file system. Hence the AUFS driver is not built into the SUSE Docker package.

SLE 12 uses the Btrfs file system by default, which leads Docker to use the btrfs driver.

It is possible to specify which driver to use by changing the value of the DOCKER_OPTS variable defined in the /etc/sysconfig/docker file. This can be done either manually or in YaST by browsing to the System › /etc/sysconfig Editor › System › Management › DOCKER_OPTS menu and entering the -s storage_driver string.

For example, to force the usage of the devicemapper driver enter the following text:

DOCKER_OPTS="-s devicemapper"
Important: Mounting /var/lib/docker

It is recommended to mount /var/lib/docker on a separate partition or volume, so that a file system corruption does not affect the Docker host operating system.

In case you choose the Btrfs file system for /var/lib/docker, it is strongly recommended to create a subvolume for it. This ensures that the directory is excluded from file system snapshots. If you do not exclude /var/lib/docker from snapshots, the file system will likely run out of disk space soon after you start deploying containers. What's more, a rollback to a previous snapshot will also reset the Docker database and images. Disabling copy-on-write for this subvolume is also recommended, to avoid duplication of data blocks. Refer to Creating and Mounting New Subvolumes at https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_snapper_setup.html for details.

2 Docker Installation

2.1 General Preparation

Prepare the host as described below. Before installing any Docker-related packages, you need to enable the container module:

Note: Built-in Docker Orchestration Support

Starting with Docker 1.12, container orchestration is an integral part of the Docker engine. Even though this feature is available in SLES 12 SP1 and SLES 12 SP2, it is not supported and is only a technical preview. Use Kubernetes for Docker container orchestration; for details refer to the Kubernetes documentation.

Procedure 2.1: Enabling the Container Module Using YaST
  1. Start YaST, and select Software ›  Software Repositories.

  2. Click Add to open the add-on dialog.

  3. Select Extensions and Modules from Registration Server and click Next.

  4. From the list of available extensions and modules, select Container Module 12 x86_64 and click Next.

    The containers module and its repositories will be added to your system.

  5. If you use Subscription Management Tool, update the list of repositories on the SMT server.

Procedure 2.2: Enabling the Container Module Using SUSEConnect
  • The Container Module can be added also with the following command:

    $ sudo SUSEConnect -p sle-module-containers/12/x86_64 -r ''
    Note: About the SUSEConnect syntax

    The -r '' flag is required to avoid a known limitation of SUSEConnect.

Procedure 2.3: Installing and Setting Up Docker
  1. Install the docker package:

    sudo zypper install docker
  2. To automatically start the Docker service at boot time:

    sudo systemctl enable docker.service

    This automatically enables docker.socket as well.

  3. If you will be using Portus and an SSL-secured registry, open the /etc/sysconfig/docker file. Search for the parameter DOCKER_OPTS and add --insecure-registry ADDRESS_OF_YOUR_REGISTRY.

  4. In a production environment using an SSL-secured registry with Portus, add the CA certificate to the /etc/docker/certs.d/<registry address> directory and copy it to the system-wide certificate store:

        sudo cp CA /etc/pki/trust/anchors/ && update-ca-certificates
  5. Start the Docker service:

    sudo systemctl start docker.service

    This automatically starts docker.socket as well.

The Docker daemon listens on a local socket which is accessible only by the root user and by the members of the docker group. The docker group is automatically created at package installation time. To allow a certain user to connect to the local Docker daemon, use the following command:

sudo /usr/sbin/usermod -aG docker USERNAME

The user can communicate with the local Docker daemon after their next login.

2.2 Networking

If you want your containers to be able to access the external network, you must enable the ipv4 ip_forward rule. This can be done using YaST by browsing to System › Network Settings › Routing menu and ensuring Enable IPv4 Forwarding is checked.

This option cannot be changed when networking is handled by NetworkManager. In such cases, edit the /etc/sysconfig/SuSEfirewall2 file manually to ensure that the FW_ROUTE flag is set to yes:

FW_ROUTE="yes"
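If you prefer the command line over YaST, IPv4 forwarding can also be toggled with sysctl as a sketch; this change is not persistent across reboots and requires root privileges:

    # Enable IPv4 forwarding for the running system
    sudo sysctl -w net.ipv4.ip_forward=1

    # Verify the current value (1 means enabled)
    cat /proc/sys/net/ipv4/ip_forward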

2.2.1 Networking Limitations on Power Architecture

Currently Docker networking has two limitations on the Power architecture.

The first limitation concerns iptables. SLE 12 machines cannot run Docker with iptables support enabled. A kernel update will solve this issue. In the meantime, the Docker package for Power has iptables support disabled via a dedicated directive in /etc/sysconfig/docker.

As a result of this limitation, Docker containers do not have access to the external network. A possible workaround is to share the same network namespace between the host and the containers; however, this reduces the isolation of the containers.

The network namespace of the host can be shared on a per-container basis by adding --net=host to the docker run command.
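For example, the following sketch starts a throwaway container sharing the host's network namespace (the image name is an example):

    # --net=host disables network isolation for this container
    docker run --rm -it --net=host sles12sp2 /bin/bash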

Note: iptables support on SLE 12 SP1

SLE 12 SP1 hosts are not affected by this limitation but, given they use the same SLE 12 package, they will have iptables support disabled. This can be changed by removing the -iptables=false setting inside of /etc/sysconfig/docker.

The second limitation is about network isolation between the containers and the host. Currently it is not possible to prevent containers from probing or accessing arbitrary ports of each other.

3 Installing sle2docker

The sle2docker tool is used to import pre-built SUSE Linux Enterprise images. The imported pre-built images can then be used to create base Docker images.

The tool is part of the official Container Module. You can install it by using Zypper. Before installing sle2docker, verify that the following prerequisites are fulfilled:

  • Ruby is installed on the host machine.

  • The docker daemon is running on the system.

  • The user invoking sle2docker must have proper rights to invoke Docker commands.

If the conditions above are fulfilled, you can install the sle2docker tool by running:

sudo zypper in sle2docker

4 Hosting Docker Images On-premise

What can be done with the custom Docker images you created? How can they be shared within your organization? The easiest solution is to push these images to the Docker Hub. By default, all images pushed to the Docker Hub are public. This is probably fine as long as it does not violate your company's policy and your images do not contain sensitive data or proprietary software.

If you need to restrict access to your Docker images, there are two possibilities:

  • Get a subscription on Docker Hub that unlocks private repositories.

  • Run an on-site Docker registry to store all the Docker images used by your organization or company, and combine it with Portus to secure the registry.

This chapter describes how to set up an on-site Docker registry and how to combine it with Portus.

4.1 What is a Docker Registry?

The Docker registry is an open source project created by Docker Inc. It allows the storage and retrieval of Docker images. By running a local instance of the Docker registry it is possible to completely avoid usage of the Docker Hub.

The Docker registry is also used by the Docker Hub. However, from the user perspective, the Docker Hub consists of at least the following parts:

  • The user interface (UI): The part that is accessed by users with their browser. The UI provides a nice and intuitive way to browse the contents of the Docker Hub, either manually or by using a search feature. It also allows you to create organizations made up of different users.

    This component is closed source.

  • The authentication component: This is used to protect the images stored inside of the Docker Hub. It validates all push, pull and search requests.

    This component is closed source.

  • The storage back-end: This is where the Docker images are sent and downloaded from. It is provided by the Docker registry.

    This component is open source.

4.2 Installing and Setting Up Docker Registry

  1. Install the docker-distribution-registry package:

    sudo zypper install docker-distribution-registry
  2. To automatically start the Docker registry at boot time:

    sudo systemctl enable registry
  3. Start the Docker registry:

    sudo systemctl start registry

The Docker registry configuration is defined inside of /etc/registry/config.yml.

With the default configuration, the registry listens on port 5000 and stores the Docker images under /var/lib/docker-registry.
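As a sketch, pushing an existing local image to this registry could look as follows (the image and repository names are examples):

    # Tag a local image for the registry listening on localhost:5000
    docker tag sles12sp2 localhost:5000/my-project/sles12sp2

    # Push the image to the local registry
    docker push localhost:5000/my-project/sles12sp2

    # Other hosts with access to the registry can now pull it
    docker pull REGISTRY_HOST:5000/my-project/sles12sp2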

Note: Incompatible Versions of Docker and Docker Registry

Docker registry version 2.3 is not compatible with Docker versions older than 1.10, because v2 manifests were only introduced with Docker 1.10. As Docker and Docker registry can be installed on different boxes, the versions might be incompatible. If you experience communication errors between Docker and Docker registry, update both to the latest versions.

For more details about Docker registry and its configuration, see the official documentation at: https://docs.docker.com/registry/.

4.3 Limitations

The Docker registry has two major limitations:

  • It lacks any form of authentication, which means that everybody with access to the Docker registry can push and pull images. This also includes the possibility to overwrite existing images.

  • There is no way to see which images have been pushed to the Docker registry; you have to manually keep track of what is stored in it. There is also no search functionality, which makes collaboration harder.

The next section introduces Portus, the solution to all of the problems above.

4.4 Portus

Portus is an authentication service and user interface for the Docker registry. It is an open source project created by SUSE to address all the limitations faced by the local instances of Docker registry.

By combining Portus and Docker registry, it is possible to have a secure and enterprise ready on-premise version of the Docker Hub.

Portus is accessible for SLE 12 customers through the Containers module. To install Portus, use the following command:

sudo zypper in Portus

In order to configure Portus properly, follow these steps:

  1. First of all, install Portus's dependencies if you have not done so already. This is thoroughly documented at http://port.us.org/docs/setups/1_rpm_packages.html#portus-dependencies. That document guides you through the installation process and warns you about common pitfalls.

  2. After installing Portus and its dependencies, you need to configure your instance. The initial setup of Portus is explained here: http://port.us.org/docs/setups/1_rpm_packages.html#initial-setup. When you are done with portusctl, you should modify some configurable values before using Portus. This is thoroughly explained in this documentation page: http://port.us.org/docs/Configuring-Portus.html.

  3. To apply the configuration changes, restart Apache (this is required after each configuration change).

  4. Finally, when entering Portus for the first time, you will be required to enter some information about your installed registry. For details, see: http://port.us.org/docs/setups/1_rpm_packages.html#the-default-installation.

  5. The Portus setup is now complete and you can start using Portus.

Currently, Portus is part of SUSE's Docker offer as a technology preview. For more information and documentation about Portus, see: http://port.us.org/.

5 Creating Custom Images

To create your custom image you need a base Docker image of SLES. You can use any of the pre-built SLES images that you can obtain as described in Section 5.1, “Obtaining Base SLES Images”.

Note: No SLES Images in Docker Hub

Usually you can pull a variety of base Docker images from the Docker Hub, but that does not apply to SLES. Currently SUSE cannot distribute SLES images for Docker because there is no way to associate an End-User License Agreement (EULA) with a Docker image. sle2docker enables you to import pre-built SLES images that you can use for creating base SLES images.

After you obtain your base Docker image, you can modify it by using a Dockerfile (usually placed in the build directory). Then use the standard Docker build tool to create your custom image:

         docker build path_to_build_directory

For more docker build options please refer to the official Docker documentation.

Note: Dockerizing Your Applications

You may want to write a Dockerfile for your own application that should run inside a Docker container. For the procedure, refer to Chapter 6, Dockerizing Applications.

5.1 Obtaining Base SLES Images

You can install pre-built images of SLES by using Zypper:

        sudo zypper in sles11sp4-docker-image sles12sp2-docker-image

Pre-built images do not have repositories configured. But when the Docker host has an SLE subscription that provides access to the product used in the image, Zypper will automatically have access to the right repositories.

After the pre-built images are installed, you need to list them using sle2docker to get a proper image name:

        sle2docker list

Now you need to activate the pre-built images:

        sle2docker activate PRE-BUILT_IMAGE_NAME

After successful activation, sle2docker will display the name of the docker image. You can customize the docker image as described in Section 5.2, “Customizing SLES Docker Images”.

5.2 Customizing SLES Docker Images

The pre-built images do not have any repository configured and do not include any modules or extensions. They contain a zypper service that contacts either the SUSE Customer Center (SCC) or your Subscription Management Tool (SMT) server, according to the configuration of the SLE host that runs the Docker container. The service obtains the list of repositories available for the product used by the Docker image. You can also directly declare extensions in your Dockerfile (for details refer to Section 5.2.3, “Adding SLE Extensions and Modules to Images”).

You do not need to add any credentials to the Docker image because the machine credentials are automatically injected into the container by the docker daemon. They are injected inside of the /run/secrets directory. The same applies to the /etc/SUSEConnect file of the host system, which is automatically injected into the /run/secrets directory.

Note: Credentials and Security

The contents of the /run/secrets directory are never committed to a Docker image, hence there is no risk of your credentials leaking.

To obtain the list of repositories use the following command:

zypper ref -s

It will automatically add all the repositories to your container. For each repository added to the system, a new file is created under /etc/zypp/repos.d. The URLs of these repositories include an access token that automatically expires after 12 hours. To renew the token, call the zypper ref -s command again. It is safe to commit these files to a Docker image.

If you want to use a different set of credentials, place a custom /etc/zypp/credentials.d/SCCcredentials file inside of the Docker image. It contains the machine credentials that have the subscription you want to use. The same applies to the SUSEConnect file: to override the file available on the host system that is running the Docker container, add a custom /etc/SUSEConnect file inside of the Docker image.

Now you can create a custom Docker image by using a Dockerfile. If you want to create a custom SLE 12 image, please refer to Section 5.2.1, “Creating a Custom SLE 12 Image”. If you want to create a custom SLE 11 Docker image, please refer to Section 5.2.2, “Creating a Custom SLE 11 SP4 Image”. In case you would like to move your application to a Docker container, please refer to Chapter 6, Dockerizing Applications.

5.2.1 Creating a Custom SLE 12 Image

The following Docker file creates a simple Docker image based on SLE 12 SP2:

FROM suse/sles12sp2:latest

RUN zypper --gpg-auto-import-keys ref -s
RUN zypper -n in vim

When the Docker host machine is registered against an internal SMT server, the Docker image requires the SSL certificate used by SMT:

FROM suse/sles12sp2:latest

# Import the crt file of our private SMT server
ADD http://smt.test.lan/smt.crt /etc/pki/trust/anchors/smt.crt
RUN update-ca-certificates

RUN zypper --gpg-auto-import-keys ref -s
RUN zypper -n in vim

5.2.2 Creating a Custom SLE 11 SP4 Image

The following Docker file creates a simple Docker image based on SLE 11 SP4:

FROM suse/sles11sp4:latest

RUN zypper --gpg-auto-import-keys ref -s
RUN zypper -n in vim

When the Docker host machine is registered against an internal SMT server, the Docker image requires the SSL certificate used by SMT:

FROM suse/sles11sp4:latest

# Import the crt file of our private SMT server
ADD http://smt.test.lan/smt.crt /etc/ssl/certs/smt.pem
RUN c_rehash /etc/ssl/certs

RUN zypper --gpg-auto-import-keys ref -s
RUN zypper -n in vim

5.2.3 Adding SLE Extensions and Modules to Images

You may have a subscription for an SLE extension or module that you would like to use in your custom image. To add the extension or module to the Docker image, proceed as follows:

Procedure 5.1: Adding Extension and Modules
  1. Add the following into your Dockerfile:

    ADD *.repo /etc/zypp/repos.d/
    ADD *.service /etc/zypp/services.d/
    RUN zypper refs && zypper refresh
  2. Copy all .service and .repo files that you will use into the directory from which you will build the Docker image.

6 Dockerizing Applications

Docker is a technology that can help you minimize the resources used to run or build your applications. Several types of application are suitable to run inside a Docker container: daemons, Web servers, or applications that expose ports for communication. You can use Docker to automate building and deployment processes: add the build process to a Docker image, build the image, and then run containers based on that image.

Running your application inside a docker container provides you with the following advantages:

  • You can minimize the runtime environment of the application, as you can add to the application's Docker image only the required processes and applications.

  • The image with your application is portable across machines, even with different Linux host systems.

  • You can share the image of your application by using a repository.

  • You can use different versions of required packages in the container than the host system uses without having problems with dependencies.

  • You can run several instances of the same application that are completely independent from each other.

Using Docker for building of applications provides the following features:

  • You can prepare a complete building image.

  • Your build always runs in the same environment.

  • Your developers can test their code in the same environment as used in production.

  • You can set up an automated building process.

The following section provides you with examples and tips on how to dockerize your applications. Prior to reading further, make sure that you have activated your SLES base docker image as described in Section 5.1, “Obtaining Base SLES Images”.

6.1 Running an Application with Specific Package Versions

You may face the problem that your application uses a specific version of a package that differs from the version installed on the system that should run your application. You can modify your application to work with another version, or you can create a Docker image with the particular package version. The following example Dockerfile shows an image based on the current version of SLES but with an older version of the example package:

                FROM suse/sles12sp2:latest
                MAINTAINER Tux

                RUN zypper ref && zypper in -f example-1.0.0-0
                COPY application.rpm /tmp/

                RUN zypper --non-interactive in /tmp/application.rpm

                ENTRYPOINT ["/etc/bin/application"]

                CMD ["-i"]

Now you can build the image by running the following command in the directory where the Dockerfile resides:

                docker build --tag tux_application:latest .

The Dockerfile example shown above performs the following operations during the docker build:

  1. Refreshes the SLES repositories.

  2. Installs the desired version of the example package.

  3. Copies your application package into the image. The RPM must be placed in the build context.

  4. Installs your application from the RPM.

  5. The last two steps run your application after a container is started.

After a successful build of the tux_application image, you can start a container based on your new image:

                docker run -it --name application_instance tux_application:latest

You have created a container that runs a single instance of your application. Bear in mind that when the application exits, the Docker container exits as well.
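Because the Dockerfile above combines ENTRYPOINT with CMD, the -i argument is only a default: any arguments given on the docker run command line replace CMD, while ENTRYPOINT stays fixed. As a sketch (the --verbose flag is a hypothetical option of the application):

    # Runs /etc/bin/application -i (the default CMD)
    docker run -it --name app_default tux_application:latest

    # Runs /etc/bin/application --verbose instead
    docker run -it --name app_verbose tux_application:latest --verbose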

6.2 Running Applications with Specific Configuration

You may need to run an application that is delivered in a standard package available in the SLES repositories, but that requires a different configuration or specific environment variables. If you would like to run several instances of the application with a non-standard configuration, you can create your own image that passes the custom configuration to the application.

An example with the example application follows:

                FROM suse/sles12sp2:latest

                RUN zypper ref && zypper --non-interactive in example

                ENV BACKUP=/backup

                RUN mkdir -p $BACKUP
                COPY configuration_example /etc/example/

                ENTRYPOINT ["/etc/bin/example"]

The above example Dockerfile results in the following operations:

  1. Refreshes the repositories and installs the example package.

  2. Sets a BACKUP environment variable (the variable persists in containers started from the image). You can always override the value of the variable with a new one when running a container.

  3. Creates the directory /backup.

  4. Copies the configuration_example to the image.

  5. Runs the example application.

Now you can build the image and after a successful build, you can run a container based on your image.
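For example, assuming the image above was built with the tag example_app (the tag is an example), the BACKUP variable can be overridden per container with the -e option:

    # Build the image
    docker build -t example_app .

    # Start a container with the default BACKUP=/backup
    docker run -it example_app

    # Start another container with a different backup location
    docker run -it -e BACKUP=/mnt/backup example_app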

6.3 Sharing Data Between an Application and the Host System

You may run an application that needs to share data between the application's container and the host file system. Docker enables data sharing by using volumes. You can declare a mount point directly in the Dockerfile, but you cannot specify a directory on the host system in the Dockerfile, as the directory may not be accessible at build time. You can find the mounted directory under /var/lib/docker/volumes/ on the host system.

Note: Discarding Changes to the Directory to Be Shared

After you declare a mount point by using the VOLUME instruction, all changes performed (by using the RUN instruction) on the directory are discarded. After the declaration, the volume is part of a temporary container that is removed after a successful build. If you need to, for example, change permissions, perform the change before declaring the directory as a mount point in the Dockerfile.

You can specify a particular mount point on the host system when running a container by using the -v option:

                docker run -it --name testing -v /home/tux/data:/data sles12sp2:latest /bin/bash
Note

Using the -v option overrides the VOLUME instruction if you specify the same mount point in the container.

Now let's create an example image with a Web server that will read Web content from the host's file system. The Dockerfile could look as follows:

                FROM suse/sles12sp2:latest

                RUN zypper ref && zypper --non-interactive in apache2

                COPY apache2 /etc/sysconfig/

                RUN mkdir -p /data
                RUN chown -R admin /data

                EXPOSE 80

                VOLUME /data

                ENTRYPOINT ["apache2ctl"]

The example above installs the Apache Web server into the image and copies your configuration into the image. The /data directory is owned by the admin user and is used as a mount point to store your Web pages.
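Assuming the image above was built with the tag tux_web, a container serving content from a host directory could be started as follows (the tag, paths, and ports are examples):

    # Build the Web server image
    docker build -t tux_web .

    # Mount host Web content into /data and publish container port 80 on host port 8080
    docker run -d --name web -v /srv/www/htdocs:/data -p 8080:80 tux_web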

6.4 Applications Running in the Background

Your application may need to run in the background, for example as a daemon or as an application exposing ports for communication. In that case, run the Docker container itself in the background; do not run your application in the background inside the container, as that may cause the container to exit. An example Dockerfile for an application exposing a port looks as follows:

                FROM suse/sles12sp2:latest

                RUN zypper ref && zypper --non-interactive in postfix
                RUN mkdir -p /var/spool/mail

                COPY main.cf /etc/postfix/main.cf

                EXPOSE 25 587

                VOLUME ["/var/spool/mail"]
                ENTRYPOINT ["/usr/sbin/postfix"]

Now you can build your image. Docker performs the following operations according to the instructions in the Dockerfile:

  1. Docker refreshes repositories and installs the postfix mail server as it is not installed by default in the SLES docker image.

  2. The /var/spool/mail directory is created in the file system of the image. The directory will store all mailboxes if you configure the mail server to store data in this directory.

  3. Copies a configuration file into the image. Make sure that main.cf is located in the same directory as the Dockerfile. Bear in mind that the configuration will be the same for all instances run in the future. If you need a different configuration for each container, edit it after starting the container.

  4. Each started container will expose ports 25 and 587. If you run several instances of this image on the same machine, you should define a specific host name for each container.

  5. The VOLUME instruction creates a mount point at the /var/spool/mail directory in the container.

  6. The last instruction runs the postfix mail server in the started container.

After a successful build, you can run a container based on the image:

                docker run -d --name mail_server -v /var/spool/mail:/var/spool/mail postfix:latest

The -d option runs the container in detached mode; further communication via the CLI is then not possible. To reattach to the container, run:

                docker attach <container identification>

7 Working with Containers

After you have created your images, you can start containers based on them. You can run an instance of an image by using the docker run command. The Docker engine then creates and starts the container. The docker run command takes several arguments, for example:

  • A container name - it is recommended to name your container.

  • Specify a user to use in your container.

  • Define a mount point.

  • Specify a particular hostname, etc.

The container typically exits if its main process finishes. For example, if your container starts a particular application, as soon as you quit the application, the container exits. You can start the container again by running:

docker start -ai <container name>

You may need to remove unused containers. You can achieve this by using:

docker rm <container name>

7.1 Linking Containers

Docker enables you to link containers together, which allows for communication between containers on the same host server. If you use the standard networking model, you can link containers by using the --link option when running them.

First create a container to link to:

docker run -d --name sles sles12sp2 /bin/bash

Then create a container that will link to the sles container:

docker run --link sles:sles sles12sp2 /bin/bash

The container that links to sles has environment variables defined that enable it to connect to the linked container.
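To see which variables the link defines, you can print the environment of a linked container; the variable names are prefixed with the upper-cased alias (for example SLES_NAME for the alias sles):

    # Print the environment variables created by the --link option
    docker run --rm --link sles:sles sles12sp2 env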

A Documentation Updates

This chapter lists content changes for this document.

This manual was updated on the following dates:

A.1 November 2016 (Initial Release of SUSE Linux Enterprise Server 12 SP2)

General
  • The e-mail address for documentation feedback has changed to doc-team@suse.com.

  • The documentation for Docker has been enhanced and renamed to Docker Guide.
