SUSE Linux Enterprise Server 15

Docker Open Source Engine Guide

This guide introduces Docker Open Source Engine, a lightweight virtualization solution to run virtual units simultaneously on a single control host.

Publication Date: December 20, 2018

Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

1 Docker Open Source Engine Overview

Docker Open Source Engine is a lightweight virtualization solution to run multiple virtual units (containers) simultaneously on a single control host. Containers are isolated with kernel control groups (cgroups) and kernel namespaces.

Full virtualization solutions such as Xen, KVM, or libvirt are based on the processor simulating a complete hardware environment and controlling the virtual machines. Docker Open Source Engine, in contrast, only provides operating system-level virtualization, where the Linux kernel controls isolated containers.

Before going into detail about Docker Open Source Engine, let us define some of the terms used:

Docker Open Source Engine

Docker Open Source Engine is a client-server application that performs all tasks related to containers. Docker Open Source Engine comprises the following components:

  • Daemon:  The server side of Docker Open Source Engine manages all Docker objects (images, containers, network connections used by containers, etc.).

  • REST API:  Applications can use this API to communicate directly with the daemon.

  • CLI Client:  Enables you to communicate with the daemon. If the daemon is running on a different machine than the CLI client, the CLI client communicates over network sockets or through the REST API provided by Docker Open Source Engine.
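As a quick illustration of the client/server split, the REST API can be queried directly over the daemon's local Unix socket, bypassing the CLI client entirely (a sketch, assuming curl is installed and the daemon is running):

```shell
# Ask the daemon for its version over the local Unix socket.
# Requires root privileges or membership in the docker group.
sudo curl --unix-socket /var/run/docker.sock http://localhost/version
```

The CLI client issues the same kind of HTTP requests under the hood.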

Image

An image is a read-only template used to create containers on the host server. A Docker image is made up of a series of layers built one on top of the other. Each layer corresponds to a permanent change, for example an update of an application. The instructions that produce these changes are stored in a file called a Dockerfile. For more details, see the official Docker documentation.

Dockerfile

A Dockerfile stores changes made on top of the base image. The Docker Open Source Engine reads instructions in the Dockerfile and builds a new image according to the instructions.

Container

A container is a running instance based on a particular Docker Image. Each container can be distinguished by a unique container ID.
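For example, the container ID is printed when a container is started in detached mode, and docker ps lists it for all running containers (the image name below is only an example):

```shell
# Start a container in the background; docker run prints its full container ID.
docker run --detach registry.suse.com/suse/sle15 sleep 60

# List running containers; the first column shows the (shortened) container ID.
docker ps
```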

Registry

A registry is storage for already created images. It typically contains several repositories. There are two types of registry:

  • public registry - a registry where everyone (usually after registration) can download and use images. A typical public registry is Docker Hub.

  • private registry - a registry accessible only to particular users or from a particular private network.

Repository

A repository is storage in a registry that holds the different versions of a particular image. You can pull images from, or push images to, a repository.

Control groups

Control groups, also called cgroups, are a Linux kernel feature that allows aggregating or partitioning tasks (processes) and all their children into hierarchically organized groups to isolate resources.
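cgroup membership is visible through the /proc file system; every Linux process, containerized or not, belongs to a set of cgroups:

```shell
# Show the cgroup hierarchies the current shell belongs to.
# Inside a Docker container, these entries typically point below a docker scope.
cat /proc/self/cgroup
```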

Namespaces

Docker Open Source Engine uses kernel namespaces to isolate the resources reserved for each container.

Orchestration

In a production environment you typically need a cluster with many containers on each cluster node. The containers must cooperate and you need a framework that enables you to manage the containers automatically. The act of automatic container management is called container orchestration and is typically handled by Kubernetes.

Docker Open Source Engine is a platform that allows developers and system administrators to manage the complete lifecycle of images. Docker Open Source Engine makes it easy to build, ship and run images containing applications.

Docker Open Source Engine provides you with the following advantages:

  • Isolation of applications and operating systems through containers.

  • Near native performance, as Docker Open Source Engine manages allocation of resources in real time.

  • Controls network interfaces and resources available inside containers through cgroups.

  • Versioning of images.

  • Allows building new images based on existing ones.

  • Provides you with container orchestration.

On the other hand, Docker Open Source Engine has the following limitations:

Limitations of Docker Open Source Engine
  • Containers run inside the host system's kernel and cannot use a different kernel.

  • Only allows Linux guest operating systems.

  • Docker Open Source Engine is not a full virtualization stack like Xen, KVM, or libvirt.

  • Security depends on the host system. Refer to the official security documentation for more details.

1.1 Docker Open Source Engine Architecture

Docker Open Source Engine uses a client/server architecture. You can use the CLI client to communicate with the daemon. The daemon then performs operations with containers and manages images locally or in a registry. The CLI client can run on the same host as the daemon or on a different machine. The CLI client communicates with the daemon by using network sockets. The architecture is depicted in Figure 1.1, “The Docker Open Source Engine Architecture”.

The Docker Open Source Engine Architecture
Figure 1.1: The Docker Open Source Engine Architecture

1.2 Docker Drivers

1.2.1 Container Drivers

Docker Open Source Engine uses libcontainer as the back-end driver to handle containers.

1.2.2 Storage Drivers

Docker Open Source Engine supports different storage drivers:

  • vfs: this driver is automatically used when the Docker host file system does not support copy-on-write. This is a simple driver which does not offer some of the advantages of Docker Open Source Engine (such as sharing layers; more on that in the next sections). It is highly reliable but also slow.

  • devicemapper: this driver relies on the device-mapper thin provisioning module. It supports copy-on-write, hence it offers all the advantages of Docker Open Source Engine.

  • btrfs: this driver relies on Btrfs to provide all the features required by Docker Open Source Engine. To use this driver the /var/lib/docker directory must be on a Btrfs file system.

  • AUFS: this driver relies on the AUFS union file system. Neither the upstream kernel nor the SUSE kernel supports this file system. Hence the AUFS driver is not built into the SUSE docker package.

SLE 12 uses the Btrfs file system by default, which leads Docker Open Source Engine to use the btrfs driver.

It is possible to specify which driver to use by changing the value of the DOCKER_OPTS variable defined in the /etc/sysconfig/docker file. This can be done either manually or with YaST, by browsing to the System › /etc/sysconfig Editor › System › Management › DOCKER_OPTS menu and entering the -s storage_driver string.

For example, to force the usage of the devicemapper driver enter the following text:

DOCKER_OPTS="-s devicemapper"
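After changing DOCKER_OPTS, the daemon must be restarted for the new driver to take effect; docker info shows which driver is active (a sketch, assuming the docker service is already installed and enabled):

```shell
# Restart the daemon so the new DOCKER_OPTS value is picked up,
# then verify which storage driver is in use.
sudo systemctl restart docker.service
sudo docker info | grep "Storage Driver"
```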
Important
Important: Mounting /var/lib/docker

It is recommended to mount /var/lib/docker on a separate partition or volume, so that a possible file system corruption does not affect the operating system that Docker Open Source Engine runs on.

If you choose the Btrfs file system for /var/lib/docker, it is strongly recommended to create a subvolume for it. This ensures that the directory is excluded from file system snapshots. If /var/lib/docker is not excluded from snapshots, the file system will likely run out of disk space soon after you start deploying containers. In addition, a rollback to a previous snapshot will also reset the Docker database and images. For more information, see Book “Administration Guide”, Chapter 7 “System Recovery and Snapshot Management with Snapper”, Section 7.1.3.3 “Creating and Mounting New Subvolumes”.
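The recommended layout can be sketched as follows, assuming the root file system is Btrfs and the docker package is not yet installed (paths and ordering are illustrative only):

```shell
# Create a dedicated subvolume so /var/lib/docker is excluded from
# snapshots of the root file system (do this before installing docker).
sudo btrfs subvolume create /var/lib/docker

# Verify that the subvolume exists.
sudo btrfs subvolume list / | grep docker
```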

2 Docker Open Source Engine Installation

2.1 General Preparation

Prepare the host as described below. Before installing any Docker-related packages, you need to enable the container module:

Note
Note: Built-in Docker Orchestration Support

Starting with Docker Open Source Engine 1.12, container orchestration is an integral part of Docker Open Source Engine. Even though this feature is available in SUSE Linux Enterprise Server, it is not supported by SUSE and is only provided as a technical preview. Use Kubernetes for Docker container orchestration; for details, refer to the Kubernetes documentation.

Procedure 2.1: Enabling the Container Module Using YaST
  1. Start YaST, and select Software ›  Software Repositories.

  2. Click Add to open the add-on dialog.

  3. Select Extensions and Modules from Registration Server and click Next.

  4. From the list of available extensions and modules, select Container Module 15 x86_64 and click Next.

    The containers module and its repositories will be added to your system.

  5. If you use Repository Mirroring Tool, update the list of repositories on the RMT server.

Procedure 2.2: Enabling the Container Module Using SUSEConnect
  • The Container Module can also be added with the following command:

    tux > sudo SUSEConnect -p sle-module-containers/15/x86_64 -r ''
    Note
    Note: SUSEConnect Syntax

    The -r '' flag is required to avoid a known limitation of SUSEConnect.

Procedure 2.3: Installing and Setting Up Docker Open Source Engine
  1. Install the docker package:

    tux > sudo zypper install docker
  2. To automatically start the Docker service at boot time:

    tux > sudo systemctl enable docker.service

    This automatically enables docker.socket as well.

  3. If you use Portus and an SSL-secured registry, open the /etc/sysconfig/docker file. Search for the parameter DOCKER_OPTS and add --insecure-registry ADDRESS_OF_YOUR_REGISTRY.

  4. In a production environment, when using an SSL-secured registry with Portus, add the CA certificates to the directory /etc/docker/certs.d/REGISTRY_ADDRESS and copy them to the system trust store:

    tux > sudo cp CA /etc/pki/trust/anchors/ && sudo update-ca-certificates
  5. Start the Docker service:

    tux > sudo systemctl start docker.service

    This will automatically start docker.socket.

The Docker daemon listens on a local socket which is accessible only by the root user and by the members of the docker group. The docker group is automatically created at package installation time. To allow a certain user to connect to the local Docker daemon, use the following command:

tux > sudo /usr/sbin/usermod -aG docker USERNAME

The user can communicate with the local Docker daemon upon their next login.
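To check that the group change is effective, a quick sketch (tux is the example user used throughout this guide):

```shell
# Confirm group membership after the user has logged in again.
id tux | grep docker

# Any docker command that reaches the daemon proves socket access works,
# for example querying the server version without sudo:
docker version
```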

2.2 Networking

If you want your containers to be able to access the external network, you must enable IPv4 forwarding. This can be done using YaST by browsing to the System › Network Settings › Routing menu and ensuring Enable IPv4 Forwarding is checked.

This option cannot be changed when networking is handled by the Network Manager. In such cases the /etc/sysconfig/SuSEfirewall2 file needs to be edited manually to ensure the FW_ROUTE flag is set to yes:

FW_ROUTE="yes"

2.2.1 Networking Limitations on Power Architecture

Currently Docker networking has two limitations on the POWER architecture.

The first limitation concerns iptables: SLE machines cannot run Docker Open Source Engine with iptables support enabled. A kernel update will solve this issue. In the meantime, the docker package for POWER has iptables support disabled via a dedicated directive in /etc/sysconfig/docker.

As a result of this limitation, Docker containers do not have access to the outer network. A possible workaround is to share the same network namespace between the host and the containers. However, this reduces the isolation of the containers.

The network namespace of the host can be shared on a per-container basis by adding --net=host to the docker run command.

Note
Note: iptables Support on SUSE Linux Enterprise Server

SUSE Linux Enterprise Server hosts are not affected by this limitation, but they may have iptables support disabled. This can be changed by removing the --iptables=false setting in /etc/sysconfig/docker.

The second limitation is about network isolation between the containers and the host. Currently it is not possible to prevent containers from probing or accessing arbitrary ports of each other.

2.3 Updates

All updates to the docker package are marked as interactive (that is, no automatic updates) to avoid accidental updates breaking running container workloads. In general, we recommend stopping all running containers before applying an update to Docker Open Source Engine.

To avoid the potential for data loss, we do not recommend having workloads rely on containers being startable after an update to Docker Open Source Engine. Although it is technically possible to keep containers running during an update via the --live-restore option, experience has shown that such updates can introduce regressions. SUSE does not support this feature.

3 Storing Images

Prior to creating your own images, you should decide where you will store the images. The easiest solution is to push these images to the Docker Hub. By default, all images pushed to the Docker Hub are public. This is probably fine as long as this does not violate your company's policy and your images do not contain sensitive data or proprietary software.

If you need to restrict access to your Docker images, there are two options:

  • Get a subscription on Docker Hub that unlocks the feature to create private repositories.

  • Run an on-site Docker Registry to store all the Docker images used by your organization or company, and combine it with Portus to secure the registry.

This chapter describes how to set up an on-site Docker Registry and how to combine it with Portus.

3.1 What is Docker Registry?

The Docker Registry is an open-source project created by Docker Inc. It allows the storage and retrieval of Docker images. By running a local instance of the Docker Registry, it is possible to avoid using Docker Hub entirely.

Docker Registry is also used by Docker Hub. However, from the user perspective, Docker Hub consists of at least the following parts:

  • The user interface (UI): The part that users access with their browser. The UI provides an intuitive way to browse the contents of Docker Hub, either manually or by using a search feature. It also allows creating organizations made up of different users.

    This component is closed-source.

  • The authentication component: This is used to protect the images stored inside of Docker Hub. It validates all push, pull and search requests.

    This component is closed-source.

  • The storage back-end: This is where Docker images are sent and downloaded from. It is provided by Docker Registry.

    This component is open-source.

3.2 Installing and Setting Up Docker Registry

  1. Install the docker-distribution-registry package:

    tux > sudo zypper install docker-distribution-registry
  2. To automatically start the Docker Registry at boot time:

    tux > sudo systemctl enable registry
  3. Start the Docker Registry:

    tux > sudo systemctl start registry

The Docker Registry configuration is defined inside of /etc/registry/config.yml.

With the default configuration, the registry listens on port 5000 and stores the Docker images under /var/lib/docker-registry.
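Once the registry is running, images can be pushed to it by retagging them with the registry's address and port (the image names below are only examples):

```shell
# Retag a local image so its name points at the local registry, then push it.
docker tag registry.suse.com/suse/sle15 localhost:5000/suse/sle15
docker push localhost:5000/suse/sle15

# Pull it back to confirm the round trip works.
docker pull localhost:5000/suse/sle15
```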

Note
Note: Incompatible Versions of Docker Open Source Engine and Docker Registry

Docker Registry 2.3 is not compatible with Docker Open Source Engine versions older than 1.10, because v2 manifests were only introduced with Docker Open Source Engine 1.10. As Docker Open Source Engine and Docker Registry can be installed on different boxes, the versions might be incompatible. If you experience communication errors between Docker Open Source Engine and Docker Registry, update both to the latest versions.

For more details about Docker Registry and its configuration, see the official documentation at: https://docs.docker.com/registry/.

3.3 Limitations

The Docker Registry has two major limitations:

  • It lacks any form of authentication. That means everybody with access to the Docker Registry can push and pull images. This includes the possibility of overwriting existing images.

  • There is no way to see which images have been pushed to the Docker Registry. You need to manually keep track of what is stored in it. There is also no search functionality, which makes collaboration harder.
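Both limitations can be observed with plain HTTP requests against the registry's v2 API: no credentials are required, and the raw _catalog endpoint is the only built-in way to inspect the registry's contents (a sketch, assuming the default port 5000):

```shell
# List repositories in the registry - works for anyone with network access.
curl http://localhost:5000/v2/_catalog

# List the tags of one repository (the repository name is an example).
curl http://localhost:5000/v2/suse/sle15/tags/list
```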

3.4 Portus

Portus is an authentication service and user interface for the Docker Registry. It is an open source project created by SUSE to address the limitations of local Docker Registry instances. By combining Portus and Docker Registry, it is possible to have a secure and enterprise-ready on-premise version of Docker Hub.

Portus is available for SLES customers as a Docker image from SUSE Container Registry. For example, to pull the 2.4.0 tag, run the following command:

tux > docker pull registry.suse.com/sles12/portus:2.4.0

In addition to the official version of the Portus image from SUSE Container Registry, there is a community version on Docker Hub. However, if you are a SLES customer, we strongly suggest using the official Portus image instead. The Portus image for SLES customers contains the same code as the community one. Therefore, the setup instructions from http://port.us.org/docs/deploy.html apply to both images.

4 Creating Custom Images

To create a custom image, you need a base Docker image of SLES. You can use any of the pre-built SLES images that you can obtain as described in Section 4.1, “Obtaining Base SLES Images”.

After you obtain your base Docker image, you can modify the image by using a Dockerfile (usually placed in the build directory). Then use the standard docker build command to create your custom image:

tux > docker build PATH_TO_BUILD_DIRECTORY

For more information about docker build options, see the official Docker documentation.

Note
Note: Creating a Docker Image for an Application

For information about creating a Dockerfile for the application you want to run inside a Docker container, see Chapter 5, Creating Docker Images of Applications.

4.1 Obtaining Base SLES Images

To obtain the base SLES images from SUSE registry, use the following command:

tux > docker pull registry.suse.com/suse/IMAGENAME

For example, to get the one for SUSE Linux Enterprise Server 15, use:

tux > docker pull registry.suse.com/suse/sle15

Pre-built images do not have repositories configured. However, when the Docker host has a SLE subscription that provides access to the product used in the image, Zypper automatically has access to the right repositories.

You can customize the Docker image as described in Section 4.2, “Customizing SLES Docker Images”.

4.2 Customizing SLES Docker Images

The pre-built images do not have any repository configured and do not include any modules or extensions. They contain a zypper service that contacts either the SUSE® Customer Center (SCC) or your Repository Mirroring Tool (RMT) server, according to the configuration of the SLE host that runs the Docker container. The service obtains the list of repositories available for the product used by the Docker image. You can also directly declare extensions in your Dockerfile (for details, refer to Section 4.2.3, “Adding SLE Extensions and Modules to Images”).

You do not need to add any credentials to the Docker image because the machine credentials are automatically injected into the container by the docker daemon. They are injected inside of the /run/secrets directory. The same applies to the /etc/SUSEConnect file of the host system, which is automatically injected into the /run/secrets directory.

Note
Note: Credentials and Security

The contents of the /run/secrets directory are never committed to a Docker image, hence there is no risk of your credentials leaking.

To obtain the list of repositories, use the following command:

tux > sudo zypper ref -s

This automatically adds all the repositories to your container. For each repository added to the system, a new file is created under /etc/zypp/repos.d. The URLs of these repositories include an access token that automatically expires after 12 hours. To renew the token, run the zypper ref -s command. It is safe to commit these files to a Docker image.

If you want to use a different set of credentials, place a custom /etc/zypp/credentials.d/SCCcredentials file inside of the Docker image. It contains the machine credentials that have the subscription you want to use. The same applies to the SUSEConnect file: to override the file available on the host system that is running the Docker container, add a custom /etc/SUSEConnect file inside of the Docker image.

Now you can create a custom Docker image by using a Dockerfile as described in Section 4.2.1 and Section 4.2.2. In case you would like to move your application to a Docker container, refer to Chapter 5, Creating Docker Images of Applications. After you have edited the Dockerfile, build the image by running the following command in the same directory in which the Dockerfile resides:

tux > docker build .

4.2.1 Creating a Custom SLE 12 Image

The following Dockerfile creates a simple Docker image based on SLE 12 SP3:

FROM registry.suse.com/suse/sles12sp3

RUN zypper ref -s
RUN zypper -n in vim

When the Docker host machine is registered against an internal RMT server, the Docker image requires the SSL certificate used by RMT:

FROM registry.suse.com/suse/sles12sp3

# Import the crt file of our private SMT server
ADD http://smt.test.lan/smt.crt /etc/pki/trust/anchors/smt.crt
RUN update-ca-certificates

RUN zypper ref -s
RUN zypper -n in vim

4.2.2 Creating a Custom SLE 15 Image

The following Dockerfile creates a simple Docker image based on SLE 15:

FROM registry.suse.com/suse/sle15

RUN zypper ref -s
RUN zypper -n in vim

When the Docker host machine is registered against an internal RMT server, the Docker image requires the SSL certificate used by RMT:

FROM registry.suse.com/suse/sle15

# Import the crt file of our private SMT server
ADD http://smt.test.lan/smt.crt /etc/pki/trust/anchors/smt.crt
RUN update-ca-certificates

RUN zypper ref -s
RUN zypper -n in vim

4.2.3 Adding SLE Extensions and Modules to Images

You may have subscriptions to SLE extensions or modules that you would like to use in your custom image. To add them to the Docker image, proceed as follows:

Procedure 4.1: Adding Extension and Modules
  1. Add the following into your Dockerfile:

    ADD *.repo /etc/zypp/repos.d/
    ADD *.service /etc/zypp/services.d/
    RUN zypper refs && zypper refresh
  2. Copy all .service and .repo files that you will use into the directory where you will build the Docker image from the Dockerfile.
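The build directory for this procedure can be prepared as follows (a sketch; it assumes the Dockerfile with the ADD and RUN lines from step 1 is already in place):

```shell
# Collect the repository and service files next to the Dockerfile,
# then build the image from that directory.
sudo cp /etc/zypp/repos.d/*.repo .
sudo cp /etc/zypp/services.d/*.service .
docker build .
```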

5 Creating Docker Images of Applications

Docker Open Source Engine is a technology that can help minimize the resources used to run or build applications. Several types of applications are suitable for running inside a Docker container, such as daemons, Web servers, or applications that expose ports for communication. You can use Docker Open Source Engine to automate building and deployment processes: add the build process into a Docker image, build the image, and then run containers based on that image.

Running an application inside a Docker container has the following advantages:

  • You can minimize the runtime environment of the application, as you can add to the Docker image only the processes and applications that are required.

  • The image with your application is portable across machines, even with different Linux host systems.

  • You can share the image of your application by using a repository.

  • You can use different versions of required packages in the container than the host system uses, without running into dependency problems.

  • You can run several instances of the same application that are completely independent from each other.

Using Docker Open Source Engine to build applications has the following advantages:

  • You can prepare a complete building image.

  • Your build always runs in the same environment.

  • Developers can test their code in the same environment as used in production.

  • You can set up an automated building process.

The following section provides examples and tips on creating Docker images for applications. Prior to reading further, make sure that you have activated your SLES base Docker image as described in Section 4.1, “Obtaining Base SLES Images”.

5.1 Running an Application with Specific Package Versions

You may face the problem that your application uses a specific version of a package that is different from the package installed on the system that should run your application. You can modify your application to work with another version, or you can create a Docker image with that particular package version. The following example Dockerfile creates an image based on a current version of SLES, but with an older version of the example package:

FROM registry.suse.com/suse/sles12sp3
MAINTAINER Tux

RUN zypper ref && zypper --non-interactive in -f example-1.0.0-0
COPY application.rpm /tmp/

RUN zypper --non-interactive in /tmp/application.rpm

ENTRYPOINT ["/etc/bin/application"]

CMD ["-i"]

Build the image by running the following command in the directory that the Dockerfile resides in:

tux > docker build --tag tux_application:latest .

The Dockerfile example shown above performs the following operations during the docker build:

  1. Refreshes the SLES repositories.

  2. Installs the desired version of the example package.

  3. Copies the application package to the image. The source RPM must be placed in the build context.

  4. Installs the application from the RPM package.

  5. The last two steps run the application after a container is started.

After a successful build of the tux_application image, you can start a container based on the new image:

tux > docker run -it --name application_instance tux_application:latest

You have created a container that runs a single instance of the application. Bear in mind that after closing the application, the Docker container exits as well.

5.2 Running Applications with Specific Configuration

You may need to run an application that is delivered in a standard package accessible through SLES repositories, but with a different configuration or specific environment variables. If you would like to run several instances of the application with a non-standard configuration, you can create your own image that passes the custom configuration to the application.

An example with the example application follows:

FROM registry.suse.com/suse/sles12sp3
RUN zypper ref && zypper --non-interactive in example

ENV BACKUP=/backup

RUN mkdir -p $BACKUP
COPY configuration_example /etc/example/

ENTRYPOINT ["/etc/bin/example"]

The above example Dockerfile results in the following operations:

  1. Refreshes the repositories and installs the example application.

  2. Sets a BACKUP environment variable (the variable persists in containers started from the image). You can always override the value of the variable with a new one when running the container.

  3. Creates the directory /backup.

  4. Copies the configuration_example to the image.

  5. Runs the example application.

You can now build the image. After a successful build, you can run a container based on your image.
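For example, to run two independent instances with different backup locations, the environment variable can be overridden at run time (the tag name is only an example):

```shell
# Build the image from the Dockerfile above and tag it.
docker build --tag tux/example .

# The first instance uses the BACKUP value baked into the image (/backup).
docker run --detach --name example1 tux/example

# The second instance overrides BACKUP for this container only.
docker run --detach --name example2 --env BACKUP=/srv/backup tux/example
```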

5.3 Sharing Data Between an Application and the Host System

You may run an application that needs to share data between the application's container and the host file system. Docker Open Source Engine enables data sharing by using volumes. You can declare a mount point directly in the Dockerfile. However, you cannot specify a directory on the host system in the Dockerfile, as the directory may not be accessible at build time. You can find the mounted directory in the /var/lib/docker/volumes/ directory on the host system.

Note
Note: Discarding Changes to the Directory to Be Shared

After you declare a mount point by using the VOLUME instruction, all changes performed on the directory (by using the RUN instruction) are discarded. After the declaration, the volume is part of a temporary container that is removed after a successful build. Therefore, carry out changes such as altering permissions before you declare the directory as a mount point in the Dockerfile.

You can specify a particular mount point on the host system when running a container by using the -v option:

tux > docker run -it --name testing -v /home/tux/data:/data sles12sp3:latest /bin/bash
Note
Note

Using the -v option overrides the VOLUME instruction if you specify the same mount point in the container.

Now create an example image with a Web server that will read Web content from the host's file system. The Dockerfile could look as follows:

FROM registry.suse.com/suse/sles12sp3
RUN zypper ref && zypper --non-interactive in apache2
COPY apache2 /etc/sysconfig/
# Create the admin user and the /data directory before declaring the volume
RUN useradd -r admin && mkdir -p /data && chown -R admin /data
EXPOSE 80
VOLUME /data
ENTRYPOINT ["apache2ctl"]

The example above installs the Apache Web server into the image and copies the configuration to the image. The /data directory is created and owned by the admin user, and is used as a mount point to store Web pages.

5.4 Applications Running in the Background

Your application may need to run in the background as a daemon or as an application exposing ports for communication. In that case, the Docker container can be run in the background.

An example Dockerfile for an application exposing a port looks as follows:

Example 5.1: Building an Apache2 Web Server Docker Container (Dockerfile)
FROM registry.suse.com/suse/sle15 1
MAINTAINER tux 2

ADD etc/ /etc/zypp/ 3
RUN zypper refs && zypper refresh 4
RUN zypper --non-interactive in apache2 5

RUN echo "The Web server is running" > /srv/www/htdocs/test.html 6
# COPY data/* /srv/www/htdocs/ 7

EXPOSE 80 8

ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D", "FOREGROUND"]

1

Base image, taken from Section 4.1, “Obtaining Base SLES Images”.

2

Maintainer of the image (optional).

3

The repository and service files. These are copied to /etc/zypp/repos.d and /etc/zypp/services.d to make the repositories and services that are registered on the host available inside the Docker container, too.

4

Command to refresh repositories and services.

5

Command to install Apache2.

6

Test line for debugging purposes, can be removed if everything works as expected.

7

The copy instruction to copy your own data to the server's directory. The leading hash character (#) marks this line as a comment, so it is not executed.

8

The exposed port for the Apache Web server.

Note
Note: Check for Running Apache2 Instances on the Host

Make sure there are no Apache2 server instances running on the host. Otherwise, the Docker container will not serve any data. Remove or stop any Apache2 servers on your host.

To use the container, proceed as follows:

Procedure 5.1: Testing the Apache2 Web Server
  1. Prepare the host system for the build process:

    1. Make sure the host system is subscribed to the Server Applications Module of SUSE Linux Enterprise Server. To see installed modules or install additional modules, open YaST and select Add System Extensions or Modules.

    2. Make sure the SUSE Linux Enterprise images from the SUSE Registry are installed, as described in Section 4.1, “Obtaining Base SLES Images”.

    3. Save the Dockerfile from Example 5.1, “Building an Apache2 Web Server Docker Container (Dockerfile)” into the docker directory.

    4. Within the Docker container, you need access to software repositories and services that are registered on the host. To make them available, copy repositories and service files from the host to the docker/etc directory:

      tux > cd docker
      tux > mkdir etc
      tux > sudo cp -a /etc/zypp/{repos.d,services.d} etc/

      Instead of copying all repository and service files, you can also copy only the subset that is required by the Docker container.

    5. Add Web site data (such as HTML files) into the docker/data directory. The contents of this directory are copied to the Docker image and are thus published by the Web server.

  2. Build the container. Set a tag for your image with the -t option (here tux/apache2, but you can use any name you want):

    tux > sudo docker build -t tux/apache2 .

Docker Open Source Engine now executes the instructions provided in the Dockerfile: it takes the base image, copies the content, refreshes the repositories, installs Apache2, and so on.

  3. Create a Docker container instance from the Docker image created in the previous step:

    tux > sudo docker run --detach --interactive --tty --publish 80:80 tux/apache2

    Docker Open Source Engine returns the container ID, for example:

    7bd674eb196d330d50f8a3cfc2bc61a243a4a535390767250b11a7886134ab93
  4. Point a browser at http://localhost:80/test.html. You should see the message The Web server is running.

  5. To see an overview of running containers, use:

    tux > sudo docker ps --latest
    CONTAINER ID        IMAGE               COMMAND                  [...]
    7bd674eb196d        tux/apache2         "/usr/sbin/httpd -..."   [...]

    To stop and delete the Docker container, use the following command:

    tux > sudo docker rm --force 7bd674eb196d

The above procedure describes building an image containing the Apache2 Web server. You can use the resulting container to serve your data with the Apache2 Web server by following these steps:

Procedure 5.2: Creating a Docker Container with your Own Data
  1. In Dockerfile, remove the hash character (#) at the beginning of the line containing the COPY instruction, so that your data from the docker/data directory is copied to the image. Optionally, also remove the RUN echo test line.

  2. Rebuild the image as described in Step 2 of Procedure 5.1.

  3. Run the image in detached mode:

    tux > sudo docker run --detach --interactive --tty --publish 80:80 tux/apache2

    Docker Open Source Engine responds with the container ID, for example:

    e43fff4ae9832ecdb7677c058a73039d7610c32145a1d9b6ad0a4ed52b5c4dc7

To view the published data, point a browser at http://localhost:80/test.html.

To avoid copying Web site data into the Docker container, share a directory of the host with the container. For information, see https://docs.docker.com/storage/volumes/.
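As a sketch of that approach (the host path /srv/www/my-site is a placeholder), the image built above could be started with a bind mount instead of baking the data into the image:

```shell
# Serve Web pages from a host directory; changes made on the
# host become visible in the container immediately
sudo docker run --detach --publish 80:80 \
  --volume /srv/www/my-site:/srv/www/htdocs tux/apache2
```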

6 Working with Containers

After you have created your images, you can start containers based on them. You run an instance of an image with the docker run command. Docker Open Source Engine then creates and starts the container. The docker run command takes several options, for example:

  • A container name (naming your container is recommended).

  • A user to run as inside the container.

  • A mount point.

  • A particular host name, and more.
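The options listed above can be combined in a single invocation, for example (the names and paths below are illustrative):

```shell
# Run a named container as a given user, with a bind mount
# and a custom host name
sudo docker run --interactive --tty \
  --name my_sles \
  --user tux \
  --volume /srv/data:/data \
  --hostname sles-container \
  registry.suse.com/suse/sle15 /bin/bash
```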

The container typically exits if its main process finishes. For example, if your container starts a particular application, as soon as you quit the application, the container exits. You can start the container again by running:

tux > docker start -ai <container name>

You may need to remove unused containers. You can achieve this with:

tux > docker rm <container name>

6.1 Linking Containers

Docker Open Source Engine enables you to link containers together which allows for communication between containers on the same host server. If you use the standard networking model, you can link containers by using the --link option when running containers:

First, create a container to link to:

tux > sudo docker run -d -it --name sles sles12sp3 /bin/bash

Then create a container that will link to the sles container:

tux > sudo docker run -it --link sles:sles sles12sp3 /bin/bash

The container that links to sles has environment variables and an /etc/hosts entry defined that enable it to connect to the linked container.
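One way to inspect those variables is to print the environment of the linking container, for example (a sketch; the SLES_* variables appear because sles is the link alias, and port variables are only set when the linked container exposes ports):

```shell
# Show the link-related environment variables, such as SLES_NAME
# (and SLES_PORT_* if the linked container exposes ports)
sudo docker run --rm --link sles:sles sles12sp3 env | grep SLES
```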

A Documentation Updates

This chapter lists content changes and updates for this document.

A.1 SUSE Linux Enterprise Server 15 SP0

A.1.2 September 2018

A.1.3 August 2018

Bugfixes