Hackweek project: Docker registry mirror

Tuesday, 14 August, 2018

As part of SUSE Hackweek 17 I decided to work on a fully fledged docker registry mirror.

You might wonder why this is needed; after all, it’s already possible to run a docker distribution (aka registry) instance as a pull-through cache. While that’s true, this solution doesn’t address the needs of more “sophisticated” users.

The problem

Based on the feedback we got from a lot of SUSE customers it’s clear that a simple registry configured to act as a pull-through cache isn’t enough.

Let’s go step by step to understand the requirements we have.

On-premise cache of container images

First of all it should be possible to have a mirror of certain container images locally. This is useful to save time and bandwidth. For example, there’s no reason to download the same image over and over on each node of a Kubernetes cluster.

A docker registry configured to act as a pull-through cache can help with that. There’s still a need to warm the cache; this can be left to the organic pull of images done by the cluster, or it can be done artificially by a script run by an operator.

Unfortunately a pull-through cache is not going to solve this problem for nodes running inside an air-gapped environment. Nodes operated in such an environment are located in a completely segregated network, which makes it impossible for the pull-through registry to reach the external registry.

Retain control over the contents of the mirror

Cluster operators want to have control of the images available inside of the local mirror.

For example, assuming we are mirroring the Docker Hub, an operator might be fine with having the library/mariadb image but not the library/redis one.

When operating a registry configured as a pull-through cache, all the images of the upstream registry are within reach of all the users of the cluster. It’s just a matter of doing a simple docker pull to get an image cached into the local pull-through cache and sneak it into all the nodes.

Moreover some operators want to grant the privilege of adding images to the local mirror only to trusted users.

There’s life outside of the Docker Hub

The Docker Hub is certainly the best-known container registry. However, other registries are in use as well: SUSE operates its own registry, there’s Quay.io, the Google Container Registry (aka gcr), and there are even user-operated ones.

A docker registry configured to act as a pull-through cache can mirror only one registry. This means that if you are interested in mirroring both the Docker Hub and Quay.io, you have to run two instances of docker registry pull-through caches: one for the Docker Hub, the other for Quay.io.

This is just overhead for the final operator.

A better solution

During the last week I worked on a PoC to demonstrate that we can create a docker registry mirror solution satisfying all the requirements above.

I wanted to have a single box running the entire solution, and I wanted all its different pieces to be containerized. I hence resorted to using a node powered by openSUSE Kubic.

I didn’t need all the different pieces of Kubernetes; I just needed the kubelet, so that I could run it in disconnected mode. Disconnected means the kubelet process is not connected to a Kubernetes API server; instead it reads pod manifest files straight from a local directory.
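
A rough sketch of what this looks like; the flag is the standard kubelet one, while the paths are illustrative rather than the ones from my setup:

$ kubelet --pod-manifest-path=/etc/kubernetes/manifests

Any pod manifest dropped into that directory is picked up and started by the kubelet, no API server involved:

$ cp registry-pod.yaml /etc/kubernetes/manifests/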

The all-in-one box

I created an openSUSE Kubic node and started by deploying a standard docker registry. This instance is not configured to act as a pull-through cache; however, it is configured to use an external authorization service. This is needed to give the operator full control over who can push/pull/delete images.

I configured the registry POD to store the registry data in a directory on the machine by using a Kubernetes hostPath volume.
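
A minimal sketch of such a static pod manifest; the image, names and paths are illustrative, not my actual files:

apiVersion: v1
kind: Pod
metadata:
  name: registry
spec:
  containers:
  - name: registry
    image: registry:2
    volumeMounts:
    - name: registry-data
      mountPath: /var/lib/registry    # where the registry writes its data
  volumes:
  - name: registry-data
    hostPath:
      path: /srv/registry-data        # directory on the container host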

On the very same node I deployed the authorization service needed by the docker registry. I chose Portus, an open source solution created at SUSE a long time ago.

Portus needs a database, hence I deployed a containerized instance of MariaDB on the same node. Again I used a Kubernetes hostPath volume to ensure the persistence of the database contents. I placed both Portus and its MariaDB instance into the same POD and configured MariaDB to listen only on localhost, making it reachable only by the Portus instance (that works because they are in the same Kubernetes POD).
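
Since containers inside the same POD share a network namespace, binding MariaDB to the loopback interface is all it takes to hide it from the rest of the network. A minimal sketch of the relevant setting (the file path is illustrative):

# /etc/my.cnf.d/bind-local.cnf
[mysqld]
bind-address = 127.0.0.1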

I configured both the registry and Portus to bind to local unix sockets, then I deployed a container running HAProxy to expose both of them to the world.

HAProxy is the only container that uses the host network, meaning it actually listens on ports 80 and 443 of the openSUSE Kubic node.

I went ahead and created two new DNS entries inside of my local network:

  • registry.kube.lan: this is the FQDN of the registry

  • portus.kube.lan: this is the FQDN of portus

I configured both names to resolve to the IP address of my container host.

I then used cfssl to generate a CA and then a pair of certificates and keys for registry.kube.lan and portus.kube.lan.
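
For reference, the cfssl workflow looks roughly like this, assuming the JSON CSR files have been prepared beforehand (file names are illustrative):

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem registry-csr.json | cfssljson -bare registry.kube.lan
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem portus-csr.json | cfssljson -bare portus.kube.lan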

Finally I configured HAProxy to:

  • Listen on port 80 and 443.

  • Automatically redirect traffic from port 80 to port 443.

  • Perform TLS termination for registry and Portus.

  • Load balance requests against the right unix socket using the Server Name Indication (SNI).

By having dedicated FQDNs for the registry and Portus and by using HAProxy’s SNI-based load balancing, we can leave the registry listening on the standard port (443) instead of a different one (eg: 5000). In my opinion that’s a big win: in my personal experience, having the registry listen on a non-standard port makes things more confusing both for operators and end users.
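
A condensed sketch of the relevant haproxy.cfg sections; the socket paths and certificate locations are illustrative, and the usual global/defaults boilerplate is omitted:

frontend http
    bind :80
    # redirect all plain HTTP traffic to HTTPS
    redirect scheme https code 301

frontend https
    # TLS termination; HAProxy picks the right certificate based on SNI
    bind :443 ssl crt /etc/haproxy/certs/registry.kube.lan.pem crt /etc/haproxy/certs/portus.kube.lan.pem
    use_backend registry if { ssl_fc_sni registry.kube.lan }
    use_backend portus if { ssl_fc_sni portus.kube.lan }

backend registry
    server registry unix@/var/run/registry/registry.sock

backend portus
    server portus unix@/var/run/portus/portus.sock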

Once I was done with these steps I was able to log into https://portus.kube.lan and go through the usual Portus setup wizard.

Mirroring images

We now have to mirror images from multiple registries into the local one, but how can we do that?

Some time ago I stumbled upon this tool, which can be used to copy images from multiple registries into a single one. While doing that it can change the namespace of each image, putting all the images coming from a certain registry into a specific namespace.

I wanted to use this tool, but I realized it relies on the docker open source engine to perform the pull and push operations. That’s a blocking issue for me because I wanted to run the mirroring tool inside a container without resorting to nasty tricks like mounting the docker socket of the host into the container.

Basically I wanted the mirroring tool to not rely on the docker open source engine.

At SUSE we are already using and contributing to skopeo, an amazing tool that allows interactions with container images and container registries without requiring any docker daemon.

The solution was clear: extend skopeo to provide mirroring capabilities.

I drafted a design proposal with my colleague Marco Vedovati, started coding and then ended up with this pull request.

While working on that I also uncovered a small glitch inside of the containers/image library used by skopeo.

Using a patched skopeo binary (which includes both the patches above) I then mirrored a bunch of images into my local registry:

$ skopeo sync --source docker://docker.io/busybox:musl --dest-creds="flavio:password" docker://registry.kube.lan

$ skopeo sync --source docker://quay.io/coreos/etcd --dest-creds="flavio:password" docker://registry.kube.lan

The first command mirrored only the busybox:musl container image from the Docker Hub to my local registry, while the second command mirrored all the coreos/etcd images from the quay.io registry to my local registry.

Since the local registry is protected by Portus I had to specify my credentials while performing the sync operation.

Running multiple sync commands is not really practical; that’s why we added a source-file flag, which allows an operator to write a configuration file listing all the images to mirror. More on that in a dedicated blog post.
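
Purely as an illustration, such a configuration file could look like the sketch below. This layout is hypothetical, not the final format; the actual one is defined in the pull request:

# hypothetical sketch, not the final format
docker.io:
  images:
    busybox:
      - musl            # sync only this tag
quay.io:
  images:
    coreos/etcd: []     # empty list: sync all tags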

At this point my local registry had the following images:

  • docker.io/busybox:musl

  • quay.io/coreos/etcd:v3.1

  • quay.io/coreos/etcd:latest

  • quay.io/coreos/etcd:v3.3

  • … more quay.io/coreos/etcd images …

As you can see, the namespace of the mirrored images is changed to include the FQDN of the registry from which they have been downloaded. This avoids clashes between images and makes it easier to track their origin.

Mirroring on air-gapped environments

As I mentioned above, I wanted to provide a solution that can also be used to run mirrors inside air-gapped environments.

The only tricky part for such a scenario is how to get the images from the upstream registries into the local one.

This can be done in two steps by using the skopeo sync command.

We start by downloading the images on a machine that is connected to the internet, but instead of pushing the images to a local registry we store them in a local directory:

$ skopeo sync --source docker://quay.io/coreos/etcd dir:/media/usb-disk/mirrored-images

This is going to copy all the versions of the quay.io/coreos/etcd image into a local directory /media/usb-disk/mirrored-images.

Let’s assume /media/usb-disk is the mount point of an external USB drive. We can then unmount the USB drive, scan its contents with some tool, and plug it into a computer on the air-gapped network. From this computer we can populate the local registry mirror with the following command:

$ skopeo sync --source dir:/media/usb-disk/mirrored-images --dest-creds="username:password" docker://registry.secure.lan

This will automatically import all the images that have been previously downloaded to the external USB drive.

Pulling the images

Now that we have all our images mirrored it’s time to start consuming them.

It might be tempting to just update all our Dockerfiles, Kubernetes manifests, Helm charts, automation scripts, … to reference the images as registry.kube.lan/&lt;upstream registry FQDN&gt;/&lt;image&gt;:&lt;tag&gt;. This however would be tedious and impractical.

As you might know, the docker open source engine has a --registry-mirror option. Unfortunately the docker open source engine can only be configured to mirror the Docker Hub; other external registries are not handled.
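
For comparison, this is all a vanilla /etc/docker/daemon.json can express today; the mirror URL is just a placeholder, and the setting applies exclusively to images coming from the Docker Hub:

{
  "registry-mirrors": ["https://registry.kube.lan"]
}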

This annoying limitation led me and Valentin Rothberg to create this pull request against the Moby project.

Valentin is also porting the patch to libpod, which will bring the same feature to CRI-O and podman.

During my experiments I figured out that some little bits were missing from the original PR.

I built a docker engine with the full patch applied and I created this /etc/docker/daemon.json configuration file:

{
  "registries": [
    {
      "Prefix": "quay.io",
      "Mirrors": [
        {
          "URL": "https://registry.kube.lan/quay.io"
        }
      ]
    },
    {
      "Prefix": "docker.io",
      "Mirrors": [
        {
          "URL": "https://registry.kube.lan/docker.io"
        }
      ]
    }
  ]
}

Then, on this node, I was able to issue commands like:

$ docker pull quay.io/coreos/etcd:v3.1

That resulted in the image being downloaded from registry.kube.lan/quay.io/coreos/etcd:v3.1; no communication happened with quay.io. Success!

What about unpatched docker engines/other container engines?

Everything works fine on nodes running this not-yet-merged patch, but what about vanilla versions of docker or other container engines?

I think I have a solution for them as well. I’m going to experiment a bit with that during the next week and then provide an update.

Show me the code!

This is a really long blog post already. I’ll write a new one with all the configuration files and instructions for the steps I performed. Stay tuned!

In the meantime I would like to thank Marco Vedovati and Valentin Rothberg for their help with skopeo and the docker mirroring patch, plus Miquel Sabaté Solà for his help with Portus.

 

Cloud Foundry Buildpacks or Dockerfiles

Saturday, 24 March, 2018

I often hear questions about what these buildpacks are that Cloud Foundry uses, and then, frequently, why use a buildpack when Dockerfiles are commonplace?

The most common answer alludes to a topic usually phrased as a ‘separation of concerns’. Unfortunately, these discussions tend toward an odd combination of the academic and the ethereal when trying to explain the concept. In other words, folks don’t really digest the simple point that buildpacks facilitate this separation.

The separation of concerns is real. And it has value. For fun, let’s look at it from an enterprise workflow perspective. We can examine this from both an application development viewpoint as well as that for application deployment.

 

First, the Dockerfile approach. With Dockerfiles, a developer can sit at their desk and define everything the container needs. There is great power in this, but great responsibility too. The following diagram depicts the stack the developer controls.

 

The important point here is that the developer takes on the whole responsibility of defining the full container stack. Note that this approach becomes markedly more complicated if you are interested in maintaining consistency across an enterprise.
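
A trivial sketch makes the point; every line below, from the base OS to the runtime to the application itself, is chosen and maintained by the developer (image and package names are illustrative):

# The developer owns the entire stack, top to bottom:
FROM ubuntu:16.04                                        # base OS layer
RUN apt-get update && apt-get install -y openjdk-8-jre   # language runtime
COPY app.jar /app/app.jar                                # the application
CMD ["java", "-jar", "/app/app.jar"]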

 

If we look at the buildpack approach, it enhances developer efficiency by allowing developers to focus on the application alone.

With buildpacks, you separate the process into specific layers that can be controlled by different roles. A snippet from the Cloud Foundry docs (https://docs.cloudfoundry.org/buildpacks/):

“Buildpacks provide framework and runtime support for apps. Buildpacks typically examine your apps to determine what dependencies to download and how to configure the apps to communicate with bound services.”
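
In practice, the developer-facing side of that contract collapses to something like this (the application name is illustrative):

$ cd my-app/
$ cf push my-app    # the buildpack detects the runtime and assembles the rest of the stack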


Developers I talk to like the idea that with a buildpack there is the concept of App :: Buildpack :: System, and that each level supports one of three distinct roles with separate concerns. The roles:

  1. End application developer (App)
  2. Cloud Foundry admin (various admin controls which includes control over buildpacks)
  3. Cluster manager (System, stemcell and stack)

 

With this in mind, let’s look at what happens when something within the container needs to be updated, maybe due to a security update to a language library.


For the Dockerfile approach, the developer needs to get involved. The container image needs to be rebuilt, and all of the assets that were used to build the image need to be re-used with updates applied. And if the updates affect multiple container images across a number of developers, then all of these developers need to update their images and have them redeployed.

 

With buildpacks, the workflow is very different.

 

The operator or CF admin in this case can apply the ‘lib’ updates to the build process in the platform. This allows the platform to rebuild and redeploy all of the affected containers from this single update, and it happens without having to sidetrack development. This ability to let development handle the application code while separating out the activities that operations can handle is at the root of the term ‘separation of concerns’.

 

It is also worth noting that if the example were an OS image that needed an update, the same process would apply, yielding the same efficiencies.

For a world that is focused on cloud native applications, which promise the ability to isolate changes in the name of efficiency, the buildpack approach with its concept of separation of concerns fits right into today’s modern application development model.

Ultimately the buildpack approach provides some great benefits. It facilitates consistent best practice implementations across the enterprise as well as offering solid time savings for your developers.

Docker and Dads

Sunday, 19 June, 2016

I’m one of those lucky guys who woke up this morning with a 5-year-old’s arms around my neck and cries of “Happy Father’s Day!” Even the teenagers smiled, didn’t protest too much as they brought me breakfast (eggs for protein and fruit to keep me healthy). And they all tried to understand what on earth would possess me to rush my breakfast and run out the door to catch a flight for Seattle…  Well, sure – it’s the job. But as I walked through the airport, I realized that I didn’t feel as grumpy as a lot of the other dads traveling on Father’s Day actually looked. Maybe they didn’t get breakfast? Maybe they didn’t get hugs? Or maybe they aren’t headed to DockerCon 2016?

It may sound strange, but I’m really looking forward to spending the afternoon getting the SUSE booth ready at DockerCon 2016 (#G27 – come see us!), along with hundreds of other Dads who are excited for the next couple of days with all the cool technology and revolutionary ideas we will experience there.

Their web site says: “DockerCon is the community and industry event for makers and operators of next generation distributed apps built with containers. The two-and-a-half-day conference provides talks by practitioners, hands-on labs, an expo of Docker ecosystem innovators and great opportunities to share experiences with your peers.”

They’re modest. This tech is so cool that they actually had to turn away hundreds of people from the show this year. They sold out the show weeks in advance and have so many people on the wait list that they had to go back to Sponsors to ask for unused tickets back.

SUSE is one of the must-see stops on the show floor this year. As usual, we’ll have fun, games and prizes galore at the booth, handing out prizes to people wearing the green hat – get yours at Booth #G27! And we’ll be showing off some very cool tech of our own as well. We’ll be talking about full Docker support in SUSE Linux Enterprise, helping you collaborate securely to create Docker apps, and integrating container applications in private and public clouds (think OpenStack!!). The whole idea of containers is to make IT life simpler and easier – you deserve an enterprise foundation for your infrastructure so that you don’t have to spend a lot of time supporting your container environment. We can also tell you more about Portus – the open source project SUSE started to provide an authorization service and frontend for the Docker registry.

And be sure to mark your schedule for Tuesday, from 4:20-4:40 (Ecosystem B track) to hear Michal Svec talk about “Bimodal IT” bridging the gap between traditional and agile IT services. He came all the way from the Czech Republic to tell us about it!

I’m looking forward to an awesome event this week, and to sharing this experience with 4,000 other technology fans. And to the select few (well, hundreds of) Dads who gave up some adoration/adulation time to come in and get the show ready for this week’s action – I salute you!

Creating Linux Test Environments using Docker

Tuesday, 1 March, 2016

Authors: Arun Ramanathan & Pavankumar Mudalagi

Assume you need to deploy 10 SLES 11 SP3 machines. The run-of-the-mill way is to deploy 10 Linux servers on an ESX host, VirtualBox or a XEN server. The drawback is that we need to clone and install these Linux machines at least 9 times, which is time consuming. Do we have an alternative?

Yes we do!

First, install your base Linux machine. I installed openSUSE 13.2 (Harlequin) (x86_64).

Deploying SLES 11 SP3 Container with Docker

Step 1: Download SLES 11 SP3 images.

  • Sign up at https://hub.docker.com/
  • Search for SLES 11 SP3
  • Select your image; we selected gbeutner/sles-11-sp3-x86_64 since it had a large number of pulls.

  • The above image can be pulled from Docker’s central index at http://index.docker.io using the command:
  • # docker pull gbeutner/sles-11-sp3-x86_64
  • Verify the image is downloaded.
    # docker images
    
    REPOSITORY                   TAG	IMAGE ID       CREATED        VIRTUAL SIZE
    gbeutner/sles-11-sp3-x86_64  latest	22de129c1cdd   10 months ago  193.1 MB

Step 2: Run this image with IMAGE ID "22de129c1cdd":

# docker run -i -t 22de129c1cdd /bin/bash
ebf803aaabe9:/ # ----> a new bash prompt opens inside the container

Step 3: Install and configure all required software in this image. In my case I needed the Go programming language:

ebf803aaabe9:/ # mkdir -p /home/Golang ; cd /home/Golang
ebf803aaabe9:/home/Golang # scp fa.ke.ad.dr:/home/Golang/go1.4.2.linux-amd64.tar.gz /home/Golang/
ebf803aaabe9:/home/Golang # tar -xvzf go1.4.2.linux-amd64.tar.gz

Step 4: In a new terminal on the openSUSE 13.2 host, commit the container.

Get Container ID.

# docker ps

CONTAINER ID   IMAGE                 COMMAND       CREATED          STATUS         PORTS     NAMES
ebf803aaabe9   22de129c1cdd:latest   "/bin/bash"   9 minutes ago    Up 9 minutes             reverent_blackwell

Commit the image using Container ID.

# docker commit ebf803aaabe9
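
As shown below, committing without extra arguments produces an image with no repository or tag (&lt;none&gt;). Optionally, a name and tag can be passed to docker commit so the image is easier to find later; the name here is illustrative:

# docker commit ebf803aaabe9 sles11sp3-golang:v1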

Step 5: Clone and Run the images.

First, get the list of images

# docker images
REPOSITORY			TAG		IMAGE ID	CREATED		VIRTUAL SIZE
<none>				<none>		9d36165e91df	2 minutes ago	485 MB
gbeutner/sles-11-sp3-x86_64	latest		22de129c1cdd	10 months ago	193.1 MB

In 10 new SSH terminals, run the latest image with IMAGE ID "9d36165e91df" to get 10 unique SLES 11 machines.

# docker run -i -t 9d36165e91df /bin/bash

To see the list of running containers:

# docker ps
CONTAINER ID   IMAGE         	     COMMAND       CREATED          STATUS          PORTS   NAMES
62382eed12af   9d36165e91df:latest   "/bin/bash"   14 seconds ago   Up 14 seconds           adoring_mayer

To close a running container, say the one with Container ID "62382eed12af":

62382eed12af:/ # exit
exit

Conclusion: If you want a quicker way to deploy Linux / Windows machines, it’s time to move to Docker. You can save many man hours with it!

References:
https://docs.docker.com/reference/commandline/cli/
https://www.suse.com/documentation/sles-12/singlehtml/dockerquick/dockerquick.html

Docker mini-course (in case you missed this), and more

Wednesday, 22 July, 2015

Are you still looking for Docker training videos? Come and take a look at our popular free mini-course: Docker in SUSE Linux Enterprise Server 12. Presented by Flavio Castelli, senior engineer at SUSE, this mini-course shows you how to use Docker, fully supported in SUSE Linux Enterprise Server 12, step by step. Be sure to register an account on the web page to get future updates.

For more information, you may download the white paper here or refer to documentation here.

 

SUSE Elevates Docker in SUSE Linux Enterprise Server 12

Monday, 22 June, 2015

SUSE® today announced significant enhancements to its container toolset, further embracing Docker as an integral component of SUSE Linux Enterprise Server. SUSE now fully supports Docker in production environments and has added an option for customers to build a private on-premise registry to host container images in a controlled and secure environment. These enhancements further strengthen Docker as an application deployment tool, helping customers significantly improve operational efficiency.

“The advent of virtualization reduced the time needed to bring up a server from hours to minutes; containers and Docker have reduced that time to seconds,” said Ralf Flaxa, SUSE vice president of engineering. “SUSE has long participated in the evolution of lightweight and efficient container deployment technology, with the inclusion of Docker in SUSE Linux Enterprise Server 12 last year and our support for Linux containers in SUSE Linux Enterprise Server 11 for several years before that. This is another example of SUSE’s commitment to providing innovative technologies for enterprise customers.”

Fully supported as part of SUSE Linux Enterprise Server 12, enterprise-ready Docker from SUSE improves operational efficiency and is accompanied by easy-to-use tools to build, deploy and manage containers.

  • SUSE provides pre-built images from a verified and trusted source. In addition, customers can create an on-premise registry behind the enterprise firewall, minimizing exposure to malicious attacks and providing better control of intellectual property. Portus, an open source front-end and authorization tool for an on-premise Docker registry, enhances security and user productivity.
  • As integral parts of SUSE Linux Enterprise Server, Docker and containers provide additional virtualization options to improve operational efficiency. SUSE Linux Enterprise Server includes the Xen and KVM hypervisors and is a perfect guest in virtual and cloud environments. With the addition of Docker, customers can build, ship and run containerized applications on SUSE Linux Enterprise Server in physical, virtual or cloud environments.
  • The efficient YaST management framework provides a simple overview of the available Docker images and allows customers to run and easily control Docker containers. In addition, the KIWI image-building tool has been extended to support the Docker build format.

Thomas Brottrager, head of IT at manufacturing company STIA Holzindustrie GmbH, said, “Using the Docker tool included in SUSE Linux Enterprise Server has enabled us to reduce the number of virtual machines we need to manage in our development environment. As a result, we have seen significant savings in system administration.”

SUSE’s current Docker offering supports x86-64 servers with support for other hardware platforms in the works. Integration with SUSE Manager for lifecycle management is also planned. For more information about Docker in SUSE Linux Enterprise Server, including a series of Docker mini-course videos, visit www.suse.com/promo/sle/docker/mini-course.html and www.suse.com/promo/sle.

About SUSE
SUSE, a pioneer in open source software, provides reliable, interoperable Linux, cloud infrastructure and storage solutions that give enterprises greater control and flexibility. More than 20 years of engineering excellence, exceptional service and an unrivaled partner ecosystem power the products and support that help our customers manage complexity, reduce cost, and confidently deliver mission-critical services. The lasting relationships we build allow us to adapt and deliver the smarter innovation they need to succeed – today and tomorrow. For more information, visit www.suse.com.

Copyright 2015 SUSE LLC. All rights reserved. SUSE and the SUSE logo are registered trademarks of SUSE LLC in the United States and other countries. All third-party trademarks are the property of their respective owners.


Customer success story: Docker with SUSE

Tuesday, 26 May, 2015

SUSE Linux Enterprise Server 12 includes many new tools and features to improve operational efficiency for enterprises. Docker is one of them. Docker offers benefits such as an efficient development cycle and a lightweight containerization model. As a new technology, Docker is evolving, but even today it’s producing concrete results. Check out how STIA, a global wooden flooring and panels manufacturer, sees the benefits of Docker. Find more details here.

And stay tuned. SUSE will announce its official support for Docker very soon.

A Visual Way to Play with Docker

Tuesday, 7 April, 2015

You may know that Docker is a lightweight virtualization solution for running multiple virtual units (containers) simultaneously on a single control host. Containers are isolated with kernel control groups (cgroups) and kernel namespaces.

We have had Docker as an integral part of SUSE Linux Enterprise Server for some time already, so let me share some tips on how to start using it.

The base Docker packages are included right on the SUSE Linux Enterprise Server media, so you can use the regular package management tools to install them and then start using the docker tool right away. There are more details in the Docker Quick Start manual, which you can find here.
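
Assuming a standard SUSE Linux Enterprise Server 12 setup, the installation boils down to something like this (run as root):

# zypper install docker
# systemctl enable docker
# systemctl start docker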

I want to point out something new that we have recently made available in the SUSE Linux Enterprise Server update channels: a Docker YaST module. You can easily find it in the YaST Control Center:

[Screenshot from 2015-03-25 17:16:11]

The Docker YaST module’s purpose is to give a simple overview of the available Docker images and running Docker containers, and to allow easy manipulation of the running containers. This is what it looks like:

[Screenshot from 2015-03-25 17:16:40]

You can spawn a container out of an image, optionally setting up volume mappings for the storage the container exposes, or mappings of network ports.

[Screenshot from 2015-03-25 17:17:41]

Once you are done, you will see the container running:

[Screenshot from 2015-03-25 17:18:04]

In case you would like to debug your container or make changes, you can easily inject a terminal using the YaST module. This means that YaST spawns a selected shell (normally bash) and attaches it to the running container. You can then work inside the container as if you were connected to a normal system. When connected, you can easily check for differences in the contents.

That outlines the changes between the original image from which the container started and the currently running container. If and when you are okay with the changes, you can commit them back to the image, creating a new version that includes the changes you have made.

[Screenshot from 2015-03-25 17:18:35]

All this gives you an easy start with Docker and a comfortable overview of the available images and running containers. You can manipulate the inner contents of the containers, record changes back to the images, and so on. For complex tasks you can use the docker tool; the YaST module is meant to simplify onboarding and let you get to know Docker.

The capabilities described here are what we have released as the initial version, made available as an update to SUSE Linux Enterprise Server 12 (just run zypper patch). Stay tuned as we release further updates to the tool itself and add other components related to containers and Docker. And don’t hesitate to let us know about your experience!

Welcome Docker to SUSE Linux Enterprise Server

Monday, 16 June, 2014

Lightweight virtualization is a hot topic these days. Also called “operating system-level virtualization,” it allows you to run multiple applications or systems on one host without a hypervisor. The advantages are obvious: the hypervisor, the layer between the host hardware and the operating system and its applications, is eliminated, allowing a much more efficient use of resources. That, in turn, reduces the virtualization overhead while still allowing for separation and isolation of multiple tasks on one host. As a result, lightweight virtualization is very appealing in environments where resource use is critical, like the server hosting or outsourcing business.

One specific example of operating system-level virtualization is Linux Containers, also sometimes called “LXC” for short. We already introduced Linux Containers to SUSE customers and users in February 2012 as a part of SUSE Linux Enterprise Server 11 SP2. Linux Containers employ techniques like control groups (cgroups), which perform resource isolation for CPU, memory, network and block I/O, and namespaces, which isolate the process view of the operating system, including users, processes and file systems. That provides advantages similar to those of “regular” virtualization technologies, such as KVM or Xen, but with much smaller I/O overhead, storage savings and the ability to apply dynamic parameter changes without the need to reboot the system. The Linux Containers infrastructure is supported in SUSE Linux Enterprise 11 and will remain supported in SUSE Linux Enterprise 12.

Now we are taking the next step to further enhance our virtualization strategy and introduce you to Docker. Docker is built on top of Linux Containers with the aim of providing an easy way to deploy and manage applications. It packages the application including its dependencies in a container, which then runs like a virtual machine. Such packaging allows for application portability between various hosts, not only across one data center, but also to the cloud. And starting with SUSE Linux Enterprise Server 12 we plan to make Docker available to our customers so they can start using it to build and run their containers.

This is another step in enhancing the SUSE virtualization story, building on top of what we have already done with Linux Containers. Leveraging the SUSE ecosystem, Docker and Linux Containers are not only a great way to build, deploy and manage applications; the idea nicely plugs into tools like the Open Build Service and Kiwi for easy and powerful image building, or SUSE Studio, which already offers a similar concept for virtual machines. Docker easily supports rapid prototyping and a fast deployment process; thus, when combined with the Open Build Service, it is a great tool for developers aiming to support various platforms with a unified tool chain.

This is critical for the future, because those platforms easily extend to clouds: public, private and hybrid. Combining Linux Containers, Docker, SUSE’s development and deployment infrastructures and SUSE Cloud, our OpenStack-based cloud infrastructure offering, brings flexibility in application deployment to a completely new level.

Introducing Docker follows the SUSE philosophy by offering choice in the virtualization space, allowing for flexibility, performance and simplicity for Linux in data centers and the cloud.

SUSE News Wrap-up from the Cloud Foundry Summit EU

Wednesday, 17 October, 2018

Cloud Foundry Summit SunsetWhen I wrote two weeks ago that SUSE will be busy at the Cloud Foundry Summit EU, I didn’t fully grasp exactly how busy we’d actually be. The Cloud Foundry Summit just seems to get bigger and better every year. With that comes more scheduled and impromptu meetings, discussions, and opportunities to learn and share. This was my first Summit in a couple of years, and it was a little overwhelming trying to keep up with everything that was happening as well as meeting with as many analysts and members of the press as possible to help spread our news around Cloud Foundry and Kubernetes.

The big news at the Summit, from SUSE’s point of view, was around the CF Containerization and Eirini projects, both of which SUSE is contributing to. Both projects are now part of the Cloud Foundry Foundation and they are key to integrating Cloud Foundry with Kubernetes.

CF Containerization was contributed by SUSE. Its roots go back several years when it was known as Fissile, and it has contributors other than SUSE, including IBM and SAP. In a nutshell, CF Containerization takes Cloud Foundry BOSH releases and converts them into Docker containers and corresponding Helm charts, ready to be installed into an existing Kubernetes. It’s what we use to build SUSE Cloud Application Platform today, and results in a smaller installation that requires no knowledge or installation of BOSH, instead leveraging an organization’s Kubernetes infrastructure and expertise.

Eirini was contributed by IBM and receives contributions from SUSE and SAP. Eirini’s goal is to offer a Cloud Foundry operator the choice of using native Kubernetes for container scheduling instead of Cloud Foundry’s Diego. This makes a ton of sense for SUSE Cloud Application Platform — because it already runs inside Kubernetes, adopting Eirini would remove what we believe is an unnecessary layer of complexity (using Diego in our product essentially means that we are running application containers inside of Diego containers). That’s why we announced at the Summit that we would be adopting Eirini for future versions of SUSE Cloud Application Platform. As soon as it’s fully baked and tested, we’ll be shipping it. My colleague, Ron Nunan, posted some additional background information on this last week.

As a product marketing wonk, part of my responsibility is helping to craft press releases for announcements and events and then hopefully convincing press and analysts to write about it. It turned out that our announcements at this Summit complemented the Cloud Foundry Foundation’s announcements very nicely, so several articles have been published so far (with more to come) where I didn’t even speak to the author! Thanks, Abby, Chip, Devin, and others at the Foundation for that!

Without further ado, here is a round-up of what was in the news:

SUSE Operates Across Communities to Deliver Kubernetes and Cloud Foundry Innovation to the Enterprise (SUSE press release)

Gerald Pfeifer, SUSE vice president of Products and Technology Programs, said, “Our approach is to identify leading open source technologies and bring them together in a way that makes sense for our customers. Today, that means bringing the unsurpassed productivity of the Cloud Foundry model together with modern Kubernetes infrastructure in SUSE Cloud Application Platform. This unique combination enables our customers to reduce complexity and become more agile to meet the changing demands of the digital economy.”

Cloud Foundry Focus on Interoperability Continues with Two New Projects Integrating Kubernetes (Cloud Foundry Foundation press release)

“Eirini and CF Containerization are the latest examples of the Cloud Foundry community’s approach to continuously exploring future evolutionary directions for the platform,” said Chip Childers, CTO, Cloud Foundry Foundation. “Developers have made it clear they need a simple, agile and flexible delivery method to push apps to production, which Cloud Foundry Application Runtime delivers. They also have multiple use cases in which deployment and management of software packaged into containers is critical. These new projects demonstrate additional approaches to combining Kubernetes and Cloud Foundry technologies.”

Cloud Foundry expands its support for Kubernetes (TechCrunch)

Clearly then, Kubernetes is becoming part and parcel of what the Cloud Foundry PaaS service will sit on top of and what developers will use to deploy the applications they write for it in the near future. At first glance, this focus on Kubernetes may look like it’s going to make Cloud Foundry superfluous, but it’s worth remembering that, at its core, the Cloud Foundry Application Runtime isn’t about infrastructure but about a developer experience and methodology that aims to manage the whole application development lifecycle.

SUSE Integrates Kubernetes with Cloud Foundry in Cloud Application Platform (ServerWatch)

Smithurst explained that SUSE containerized Cloud Foundry and deployed it into Kubernetes because there was an opportunity to increase Cloud Foundry’s efficiency by taking advantage of the popularity of Kubernetes, and eliminate the need for Cloud Foundry users to learn and use BOSH. BOSH is a lifecycle management tool that has long been a central component of Cloud Foundry.

Cloud Foundry embraces Kubernetes (ZDNet)

The overall goal is to give end-users a more consistent operational experience between application and container platforms. To further help this, additional projects that focus on shared logging and metrics and unified networking — via technologies like Istio and Open Service Broker API (OSBAPI)-compliant service catalog synchronization — are also on their way.

New Kubernetes-Native Implementation of Cloud Foundry is Coming to SUSE Cloud Application Platform (DevOps Digest)

This is SUSE’s latest move to provide Kubernetes users with the top cloud native DevOps experience by combining Kubernetes and Cloud Foundry technologies. SUSE Cloud Application Platform boosts developer productivity with automation that eliminates the need to build and manage container images.

Kubernetes won, and that’s OK. Cloud Foundry into the future… (Diversity Limited)

And whereas OpenStack was all about cloud infrastructure, since its inception, Cloud Foundry has been more about a developer experience and DevOps lifecycle management story. While OpenStack spent its early years telling anyone who would listen that it enabled users to compete with AWS, Cloud Foundry simply focused on its core message of developer agility – smart strategy, it seems.

Cloud Foundry Goes All-In With Kubernetes (DataCenter Knowledge)

What all this essentially means is that Cloud Foundry has joined the rest of the world in making Kubernetes an integral part of its container strategy.

Cloud Foundry announces new Kubernetes projects (Enterprise Times)

This is more than just container fever. The orchestration capabilities and smaller footprint of containers make it easier for companies to scale out and scale up their applications. Cloud Foundry has had a spectacular year in terms of new members and applications in its online marketplace. The latter is up tenfold in just 10 years. The problem that many developers faced was that they were unable to easily take advantage of Kubernetes despite last year’s announcement.

Cloud Foundry Foundation tightens Kubernetes integration with new projects (DevClass)

Eirini and CF Containerization are the newest additions to the portfolio of non-profit Cloud Foundry Foundation. Both projects should mainly help users to combine Kubernetes and Cloud Foundry if needed, which is something developers have been asking for for a while now.

Cloud Foundry Adopts a Pair of Kubernetes-Based Projects (SDxCentral)

Childers earlier this year indicated that the organization was working through gaining more confidence in the maturity and direction of Kubernetes and how it would fit into Cloud Foundry. “We don’t chase the shiny ball,” Childers said, noting that the organization was more focused on only adding components that will help developers.