Cloud Foundry Buildpacks or Dockerfiles

Saturday, 24 March, 2018

I often hear questions about what these buildpacks are that Cloud Foundry uses. And then, frequently: why use a buildpack when Dockerfiles are commonplace?

The most common answer alludes to what is usually phrased as a ‘separation of concerns’. Unfortunately, these discussions tend towards an odd combination of the academic and the ethereal when trying to explain the concept. In other words, folks don’t really digest the simple point that buildpacks facilitate.

The separation of concerns is real. And it has value. For fun, let’s look at it from an enterprise workflow perspective. We can examine this from both an application development viewpoint and an application deployment viewpoint.

 

First, the Dockerfile approach. To produce a Dockerfile, a developer can sit at their desk and define everything that goes into the container image. There is great power in this, but great responsibility too. The following diagram depicts the stack the developer controls.

[Diagram: the full container stack defined and maintained by the developer]

The important point here is that the developer takes on the whole responsibility of defining the full container stack. Note that this approach becomes especially complicated if you are interested in maintaining consistency across an enterprise.
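To make that concrete, here is a minimal sketch of the kind of Dockerfile a developer ends up owning end to end; the base image, package and application names are illustrative assumptions, not taken from any real project:

# Hypothetical example: every layer below is the developer's responsibility
# Operating system layer:
FROM opensuse:42.3
# Language runtime layer:
RUN zypper --non-interactive install java-1_8_0-openjdk
# Application layer:
COPY target/myapp.jar /opt/myapp/myapp.jar
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]

Every choice in there, from the OS down to how the app starts, is the developer’s to make and, more importantly, to maintain.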

 

If we look at the buildpack approach, it enhances developer efficiency by allowing developers to focus on the application alone.

With buildpacks, you separate the process into specific layers that can be controlled by different roles. A snippet from the Cloud Foundry docs (https://docs.cloudfoundry.org/buildpacks/):

“Buildpacks provide framework and runtime support for apps. Buildpacks typically examine your apps to determine what dependencies to download and how to configure the apps to communicate with bound services.”

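By contrast, the developer-side experience with a buildpack reduces to a push. As a rough sketch (the application and buildpack names here are only illustrative):

cf push myapp -b java_buildpack -m 512M

The developer supplies only the application bits; the platform, via the buildpack and the underlying stack, supplies the runtime, libraries and OS layers.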
Developers that I talk to like the idea that, with a buildpack, there is the concept of App :: Buildpack :: System, and that this layering facilitates three distinct roles with separate concerns. The roles:

  1. End application developer (App)
  2. Cloud Foundry admin (various admin controls, which include control over buildpacks)
  3. Cluster manager (System, stemcell and stack)

 

With this in mind, let’s look at what happens when something within the container needs to be updated, maybe due to a security update to a language library.

For the Dockerfile approach, the developer needs to get involved. The container image needs to be rebuilt. All of the assets that were used to build the image need to be re-used with the updates applied. And if the updates affect multiple container images across numbers of developers, then all of those developers need to update their images and have them redeployed.
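In practice that means each affected developer, or their CI pipeline, repeats something like the following for every image they own (the image name and registry below are assumptions for illustration only):

docker build -t registry.example.com/team-a/myapp:1.0.1 .
docker push registry.example.com/team-a/myapp:1.0.1

That is then followed by redeploying every environment that still runs the old image.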

 

With buildpacks, the workflow is very different.

 

The operator or CF admin in this case can apply the ‘lib’ updates to the build process in the platform. This allows the platform to rebuild and redeploy all of the affected containers from this single update. And this happens without having to sidetrack development. This ability to let development handle the application code while separating out the activities that operations can handle is at the root of the term ‘separation of concerns’.
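A minimal sketch of that admin-side workflow, assuming the standard cf CLI (the buildpack archive and application names are illustrative):

cf update-buildpack java_buildpack -p java-buildpack-patched.zip
cf restage myapp

The restage step, repeated or scripted across the affected apps, rebuilds their droplets against the patched buildpack without any change to the application code.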

 

It is also worth noting that if the example were an OS image that needed an update, the same process would apply, with the same efficiencies.

For a world that is focused on cloud native applications, which promise the ability to isolate changes in the name of efficiency, the buildpack approach with its concept of separation of concerns fits right into today’s modern application development model.

Ultimately the buildpack approach provides some great benefits. It facilitates consistent best practice implementations across the enterprise as well as offering solid time savings for your developers.

Docker and Dads

Sunday, 19 June, 2016

I’m one of those lucky guys who woke up this morning with a 5-year-old’s arms around my neck and cries of “Happy Father’s Day!” Even the teenagers smiled, didn’t protest too much as they brought me breakfast (eggs for protein and fruit to keep me healthy). And they all tried to understand what on earth would possess me to rush my breakfast and run out the door to catch a flight for Seattle…  Well, sure – it’s the job. But as I walked through the airport, I realized that I didn’t feel as grumpy as a lot of the other dads traveling on Father’s Day actually looked. Maybe they didn’t get breakfast? Maybe they didn’t get hugs? Or maybe they aren’t headed to DockerCon 2016?

It may sound strange, but I’m really looking forward to spending the afternoon getting the SUSE booth ready at DockerCon 2016 (#G27 – come see us!), along with hundreds of other Dads who are excited for the next couple of days with all the cool technology and revolutionary ideas we will experience there.

Their web site says: “DockerCon is the community and industry event for makers and operators of next generation distributed apps built with containers. The two-and-a-half-day conference provides talks by practitioners, hands-on labs, an expo of Docker ecosystem innovators and great opportunities to share experiences with your peers.”

They’re modest. This tech is so cool that they actually had to turn away hundreds of people from the show this year. They sold out the show weeks in advance and have so many people on the wait list that they had to go back to Sponsors to ask for unused tickets back.

SUSE is one of the must-see stops on the show floor this year. As usual, we’ll have fun, games and prizes galore at the booth, and we’ll be handing out prizes to people wearing the green hat – get yours at Booth #G27! And we’ll be showing off some very cool tech of our own as well. We’ll be talking about full Docker support in SUSE Linux Enterprise, helping you collaborate securely to create Docker apps, and integrating container applications in private and public clouds (think OpenStack!!). The whole idea of containers is to make IT life simpler and easier – you deserve an enterprise foundation for your infrastructure so that you don’t have to spend a lot of time supporting your container environment. We can also tell you more about Portus – the open source project SUSE started to provide an authorization service and frontend for the Docker registry.

And be sure to mark your schedule for Tuesday, from 4:20-4:40 (Ecosystem B track) to hear Michal Svec talk about “Bimodal IT” bridging the gap between traditional and agile IT services. He came all the way from the Czech Republic to tell us about it!

I’m looking forward to an awesome event this week, and to sharing this experience with 4,000 other technology fans. And to the select few (well, hundreds of) Dads who gave up some adoration/adulation time to come in and get the show ready for this week’s action – I salute you!

Creating Linux Test Environments using Docker

Tuesday, 1 March, 2016

Authors: Arun Ramanathan & Pavankumar Mudalagi

Assume you need to deploy 10 SLES 11 SP3 machines. The most common, run-of-the-mill approach is to deploy 10 Linux servers on an ESX host, VirtualBox or Xen server. The only drawback is that we need to clone and install these Linux machines at least 9 times, which is time consuming. Do we have an alternative?

Yes we do!

First install your Base Linux Machine. I installed openSUSE 13.2 (Harlequin) (x86_64)

Deploying SLES 11 SP3 Container with Docker

Step 1: Download SLES 11 SP3 images.

  • Sign up at https://hub.docker.com/
  • Search for SLES 11 SP3
  • Select your image; we selected gbeutner/sles-11-sp3-x86_64 since it had a large number of pulls.


  • The above image can be pulled from Docker’s central index at http://index.docker.io using the command:
  • # docker pull gbeutner/sles-11-sp3-x86_64
  • Verify that the image has been downloaded.
    # docker images
    
    REPOSITORY                   TAG	IMAGE ID       CREATED        VIRTUAL SIZE
    gbeutner/sles-11-sp3-x86_64  latest	22de129c1cdd   10 months ago  193.1 MB

Step 2: Run this image using its IMAGE ID “22de129c1cdd”:

# docker run -i -t 22de129c1cdd /bin/bash
ebf803aaabe9:/ # ----> it will open a new bash prompt

Step 3: Install and configure all required software in this image. In my case I needed the Go programming language, so inside the container I copied over and extracted the Go tarball:

ebf803aaabe9:/# mkdir -p /home/Golang ; cd /home/Golang
ebf803aaabe9:/home/Golang # scp fa.ke.ad.dr:/home/Golang/go1.4.2.linux-amd64.tar.gz /home/Golang/
ebf803aaabe9:/home/Golang # tar -xvzf go1.4.2.linux-amd64.tar.gz

Step 4: In a new terminal on the openSUSE 13.2 host, commit the container.

Get Container ID.

# docker ps

CONTAINER ID   IMAGE                 COMMAND       CREATED          STATUS         PORTS     NAMES
ebf803aaabe9   22de129c1cdd:latest   "/bin/bash"   9 minutes ago    Up 9 minutes             reverent_blackwell

Commit the container, using the container ID, to create a new image.

# docker commit ebf803aaabe9
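As written, the commit produces an untagged image; it shows up as <none> in the listing in Step 5. Optionally, and this naming is only a suggestion rather than part of the original steps, you can give the image a repository and tag at commit time:

# docker commit ebf803aaabe9 sles11sp3-golang:latest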

Step 5: Clone and Run the images.

First, get the list of images

# docker images
REPOSITORY			TAG		IMAGE ID	CREATED		VIRTUAL SIZE
<none>				<none>		9d36165e91df	2 minutes ago	485 MB
gbeutner/sles-11-sp3-x86_64	latest		22de129c1cdd	10 months ago	193.1 MB

In 10 new SSH terminals, run the latest image with image ID “9d36165e91df” to get 10 unique SLES 11 machines.

# docker run -i -t 9d36165e91df /bin/bash
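If you would rather not open 10 terminals, a quick loop can start the containers in detached mode instead. This is just an alternative sketch, assuming you are happy to attach to them later:

# for i in $(seq 1 10); do docker run -d -i -t 9d36165e91df /bin/bash; done

You can then connect to any of them with docker attach, or check them all with docker ps.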

To see the list of running containers:

# docker ps
CONTAINER ID   IMAGE         	     COMMAND       CREATED          STATUS          PORTS   NAMES
62382eed12af   9d36165e91df:latest   "/bin/bash"   14 seconds ago   Up 14 seconds           adoring_mayer

To close a running container, say the one with Container ID “62382eed12af”:

62382eed12af:/ # exit
exit

Conclusion: If you want a quicker way to deploy Linux or Windows machines, it’s time to move to Docker. You can save many man-hours with it!

References:
https://docs.docker.com/reference/commandline/cli/
https://www.suse.com/documentation/sles-12/singlehtml/dockerquick/dockerquick.html

Docker mini-course (in case you missed this), and more

Wednesday, 22 July, 2015

Are you still looking for Docker training videos? Come and take a look at our popular free mini-course: Docker in SUSE Linux Enterprise Server 12. Presented by Flavio Castelli, senior engineer at SUSE, this mini-course shows you how to use Docker, fully supported in SUSE Linux Enterprise Server 12, step by step. Be sure to register an account on the web page to get future updates.

For more information, you may download the white paper here or refer to documentation here.

 

SUSE Elevates Docker in SUSE Linux Enterprise Server 12

Monday, 22 June, 2015

SUSE® today announced significant enhancements to its container toolset, further embracing Docker as an integral component of SUSE Linux Enterprise Server. SUSE now fully supports Docker in production environments and has added an option for customers to build a private on-premise registry to host container images in a controlled and secure environment. These enhancements further strengthen Docker as an application deployment tool, helping customers significantly improve operational efficiency.

“The advent of virtualization reduced the time needed to bring up a server from hours to minutes; containers and Docker have reduced that time to seconds,” said Ralf Flaxa, SUSE vice president of engineering. “SUSE has long participated in the evolution of lightweight and efficient container deployment technology, with the inclusion of Docker in SUSE Linux Enterprise Server 12 last year and our support for Linux containers in SUSE Linux Enterprise Server 11 for several years before that. This is another example of SUSE’s commitment to providing innovative technologies for enterprise customers.”

Fully supported as part of SUSE Linux Enterprise Server 12, enterprise-ready Docker from SUSE improves operational efficiency and is accompanied by easy-to-use tools to build, deploy and manage containers.

  • SUSE provides pre-built images from a verified and trusted source. In addition, customers can create an on-premise registry behind the enterprise firewall, minimizing exposure to malicious attacks and providing better control of intellectual property. Portus, an open source front-end and authorization tool for an on-premise Docker registry, enhances security and user productivity.
  • As integral parts of SUSE Linux Enterprise Server, Docker and containers provide additional virtualization options to improve operational efficiency. SUSE Linux Enterprise Server includes the Xen and KVM hypervisors and is a perfect guest in virtual and cloud environments. With the addition of Docker, customers can build, ship and run containerized applications on SUSE Linux Enterprise Server in physical, virtual or cloud environments.
  • The efficient YaST management framework provides a simple overview of the available Docker images and allows customers to run and easily control Docker containers. In addition, the KIWI image-building tool has been extended to support the Docker build format.

Thomas Brottrager, head of IT at manufacturing company STIA Holzindustrie GmbH, said, “Using the Docker tool included in SUSE Linux Enterprise Server has enabled us to reduce the number of virtual machines we need to manage in our development environment. As a result, we have seen significant savings in system administration.”

SUSE’s current Docker offering supports x86-64 servers with support for other hardware platforms in the works. Integration with SUSE Manager for lifecycle management is also planned. For more information about Docker in SUSE Linux Enterprise Server, including a series of Docker mini-course videos, visit www.suse.com/promo/sle/docker/mini-course.html and www.suse.com/promo/sle.

About SUSE
SUSE, a pioneer in open source software, provides reliable, interoperable Linux, cloud infrastructure and storage solutions that give enterprises greater control and flexibility. More than 20 years of engineering excellence, exceptional service and an unrivaled partner ecosystem power the products and support that help our customers manage complexity, reduce cost, and confidently deliver mission-critical services. The lasting relationships we build allow us to adapt and deliver the smarter innovation they need to succeed – today and tomorrow. For more information, visit www.suse.com.

Copyright 2015 SUSE LLC. All rights reserved. SUSE and the SUSE logo are registered trademarks of SUSE LLC in the United States and other countries. All third-party trademarks are the property of their respective owners.


Customer success story: Docker with SUSE

Tuesday, 26 May, 2015

SUSE Linux Enterprise Server 12 includes many new tools and features to improve operational efficiency for enterprises. Docker is one of them. Docker offers benefits such as an efficient development cycle and a lightweight containerization model. As a new technology, Docker is evolving, but even today it’s producing concrete results. Check out how STIA, a global wooden flooring and panels manufacturer, sees the benefits of Docker. Find more details here.

And stay tuned. SUSE will announce its official support for Docker very soon.

A Visual Way to Play with Docker

Tuesday, 7 April, 2015

You may know that Docker is a lightweight virtualization solution for running multiple virtual units (containers) simultaneously on a single control host. Containers are isolated with kernel control groups (cgroups) and kernel namespaces.

We have had Docker as an integral part of SUSE Linux Enterprise Server for some time already, so let me share some tips on how to start using it.

The base Docker packages are included right on the SUSE Linux Enterprise Server media, so you can use the regular package management tools to install them and then go straight to the docker tool and start using it right away. There are more details in the Docker Quick Start manual, which you can find here.
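As a minimal sketch of that install step (the package and service names below are the usual ones on SUSE Linux Enterprise Server 12, so treat this as an assumption rather than a transcript):

# zypper install docker
# systemctl enable docker
# systemctl start docker
# docker info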

I want to point out a new thing which we have made available recently in the SUSE Linux Enterprise Server update channels: that’s a Docker YaST module. You can find it in the YaST Control Center easily:

[Screenshot: the Docker module in the YaST Control Center]

The Docker YaST module’s purpose is to give a simple overview of the available Docker images and running Docker containers, and to allow easy manipulation of running containers. This is how it looks:

[Screenshot: the Docker YaST module listing images and running containers]

You can spawn a container from an image, optionally mapping volumes to expose storage to the container or mapping network ports.

[Screenshot: the container creation dialog with volume and port mapping options]
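For reference, the rough command-line equivalent of that dialog would be something like this; the image name, host path and port numbers are purely illustrative assumptions:

# docker run -d -v /srv/data:/data -p 8080:80 sles12:latest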

Once you are done, you will see the container running:

[Screenshot: the newly started container shown as running]

In case you would like to debug your container or make changes, you can easily inject a terminal using the YaST module. What that means is that YaST spawns a selected shell (normally bash) and attaches it to the running container. Then you can work inside the container as if you were connected to a normal system. While connected, you can easily check for differences in content.

That view outlines the changes between the original image from which the container started and the currently running container. If and when you are okay with the changes, you can commit them back to the image, creating a new version with the changes you have made.

[Screenshot: committing container changes back to the image]

All this gives you an easy start with Docker, a comfortable overview of the available images and running containers. You can manipulate the inner contents of the containers, record changes back to the images, etc. For complex tasks you can use the docker tool; the YaST module is meant to simplify onboarding and let you get to know Docker.
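If you prefer the plain docker tool for those same tasks, the usual equivalents look roughly like this (the container ID and image tag are placeholders, not values from the screenshots above):

# docker exec -it <container-id> /bin/bash ----> inject a terminal into the running container
# docker diff <container-id> ----> list what changed compared to the original image
# docker commit <container-id> myimage:v2 ----> record the changes as a new image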

The capabilities described here are what we have released as the initial version, which is made available as an update to SUSE Linux Enterprise Server 12 (just run zypper patch). Stay tuned as we release further updates to the tool itself, as well as add other components related to containers and Docker. And don’t hesitate to let us know about your experience!

Welcome Docker to SUSE Linux Enterprise Server

Monday, 16 June, 2014

Lightweight virtualization is a hot topic these days. Also called “operating system-level virtualization,” it allows you to run multiple applications or systems on one host without a hypervisor. The advantages are obvious: the hypervisor, the layer between the host hardware and the operating system and its applications, is eliminated, allowing much more efficient use of resources. That, in turn, reduces the virtualization overhead while still allowing for separation and isolation of multiple tasks on one host. As a result, lightweight virtualization is very appealing in environments where resource use is critical, such as the server hosting or outsourcing business.

One specific example of operating system-level virtualization is Linux Containers, also sometimes called “LXC” for short. We already introduced Linux Containers to SUSE customers and users in February 2012 as a part of SUSE Linux Enterprise Server 11 SP2. Linux Containers employ techniques like Control Groups (cgroups) to perform resource isolation to control CPU, memory, network, block I/O and namespaces to isolate the process view of the operating system, including users, processes or file systems. That provides advantages similar to those of “regular” virtualization technologies – such as KVM or Xen –, but with much smaller I/O overhead, storage savings and the ability to apply dynamic parameter changes without the need to reboot the system. The Linux Containers infrastructure is supported in SUSE Linux Enterprise 11 and will remain supported in SUSE Linux Enterprise 12.

Now, we are taking the next step to further enhance our virtualization strategy and introduce you to Docker. Docker is built on top of Linux Containers with the aim of providing an easy way to deploy and manage applications. It packages the application including its dependencies in a container, which then runs like a virtual machine. Such packaging allows for application portability between various hosts, not only across one data center, but also to the cloud. And starting with SUSE Linux Enterprise Server 12 we plan to make Docker available to our customers so they can start using it to build and run their containers. This is another step in enhancing the SUSE virtualization story, building on top of what we have already done with Linux Containers. Leveraging the SUSE ecosystem, Docker and Linux Containers are not only a great way to build, deploy and manage applications; the idea nicely plugs into tools like Open Build Service and Kiwi for easy and powerful image building, or SUSE Studio, which offers a similar concept already for virtual machines. Docker easily supports rapid prototyping and a fast deployment process; thus, when combined with Open Build Service, it’s a great tool for developers aiming to support various platforms with a unified tool chain. This is critical for the future because those platforms easily apply also to clouds: public, private and hybrid. Combining Linux Containers, Docker, SUSE’s development and deployment infrastructures and SUSE Cloud, our OpenStack-based cloud infrastructure offering, brings flexibility in application deployment to a completely new level.

Introducing Docker follows the SUSE philosophy by offering choice in the virtualization space, allowing for flexibility, performance and simplicity for Linux in data centers and the cloud.

Taking off at OpenStack Summit — Vancouver

Monday, 28 May, 2018

We’re just finishing up here at the Vancouver OpenStack Summit – an amazing event this year, returning to an amazing city. The weather even cooperated and we were treated with spectacular views of snow-capped peaks in the distance while float planes entertained us, landing and taking off from the harbor, just outside the convention center.

This year’s event was billed as The Summit: Home of Open Infrastructure, and was attended by builders and operators of container infrastructure, CI/CD, Telecom + NFV, Public Cloud, Private & Hybrid Cloud, and members of open source communities like Kubernetes, Docker, OPNFV, Ansible, Ceph, and others.

Attendees took advantage of over 200 sessions and workshops. SUSE focused on helping attendees understand how to take advantage of SUSE OpenStack Cloud, to “get private cloud to take off and make infrastructure fly”. Our SUSE-green float plane and flight crew uniforms were a hit at the event and helped attendees envision how they could “Navigate to Sunnier Skies with SUSE”.

Of course, when it comes to soaring with private and hybrid cloud infrastructure, OpenStack is a leading force in the industry. 84% of CSPs say that OpenStack is essential or important to their company’s success, and 451 Research estimates the overall market for OpenStack-based solutions will exceed $6 billion by 2021.

Containers were a big focus at the summit as well, since OpenStack is a prime deployment environment for containers. In 2015, application containers were a $495 million market. By 2020, that’s expected to grow to $2.7 billion according to 451 Research.

As OpenStack matures, SUSE continues to partner with leading SDN/NFV, storage, platform, and other ecosystem vendors to help customers reach new heights. So how do you minimize turbulence when taking off to the blue skies of private cloud? I’d suggest considering the truly open provider of open source infrastructure solutions, SUSE! SUSE’s been providing OpenStack to the enterprise longer than anyone. SUSE OpenStack Cloud is the easiest private cloud infrastructure solution to deploy and manage – ready for your development, test and production workloads. It’s fully supported, compliant and secure.

Farewell to Vancouver for now. Maybe we’ll be fortunate and Summit will return once again, but for now, we’ll set our sights on Berlin for November. We’re looking forward to seeing many of you again!

Ready Certification for SUSE CaaS Platform Now Available

Wednesday, 2 May, 2018


As a flexible and customizable general-purpose container orchestration platform, Kubernetes is dominating the container management tool landscape. Enterprise adoption of containers is exploding, and as a result software vendors are moving quickly to deliver their applications – both legacy and new cloud native applications – as Docker containers orchestrated by Kubernetes.

Which is why we’re excited to announce this week the extension of our SUSE Ready certification program to SUSE CaaS Platform, our enterprise class container management solution powered by Kubernetes. SUSE Ready certification provides ISVs a way to assert that their solution has been tested against and is supported with SUSE platforms.

SUSE CaaS Platform delivers the best in class container infrastructure that enables the digital transformation of our customers’ businesses. Additionally, for enterprises to be successful in deploying containers, microservices, and cloud-based infrastructure, they need the right DevOps tools, and the ability to secure, monitor and manage real world, large-scale deployments. As the “open open” source company, SUSE’s approach has always been to partner with vendors that deliver best of breed components that complement a SUSE solution. Take a look at the SUSE Partner Software Catalog and check the SUSE CaaS Platform box in the Platform filter, and you’ll see that SUSE is already working with a number of partners who have tested and certified their solutions with SUSE CaaS Platform.

To find out more about the SUSE Ready program and Ready for SUSE CaaS Platform partners, visit https://www.suse.com/partners/isv/caas-platform/ and visit us in booth #G-C28 at Kubecon.