Comparing Docker Swarm and Kubernetes

Monday, August 7, 2017

For teams building and deploying containerized applications using
Docker, selecting the right orchestration engine can be a challenge.
The decision affects not only deployment and management, but how
applications are architected as well. DevOps teams need to think about
details like how data is persisted, how containerized services
communicate with one another, load balancing, service discovery,
packaging and more. It turns out that the choice of orchestration
engine is critical to all these areas. While Rancher has the nice
property that it can support multiple orchestration engines
concurrently, choosing the right solution is still important. Rather
than attempting to boil the ocean by looking at many orchestrators, we
chose to look at two likely to be on the short list for most
organizations – Kubernetes and Docker Swarm.

Evolving at a rapid clip

To say these frameworks are evolving quickly is an understatement. In
just the past year there have been four major releases of Docker
(1.12, 1.13, 17.03 and 17.06) with dozens of new features and a
wholesale change to the Swarm architecture. Kubernetes has been
evolving at an even more frenetic pace. Since Kubernetes 1.3 was
introduced in July of 2016 there have been four additional major
releases and no less than a dozen minor releases. Kubernetes is at
version 1.7.2 at the time of this writing with 1.8.0 now in alpha 2.
Check out the Kubernetes changelog to get a sense of the pace of
development. *Comparing Kubernetes and Docker Swarm is a little like
trying to compare two rocket ships speeding along on separate
trajectories. By the time you catch up with one and get close enough to
see what’s happening, the other is in a whole different place!*

Points of comparison

Despite the challenges posed by their rapid evolution, we decided to
take a crack at comparing Swarm and Kubernetes in some detail, taking a
fresh look at new capabilities in each solution. At a high-level the
two solutions do broadly similar things, but they differ substantially
in their implementation. We took both solutions out for a test drive
(using Rancher running in AWS), got into the weeds, and compared them
systematically in these areas:

  • Architecture
  • User experience
  • Ease-of-use
  • Networking model
  • Storage management
  • Scheduling
  • Service discovery
  • Load balancing
  • Healthchecks
  • Scalability

Lots for DevOps teams to ponder

Both Swarm and Kubernetes are impressive, capable solutions. Depending
on their needs, organizations could reasonably choose either solution.
If you are new to one solution or the other, understanding their
relative strengths and weaknesses, and the differences in how they are
implemented, can help you make a more informed decision. Swarm
is impressive for its simplicity and seamless integration with Docker.
For those experienced with Docker, evolving to use Swarm is simple.
Swarm’s new DAB (Distributed Application Bundle) format for multi-host,
multi-service applications extends naturally from docker-compose, and
the Swarm command set is now
part of Docker Engine, so administrators face a minimal learning curve.
Customers considering larger, more complex deployments will want to look
at Kubernetes. Docker users will need to invest a little more time to
get familiar with Kubernetes, but even if you don’t use all the features
out of the gate, the features are there for good reason. Kubernetes has
its own command set, API, and architecture, all distinct from Docker’s.
For Kubernetes, the watchword is flexibility. Kubernetes
is extensible and configurable and can be deployed in a variety of ways.
It introduces concepts like Pods, ReplicaSets and StatefulSets not
found in Swarm, along with features like autoscaling. While Kubernetes
is a little more complex to learn and master, for users with more
sophisticated requirements, Kubernetes has the potential to simplify
management by reducing the need for ongoing manual interventions.
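
To make the contrast concrete, here is what running a simple replicated
web service looks like in each system. These are minimal sketches with
illustrative names and images, not excerpts from the whitepaper. In
Swarm, the service is created imperatively from a manager node:

$ docker service create --name web --replicas 3 -p 80:80 nginx:alpine

In Kubernetes, the same intent is expressed declaratively as a
Deployment, which manages a ReplicaSet of Pods (the apps/v1beta1 API
matches the Kubernetes 1.7 era discussed above):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80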

About the whitepaper

Our comparison was done using Rancher’s container management framework
to deploy separate environments for Docker Swarm and Kubernetes. Rather
than focus on Rancher however, comparisons are made at the level of
Swarm and Kubernetes themselves. Whether you are using Rancher or a
different container management framework, the observations should still
be useful. Included in the paper are:

  • Detailed comparisons between Kubernetes and Swarm
  • Considerations when deploying both orchestrators in Rancher
  • Considerations for application designers
  • High-level guidance on which orchestrator to consider, and when

Download the free whitepaper for an up-to-date look at Kubernetes and
Docker Swarm. As always, we appreciate your thoughts and feedback!

Setting Up a Docker Registry with JFrog Artifactory and Rancher

Thursday, July 27, 2017

For any team using containers – whether in development, test, or
production – an enterprise-grade registry is a non-negotiable
requirement. JFrog Artifactory is much beloved by Java developers, and
it’s easy to use as a Docker registry as well. To make it even easier,
we’ve put together a short walkthrough for setting up Artifactory in
Rancher.

Before you start

For this article, we’ve assumed that you already have a Rancher
installation up and running (if not, check out our Quick Start guide),
and will be working with either Artifactory Pro or Artifactory
Enterprise.
Choosing the right version of Artifactory depends on your development
needs. If your main development needs include building with Maven
package types, then Artifactory open source may be suitable. However,
if you build using Docker, Chef Cookbooks, NuGet, PyPI, RubyGems, and
other package formats then you’ll want to consider Artifactory Pro.
Moreover, if you have a globally distributed development team with HA
and DR needs, you’ll want to consider Artifactory Enterprise. JFrog
provides a detailed matrix with the differences between the versions of
Artifactory. There are several values you’ll need to select in order to
set Artifactory up as a Docker registry, such as a public name or public
port. In this article, we refer to them as variables; just substitute
the values you choose for the variables throughout this post. To deploy
Artifactory, you’ll first need to create (or already have) a wildcard
certificate imported into Rancher for “*.$public_name”. You’ll also need
to create DNS entries pointing to the IP address of artifactory-lb, the
load balancer for the Artifactory high-availability architecture.
Artifactory will be reached via
$publish_schema://$public_name:$public_port, while the Docker registry
will be reachable at
$publish_schema://$docker_repo_name.$public_name:$public_port.
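
For illustration, here is what those endpoints look like with a set of
hypothetical values substituted in (yours will differ):

$publish_schema   = https
$public_name      = example.com
$public_port      = 443
$docker_repo_name = docker-local

Artifactory:     https://example.com:443
Docker registry: https://docker-local.example.com:443
Wildcard (certificate and DNS): *.example.com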

Installing Artifactory

While you can choose to install Artifactory on your own by following the
documented instructions, you also have the option of using the Rancher
catalog. The Rancher community has recently contributed a template for
Artifactory, which deploys the package, the Artifactory server, its
reverse proxy, and a Rancher load balancer.

**A note on reverse proxies:** To use Artifactory as a Docker registry,
a reverse proxy is required. This reverse proxy is automatically
configured using the Rancher catalog item. However, if you need to apply
a custom nginx configuration, you can do so by upgrading the
artifactory-rp container in Rancher.

Note that installing Artifactory is a separate task from setting up
Artifactory to serve as a Docker registry, and from connecting that
Docker registry to Rancher (we’ll cover how to do these things as
well). To launch the Artifactory template, navigate to the community
catalog in Rancher. Choose “Pro” as the Artifactory version to launch,
and set parameters for schema, name, and port:

Once the package is deployed, the service is accessible at
$publish_schema://$public_name:$public_port.

Configure Artifactory

At this point, we’ll need to do a bit more configuration with
Artifactory to complete the setup. Access the Artifactory server using
the path above. The next step will be to configure the reverse proxy and
to enable Docker image registry integration. To configure the reverse
proxy, set the following parameters:

  • Internal hostname: artifactory
  • Internal port: 8081
  • Internal context: artifactory
  • Public server name: $public_name
  • Public context path: [leave blank]
  • http port: $public_port
  • Docker reverse proxy settings: Sub Domain

Next, create a local Docker repository, making sure to select Docker as
the package type. Verify that the registry name is correct; it should be
formatted as $docker_repo_name.$public_name. Test that the registry is
working by logging into it:

# docker login $publish_schema://$docker_repo_name.$public_name
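
Assuming the login succeeds, a quick end-to-end check is to tag a small
image against the new registry and push it; the image and tag names here
are arbitrary:

# docker pull busybox:latest
# docker tag busybox:latest $docker_repo_name.$public_name/busybox:test
# docker push $docker_repo_name.$public_name/busybox:test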

Add Artifactory into Rancher

Now that Artifactory is all set up, it’s time to add the registry to
Rancher itself, so any application built and managed in Rancher can pull
images from it. On the top navigation bar, visit Infrastructure, then
select Registries from the drop down menu. On the resulting screen,
choose “Add Registry”, then select the “Custom” option. All you’ll need
to do is enter the address for your Artifactory Docker registry, along
with the relevant credentials:
Once it’s been added, you should see it show up in your list of
recognized registries (which appears after visiting Infrastructure ->
Registries on the top navigation bar). With that, you should be all set
to use Artifactory as a Docker registry within Rancher!

Raul is a DevOps Lead at Rancher Labs.


Building a Super-Fast Docker CI/CD Pipeline with Rancher and DroneCI

Wednesday, July 5, 2017

At Higher Education, we’ve tested and used
quite a few CI/CD tools for our Docker CI pipeline. Using Rancher and
Drone CI has proven to be the simplest, fastest, and most enjoyable
experience we’ve found to date. From the moment code is pushed/merged
to a deployment branch, code is tested, built, and deployed to
production in about half the time of cloud-hosted solutions – as little
as three to five minutes (some apps take longer due to a larger
build/test process). Drone builds are also extremely easy for our
developers to configure and maintain, and getting it set up on Rancher
is just like everything else on Rancher – very simple.

Our Top Requirements for a CI/CD Pipeline

The CI/CD pipeline is really the core of the DevOps experience and is
certainly the most impactful for our developers. From a developer
perspective, the two things that matter most for a CI/CD pipeline are
speed and simplicity. Speed is #1 on the list, because nothing’s worse
than pushing out one line of code and waiting for 20 minutes for it to
get to production. Or even worse…when there is a production issue, a
developer pushes out a hot fix only to have company dollars continue to
grow wings and fly away as your deployment pipeline churns. Simplicity
is #2, because in an ideal world, developers can build and maintain
their own application deployment configurations. This makes life easier
for everyone in the long run. You certainly don’t want developers
knocking on your (Slack) door every time their build fails for some
reason.

Docker CI/CD Pipeline Speed Pain points

While immutable containers are far superior to maintaining stateful
servers, they do have a few drawbacks – the biggest one being
deployment speed: It’s slower to build and deploy a container image
than to simply push code to an existing server. Here are all the places
that a Docker deployment pipeline can spend time:

  1. CI: pull the base image for the application from the Docker registry
  2. CI: build a test image (with test dependencies) and run tests
  3. CI: build the production image (no test dependencies)
  4. CI: push the application image to the Docker registry
  5. Infrastructure: pull the application image from the Docker registry
  6. Stop old containers, start new ones

Depending on the size of your application
and how long it takes to build, latency with the Docker registry (steps
1, 4, 5) is probably where most of your time will be spent during a
Docker build. Application build time (steps 2, 3) might be a fixed
variable, but it also might be drastically affected by the memory or CPU
cores available to the build process. If you’re using a cloud-hosted CI
solution, then you don’t have control over where the CI servers run
(registry latency might be really slow) and you might not have control
over the type of servers/instances running (application build might be
slow). There will also be a lot of repeated work, such as downloading
base images for every build.

Enter Drone CI

Drone runs on your Rancher infrastructure much as Jenkins would, but
unlike Jenkins, Drone is Docker-native – every part of your
build process is a container. Running on your infrastructure speeds up
the build process, since base images can be shared across builds or even
projects. You can also avoid a LOT of latency if you push to a Docker
registry that is on your own infrastructure such as ECR for AWS. Drone
being Docker-native removes a lot of configuration friction as well.
Anyone who’s had to configure Jenkins knows that this is a big plus. A
standard Drone deployment does something like this:

  • Run a container to notify Slack that a build has started
  • Configure any base image for your “test” container, code gets
    injected and tests run in the container
  • Run a container that builds and pushes your production image (to
    Docker Hub, AWS ECR, etc)
  • Run a container that tells Rancher to upgrade a service
  • Run a container to notify Slack that a build has completed/failed

A .drone.yml file looks strikingly similar to a docker-compose.yml file
– just a list of containers. Since each step has a container dedicated
to that task, configuration of that step is usually very simple.

Getting Drone Up and Running

The to do list here is simple:

  • Register a new GitHub OAuth app
  • Create a Drone environment in Rancher
  • Add a “Drone Server” host and one or more “Drone Worker” hosts
    • Add a drone=server tag to the Drone Server host
  • Run the Drone stack

The instance sizes are up to you – at Higher Education we prefer fewer,
more powerful workers, since that results in faster builds. (We’ve
found that one powerful worker tends to handle builds just fine for
teams of seven.) Once your Drone servers are up, you can run this stack:

version: '2'
services:
  drone-server:
    image: drone/drone:0.5
    environment:
      DRONE_GITHUB: 'true'
      DRONE_GITHUB_CLIENT: <github client>
      DRONE_GITHUB_SECRET: <github secret>
      DRONE_OPEN: 'true'
      DRONE_ORGS: myGithubOrg
      DRONE_SECRET: <make up a secret!>
      DRONE_GITHUB_PRIVATE_MODE: 'true'
      DRONE_ADMIN: someuser,someotheruser
      DRONE_DATABASE_DRIVER: mysql
      DRONE_DATABASE_DATASOURCE: user:password@tcp(databaseurl:3306)/drone?parseTime=true
    volumes:
    - /drone:/var/lib/drone/
    ports:
    - 80:8000/tcp
    labels:
      io.rancher.scheduler.affinity:host_label: drone=server
  drone-agent:
    image: drone/drone:0.5
    environment:
      DRONE_SECRET: <make up a secret!>
      DRONE_SERVER: ws://drone-server:8000/ws/broker
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    command:
    - agent
    labels:
      io.rancher.scheduler.affinity:host_label_ne: drone=server
      io.rancher.scheduler.global: 'true'

This will run one Drone server on your drone=server host, and one
Drone agent on every other host in your environment. Backing Drone with
MySQL via the DRONE_DATABASE_DRIVER and DRONE_DATABASE_DATASOURCE
values is optional, but highly recommended. We use a small RDS instance.
Once the stack is up and running, you can log in at your Drone server’s
IP address and turn on a repo for builds (from the Account menu). You’ll
notice that there’s really no configuration for each repo in the Drone
UI; it all happens via a .drone.yml file checked into each repository.

Adding a Build Configuration

To build and test a Node.js project, add a .drone.yml file to your repo
that looks like this:

pipeline:
  build:
    image: node:6.10.0
    commands:
      - yarn install
      - yarn test

It’s simple and to the point: your build step sets the container image
that the repository code gets put into and specifies the commands to run
in that container. Anything else is managed with Drone plugins, which
are just containers designed for one task. Since plugins live on Docker
Hub, you don’t install them – just add them to your .drone.yml file. A
more full-featured build like the one mentioned above uses the Slack,
ECR, and Rancher plugins to create this .drone.yml:

pipeline:
  slack:
    image: plugins/slack
    webhook: <your slack webhook url>
    channel: deployments
    username: drone
    template: "<{{build.link}}|Deployment #{{build.number}} started> on <http://github.com/{{repo.owner}}/{{repo.name}}/tree/{{build.branch}}|{{repo.name}}:{{build.branch}}> by {{build.author}}"
    when:
      branch: [ master, staging ]
  build:
    image: <your base image, say node:6.10.0>
    commands:
      - yarn install
      - yarn test
    environment:
      - SOME_ENV_VAR=some-value
  ecr:
    image: plugins/ecr
    access_key: ${AWS_ACCESS_KEY_ID}
    secret_key: ${AWS_SECRET_ACCESS_KEY}
    repo: <your repo name>
    dockerfile: Dockerfile
    storage_path: /drone/docker
  rancher:
    image: peloton/drone-rancher
    url: <your rancher url>
    access_key: ${RANCHER_ACCESS_KEY}
    secret_key: ${RANCHER_SECRET_KEY}
    service: core/platform
    docker_image: <image to pull>
    confirm: true
    timeout: 240
  slack_finish:
    image: plugins/slack
    webhook: <your slack webhook>
    channel: deployments
    username: drone
    when:
      branch: [ master, staging ]
      status: [ success, failure ]

While this may be 40 lines, it’s extremely readable, and 80% of it is
copy and paste from the Drone plugin docs. (Try doing all of these
things in a cloud-hosted CI platform and you’ll likely have a day’s
worth of docs-reading ahead of you.) Notice how each plugin really
doesn’t need much configuration. If you want to use Docker Hub instead
of ECR, use the Docker plugin instead. That’s about it! In a few
minutes, you can have a fully functioning CD pipeline up and running.
It’s also a good idea to use the Rancher Janitor catalog stack to keep
your workers’ disk space from filling up; just know that the less often
you clean up, the faster your builds will be, as more layers will be
cached.

Will Stern is a Software Architect for HigherEducation and also provides
Docker training through LearnCode.academy and O’Reilly Video Training.


Unlocking the Business Value of Docker

Tuesday, April 25, 2017

Why Smart Container Management is Key

For anyone working in IT, the excitement around containers has been hard
to miss. According to RightScale, enterprise deployments of Docker more
than doubled in 2016, with 29% of organizations using the software
versus just 14% in 2015 [1]. Even more impressive, fully 67% of
organizations surveyed are either using Docker or plan to adopt it.
While many of these efforts are early stage, separate research shows
that over two-thirds of organizations that try Docker report that it
meets or exceeds expectations [2], and the average Docker deployment
quintuples in size in just nine months.

Clearly, Docker is here to stay. While exciting, containers are hardly
new. They’ve existed in various forms for years. Some examples include
BSD jails, Solaris Zones, and more modern incarnations like Linux
Containers (LXC). What makes Docker (which originally built on LXC)
interesting is that
it provides the tooling necessary for users to easily package
applications along with their dependencies in a format readily portable
between environments. In other words, Docker has made containers
practical and easy to use.

Re-thinking Application Architectures

It’s not a coincidence that Docker exploded in popularity just as
application architectures were themselves changing. Driven by the
global internet, cloud, and the explosion of mobile apps, application
services are increasingly designed for internet scale. Cloud-native
applications are composed of multiple connected components that are
resilient, horizontally scalable, and wired together via secured virtual
networks. As these distributed, modular architectures have become the
norm, Docker has emerged as a preferred way to package and deploy
application components. As Docker has matured, the emphasis has shifted
from the management of the containers themselves to the orchestration
and management of complete, ready-to-run application services. For
developers and QA teams, the potential for productivity gains is
enormous. By being able to spin up fully-assembled dev, test and QA
environments, and rapidly promote applications to production, major
sources of errors, downtime and risk can be avoided. DevOps teams
become more productive, and organizations can get to market faster with
higher quality software. With opportunities to reduce cost and improve
productivity, Docker is no longer interesting just to technologists –
it’s caught the attention of the board room as well.

New Opportunities and Challenges for the Enterprise

Done right, deploying a containerized application environment can bring
many benefits:

  • Improved developer and QA productivity
  • Reduced time-to-market
  • Enhanced competitiveness
  • Simplified IT operations
  • Improved application reliability
  • Reduced infrastructure costs

While Docker provides real opportunities for enterprise deployments, the
devil is in the details. Docker is complex, encompassing a whole
ecosystem of rapidly evolving open-source projects. The core Docker
projects are not sufficient for most deployments, and organizations
implementing Docker from open-source wrestle with a variety of
challenges including management of virtual private networks, managing
databases and object stores, securing applications and registries, and
making the environment easy enough to use that it is accessible to
non-specialists. They also are challenged by skills shortages and
finding people knowledgeable about various aspects of Docker
administration.
Compounding these challenges, orchestration technologies essential to
realizing the value of Docker are also evolving quickly. There are
multiple competing solutions, including Kubernetes, Docker Swarm and
Mesos. The same is true with private cloud management frameworks.
Because Docker environments tend to grow rapidly once deployed,
organizations are concerned about making a misstep, and finding
themselves locked into a particular technology. In the age of rapid
development and prototyping, what is a sandbox one day may be in
production the next. It is important that the platform used for
evaluation and prototyping has the capacity to scale into production.
Organizations need to retain flexibility to deploy on bare-metal, public
or private clouds, and use their choice of orchestration solutions and
value-added components. For many, the challenge is not whether to deploy
Docker, but how to do so cost-effectively, quickly and in a way that
minimizes business and operational risk so the potential of the
technology can be fully realized.

Reaping the Rewards with Rancher

In a sense, the Rancher® container management platform is to Docker what
Docker is to containers: just as Docker makes it easy to package,
deploy and manage containers, Rancher software does the same for the
entire application environment and Docker ecosystem. Rancher software
simplifies the management of Docker environments helping organizations
get to value faster, reduce risk, and avoid proprietary lock-in.
In a recently published whitepaper, Unlocking the Value of Docker in
the Enterprise, written with both a technology and business audience in
mind, Rancher Labs explores the challenges of container management and
discusses and quantifies some of the specific areas where Rancher
software can provide value to the business. To learn more about Rancher,
and understand why it has become the choice of leading organizations
deploying Docker, download the whitepaper and
learn what Rancher can do for your business.

[1] http://assets.rightscale.com/uploads/pdfs/rightscale-2016-state-of-the-cloud-report-devops-trends.pdf
[2] https://www.twistlock.com/2016/09/23/state-containers-industry-reports-shed-insight/


DockerCon 2017 – Thoughts and Impressions

Friday, April 21, 2017

We’ve just returned from DockerCon 2017, which was a fantastic
experience. I thought I’d share some of my thoughts and impressions of
the event, including my perspective on some of the key announcements,
while they are still fresh in my mind.

New open source projects

Container adoption for production environments is very real. The
keynotes on both days included some exciting announcements that should
further accelerate adoption in the enterprise as well as foster
innovation in the open source community. Day 1 included demos of
multi-stage docker builds (introduced in Docker 17.04), which is an
incredibly cool feature. During the keynote, Docker also announced two
new open source projects for system builders who want to create their
own modular container-based systems. With the Moby Project, Docker has
essentially created a Fedora/RHEL split that enables users to
build container-based systems from a component library and reference
blueprints. Darren Shepherd, Chief Architect at Rancher Labs, provides
some more background and explanation about the Moby Project and how it
affects Rancher, RancherOS, and our users
here. The second project,
LinuxKit, provides a way to build customized Linux subsystems for each
type of container, which is useful if you want to assemble your own
Linux distribution for specialized hardware or features. LinuxKit is
based on containerd, which Docker contributed to the CNCF in March of
this year. In a LinuxKit system, containerd allows each system daemon
or system service to be allocated its own container on a minimal Linux
kernel. Docker’s announcement of LinuxKit generated
a lot of interest for RancherOS. Our GA
announcement
turned out
to be extremely well-timed! We’ve been working on RancherOS for a
couple of years, so it’s great to see how much interest there is in
small footprint Linux operating systems. However, I would like to make
sure it’s clear that LinuxKit and RancherOS serve different purposes.
LinuxKit enables you to build your own static Linux distro. RancherOS
is a minimal stable Linux made from containers for containers, which
uses cloud-init to run container services. We plan to evaluate whether
we can use LinuxKit to build some RancherOS components.
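
As an aside on the multi-stage builds demoed in the Day 1 keynote: a
multi-stage Dockerfile compiles in a full toolchain image and copies
only the resulting artifacts into a small runtime image. This is a
minimal sketch with illustrative image names and paths; it requires a
Docker release with multi-stage support:

FROM golang:1.8 AS builder
WORKDIR /go/src/app
COPY . .
RUN CGO_ENABLED=0 go build -o /server .

FROM alpine:3.5
COPY --from=builder /server /server
ENTRYPOINT ["/server"]

The production image carries the compiled binary but none of the build
toolchain, which keeps it small and shrinks its attack surface.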

Container orchestration

In 2016, we invested a great deal to make Rancher the only product in
the market that supports multiple orchestrators. This has been a unique
differentiating attribute for the Rancher container management platform,
and has brought us a great deal of customer interest. As the
orchestrators become more complex, it is increasingly important for us
to provide the high-quality support that our customers demand. I was
happy to see that the Rancher Labs’
announcement
to
embed Docker Enterprise Edition (EE) into the Rancher platform as well
as provide support was included in the keynote. This is an important
announcement for Rancher Labs as bundling Docker EE enables us to focus
our engineering efforts on Kubernetes, while still being able to offer
enterprise-grade support to Swarm customers. There was understandably
less talk about Kubernetes at this show. Still, the opportunity for
Kubernetes is very real. Last year the big challenge for Kubernetes was
how to set it up. The Rancher platform’s Kubernetes environment
addressed that problem beautifully. The big challenge this year,
however, is how to operate Kubernetes without a skilled SRE team.
Imagine being able to leave a Rancher Kubernetes environment running for
years without having to worry about hosts disconnecting, networks
breaking, load balancer failures, or etcd problems! That’s what we’re
delivering this year. Many Rancher users use Cattle, the integrated
container orchestration embedded within Rancher. Cattle is vitally
important as a bootstrap orchestrator for various infrastructure
services. We think of it more as a seamless extension of the “docker
run” experience than a complex orchestration framework with a steep
learning curve. We believe there is an opening for simple and
easy-to-use orchestration frameworks like Cattle.

Thanks for all the great feedback

At previous DockerCon conferences many attendees heard about Rancher for
the first time. Most people I met at the booth this year have already
heard of Rancher and are already Rancher users. With over 33M downloads
Rancher is definitely gaining traction, and we were happy to receive so
much attention at the show. When we met with users and customers, while
they gave us plenty of compliments, they were also not shy about areas
they want Rancher to improve. By learning from our users and customers,
I have no doubt we can continue to improve the product in the coming
weeks and months. We will continue to delight our customers and users! I
also had a lot of conversations with storage industry people about
Project Longhorn,
which we announced on Monday. The idea of microcontrollers for storage
resonated with many of them. Most application developers and operations
people, however, really just want to see a system that works better than
what they already have. Now that we have announced the project, the real
work begins. We intend to integrate Longhorn into Kubernetes and
Rancher, and demonstrate that it delivers unique value.

Rancher Labs at DockerCon

Rancher Labs was a Gold Sponsor for DockerCon, and many attendees
stopped by our booth Monday evening through Wednesday afternoon to
request a demo or just to say hi. NetApp also had a demo in their booth
showing how to deploy nDVP using the Rancher catalog, which had a steady
stream of interest. The Rancher Labs team also spent some time in the
CNCF booth educating attendees on the value of the Foundation. During a
breakout session, Darren Shepherd, our Chief Architect, presented a
session titled “Using Containers Shouldn’t Be This Hard”. He spoke to
about 100 attendees about the complexities of using containers in
production and provided guidance on how to implement load balancers,
orchestrators, etc. We had multiple customers and users stop by our open
office hours on Tuesday afternoon. There were a variety of different
questions, from troubleshooting current implementations to discussing
support options. We also hosted guests from Japan at a Rancher JP
meetup, and many of you joined us to kick up your heels with Rancher
Labs and {code} by Dell EMC at Austin’s iconic Container Bar on Rainey
Street. This is a really exciting time for the Docker community. I look
forward to hearing about the next wave of innovations as well as sharing
some of our own at DockerCon Europe. We’ll see you in Copenhagen!

Press Release: Rancher Labs Partners with Docker to Embed Docker Enterprise Edition into Rancher Platform

Tuesday, April 18, 2017

Docker Enterprise Edition technology and support now available from Rancher Labs

Cupertino, Calif. – April 18, 2017 – Rancher Labs, a provider of
container management software, today announced it has partnered with
Docker to integrate Docker Enterprise Edition
(Docker EE) Basic into its Rancher container management platform. Users
will be able to access the usability, security and portability benefits
of Docker EE through the easy-to-use Rancher interface. Docker provides
a powerful combination of runtime with integrated orchestration,
security and networking capabilities. Rancher provides users with easy
access to these Docker EE capabilities, as well as the Rancher
platform’s rich set of infrastructure services and other container
orchestration tools. Users will now be able to purchase support for both
Docker Enterprise Edition and the Rancher container management platform
directly from Rancher Labs. “Since we started Rancher Labs, we have
strived to provide users with a native Docker experience,” said Sheng
Liang, co-founder and CEO, Rancher Labs. “As a result of this
partnership, the native Docker experience in the Rancher platform
expands to include Docker’s enterprise-grade security, management and
orchestration capabilities, all of which is fully supported by Rancher
Labs.” Rancher is a comprehensive container management platform that, in
conjunction with Docker EE, helps to further reduce the barriers to
adopting containers. Users no longer need to develop the technical
skills required to integrate a complex set of open source technologies.
Infrastructure services and drivers, such as networking, storage and
load balancers, are easily configured for each Docker EE environment.
The robust Rancher application catalog makes it simple to package
configuration files as templates and share them across the organization.
The partnership enables Rancher customers to obtain official support
from Rancher Labs for Docker Enterprise Edition. Docker EE is a fully
integrated container platform that includes built in orchestration
(swarm mode), security, networking, application composition, and many
other aspects of the container lifecycle. Rancher users will now be able
to easily deploy Docker Enterprise Edition clusters and take advantage
of features such as:

  • Certified infrastructure, which provides an integrated
    environment for enterprise Linux (CentOS, Oracle Linux, RHEL, SLES,
    Ubuntu), Windows Server 2016, and cloud providers like AWS and Azure.
  • Certified containers that provide trusted ISV products packaged
    and distributed as Docker containers – built with secure best
    practices and with cooperative support.
  • Certified networking and volume plugins, making it easy to
    download and install plugins into the Docker EE environment.

“The release of Docker Enterprise Edition last month was a huge
milestone for us due to its integrated, and broad support for both Linux
and Windows operating systems, as well as for cloud providers, including
AWS and Azure,” said Nick Stinemates, VP Business Development &
Technical Alliances, Docker. “We are committed to offering our users
choice, so it was natural to partner with Rancher Labs to embed Docker
Enterprise Edition into the Rancher platform. Users will now have the
ability to run Docker Enterprise Edition on any cloud from the easy to
use Rancher interface, while also benefitting from a Docker solution
that provides a simplified yet rich user experience with its integrated
runtime, multi-tenant orchestration, security, and management
capabilities as well as access to an ecosystem of certified
technologies.”

Product Availability

Rancher with Docker EE Basic is available in the US and Europe
immediately, with more advanced editions and other territories planned
for the future. For additional information on Rancher software and to
learn
more about Rancher Labs, please visit
www.rancher.com or contact
sales@rancher.com.

Supporting Resources

  • Company blog
  • Twitter
  • LinkedIn

About Rancher Labs

Rancher Labs builds
innovative, open source software for enterprises leveraging containers
to accelerate software development and improve IT operations. With
infrastructure services management and robust container orchestration,
as well as commercially-supported distributions of Kubernetes, Mesos and
Docker Enterprise Edition, the flagship
Rancher container management platform
allows users to easily manage all aspects of running containers in
production, on any infrastructure.
RancherOS is a simplified Linux
distribution built from containers for running containers. For
additional information, please visit
www.rancher.com. All product and company
names herein may be trademarks of their registered owners.
Media Contact

Eleni Laughlin, Mindshare PR, (510) 406-0798,
eleni@mindsharepr.com


Top 5 challenges with deploying docker containers in production

Friday, February 24, 2017

Docker containers make app development easier. But deploying them in production can be hard.

Software developers are typically focused on a single application,
application stack or workload that they need to run on a specific
infrastructure. In production, however, a diverse set of applications
run on a variety of technologies (e.g. Java, LAMP, etc.), which need to be
deployed on heterogeneous infrastructure running on-premises, in the
cloud or both. This gives rise to several challenges with running
containerized applications in production:

  1. Controlling the complexity of extremely dense, fast changing
    environments
  2. Taking maximum advantage of a highly volatile technology ecosystem
  3. Ensuring developers have the freedom to innovate
  4. Deploying containers across disparate, distributed infrastructure
  5. Enforcing organizational policy and controls

Controlling the complexity of extremely dense, fast changing environments

According to the June 2016 Cloud Foundry “Hope Versus Reality:
Containers in 2016” report, 45 percent of survey respondents said their
biggest deployment worry is that Docker is too complex to integrate into
their environments. A big reason for this is
the density and volatility of containerized environments. Because
operating systems and kernels do not need to be loaded for
each container, containerized environments enable better workload
density within a given amount of infrastructure than more traditional
virtualized environments. As a result, the total volume of components
that need to be created, monitored and destroyed across the production
environment is exponentially larger, significantly increasing the
complexity of managing container-based environments. Not only are there
more things to be managed, but they are also changing faster than ever
before. A Datadog survey shows that, while traditional and cloud-based
VMs have an average lifespan of almost 15 days, Docker containers have
an average lifespan of 2.5 days. The result
is an order-of-magnitude increase in the number of things that need to
be individually managed and monitored. The complexity of these dense,
fast-changing environments is further compounded by the complexity of
the architecture. Containers are typically deployed over highly
distributed environments, whether on a single cluster or across
multiple clusters. The makeup of these clusters is highly disparate, and
they may be located on-premises, in the cloud or some combination of the
two.
Organizations therefore need an easier approach to orchestrate containers and manage the
underlying infrastructure services for multi-container, multi-host
applications. This is particularly important for applications with a
microservices architecture, such as a web application that consists of a
container cluster running web servers to host multiple instances of the
frontend (for failover and load balancing), as well as multiple backend
services each running in separate containers.

Taking advantage of a highly volatile technology ecosystem

The Docker ecosystem is very volatile and complex. Over the past few
years a flurry of third-party tools and services have emerged to help
developers deploy, configure and manage their containerized workflows as
they move from development to production. Because they are based on
open source technologies, the rate at which these tools and services
change and the volume of new documentation makes it very challenging to
put together a stable technology stack to run containers in production.
It also makes it hard for companies to build and maintain the
engineering skills needed to take advantage of the rich ecosystem.
According to RightScale’s fifth annual State of the Cloud Survey, for
companies who are not currently using containers, lack of experience was
by far the top challenge (39 percent) for container
adoption.

Ensuring developers have the freedom to innovate

In simplifying container management, it’s important not to lose the
flexibility developers require to innovate. They need to be able to
pick and choose the tools and frameworks they want to use when they need
them. RedMonk refers to this as the “era of permissionless
development”. When asked to solve a problem,
most developers no longer ask what tools they can use, they look for the
best tool for the job. They also prefer to use the most recent releases,
which isn’t necessarily the most stable version, so they can quickly
take advantage of any new capabilities. However, they are also
increasingly being required to take responsibility for ensuring that any
application logic they create runs in production and quickly fixing it
if it does not. This means that they also need to be able to roll back a
deployment if they run into issues. Developers require the freedom of
root access and they want to be able to install any open source software
they like. This is why they typically avoid traditional platform as a
service (PaaS) solutions. PaaS abstracts away containers, so developers
can focus on coding instead of managing containers. However, PaaS
solutions are also proprietary and not as versatile as home-grown open
source stacks. They constrain the developers’ ability to innovate by
locking them into one vendor or infrastructure provider.

Deploying containers across disparate, distributed infrastructure

One of the primary benefits of containers is that they are portable—an
application and all its dependencies can be bundled into a single
container, which is independent from the host version of Linux kernel,
platform distribution or deployment model. This container can be
transferred to another host running Docker and executed without
compatibility issues. Infrastructure services vary dramatically between
clouds and data centers, however, making real application portability
almost impossible without architecting around those differences in the
application. Using containers to make applications portable across
diverse infrastructure therefore requires more than just a standardized
unit for shipping code. It requires infrastructure services, which
include:

  1. Hosts (CPU, memory, storage and network connectivity) running Docker
    containers, including virtual machines or physical machines running
    on-premises as well as on the cloud
  2. A network that enables containers on different hosts to communicate
    with each other using either coordinated port mappings or software
    defined networking
  3. Load balancers to expose services to the Internet
  4. DNS, which is commonly used to implement service discovery
  5. Integrated health checks that ensure only healthy containers are
    used to serve requests
  6. A way to perform actions triggered by certain events, such as
    restarting new containers after a host fails, ensuring a fixed
    number of healthy containers are available or ensuring new hosts and
    containers are created in response to increased load
  7. A way to scale services by creating new containers from existing
    containers
  8. Storage snapshot and backup for backing up a stateful container for
    disaster recovery purposes

Nowadays, Kubernetes provides all of the above infrastructure services
out of the box, improving the developer experience by allowing teams to
focus on development itself.
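
As a concrete illustration of items 5 and 6 in the list above, here is
a minimal Kubernetes sketch of a container with a liveness probe (the
names and probe path are illustrative); the kubelet restarts the
container automatically whenever the probe fails:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10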

Enforcing organizational policy and controls

There are security and compliance concerns related to deploying
containers that must be addressed for larger enterprises to use them in
production, particularly those in regulated industries such as finance
and healthcare. Companies such as Docker have continued to push for
fixes and create new software and integrations across the toolchain to
cope with this problem. However, there is still a lack of parity between
application container security and what enterprises are used to with
virtual machines. This includes enforcing organizational policy and
ensuring secure access to the containers and cluster administration,
including managing certificates for transport layer security (TLS).
Users and groups need to be able to share or deny access to resources
and environments (e.g. development or production) via role-based access
control (RBAC). User authentication requires integration with Active
Directory, LDAP and/or GitHub.
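
As a sketch of what this looks like in Kubernetes terms (the group and
namespace names are hypothetical), a Role grants read-only access to
Pods in one namespace, and a RoleBinding attaches that Role to a group
supplied by the Active Directory, LDAP or GitHub integration:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: development
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-read-pods
  namespace: development
subjects:
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io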

SUSE Rancher container management platform can help

Containers make software development easier, enabling you to write code
faster and run it better. However, running containers in production can
be hard. There are a wide variety of technologies to integrate and
manage, and new tools are emerging every day.
SUSE Rancher makes it easy for you to manage
all aspects of running containers. You no longer need to develop the
technical skills required to integrate a complex set of open source
technologies.

SUSE Rancher includes everything you need to make
containers work in production on any infrastructure. A portable layer of
infrastructure services is easily configured and integrated. An
easy-to-use interface enables you to take advantage of a rich set of
orchestration features and then deploy your containers with a single
click. The robust application catalog makes it simple to package
configuration files as templates and share them across your
organization. With millions of downloads and enterprise-class support,
SUSE Rancher has quickly become the open source platform of choice
for running containers in production.

It’s easy to get started with Rancher.

Just follow these steps:

  1. Download – SUSE Rancher is deployed as a
    set of container images, easy to deploy on your cluster or even your laptop.
  2. Get started – Deploying SUSE Rancher takes less than 5 minutes if you follow the steps in
    the quick start guide.
  3. Use the docs – SUSE Rancher is incredibly easy
    to use. However, there’s a wealth of information in the technical
    documents in case you need it.
  4. Take advantage of our awesome community of
    users
    – The forums are the best place to
    hear about the latest product releases as well as interact with your
    peers and Rancher engineers.

Resources:

[1] https://www.cloudfoundry.org/hope-versus-reality-containers-in-2016/
[2] https://www.datadoghq.com/docker-adoption/
[3] http://redmonk.com/fryan/2016/02/16/docker-containers-and-the-cio/

Playing Catch-up with Docker and Containers

Friday, February 17, 2017

This article is essentially a guide to getting started with Docker for
people who, like me, have a strong IT background but feel a little
behind the curve when it comes to containers. We live in an age where
new and wondrous technologies are being introduced into the market
regularly. If you’re an IT professional, part of your job is to identify
which technologies are going to make it into the toolbox for the average
developer, and which will be relegated to the annals of history. Docker
is one of those technologies that sounded interesting when it first
debuted in 2013, but was easy to ignore because at the time it was not
clear whether Docker would ever graduate beyond something that
developers liked to play with in their spare time. Personally, I didn’t
pay close attention to Docker containers in Docker’s early days. They
got lost amid all the other noise in the IT world. That’s why, in 2016,
as Docker continued to rise in prominence, I realized that I’d missed
the container boat. Docker was becoming a must-know technology, and I
was behind the curve. If you’re reading this, you may well be in a
similar position. But there’s good news: container technology, and
Docker specifically, is not hard to pick up and learn if you already
have a background in IT.

Sure, containers can be a little scary when you’re first getting
started, just like any new technology. But rest assured that it’s not
too late to get on the container train, even if you weren’t writing
Docker files back in 2013. I’ll explain what Docker is and how container
technology works, then go through the first steps in setting Docker up
on your workstation and getting a container running that you can
interact with. Finally, I’ll direct you to some of the resources I used
to familiarize myself with Docker, so you can continue your journey.

What is Docker and How Does it Work?

Docker is a technology that allows you to create and deploy an application
together with a filesystem and everything needed to run it. The Docker
container, as it is called, can be installed on any machine, as long as
the Docker engine has been installed, and can be expected to always run
in the same manner. A physical machine with the Docker Engine installed
can host multiple Docker containers, each sharing the resources of the
host machine. You may already be familiar with machine virtualization,
either as a result of running local virtual machines using VMware on
your workstations, or interacting with cloud services like Amazon Web
Services or Microsoft Azure. Container technology is similar in some
ways, and different in others. Let’s start by comparing the two by
looking at the diagram below which shows the basic structure of a
machine hosting Docker containers, and another hosting virtual machines.
In both cases the host machine has its infrastructure and host operating
system. Virtual machines then require a hypervisor which is software or
firmware that allows virtual machines to be hosted. The virtual machines
themselves each contain their own operating system and the application,
together with its required binaries, libraries and any other
dependencies. Similarly, the machine hosting the Docker containers has
its own infrastructure and operating system. Instead of the hypervisor,
it has the Docker Engine installed, and this is what interacts with the
containers. Each container holds its application and the required
binaries, libraries and other dependencies. It is important to note that
they don’t require their own guest operating system. This allows the
containers to be significantly smaller in size, and able to be
distributed, deployed and started in a fraction of the time taken by
virtual machines.

Other key differences are that virtual machines have specifically
allocated access to the system resources, while Docker containers share
host system resources through the Docker engine.

Installing Docker and Discovering Docker Hub

I can’t think of a better way to learn about new technology than to
install it, and get your hands dirty. Let’s install the Docker Engine on
your workstation and a simple Docker container. Before we can deploy a
container, we’ll need the Docker Engine. This is the platform that will
host the container and allow it to interact with the underlying
operating system. You’ll want to pick the appropriate download from the
Docker products page, and
install it on your workstation. Downloads are available for OS X,
Windows, Linux, and a host of other operating systems. Once we have the
Docker platform installed, we’re now ready to get a container running.
Before we do that though, let’s familiarize ourselves with Docker
Hub
. Docker Hub is a central repository for
Docker Container images. Let’s pretend that you’re working on a Windows
machine, and you’d like to deploy an app on SUSE Linux. If you go to
Docker Hub, and search for openSUSE, you’ll be shown a list of
repositories. At the time of writing there were 212 repositories listed.
You’ll want to look for the “official” repository. The official
repositories are maintained by a team of engineers sponsored by Docker.
Official repositories have clear documentation and promote best
practices. Now search for BusyBox. BusyBox is a tiny suite of Unix
utilities packed into a single executable, which provides all of the
functionality we’ll need for this example. If you go to the official
repository, you’ll be able to read some good documentation on the image.
Let’s get a BusyBox container running on your workstation.

Getting Your First Container Running

Assuming you’ve installed the Docker Engine, open a new command prompt
on your workstation. If you’re on a Windows machine, I’d recommend using
the Docker Quick Start link which was included as part of your
installation. This will launch an interactive shell that will make it
easier to work with Docker. You don’t need this on macOS or
Linux-based systems. Enter the following command:

$ docker run -it --rm busybox

This will search the local machine for the latest BusyBox image, and
then download it from DockerHub if it isn’t found. The process should
take only a couple of seconds, and you should have something similar to
the text shown below on your screen:

$ docker run -it --rm busybox
Unable to find image `busybox:latest` locally
latest: Pulling from library/busybox
4b0bbc1c4050b: Pull complete
Digest: sha256:817a12c32a39bbe394944ba49de563e08f1d3c5266eb89723256bc4448680e
Status: Downloaded newer image for busybox:latest
/ #

We started a new Docker container, using the BusyBox image. We used the
-it parameters to specify that we want an interactive, pseudo-TTY
session, and the --rm flag indicates that we want to delete the
container once we exit it. If you execute a command like `ls`, you’ll see
that you have access to a new Linux filesystem. Play around a little,
and when you’re done, enter `exit` to exit the container, and remove
it from the system. Congratulations! You’ve now created, interacted
with, and shut down your own Docker container.

Creating Your Own Docker Image

Being able to start up and close down a container is fun, but it doesn’t
have much practical use. Let’s start a new container, install something
on it, and then save it as a container for someone else to use. We’ll
start with a Debian container, install Git on it, and then save it for
later use. This time, we’ll start the container without the --rm flag,
and we’ll specify a version to use as well. Type the following into your
command prompt:

$ docker run -it debian:jessie

You should now have a Debian container running—specifically the jessie
tag/release from Docker Hub. Type the `git` command when you have the
container running. You should observe something similar to the
following:

root@4a4882a7ed59:/# git
bash: git: command not found

So it appears this container doesn’t have Git installed. Let’s rectify
that situation by installing Git:

root@4a4882a7ed59:/# apt-get update && apt-get install -y git

This may take a little longer to run, but it will update the apt-get
utility, and then install Git. When it finishes up, type `git` again.
Voila! At this point, we have a container started, and we’ve installed
Git. We started the container without the --rm parameter, so when we
exit it, it won’t destroy the container. Let’s exit now. Type `exit`.
Now we want to get the ID of the container we just ran. To find this, we
type the following command:

$ docker ps -a

You should now see a list of recent containers. My results looked
similar to what’s below:

CONTAINER ID       IMAGE            COMMAND       CREATED        STATUS                          PORTS       NAMES
4a4882a7ed59       debian:jessie    "/bin/bash"   9 minutes ago  Exited (1) About a minute ago               hungry_fermat

It can be a little hard to read, especially if the results get wrapped
in your command window. What we’re looking for is the container ID,
which in my case was 4a4882a7ed59. Yours will be different, but similar
in format. Run the following command, replacing my container ID with
yours. test:example is an arbitrary name as well—test will be the
name of your saved image, and example will be the version or tag of
that image.

$ docker commit 4a4882a7ed59 test:example

You should see a sha256 response once the container is saved. Now, run
the following to list all the images available on your local machine:

$ docker images

Docker will list the images on your machine. You should be able to find
a repository called test with a tag of example. Let’s see if it worked.
Start up your container using the following command, assuming you saved
your image with the same name and tag as I did.

$ docker run -it test:example

Once you have it running, try to execute the git command. It should
return a list of possible options for Git. You did it! You created
a custom image of Debian with Git installed. You're practically a Docker
Master at this point.
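
As an aside, the same image can be produced in a more repeatable way
with a Dockerfile instead of docker commit. Here is a minimal sketch
that mirrors the manual steps above:

# Dockerfile: recreate the manual steps above
FROM debian:jessie
RUN apt-get update && apt-get install -y git

Build it with the same name and tag we used earlier:

$ docker build -t test:example .

Checking a Dockerfile like this into version control makes the image
easy to rebuild and audit later.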

Following the Container Ecosystem

Using containers effectively also requires a familiarity with the trends
that are defining the container ecosystem. In 2013, when Docker debuted,
the ecosystem consisted of, well, Docker. But it has changed in big ways
since then. Orchestrators, which automate the provisioning of
infrastructure for containers, have evolved and become an essential part
of large-scale container deployment. Storage options have become more
sophisticated, simplifying the task of moving data between containers
and external, persistent storage systems. Monitoring solutions for
containers have been extended from basic tools like the Docker stats
command to include commercial monitoring and APM tools designed for
containers. And Docker now even runs on Windows as well as Linux (albeit
with some important caveats, like limited networking support at this
time). Discussing all of the container ecosystem trends in detail is
beyond the scope of this article. But in order to make the most of
containers, you should follow the news in the container ecosystem to
gain a sense of what is coming next as containers and the solutions that
support them become more and more sophisticated.
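
As a quick taste of the basic built-in tooling mentioned above, the
docker stats command streams live resource usage for running
containers; the --no-stream flag prints a single snapshot instead:

$ docker stats --no-stream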

Continuing to Learn About Containers

Obviously this just scratches the surface of what containers offer, but
it should give you a good start, and enough of a base of understanding
to create, modify and deploy your own containers locally. If you would
like to know more about Docker, the Web is full of useful tutorials and
additional information.

Mike Mackrory is a global citizen who has settled down in the Pacific
Northwest – for now. By day he works as a Senior Engineer on a Quality
Engineering team and by night he writes, consults on several web-based
projects, and runs a marginally successful eBay sticker business. When
he's not tapping on the keys, he can be found hiking, fishing and
exploring both the urban and the rural landscape with his kids.


Resilient Workloads with Docker and Rancher: Part 5

Tuesday, January 24, 2017

This is the last part in a series on designing resilient containerized
workloads. In case you missed it, Parts 1, 2, 3, and 4 are already
available online.
In Part 4 last week, we covered in-service and
rolling updates for single and multiple hosts. Now, let’s dive into
common errors that can pop up during these updates:

Common Problems Encountered with Updates

Below is a brief accounting of all the supporting components required
during an upgrade. Though the Rancher UI does a great job of presenting
the ideal user experience, it does hide some of the complexity that
comes with operating container deployments in production:

[Diagram: Rancher upgrade support, showing Rancher-managed (blue) and
user-managed (yellow) layers]

The blue indicates parts of the system under control by Rancher. The
types of bugs that exist on this layer require the end user to be
comfortable digging into Rancher container logs. We briefly discussed
ways to dig into Rancher networking in Part 2.
Another consideration is scaling Rancher Server along with your
application size; since Rancher writes to a relational database, the
entire infrastructure may be slowed down by I/O and CPU issues as
multiple services are updated. The yellow parts of the diagram indicate
portions of the system managed by the end user, which requires some
degree of knowledge of setting the infrastructure hosts up for
production. Otherwise, a combination of errors in the yellow and blue
layers will create very odd problems for service upgrades that are very
difficult to replicate.

Broken Network to Agent Communication

Suppose I shut off node2 (the one with my WordPress containers). How
does a lost host affect the ability of Rancher to coordinate?

$> vagrant halt node2

Rancher did not automatically migrate my containers, because there was
no health check in place. We need to establish a health check for the
containers to migrate automatically; with one in place, our
‘mywordpress’ service will automatically migrate to other hosts.
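
As a sketch of what that configuration looks like, Rancher health
checks are declared per service in rancher-compose.yml; the values
below are illustrative rather than prescriptive:

mywordpress:
  scale: 2
  health_check:
    # TCP check against the container port; add request_line for an HTTP check
    port: 80
    interval: 2000
    response_timeout: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
    # recreate containers that fail the check
    strategy: recreate
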
In this case, our application went down because the load balancer on
node1 was unable to route traffic to services on node2. Since our
application utilizes an HAProxy load balancer that is dynamically
configured by Rancher, this problem is a combination of user- and
Rancher-managed networking issues. In this setting, though the app
sees downtime, an upgrade still works as expected:

$> rancher-compose up --upgrade --force-upgrade

Rancher starts the containers on node1 (which is reachable) and then
marks node2 as being in a reconnecting state.

Agent on Host Unable to Locate Image

Suppose now that I have a custom repository for my organization. We
will need to add a registry to the Rancher instance to pull custom
images. A host node might not have access to my custom registry. When
this occurs, Rancher will continuously attempt to maintain the scale of
the service and keep cycling the containers. Rancher addresses registry
access by allowing various Docker registries to be added in Rancher;
the credentials are kept on the agents and are not available to the
host. It is also important to check whether a host node has network
access to the registry. A firewall rule or an IAM profile
misconfiguration on AWS may cause your host to fail to pull the image,
even with proper credentials. This is an issue where errors most
commonly reside in the yellow, user-controlled infrastructure.

Docker on Host Frozen

Running a resilient Docker host is a technical challenge in itself.
When using Docker in a development capacity, all that is required is
installing the Docker engine. In production, the Docker engine requires
many configuration choices, such as the production OS, the type of
storage driver to be used, and how much space to allocate to the Docker
daemon, all of which are part of the user-controlled environment.
Managing a reliable Docker host layer carries problems similar to those
in traditional hosting: both require maintaining up-to-date software on
bare metal. Since the Rancher agent directly interfaces with the Docker
daemon on the host, if the Docker daemon is unresponsive, then Rancher
components have little control over it. An example of a Docker daemon
failure preventing a rollback is when the old containers reside on one
host, but the Docker daemon freezes up. This usually requires the
daemon or host to be force-rebooted; sometimes the containers will be
lost, and our rollback candidates are purged.
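
One way to make those configuration choices explicit and repeatable is
to pin them in the Docker daemon's configuration file. Below is a
minimal sketch of /etc/docker/daemon.json with illustrative values
(overlay2 assumes a suitably recent kernel):

{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}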

Problems with In-Service Deployments

A short list of issues we encountered or discussed in our experiments:

  • Port conflicts
  • Service state issues
  • Network routing issues
  • Registry authentication issues
  • Moving containers
  • Host issues

In summary, in-service deployments suffer from the following issues:

  • Unpredictable under failure scenarios
  • Rollbacks don’t always work

If you would like to dig more into CI/CD theory, you can continue your
deep dive with an excerpt from the CI/CD Book.
In general, we can have confidence that stateless application containers
behind load balancers (like a Node.js app or WordPress) can be quickly
redeployed when needed. However, more complex stacks with interconnected
behavior and state require a different deployment model. If the
deployment process is not exercised daily in a CI/CD pipeline, an ad-hoc
in-place update may surface unexpected bugs that require the operator to
dig into Rancher's behavior. Coupled with multiple lower-layer failures,
this can make upgrading more complicated applications a dangerous
proposition. This is why we introduce the blue-green deployment method.

Blue-Green Deployment

With in-place updates having so many avenues of failure, one should only
rely on them as part of a CI/CD pipeline that exercises the updates in a
repeatable fashion. When the deployment is regularly exercised and bugs
are fixed as they arise, the cost of deployment issues is negligible for
most common web apps and lightweight services. But what if the stack due
for an update is rarely touched? Or what if it requires many complex
data and network interactions? A blue-green deployment pattern creates
breathing space to upgrade the stack more reliably.
The Blue-Green Deployment section of the CI/CD book details how to
leverage internet routing to redirect traffic to another stack. So
instead of keeping traffic coming into a service or stack while it is
updating, we make upgrades in a separate stack and then adjust the DNS
entry or proxy to switch over the traffic.

Once the blue stack is updated, we update the load balancer to redirect
traffic to it; the stack that had been serving traffic then becomes our
staging environment, where we make changes and upgrades until the next
release. To test such a setup, we deploy a new stack consisting only of
a load balancer. We set it as a global service, so it will intercept
requests on port :80 on each host. We then modify our WordPress
application to expose its port :80 internally only, so no conflicts
occur.

We then clone this setup to another stack, and call it
wordpress-single-blue.

mywordpress:
  tty: true
  image: wordpress
  links:
    - database:mysql
  stdin_open: true
database:
  environment:
    MYSQL_ROOT_PASSWORD: pass1
  tty: true
  image: mysql
  volumes:
    - '/data:/var/lib/mysql'
  stdin_open: true
wordpresslb:
  # internal load balancer for the stack
  expose:
    - 80:80
  tty: true
  image: rancher/load-balancer-service
  links:
    - mywordpress:mywordpress
  stdin_open: true
.
.
.
wordpresslb:
  image: rancher/load-balancer-service
  ports:
    # listen on public port 80 and direct traffic to private port 80 of the service
    - 80:80
  external_links:
    # target services in a different stack are listed as external links
    - wordpress-single/wordpresslb:mywordpress
    # - wordpress-single-blue/wordpresslb:mywordpress
  labels:
    - io.rancher.scheduler.global=true
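
When it is time to cut over, the switch is just an edit to the external
links above (comment out the wordpress-single target and uncomment the
wordpress-single-blue one), followed by an upgrade of the load balancer
service, for example with the same command used earlier:

$ rancher-compose up --upgrade --force-upgrade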

Summary of Deployments on Rancher

We can see that using clustered environments entails a lot of
coordination work. Leveraging the internal overlay network and various
routing components to support the containers provides flexibility in how
we maintain reliable services within Rancher, but it also increases the
amount of knowledge needed to properly operate such stacks. A simpler
way to use Rancher would be to lock environments to specific hosts with
host tags. Then we can schedule containers onto specific hosts, and use
Rancher as a Docker manager. While this doesn't use all the features
that Rancher provides, it is a good first step to ensure reliability,
and uses Rancher to model a deployment environment that you are
comfortable with. Rancher supports container scheduling policies that
are modeled closely after Docker Swarm's. The Rancher documentation on
scheduling includes scheduling based on:

  • port conflicts
  • shared volumes
  • host tagging
  • shared network stack: ‘net=container:dependency’
  • strict and soft affinity/anti-affinity rules, using both
    environment variables (Swarm) and labels (Rancher)

You can organize your cluster environment however you need to create a
stack you are comfortable with; a small scheduling sketch follows below.
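
As an illustration, host tags and affinity rules are expressed as
scheduling labels on a service in docker-compose.yml. Below is a minimal
sketch; the web service and the region=us-west host tag are
hypothetical:

web:
  image: nginx
  labels:
    # only schedule onto hosts tagged region=us-west (hypothetical tag)
    io.rancher.scheduler.affinity:host_label: region=us-west
    # soft anti-affinity: prefer hosts not already running this service
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}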

Supporting Elements

Rancher is a combination of blue, Rancher-managed components and
yellow, end-user-managed components, so we want to pick robust
components and processes to support our Rancher environment. In this
section, we briefly go through the following:

  • Reliable Registry
  • Encoding Services for Repeatable Deployments
  • Reliable Host

Reliable Registry

The first step is to get a reliable registry. A good read is the
Rancher blog post on the requirements to set up your own private
registry. If you are unsure, you may be interested in taking a look at
Container Registries You Might Have Missed; this article evaluates
registry products, and can be a good starting point for picking out a
registry for your use cases. Though, to save on some mental clutter, we
recommend taking a look at an excellent article on Amazon Elastic
Container Registry (ECR) called Using Amazon Container Registry
Service. AWS ECR is a fully managed private Docker registry. It is
cheap (S3 storage prices) and provides fine-grained permission
controls.

Caveat: be sure to use specific image tags for production, e.g. :prod,
:dev, or :version1. With multiple builds, we would like to avoid one
developer's build overwriting another's under the same :latest tag.
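
As a sketch of such tagging in practice, the commands below tag and
push a production image to ECR; the account ID, region, and repository
name are placeholders, and pushing first requires logging in (e.g. via
the AWS CLI's ECR login helper):

$ docker tag myapp:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp:prod
$ docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp:prod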

Encoding Services for Repeatable Deployments

We have been encoding our experiments in compose files over the past
few blog posts. Though docker-compose.yml and rancher-compose.yml files
are great for single services, we can extend this to a large team
through the use of Rancher catalogs. There is an article series on
Rancher Catalog Creation, along with the documentation for Rancher
catalogs. This goes hand in hand with the blue-green deploy method, as
we can just start a new catalog under a new name, then redirect our
top-level load balancer to the new stack.

Reliable Hosts

Setting up a Docker host is also critical. For development and testing,
using the default Docker installation is enough. Docker itself has many
resources, in particular a commercial Docker engine and a compatibility
matrix of known stable configurations. You can also use Rancher to
create AWS hosts for you, as Rancher provides an add-host feature that
provisions the agent for you; this is covered in the documentation.
Building your own hosts allows for some flexibility for more advanced
use cases, but will require some additional work on the user's end. In
a future article, we will take a look at how to set up a reliable
Docker host layer as part of a complete Rancher deployment from
scratch.

Summary

Now we have a VM environment that can be spun up with Vagrant to test
the latest Rancher. It should be a great first step for testing
multi-node Rancher locally to get a feel for how upgrades work for your
applications. We have also explored the layers that support a Rancher
environment, and highlighted parts that may require additional
attention, such as hosting and Docker engine setup. With this, we hope
to have provided a good guide to the additional steps your
infrastructure needs prior to a production deployment.

Nick Ma is an Infrastructure Engineer who blogs about Rancher and Open
Source. You can visit Nick's blog, CodeSheppard.com, to catch up on
practical guides for keeping your services sane and reliable with
open-source solutions.
