Rancher 2.6.7 Delivers Kubernetes 1.24 Support, AWS Marketplace Availability and RKE2 Encryption Key Rotation

Tuesday, 23 August, 2022

SUSE is happy to announce the latest release of Rancher, 2.6.7. In this release, we have added several new features:

  • Kubernetes 1.24 support
  • AWS Marketplace support
  • Azure Active Directory (Azure AD) with MSAL (Microsoft Authentication Library)
  • RKE2 Encryption Key Rotation

Our latest Kubernetes release addition means Rancher can now manage clusters with the newer capabilities offered by upstream Kubernetes 1.24. This applies to RKE, RKE2, and K3s distributions as well as any CNCF-certified Kubernetes you wish to manage through Rancher.

With our AWS Marketplace support, customers can now purchase support contracts for Rancher directly through the AWS marketplace. This means there are no additional procurement processes to go through if your organization already has a commercial relationship with AWS. This also allows customers to use their Enterprise Discount Program (EDP) spending commitments with AWS towards Rancher, which is helpful for large consumers of AWS products and services. Look for a detailed blog on this AWS Marketplace support to follow shortly.

Rancher’s support for integrating with Azure AD has been upgraded to use the new Microsoft Graph API (AKA MSAL). Microsoft is decommissioning the old API before the end of the year, and all users will need to switch to the newer standard. You can read more about this change and any steps you may need to take here.

RKE2 Encryption Key Rotation gives security-minded customers an easy way to improve their security posture. Rotating encryption keys is a best practice under many security standards and compliance frameworks, so this feature makes it easier to uplevel the security of your RKE2 clusters. Key rotation is another way that RKE2 continues to invest in strengthening the security posture of Kubernetes for the enterprise.
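As a rough illustration, rotation is driven by the rke2 secrets-encrypt subcommands on a server node. The sketch below follows the RKE2 documentation of this era; the exact flow may differ by version, and Rancher-provisioned RKE2 clusters can also trigger rotation from the cluster configuration rather than the CLI.

```
# Check the current state of secrets encryption on an RKE2 server node
rke2 secrets-encrypt status

# Multi-step rotation: prepare, rotate, then re-encrypt existing secrets,
# restarting the rke2-server service between phases
rke2 secrets-encrypt prepare
systemctl restart rke2-server
rke2 secrets-encrypt rotate
systemctl restart rke2-server
rke2 secrets-encrypt reencrypt
```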

Learn More and Try Rancher 2.6.7

Did you know that Rancher customers can join a quarterly Customer Advisory Board and share direct feedback with Product and Engineering leaders? Speak to your CSM or Account Executive if you aren’t already signed up. We look forward to your feedback on this latest Rancher release!

Try out the latest Rancher release here

Integrate AWS Services into Rancher Workloads with TriggerMesh

Wednesday, 9 September, 2020
Don’t miss the Computing on the Edge with Kubernetes conference on October 21.

Many businesses use cloud services on AWS and also run workloads on Kubernetes and Knative. Today, it’s difficult to integrate events from AWS to workloads on a Rancher cluster, preventing you from taking full advantage of your data and applications. To trigger a workload on Rancher when events happen in your AWS service, you need an event source that can consume AWS events and send them to your Rancher workload.

TriggerMesh Sources for Amazon Web Services (SAWS) are event sources for AWS services. Now available in the Rancher catalog, SAWS allows you to quickly and easily consume events from your AWS services and send them to your workloads running in your Rancher clusters.

SAWS currently provides event sources for a number of Amazon Web Services, including the Amazon SQS source used later in this post.

TriggerMesh SAWS is open source software that you can use in any Kubernetes cluster with Knative installed. In this blog post, we’ll walk through installing SAWS in your Rancher cluster and demonstrate how to consume Amazon SQS events in your Knative workload.

Getting Started

To get you started, we’ll walk you through installing SAWS in your Rancher cluster, followed by a quick demonstration of consuming Amazon SQS events in your Knative workload.

SAWS Installation

  1. TriggerMesh SAWS requires the Knative serving component. Follow the Knative documentation to install Knative serving in your Kubernetes cluster. Optionally, you may also install the Knative eventing component for the complete Knative experience. We created our cluster with the GKE provider and used Kong as the networking layer, so we looked up the external IP of the Kong proxy with:
    kubectl --namespace kong get service kong-proxy

    The LoadBalancer service is assigned an external IP, which is necessary to access the service over the internet.

  2. With Knative serving installed, search for aws-event-sources from the Rancher applications catalog and install the latest available version from the helm3-library. You can install the chart in the Default namespace.

    Image 01

Remember to update the Knative Domain and Knative URL Scheme parameters during the chart installation. For example, in our demo cluster we used Magic DNS (xip.io) for configuring the DNS in the Knative serving installation step, so we specified 34.121.24.183.xip.io and http as the values of Knative Domain and Knative URL Scheme, respectively.

That’s it! Your cluster is now fully equipped with all the components to consume events from your AWS services.
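One quick way to sanity-check the installation (a hedged sketch; exact resource names depend on the chart version) is to confirm that the TriggerMesh CRDs are registered and the controller pods are running in the namespace you installed into:

```
# List the resource types registered by the aws-event-sources chart
kubectl api-resources --api-group=sources.triggermesh.io

# Confirm the pods created by the chart are running
kubectl -n default get pods
```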

Demonstration

To demonstrate the TriggerMesh SAWS package functionality, we will set up an Amazon SQS queue and visualize the queue events in a service running on our cluster. You’ll need to have access to the SQS service on AWS to create the queue. A specific role is not required. However, make sure you have all the permissions on the queue: see details here.

Step 1: Create SQS Queue

Image 02

Log in to the Amazon management console and create a queue.
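If you prefer the AWS CLI to the console, something like the following should work (the queue name is just an example); the second command returns the queue ARN you will need in Step 3:

```
# Create the queue and capture its URL
QUEUE_URL=$(aws sqs create-queue --queue-name SAWSQueue --query QueueUrl --output text)

# Look up the queue ARN for use in the AWSSQSSource spec
aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
  --attribute-names QueueArn --query Attributes.QueueArn --output text
```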

Step 2: Create AWS Credentials Secret

Create a secret named awscreds containing your AWS credentials:

```
$ kubectl -n default create secret generic awscreds \
  --from-literal=aws_access_key_id=AKIAIOSFODNN7EXAMPLE \
  --from-literal=aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Update the values of aws_access_key_id and aws_secret_access_key in the above command.

Step 3: Create the AWSSQSSource Resource

Create the AWSSQSSource resource that will bring the events that occur on the SQS queue to the cluster using the following snippet. Remember to update the arn field in the snippet with that of your queue.

```
$ kubectl -n default create -f - << EOF
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: my-queue
spec:
  arn: arn:aws:sqs:us-east-1:043455440429:SAWSQueue
  credentials:
    accessKeyID:
      valueFromSecret:
        name: awscreds
        key: aws_access_key_id
    secretAccessKey:
      valueFromSecret:
        name: awscreds
        key: aws_secret_access_key
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: sockeye
EOF
```

Check the status of the resource using:

```
$ kubectl -n default get awssqssources.sources.triggermesh.io
NAME       READY   REASON   SINK                                        AGE
my-queue   True             http://sockeye.default.svc.cluster.local/   3m19s
```

Step 4: Create Sockeye Service

Sockeye is a WebSocket-based CloudEvents viewer. Our my-queue resource created above is set up to send the cloud events to a service named sockeye as configured in the sink section. Create the sockeye service using the following snippet:

```
$ kubectl -n default create -f - << EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sockeye
spec:
  template:
    spec:
      containers:
        - image: docker.io/n3wscott/sockeye:v0.5.0@sha256:64c22fe8688a6bb2b44854a07b0a2e1ad021cd7ec52a377a6b135afed5e9f5d2
EOF
```

Next, get the URL of the sockeye service and load it in the web browser.

```
$ kubectl -n default get ksvc
NAME      URL                                           LATESTCREATED   LATESTREADY     READY   REASON
sockeye   http://sockeye.default.34.121.24.183.xip.io   sockeye-fs6d6   sockeye-fs6d6   True
```

Step 5: Send Messages to the Queue

We now have all the components set up. All we need to do is to send messages to the SQS queue.
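You can send a test message from the console as shown below, or with the AWS CLI (a sketch; the queue name matches the one created earlier):

```
# Resolve the queue URL by name and send a test message
QUEUE_URL=$(aws sqs get-queue-url --queue-name SAWSQueue --query QueueUrl --output text)
aws sqs send-message --queue-url "$QUEUE_URL" --message-body '{"hello":"rancher"}'
```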

Image 03

The cloud events should appear in the sockeye events viewer.

Image 04

Conclusion

As you can see, using TriggerMesh Sources for AWS makes it easy to consume cloud events that occur in AWS services. Our example uses Sockeye for demonstration purposes: you can replace Sockeye with any of your Kubernetes workloads that would benefit from consuming and processing events from these popular AWS services.

The TriggerMesh SAWS package supports a number of AWS services. Refer to the README for each component to learn more. You can find sample configurations here.


3 Ways to Run Kubernetes on AWS

Tuesday, 12 May, 2020

Kubernetes is hugely popular and growing, and is primarily used on the cloud — 83 percent of organizations included in a large CNCF survey said they run Kubernetes on at least one public cloud. Amazon is a natural option for Kubernetes clusters, due to its mature and robust infrastructure, and a variety of deployment options with a varying degree of automation.

Read on to understand three key options for running Kubernetes on AWS, how they work and which is best for your organization’s needs.

In this article you will learn:

  • The options for running Kubernetes on AWS
  • How to create a Kubernetes cluster on AWS with kops
  • How to create a Kubernetes cluster with Elastic Kubernetes Service
  • How to create a Kubernetes Cluster with Rancher on EKS

Kubernetes on AWS: What are the Options?

Kubernetes is an open source container orchestration platform created by Google. You can use Kubernetes for on-premises, cloud or edge deployments. When used in combination with AWS, you use Kubernetes to manage clusters of Amazon Elastic Compute Cloud (EC2) instances that host your containers.

When deploying Kubernetes in AWS, you can configure and manage your deployment by yourself for full flexibility and control. You also have the option of using either AWS-provided services or third-party services to manage your implementation.

Alternatives to self-management include:

  • kops — an open source tool you can use to automate the provisioning and management of clusters in AWS. Although not a managed tool, kops does enable you to simplify deployment and maintenance processes. It is officially supported by AWS.
  • Amazon Elastic Kubernetes Service (EKS) — a managed service offered by AWS. EKS uses automatically provisioned instances and provides a managed control plane for your deployment.
  • Rancher — a complete enterprise computing platform to deploy Kubernetes clusters everywhere: on-premises, in the cloud and at the edge. Rancher unifies these clusters to ensure consistent operations, workload management and enterprise-grade security.

Creating a Kubernetes Cluster on AWS with kops

Kops lets you create Kubernetes clusters in a few simple steps.

Prerequisites for kops:

  • Create an AWS account
  • Install the AWS CLI
  • Install kops and kubectl
  • Create a dedicated user for kops in IAM
  • You can set up DNS for the cluster, or, as an easy alternative, create a gossip-based cluster by having the cluster name end with k8s.local

To create a cluster on AWS using kops:

  1. For convenience, create two environment variables: NAME set to your cluster name, and KOPS_STATE_STORE set to the URL of your cluster state store on S3.
  2. Check which availability zones are available on EC2, by running the command aws ec2 describe-availability-zones --region us-west-2 (ending with the region you want to launch the instances in). Select an available zone, for example us-west-2a.
  3. Build your cluster as follows – this is a basic cluster with no high availability:
    kops create cluster \
        --zones=us-west-2a \
        ${NAME}
  4. View your cluster configuration by running the command kops edit cluster ${NAME}. You can leave all settings as default for now.
  5. Run the command kops update cluster ${NAME} --yes. This boots instances and downloads Kubernetes components until the cluster reaches a “ready” state.
  6. Check which nodes are alive by running kubectl get nodes.
  7. Validate that your cluster is working properly by running kops validate cluster.

For more details, refer to the kops documentation.
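For reference, here is a minimal end-to-end sketch of steps 1–3 above. The cluster and bucket names are placeholders; using a cluster name ending in k8s.local gives you a gossip-based cluster, so DNS setup can be skipped.

```
# State store bucket and the environment variables used by kops
aws s3 mb s3://example-kops-state-store
export KOPS_STATE_STORE=s3://example-kops-state-store
export NAME=example-cluster.k8s.local

# Pick an availability zone, then create a basic, non-HA cluster
aws ec2 describe-availability-zones --region us-west-2
kops create cluster --zones=us-west-2a ${NAME}
kops update cluster ${NAME} --yes
```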

Creating a Kubernetes Cluster with Elastic Kubernetes Service

EKS helps manage cluster setup and creation. It offers multi-AZ support and provides automatic replacement of failed or unhealthy nodes. It also enables on-demand patches and upgrades to clusters. EKS automatically creates three master nodes for each cluster, spread out across three availability zones, as illustrated below. This prevents single points of failure and provides high availability out of the box.

Source: Amazon Web Services

A few prerequisites for creating a cluster on EKS:

  • Create an AWS account
  • Create an IAM role that Kubernetes can use to create new AWS resources
  • Create a VPC and security group for your Kubernetes cluster – Amazon strongly recommends creating a separate VPC and security group for each cluster
  • Install kubectl – see instructions for installing the Amazon EKS-vended version
  • Install the Amazon CLI

To create a Kubernetes cluster using EKS:

  1. Open the Amazon EKS console and select Create cluster.
  2. On the Configure cluster page, type a name for your cluster, and select the Kubernetes version – if you don’t have a reason to run a specific version, select the latest.
  3. Under Cluster service role, select the IAM role you created for EKS.
  4. The Secrets encryption option lets you encrypt Kubernetes secrets using the AWS Key Management Service (KMS). This is an important option for production deployments, but you can leave it off just for this tutorial. Another option is Tags, which lets you apply tags to your cluster so you can manage multiple Kubernetes clusters together with other AWS resources.
  5. Click Next to view the Specify networking page. Select the VPC you created previously for EKS. Under Subnets, select the subnets in which you would like to host Kubernetes resources. Under Security groups, you should see the security group defined when you created the VPC (as defined in the CloudFormation template).
  6. Under Cluster endpoint access, select Public to enable only public access to the Kubernetes API server, Private to only enable private access from within the VPC, or Public and Private to enable both.
  7. Select Next to view the Configure logging page and select logs you want to enable (all logs are disabled by default).
  8. Select Next to view the Review and create page. Review the cluster options you selected; you can click Edit to make changes. When you’re ready, click Create. The status field shows the status of the cluster until provisioning is complete (this can take 10–15 minutes).
  9. When the cluster finishes creating, save your API server endpoint and Certificate authority – you will need these to connect to kubectl and work with your cluster.

To learn more, see the EKS getting started guide.
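As an alternative to wiring up the endpoint and certificate authority from step 9 by hand, the AWS CLI can generate a kubeconfig entry for the new cluster; a short sketch (cluster name and region are placeholders):

```
# Add the EKS cluster to your local kubeconfig and verify access
aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster
kubectl get nodes
```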

Creating a Kubernetes Cluster with Rancher on EKS

Using Rancher, you can manage Kubernetes clusters directly on AWS, within the EKS service or across hybrid or multi-cloud systems. Rancher enables you to centrally manage your cluster policies and helps ensure consistent and reliable container access.

Image Rancher AWS architecture

Rancher provides the following additional capabilities not fully available in plain Amazon EKS:

  • Centralized user authentication & RBAC – you can integrate Rancher with LDAP, Active Directory or SAML-based authentication services. This enables you to consistently enforce role-based access control (RBAC) policies across your environments. Centralized RBAC is the preferred way to manage access and permissions as it reduces administrative requirements and makes management of permissions easier.
  • UI in a single pane of glass – you manage Rancher from an intuitive web interface. This enables DevOps teams to easily deploy and troubleshoot workloads and operations teams to smoothly release and link services and applications across environments. Simplified management also eliminates the need to know the specifics of your infrastructure or Kubernetes distribution and promotes greater workflow efficiency.
  • Enhanced cluster security – Rancher enables you to centrally define security policies and procedures. Security teams can set policies dictating how users are allowed to interact with clusters and how workloads operate across infrastructures. These policies can then be immediately pushed to any clusters as needed.
  • Multi and hybrid-cloud support – included with Rancher are global application catalogs that you can use across Kubernetes clusters, regardless of location. These catalogs provide access to apps ready for immediate deployment, creating standardized application configurations across your services. Using these apps, you can significantly reduce the load on your operations and development teams.
  • Tools integration – Rancher includes built-in integrations with the Istio service mesh, Prometheus and Grafana for monitoring, and Fluentd for logging. In combination, these integrations help you manage deployments across clouds regardless of service variations.

Let’s see how to create a cluster on AWS with Rancher. The prerequisites are the same as for EKS (see the previous section).

To create a Kubernetes cluster on AWS with Rancher and EKS:

  1. Prepare a Linux host with a supported version of Linux, and install a supported version of Docker on the host (see all supported versions).
  2. Start the Rancher server by running this Docker command:
    $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
  3. Open a browser and go to the hostname or address where you installed your Docker container. You will see the Rancher server UI.
    Rancher Server UI
  4. Select Clusters and click Add cluster. Choose Amazon EKS.
  5. Type a Cluster Name. Under Member Roles, click Add Member to add users that will be able to manage the cluster, and select a Role for each user.
  6. Enter the AWS Region, Access Key and Secret Key you got when creating your VPC.
  7. Click Next: Select Service Role. For this tutorial, select Standard: Rancher-generated service role. This means Rancher will automatically add a service role for the cluster to use. You can also select an existing AWS service role.
  8. Click Next: Select VPC and Subnet. Choose whether there will be a Public IP for Worker Nodes. If you choose No, select a VPC & Subnet to allow instances to access the Internet, so they can communicate with the Kubernetes control plane.
  9. Select a Security Group (defined when you created your VPC).
  10. Click Select Instance Options and select:
    a. Instance type – you can choose which Amazon instance should be used for your Kubernetes worker nodes.
    b. Custom AMI override – you can choose a specific Amazon Machine Image to install on your instances. By default, Rancher provides its EKS-optimized AMI.
    c. Desired ASG size – the number of instances in your cluster.
    d. User data – custom commands for automated configuration; do not set this when you’re just getting started.
  11. Click Create. Rancher is now provisioning your cluster. You can access your cluster once its state is Active.

For more details, refer to the Rancher AWS quick start guide, or learn more about the Rancher platform.

Conclusion

In this article we showed three ways to automatically spin up a Kubernetes cluster:

  • kops – an open source library that lets you quickly create a cluster using CLI commands.
  • Amazon Elastic Kubernetes Service – creating a cluster managed by Amazon, with high availability and security built in.
  • Rancher with EKS – creating a cluster with Rancher as an additional management layer, which provides user authentication and RBAC, enhanced security, and the ability to launch Kubernetes clusters on other public clouds or in your local data center, all managed from a single pane of glass.

Learn more about the Rancher platform and see how easy it is to manage Kubernetes across multiple cloud environments.


Running Containers in AWS with Rancher

Tuesday, 10 March, 2020

READ OUR FREE WHITE PAPER:
How to Build an Enterprise Kubernetes Strategy

This blog will examine how Rancher improves the life of DevOps teams already invested in AWS’s Elastic Kubernetes Service (EKS) but looking to run workloads on-prem, with other cloud providers or, increasingly, at the edge. By reading this blog you will also discover how Rancher helps you escape the undeniable attractions of a vendor monoculture while lowering costs and mitigating risk.

AWS is the world’s largest cloud provider, with over a million customers and $7.3 billion in 2018 operating income. Our friends at StackRox recently showed that AWS still commands 78 percent market share despite the aggressive growth of rivals Microsoft Azure and Google Cloud Platform.

However, if you choose only AWS services for all your Kubernetes needs, you’re effectively locking yourself into a single vendor ecosystem. For example, by choosing Elastic Load Balancing for load distribution, AWS App Mesh for service mesh or AWS Fargate for serverless compute with EKS, your future is certain but not yours to control. It’s little wonder that many Amazon EKS customers look to Rancher to help them deliver a truly multi-cloud strategy for Kubernetes.

The Benefits of a Truly Multi-Cloud Strategy for Kubernetes

As discussed previously, multi-cloud has become the “new normal” of enterprise IT. But what does “multi-cloud” mean to you? Does it mean supporting the same vendor-specific Kubernetes distribution on multiple clouds? Wouldn’t that just swap out one vendor monoculture for another? Or does it mean choosing an open source management control plane that treats any CNCF-certified Kubernetes distribution as a first-class citizen, enabling true application portability across multiple providers with zero lock-in?

Don’t get me wrong – there are use cases where a decision-maker will see placing all their Kubernetes business with a single vendor as the path of least resistance. However, the desire for short-term convenience shouldn’t blind you to the inherent risks of locking yourself into a long-term relationship with just one provider. Given how far the Kubernetes ecosystem has come in the past six months, are you sure that you want to put down all your chips on red?

As with any investment, the prudent money should always go on the choice that gives you the most value without losing control. Given this, we enthusiastically encourage you to continue using EKS – it’s a great platform with a vast ecosystem. But remember to keep your options open – particularly if you’re thinking about deploying Kubernetes clusters as close as possible to where they’re delivering the most customer value – at the edge.

Kubernetes on AWS: Using Rancher to Manage Containers on EKS

If you’re going to manage Kubernetes clusters on multiple substrates – whether on AKS/GKE, on-prem or at the edge – Rancher enhances your container orchestration with EKS. With Rancher’s integrated workload management capabilities, you can allow users to centrally configure policies across their clusters and ensure consistent access. These capabilities include:

1) Role-based access control and centralized user authentication
Rancher enforces consistent role-based access control (RBAC) policies on EKS and any other Kubernetes environment by integrating with Active Directory, LDAP or SAML-based authentication. Centralized RBAC reduces the administrative overhead of maintaining user or group profiles across multiple platforms. RBAC also makes it easier for admins to meet compliance requirements and delegate administration of any Kubernetes cluster or namespace.

RBAC Controls in Rancher

2) One intuitive user interface for comprehensive control
DevOps teams can deploy and troubleshoot workloads consistently across any provider using Rancher’s intuitive web UI. If you’ve got team members new to Kubernetes, they can quickly learn to launch applications and wire them together at production level in EKS and elsewhere with Rancher. Your team members don’t need to know everything about a specific Kubernetes distribution or infrastructure provider to be productive.

Multi-cluster management with Rancher

3) Enhanced cluster security
Rancher admins and their security teams can centrally define how users should interact with Kubernetes and how containerized workloads should operate across all their infrastructures, including EKS. Once defined, these policies can be instantly assigned to any Kubernetes cluster.

Adding custom pod security policies
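For instance, a policy of the kind shown in the screenshot above could be expressed as a PodSecurityPolicy (the API current when this post was written). This is a hedged, minimal sketch rather than Rancher’s exact defaults:

```
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false                  # disallow privileged containers
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir", "persistentVolumeClaim"]
EOF
```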

4) Global application catalog & multi-cluster apps
Rancher provides access to a global catalog of applications that work across multiple Kubernetes clusters, whatever their location. For enterprises running in a multi-cloud Kubernetes environment, Rancher reduces the load on operations teams while increasing productivity and reliability.

Selecting multi-cluster apps from Rancher’s catalog

5) Streamlined day-2 operations for multi-cloud infrastructure
Using Rancher to provision your Kubernetes clusters in a multi-cloud environment means your day-2 operations are centralized in a single pane of glass. Benefits to centralizing your operations include one-touch deployment of service mesh (upstream Istio), logging (Fluentd), observability (Prometheus and Grafana) and highly available persistent storage (Longhorn).

What’s more, if you ever decide to stop using Rancher, we provide a clean uninstall process for imported EKS clusters so that you can manage them independently. You’ll never know Rancher was there.

Next Steps

See how Rancher can help you run containers in AWS and enhance your multi-cloud Kubernetes strategy. Download the free whitepaper, A Guide to Kubernetes with Rancher.



Reducing Your AWS Spend with AutoSpotting and Rancher

Thursday, 7 December, 2017

Ye Olde Worlde

Back in older times, B.C. as in Before Cloud, to put a service live you
had to:

  1. Spend months figuring out how much hardware you needed
  2. Wait at least eight weeks for your hardware to arrive
  3. Allow another four weeks for installation
  4. Then, configure firewall ports
  5. Finally, add servers to config management and provision them

All of this was in an organised company!

The Now

The new norm is to use hosted instances. You can scale these up and down
based on requirements and demand. Servers are available in a matter of
seconds. With containers, you no longer care about actual servers. You
only care about compute resource. Once you have an orchestrator like
Rancher, you don’t need to worry
about maintaining scale or setting where containers run, as Rancher
takes care of all of that. Rancher continuously monitors and assesses
the requirements that you set and does its best to ensure that
everything is running. Obviously, we need some compute resource, but it
can run for days or hours. The fact is, with containers, you pretty much
don’t need to worry.

Reducing Cost

So, how can we take advantage of the flexibility of containers to help
us reduce costs? There are a couple of things that you can do. Firstly
(and this goes for VMs as well as containers), do you need all your
environments running all the time? In a world where you own the kit and
there is no cost advantage to shutting down environments versus keeping
them running, this practice was accepted. But in the on-demand world,
there is a cost associated with keeping things running. If you only
utilise a development or testing environment for eight hours a day, then
keeping it running 24 hours a day means paying for three times as many
hours as you actually use!
So, shutting down environments when you’re not using them is one way to
reduce costs. The second thing you can do (and the main reason behind
this post) is using Spot Instances.
Not heard of them? In a nutshell, they’re a way of getting cheap
compute resource in AWS. Interested in saving up to 80% of your AWS EC2
bill? Then keep reading. The challenge with Spot Instances is that they
can terminate after only two minutes’ notice. That causes problems for
traditional applications, but containers handle this more fluid nature
of applications with ease. Within AWS, you can directly request Spot
Instances, individually or in a fleet, and you set a maximum price for
the instance. Once the Spot price rises above your maximum, AWS gives
you two minutes’ notice and then terminates the instance.
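Both cost levers are easy to explore from the AWS CLI; a hedged sketch (the instance ID and instance type below are placeholders):

```
# Lever 1: stop a development environment outside working hours
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Lever 2: check what Spot capacity currently costs for a given instance type
aws ec2 describe-spot-price-history --instance-types m3.medium \
  --product-descriptions "Linux/UNIX" \
  --start-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```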

AutoSpotting

What if you could have an Auto Scaling Group (ASG) of On-Demand
Instances to fall back to whenever the Spot price rises above your
limit? Step forward an awesome piece of open-source software
called AutoSpotting. You can find the source and more information on
GitHub. AutoSpotting works by
replacing On-Demand Instances from within an ASG with individual Spot
Instances. AutoSpotting takes a copy of the launch config of the ASG and
starts a Spot Instance (of equivalent spec or more powerful) with the
exact same launch config. Once this new instance is up and running,
AutoSpotting swaps out one of the On-Demand Instances in the ASG with
this new Spot Instance, in the process terminating the more expensive
On-Demand Instance. It will continue this process until it replaces all
instances. (There is a configuration option that allows you to specify
the percentage of the ASG that you want to replace. By default, it’s
100%.) AutoSpotting isn’t application aware. It will only start a
machine with an identical configuration. It doesn’t perform any
graceful halting of applications. It purely replaces a virtual instance
with a cheaper virtual instance. For these reasons, it works great for
Docker containers that are managed by an orchestrator like Rancher. When
a compute instance disappears, then Rancher takes care of maintaining
the scale. To facilitate a smoother termination, I’ve created a helper
service, AWS-Spot-Instance-Helper, that monitors to see if a host is
terminating. If it is, then the helper uses the Rancher evacuate
function to more gracefully transition running containers from the
terminating host. This helper isn’t tied to AutoSpotting, and anyone
who is using Spot Instances or fleets with Rancher can use it to allow
for more graceful terminations. Want an example of what it does to the
cost of running an environment?

Can you guess which day I implemented it? OK, so I said up to 80%
savings but, in this environment, we didn’t replace all instances at the
point when I took this measurement. So, why are we blogging about it
now? Simple: We’ve taken it and turned it into a Rancher Catalog
application so that all our Rancher AWS users can easily consume it.

3 Simple Steps to Saving Money

Step 1

Go to the Catalog > Community and select AutoSpotting.

Step 2

Fill in the AWS Access Key and Secret Key. (These are the only
mandatory fields.) The user must have the following AWS permissions:

autoscaling:DescribeAutoScalingGroups
autoscaling:DescribeLaunchConfigurations
autoscaling:AttachInstances
autoscaling:DetachInstances
autoscaling:DescribeTags
autoscaling:UpdateAutoScalingGroup
ec2:CreateTags
ec2:DescribeInstances
ec2:DescribeRegions
ec2:DescribeSpotInstanceRequests
ec2:DescribeSpotPriceHistory
ec2:RequestSpotInstances
ec2:TerminateInstances

Optionally, set the Tag Name. By default, it will look for
spot-enabled. I’ve slightly modified the original code to allow the
flexibility of running multiple AutoSpotting containers in an
environment. This modification allows you to use multiple policies in
the same AWS account. Then, click Launch.

Step 3

Add the tag (user-specified or spot-enabled, with a value of
true) to any AWS ASGs on which you want to save money. Cheap (and
often more powerful) Spot Instances will gradually replace your
instances. To deploy the AWS-Spot-Instance-Helper service, simply
browse to the Catalog > Community and launch the application.
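If you’d rather tag from the CLI than the console, a sketch like this should do it (the ASG name is a placeholder):

```
# Mark an existing Auto Scaling Group for AutoSpotting to manage
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=spot-enabled,Value=true,PropagateAtLaunch=false"
```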

Thanks go out to Cristian Măgherușan-Stanciu and the other contributors
for writing such a great piece of open-source software.

About the Author

Chris Urwin works
as a field engineer for Rancher Labs based out of the UK, helping our
enterprise clients get the most out of Rancher.


Checking Out Rancher 2.0 with Kops AWS Clusters

Friday, 27 October, 2017

One of the hallmark features of Rancher 2.0 is its ability to consume
Kubernetes clusters from anywhere. In this post, I’m going to walk you
through using the popular kops tool to create and manage Kubernetes
clusters on AWS and then bring them under Rancher 2.0 management. This
walkthrough will help you create a non-HA Kubernetes cluster, though
kops does support HA configurations. With this new cluster, we will
run the Rancher 2.0 tech preview in a pod with a persistent volume
claim.

Prerequisites

To follow along, you will need a properly configured kops setup, as
outlined in the kops AWS Getting Started Guide. The guide will walk you
through setting up:

  • AWS CLI configuration
  • Working DNS managed by Route 53
  • IAM roles configured for EC2 resources
  • The S3 kops State Store
  • Installation of the kops tool
  • Installation of the kubectl CLI

Creating Your Rancher-Kubernetes Cluster

First, we will set some environment variables to make typing less
painful on the CLI. Set a NAME for your cluster:

export NAME=rancher-management.k8s.cloudnautique.com

If you do not already have one, create a state store bucket:

aws s3 mb s3://cloudnautique-s3-bucket-for-cluster-state

Then, set an environment variable:

export KOPS_STATE_STORE=s3://cloudnautique-s3-bucket-for-cluster-state

Of note, for this post, I’m going to use real DNS managed by Route53.
The k8s.cloudnautique.com domain above is a managed Route53 zone.
Let’s create the cluster:

kops create cluster --zones us-west-1b --node-count 1 ${NAME}

This command generates a cluster in a single zone, us-west-1b, with a
single worker node. Next, actually deploy the cluster:

kops update cluster ${NAME} --yes

It will take 10-15 minutes to provision. Now would be a good time to
take a walk or get a cup of coffee. You can check the status of the
cluster using the command:

kops validate cluster

Here’s an example for this cluster:

```
> kops validate cluster
Using cluster from kubectl context: rancher-mgmt.k8s.cloudnautique.com

Validating cluster rancher-mgmt.k8s.cloudnautique.com

INSTANCE GROUPS
NAME            ROLE    MACHINETYPE MIN MAX SUBNETS
master-us-west-1b   Master  m3.medium   1   1   us-west-1b
nodes           Node    t2.medium   1   1   us-west-1b

NODE STATUS
NAME                        ROLE    READY
ip-172-20-54-160.us-west-1.compute.internal master  True
ip-172-20-56-231.us-west-1.compute.internal node    True

Your cluster rancher-mgmt.k8s.cloudnautique.com is ready
```

Once the cluster is up and running, we can start interacting with it via
kubectl.

kubectl get nodes

To see the pods currently running, enter this command:

kubectl -n kube-system get pods

Now, let’s deploy our Rancher 2.0 server container. First, create a
namespace for our app.

kubectl create ns rancher-server

You can deploy the Rancher server stack below after you replace the
###YOUR DNS NAME### variable with the domain name you want to use for
the Rancher UI.

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rancher-db-claim
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: rancher-server
  name: rancher-server
spec:
  replicas: 1
  selector:
    matchLabels:
      run: rancher-server
  strategy: {}
  template:
    metadata:
      labels:
        run: rancher-server
    spec:
      containers:
      - image: rancher/server:v2.0.0-alpha7
        name: rancher-server
        volumeMounts:
          - mountPath: "/var/lib/mysql"
            name: rancher-db
            subPath: mysql
      volumes:
        - name: rancher-db
          persistentVolumeClaim:
            claimName: rancher-db-claim
---
apiVersion: v1
kind: Service
metadata:
  name: rancher
  annotations:
    dns.alpha.kubernetes.io/external: ###YOUR DNS NAME###
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
spec:
  selector:
    run: rancher-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
```

After a few minutes, you should be able to visit
http://###YOUR DNS NAME### and load up the UI. A future improvement
would be to add TLS termination at the ELB, which kops supports. You
can optionally register your management cluster into Rancher. This will
deploy the Kubernetes Dashboard, giving you access to kubectl from the
Rancher UI. To do so, when you visit the Rancher UI, select Use
existing Kubernetes. Then, copy and paste the kubectl command that
displays, and run it from your CLI.
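One way to add that TLS termination (a hedged sketch, not tested here) is through additional annotations on the rancher Service so that the AWS cloud provider attaches an ACM certificate to the ELB; the certificate ARN below is a placeholder, and the proxy-protocol annotation from the original manifest may need adjusting when terminating TLS this way.

```
apiVersion: v1
kind: Service
metadata:
  name: rancher
  annotations:
    dns.alpha.kubernetes.io/external: ###YOUR DNS NAME###
    # Terminate TLS at the ELB using a certificate from ACM (ARN is a placeholder)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-1:111111111111:certificate/example
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector:
    run: rancher-server
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8080
  type: LoadBalancer
```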

Adding a User Cluster

Now that we have a Kubernetes cluster to manage our Rancher server
cluster, let’s add an additional cluster for our user workload and to
check out Rancher 2.0’s multi-cluster management. In this case, we are
going to deploy into the same VPC as the Rancher management cluster to
save on resources. Going this route is not a hard requirement for
Rancher. We will need the VPC ID. Assuming you have jq installed, you
can use the following command:

export VPC=$(aws ec2 describe-vpcs --region us-west-1 --filters Name=tag:Name,Values="${NAME}" |grep -v ^kops|jq -r .Vpcs[].VpcId)

Then, let’s set our development cluster name environment variable:

export DEV_NAME=development.k8s.cloudnautique.com

Now it’s time to create our cluster:

kops create cluster --zones us-west-1b --node-count 3 --vpc ${VPC} ${DEV_NAME}

This time we still deploy to the same availability zone, but we will
deploy three worker nodes instead of just a single node. Also, we need
to edit out cluster configuration to ensure that our subnets do not
overlap.

kops edit cluster --name ${DEV_NAME}

You should see your VPCID and CIDR configured properly under the
following keys:

```
...
networkCIDR: 172.20.0.0/16
networkID: ${VPC}
...
```

If the networkCIDR is incorrect, now is the time to set it to the VPC
CIDR. You can find this setting by editing your management cluster
with kops edit cluster ${NAME}. You also need to edit the subnets CIDR so that
it’s non-overlapping with the management cluster. For this, we set it
to 172.20.64.0/19. Now, let’s deploy our cluster:

kops update cluster ${DEV_NAME} --yes

It will take a few minutes to provision. While that is happening, you
can go to the Rancher UI and click Manage Clusters from the
Environment menu in the right-hand corner. On the Clusters &
Environments page, click Add Cluster. Provide the cluster a name.
Select Use existing Kubernetes to import your existing cluster. Copy
the command. Before running the registration command, verify your
kubectl command is using the correct
context:

kubectl config current-context

This should show the development cluster we created above. If it
doesn’t, go ahead and set it to the second cluster’s context.

kubectl config use-context ${DEV_NAME}

Here ${DEV_NAME} is the development cluster name defined above. In the case
of this example, it is development.k8s.cloudnautique.com. On your
command line, paste the command copied from above and register your
development cluster. Now, when you click the Hosts tab, you see the
three hosts registered into the environment. You are now ready
to use your Rancher environment to deploy apps from the Catalog, or from
your compose files. Once you are done playing, you can clean up all of
your resources with this command:

kops delete cluster ${DEV_NAME} --yes

Then, switch to the management cluster and do the same:

kops delete cluster ${NAME} --yes

Summary

Now you can see how easy it is to bring in multiple Kubernetes clusters
within Rancher 2.0. Kubernetes clusters from kops are just one type of
cluster you can use; you can consume Kubernetes from Google’s GKE,
DigitalOcean, or Azure. Rancher 2.0 continues to forge ahead with the
cross-cloud container story by giving users the flexibility to run
where they need to. We hope you’ll give Rancher 2.0 a try!

About the Author

Bill Maxwell is a
senior software engineer at Rancher Labs. He has extensive experience in
software engineering and operations, and he has led continuous
integration and continuous delivery (CI/CD) initiatives. Prior to
Rancher Labs, Bill worked at GoDaddy in engineering, development, and
managing various cloud services product deployments. He holds a Master’s
in Information Management from Arizona State University and a BSEE from
California State Polytechnic University.

Deploying Rancher from the AWS Marketplace

Monday, 28 August, 2017

A Detailed Overview of Rancher’s Architecture
This newly-updated, in-depth guidebook provides a detailed overview of the features and functionality of the new Rancher: an open-source enterprise Kubernetes platform.

A step-by-step guide

Rancher is now available for easy deployment from the Amazon Web
Services (AWS) Marketplace.
While Rancher has always been easy to install, availability in the
marketplace makes installing Rancher faster and easier than ever. In
the article below, I provide a step-by-step guide to deploying a working
Rancher environment on AWS. The process involves two distinct parts:

  • In part I, I step through the process of installing a Rancher
    management node from the AWS Marketplace
  • In part II, I deploy a Kubernetes cluster in AWS using the
    Rancher management node deployed in part I

From my own experience, it is often small, easily missed details that lead
to trouble. In this guide I attempt to point out some potential pitfalls
to help ensure a smooth installation.

Before you get started

If you’re a regular AWS user you’ll find this process straightforward.
Before you get started you’ll need:

  • An Amazon EC2 account – If you don’t already have an account,
    you can visit AWS EC2 (https://aws.amazon.com/ec2/) and select
    Get started with Amazon EC2 and follow the process there to
    create a new account.
  • An AWS Keypair – If you’re not familiar with Key Pairs, you can
    save yourself a little grief by familiarizing yourself with the
    topic. You’ll need a Key Pair to connect via ssh to the machine you
    create on AWS. Although most users will probably never have a need
    to ssh to the management host, the installation process still
    requires that a Key Pair exist. From within the Network & Security
    heading in your AWS account select Key Pairs. You can create a Key
    Pair, give it a name, and the AWS console will download a PEM file
    (an ASCII base64-encoded X.509 certificate) that you should keep on your
    local machine. This will hold the RSA Private Key that you’ll need
    to access the machine via ssh or scp. It’s important that you
    save the key file, because if you lose it, it can’t be replaced and
    you’ll need to create a new one. The marketplace installation
    process for Rancher will assume you already have a Key Pair file.
    You can read more about Key Pairs in the AWS online documentation.
  • Setup AWS Identity and Access Management – If you’re new to
    AWS, this will seem a little tedious, but you’ll want to create an
    IAM user account at some point through the AWS console. You don’t
    need to do this to install Rancher from the AWS Marketplace, but
    you’ll need these credentials to use the Cloud Installer to add
    extra hosts to your Rancher cluster as described in part II of this
    article. You can follow the instructions to Create your Identity
    and Access Management Credentials.

With these setup items out of the way, we’re ready to get started.

Part I – Installing Rancher from the AWS Marketplace

Step 1: Select a Rancher offering from the marketplace

There are three different offerings in the Marketplace as shown below.

  • Rancher on RancherOS – This is the option we’ll use in this
    example. This is a single
    container implementation of the Rancher environment running on
    RancherOS, a lightweight Linux optimized for container environments
  • RancherOS – HVM –
    This marketplace offering installs the RancherOS micro Linux
    distribution only without the Rancher environment. You might use
    this as the basis to package your own containerized application on
    RancherOS. HVM refers to the type of Linux AMI used – you can
    learn more about Linux AMI Virtualization Types here.
  • RancherOS – HVM – ECS Enabled – This marketplace offering is a
    variant of the RancherOS offering above, intended for use with
    Amazon’s EC2 Container Service (ECS).

We’ll select the first option – Rancher on RancherOS:
After you select Rancher on RancherOS, you’ll see additional
information, including pricing details. There is no charge for the use
of the software itself, but you’ll be charged for machine hours and
other fees like EBS magnetic volumes and data transfer at standard AWS
rates. Press Continue once you’ve reviewed the details and the
pricing.

Step 2: Select an installation type and provide installation details
The next step is to select an installation method and provide
required settings that AWS will need to provision your machine running
Rancher. There are three installation types:

  1. Click Launch – this is the fastest and easiest approach. Our
    example below assumes this method of installation.
  2. Manual Launch – this installation method will guide you through
    the process of installing RancherOS using the EC2 Console, API
    or CLI.
  3. Service Catalog – you can also copy versions of Rancher on
    RancherOS to a Service Catalog specific to a region and assign users
    and roles. You can learn more about AWS Service Catalogs
    here.

Select Click Launch and provide installation options as shown:

  • Version – select a version of Rancher to install. By default
    the latest is selected.
  • Region – select the AWS region where you will deploy the
    software. You’ll want to make a note of this because the AWS EC2
    dashboard segments machines by Region (pull-down at the top right of
    the AWS EC2 dashboard). You will need to have the correct region
    selected to see your machines. Also, as you add additional Rancher
    hosts, you’ll want to install them in the same Region, Availability
    Group and Subnet as the management host.
  • EC2 Instance Type – t2.medium is the default (a machine with 4GB
    of RAM and 2 virtual cores). This is inexpensive and OK for
    testing, but you’ll want to use larger machines to actually run
    workloads.
  • VPC Settings (Virtual Private Cloud) – You can specify a
    virtual private cloud and subnet or create your own. Accept the
    default here unless you have reason to select a particular cloud.
  • Security Group – If you have an appropriate Security Group
    already setup in the AWS console you can specify it here. Otherwise
    the installer will create one for you that ensures needed ports are
    open including port 22 (to allow ssh access to the host) and port
    8080 (where the Rancher UI will be exposed).
  • Key Pair – As mentioned at the outset, select a previously
    created Key Pair for which you’ve already saved the private key (the
    X.509 PEM file). You will need this file in case you need to connect
    to your provisioned VM using ssh or scp. To connect using ssh you
    would use a command like this: ssh -i key-pair-name.pem
    <public-ip-address>

When you’ve entered these values select “Launch with 1-click”.

Once you launch Rancher, you’ll see the screen below confirming details
of your installation. You’ll receive an e-mail as well. This will
provide you with convenient links to:

  • Your EC2 console – which you can visit anytime at
    http://aws.amazon.com/ec2
  • Your Software page, that provides information about your various
    AWS Marketplace subscriptions

Step 3: Watch as the machine is provisioned

From this point on, Rancher should install by itself. You can monitor
progress by visiting the AWS EC2 Console. Visit
http://aws.amazon.com, log in with your AWS credentials, and select EC2
under AWS services. You should see the new AWS t2.medium machine
instance initializing as shown below. Note the region pull-down in the
top right set to “North Virginia”. This gives us visibility into machines
in the US East region selected in the previous step.

Step 4: Connect to the Rancher UI

The Rancher machine will take a few minutes to provision, but once
complete, you should be able to connect to the external IP address for
the host (shown in the EC2 console above) on port 8080. Your IP address
will be different but in our case the Public IP address was
54.174.92.13, so we pointed a browser to the URL
http://54.174.92.13:8080. It may take a few minutes for Rancher UI to
become available but you should see the screen below.

Congratulations! If you’ve gotten this far you’ve successfully
deployed Rancher in the AWS cloud!

Part II – Deploying a Container Environment using Rancher

Having the Rancher UI up and running is nice, but there’s not a lot you
can do with Rancher until you have cluster nodes up and running. In
this section I’ll look at how to deploy a Kubernetes cluster using the
Rancher management node that I deployed from the marketplace in Part I.

Step 1 – Setting up Access Control

You’ll notice when the Rancher UI is first provisioned, there is no
access control. This means that anyone can connect to the web
interface. You’ll be prompted with a warning indicating that you should
setup Authentication before proceeding. Select Access Control under
the ADMIN menu in the Rancher UI. Rancher exposes multiple
authentication options as shown including the use of external Access
Control providers. DevOps teams will often store their projects in a
GitHub repository, so using GitHub for authentication is a popular
choice. We’ll use GitHub in this example. For details on using other
Access Control methods, you can consult the Rancher documentation.

GitHub users should follow the directions, and click on the link
provided in the Rancher UI to setup an OAuth application in GitHub.
You’ll be prompted to provide your GitHub credentials. Once logged into
GitHub, you should see a screen listing any OAuth applications and
inviting you to Register a new application. We’re going to setup
Rancher for Authentication with GitHub.

Click the Register a new application button in GitHub, and
provide details about your Rancher installation on AWS. You’ll need the
Public IP address or fully qualified host name for your Rancher
management host.

Once you’ve supplied details about the Rancher application to GitHub
and clicked Register application, GitHub will provide you with a
Client ID and a Client Secret for the Rancher application as
shown below.

Copy and paste the Client ID and the Client Secret that appear in
GitHub into the Rancher Access Control setup screen, and save these values.

Once these values are saved, click Authorize to allow GitHub
authentication to be used with your Rancher instance.

If you’ve completed these steps successfully, you should see a message
that GitHub authentication has been set up. You can invite additional
GitHub users or organizations to access your Rancher instance as shown
below.

Step 2 – Add a new Rancher environment

When Rancher is deployed, there is a single Default environment that
uses Rancher’s native orchestration engine called Cattle. Since
we’re going to install a Rancher managed Kubernetes cluster, we’ll need
to add a new environment for Kubernetes. Under the environment selection
menu on the left labelled Default, select Add Environment.
Provide a name and description for the environment as shown, and select
Kubernetes as the environment template. Selecting the Kubernetes
framework means that Kubernetes will be used for Orchestration, and
additional Rancher frameworks will be used including Network Services,
Healthcheck Services and Rancher IPsec as the software-defined network
environment in Kubernetes.

Once you add the new environment, Rancher will immediately begin trying
to set up a Kubernetes environment. Before Rancher can proceed,
however, a Docker host needs to be added.

Step 3 – Adding Kubernetes cluster hosts

To add a host in Rancher, click on Add a host on the warning message
that appears at the top of the screen or select the Add Host option
under the Infrastructure -> Hosts menu. Rancher provides multiple
ways to add hosts. You can add an existing Docker host on-premises or in
the cloud, or you can automatically add hosts using a cloud-provider
specific machine driver as shown below. Since our Rancher management
host is running on Amazon EC2, we’ll select the Amazon EC2 machine
driver to auto-provision additional cluster hosts. You’ll want to select
the same AWS region where your Rancher management host resides and
you’ll need your AWS provided Access key and Secret key. If you
don’t have an AWS Access key and Secret key, the AWS documentation
explains how you can obtain them. You’ll need to provide your AWS
credentials to Rancher as shown so that it can provision machines on
your behalf.

After you’ve provided your AWS credentials, select the AWS Virtual
private cloud and subnet. We’ve selected the same VPC where our Rancher
management node was installed from the AWS marketplace.

Security groups in AWS EC2 express a set of inbound and outbound
security rules. You can choose a security group already setup in your
AWS account, but it is easier to just let Rancher use the existing
rancher-machine group to ensure the network ports that Rancher needs
open are configured appropriately.

After setting up the security group, you can set your instance options
for the additional cluster nodes. You can add multiple hosts at a time.
We add five hosts in this example. We can give the hosts a name. We use
k8shost as our prefix, and Rancher will append a number to the
prefix naming our hosts k8shost1 through k8shost5. You can
select the type of AWS host you’d like for your Kubernetes cluster. For
testing, a t2.medium instance is adequate (2 cores and 4GB of RAM)
however if you are running real workloads, a larger node would be
better. Accept the default 16GB root directory size. If you leave the
AMI blank, Rancher will provision the machine using an Ubuntu AMI. Note
that the ssh username will be ubuntu for this machine type. You
can leave the other settings at their defaults unless you need to
change them.

Once you click Create, Rancher will use your AWS credentials to
provision the hosts using your selected options in your AWS cloud
account. You can monitor the creation of the new hosts from the EC2
dashboard as shown.

Progress will also be shown from within Rancher. Rancher will
automatically provision the AWS host, install the appropriate version of
Docker on the host, provide credentials, start a Rancher agent, and once
the agent is present Rancher will orchestrate the installation of
Kubernetes, pulling the appropriate Rancher components from the Docker
registry to each cluster host.

You can also monitor the step-by-step provisioning process by
selecting Hosts as shown below under the Infrastructure menu.
This view shows our five node Kubernetes cluster at different stages of
provisioning.

It will take a few minutes before the environment is provisioned and up
and running, but when the dust settles, the Infrastructure Stacks
view should show that the Rancher stacks comprising the Kubernetes
environment are all up and running and healthy.

Under the Kubernetes pull-down, you can launch a Kubernetes shell and
issue kubectl commands. Remember that Kubernetes has the notion of
namespaces, so to see the Pods and Services used by Kubernetes itself,
you’ll need to query the kube-system namespace. This same screen also
provides guidance for installing the kubectl CLI on your own local host.

Rancher also provides access to the Kubernetes Dashboard following the
automated installation under the Kubernetes pull-down.

Congratulations! If you’ve gotten this far, give yourself a pat on the
back. You’re now a Rancher on AWS expert!



AWS and Rancher: Building a Resilient Stack

Thursday, 16 March, 2017

In my prior posts, I’ve written about how to ensure highly resilient
workloads using Docker, Rancher, and various open source tools. For
this post, I will build on that prior knowledge to set up an AWS
infrastructure for Rancher with some commonly used tools.

If you check out the repository here, you should be able to follow
along and set up the same infrastructure. The final output of our AWS
infrastructure will look like the following picture:

[Diagram: AWS infrastructure overview, drawn with Cloudcraft]

In case you missed the prior posts, they’re available on the Rancher
blog and cover some reliability talking points. Let’s use those
learnings to create a running stack.

Host VM Creation

The sections we will build are the three lower yellow sections in the
diagram:
Golden Image
First, we need a way to create Docker hosts that use a reliable
combination of storage driver and OS, and that we can swap out for
different parts in the future. So we build our base VM, or the
“golden image” as it is more commonly referred to. For tooling,
Packer will be used to communicate with the AWS API (and various other
cloud providers) to create VM images, and Ansible will be used to
describe the provisioning steps in a readable manner. The full source
can be found here, if you want to jump ahead. Since the previous chain
of posts on reliability used Ubuntu 14.04, our example will provision a
VM with Ubuntu 14.04 using AUFS3 for the Docker storage driver. To
start, we create a Packer configuration called ubuntu_1404_aufs3.json.
In this case, my config searches for the most recent 14.04 AMI ID in
AWS us-east-1 through source_ami_filter, which as of writing returns
ami-af22d9b9. It also creates a 40GB drive attached as /dev/sdb, which
we will use to store Docker data. We are using Docker 1.12.4, because
it is supported in the latest Rancher compatibility matrix.

 {
 "variables": {
 "aws_access_key": "",
 "aws_secret_key": "",
 "docker_version": "1.12.4"
 },
 "builders": [{
 "type": "amazon-ebs",
 "access_key": "{{user `aws_access_key`}}",
 "secret_key": "{{user `aws_secret_key`}}",
 "region": "us-east-1",
 "source_ami_filter": {
 "filters": {
 "virtualization-type": "hvm",
 "name": "*ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*",
 "root-device-type": "ebs"
 },
 "most_recent": true
 },
 "ami_virtualization_type": "hvm",
 "instance_type": "m3.medium",
 "ssh_username": "ubuntu",
 "ami_name": "Docker {{user `docker_version`}} Ubuntu 14.04 AUFS3 {{timestamp}}",
 "launch_block_device_mappings": [{
 "device_name": "/dev/sdb",
 "volume_size": 40,
 "volume_type": "gp2",
 "delete_on_termination": true
 }],
 "tags": {
 "OS_Version": "Ubuntu",
 "Release": "14.04",
 "StorageDriver": "AUFS3",
 "Docker_Version": "{{user `docker_version`}}"
 }
 }]
 }

 $> packer validate ubuntu_1404_aufs3.json
 Template validated successfully.

Great! It passes validation, but if we actually ran it, Packer would
just create a copy of the base AMI with a 40GB drive attached, which
isn’t very helpful. To make it useful, we will also need to provision
Docker on it. Packer has built-in hooks for various configuration
management (CM) tools such as Ansible, Chef, and Puppet. In our case, we
will use the Ansible provisioner.

 {
   "variables": ["..."],
   "builders": ["..."],

   "provisioners": [
     {
       "type": "ansible",
       "playbook_file": "./playbook.yml",
       "extra_arguments": [
         "--extra-vars",
         "docker_pkg_name='docker-engine={{user `docker_version`}}-0~ubuntu-trusty'"
       ]
     }
   ]
 }

The contents of our playbook.yml are as follows:

---
- name: Install Docker on Ubuntu 14.04
  hosts: all
  # run as root
  become: true
  become_user: root

  pre_tasks:
    - name: format the extra drive
      filesystem:
        dev: /dev/xvdb
        fstype: ext4
    - name: mount the extra drive
      mount:
        name: /secondary
        # ubuntu renames the block devices to xv* prefix
        src: /dev/xvdb
        fstype: ext4
        state: mounted

  roles:
    - role: angstwad.docker_ubuntu
      docker_opts: "--graph /secondary --storage-driver=aufs"

Prior to running the tool, we need to grab the Docker installation
role: from the root directory containing ubuntu_1404_aufs3.json, run
ansible-galaxy install angstwad.docker_ubuntu -p to download a
pre-configured Docker installation role. The popular
angstwad.docker_ubuntu role exposes a lot of options for installing
Docker on Ubuntu and follows the official Docker installation tutorial
closely. Finally, we execute the command below and await our new base
image. The end result will be your base Docker image going forward.

 $> packer build ubuntu_1404_aufs3.json
 ... output
 ... output
 ==> amazon-ebs: Creating the AMI: Docker 1.12.4 Ubuntu 14.04 AUFS3 1486965623
 amazon-ebs: AMI: ami-1234abcd
 ==> amazon-ebs: Waiting for AMI to become ready...
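
As a quick sanity check (optional, and assuming you launch a test
instance from the new AMI and SSH in as the ubuntu user), you can
confirm that Docker landed on the secondary volume with the aufs
driver:

 # Expect "Storage Driver: aufs" and "Docker Root Dir: /secondary".
 $> docker info | grep -E 'Storage Driver|Docker Root Dir'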

AWS Infrastructure Creation

To start creating infrastructure components, please check out the
following repository for a Rancher architecture template on AWS.

Networking Layer
Next up, most AWS services require a VPC in order to provision services
without errors. To do this, we will create a separate VPC with public
subnets. The following provides a straightforward way to set up a
standard template; check out the networking module here. In main.tf,
our entry file for the infrastructure, we reference our network
configuration from ./networking and pass parameters into the module:

module "networking" {
source = "./networking"

aws_region = "${var.aws_region}"
tag_name = "${var.tag_name}"
aws_vpc_cidr = "${var.aws_vpc_cidr}"
aws_public_subnet_cidrs = "${var.aws_public_subnet_cidrs}"
}

You can now run the creation of our simple network layer.

$> terraform plan -target="module.networking"
... output ...
Plan: 6 to add, 0 to change, 0 to destroy.

$> terraform apply -target="module.networking"
... output ...
module.networking.aws_subnet.rancher_ha_c: Creation complete
module.networking.aws_subnet.rancher_ha_b: Creation complete
module.networking.aws_subnet.rancher_ha_a: Creation complete
module.networking.aws_route.rancher_ha: Creation complete

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

HA Rancher Server
Next up, let’s set up our networking and use our AMI to stand up
Rancher in HA mode. To start, we automate the HA setup of Rancher. With
the latest update to the HA process in 1.2+, Rancher no longer requires
a bootstrap node or interdependent steps to bring up an HA cluster. The
new steps are:

  • Create an External Database (RDS in this post)
  • Create a free SSL cert for the HA load balancer
  • Use an external load balancer to route between the 3 nodes (ELB in
    this post)
  • Launch HA nodes with an additional --advertise-address flag and
    port forwarding on 9345 (a rough launch command is sketched below)
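
For reference, here is roughly what each HA server node ends up
running, based on the Rancher 1.2-era HA documentation; the database
endpoint, credentials, and node IP below are placeholders you would
substitute with your own RDS and EC2 values:

 # Placeholder values: swap in your RDS endpoint, credentials, and node IP.
 $> sudo docker run -d --restart=unless-stopped \
      -p 8080:8080 -p 9345:9345 \
      rancher/server:v1.2.0 \
      --db-host rancherdb.example.us-east-1.rds.amazonaws.com --db-port 3306 \
      --db-user cattle --db-pass <password> --db-name cattle \
      --advertise-address <node_private_ip>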

With the removal of the bootstrap node, automating the HA Rancher
setup becomes much easier. Let’s begin by creating our external
database.
Create an External Database
Continuing in main.tf, we then stand up our RDS database.

 module "database" {
 source = "./database"

vpc_id = "${module.networking.vpc_id}"
 database_subnet_ids = [
 "${module.networking.vpc_subnet_a}",
 "${module.networking.vpc_subnet_b}",
 "${module.networking.vpc_subnet_c}",
 ]
 database_port = "${var.database_port}"
 database_name = "${var.database_name}"
 database_username = "${var.database_username}"
 database_password = "${var.database_password}"
 database_instance_class = "${var.database_instance_class}"
 }

The database module also creates security groups scoped to the subnets
defined in our networking layer. You can see the complete database
Terraform template on GitHub.

 $> terraform plan -target="module.database"
 ... output ...
 Plan: 3 to add, 0 to change, 0 to destroy.

$> terraform apply -target="module.database"
 ... output ...
 module.database.aws_db_instance.rancherdb: Still creating... (4m20s elapsed)
 module.database.aws_db_instance.rancherdb: Creation complete

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Creating a Free Cert for our ELB
For this walkthrough, we use AWS Certificate Manager (ACM) to manage
the SSL cert for our Rancher HA load balancer. You can look up how to
request a free SSL certificate in the ACM docs. The process of
requesting a cert from ACM contains manual steps to verify the domain
name, so we don’t automate this section. Once provisioned, referencing
the SSL certificate is as simple as adding the following data resource;
you can view the file on GitHub.

 data "aws_acm_certificate" "rancher_ha_cert" {
 domain = "${var.fqdn}"
 statuses = ["ISSUED"]
 }

Creating the HA Server Group
Next up, we create an ELB with its accompanying security groups.
Afterwards, we will add three EC2 hosts for the Rancher servers to
reside on.

 module "rancher_server_ha" {
 source = "./rancher_server_ha"

vpc_id = "${module.networking.vpc_id}"
 tag_name = "${var.tag_name}"

# ssled domain without protocol e.g. moo.test.com
 acm_cert_domain = "${var.acm_cert_domain}"
 # domain with protocol e.g. https://moo.test.com
 fqdn = "${var.fqdn}"

# ami that you created with packer
 ami = {
 us-east-1 = "ami-f05d91e6"
 }

subnet_ids = [
 "${module.networking.vpc_subnet_a}",
 "${module.networking.vpc_subnet_b}",
 "${module.networking.vpc_subnet_c}",
 ]

# database variables to be passed into Rancher Server Nodes
 database_port = "${var.database_port}"
 database_name = "${var.database_name}"
 database_username = "${var.database_username}"
 database_password = "${var.database_password}"
 database_endpoint = "${module.database.endpoint}"
 }

The HA server template creates the security groups, ELBs, and
autoscaling groups. This may take a few moments to stand up, as we need
to wait for the EC2 instances to start.

 $> terraform plan -target="module.rancher_server_ha"
 ... output ...
 Plan: 11 to add, 0 to change, 0 to destroy.

$> terraform apply -target="module.rancher_server_ha"
 ... output ...

Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

Cloud Config on HA Instances
We provision our server nodes with the ./files/userdata.template file,
which fills in variables to create a cloud-init config for each
instance (see the cloud-init docs). The cloud-init config writes a file
called start-rancher.sh and then executes it on instance start. You can
view the details of the file here.
Point DNS at the ELB
Now you can point your DNS at the Rancher ELB we created. Navigate to
the ELB console, where you should see the newly created ELB. Grab its
DNS name and, at your domain name provider, add a CNAME record pointing
to it. For example, in this post, I set up Rancher on
rancher.domain.com and then access the admin panel at
https://rancher.domain.com.
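
Once the CNAME is in place, a quick optional check (using the example
domain above; your ELB hostname will differ) is to query it with dig:

 # The ELB hostname shown is illustrative.
 $> dig +short rancher.domain.com CNAME
 my-rancher-elb-1234567890.us-east-1.elb.amazonaws.com.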

Rancher Node Setup

At this point, we have already setup the Rancher server and we can add
custom hosts or use the Rancher-provided hosts drivers. If we want to
try more automation, here is a potential way to automate autoscaled
slave node clusters on AWS. From the Rancher UI, we follow the
documentation for adding custom
hosts
. We will
need to grab a few variables to pass into our cluster setup template. At
the time of writing the custom host command is:

 sudo docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:${rancher_agent_version} ${rancher_reg_url}

# Example at the time of writing.
 rancher_reg_url = https://rancher.domain.com/v1/scripts/AAAAABBBBB123123:150000000000:X9asiBalinlkjaius91238
 rancher_agent_version = v1.2.0

After pulling those variables, we can then run the node creation step.
Since this is a separate process from setting up HA, the creation of
the Rancher nodes is initially commented out in the file.

 $> terraform plan -target="module.rancher_nodes"
 ... output ...
 Plan: 3 to add, 0 to change, 0 to destroy.

$> terraform apply -target="module.rancher_nodes"
 ... output ...

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

After a few moments, you should see your Rancher host show up in your
Rancher UI.

Summary

That was a lot of steps, but with this template we can now build each
Terraform component separately and iterate on the infrastructure
layers, much like how Docker images are built up. The nice thing about
all these components is replaceability. If you don’t like the choice of
OS for the Docker host, you can change the Packer configuration and
update the AMI ID in Terraform. If you don’t like the networking layer,
take a peek at the Terraform script and update it. This setup is just a
starter template to get Rancher up and your projects started. By no
means is this the best way to stand up Rancher, but the layout of the
Terraform should allow for continuous improvement as your project takes
off.

Additional Improvements

  • The VPC shown here resides in the public subnet (for simplicity),
    but if you want to secure the network traffic between the database
    and servers, you’ll need to update the networking (this would
    require a rebuild).
  • We could move the creation of the Rancher nodes into a separate
    Terraform project instead of commenting it out.
  • We should also look at backing up the Terraform state in case we
    lose the local state folder; a bit more setup to store state in S3
    would help for those who plan to use this in production (see the
    sketch after this list).
  • EFS is also a candidate for adding distributed file system support
    to our various nodes.
  • Cross region RDS
    replication: terraform-community-modules/tf_aws_rds
  • Use Terraform VPC module managed by the terraform
    community: terraform-community-modules/tf_aws_vpc
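
As a minimal sketch of the S3 state idea above (assuming a Terraform
version from this era that still ships the remote config command, and a
hypothetical bucket name; newer versions declare a backend "s3" block
in configuration instead), state can be pushed to S3 like this:

 # Bucket and key are placeholders; create the bucket beforehand.
 $> terraform remote config -backend=s3 \
      -backend-config="bucket=my-rancher-tfstate" \
      -backend-config="key=rancher/terraform.tfstate" \
      -backend-config="region=us-east-1"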

Collection of Reference Architectures

There are many reference architectures created by community members
and Rancher contributors. They make good further reading after testing
this template, and you can borrow from their structures to improve on
this infrastructure.

Terraform

For advanced networking variants, there is also a CloudFormation
reference here.

Nick Ma is an Infrastructure Engineer who blogs about Rancher and Open
Source. You can visit Nick’s blog, CodeSheppard.com, to catch up on
practical guides for keeping your services sane and reliable with
open-source solutions.


Preview Rancher at AWS re:Invent 2016

Tuesday, 22 November, 2016

In less than a week, over 24,000 developers, sysadmins, and engineers
will arrive in Las Vegas to attend AWS re:Invent (Nov. 28 – Dec 2). If
you’re headed to the conference, we look forward to seeing you there!
We’ll be onsite previewing enhancements included in our upcoming Rancher
v1.2 release:

Support for the latest versions of Kubernetes and Docker: As we’ve
previously mentioned, we’re committed to supporting multiple container
orchestration frameworks, and we’re eager to show off our latest
support for Docker Native Orchestration and Kubernetes.

Better load balancing for AWS: We recently completed additional work
on the external load balancer for AWS, which is also available via the
Rancher catalog.

An update to our storage and networking services: While we’ve
covered some changes to how we handle container networking and storage
in our prior meetups,
we’ve built more improvements to show off since then.

Come say hello and check out the newest features! The Rancher team will
be at booth #110, with great swag and prizes, and we’re always excited
to meet our users and community in person. Not attending AWS re:Invent?
Stay up-to-date with Rancher and with our upcoming release by following
us on Twitter @Rancher_Labs.