It is tire-kicking time. SUSE Container as a Service Platform (CaaSP) images are available in Amazon EC2, Google Compute Engine, and Microsoft Azure; in fact, they have been there for a while for self-discovery purposes. The latest images can be found with pint and are located in the General Catalog in EC2, the Marketplace in Azure, and are launchable from the suse-byos-cloud project in GCE.
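Besides the pint command line tool, the image data is also served by the SUSE Public Cloud Information Service as a REST API. A minimal sketch of looking up CaaSP images that way, assuming the documented `/v1/<provider>/images/active.json` endpoint shape; the sample payload below is hand-written for illustration, not real image data:

```python
import json

# The pint data is also exposed as a REST service; this URL pattern follows the
# documented endpoint shape, but verify against the current API docs before
# relying on it.
PINT_URL = "https://susepubliccloudinfo.suse.com/v1/{provider}/images/active.json"

def caasp_images(images, keyword="caasp"):
    """Filter a pint image list for entries whose name mentions CaaSP."""
    return [img for img in images if keyword in img.get("name", "").lower()]

# A hand-written sample in the shape pint returns; names and fields are
# illustrative only.
sample = json.loads("""
{"images": [
  {"name": "suse-caasp-2-1-cluster-byos-v20180523", "state": "active"},
  {"name": "suse-sles-12-sp3-v20180601", "state": "active"}
]}
""")

for img in caasp_images(sample["images"]):
    print(img["name"])
```

In a live query you would fetch `PINT_URL.format(provider="amazon")` (or `"google"`, `"microsoft"`) and feed the decoded JSON into the same filter.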
While version 3.0 of CaaSP has been released to the data center, the CaaSP images in the Public Cloud are based on CaaSP 2.1. We are working through the changes to bring CaaSP 3.0 images to the Public Cloud in the not-too-distant future. This will also include an upgrade path from 2.1 to 3.0.
What can you do with CaaSP 2.1 in the Public Cloud? Well, you can run a Kubernetes cluster, yay. To get started, see the documentation. Getting started is a matter of running a script and pushing a few buttons in the Velum UI. Note that credential handling differs for each framework and requires your attention. We tried to make cluster setup as painless as possible. A route to the Internet and to the pint server must be available during cluster setup. The images are BYOS (Bring Your Own Subscription), so you will want to register with SCC to receive updates for your cluster. SUSE Cloud Application Platform also works on top of CaaSP in the Public Cloud but still requires some fiddling; I did mention to kick the tires, right?
Technical details are in the documentation, so I do not want to delve into them here. Rather, I want to focus on where this is going.
The goal for CaaSP is reasonably straightforward to formulate: anywhere, anytime. Meaning, run CaaSP in AWS, Azure, GCP, in your data center on top of hardware, or on top of SUSE OpenStack Cloud, and get a consistent experience. This is of course not achieved overnight, and we have a long list of plans to get us there. From a Public Cloud perspective, that includes integration of cloud-native features to make it easy to use the framework-native load balancer, and to integrate with auto scaling and storage, for example.

Another Public Cloud specific feature is the development of a rolling-update process. Today, when the cluster gets updated, a worker node gets updated via transactional updates, containers get evacuated, the node gets rebooted, and then the node gets repopulated. Rolling updates will instead stand up a new instance in the given cloud framework, migrate the containers to the new instance, and then terminate the old instance. The list of things we have in mind is long and the outlook is very exciting, so hop on the train and come along for the ride.
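The rolling-update flow described above (launch a replacement, migrate the containers, terminate the old instance) can be sketched as follows. This is purely an illustration of the ordering, with hypothetical stand-in functions; it is not the actual CaaSP implementation or any real cloud API:

```python
# Hypothetical stand-ins for cloud-framework and cluster operations; each call
# records what it did so the ordering is visible.

def launch_instance(cloud):
    """Stand-in: provision a fresh worker node in the given cloud framework."""
    return f"new-node-in-{cloud}"

def migrate_containers(old_node, new_node, log):
    """Stand-in: evacuate workloads from the old worker to the new one."""
    log.append(("migrate", old_node, new_node))

def terminate_instance(node, log):
    """Stand-in: tear down the replaced worker."""
    log.append(("terminate", node))

def rolling_update(workers, cloud, log):
    """Replace workers one at a time: launch, migrate, then terminate."""
    for old in workers:
        new = launch_instance(cloud)
        migrate_containers(old, new, log)
        terminate_instance(old, log)

log = []
rolling_update(["worker-1", "worker-2"], "ec2", log)
print(log)
```

The key property, in contrast to today's in-place update, is that a node's workloads always have a running replacement to land on before the old instance goes away.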