- Enables flexible deployment of containerized applications
- Cuts cluster deployment times by 97%, from three months to just two days
- Streamlines and standardizes development and deployment of applications across different countries
- Workload scheduling places containers according to their needs while improving resource utilization
- Service discovery and load balancing provide an IP address for your service and distribute load behind the scenes
- Health monitoring and management support application self-healing and ensure application availability
- Non-disruptive rollout/rollback of new applications and updates enables frequent change without downtime
- Application scaling up and down accommodates changing load
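Because SUSE CaaS Platform is built on Kubernetes, the capabilities above map onto familiar Kubernetes objects. As an illustrative sketch (the application name, image, and values here are hypothetical, not from SUSE documentation), a single Deployment manifest can express scaling, non-disruptive rollout, scheduling requirements, and health monitoring:

```yaml
# Hypothetical Deployment manifest illustrating the capabilities above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # hypothetical application name
spec:
  replicas: 3                    # scaling: run three instances; adjust to match load
  strategy:
    type: RollingUpdate          # non-disruptive rollout: replace pods gradually
    rollingUpdate:
      maxUnavailable: 1          # keep most pods serving during an update
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:1.0   # hypothetical image
        resources:
          requests:
            cpu: 250m            # workload scheduling: the scheduler places pods
            memory: 256Mi        # on nodes with enough free CPU and memory
        livenessProbe:           # health monitoring: restart the container if
          httpGet:               # this endpoint stops responding
            path: /healthz
            port: 8080
```

A matching Service object would then give the Deployment a stable IP address and distribute traffic across its pods, which is the service discovery and load balancing capability listed above.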
SUSE CaaS Platform is deployed as a cluster of server nodes, each of which is intended to host some number of containers as workloads. Each node must therefore meet minimal requirements for running the CaaS Platform software, and must also meet additional memory and disk requirements of the containers that will run simultaneously on the node.
At a minimum, each physical server included in the CaaS Platform cluster must meet the following system requirements:
SUSE CaaS Platform can run on bare metal servers, in privately operated virtual machines, and on private and public cloud infrastructure resources. It supports the use of a wide range of popular storage options as well.
Looking for applications and services for your SUSE CaaS Platform environment? Find Ready for CaaS Platform partner applications in the SUSE Partner Software Catalog.