
One of the first questions in any discussion about cluster sizing tends to be “How many containers are you running?” While this is a good data point (especially if you are pushing the scheduler to its limits), it doesn’t tell the whole story.

We tend to abstract a container as a homogeneous building block that can represent any workload.

This abstraction is great for learning how containers work and for understanding that the system treats all workloads the same way (which is hugely valuable). However, it falls down when we start planning our hardware requirements.

Variety is the spice of life

It becomes pretty apparent, when you look at the type of work being done in your containers, that the diagrams should look more like this:

Credit: Flickr user jaysantiago. Usage under Creative Commons

A cluster might have a bunch of really tiny microservices that hardly take any memory or CPU, moderately sized Java or .NET application stacks, and even huge instances of in-memory data stores, where asking for 100GB of memory for a single instance might not be a crazy request (looking at you, DataHub…). It likely has a mixture of all of these workloads.
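For context, Kubernetes lets each container declare its own CPU and memory requests and limits, and the scheduler uses the requests when deciding placement. A rough sketch of how different those declarations can look on the same cluster (the numbers here are invented purely for illustration):

```yaml
# Illustrative values only -- not a recommendation for any real workload.
# A tiny microservice container:
resources:
  requests:
    cpu: "50m"        # one twentieth of a CPU core
    memory: "32Mi"
  limits:
    cpu: "200m"
    memory: "64Mi"
---
# A large in-memory data store container in the same cluster:
resources:
  requests:
    cpu: "8"
    memory: "100Gi"   # the 100GB-class instance mentioned above
  limits:
    memory: "100Gi"
```

Two containers, three orders of magnitude apart in memory, yet the scheduler reasons about both the same way.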

Container Tetris

Luckily, this is also where a well-configured cluster can save money. It can run these different-sized instances with the least possible overhead, eliminating waste. It’s a bit of a Tetris game, but the Kubernetes scheduler can fit the smaller instances in around the larger ones to fill up each node efficiently.
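To make the Tetris intuition concrete, here is a minimal first-fit-decreasing bin-packing sketch. To be clear, this is a hypothetical toy, not how the Kubernetes scheduler actually works (it uses filtering and scoring plugins across CPU, memory, affinity, and more); it only illustrates how small workloads can fill the gaps around big ones:

```python
# Toy first-fit-decreasing packer: place each workload (memory in GB)
# on the first node with room, opening a new node when none fits.
# Purely illustrative -- NOT the real Kubernetes scheduling algorithm.

def pack(workloads_gb, node_capacity_gb):
    nodes = []  # each entry is a list of workload sizes on one node
    for w in sorted(workloads_gb, reverse=True):  # largest first
        for node in nodes:
            if sum(node) + w <= node_capacity_gb:
                node.append(w)  # small workload fills a gap
                break
        else:
            nodes.append([w])  # no room anywhere: open a new node
    return nodes

# A mix like the one above: one huge in-memory store, a few mid-size
# app stacks, and twenty tiny microservices, on 128GB nodes.
workloads = [100, 8, 8, 4] + [0.5] * 20
nodes = pack(workloads, node_capacity_gb=128)
print(len(nodes))  # prints 2: the tiny services pack in around the giant
```

Counting containers alone would say "24 workloads"; the packing says two nodes, because most of those containers are rounding errors next to the 100GB instance.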

While we can definitely be more efficient overall, it takes some skill to plan out your cluster hardware needs. The (dark) art of scaling that IT teams have practiced forever is more relevant than ever.

Just don’t assume that the number of containers is all you need to know to judge sizing. Done right, our clusters could look more like this:

Credit: Wikimedia Commons

If you are having issues with sizing and scaling your workloads, SUSE can help you get started with containers. In fact, the SUSE CaaS Platform and SUSE CAP solutions are backed by SUSE Support, which provides your business with peace of mind!  As a Kubernetes Certified Service Provider, the SUSE Global Services team can provide a range of consulting and premium support services offerings that let you focus on your actual business instead of getting caught up in the weeds.


This entry was posted Tuesday, 16 April, 2019 at 4:12 pm