How Barmenia Delivers Innovative Services to Millions of Policyholders Faster

Monday, April 17, 2023

The Barmenia insurance group is transforming its application landscape to offer policyholders an optimal customer experience across all channels. The move to modern, container-based applications initially presented the IT team with challenges, however. With Kubernetes and Rancher Prime, Barmenia was able to overcome these challenges and accelerate the development of innovative digital services, without compromising on data security.

Barmenia is an independent insurance group headquartered in Wuppertal, employing around 4,300 people across Germany. The group's product range extends from health and life insurance to accident and motor insurance, as well as liability and property insurance.

The insurance group's business is increasingly shifting into the digital world. "To remain successful in the future, we have to continuously improve the digital customer experience and make it as easy as possible for our policyholders to communicate with us," says Daniel Oberdick, Systems Engineer and DevOps specialist at Barmenia.

Self-service tools such as the online portal "Meine Barmenia" and the mobile "BarmeniaApp" are continuously being extended with new features. To bring innovative services to market as quickly as possible, the company looked into DevOps approaches and container technologies early on. "The concept of container technology won us over from the start," Oberdick reports. "However, we also saw that solutions such as Docker alone are not sufficient for running enterprise applications in production." The IT team therefore looked for a solution that could meet the large insurance group's requirements for efficiency, availability and security.

Rancher Prime Meets Expectations

After evaluating several solutions, the insurance group ultimately decided to use Rancher Prime. "In a lab environment, our IT partner SVA System Vertrieb Alexander GmbH gave us a very impressive demonstration of the added value the platform offers for managing Kubernetes clusters," Oberdick explains.

Barmenia's IT team rated the ease of use of Rancher Prime very positively. Through the intuitive GUI, all tasks involved in managing Kubernetes clusters can be carried out centrally and efficiently, from provisioning and monitoring to backup.

Rancher Prime was also able to meet all of Barmenia's security requirements. The platform simplifies the enforcement of consistent security policies across all clusters and supports role-based access control.

"A major benefit of Rancher Prime is that we can separate the individual business units in a granular way," says Oberdick. "We created a dedicated project in Rancher Prime for each business unit. Within these projects, staff can then create namespaces on their own and assign resources to their workloads as needed. This combination of segmentation and self-service makes operations both more secure and more efficient, and it is something Kubernetes does not provide out of the box."

More Efficient Development Processes and Faster Onboarding of New Specialists

Barmenia had already been working on improving DevOps efficiency in the past and had built its own automation framework for this purpose. The goal was to free developers as far as possible from tasks that have nothing to do with actual programming.

"With Rancher Prime, we are now going a big step further and offering developers a modern container platform that they can truly consume as a service," Oberdick emphasizes. "They can concentrate fully on writing their code and then bring it onto the platform with just a few clicks, without having to deal with the configuration details of the infrastructure."

The extensive automation of deployments helps Barmenia deliver new features faster, and thereby gain competitive advantages in the market. At the same time, according to Oberdick, the insurance group also benefits from the automated and standardized workflows when onboarding new developers.

"Ultimately, Rancher Prime opens up entirely new possibilities for us to increase the pace of our digital business transformation and offer our customers even better service," Oberdick summarizes. "We can bring innovative applications to market faster and significantly reduce the setup time for new technologies."

Read the full story here to find out how much time Barmenia saves today when provisioning Kubernetes clusters, and what the insurance group's further plans for its cloud-native transformation look like.

Accelerating Machine Learning with MLOps and FuseML: Part One

Sunday, July 25, 2021

Building successful machine learning (ML) production systems requires a specialized re-interpretation of the traditional DevOps culture and methodologies. MLOps, short for machine learning operations, is a relatively new engineering discipline and a set of practices meant to improve the collaboration and communication between the various roles and teams that together manage the end-to-end lifecycle of machine learning projects.

Helping enterprises adapt and succeed with open source is one of SUSE’s key strengths. At SUSE, we have the experience to understand the difficulties posed by adopting disruptive technologies and accelerating digital transformation. Machine learning and MLOps are no different.

The SUSE AI/ML team has recently launched FuseML, an open source orchestration framework for MLOps. FuseML brings a novel, holistic interpretation of the practices advocated by MLOps to help organizations reshape the lifecycle of their Machine Learning projects. It facilitates frictionless interaction between all roles involved in machine learning development while avoiding massive operational changes and vendor lock-in.

This is the first in a series of articles that provides a gradual introduction to machine learning, MLOps and the FuseML project. We start here by rediscovering some basic facts about machine learning and why it is a fundamentally atypical technology. In the next articles, we will look at some of the key MLOps findings and recommendations and how we interpret and incorporate them into the FuseML project principles.

MLOps Overview

Old habits that need changing can be difficult to unlearn, sometimes even more difficult than learning everything anew. That's true for people, and it's even truer for teams and organizations, where the combined inertia makes important changes orders of magnitude more difficult to implement.

With the AI hype on the rise, organizations have been investing more and more in machine learning to make better and faster business decisions or automate key aspects of their operations and production processes. But if history taught us anything about adopting disruptive software technologies like virtualization, containerization and cloud computing, it’s that getting results doesn’t happen overnight. It often requires significant operational and cultural changes. With machine learning, this challenge is very pronounced, with more than 80 percent of AI projects failing to deliver business outcomes, as reported by Gartner in 2019 and repeatedly confirmed by business analysts and industry leaders throughout 2020 and 2021.

Naturally, following this realization about the challenges of using machine learning in production, a lot of effort went into investigating the “whys” and “whats” about this state of affairs. Today, the main causes of this phenomenon are better understood. A brand new engineering discipline – MLOps – was created to tackle the specific problems that machine learning systems encounter in production.

The recommendations and best practices assembled under the MLOps label are rooted in the recognition that machine learning systems have specialized requirements that demand changes in the development and operational project lifecycle and organizational culture. MLOps doesn’t propose to reinvent how we do DevOps with software projects. It’s still DevOps but pragmatically applied to machine learning.

MLOps ideas can be traced back to the defining characteristics of machine learning. The remainder of this article is focused on revisiting what differentiates machine learning from conventional programming. We’ll use the fundamental insights in this exercise as stepping stones when we dive deeper into MLOps in the next chapter of this series.

Machine Learning Characteristics

Solving a problem with traditional programming requires a human agent to formulate a solution, usually in the form of one or more algorithms, and then translate it into a set of explicit instructions that the computer can execute efficiently and reliably. Generally speaking, conventional programs, when correctly developed, are expected to give accurate results and to have highly predictable and easily reproducible behaviors. When a program produces an erroneous result, we treat that as a defect that needs to be reproduced and fixed. As a best practice, we also process conventional software through as much testing as possible before deploying it in production, where the business cost incurred for a defect could be substantial. We rely on the results of proactive testing to give us some guarantees about how the program will behave in the future, another characteristic derived from the predictability aspect of conventional software. As a result, once released, a software product is expected to take significantly less effort to maintain compared to development.

Some of these statements are highly generic; one might say they could describe products in general, software or otherwise. What they have in common is that none of them remains entirely valid when applied to machine learning.

Machine learning algorithms are distinguished by their ability to learn from experience (i.e., from patterns in input data) to behave in a desired way, rather than being programmed to do so through explicit instructions. Human interaction is only required during the so-called training phase when the ML algorithm is carefully calibrated and data is fed into it, resulting in a trained program, also called an ML model. With proper automation in place, it may even seem that human interaction could be eliminated. Still, as we’ll see later in this post, it’s just that the human responsibilities shift from programming to other activities, such as data collection and processing and ML algorithm selection, tuning and monitoring.

Machine learning can be used to solve a specific class of problems, where:

  • the problem is extremely difficult to solve mathematically or programmatically, or it has only solutions that are too computationally expensive to be practical
  • a fair amount of data exists (or can be generated) containing a pattern that an ML algorithm can learn

Let’s look at two examples, similar but situated at opposite ends of the spectrum as far as utility is concerned.

Sum of Two Numbers

A very simple example, albeit with no practical application whatsoever, is training an ML model to calculate the sum of two real numbers. Doing this with conventional programming is trivial and always yields very accurate results.

Training and using an ML model for the same task could be summarized by the following phases:

Data Preparation

First, we need to prepare the input data that will be used to train the ML model. Generally speaking, training data is structured as a set of entries. Each entry associates a concrete set of values used as input for the target problem with the correct answer (sometimes known as a target or label in ML terms). In our example, each entry maps a pair of real input values (X, Y) to the desired result (X+Y) that we expect the model to learn to compute. For this purpose, we can generate the training data entirely using conventional programming. Still, it's often the case with machine learning that training data is not readily available and is expensive to acquire and prepare. The code used to generate the input dataset could look like this:

import numpy as np
# start with a single training entry: the input pair (1.0, 1.0) and its target 2.0
train_data = np.array([[1.0,1.0]])
train_targets = np.array([2.0])
# add more entries: each one maps an input pair (X, Y) to the target X+Y
for i in range(3,10000,2):
  train_data = np.append(train_data,[[i,i]],axis=0)
  train_targets = np.append(train_targets,[i+i])

Deciding what kind of data is needed, how much of it and how it needs to be structured and labeled to yield acceptable results during ML training is the realm of data science. The data collection and preparation phase is critical to ensuring the success of ML projects. It takes experimentation and experience to find out which approach yields the best result, and data scientists often need to iterate several times through this phase and improve the quality of their training data to raise the accuracy of ML models.

Model Training

Next, we need to define the ML algorithm and train it (also known as fitting) on the input data. For our goal, we can use an Artificial Neural Network (ANN) suitable for this type of problem (regression). The code for it could look like this:

import tensorflow as tf
from tensorflow import keras
import numpy as np

# a small fully connected network with two hidden layers, suitable for regression
model = keras.Sequential([
  keras.layers.Flatten(input_shape=(2,)),
  keras.layers.Dense(20, activation=tf.nn.relu),
  keras.layers.Dense(20, activation=tf.nn.relu),
  keras.layers.Dense(1)
])

# mean squared error as the loss to minimize, mean absolute error as a metric
model.compile(optimizer='adam',
  loss='mse',
  metrics=['mae'])

# fit the model to the training data for 10 passes (epochs) over the dataset
model.fit(train_data, train_targets, epochs=10, batch_size=1)

Similar to data preparation, deciding which ML algorithm to use and what values to configure for its parameters (e.g., the neural network architecture, optimizer, loss, epochs) to get the best results requires specific ML knowledge and iterative experimentation. However, ML is by now mature enough that finding an algorithm that fits the problem is not difficult, especially since countless open source libraries, examples, ready-to-use ML models and documented use-case patterns and recipes are available as starting points for all major classes of problems that can be solved with ML. Moreover, many of the decisions and activities required to develop a high-performing ML model (e.g., hyper-parameter tuning, neural architecture search) can already be fully automated, or at least accelerated through partial automation, by a special category of tools called AutoML.
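
As an illustration of what this iterative experimentation can look like, the following minimal sketch (our own illustrative assumption, not part of the original example) performs a manual search over a single hyper-parameter, the hidden-layer width, reusing the train_data and train_targets arrays from the data preparation step and keeping the candidate with the lowest validation loss. AutoML tools automate far more sophisticated versions of this loop.

import tensorflow as tf
from tensorflow import keras
# hypothetical manual search over one hyper-parameter: the hidden-layer width
best_loss, best_model = float("inf"), None
for width in (5, 20, 50):
  candidate = keras.Sequential([
    keras.layers.Flatten(input_shape=(2,)),
    keras.layers.Dense(width, activation=tf.nn.relu),
    keras.layers.Dense(width, activation=tf.nn.relu),
    keras.layers.Dense(1)
  ])
  candidate.compile(optimizer='adam', loss='mse', metrics=['mae'])
  # hold out 20% of the data so candidates are compared on entries they did not fit
  history = candidate.fit(train_data, train_targets, epochs=10,
                          batch_size=32, validation_split=0.2, verbose=0)
  if history.history['val_loss'][-1] < best_loss:
    best_loss, best_model = history.history['val_loss'][-1], candidate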

Model Prediction

We now have a trained ML model that we can use to calculate the sum of any two numbers (i.e. make predictions):

def sum(x, y):
  # ask the trained model to predict the result for the input pair (x, y)
  s = model.predict([[x, y]])[0][0]
  print("%f + %f = %f" % (x, y, s))

The first thing to note is that the summation results produced by the trained model are not at all accurate. It’s fair to say that the ML model is not behaving like it’s calculating the result, but more like it’s giving a ballpark estimation of what the result might be, as shown in this set of examples:

# sum(2000, 3000)
2000.000000 + 3000.000000 = 4857.666992
# sum(4, 5)
4.000000 + 5.000000 = 9.347977

Another notable characteristic is that, as we move further away from the pattern of values on which the model was trained, the model's predictions get worse. In other words, the model is better at estimating summation results for input values that are more similar to the examples on which it was trained:

# sum(10, 10000)
10.000000 + 10000.000000 = 8958.944336
# sum(1000000, 4)
1000000.000000 + 4.000000 = 1318969.375000
# sum(4, 1000000)
4.000000 + 1000000.000000 = 895098.750000
# sum(0.1, 0.1)
0.100000 + 0.100000 = 0.724608
# sum(0.01, 0.01)
0.010000 + 0.010000 = 0.549576

This phenomenon is well known to ML engineers. If not properly understood and addressed, it can lead to ML specific problems that take various forms and names:

  • bias: using incomplete, faulty or prejudicial data to train ML models that end up producing biased results
  • training-serving skew: training an ML model on a dataset that is not representative of the real-world conditions in which the ML model will be used
  • data drift, concept drift or model decay: the degradation, in time, of the model quality, as the real-world data used for predictions changes to the point where the initial assumptions on which the ML model was trained are no longer valid

In our case, it’s easy to see that the model is performing poorly due to a skew situation: we inadvertently trained the model on pairs of equal numbers, which is not representative of the real-world conditions in which we want to use it. Our model also completely missed the point that addition is commutative, but that’s not surprising, given that we didn’t use training data representative of this property either.
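
As a purely illustrative sketch (our assumption about one possible fix, not something covered in the original example), the skew could be reduced by regenerating the training data from independently sampled pairs, so that unequal operands and both orderings are represented, and then retraining the model defined in the training step:

import numpy as np
# sample X and Y independently so the data also contains unequal pairs
rng = np.random.default_rng(42)
train_data = rng.uniform(0.0, 10000.0, size=(5000, 2))
train_targets = train_data.sum(axis=1)
# retrain the model from the training step on the less skewed data
model.fit(train_data, train_targets, epochs=10, batch_size=32)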

When developing ML models to solve complex, real-world problems, detecting and fixing this type of problem is rarely that simple. Machine learning is as much an art as it is a science and engineering endeavor.

In training ML models, there is usually also a validation step involved, where the labeled input data is split, and part of it is used to test the trained model and calculate its accuracy. This step is intentionally omitted here for the sake of simplicity. The full exercise of implementing this example, with complete code and detailed explanations, is covered in this article.
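
For completeness, here is a minimal sketch of what such an omitted split could look like, assuming the train_data, train_targets and model objects defined earlier (the exact procedure in the referenced article may differ):

import numpy as np
# shuffle the entries, then hold out 20% of them for validation
idx = np.random.permutation(len(train_data))
split = int(0.8 * len(train_data))
fit_idx, val_idx = idx[:split], idx[split:]
model.fit(train_data[fit_idx], train_targets[fit_idx], epochs=10, batch_size=1)
# evaluate the trained model on entries it has never seen
val_loss, val_mae = model.evaluate(train_data[val_idx], train_targets[val_idx])
print("validation MSE: %f, MAE: %f" % (val_loss, val_mae))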

The Three-Body Problem

At the other end of the spectrum is a physics (classical mechanics) problem that inspired one of the greatest mathematicians of all time, Isaac Newton, to invent an entirely new branch of mathematics, one that is nowadays a source of constant frustration among high school students: Calculus.

Finding the solution to the set of equations that describe the motion of two celestial bodies (e.g., the Earth and the Moon) given their initial positions and velocities is already a complicated problem. Extending the problem to include a third body (e.g., the Sun) complicates things to the point where a solution cannot be found, and the entire system starts behaving chaotically. With no mathematical solution in sight, Newton himself felt that supernatural powers had to be at play to account for the apparent stability of our solar system.

This problem and its generalized form, the many-body problem, are so famous because solving them is a fundamental part of space travel, space exploration, cosmology and astrophysics. Partial solutions can be calculated using analytical and numerical methods, but doing so requires immense computational power.

All life forms on this planet deal with gravity constantly. We are well equipped to learn from experience, and we're able to make fairly accurate predictions about its effects on our bodies and the objects we interact with. It is not entirely surprising, then, that Machine Learning can estimate the motion of objects under the effect of gravity.

Using Machine Learning, researchers at the University of Edinburgh have been able to train an ML model capable of solving the three-body problem 100 million times faster than traditional means. The full story covering this achievement is available here, and the original scientific paper can be read here.

Solving the three-body problem with ML is similar to our earlier trivial example of adding two numbers together. The training and validation datasets are also generated through simulation, and an ANN is also involved here, albeit one with a more complex structure. The main differences are the complexity of the problem and ML’s immediate practical application to this use case. However, the observations previously stated about general ML characteristics apply equally to both cases, regardless of complexity and utility.

Conclusion

We haven’t even begun to look at MLOps in detail. Still, we can already identify and summarize key takeaways representative of ML in general just by comparing classical programming to Machine Learning:

  1. Not all problems are good candidates for machine learning
  2. The process of developing ML models is iterative, exploratory and experimental
  3. Developing a machine learning system requires dealing with new categories of artifacts with specialized behaviors that don’t fit the patterns of conventional software
  4. It’s usually not possible to produce fully accurate results with ML models
  5. Developing and working with machine learning based systems requires a specialized set of skills, in addition to those needed for traditional software engineering
  6. Running ML systems in the real world is far less predictable than what we’re used to with regular software
  7. Finally, developing ML systems would be next to impossible without specialized tools

Machine Learning characteristics summarized here are reflected in the MLOps discipline and distilled in the principles on which we based the FuseML orchestration framework project. The next article will give a detailed account of MLOps recommendations and how an MLOps orchestration framework like FuseML can make developing and operating ML systems an automated and frictionless experience.

SUSE Cloud Application Platform 2.0 Accelerates Software Delivery and Increases Business Agility

Wednesday, June 24, 2020

In my conversations with software professionals and business leaders, delivering outcome-driven software projects that accelerate innovation is at the top of the agenda. This is exactly what SUSE Cloud Application Platform 2.0, which we are introducing today, addresses. SUSE Cloud Application Platform 2.0 provides full automation of the application lifecycle and enables companies to shorten their release cycles from months to minutes. This significantly increases business agility and allows organizations to continuously improve customer satisfaction.

The new Kubernetes Operator in this release makes it easier to deploy and manage the Cloud Foundry-based platform on Kubernetes infrastructure. Installation, operation and maintenance of SUSE Cloud Application Platform 2.0 on Kubernetes platforms are simplified, both on-premises and in public clouds. Cloud Foundry users can thus move quickly and pragmatically to a modern, Kubernetes-based architecture.

Improvements to the platform's web-based management console help administrators manage applications more easily and securely. An extensive new workload view provides deeper insight into applications and their deployment settings. This release also improves security through Helm 3, which moves access control directly to Kubernetes. Eliminating Tiller, the server-side component used in Helm 2, reduces the theoretical attack surface.

SUSE has long been working closely with the open source community to bring the highly productive developer experience of Cloud Foundry to Kubernetes, the container management platform that is now the industry standard. SUSE already offers a containerized implementation of the Cloud Foundry Application Runtime that runs on Kubernetes, unified management of Cloud Foundry and Kubernetes deployments through a single console, supported operation on any certified Kubernetes platform, and a technology preview of Kubernetes-native scheduling.

SUSE Cloud Application Platform 2.0 extends this collaboration by integrating several upstream technologies that SUSE has contributed to the Cloud Foundry community. These include KubeCF, a containerized version of the Cloud Foundry Application Runtime designed to run on Kubernetes, and Project Quarks, a Kubernetes Operator that automates the deployment and management of Cloud Foundry on Kubernetes.

Because SUSE builds on the upstream KubeCF project and integrates new Kubernetes capabilities regularly and incrementally, Cloud Foundry users can take advantage of new Kubernetes features immediately and accelerate their transition to a Kubernetes architecture. At the Cloud Foundry Summit today and tomorrow, you can learn more about how SUSE delivers Cloud Foundry as part of the SUSE Cloud Application Platform.

At SUSE, we are excited about the new capabilities introduced with SUSE Cloud Application Platform 2.0. Try them for yourself! If you are a developer, sign up for the Developer Sandbox to get free access to the platform and experience first-hand how simple, fast and effective application delivery can be. If you are a platform administrator, take advantage of SUSE's free, time-limited Accelerate Innovation offer: it gives you SUSE's complete container and application platform stack, including SUSE Cloud Application Platform, SUSE CaaS Platform (our Kubernetes distribution) and SUSE Enterprise Storage, along with support, training and consulting to help you drive your implementation with confidence. Further information shows you how to deliver applications faster.

I look forward to hearing how faster application delivery is driving change in your organization!

Edge Computing: Three Use Cases Driving Exponential Growth

Tuesday, June 23, 2020

Edge computing is currently developing into one of the most important trends in IT. Companies are increasingly relying on distributed IT infrastructures and moving more and more intelligence to the network edge. This allows data to be processed right where it is generated and collected, instead of first transferring it to the cloud or a data center for analysis.

Why Is Edge Computing Important?

This decentralized approach to IT makes it possible to gain insights in real time and to make faster, smarter decisions. Users are also no longer slowed down by high latency or service outages. With edge computing, forward-looking companies can launch new services that massively improve the customer experience, increase productivity, save costs and create significant competitive advantages.

With all these potential benefits, it is no wonder that industry experts and analysts predict a big future for edge computing. According to IDC, companies will deploy more than 50 percent of their new IT infrastructure at the edge by 2023. The number of applications running at the network edge is expected to grow by 800 percent compared to today.

These predictions were made before the Covid-19 pandemic spread. But the global crisis has, if anything, further increased the demand for distributed IT systems. More and more users are currently working from home, which also means that a great deal of data is being captured, analyzed, managed and controlled remotely. The forecast growth rates are therefore very likely to materialize.

Developing Solutions for the Edge

The trend toward edge computing does not make cloud services or on-premises infrastructure in the data center obsolete; quite the opposite. The entire IT ecosystem is growing and evolving as a result.

Edge computing is based on a set of converging technologies that can be combined into innovative solutions. These include new 5G networks, HPC systems, AI and ML capabilities, IoT devices, edge security, distributed private clouds and much more.

Edge computing thus opens up enormous scope for new digital services and applications. Some target consumer markets exclusively, while others address critical business applications. The following three use cases will accelerate the growth of edge computing over the next few years:

Autonomous Vehicles

Modern cars have gradually evolved into "mobile data centers" in recent years, and every automotive manufacturer is now also a software developer. Cameras, sensors and other IT-supported systems in the vehicle capture and process a wealth of information today. Engine control, driver assistance, safety and entertainment systems are no longer conceivable without digital intelligence.

The trend toward autonomous driving will require significantly more IT power in the car over the next few years. Experts estimate that a self-driving vehicle processes around 40 terabytes of data in eight hours of driving. Some of this data is relatively uncritical and can be transferred to the cloud for analysis, but other data must be evaluated in real time on board the vehicle. Otherwise, connection failures or excessive latency when transferring data to the cloud could endanger the safety and health of the passengers.

To help shape the mobility of tomorrow and create a powerful foundation for the software-defined car, SUSE recently began working closely with Elektrobit, a visionary global provider of embedded and connected software solutions and services for the automotive industry. The goal of this strategic partnership is to deliver a next-generation Linux-based operating system for the intelligent vehicles of the future.

Digital Healthcare

IoT devices are becoming increasingly important for real-time monitoring of patient data. Experts expect this market to reach a volume of 500 billion US dollars by 2025. The reasons are easy to understand. In situations where lives are at stake, every second counts. If doctors have to wait for test results or data analysis, a patient's chances of survival may suffer. If, on the other hand, critical intensive-care data is available immediately, or can be transmitted directly from the ambulance to the hospital, medical staff can take the necessary life-saving measures right away.

Powerful edge technologies make this possible in the first place by significantly increasing the speed of data processing. Simplified clinical workflows also improve the cost efficiency of healthcare. And finally, keeping data local with edge computing can also help increase the security of sensitive patient data.

Smart Cities

In Germany, 77 percent of all people already live in cities or urban areas, and experts expect the urban population to keep growing. According to a recent forecast, Munich, for example, will have 300,000 more inhabitants in just 20 years than it does today.

Only smart digital cities can keep pace with this growth and continue to offer a good quality of life despite increasing density. The infrastructure for this is already emerging. More and more data is being collected by IoT devices in our surroundings, including street lights, video cameras, traffic sensors, electricity meters and a whole range of other devices and systems.

The key to a smart city, however, lies in the ability to analyze all this data in as close to real time as possible. Only on this basis can actions be triggered that make city life more pleasant and safer. Whether it is intelligent traffic and transport management, sustainable use of resources or reliable monitoring of public safety, edge computing always plays a key role. If, for example, emergency situations are detected automatically by intelligent systems, police, ambulance and fire services can be on site faster to deal with the situation.

The development of autonomous vehicles, the intelligent networking of our cities and the digitalization of healthcare are three of the most important drivers of edge computing.

These fields of application have several things in common. All three use cases require enterprise-grade software stacks for business-critical use, with no compromises in performance, reliability, security and support. In addition, edge solutions in these three areas must be individually adaptable and as easy as possible to deploy, manage and maintain.

These factors are exactly where SUSE's greatest strengths lie, and they are exactly why many companies rely on our solution portfolio when developing their edge and IoT solutions. Find out now how SUSE can prepare your organization for the next generation of edge computing.

SUSE Home Office Workplace: Our Offering for Your Business Continuity Strategy

Wednesday, April 8, 2020

Giving employees in the home office secure and reliable access to their business-critical applications is currently the big challenge for companies. Hardware shortages, limited budgets and enormous time pressure make it even harder for many organizations to implement their contingency plans. For smooth working from home, we offer a cost-effective business continuity concept that you can introduce quickly and easily: the SUSE Home Office Workplace.

Working from Home

In normal day-to-day business, business continuity usually receives little attention. As long as all processes run as usual, companies rarely find the time to think about contingency scenarios for exceptional situations like the one we are experiencing today. As a result, there are often only rudimentary plans for dealing with unforeseen events.

The Covid-19 pandemic has radically changed many things within a few weeks. Companies and organizations around the world now have to act very quickly to keep their business running as well as possible. Home workplaces play a key role in this: if as many users as possible can do their work from home, the risk of infection drops, and companies remain able to operate despite the crisis.

Business Continuity: Challenges for IT Managers

Summarized in four points, IT leaders face the following problems:

  • Hardware shortages: The market can barely meet the sudden surge in demand for suitable hardware. Business notebooks or desktop PCs for a large number of home office users are often not available at short notice.
  • Limited budgets: Many organizations have no budget for the unplanned procurement of hardware and licenses. Organizations also carry a high investment risk, since it is not yet foreseeable how long the additional equipment will be needed at all.
  • Time pressure: Every day counts. The longer it takes for employees in the home office to become productive again, the greater the losses for the company.
  • Security: The connection from the home office to the company network should be secure and should not require special, and therefore expensive, hardware and software.

Many of these challenges can be overcome with open source software solutions. For the short-term provision of home office workplaces, we have developed the SUSE Home Office Workplace. With it, companies can implement a business continuity strategy that is tailored precisely to their needs and addresses the areas described above:

  • Linux operating systems deliver good performance even on hardware with limited system resources, so companies can easily turn their retired PCs and notebooks into secure endpoints for home office users. SUSE Linux Enterprise Server for Arm even runs on a Raspberry Pi.
  • You can evaluate the solution free of charge and start building your contingency environment immediately. Later, thanks to SUSE's subscription model, you pay only for the PCs or laptops you actually have in use, and only for as long as you need the systems. There are no one-time license costs.
  • Freely available manuals and best practices also enable a quick migration of the endpoints to a Linux desktop.

What Do Users Need to Work Securely and Productively at Home?

Powerful open source software solutions are also available for securely connecting home office workplaces to the company over the internet. SUSE recommends the VPN solution OpenVPN for this purpose. It gives users very easy, protected access to all resources on the corporate network.

As local applications, the SUSE Linux desktop includes a complete LibreOffice suite as well as free software for email, collaboration and instant messaging. Employees can also access Microsoft Office 365 applications via the web browser. Connections to virtual desktops from Microsoft or other providers can likewise be established quickly with the appropriate client software. This gives employees in the home office all the applications they need to be productive right away.

Managing and Configuring Remote Workplaces Centrally

With the SUSE Home Office Workplace, IT departments can also manage a larger number of remote workplaces efficiently. The SUSE Manager management solution simplifies automated installation and the creation of disk images and also enables central configuration management of all clients. In addition, administrators always have a complete overview of all required updates and patches. Should the increase in remote workplaces require additional virtual servers, load can be distributed in the data center using SUSE Linux Enterprise Server and the open source virtualization solution KVM.

In summary, the SUSE Home Office Workplace offers the following building blocks and solutions for a business continuity concept tailored to your needs:

  • Rapid provisioning of home office workplaces with SUSE Linux Enterprise Desktop and SUSE Linux Enterprise Server for Arm on the Raspberry Pi.
  • Integrated OpenVPN solution for securely connecting remote users to the company
  • Central configuration management and image management with SUSE Manager
  • A free 60-day evaluation of SUSE Linux Enterprise Desktop, SUSE Linux Enterprise Server and SUSE Manager; after that, under our subscription model, you pay only for the systems you actually have in use.
  • Questions and help at kontakt-de@suse.com

We have summarized further information about the SUSE Home Office Workplace in our QuickStart guide. Download the document now and start implementing right away. If you have any questions, please contact kontakt-de@suse.com.

The Business Case for Container Adoption

Tuesday, April 2, 2019

Developers often believe that demonstrating the need for an IT-based solution should be very easy: they point to the business problem that needs a solution, briefly explain what technology should be selected, and the organization provides the funds, staff, and computer resources. Unfortunately, this is seldom how the process actually works.

Developing a Business Case for New Technology Isn’t Always Easy

Most organizations require that both a business and a technical case be made before a project can be approved. Depending on the size and culture of the organization, building both cases can be a long, and sometimes arduous, process.

Part of the challenge developers face can be summed up simply: business decision-makers and technical decision-makers have different priorities, use different metrics, and, in short, think differently.

Business Managers Think in Different Terms Than Developers

Business decision-makers are almost always thinking in terms of the investment required, the costs expected, and the revenues the organization can expect from the successful completion of the project, not the technical merit, the tools selected, or the development methodology that will be used to complete it.

They may use technology every day, but many think of it as a means to an end, not something they enjoy using.

As David Ingram pointed out in his recent article on business decision making, managers often use a 7-step process:

  1. Identify the problem
  2. Seek information to clarify what’s actually happening
  3. Brainstorm potential solutions
  4. Weigh the alternatives
  5. Choose an alternative
  6. Implement the chosen plan
  7. Evaluate the outcome

You’ll note that the best technology, the best approach to development, the best platform, how to achieve the best performance, how to achieve the highest levels of availability, and other technical factors that technologists consider may be seen as secondary issues. From the perspective of a business decision-maker, the extensive work that constitutes this type of evaluation might all be wrapped up into the “weigh the alternatives” step.

Factors of the Business Decision

Let’s break this down a bit. Business decision-makers will consider the overall investment required and weigh it against the potential benefits that might be received. This includes a number of factors that may not appear to be directly associated with a specific project.

They will also be considering whether this is the right project to be addressing at this time or whether other issues are more pressing.

While working with an executive at a major IT supplier, I was once told “solving the wrong problem, no matter how efficiently and well-done, is still solving the wrong problem.”

Here are a few of the factors they are likely to consider:

  • Staff: the number of staff, the levels of expertise, the amount of time they’ll need to be assigned to the project, the business overhead associated with having those people on staff, whether they should be full-time, part-time, or contractors
  • Costs: the costs of all resources required, including:
    • Data center operational costs: floor space, power, air conditioning, networking, maintenance, real estate
    • Systems: number of systems, memory required, external storage, maintenance
    • Software: software licenses, software maintenance
  • Time to market: can this project be completed quickly enough to address the needs of the market? This is sometimes called “time to profit.”
  • Revenues: will the project directly or indirectly lead to increased revenues?

If the costs of doing the project outweigh the projected revenues that can be attributed to the completion of the project, the business decision-makers are likely to look for another solution which may include not doing it at all, purchasing a packaged software product that will solve the problem in a general way, or subscribing to an online service that will address the issue.

In the end, business decision-makers will be focused on increasing the organization’s revenues and decreasing its costs.

What Developers Think About

Developers, on the other hand, tend to think more about the technical problem in front of them and how it can be solved.

What Needs to Be Accomplished

Often, a developer’s first consideration is to fully understand what needs to be accomplished to address the situation. It is quite possible that the developers will be unable to focus on the issues in a way that takes into account the needs of the whole organization. This siloed perspective sometimes results in several business units solving the same problem in different, and sometimes incompatible, ways.

How It Can Be Accomplished

The next consideration for developers is how a solution can be accomplished. Developers are very busy people and need to get things done quickly and efficiently. This often means that they select the development tools and methodologies they are most familiar with rather than casting about to discover new, and potentially better, approaches. The result is that, from an outsider’s perspective, developers will select the same tool regardless of whether it is the best one for the job. As Abraham Maslow pointed out, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail” (“The Psychology of Science” 1962).

How To Systematize or Automate Solutions

Developers also tend to focus on how to systematize or automate the approach to a solution. Those who have experience introducing new systems will consider not only how to accomplish this difficult task, but also whether the current manual processes have some merit.

Costs Are Often Ignored or Secondary to Other Considerations

Developers often do not have access to reports showing the overall costs, the investment required, or even the revenues of a given project. Since they are busy working on projects, they often don’t think about those factors at all. This situation, by the way, is the root of many communication challenges faced when developers are attempting to persuade business decision-makers to approve a project. They don’t have all of the data they need.

I’m reminded of a conversation with a CFO of my company who didn’t understand the need for a different type of database than the one used by the company for another purpose. At first I thought of him as “a man who knows the price of everything and the value of nothing,” to quote Oscar Wilde.

After thinking about his comments, I built a different justification that focused on speaking to him in his own language by discussing the project in terms of the investment required, the costs that were going to be incurred, and the revenue potential the new approach would provide. It took some work to obtain that information, but it was worth the effort in the end.

It was only after a longer conversation with the CFO that he began to be able to understand why Lotus Notes wasn’t the best tool for the creation of a transaction-oriented system for research and analysis.

Are you speaking to your business decision-makers using acronyms, development procedures and the names of open source projects you’d like to deploy? If so, you’re not helping your cause.

Where to Start

A good place to start is to think in terms of where and how money can be saved, where and how previous investments can be enhanced or reused rather than being discarded, and how your proposed project would result in increased opportunities for revenue.

It would also be wise to offer a vision of how the use of containers will help the organization achieve its overall goals, including factors such as:

  • Scaling to address the needs of a larger or at least a new market
  • Reducing overall IT costs
  • Allowing the organization to adapt quickly to a rapidly changing environment and take advantage of emerging opportunities
  • Quickly developing new products or services
  • Reaching new customers while maintaining relationships with today’s customer base

For Many Companies Adoption of Containers Must Be Carefully Justified

The move to a Container-based environment is one of those journeys that developers readily see as beneficial but that can be challenging to justify to a business decision-maker.

After all, some things aren’t fully known until they’ve been done at least once. So, quantifying investments required, cost savings that will be realized, and the actual size of revenue increases can be difficult.

What can be said is that adopting Containers can reduce costs and reduce risk by supporting rapid and inexpensive prototyping of solutions. Pointing out that doing this prototyping in inexpensive cloud computing services rather than acquiring new systems would help them understand that you are focused on meeting your objects while still helping the organization keep costs under control. Tell the business decision makers that this approach also offers them a choice in the future. Once something is developed, documented, and proven to be able to do the job, it can either stay where it is or be moved in-house depending upon which will be the best overall business decision.

Where Can Containers Help a Company Reduce Costs?

Developers understand that being able to decompose a problem into smaller, more manageable problems can improve their efficiency, reduce their time-to-solution, and make reuse of code and services easier.

Reducing the Number of Operating System Instances to Maintain

Explain that, compared to virtual machine technology, containerized applications need fewer copies of operating systems, less processor power, less system memory, and less external storage. Developers can speak in terms of reducing system requirements and how that reduction results in direct savings that business decision-makers can appreciate.

A few related factors are helpful to bring up as well. This approach reduces the number of software licenses that are required and the cost of software maintenance agreements.

Increasing the Amount of Useful Work Systems Can Accomplish

Since the systems won’t be carrying the heavy weight of unneeded operating systems for each application component or service, performance should be improved. After all, switching from one container to another is much faster than switching from one VM to another. There is no need to roll huge images into and out of storage.

Improving Productivity

Since productivity is important to most organizations, show that a move to containers is a great foundation for the use of a rapid application development and deployment (DevOps) strategy. By decomposing applications into functions, application development can be faster because functions are easier to build, document, and support. This should result in lower development costs while improving overall time to solution.

This approach also can reduce the time to deployment because functions can be developed in parallel by smaller independent teams.

Improving Application Capabilities

Adopting a container-based approach provides a number of other benefits that should be mentioned as well, including:

  • Container management and automation functions are improving all the time, which should result in lower costs of administration and operations
  • Container workload management and migration technology is also improving all the time, which should result in higher levels of application availability, higher levels of performance, and fewer losses due to downtime
  • Decomposing applications into independent functions and services also makes them easier to develop and maintain, which should reduce the costs of development, support, and operations

Facilitating a Move to the Cloud

Most business decision-makers have read about cloud computing, but don’t really understand how it can be adopted. Help them understand that the adoption of containers can facilitate the organization’s ability to deploy functions or complete applications locally, in the cloud, or in a combined hybrid environment, quickly and easily.

So, the answer to the question of whether to move to the cloud or continue on-premise computing is “yes, both.”

Reducing Time to Profit

When the business decision-maker begins to understand the business benefits of containerization, they’ll also see that this approach not only can reduce the overall time to market for applications, but, more importantly, it can reduce the time to profit. Lower development and support costs combined with rapid development can lead to quicker streams of revenue and profit.

Establishing a Foundation for the Future

It is also helpful for the business decision-maker to understand that one of your goals is establishing a platform for the future. Containers are supported in many different computing environments, by many different suppliers, and the organization benefits from that broad support.

Some of those benefits are:

  • Containerized functions can be used as part of many applications without having to be rearchitected or redeveloped
  • They can be enhanced or updated as needed without requiring other unrelated functions to be changed
  • Support of the application can be easier and less costly
  • Scalability is improved since the same functions can be run in multiple places with the help of workload management technology

How Can Containers Help a Company Increase Revenue?

A key question to consider is how adopting Containers can help the company increase its revenues. There are a number of elements that directly and indirectly address that question.

Since applications can be developed quicker, perform better, and can be supported more easily, the organization can address a rapidly changing business and regulatory environment more effectively. This also means that the organization can capture additional market share from organizations that continue to only use older approaches to information systems.

It also means that the organization can conduct experiments and prototype solutions quickly. This means that the organization can succeed or fail quicker and that organizational learning will be accelerated.

Where an application or its components execute is flexible: a successful solution can run locally, in the cloud, or in both places as needed. Business decision-makers usually appreciate flexible solutions that don’t impose extra costs.

This approach also ensures that the resulting solutions can scale from small to large as needed. So, organizations can feel more comfortable trying out something new, knowing that if it succeeds, it can be put into production effectively. Business decision-makers are often encouraged by approaches that allow for a low investment at first, with opportunities for growth as revenues increase, rather than forcing a heavy investment up front. This means that the organization is exposed to lower levels of risk.

Summary

Adopting a container-focused approach can be beneficial to both technical and business decision-makers because it addresses the needs for rapid and effective solution development and reduction in overall costs and risks. It also results in a foundation for future growth and the ability to address a changing market.

This approach brings greater complexity along with it, but the benefits outweigh the challenges in many environments. The rapid improvement in container system management, automation, as well as the strong industry support for this approach makes it a safer choice.

If developers focus on helping business decision-makers understand how this approach also lowers costs and improves time to market and time to profit, the business side is likely to get on board more quickly. They are likely to appreciate the reduced costs of solution support, operations, and development. They are also likely to be pleased that future investment can be based on actual revenue production rather than on heavy up-front spending justified by a rosy forecast of future revenues.

Developing a Strategy for Kubernetes adoption

Like containers, Kubernetes sits at the intersection of DevOps and ITOps, and many organizations are trying to answer key questions such as: who should own Kubernetes, how many clusters to deploy, how to deliver it as a service, how to build a security policy, and how much standardization is critical for adoption. Rancher co-founder Shannon Williams discusses these questions and more in the free online class Building an Enterprise Kubernetes Strategy.

Rancher 2.2 Hits the GA Milestone

Tuesday, March 26, 2019

We released version 2.2.0 of Rancher today, and we’re beyond excited. The latest release is the culmination of almost a year’s work and brings new features to the product that will make your Kubernetes installations more stable and easier to manage.

When we released Preview 1 in December and Preview 2 in February, we
covered their features extensively in blog articles, meetups, videos,
demos, and at industry events. I won’t make this an article that
rehashes what others have already written, but in case you haven’t seen
the features we’ve packed into this release, I’ll do a quick recap.

Rancher Global DNS

There’s a telco concept of the “last mile,” which is the final
communications link between the infrastructure and the end user. If
you’re all in on Kubernetes, then you’re using tools like CI/CD or some
other automation to deploy workloads. Maybe it’s only for testing, or
maybe your teams have full control over what they deploy.

DNS is the last mile for Kubernetes applications. No one wants to deploy
an app via automation and then go manually add or change a DNS record.

Rancher Global DNS solves this by provisioning and maintaining an
external DNS record that corresponds to the IP addresses of the
Kubernetes Ingress for an application. This, by itself, isn’t a new
concept, but Rancher will also do it for applications deployed to
multiple clusters.

Imagine what this means. You can now deploy an app to as many clusters
as you want and have DNS automatically update to point to the Ingress
for that application on all of them.

Rancher Cluster BDR

This is probably my favorite feature in Rancher 2.2. I’m a huge fan of
backup and disaster recovery (BDR) solutions. I’ve seen too many things
fail, and when I know I have backups in place, failure isn’t a big deal.
It’s just a part of the job.

When Rancher spins up a cluster on cloud compute instances, vSphere, or
via the Custom option, it deploys Rancher Kubernetes Engine (RKE).
That’s the CNCF-certified Kubernetes distribution that Rancher
maintains.

Rancher 2.2 adds support for backup and restore of the etcd datastore
directly into the Rancher UI/API and the Kubernetes API. It also adds
support for S3-compatible storage as the endpoint, so you can
immediately get your backups off of the hosts without using NFS.
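
For RKE-provisioned clusters, the same behavior can also be expressed
declaratively. The snippet below is only a rough sketch of what the
relevant section of an RKE cluster.yml might look like; the interval,
retention, bucket name, region, and credentials are placeholders, not
recommended values:

services:
  etcd:
    backup_config:
      interval_hours: 12          # take a snapshot every 12 hours
      retention: 6                # keep the six most recent snapshots
      s3backupconfig:             # ship snapshots to S3-compatible storage
        access_key: "<access-key>"
        secret_key: "<secret-key>"
        bucket_name: "etcd-backups"
        region: "us-east-1"
        endpoint: "s3.amazonaws.com"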

When the unthinkable happens, you can restore those backups directly
into the cluster via the UI.

You’ve already been making snapshots of your cluster data and moving
them offsite, right? Of course you have… but just in case you haven’t,
it’s now so easy to do that there’s no reason not to do it.

Rancher Advanced Monitoring

Rancher has always used Prometheus for monitoring and alerts. This
release enables Prometheus to reach even further into Kubernetes and
deliver even more information back to you. One of the flagship features
in Rancher is single cluster multi-tenancy, where one or more users
have access to a Project and can only see the resources within that
Project, even if there are other users or other Projects on the cluster.

Rancher Advanced Monitoring deploys Prometheus and Grafana in a way that
respects the boundaries of a multi-tenant environment. Grafana installs
with pre-built cluster and Project dashboards, so once you check the box
to activate the advanced metrics, you’ll be looking at useful graphs a
few minutes later.

Rancher Advanced Monitoring covers everything from the cluster nodes to
the Pods within each Project, and if your application exposes its own
metrics, Prometheus will scrape those and make them available for you to
use.
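
How that scraping is wired up depends on your scrape configuration, but
a common convention is to annotate the workload’s Pod template so
Prometheus knows where to find the metrics endpoint. Here is a minimal
sketch; the application name, image, and port are made up for
illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"    # opt this Pod in to scraping
        prometheus.io/port: "9102"      # port serving the metrics
        prometheus.io/path: "/metrics"  # path of the metrics endpoint
    spec:
      containers:
        - name: my-app
          image: example/my-app:1.0     # hypothetical image
          ports:
            - containerPort: 9102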

Multi-Cluster Applications

Rancher is built to manage multiple clusters. It has a strong
integration with Helm via the Application Catalog, which takes Helm’s
key/value YAML and turns it into a form that anyone can use.

In Rancher 2.2 the Application Catalog also exists at the Global level,
and you can deploy apps via Helm simultaneously to multiple Projects in
any number of clusters. This saves a tremendous amount of time for
anyone who has to maintain applications in different environments,
particularly when it’s time to upgrade all of those applications.
Rancher will batch upgrades and rollbacks using Helm’s features for
atomic releases.

Because multi-cluster apps are built on top of Helm, they’ll work out of
the box with CI/CD systems or any other automated provisioner.

Multi-Tenant Catalogs

In earlier versions of Rancher the configuration for the Application
Catalog and any external Helm repositories existed at the Global level
and propagated to the clusters. This meant that every cluster had access
to the same Helm charts, and while that worked for most installations,
it didn’t work for all of them.

Rancher 2.2 has cluster-specific and project-specific configuration for
the Application Catalog. You can remove it completely, change what a
particular cluster or project has access to, or add new Helm
repositories for applications that you’ve approved.

Conclusion

The latest version of Rancher gives you the tools that you need for “day
two” Kubernetes operations — those tasks that deal with the management
and maintenance of your clusters after launch. Everything focuses on
reliability, repeatability, and ease of use, because using Rancher is
about helping your developers accelerate innovation and drive value for
your business.

Rancher 2.2 is available now for deployment in dev and staging environments as rancher/rancher:latest. Rancher recommends that production environments hold out for rancher/rancher:stable before upgrading, and that tag will be available in the coming days.

If you haven’t yet deployed Rancher, now is a great time to start! With two easy steps you can have Rancher up and running, ready to help you manage Kubernetes.
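
For reference, a single-node evaluation install boils down to running
the Rancher server container on any host with Docker installed; this is
a sketch, so pick whichever image tag fits your environment:

sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest

Once the container is up, point a browser at the host, set the admin
password, and start creating or importing clusters.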

Join the Rancher 2.2 Online Meetup on April 3rd

To kick off this release and explain in detail each of these new, powerful features, we’re hosting an Online Meetup on April 3rd. It’s free to join and there will be live Q&A with the engineers who directly worked on the project. Get your spot here.


Continuous Delivery of Everything with Rancher, Drone, and Terraform

Wednesday, 16 August 2017

It’s 8:00 PM. I just deployed to production, but nothing’s working.
Oh, wait. The production Kinesis stream doesn’t exist, because the
CloudFormation template for production wasn’t updated.
Okay, fix that.
9:00 PM. Redeploy. Still broken. Oh, wait. The production config file
wasn’t updated to use the new database.
Okay, fix that. Finally, it
works, and it’s time to go home. Ever been there? How about the late
night when your provisioning scripts work for updating existing servers,
but not for creating a brand new environment? Or, a manual deployment
step missing from a task list? Or, a config file pointing to a resource
from another environment? Each of these problems stems from separating
the activity of provisioning infrastructure from that of deploying
software, whether by choice, or limitation of tools. The impact of
deploying should be to allow customers to benefit from added value or
validate a business hypothesis. In order to accomplish this,
infrastructure and software are both needed, and they normally change
together. Thus, a deployment can be defined as:

  • reconciling the infrastructure needed with the infrastructure that
    already exists; and
  • reconciling the software that we want to run with the software that
    is already running.

With Rancher, Terraform, and Drone, you can build continuous delivery
tools that let you deploy this way. Let’s look at a sample system:
This simple architecture has a server running two microservices,
[happy-service] and [glad-service].
When a deployment is triggered, you want the ecosystem to match this
picture, regardless of what its current state is. Terraform is a tool
that allows you to predictably create and change infrastructure and
software. You describe individual resources, like servers and Rancher
stacks, and it will create a plan to make the world match the resources
you describe. Let’s create a Terraform configuration that creates a
Rancher environment for our production deployment:

provider "rancher" {
  api_url = "${var.rancher_url}"
}

resource "rancher_environment" "production" {
  name = "production"
  description = "Production environment"
  orchestration = "cattle"
}

resource "rancher_registration_token" "production_token" {
  environment_id = "${rancher_environment.production.id}"
  name = "production-token"
  description = "Host registration token for Production environment"
}

Terraform has the ability to preview what it’ll do before applying
changes. Let’s run terraform plan.

+ rancher_environment.production
    description:   "Production environment"
    ...

+ rancher_registration_token.production_token
    command:          "<computed>"
    ...

The pluses and green text indicate that the resource needs to be
created. Terraform knows that these resources haven’t been created yet,
so it will try to create them. Running terraform apply creates the
environment in Rancher. You can log into Rancher to see it. Now let’s
add an AWS EC2 server to the environment:

# A lookup of RancherOS AMI IDs by region
variable "rancheros_amis" {
  default = {
      "ap-south-1" = "ami-3576085a"
      "eu-west-2" = "ami-4806102c"
      "eu-west-1" = "ami-64b2a802"
      "ap-northeast-2" = "ami-9d03dcf3"
      "ap-northeast-1" = "ami-8bb1a7ec"
      "sa-east-1" = "ami-ae1b71c2"
      "ca-central-1" = "ami-4fa7182b"
      "ap-southeast-1" = "ami-4f921c2c"
      "ap-southeast-2" = "ami-d64c5fb5"
      "eu-central-1" = "ami-8c52f4e3"
      "us-east-1" = "ami-067c4a10"
      "us-east-2" = "ami-b74b6ad2"
      "us-west-1" = "ami-04351964"
      "us-west-2" = "ami-bed0c7c7"
  }
  type = "map"
}


# this creates a cloud-init script that registers the server
# as a rancher agent when it starts up
resource "template_file" "user_data" {
  template = <<EOF
#cloud-config
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    owner: root
    content: |
      #!/bin/bash
      for i in {1..60}
      do
      docker info && break
      sleep 1
      done
      sudo docker run -d  --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.1 $${registration_url}
EOF

  vars {
    registration_url = "${rancher_registration_token.production_token.registration_url}"
  }
}

# AWS ec2 launch configuration for a production rancher agent
resource "aws_launch_configuration" "launch_configuration" {
  provider = "aws"
  name = "rancher agent"
  image_id = "${lookup(var.rancheros_amis, var.terraform_user_region)}"
  instance_type = "t2.micro"
  key_name = "${var.key_name}"
  user_data = "${template_file.user_data.rendered}"

  security_groups = [ "${var.security_group_id}"]
  associate_public_ip_address = true
}


# Creates an autoscaling group of 1 server that will be a rancher agent
resource "aws_autoscaling_group" "autoscaling" {
  availability_zones        = ["${var.availability_zones}"]
  name                      = "Production servers"
  max_size                  = "1"
  min_size                  = "1"
  health_check_grace_period = 3600
  health_check_type         = "ELB"
  desired_capacity          = "1"
  force_delete              = true
  launch_configuration      = "${aws_launch_configuration.launch_configuration.name}"
  vpc_zone_identifier       = ["${var.subnets}"]
}
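
The configuration above references several input variables
(rancher_url, key_name, security_group_id, terraform_user_region,
availability_zones, and subnets) that still need to be declared. A
minimal variables.tf along these lines would work; the descriptions and
the default region are placeholders, not values from the original
setup:

variable "rancher_url" {
  description = "URL of the Rancher server API"
}

variable "key_name" {
  description = "Name of the EC2 key pair for the agent instances"
}

variable "security_group_id" {
  description = "Security group to attach to the agent instances"
}

variable "terraform_user_region" {
  description = "AWS region used to look up the RancherOS AMI"
  default     = "us-east-1"
}

variable "availability_zones" {
  type        = "list"
  description = "Availability zones for the autoscaling group"
}

variable "subnets" {
  type        = "list"
  description = "VPC subnet IDs for the autoscaling group"
}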

We’ll put these in the same directory as environment.tf, and run
terraform plan again:

+ aws_autoscaling_group.autoscaling
    arn:                            ""
    ...

+ aws_launch_configuration.launch_configuration
    associate_public_ip_address: "true"
    ...

+ template_file.user_data
    ...

This time, you’ll see that the rancher_environment resource is missing.
That’s because it’s already been created, and Terraform knows that it
doesn’t have to create it again. Run terraform apply, and after a few
minutes, you should see a server show up in Rancher. Finally, we want to
deploy the happy-service and glad-service onto this server:

resource "rancher_stack" "happy" {
  name = "happy"
  description = "A service that's always happy"
  start_on_create = true
  environment_id = "${rancher_environment.production.id}"

  docker_compose = <<EOF
    version: '2'
    services:
      happy:
        image: peloton/happy-service
        stdin_open: true
        tty: true
        ports:
            - 8000:80/tcp
        labels:
            io.rancher.container.pull_image: always
            io.rancher.scheduler.global: 'true'
            started: $STARTED
EOF

  rancher_compose = <<EOF
    version: '2'
    services:
      happy:
        start_on_create: true
EOF

  finish_upgrade = true
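  # timestamp() yields a new value on every run, so the STARTED label above
  # changes and each terraform apply triggers an upgrade of this stack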
  environment {
    STARTED = "${timestamp()}"
  }
}

resource "rancher_stack" "glad" {
  name = "glad"
  description = "A service that's always glad"
  start_on_create = true
  environment_id = "${rancher_environment.production.id}"

  docker_compose = <<EOF
    version: '2'
    services:
      glad:
        image: peloton/glad-service
        stdin_open: true
        tty: true
        ports:
            - 8001:80/tcp
        labels:
            io.rancher.container.pull_image: always
            io.rancher.scheduler.global: 'true'
            started: $STARTED
EOF

  rancher_compose = <<EOF
    version: '2'
    services:
      glad:
        start_on_create: true
EOF

  finish_upgrade = true
  environment {
    STARTED = "${timestamp()}"
  }
}

This will create two new Rancher stacks; one for the happy service and
one for the glad service. Running terraform plan once more will show
the two Rancher stacks:

+ rancher_stack.glad
    description:              "A service that's always glad"
    ...

+ rancher_stack.happy
    description:              "A service that's always happy"
    ...

And running terraform apply will create them. Once this is done,
you’ll have your two microservices deployed onto a host automatically
on Rancher. You can hit your host on port 8000 or on port 8001 to see
the response from the services:
We’ve created each
piece of the infrastructure along the way in a piecemeal fashion. But
Terraform can easily do everything from scratch, too. Try issuing a
terraform destroy, followed by terraform apply, and the entire
system will be recreated. This is what makes deploying with Terraform
and Rancher so powerful – Terraform will reconcile the desired
infrastructure with the existing infrastructure, whether those resources
exist, don’t exist, or require modification. Using Terraform and
Rancher, you can now create the infrastructure and the software that
runs on the infrastructure together. They can be changed and versioned
together, too. In the future blog entries, we’ll look at how to
automate this process on git push with Drone. Be sure to check out the
code for the Terraform configuration are hosted on
[github].
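
As a rough preview of that automation, a Drone pipeline that runs
Terraform on every push might look something like the sketch below.
This is only an illustration, not the configuration from the upcoming
post: the Terraform image tag, step names, and branch filter are
assumptions, and credentials for Rancher and AWS would be injected as
Drone secrets.

pipeline:
  plan:
    image: hashicorp/terraform:0.10.2
    commands:
      - terraform init
      - terraform plan -out=tfplan

  deploy:
    image: hashicorp/terraform:0.10.2
    commands:
      # applies the exact plan produced in the previous step
      - terraform apply tfplan
    when:
      branch: master
      event: push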

The [happy-service] and [glad-service] are simple nginx Docker
containers.

Bryce Covert is an engineer at pelotech. By day, he helps teams
accelerate engineering by teaching them functional programming,
stateless microservices, and immutable infrastructure. By night, he
hacks away, creating point-and-click adventure games. You can find
pelotech on Twitter at @pelotechnology.


Joining as VP of Business Development

Monday, 19 June 2017

Nick Stinemates, VP Business Development

I am incredibly excited to be
joining such a talented, diverse group at Rancher Labs as Vice President
of Business Development. In this role, I’ll be building upon my
experience of developing foundational and strategic relationships based
on open source technology. This change is motivated by my desire to go
back to my roots, working with small, promising companies with
passionate teams. I joined Docker, Inc. in 2013, just as it started to
bring containers out of the shadows and empower developers to write
software with the tools of their choice, while redefining their
relationship with infrastructure. Now that Docker is available in every
cloud environment, embedded in developer tools, and integrated in
development pipelines, the focus has shifted to making it more efficient
and sustainable for business. As users look for more integrated
solutions, the complexity of interrelated services and software rises
dramatically, giving an advantage to vendors that are proactively
reaching out and collaborating with best of breed tools. This is, I
believe, one of Rancher Labs’ strengths.

The Rancher container management
platform implements a layer of infrastructure services and drivers
designed specifically to power containerized applications. Since
networking, storage, load balancer, DNS, and security services are
deployed as containers, Rancher is in a unique position to integrate
technology efficiently, holistically, and at scale. Similarly, Rancher
also makes ISV and open source applications available via
its application catalog. The public
catalog delivers more than 90 popular applications and development
tools, many of which are contributed by the Rancher community. In
addition to further developing the Rancher ecosystem via technology and
ISV partnerships, I will be working to expand the Rancher Labs Partner
Network. We will be building a
comprehensive partner program designed to expand the company’s global
reach, increase enterprise adoption, and provide partners and customers
with tools for success. From what I can tell after my first week, I am
in the right place. I’m looking forward to becoming part of the Rancher
Labs family, and collaborating with the broader ecosystem while
developing new relationships. As for immediate plans, I am coming up to
speed as fast as I can, and spending as much time talking to as many
people in the ecosystem as possible. If you’d like to explore
opportunities to collaborate, please consider becoming a
partner.

Nick is the
Vice President of Business Development at Rancher Labs where he is
focused on defining and executing Partner strategy. Prior to joining
Rancher Labs, Nick was the Vice President of Business Development and
Technical Alliances at Docker for four years. At Docker, Nick was
responsible for creating and driving the overall partner engagement and
strategy, as well as cultivating many company-defining strategic
alliances. Nick has over 15 years’ experience participating in and
contributing to the open source ecosystem as well as 10 years in
management functions in the enterprise financial space.
