We've all heard of Docker. Some of you can fully articulate what DevOps is and how containers fit into the larger conversation of intelligent-workload management. For most of us, though, the jump from physical systems to virtual systems was a lot easier to understand than the jump to containers. Even when we can grasp what a container is and what the benefits are, understanding when and where to use them can be challenging. When you're inventing an entirely new paradigm for workload management, there's bound to be confusion. But let's step back for a minute and cover the basics.
Containers aren't new. They've been available in other operating systems for more than a decade, and Linux Containers (LXC) have been part of Linux for quite some time. What Docker does is take a technology that was complicated to understand and implement and make it simple. Suddenly, everyone could build and deploy containers and take full advantage of the technology. SUSE was one of the first enterprise Linux vendors to fully support Docker because it recognized the promise of Docker early on and added the features and support necessary for it to grow beyond venture capital startups and dot-com giants. But what does Docker on top of SUSE give you?
Simply put, Docker gives you a number of capabilities, all available in a very flexible and yet fully standardized solution:
Just imagine a stateless environment in which there is no direct access to production containers, and the containers themselves carry only the minimal set of libraries needed to run your application and nothing else. Using AppArmor (or SELinux) to confine your application adds another strong layer of protection. In such an environment, the vast majority of security threats are eliminated instantly. The entire DevOps automation philosophy helps ensure that consistency is maintained and communication barriers are broken down. This is the promise of Docker.
But as great as Docker and other container technologies are, they're not a fit for every workload. If we look at the organizations using Docker today, we can see where the low-hanging fruit is. The move toward microservices is a perfect match for the world of containers: the smaller and more discrete the workload, the easier it is to package up without having to worry about too many moving pieces and dependencies.
Many web applications also fit well into the world of containers because there is a clean separation between the application and its data. Since the ideal Docker workload is stateless, that separation makes them natural candidates. That's not to say that Docker workloads can't store data; it's just that we want to isolate the data so that it exists independently of the Docker container. A database can run perfectly well under Docker; we just need to separate the actual database data from the binaries, libraries and configuration files that make up the workload.
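To make that concrete, here is a minimal sketch of how one might keep database state outside the container using the standard Docker volume mechanism. The image name, host path and password below are illustrative placeholders, not details from this article:

    # Run a PostgreSQL container, but keep the database files on the host
    # in /srv/pgdata so the data outlives the container itself.
    docker run -d --name appdb \
      -e POSTGRES_PASSWORD=changeme \
      -v /srv/pgdata:/var/lib/postgresql/data \
      postgres

Destroying and recreating the container leaves /srv/pgdata untouched, which is exactly the separation of data and workload described above.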
Any application deployment that's done repetitively is another good fit for Docker. If you've already gone through the effort of scripting out the installation and configuration of an application, turning that script into a Dockerfile is a natural progression and allows for greater control and flexibility. Automation is the key to scalability.
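As a hedged illustration of that progression, an existing install script might translate into a Dockerfile along these lines; the base image, package and paths are hypothetical stand-ins for whatever your script installs:

    # Each instruction replaces a step that would otherwise live in an install script.
    FROM opensuse/leap
    RUN zypper --non-interactive install python3
    COPY app/ /opt/app/
    EXPOSE 8080
    CMD ["python3", "/opt/app/server.py"]

Because every instruction is explicit and repeatable, docker build reproduces the same environment each time, which is precisely the control and flexibility mentioned above.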
Unfortunately, there is also a significant number of workloads that don't make sense to put in containers. The most obvious are large, monolithic applications with tightly integrated processes storing state or reading configuration in non-obvious places. A lot of legacy applications live in this world, particularly the more complex ones. Think of an all-in-one appliance image that a vendor supplies to you with a database, a web tier and a processing engine all tied together. While you could potentially break out each individual component into individual Docker containers, your vendor won't support that, and how would you ever upgrade it anyway? Alternatively, you could turn the monolithic environment into a Docker container (operating system installation and all), but you lose every benefit of containers and gain nothing.
Other workloads to avoid include GUI-driven application installations, since it's quite challenging to automate them through scripting or other means, which is necessary for building Docker containers. Any application you're installing only once is also not an ideal candidate for anything DevOps, because the resources needed to automate its deployment often far exceed the effort of building it out by hand. Save these applications until after you've picked the lower-hanging fruit.
Once you start mastering Docker, you can take advantage of the large and growing ecosystem of partners solving problems beyond what Docker itself does. These include an easy-to-manage, web-driven private Docker registry with full authorization capabilities built in, allowing fine-grained access control to all of your critical Docker images (SUSE Portus), and a massively scalable software-defined storage back end for maintaining the state of your Docker containers (SUSE Enterprise Storage, using Ceph). Dozens of other tools fill the gaps between building your first Docker container and a fully automated orchestration and scheduling environment that manages workloads globally. It's an exciting new world with Docker. Embrace the tools available to you as well as the enterprise platform and tools that will power your data center of tomorrow.
At SUSE we believe that the software-defined data center (SDDC) is the future. We also think that DevOps will change forever how software is developed and deployed. How is SUSE preparing SUSE Manager, our operating system lifecycle management solution, for that future? And how do we achieve that with minimal disruption for our existing user base and the functionality they've been used to from SUSE Manager? Keep reading.
First things first: What do we mean by a "software-defined data center"? Obviously, software can only run on real hardware. Behind any virtual machine is a real CPU, and behind any software-defined network switch are real ports and cables. But a trend that started many years ago with the virtualization of computing resources has now expanded into the network stack and, more recently, storage: the underlying hardware has become more and more generic. At the same time, functionality and logic that used to be hard-coded into special-purpose hardware such as servers, routers, switches, and network-attached storage (NAS) or dedicated storage area networks (which used their own dedicated protocols and wiring such as SCSI and Fibre Channel) is increasingly being emulated in a software layer.
This allows for dynamic re-allocation of resources based on the needs of the application. Parameters like CPU and RAM assignments, network topologies and bandwidth, and mass storage can be defined in software and reconfigured on demand. And commodity hardware can, to a large extent, replace expensive purpose-built appliances.
This is where the second trend, DevOps, fits in nicely: in a DevOps world, there is no longer an artificial distinction between development environments and production systems. Software is developed and tested in pretty much the same environments it is later deployed to. With the same highly automated tools that developers use to set up their working environments, QA can build test environments that run a multitude of automated tests every time new code is submitted. Once the tests succeed, the same automation deploys the code into production.
In the world of software-defined IT, the only differences between development, testing, and production environments should be the dimensions and service-level agreements. While an engineer might run a local virtualized environment on a laptop or use infrastructure provided by a private or public cloud, production environments may scale up and out, and provide the redundancy and high availability needed for 24x7 service delivery.
We've shown earlier how open source projects and products from SUSE, such as SUSE Studio™ and the Open Build Service, can help you implement such "continuous integration" environments, where cycles from check-in of new code to deploying it into production can be reduced from months to days or hours without sacrificing quality for agility. In fact, at SUSE we have our own continuous line from code check-ins to package builds, image builds and automated testing.
With SUSE Manager 3, we are laying the foundation for the next wave of automation: event-driven architectures, where software can be deployed and run on a software-defined infrastructure based on rules that describe how the infrastructure should react to certain events.
The key enabler for that is our decision to make Salt, a highly scalable and extensible modular remote execution and configuration management framework, part of SUSE Manager 3.
Originally, our goal was much more limited. We wanted to add more powerful configuration management to SUSE Manager by adding capabilities that customers found in tools like Puppet and Chef. And we wanted to modernize the underlying infrastructure of the proven software and patch management in SUSE Manager to make it more future-proof, scalable and responsive.
After evaluating many options, we chose Salt because it gives us both a strong automation framework, with a message-bus architecture that lets us execute tasks such as gathering inventory data or installing software packages on many systems in parallel, and a powerful configuration management engine that is easy to extend and uses an approachable syntax to describe the desired state of systems.
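To give a flavor of the remote execution half, here are a few illustrative calls issued from the Salt Master; the target patterns and package name are hypothetical:

    salt '*' test.ping                  # check which Minions respond
    salt '*' grains.items               # gather inventory data from all systems
    salt 'web*' pkg.install apache2     # install a package on matching Minions in parallel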
Later, we learned that we don't have to stop there. Because any system that SUSE Manager 3 controls can become a so-called "Minion" that the Salt server, or "Master," contacts via the always-on message bus, and because those Minions can send events back to the Master, the possibilities for rule-based automation are endless.
This is where SUSE Manager 3 meets DevOps and the software-defined data center. It starts with the tools a software engineer uses to set up working environments. For example, our own developers make heavy use of a combination of Vagrant, Docker containers, and Salt in a tool called Suminator. Suminator allows us to set up various SUSE Manager environments, from the proven SUSE Manager 2.1 to the latest engineering snapshot of SUSE Manager 3, from scratch—again and again. And if Salt can be used to deploy SUSE Manager, you can surely use it to deploy your own services. If you've seen one of our SUSE Manager demos lately, chances are that Suminator was used behind the scenes.
In a DevOps-style continuous integration (CI) environment, Salt's orchestration engine can then be used to listen to the events that the build system or CI platform—let's say the Open Build Service or a Jenkins server—emits. So every time a new build completes, Salt can take over.
If used on an individual system, Salt makes sure that any dependencies defined in the so-called state files are met. For example, a software package might require a certain user to be created before its service starts. It might also depend on a database that needs to be installed, configured, and started first.
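A minimal sketch of such a state file, with hypothetical user, package and service names, might look like this; the require declarations express the ordering just described:

    appuser:
      user.present

    myapp:
      pkg.installed:
        - require:
          - user: appuser
      service.running:
        - require:
          - pkg: myapp

Salt resolves the requisites so the user exists before the package is installed, and the package is installed before the service is started.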
But that's not all. Salt can also orchestrate the execution of states on several systems, again allowing users to define dependencies between them. And because Salt has drivers for many public and private cloud frameworks as well as Docker, this is not limited to a specific platform.
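Expressed as a sketch of an orchestration state (run from the Master with salt-run state.orchestrate), with hypothetical targets and state names, that cross-system ordering might look like this:

    deploy_database:
      salt.state:
        - tgt: 'db*'
        - sls: database

    deploy_application:
      salt.state:
        - tgt: 'app*'
        - sls: application
        - require:
          - salt: deploy_database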
Finally, Salt allows you to write so-called Beacons, probes that fire an event if a certain condition is met on a Minion. We demonstrated that at SUSECon 2015 when we used a Beacon that watched file changes and reported them in real time to the SUSE Manager web console, but also allowed a so-called Reactor to trigger an action.
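As a rough sketch of that setup (the watched path is a placeholder, and the exact beacon schema varies between Salt versions), the Minion watches a file while the Master maps the resulting events to a Reactor:

    # Minion configuration: fire an event whenever the file is modified.
    beacons:
      inotify:
        /etc/important.conf:
          mask:
            - modify

    # Master configuration: route inotify beacon events to a Reactor state file.
    reactor:
      - 'salt/beacon/*/inotify/*':
        - /srv/reactor/audit.sls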
This has a lot of very promising use cases, from implementing real-time intrusion detection to self-healing systems. That kind of event can also be used to trigger a configuration change on a related system. Let's say a new node in a cluster is fired up. Once the clustered application is running on the node, the node can send an event that triggers a configuration change on the load balancer to include the new node.
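A hedged sketch of such a Reactor state, with hypothetical file, target and state names: when the new node's event arrives, the Master re-applies the load balancer state so the configuration picks up the node:

    # /srv/reactor/add_node.sls (hypothetical path)
    update_loadbalancer:
      local.state.apply:
        - tgt: 'loadbalancer*'
        - arg:
          - haproxy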
The foundation for that kind of event-driven architecture has been laid in SUSE Manager 3. Now we look forward to your feedback on how these capabilities can best be leveraged.
As mentioned before, we've achieved this integration of Salt without forcing users of SUSE Manager 3 to radically change the way they work with it today. Systems that are Salt Minions and systems that use the traditional SUSE Manager client stack can peacefully coexist in SUSE Manager 3, and virtually all existing features work on both stacks.
It's your choice when to start leveraging the new possibilities that Salt opens up to you.
SUSE has just launched SUSE OpenStack Cloud 6. This has been our most extensively tested release yet, with customers and partners providing great input throughout the process. We recently held some webinars with the title "Give Me Liberty and Give Me SUSE OpenStack Cloud 6"—a play on words based on the upstream OpenStack release code-named "Liberty," which is the basis of SUSE OpenStack Cloud 6. Of course, the original quote from Patrick Henry is "Give me liberty or give me death"—uttered as Henry was exhorting his fellow citizens to support the cause of American independence from Great Britain. We changed the wording to avoid any negative implication for SUSE OpenStack Cloud.
That discussion started me thinking in broader terms about industry trends and the pressures and challenges facing enterprise IT organizations worldwide, especially the pressure to maintain investment levels while simultaneously becoming more responsive to the business and managing the sheer scale of IT in terms of workloads, endpoints and users. Based on the increasing number of conversations we are having with customers on SUSE OpenStack Cloud and some third-party research that SUSE has recently done, it is clear that enterprises increasingly see private cloud as the preferred approach to modernizing the data center. At the same time, enterprises are embracing public cloud providers such as Amazon, Microsoft and Google as a platform to rapidly and cost-effectively provision capacity for some workloads. This leads to what can be called a "data center without boundaries."
These are the main drivers behind this:
This is the basis for our roadmap for SUSE OpenStack Cloud in 2016 and beyond. SUSE OpenStack Cloud was the first OpenStack distribution designed to address enterprise requirements. Our first release, in 2012, introduced a framework to simplify the installation of OpenStack and accelerate time-to-value for deployments. Since then we have introduced mixed-hypervisor support, which enables enterprises to deploy open source and proprietary hypervisors and have them work together in a single cloud; and we have introduced high availability support for the OpenStack control node, helping to ensure that OpenStack can be a stable and reliable platform for business applications.
With SUSE OpenStack Cloud 6, we are introducing enhancements in all of these areas:
In future releases, we expect to continue these trends with additional hypervisor support, improved container management and orchestration, non-disruptive upgrades, and the ability to provision physical servers as well as virtual environments on which to deploy workloads. One new area is the integration of the Cloud Foundry Platform as a Service with SUSE OpenStack Cloud, an effort we are working on with our partner SAP.
To return to the thought that started this discussion: in the context of the evolution of the data center and the need to manage and exploit software-defined infrastructure and the public cloud, "Give me liberty or give me death" should instead be "Give me OpenStack or give me death."
We’re making changes in three areas: revamping our Training Partner Program, providing more opportunities for technical certification, and adding and refreshing courses.
The common goal in these changes is to give our customers and partners a sound understanding of open source technologies so they can use our products and solutions successfully in their jobs and businesses. We also want to make it easier for them to get the training they want, which includes how, when and where they want it.
We want to ensure that our Training Partner Program makes high-quality training easy for our customers to consume and, at the same time, delivers value to our partners. In the past we’ve either had too many partners in some areas—so providing SUSE training wasn’t profitable for them—or too few partners, making it harder for customers to get the training they want.
The new partner program is focused on three things: delivering high-quality, engaging training content about our open source technologies; making that training content easily accessible to our training partners; and giving our customers and partners simple ways to find and consume SUSE training courses offered by our training partners. In short, the program enables our partners and instructors to be much more in tune with what SUSE is doing with training and technology.
We have train-the-trainer courses, designed to ensure that the instructors who teach each course are competent and confident in the specific courses they teach to customers. A certified instructor isn’t necessarily certified in all of the SUSE courses, but is certified on the specific courses he or she teaches. That provides a better experience for the attendees and also helps to make sure the partner is successful.
Our partners make the decision about how to offer the content SUSE provides. They can provide online training or face-to-face, instructor-led training. They can offer the training within their own facility or go to the customer’s office and offer that training there.
One helpful change is that we’re integrating all available partner classes and SUSE classes into a single calendar here. Customers can choose the course that’s best for them based on geography, a specific partner, or the type of course—whether they want an online course or a face-to-face course. From the calendar, they can connect directly to that course and vendor.
SUSE has added online certification testing as well as cross-certification with the Linux Foundation, so more individuals can be certified. Certification is really important. It shows an employer, colleagues and partners that someone has been tested on a set of skills and can perform tasks associated with those skills. Certification validates an individual’s expertise.
In the past, to get certified you had to travel to a training center to take that test. That’s fine if there’s one near your office or home, but sometimes it’s difficult to find a training center close by.
Now, with online technical certifications, you have the opportunity to take the exam and get certified much more easily. When you feel prepared, you go online, set up a time to take the exam, and take it anywhere, as long as you have an internet connection and a computer—at home, in the office, or anywhere in the world. The CLA (Certified Linux Administrator) and the CLP (Certified Linux Professional) exams are available online for both SUSE Linux Enterprise Server 11 and 12. When you take and pass the exam, you get a certificate just as you would have if you took the exam in a testing center.
As before, we have courses that specifically prepare individuals to take the CLA and CLP certification exams. We also have courses on OpenStack and Ceph, based on SUSE OpenStack Cloud and SUSE Enterprise Storage, though the corresponding certifications and exams are not yet available. We are developing both now, so you should soon see announcements about additional certifications.
The Linux Foundation has a Sysadmin Certification and an Engineer Certification, which are level 1 and level 2 certifications, respectively, just like our CLA and CLP exams. We’re working with the Linux Foundation to help people who already have its certifications also get certified on the SUSE Linux Enterprise distribution. We’ll accept certification on the Linux Foundation’s level 1 exam as a prerequisite to take our level 2 exam. Once the person passes our level 2 exam (CLP), we will then grant that person the CLA certification as well. The opposite situation also holds true. Someone with a CLA certification (our level 1) can take the Linux Foundation’s level 2 exam, and once the person has passed it, he or she will get the Linux Foundation’s level 1 certification as well.
We are developing a larger pool of courses to choose from. Right now, we’re focusing new courses on new open source technologies: OpenStack, Ceph and new open source capabilities for the enterprise environment. We want our customers to be able to take and use this training immediately—both with our products and with other open source technology in the industry.
We look at what our customers need and how we can help them to be successful, and then we focus coursework around that. In addition, customers can request courses, and we seriously consider those requests. We also add courses based on new technology in the market and our own new products—for example, our Ceph-based SUSE Enterprise Storage. We know there will be demand for this training, so we develop courses in conjunction with new product launches. We also refresh existing courses as products evolve.
If someone is interested in training that we don’t offer, we are very happy to get input from them. We have a special email address set up: suse-training@suse.com. If you write to that address, the email will come directly to my team. We evaluate those requests and create courses based on our priorities. That’s another way we get input on courses.
Yes, we do offer customized training. Our internal training teams have provided specific training on our technology, customized for a particular customer. We also encourage customers who want customized training to engage with our partners—especially customers who need part of their training on SUSE solutions but who also need training on other apps or solutions that we don’t provide. For more information about the training options, visit here.
From November 2, 2015 to February 1, 2016, SUSE published YES Certification Bulletins for 296 hardware devices—most of them network servers, but also 28 workstations. Almost a third of these servers were from Lenovo. Other hardware vendors included Business IT AG, Cisco, Dell Computing, Fujitsu, HP/HPE, Hitachi, Huawei Technologies, IBM, Inspur, Intel, NEC, Positivo Informatica, SGI, Unisys and VMware.
To research certified systems, go to the Certified Hardware Partners' Product Catalog, search for the system name, and click the bulletin number to see the exact configuration that was certified.
Here is the breakdown of YES Certifications completed in the period:
The software listings in our SUSE Partner Software Catalog (PSC) are constantly being updated by partners as they release new SUSE support statements and add new product features. The catalog now contains more than 7,000 software product listings. You can see summary highlights for a product and then drill down to the level of detail you need for a specific product version with a specific SUSE version and hardware architecture. Our program, which ISVs work with to populate the catalog, is called SUSE Ready. If you have an ISV solution that you want to see listed for SUSE Linux Enterprise, please refer your ISV to our site. We are happy to work with them.
Some recent updates to the SUSE PSC are: