In This Issue
How Application Containers Can Support DevOps
By: Thomas Di Giacomo
As SUSE CTO, Thomas Di Giacomo's vision is a software-defined and cloud-based, IT-powered future for the enterprise. Prior to joining SUSE, Di Giacomo served as CTO and vice president of innovation at Swisscom Hospitality Services, as well as CTO of the Hoist Group, a global provider of IT services to the hospitality and health care industries. He has expertise in open source platforms, development and support of global information systems and technologies.
As we discussed in the September 2016 installment of our DevOps series, staying relevant requires enterprises to do more than expand and innovate: they must adapt to ever more flexible and agile requirements while controlling costs and effort. DevOps tools, processes and culture provide a key framework for meeting these needs.
Containers are not a new technology; they have been around for years in the form of system containers on Linux. In the past few years, however, we have seen a great deal of new development and adoption around application containers and simplified tooling for them. Application-focused containers facilitate DevOps, CI/CD and microservices-based applications, thus enabling businesses to improve their processes, quality and time to market.
Containerization is a great tool for bringing developers and IT operators closer together around a shared resource in both continuous integration and continuous delivery. This is certainly true for cloud-native applications that meet all or some of the twelve factors. In addition, containerization can be applied to legacy applications and to application migration strategies.
Building and sandboxing
Think of application containers as non-interacting, independent, iPhone-like applications. They are completely isolated and sandboxed, so they avoid interacting with or impacting each other or the underlying operating system.
Containers are also tightly coupled with the application architecture itself. They frequently drive, or at least support, the division of traditional monolithic applications into à la carte applications or microservices-based applications.
Application containers facilitate the traditional build step of the DevOps flow. From the definition and format of container images, to how to store and fetch them from a public or private secured registry, containers provide a lot of fast-tracked and automated steps.
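To make that build step concrete, here is a minimal sketch that builds an application image and pushes it to a private registry using the Docker SDK for Python; the image name, registry URL and credentials are hypothetical placeholders, not part of any SUSE tooling.

```python
# Minimal sketch of the container build step, using the Docker SDK for
# Python (pip install docker). Image name, registry and credentials are
# hypothetical examples.
import docker

client = docker.from_env()  # connect to the local container engine

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(
    path=".",
    tag="registry.example.com/myapp:1.0",
)

# Authenticate with the (hypothetical) private registry and push the image.
client.login(registry="registry.example.com",
             username="ci-bot", password="not-a-real-secret")
for line in client.images.push("registry.example.com/myapp",
                               tag="1.0", stream=True, decode=True):
    print(line)
```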

Shipping and deploying

The container analogy is also valid for how containers are shipped and how they travel. On the digital ocean, they follow the DevOps streams: from staging to test and production environments, and from on-premises infrastructure and data centers to heterogeneous private and public clouds. Once containers reach their destination, they and their contents behave exactly as they did at their origin. Assuming proper container image packaging, the process ensures that containers run the same way in the different harbors where they land.
This is another value that application containerization adds to DevOps: it facilitates the redeployment of the same application from the development environment to the test environment to the production environment.
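As a rough illustration of that redeployment, the following sketch pulls and runs the very same image tag on different container hosts; the host URLs and image name are hypothetical, and in practice a CI/CD pipeline or an orchestrator would drive this step.

```python
# Minimal sketch: promote the same, unchanged container image through
# several environments. Host URLs and image name are hypothetical.
import docker

IMAGE = "registry.example.com/myapp:1.0"
ENVIRONMENTS = {
    "test": "tcp://test-host.example.com:2376",
    "production": "tcp://prod-host.example.com:2376",
}

for env_name, docker_host in ENVIRONMENTS.items():
    client = docker.DockerClient(base_url=docker_host)
    client.images.pull(IMAGE)  # identical image in every environment
    client.containers.run(IMAGE, detach=True, name=f"myapp-{env_name}")
    print(f"{IMAGE} deployed to {env_name}")
```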
Running and maintaining
Using containers in this DevOps phase provides many benefits. By design, containers are easy to scale; thus, they facilitate how applications can grow and shrink based on business needs, the expansion and growth of new business applications, and so on. When running live, containers also allow for more dynamic, multi-cloud strategies. For example, workloads can be offloaded to the cloud during peak periods and then brought back on premises when there is less demand and internal resources are available.
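For example, when the containers are orchestrated by Kubernetes, growing and shrinking an application can be as simple as patching a Deployment's replica count. The sketch below uses the official Kubernetes Python client; the deployment name, namespace and replica counts are hypothetical.

```python
# Minimal sketch: scale a containerized application out for peak demand and
# back in afterwards, using the official Kubernetes Python client
# (pip install kubernetes). Names and counts are hypothetical.
from kubernetes import client, config

config.load_kube_config()        # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

def scale(deployment: str, namespace: str, replicas: int) -> None:
    """Patch the Deployment's replica count to the requested value."""
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

scale("myapp", "default", 10)    # scale out during peak periods
scale("myapp", "default", 2)     # shrink back when demand drops
```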
Looping
The last step of the DevOps flow is to cycle back to development from the running phase, providing analytics and details on the performance of the application, so that further improvements can be made to the application. This step also includes other inputs such as feature requests, patches and fixes that need to make it into the next iteration of the application (these can be added during other steps in the flow as well). During this step, containers add an additional level of analysis on top of the application itself, such as scale-out/scale-up scenarios and monitoring the performance of the containers themselves.
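As a simple illustration of container-level feedback, the sketch below takes a one-shot CPU and memory snapshot of each running container with the Docker SDK for Python; a real deployment would feed such metrics into a proper monitoring pipeline rather than printing them.

```python
# Minimal sketch: collect basic runtime metrics from running containers so
# they can feed back into the next development iteration. Uses the Docker
# SDK for Python; field names come from the Docker stats API.
import docker

client = docker.from_env()
for container in client.containers.list():
    stats = container.stats(stream=False)   # one-shot snapshot, not a stream
    mem_bytes = stats.get("memory_stats", {}).get("usage", 0)
    cpu_total = stats.get("cpu_stats", {}).get("cpu_usage", {}).get("total_usage", 0)
    print(f"{container.name}: cpu_total={cpu_total} mem_bytes={mem_bytes}")
```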
Conclusion and other considerations
Most of the SUSE tools that we detailed in Part 1 of this series already support container-based flows, and we are constantly working to improve them. In fact, our new product, SUSE Container as a Service Platform (CaaSP), will be available as a public beta at the end of March. It will further facilitate the use of containers for DevOps because its orchestration element helps spin up containers for development, test and so forth, each in its respective environment on its own abstracted infrastructure and set of resources.
SUSE CaaSP is an infrastructure platform for containers that allows you to provision, manage and scale container-based applications. It includes three components: MicroOS, based on SLES; Kubernetes for container management; and Salt-based configuration to set up those components as well as the container engines, such as the Docker open source software and Linux containers (LXC). For more information, see the webinar SUSE Container as a Service Platform—An Introduction.
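To give a flavor of what such orchestration looks like, here is a minimal sketch (using the official Kubernetes Python client) that spins up the same hypothetical application for a development team and a test team, each in its own namespace; the names, image and namespaces are placeholders, and the cluster underneath could be one that CaaSP manages.

```python
# Minimal sketch: deploy the same application into separate "dev" and "test"
# namespaces on a Kubernetes cluster. Names, image and namespaces are
# hypothetical, and the namespaces are assumed to exist already.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def deploy(namespace: str) -> None:
    container = client.V1Container(
        name="myapp", image="registry.example.com/myapp:1.0")
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
        template=template,
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="myapp"), spec=spec)
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)

for ns in ("dev", "test"):
    deploy(ns)
```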

Storage also deserves attention when you use containers. Containerized data disappears with its container; a container’s whole purpose in life is to die easily, so to speak, which encourages stateless design, even though that can’t always be achieved 100 percent of the time. As a result, persistent storage and data play an important role in ensuring consistency in the DevOps flow when using containers.
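One common pattern, sketched below with the Docker SDK for Python, is to keep application data in a named volume that outlives any individual container; the volume name, image and mount path are hypothetical examples.

```python
# Minimal sketch: store application data in a named volume so it survives
# the container's short life. Volume, image and mount path are hypothetical.
import docker

client = docker.from_env()
client.volumes.create(name="myapp-data")   # persistent, container-independent

client.containers.run(
    "registry.example.com/myapp:1.0",
    detach=True,
    volumes={"myapp-data": {"bind": "/var/lib/myapp", "mode": "rw"}},
)
```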
Networking is another specific area to consider, especially in the context of multi-cloud. Solutions exist to address these needs by, for instance, bundling the network configuration requirements of a specific application together with its container description.
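As a small illustration of that idea, the following sketch (again with the Docker SDK for Python) creates an application-specific network and attaches the container to it at run time; the network name, image and port are hypothetical.

```python
# Minimal sketch: declare the network an application needs alongside the
# container itself. Network name, image and port are hypothetical examples.
import docker

client = docker.from_env()
client.networks.create("myapp-net", driver="bridge")

client.containers.run(
    "registry.example.com/myapp:1.0",
    detach=True,
    network="myapp-net",          # attach to the application-specific network
    ports={"8080/tcp": 8080},     # publish the application's service port
)
```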
Last but not least, in addition to control, the attributes of overall stability, reliability and multi-tiered security for containers are absolutely critical for enterprise DevOps adoption.
At SUSE, we are well versed in DevOps. We welcome the opportunity to talk with you about your DevOps projects and how we can help make them happen!
In the final part of this DevOps series, we will talk about Platform as a Service on top of Containers as a Service, which further enables developers to build applications in addition to and together with DevOps principles.
The Digital Economy: Here Today, Bigger Tomorrow
By: Terri Schlosser
Terri Schlosser, head of SUSE product and solution marketing, has 20 years of experience in the IT software industry. She was previously at Rackspace as a senior marketing manager for its managed private cloud offering, and prior to that spent more than 15 years at IBM. Her experience spans marketing, software development, product management and strategy across many IT software areas, including networking, storage, management, operating systems and OpenStack. Terri also has international experience, having worked with teams around the globe and spent two years on an international assignment in Krakow, Poland. She holds Bachelor of Science and Master of Science degrees in mathematics.
The Rise and Rise of User and Consumer Expectations
Mobile, always-on business IT has been on the rise for years, and according to industry analysts, it will continue to rise over the business landscape for years to come. The estimated 4.23 billion mobile phone users of 2014 will grow to a projected 4.77 billion by this year’s end and to a projected 5.07 billion by 2019. (Source: Statista) The digital transformation these devices have created has changed consumer and user expectations. (See Figure 1)

Figure 1: Excerpts from Lithium and Vanson Bourne Studies (Source: “Can Companies Keep up with Soaring Customer Expectations,” eMarketer, June 2015).
The Business Challenges are Obvious
Growing user and consumer expectations present obvious challenges for businesses and their IT departments, but they aren’t the only challenges IT departments face. IT must have the agility and flexibility to respond not only to rising expectations but also to business needs such as maintaining data privacy and security for regulatory compliance. And IT needs the ability to safely support business users who purchase public cloud services on their own without regard for security or regulatory compliance. The shadow IT effect these users introduce has the potential to put both businesses’ and consumers’ data at risk.
IT departments must meet all of these and other business needs while also performing manual tasks that do little more than keep the lights on, which is a tall order.
Data growth resulting from mobile technologies such as online banking, digital health apps, internet-of-things wearables and sensors, and so forth compounds the challenges that organizations face on the application and data-storage fronts. IT organizations in all industries must efficiently store, manage, and protect this data without incurring additional costs because, as Gartner reported in late 2016, enterprise IT budgets are flat or increasing only slightly. This lack of budgetary wherewithal requires IT to meet the storage, processing, and networking needs of the digital business while also reducing both capital and operating expenses. (See “Gartner Says Global IT Spending to Reach $3.5 Trillion in 2017,” a Gartner Oct 2016 press release)
The words agile and flexible must apply to the infrastructure that provides support for reliable, leading-edge, on-the-go digital services as well as to the IT departments that deliver these services. Innovation-era infrastructures enable businesses to deliver new and updated services at ever-faster speeds. In its 2014 article titled “Three Essential Steps to a Software-defined Data Center,” Network World’s Brandon Butler argues the need for software-defined networking by pointing out that over 70 percent of end users expect an IT project to take less than two weeks, while 40 percent of IT managers must still use slow manual processes to reconfigure their organizations’ infrastructures to accommodate the changes that these users request.
Challenges, Meet Your Software-defined Solution
Software-defined infrastructure solutions provide a promising way to meet the many challenges IT organizations are facing. Modernizing their data centers to software-defined infrastructures enables IT departments to manage growing data; enable innovation; and drive faster time-to-market with agility, stability, and cost savings.
Acquire the Agility to Provision and Deliver Resources Faster
Provisioning resources in traditional data centers is complex and time-consuming. It can take weeks or months. In sharp contrast, with a software-defined infrastructure, IT departments can provision resources in days or hours—and with less manual intervention thanks to automation and cloud-based self-service capabilities. Agility improvements enable IT to deliver resources quickly and business units to improve time-to-market speeds for new services or applications, an indisputable competitive advantage.
Data centers with software-defined storage can scale capacity virtually without limit, which gives them the agility to grow as business operations grow. Digital businesses can efficiently host and maintain large data stores that include audio, video, graphics, and other terabyte-sized files—the very sorts of data stores that support the modern applications customers want most.
Ensure Business Continuity
Software-defined infrastructures enable organizations to embrace new technologies without sacrificing the stability and reliability they so desperately need. They also offer superior business continuity, enabling organizations to avoid the pain of unplanned downtime.
For example, well-designed software-defined storage deployments have no single point of failure and offer highly redundant architectures for system resiliency and availability. And self-healing capabilities keep storage-administrator involvement at a minimum and application availability at a maximum, even following hardware failure.
Reduce Costs
IT departments everywhere face intense pressure to do more with less. How much less? According to the previously mentioned Gartner report, IT spending actually declined by 0.3 percent in 2016. And though the report projects a 2.9 percent rise in expenditures for 2017, this is not enough to offset the expense of addressing the challenges that the first section of this article introduced. Fortunately, software-defined infrastructures are natural enablers in the do-more-with-less effort. They streamline operations, enabling IT to reduce operational expenditures. Well-designed solutions also include a number of tools that provide automated management and storage administration capabilities. Such tools enable organizations to manage their data centers with existing staff—no specialized training required. The result is reduced IT overhead costs.
To further reduce costs, organizations can opt for flexible open source solutions that require little or no software expenditure beyond support and that work with products from multiple vendors, which eliminates costly vendor lock-in. And because software-defined infrastructures enable organizations to use commodity hardware and other infrastructure that they currently have running in their data centers, software-defined infrastructures also decrease capital expenditures.
How much can updating to a software-defined infrastructure save? Taking software-defined storage as an example, research indicates that it can yield a 30 percent savings compared with average-capacity network-attached storage solutions and at least a 50 percent savings compared with the average capacity-optimized midrange disk array.
SUSE Has Enterprise-Level Open Source Solutions
Selecting a hardware and software foundation for an enterprise-level software-defined infrastructure requires careful thought. It’s an important decision, after all. Open source solutions such as SUSE Linux Enterprise Server offer organizations the freedom to use their existing investments in both physical and virtual systems. When they choose open source, organizations also get quick access to the accelerated innovations for which large, robust open source communities are famous. And in the case of SUSE Linux Enterprise Server, organizations receive the added benefit of skilled testing and reliable support.
Choose SUSE Solutions for Your Organization’s Software-defined Infrastructure
Only an enterprise-level open source vendor like SUSE is agile and flexible enough to support technologies such as the Docker open source project and Linux containers—technologies that enable organizations to innovate faster while still providing the stability, scalability, and business continuity they need, all in a future-proof design that will endure for years to come.
SUSE is a pioneer in open source solutions for enterprises. In addition to SUSE Linux Enterprise Server, SUSE offers a full set of solutions that enable organizations to transform their traditional data centers into software-defined infrastructures that support modern DevOps methodologies and processes. For example, SUSE OpenStack Cloud dynamically allocates compute, storage, and networking resources on demand and includes self-service access to deliver the services and applications customers need when they need them. Built on Ceph technology that reduces capital and operational expenditures, SUSE Enterprise Storage provides a self-managing and self-healing storage infrastructure. And SUSE Manager delivers a robust infrastructure-management solution that supports multiple Linux distributions; hardware platforms; and physical, virtual, and cloud environments. Taken separately or together, these solutions excel at helping organizations drive innovation.
To learn more about the many ways that SUSE enables software-defined infrastructures for meeting the needs of a digital economy, visit https://www.suse.com/solutions/
The Four Things That Enterprises Hate Most About Storage
By: Jason Phippen
Jason Phippen is the product marketing lead for SUSE Enterprise Storage, the new software-defined storage offering from SUSE. Jason has more than 15 years of product and solution marketing experience previously working with companies such as VERITAS, Computer Associates and Emulex prior to joining SUSE in 2014.
When you think about spending money on your home, you think about things that might make life easier and more enjoyable: the extension on the kitchen that means the entire family can get round the table at Christmas—even the in-laws; the extra bedroom; the privacy-affording en suite bathroom. This kind of spending is exciting because it makes life better: you and your spouse sit together in the evening and actually enjoy planning the works. There’s another sort of work in the home, though—equally complicated and necessary, yet somehow simply not satisfying.
This is the horrible truth that your roof has had its day and will need a complete—and expensive—refit. It’s the central heating boiler that has keeled over and died, leaving you with no choice but to cough up for a replacement. Unsurprisingly, we don’t like this kind of spending; it’s “dead” money that does nothing to improve our lives and merely sustains us in our present condition. You might sit around and plan the works with your spouse...but this time you won’t be doing it with a glass of wine in hand, and the excited look has gone from your faces.
When it comes to improving the enterprise, storage spending has the status of roof works—no matter how elegant the engineering, it is seldom a source of happiness. It is a “sink” cost, something you must incur to keep the place running. So, perhaps, it’s really not surprising that hate #1 when it comes to storage is cost. In an independent survey conducted by Loudhouse for SUSE, 80% of over 1,200 storage decision makers worldwide cited the cost of storage as their top frustration. We don’t like paying for it, but we pay through the nose for it: storage accounts for a whopping 7% of IT spending.
Coming a close second at 74%, hate #2 is performance. It’s bad enough that the enterprising householder has to spend all that cash on things that don’t really improve the bottom line; when you lay out the money and still don’t get the performance, it’s like replacing the roof only to find that it still leaks.
Hate #3 is complexity. So you’re planning works that you didn’t want to do, that add nothing to your happiness, and then you find out it’s going to be hard work. Really hard work. You thought the roof was a single piece of work; it turns out that it isn’t—the previous owners of your house had a string of different builders in, who used different materials that sort of work together. There are all these gutters and pipes funnelling water this way and that instead of a single coherent structure. Fixing it is going to require a lot of thought that takes time away from other, more interesting projects.
Coming in as a tie for hate #4 are “inability to support innovation” and “lack of agility.” You see, at some point you are going to want to do that extension and actually do works that improve your quality of life—AKA your enterprise’s bottom line. As you set your sights on this goal, though, you don’t want to find that the state of the roof is holding you back. All too often it does.
OK, so let’s review: storage is too expensive, it doesn’t perform as well as we want and need it to, it’s ridiculously complicated, and it holds us back from doing valuable work. That’s quite a few reasons to hate storage—and just as many reasons to like software-defined open source storage: cut costs, improve performance, reduce complexity, and free up your time to focus on things that can actually improve the business.
SUSE Solidifies Its Play in the As-a-Service Arena
By: Robin Rees
Robin is a 20-year communications veteran specializing in building brand awareness and market preference for enterprise technology solutions. Her agency experience includes running Microsoft’s analyst relations team at WE Communications, as well as leading several teams on the agency’s SAP account. Robin’s corporate experience includes global roles for industry stalwarts such as Boeing, as well as smaller enterprise technology firms looking to expand their markets and launch new products.
SUSE Wins Another One for the Enterprise
According to the RightScale 2017 State of the Cloud Survey, a majority of enterprises are running their workloads in the cloud, a finding that validates SUSE’s strategic focus on strengthening its play in the open source, software-defined infrastructure and application world of enterprise-grade cloud computing. SUSE’s latest steps toward achieving this goal make noteworthy advances in the enterprise Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) arenas. For example, SUSE has recently welcomed several star IaaS and PaaS acquisitions from Hewlett Packard Enterprise (HPE): OpenStack IaaS technology, Cloud Foundry PaaS technology, and a highly talented team of OpenStack and Cloud Foundry experts.
Initial Plans for OpenStack Technologies
OpenStack is poised to become a significant innovation enabler in the software-defined infrastructure space, which is precisely why SUSE initiated negotiations to acquire OpenStack IaaS technologies from HPE. SUSE’s enterprise customers will first see the benefits of the OpenStack technologies that it acquired when it integrates them into SUSE OpenStack Cloud. The addition will create an even stronger solution with new and improved features that will enable enterprises to address a wider variety of use cases.
SUSE is a platinum member of the OpenStack Foundation, which means that it contributes not only to the OpenStack community but also to the OpenStack software itself. SUSE will continue to do so going forward.
Plans for PaaS with Cloud Foundry
Cloud Foundry is to PaaS what OpenStack is to IaaS. SUSE plans to use its newly acquired Cloud Foundry PaaS assets to develop and deliver a fully certified, enterprise-ready solution that will accelerate its entry into the PaaS market.
PaaS technologies are all about enabling DevOps teams to develop and deploy apps faster. Because web apps are the currency of choice in the growing digital economy and Cloud Foundry is the industry-standard open source platform for PaaS deployments, closing this acquisition was an important part of SUSE’s strategy. SUSE’s recent upgrade to a platinum-level Cloud Foundry Foundation membership marks a strong commitment to provide enterprise-grade PaaS capabilities to customers and partners across its entire ecosystem. And SUSE CTO Thomas Di Giacomo has now joined the Cloud Foundry board.
New Talent to Make the Most of the Acquisition
With SUSE’s technology acquisitions come several former members of HPE’s technical staff—engineers, product managers, systems engineers and the like who have the knowledge and experience to help SUSE realize its IaaS and PaaS goals. With their help, SUSE will strengthen its SUSE OpenStack Cloud roadmap and accelerate its entry into the PaaS market.
SUSE welcomed these new employees aboard on the day it announced the acquisition’s closing and they are already fully engaged with members of existing SUSE staff.
A Mutually Respectful, Non-Exclusive Relationship
HPE has named SUSE a preferred partner for Linux, OpenStack IaaS, and Cloud Foundry PaaS, and may use SUSE technologies inside its Helion OpenStack and Helion Stackato. This new, stronger relationship means that HPE’s customers will benefit from the investment and innovations that SUSE pours into its Linux, OpenStack IaaS, and Cloud Foundry PaaS products. And because this mutually respectful relationship is non-exclusive, SUSE’s existing and future partners, including its original equipment manufacturers (OEMs) and independent hardware vendors (IHVs), will benefit as well.
More Wins
As you may recall from the previous edition of SUSE Insider, SUSE also recently acquired openATTIC. In conjunction with the openATTIC experts who joined the SUSE team at closing, this acquisition strengthened SUSE’s play in the software-defined storage arena, just as OpenStack and Cloud Foundry will strengthen its play on IaaS and PaaS turf. While SUSE Insider isn’t able to speak to possible future acquisitions, we can say that all of SUSE’s acquisitions have benefited or will benefit SUSE, its partners, its customers, and the open source community at large. SUSE is, as they say, a company on the move.
Are YES CERTIFICATION Bulletin Config Notes from “the Dark Side” of Hardware Compatibility?
By: Kay Tate
Kay Tate is the ISV Programs Manager at SUSE, driving the support of SUSE platforms by ISVs and across key verticals and categories. She has worked with and designed programs for UNIX and Linux ISVs for fifteen years at IBM and, since 2009, at SUSE. Her responsibilities include managing the SUSE Partner Software Catalog, Sales-requested application recruitment, shaping partner initiatives and streamlining SUSE and Partner Portal processes for ISVs.
The easy answer is nope—no way! That comes from a person with years of YES CERTIFICATION experience, not some Rogue person who suddenly has their Force Awaken one day! Please do read on, because “fear is the path to the dark side,” and “hard to see, the Dark Side is”!
In a previous multi-part set of blogs, I outlined in detail all the information contained on a YES CERTIFICATION bulletin. Those blogs highlighted what is on a bulletin, how to read and understand what was validated during certification testing, and how each section of the bulletin can help you understand specific hardware compatibility. In this blog I will dive a little deeper and provide more detailed information about the Config Notes section of a bulletin. In the process, I’ll answer the question: are Config Notes on a YES CERTIFICATION bulletin a bad thing? From the first sentence above you already know my answer to that question, but just like any good vacation, part of the joy is in the journey.
First things first. If you don’t already know, the best place to search for a YES CERTIFICATION hardware bulletin is https://www.suse.com/
The Config Notes section on a bulletin could be blank or could contain one or more highlights about a certified configuration. It could contain required workarounds, functionality that did or did not work, or even required additions, such as updated drivers. The Config Notes on a bulletin contain key data you will want to be aware of when implementing SUSE Linux Enterprise on a specific hardware platform.
The information provided by a Config Note (configuration note) can range from installation and boot details to core dump (kdump) settings, updated kernel drivers or required maintenance updates. The vast majority of configuration notes are informational in nature—things you should know if you are installing and configuring SUSE Linux Enterprise on a certified hardware platform. They may provide more information on how the disks were configured during certification testing. One of the key value propositions of hardware certification is the ability to capture and document a known working configuration, which can serve as a hardware buying guide or as a troubleshooting reference for solving a problem with a system.
Is it possible that a certification bulletin will not have any configuration notes? Yes. Many system certifications are completed without any issues, and the rest of the certification bulletin already contains a great deal of configuration information. But, in my opinion, if you come across a bulletin that does not have ANY configuration notes, you might wonder what the certifying company (usually the hardware vendor) isn’t telling you! Then again, it is possible that the certification bulletin contains everything you need to know. As a reminder, one of the purposes of a certification bulletin is to provide useful hardware/operating system configuration data!
As noted above, configuration notes could list how the operating system was installed: perhaps from an internal DVD (if an internal DVD is listed in the Tested Configuration, that may be an indication as well), a virtual DVD or even a USB-attached DVD. The configuration note could indicate that the system was installed over the network via PXE (Preboot Execution Environment), with a UEFI (Unified Extensible Firmware Interface) boot loader or using a legacy installation. A configuration note could list that an Installation Kit, a Driver Kit or a kISO was used; all of these are simply updated installation media provided by SUSE to solve a known issue on that hardware. Note: the issues these updates solve can include enablement for new cutting-edge hardware that was not available when the operating system was originally released.
A configuration note could provide the amount of memory required for kdump to function properly, permitting a valid crash kernel image to be captured (when the default setting doesn’t work). It could tell you whether a SUSE Linux Enterprise maintenance update was used during certification testing; this normally means the hardware requires an operating system update to function at peak compatibility. It could also list a specific driver version that was installed during testing.
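If you want to check such a note against a live system, a quick, hypothetical check like the one below verifies whether a crashkernel memory reservation is present on the kernel command line and whether the kdump service is active; it is an illustration only, not part of the certification tooling.

```python
# Minimal sketch: confirm that memory is reserved for kdump and that the
# kdump service is running. Uses standard Linux interfaces; illustration only.
import subprocess

with open("/proc/cmdline") as f:
    cmdline = f.read().split()

reservation = [p for p in cmdline if p.startswith("crashkernel=")]
if reservation:
    print("crashkernel setting:", reservation[0])
else:
    print("No crashkernel= reservation found on the kernel command line")

state = subprocess.run(["systemctl", "is-active", "kdump"],
                       capture_output=True, text=True)
print("kdump service:", state.stdout.strip())
```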
Configuration notes also list power management functionality that is or is not supported on the hardware: hibernation, sleep, fan control or thermal monitoring, battery support, or CPU frequency scaling. There could be information about a workaround for a specific power management function. A note could also document how to enable a power management function by modifying a configuration file or using a specific command line, or it could outline a change in system settings.
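As a small, hypothetical example of inspecting one of these functions, the sketch below reads the CPU frequency scaling governor from the standard Linux cpufreq interface in sysfs.

```python
# Minimal sketch: read the CPU frequency scaling governor for cpu0 from the
# standard Linux cpufreq sysfs interface. Illustration only.
from pathlib import Path

governor = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")
if governor.exists():
    print("cpu0 scaling governor:", governor.read_text().strip())
else:
    print("cpufreq scaling is not available on this system")
```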
Configuration notes could include basic information, such as whether the system was tested as a headless configuration with no graphics adapter. It could list a URL where more specific installation information is available from the hardware manufacturer.
The final category of configuration notes I’ll discuss here is for virtualization-specific certifications. These could be Xen or KVM certifications, or a third-party hypervisor certification. These configuration notes normally have to do with virtualization host setup or boot parameters. They may list specific SUSE virtualization drivers used during testing, such as the VMDP (Virtual Machine Driver Pack). A configuration note could also contain guest installation tips or a workaround, possibly even a recommended way to install the guest. Beginning with SUSE Linux Enterprise Server 12, all Xen and KVM virtualization bulletins will have a configuration note listing whether the hardware supports network SR-IOV (Single Root I/O Virtualization) or network PCI Pass-through. If one of these features is supported, the note will also list the network adapter used during that testing. Note: SR-IOV and PCI Pass-through are ways to use a host network adapter directly in a virtualization guest.
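If you are curious whether a given adapter on your own hardware exposes SR-IOV virtual functions, a quick, hypothetical check like the following reads the standard sysfs attribute; the interface name is just an example.

```python
# Minimal sketch: check whether a network adapter exposes SR-IOV virtual
# functions via the standard sysfs attribute. Interface name is hypothetical.
from pathlib import Path

iface = "eth0"
vf_path = Path(f"/sys/class/net/{iface}/device/sriov_totalvfs")
if vf_path.exists():
    print(f"{iface} supports SR-IOV with up to "
          f"{vf_path.read_text().strip()} virtual functions")
else:
    print(f"{iface} does not expose SR-IOV virtual functions")
```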
Configuration notes also list things that are not compatible between the hardware and the operating system; terminology such as “does not support” or “not supported” appears in these notes. But the vast majority of configuration notes are purely informational: they help our customers have a better experience and provide in-depth hardware/operating system compatibility information.
We hope YES CERTIFICATION and YES bulletins help you make better decisions when purchasing new systems for your company infrastructure. Our goal is to give you the ability to say “I’m one with the Force, and the Force is with me” when buying servers and workstations. You can find more information about SUSE YES CERTIFICATION at https://www.suse.com/