SUSECON 2016 Preview: Technology Reigns Supreme This Fall!

By: Kent Wimmer

Kent Wimmer is the Director of Strategic Events at SUSE. He has held a wide variety of roles within the company for over 20 years, including sales, marketing, alliance and channel management. He loves to be face-to-face with customers, partners and prospects to extol the virtues of SUSE solutions and help them have fun while they learn. If there is ever any free time available, Kent spends it with his wife and seven children, and volunteers with youth groups, church and political groups.

I love the fall. I love fall colors. I love stepping out into the crisp, refreshing morning air that wakes me up and gets my brain working. I love harvesting vegetables from my home garden as the fruits of a whole summer’s labor of love. But most of all, I love knowing that SUSECON is right around the corner!

SUSECON has become a flagship conference in the industry for many of the same reasons I love fall: crisp, refreshing content invigorates your brain and gets you thinking about new possibilities. Innovation abounds as SUSE customers, partners and general open source enthusiasts learn about new open source solutions both from SUSE and from upstream projects. Project contributors meet enterprise users and the world will never be the same…

Every year I get feedback from SUSECON attendees that the quality of content at our conference is extraordinary. Not only are the topics widely varied, but they are presented by SUSE engineers, product managers, customers and partners who are able to give more than just a cursory overview. Our presenters get into the meat of their topics, and they remain accessible to attendees throughout the conference for any questions not answered during the sessions themselves. Session content is not solely focused on SUSE products, but on the underlying technologies and the projects that create them. This approach helps make SUSECON one of the greatest tech conferences in the industry.

Here’s a preview of what’s coming in SUSECON 2016—more than 150 sessions (I kid you not!):

  • 75 Technical Tutorials covering both products and technologies (Come and learn that they are not the same thing!)
    • SUSE Product Tutorials include the latest updates to SUSE products, including: SUSE Linux Enterprise, SUSE Manager, SUSE OpenStack Cloud, SUSE Enterprise Storage and more…
    • Projects and technologies covered in Tutorial sessions include: Ceph, CephFS, Cloud Foundry, Docker, Kubernetes, KVM, LDAP, Manila, OAuth, openATTIC, OpenDOC, OpenStack, RADOSGW, Salt, SSSD, Sudoers and many more…
    • Partner products covered in Tutorial sessions include: Active Directory, CentOS, Oracle (12c and RAC), RHEL and SAP HANA. And there are a host of Sponsor Sessions where sponsors demonstrate how their outstanding solutions tie into the SUSE ecosystem.
    • Technical sessions cover a broad range of topics, including: Benchmarking, Big Data, Block Mirroring, Containers, DevOps, Disaster Recovery, High Availability, HPC, Hyper-Convergence, IoT, Live Patching, Modules, Monitoring, Public Cloud, Security, System Hardening, VM lifecycle management—and more that I just don’t have room to include here…
  • 100 hours of Hands-on Training on SUSE products as well as other technologies such as: Cloud Foundry, Docker, Kubernetes, Salt and SSSD.
  • 22 Futures Sessions where attendees hear directly from SUSE Engineers and Product Managers about what is coming next in the SUSE products they love. Presenters lay out the roadmaps for product development and openly discuss which upstream elements are going to be incorporated into the next generation of SUSE enterprise solutions. And participants get the chance to go straight to the source to plug their favorite technologies!
  • 16 Case Studies from SUSE customers and partners where they give real-world insight into structuring and implementing SUSE solutions in their organizations. These case studies cover topics such as:
    • SAP HANA deployment and high availability tuning
    • Migrating to SLES on a mainframe
    • Cloud deployment with Kubernetes
    • Implementing NFV on SUSE OpenStack Cloud
    • Migrating legacy SAN environments to a distributed Ceph cluster
    • Converting a data center to an enterprise public cloud
    • And more great customer use cases
  • 16 Business-level sessions where attendees can get high-level overviews of technologies that are new to them or understand SUSE’s position on controversial issues within the open source community. These sessions are a great way to “break into” a new discussion and understand many sides of a technology topic.

If you’d like to have a truly deep dive on a specific topic, SUSECON also offers four Pre-Conference Workshops for an additional fee. These workshops are a full day of hands-on training under the expert tutelage of the celebrated SUSE Training Team. The four workshops offered are:

  • Securing SUSE Linux Enterprise Server
  • Install and Configure a Ceph Cluster with SUSE Enterprise Storage 3
  • Deploy a Highly Available SUSE OpenStack Cloud
  • SUSE Manager 3 New Features

There is far more content at SUSECON than we can do justice to in this short space. Please take a moment to peruse the Session Catalog and see for yourself at: www.susecon.com/sessions.html

And don’t forget that once you have learned all about open source technologies, you can also prove your knowledge to the world! SUSECON attendees have the opportunity to register for onsite SUSE certification exams on Linux, Storage, OpenStack Cloud and Linux Management—and the exam fee is included in the price of the conference! But seats in these exam sessions are limited, so please register soon in order to guarantee your seat.

So when you wake up on that first fall morning wherever you live, go outside and take a deep breath of that crisp, refreshing air. When you get that invigorating rush and your mind starts racing, make sure it races you to Washington, D.C., on November 7–11 for the most exciting technology, the greatest expert access and the best conference value in the technology industry: SUSECON!

How DevOps Can Support Business Agility for All Companies to Stay Business-relevant

By: Thomas Di Giacomo

As SUSE CTO, Thomas Di Giacomo's vision is a software-defined and cloud-based, IT-powered future for the enterprise. Prior to joining SUSE, Di Giacomo served as CTO and vice president of innovation at Swisscom Hospitality Services, as well as CTO of the Hoist Group, a global provider of IT services to the hospitality and health care industries. He has expertise in open source platforms, development and support of global information systems and technologies.

Introduction

In our modern, fast-paced, digital-first world, responding quickly to internal and external changes without losing vision is absolutely key for all companies that want to survive, thrive and ultimately surpass the competition.

Today, most companies’ success relies on software and applications, directly or indirectly, but in all cases with a significant impact on their overall performance. From that perspective, having the right culture and the right processes and tools for software and application development, as well as their delivery and maintenance, is not only necessary but essential for companies to differentiate themselves and succeed in every market.

DevOps for Business Agility

While achieving business agility requires more than software and IT (for instance, the sales and marketing approach and the service and business models must be considered), applications are necessary to all businesses’ success. And today, DevOps is a dominant way to achieve business agility from a software and application perspective—from ideas to delivery to the market (and looping back indefinitely). When talking about DevOps, the first aspect to acknowledge is that this is a balanced combination of an adapted culture, appropriate tools and delivery/management processes. If one of these doesn’t match, the flow isn’t performing as it should, if it’s performing at all.

In this article, we focus on the appropriate tools for DevOps in the context of business agility, call it Enterprise DevOps. And although the related processes can be generalized somewhat, together with culture they are more company- and situation-specific than the tools themselves, hence our focus on tools here. We would, however, be happy to discuss culture and processes with you directly.

From its creation, SUSE has been a truly open, open source company. Deeply rooted in software development, we have applied DevOps principles for years, even before the term itself existed. We have learned many lessons, and continue the open-ended quest to keep learning and improving, while building tools to support the DevOps process. In the spirit and tradition of open source, these tools are available to all (including you) and jointly developed and used by various communities. You can see in the figure below how these tools and others can facilitate the DevOps phases.

The Phases and the Tools

Before digging into the various phases of a DevOps flow (since this is in the context of business agility and enterprise needs), it is important to consider security, interoperability and reliability as prerequisites to all the steps involved. Automation, from unit tasks to high-level tasks, is also a foundational element of the DevOps approach, where the different phases should actually blur as much as possible to reduce friction and speed up the whole flow.

Let’s go through the various phases of a typical DevOps flow. While there are slightly different ways to represent it and to break it into phases, the infinity-symbol representation illustrates a generic DevOps loop. Because it is a closed loop, the order of the phases is not particularly relevant, but let’s start from “plan” for the sake of listing them. Keep in mind that there should be as few hard lines as possible between the phases, meaning that most of the tools deliberately overlap several phases (so splitting them by phase is not an exact science).

Plan
There are a lot of available tools (including of course those in the open-source arena) that can be used for the planning phase: from feature, idea, and project management to issue, bug and general collaborative tracking. This broad category includes tools such as Trello, Taiga, Jira, Redmine, Mantis, Request Tracker and Bugzilla.

Code
Obviously developers need programming languages supported by the underlying operating system or platform where their applications are expected to run. With DevOps, that means running similarly in the development and production environments (which is where and why, for instance, a discussion about containers should occur). Developers also need some sort of Integrated Development Environment (“or not,” some developers would argue, but we will save that discussion for another time) and, especially, Source Control Management (SCM) for collaborative continuous development.

In terms of OS, VM, public cloud or container host, Linux is obviously by far the best choice for any coder. One can use developer tools from enterprise Linux distros (such as the one SUSE provides) or free community-based distros. openSUSE, for instance, with both Leap and its Tumbleweed rolling release, shares the same code base as SUSE Linux Enterprise, facilitating the move back and forth and benefitting the whole DevOps approach. It is also important for developers to pre-check whether their dev environment allows them to build their code for their target architecture (x86, AArch64, z, Power or others). We could also mention minimal/lean/micro-OS distributions as particularly relevant for serving as hosts; SUSE Linux Enterprise JeOS is one example of this.
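
As a quick illustration of that architecture pre-check, here is a minimal sketch using only Python’s standard library (the function name and output format are ours, not part of any SUSE tool):

```python
import platform

def build_target_summary():
    """Summarize the architecture and kernel this environment builds for natively."""
    return {
        "machine": platform.machine(),  # e.g. "x86_64", "aarch64", "s390x", "ppc64le"
        "system": platform.system(),    # e.g. "Linux"
        "release": platform.release(),  # kernel version string
    }

if __name__ == "__main__":
    info = build_target_summary()
    print(f"Native build target: {info['machine']} on {info['system']} {info['release']}")
```

Cross-compiling for other targets (AArch64, z, Power) still requires the appropriate toolchains, of course; this only reports what the current environment is.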

Regarding SCM, some of you might remember CVS or be familiar with Subversion. But the source control tool you are probably most familiar with is Git (possibly together with the web-based hosting services GitHub or GitLab, which include issue-tracking capabilities).

Build
When coding is complete, it’s time to build the application(s)/package(s)/image(s). This is an area where companies like SUSE have been very active, for themselves as well as for the whole open source community. SUSE has put a lot of effort into Open Build Service, a generic system to build and distribute packages from sources consistently across a wide range of operating systems and hardware architectures. To create an OS/host image, Open Build Service can be complemented with Kiwi or SUSE Studio, for instance, to build and deploy standalone images or public and private cloud services. Open Build Service can also be used with PackageHub for integration into supported enterprise Linux distributions.

Test and Continuous Integration/Continuous Deployment
To realize the benefits of DevOps practices, continuous testing and integration must be included in the flow. openQA, for example, is an automated testing framework for GUI applications as well as the bootloader and kernel, complementing traditional scripted tests in cases where output checks are difficult. One of the most commonly used platforms for CI/CD is Jenkins, but there are also alternative solutions such as Travis CI and Concourse.
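
The fail-fast behavior a CI server such as Jenkins provides can be sketched in a few lines. This toy runner (stage names and logic are illustrative, not any real CI tool’s API) executes stages in order and stops at the first failure:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Step:
    name: str
    run: Callable[[], bool]  # returns True on success

def run_pipeline(steps: List[Step]) -> List[Tuple[str, str]]:
    """Run steps in order, stopping at the first failure; return (name, status) pairs."""
    results = []
    for step in steps:
        ok = step.run()
        results.append((step.name, "passed" if ok else "FAILED"))
        if not ok:
            break  # fail fast, as a CI server would
    return results

# Hypothetical stages standing in for real checkout/build/test jobs
pipeline = [
    Step("checkout", lambda: True),
    Step("build", lambda: True),
    Step("unit-tests", lambda: True),
]
for name, status in run_pipeline(pipeline):
    print(f"{name}: {status}")
```

A real CI server adds triggers (commits, merge requests), isolated workers and artifact handling on top of this basic loop.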

Continuous Deployment and Configuration Automation
Automated deployment and configuration is another important phase of the process. Here again, there are a variety of options. From Chef (complemented with Crowbar in SUSE OpenStack Cloud for instance), Puppet, Juju and Ansible, to Salt (integrated with SUSE Manager), there is an appropriate tool for your use, based on your existing architecture and your technical, operational and business needs.
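
What Salt, Puppet and their peers share is a declarative, idempotent model: you describe the desired state, and only the drift gets corrected, so re-running is always safe. A toy sketch of that idea (the state keys here are made up for illustration):

```python
def apply_state(desired: dict, current: dict) -> dict:
    """Bring `current` in line with `desired`; return only the changes actually made."""
    changes = {}
    for key, value in desired.items():
        if current.get(key) != value:
            current[key] = value
            changes[key] = value
    return changes

system = {"ntp": "installed"}
desired = {"ntp": "installed", "ntp_service": "running"}
print(apply_state(desired, system))  # only the missing piece is changed
print(apply_state(desired, system))  # second run: nothing to do, returns {}
```

Real configuration management tools apply the same pattern to packages, services, files and users across whole fleets of machines.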

Operate and Monitor
Once deployed, within the DevOps philosophy, applications are operated or managed (container and resource orchestration play a key role here) and monitored to feed constant input back into the DevOps loop: improving performance, fixing issues and adapting to shortcomings or new requirements as they pop up. For instance, together with Icinga, SUSE Manager provides insight into what is happening on the systems, and SUSE Enterprise Storage helps automatically adjust data placement to improve application performance. Many other solutions also provide insight, such as the traditional Nagios, Zabbix, Monit, Prometheus and Magnum (via SUSE OpenStack Cloud) for containerized environments. Still others focus specifically on application performance, such as New Relic and Graphite, or provide analytics to interpret the data (Logz.io or the ELK stack).
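
At their core, monitoring tools like Nagios or Prometheus compare collected metrics against thresholds and raise alerts that feed back into the loop. A minimal sketch of that pattern (metric names and limits are illustrative):

```python
def evaluate_metrics(metrics: dict, thresholds: dict) -> list:
    """Return alert messages for any metric exceeding its configured threshold."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {name}={value} exceeds threshold {limit}")
    return alerts

sample = {"cpu_pct": 93.0, "disk_used_pct": 41.0}
limits = {"cpu_pct": 90.0, "disk_used_pct": 85.0}
for alert in evaluate_metrics(sample, limits):
    print(alert)
```

Production systems layer time series, alert routing and dashboards on top, but the threshold check itself is this simple at heart.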

Conclusion

Whether your company is already a DevOps ninja or entirely new to the concept, business agility will continue to become more and more important and the tools to help improve agility will become more and more advanced. Thus, you need to be prepared to constantly adapt and improve the way you provide applications to your business. We recently worked with Tyro Payments to facilitate adoption of DevOps practices for them to achieve shorter time to market. You can check out their story to learn how they have improved their business agility.

Much like CI and CD for applications, adopting a DevOps mentality and implementing it in real life is a constant journey with no final destination—at least as far as the industry and analysts can see for now. The next steps depend on your own situation, in terms of the business, the existing culture, and the tools and processes. We would be glad to discuss how we can best support your needs the next time we meet.

In future articles I will share our container and Platform-as-a-Service perspectives as they relate to DevOps and improving your business agility. These are clearly useful and key elements of a DevOps strategy and I look forward to sharing my views in the coming months.

Linux at 25

By: Bryan Lunduke

Bryan Lunduke is the community and developer evangelist for SUSE, elected member of the openSUSE Board, technology journalist for Network World, author of nerdy books, podcaster and creator of ridiculous videos (mostly about Linux).

When I turned 25, I thought I was old. I thought I was wise. I thought I had accomplished so much.

Boy, was I wrong.

Despite having lived for a full quarter of a century (which sounds a lot more impressive than “25 years”), I was really only just getting started. Between age 25 and 30 I learned more, and had more adventures, in those five short years than in the first 25 combined. And the rate of experiences and adventures increased exponentially from there.

Now, here I sit at age 37. And there are a few things I know for certain: I am not old. I am not wise. And, while I’ve certainly accomplished a lot more than I had at 25, I’m really still just getting started.

On August 25, 2016, Linux turned 25 years old. A quarter of a century. Linux is now one fourth of the way to being 100 years old.

In that time, the Linux kernel has been ported to more computer architectures than you can shake a stick at. It all started with a little 386 and grew to support ARM, DEC Alpha, 68k, x86-64, MIPS, Z Systems, RISC, SPARC, and many others.

The first release, back in 1991, had just over 10,000 lines of code. Today? Version 4.7 contains somewhere in the neighborhood of 22 million lines of code. Twenty-two million. That would be one line of code for every man, woman and child in the states of Washington, Oregon, Idaho, Montana, Utah, Nevada and New Mexico combined.

That code has been written by over 12,000 people. Twelve thousand. Distributed around the world and developed entirely in the open. The size and scope (and longevity) of the Linux kernel project is absolutely legendary.

In 2011—when the total lines of code in the kernel was a fraction of what it currently is—it was estimated that the cost to redevelop the Linux kernel (using a closed, proprietary development model) would top US $3 billion. That was five years, and many millions of lines of code, ago.

Twenty-five years ago, Linux first worked on the desktop PC of one Finnish man, powered by an Intel 386 CPU. Now, Linux absolutely dominates the computing world. Not including desktop GNU/Linux-based systems at all—and completely ignoring all of the servers, routers and devices that power the bulk of the entire Internet—if we simply consider Android alone (which is running on the Linux kernel), Linux has over half of the market share of every computing device sold.

Not only has Linux, as a project, survived for a quarter of a century, it has absolutely flourished to the point of near total, global domination. Thinking about how the other operating systems of 1991 have fared makes this even more astonishing.

Microsoft hadn’t yet released Windows 3.1. That’s right. Windows 3.0 (running on MS-DOS 5.0) at that point in time was the state of the art from Microsoft. Windows NT was still two years away.

And Apple? MacOS System 7 had just been released. Not OS X. Not MacOS 8 or 9. System 7.

Oh! The Commodore 64 was still in production—no joke. And you could still buy new Amiga computers, as well as the Macintosh Classic (with its monochrome monitor), which would be produced by Apple Computer for almost two more years.

Thinking of all the operating system kernels that have come and gone during the time that Linux has been alive and thriving is absolutely staggering. Heck, Linux existed back when “Apple” still had the word “Computer” in its name.

What blows my mind even more is the simple fact that SUSE has been around since almost the very beginning—founded in 1992—several years before Linux even hit “1.0.” Almost as long as Linux has existed, SUSE has been there, distributing Linux on floppies if need be.

With all that Linux has accomplished—and all of the Linux competitors that have come and gone—it’s easy to think of how old, wise and accomplished it is. But if the life of Linux is anything like my own, the next five years are going to make the first 25 look downright dull.

Enabling the Transition to a Software-defined Data Center

By: David Byte

David Byte is a Sr. Technical Strategist on the IHV Alliances team at SUSE. He has been involved in customer-facing roles in the storage business since 1999, and his experience spans the breadth and depth of that business. When not working with partners and internal stakeholders to bring leading-edge technology solutions to market, he spends time with his wife and six children at his home in Jenks, Oklahoma.

By: Larry Morris

Larry Morris is a Senior Product Manager focused on SUSE’s enterprise software-defined storage product line. He joined SUSE in 2014 and brings over 30 years’ experience in enterprise storage product development.

As a recent analyst poll implies, many of you who are reading this article are familiar with the benefits of software-defined data centers (SDDCs): A whopping 75 percent of the poll's respondents indicated that they are planning to begin the transition from legacy data centers to SDDCs within the next four years. This number is high for many reasons: Legacy data centers are rigid, expensive, process bound and slow to respond. Silos of compute, data and storage make it difficult for organizations that have legacy data centers to innovate and deploy new applications, appropriately scale existing applications, leverage big data and other new technologies, and in general, benefit from an IT infrastructure that supports the way today's organizations do business.

In contrast, SDDCs are agile—they are silo-free zones that are 50 to 60 percent less expensive to operate than legacy data centers. SDDCs are capable of quickly responding to changing business drivers, such as the recent DevOps model and the cloud computing capabilities that are spurring innovation. And they are flexible. Organizations can scale their data centers out simply by adding new hardware and can retire old hardware by simply migrating the hardware's functions to new machines.

Given these benefits, you would think that more organizations would have already implemented SDDCs. But many organizations that plan to transition their legacy data centers to SDDCs haven't done so for two understandable reasons: First, they have made large investments in their legacy data centers, and second, they are familiar with their legacy data centers' operational and management processes. If your organization is among the many that have yet to transition to an SDDC for these or similar reasons, you should know that SUSE offers a way for organizations to dip their figurative toes in the software-defined pool without having to sacrifice current investments and familiar management processes.

Get Your Software-defined Feet Wet with SUSE Enterprise Storage

SUSE Enterprise Storage is powered by Ceph, an open source software-defined storage technology. It provides a unified block, file and object interface built upon a scale-out, distributed, highly available storage cluster that runs on industry-standard hardware. SUSE Enterprise Storage supports the iSCSI and RBD protocols for block storage; Amazon S3, OpenStack Swift and RADOS for object storage; and a POSIX-compliant file system called CephFS. This inherent flexibility makes SUSE Enterprise Storage ideal for legacy storage environments. The SUSE Enterprise Storage team already supports SUSE Enterprise Storage with iSCSI and plans to support Fibre Channel with a future release. The CephFS filesystem provides the base that will be used in later releases to enable CIFS and NFS access as well. This will give your organization the opportunity to deploy software-defined storage (SDS) directly into its traditional data center, where it can gradually become as comfortable managing SDS as it is managing its legacy storage implementations.

Learn SDS at Leisure

Your organization can get to know the ins and outs of working with SDS in several ways that integrate seamlessly into your existing data center. For example, with SUSE Enterprise Storage, your organization can use SDS as a backup target for your application data using the industry's current enterprise backup software applications.

SUSE Enterprise Storage will also be useful for storing large, unstructured data files such as videos, images and other visual and auditory media. Ceph is already the number one storage infrastructure utilized with OpenStack. And when paired with the iSCSI protocol, it provides a solid second-tier storage location for both Hyper-V and VMware images.

This is but a sampling of the use cases to which your organization can apply SDS within its existing infrastructure.

Transition to SDS as Current Investments Depreciate

Because you have made a significant investment in a traditional storage infrastructure, it is understandable that your organization wants to get its money's worth. But its current infrastructure won't last forever. On average, organizations experience a 40 percent increase in data growth every year. Typical legacy storage implementations do not scale out to accommodate this data growth, so your organization will eventually need to upgrade its legacy system. If it has calculated correctly, this upgrade will coincide with its hardware refresh cycle. And if you have already become familiar with SDS via SUSE Enterprise Storage, it will be easier, and much less costly, to transition the storage that the legacy systems are currently supporting to SDS.

In other words, your organization can avoid a painful and expensive forklift upgrade by using its customary hardware refresh cycle to deploy the affected storage on an SDS system.
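
To see how quickly that 40 percent annual growth compounds, here is a quick back-of-the-envelope calculation (the 100 TB starting point is illustrative):

```python
def projected_capacity(current_tb: float, annual_growth: float, years: int) -> float:
    """Project storage needs under compound annual growth."""
    return current_tb * (1 + annual_growth) ** years

# 100 TB today, growing 40 percent per year:
for years in (1, 3, 5):
    print(f"after {years} year(s): {projected_capacity(100, 0.40, years):.0f} TB")
```

At that rate, capacity needs roughly double every two years, which is exactly why scale-out storage matters for timing the transition.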

One Down, Two to Go

With SUSE Enterprise Storage, your organization can transition to SDS during its regular hardware refresh cycles. This means that when it comes time to transition other data center components to the world of the SDDC, you will have one less component to worry about.

Certification Update

By: Kay Tate

Kay Tate is the ISV Programs Manager at SUSE, driving the support of SUSE platforms by ISVs and across key verticals and categories. She has worked with and designed programs for UNIX and Linux ISVs for fifteen years at IBM and, since 2009, at SUSE. Her responsibilities include managing the SUSE Partner Software Catalog, Sales-requested application recruitment, shaping partner initiatives and streamlining SUSE and PartnerNet processes for ISVs.

SUSE Partner Software Certifications

In this issue, we review examples of new technologies and updates to long-standing partner products, including support for SUSE Linux Enterprise Server 12.

Watch our key management partner, SaltStack, showcase their new enterprise product, which is now available on SUSE Linux Enterprise Server 12 here.

See Synopsys, one of our key silicon-to-software EDA partners, deploy SUSE Linux Enterprise Server 12 updates to a wide range of its applications here.

IBM continues to update all of the major pieces of its Tivoli Monitoring Suite with SUSE Linux Enterprise Server 12. See several examples of its products' updated branding here.