SUSE Insider Newsletter


The SUSE Insider is a quarterly publication with the latest tips and tricks, product advancements and industry insights, available only to SUSE customer subscribers. If there is a specific topic you would like to see covered, please email Marjorie.Westerman@suse.com.




By: Lars Marowsky-Brée

Software-defined Storage: Sizing and Performance


Lars Marowsky-Brée is a SUSE Distinguished Engineer and currently serves as the architect for the high availability and distributed storage products. He joined SUSE in early 2000 and is a frequent speaker at conferences. He wears black and lives in Berlin, Germany.

Introduction


SUSE Enterprise Storage adds software-defined storage (SDS) to the SUSE product portfolio. Built around the fully open source project Ceph, it is a very flexible and highly scalable storage solution, scaling from dozens of terabytes to many petabytes. It supports access via object-based interfaces such as S3 and Swift or through block-based access, either natively on Linux or through iSCSI on all common operating systems, and will support file interfaces in a future version. Use cases include hosting SUSE OpenStack Cloud, as well as media repositories, disk-to-disk backup and other replacements for traditional SAN storage systems. It is built, tested and hardened on top of SUSE Linux Enterprise Server.


SDS systems are nothing less than a revolution in the storage world. Legacy SAN systems are limited to the hardware and software options offered by a single vendor. Such vendor lock-in constrains the flexibility and scalability of the solution and comes at a cost premium.


In contrast to those legacy SANs, SDS solutions are simply programs that run on top of a regular operating system and are deployed on commercial off-the-shelf (COTS) hardware. This allows them to be tailored precisely to the needs of any specific situation, avoids vendor lock-in, encourages competition and reduces costs. SDS solutions aggregate the local storage devices of regular servers, via a distributed service, into a self-managed, self-healing and dynamic cluster and export that storage to clients.


A Closer Look at Ceph


Ceph, at the heart of SUSE Enterprise Storage, consumes individual storage devices in each server; it does not make use of local RAID solutions, since redundancy is handled by Ceph itself. Those devices are formatted with a standard Linux file system (XFS or btrfs) and mapped to object storage daemons (OSDs). Each OSD in a Ceph cluster represents a single physical storage device. The OSDs manage replication of the data among themselves, rebalancing and reconstructing as needed. A single server will typically host many OSDs; the OSDs hold placement groups, which are aggregated into pools. The current configuration and state of the cluster are decided via special monitor processes (MONs); there is an odd number of these (typically three or five) for redundancy. The MONs do not directly serve data to clients; they serve a map of the OSDs and a configuration describing how data should be distributed. Ceph does not use the monitors as metadata servers that tell clients where a specific piece of data is; these would be bottlenecks. Instead, Ceph employs a pseudo-random distributed hashing algorithm (called "CRUSH") that allows all parties to compute any data location themselves, avoiding any central instance and achieving evenly distributed access patterns. Clients can then connect directly to the OSDs to access the data. This is the basis for Ceph's massive scalability, both in terms of performance and size.
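

For readers who want to see these components on a running cluster, the following commands are a minimal, hedged sketch; they assume an already deployed Ceph cluster with an admin keyring on the node, and file names are placeholders:

    # Overall cluster health, including the monitor quorum
    ceph status

    # The OSD tree: hosts and the individual storage devices (OSDs) they contain
    ceph osd tree

    # The pools defined in the cluster
    ceph osd lspools

    # Export and decompile the CRUSH map that clients use to compute data locations
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt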


Access modes such as S3/Swift or iSCSI require additional gateways. The upcoming support for CephFS, a distributed, POSIX-compliant file system, will also deliver similarly scalable metadata servers.


Choosing the Type of Solution


All of these components can be tuned for a specific use case. However, such flexibility comes at the price of having to make choices. Thus, there are three different approaches to using SDS in the enterprise: appliances, reference architectures and "build your own" solutions.


Appliances combine a pre-installed and tested software and hardware setup. They can include everything from the hardware components and software to support subscriptions. Such turnkey bundles have been well tested, and—similar to legacy SAN systems—come in a variety of base flavors (such as capacity-optimized versus performance-optimized). Their great advantages are reduced complexity (and, thus, risk) and ease and speed of deployment. One example for SUSE Enterprise Storage is available from Thomas-Krenn.


Reference architectures are developed jointly between a hardware partner and the software provider. Similar to the appliances (whose configuration can also be used as a reference), they provide a simplified starting point for customizing your own deployment. SUSE is in the process of developing these with many of our hardware partners.


The third path is to build a configuration from scratch. However, the considerations here are also applicable to customizing or extending an appliance, a reference architecture or even an existing system.


Understanding Your Requirements


The key to a successful design is to understand the requirements. The fundamental properties are performance, density and capacity, reliability and availability, and cost. These goals are not always compatible; for example, there is a tradeoff between density and the redundancy required for availability. While cost per capacity and density are aligned up to a point, performance commands a premium in price and is also at odds with the highest possible density. How to resolve this conundrum?


Let's start with availability and reliability. These are usually mandated by the business requirements and can be expressed differently: how many faults must the storage system be able to tolerate before loss of service or, worse, loss of data (and, thus, a fallback to the backup and recovery mechanisms) occurs? A typical answer is two or three; in a large environment, it is quite likely that more than one component might fail at the same time. The risk of multiple concurrent faults grows with the number of components (especially nodes and disks) in the system. Ceph, and thus SUSE Enterprise Storage, manages replication and rebuilding behind the scenes and also periodically "scrubs" the data to verify that it is still consistent and to detect bit rot.
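

In Ceph terms, the number of tolerable faults maps directly to the replication settings of a pool. The following is a hedged sketch (pool name and placement group count are placeholders): size=3 keeps three copies of every object, so two copies can be lost without losing data, while min_size=2 stops serving I/O if fewer than two copies are currently available.

    # Create a replicated pool (name and placement group count are examples only)
    ceph osd pool create mypool 128

    # Keep three copies of every object
    ceph osd pool set mypool size 3

    # Stop serving I/O if fewer than two copies are currently available
    ceph osd pool set mypool min_size 2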


Performance itself is a multidimensional property: throughput and bandwidth overlap with latency and input/output operations per second (IOPS), but they are not the same. Another crucial distinction is the one between the aggregate performance of the whole cluster and the performance available to a single client thread. The usual complexities of storage performance—writes versus reads, block sizes, sequential or random access and so forth—also apply.


Greatly simplifying, throughput scales with the number of systems and storage devices. Ceph is excellent at parallelizing IO, and the more components it has available to distribute the workload across, the higher the throughput. Of course, bandwidth is also limited by that of the network layers. On the other hand, latency (the time needed to complete a single IO operation) primarily depends on the characteristics of an individual server's components and the network. To improve latency, faster networking, SSD/NVMe journaling for writes and faster storage devices are key. The number of IOPS that an SDS system can deliver is then a combination of both latency and throughput. For the network hardware, throughput and latency improve together: 10 Gbit/s Ethernet is better than 1 GbE and is itself left far behind by 40 GbE or InfiniBand.
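

To get a first impression of a cluster's aggregate throughput and latency, the rados benchmark that ships with Ceph is a reasonable starting point. This is only a rough sketch (pool name and runtime are placeholders) and is no substitute for testing with your real workload:

    # Write 4 MB objects for 60 seconds; reports bandwidth, IOPS and latency
    rados bench -p mypool 60 write --no-cleanup

    # Sequential and random read tests against the objects written above
    rados bench -p mypool 60 seq
    rados bench -p mypool 60 rand

    # Remove the benchmark objects afterwards
    rados -p mypool cleanup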


As suggested, Ceph OSDs can utilize journals in front of storage devices to speed up writes. Given the performance and price differential, this typically means using a single SSD to host the journals for multiple hard disks. Those SSDs should be carefully selected for durability, since they will see a lot of writes, and a failure or performance degradation of the SSD will affect multiple OSDs. Common ratios are in the range of four to eight disks to one journal device.
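

At deployment time, the journal can be placed on a separate SSD device or partition. The sketch below uses the ceph-disk tool current at the time of writing; the device names are placeholders, and the deployment tooling shipped with SUSE Enterprise Storage may handle this step for you:

    # Prepare an OSD on /dev/sdb with its journal on the SSD /dev/sdf,
    # then activate the resulting data partition; repeat for each of the
    # four to eight hard disks sharing that journal SSD
    ceph-disk prepare /dev/sdb /dev/sdf
    ceph-disk activate /dev/sdb1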


Buffered by the journal, the performance of a single disk is no longer so critical; Ceph will distribute IO over many spindles, and aggregate throughput will still be excellent. Reliability is similarly no longer dependent on a single device; still, the SAS protocol allows for better error management than SATA. This implies that the very high-end 15k RPM enterprise drives do not provide the best value for the money; certified drives in the 5,400-7,200 RPM range should be considered. As for density, lower-capacity drives provide more spindles for Ceph to distribute the load across, but their cost per terabyte is high, similar to the very high-capacity drives. The sweet spot, at the time of this writing, is the middle range of around 4 TB per (NL-)SAS drive.


Those storage devices are—today, at least—not directly connected to the network, but are components within servers. Those servers need to have adequate CPU and memory resources as well as internal and external connectivity to avoid bottlenecks. This is easier to achieve if the number of storage devices per server is lower; a 1U server with 12 drives behind a 10 GbE network obviously has more bandwidth (and more of the other resources) per drive than a 4U server with 90 drives. Also, the failure of a single such large server would subtract a vast amount of capacity and, thus, redundancy and performance from the cluster, and it would require massive resources to rebuild. 2U servers currently offer the best trade-off, with 1U servers for performance pools and 4U servers only in the densest, very large configurations.


Density and performance are both further influenced by how you configure your Ceph system to provide the redundancy needed. Ceph supports two fundamentally different approaches: replication and erasure coding. Replication means storing bit-identical copies of the data; so if you need to be able to tolerate two faults without data loss, three copies are needed in total, resulting in a redundancy overhead of 200 percent. Erasure coding, on the other hand, is more flexible: the data is divided into a number (k) of chunks, and for these data chunks, a number (m) of parity chunks are computed, from which lost chunks can be reconstructed. The system is thus able to tolerate m faults, with an overhead of only m/k relative to the stored data. For example, if you choose k=5, tolerating two faults (m=2) requires only 2/5 = 40 percent overhead (the parity chunks make up just 2/7, or about 29 percent, of the raw capacity). Yet, nothing is free. Erasure coding uses more CPU power for its computations, and while replication can reconstruct from all remaining sources in parallel, erasure coding needs to read multiple chunks over the network to reconstruct the lost ones. Erasure-coded pools are also subject to some functional limitations that are beyond the scope of this article.
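

In Ceph, k and m are captured in an erasure code profile, which is then referenced when an erasure-coded pool is created. A minimal sketch (profile name, pool name and placement group counts are examples):

    # Define a profile with five data chunks and two parity chunks (k=5, m=2)
    ceph osd erasure-code-profile set ec-5-2 k=5 m=2

    # Create an erasure-coded pool that uses this profile
    ceph osd pool create ecpool 128 128 erasure ec-5-2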


But a single Ceph cluster can utilize both replication and various forms of erasure coding in different pools, depending on the needs of each data set. It even allows a faster pool to be designated as a cache tier for a slower pool. A common combination is to use a fast, replicated pool on SSD/NVMe devices in front of a slower but denser erasure-coded pool of hard drives. This strikes a balance between the two approaches, benefiting both the performance and the cost of the cluster.
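

Setting up such a combination uses Ceph's cache tiering commands. The sketch below is hedged: the pool names are placeholders, it assumes a replicated pool named "cache" (for example, on SSDs) already exists, and additional tuning such as hit-set parameters and size limits is omitted. It places the cache pool in writeback mode in front of the erasure-coded pool from the previous example:

    # Attach the fast replicated pool as a cache tier for the erasure-coded pool
    ceph osd tier add ecpool cache

    # Use writeback mode so writes land on the fast tier first
    ceph osd tier cache-mode cache writeback

    # Direct client traffic for ecpool through the cache tier
    ceph osd tier set-overlay ecpool cache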


This article cannot provide an exhaustive treatment of all possible considerations and options. There are many further aspects to take into account, such as power consumption, encryption, compression, the various access modes, specific workloads, file systems, and the different CPU architectures and hardware accelerators available.


If this task now seems insurmountable, there is hope. First, remember you do have the possibility to start from a reference architecture. Second, you do not have to get it perfectly right immediately. This is not a traditional SAN system that locks you into your choices forever!


Prototyping and Adjustments


It is common to start with a prototype to validate and adjust the initial choices. The first production system will then typically be only around five to ten percent of the anticipated "maximum" build. And this is where SDS systems really shine: based on the actual, observed use of the new system, you can continuously refine both the hardware and the software choices. As you grow the system and eventually retire components that have reached the end of their life cycle in your environment, the characteristics of the system continuously evolve as well.


It is not necessary to achieve the impossible and perfectly predict the future: the system grows as needed. If the storage needs do not grow as anticipated, it is always possible to stop at 30 percent of the estimate, or to add 50 percent more servers to the configuration if the service is very successful. Similarly, the mix of components can respond to the performance requirements. All of this is possible while the system remains online. Thus, the system adapts to the changing needs of the business.


If this has piqued your interest, you will be delighted to know that you do not even need real hardware to experiment. A test environment can be hosted perfectly well on virtual machines. So download an evaluation copy of SUSE Enterprise Storage today!




By: Rick Ashford

Planning an Enterprise OpenStack Deployment: Successfully Implementing SUSE OpenStack Cloud in an Enterprise Environment


Author

Rick Ashford has been a Senior Systems Engineer for SUSE since 2008. While he enjoys all things SUSE, he has spent much of the last four years being deeply involved with the SUSE cloud offering and has also enjoyed contributing to the community by presenting tutorials and instructional sessions at several OpenStack Summit conferences. Rick lives in Austin, Texas with his wife and four children.

Introduction


Building your own Infrastructure-as-a-Service (IaaS) private cloud is an exciting venture that many IT departments are exploring right now. Many have been hearing the hype for years and are now ready to start taking those first tentative steps towards a new way of approaching their work. If done properly, an IaaS can completely revolutionize the way a corporation interacts with its IT department.


With that power, however, comes great complexity. No two corporations are alike and, therefore, the configuration that a neighboring business uses for its cloud might not fit your needs, culture or processes. As a result, private clouds, and OpenStack in particular, have evolved into incredibly complex beasts with amazing amounts of customizability and flexibility. Unfortunately, that also means they've invented many new and spectacular ways for you to shoot yourself in the foot. Avoiding that kind of disaster takes careful planning and preparation.


If done correctly, the actual physical implementation and deployment of your cloud should be the shortest part of the entire project. The vast majority of your time should be spent in conference rooms with whiteboards and takeout food, hashing out the details of how this new world is going to look. The cloud project will require cooperation and input from a wide variety of sources, such as the storage team, network team, physical infrastructure management, end users, legal, procurement and more. Without this coordinated effort, users will get frustrated; your project will not be successful; and in all likelihood, the whole thing will be doomed from the start.


This article is intended as a starting point for a SUSE OpenStack Cloud implementation: a guide to help you ask yourself the right questions as you prepare for deployment. In addition, many topics are discussed in greater detail in the SUSE OpenStack Cloud deployment guide. Keep a copy of that deployment guide handy, and reference it often.


As of the time of this writing, the current product version is SUSE OpenStack Cloud 5 (a Juno-based release). Community documentation can be found on the OpenStack site. All SUSE-provided documentation, including the deployment guide, is available here.


Preliminary Considerations


Philosophy of your cloud


Before you begin planning the technical details of your implementation, there are several overarching questions you will want to consider that will directly impact the overall architecture:


  • What problem are you trying to solve by implementing a private Infrastructure-as-a-Service (IaaS) cloud?
  • Who are the users of your cloud? What do they want out of it?
  • What are your business and technical requirements?
  • What are the constraints for this project?
  • What additional resources will you require to have a successful deployment?

Let's consider each of these questions individually.


What problem are you trying to solve?

This question is probably the most vital one to answer before you begin planning your cloud. If you don't know the goal you are trying to accomplish, your likelihood of achieving it is dramatically reduced.


If, for example, your overall goal is to provide a playground for your developers so they will stop annoying IT with constant requests for additional resources, you will make dramatically different decisions than if your overall goal is to streamline your production environment processes.


In the first scenario, you would likely use a single, non-commercial hypervisor such as KVM or Xen. You would also probably be looking to implement a relatively cheap back end for volume storage, and high availability of your control plane might not be a significant concern. You would not likely need a large address space reserved for floating IP addresses to expose your cloud workloads to the outside world.


In the second scenario, a production environment, you will be much more stringent in your requirements. You will likely need multiple hypervisors to accommodate varying virtual environments (for example, production on VMware, developers on the cheaper KVM environment, and Windows workloads on Hyper-V to maximize the efficiency of licensing costs). You will likely be looking for a more reliable storage infrastructure, leveraging a SAN instead of using local disk storage for your volume storage service. You might even have several storage back ends that you need to accommodate. You will likely need to reserve a significant number of IP addresses for exposing these production workloads to the outside world.


Without understanding exactly what you are trying to accomplish with your cloud implementation, you are almost guaranteed to make decisions you will regret, which could result in significant costs (in time and money) to fix. Establishing a correct course here will inform every other decision you make for the better.


Who are the users of your cloud?

This question is similar to the first, but it is still worth considering individually. If the intended users of the cloud are not aligned with the overall goal of the cloud, significant frustration will likely boil up. Knowing who all of the intended users are and what their needs and expectations are can help you head that frustration off before it festers and impacts productivity.


If, for example, your stated goal is to provide a playground for developers, but there is also going to be a significant number of less technical users, you will want to make sure you cater to both user groups. The less technical users will require significantly more documentation of your processes for using the cloud, and formal training may be appropriate to ensure that they don't get frustrated and refuse to use it.


What are the business and technical requirements?

Most enterprise IT environments have specific expectations for uptime, typically in the form of a Service-Level Agreement, or SLA. Stringent SLAs require higher budgets to accommodate higher quality and higher quantities of hardware, networking, and physical infrastructure (power, cooling, disaster-recovery processes, and so forth). In addition, if you work in government, retail or healthcare, you will likely have specific compliance requirements for things like PCI, HIPAA or Common Criteria, and these need to be taken into account as well.


What are the constraints of your project?

Every project is short on something, whether it's manpower, money or time. This falls squarely into the realm of the old adage, "You can pick any two of fast, cheap and high quality. You can't have all three." Oftentimes, ill-informed management will attempt to defy the laws of physics and human nature and require all three, but realistically speaking, that's not going to happen. Understanding the priority of your constraints can help you set management expectations appropriately so that you have achievable success criteria.


What additional resources will you need?

Typically, a fairly small team is given the charge to build out your private IaaS cloud. You will need to plan on getting input and assistance from a wide variety of other teams, such as storage, networking and physical infrastructure. Identifying whose help you will need and when you will need it allows those teams to plan and give your needs the full attention they require.


Hopefully, as you have read through this, you have found a lot of food for thought. This part of our discussion has been about understanding what exactly you are trying to accomplish, as well as what resources you either have or need to acquire in order to be successful. In subsequent articles we will talk in more detail about some of the specific decisions you will need to make as you build your cloud.


Good luck, and happy "clouding!"




By: Thorsten Kukuk

Btrfs and Rollback


Author

Thorsten Kukuk has been a software developer with SUSE for more than 16 years. Currently, he is the Senior Architect for SUSE Linux Enterprise Server. Previously, he was the primary Project Manager for the product for many years. Thorsten has a long history in open source projects.

Introduction


Nearly every system administrator has probably run into this situation: after applying updates or other changes to the system, it no longer comes up after a reboot. Most of the time, this means that the system needs to be recovered with the help of a rescue system or even a backup. Wouldn't it be much better if you only needed to tell grub, "boot the state from before the changes were made"?


Btrfs, the new default file system of SUSE Linux Enterprise 12, has some nice features that can help with this situation: copy-on-write and subvolumes. SUSE has built a solution around these two features that enables a system administrator to boot an older snapshot.


How Does This Work?


A copy-on-write file system does not modify a file on disk; instead, a copy of that file is created and modified while the original file stays intact. Subvolumes on btrfs are not like LVM logical volumes; they are hierarchical, more like directories, and behave like mount points. The root file system is the initial subvolume of btrfs. With Snapper, a tool for btrfs snapshot management, a snapshot can be created. Such snapshots are nothing more than subvolumes that link to the parent subvolume. This creates a "copy" of the root file system with one IOCTL, which is fast and initially needs only a few bytes of additional disk space. Over time, however, the size of the snapshot grows with every modification of the original files. This is also why it is hard to say exactly how much disk space a snapshot really uses.
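

This relationship between subvolumes and snapshots is easy to observe on a SUSE Linux Enterprise Server 12 system. A small, hedged example (the listed subvolumes and snapshot numbers will differ on your system):

    # List the subvolumes of the root file system, including existing snapshots
    btrfs subvolume list /

    # Create a new snapshot of the root file system with Snapper
    snapper create --description "before manual configuration change"

    # The new snapshot shows up as just another (read-only) subvolume
    snapper list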


In a worst-case scenario, this means that in the end, every snapshot needs the same amount of disk space as the original data. But this will most likely only happen when there are major version updates or new service packs of SUSE Linux Enterprise Server.


History


Btrfs and snapshots were introduced with SUSE Linux Enterprise Server 11. At that time, their functionality was limited to file-based rollback, in Snapper terms called "undochange." Since the bootloader was not able to boot from btrfs, /boot needed to be on a separate partition with another file system. This did not allow the kernel to be included in a snapshot or to boot from a snapshot, so users were only able to restore single files, mostly configuration files.


SUSE Linux Enterprise Server 12


With the introduction of SUSE Linux Enterprise Server 12, btrfs became the default file system, and grub2 the only bootloader on all SUSE architectures. Because grub2 is able to boot from btrfs, this eliminates the need for an extra /boot partition and enables the kernel to be included in a snapshot. So now, during the boot process, a system administrator can select an older snapshot and boot into it.


One problem still remains: you cannot create consistent snapshots across partition boundaries. For this reason, all the data that needs to be part of a snapshot has to be on the root file system and not on different disks or partitions.


With SUSE Linux Enterprise Server 11, regular snapshots were created by a cron job. This is no longer the case. Since the default root file system normally does not change often, this would create a long list of snapshots without changes, and the snapshots with changes would be deleted too early. In addition, it would be hard to find the right snapshot in the grub2 boot menu. However, the system administration tools YaST and Zypper create snapshots with every change.


In addition to allowing rollback of the kernel, the benefit of the SUSE Linux Enterprise Server 12 implementation is that creating a snapshot and performing the rollback are fast, "atomic" operations. But there are some drawbacks, too. After a rollback, the hierarchical structure of the subvolumes is "broken." While subvolumes are normally mounted automatically when a btrfs file system is mounted, after a rollback these subvolumes are no longer children of the parent subvolume and are therefore no longer mounted automatically. For this reason, all subvolumes have to be listed in /etc/fstab, as in the example below. Another problem is that you cannot delete snapshots that contain subvolumes. For this reason, new subvolumes should always be created in the same main subvolume, which should otherwise be empty.
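

As a hedged illustration (the UUID is a placeholder, and the @-prefixed subvolume name follows the default SUSE Linux Enterprise Server 12 layout, which may differ on your installation), such an /etc/fstab entry looks like this:

    # Mount the @/var/log subvolume of the btrfs root device explicitly
    UUID=0a1b2c3d-0000-0000-0000-000000000000  /var/log  btrfs  subvol=@/var/log  0 0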


Full System Rollback


There are two "modes" for performing a rollback: reboot "later" and reboot "now." In the reboot "later" case, the system administrator is inside a running system and decides to perform a rollback with the next reboot. The administrator calls "snapper rollback <number>", where <number> is the id of a snapshot; after the next reboot, this snapshot is the new root file system. This is a permanent change.


In the reboot "now" case, the administrator first boots an old snapshot, which he or she selects in the boot menu. The system boots into this snapshot and comes up in read-only mode. This is enough for some simple services and administration tasks. But for a permanent change, "snapper rollback" must also be used. No id is necessary in this case; the current snapshot will be used.


In both cases, a new read-only snapshot of the current root file system is created first. This makes sure that no data is lost. After that, a new read-write snapshot of the old snapshot is created. This is necessary because the old snapshot is read-only; it also allows you to perform the rollback several times. Otherwise, the old snapshot would be overwritten and would no longer be available for a rollback. After this, the read-write copy of the old read-only snapshot becomes the new root file system and is used at every boot.
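

Put together, a typical reboot-"later" rollback looks like the following (the snapshot id 42 is only an example; use an id shown by snapper list on your system):

    # Show the available snapshots and their ids
    snapper list

    # Make snapshot 42 the new root file system at the next boot
    snapper rollback 42

    # Reboot into the rolled-back system
    systemctl reboot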


Grub2 Boot Menu of Snapshots


To identify the correct snapshot, grub2 shows some information about it. These menu entries start with the operating system version and the installed kernel, followed by the date and time of the snapshot and, in addition, information about who created it and whether this is part of a pair of snapshots (one taken before and one after a modification) or a single snapshot. An asterisk at the beginning of the line shows that a snapshot is "important." Snapshots are marked as important if a package influencing the boot process is updated; examples are the kernel, dracut, glibc, systemd and udev packages. The list of packages is configurable in /etc/snapper/zypp-plugin.conf.


Starting with SUSE Linux Enterprise Server 12 Service Pack 1, it will be possible to set your own text for the boot menu entry: 'snapper modify --userdata="bootloader=foo bar" <number>', where <number> is again the id of a snapshot.


Challenges


One challenge for snapshotting the root file system and rolling back is that you need a consistent snapshot, so it should be created in an atomic way, meaning no other modification of the file system takes place at the same time. Btrfs does not support snapshots across partition boundaries, which means a snapshot contains only what is on the root partition and not what resides on other disks or partitions. Subvolumes are excluded from snapshots, too. Another problem is the bootloader. You can have only one bootloader, and if that breaks, you can no longer perform a rollback. The different stages of the bootloader need to match, so the complete bootloader needs to be excluded from the snapshot. By contrast, the grub2 configuration needs to be part of the snapshot because it contains all the information about the kernels. This means that every new version of grub2 needs to be able to read the old configuration files if they are used in snapshots.


Data and Rollback


Another question is what should happen to the data during a rollback. Assume you are running a web shop, and a big order was placed and stored in the database. You don't want to roll back the database and lose the order. On the other hand, there is no guarantee that the old database library is still able to read the database. For SUSE Linux Enterprise Server 12, the decision was made not to roll back certain log files, databases and other files, especially in the /var hierarchy. Since subvolumes are excluded from snapshots, all of this data is placed in subvolumes.


The disadvantage of excluding parts of the file system from a snapshot is that inconsistencies can appear afterwards: for example, if you created a new user after taking the snapshot but before the rollback, the /home/<user> directory exists, but there is no corresponding entry in /etc/passwd; or databases and other software are no longer able to access their data. But because a snapshot is always taken of the old root file system during a rollback, no data is really lost. It is always available in the old root file system and can be copied to the new root file system.


Cleanup of Snapshots


A daily cron job deletes old snapshots. Several cleanup rules are possible for a snapshot; with "snapper list" you can look up which rule is used for a snapshot in the "cleanup" field. If this field is empty, the snapshot will not be deleted automatically. For the root file system, the last ten important and the last ten regular snapshots remain; everything else is removed. This ensures that a lot of small changes don't trigger the deletion of, for example, the snapshot with the last kernel update.


Since the cleanup runs only once a day, you can accumulate many more snapshots in between, depending on how many snapshots per day you create. If timeline snapshots are used (for the other partitions), the first snapshot of the last ten days/months/years is kept.
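

How many snapshots the cleanup job keeps is configurable per Snapper configuration. A hedged excerpt of the relevant variables in /etc/snapper/configs/root (the values shown match the defaults described above; check your installed configuration before changing anything):

    # Enable the "number" cleanup algorithm for this configuration
    NUMBER_CLEANUP="yes"

    # Keep the last ten regular and the last ten important snapshots
    NUMBER_LIMIT="10"
    NUMBER_LIMIT_IMPORTANT="10"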


Snapshots that contain subvolumes cannot be deleted and, thus, will never be deleted automatically; nor will root snapshots created during a rollback be deleted. These snapshots need to be removed manually, after you ensure that they no longer contain any important data that might still be needed.


Conclusion


If the system is set up correctly, you can perform a rollback with btrfs to recover your system and bring it back to a working state very quickly, compared to booting a rescue system and fixing it manually or restoring it from a backup.





By: Naji Almahmoud

SUSE Spotlight: A Conversation with Naji Almahmoud, Senior Director, Global Business Development, SUSE


Author

As Senior Director of Global Business Development at SUSE, Naji Almahmoud is responsible for researching new types of business, products and services, with an emphasis on identifying unmet needs of potential clients, attracting new customers, penetrating existing markets and creating new markets. Previously, he served as Director of the SAP Global Alliance, among other positions. Mr. Almahmoud received the 1999 President Award from the CEO Office.

What does the Business Development function do? What are its main activities?

Business Development's core function is managing and developing strategic technology alliances and ecosystem programs for ISVs (Independent Software Vendors), CSPs (Cloud Service Providers) and server chip makers. Analyst Relations is another important function in the team. The purpose of Business Development is to support SUSE's annual strategic goals and drive new market development.


Our strategic technology Alliance partners include SAP, VMware, Microsoft Azure, Amazon Web Services, Google Cloud Platform and Intel. We are responsible for setting each Alliance partner's strategic direction and developing joint competitive advantages and go-to-market plans. We also collaborate on technology, including development, test, certification, integration, optimization and technical support. For example, SAP and SUSE architects work side by side in a development lab to meet each other's customers' needs and differentiate these products in the marketplace. We also conduct joint marketing and sales activities. For each technology Alliance partner, we have dedicated business and technical resources to manage and nurture these relationships.


In the ISV program, software vendors join our PartnerNet program. We manage the program: creating and applying requirements (through contracts) for partnership and providing benefits, such as access to SUSE software for testing and development and certification of their applications on top of SUSE products. These partners can also interact with our dedicated ISV architects and technical team and get help when they are working toward certification and face a technical challenge. Once their products are certified, they are listed in our SUSE Partner Software Catalog. Our customers consult the catalog to search for certified software on SUSE, so the partners' software gets added visibility. Other marketing help includes developing joint collateral, such as technical whitepapers and best practices and, in some cases, joint participation in events to create awareness.


Another big benefit is training. Through PartnerNet, partners can access full SUSE on-demand courses for beginning-to-advanced administration levels as well as engineering-level and support-level training. Let me add that while we reach out to some software vendors to partner with SUSE, vendors often come to us to certify their applications on our products and to join our ISV program.


Finally, we are responsible for new market development of the latest technology and IT areas where SUSE decides to offer new products. Business Development approaches market development from two sides, and we are proactive in both of these. In the initial phase of decision making, we lead a cross-functional team including Product Management, Engineering and Marketing and make a recommendation on what technology to choose and how to go to market. This is how we selected Ceph as the basis for SUSE Enterprise Storage.


After a decision is made by SUSE management, following the cross-functional team proposal, we are responsible for ecosystem development. We determine which third-party applications will complement and complete our solution, engage with those software vendors to become partners and bring new joint certified or integrated solutions to market. Examples include our OpenStack-based cloud partners and Ceph-based storage partners.


Another core responsibility is the SUSE Public Cloud program, which now encompasses more than 40 CSPs, including alliance-level partners such as Amazon Web Services, Microsoft Azure and the Google Cloud Platform. The major benefit we provide to CSPs is allowing them, under business and technical agreements, to offer SUSE products in their clouds on a pay-per-use basis. This means that instead of buying a one-to-five-year subscription, public cloud customers are simply charged an hourly rate for their workload processing. This helps our partners appeal to their customers on the basis of pay-per-use pricing as well as the great flexibility and scalability that they can provide. Our CSP partners also have access to our software for testing and development. In the case of CSPs that are Alliance partners, we have a dedicated Alliance manager and a special cloud technical team focused on helping them set up infrastructure and update SUSE products on their cloud.


Analyst Relations ensures that industry analysts are briefed on a regular basis about the SUSE strategy, products, services and solutions, as well as our ability to execute in terms of global scale and go-to-market capabilities. We also seek consultation from industry-leading IT analysts on drafted plans and strategies.


What are the objectives of Business Development? How does it fit into the SUSE strategy?

Business Development has two core, high-level objectives. Not surprisingly, one is to help SUSE achieve its annual financial target. Number two is to develop SUSE opportunities for the upcoming years to establish the ground for future growth. Both are achieved through highly focused execution of current business plans with Alliance partners and proactive design of joint new solutions and expansion of partnership footprint to new technologies.


How does Business Development interact with the SUSE Alliances, Independent Hardware Partner (IHV), OEM, System Integrator and Channel programs and with direct sales?

We work hand-in-hand with all of these groups, looking for go-to-market and sales cross-opportunities, executing joint marketing initiatives and helping with the actual sales. For example, we interact a lot with the IHV team. An example of a joint opportunity might be HP reselling SUSE Linux Enterprise Server for SAP Applications on their servers to SAP customers. As you can see, this is a three-partner go-to-market.


We interact on a day-to-day basis with Direct Sales—both client and partner account executives—in areas where SUSE has a joint solution with a partner or the partner resells SUSE (integrated into a product). For example, Business Development has a special SAP Alliance team that supports direct sales and related activities. Our team collaborates with SAP and our joint IHV partners to develop technical tools and assets that SAP's customers want, such as technical white papers and best practices. Also on the technical side, SUSE and SAP constantly review product roadmaps to strengthen joint solutions that provide competitive advantages. We also work with the SUSE Sales Enablement team to reach out to the field and give SUSE sales and partners training on a joint product so they can understand and position it effectively. If Sales needs help to sell, they can always come back to the Business Development SAP team for technical advice and support.


We help channel partners such as resellers through the SUSE PartnerNet program, where they can get marketing and sales help as mentioned previously. Like all of our partners, they can attend SUSE Sales Enablement sessions.


We also work with system integrators (SIs) and global consultants: Wipro, HCL, Atos Origin, Tata Consultancy Services and Infosys, for example. We enable and support their go-to-market activities, working with the Micro Focus CSI (Consulting and System Integration) team.


Why and how are the Business Development activities beneficial to customers? How do they affect them?

At the end of the day, everything we do is to benefit customers. For example, what we do to support and certify VMware software benefits our mutual customers, because they have assurance that the products will work and support each other fully in an optimal way in a production environment.


With some partners—such as MapR, Pivotal, Intel, SAP, Amazon Web Services, Microsoft Azure, and OpenStack-partners—we go further, collaborating to optimize how our products work together for customers. We are capturing customers' needs and delivering the technology and services that meet them. Examples include a) greater automation, improved high availability and faster deployment for SAP HANA software; b) a Linux-based HPC solution on Microsoft Azure leveraging Intel and SUSE technologies; c) production support of SUSE OpenStack with customers running Pivotal Cloud Foundry; d) teaming with MapR to provide support for MapR Enterprise running on SUSE OpenStack Cloud using the MapR Sahara plugin; and e) the availability of SUSE Manager, SUSE Linux Enterprise Server for SAP Applications and Bring Your Own Subscription on Amazon Web Services.


How would you summarize your work in a nutshell?

Business Development is driven by market development and customer demand.


  1. Private and public clouds have consistently been among the top ten technology trends for some years. We established a cloud program and developed the ecosystem to grow SUSE business and strengthen our market position.
  2. Partner and customer demand are captured and translated into solutions with our alliance partners, for example, in SUSE Linux Enterprise Server for SAP Applications and our Bring Your Own Subscriptions program.



By: Kay Tate

Certification Update


Authors

  • Kay Tate is the ISV Programs Manager at SUSE, driving the support of SUSE platforms by ISVs and across key verticals and categories. She has worked with and designed programs for UNIX and Linux ISVs for fifteen years at IBM and, since 2009, at SUSE. Her responsibilities include managing the SUSE Partner Software Catalog, Sales-requested application recruitment, shaping partner initiatives, and streamlining SUSE and PartnerNet processes for ISVs.
  • Marjorie Westerman is a Marketing Writer at SUSE. She edits The SUSE Insider and SUSE News.

YES Certified Hardware


From August 1, 2015 to November 1, 2015, SUSE published YES Certification Bulletins for 229 hardware devices — most of them for network servers, but also a few for workstations and a tape drive. Almost half of these devices were from Hewlett Packard. Other hardware vendors represented included Atos, Cisco, Dell Computing, Fujitsu, H3C Technologies, Hitachi, Huawei Technologies, IBM, Intel, Lenovo, Oracle, Positivo Informatica, SGI and VMware.


To research certified systems, go to the Certified Hardware Partners' Product Catalog, search the system name and type and click on the bulletin number to see the exact configuration that has been certified.


Highlights

Among YES Certifications completed in the period mentioned above, the newest releases—SUSE Linux Enterprise Server 12, SUSE Linux Enterprise Desktop 12 and SUSE Linux Enterprise Server 11 Service Pack 4 (SP4)—account for a little more than half of the certifications. Here's the breakdown:


  • Of the network servers certified, 92 were on SUSE Linux Enterprise Server 11 SP3, 22 were on the recently released SUSE Linux Enterprise Server 11 SP4, 100 were on SUSE Linux Enterprise Server 12 and 3 (from VMware) had no operating system.

  • Five workstations, from Positivo Informatica, were certified, all on SUSE Linux Enterprise Desktop 12.

  • The remaining device was a Fujitsu tape drive, certified on SUSE Linux Enterprise Server 11.



SUSE Partner Software Certifications


With SUSE, customers continue to be able to choose from a large and growing number and variety of certified software packages to meet their needs. Software certifications for SUSE Linux Enterprise Server 12 are increasing in tandem with additional certifications for other versions. To research software certified to run on SUSE products, visit the Partner Software Catalog.


Highlights
  • Couchbase Server 3.1.0 from Couchbase, Inc. Couchbase Server is a NoSQL document database for interactive web applications. It has a flexible data model, is easily scalable, provides consistent high performance and is "always-on," meaning it can serve application data 24 hours a day, 7 days a week.

  • PostgresPURE 2.4 from Splendid Data Nederland B.V. PostgresPURE is a 100-percent open source database product that offers a direct, complete alternative to Oracle. Based on PostgreSQL, as published by the PostgreSQL Global Development Group, it enriches the base component PostgreSQL with additional database tools to create a fully enabled corporate standard platform suitable for large organizations.

  • Family of Data Protector 4.3 products from Repostor. Now available on SUSE Linux Enterprise Server, Repostor Data Protector products protect many flavors of databases using IBM Tivoli Storage Manager. After you access the product family page, click the names of individual products for details.

  • IBM SPSS Statistics Server 23.0.0.0. The IBM SPSS Statistics Server offers the features of SPSS Statistics with faster performance. It can scale from handling the analytical jobs of a single department to jobs for hundreds and even thousands of users across an organization. Processing is centralized, so there is no need to transfer data over the network. This saves time, improves productivity and enhances security.
