SUSE Insider Newsletter


The SUSE Insider is a quarterly posting with the latest tips and tricks, product advancements and industry insights only available to SUSE customer subscribers. If there is a specific topic you would like to have covered, please email Marjorie.Westerman@suse.com.




SUSECon 2015 News—More Learning


SUSECon 2015—in Amsterdam on November 2-6—offers more learning than ever before:


  • 100 breakout sessions with industry leaders
  • 39-plus hours of hands-on technology sessions
  • 20 demo stations where you can talk with SUSE experts
  • An extra day of hands-on training on SUSE OpenStack Cloud and SUSE Linux Enterprise Server 12
  • A free Certified Linux Administrator or Certified Linux Professional exam, included with your SUSECon registration

Sign up at http://www.susecon.com?src=conversations.

SUSE Spotlight: SUSE Partnerships—A Conversation with Michael Miller, Vice President of Global Alliances, Marketing and Product Management


As Vice President of Global Alliances, Marketing and Product Management for SUSE, Michael Miller is responsible for growing the SUSE business globally through key alliances, innovative product and marketing strategies, and global business development initiatives. He has 20 years of experience across a broad range of global leadership roles, including senior management positions in engineering, product management, marketing, sales and business development. Miller applies a practical and results-driven approach to building teams, creating alliances and developing solutions for the enterprise market. He holds an MBA from the University of Baltimore, a Master's in English from Western Washington University and a Bachelor's degree in Literature from Marquette University.

Q. Today I want to focus on just one area of your responsibilities: alliances and partnerships. As a company that provides distribution of an operating system, the industry expects SUSE to partner with major hardware vendors and software vendors. What historically has been our partner focus? Is it changing and why?

Historically, our partner relationships have revolved around our operating system, SUSE Linux Enterprise Server. That's changing now because of two important trends in IT. First, the role of the operating system has been changing dramatically in recent years and will continue to change radically in the future as more and more enterprises incorporate innovations such as cloud technologies and container technologies (like Docker) into their operations.


Second, the role of open source technology is changing, expanding into more types of systems that companies use to operate their enterprises. Years ago, people thought of Linux when they thought of open source. Today, open source extends well beyond the Linux operating system into many other areas like private cloud with the OpenStack project, platform as a service with CloudFoundry, and big data with Hadoop, as well as into previously proprietary areas like storage with Ceph. As a result of these IT trends, our relationships with partners—hardware and software partners, cloud service providers, system integrators and others—are evolving and expanding into more and more open source areas.


Q. What's new with our traditional hardware and software partners?

Let's start with the hardware partners. Many of our traditional partners—like Dell, HP, IBM, Fujitsu, Lenovo, Cisco, Unisys, Hitachi and others—are not just hardware companies. They are rapidly evolving into solution companies, combining hardware and software into integrated systems, appliances or what many refer to as “converged solutions,” in which they focus on adding more value to what they offer their customers. Many hardware partners are becoming cloud providers as well. They are hosting their own public clouds in addition to developing private cloud solutions. IBM is an example: while we work closely with the System z and Power brands on the hardware side, we also partner with IBM's very large software group across a whole range of solutions, as well as partnering around cloud. In response to this shift in the industry, SUSE has evolved our partnership approach with these major vendors. We now collaborate with them to create converged systems and solution-oriented approaches, in addition to continuing the classic model of providing our technology as an OEM for pre-loading or selling with their hardware.


We play in this solution-oriented world very effectively because of the breadth of our open source technology and our expertise in it. We also have deep experience and expertise in co-engineering and collaborating with partners to optimize our joint solutions. A great example of this co-engineering is how we work with Intel to fully expose and optimize the power of their hardware through the operating system integration. So, you can see, it is in our DNA to co-innovate with partners to create integrated, converged solutions that provide added value to customers through increased performance, scalability, reliability, manageability and more.


Another great example is what we did with HP for its ConvergedSystem 900 for SAP HANA. Through a close level of joint engineering and collaboration, HP and SUSE created an industry-leading system with world record-setting benchmarks.


On the software side, for many years the fundamental element of our partnerships was mutual certification. Together, we tested and certified the combination of the independent software vendor (ISV) workloads and the SUSE operating system to ensure that they ran properly and securely together. We also provided joint support for customers running the ISV's workload on our operating system. Now ISVs are evolving as well, looking to bring their software solutions to market through multiple routes, on new platforms and incorporated into more complex solutions. In addition to traditional avenues like direct sales to the enterprise and joint selling with hardware vendors, they are coming to market with integrators and public cloud vendors.


So it's no surprise that our ISV relationships are evolving as well—often merging with our strategic relationships with other partners, like cloud service providers. A great example is our relationship with SAP. For years, our collaboration with SAP has produced industry-leading joint SAP-SUSE solutions and innovation. And now it has also become an important element of our relationship with major cloud service providers around the world like Microsoft Azure, Amazon Web Services, Google Cloud Compute and others. These have become three-way partnerships and an entirely new path to the market.


Q. Is our partner list growing as well as changing, especially with regard to non-traditional partners like public cloud service providers, and how?

Absolutely—our partner list is growing in all categories, including new categories of technology and go-to-market partners. More and more partners are expanding into the enterprise hardware market. For example, Lenovo purchased IBM's x86 server business. Other players that started in a different part of the market are now entering the server market. For example, Cisco, which has a long history in networking and communications technology, is now also in the enterprise server market with its UCS product line.


In many cases, we now have new levels of collaboration with longstanding partners. For example, Huawei, which historically has been a very strong telco hardware partner for SUSE, has entered the x86 server market, and our partnership has expanded together into that area also. Another example is ARM, where we have both new and existing hardware partners beginning to embrace a new architecture for server computing. So, many of our partnerships have expanded into new areas, and some are new hardware partnerships that we haven't had in the past.


On the software side, we're focusing on whole new categories of partnerships. A great example is big data—with partners like Hortonworks, Cloudera and MapR. We have also expanded our partnerships into new technology areas, like platform as a service, for example, with Pivotal and CloudFoundry. We're also partnering with several other companies that are part of the OpenStack ecosystem: ActiveState, Stratus Technologies and many more.


The cloud service provider side has been taking off like crazy. SUSE has a mature cloud service provider program with more than 40 cloud service partners around the world. Some of them are global providers; others are regional providers. Some are general purpose in scope, and some are very focused on a particular approach such as infrastructure as a service. As a result of all of this, our ecosystem of partners is expanding continuously.


Q. How does SUSE choose partners?

Let's start with something that hasn't changed over the years: our partner philosophy or approach. Our tagline for SUSECon—“always open”—says it all. SUSE is always open to partnering with different companies. This openness differentiates us from vendors looking to create a stack approach (offering everything you need themselves) to lock in their customers. We want to provide choice and prevent lock-in—by enabling customers to continue doing business with all the vendors they know and trust, and at the same time benefiting from the latest innovations SUSE has to offer like virtualization choice, containers and Docker, live kernel patching and more—all tested, certified and jointly supported. We're really looking to give our customers enterprise-grade solutions with choice—whether it's our Linux distribution, our OpenStack distribution or our Ceph-based storage solution, because we partner with all the industry-leading vendors that customers are already doing business with. That open partnering approach gives customers both breadth of choice and a selection of best-of-breed technologies tuned to their needs.


Q. Finally, what is our partnering strategy for our new software-defined storage product, SUSE Enterprise Storage?

There are a couple of elements to our strategy here. First, we are actively engaged in combining our distributed storage platform with major hardware vendor offerings. Just like the migration from proprietary UNIX on expensive hardware to open source Linux on commodity hardware, there's a rapidly growing need in the storage industry for a much more economically scalable solution that combines open source software with industry standard hardware. We believe that the combination of SUSE Enterprise Storage (our Ceph-based distributed storage solution) with flexible, industry standard hardware offerings is going to be really attractive for a wide range of use cases. So, part of our partnership strategy for storage is to work closely with the hardware vendors that are also interested in that market opportunity. A lot of customers want to buy that combined solution versus buying the hardware and the software separately and having to put them together themselves.


As I've said previously, our goal in these partnerships is to create a selection of best-of-breed solutions that give customers choice and added value, and provide differentiation in the market through co-engineering and close collaboration. Such joint solutions are a win-win for SUSE and its partners—which is the basis for a strong relationship—and the result is a win for the customer that values flexibility, innovation and choice.



Reboot Reloaded: Patching the Linux Kernel Online


Author: Vojtech Pavlik is director of SUSE Labs, a department within SUSE R&D focusing on core technologies and research. He is one of the creators of kGraft.

1. Why?


The reliance of mankind on computers to control critical activities like stock trading, flight control or nuclear power plant management is ever increasing.


These services must not fail or have outages. Redundant systems have been proposed and implemented to cope with single, unpredictable, independent component failures. Redundant systems composed of components of independent origin are used to prevent systemic errors that cause larger outages.


Hot-swappable components have been designed to allow replacing components without shutting down systems.


Live kernel patching is the software equivalent of hot-swappable physical components. It allows replacing a faulty function inside the kernel without taking the system offline.


2. When?


The three commonly used tiers of change management, from top to bottom:


  • Incident response
  • Emergency change
  • Scheduled change

In an incident, a system could be down or in the midst of being actively exploited, and a corrective action is needed immediately. In an emergency, there is an identified risk that the system might crash or have a known vulnerability to attack, requiring an expedited action without delay. Scheduled changes are typically improvements that can wait until a window when the system is not needed.


Live patching gives a quick solution to incidents and emergencies caused by kernel issues and, in effect, turns the resolution of such an issue into a scheduled change: the full kernel update can wait until the next maintenance window.


This is of utmost importance to customers who need PCI-DSS, SSAE-16, ISO-27001 or other compliance and security certifications that mandate a certain speed of incident response.


3. Who?


One typical use case for live patching is in-memory databases, where the cost of a reboot, and thus the value of avoiding it, is highest. The huge processing and analytics power of an in-memory database comes at a cost: loading multiple terabytes of data into memory after a reboot can take a good part of an hour for even the fastest storage systems. Redundancy and replication can avoid externally visible downtime, but even then the switch from one server to another is usually noticeable. Additionally, using live patching could turn out to be much more cost effective than owning a second, very large server that acts only as a backup.


Mission-critical infrastructure services are another use case. These typically are redundant, and the goal is to keep them fully redundant at all times. The redundancy is there to cope with failures. It is not a tool to be used by administrators routinely for introducing changes. Live patching can help by allowing IT to apply fixes without having to go through a reduced redundancy cycle.


Simulations and HPC (high performance computing) calculations with terabytes of data in flight and spread over thousands of systems often cannot afford to stop and save all that data to storage; nor is a rolling reboot of the whole HPC cluster advisable. Live patching can help to keep the calculation going if a bug in the kernel is causing instability in the cluster.


Massive deployments that a cloud provider or an online service would use present a similar case. Live patching helps save on update costs, allowing IT to apply fixes in seconds rather than hours or days to a large farm of servers.


4. What?


The SUSE Live Patching technology is called “kGraft.” It was designed to be fast, with no measurable interruption of service, and to be easy and transparent to use. Simply installing a kGraft RPM package patches the kernel; upgrading the RPM package to a newer version updates the kernel to the next patch level; and downgrading the RPM package downgrades the patch level. In all cases, kGraft patches are applied live, in memory, yet they persist across reboots: upon reboot, the kernel is patched in memory early in the boot process, producing a state identical to the one achieved through live patching.


To achieve this identical state, however, certain constraints had to be put on what kGraft can do. Most importantly, the scope of patches that will be available as live patches is limited to CVE vulnerabilities rated at CVSS level 6 and higher, and to bugs that could cause data corruption or severe system instability.


In addition, the fixes must be small in scope, replacing a limited number of kernel functions. This rules out whole-kernel upgrades using this method.


kGraft is available as SUSE Linux Enterprise Live Patching 12, a full-service offering with maintenance and support, providing live patch streams that allow IT to avoid reboots entirely for up to 12 months at a stretch.


5. How?


Let's look at how kGraft works under the hood.


Basically, kGraft puts a "detour" sign (a CALL instruction) into a reserved space at the beginning of a function that contains a bug. This redirects the code flow to ftrace, an infrastructure for kernel tracing, which in turn calls into kGraft. Then kGraft decides which replacement function should be called instead. This is much more reliable than changing all the call sites that call the function so that they call the new one: there can be thousands inside the kernel, and identifying them all is a tough, if not impossible, task, particularly given that the Linux kernel has a partially object-oriented architecture and the address of the affected function may be stored in kernel data.


6. Creating Patches


There are two fundamental ways to create live patches: manual and automated. Automated approaches save effort but tend to make patches larger than required and very hard to review for correctness. In any case, semantic analysis of the patch must be done by a human, which mostly negates the effort saved by automating patch generation.


kGraft provides tools for automation. However, experience has shown that creating patches manually produces higher-quality patches that can be fully and independently reviewed in source form and proven to do exactly what they are intended to do.


Since kGraft replaces functions inside the kernel, a starting point is to identify which functions need to be replaced. This can be easily seen from the source code changes that need to be applied.


A shortened example of a kGraft patch looks like this:



#include <linux/module.h>
#include <linux/kgraft.h>

static bool kgr_new_capable(int cap)
{
	printk(KERN_DEBUG "we added a printk to capable()\n");
	return ns_capable(&init_user_ns, cap);
}

static struct kgr_patch patch = {
	.name = "sample_kgraft_patch",
	.owner = THIS_MODULE,
	.patches = { KGR_PATCH(capable, kgr_new_capable, true),
		     KGR_PATCH_END }
};

static int __init kgr_patcher_init(void)
{
	return kgr_patch_kernel(&patch);
}

static void __exit kgr_patcher_cleanup(void)
{
	kgr_patch_remove(&patch);
}

module_init(kgr_patcher_init);
module_exit(kgr_patcher_cleanup);

MODULE_LICENSE("GPL");

It starts with including the required header files and then defines the following:


  • A new version of a kernel function
  • A structure containing the description of the patch, as well as a list of functions to replace
  • The steps to initialize and clean up the module that uses the kGraft infrastructure to apply and remove the patch upon insertion and removal of the module into and from the Linux kernel

7. Caveats in Patch Creation


There are a number of stumbling blocks that a patch author―in the case of SUSE Linux Enterprise Live Patching, a SUSE developer―must be mindful of when creating a patch.


The first, very basic, stumbling block is inlining. A C compiler can decide that a certain function is small enough that, instead of being called, it is worth embedding whole into the calling function. This is called inlining. If the inlined function contains a bug, the bug is replicated into every function it has been inlined into. This isn't visible in the C source; it is purely an internal compiler decision. All the affected functions now need replacing, not just the original. There is a solution: the DWARF debug information that is built and archived together with the kernel records the compiler's inlining decisions. The patch author can use it to expand the list of functions that the patch must replace.
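As an illustration of how that archived DWARF data can be queried, here is a minimal Python sketch using the pyelftools library. It lists the functions into which a given function has been inlined; the kernel image path and function name are illustrative, and this is not the actual kGraft tooling:

# Sketch: list the functions a given kernel function was inlined into,
# using DWARF data from an unstripped kernel image. Assumes the
# pyelftools library; the path and function name are illustrative only.
from elftools.elf.elffile import ELFFile

TARGET = "capable"   # the function we intend to live patch

def die_name(die):
    attr = die.attributes.get("DW_AT_name")
    return attr.value.decode() if attr else None

with open("vmlinux-debug", "rb") as f:
    dwarf = ELFFile(f).get_dwarf_info()
    for cu in dwarf.iter_CUs():
        for die in cu.iter_DIEs():
            if die.tag != "DW_TAG_inlined_subroutine":
                continue
            # The abstract origin points at the function that was inlined.
            origin = die.get_DIE_from_attribute("DW_AT_abstract_origin")
            if die_name(origin) != TARGET:
                continue
            # Walk up to the enclosing subprogram; it contains a copy of
            # the bug and must be replaced by the live patch as well.
            parent = die.get_parent()
            while parent is not None and parent.tag != "DW_TAG_subprogram":
                parent = parent.get_parent()
            if parent is not None:
                print(die_name(parent))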


Next, there can be unexported symbols. These are symbols used within a kernel object that aren't available outside of its scope for linking. Calling such a function directly from a patch is thus impossible and requires a trick: using the kallsyms infrastructure of the Linux kernel, it is possible to obtain the addresses of all symbols, including unexported ones. Such functions can then be called via those addresses. For example:


#include <linux/kallsyms.h>

/* Prototype matching the unexported function (assumed for this example). */
typedef int (*static_fn_proto)(void);
static static_fn_proto kgr_orig_static_fn;

int patched_fn(void)
{
	return kgr_orig_static_fn();
}

static int __init kgr_patcher_init(void)
{
	/* Resolve the unexported symbol by address at module load time. */
	kgr_orig_static_fn =
		(static_fn_proto)kallsyms_lookup_name("static_fn");
	if (!kgr_orig_static_fn) {
		pr_err("kgr: function %s not resolved\n", "static_fn");
		return -ENOENT;
	}
	/* ... continue with kgr_patch_kernel(&patch) as in the previous example ... */
}


IPA-SRA, or interprocedural scalar replacement of aggregates, is a feature that is as dangerous as its name sounds. It's a compiler optimization (developed at SUSE) that gives a significant performance boost, but it is also a disaster for patching. It can modify a CALL instruction into a JMP if the CALL is the last statement of a function. It can transform arguments passed by reference into arguments passed by value if the value is never changed, and it can create multiple variants of a function with fewer arguments, when assuming a specific constant value for a removed argument allows a significant reduction of the function. Fortunately, this is all recorded in DWARF, just like inlining, and it only results in more effort for the patch author.


8. Patching in Detail


As mentioned earlier, kGraft uses the ftrace framework for call redirection. Ftrace uses 'gcc -pg -mfentry' to generate calls to __fentry__() at the beginning of every function, replacing all those calls with "NOP" instructions at boot and reserving space for call redirection in the future. When required, the "NOP" is automatically replaced with a "CALL" to ftrace. kGraft then registers a tracer with ftrace, taking control when a redirection is needed. And that's it: a function call is redirected.


[Figure: function entry code before patching and after patching]

GCC's "-mfentry" argument is unique to the x86-64 architecture. However, similar functionality is offered by "-mprofile-kernel" on the POWER64 architecture, or by "-mhotpatch" on s390x. Supporting ftrace and, by extension, kGraft is thus possible across all major architectures, including AArch64 (ARM64).


9. The Final Hurdle


Using ftrace, it's fairly straightforward to redirect a single function to a new version. But what happens when multiple functions need to be changed simultaneously because they depend on each other? The dependency can be in the form of a changed number or types of arguments, a changed return type or even a semantic change not covered by programming language syntax. In this case, we need a consistency model. kGraft uses a consistency model called "leave kernel / switch thread." Its main virtue is no interruption of service and no impact on the running system whatsoever.


In kGraft we want to avoid calling a new function from an old one and vice versa: if the function prototype has changed, this would cause a system crash. We achieve this by remembering a "universe" flag for each thread of execution, such as interrupts, user threads or kernel threads. Only when a thread reaches a safe point, where we know that no kernel function is being executed by that thread, can we switch the universe flag, after which the thread starts executing new functions.


[Figure: universe flagging as threads pass their safe points]

This safe point is the end of interrupt for interrupts, kernel exit/entry for userspace threads and the so-called freezer for kernel threads. After applying a patch, threads migrate one by one to the new universe as they pass through their respective safe points. No stopping of anything is needed, and once everyone is in the new universe, kGraft declares patching complete.


But what if a thread never does anything and never passes a safe point? We call these threads "eternal sleepers." They might be server daemons waiting for a request that never comes, gettys on consoles where no one ever logs in, or daemons that handle situations that never arise. They just wait for their cue and sleep inside the kernel forever.


Patching cannot be declared complete until even these threads are moved over to the new universe. kGraft has to wake them up. This is done by sending them a signal, "SIGKGRAFT." This special signal wakes up the thread and causes it to attempt to exit the kernel to handle the signal, thus passing a safe point. At the safe point, kGraft catches the signal and returns the thread back to the kernel. The sleeping userspace application never notices, its thread is safely migrated, and success can be declared.


Many other consistency models are also being proposed and implemented. One is the Ksplice consistency model (now categorized as "leave patched set / switch kernel"), which achieves consistency simply by stopping the whole system for patching. Stopping isn't enough, though. After stopping, every thread needs to be checked to determine whether it is executing any of the patched functions. If it is, the kernel is resumed and stopped later to try again. This model is as safe as kGraft's, but could cause up to 40ms of interruption of service for each patch, and fails if eternal sleepers are in any of the patched functions.


10. Community / Upstream


SUSE is a community player. We are proud that all of the kernel work we do is shared with the Linux developer community. SUSE has been working to get kGraft into the upstream Linux kernel since publishing it in 2014. The release of the kGraft technology was followed by the release of kpatch by Red Hat a few weeks later; kpatch is mostly based on the Ksplice model.


Because there are two independent implementations of live patching, SUSE and Red Hat engineers are working to create a joint project, now called KLP (Kernel Live Patching), to be included in the upstream kernel. It uses ideas from both implementations and has been merged into upstream kernel version 4.0. Including live patching was the major reason for increasing the kernel major version to 4. The implementation is very basic at the moment, and SUSE and Red Hat are working together to extend it until it can fully replace kGraft and kpatch.


When the joint project is complete, live patching will become a standard technology for Linux users.




By: Ralf Bueker

Data Center Automation with SUSE Manager


Author: Since 2010, Ralf Bueker has been working for SUSE as a Dedicated Support Engineer (DSE), at Bundesagentur für Arbeit, the German Federal Employment Agency. Previously, he worked at Novell as a Support Engineer in different knowledge teams. These included working with the Remote Support team as a DSE at Star Alliance from 2005 to 2010 and with the Major Accounts team, starting in 2002, and working with Airbus, K&S, BRZ and other companies.

The Customer


A governmental organization based in Nuremberg, the Federal Employment Agency (Bundesagentur für Arbeit or BA) is the largest provider of labor market services in Germany. With more than 800 branch offices nationwide, it employs approximately 100,000 people with an IT staff of about 2,100 internal and external administrators and engineers who are responsible for around 100 different customized applications.


The Task


SUSE was engaged by BA for the DCA (Data Center Automation) Project to provision a functional server from a web front end. The plan was to start the entire process with a minimum of input and then execute it in full automatically. The web front end would offer a Windows Server and a SUSE Linux Enterprise Server from the same interface. This article looks at the Linux part of this project.


The prototype for automatic installation was a standard Weblogic server on SUSE Linux Enterprise Server, with the Weblogic server provisioned on a VMware ESX Server farm. Because no PXE network was available in the farm, the server needed to be created and installed automatically by other means.


Because the new servers were provisioned in a productive live environment, they had to seamlessly match the current admin/update/monitoring infrastructure. This included, among other components, an Active Directory connection, MS SCOM (Microsoft System Center Operations Manager) monitoring and instant DNS (Domain Name Service) resolution.


The Data Center Automation project was obliged to follow the ITIL¹ process to obtain all necessary resources (such as IP addresses, server names and virtual machines) for the installation. This process also included the creation of changes. It was a challenge to find (and get access granted to) all of the necessary APIs. The clear and complete documentation (thanks to ITIL) of the installation process was helpful.


For BA, the solution is the first step in creating a Platform-as-a-Service (PaaS) cloud service within the existing architecture. The installation in an existing environment led to some interesting observations on timing. The new, automatic process needed only 20 to 30 minutes for server provisioning. (The old, semi-automatic approach needed three to four weeks.) The faster pace led to some issues with cache and database updates (such as DNS or AD), especially in testing, where machines with the same names and IPs were created and deleted regularly. In the beginning, the project team needed to allocate separate days for server creation and for server deletion to overcome these caching issues; later, we deployed some ad-hoc workarounds. The final aim, however, is to create a cloud infrastructure with SUSE OpenStack Cloud.


The Process


[Figure: the SUSE Manager provisioning process]

The following description explains the role of SUSE Manager in the process. The necessary steps were implemented by a collection of scripts running on the SUSE Manager Server, referred to as the “DCA Program” or just the “DCA” (Data Center Automation).



¹ “ITIL, formerly known as the Information Technology Infrastructure Library, is a set of practices for IT service management (ITSM) that focuses on aligning IT services with the needs of business.” Wikipedia, http://en.wikipedia.org/wiki/ITIL

SUSE Manager Preparation


For an automated distribution, we executed the following steps on the SUSE Manager Server:


  • Create distribution // An automatic installation cannot be performed without a distribution.
  • Create profile(s) // Including basic hardware and partitioning information
  • Create configuration channels // All server configuration is performed by SUSE Manager Configuration Channels
  • Create software channel // Software (SUSE and customized) for the automatically installed servers
  • Create activation key(s)
  • Edit /etc/cobbler/settings and set redhat_management_permissive: 1 // Allows login to the cobbler API

XML and Schema


For the server provisioning process, several pieces of data were needed:


  • Customization data for the AutoYaST installation (IP address/es, GW, DNS Server, and so forth)
  • Specific data for server configuration (server class, security class, environment)
  • Data for creating the virtual machine on ESX (VLAN, disk size, VMware Cluster, and so forth)
  • Customer internal process data (object ID, change number, and so forth)
  • The command (Do we want to install or delete a server; do we want an ISO installation or a template-based installation?)

Parameters were transferred in an XML file. To check data availability and the validity of the parameters, we created an XML schema against which incoming XML data were verified. The provisioning process starts when an XML file containing all the necessary data is copied to an input folder on the DCA machine. A cron process checks the input folder every 30 seconds; if it finds new files, it starts processing them.


The first step is to validate the data against the schema, as in the sketch below. If the validation fails, the installation is canceled, and a message is sent to the back end. If the file is valid, we proceed with the creation of a new profile for this installation in SUSE Manager.
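A minimal sketch of such a validation step, using Python's lxml library (the file names are illustrative, not the actual DCA file names):

# Sketch: validate an incoming DCA request file against the XML schema.
# Assumes the lxml library; file names are illustrative only.
from lxml import etree

schema = etree.XMLSchema(etree.parse("dca_schema.xsd"))

def validate_request(path):
    doc = etree.parse(path)
    if schema.validate(doc):
        return True, None
    # schema.error_log records why validation failed; this can be
    # reported back to the calling web service.
    return False, str(schema.error_log)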


Profile Creation


The basis for creating the profile is a standard minimal AutoYaST file for a 1 NIC / 1 HD server with standard partitioning. For a different hardware/partitioning configuration, different AutoYaST files must be created. The variable parameters needed for the server installation, such as server name, IP address, and so forth, are referenced as variables in the AutoYaST file.


The only part to add to the minimal AutoYaST file is an init script to bootstrap the new server for SUSE Manager.


SUSE Manager Configuration Channels perform all other configuration tasks on the server later.


Init script:
<scripts>
<init-scripts config:type="list">
<script>
<filename>bootstrap.sh</filename>
<interpreter>shell</interpreter>
<source>
<![CDATA[
 #!/bin/bash
 curl -Sks http://<IP_SUSE_MANAGER_SERVER>/pub/bootstrap/bootstrap_dca.sh | /bin/bash
 /root/install/install.sh
 exit 0
]]>
</source>
</script>
</init-scripts>
</scripts>

Comments:

  • The activation key should be configured for "Configuration File Deployment," so configuration files are copied to the server at bootstrap.
  • At this point there is no name resolution. Therefore, you need to provide the SUSE Manager IP address in the curl line.
  • Executable files (<128kb) are provided in the config channels. They can be automatically installed with an install script called during installation.
  • The bootstrap file must be chosen according to the server class (see below).

SUSE Manager Profile


The AutoYaST files are stored under /var/lib/rhn/kickstarts/upload/.


The minimal AutoYaST template file, with variables for the dynamic values, is stored there. We create a new AutoYaST file for the current installation, with the variables filled in by simple text processing from the XML input.
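That text processing can be as simple as placeholder substitution. Here is a minimal sketch; the @@VARIABLE@@ placeholder syntax, file names and values are assumptions for illustration, not the project's actual convention:

# Sketch: fill the AutoYaST template with values taken from the input
# XML. Placeholder syntax, paths and values are illustrative only.
def render_autoyast(template_path, out_path, values):
    with open(template_path) as f:
        text = f.read()
    for key, val in values.items():
        # e.g. replaces @@HOSTNAME@@ with "server1"
        text = text.replace("@@%s@@" % key.upper(), val)
    with open(out_path, "w") as f:
        f.write(text)

render_autoyast("/var/lib/rhn/kickstarts/upload/minimal_autoyast.xml",
                "/var/lib/rhn/kickstarts/upload/dca_server1.xml",
                {"hostname": "server1",
                 "hostip": "10.0.0.10",
                 "gateway": "10.0.0.1"})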


Other specific values, like the distribution or the kernel parameters, are stored in /var/lib/cobbler/config/profiles.d/<servername>:<org_nummer>:<org_name>.json. Both of these together constitute the profile.


Input data for this must also be evaluated from the input XML file. To create the JSON file, use the SUSE Manager API, which can be accessed via:

server = xmlrpclib.Server("http://127.0.0.1/cobbler_api") # DCA scripts run on the SUSE Manager Server itself
token = server.login(login, password)


Then the new profile is created.


profile_id = server.new_profile(token) # Creates the new profile in SUSE Manager

Next, the profile is customized according to the XML data:


server.modify_profile(profile_id, 'name', bcpname, token) # Name of the profile (that is, "DCA_<servername>:<org_nummer>:<org_name>")


A distribution is added to the profile:
server.modify_profile(profile_id, 'distro', bcpdistro, token) # SUSE Manager Distribution used for the profile (that is, SUSE Linux Enterprise Server 11 Service Pack 3)
server.modify_profile(profile_id, 'kopts', bcpkopts, token) # Kernel Options (something like 'netmask=255.255.255.0 gateway=<gw_ip> hostip=<host_ip> hostname=<host_name>')

server.modify_profile(profile_id, 'kickstart', bcpkickstart, token) # base AutoYaST file, created above for the profile, is added.


This profile is visible in the SUSE Manager GUI but cannot be edited there. Editing is not necessary, because the profile is deleted again after the installation has finished.


The Kernel Options are necessary to have network connectivity right after the start and to find the SUSE Manager Server.


Because we want to boot from CD, we need to create an appropriate boot ISO.


ISO Creation


SUSE Manager Server provides a cobbler CLI, which enables the creation of an ISO image for a given profile via the command line.


(cobbler buildiso --help provides the necessary information.)


There is also a Python module to access cobbler functionality on the SUSE Manager Server. (pydoc cobbler.api shows the associated documentation.)


So far, however, the cobbler XML-RPC API provides no command to create an ISO image.
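Until such an API call exists, the ISO can be built by shelling out to the CLI. A minimal Python sketch follows; the profile name and output path are illustrative:

# Sketch: build the boot ISO for one profile via the cobbler CLI,
# since the XML-RPC API offers no equivalent call. The profile name
# and output path are illustrative only.
import subprocess

def build_iso(profile, iso_path):
    subprocess.check_call([
        "cobbler", "buildiso",
        "--profiles=%s" % profile,
        "--iso=%s" % iso_path,
    ])

build_iso("DCA_server1:1:org", "/srv/isos/dca_server1.iso")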


Build ISO Template


Unfortunately, we cannot easily use cobbler to create the boot ISO for the new profile, because cobbler uses a template for the boot menu. The boot menu template defaults to booting from the local disk, but we want to boot into our new profile.


Therefore, we have to change the template file /etc/cobbler/iso/buildiso.template. It should look like this:


DEFAULT MENU
PROMPT 0
MENU TITLE DCA Boot Menu
TIMEOUT 10
TOTALTIMEOUT 600
ONTIMEOUT <our_new_profiles>

LABEL local
  MENU LABEL (local)
  MENU DEFAULT
  KERNEL chain.c32
  APPEND hd0 0

(It is a good idea to restore the original file after creating the boot ISO for the new server.)


Making the ISO Image Accessible for Virtualization


VMware ESX allows virtual machines to boot from an ISO image stored in a folder accessible to the VMware ESX server. For this purpose, we mount a directory that the VMware ESX server can use and copy the ISO image there.


The ISO image does not necessarily need to be booted on VMware vCenter. It can be booted from any infrastructure where an ISO can be mounted, such as Xen, KVM or the iLO of a physical machine.


VMware vCenter Back End


VMware vCenter provides an API for its services. For DCA, this API has been published as a web service and partially customized. The customization was necessary to add some additional internal data to the installation process. Sending XML to the web service triggers the requested function in the API, which provides the necessary information or performs the requested tasks.


The DCA program queries the web-service-wrapped API for the data needed to create the virtual machine, that is, for LUNs with enough storage, the necessary VLANs, and so forth. The response and the data from the input XML are merged into a new XML file, which is then sent to the vCenter to create the new virtual machine. As part of the return value from the vCenter API, we receive the MAC address(es) of the new virtual machine. The machine is provisioned switched off, because machines with multiple NICs might need customized udev rules in the profile; this is necessary to ensure that the NICs are attached to the correct VLANs. With the next command, the virtual machine is switched on. It finds the ISO image and starts the installation of the new server.


Server Configuration


The SUSE Manager Configuration Channels can perform the server configuration completely. The channels can be implemented in modular form, so that you can assemble server classes with specific channel collections.


You might set up the following channels for a modular configuration:


  1. Basic configuration // This is the configuration that is applied on all servers

  2. Task-specific configuration
    • Web server specific config channel
    • DB server specific
    • GUI server specific

  3. Environment specific configuration
    • Production specific
    • Staging specific
    • Test specific

  4. Security class specific information
    • High
    • Medium
    • Low

  5. Special functions
    • Mount points
    • Special users
    • And so forth

The necessary input for the server configuration is provided by a "server class" parameter and an "environment" parameter in the input XML file. The configuration channels are assigned to an activation key accordingly. In the process, different configurations are mapped using different bootstrap files and activation keys, as in the sketch below.
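A minimal sketch of such a mapping (the class names, activation keys and file paths are purely illustrative, not the values used at BA):

# Sketch: map the "server class" and "environment" parameters from the
# input XML to an activation key and a bootstrap file. All names and
# keys are illustrative only.
ACTIVATION_KEYS = {
    ("web", "production"): "1-dca-web-prod",
    ("web", "test"):       "1-dca-web-test",
    ("db",  "production"): "1-dca-db-prod",
}

def pick_bootstrap(server_class, environment):
    key = ACTIVATION_KEYS[(server_class, environment)]
    bootstrap = "/pub/bootstrap/bootstrap_%s_%s.sh" % (server_class,
                                                       environment)
    return key, bootstrap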


Reporting the Installation Status


The server installation steps are monitored, and intermediate data is reported back to the calling web service.


The DCA service provides the following status information:


  1. File received (Found an XML)
  2. ISO image created
  3. Virtual machine created
  4. Virtual machine can be pinged
  5. Registered in SUSE Manager: queries SUSE Manager to see whether the server is already registered. If it is, the DCA gets the SUSE Manager System ID.

To report the subsequent installation states, we need information from the newly created machine itself. To obtain it, we use another great function of SUSE Manager: remote script execution.


Executing Remote Scripts


After the server has been registered to SUSE Manager, we can execute scripts remotely.


We can execute a script via the API call:


scriptid = client.system.scheduleScriptRun(key, sumaid, "root", "root", timeout, script, now)


key is the authentication key from the SUSE Manager API.


sumaid is the SUSE Manager ID from the server you created.


timeout is the length of time during which we try to get a result.


script is the script we want to execute (basically, a string like "#!/bin/sh \n ls /root/install/inst_status.txt"), and the rest is obvious :-).


The return value can be checked via:


scriptresult = client.system.getScriptResults(key, scriptid)


where scriptid comes from the call above. The return value scriptresult is a dictionary that includes the script return code and output (sometimes base64 encoded—see the API reference). This is how we verify certain stages of the installation. For example, we know that the Configuration Channels have not been applied properly if “ls” from a remote script does not come back with a positive result.
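Putting the two calls together, a status check might look like the following minimal sketch. The connection URL, credentials, system ID and polling interval are assumptions for illustration; the real DCA scripts differ:

# Sketch: run a check script on the new server via SUSE Manager and
# poll for the result. URL, credentials and system ID are illustrative.
import time
import datetime
import xmlrpclib

client = xmlrpclib.Server("http://suse-manager.example.com/rpc/api")
key = client.auth.login("dca_user", "secret")

sumaid = 1000010001                                   # system ID of the new server
now = xmlrpclib.DateTime(datetime.datetime.utcnow())  # earliest execution time
script = "#!/bin/sh\nls /root/install/inst_status.txt\n"

scriptid = client.system.scheduleScriptRun(key, sumaid, "root", "root",
                                           600, script, now)

# Poll until the server has checked in and executed the script.
results = client.system.getScriptResults(key, scriptid)
while not results:
    time.sleep(30)
    results = client.system.getScriptResults(key, scriptid)
print(results)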


However, there is just one problem. A server usually checks in just once every hour, but we want our scripts to be executed as soon as possible. So, it would be great if we could issue an osa-ping via the API. Unfortunately, this is not possible yet. If you want the workaround, you can contact me (rbueker@suse.de). Meanwhile, you can sponsor https://fate.suse.com/317938.


Monitoring the Completion of the Installation


We can now identify the following steps and report the result back to the calling procedure:


  1. Wait For Bootstrap End: executes a remote script to see if a certain file is already present on the server.
  2. Config Done: uses the same mechanism to see if all configuration from the channels is complete.
  3. Check Config Log: parses the remote /var/adm/autoinstall/logs/bootstrap.sh.log for errors.
  4. Last Boot: reboots the new server.
  5. Final Up: issues a "ping" to see if the server is up and calls client.system.getDmi(key, sumaid) to verify that the server has properly registered with SUSE Manager.
  6. Remote Cleanup: removes all remains of the installation from the server.

When this is done, we send a success message to the calling web service, and the new system is available.


This all might seem complicated, but, in fact, the DCA Program has run successfully for almost 1 1/2 years now, and we have installed more than 400 SUSE Linux Enterprise Servers this way.





By: Kay Tate

Certification Update


Author

  • Kay Tate is the ISV Programs Manager at SUSE, driving the support of SUSE platforms by ISVs and across key verticals and categories. She has worked with and designed programs for UNIX and Linux ISVs for fifteen years at IBM and, since 2009, at SUSE. Her responsibilities include managing the SUSE Partner Software Catalog, Sales-requested application recruitment, shaping partner initiatives, and streamlining SUSE and PartnerNet processes for ISVs.

The dedicated hardware and software certification teams from SUSE have been hard at work ensuring that you get the support for the applications you need and for the systems you want to run them on.


YES Certified Hardware


From February 1, 2015, through May 18, 2015, SUSE YES Certified 474 hardware systems, a combination of network servers and workstations. These certifications included systems from Dell, Ericsson, Fujitsu, Hewlett-Packard, Hitachi, Huawei Technologies, IBM, Inspur, Intel, Lenovo, NCR, Oracle, Positivo Informatica, Sugon Information Industries, Toshiba, Unisys, VMware and Wincor Nixdorf.


To find out which systems—and which configurations of those systems—have been YES Certified, go to the Certified Hardware Partners' Product Catalog, search the systems and click the bulletin number.


Highlights

Here are some trends and facts that stand out among YES Certifications completed in the period mentioned above:


  • Workstations accounted for more than 180 of the YES Certified systems during this period. The majority of these systems (156) were Hewlett-Packard Z440, Z640 and Z840 workstations, and were certified for SUSE Linux Enterprise Desktop 12.
  • While the majority of server YES Certifications were for SUSE Linux Enterprise Server 12, 129 servers were newly certified on SUSE Linux Enterprise Server 11 Service Pack 3.
  • Lenovo certified more servers (80) during this period than any other hardware vendor. These consisted mostly of x3750 X6, x3850 X6 and x3950 X6 systems.
  • Other vendors certifying many servers were Fujitsu (49—PRIMERGY RX2560 M1 and S7 systems); Unisys (45—Forward! by Unisys, ES7000 and ES3000 systems); and SGI (27—SGI UV30, UV200, Altix UV100 & UV1000, Rackable C2018 & C2112 and Infinite Storage systems).

To research specific systems, go to the Certified Hardware Partners' Product Catalog.


SUSE Partner Software


SUSE has increased its investment in developing and managing ISV relations, driving business and strategic technical relationships with ISVs that support SUSE market initiatives and give our customers a choice of best-of-breed solutions to meet their IT and business needs.


Highlights

Examples of software from new and longstanding partners recently tested and validated for interoperability with SUSE Linux Enterprise 12 include:


  • SEP sesam 4.4. This hybrid backup software from SEP AG performs application backups consistently across platforms with multiple, simultaneous backup streams for optimal transfer rates (exceeding multiple TB per hour) and the most efficient backups. It can back up to anywhere, onsite or offsite, including any remote location, locally owned physical location, cloud or privately hosted facility. For more information, click here.
  • Storix System Backup Administrator 8.2. A centralized, full-system backup solution for virtual or physical servers from Storix Software, Storix System Backup Administrator provides Adaptable System Recovery (ASR). With ASR, a file-based backup of the entire Linux system is created along with customizable recovery media. Unlike with image-based backups, you can recover the entire Linux system to dissimilar hardware or virtual environments, as well as perform individual file restores from a full system backup. For more information, click here.
  • Several IBM Rational products, including Application Developer for WebSphere Software, Application Developer Standard Edition for WebSphere Software, Build Forge, License Key Server, Performance Test Server, Test Virtualization Server and Test Workbench. As one of the five brands within IBM Software Group, Rational provides products, services, and guidance for software and systems development and delivery, covering the entire project lifecycle from design to deployment. For more information on these Rational products currently supported by SUSE Linux Enterprise 12, click here.

For a complete listing of all certified applications, visit the Partner Software Catalog.

