SUSE Linux Enterprise Server

How to Set Up a Multi-PXE Installation Server

This SUSE Best Practices document explains how to set up a multi-architecture PXE environment for the installation of the SUSE Linux Enterprise Server operating system on x86_64 and ARMv8 platforms with both BIOS and EFI.

Components that should usually be hosted on the installation server are also covered. This helps to improve deployment times in multi-architecture environments.

Author: David Byte, Senior Technology Strategist, SUSE
Publication Date: September 25, 2018

1 System Requirements

For a reasonable install server, you need:

  • 200 GB drive space

  • 5 GB RAM

  • Two network connections

The relatively large amount of drive space is needed if your installation server will also serve as your Subscription Management Tool (SMT) server. In addition to hosting the installation repositories, your deployment will use the SMT server on an ongoing basis to mirror updates for the architectures in question.

2 Installing the Operating System

First, install SUSE Linux Enterprise Server 12 SP2 on a virtual or a physical system.

On the registration screen, click the Network Configuration button at the upper right and configure the network. You can use two different interfaces: one for the public network with access to the Internet, and one for the cluster network. Deploying using the cluster network has several advantages:

  • It is a private network for the storage systems only and thus there will be no conflict with other DHCP servers or PXE servers

  • Generally it is faster than the public network

Use the Installation Settings screen to review and change several proposed installation settings. Clicking Software opens the Software Selection and System Tasks screen, where you can change the software selection by selecting or deselecting patterns.

The default scope of software includes the base system and X Window with the GNOME desktop. Deselect X Window with the GNOME desktop. For this scenario, you can also safely deselect the 32-bit compatibility libraries.
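
If you prefer to adjust the software selection after the installation, you can also do so from the command line. The commands below are a minimal sketch; the pattern names used here (gnome-basic, x11 and 32bit) are assumptions and can differ between service packs, so list the installed patterns first:

# List the patterns that are currently installed
zypper search -i -t pattern

# Remove the desktop and 32-bit compatibility patterns if they were installed
zypper remove -t pattern gnome-basic x11 32bit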

3 Setting Up NTP Server

The NTP server acts as an authoritative time source for the systems you deploy. Start YaST, select NTP Server, and configure it to start at boot time. Under the security settings, select Open Port in Firewall and click OK. The systems being deployed can then point back to the installation server for time synchronization.
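
After YaST has written the configuration, you can verify that the time service is running and reachable. The following is a quick check, assuming the classic ntpd daemon shipped with SUSE Linux Enterprise Server 12:

# Confirm the daemon is enabled and running
systemctl status ntpd

# Show the upstream servers ntpd synchronizes against
ntpq -p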

4 Setting Up DHCP Services

Start YaST and select DHCP Server from Network Services. Configure the IP range you want to use. After that, you can either continue in YaST or edit the configuration file (/etc/dhcpd.conf) manually. As with the other services, be sure to open the port in the firewall. The end result needs to be a file that looks like the following:

option domain-name "my.lab";
option domain-name-servers 172.16.253.5;
option routers 192.168.124.1;
option ntp-servers 192.168.124.3;
option arch code 93 = unsigned integer 16; # RFC4578
default-lease-time 3600;
ddns-update-style none;
subnet 192.168.124.0 netmask 255.255.255.0 {
  range 192.168.124.100 192.168.124.199;
  next-server 192.168.124.3;
  default-lease-time 3600;
  max-lease-time 3600;
  if option arch = 00:07 or option arch = 00:09 {
    filename "/EFI/x86/bootx64.efi";
  } else if option arch = 00:0b {
    filename "/EFI/armv8/bootaa64.efi";
  } else {
    filename "/bios/x86/pxelinux.0";
  }
}
Note: if option arch

The if option arch sections allow the DHCP server to choose the correct boot file for each client architecture.
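
Regardless of whether you used YaST or edited /etc/dhcpd.conf by hand, the daemon needs to know which interface to listen on and must be restarted afterwards. The snippet below is a sketch; eth1 is only an example name for the interface that faces the cluster network:

# In /etc/sysconfig/dhcpd, set the interface dhcpd listens on, for example:
#   DHCPD_INTERFACE="eth1"

# Then enable and restart the service and check the log for configuration errors
systemctl enable dhcpd
systemctl restart dhcpd
journalctl -u dhcpd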

5 Setting Up Repositories

The next step is to create the installation source directories. Use a structure similar to /srv/install/arch/product/version.

/srv/install/x86/sles12/sp2/cd1
/srv/install/armv8/sles12/sp2/cd1

Now mount the install media into the appropriate location. Be sure to add it to /etc/fstab for persistence between reboots.

mount -o loop /root/sles12sp2.iso /srv/install/x86/sles12/sp2/cd1/

Do the same for each architecture’s boot media.
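
As mentioned above, the ISO images should also be listed in /etc/fstab so that the repositories survive a reboot. The entry below is a sketch for the x86 image from the mount command; add one line per image:

/root/sles12sp2.iso  /srv/install/x86/sles12/sp2/cd1  iso9660  loop,ro  0 0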

The final part of setting up the repositories is to export them. Other methods such as HTTP or FTP are also supported, but this scenario uses NFS. First, enable the NFS server through YaST and select Open Port in Firewall. In this instance, export the entire /srv/install structure as shown in the sample /etc/exports entry below.

/srv/install  *(ro,root_squash,sync,no_subtree_check)
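
After editing /etc/exports, re-export the file systems and verify that the share is visible:

# Re-read /etc/exports and apply the changes
exportfs -ra

# Verify that /srv/install is exported
showmount -e localhost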

6 Setting Up TFTP

Now you need to enable the TFTP server. To do so, start YaST and select TFTP Server from Network Services. The system then prompts you to install the tftp package. Select Enable, leave the image directory at /srv/tftpboot, select Open Port in Firewall, and click OK.
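
A quick way to confirm that the TFTP server is active is to check that a service is listening on UDP port 69. Depending on the service pack, TFTP may run via systemd socket activation or xinetd, so checking the port is the most reliable test:

# TFTP listens on UDP port 69
ss -ulpn | grep :69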

7 Setting Up PXE Boot

To ensure you set up your install server correctly, read carefully through the following sections. Proceed as described and double-check your entries to avoid typos. Getting all the right files into the right places makes it easy to add more architectures, operating system install options, and so on.

Create a structure in /srv/tftpboot to support the various options:

mkdir /srv/tftpboot/bios
mkdir /srv/tftpboot/bios/x86
mkdir /srv/tftpboot/EFI
mkdir /srv/tftpboot/EFI/x86
mkdir /srv/tftpboot/EFI/x86/boot
mkdir /srv/tftpboot/EFI/armv8
mkdir /srv/tftpboot/EFI/armv8/boot

7.1 Setting Up the x86 BIOS Boot Environment

At this point, you need to copy the necessary boot files for the x86 BIOS environment to the appropriate boot location. To do so, navigate to the appropriate directory as shown below:

cd /srv/install/x86/sles12/sp2/cd1/boot/x86_64/loader/
cp -a linux initrd message /srv/tftpboot/bios/x86/

While still in the loader directory, create the directory for the configuration file and copy it in:

mkdir /srv/tftpboot/bios/x86/pxelinux.cfg
cp -a isolinux.cfg /srv/tftpboot/bios/x86/pxelinux.cfg/default

Copy pxelinux.0 to the same structure:

cp /usr/share/syslinux/pxelinux.0 /srv/tftpboot/bios/x86/

Now that the files are all in place, edit the configuration to ensure all the correct boot options are also in place. Start by editing /srv/tftpboot/bios/x86/pxelinux.cfg/default. See the example below:

default harddisk

# hard disk
label harddisk
  localboot -2

# install
label install
  kernel linux
  append initrd=initrd showopts install=nfs://192.168.124.3/srv/install/x86/sles12/sp2/cd1

display message
implicit 0
prompt 1
timeout 600

Now edit /srv/tftpboot/bios/x86/message to reflect the default file you edited before. See the example below:

              Welcome to the Installer Environment!

To start the installation enter 'install' and press <return>.

Available boot options:
  harddisk   - Boot from Hard Disk (this is default)
  install     - Installation

Have a lot of fun...
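
With the files in place, you can optionally test the path from another machine by fetching pxelinux.0 over TFTP. The command below is a sketch that assumes the tftp client package is installed and that 192.168.124.3 is the installation server:

# Fetch the boot loader to verify the TFTP path is correct
tftp 192.168.124.3 -c get bios/x86/pxelinux.0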

7.2 Setting Up the x86 EFI Boot Environment

Start by copying the files required for UEFI booting of a grub2-efi environment.

cd /srv/install/x86/sles12/sp2/cd1/EFI/BOOT
cp -a bootx64.efi grub.efi MokManager.efi /srv/tftpboot/EFI/x86/

Copy the kernel and initrd to the directory structure.

cd /srv/install/x86/sles12/sp2/cd1/boot/x86_64/loader/
cp -a linux initrd /srv/tftpboot/EFI/x86/boot

Now you need to create a grub.cfg file. This file goes in /srv/tftpboot/EFI/x86 and should have contents similar to the example below:

set timeout=5
menuentry 'Install SLES12 SP2 for x86_64' {
  linuxefi /EFI/x86/boot/linux install=nfs://192.168.124.3/srv/install/x86/sles12/sp2/cd1
  initrdefi /EFI/x86/boot/initrd
}

7.3 Setting Up the ARMv8 EFI Boot Environment

Setting up the ARMv8 EFI boot environment is done in a way very similar to the x86_64 EFI environment. Start by copying the files required for UEFI booting of a grub2-efi environment:

cd /srv/install/armv8/sles12/sp2/cd1/EFI/BOOT
cp -a bootaa64.efi /srv/tftpboot/EFI/armv8/

Copy the kernel and initrd to the directory structure:

cd /srv/install/armv8/sles12/sp2/cd1/boot/aarch64
cp -a linux initrd /srv/tftpboot/EFI/armv8/boot

Next, you need to create a grub.cfg file in the directory /srv/tftpboot/EFI/armv8 and add a section with contents similar to the example below.

menuentry 'Install SLES12 SP2 for SoftIron OverDrive' {
 linux /EFI/armv8/boot/linux network=1 usessh=1 sshpassword="suse" \
   install=nfs://192.168.124.3/srv/install/armv8/sles12/sp2/cd1 \
   console=ttyAMA0,115200n8
 initrd /EFI/armv8/boot/initrd
}

This addition to the configuration file contains a few extra options to enable the serial console and to allow installation via SSH. This is helpful for systems that do not have a standard KVM console interface.

Note: ARM Platforms

Be aware that this configuration is set up for one specific ARM platform only.

8 Setting Up SMT

The SMT service provides a local repository mirror for updates to your software. Follow the instructions in the Subscription Management Tool Guide at https://www.suse.com/documentation/sles-12/book_smt/data/smt_installation.html to install and configure SMT. Be sure to select both the pool and update repositories for each product you are supporting with this server.
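
Once SMT is installed and registered, the repositories to mirror can also be managed from the command line. The commands below are a sketch using the SMT tools; the repository and target names shown are examples and depend on the products you selected:

# List the repositories known to SMT and their mirroring status
smt-repos

# Enable mirroring for a repository (example name and target), then start a mirror run
smt-repos -e SLES12-SP2-Updates sle-12-x86_64
smt-mirror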

9 Installation Server Completed

At this point, you are ready to use the installation server for BIOS and EFI on x86 and EFI on ARMv8. Be sure to select the SMT server as the registration server during installation. You can gain further value by building custom AutoYaST files as well. These files can enable a streamlined installation process and even an unattended process if everything is well defined.
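
For example, an AutoYaST profile can be referenced directly from the PXE boot entries you created earlier. The label below is a sketch for the x86 BIOS case; the profile path /srv/install/autoyast/sles12sp2.xml is a hypothetical location on the same NFS export:

label autoinstall
  kernel linux
  append initrd=initrd showopts install=nfs://192.168.124.3/srv/install/x86/sles12/sp2/cd1 autoyast=nfs://192.168.124.3/srv/install/autoyast/sles12sp2.xml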

10 Legal Notice

Copyright © 2006–2017 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

SUSE, the SUSE logo and YaST are registered trademarks of SUSE LLC in the United States and other countries. For SUSE trademarks, see http://www.suse.com/company/legal/. Linux is a registered trademark of Linus Torvalds. All other names or trademarks mentioned in this document may be trademarks or registered trademarks of their respective owners.

This article is part of a series of documents called "SUSE Best Practices". The individual documents in the series were contributed voluntarily by SUSE's employees and by third parties.

The articles are intended only to be one example of how a particular action could be taken. They should not be understood to be the only action and certainly not to be the action recommended by SUSE. Also, SUSE cannot verify either that the actions described in the articles do what they claim to do or that they don't have unintended consequences.

Therefore, we need to specifically state that neither SUSE LLC, its affiliates, the authors, nor the translators may be held liable for possible errors or the consequences thereof. Below we draw your attention to the license under which the articles are published.
