
Time For Switching – SLES 12 SP3 In The Public Cloud


It was release day, September 7, 2017, and SLES 12 SP3 and SLES 12 SP3 For SAP Applications images are available in Amazon EC2, Google Compute Engine, and Microsoft Azure. Give them a whirl. If you plan on attending SUSECon in Prague in a couple of weeks you can meet some members of the Public Cloud team that make all this magic happen. And if you have no plans yet, well, there’s still time to register, come out and see us.

What does this mean for you? Well, for anything new you should start from the new images, on-demand or BYOS, as the clock on the SLES 12 SP2 images has started ticking. For running instances it is time to start thinking about migration, and there are some caveats that I will get to in a couple of minutes. But first things first: how to find the new images.

Each framework has a different mechanism for making new images accessible via the web console, and thus available via point and click. We have little influence over the speed of this process, so you will need to keep an eye out for the images in the web console. Of course, all frameworks offer a search feature in their web console, and searching for “12-sp3” or “v20170907” will find the images if they are not yet in the place where you expect them. For those that use the command line tools to run their instances, the latest and greatest images can be found via pint, as always (see the example below).

For new goodies in SLES 12 SP3 please consult the Release Notes. As far as Public Cloud related new goodies go, those are not tied to a release. The Public Cloud module has a “continuous integration” life cycle, meaning we get to update things such as command line tools along the way and do not have to wait for new SP releases. Therefore, there is nothing really to announce on that front. A couple of highlights are on the networking front for Amazon EC2 and Azure: the latest ENA (Elastic Network Adapter) driver is integrated in the SLES 12 SP3 kernel (it is also available via update in the SP2 kernel), and for Microsoft Azure SR-IOV bonding is also fully integrated into the kernel and happens auto-magically.
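
A quick sketch of what the pint lookup can look like; the filter expression is just an illustration and the exact filter syntax may differ depending on your pint version:

pint amazon images --active --filter 'name~sles-12-sp3'
pint google images --active --filter 'name~sles-12-sp3'
pint microsoft images --active --filter 'name~sles-12-sp3'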

Now on to the migration part, as I have made those looking for migration caveats wait long enough. First let’s cover SLES migration; I will focus only on the SLES 12 SP2 to SLES 12 SP3 migration, as other migration paths have been described previously. SLES For SAP Applications will be covered after the SLES section. One improvement is that we are getting away from framework dependent “if-then-else” parts of the migration; this should be the last time we deal with this. On the other hand, we took a step back in ease of migration by picking up a bug during the SLES 12 SP2 cycle. This bug introduces another “if-then-else” flow that I will describe below. During the SLES 12 SP2 cycle SUSE announced the availability of the HPC module, which is distinctly different from the special HPC images offered in Microsoft Azure, and I will get to those later.

In an effort to make the packages for HPC easily available, the product package for HPC was at some point included in the image builds for SLES 12 SP2 and SLES 12 SP2 For SAP, such that the HPC repository would auto-magically get registered for on-demand instances. Following the theme that “no good deed goes unpunished”, to look at this from a more cynical point of view, we also ended up picking up a bug that makes migration a bit more cumbersome this time. With that as introduction, let’s get to the action.

Before migrating (and as a note, what follows is targeted at on-demand instance users) you want to make sure your instance already has the latest and greatest SLES 12 SP2 updates. All commands need root permission. Start with

zypper up

If you get a kernel update during this update you do not really need to reboot your instance. Next let’s get the migration plugin, if you do not already have it:

zypper in zypper-migration-plugin

Now on to the HPC module complication. Run

rpm -qa | grep hpc

If nothing shows up, meaning you started from an image that does not have the HPC module registered, then you do not have the problem, easy enough, and you get to skip over the next few steps. If you do have the packages, we need to double check the proclaimed module version, thus run

zypper products | grep -i hpc

If in this output you see a version string of “12.2-0” then there is a migration issue that we want to nip in the bud before we run the migration process. If the version string is “12-0” you’re all set. For those with the buggy version the solution is relatively simple, as documented in the Release Notes. But before we go down that path, since I have no idea from which image your instance originated, let’s double check that you have the file /etc/SUSEConnect

ls /etc/SUSEConnect

If the file exists we are good to go; if not, let’s create it. The content is framework dependent.

For AWS the content should be:

---
insecure: false
url: https://smt-ec2.susecloud.net

 

For GCE the content should be:

---
insecure: false
url: https://smt-gce.susecloud.net

For Azure the content should be:

---
insecure: false
url: https://smt-azure.susecloud.net
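
If you need to create the file, a here-document is one way to do it; this sketch uses the Azure URL, so swap in the URL matching your framework:

cat > /etc/SUSEConnect <<EOF
---
insecure: false
url: https://smt-azure.susecloud.net
EOF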

Easy enough. Now back to managing the HPC version mismatch problem. If you do have “12.2-0” in the output from the “zypper products | grep -i hpc” command then you want to run:

rpm -e sle-module-hpc-release-POOL sle-module-hpc-release
SUSEConnect -p sle-module-hpc/12/x86_64

If you do not have the HPC module on your system and you want access to it, you can also run “SUSEConnect -p sle-module-hpc/12/x86_64”.

Now, after all of this preparation, we are ready for the migration, yippee:

zypper migration

After the process is complete, reboot your instance and you are all set. Enjoy SLES 12 SP3!
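
If you want to convince yourself that the migration took, a quick peek at the os-release data after the reboot will show the new service pack, for example:

grep PRETTY_NAME /etc/os-release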

Oh, and to avoid any confusion on login you might want to change your motd file

sed -i s/SP2/SP3/ /etc/motd

Now on to SLES For SAP. Given that SLES 12 SP3 For SAP Applications is brand new, not all SAP products have been certified yet. However, there is a large number of SAP applications for which the certification on SLES 12 carries over to SLES 12 SP3. Also, if you are running instances based on SLES 12 SP1 you can follow the exact same steps, as the same caveats apply for SLES 12 SP1 For SAP and SLES 12 SP2 For SAP. As with the SLES migration, this applies to on-demand instances that are registered to the SUSE operated infrastructure in your cloud framework of choice.

Update 2018-07-28

Well, the “exact same steps” wording in combination with the commands that followed was certainly misleading, sorry. I will leave the original section below as a reference, but you should follow this new and updated section, and yes, this time I mean it: use the exact same steps for SLES 12 SP1 For SAP and SLES 12 SP2 For SAP migrations.

First we are going to figure out what needs to be removed:

rpm -qa | grep release | egrep 'sles|hpc|ha' | grep -v notes

Any package returned as the output needs to be removed. Depending on the origin of the instance this may return a list such as this:

sle-module-hpc-release-POOL-12.2-1.1.x86_64
sle-ha-release-POOL-12.2-1.125.x86_64
sle-module-toolchain-release-POOL-12-8.1.x86_64
sle-module-toolchain-release-12-8.1.x86_64
sle-ha-release-12.2-1.125.x86_64
sle-module-hpc-release-12.2-1.1.x86_64

or similar to this:

sles-release-POOL-12.1-1.331.x86_64
sles-release-12.1-1.331.x86_64
sle-ha-release-POOL-12.1-1.64.x86_64
sle-ha-release-12.1-1.64.x86_64

And of course we did fix the issue in the image at some point and you may get nothing back.

For those that get nothing back from this command the standard “zypper up”, “zypper migration” process applies. For those that get something back, the next step is to remove the packages in question, which may for example look like this:

zypper rm sle-module-hpc-release-POOL-12.2-1.1.x86_64 \
sle-ha-release-POOL-12.2-1.125.x86_64 \
sle-module-toolchain-release-POOL-12-8.1.x86_64 \
sle-module-toolchain-release-12-8.1.x86_64 \
sle-ha-release-12.2-1.125.x86_64 \
sle-module-hpc-release-12.2-1.1.x86_64

Do not cut and paste the above command; it will not necessarily match what is on your system. You need to customize the “zypper rm” command based on the package list obtained from the “rpm -qa” command shown previously.
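
If you would rather let the shell assemble the package list for you, a one-liner along these lines works as well; just review the removal list zypper presents before confirming:

zypper rm $(rpm -qa | grep release | egrep 'sles|hpc|ha' | grep -v notes)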

The next step from the original instructions still applies for those running on AWS:

rm /etc/zypp/repos.d/aws-ena*

This may return a message that the file does not exist, which is OK. If the file does not exist it means the ENA driver is already built in and we didn’t need to pull it from the SUSE Solid Driver build service.
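
If you are curious whether the ENA module is part of your kernel you can check with, for example:

modinfo ena

If modinfo reports the module, the driver ships with the kernel and the extra repository is indeed not needed.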

Now update your system:

zypper up

Force a new registration to clean up the repositories known to the system:

registercloudguest --force-new

This step is necessary to sort out the repositories on your running instance; ignore the certificate warning. And finally we arrive where we want to be: an instance that can be migrated.
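
If you want to double check the repository cleanup before kicking off the migration, listing the repositories is a quick sanity check:

zypper repos

With that out of the way, run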

zypper migration

If you want the HPC module back, run

SUSEConnect -p sle-module-hpc/12-0/x86_64

This concludes the updated part of the blog. The original text, for reference purposes only, is below and marked accordingly.

Original section for reference only, do not use. Use the updated section above.

We are going to start not by updating the system but by getting rid of stuff that interferes with the migration process:

zypper rm sles-release-POOL-12.1-1.331.x86_64 \
sles-release-12.1-1.331.x86_64 \
sle-ha-release-POOL-12.1-1.64.x86_64 \
sle-ha-release-12.1-1.64.x86_64

While this will show some packages to be removed that should ordinarily give you reason to pause, it is OK to proceed. A total of 7 packages should be removed.

Next you get to do something that under ordinary circumstances only causes trouble, so for this one time and one time only use SUSEConnect on an on-demand instance.

SUSEConnect -d

This will produce an error

“Error: SCC returned ‘Internal Server Error: Please contact your Administrator’ (500)”

Which is OK, you are not registered to SCC; this is an artifact of the update infrastructure. Locally the changes we want to happen still happened. In AWS EC2 we want to get rid of a special repo that exists only on SLES 12 SP1 For SAP instances to provide ENA access; it is no longer needed in SLES 12 SP2 & SP3 For SAP, as the ENA driver is part of the kernel shipped by SUSE in SLES 12 SP2 & SP3.

rm /etc/zypp/repos.d/aws-ena*

Now let zypper clean up all of its knowledge of the state of the system

zypper up

You’ll see a bunch of “Removing repository …” messages scroll across the screen. With this we have the system in a state where we can start from scratch with a registration:

registercloudguest --force-new

Ignore the warning about the certificate. Once the registration is finished let’s make sure we have the latest and greatest:

zypper up

If you run into file conflicts, type “yes” to let zypper proceed and update the packages. Now the instance is in a state where you can jump back up and follow the instructions for SLES, which include the handling of the HPC module. On SLES For SAP instances the HPC module handling also depends on the origin of your migration: if the module is not already registered and you are starting from SLES 12 SP1 For SAP, you want to wait with the HPC module registration until after the migration is complete, if you care to have the module repository set up.

End replaced section

And as a last topic for today, a few words about HPC images in Azure, a section I could title “Unsuccessfully banging my head against the wall”. Those interested in HPC and using the HPC images will have noticed that, first, we never released SLES 12 SP2 HPC images in Azure, and second, there was no mention of HPC w.r.t. migration. In short, yes, we never managed to get all the ducks in a row and produce a fully functioning HPC image for SLES 12 SP2. For SLES 12 SP3 we wanted to make sure we had an image on release day and worked really hard to get there. But once again the gremlins got us and the image release for HPC is delayed. Once we get there, sometime before Thanksgiving I hope, watch this space for a special blog about the HPC images. That means that there will be SLES 12 SP3 based HPC images in Azure in the not too distant future.

As for next time, well, the next major release will be SUSE Linux Enterprise 15 and there will not be a migration path initially, but for SUSE Linux Enterprise 12 SP4 we’ll continue to work on cutting down the “if-then-else” conditions for migration.


Comments

  • Þanda says:

    Hello rjschwei,

    May I know if this blog is still valid? When I try to upgrade from SLES 12 SP1 to either SP2, 3, or 4, it successfully goes through the migration process. However, when I reboot the EC2 instance it does not boot up and I can’t SSH into the server either.

    Please advise, are there any new steps/blog I need to follow?

    Thank You.

  • rjschwei says:

    Yes, still valid and expected to work. It could be an issue with the version of cloud-init that is being installed.

    Before upgrading do the following (as root):

    echo "network: {config: disabled}" >> /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

    There was an issue with one of the released cloud-init versions that messed up the network configuration when doing the ifcfg-eth0 generation.

  • Þanda says:

    Thank you for the response. I’m a bit confused by the explanation below Update 2018-07-28. When I execute rpm -qa | grep release | egrep 'sles|hpc|ha' | grep -v notes I receive the result below.

    sles-release-POOL-12.1-1.331.x86_64
    sles-release-12.1-1.331.x86_64
    sles-release-DVD-11.4-1.109.x86_64

    When execute;
    zypper rm sles-release-POOL-12.1-1.331.x86_64 \
    sles-release-12.1-1.331.x86_64

    It showed that a total of 298 packages are going to be REMOVED, which included SUSEConnect, cloud-init, cloud-regionsrv-client, grub2, zypper, zypper-migration-plugin and other important commands. If zypper is removed, I don’t think I can proceed with zypper migration anymore.

    Or should I not remove these 2 packages and just jump directly into zypper migration?
    sles-release-POOL-12.1-1.331.x86_64
    sles-release-12.1-1.331.x86_64

    Please advice.

  • Þanda says:

    Hello rjschwei,

    I’m still receiving the same result after migration and rebooting the AWS EC2 instance.

    In the system log, I am seeing the below.
    [ 0.000000] Cannot get hvm parameter CONSOLE_EVTCHN (18): -22!
    [ 2.609519] Cannot get hvm parameter CONSOLE_EVTCHN (18): -22!
    [ 2.792819] ip_local_port_range: prefer different parity for start/end values.
    [[0;1;31mFAILED[0m] Failed to mount /hana/shared.
    See ‘systemctl status hana-shared.mount’ for details.
    [[0;1;33mDEPEND[0m] Dependency failed for Local File Systems.
    [[0;1;31mFAILED[0m] Failed to mount /usr/sap.
    See ‘systemctl status usr-sap.mount’ for details.
    [[0;1;31mFAILED[0m] Failed to mount /media.
    See ‘systemctl status media.mount’ for details.
    [[0;1;31mFAILED[0m] Failed to mount /hana/data.
    See ‘systemctl status hana-data.mount’ for details.
    [[0;1;31mFAILED[0m] Failed to mount /hana/log.
    See ‘systemctl status hana-log.mount’ for details.
    [[0;1;31mFAILED[0m] Failed to mount /backup.
    See ‘systemctl status backup.mount’ for details.

    You are in emergency mode. After logging in, type “journalctl -xb” to view system logs, “systemctl reboot” to reboot, “systemctl default” or ^D to try again to boot into default mode.
    Give root password for maintenance
    (or press Control-D to continue)

    Appreciate your advice here.

  • rjschwei says:

    This is certainly getting too involved to handle as comments to a blog post. Please get in contact with SUSE or AWS support depending on where you get your support from and create an issue. Then we can deal with this properly via bug handling process.

  • rjschwei says:

    You cannot migrate SLES For SAP with the sles-release-* packages installed. These have to be removed.

    You can force the removal with rpm and keep the dependencies.

    rpm -e --nodeps sles-release-POOL

    for example

  • Þanda says:

    It still does not work after using this to remove the unnecessary packages. Since the result for rpm -qa | grep release | egrep 'sles|hpc|ha' | grep -v notes is

    sles-release-POOL-12.1-1.331.x86_64
    sles-release-12.1-1.331.x86_64
    sles-release-DVD-11.4-1.109.x86_64

    I executed rpm -e --nodeps sles-release-POOL sles-release sles-release-DVD

    I also added the below before the migration too

    echo "network: {config: disabled}" >> /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

    After migration to SP4 and reboot, I’m still not able to SSH into the server. I can see a bunch of failed device mounts and “You are in emergency mode. After logging in, typGive root password for maintenance (or press Control-D to continue):” in the system log.

    I will reach out to either one of the support team about this issue. Thank you.

  • rjschwei says:

    Is the “nofail” option set in /etc/fstab for any device that is not the root device?

  • Þanda says:

    Thank you so much for looking in to it. The default fstab is shown in below.

    devpts /dev/pts devpts mode=0620,gid=5 0 0
    proc /proc proc defaults 0 0
    sysfs /sys sysfs noauto 0 0
    debugfs /sys/kernel/debug debugfs noauto 0 0
    tmpfs /run tmpfs noauto 0 0
    /dev/hda1 / ext3 defaults 1 1
    /dev/xvds /usr/sap xfs nofail,nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/xvde /hana/shared xfs nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/mapper/vghana-lvhanadata /hana/data xfs nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/mapper/vghana-lvhanalog /hana/log xfs nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/xvdz /media xfs nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/mapper/vghanaback-lvhanaback /backup xfs nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0

    and I added the nofail for 6 of the drives at below.

    /dev/xvds /usr/sap xfs nofail,nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/xvde /hana/shared xfs nofail,nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/mapper/vghana-lvhanadata /hana/data xfs nofail,nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/mapper/vghana-lvhanalog /hana/log xfs nofail,nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/xvdz /media xfs nofail,nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
    /dev/mapper/vghanaback-lvhanaback /backup xfs nofail,nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0

    Good thing is the instance is able to boot up now. However, after executing df -h most of the needed drives are not mounted. I am new to SUSE Linux and wasn’t sure how to bring those drives back like before. Do you have any blog that I can refer to? Btw, this is a SAP HANA DB server running with an SAP NetWeaver application as well. This instance was created back in 2016, with SLES 11 SP4 only.

    Let me know if there is any other information you need.

  • rjschwei says:

    OK, two things in play here.

    1.) Likely your device names are different, fdisk -l gets you a list of the attached devices.

    2.) LVM appears to be involved. There were some major changes between SLES 11 and SLES 12 in the way LVM works. But I have no idea how to fix/address this. You’ll need to file a bug with SUSE Support to get help with the LVM stuff or see what Google turns up.

  • Þanda says:

    Hi rjschwei,

    Just a quick update, the problem is solved after editing fstab and removing “delaylog” from all the drives.

    example;
    /dev/mapper/vghanaback-lvhanaback /backup xfs nofail,nobarrier,noatime,nodiratime,logbsize=256k 0 0

    All required applications are able to start without any issue.

    Thank you so much for your help along the way.

  • diegocs01 says:

    Hi rjschwei

    Is this blog valid for updates from SUSE 12 SP3 to SP4?
    We are receiving the following message during the update:
    Skipping repository ‘SLES12-SP3-Updates’ because of the above error.
    Forcing raw metadata refresh
    Retrieving repository ‘SLE-12-SP4-SAP-Updates’ metadata ……………………………………………………………………………………………………………….[done]
    Forcing building of repository cache
    Building repository ‘SLE-12-SP4-SAP-Updates’ cache ……………………………………………………………………………………………………………………[done]
    Forcing raw metadata refresh
    Retrieving repository ‘SLE-HA12-SP4-Pool’ metadata ……………………………………………………………………………………………………………………[done]
    Forcing building of repository cache
    Building repository ‘SLE-HA12-SP4-Pool’ cache ………………………………………………………………………………………………………………………..[done]
    Forcing raw metadata refresh
    Retrieving repository ‘SLE-HA12-SP4-Updates’ metadata …………………………………………………………………………………………………………………[done]
    Forcing building of repository cache
    Building repository ‘SLE-HA12-SP4-Updates’ cache ……………………………………………………………………………………………………………………..[done]
    Forcing raw metadata refresh
    Retrieving repository ‘SLE12-SP4-SAP-Pool’ metadata …………………………………………………………………………………………………………………..[done]
    Forcing building of repository cache
    Building repository ‘SLE12-SP4-SAP-Pool’ cache ……………………………………………………………………………………………………………………….[done]
    Forcing raw metadata refresh
    Retrieving repository ‘SLES12-SP4-Pool’ metadata ……………………………………………………………………………………………………………………..[done]
    Forcing building of repository cache
    Building repository ‘SLES12-SP4-Pool’ cache ………………………………………………………………………………………………………………………….[done]
    Forcing raw metadata refresh
    Retrieving repository ‘SLES12-SP4-Updates’ metadata …………………………………………………………………………………………………………………..[done]
    Some of the repositories have not been refreshed because of an error.
    Executing ‘zypper –releasever 12.4 ref -f’: RuntimeError: Refresh of repositories failed.

    Migration failed.

    Rollback successful.

  • rjschwei says:

    @diegocs01

    Sorry for the delay. Yes,

    `zypper migrate`

    should just work for SLES and SLES For SAP migration from SP3 to SP4.

    In the meantime there has been a general upgrade to the update infrastructure. I recommend installing cloud-regionsrv-client-9.0.3 first, which is expected to show up in the update infrastructure no later than Friday Aug 23, 2019 5:00 P.M. EDT.
