SUSE HA for SAP HANA scale-up cost-optimized improved


Starting with version 0.160.1, the SAPHanaSR package ships an HA/DR provider hook script that automates changes to memory limits and table preload on fail-over. This simplifies the SAP HANA scale-up cost-optimized scenario.

Picture: SAP HANA scale-up cost-optimized

In this blog article you will learn what is new for the scale-up cost-optimized scenario and where to find more information.

What is an SAP HANA cost-optimized scenario?

SAP allows running a non-replicated SAP HANA instance in parallel to the replication secondary on the pre-defined fail-over site. This non-replicated database could be a development system or similar. If the primary HANA fails, the cluster first tries to restart the failed database locally. If the restart is not possible, a fail-over is triggered. In that case the secondary HANA on the fail-over node is promoted after the non-replicated HANA has been shut down.
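The recovery sequence described above can be sketched as a small decision function. This is purely illustrative; the real decisions are made by the Pacemaker cluster and the SAPHana resource agent, not by code like this:

```python
def recovery_actions(local_restart_possible):
    """Illustrative sketch of the cost-optimized recovery order.

    Mirrors the described behavior: prefer a local restart of the
    failed primary; otherwise stop the non-replicated HANA on the
    fail-over node before promoting the replication secondary.
    """
    if local_restart_possible:
        # First choice: restart the failed primary database in place.
        return ["restart primary HANA locally"]
    # Fail-over path: the non-replicated HANA must be stopped first,
    # then the secondary is promoted to primary.
    return [
        "stop non-replicated HANA on fail-over node",
        "promote HANA replication secondary to primary",
    ]
```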

So a SUSE HA for SAP HANA cost-optimized scenario needs fewer resources than the performance-optimized scenario. On the other hand, you get a more complex setup and a slower fail-over. If you want to learn more about the cost-optimized scenario, please read our blog article.

What is new?

The SAPHanaSR RPM now contains an SAP HANA HA/DR provider hook script for the postTakeover() method. The script changes the HANA memory limit and table preload settings for the replicated HANA database, so it is used in the SAPHanaSR scale-up cost-optimized setup.
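HA/DR provider hooks are Python classes that HANA calls at defined events. The general shape of such a hook can be sketched as follows; the class name, base-class stub and action strings here are illustrative and are not the shipped susCostOpt implementation (on a real system the base class comes from SAP's hdb_ha_dr.client module):

```python
class HADRBase:
    """Stub standing in for SAP's hdb_ha_dr.client.HADRBase,
    so this sketch is self-contained and runnable."""
    def __init__(self, *args, **kwargs):
        pass

class CostOptHookSketch(HADRBase):
    """Illustrative hook implementing the postTakeover() method."""

    def postTakeover(self, rc):
        # HANA calls this method after a takeover has completed.
        # A cost-optimized hook would now adjust the promoted
        # (former secondary) instance via the configured user key.
        self.actions = [
            "adjust global_allocation_limit on promoted instance",
            "re-enable column table preload",
        ]
        # Return 0 to signal success back to HANA.
        return 0
```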

The new script comes ready to use out of the box. This is possible because the script does not include sensitive information; it uses HANA's secure user store (hdbuserstore) instead.
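Such a user store key can be created beforehand with HANA's hdbuserstore tool, run as the HANA OS user on the take-over node. The key name below matches the configuration example later in this article; the port and database user are placeholders you must adapt to your instance (this is a sketch, not a complete procedure):

```shell
# Run as the HANA OS user (e.g. ha1adm) on the pre-defined
# take-over node. Host, port and database user are examples;
# the SQL port depends on your instance number.
hdbuserstore SET saphanasr_HA1_costopt localhost:31313 <DB_USER> <PASSWORD>

# Verify the key exists (the password is not displayed):
hdbuserstore LIST saphanasr_HA1_costopt
```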

The memory limit and the database user key are configured in the hook script's section of HANA's global.ini configuration file. You configure and activate the script on the pre-defined take-over node, that is, the node running the HANA replication secondary and the non-replicated HANA database. The manual page gives details. The example below shows the new hook script configuration.

[ha_dr_provider_suscostopt]
provider = susCostOpt
path = /usr/share/SAPHanaSR
userkey = saphanasr_HA1_costopt
costopt_primary_global_allocation_limit = 32000
execution_order = 2

Example: Section [ha_dr_provider_suscostopt] in global.ini

The new parameter userkey is mandatory. It tells the hook script which user key to use for changing the HANA settings. Of course, the respective user key and database user have to exist. The new parameter costopt_primary_global_allocation_limit is optional. You can use it to set a memory limit even for the promoted HANA instance after a fail-over has happened. If not set, it defaults to unlimited.
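How these two parameters could be read from global.ini, including the "unlimited by default" behavior, can be illustrated with a short Python sketch. The parsing below is illustrative and is not the shipped hook code:

```python
import configparser

# global.ini fragment as in the configuration example above.
GLOBAL_INI = """
[ha_dr_provider_suscostopt]
provider = susCostOpt
path = /usr/share/SAPHanaSR
userkey = saphanasr_HA1_costopt
costopt_primary_global_allocation_limit = 32000
execution_order = 2
"""

def read_costopt_settings(text):
    """Return (userkey, allocation_limit) from the hook section.

    userkey is mandatory (KeyError if missing); the allocation
    limit is optional and defaults to None, i.e. unlimited memory
    for the promoted instance after takeover.
    """
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    section = cfg["ha_dr_provider_suscostopt"]
    userkey = section["userkey"]
    limit = section.get("costopt_primary_global_allocation_limit")
    return userkey, (int(limit) if limit is not None else None)

key, limit = read_costopt_settings(GLOBAL_INI)
print(key, limit)  # saphanasr_HA1_costopt 32000
```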

There is some more configuration specific to the SAP HANA scale-up cost-optimized scenario. That configuration remains the same as with former versions of the SAPHanaSR package. The updated setup guide explains installation and configuration step by step.

Where can I find further information?

Find more information in the SUSE blogs, in the setup guide for the SAP HANA scale-up cost-optimized scenario, and in the manual pages shipped with the product.

– SUSE blogs

SAP HANA Cost-optimized – An alternative Route is available

– Setup guides

– Manual pages

apropos(1), cs_man2pdf(8), SAPHanaSR(7), ocf_suse_SAPHana(7), ocf_heartbeat_SAPInstance(7)
