SUSE HA for SAP HANA scale-up cost-optimized improved
Starting with version 0.160.1, the SAPHanaSR package ships an HADR provider hook script for automating changes to memory limits and table preload on fail-over. This simplifies the SAP HANA scale-up cost-optimized scenario.
In this blog article you will learn what is new for the scale-up cost-optimized scenario and where to find more information.
What is an SAP HANA cost-optimized scenario?
SAP allows running a non-replicated HANA instance in parallel to the replication secondary on the pre-defined fail-over site. This non-replicated database could be a development system or similar. If the primary HANA fails, the cluster first tries to restart the failed database locally. If the restart is not possible, a fail-over is triggered. In that case the secondary HANA on the fail-over node is promoted after the non-replicated HANA has been shut down.
So you need fewer resources when running a SUSE HA for SAP HANA cost-optimized scenario, compared to the performance-optimized scenario. On the other hand, you get a more complex setup and a slower fail-over. If you want to learn more about the cost-optimized scenario, please read our blog article https://www.suse.com/c/sap-hana-cost-optimized-an-alternative-route-is-available/ .
What is new?
The SAPHanaSR RPM now contains a HANA HADR provider hook script for the postTakeover() method. This script, susCostOpt.py, changes the HANA memory limits and table preload for the replicated HANA database. It is therefore used for the SAPHanaSR scale-up cost-optimized setup.
The new script comes ready to use out of the box. This is possible because the script does not include sensitive information; it uses HANA's database user key store instead.
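For illustration only, here is a minimal sketch of what an HADR provider hook with a postTakeover() method can look like. It assumes the HANA hdb_ha_dr.client Python API and the hdbsql client; the class name myCostOpt, the hard-coded user key and the single SQL statement are simplifications for this sketch. It is not the shipped susCostOpt.py, which you find installed under /usr/share/SAPHanaSR.

from hdb_ha_dr.client import HADRBase
import subprocess

class myCostOpt(HADRBase):

    def __init__(self, *args, **kwargs):
        # Delegate construction to the base class.
        super(myCostOpt, self).__init__(*args, **kwargs)
        # Hard-coded here for brevity; the shipped script takes the key
        # from the userkey parameter in global.ini.
        self.userkey = "saphanasr_HA1_costopt"

    def about(self):
        return {"provider_company": "Example",
                "provider_name": "myCostOpt",
                "provider_description": "Re-enable table preload after takeover",
                "provider_version": "0.1"}

    def postTakeover(self, rc, **kwargs):
        """Called on the new primary after a takeover has completed."""
        self.tracer.info("myCostOpt.postTakeover() called with rc=%s" % rc)
        # Re-enable column table preload. Credentials come from the secure
        # user store key, so no password appears in the script itself.
        sql = ("ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') "
               "SET ('system_replication','preload_column_tables') = 'true' "
               "WITH RECONFIGURE")
        subprocess.call(["hdbsql", "-U", self.userkey, sql])
        return 0

Example: Simplified sketch of an HADR provider hook implementing postTakeover()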
Memory limit and database user key are configured in the HADR provider script's section of HANA's global.ini config file. You can configure and activate the script on the pre-defined take-over node. This is the node running the HANA replication secondary and the non-replicated HANA database. The manual page susCostOpt.py(7) gives details. The example below shows the new hook script configuration.
[ha_dr_provider_suscostopt]
provider = susCostOpt
path = /usr/share/SAPHanaSR
userkey = saphanasr_HA1_costopt
costopt_primary_global_allocation_limit = 32000
execution_order = 2
Example: Section [ha_dr_provider_suscostopt] in global.ini
The new parameter userkey is mandatory. It tells the hook script which user key to use for changing HANA settings. Of course, the respective user key and database user have to exist. The new parameter costopt_primary_global_allocation_limit is optional. You can use it to set a memory limit even for the promoted HANA instance after a fail-over has happened. If not set, it defaults to unlimited.
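For illustration, the following hedged Python fragment shows how such an optional limit could translate into the SQL statement a hook sends to the promoted instance. The helper name build_posttakeover_sql and the fallback to 0, which lets HANA calculate its default (effectively unlimited) allocation limit, are assumptions for this sketch and not taken from susCostOpt.py.

# Hypothetical helper: build the ALTER SYSTEM statement for the promoted primary.
# If no costopt_primary_global_allocation_limit is configured, fall back to 0,
# which lets HANA calculate its default (effectively unlimited) allocation limit.
def build_posttakeover_sql(costopt_limit_mb=None):
    limit = str(costopt_limit_mb) if costopt_limit_mb else "0"
    return ("ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') "
            "SET ('memorymanager','global_allocation_limit') = '%s' "
            "WITH RECONFIGURE" % limit)

print(build_posttakeover_sql(32000))  # limit the promoted HANA to 32000 MB
print(build_posttakeover_sql())       # no limit configured, use HANA's default

Example: Hypothetical helper building the memory limit SQL for the promoted instance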
There is some more configuration specific to the SAP HANA scale-up cost-optimized scenario. That configuration remains the same as with former versions of the SAPHanaSR package. The updated setup guide at https://documentation.suse.com/sbp/all/single-html/SLES4SAP-hana-sr-guide-costopt-15/ explains installation and configuration step-by-step.
Where can I find further information?
Find more information in SUSE blogs, in the setup guide for the SAP HANA scale-up cost-optimized scenario, and in the manual pages shipped with the product.
– SUSE blogs
https://www.suse.com/c/tag/towardszerodowntime/
– Setup guides
https://documentation.suse.com/sbp/all/single-html/SLES4SAP-hana-sr-guide-costopt-15
https://documentation.suse.com/sbp/sap/
– Manual pages
apropos(1), cs_man2pdf(8), SAPHanaSR(7), SAPHanaSR.py(7), susCostOpt.py(7), ocf_suse_SAPHana(7), ocf_heartbeat_SAPInstance(7)