How to rename a cluster resource without stopping it

This document (7018672) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise High Availability Extension 12
SUSE Linux Enterprise High Availability Extension 11

Situation

When a resource is renamed while the cluster is in maintenance mode, it can be observed that the resource is stopped under its old name and started under its new name.

Assuming there is a resource called dummy4 that is renamed to dummy5, lrmd reports the following in /var/log/messages:
2017-02-24T11:32:12.984522+01:00 sles12cluster2 root: Dummy2 start
2017-02-24T11:32:12.985718+01:00 sles12cluster2 lrmd[2291]:   notice: finished - rsc:dummy5 action:start call_id:36 pid:2669 exit-code:0 exec-time:27ms queue-time:0ms
2017-02-24T11:32:12.987163+01:00 sles12cluster2 root: Dummy2 stop
2017-02-24T11:32:12.987988+01:00 sles12cluster2 lrmd[2291]:   notice: finished - rsc:dummy4 action:stop call_id:37 pid:2670 exit-code:0 exec-time:27ms queue-time:1ms
The desired behavior is to rename the resource without any stop/start operation being executed.
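
A scenario like the one above can be set up for testing with a simple Dummy resource, for example (the resource agent and monitor interval below are chosen arbitrarily for illustration):
# create a throw-away test resource and confirm it is running
crm configure primitive dummy4 ocf:heartbeat:Dummy op monitor interval=30s
crm status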

Resolution

The following steps allow a given resource to be renamed without the cluster stopping and starting it:
crm configure property maintenance-mode=true
crm edit
<now apply the desired rename to the CIB>
crm resource reprobe
crm resource cleanup
crm configure property maintenance-mode=false
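
For illustration, the crm edit step can also be performed non-interactively by dumping, modifying and reloading the configuration. The sketch below assumes the old resource name (dummy4) does not appear as a substring of any other identifier in the configuration, and uses an arbitrary temporary file name:
crm configure property maintenance-mode=true
# dump the configuration, rename the resource, and load the result back
crm configure show > /tmp/cib-rename.txt
sed -i 's/\bdummy4\b/dummy5/g' /tmp/cib-rename.txt
crm configure load replace /tmp/cib-rename.txt
# let the cluster re-detect the resource under its new name and clear the old state
crm resource reprobe
crm resource cleanup
crm configure property maintenance-mode=false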

Additional Information

Example log output from renaming a resource from dummy5 to dummy6 with the steps above:
sles12cluster1:~ # grep -E "dummy5|dummy6" /var/log/messages
2017-02-24T11:32:01.624576+01:00 sles12cluster1 crmd[1386]:   notice: Initiating monitor operation dummy5_monitor_0 on sles12cluster2
2017-02-24T11:32:01.628872+01:00 sles12cluster1 crmd[1386]:   notice: Initiating monitor operation dummy5_monitor_0 locally on sles12cluster1
2017-02-24T11:32:01.647228+01:00 sles12cluster1 crmd[1386]:   notice: Result of probe operation for dummy5 on sles12cluster1: 7 (not running)
2017-02-24T11:32:12.954680+01:00 sles12cluster1 pengine[1385]:   notice: Start   dummy5#011(sles12cluster2)
2017-02-24T11:32:12.957471+01:00 sles12cluster1 crmd[1386]:   notice: Initiating start operation dummy5_start_0 on sles12cluster2
2017-02-24T11:44:07.554338+01:00 sles12cluster1 pengine[1385]:  warning: Cluster configured not to stop active orphans. dummy5 must be stopped manually on sles12cluster2
2017-02-24T11:44:07.554607+01:00 sles12cluster1 pengine[1385]:   notice: Removing dummy5 from sles12cluster2
2017-02-24T11:44:07.554848+01:00 sles12cluster1 pengine[1385]:   notice: Removing dummy5 from sles12cluster1
2017-02-24T11:44:07.557720+01:00 sles12cluster1 crmd[1386]:   notice: Initiating monitor operation dummy6_monitor_0 on sles12cluster2
2017-02-24T11:44:07.559583+01:00 sles12cluster1 crmd[1386]:   notice: Initiating monitor operation dummy6_monitor_0 locally on sles12cluster1
2017-02-24T11:44:07.592502+01:00 sles12cluster1 crmd[1386]:   notice: Result of probe operation for dummy6 on sles12cluster1: 7 (not running)
2017-02-24T11:45:17.071011+01:00 sles12cluster1 crmd[1386]:   notice: Initiating monitor operation dummy6_monitor_0 on sles12cluster2
2017-02-24T11:45:17.076135+01:00 sles12cluster1 crmd[1386]:   notice: Initiating monitor operation dummy6_monitor_0 locally on sles12cluster1
2017-02-24T11:45:17.106359+01:00 sles12cluster1 crmd[1386]:   notice: Result of probe operation for dummy6 on sles12cluster1: 7 (not running)
2017-02-24T11:46:40.363056+01:00 sles12cluster1 crmd[1386]:   notice: Initiating monitor operation dummy6_monitor_0 on sles12cluster2
2017-02-24T11:46:40.364231+01:00 sles12cluster1 crmd[1386]:   notice: Initiating monitor operation dummy6_monitor_0 locally on sles12cluster1
2017-02-24T11:46:40.388282+01:00 sles12cluster1 crmd[1386]:   notice: Result of probe operation for dummy6 on sles12cluster1: 7 (not running)
2017-02-24T11:47:12.458297+01:00 sles12cluster1 pengine[1385]:   notice: Start   dummy6#011(sles12cluster2)
2017-02-24T11:47:12.462355+01:00 sles12cluster1 crmd[1386]:   notice: Initiating start operation dummy6_start_0 on sles12cluster2
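
To verify that the rename did not cause the resource to be stopped under its old name, the cluster status and the lrmd messages can be checked after leaving maintenance mode (resource names and log format as in the example above):
# the resource should be reported as started under its new name
crm status
# no stop operation for the old name should have been executed
grep "rsc:dummy5 action:stop" /var/log/messages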

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7018672
  • Creation Date: 28-Feb-2017
  • Modified Date: 19-Jul-2023
  • SUSE Linux Enterprise High Availability Extension
