Cluster settings that affect the number of actions or jobs that can be executed in parallel
This document (7024060) is provided subject to the disclaimer at the end of this document.
SUSE Linux Enterprise High Availability Extension 12
SUSE Linux Enterprise High Availability Extension 11 SP4
Details: This option has been deprecated in favor of node-action-limit, but if it is set it still limits the number of in-flight actions that run on a cluster node. It is kept for backward compatibility.
Action: You can comment this out and use "node-action-limit" instead.
Note: The code for this option was dropped in SLE 15 SP0.
2. node-action-limit= --> Cluster property --> cib-bootstrap-option --> node-action-limit=
Details: This is a per-node limit: the number of in-flight actions that can run on a single cluster node.
** It defaults to twice the number of CPU cores.
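As a sketch, assuming crmsh is available, the per-node limit could be set like this (the value 8 is only an illustrative example; choose a value appropriate for your nodes):

```shell
# Example only: cap in-flight actions at 8 per node
crm configure property node-action-limit=8

# Verify the property in the cib-bootstrap-options set
crm configure show cib-bootstrap-options
```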
3. batch-limit= --> Cluster property --> cib-bootstrap-option --> batch-limit=
Details: This is a cluster-wide limit on the number of actions.
** The number of jobs that the Transition Engine (TE) is allowed to execute in parallel. The TE is the logic in pacemaker’s CRMd that executes the actions determined by the Policy Engine (PE). The "correct" value will depend on the speed and load of your network and cluster nodes.
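For example, assuming crmsh and the pacemaker CLI tools are available, a cluster-wide limit could be set and later removed like this (30 is only an illustrative value):

```shell
# Example only: allow at most 30 actions in flight cluster-wide
crm configure property batch-limit=30

# Remove the property again to fall back to the default behavior
crm_attribute --name batch-limit --delete
```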
Note: These limits are loaded into memory at startup and enforced by the DC node. To pick up new values, the pacemaker stack can be restarted on each node, one node at a time.
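A rolling restart to pick up the new values might look like this (illustration only; run on one node at a time, and wait for the node to rejoin before moving to the next):

```shell
# On the current node:
systemctl restart pacemaker

# Confirm the node shows as online again before restarting the next one
crm_mon -1
```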
Cluster Resource Manager (CRM) logic:
1) Check whether the number of in-flight actions across the cluster has reached the cluster-wide limit (batch-limit).
* If so, hold the action.
* If not, go to step 2).
2) Check whether the number of in-flight actions on the target node has reached the per-node limit (node-action-limit).
* If so, hold the action.
* If not, execute the action.
Note: The CRM also monitors system load, and may throttle and delay scheduling actions even if batch-limit and node-action-limit haven't been reached.
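The decision flow above can be sketched roughly as follows (hypothetical values and function name; the real checks live inside pacemaker's CRMd on the DC):

```shell
#!/bin/sh
# Hypothetical illustration of the DC's admission check for one pending action.
batch_limit=30        # cluster-wide limit (example value)
node_action_limit=8   # per-node limit (example value)

in_flight_total=5     # actions currently in flight cluster-wide (example)
in_flight_node=2      # actions currently in flight on the target node (example)

can_run() {
    # Step 1: cluster-wide check (batch-limit); this sketch treats 0 as unlimited
    if [ "$batch_limit" -gt 0 ] && [ "$in_flight_total" -ge "$batch_limit" ]; then
        echo hold; return
    fi
    # Step 2: per-node check (node-action-limit)
    if [ "$in_flight_node" -ge "$node_action_limit" ]; then
        echo hold; return
    fi
    echo run
}

can_run   # prints "run" with the example numbers above
```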
- Document ID: 7024060
- Creation Date: 13-Aug-2019
- Modified Date: 27-Apr-2021
- SUSE Linux Enterprise High Availability Extension
For questions or concerns with the SUSE Knowledgebase please contact: firstname.lastname@example.org