PACEMAKER-CONTROLD(7) | Pacemaker Configuration | PACEMAKER-CONTROLD(7)
NAME
pacemaker-controld - Pacemaker controller options
SYNOPSIS
[dc-version=string] [cluster-infrastructure=string] [cluster-name=string] [dc-deadtime=time] [cluster-recheck-interval=time] [load-threshold=percentage] [node-action-limit=integer] [fence-reaction=string] [election-timeout=time] [shutdown-escalation=time] [join-integration-timeout=time] [join-finalization-timeout=time] [transition-delay=time] [stonith-watchdog-timeout=time] [stonith-max-attempts=integer] [no-quorum-policy=select] [shutdown-lock=boolean]
DESCRIPTION
Cluster options used by Pacemaker's controller
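These options are set as cluster properties in the CIB. As a hedged sketch (crm_attribute ships with Pacemaker; the option and values below are purely illustrative), a controller option can be queried or updated from the shell:

    # Query the current value of a controller option
    crm_attribute --type crm_config --name dc-deadtime --query
    # Update it (here, illustratively, to 30 seconds)
    crm_attribute --type crm_config --name dc-deadtime --update 30s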
SUPPORTED PARAMETERS
dc-version = string [none]
Version of Pacemaker on the cluster node elected Designated Controller (DC). Includes a hash which identifies the exact changeset the code was built from. Used for diagnostic purposes.
cluster-infrastructure = string [corosync]
The messaging layer on which Pacemaker is currently running. Used for informational and diagnostic purposes.
cluster-name = string
This optional value is mostly for users' convenience as desired in administration, but may also be used in Pacemaker configuration rules via the #cluster-name node attribute, and by higher-level tools and resource agents.
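For example (an illustrative sketch; "prod-cluster" is a placeholder name):

    # Set the cluster name, then confirm it
    crm_attribute --type crm_config --name cluster-name --update prod-cluster
    crm_attribute --type crm_config --name cluster-name --query

The value set this way is what rules see through the #cluster-name node attribute.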
dc-deadtime = time [20s]
How long to wait for a response from other nodes during start-up. The optimal value will depend on the speed and load of your network and the type of switches used.
cluster-recheck-interval = time [15min]
Pacemaker is primarily event-driven, and looks ahead to know when to recheck cluster state for failure timeouts and most time-based rules. However, it will also recheck the cluster after this amount of inactivity, to evaluate rules with date specifications and serve as a fail-safe for certain types of scheduler bugs. Allowed values: Zero disables polling, while positive values are an interval in seconds (unless other units are specified, for example "5min").
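For instance (illustrative only; the 5-minute value is an assumption, not a recommendation):

    # Recheck the cluster after at most 5 minutes of inactivity
    crm_attribute --type crm_config --name cluster-recheck-interval --update 5min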
load-threshold = percentage [80%]
The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit.
node-action-limit = integer [0]
Maximum number of jobs that can be scheduled per node. Defaults to 2x cores.
fence-reaction = string [stop]
A cluster node may receive notification of its own fencing if fencing is misconfigured, or if fabric fencing is in use that doesn't cut cluster communication. Allowed values are "stop" to attempt to immediately stop Pacemaker and stay stopped, or "panic" to attempt to immediately reboot the local node, falling back to stop on failure.
election-timeout = time [2min]
Declare an election failed if it is not decided within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
shutdown-escalation = time [20min]
Exit immediately if shutdown does not complete within this much time. If you need to adjust this value, it probably indicates the presence of a bug.
join-integration-timeout = time [3min]
If you need to adjust this value, it probably indicates the presence of a bug.
join-finalization-timeout = time [30min]
If you need to adjust this value, it probably indicates the presence of a bug.
transition-delay = time [0s]
Delay cluster recovery for this much time to allow for additional events to occur. Useful if your configuration is sensitive to the order in which ping updates arrive.
stonith-watchdog-timeout = time [0]
If this is set to a positive value, lost nodes are assumed to self-fence using watchdog-based SBD within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the `SBD_WATCHDOG_TIMEOUT` environment variable if that is positive, or otherwise treat this as 0.

WARNING: When used, this timeout must be larger than `SBD_WATCHDOG_TIMEOUT` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, `SBD_WATCHDOG_TIMEOUT` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur.
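A hedged sketch of both modes (the 10-second SBD_WATCHDOG_TIMEOUT is an assumption for illustration; the sbd configuration file is typically /etc/sysconfig/sbd or /etc/default/sbd depending on the distribution):

    # With SBD_WATCHDOG_TIMEOUT=10 on every node, pick an explicit value larger than 10s
    crm_attribute --type crm_config --name stonith-watchdog-timeout --update 20s
    # Or use a negative value so the cluster derives twice SBD_WATCHDOG_TIMEOUT
    crm_attribute --type crm_config --name stonith-watchdog-timeout --update=-1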
stonith-max-attempts = integer [10]
How many times fencing can fail before it will no longer be immediately re-attempted on a target.
no-quorum-policy = select [stop]
What to do when the cluster does not have quorum. Allowed values: stop, freeze, ignore, demote, suicide
shutdown-lock = boolean [false]
When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
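For example (illustrative; the 30-minute limit is an arbitrary choice, and shutdown-lock-limit is the companion option mentioned above):

    # Keep resources locked to a cleanly shut-down node, for at most 30 minutes
    crm_attribute --type crm_config --name shutdown-lock --update true
    crm_attribute --type crm_config --name shutdown-lock-limit --update 30min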
AUTHOR
Andrew Beekhof <andrew@beekhof.net>
07/09/2023 | Pacemaker Configuration