.TH PCS "8" "November 2018" "pcs 0.10.1" "System Administration Utilities"
.SH NAME
pcs \- pacemaker/corosync configuration system
.SH SYNOPSIS
.B pcs
[\fI\-f file\fR] [\fI\-h\fR] [\fIcommands\fR]...
.SH DESCRIPTION
Control and configure pacemaker and corosync.
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
Display usage and exit.
.TP
\fB\-f\fR file
Perform actions on file instead of active CIB.
.TP
\fB\-\-debug\fR
Print all network traffic and external commands run.
.TP
\fB\-\-version\fR
Print pcs version information. List pcs capabilities if \fB\-\-full\fR is specified.
.TP
\fB\-\-request\-timeout\fR=<timeout>
Timeout for each outgoing request to another node in seconds. Default is 60s.
.SS "Commands:"
.TP
cluster
Configure cluster options and nodes.
.TP
resource
Manage cluster resources.
.TP
stonith
Manage fence devices.
.TP
constraint
Manage resource constraints.
.TP
property
Manage pacemaker properties.
.TP
acl
Manage pacemaker access control lists.
.TP
qdevice
Manage quorum device provider on the local host.
.TP
quorum
Manage cluster quorum settings.
.TP
booth
Manage booth (cluster ticket manager).
.TP
status
View cluster status.
.TP
config
View and manage cluster configuration.
.TP
pcsd
Manage pcs daemon.
.TP
host
Manage hosts known to pcs/pcsd.
.TP
node
Manage cluster nodes.
.TP
alert
Manage pacemaker alerts.
.TP
client
Manage pcsd client configuration.
.SS "resource"
.TP
[status [\fB\-\-hide\-inactive\fR]]
Show status of all currently configured resources. If \fB\-\-hide\-inactive\fR is specified, only show active resources.
.TP
config [<resource id>]...
Show options of all currently configured resources or, if resource ids are specified, show the options for the specified resource ids.
.TP
list [filter] [\fB\-\-nodesc\fR]
Show list of all available resource agents (if filter is provided then only resource agents matching the filter will be shown). If \fB\-\-nodesc\fR is used then descriptions of resource agents are not printed.
.TP
describe [<standard>:[<provider>:]]<type> [\fB\-\-full\fR]
Show options for the specified resource. If \fB\-\-full\fR is specified, all options including advanced and deprecated ones are shown.
.TP
create <resource id> [<standard>:[<provider>:]]<type> [resource options] [\fBop\fR <operation action> <operation options> [<operation action> <operation options>]...] [\fBmeta\fR <meta options>...] [\fBclone\fR [<clone options>] | promotable | \fB\-\-group\fR <group id> [\fB\-\-before\fR <resource id> | \fB\-\-after\fR <resource id>] | \fBbundle\fR <bundle id>] [\fB\-\-disabled\fR] [\fB\-\-no\-default\-ops\fR] [\fB\-\-wait\fR[=n]]
Create specified resource. If \fBclone\fR is used a clone resource is created. If \fBpromotable\fR is used a promotable clone resource is created. If \fB\-\-group\fR is specified the resource is added to the group named. You can use \fB\-\-before\fR or \fB\-\-after\fR to specify the position of the added resource relative to some resource already existing in the group. If \fBbundle\fR is specified, the resource will be created inside of the specified bundle. If \fB\-\-disabled\fR is specified the resource is not started automatically. If \fB\-\-no\-default\-ops\fR is specified, only monitor operations are created for the resource and all other operations use default settings. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started, or 1 if the resource has not yet started. If 'n' is not specified it defaults to 60 minutes.

Example: Create a new resource called 'VirtualIP' with IP address 192.168.0.99, netmask of 32, monitored every 30 seconds, on eth2:
.br
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
.TP
delete <resource id|group id|bundle id|clone id>
Deletes the resource, group, bundle or clone (and all resources within the group/bundle/clone).
.TP
remove <resource id|group id|bundle id|clone id>
Deletes the resource, group, bundle or clone (and all resources within the group/bundle/clone).
.TP
enable <resource id>... [\fB\-\-wait\fR[=n]]
Allow the cluster to start the resources. Depending on the rest of the configuration (constraints, options, failures, etc), the resources may remain stopped. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resources to start and then return 0 if the resources are started, or 1 if the resources have not yet started. If 'n' is not specified it defaults to 60 minutes.
.TP
disable <resource id>... [\fB\-\-wait\fR[=n]]
Attempt to stop the resources if they are running and forbid the cluster from starting them again. Depending on the rest of the configuration (constraints, options, failures, etc), the resources may remain started. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resources to stop and then return 0 if the resources are stopped or 1 if the resources have not stopped. If 'n' is not specified it defaults to 60 minutes.
.TP
restart <resource id> [node] [\fB\-\-wait\fR=n]
Restart the resource specified. If a node is specified and if the resource is a clone or bundle it will be restarted only on the node specified. If \fB\-\-wait\fR is specified, then we will wait up to 'n' seconds for the resource to be restarted and return 0 if the restart was successful or 1 if it was not.
.TP
debug\-start <resource id> [\fB\-\-full\fR]
This command will force the specified resource to start on this node ignoring the cluster recommendations and print the output from starting the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to start.
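
Example (illustrative; assumes the 'VirtualIP' resource from the create example above exists):
.br
pcs resource debug\-start VirtualIP \-\-full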
.TP
debug\-stop <resource id> [\fB\-\-full\fR]
This command will force the specified resource to stop on this node ignoring the cluster recommendations and print the output from stopping the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to stop.
.TP
debug\-promote <resource id> [\fB\-\-full\fR]
This command will force the specified resource to be promoted on this node ignoring the cluster recommendations and print the output from promoting the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to promote.
.TP
debug\-demote <resource id> [\fB\-\-full\fR]
This command will force the specified resource to be demoted on this node ignoring the cluster recommendations and print the output from demoting the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to demote.
.TP
debug\-monitor <resource id> [\fB\-\-full\fR]
This command will force the specified resource to be monitored on this node ignoring the cluster recommendations and print the output from monitoring the resource. Using \fB\-\-full\fR will give more detailed output. This is mainly used for debugging resources that fail to be monitored.
.TP
move <resource id> [destination node] [\fB\-\-master\fR] [lifetime=<lifetime>] [\fB\-\-wait\fR[=n]]
Move the resource off the node it is currently running on by creating a \-INFINITY location constraint to ban the node. If a destination node is specified the resource will be moved to that node by creating an INFINITY location constraint to prefer the destination node. If \fB\-\-master\fR is used the scope of the command is limited to the master role and you must use the promotable clone id (instead of the resource id). If lifetime is specified then the constraint will expire after that time, otherwise it defaults to infinity and the constraint can be cleared manually with 'pcs resource clear' or 'pcs constraint delete'. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes. If you want the resource to preferably avoid running on some nodes but be able to failover to them use 'pcs constraint location avoids'.
.TP
ban <resource id> [node] [\fB\-\-master\fR] [lifetime=<lifetime>] [\fB\-\-wait\fR[=n]]
Prevent the resource id specified from running on the node (or on the current node it is running on if no node is specified) by creating a \-INFINITY location constraint. If \fB\-\-master\fR is used the scope of the command is limited to the master role and you must use the promotable clone id (instead of the resource id). If lifetime is specified then the constraint will expire after that time, otherwise it defaults to infinity and the constraint can be cleared manually with 'pcs resource clear' or 'pcs constraint delete'. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes. If you want the resource to preferably avoid running on some nodes but be able to failover to them use 'pcs constraint location avoids'.
.TP
clear <resource id> [node] [\fB\-\-master\fR] [\fB\-\-wait\fR[=n]]
Remove constraints created by move and/or ban on the specified resource (and node if specified). If \fB\-\-master\fR is used the scope of the command is limited to the master role and you must use the master id (instead of the resource id). If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting and/or moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
standards
List available resource agent standards supported by this installation (OCF, LSB, etc.).
.TP
providers
List available OCF resource agent providers.
.TP
agents [standard[:provider]]
List available agents optionally filtered by standard and provider.
.TP
update <resource id> [resource options] [op [<operation action> <operation options>]...] [meta <meta options>...] [\fB\-\-wait\fR[=n]]
Add or change options of the specified resource, clone or multi\-state resource. If an operation (op) is specified it will update the first found operation with the same action on the specified resource; if no operation with that action exists then a new operation will be created. (WARNING: all existing options on the updated operation will be reset if not specified.) If you want to create multiple monitor operations you should use the 'op add' & 'op remove' commands. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the changes to take effect and then return 0 if the changes have been processed or 1 otherwise. If 'n' is not specified it defaults to 60 minutes.
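
Example (illustrative; assumes the 'VirtualIP' resource from the create example above): change the address and the monitor interval:
.br
pcs resource update VirtualIP ip=192.168.0.100 op monitor interval=60s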
.TP
op add <resource id> <operation action> [operation properties]
Add operation for the specified resource.
.TP
op delete <resource id> <operation action> [<operation properties>...]
Remove specified operation (note: you must specify the exact operation properties to properly remove an existing operation).
.TP
op delete <operation id>
Remove the specified operation id.
.TP
op remove <resource id> <operation action> [<operation properties>...]
Remove specified operation (note: you must specify the exact operation properties to properly remove an existing operation).
.TP
op remove <operation id>
Remove the specified operation id.
.TP
op defaults [options]
Set default values for operations. If no options are passed, lists currently configured defaults. Defaults do not apply to resources which override them with their own defined operations.
.TP
meta <resource id | group id | clone id> <meta options> [\fB\-\-wait\fR[=n]]
Add specified options to the specified resource, group or clone. Meta options should be in the format of name=value; options may be removed by setting an option without a value. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the changes to take effect and then return 0 if the changes have been processed or 1 otherwise. If 'n' is not specified it defaults to 60 minutes.
.br
Example: pcs resource meta TestResource failure\-timeout=50 stickiness=
.TP
group list
Show all currently configured resource groups and their resources.
.TP
group add <group id> <resource id> [resource id] ... [resource id] [\fB\-\-before\fR <resource id> | \fB\-\-after\fR <resource id>] [\fB\-\-wait\fR[=n]]
Add the specified resource to the group, creating the group if it does not exist. If the resource is present in another group it is moved to the new group. You can use \fB\-\-before\fR or \fB\-\-after\fR to specify the position of the added resources relative to some resource already existing in the group. By adding resources to a group they are already in and specifying \fB\-\-after\fR or \fB\-\-before\fR you can move the resources in the group. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
group delete <group id> <resource id> [resource id] ... [resource id] [\fB\-\-wait\fR[=n]]
Remove the specified resource(s) from the group, removing the group if no resources remain in it. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
group remove <group id> <resource id> [resource id] ... [resource id] [\fB\-\-wait\fR[=n]]
Remove the specified resource(s) from the group, removing the group if no resources remain in it. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
ungroup <group id> [resource id] ... [resource id] [\fB\-\-wait\fR[=n]]
Remove the group (note: this does not remove any resources from the cluster) or, if resources are specified, remove the specified resources from the group. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
clone <resource id | group id> [clone options]... [\fB\-\-wait\fR[=n]]
Set up the specified resource or group as a clone. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting clone instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
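
Example (illustrative; 'WebServer' is a placeholder resource id and clone\-max is a pacemaker clone option, see pacemaker documentation): run at most two instances of the clone:
.br
pcs resource clone WebServer clone\-max=2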
.TP
promotable <resource id | group id> [clone options]... [\fB\-\-wait\fR[=n]]
Set up the specified resource or group as a promotable clone. This is an alias for 'pcs resource clone <resource id> promotable=true'.
.TP
unclone <resource id | group id> [\fB\-\-wait\fR[=n]]
Remove the clone which contains the specified group or resource (the resource or group will not be removed). If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including stopping clone instances if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
bundle create <bundle id> container <container type> [<container options>] [network <network options>] [port\-map <port options>]... [storage\-map <storage options>]... [meta <meta options>] [\fB\-\-disabled\fR] [\fB\-\-wait\fR[=n]]
Create a new bundle encapsulating no resources. The bundle can be used either as it is or a resource may be put into it at any time. If \fB\-\-disabled\fR is specified, the bundle is not started automatically. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the bundle to start and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
bundle update <bundle id> [container <container options>] [network <network options>] [port\-map (add <port options>) | (delete | remove <id>...)]... [storage\-map (add <storage options>) | (delete | remove <id>...)]... [meta <meta options>] [\fB\-\-wait\fR[=n]]
Add, remove or change options of the specified bundle. If you wish to update a resource encapsulated in the bundle, use the 'pcs resource update' command instead and specify the resource id. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
.TP
manage <resource id>... [\fB\-\-monitor\fR]
Set resources listed to managed mode (default). If \fB\-\-monitor\fR is specified, enable all monitor operations of the resources.
.TP
unmanage <resource id>... [\fB\-\-monitor\fR]
Set resources listed to unmanaged mode. When a resource is in unmanaged mode, the cluster is not allowed to start nor stop the resource. If \fB\-\-monitor\fR is specified, disable all monitor operations of the resources.
.TP
defaults [options]
Set default values for resources. If no options are passed, lists currently configured defaults. Defaults do not apply to resources which override them with their own defined values.
.TP
cleanup [<resource id>] [node=<node>] [operation=<operation> [interval=<interval>]]
Make the cluster forget failed operations from the history of the resource and re\-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved. If a resource id is not specified then all resources / stonith devices will be cleaned up. If a node is not specified then resources / stonith devices on all nodes will be cleaned up.
.TP
refresh [<resource id>] [node=<node>] [\fB\-\-full\fR]
Make the cluster forget the complete operation history (including failures) of the resource and re\-detect its current state. If you are interested in forgetting failed operations only, use the 'pcs resource cleanup' command. If a resource id is not specified then all resources / stonith devices will be refreshed. If a node is not specified then resources / stonith devices on all nodes will be refreshed. Use \fB\-\-full\fR to refresh a resource on all nodes, otherwise only nodes where the resource's state is known will be considered.
.TP
failcount show [<resource id>] [node=<node>] [operation=<operation> [interval=<interval>]] [\fB\-\-full\fR]
Show current failcount for resources, optionally filtered by a resource, node, operation and its interval. If \fB\-\-full\fR is specified do not sum failcounts per resource and node. Use 'pcs resource cleanup' or 'pcs resource refresh' to reset failcounts.
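
Example (illustrative; assumes the 'VirtualIP' resource from the create example above and a node named 'node1'): show monitor failures of VirtualIP on node1:
.br
pcs resource failcount show VirtualIP node=node1 operation=monitor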
.TP
relocate dry\-run [resource1] [resource2] ...
The same as 'relocate run' but has no effect on the cluster.
.TP
relocate run [resource1] [resource2] ...
Relocate specified resources to their preferred nodes. If no resources are specified, relocate all resources. This command calculates the preferred node for each resource while ignoring resource stickiness. Then it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved the constraints are deleted automatically. Note that the preferred node is calculated based on current cluster status, constraints, location of resources and other settings and thus it might change over time.
.TP
relocate show
Display current status of resources and their optimal node ignoring resource stickiness.
.TP
relocate clear
Remove all constraints created by the 'relocate run' command.
.TP
utilization [<resource id> [<name>=<value> ...]]
Add specified utilization options to the specified resource. If resource is not specified, shows utilization of all resources. If utilization options are not specified, shows utilization of the specified resource. Utilization options should be in the format name=value; value has to be an integer. Options may be removed by setting an option without a value.
.br
Example: pcs resource utilization TestResource cpu= ram=20
.SS "cluster"
.TP
setup <cluster name> (<node name> [addr=<node address>]...)... [transport knet|udp|udpu [<transport options>] [link <link options>] [compression <compression options>] [crypto <crypto options>]] [totem <totem options>] [quorum <quorum options>] [\fB\-\-enable\fR] [\fB\-\-start\fR [\fB\-\-wait\fR[=<n>]]] [\fB\-\-no\-keys\-sync\fR]
Create a cluster from the listed nodes and synchronize cluster configuration files to them.
.br
Nodes are specified by their names and optionally their addresses. If no addresses are specified for a node, pcs will configure corosync to communicate with that node using an address provided in the 'pcs host auth' command. Otherwise, pcs will configure corosync to communicate with the node using the specified addresses.

Transport knet:
.br
This is the default transport. It allows configuring traffic encryption and compression as well as using multiple addresses (links) for nodes.
.br
Transport options are: ip_version, knet_pmtud_interval, link_mode
.br
Link options are: ip_version, link_priority, linknumber, mcastport, ping_interval, ping_precision, ping_timeout, pong_count, transport (udp or sctp)
.br
Compression options are: level, model, threshold
.br
Crypto options are: cipher, hash, model
.br
By default, encryption is enabled with cipher=aes256 and hash=sha256. To disable encryption, set cipher=none and hash=none.

Transports udp and udpu:
.br
These transports are limited to one address per node. They do not support traffic encryption nor compression.
.br
Transport options are: ip_version, netmtu
.br
Link options are: bindnetaddr, broadcast, mcastaddr, mcastport, ttl

Totem and quorum can be configured regardless of the used transport.
.br
Totem options are: consensus, downcheck, fail_recv_const, heartbeat_failures_allowed, hold, join, max_messages, max_network_delay, merge, miss_count_const, send_join, seqno_unchanged_const, token, token_coefficient, token_retransmit, token_retransmits_before_loss_const, window_size
.br
Quorum options are: auto_tie_breaker, last_man_standing, last_man_standing_window, wait_for_all

Transports and their options, link, compression, crypto and totem options are all documented in the corosync.conf(5) man page; knet link options are prefixed 'knet_' there, compression options are prefixed 'knet_compression_' and crypto options are prefixed 'crypto_'.
.br
Quorum options are documented in the votequorum(5) man page.

\fB\-\-enable\fR will configure the cluster to start when the nodes boot. \fB\-\-start\fR will start the cluster right after creating it. \fB\-\-wait\fR will wait up to 'n' seconds for the cluster to start. \fB\-\-no\-keys\-sync\fR will skip creating and distributing the pcsd SSL certificate and key and the corosync and pacemaker authkey files. Use this if you provide your own certificates and keys.

Examples:
.br
Create a cluster with default settings:
.br
pcs cluster setup newcluster node1 node2
.br
Create a cluster using two links:
.br
pcs cluster setup newcluster node1 addr=10.0.1.11 addr=10.0.2.11 node2 addr=10.0.1.12 addr=10.0.2.12
.br
Create a cluster using udp transport with a non\-default port:
.br
pcs cluster setup newcluster node1 node2 transport udp link mcastport=55405
.TP
start [\fB\-\-all\fR | <node name>... ] [\fB\-\-wait\fR[=<n>]] [\fB\-\-request\-timeout\fR=<seconds>]
Start a cluster on the specified node(s). If no nodes are specified then start a cluster on the local node. If \fB\-\-all\fR is specified then start a cluster on all nodes. If the cluster has many nodes then the start request may time out. In that case you should consider setting \fB\-\-request\-timeout\fR to a suitable value. If \fB\-\-wait\fR is specified, pcs waits up to 'n' seconds for the cluster to get ready to provide services after the cluster has successfully started.
.TP
stop [\fB\-\-all\fR | <node name>... ] [\fB\-\-request\-timeout\fR=<seconds>]
Stop a cluster on the specified node(s). If no nodes are specified then stop a cluster on the local node. If \fB\-\-all\fR is specified then stop a cluster on all nodes. If the cluster is running resources which take a long time to stop then the stop request may time out before the cluster actually stops. In that case you should consider setting \fB\-\-request\-timeout\fR to a suitable value.
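
Example (illustrative; the timeout value is arbitrary): stop the cluster on all nodes, allowing two minutes for each request:
.br
pcs cluster stop \-\-all \-\-request\-timeout=120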
.TP
kill
Force corosync and pacemaker daemons to stop on the local node (performs kill \-9). Note that the init system (e.g. systemd) can detect that the cluster is not running and start it again. If you want to stop the cluster on a node, run pcs cluster stop on that node.
.TP
enable [\fB\-\-all\fR | <node name>... ]
Configure the cluster to run on node boot on the specified node(s). If no node is specified then the cluster is enabled on the local node. If \fB\-\-all\fR is specified then the cluster is enabled on all nodes.
.TP
disable [\fB\-\-all\fR | <node name>... ]
Configure the cluster to not run on node boot on the specified node(s). If no node is specified then the cluster is disabled on the local node. If \fB\-\-all\fR is specified then the cluster is disabled on all nodes.
.TP
auth [\fB\-u\fR <username>] [\fB\-p\fR <password>]
Authenticate pcs/pcsd to pcsd on nodes configured in the local cluster.
.TP
status
View current cluster status (an alias of 'pcs status cluster').
.TP
pcsd\-status [<node name>]...
Show current status of pcsd on the nodes specified, or on all nodes configured in the local cluster if no nodes are specified.
.TP
sync
Sync cluster configuration (files which are supported by all subcommands of this command) to all cluster nodes.
.TP
sync corosync
Sync the corosync configuration to all nodes found in the current corosync.conf file.
.TP
cib [filename] [scope=<scope> | \fB\-\-config\fR]
Get the raw xml from the CIB (Cluster Information Base). If a filename is provided, we save the CIB to that file, otherwise the CIB is printed. Specify scope to get a specific section of the CIB. Valid values of the scope are: configuration, nodes, resources, constraints, crm_config, rsc_defaults, op_defaults, status. \fB\-\-config\fR is the same as scope=configuration. Do not specify a scope if you want to edit the saved CIB using pcs (pcs \-f <filename>).
.TP
cib\-push <filename> [\fB\-\-wait\fR[=<n>]] [diff\-against=<filename_original> | scope=<scope> | \fB\-\-config\fR]
Push the raw xml from <filename> to the CIB (Cluster Information Base). You can obtain the CIB by running the 'pcs cluster cib' command, which is the recommended first step when you want to perform the desired modifications (pcs \fB\-f\fR <filename>) for a one\-off push. If diff\-against is specified, pcs diffs the contents of filename against the contents of filename_original and pushes the result to the CIB. Specify scope to push a specific section of the CIB. Valid values of the scope are: configuration, nodes, resources, constraints, crm_config, rsc_defaults, op_defaults. \fB\-\-config\fR is the same as scope=configuration. Use of \fB\-\-config\fR is recommended. Do not specify a scope if you need to push the whole CIB or be warned in the case of an outdated CIB. If \fB\-\-wait\fR is specified wait up to 'n' seconds for changes to be applied. WARNING: the selected scope of the CIB will be overwritten by the current content of the specified file.

Example:
.br
pcs cluster cib > original.xml
.br
cp original.xml new.xml
.br
pcs \-f new.xml constraint location apache prefers node2
.br
pcs cluster cib\-push new.xml diff\-against=original.xml
.TP
cib\-upgrade
Upgrade the CIB to conform to the latest version of the document schema.
.TP
edit [scope=<scope> | \fB\-\-config\fR]
Edit the cib in the editor specified by the $EDITOR environment variable and push out any changes upon saving. Specify scope to edit a specific section of the CIB. Valid values of the scope are: configuration, nodes, resources, constraints, crm_config, rsc_defaults, op_defaults. \fB\-\-config\fR is the same as scope=configuration. Use of \fB\-\-config\fR is recommended. Do not specify a scope if you need to edit the whole CIB or be warned in the case of an outdated CIB.
.TP
node add <node name> [addr=<node address>]... [watchdog=<watchdog path>] [device=<SBD device path>]... [\fB\-\-start\fR [\fB\-\-wait\fR[=<n>]]] [\fB\-\-enable\fR] [\fB\-\-no\-watchdog\-validation\fR]
Add the node to the cluster and synchronize all relevant configuration files to the new node. This command can only be run on an existing cluster node. The new node is specified by its name and optionally its addresses. If no addresses are specified for the node, pcs will configure corosync to communicate with the node using an address provided in the 'pcs host auth' command. Otherwise, pcs will configure corosync to communicate with the node using the specified addresses. Use 'watchdog' to specify a path to a watchdog on the new node, when SBD is enabled in the cluster. If SBD is configured with shared storage, use 'device' to specify the path to shared device(s) on the new node. If \fB\-\-start\fR is specified also start the cluster on the new node, if \fB\-\-wait\fR is specified wait up to 'n' seconds for the new node to start. If \fB\-\-enable\fR is specified configure the cluster to start on the new node on boot. If \fB\-\-no\-watchdog\-validation\fR is specified, validation of the watchdog will be skipped.
.B WARNING:
By default, it is tested whether the specified watchdog is supported. This may cause a restart of the system when a watchdog with the no\-way\-out feature enabled is present. Use \fB\-\-no\-watchdog\-validation\fR to skip watchdog validation.
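
Example (illustrative; extends the two\-link cluster from the setup example above, node name and addresses are placeholders):
.br
pcs cluster node add node3 addr=10.0.1.13 addr=10.0.2.13 \-\-start \-\-enable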
.TP
node delete <node name> [<node name>]...
Shutdown specified nodes and remove them from the cluster.
.TP
node remove <node name> [<node name>]...
Shutdown specified nodes and remove them from the cluster.
.TP
node add\-remote <node name> [<node address>] [options] [op <operation action> <operation options> [<operation action> <operation options>]...] [meta <meta options>...] [\fB\-\-wait\fR[=<n>]]
Add the node to the cluster as a remote node. Sync all relevant configuration files to the new node. Start the node and configure it to start the cluster on boot. Options are port and reconnect_interval. Operations and meta belong to an underlying connection resource (ocf:pacemaker:remote). If node address is not specified for the node, pcs will configure pacemaker to communicate with the node using an address provided in the 'pcs host auth' command. Otherwise, pcs will configure pacemaker to communicate with the node using the specified addresses. If \fB\-\-wait\fR is specified, wait up to 'n' seconds for the node to start.
.TP
node delete\-remote <node identifier>
Shutdown the specified remote node and remove it from the cluster. The node\-identifier can be the name of the node or the address of the node.
.TP
node remove\-remote <node identifier>
Shutdown the specified remote node and remove it from the cluster. The node\-identifier can be the name of the node or the address of the node.
.TP
node add\-guest <node name> <resource id> [options] [\fB\-\-wait\fR[=<n>]]
Make the specified resource a guest node resource. Sync all relevant configuration files to the new node. Start the node and configure it to start the cluster on boot. Options are remote\-addr, remote\-port and remote\-connect\-timeout. If remote\-addr is not specified for the node, pcs will configure pacemaker to communicate with the node using an address provided in the 'pcs host auth' command. Otherwise, pcs will configure pacemaker to communicate with the node using the specified addresses. If \fB\-\-wait\fR is specified, wait up to 'n' seconds for the node to start.
.TP
node delete\-guest <node identifier>
Shutdown the specified guest node and remove it from the cluster. The node\-identifier can be the name of the node, the address of the node or the id of the resource that is used as the guest node.
.TP
node remove\-guest <node identifier>
Shutdown the specified guest node and remove it from the cluster. The node\-identifier can be the name of the node, the address of the node or the id of the resource that is used as the guest node.
.TP
node clear <node name>
Remove the specified node from various cluster caches. Use this if a removed node is still considered by the cluster to be a member of the cluster.
.TP
uidgid
List the currently configured uids and gids of users allowed to connect to corosync.
.TP
uidgid add [uid=<uid>] [gid=<gid>]
Add the specified uid and/or gid to the list of users/groups allowed to connect to corosync.
.TP
uidgid delete [uid=<uid>] [gid=<gid>]
Remove the specified uid and/or gid from the list of users/groups allowed to connect to corosync.
.TP
uidgid remove [uid=<uid>] [gid=<gid>]
Remove the specified uid and/or gid from the list of users/groups allowed to connect to corosync.
.TP
corosync [node]
Get the corosync.conf from the specified node or from the current node if no node is specified.
.TP
reload corosync
Reload the corosync configuration on the current node.
.TP
destroy [\fB\-\-all\fR]
Permanently destroy the cluster on the current node, killing all cluster processes and removing all cluster configuration files. Using \fB\-\-all\fR will attempt to destroy the cluster on all nodes in the local cluster.
.B WARNING:
This command permanently removes any cluster configuration that has been created. It is recommended to run 'pcs cluster stop' before destroying the cluster.
.TP
verify [\fB\-\-full\fR] [\fB\-f\fR <filename>]
Checks the pacemaker configuration (CIB) for syntax and common conceptual errors. If no filename is specified the check is performed on the currently running cluster. If \fB\-\-full\fR is used more verbose output will be printed.
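
Example (illustrative): check the modified CIB saved as 'new.xml' in the cib\-push example above before pushing it:
.br
pcs cluster verify \-\-full \-f new.xml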
.TP
report [\fB\-\-from\fR "YYYY\-M\-D H:M:S" [\fB\-\-to\fR "YYYY\-M\-D H:M:S"]] <dest>
Create a tarball containing everything needed when reporting cluster problems. If \fB\-\-from\fR and \fB\-\-to\fR are not used, the report will include the past 24 hours.
.SS "stonith"
.TP
[status [\fB\-\-hide\-inactive\fR]]
Show status of all currently configured stonith devices. If \fB\-\-hide\-inactive\fR is specified, only show active stonith devices.
.TP
config [<stonith id>]...
Show options of all currently configured stonith devices or, if stonith ids are specified, show the options for the specified stonith device ids.
.TP
list [filter] [\fB\-\-nodesc\fR]
Show list of all available stonith agents (if filter is provided then only stonith agents matching the filter will be shown). If \fB\-\-nodesc\fR is used then descriptions of stonith agents are not printed.
.TP
describe <stonith agent> [\fB\-\-full\fR]
Show options for the specified stonith agent. If \fB\-\-full\fR is specified, all options including advanced and deprecated ones are shown.
.TP
create <stonith id> <stonith device type> [stonith device options] [op <operation action> <operation options> [<operation action> <operation options>]...] [meta <meta options>...] [\fB\-\-group\fR <group id> [\fB\-\-before\fR <stonith id> | \fB\-\-after\fR <stonith id>]] [\fB\-\-disabled\fR] [\fB\-\-wait\fR[=n]]
Create stonith device with specified type and options. If \fB\-\-group\fR is specified the stonith device is added to the group named. You can use \fB\-\-before\fR or \fB\-\-after\fR to specify the position of the added stonith device relative to some stonith device already existing in the group. If \fB\-\-disabled\fR is specified the stonith device is not used. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the stonith device to start and then return 0 if the stonith device is started, or 1 if the stonith device has not yet started. If 'n' is not specified it defaults to 60 minutes.

Example: Create a device for nodes node1 and node2
.br
pcs stonith create MyFence fence_virt pcmk_host_list=node1,node2
.br
Example: Use port p1 for node n1 and ports p2 and p3 for node n2
.br
pcs stonith create MyFence fence_virt 'pcmk_host_map=n1:p1;n2:p2,p3'
.TP
update <stonith id> [stonith device options]
Add/Change options of the specified stonith id.
.TP
delete <stonith id>
Remove stonith id from configuration.
.TP
remove <stonith id>
Remove stonith id from configuration.
.TP
enable <stonith id>... [\fB\-\-wait\fR[=n]]
Allow the cluster to use the stonith devices. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the stonith devices to start and then return 0 if the stonith devices are started, or 1 if the stonith devices have not yet started. If 'n' is not specified it defaults to 60 minutes.
.TP
disable <stonith id>... [\fB\-\-wait\fR[=n]]
Attempt to stop the stonith devices if they are running and disallow the cluster to use them. If \fB\-\-wait\fR is specified, pcs will wait up to 'n' seconds for the stonith devices to stop and then return 0 if the stonith devices are stopped or 1 if the stonith devices have not stopped. If 'n' is not specified it defaults to 60 minutes.
.TP
cleanup [<stonith id>] [\fB\-\-node\fR <node>]
Make the cluster forget failed operations from the history of the stonith device and re\-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved. If a stonith id is not specified then all resources / stonith devices will be cleaned up. If a node is not specified then resources / stonith devices on all nodes will be cleaned up.
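
Example (illustrative; uses the 'MyFence' device and 'node1' from the create example above):
.br
pcs stonith cleanup MyFence \-\-node node1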
.TP
refresh [<stonith id>] [\fB\-\-node\fR <node>] [\fB\-\-full\fR]
Make the cluster forget the complete operation history (including failures) of the stonith device and re\-detect its current state. If you are interested in forgetting failed operations only, use the 'pcs stonith cleanup' command. If a stonith id is not specified then all resources / stonith devices will be refreshed. If a node is not specified then resources / stonith devices on all nodes will be refreshed. Use \fB\-\-full\fR to refresh a stonith device on all nodes, otherwise only nodes where the stonith device's state is known will be considered.
.TP
level [config]
Lists all of the fencing levels currently configured.
.TP
level add <level> <target> <stonith id> [stonith id]...
Add the fencing level for the specified target with the list of stonith devices to attempt for that target at that level. Fence levels are attempted in numerical order (starting with 1). If a level succeeds (meaning all devices are successfully fenced in that level) then no other levels are tried, and the target is considered fenced. Target may be a node name <node_name> or %<node_name> or node%<node_name>, a node name regular expression regexp%<node_pattern> or a node attribute value attrib%<name>=<value>.
.TP
level delete <level> [target] [stonith id]...
Removes the fence level for the level, target and/or devices specified. If no target or devices are specified then the fence level is removed. Target may be a node name <node_name> or %<node_name> or node%<node_name>, a node name regular expression regexp%<node_pattern> or a node attribute value attrib%<name>=<value>.
.TP
level remove <level> [target] [stonith id]...
Removes the fence level for the level, target and/or devices specified. If no target or devices are specified then the fence level is removed. Target may be a node name <node_name> or %<node_name> or node%<node_name>, a node name regular expression regexp%<node_pattern> or a node attribute value attrib%<name>=<value>.
.TP
level clear [target|stonith id(s)]
Clears the fence levels on the target (or stonith id) specified or clears all fence levels if a target/stonith id is not specified. If more than one stonith id is specified they must be separated by a comma and no spaces. Target may be a node name <node_name> or %<node_name> or node%<node_name>, a node name regular expression regexp%<node_pattern> or a node attribute value attrib%<name>=<value>.
.br
Example: pcs stonith level clear dev_a,dev_b
.TP
level verify
Verifies that all fence devices and nodes specified in fence levels exist.
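
Example (illustrative; node name and device ids are placeholders): try an IPMI device for node1 first and fall back to a PDU device:
.br
pcs stonith level add 1 node1 FenceIPMI\-n1
.br
pcs stonith level add 2 node1 FencePDU\-n1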
.TP
fence <node> [\fB\-\-off\fR]
Fence the node specified (if \fB\-\-off\fR is specified, use the 'off' API call to stonith which will turn the node off instead of rebooting it).
.TP
confirm <node> [\fB\-\-force\fR]
Confirm to the cluster that the specified node is powered off. This allows the cluster to recover from a situation where no stonith device is able to fence the node. This command should \fBONLY\fR be used after manually ensuring that the node is powered off and has no access to shared resources.
.B WARNING:
If this node is not actually powered off or it does have access to shared resources, data corruption/cluster failure can occur. To prevent accidental running of this command, \fB\-\-force\fR or interactive user response is required in order to proceed. NOTE: It is not checked whether the specified node exists in the cluster in order to be able to work with nodes not visible from the local cluster partition.
.TP
history [show [<node>]]
Show fencing history for the specified node or all nodes if no node is specified.
.TP
history cleanup [<node>]
Cleanup fence history of the specified node or all nodes if no node is specified.
.TP
history update
Update fence history from all nodes.
.TP
sbd enable [watchdog=<path>[@<node>]]... [device=<path>[@<node>]]... [<SBD_OPTION>=<value>]... [\fB\-\-no\-watchdog\-validation\fR]
Enable SBD in the cluster. The default path for the watchdog device is /dev/watchdog. Allowed SBD options: SBD_WATCHDOG_TIMEOUT (default: 5), SBD_DELAY_START (default: no) and SBD_STARTMODE (default: always). It is possible to specify up to 3 devices per node. If \fB\-\-no\-watchdog\-validation\fR is specified, validation of watchdogs will be skipped.
.B WARNING:
Cluster has to be restarted in order to apply these changes.
.B WARNING:
By default, it is tested whether the specified watchdog is supported. This may cause a restart of the system when a watchdog with the no\-way\-out feature enabled is present. Use \fB\-\-no\-watchdog\-validation\fR to skip watchdog validation.

Example of enabling SBD in a cluster where node1 uses watchdog /dev/watchdog2, node2 uses /dev/watchdog1, all other nodes use /dev/watchdog0, node1 uses device /dev/sdb, all other nodes use device /dev/sda, and the watchdog timeout is set to 10 seconds:
.br
pcs stonith sbd enable watchdog=/dev/watchdog2@node1 watchdog=/dev/watchdog1@node2 watchdog=/dev/watchdog0 device=/dev/sdb@node1 device=/dev/sda SBD_WATCHDOG_TIMEOUT=10
.TP
sbd disable
Disable SBD in the cluster.
.B WARNING:
Cluster has to be restarted in order to apply these changes.
.TP
sbd device setup device=<path> [device=<path>]... [watchdog\-timeout=<integer>] [allocate\-timeout=<integer>] [loop\-timeout=<integer>] [msgwait\-timeout=<integer>]
Initialize SBD structures on device(s) with specified timeouts.
.B WARNING:
All content on device(s) will be overwritten.
.TP
sbd device message <device\-path> <node> <message\-type>
Manually set a message of the specified type on the device for the node. Possible message types (documented in the sbd(8) man page): test, reset, off, crashdump, exit, clear
.TP
sbd status [\fB\-\-full\fR]
Show status of SBD services in the cluster and the local device(s) configured. If \fB\-\-full\fR is specified, a dump of the SBD headers on the device(s) will be shown as well.
.TP
sbd config
Show SBD configuration in the cluster.
.TP
sbd watchdog list
Show all available watchdog devices on the local node.
.B WARNING:
Listing available watchdogs may cause a restart of the system when a watchdog with the no\-way\-out feature enabled is present.
.TP
sbd watchdog test [<watchdog\-path>]
This operation is expected to force\-reboot the local system using a watchdog, without following any shutdown procedures. If no watchdog is specified, the available watchdog will be used if only one watchdog device is available on the local system.
.SS "acl"
.TP
[show]
List all current access control lists.
.TP
enable
Enable access control lists.
.TP
disable
Disable access control lists.
.TP
role create <role id> [description=<description>] [((read | write | deny) (xpath <query> | id <id>))...]
Create a role with the id and (optional) description specified. Each role can also have an unlimited number of permissions (read/write/deny) applied to either an xpath query or the id of a specific element in the cib.
.TP
role delete <role id>
Delete the role specified and remove it from any users/groups it was assigned to.
.TP
role remove <role id>
Delete the role specified and remove it from any users/groups it was assigned to.
.TP
role assign <role id> [to] [user|group] <username/group>
Assign a role to a user or group already created with 'pcs acl user/group create'. If there is a user and a group with the same id and it is not specified which should be used, the user will be prioritized. In cases like this specify whether user or group should be used.
.TP
role unassign <role id> [from] [user|group] <username/group>
Remove a role from the specified user. If there is a user and a group with the same id and it is not specified which should be used, the user will be prioritized. In cases like this specify whether user or group should be used.
.TP
user create <username> [<role id>]...
Create an ACL for the user specified and assign roles to the user.
.TP
user delete <username>
Remove the user specified (and roles assigned will be unassigned for the specified user).
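
Example (illustrative; 'ReadOnly' and 'alice' are placeholder ids): create a read\-only role, assign it to a new user and turn ACLs on:
.br
pcs acl role create ReadOnly description="Read access to the CIB" read xpath /cib
.br
pcs acl user create alice ReadOnly
.br
pcs acl enable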
.TP
user remove <username>
Remove the user specified (and roles assigned will be unassigned for the specified user).
.TP
group create <group> [<role id>]...
Create an ACL for the group specified and assign roles to the group.
.TP
group delete <group>
Remove the group specified (and roles assigned will be unassigned for the specified group).
.TP
group remove <group>
Remove the group specified (and roles assigned will be unassigned for the specified group).
.TP
permission add <role id> ((read | write | deny) (xpath <query> | id <id>))...
Add the listed permissions to the role specified.
.TP
permission delete <permission id>
Remove the permission id specified (permission ids are listed in parentheses after permissions in 'pcs acl' output).
.TP
permission remove <permission id>
Remove the permission id specified (permission ids are listed in parentheses after permissions in 'pcs acl' output).
.SS "property"
.TP
[list|show [<property> | \fB\-\-all\fR | \fB\-\-defaults\fR]] | [\fB\-\-all\fR | \fB\-\-defaults\fR]
List property settings (default: lists configured properties). If \fB\-\-defaults\fR is specified, all property defaults are shown; if \fB\-\-all\fR is specified, currently configured properties are shown together with unset properties and their defaults. See the \fBpacemaker\-controld\fR(7) and \fBpacemaker\-schedulerd\fR(7) man pages for a description of the properties.
.TP
set <property>=[<value>] ... [\fB\-\-force\fR]
Set specific pacemaker properties (if the value is blank then the property is removed from the configuration). If a property is not recognized by pcs the property will not be created unless \fB\-\-force\fR is used. See the \fBpacemaker\-controld\fR(7) and \fBpacemaker\-schedulerd\fR(7) man pages for a description of the properties.
.TP
unset <property> ...
Remove property from configuration. See the \fBpacemaker\-controld\fR(7) and \fBpacemaker\-schedulerd\fR(7) man pages for a description of the properties.
.SS "constraint"
.TP
[list|show] [\fB\-\-full\fR]
List all current constraints. If \fB\-\-full\fR is specified also list the constraint ids.
.TP
location <resource> prefers <node>[=<score>] [<node>[=<score>]]...
Create a location constraint on a resource to prefer the specified node with score (default score: INFINITY). Resource may be either a resource id <resource_id> or %<resource_id> or resource%<resource_id>, or a resource name regular expression regexp%<resource_pattern>.
.TP
location <resource> avoids <node>[=<score>] [<node>[=<score>]]...
Create a location constraint on a resource to avoid the specified node with score (default score: INFINITY). Resource may be either a resource id <resource_id> or %<resource_id> or resource%<resource_id>, or a resource name regular expression regexp%<resource_pattern>.
.TP
location <resource> rule [id=<rule id>] [resource\-discovery=