CRM(8) - crmsh documentation
NAME

crm - Pacemaker command line interface for configuration and management

SYNOPSIS

crm [OPTIONS] [SUBCOMMAND ARGS...]

DESCRIPTION

The crm shell is a command-line based cluster configuration and management tool. Its goal is to assist as much as possible with the configuration and maintenance of Pacemaker-based High Availability clusters. For more information on Pacemaker itself, see http://clusterlabs.org/.

crm works both as a command-line tool to be called directly from the system shell, and as an interactive shell with extensive tab completion and help. The primary focus of the crm shell is to provide a simplified and consistent interface to Pacemaker, but it also provides tools for managing the creation and configuration of High Availability clusters from scratch. To learn more about this aspect of crm, see the cluster section below.

The crm shell can be used to manage every aspect of configuring and maintaining a cluster. It provides a simplified line-based syntax on top of the XML configuration format used by Pacemaker, commands for starting and stopping resources, tools for exploring the history of a cluster including log scraping, and a set of cluster scripts useful for automating the setup and installation of services on the cluster nodes.

The crm shell is line oriented: every command must start and finish on the same line. It is possible to use a continuation character (\) to write one command on two or more lines. The continuation character is commonly used when displaying configurations.

OPTIONS
-f, --file=FILE
Load commands from the given file. If a dash (-) is used in
place of a file name, crm will read commands from the shell standard input
(stdin).
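For example, a batch of commands can be piped to crm over stdin; this is a sketch that assumes a running cluster with a resource named www_app (the resource name is illustrative):

```shell
# Read commands from standard input; each line is one crm command
cat <<'EOF' | crm -f -
resource stop www_app
resource start www_app
EOF
```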
-c, --cib=CIB
Start the session using the given shadow CIB file.
Equivalent to cib use <CIB>.
-D, --display=OUTPUT_TYPE
Choose one of the output options: plain, color-always,
color, or uppercase. The default is color if the terminal emulation supports
colors. Otherwise, plain is used.
-F, --force
Make crm proceed with applying changes where it would
normally ask the user to confirm before proceeding. This option is mainly
useful in scripts, and should be used with care.
-w, --wait
Make crm wait for the cluster transition to finish (for
the changes to take effect) after each processed line.
-H, --history=DIR|FILE|SESSION
A directory or file containing a cluster report to load
into the history commands, or the name of a previously saved history
session.
-h, --help
Print help page.
--version
Print crmsh version and build information (Mercurial
changeset hash).
-d, --debug
Print verbose debugging information.
-R, --regression-tests
Enables extra verbose trace logging used by the
regression tests. Logs all external calls made by crmsh.
--scriptdir=DIR
Extra directory where crm looks for cluster scripts, or a
list of directories separated by semi-colons (e.g. /dir1;/dir2;etc.).
-o, --opt=OPTION=VALUE
Set crmsh option temporarily. If the options are saved
using options save then the value passed here will also be saved. Multiple
options can be set by using -o multiple times.
INTRODUCTION

This section of the user guide covers general topics about the user interface and describes some of the features of crmsh in detail.

User interface
The main purpose of crmsh is to provide a simple yet powerful interface to the cluster stack. There are two main modes of operation with the user interface of crmsh:

•Command line (single-shot) use - Use crm as a regular UNIX command from your usual shell. crm has full bash completion built in, so using it in this manner should be as comfortable and familiar as using any other command-line tool.

•Interactive mode - By calling crm without arguments, or by calling it with only a sublevel as argument, crm enters the interactive mode. In this mode, it acts as its own command shell, which remembers which sublevel you are currently in and allows for rapid and convenient execution of multiple commands within the same sublevel. This mode also has full tab completion, as well as built-in interactive help and syntax highlighting.
Here are a few examples of using crm both as a command-line tool and as an
interactive shell:
Command line (one-shot) use:
# crm resource stop www_app
# crm
crm(live)# resource
crm(live)resource# unmanage tetris_1
crm(live)resource# up
crm(live)# node standby node4
# crm configure<<EOF
#
# resources
#
primitive disk0 iscsi \
    params portal=192.168.2.108:3260 target=iqn.2008-07.com.suse:disk0
primitive fs0 Filesystem \
    params device=/dev/disk/by-label/disk0 directory=/disk0 fstype=ext3
primitive internal_ip IPaddr params ip=192.168.1.101
primitive apache apache \
    params configfile=/disk0/etc/apache2/site0.conf
primitive apcfence stonith:apcsmart \
    params ttydev=/dev/ttyS0 hostlist="node1 node2" \
    op start timeout=60s
primitive pingd pingd \
    params name=pingd dampen=5s multiplier=100 host_list="r1 r2"
#
# monitor apache and the UPS
#
monitor apache 60s:30s
monitor apcfence 120m:60s
#
# cluster layout
#
group internal_www \
    disk0 fs0 internal_ip apache
clone fence apcfence \
    meta globally-unique=false clone-max=2 clone-node-max=1
clone conn pingd \
    meta globally-unique=false clone-max=2 clone-node-max=1
location node_pref internal_www \
    rule 50: #uname eq node1 \
    rule pingd: defined pingd
#
# cluster properties
#
property stonith-enabled=true
commit
EOF
Tab completion

The crm shell makes extensive use of tab completion. The completion is both static (i.e. for crm commands) and dynamic. The latter takes into account the current status of the cluster or information from installed resource agents. Sometimes, completion may also be used to get short help on resource parameters. Here are a few examples:

crm(live)resource# <TAB><TAB>
bye        failcount  move       restart    unmigrate
cd         help       param      show       unmove
cleanup    list       promote    start      up
demote     manage     quit       status     utilization
end        meta       refresh    stop
exit       migrate    reprobe    unmanage
crm(live)configure# primitive fence-1 <TAB><TAB>
heartbeat:  lsb:  ocf:  stonith:
crm(live)configure# primitive fence-1 stonith:<TAB><TAB>
apcmaster       external/ippower9258  fence_legacy
apcmastersnmp   external/kdumpcheck   ibmhmc
apcsmart        external/libvirt      ipmilan
crm(live)configure# primitive fence-1 stonith:ipmilan params <TAB><TAB>
auth=  hostname=  ipaddr=  login=  password=  port=  priv=
crm(live)configure# primitive fence-1 stonith:ipmilan params auth=<TAB><TAB>
auth* (string)
    The authorization type of the IPMI session ("none", "straight", "md2", or "md5")
Shorthand syntax

When using the crm shell to manage clusters, you will end up typing a lot of commands many times over. Clear command names like configure help in understanding and learning to use the cluster shell, but they are easy to misspell and tedious to type repeatedly. The interactive mode and tab completion both help with this, but the crm shell also has the ability to understand a variety of shorthand aliases for all of the commands. For example, instead of typing crm status, you can type crm st or crm stat. Instead of crm configure you can type crm cfg or even crm cf. crm resource can be shortened as crm rsc, and so on. The exact list of accepted aliases is too long to print in full, but experimentation and typos should help in discovering more of them.

FEATURES
The feature set of crmsh covers a wide range of functionality, and understanding how and when to use the various features of the shell can be difficult. This section of the guide describes some of the features and use cases of crmsh in more depth. The intention is to provide a deeper understanding of these features, but also to serve as a guide to using them.

Shadow CIB usage

A Shadow CIB is a normal cluster configuration stored in a file. They may be manipulated in much the same way as the live CIB, with the key difference that changes to a shadow CIB have no effect on the actual cluster resources. An administrator may choose to apply any of them to the cluster, thus replacing the running configuration with the one found in the shadow CIB. The crm prompt always contains the name of the configuration which is currently in use, or the string live if using the live cluster configuration.

When editing the configuration in the configure level, no changes are actually applied until the commit command is executed. It is possible to start editing a configuration as usual, but instead of committing the changes to the active CIB, save them to a shadow CIB. The following example configure session demonstrates how this can be done:

crm(live)configure# cib new test-2
INFO: test-2 shadow CIB created
crm(test-2)configure# commit
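As a sketch, the same shadow-CIB workflow can also be driven from the system shell in single-shot mode; the shadow name test-2 and the Dummy resource are illustrative, and a running cluster is assumed:

```shell
crm cib new test-2                                        # create a shadow copied from the live CIB
crm -c test-2 configure primitive d1 ocf:pacemaker:Dummy  # edit the shadow, not the live CIB
crm -c test-2 configure show                              # review the pending configuration
crm cib commit test-2                                     # apply the shadow CIB to the cluster
```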
Configuration semantic checks

Resource definitions may be checked against the meta-data provided with the resource agents. These checks are currently carried out:

•are required parameters set
•existence of defined parameters
•timeout values for operations
The parameter checks are obvious and need no further explanation. Failures in
these checks are treated as configuration errors.
The timeouts for operations should be at least as long as those recommended in
the meta-data. Too short timeout values are a common mistake in cluster
configurations and, even worse, they often slip through if cluster testing was
not thorough. Though operation timeout issues are treated as warnings, make
sure that the timeouts are usable in your environment. Note also that the
values given are just an advisory minimum; your resources may require
longer timeouts.
Users may tune the frequency of checks and the treatment of errors with the
check-frequency and check-mode preferences.
Note that if the check-frequency is set to always and the check-mode to strict,
errors are not tolerated and such configuration cannot be saved.
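For instance, the two preferences can be set from the options level; this sketch shows the strictest combination described above:

```shell
crm options check-frequency always   # run semantic checks on every change
crm options check-mode strict        # treat check failures as hard errors
crm options save                     # optionally persist the settings
```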
Configuration templates

Deprecation note: Configuration templates have been deprecated in favor of the more capable cluster scripts. To learn how to use cluster scripts, see the dedicated documentation on the crmsh website at http://crmsh.github.io/, or in the Script section.

Configuration templates are ready-made configurations created by cluster experts. They are designed in such a way that users may generate valid cluster configurations with minimum effort. If you are new to Pacemaker, templates may be the best way to start.

We will show here how to create a simple yet functional Apache configuration:

# crm configure
crm(live)configure# template
crm(live)configure template# list templates
apache  filesystem  virtual-ip
crm(live)configure template# new web <TAB><TAB>
apache  filesystem  virtual-ip
crm(live)configure template# new web apache
INFO: pulling in template apache
INFO: pulling in template virtual-ip
crm(live)configure template# list
web2-d  web2  vip2  web3  vip  web
crm(live)configure template# show
ERROR: 23: required parameter ip not set
ERROR: 61: required parameter id not set
ERROR: 65: required parameter configfile not set
crm(live)configure template# edit
$ grep -n ^%% ~/.crmconf/web
23:%% ip
31:%% netmask
35:%% lvs_support
61:%% id
65:%% configfile
71:%% options
76:%% envfiles
$ grep -n ^%% ~/.crmconf/web
23:%% ip 192.168.1.101
31:%% netmask
35:%% lvs_support
61:%% id websvc
65:%% configfile /etc/apache2/httpd.conf
71:%% options
76:%% envfiles
%% <name> <value>
crm(live)configure template# show
primitive virtual-ip IPaddr \
    params ip=192.168.1.101
primitive apache apache \
    params configfile="/etc/apache2/httpd.conf"
monitor apache 120s:60s
group websvc \
    apache virtual-ip
crm(live)configure template# apply
crm(live)configure template# cd ..
crm(live)configure# show
node xen-b
node xen-c
primitive apache apache \
    params configfile="/etc/apache2/httpd.conf" \
    op monitor interval=120s timeout=60s
primitive virtual-ip IPaddr \
    params ip=192.168.1.101
group websvc apache virtual-ip
crm(live)configure# location websvc-pref websvc 100: xen-b
crm(live)configure# rename virtual-ip intranet-ip
crm(live)configure# show
node xen-b
node xen-c
primitive apache apache \
    params configfile="/etc/apache2/httpd.conf" \
    op monitor interval=120s timeout=60s
primitive intranet-ip IPaddr \
    params ip=192.168.1.101
group websvc apache intranet-ip
location websvc-pref websvc 100: xen-b
•new: create a new configuration from templates
•edit: define parameters, at least the required ones
•show: see if the configuration is valid
•apply: apply the configuration to the configure level
Resource testing

The amount of detail in a cluster makes all configurations prone to errors. By far the largest number of issues in a cluster is due to bad resource configuration. The shell can help quickly diagnose such problems, and considerably reduce your keyboard wear. Let's say that we entered the following configuration:

node xen-b
node xen-c
node xen-d
primitive fencer stonith:external/libvirt \
    params hypervisor_uri="qemu+tcp://10.2.13.1/system" \
    hostlist="xen-b xen-c xen-d" \
    op monitor interval=2h
primitive svc Xinetd \
    params service=systat \
    op monitor interval=30s
primitive intranet-ip IPaddr2 \
    params ip=10.2.13.100 \
    op monitor interval=30s
primitive apache apache \
    params configfile="/etc/apache2/httpd.conf" \
    op monitor interval=120s timeout=60s
group websvc apache intranet-ip
location websvc-pref websvc 100: xen-b
crm(live)configure# rsctest websvc svc fencer
crm(live)configure# property stop-all-resources=yes
Access Control Lists (ACL)

Note on ACLs in Pacemaker 1.1.12: The support for ACLs has been revised in Pacemaker version 1.1.12 and up. Depending on which version you are using, the information in this section may no longer be accurate. Look for the acl_target configuration element for more details on the new syntax.

By default, the users from the haclient group have full access to the cluster (or, more precisely, to the CIB). Access control lists allow for finer access control to the cluster. Access control lists consist of an ordered set of access rules. Each rule allows read or write access or denies access completely. Rules are typically combined to produce a specific role. Then, users may be assigned a role. For instance, this is a role which defines a set of rules allowing management of a single resource:

role bigdb_admin \
    write meta:bigdb:target-role \
    write meta:bigdb:is-managed \
    write location:bigdb \
    read ref:bigdb
role read_all \
    read cib

role basic_read \
    read node attribute:uname \
    read node attribute:type \
    read property \
    read status
//primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']
crm(live)configure# show xml bigdb_admin
...
<acls>
  <acl_role id="bigdb_admin">
    <write id="bigdb_admin-write"
      xpath="//primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']"/>
//primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']
//resources/primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']
Syntax: Resource sets

Using resource sets can be a bit confusing unless one knows the details of the implementation in Pacemaker as well as how to interpret the syntax provided by crmsh. Three different types of resource sets are provided by crmsh, and each one implies different values for the two resource set attributes, sequential and require-all.

sequential
If false, the resources in the set do not depend on each
other internally. Setting sequential to true implies a strict order of
dependency within the set.
require-all
If false, only one resource in the set is required to
fulfil the requirements of the set. The set of A, B and C with require-all set
to false is read as "A OR B OR C" when its dependencies are
resolved.
The three types of resource sets modify the attributes in the following way:
1. Implicit sets (no brackets): sequential=true, require-all=true
2. Parenthesis set (( ... )): sequential=false, require-all=true
3. Bracket set ([ ... ]): sequential=false, require-all=false
To create a set with the properties sequential=true and require-all=false,
explicitly set sequential in a bracketed set, [ A B C sequential=true ].
To create multiple sets with both sequential and require-all set to true,
explicitly set sequential in a parenthesis set: A B ( C D sequential=true ).
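As an illustration, the three set types can appear in ordering constraints like this (the resource names A-D are placeholders):

```shell
# Implicit set: A, then B, then C (sequential=true, require-all=true)
order o1 Mandatory: A B C
# Parenthesis set: B and C may start in parallel once A is up
order o2 Mandatory: A ( B C )
# Bracket set: D only needs one of A, B or C to be running
order o3 Mandatory: [ A B C ] D
```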
Syntax: Attribute list references

Attribute lists are used to set attributes and parameters for resources, constraints and property definitions. For example, to set the virtual IP used by an IPaddr2 resource, the attribute ip can be set in an attribute list for that resource. Attribute lists can have identifiers that name them, and other resources can reuse the same attribute list by referring to that name using an $id-ref. For example, the following statement defines a simple dummy resource with an attribute list which sets the parameter state to the value 1 and sets the identifier for the attribute list to on-state:

primitive dummy-1 Dummy params $id=on-state state=1
primitive dummy-2 Dummy params $id-ref=on-state
Syntax: Attribute references

In some cases, referencing complete attribute lists is too coarse-grained, for example if two different parameters with different names should have the same value set. Instead of having to copy the value in multiple places, it is possible to create references to individual attributes in attribute lists. To name an attribute in order to be able to refer to it later, prefix the attribute name with a $ character (as seen above with the special names $id and $id-ref):

primitive dummy-1 Dummy params $state=1
primitive dummy-2 Dummy params @state
primitive dummy-1 params $dummy-state-on:state=1
primitive dummy-2 params @dummy-state-on
primitive virtual-ip IPaddr2 params $vip:ip=192.168.1.100
primitive webserver apache params @vip:server_ip
Syntax: Rule expressions

Many of the configuration commands in crmsh now support the use of rule expressions, which can influence what attributes apply to a resource or under which conditions a constraint is applied, depending on changing conditions like date, time, the value of attributes and more. Here is an example of a simple rule expression used to apply a different resource parameter on the node named node1:

primitive my_resource Special \
    params 2: rule #uname eq node1 interface=eth1 \
    params 1: interface=eth0
rules ::
    rule [id_spec] [$role=<role>] <score>: <expression>
    [rule [id_spec] [$role=<role>] <score>: <expression> ...]

id_spec :: $id=<id> | $id-ref=<id>
score :: <number> | <attribute> | [-]inf
expression :: <simple_exp> [<bool_op> <simple_exp> ...]
bool_op :: or | and
simple_exp :: <attribute> [type:]<binary_op> <value>
    | <unary_op> <attribute>
    | date <date_expr>
type :: <string> | <version> | <number>
binary_op :: lt | gt | lte | gte | eq | ne
unary_op :: defined | not_defined
date_expr :: lt <end>
    | gt <start>
    | in start=<start> end=<end>
    | in start=<start> <duration>
    | spec <date_spec>
duration|date_spec ::
    hours=<value> | monthdays=<value> | weekdays=<value>
    | yeardays=<value> | months=<value> | weeks=<value>
    | years=<value> | weekyears=<value> | moon=<value>
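As a further sketch, rule expressions also appear in location constraints; here a hypothetical webserver resource is kept off node3 and preferred on nodes where the pingd attribute is defined:

```shell
location loc-web webserver \
    rule -inf: #uname eq node3 \
    rule pingd: defined pingd
```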
COMMAND REFERENCE

The commands are structured to be compatible with the shell command line. Sometimes, the underlying Pacemaker grammar uses characters that have special meaning in bash, which will need to be quoted. This includes the hash or pound sign (#), single and double quotes, and any significant whitespace. Whitespace is also significant when assigning values, meaning that key=value is different from key = value.

Commands can be referenced using short-hand as long as the short-hand is unique. This can be either a prefix of the command name or a prefix string of characters found in the name. For example, status can be abbreviated as st or su, and configure as conf or cfg.

The syntax for the commands is given below in an informal, BNF-like grammar.

•<value> denotes a string.
•[value] means that the construct is optional.
•The ellipsis (...) signifies that the previous construct may be repeated.
•first|second means either first or second.
•The rest are literals (strings, :, =).
status

Show cluster status. The status is displayed by crm_mon. Supply additional arguments for more information or a different format. See crm_mon(8) for more details.

Example:

status
status simple
status full

Usage:

status [<option> ...]

option :: full | bynode | inactive | ops | timing | failcounts
    | verbose | quiet | html | xml | simple | tickets
    | noheaders | detail | brief
verify

Performs basic checks for the cluster configuration and current status, reporting potential issues. See crm_verify(8) and crm_simulate(8) for more details.

Example:

verify
verify scores

Usage:

verify [scores]
cluster - Cluster setup and management

Whole-cluster configuration management with High Availability awareness. The commands on the cluster level allow configuration and modification of the underlying cluster infrastructure, and also supply tools to do whole-cluster systems management. These commands enable easy installation and maintenance of an HA cluster, by providing support for package installation, configuration of the cluster messaging layer, file system setup and more.
This command simplifies the process of adding a new node to a running cluster.
The new node will be installed and configured with the packages and
configuration files needed to run the cluster resources. If a cluster file
system is used, the new node will be set up to host the file system.
This command should be executed from a node already in the cluster.
Usage:
add <node>
Copy file to other cluster nodes.
Copies the given file to all other nodes unless given a list of nodes to copy to
as argument.
Usage:

copy <filename> [nodes ...]

Example:

copy /etc/motd
Displays the difference, if any, between a given file on different nodes. If the
second argument is --checksum, a checksum of the file will be calculated and
displayed for each node.
Usage:

diff <file> [--checksum] [nodes...]

Example:

diff /etc/crm/crm.conf node2
diff /etc/resolv.conf --checksum
Runs a larger set of tests and queries on all nodes in the cluster to verify the
general system health and detect potential problems.
Usage:
health
Installs and configures a basic HA cluster on a set of nodes.
Usage:
init node1 node2 node3
init --dry-run node1 node2 node3
This command simplifies the process of removing a node from the cluster, moving
any resources hosted by that node to other nodes.
Usage:
remove <node>
This command takes a shell statement as argument, executes that statement on all
nodes in the cluster, and reports the result.
Usage:

run <command>

Example:

run "cat /proc/uptime"
Starts the cluster-related system services on this node.
Usage:
start
Reports the status for the cluster messaging layer on the local node.
Usage:
status
Stops the cluster-related system services on this node.
Usage:
stop
Mostly useful in scripts or automated workflows, this command will attempt to
connect to the local cluster node repeatedly. The command will keep trying
until the cluster node responds, or the timeout elapses. The timeout can be
changed by supplying a value in seconds as an argument.
Usage:
wait_for_startup
script - Cluster script management

A big part of the configuration and management of a cluster is collecting information about all cluster nodes and deploying changes to those nodes. Often, just performing the same procedure on all nodes will encounter problems, due to subtle differences in the configuration. For example, when configuring a cluster for the first time, the software needs to be installed and configured on all nodes before the cluster software can be launched and configured using crmsh. This process is cumbersome and error-prone, and the goal is for scripts to make this process easier.

Scripts are implemented using the python parallax package, which provides a thin wrapper on top of SSH. This allows the scripts to function through the usual SSH channels used for system maintenance, requiring no additional software to be installed or maintained.
This command provides a JSON API for the cluster scripts, intended for use in
user interface tools that want to interact with the cluster via scripts.
The command takes a single argument, which should be a JSON array with the first
member identifying the command to perform.
The output is line-based: Commands that return multiple results will return them
line-by-line, ending with a terminator value: "end".
When providing parameter values to this command, they should be provided as
nested objects, so virtual-ip:ip=192.168.0.5 on the command line becomes the
JSON object {"virtual-ip":{"ip":"192.168.0.5"}}.
API:
["list"] => [{name, shortdesc, category}] ["show", <name>] => [{name, shortdesc, longdesc, category, <<steps>>}] <<steps>> := [{name, shortdesc], longdesc, required, parameters, steps}] <<params>> := [{name, shortdesc, longdesc, required, unique, advanced, type, value, example}] ["verify", <name>, <<values>>] => [{shortdesc, longdesc, text, nodes}] ["run", <name>, <<values>>] => [{shortdesc, rc, output|error}]
Lists the available scripts, sorted by category. Scripts that have the special
Script category are hidden by default, since they are mainly used by other
scripts or commands. To also show these, pass all as argument.
To get a flat list of script names, not sorted by category, pass names as an
extra argument.
Usage:

list [all] [names]

Example:

list
list all names
Given a list of parameter values, this command will execute the actions
specified by the cluster script. The format for the parameter values is the
same as for the verify command.
Optionally takes two parameters:

•nodes=<nodes>: List of nodes that the script runs over
•dry_run=yes|no: If set, the script will not perform any modifications.
Additional parameters may be available depending on the script.
Use the show command to see what parameters are available.
Usage:

run <script> [args...]

Example:

run apache install=true
run sbd id=sbd-1 node=node1 sbd_device=/dev/disk/by-uuid/F00D-CAFE
Prints a description and short summary of the script, with descriptions of the
accepted parameters.
Advanced parameters are hidden by default. To show the complete list of
parameters accepted by the script, pass all as argument.
Usage:

show <script> [all]

Example:

show virtual-ip
Checks the given parameter values, and returns a list of actions that will be
executed when running the script if provided the same list of parameter
values.
Usage:

verify <script> [args...]

Example:

verify sbd id=sbd-1 node=node1 sbd_device=/dev/disk/by-uuid/F00D-CAFE
corosync - Corosync management

Corosync is the underlying messaging layer for most HA clusters. This level provides commands for editing and managing the corosync configuration.
Adds a node to the corosync configuration. This is used with the udpu type
configuration in corosync.
A nodeid for the added node is generated automatically.
Note that this command assumes that only a single ring is used, and sets only
the address for ring0.
Usage:
add-node <addr> [name]
Removes a node from the corosync configuration. The argument given is the
ring0_addr address set in the configuration file.
Usage:
del-node <addr>
Diffs the corosync configurations on different nodes. If no nodes are given as
arguments, the corosync configurations on all nodes in the cluster are
compared.
diff takes an optional argument --checksum, to display a checksum for each file
instead of calculating a diff.
Usage:
diff [--checksum] [node...]
Opens the Corosync configuration file in an editor.
Usage:
edit
Returns the value configured in corosync.conf, which is not necessarily the
value used in the running configuration. See reload for telling corosync about
configuration changes.
The argument is the complete dot-separated path to the value.
If there are multiple values configured with the same path, the command returns
all values for that path. For example, to get all configured ring0_addr
values, use this command:
Usage:

get <path>

Example:

get nodelist.node.ring0_addr
Opens the log file specified in the corosync configuration file. If no log file
is configured, this command returns an error.
The pager used can be configured either using the PAGER environment variable or
in crm.conf.
Usage:
log
Gets the corosync configuration from another node and copies it to this node.
Usage:
pull <node>
Pushes the corosync configuration file on this node to the list of nodes
provided. If no target nodes are given, the configuration is pushed to all
other nodes in the cluster.
It is recommended to use csync2 to distribute the cluster configuration files
rather than relying on this command.
Usage:

push [node] ...

Example:

push node-2 node-3
Tells all instances of corosync in this cluster to reload corosync.conf.
After pushing a new configuration to all cluster nodes, call this command to
make corosync use the new configuration.
Usage:
reload
Sets the value identified by the given path. If the value does not exist in the
configuration file, it will be added. However, if the section containing the
value does not exist, the command will fail.
Usage:

set <path> <value>

Example:

set quorum.expected_votes 2
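Putting the commands above together, a typical sketch for changing a corosync value cluster-wide from the system shell looks like this (the value shown is illustrative):

```shell
crm corosync set quorum.expected_votes 2  # change the value in the local corosync.conf
crm corosync push                         # distribute the file to all other nodes
crm corosync reload                       # tell all corosync instances to re-read it
```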
Displays the corosync configuration on the current node.
show
Displays the status of Corosync, including the votequorum state.
Usage:
status
cib - CIB shadow management

This level is for management of shadow CIBs. It is available both at the top level and the configure level. All the commands are implemented using cib_shadow(8) and the CIB_shadow environment variable. The user prompt always includes the name of the currently active shadow or the live CIB.
Enter the level for editing and managing the CIB status section. See the CIB
status management section.
Apply a shadow CIB to the cluster. If the shadow name is omitted then the
current shadow CIB is applied.
Temporary shadow CIBs are removed automatically on commit.
Usage:
commit [<cib>]
Delete an existing shadow CIB.
Usage:
delete <cib>
Print differences between the current cluster configuration and the active
shadow CIB.
Usage:
diff
At times it may be useful to create a shadow file from the existing CIB. The CIB
may be specified as file or as a PE input file number. The shell will look up
files in the local directory first and then in the PE directory (typically
/var/lib/pengine). Once the CIB file is found, it is copied to a shadow and
this shadow is immediately available for use at both configure and cibstatus
levels.
If the shadow name is omitted then the target shadow is named after the input
CIB file.
Note that there is often more than one PE input file, so you may need to
specify the full name.
Usage:
Examples:
import {<file>|<number>} [<shadow>]
import pe-warn-2222 import 2289 issue2
List existing shadow CIBs.
Usage:
list
Create a new shadow CIB. The live cluster configuration and status is copied to
the shadow CIB.
If the name of the shadow is omitted, we create a temporary CIB shadow. It is
useful if multiple level sessions are desired without affecting the cluster. A
temporary CIB shadow is short lived and will be removed either on commit or on
program exit. Note that if the temporary shadow is not committed all changes
in the temporary shadow are lost.
Specify withstatus if you want to edit the status section of the shadow CIB (see
the cibstatus section). Add force to force overwriting the existing shadow
CIB.
To start with an empty configuration that is not copied from the live CIB,
specify the empty keyword. (This also allows a shadow CIB to be created in
case no cluster is running.)
Usage:
new [<cib>] [withstatus] [force] [empty]
Copy the current cluster configuration into the shadow CIB.
Usage:
reset <cib>
Choose a CIB source. If you want to edit the status from the shadow CIB specify
withstatus (see cibstatus). Leave out the CIB name to switch to the running
CIB.
Usage:
use [<cib>] [withstatus]
ra - Resource Agents (RA) lists and documentation

This level contains commands which show various information about the installed resource agents. It is available both at the top level and at the configure level.
Print all resource agents' classes and, where appropriate, a list of available
providers.
Usage:
classes
Show the meta-data of a resource agent type. This is where users can find
information on how to use a resource agent. It is also possible to get
information from some programs: pengine, crmd, cib, and stonithd. Just specify
the program name instead of an RA.
Usage:
info [<class>:[<provider>:]]<type>
info <type> <class> [<provider>] (obsolete)
Example:
info apache
info ocf:pacemaker:Dummy
info stonith:ipmilan
info pengine
List available resource agents for the given class. If the class is ocf, supply
a provider to get agents which are available only from that provider.
Usage:
list <class> [<provider>]
Example:
list ocf pacemaker
List providers for a resource agent type. The class parameter defaults to ocf.
Usage:
providers <type> [<class>]
Example:
providers apache
If the resource agent supports the validate-all action, this calls the action
with the given parameters, printing any warnings or errors reported by the
agent.
Usage:
validate <agent> [<key>=<value> ...]
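As a sketch (the agent and parameter values here are invented for illustration, and not all agents implement validate-all):

```
validate ocf:heartbeat:IPaddr2 ip=192.168.1.101 cidr_netmask=24
```

Any warnings or errors the agent reports for this parameter combination are printed.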
resource - Resource management¶
At this level resources may be managed. All (or almost all) commands are implemented with the CRM tools such as crm_resource(8).
Ban a resource from running on a certain node. If no node is given as argument,
the resource is banned from the current location.
See move for details on other arguments.
Usage:
ban <rsc> [<node>] [<lifetime>] [force]
Clean up the resource status. This is typically done after the resource has
temporarily failed. If a node is omitted, the cleanup is performed on all
nodes. If there are many nodes, the command may take a while.
(Pacemaker 1.1.14) Pass force to clean up the resource itself; otherwise the
cleanup command applies to the parent resource (if any).
Usage:
cleanup <rsc> [<node>] [force]
Remove any relocation constraint created by the move, migrate or ban command.
Usage:
clear <rsc>
unmigrate <rsc>
unban <rsc>
Display the location and colocation constraints affecting the resource.
Usage:
constraints <rsc>
Demote a master-slave resource using the target-role attribute.
Usage:
demote <rsc>
Show/edit/delete the failcount of a resource.
Usage:
failcount <rsc> set <node> <value>
failcount <rsc> delete <node>
failcount <rsc> show <node>
Example:
failcount fs_0 delete node2
Show the current location of one or more resources.
Usage:
locate [<rsc> ...]
Enables or disables the per-resource maintenance mode. When this mode is
enabled, no monitor operations will be triggered for the resource.
Usage:
maintenance <resource> [on|off|true|false]
Example:
maintenance rsc1
maintenance rsc2 off
Manage a resource using the is-managed attribute. If there are multiple meta
attribute sets, the attribute is set in all of them. If the resource is a
clone, all is-managed attributes are removed from the child resources.
For details on group management see options manage-children.
Usage:
manage <rsc>
Show/edit/delete a meta attribute of a resource. Currently, all meta attributes
of a resource may be managed with other commands such as resource stop.
Usage:
meta <rsc> set <attr> <value>
meta <rsc> delete <attr>
meta <rsc> show <attr>
Example:
meta ip_0 set target-role stopped
Move a resource away from its current location.
If the destination node is left out, the resource is migrated by creating a
constraint which prevents it from running on the current node. For this type
of constraint to be created, the force argument is required.
A lifetime may be given for the constraint. Once it expires, the location
constraint will no longer be active.
Usage:
move <rsc> [<node>] [<lifetime>] [force]
Show active operations, optionally filtered by resource and node.
Usage:
operations [<rsc>] [<node>]
Show/edit/delete a parameter of a resource.
Usage:
param <rsc> set <param> <value>
param <rsc> delete <param>
param <rsc> show <param>
Example:
param ip_0 show ip
Promote a master-slave resource using the target-role attribute.
Usage:
promote <rsc>
Refresh CIB from the LRM status.
Note: refresh has been deprecated and is now an alias for cleanup.
Usage:
refresh [<node>]
Probe for resources not started by the CRM.
Note: reprobe has been deprecated and is now an alias for cleanup.
Usage:
reprobe [<node>]
Restart one or more resources. This is essentially a shortcut for resource stop
followed by a start. The shell first waits for the stop to finish, that is,
for all resources to actually stop, and only then orders the start action.
Since this command entails a whole set of operations, informational messages
are printed to show progress.
For details on group management see options manage-children.
Usage:
restart <rsc> [<rsc> ...]
Example:
# crm resource restart g_webserver
INFO: ordering g_webserver to stop
waiting for stop to finish .... done
INFO: ordering g_webserver to start
#
Display the allocation scores for all resources.
Usage:
scores
Sensitive parameters can be kept in local files rather than in the CIB, in
order to prevent accidental data exposure. Use the secret command to manage
such parameters. stash moves the value from the CIB to a local file, and
unstash moves it back to the CIB. The set subcommand sets the parameter to the
provided value. delete removes the parameter completely. show displays the
value of the parameter from the local file. Use check to verify whether the
local file content is valid.
Usage:
secret <rsc> set <param> <value>
secret <rsc> stash <param>
secret <rsc> unstash <param>
secret <rsc> delete <param>
secret <rsc> show <param>
secret <rsc> check <param>
Example:
secret fence_1 show password
secret fence_1 stash password
secret fence_1 set password secret_value
Start one or more resources by setting the target-role attribute. If there are
multiple meta attribute sets, the attribute is set in all of them. If the
resource is a clone, all target-role attributes are removed from the child
resources.
For details on group management see options manage-children.
Usage:
start <rsc> [<rsc> ...]
Print resource status. More than one resource can be shown at once. If the
resource parameter is left out, the status of all resources is printed.
Usage:
status [<rsc> ...]
Stop one or more resources using the target-role attribute. If there are
multiple meta attribute sets, the attribute is set in all of them. If the
resource is a clone, all target-role attributes are removed from the child
resources.
For details on group management see options manage-children.
Usage:
stop <rsc> [<rsc> ...]
Start tracing RA for the given operation. The trace files are stored in
$HA_VARLIB/trace_ra. If the operation to be traced is monitor, note that the
number of trace files can grow very quickly.
If no operation name is given, crmsh will attempt to trace all operations for
the RA. This includes any configured operations, start and stop as well as
promote/demote for multistate resources.
To trace the probe operation which exists for all resources, either set a trace
for monitor with interval 0, or use probe as the operation name.
Usage:
trace <rsc> [<op> [<interval>]]
Example:
trace fs start
trace webserver
trace webserver probe
trace fs monitor 0
Unmanage a resource using the is-managed attribute. If there are multiple meta
attribute sets, the attribute is set in all of them. If the resource is a
clone, all is-managed attributes are removed from the child resources.
For details on group management see options manage-children.
Usage:
unmanage <rsc>
Stop tracing RA for the given operation. If no operation name is given, crmsh
will attempt to stop tracing all operations in resource.
Usage:
untrace <rsc> [<op> [<interval>]]
Example:
untrace fs start
untrace webserver
Show/edit/delete a utilization attribute of a resource. These attributes
describe hardware requirements. By setting the placement-strategy cluster
property appropriately, it is possible then to distribute resources based on
resource requirements and node size. See also node utilization attributes.
Usage:
utilization <rsc> set <attr> <value>
utilization <rsc> delete <attr>
utilization <rsc> show <attr>
Example:
utilization xen1 set memory 4096
node - Node management¶
Node management and status commands.
Edit node attributes. This kind of attribute should refer to relatively static
properties, such as memory size.
Usage:
attribute <node> set <attr> <value>
attribute <node> delete <attr>
attribute <node> show <attr>
Example:
attribute node_1 set memory_size 4096
Resets and clears the state of the specified node. This node is afterwards
assumed clean and offline. This command can be used to manually confirm that a
node has been fenced (e.g., powered off).
Be careful! This can cause data corruption if you confirm that a node is down
when it is, in fact, not cleanly down: the cluster will proceed as if the
fence had succeeded, possibly starting resources multiple times.
Usage:
clearstate <node>
Delete a node. This command will remove the node from the CIB and, in case the
cluster stack is running, use the appropriate program (crm_node or hb_delnode)
to remove the node from the membership.
If the node is still listed as active and a member of our partition, we refuse
to remove it. With the global force option (-F), we will try to delete the
node anyway.
Usage:
delete <node>
Make the CRM fence a node. This functionality depends on stonith resources
capable of fencing the specified node. If no such stonith resources exist, no
fencing will happen.
Usage:
fence <node>
Set the node status to maintenance. This is equivalent to the cluster-wide
maintenance-mode property but puts just one node into the maintenance mode.
The node parameter defaults to the node where the command is run.
Usage:
maintenance [<node>]
Set a node to online status.
The node parameter defaults to the node where the command is run.
Usage:
online [<node>]
Set the node’s maintenance status to off. The node should now again be
fully operational and capable of running resource operations.
The node parameter defaults to the node where the command is run.
Usage:
ready [<node>]
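The maintenance and ready commands are complementary; a typical sequence (the node name is illustrative) might be:

```
node maintenance node1    # stop triggering resource operations on node1
# ... perform maintenance work on node1 ...
node ready node1          # resume normal operation
```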
Remote nodes may have a configured server address which should be used when
contacting the node. This command prints the server address if configured,
else the node name.
If no parameter is given, the addresses or names for all nodes are printed.
Usage:
server [<node> ...]
Show a node definition. If the node parameter is omitted then all nodes are
shown.
Usage:
show [<node>]
Set a node to standby status. The node parameter defaults to the node where the
command is run.
Additionally, you may specify a lifetime for the standby: if set to reboot, the
node will be back online once it reboots, while forever will keep the node in
standby after reboot. The lifetime defaults to forever.
Usage:
standby [<node>] [<lifetime>]
lifetime :: reboot | forever
Example:
standby bob reboot
Show nodes' status as XML. If the node parameter is omitted then all nodes are
shown.
Usage:
status [<node>]
Edit node attributes which are in the CIB status section, i.e. attributes which
hold properties of a more volatile nature. One typical example is the
attribute generated by the pingd utility.
Usage:
status-attr <node> set <attr> <value>
status-attr <node> delete <attr>
status-attr <node> show <attr>
Example:
status-attr node_1 show pingd
Edit node utilization attributes. These attributes describe hardware
characteristics as integer numbers such as memory size or the number of CPUs.
By setting the placement-strategy cluster property appropriately, it is
possible then to distribute resources based on resource requirements and node
size. See also resource utilization attributes.
Usage:
utilization <node> set <attr> <value>
utilization <node> delete <attr>
utilization <node> show <attr>
Examples:
utilization node_1 set memory 16384
utilization node_1 show cpu
site - GEO clustering site support¶
A cluster may consist of two or more subclusters in different and distant locations. This set of commands supports such setups.
Tickets are cluster-wide attributes. They can be managed at the site where this
command is executed.
It is then possible to constrain resources depending on the ticket availability
(see the rsc_ticket command for more details).
Usage:
ticket {grant|revoke|standby|activate|show|time|delete} <ticket>
Example:
ticket grant ticket1
options - User preferences¶
The user may set various options for the crm shell itself.
The shell (as in /bin/sh) parser strips quotes from the command line. This may
sometimes make it really difficult to type values which contain white space.
One typical example is the configure filter command. The crm shell will supply
extra quotes around arguments which contain white space. The default is yes.
Note on quotes use
Automatic quoting of arguments was introduced in version 1.2.2 and is
technically a regression. This is the only reason the add-quotes option
exists. If you have custom shell scripts which would break, just set the
add-quotes option to no.
For instance, with adding quotes enabled, it is possible to do the following:
# crm configure primitive d1 Dummy \
    meta description="some description here"
# crm configure filter 'sed "s/hostlist=./&node-c /"' fencing
Semantic check of the CIB or elements modified or created may be done on every
configuration change (always), when verifying (on-verify) or never. It is by
default set to always. Experts may want to change the setting to on-verify.
The checks require that resource agents are present. If they are not installed
at configuration time, set this preference to never.
See Configuration semantic checks for more details.
Semantic checks of the CIB, or of elements modified or created, may be done in
strict mode or in relaxed mode. In the former, certain problems are treated as
configuration errors; in relaxed mode, all are treated as warnings. The
default is strict.
See Configuration semantic checks for more details.
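For example, to relax checking on a machine where the resource agents are not installed, the two options above can be set in one single-shot session (a sketch; the option names match the descriptions above):

```
crm options check-frequency never
crm options check-mode relaxed
crm options save
```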
With output set to color, a comma-separated list of colors from this option is
used to emphasize:
•keywords
•object ids
•attribute names
•attribute values
•scores
•resource references
crm can show colors only if there is curses support for python installed
(usually provided by the python-curses package). The colors are whatever is
available in your terminal. Use normal if you want to keep the default
foreground color.
This user preference defaults to yellow,normal,cyan,red,green,magenta which is
good for terminals with dark background. You may want to change the color
scheme and save it in the preferences file for other color setups.
Example:
colorscheme yellow,normal,blue,red,green,magenta
The edit command invokes an editor. Use this to specify your preferred editor
program. If not set, it will default to either the value of the EDITOR
environment variable or to one of the standard UNIX editors (vi,emacs,nano).
Usage:
editor program
Example:
editor vim
Some resource management commands, such as resource stop, may not always
produce the desired result when the target resource is a group. Each element,
the group and the primitive members, can have a meta attribute, and those
attributes may end up with conflicting values. Consider the following
construct:
crm(live)# configure show svc fs virtual-ip
primitive fs Filesystem \
    params device="/dev/drbd0" directory="/srv/nfs" fstype=ext3 \
    op monitor interval=10s \
    meta target-role=Started
primitive virtual-ip IPaddr2 \
    params ip=10.2.13.110 iflabel=1 \
    op monitor interval=10s \
    op start interval=0 \
    meta target-role=Started
group svc fs virtual-ip \
    meta target-role=Stopped
Even though the element svc should be stopped, the group is actually running
because all its members have target-role set to Started:
crm(live)# resource show svc
resource svc is running on: xen-f
Hence, if the user invokes resource stop svc, the intention is not clear. This
preference gives the user an opportunity to better control what happens if
attributes of group members have values which are in conflict with the same
attribute of the group itself.
Possible values are ask (the default), always, and never. If set to always, the
crm shell removes all children attributes which have values different from the
parent. If set to never, all children attributes are left intact. Finally, if
set to ask, the user will be asked for each member what is to be done.
crm can adorn configurations in two ways: in color (similar to, for instance,
the ls --color command) and by showing keywords in upper case. Possible values
are plain, color-always, color, and uppercase. It is possible to combine
uppercase with one of the color values in order to get an upper-case Christmas
tree. Just set this option to color,uppercase or color-always,uppercase. In
case you need color codes in pipes, color-always forces color codes even when
the terminal is not a tty (just like ls --color=always).
The view command displays text through a pager. Use this to specify your
preferred pager program. If not set, it will default to either the value of
the PAGER environment variable or to one of the standard UNIX system pagers
(less,more,pg).
This command resets all user options to the defaults. If used as a single-shot
command, the rc file ($HOME/.config/crm/rc) is reset to the defaults
too.
Save current settings to the rc file ($HOME/.config/crm/rc). On further crm
runs, the rc file is automatically read and parsed.
Sets the value of an option. Takes the fully qualified name of the option as
argument, as displayed by show all.
The modified option value is stored in the user-local configuration file,
usually found in ~/.config/crm/crm.conf.
Usage:
set <option> <value>
Example:
set color.warn "magenta bold"
set editor nano
Display all current settings.
Given an option name as argument, show will display only the value of that
argument.
Given all as argument, show displays all available user options.
Usage:
show [all|<option>]
Example:
show
show skill-level
show all
Based on the skill-level setting, the user is allowed to use only a subset of
commands. There are three levels: operator, administrator, and expert. The
operator level allows only commands at the resource and node levels, but not
editing or deleting resources. The administrator may do that and may also
configure the cluster at the configure level and manage the shadow CIBs. The
expert may do all.
Usage:
skill-level <level>
level :: operator | administrator | expert
Note on security
The skill-level option is advisory only. There is nothing stopping users from
changing their skill level (see Access Control Lists (ACL) on how to enforce
access control).
crm by default sorts CIB elements. If you want them to appear in the order
they were created, set this option to no.
Usage:
sort-elements {yes|no}
Example:
sort-elements no
Sufficient privileges are necessary in order to manage a cluster: programs such
as crm_verify or crm_resource and, ultimately, cibadmin have to be run either
as root or as the CRM owner user (typically hacluster). You don’t have
to worry about that if you run crm as root. A more secure way is to run the
program with your usual privileges, set this option to the appropriate user
(such as hacluster), and set up the sudoers file.
Usage:
user system-user
Example:
user hacluster
In normal operation, crm runs a command and gets back immediately to process
other commands or get input from the user. With this option set to yes it will
wait for the started transition to finish. In interactive mode dots are
printed to indicate progress.
Usage:
wait {yes|no}
Example:
wait yes
configure - CIB configuration¶
This level enables all CIB object definition commands. The configuration may be logically divided into four parts: nodes, resources, constraints, and (cluster) properties and attributes. Each of these commands supports one or more basic CIB objects. Nodes and attributes describing nodes are managed using the node command. Commands for resources are:
•primitive
•monitor
•group
•clone
•ms/master (master-slave)
In order to streamline large configurations, it is possible to define a template
which can later be referenced in primitives:
•rsc_template
In that case the primitive inherits all attributes defined in the template.
There are three types of constraints:
•location
•colocation
•order
It is possible to define fencing order (stonith resource priorities):
•fencing_topology
Finally, there are the cluster properties, resource meta attributes defaults,
and operations defaults. All are just a set of attributes. These attributes
are managed by the following commands:
•property
•rsc_defaults
•op_defaults
In addition to the cluster configuration, the Access Control Lists (ACL) can be
setup to allow access to parts of the CIB for users other than root and
hacluster. The following commands manage ACL:
•user
•role
In Pacemaker 1.1.12 and up, this command replaces the user command for handling
ACLs:
•acl_target
The changes are applied to the current CIB only on ending the configuration
session or using the commit command.
Comments start with # in the first line. The comments are tied to the element
which follows. If the element moves, its comments will follow.
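As an illustration of a configure session (the resource names and parameters here are invented for the example):

```
# crm configure
crm(live)configure# primitive virtual-ip IPaddr2 params ip=10.0.0.5
crm(live)configure# primitive web-server apache
crm(live)configure# group g_web virtual-ip web-server
crm(live)configure# commit     # changes reach the CIB only at this point
```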
Defines an ACL target.
Usage:
acl_target <tid> [<role> ...]
Example:
acl_target joe resource_admin constraint_editor
Version note
This feature is only available in Pacemaker 1.1.15+.
Event-driven alerts enable calling scripts whenever interesting events occur in
the cluster (nodes joining or leaving, resources starting or stopping, etc.).
The path is an arbitrary file path to an alert script. Existing external scripts
used with ClusterMon resources can be used as alert scripts, since the
interface is compatible.
Each alert may have a number of recipients configured. These will be passed to
the script as arguments. The first recipient will also be passed as the
CRM_alert_recipient environment variable, for compatibility with existing
scripts that only support one recipient.
The available meta attributes are timeout (default 30s) and timestamp-format
(default "%H:%M:%S.%06N").
Some configurations may require each recipient to be delimited by brackets, to
avoid ambiguity. In the example alert-2 below, the meta attribute for timeout
is defined after the recipient, so the brackets are used to ensure that the
meta attribute is set for the alert and not just the recipient. This can be
avoided by setting any alert attributes before defining the recipients.
Usage:
alert <id> <path> \
    [attributes <nvpair> ...] \
    [meta <nvpair> ...] \
    [to [{] <recipient> [attributes <nvpair> ...] \
    [meta <nvpair> ...] [}] \
    ...]
Example:
alert alert-1 /srv/pacemaker/pcmk_alert_sample.sh \
    to /var/log/cluster-alerts.log
alert alert-2 /srv/pacemaker/example_alert.sh \
    meta timeout=60s \
    to { /var/log/cluster-alerts.log }
This level is for management of shadow CIBs. It is available at the configure
level to enable saving intermediate changes to a shadow CIB instead of to the
live cluster. This short excerpt shows how:
crm(live)configure# cib new test-2
INFO: test-2 shadow CIB created
crm(test-2)configure# commit
Note how the current CIB in the prompt changed from live to test-2 after
issuing the cib new command. See also the CIB shadow management for more
information.
Enter edit and manage the CIB status section level. See the CIB status
management section.
The clone command creates a resource clone. It may contain a single primitive
resource or one group of resources.
Usage:
clone <name> <rsc> [description=<description>] [meta <attr_list>] [params <attr_list>]

attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>
Example:
clone cl_fence apc_1 \
    meta clone-node-max=1 globally-unique=false
This constraint expresses the placement relation between two or more resources.
If there are more than two resources, then the constraint is called a resource
set.
The score is used to indicate the priority of the constraint. A positive score
indicates that the resources should run on the same node; a negative score
indicates that they should not run on the same node. Values of positive or
negative infinity indicate a mandatory constraint.
In the two resource form, the cluster will place <with-rsc> first, and
then decide where to put the <rsc> resource.
Colocation resource sets have an extra attribute (sequential) to allow for sets
of resources which don’t depend on each other in terms of state. The
shell syntax for such sets is to put resources in parentheses.
Sets cannot be nested.
The optional node-attribute can be used to colocate resources on a set of nodes
and not necessarily on the same node. For example, by setting a node attribute
color on all nodes and setting the node-attribute value to color as well, the
colocated resources will be placed on any node that has the same color.
For more details on how to configure resource sets, see Syntax: Resource sets.
Usage:
colocation <id> <score>: <rsc>[:<role>] <with-rsc>[:<role>] [node-attribute=<node_attr>]
colocation <id> <score>: <resource_sets> [node-attribute=<node_attr>]

resource_sets :: <resource_set> [<resource_set> ...]
resource_set :: ["("|"["] <rsc>[:<role>] [<rsc>[:<role>] ...] \
    [<attributes>] [")"|"]"]
attributes :: [require-all=(true|false)] [sequential=(true|false)]
Example:
colocation never_put_apache_with_dummy -inf: apache dummy
colocation c1 inf: A ( B C )
Commit the current configuration to the CIB in use. As noted elsewhere, commands
in a configure session don’t have immediate effect on the CIB. All
changes are applied at one point in time, either using commit or when the user
leaves the configure level. In case the CIB in use changed in the meantime,
presumably by somebody else, the crm shell will refuse to apply the changes.
If you know that it’s fine to still apply them, add force to the command
line.
To disable CIB patching and apply the changes by replacing the CIB completely,
add replace to the command line. Note that this can lead to previous changes
being overwritten if some other process concurrently modifies the CIB.
Usage:
commit [force] [replace]
This command takes the timeouts from the actions section of the resource agent
meta-data and sets them for the operations of the primitive.
Usage:
default-timeouts <id> [<id>...]
Note on default-timeouts
The use of this command is discouraged in favor of manually determining the best
timeouts required for the particular configuration. Relying on the resource
agent to supply appropriate timeouts can cause the resource to fail at the
worst possible moment.
Appropriate timeouts for resource actions are context-sensitive, and should be
carefully considered with the whole configuration in mind.
Delete one or more objects. If an object to be deleted belongs to a container
object, such as a group, and it is the only resource in that container, then
the container is deleted as well. Any related constraints are removed as well.
If the object is a started resource, it will not be deleted unless the --force
flag is passed to the command, or the force option is set.
Usage:
delete [--force] <id> [<id>...]
This command invokes the editor with the object description. As with the show
command, the user may choose to edit all objects or a set of objects.
If the user insists, he or she may edit the XML version of the object. If you
do that, don’t modify any id attributes.
Usage:
edit [xml] [<id> ...]
edit [xml] changed
Note on renaming element ids
The edit command sometimes cannot properly handle modifying element ids, in
particular for elements which belong to group or ms resources. Group and ms
resources themselves also cannot be renamed. Please use the rename command
instead.
The erase command clears the entire configuration, apart from nodes. To remove
nodes as well, you have to specify the additional keyword nodes.
Note that removing nodes from the live cluster may have some
strange/interesting/unwelcome effects.
Usage:
erase [nodes]
If multiple fencing (stonith) devices are available that are capable of fencing
a node, their order may be specified by fencing_topology. The order is
specified per node.
Stonith resources can be separated by a comma, in which case all of them need
to succeed. If they fail, the next stonith resource (or set of resources) is
used. In other words, use a comma to separate resources which all need to
succeed, and whitespace for serial order. Whitespace around the comma is not
allowed.
If the node is left out, the order is used for all nodes. That should reduce the
configuration size in some stonith setups.
From Pacemaker version 1.1.14, it is possible to use a node attribute as the
target in a fencing topology. The syntax for this usage is described below.
From Pacemaker version 1.1.14, it is also possible to use regular expression
patterns as the target in a fencing topology. The configured fencing sequence
then applies to all devices matching the pattern.
Usage:
fencing_topology <stonith_resources> [<stonith_resources> ...]
fencing_topology <fencing_order> [<fencing_order> ...]

fencing_order :: <target> <stonith_resources> [<stonith_resources> ...]
stonith_resources :: <rsc>[,<rsc>...]
target :: <node>: | attr:<node-attribute>=<value> | pattern:<pattern>
Example:
# Only kill the power if poison-pill fails
fencing_topology poison-pill power

# As above for node-a, but a different strategy for node-b
fencing_topology \
    node-a: poison-pill power \
    node-b: ipmi serial

# Fencing anything on rack 1 requires fencing via both APC 1 and 2,
# to defeat the redundancy provided by two separate UPS units.
fencing_topology attr:rack=1 apc01,apc02

# Fencing for all machines named green.* is done using the pear
# fencing device first, while all machines named red.* are fenced
# using the apple fencing device first.
fencing_topology \
    pattern:green.* pear apple \
    pattern:red.* apple pear
This command filters the given CIB elements through an external program. The
program should accept input on stdin and send output to stdout (the standard
UNIX filter conventions). As with the show command, the user may choose to
filter all or just a subset of elements.
It is possible to filter the XML representation of objects, but that is
probably not as useful as filtering the configuration language. The
presentation is somewhat different from what would be displayed by the show
command: each element is shown on a single line, i.e. there are no backslashes
and no other embellishments.
Don’t forget to put quotes around the filter if it contains spaces.
Usage:
filter <prog> [xml] [<id> ...]
filter <prog> [xml] changed
Examples:
filter "sed '/^primitive/s/target-role=[^ ]*//'"
# crm configure filter "sed '/^primitive/s/target-role=[^ ]*//'"
crm configure <<END
filter "sed '/threshold=\"1\"/s/=\"1\"/=\"0\"/g'"
END
Note on quotation marks
Filter commands which feature a blend of quotation marks can be difficult to
get right, especially when used directly from bash, since bash does its own
quotation parsing. In these cases, it can be easier to supply the filter
command as standard input. See the last example above.
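Since the filter program is an ordinary UNIX filter, its behavior can be tried on a sample line outside of crm. For instance, the target-role removal above behaves like this (the sample configuration line is invented):

```shell
# Strip any target-role=... token from lines that define a primitive
echo 'primitive d1 Dummy target-role=Started op monitor interval=10s' \
  | sed '/^primitive/s/target-role=[^ ]*//'
```

The token target-role=Started is removed while the rest of the line is left untouched.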
Show the value of the given property. If the value is not set, the command will
print the default value for the property, if known.
If no property name is passed to the command, the list of known cluster
properties is printed.
If the property is set multiple times, for example using multiple property sets
with different rule expressions, the output of this command is undefined.
Pass the argument -t or --true to get-property to translate the argument value
into true or false. If the value is not set, the command will print false.
Usage:
get-property [-t|--true] [<name>]
Example:
get-property stonith-enabled
get-property -t maintenance-mode
Create a graphviz graphical layout from the current cluster configuration.
Currently, only dot (directed graph) is supported. It is essentially a
visualization of resource ordering.
The graph may be saved to a file which can be used as source for various
graphviz tools (by default it is displayed in the user’s X11 session).
Optionally, by specifying the format, one can also produce an image instead.
For more or different graphviz attributes, it is possible to save the default
set of attributes to an ini file. If this file exists it will always override
the builtin settings. The exportsettings subcommand also prints the location
of the ini file.
Usage:
Example:
graph [<gtype> [<file> [<img_format>]]]
graph exportsettings

gtype :: dot
img_format :: dot output format (see the -T option)

graph dot
graph dot clu1.conf.dot
graph dot clu1.conf.svg svg
The group command creates a group of resources. This can be useful when
resources depend on other resources and require that those resources start in
order on the same node. A common use of resource groups is to ensure that a
server and a virtual IP are located together, and that the virtual IP is
started before the server.
Grouped resources are started in the order they appear in the group, and stopped
in the reverse order. If a resource in the group cannot run anywhere,
resources following it in the group will not start.
group can be passed the "container" meta attribute, to indicate that
it is to be used to group VM resources monitored using Nagios. The resource
referred to by the container attribute must be of type ocf:heartbeat:Xen,
ocf:heartbeat:VirtualDomain or ocf:heartbeat:lxc.
Usage:
Example:
group <name> <rsc> [<rsc>...]
  [description=<description>]
  [meta attr_list]
  [params attr_list]

attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>

group internal_www disk0 fs0 internal_ip apache \
    meta target_role=stopped
group vm-and-services vm vm-sshd meta container="vm"
Load a part of configuration (or all of it) from a local file or a network URL.
The replace method replaces the current configuration with the one from the
source. The update method tries to import the contents into the current
configuration. The push method imports the contents into the current
configuration and removes any lines that are not present in the given
configuration. The file may be a CLI file or an XML file.
If the URL is -, the configuration is read from standard input.
Usage:
Example:
load [xml] <method> URL

method :: replace | update | push

load xml update myfirstcib.xml
load xml replace http://storage.big.com/cibs/bigcib.xml
load xml push smallcib.xml
location defines the preference of nodes for the given resource. The location
constraints consist of one or more rules which specify a score to be awarded
if the rule matches.
The resource referenced by the location constraint can be one of the following:
Examples:
•Plain resource reference:
    location loc1 webserver 100: node1
•Resource set in curly brackets:
    location loc1 { virtual-ip webserver } 100: node1
•Tag containing resource ids:
    location loc1 tag1 100: node1
•Resource pattern:
    location loc1 /web.*/ 100: node1
The resource-discovery attribute allows probes to be selectively enabled or
disabled per resource and node.
The syntax for resource sets is described in detail for colocation.
For more details on how to configure resource sets, see Syntax: Resource sets.
For more information on rule expressions, see Syntax: Rule expressions.
Usage:
location <id> <rsc> [<attributes>] {<node_pref>|<rules>}

rsc :: /<rsc-pattern>/ | { resource_sets } | <rsc>
attributes :: role=<role> | resource-discovery=always|never|exclusive
node_pref :: <score>: <node>
rules ::
    rule [id_spec] [$role=<role>] <score>: <expression>
    [rule [id_spec] [$role=<role>] <score>: <expression> ...]
id_spec :: $id=<id> | $id-ref=<id>
score :: <number> | <attribute> | [-]inf
expression :: <simple_exp> [<bool_op> <simple_exp> ...]
bool_op :: or | and
simple_exp :: <attribute> [type:]<binary_op> <value>
    | <unary_op> <attribute>
    | date <date_expr>
type :: string | version | number
binary_op :: lt | gt | lte | gte | eq | ne
unary_op :: defined | not_defined
date_expr :: lt <end>
    | gt <start>
    | in start=<start> end=<end>
    | in start=<start> <duration>
    | spec <date_spec>
duration|date_spec :: hours=<value> | monthdays=<value> | weekdays=<value>
    | yeardays=<value> | months=<value> | weeks=<value>
    | years=<value> | weekyears=<value> | moon=<value>

location conn_1 internal_www 100: node1
location conn_1 internal_www \
    rule 50: #uname eq node1 \
    rule pingd: defined pingd
location conn_2 dummy_float \
    rule -inf: not_defined pingd or pingd number:lte 0
# never probe for rsc1 on node1
location no-probe rsc1 resource-discovery=never -inf: node1
Add or remove primitives in a group. The add subcommand appends the new group
member by default. Should it go elsewhere, there are after and before clauses.
Usage:
Examples:
modgroup <id> add <id> [after <id>|before <id>]
modgroup <id> remove <id>
modgroup share1 add storage2 before share1-fs
Monitor is by far the most common operation. This command makes it possible to
add a monitor operation without editing the whole resource, and it helps keep
long primitive definitions uncluttered. To keep this command as concise as
possible, less common operation attributes are not available. If you need
them, use the op part of the primitive command.
Usage:
Example:
Note that after executing the command, the monitor operation may be shown as
part of the primitive definition.
monitor <rsc>[:<role>] <interval>[:<timeout>]
monitor apcfence 60m:60s
The ms command creates a master/slave resource type. It may contain a single
primitive resource or one group of resources.
Usage:
Example:
Note on id-ref usage
Instance or meta attributes (params and meta) may contain a reference
to another set of attributes. In that case, no other attributes are allowed.
Since attribute set ids, though they do exist, are not shown in
crm, it is also possible to reference an object instead of an attribute set.
crm will automatically replace such a reference with the right id:
It is advisable to give meaningful names to attribute sets which are going to be
referenced.
ms <name> <rsc>
  [description=<description>]
  [meta attr_list]
  [params attr_list]

attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>

ms disk1 drbd1 \
    meta notify=true globally-unique=false

crm(live)configure# primitive a2 www-2 meta $id-ref=a1
crm(live)configure# show a2
primitive a2 apache \
    meta $id-ref=a1-meta_attributes
[...]
The node command describes a cluster node. Nodes in the CIB are commonly created
automatically by the CRM. Hence, you should not need to deal with nodes unless
you also want to define node attributes. Note that it is also possible to
manage node attributes at the node level.
Usage:
Example:
node [$id=<id>] <uname>[:<type>]
  [description=<description>]
  [attributes [$id=<id>] [<score>:] [rule...]
      <param>=<value> [<param>=<value>...]] | $id-ref=<ref>
  [utilization [$id=<id>] [<score>:] [rule...]
      <param>=<value> [<param>=<value>...]] | $id-ref=<ref>

type :: normal | member | ping | remote

node node1
node big_node attributes memory=64
Set defaults for the operations meta attributes.
For more information on rule expressions, see Syntax: Rule expressions.
Usage:
Example:
op_defaults [$id=<set_id>] [rule ...] <option>=<value> [<option>=<value> ...]
op_defaults record-pending=true
This constraint expresses the order of actions on two or more resources. If
there are more than two resources, then the constraint is called
a resource set.
Ordered resource sets have an extra attribute to allow for sets of resources
whose actions may run in parallel. The shell syntax for such sets is to put
resources in parentheses.
If the subsequent resource can start or promote after any one of the resources
in a set has done so, enclose the set in brackets ([ and ]).
Sets cannot be nested.
Three strings are reserved to specify a kind of order constraint: Mandatory,
Optional, and Serialize. It is preferred to use one of these settings instead
of score. Previous versions mapped scores 0 and inf to keywords advisory and
mandatory. That is still valid but deprecated.
For more details on how to configure resource sets, see Syntax: Resource sets.
Usage:
Example:
order <id> [{kind|<score>}:] first then [symmetrical=<bool>]
order <id> [{kind|<score>}:] resource_sets [symmetrical=<bool>]

kind :: Mandatory | Optional | Serialize
first :: <rsc>[:<action>]
then :: <rsc>[:<action>]
resource_sets :: resource_set [resource_set ...]
resource_set :: ["["|"("] <rsc>[:<action>] [<rsc>[:<action>] ...] \
    [attributes] ["]"|")"]
attributes :: [require-all=(true|false)] [sequential=(true|false)]

order o-1 Mandatory: apache:start ip_1
order o-2 Serialize: A ( B C )
order o-3 inf: [ A B ] C
order o-4 first-resource then-resource
The primitive command describes a resource. It may be referenced only once in
group, clone, or master-slave objects. If it’s not referenced, then it
is placed as a single resource in the CIB.
Operations may be specified anonymously, as a group or by reference:
Example:
•"Anonymous", as a list of op
specifications. Use this method if you don’t need to reference the set
of operations elsewhere. This is the most common way to define
operations.
•If reusing operation sets is desired, use the
operations keyword along with an id to give the operations set a name. Use the
operations keyword and an id-ref value set to the id of another operations
set, to apply the same set of operations to this primitive.
Operation attributes which are not recognized are saved as instance attributes
of that operation. A typical example is OCF_CHECK_LEVEL.
For multistate resources, roles are specified as role=<role>.
A template may be defined for resources which are of the same type and which
share most of the configuration. See rsc_template for more information.
Attributes containing time values, such as the interval attribute on operations,
are configured either as a plain number, which is interpreted as a time in
seconds, or using one of the following suffixes:
•s, sec - time in seconds (same as no
suffix)
•ms, msec - time in milliseconds
•us, usec - time in microseconds
•m, min - time in minutes
•h, hr - time in hours
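As a minimal sketch (the Dummy resources and their ids are illustrative, not
from this manual), the following intervals show the plain and suffixed forms:

```
# 90 seconds, written as a plain number
primitive d0 ocf:heartbeat:Dummy op monitor interval=90
# the same interval, with an explicit suffix
primitive d1 ocf:heartbeat:Dummy op monitor interval=90s
# two minutes, with a 30-second timeout
primitive d2 ocf:heartbeat:Dummy op monitor interval=2m timeout=30s
```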
Usage:
primitive <rsc> {[<class>:[<provider>:]]<type>|@<template>}
  [description=<description>]
  [[params] attr_list]
  [meta attr_list]
  [utilization attr_list]
  [operations id_spec]
  [op op_type [<attribute>=<value>...] ...]

attr_list :: [$id=<id>] [<score>:] [rule...] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>
id_spec :: $id=<id> | $id-ref=<id>
op_type :: start | stop | monitor

primitive apcfence stonith:apcsmart \
    params ttydev=/dev/ttyS0 hostlist="node1 node2" \
    op start timeout=60s \
    op monitor interval=30m timeout=60s

primitive www8 apache \
    configfile=/etc/apache/www8.conf \
    operations $id-ref=apache_ops

primitive db0 mysql \
    params config=/etc/mysql/db0.conf \
    op monitor interval=60s \
    op monitor interval=300s OCF_CHECK_LEVEL=10

primitive r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor role=Master interval=60s \
    op monitor role=Slave interval=300s

primitive xen0 @vm_scheme1 xmfile=/etc/xen/vm/xen0

primitive mySpecialRsc Special \
    params 3: rule #uname eq node1 interface=eth1 \
    params 2: rule #uname eq node2 interface=eth2 port=8888 \
    params 1: interface=eth0 port=9999
Set cluster configuration properties. To list the available cluster
configuration properties, use the ra info command with pengine, crmd, cib and
stonithd as arguments.
For more information on rule expressions, see Syntax: Rule expressions.
Usage:
Example:
property [<set_id>:] [rule ...] <option>=<value> [<option>=<value> ...]
property stonith-enabled=true
property rule date spec years=2014 stonith-enabled=false
Show PE (Policy Engine) motions using ptest(8) or crm_simulate(8).
A CIB is constructed using the current user edited configuration and the status
from the running CIB. The resulting CIB is run through ptest (or crm_simulate)
to show changes which would happen if the configuration is committed.
The status section may be loaded from another source and modified using the
cibstatus level commands. In that case, the ptest command will issue a message
informing the user that the Policy Engine graph is not calculated based on the
current status section, and therefore won’t show what would happen to the
running cluster, but to some imaginary one.
If you have graphviz installed and an X11 session, dotty(1) is run to display
the changes graphically.
Add a string of v characters to increase verbosity. ptest can also show
allocation scores. utilization turns on information about the remaining
capacity of nodes. With the actions option, ptest will print all resource
actions.
The ptest program has been replaced by crm_simulate in newer Pacemaker
versions. In some installations both could be installed. Use simulate to
enforce using crm_simulate.
Usage:
Examples:
ptest [nograph] [v...] [scores] [actions] [utilization]
ptest scores
ptest vvvvv
simulate actions
Refresh the internal structures from the CIB. All changes made during this
session are lost.
Usage:
refresh
Rename an object. It is recommended to use this command to rename a resource,
because it will take care of updating all related constraints and a parent
resource. Changing ids with the edit command won’t have the same
effect.
If you want to rename a resource, it must be in the stopped state.
Usage:
rename <old_id> <new_id>
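For instance, renaming a stopped resource from the system shell might look like
this (the resource names are illustrative):

```
crm resource stop virtual-ip
crm configure rename virtual-ip cluster-ip
```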
An ACL role is a set of rules which describe access rights to CIB. Rules consist
of an access right read, write, or deny and a specification denoting part of
the configuration to which the access right applies. The specification can be
an XPath or a combination of tag and id references. If an attribute is
appended, then the specification applies only to that attribute of the
matching element.
There are a number of shortcuts for XPath specifications. The meta, params, and
utilization shortcuts reference resource meta attributes, parameters, and
utilization respectively. The location shortcut may be used to specify location
constraints, most of the time to allow the resource move and unmove commands.
The property shortcut references cluster properties. The node shortcut allows
reading node attributes. nodeattr and nodeutil reference node attributes and
node capacity (utilization). The status shortcut references the whole status
section of the CIB. Read access to status is necessary for various monitoring
tools such as crm_mon(8) (aka crm status).
For more information on rule expressions, see Syntax: Rule expressions.
Usage:
Example:
role <role-id> rule [rule ...]

rule :: acl-right cib-spec [attribute:<attribute>]
acl-right :: read | write | deny
cib-spec :: xpath-spec | tag-ref-spec
xpath-spec :: xpath:<xpath> | shortcut
tag-ref-spec :: tag:<tag> | ref:<id> | tag:<tag> ref:<id>
shortcut :: meta:<rsc>[:<attr>]
    params:<rsc>[:<attr>]
    utilization:<rsc>
    location:<rsc>
    property[:<attr>]
    node[:<node>]
    nodeattr[:<attr>]
    nodeutil[:<node>]
    status

role app1_admin \
    write meta:app1:target-role \
    write meta:app1:is-managed \
    write location:app1 \
    read ref:app1
Set defaults for the resource meta attributes.
For more information on rule expressions, see Syntax: Rule expressions.
Usage:
Example:
rsc_defaults [<set_id>:] [rule ...] <option>=<value> [<option>=<value> ...]
rsc_defaults failure-timeout=3m
The rsc_template command creates a resource template. It may be referenced in
primitives. It is used to reduce large configurations with many similar
resources.
Usage:
Example:
rsc_template <name> [<class>:[<provider>:]]<type>
  [description=<description>]
  [params attr_list]
  [meta attr_list]
  [utilization attr_list]
  [operations id_spec]
  [op op_type [<attribute>=<value>...] ...]

attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>
id_spec :: $id=<id> | $id-ref=<id>
op_type :: start | stop | monitor

rsc_template public_vm Xen \
    op start timeout=300s \
    op stop timeout=300s \
    op monitor interval=30s timeout=60s \
    op migrate_from timeout=600s \
    op migrate_to timeout=600s
primitive xen0 @public_vm \
    params xmfile=/etc/xen/xen0
primitive xen1 @public_vm \
    params xmfile=/etc/xen/xen1
This constraint expresses dependency of resources on cluster-wide attributes,
also known as tickets. Tickets are mainly used in geo-clusters, which consist
of multiple sites. A ticket may be granted to a site, thus allowing resources
to run there.
The loss-policy attribute specifies what happens to the resource (or resources)
if the ticket is revoked. The default is either stop or demote depending on
whether a resource is multi-state.
See also the site set of commands.
Usage:
Example:
rsc_ticket <id> <ticket_id>: <rsc>[:<role>] [<rsc>[:<role>] ...]
    [loss-policy=<loss_policy_action>]

loss_policy_action :: stop | demote | fence | freeze

rsc_ticket ticket-A_public-ip ticket-A: public-ip
rsc_ticket ticket-A_bigdb ticket-A: bigdb loss-policy=fence
rsc_ticket ticket-B_storage ticket-B: drbd-a:Master drbd-b:Master
Test resources with current resource configuration. If no nodes are specified,
tests are run on all known nodes.
The order of resources is significant: it is assumed that later resources depend
on earlier ones.
If a resource is multi-state, it is assumed that the role on which later
resources depend is master.
Tests are run sequentially to prevent running the same resource on two or more
nodes. Tests are carried out only if none of the specified nodes currently run
any of the specified resources. However, it won’t verify whether
resources run on the other nodes.
Superuser privileges are obviously required: either run this as root or set up
the sudoers file appropriately.
Note that resource testing may take some time.
Usage:
Examples:
rsctest <rsc_id> [<rsc_id> ...] [<node_id> ...]
rsctest my_ip websvc
rsctest websvc nodeB
Save the current configuration to a file. Optionally, as XML. Use - instead of
file name to write the output to stdout.
The save command accepts the same selection arguments as the show command. See
the help section for show for more details.
Usage:
Example:
save [xml] [<id> | type:<type> | tag:<tag> | related:<obj> | changed ...] <file>

save myfirstcib.txt
save web-server server-config.txt
The CIB’s content is validated by an RNG schema. Pacemaker supports several,
depending on version. At least the following schemas are accepted by crmsh:
Example:
•pacemaker-1.0
•pacemaker-1.1
•pacemaker-1.2
•pacemaker-1.3
•pacemaker-2.0
Use this command to display or switch to another RNG schema.
Usage:
schema [<schema>]
schema pacemaker-1.1
Set the value of a configured attribute. The attribute must have a value
configured previously, and can be an agent parameter, meta attribute or
utilization value.
The first argument to the command is a path to an attribute. This is a
dot-separated sequence beginning with the name of the resource, and ending
with the name of the attribute to set.
Usage:
Examples:
set <path> <value>
set vip1.ip 192.168.20.5
set vm-a.force_stop 1
The show command displays CIB objects. Without any argument, it displays all
objects in the CIB, but the set of objects displayed by show can be limited to
only objects with the given IDs or by using one or more of the special
prefixes described below.
The XML representation for the objects can be displayed by passing xml as the
first argument.
To show one or more specific objects, pass the object IDs as arguments.
To show all objects of a certain type, use the type: prefix.
To show all objects in a tag, use the tag: prefix.
To show all constraints related to a primitive, use the related: prefix.
To show all modified objects, pass the argument changed.
The prefixes can be used together on a single command line. For example, to show
both the tag itself and the objects tagged by it the following combination can
be used: show tag:my-tag my-tag.
To refine a selection of objects using multiple modifiers, the keywords and and
or can be used. For example, to select all primitives tagged foo, the
following combination can be used: show type:primitive and tag:foo.
To hide values when displaying the configuration, use the obscure:<glob>
argument. This can be useful when sending the configuration over a public
channel, to avoid exposing potentially sensitive information. The <glob>
argument is a bash-style pattern matching attribute keys.
Usage:
Example:
show [xml] [<id> | changed | type:<type> | tag:<id> | related:<obj>
    | obscure:<glob> ...]

type :: node | primitive | group | clone | ms | rsc_template
    | location | colocation | order | rsc_ticket
    | property | rsc_defaults | op_defaults
    | fencing_topology | role | user | acl_target | tag

show webapp
show type:primitive
show xml tag:db tag:fs
show related:webapp
show type:primitive obscure:passwd
Define a resource tag. A tag is an id referring to one or more resources,
without implying any constraints between the tagged resources. This can be
useful for grouping conceptually related resources.
Usage:
Example:
tag <tag-name>: <rsc> [<rsc> ...]
tag <tag-name> <rsc> [<rsc> ...]

tag web: p-webserver p-vip
tag ips server-vip admin-vip
The specified template is loaded into the editor. It’s up to the user to
make a good CRM configuration out of it. See also the template section.
Usage:
Example:
template [xml] url
template two-apaches.txt
Attempts to upgrade the CIB to validate with the current version. Commonly, this
is required if the error CIB not supported occurs. It typically means that the
active CIB version is coming from an older release.
As a safety precaution, the force argument is required if the validation-with
attribute is set to anything other than 0.6. Thus in most cases, it is
required.
Usage:
Example:
upgrade [force]
upgrade force
Users which normally cannot view or manage cluster configuration can be allowed
access to parts of the CIB. The access is defined by a set of read, write, and
deny rules as in role definitions or by referencing roles. The latter is
considered best practice.
For more information on rule expressions, see Syntax: Rule expressions.
Usage:
Example:
user <uid> {roles|rules}

roles :: role:<role-ref> [role:<role-ref> ...]
rules :: rule [rule ...]

user joe \
    role:app1_admin \
    role:read_all
Call the validate-all action for the resource, if possible.
Limitations:
•The resource agent must implement the
validate-all action.
•The current user must be root.
•The primitive resource must not use nvpair
references.
Usage:
validate-all <rsc>
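For example, to check the configured parameters of a primitive named vip1 (an
illustrative name) whose agent implements the validate-all action:

```
crm(live)configure# validate-all vip1
```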
Verify the contents of the CIB which would be committed.
Usage:
verify
Even though we promised no XML, it may happen, though hopefully very seldom,
that an element from the CIB cannot be rendered in the configuration language.
In that case, the element will be shown as raw XML, prefixed by this command.
That element can then be edited like any other. If the shell finds that it can
digest the element after the change, it will be converted into the normal
configuration language. Otherwise, there is no need to use xml for
configuration.
Usage:
xml <xml>
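A hypothetical sketch: an element that the shell could not render would be kept,
and could be re-entered, as raw XML, for example:

```
xml <rsc_order id="o-raw" first="A" then="B" kind="Mandatory"/>
```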
template - Import configuration from templates¶
User may be assisted in the cluster configuration by templates prepared in advance. Templates consist of a typical ready configuration which may be edited to suit particular user needs. This command enters a template level where additional commands for configuration/template management are available.
Copy the current or given configuration to the current CIB. By default, the CIB
is replaced, unless the method is set to "update".
Usage:
apply [<method>] [<config>]

method :: replace | update
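For example, assuming a configuration named vip was created earlier with the
new command (the name is illustrative), its contents can be merged into the
CIB with:

```
apply update vip
```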
Remove a configuration. The loaded (active) configuration may be removed by
force.
Usage:
delete <config> [force]
Edit current or given configuration using your favourite editor.
Usage:
edit [<config>]
When called with no argument, lists existing templates and configurations.
Given the argument templates, lists the available templates.
Given the argument configs, lists the available configurations.
Usage:
list [templates|configs]
Load an existing configuration. Further edit, show, and apply commands will
refer to this configuration.
Usage:
load <config>
Create a new configuration from one or more templates. Note that configurations
and templates are kept in different places, so it is possible to have a
configuration name equal to a template name.
If you already know which parameters are required, you can set them directly on
the command line.
The parameter name id is set by default to the name of the configuration.
If no parameters are being set and you don’t want a particular name for
your configuration, you can call this command with a template name as the only
parameter. A unique configuration name based on the template name will be
generated.
Usage:
Example:
new [<config>] <template> [<template> ...] [params name=value ...]
new vip virtual-ip
new bigfs ocfs2 params device=/dev/sdx8 directory=/bigfs
new apache
Process the current or given configuration and display the result.
Usage:
show [<config>]
cibstatus - CIB status management and editing¶
The status section of the CIB keeps the current status of nodes and resources.
It is modified only on events, i.e. when some resource operation is run or
node status changes. For obvious reasons, the CRM has no user interface with
which it is possible to affect the status section. From the user’s point of
view, the status section is essentially a read-only part of the CIB. The
current status is never even written to disk, though it is available in the PE
(Policy Engine) input files which represent the history of cluster motions.
The current status may be read using the cibadmin -Q command.

It may sometimes be of interest to see how status changes would affect the
Policy Engine. The set of cibstatus level commands allow the user to load
status sections from various sources and then insert or modify resource
operations or change nodes’ state. The effect of those changes may then be
observed by running the ptest command at the configure level or the simulate
and run commands at this level. The ptest runs with the user edited CIB
whereas the latter two commands run with the CIB which was loaded along with
the status section.

The simulate and run commands as well as all status modification commands are
implemented using crm_simulate(8).
Load a status section from a file, a shadow CIB, or the running cluster. By
default, the current (live) status section is modified. Note that if the live
status section is modified it is not going to be updated if the cluster status
changes, because that would overwrite the user changes. To make crm drop
changes and resume use of the running cluster status, run load live.
All CIB shadow configurations contain the status section which is a snapshot of
the status section taken at the time the shadow was created. Obviously, this
status section doesn’t have much to do with the running cluster status,
unless the shadow CIB has just been created. Therefore, the ptest command by
default uses the running cluster status section.
Usage:
Example:
load {<file>|shadow:<cib>|live}
load bug-12299.xml
load shadow:test1
Change the node status. It is possible to throw a node out of the cluster, make
it a member, or set its state to unclean.
online
    Set the node_state crmd attribute to online and the expected and join
    attributes to member. The effect is that the node becomes a cluster
    member.
offline
    Set the node_state crmd attribute to offline and the expected attribute to
    empty. This makes the node cleanly removed from the cluster.
unclean
    Set the node_state crmd attribute to offline and the expected attribute to
    member. In this case the node has unexpectedly disappeared.
Usage:
Example:
node <node> {online|offline|unclean}
node xen-b unclean
Edit the outcome of a resource operation. This way you can tell CRM that it ran
an operation and that the resource agent returned a certain exit code. It is
also possible to change the operation’s status. In case the operation
status is set to something other than done, the exit code is effectively
ignored.
Usage:
Example:
op <operation> <resource> <exit_code> [<op_status>] [<node>]

operation :: probe | monitor[:<n>] | start | stop | promote
    | demote | notify | migrate_to | migrate_from
exit_code :: <rc> | success | generic | args | unimplemented | perm
    | installed | configured | not_running | master | failed_master
op_status :: pending | done | cancelled | timeout | notsupported | error
n :: the monitor interval in seconds; if omitted, the first
    recurring operation is referenced
rc :: numeric exit code in range 0..9

op start d1 xen-b generic
op start d1 xen-b 1
op monitor d1 xen-b not_running
op stop d1 xen-b 0 timeout
Show the origin of the status section currently in use. This essentially shows
the latest load argument.
Usage:
origin
Set the quorum value.
Usage:
Example:
quorum <bool>
quorum false
Run the policy engine with the edited status section.
Add a string of v characters to increase verbosity. Specify scores to see
allocation scores also. utilization turns on information about the remaining
capacity of nodes.
If you have graphviz installed and an X11 session, dotty(1) is run to display
the changes graphically.
Usage:
Example:
run [nograph] [v...] [scores] [utilization]
run
The current internal status section with whatever modifications were performed
can be saved to a file or shadow CIB.
If the file exists and contains a complete CIB, only the status section is going
to be replaced and the rest of the CIB will remain intact. Otherwise, the
current user edited configuration is saved along with the status section.
Note that all modifications are saved in the source file as soon as they are
run.
Usage:
Example:
save [<file>|shadow:<cib>]
save bug-12299.xml
Show the current status section in XML format. Brace yourself for some
unreadable output. Add the changed option to get a human-readable output of
all changes.
Usage:
show [changed]
Run the policy engine with the edited status section and simulate the
transition.
Add a string of v characters to increase verbosity. Specify scores to see
allocation scores also. utilization turns on information about the remaining
capacity of nodes.
If you have graphviz installed and an X11 session, dotty(1) is run to display
the changes graphically.
Usage:
Example:
simulate [nograph] [v...] [scores] [utilization]
simulate
Modify the ticket status. Tickets can be granted and revoked. Granted tickets
can be activated or put in standby.
Usage:
Example:
ticket <ticket> {grant|revoke|activate|standby}
ticket ticketA grant
assist - Configuration assistant¶
The assist sublevel is a collection of helper commands that create or modify resources and constraints, to simplify the creation of certain configurations. For more information on individual commands, see the help text for those commands.
This command takes a list of primitives as argument, and creates a new
rsc_template for these primitives. It can only do this if the primitives do
not already share a template and are of the same type.
Usage:
template primitive-1 primitive-2 primitive-3
A colocation between a group of resources says that the resources should be
located together, but it also means that those resources are dependent on each
other. If one of the resources fails, the others will be restarted.
If this is not desired, it is possible to circumvent it: by placing the
resources in a non-sequential set and colocating the set with a dummy resource
which is not monitored, the resources will be placed together but will have no
further dependency on each other.
This command creates both the constraint and the dummy resource needed for such
a colocation.
Usage:
weak-bond resource-1 resource-2
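The generated configuration is roughly equivalent to the following sketch; the
dummy resource and constraint ids here are hypothetical, and the names actually
generated by the command may differ:

```
# unmonitored placeholder resource
primitive dummy-weak-bond ocf:heartbeat:Dummy
# non-sequential set colocated with the dummy: resource-1 and
# resource-2 land on the same node without depending on each other
colocation weak-bond-constraint inf: ( resource-1 resource-2 ) dummy-weak-bond
```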
maintenance - Maintenance mode commands¶
Maintenance mode commands are commands that manipulate resources directly without going through the cluster infrastructure. Therefore, it is essential to ensure that the cluster does not attempt to monitor or manipulate the resources while these commands are being executed. To ensure this, these commands require that maintenance mode is set either for the particular resource, or for the whole cluster.
Invokes the given action for the resource. This is done directly via the
resource agent, so the command must be issued while the cluster or the
resource is in maintenance mode.
Unless the action is start or monitor, the action must be invoked on the same
node as where the resource is running. If the resource is running on multiple
nodes, the command will fail.
To use SSH for executing resource actions on multiple nodes, append ssh after
the action name. This requires SSH access to be configured between the nodes
and the parallax python package to be installed.
Usage:
action <rsc> <action>
action <rsc> <action> ssh
Example:
action webserver reload
action webserver monitor ssh
off¶
Disables maintenance mode, either for the whole cluster or for the given
resource.
Usage:
off
off <rsc>
Example:
off rsc1
on¶
Enables maintenance mode, either for the whole cluster or for the given
resource.
Usage:
on
on <rsc>
Example:
on rsc1
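Putting the maintenance commands together, a typical session might look like this (a sketch; `webserver` is a hypothetical resource name):

```
crm(live)maintenance# on webserver
crm(live)maintenance# action webserver reload
crm(live)maintenance# off webserver
```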
history - Cluster history¶
Examining Pacemaker’s history is a particularly involved task. The number of subsystems to consider, the complexity of the configuration, and the variety of information sources, most of which are not exactly human readable, keep analyzing resource or node problems accessible only to the most knowledgeable. Or, depending on the point of view, to the most persistent. The following set of commands has been devised in the hope of making cluster history more accessible.
Of course, looking at all history could be time consuming regardless of how good the tools at hand are. Therefore, one should first specify which period to analyze. If not otherwise specified, the last hour is considered.
Logs and other relevant information are collected using crm report. Since this process takes some time and fresh logs are always needed, the information is refreshed in a much faster way using the python parallax module. If python-parallax is not found on the system, examining a live cluster is still possible, though not as comfortable.
Apart from examining a live cluster, events may be retrieved from a report generated by crm report (see also the -H option). In that case we assume that the period covered by the whole report needs to be investigated. Of course, it is still possible to further reduce the time range.
If you have discovered an issue that you want to show someone else, you can use the session pack command to save the current session as a tarball, similar to those generated by crm report. In order to minimize the size of the tarball, and to make it easier for others to find the interesting events, it is recommended to limit the time frame which the saved session covers. This can be done using the limit command (example below). It is also possible to name the saved session using the session save command.
Example:
crm(live)history# limit "Jul 18 12:00" "Jul 18 12:30"
crm(live)history# session save strange_restart
crm(live)history# session pack
Report saved in .../strange_restart.tar.bz2
crm(live)history#
detail¶
How much detail to show from the logs. Valid detail levels are either 0 or 1,
where 1 is the highest detail level. The default detail level is 0.
Usage:
detail <detail_level>
detail_level :: small integer (defaults to 0)
Example:
detail 1
diff¶
A transition represents a change in cluster configuration or state. Use diff to
see what has changed between two transitions.
If you want to specify the current cluster configuration and status, use the
string live.
Normally, the first transition specified should be the one which is older, but
we are not going to enforce that.
Note that a single configuration update may result in more than one transition.
Usage:
diff <pe> <pe> [status] [html]
pe :: <number>|<index>|<file>|live
Examples:
diff 2066 2067
diff pe-input-2080.bz2 live status
events¶
By analysing the log output and looking for particular patterns, the events
command helps sift through the logs to find when particular events, such as
resources changing state or node failures, may have occurred.
This can be used to generate a combined list of events from all nodes.
Usage:
events
Example:
events
exclude¶
If a log is infested with irrelevant messages, those messages may be excluded by
specifying a regular expression. The regular expressions used are Python
extended. This command is additive. To drop all regular expressions, use
exclude clear. Run exclude only to see the current list of regular
expressions. Excludes are saved along with the history sessions.
Usage:
exclude [<regex>|clear]
Example:
exclude kernel.*ocfs2
graph¶
Create a graphviz graphical layout from the PE file (the transition). Every
transition contains the cluster configuration which was active at the time.
See also generating a directed graph from the configuration.
Usage:
graph <pe> [<gtype> [<file> [<img_format>]]]
gtype :: dot
img_format :: dot output format (see the -T option)
Examples:
graph -1
graph 322 dot clu1.conf.dot
graph 322 dot clu1.conf.svg svg
info¶
The info command provides a summary of the information source, which can be
either a live cluster snapshot or a previously generated report.
Usage:
info
Example:
info
latest¶
The latest command shows a bit of recent history: more precisely, whatever
happened since the last cluster change (the latest transition). If the
transition is still running, the shell will first wait until it finishes.
Usage:
latest
Example:
latest
limit¶
This command can be used to modify the time span to examine. All history
commands look at events within a certain time span.
For the live source, the default time span is the last hour.
There is no time span limit for the hb_report source.
The time period is parsed by the dateutil python module. It covers a wide range
of date formats. For instance:
Examples:
•3:00 (today at 3am)
•15:00 (today at 3pm)
•2010/9/1 2pm (September 1st 2010 at 2pm)
For more examples of valid time/date statements, please refer to the
python-dateutil documentation:
•dateutil.readthedocs.org
If the dateutil module is not available, then the time is parsed using strptime
and only the format printed by date(1) is allowed:
•Tue Sep 15 20:46:27 CEST 2010
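To illustrate the difference between the two parsing strategies, here is a minimal Python sketch (an illustration only, not crmsh's actual code) that falls back through a few explicit strptime formats the way a fallback parser must. dateutil needs no such format list, and the timezone field of the date(1) format is dropped here for simplicity:

```python
from datetime import datetime

# Hypothetical format list covering the examples above (illustrative only).
FORMATS = [
    "%H:%M",                  # 3:00, 15:00
    "%Y/%m/%d %I%p",          # 2010/9/1 2pm
    "%a %b %d %H:%M:%S %Y",   # Tue Sep 15 20:46:27 2010 (timezone stripped)
]

def parse_time(text):
    """Try each known format in turn, as a strptime-based fallback must."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized time: %r" % text)
```

A dateutil-based parser would instead accept all of these (and many more) with a single `dateutil.parser.parse(text)` call.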
Usage:
limit [<from_time>] [<to_time>]
Examples:
limit 10:15
limit 15h22m 16h
limit "Sun 5 20:46" "Sun 5 22:00"
log¶
Show messages logged on one or more nodes. Leaving out a node name produces
combined logs of all nodes. Messages are sorted by time and, if the terminal
emulation supports it, displayed in different colours depending on the node,
to allow for easier reading.
The sorting key is the timestamp as written by syslog, which normally has a
maximum resolution of one second. Messages generated by events which share the
same timestamp may therefore not be sorted in the order in which they actually
happened. Such close events may occur fairly often.
Usage:
log [<node> [<node> ...] ]
Example:
log node-a
node¶
Show important events that happened on a node. Important events are node lost
and join, standby and online, and fence. Use either node names or extended
regular expressions.
Usage:
node <node> [<node> ...]
Example:
node node1
peinputs¶
Every event in the cluster results in generating one or more Policy Engine (PE)
files. These files describe future motions of resources. The files are listed
as full paths in the current report directory. Add v to also see the creation
time stamps.
Usage:
peinputs [{<range>|<number>} ...] [v]
range :: <n1>:<n2>
Examples:
peinputs
peinputs 440:444 446
peinputs v
refresh¶
This command makes sense only for the live source and makes crm collect the
latest logs and other relevant information. If you want to make a completely
new report, specify force.
Usage:
refresh [force]
resource¶
Show actions and any failures that happened on all specified resources on all
nodes. Normally, one gives resource names as arguments, but it is also
possible to use extended regular expressions. Note that group, clone, and
master/slave names are never logged. The resource command expands all of
these appropriately, so that clone instances and resources which are part of
a group are shown.
Usage:
resource <rsc> [<rsc> ...]
Examples:
resource bigdb public_ip
resource my_.*_db2
resource ping_clone
session¶
Sometimes you may want to get back to examining a particular history period or
bug report. In order to make that easier, the current settings can be saved
and later retrieved.
If the current history being examined is coming from a live cluster the logs, PE
inputs, and other files are saved too, because they may disappear from nodes.
For the existing reports coming from hb_report, only the directory location is
saved (so as not to waste space).
A history session may also be packed into a tarball which can then be sent to
support.
Leave out subcommand to see the current session.
Usage:
session [{save|load|delete} <name> | pack [<name>] | update | list]
Examples:
session save bnc966622
session load rsclost-2
session list
setnodes¶
In case the host this program runs on is not part of the cluster, it is
necessary to set the list of nodes.
Usage:
setnodes node <node> [<node> ...]
Example:
setnodes node_a node_b
show¶
Every transition is saved as a PE file. Use this command to render that PE file
either as configuration or status. The configuration output is the same as that
of crm configure show.
Usage:
show <pe> [status]
pe :: <number>|<index>|<file>|live
Examples:
show 2066
show pe-input-2080.bz2 status
source¶
Events to be examined can come from the current cluster or from a report
generated by hb_report. This command sets the source. source live sets the
source to the running cluster and system logs. If no source is specified, the
current source information is printed.
In case a report source is specified as a file reference, the file is unpacked
in the directory where it resides. This directory is not removed on exit.
Usage:
source [<dir>|<file>|live]
Examples:
source live
source /tmp/customer_case_22.tar.bz2
source /tmp/customer_case_22
source
transition¶
This command will print the actions planned by the PE and run graphviz (dotty)
to display a graphical representation of the transition. Of course, for the
latter an X11 session is required. This command invokes ptest(8) in the
background.
The showdot subcommand runs graphviz (dotty) to display a graphical
representation of the .dot file which has been included in the report.
Essentially, it shows the calculation produced by the pengine which is installed
on the node where the report was produced. In the optimal case, this output
should not differ from the one produced by the locally installed pengine.
The log subcommand shows the full log for the duration of the transition.
A transition can also be saved to a CIB shadow for further analysis or use with
cib or configure commands (use the save subcommand). The shadow file name
defaults to the name of the PE input file.
If the PE input file number is not provided, it defaults to the last one, i.e.
the last transition. The last transition can also be referenced with number 0.
If the number is negative, then the corresponding transition relative to the
last one is chosen.
If there are warning and error PE input files or different nodes were the DC in
the observed timeframe, it may happen that PE input file numbers collide. In
that case provide some unique part of the path to the file.
After the ptest output, logs about events that happened during the transition
are printed.
The tags subcommand scans the logs for the transition and returns a list of key
events during that transition. For example, the tag error will be returned if
there are any errors logged during the transition.
Usage:
transition [<number>|<index>|<file>] [nograph] [v...] [scores] [actions] [utilization]
transition showdot [<number>|<index>|<file>]
transition log [<number>|<index>|<file>]
transition save [<number>|<index>|<file> [name]]
transition tags [<number>|<index>|<file>]
Examples:
transition
transition 444
transition -1
transition pe-error-3.bz2
transition node-a/pengine/pe-input-2.bz2
transition showdot 444
transition log
transition save 0 enigma-22
transitions¶
A transition represents a change in cluster configuration or state. This command
lists the transitions in the current timeframe.
Usage:
transitions
Example:
transitions
wdiff¶
A transition represents a change in cluster configuration or state. Use wdiff to
see what has changed between two transitions, shown as word differences on a
line-by-line basis.
If you want to specify the current cluster configuration and status, use the
string live.
Normally, the first transition specified should be the one which is older, but
we are not going to enforce that.
Note that a single configuration update may result in more than one transition.
Usage:
wdiff <pe> <pe> [status]
pe :: <number>|<index>|<file>|live
Examples:
wdiff 2066 2067
wdiff pe-input-2080.bz2 live status
report¶
Interface to a tool for creating a cluster report. A report is an archive containing log files, configuration files, system information and other relevant data for a given time period. This is a useful tool for collecting data to attach to bug reports, or for detecting the root cause of errors resulting in resource failover, for example.
See crmsh_hb_report(8) for more details on arguments, or call crm report -h.
Usage:
report -f {time|"cts:"testnum} [-t time] [-u user] [-l file] [-n nodes] [-E files] [-p patt] [-L patt] [-e prog] [-MSDZAVsvhd] [dest]
Examples:
report -f 2pm report_1
report -f "2007/9/5 12:30" -t "2007/9/5 14:00" report_2
report -f 1:00 -t 3:00 -l /var/log/cluster/ha-debug report_3
report -f "09sep07 2:00" -u hbadmin report_4
report -f 18:00 -p "usern.*" -p "admin.*" report_5
report -f cts:133 ctstest_133
end (cd, up)¶
The end command ends the current level and moves the user to the parent level. This command is available everywhere.
Usage:
end
help¶
The help command prints help for the current level or for the specified topic (command). This command is available everywhere.
Usage:
help [<topic>]
quit (exit, bye)¶
Leave the program.
BUGS¶
Even though all sensible configurations (and most of those that are not) are going to be supported by the crm shell, I suspect that it may still happen that certain XML constructs confuse the tool. When that happens, please file a bug report. The crm shell will not try to update the objects it does not understand. Of course, it is always possible to edit such objects in the XML format.
AUTHORS¶
Dejan Muhamedagic <dejan@suse.de>, Kristoffer Gronlund <kgronlund@suse.com>, and many others.
SEE ALSO¶
crm_resource(8), crm_attribute(8), crm_mon(8), cib_shadow(8), ptest(8), dotty(1), crm_simulate(8), cibadmin(8)
COPYING¶
Copyright (C) 2008-2013 Dejan Muhamedagic. Copyright (C) 2013 Kristoffer Gronlund. Free use of this software is granted under the terms of the GNU General Public License (GPL).
06/07/2017 | crm 2.3.2 |