.\" Man page generated from reStructuredText. . .TH "CEPH-DEPLOY" "8" "May 27, 2021" "dev" "Ceph" .SH NAME ceph-deploy \- Ceph deployment tool . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .SH SYNOPSIS .nf \fBceph\-deploy\fP \fBnew\fP [\fIinitial\-monitor\-node(s)\fP] .fi .sp .nf \fBceph\-deploy\fP \fBinstall\fP [\fIceph\-node\fP] [\fIceph\-node\fP\&...] .fi .sp .nf \fBceph\-deploy\fP \fBmon\fP \fIcreate\-initial\fP .fi .sp .nf \fBceph\-deploy\fP \fBosd\fP \fIcreate\fP \fI\-\-data\fP \fIdevice\fP \fIceph\-node\fP .fi .sp .nf \fBceph\-deploy\fP \fBadmin\fP [\fIadmin\-node\fP][\fIceph\-node\fP\&...] .fi .sp .nf \fBceph\-deploy\fP \fBpurgedata\fP [\fIceph\-node\fP][\fIceph\-node\fP\&...] .fi .sp .nf \fBceph\-deploy\fP \fBforgetkeys\fP .fi .sp .SH DESCRIPTION .sp \fBceph\-deploy\fP is a tool which allows easy and quick deployment of a Ceph cluster without involving complex and detailed manual configuration. It uses ssh to gain access to other Ceph nodes from the admin node, sudo for administrator privileges on them and the underlying Python scripts automates the manual process of Ceph installation on each node from the admin node itself. It can be easily run on an workstation and doesn\(aqt require servers, databases or any other automated tools. With \fBceph\-deploy\fP, it is really easy to set up and take down a cluster. However, it is not a generic deployment tool. It is a specific tool which is designed for those who want to get Ceph up and running quickly with only the unavoidable initial configuration settings and without the overhead of installing other tools like \fBChef\fP, \fBPuppet\fP or \fBJuju\fP\&. Those who want to customize security settings, partitions or directory locations and want to set up a cluster following detailed manual steps, should use other tools i.e, \fBChef\fP, \fBPuppet\fP, \fBJuju\fP or \fBCrowbar\fP\&. .sp With \fBceph\-deploy\fP, you can install Ceph packages on remote nodes, create a cluster, add monitors, gather/forget keys, add OSDs and metadata servers, configure admin hosts or take down the cluster. .SH COMMANDS .SS new .sp Start deploying a new cluster and write a configuration file and keyring for it. It tries to copy ssh keys from admin node to gain passwordless ssh to monitor node(s), validates host IP, creates a cluster with a new initial monitor node or nodes for monitor quorum, a ceph configuration file, a monitor secret keyring and a log file for the new cluster. It populates the newly created Ceph configuration file with \fBfsid\fP of cluster, hostnames and IP addresses of initial monitor members under \fB[global]\fP section. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy new [MON][MON...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [MON] is the initial monitor hostname (short hostname i.e, \fBhostname \-s\fP). 
.sp Other options like \fI\%\-\-no\-ssh\-copykey\fP, \fI\%\-\-fsid\fP, \fI\%\-\-cluster\-network\fP and \fI\%\-\-public\-network\fP can also be used with this command. .sp If more than one network interface is used, the \fBpublic network\fP setting has to be added under the \fB[global]\fP section of the Ceph configuration file. If the public subnet is given, the \fBnew\fP command will choose the IP from the remote host that falls within the subnet range. The public network can also be added at runtime using the \fI\%\-\-public\-network\fP option with the command, as mentioned above.
.SS install .sp Install Ceph packages on remote hosts. As a first step it installs \fByum\-plugin\-priorities\fP on the admin and other nodes using passwordless ssh and sudo so that Ceph packages from the upstream repository get higher priority. It then detects the platform and distribution for the hosts and installs Ceph normally by downloading distro\-compatible packages, provided an adequate repo for Ceph has already been added. The \fB\-\-release\fP flag is used to get the latest release for installation. During detection of platform and distribution before installation, if it finds the \fBdistro.init\fP to be \fBsysvinit\fP (Fedora, CentOS/RHEL, etc.), it doesn\(aqt allow installation with a custom cluster name and uses the default name \fBceph\fP for the cluster. .sp If the user explicitly specifies a custom repo URL with \fI\%\-\-repo\-url\fP for installation, anything detected from the configuration will be overridden and the custom repository location will be used for the installation of Ceph packages. If required, valid custom repositories are also detected and installed. In case of installation from a custom repo, a boolean is used to determine the logic needed to proceed with the custom repo installation. A custom repo install helper is used that goes through config checks to retrieve repos (and any extra repos defined) and installs them. \fBcd_conf\fP is the object built from \fBargparse\fP that holds the flags and information needed to determine what metadata from the configuration is to be used. .sp A user can also opt to install only the repository, without installing Ceph and its dependencies, by using the \fI\%\-\-repo\fP option. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy install [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is/are the host node(s) where Ceph is to be installed. .sp The option \fB\-\-release\fP is used to install a release known by its CODENAME (default: firefly). .sp Other options like \fI\%\-\-testing\fP, \fI\%\-\-dev\fP, \fI\%\-\-adjust\-repos\fP, \fI\%\-\-no\-adjust\-repos\fP, \fI\%\-\-repo\fP, \fI\%\-\-local\-mirror\fP, \fI\%\-\-repo\-url\fP and \fI\%\-\-gpg\-url\fP can also be used with this command.
.SS mds .sp Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and the \fBmds\fP command is used to create one on the desired host node. It uses the subcommand \fBcreate\fP to do so. \fBcreate\fP first gets the hostname and distro information of the desired mds host. It then tries to read the \fBbootstrap\-mds\fP key for the cluster and deploy it to the desired host. The key generally has a format of \fB{cluster}.bootstrap\-mds.keyring\fP\&. If it doesn\(aqt find a keyring, it runs \fBgatherkeys\fP to get the keyring. It then creates an mds on the desired host under the path \fB/var/lib/ceph/mds/\fP in \fB/var/lib/ceph/mds/{cluster}\-{name}\fP format and a bootstrap keyring under \fB/var/lib/ceph/bootstrap\-mds/\fP in \fB/var/lib/ceph/bootstrap\-mds/{cluster}.keyring\fP format. It then runs appropriate commands based on \fBdistro.init\fP to start the \fBmds\fP\&. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy mds create [HOST[:DAEMON\-NAME]] [HOST[:DAEMON\-NAME]...] .ft P .fi .UNINDENT .UNINDENT .sp The [DAEMON\-NAME] is optional.
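.sp For example, to deploy a metadata server on a single host (the hostname \fBnode1\fP below is purely illustrative): .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy mds create node1 .ft P .fi .UNINDENT .UNINDENT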
.SS mon .sp Deploy Ceph monitors on remote hosts. \fBmon\fP makes use of certain subcommands to deploy Ceph monitors on other nodes. .sp Subcommand \fBcreate\-initial\fP deploys monitors for the hosts defined in \fBmon initial members\fP under the \fB[global]\fP section of the Ceph configuration file, waits until they form quorum and then gathers the keys, reporting the monitor status along the way. If the monitors don\(aqt form quorum the command will eventually time out. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy mon create\-initial .ft P .fi .UNINDENT .UNINDENT .sp Subcommand \fBcreate\fP is used to deploy Ceph monitors by explicitly specifying the hosts which are desired to be made monitors. If no hosts are specified, it will default to the \fBmon initial members\fP defined under the \fB[global]\fP section of the Ceph configuration file. \fBcreate\fP first detects the platform and distro of the desired hosts and checks if the hostname is compatible for deployment. It then uses the monitor keyring initially created by the \fBnew\fP command and deploys the monitor on the desired host. If multiple hosts were specified during the \fBnew\fP command, i.e., if there are multiple hosts in \fBmon initial members\fP and multiple keyrings were created, then a concatenated keyring is used for the deployment of monitors. In this process a keyring parser is used which looks for \fB[entity]\fP sections in monitor keyrings and returns a list of those sections. A helper is then used to collect all keyrings into a single blob that will be injected into the monitors with \fI\%\-\-mkfs\fP on remote nodes. All keyring files are concatenated in a directory ending with \fB\&.keyring\fP\&. During this process the helper uses the list of sections returned by the keyring parser to check whether an entity is already present in a keyring and, if not, adds it. The concatenated keyring is used for the deployment of monitors to the desired hosts. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy mon create [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is the hostname of the desired monitor host(s).
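.sp For example, to deploy monitors on two hosts (the hostnames \fBmon2\fP and \fBmon3\fP below are purely illustrative): .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy mon create mon2 mon3 .ft P .fi .UNINDENT .UNINDENT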
.sp Subcommand \fBadd\fP is used to add a monitor to an existing cluster. It first detects the platform and distro of the desired host and checks if the hostname is compatible for deployment. It then uses the monitor keyring, ensures the configuration for the new monitor host and adds the monitor to the cluster. If a section for the monitor exists and defines a monitor address, that address will be used; otherwise it will fall back to resolving the hostname to an IP. If \fI\%\-\-address\fP is used, it will override all other options. After adding the monitor to the cluster, it gives the monitor some time to start. It then looks for any monitor errors and checks the monitor status. Monitor errors arise if the monitor is not added to \fBmon initial members\fP, if it doesn\(aqt exist in the \fBmonmap\fP or if neither the \fBpublic_addr\fP nor the \fBpublic_network\fP key was defined for monitors. Under such conditions, monitors may not be able to form quorum. Monitor status tells whether the monitor is up and running normally. The status is checked by running \fBceph daemon mon.hostname mon_status\fP on the remote end, which provides the output and returns a boolean status of what is going on. \fBFalse\fP means the monitor is not healthy even if it is up and running, while \fBTrue\fP means the monitor is up and running correctly. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy mon add [HOST] ceph\-deploy mon add [HOST] \-\-address [IP] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor node. Please note that, unlike other \fBmon\fP subcommands, only one node can be specified at a time.
.sp Subcommand \fBdestroy\fP is used to completely remove monitors on remote hosts. It takes hostnames as arguments. It stops the monitor, verifies that the \fBceph\-mon\fP daemon really stopped, creates an archive directory \fBmon\-remove\fP under \fB/var/lib/ceph/\fP, archives the old monitor directory in \fB{cluster}\-{hostname}\-{stamp}\fP format in it and removes the monitor from the cluster by running the \fBceph remove...\fP command. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy mon destroy [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is the hostname of the monitor that is to be removed.
.SS gatherkeys .sp Gather authentication keys for provisioning new nodes. It takes hostnames as arguments. It checks for and fetches the \fBclient.admin\fP keyring, the monitor keyring and the \fBbootstrap\-mds\fP/\fBbootstrap\-osd\fP keyrings from the monitor host. These authentication keys are used when new \fBmonitors/OSDs/MDS\fP are added to the cluster. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy gatherkeys [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is the hostname of the monitor from which keys are to be pulled.
.SS disk .sp Manage disks on a remote host. It actually triggers the \fBceph\-volume\fP utility and its subcommands to manage disks. .sp Subcommand \fBlist\fP lists disk partitions and Ceph OSDs. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy disk list HOST .ft P .fi .UNINDENT .UNINDENT .sp Subcommand \fBzap\fP zaps/erases/destroys a device\(aqs partition table and contents. It actually uses \fBceph\-volume lvm zap\fP remotely, alternatively allowing someone to remove the Ceph metadata from the logical volume.
.SS osd .sp Manage OSDs by preparing a data disk on the remote host. \fBosd\fP makes use of certain subcommands for managing OSDs. .sp Subcommand \fBcreate\fP prepares a device for a Ceph OSD. It first checks against multiple OSDs getting created and warns about the possibility of more than the recommended number, which would cause issues with the maximum allowed PIDs in a system. It then reads the bootstrap\-osd key for the cluster, or writes the bootstrap key if it is not found. It then uses the \fBceph\-volume\fP utility\(aqs \fBlvm create\fP subcommand to prepare the disk (and journal, if using filestore) and deploy the OSD on the desired host. Once prepared, it gives the OSD some time to start, checks for any possible errors and, if found, reports them to the user. .sp Bluestore Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy osd create \-\-data DISK HOST .ft P .fi .UNINDENT .UNINDENT .sp Filestore Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy osd create \-\-data DISK \-\-journal JOURNAL HOST .ft P .fi .UNINDENT .UNINDENT .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 For other flags available, please see the man page or the \-\-help menu of ceph\-deploy osd create .UNINDENT .UNINDENT
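.sp For example, to create a bluestore OSD on a single host (the hostname \fBnode1\fP and the device \fB/dev/vdb\fP below are purely illustrative): .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy osd create \-\-data /dev/vdb node1 .ft P .fi .UNINDENT .UNINDENT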
.sp Subcommand \fBlist\fP lists devices associated with Ceph as part of an OSD. It relies on the output of \fBceph\-volume lvm list\fP, which maps OSDs to devices and reports other interesting information about the OSD setup. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy osd list HOST .ft P .fi .UNINDENT .UNINDENT
.SS admin .sp Push the configuration and the \fBclient.admin\fP key to a remote host. It takes the \fB{cluster}.client.admin.keyring\fP from the admin node and writes it under the \fB/etc/ceph\fP directory of the desired node. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy admin [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is the desired host to be configured for Ceph administration.
.SS config .sp Push/pull a configuration file to/from a remote host. The \fBpush\fP subcommand takes the configuration file from the admin host and writes it to the remote host under the \fB/etc/ceph\fP directory. The \fBpull\fP subcommand does the opposite, i.e., it pulls the configuration file under the \fB/etc/ceph\fP directory of the remote host to the admin node. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy config push [HOST] [HOST...] ceph\-deploy config pull [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is the hostname of the node to which the config file will be pushed or from which it will be pulled.
.SS uninstall .sp Remove Ceph packages from remote hosts. It detects the platform and distro of the selected host and uninstalls Ceph packages from it. However, some dependencies like \fBlibrbd1\fP and \fBlibrados2\fP will not be removed because they can cause issues with \fBqemu\-kvm\fP\&. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy uninstall [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is the hostname of the node from which Ceph will be uninstalled.
.SS purge .sp Remove Ceph packages from remote hosts and purge all data. It detects the platform and distro of the selected host, uninstalls Ceph packages and purges all data. However, some dependencies like \fBlibrbd1\fP and \fBlibrados2\fP will not be removed because they can cause issues with \fBqemu\-kvm\fP\&. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy purge [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is the hostname of the node from which Ceph will be purged.
.SS purgedata .sp Purge (delete, destroy, discard, shred) any Ceph data from \fB/var/lib/ceph\fP\&. Once it detects the platform and distro of the desired host, it first checks whether Ceph is still installed on the selected host; if it is installed, it won\(aqt purge data from it. If Ceph is already uninstalled from the host, it tries to remove the contents of \fB/var/lib/ceph\fP\&. If this fails, OSDs are probably still mounted and need to be unmounted to continue. It unmounts the OSDs, tries to remove the contents of \fB/var/lib/ceph\fP again and checks for errors. It also removes the contents of \fB/etc/ceph\fP\&. Once all steps are successfully completed, all the Ceph data on the selected host has been removed. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy purgedata [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [HOST] is the hostname of the node from which Ceph data will be purged.
.SS forgetkeys .sp Remove authentication keys from the local directory. It removes all the authentication keys, i.e., the monitor keyring, the client.admin keyring, and the bootstrap\-osd and bootstrap\-mds keyrings, from the node. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy forgetkeys .ft P .fi .UNINDENT .UNINDENT
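.sp For example, a typical way to tear down an existing cluster is to run \fBpurge\fP, \fBpurgedata\fP and \fBforgetkeys\fP in sequence (the hostnames \fBnode1\fP, \fBnode2\fP and \fBnode3\fP below are purely illustrative): .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy purge node1 node2 node3 ceph\-deploy purgedata node1 node2 node3 ceph\-deploy forgetkeys .ft P .fi .UNINDENT .UNINDENT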
.SS pkg .sp Manage packages on remote hosts. It is used for installing or removing packages from remote hosts. The package names for installation or removal are to be specified after the command. Two options, \fI\%\-\-install\fP and \fI\%\-\-remove\fP, are used for this purpose. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .nf .ft C ceph\-deploy pkg \-\-install [PKGs] [HOST] [HOST...] ceph\-deploy pkg \-\-remove [PKGs] [HOST] [HOST...] .ft P .fi .UNINDENT .UNINDENT .sp Here, [PKGs] is a comma\-separated list of package names and [HOST] is the hostname of the remote node where packages are to be installed or removed.
.SH OPTIONS .INDENT 0.0 .TP .B \-\-address IP address of the host node to be added to the cluster. .UNINDENT .INDENT 0.0 .TP .B \-\-adjust\-repos Install packages modifying source repos. .UNINDENT .INDENT 0.0 .TP .B \-\-ceph\-conf Use (or reuse) a given \fBceph.conf\fP file. .UNINDENT .INDENT 0.0 .TP .B \-\-cluster Name of the cluster. .UNINDENT .INDENT 0.0 .TP .B \-\-dev Install a bleeding\-edge build from a Git branch or tag (default: master). .UNINDENT .INDENT 0.0 .TP .B \-\-cluster\-network Specify the (internal) cluster network. .UNINDENT .INDENT 0.0 .TP .B \-\-dmcrypt Encrypt [data\-path] and/or journal devices with \fBdm\-crypt\fP\&. .UNINDENT .INDENT 0.0 .TP .B \-\-dmcrypt\-key\-dir Directory where \fBdm\-crypt\fP keys are stored. .UNINDENT .INDENT 0.0 .TP .B \-\-install Comma\-separated package(s) to install on remote hosts. .UNINDENT .INDENT 0.0 .TP .B \-\-fs\-type Filesystem to use to format the disk \fB(xfs, btrfs or ext4)\fP\&. Note that support for btrfs and ext4 is no longer tested or recommended; please use xfs. .UNINDENT .INDENT 0.0 .TP .B \-\-fsid Provide an alternate FSID for \fBceph.conf\fP generation. .UNINDENT .INDENT 0.0 .TP .B \-\-gpg\-url Specify a GPG key URL to be used with custom repos (defaults to ceph.com). .UNINDENT .INDENT 0.0 .TP .B \-\-keyrings Concatenate multiple keyrings to be seeded on new monitors. .UNINDENT .INDENT 0.0 .TP .B \-\-local\-mirror Fetch packages and push them to hosts for a local repo mirror. .UNINDENT .INDENT 0.0 .TP .B \-\-mkfs Inject keys to MONs on remote nodes. .UNINDENT .INDENT 0.0 .TP .B \-\-no\-adjust\-repos Install packages without modifying source repos. .UNINDENT .INDENT 0.0 .TP .B \-\-no\-ssh\-copykey Do not attempt to copy ssh keys. .UNINDENT .INDENT 0.0 .TP .B \-\-overwrite\-conf Overwrite an existing conf file on the remote host (if present). .UNINDENT .INDENT 0.0 .TP .B \-\-public\-network Specify the public network for a cluster. .UNINDENT .INDENT 0.0 .TP .B \-\-remove Comma\-separated package(s) to remove from remote hosts. .UNINDENT .INDENT 0.0 .TP .B \-\-repo Install repo files only (skips package installation). .UNINDENT .INDENT 0.0 .TP .B \-\-repo\-url Specify a repo URL that mirrors/contains Ceph packages. .UNINDENT .INDENT 0.0 .TP .B \-\-testing Install the latest development release. .UNINDENT .INDENT 0.0 .TP .B \-\-username The username to connect to the remote host. .UNINDENT .INDENT 0.0 .TP .B \-\-version The currently installed version of \fBceph\-deploy\fP\&. .UNINDENT .INDENT 0.0 .TP .B \-\-zap\-disk Destroy the partition table and content of a disk. .UNINDENT
.SH AVAILABILITY .sp \fBceph\-deploy\fP is part of Ceph, a massively scalable, open\-source, distributed storage system. Please refer to the documentation at \fI\%https://ceph.com/ceph\-deploy/docs\fP for more information. .SH SEE ALSO .sp ceph\-mon(8), ceph\-osd(8), ceph\-volume(8), ceph\-mds(8) .SH COPYRIGHT 2010-2021, Inktank Storage, Inc. and contributors.
Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0) .\" Generated by docutils manpage writer. .