ceph-osd - ceph object storage daemon
ceph-osd -i osdnum [ --osd-data datapath ] [ --osd-journal journal ] [ --mkfs ] [ --mkjournal ] [ --mkkey ]
ceph-osd is the object storage daemon for the Ceph distributed file
system. It is responsible for storing objects on a local file system and
providing access to them over the network.
The datapath argument should be a directory on a btrfs file system
where the object data resides. The journal is optional, and is only useful
performance-wise when it resides on a different disk than datapath with low
latency (ideally, an NVRAM device).
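For example, a typical foreground invocation might look like the following (a minimal sketch; the osd id 0 and both paths are illustrative placeholders):

    ceph-osd -f -i 0 --osd-data /var/lib/ceph/osd/ceph-0 --osd-journal /dev/sdb1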
- -f, --foreground
- Foreground: do not daemonize after startup (run in foreground). Do not
generate a pid file. Useful when run via ceph-run(8).
- -d
- Debug mode: like -f, but also send all log output to stderr.
- --osd-data osddata
- Use object store at osddata.
- --osd-journal journal
- Journal updates to journal.
- --mkfs
- Create an empty object repository. This also initializes the journal (if
one is defined).
- --mkkey
- Generate a new secret key. This is normally used in combination with
--mkfs, as it is more convenient than generating a key by hand with
ceph-authtool(8). (An example appears after this option list.)
- --mkjournal
- Create a new journal file to match an existing object repository. This is
useful if the journal device or file is wiped out due to a disk or file
system failure.
- --flush-journal
- Flush the journal to permanent store. This runs in the foreground so you
know when it has completed. This can be useful if you want to resize the
journal or need to otherwise destroy it: this guarantees you won't lose
data. (A journal example appears after this option list.)
- --get-cluster-fsid
- Print the cluster fsid (uuid) and exit.
- --get-osd-fsid
- Print the OSD's fsid and exit. The OSD's uuid is generated at --mkfs time
and is thus unique to a particular instantiation of this OSD.
- --get-journal-fsid
- Print the journal's uuid and exit. The journal fsid is set to match the
OSD fsid at --mkfs time.
- -c ceph.conf, --conf=ceph.conf
- Use ceph.conf configuration file instead of the default
/etc/ceph/ceph.conf for runtime configuration options.
- -m monaddress[:port]
- Connect to specified monitor (instead of looking through ceph.conf).
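For example, to initialize a new object repository and generate its secret key in one step (a sketch; the osd id and data path are placeholders, and the key is written to a keyring inside the osd data directory):

    ceph-osd -i 0 --mkfs --mkkey --osd-data /var/lib/ceph/osd/ceph-0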
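To resize or relocate a journal, flush the old journal to the object store and then create a new one (a sketch assuming osd id 0 and a placeholder journal device; the daemon should not be running while the journal is flushed):

    ceph-osd -i 0 --flush-journal
    ceph-osd -i 0 --mkjournal --osd-journal /dev/nvme0n1p1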
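The fsid queries each print the requested uuid and exit, for instance (osd id 0 is again a placeholder):

    ceph-osd -i 0 --get-cluster-fsid
    ceph-osd -i 0 --get-osd-fsid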
ceph-osd is part of the Ceph distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.
2010-2014, Inktank Storage, Inc. and contributors. Licensed under Creative
Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0).