RADOS(8) Ceph RADOS(8)

NAME

rados - rados object storage utility

SYNOPSIS

rados [ -m monaddr ] [ mkpool | rmpool foo ] [ -p | --pool
pool ] [ -s | --snap snap ] [ -i infile ] [ -o outfile ]
command ...

DESCRIPTION

rados is a utility for interacting with a Ceph object storage cluster (RADOS), part of the Ceph distributed storage system.

OPTIONS

-p pool, --pool pool
Interact with the given pool. Required by most commands.

-s snap, --snap snap
Read from the given pool snapshot. Valid for all pool-specific read operations.

-i infile
Specify an input file to be passed along as a payload with the command to the monitor cluster. This is only used for specific monitor commands.

-o outfile
Write any payload returned by the monitor cluster with its reply to outfile. Only specific monitor commands (e.g. osd getmap) return a payload.

-c ceph.conf, --conf=ceph.conf
Use ceph.conf configuration file instead of the default /etc/ceph/ceph.conf to determine monitor addresses during startup.

-m monaddress[:port]
Connect to specified monitor (instead of looking through ceph.conf).

-b block_size
Set the block size for put/get ops and for write benchmarking.

--striper
Use the striping API of rados rather than the default one. Available for stat, get, put, truncate, rm, ls and all xattr-related operations.
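
For example, to write a large object through the striping API and read it back (the pool, object, and file names here are illustrative):
rados --striper -p foo put bigobject big.img
rados --striper -p foo get bigobject big.img.copy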

GLOBAL COMMANDS

lspools
List object pools.
df
Show utilization statistics, including disk usage (bytes) and object counts, over the entire system and broken down by pool.
mkpool foo
Create a pool with name foo.
rmpool foo [ foo --yes-i-really-really-mean-it ]
Delete the pool foo (and all its data).
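
For example, to create a scratch pool and then delete it (rmpool takes the pool name twice plus the confirmation flag, as shown in the synopsis above; the pool name is illustrative):
rados mkpool scratch
rados rmpool scratch scratch --yes-i-really-really-mean-it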

POOL SPECIFIC COMMANDS

get name outfile
Read object name from the cluster and write it to outfile.
put name infile
Write object name to the cluster with contents from infile.
rm name
Remove object name.
listwatchers name
List the watchers of object name.
ls outfile
List objects in the given pool and write them to outfile.
lssnap
List snapshots for given pool.
clonedata srcname dstname --object-locator key
Clone object byte data from srcname to dstname. Both objects must be stored with the locator key key (usually either srcname or dstname). Object attributes and omap keys are not copied or cloned. See the sketch at the end of this section.
mksnap foo
Create pool snapshot named foo.
rmsnap foo
Remove pool snapshot named foo.
bench seconds mode [ -b objsize ] [ -t threads ]
Benchmark for seconds. The mode can be write, seq, or rand; seq and rand are read benchmarks, either sequential or random. Before running one of the read benchmarks, run a write benchmark with the --no-cleanup option. The default object size is 4 MB, and the default number of simulated threads (parallel writes) is 16. The --run-name <label> option is useful for benchmarking a workload test from multiple clients. The <label> is an arbitrary object name; it is "benchmark_last_metadata" by default, and is used as the underlying object name for read and write ops. Note: the -b objsize option is valid only in write mode. See the EXAMPLES section below for a complete benchmark run.

cleanup
Clean up a previous benchmark operation.
listomapkeys name
List all the keys stored in the object map of object name.
listomapvals name
List all key/value pairs stored in the object map of object name. The values are dumped in hexadecimal.
getomapval name key
Dump the hexadecimal value of key in the object map of object name.
setomapval name key value
Set the value of key in the object map of object name.
rmomapkey name key
Remove key from the object map of object name.
getomapheader name
Dump the hexadecimal value of the object map header of object name.
setomapheader name value
Set the value of the object map header of object name.
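
As a sketch of clonedata, assuming the global --object-locator option is used to store both objects under the same locator key (the pool, object, and key names are illustrative):
rados -p foo put srcobject blah.txt --object-locator mykey
rados -p foo clonedata srcobject dstobject --object-locator mykey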

EXAMPLES

To view cluster utilization:
rados df


To list the objects in pool foo, writing the names to stdout:
rados -p foo ls -


To write an object:
rados -p foo put myobject blah.txt
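

To attach omap key/value pairs to that object and read them back (the key and value are illustrative):
rados -p foo setomapval myobject mykey myvalue
rados -p foo listomapkeys myobject
rados -p foo getomapval myobject mykey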


To create a snapshot:
rados -p foo mksnap mysnap


To delete the object:
rados -p foo rm myobject


To read a previously snapshotted version of an object:
rados -p foo -s mysnap get myobject blah.txt.old
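

To run a 60-second write benchmark, keep its objects for the subsequent sequential read benchmark, and then remove them (the duration and pool name are illustrative):
rados -p foo bench 60 write --no-cleanup
rados -p foo bench 60 seq
rados -p foo cleanup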


AVAILABILITY

rados is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at http://ceph.com/docs for more information.

SEE ALSO

ceph(8)

COPYRIGHT

2010-2014, Inktank Storage, Inc. and contributors. Licensed under Creative Commons BY-SA