zpool(1M) | System Administration Commands | zpool(1M)
NAME
zpool - configures ZFS storage pools
SYNOPSIS
zpool [-?]
zpool add [-fn] pool vdev ...
zpool attach [-f] pool device new_device
zpool clear pool [device]
zpool create [-fn] [-o property=value] ... [-O file-system-property=value] ... [ -m mountpoint] [-R root] pool vdev ...
zpool destroy [-f] pool
zpool detach pool device
zpool export [-f] pool ...
zpool get "all" | property[,...] pool ...
zpool history [-il] [pool] ...
zpool import [-d dir] [-D]
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [ -D] [-f] [-R root] -a
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [ -D] [-f] [-R root] pool |id [newpool]
zpool iostat [-T u | d ] [-v] [pool] ... [interval[count]]
zpool labelclear [-f] device
zpool list [-H] [-o property[,...]] [pool] ...
zpool offline [-t] pool device ...
zpool online pool device ...
zpool remove pool device ...
zpool replace [-f] pool device [new_device]
zpool scrub [-s] pool ...
zpool set property=value pool
zpool status [-xv] [pool] ...
zpool upgrade
zpool upgrade -v
zpool upgrade [-V version] -a | pool ...
DESCRIPTION
The zpool command configures ZFS storage pools. A storage pool is a collection of devices that provides physical storage and data replication for ZFS datasets.
Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported:
disk
A block device, typically located under
/dev/dsk. ZFS can use individual slices or partitions, though
the recommended mode of operation is to use whole disks. A disk can be
specified by a full path, or it can be a shorthand name (the relative portion
of the path under "/dev/dsk"). A whole disk can be specified by
omitting the slice or partition designation. For example, "c0t0d0"
is equivalent to "/dev/dsk/c0t0d0s2". When given a whole disk,
ZFS automatically labels the disk, if necessary.
file
A regular file. The use of files as a backing
store is strongly discouraged. It is designed primarily for experimental
purposes, as the fault tolerance of a file is only as good as the file system
of which it is a part. A file must be specified by a full path.
mirror
A mirror of two or more devices. Data is
replicated in an identical fashion across all components of a mirror. A mirror
with N disks of size X can hold X bytes and can withstand
(N-1) devices failing before data integrity is compromised.
raidz, raidz1, raidz2, raidz3
A variation on RAID-5 that allows for
better distribution of parity and eliminates the "RAID-5 write
hole" (in which data and parity become inconsistent after a power loss).
Data and parity are striped across all disks within a raidz group.
A raidz group can have single, double, or triple parity, meaning that
the raidz group can sustain one, two, or three failures, respectively,
without losing any data. The raidz1 vdev type specifies a
single-parity raidz group; the raidz2 vdev type specifies
a double-parity raidz group; and the raidz3 vdev type
specifies a triple-parity raidz group. The raidz vdev
type is an alias for raidz1.
A raidz group with N disks of size X with P parity
disks can hold approximately (N-P)*X bytes and can withstand
P device(s) failing before data integrity is compromised. The minimum
number of devices in a raidz group is one more than the number of
parity disks. The recommended number is between 3 and 9 to help increase
performance.
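For example, a double-parity raidz group of six disks can sustain two device failures without losing data; assuming illustrative device names, such a pool could be created as follows:
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0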
spare
A special pseudo-vdev which keeps
track of available hot spares for a pool. For more information, see the
"Hot Spares" section.
log
A separate intent log device. If more than
one log device is specified, then writes are load-balanced between devices.
Log devices can be mirrored. However, raidz vdev types are not
supported for the intent log. For more information, see the "Intent
Log" section.
cache
A device used to cache storage pool data. A
cache device cannot be configured as a mirror or raidz group.
For more information, see the "Cache Devices" section.
For example, the following command creates a pool with two top-level vdevs, each a mirror of two disks:
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and ZFS automatically repairs bad data from a good copy when corruption is detected. The health of a device or top-level vdev is reported as one of the following states:
DEGRADED
One or more top-level vdevs is in the
degraded state because one or more component devices are offline. Sufficient
replicas exist to continue functioning.
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning. The underlying conditions
are as follows:
- The number of checksum errors exceeds acceptable levels and the device is degraded as an indication that something may be wrong. ZFS continues to use the device as necessary.
- The number of I/O errors exceeds acceptable levels. The device could not be marked as faulted because there are insufficient replicas to continue functioning.
FAULTED
One or more top-level vdevs is in the faulted
state because one or more component devices are offline. Insufficient replicas
exist to continue functioning.
One or more component devices is in the faulted state, and insufficient replicas
exist to continue functioning. The underlying conditions are as follows:
- The device could be opened, but the contents did not match expected values.
- The number of I/O errors exceeds acceptable levels and the device is faulted to prevent further use of the device.
OFFLINE
The device was explicitly taken offline by
the "zpool offline" command.
ONLINE
The device is online and functioning.
REMOVED
The device was physically removed while the
system was running. Device removal detection is hardware-dependent and may not
be supported on all platforms.
UNAVAIL
The device could not be opened. If a pool is
imported when a device was unavailable, then the device will be identified by
a unique identifier instead of its path since the path was never correct in
the first place.
Hot Spares
ZFS allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a "spare" vdev with any number of devices. For example:
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
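Spares can also be added to or removed from an existing pool with the add and remove subcommands; for example, with an illustrative device name:
# zpool add pool spare c4d0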
Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous transactions. For instance, databases often require their transactions to be on stable storage devices when returning from a system call. NFS and other applications can also use fsync() to ensure data stability. By default, the intent log is allocated from blocks within the main pool. However, it might be possible to get better performance using separate intent log devices such as NVRAM or a dedicated disk. For example:
# zpool create pool c0d0 c1d0 log c2d0
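Because log devices can be mirrored, a redundant intent log can be requested at pool creation time; for example, with illustrative device names:
# zpool create pool c0d0 c1d0 log mirror c2d0 c3d0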
Cache Devices
Devices can be added to a storage pool as "cache devices." These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content. For example:
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
Properties
Each pool has several properties associated with it. Some properties are read-only statistics while others are configurable and change the behavior of the pool. The following are read-only properties:
available
Amount of storage available within the pool.
This property can also be referred to by its shortened column name,
"avail".
capacity
Percentage of pool space used. This property
can also be referred to by its shortened column name, "cap".
health
The current health of the pool. Health can be
"ONLINE", "DEGRADED", "FAULTED",
"OFFLINE", "REMOVED", or "UNAVAIL".
guid
A unique identifier for the pool.
size
Total size of the storage pool.
used
Amount of storage space used within the pool.
The following properties are configurable:
altroot
Alternate root directory. If set, this
directory is prepended to any mount points within the pool. This can be used
when examining an unknown pool where the mount points cannot be trusted, or in
an alternate boot environment, where the typical paths are not valid.
altroot is not a persistent property. It is valid only while the system
is up. Setting altroot defaults to using cachefile=none, though
this may be overridden using an explicit setting.
autoexpand=on | off
Controls automatic pool expansion when the
underlying LUN is grown. If set to on, the pool will be resized
according to the size of the expanded device. If the device is part of a
mirror or raidz then all devices within that mirror/raidz group
must be expanded before the new space is made available to the pool. The
default behavior is off. This property can also be referred to by its
shortened column name, expand.
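For example, assuming a pool named tank whose underlying LUNs have been grown, automatic expansion could be enabled as follows:
# zpool set autoexpand=on tank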
autoreplace=on | off
Controls automatic device replacement. If set
to "off", device replacement must be initiated by the
administrator by using the "zpool replace" command. If set
to "on", any new device found in the same physical location
as a device that previously belonged to the pool is automatically formatted
and replaced. The default behavior is "off". This property
can also be referred to by its shortened column name,
"replace".
bootfs=pool/dataset
Identifies the default bootable dataset for
the root pool. This property is expected to be set mainly by the installation
and upgrade programs.
cachefile=path | none
Controls the location of where the pool
configuration is cached. Discovering all pools on system startup requires a
cached copy of the configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots. Some
environments, such as install and clustering, need to cache this information
in a different location so that pools are not automatically imported. Setting
this property caches the pool configuration in a different location that can
later be imported with "zpool import -c". Setting it to the
special value "none" creates a temporary pool that is never
cached, and the special value '' (empty string) uses the default
location.
Multiple pools can share the same cache file. Because the kernel destroys and
recreates this file when pools are added and removed, care should be taken
when attempting to access this file. When the last pool using a
cachefile is exported or destroyed, the file is removed.
delegation=on | off
Controls whether a non-privileged user is
granted access based on the dataset permissions defined on the dataset. See
zfs(1M) for more information on ZFS delegated
administration.
failmode=wait | continue | panic
Controls the system behavior in the event of
catastrophic pool failure. This condition is typically a result of a loss of
connectivity to the underlying storage device(s) or a failure of all devices
within the pool. The behavior of such an event is determined as follows:
wait
Blocks all I/O access until the device
connectivity is recovered and the errors are cleared. This is the default
behavior.
continue
Returns EIO to any new write
I/O requests but allows reads to any of the remaining healthy devices.
Any write requests that have yet to be committed to disk would be
blocked.
panic
Prints out a message to the console and
generates a system crash dump.
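For example, a pool could be configured to return errors on new writes, rather than block all I/O, after a catastrophic failure (pool name illustrative):
# zpool set failmode=continue tank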
listsnapshots=on | off
Controls whether information about snapshots
associated with this pool is output when "zfs list" is run
without the -t option. The default value is "off".
version=version
The current on-disk version of the pool. This
can be increased, but never decreased. The preferred method of updating pools
is with the "zpool upgrade" command, though this property
can be used when a specific version is needed for backwards compatibility.
This property can be any number between 1 and the current version reported by
"zpool upgrade -v".
Subcommands
All subcommands that modify state are logged persistently to the pool in their original form.
zpool -?
Displays a help message.
zpool add [-fn] pool vdev ...
Adds the specified virtual devices to the
given pool. The vdev specification is described in the "Virtual
Devices" section. The behavior of the -f option, and the device
checks performed, are described in the "zpool create" subcommand.
Do not add a disk that is currently configured as a quorum device to a zpool.
After a disk is in the pool, that disk can then be configured as a quorum
device.
-f
Forces use of vdevs, even if they
appear in use or specify a conflicting replication level. Not all devices can
be overridden in this manner.
-n
Displays the configuration that would be used
without actually adding the vdevs. The actual pool creation can still
fail due to insufficient privileges or device sharing.
zpool attach [-f] pool device new_device
Attaches new_device to an existing
zpool device. The existing device cannot be part of a raidz
configuration. If device is not currently part of a mirrored
configuration, device automatically transforms into a two-way mirror of
device and new_device. If device is part of a two-way
mirror, attaching new_device creates a three-way mirror, and so on. In
either case, new_device begins to resilver immediately.
-f
Forces use of new_device, even if it
appears to be in use. Not all devices can be overridden in this manner.
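For example, assuming a pool named tank, the following command would convert the single device c0t0d0 into a two-way mirror by attaching c0t1d0 (names illustrative):
# zpool attach tank c0t0d0 c0t1d0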
zpool clear pool [device]
Clears device errors in a pool. If no
arguments are specified, all device errors within the pool are cleared. If one
or more devices is specified, only those errors associated with the specified
device or devices are cleared.
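For example, errors for an entire pool, or only for a single device, could be cleared as follows (names illustrative):
# zpool clear tank
# zpool clear tank c0t0d0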
zpool create [-fn] [-o property=value] ... [-O file-system-property=value] ... [-m mountpoint] [-R root] pool vdev ...
Creates a new storage pool containing the
virtual devices specified on the command line. The pool name must begin with a
letter, and can only contain alphanumeric characters as well as underscore
("_"), dash ("-"), and period ("."). The pool
names "mirror", "raidz", "spare" and
"log" are reserved, as are names beginning with the pattern
"c[0-9]". The vdev specification is described in the
"Virtual Devices" section.
The command verifies that each device specified is accessible and not currently
in use by another subsystem. There are some uses, such as being currently
mounted, or specified as the dedicated dump device, that prevent a device
from ever being used by ZFS. Other uses, such as having a preexisting
UFS file system, can be overridden with the -f option.
The command also checks that the replication strategy for the pool is
consistent. An attempt to combine redundant and non-redundant storage in a
single pool, or to mix disks and files, results in an error unless -f
is specified. The use of differently sized devices within a single
raidz or mirror group is also flagged as an error unless -f is
specified.
Unless the -R option is specified, the default mount point is
"/pool". The mount point must not exist or must be empty, or
else the root dataset cannot be mounted. This can be overridden with the
-m option.
-f
Forces use of vdevs, even if they
appear in use or specify a conflicting replication level. Not all devices can
be overridden in this manner.
-n
Displays the configuration that would be used
without actually creating the pool. The actual pool creation can still fail
due to insufficient privileges or device sharing.
-o property=value [-o property=value] ...
Sets the given pool properties. See the
"Properties" section for a list of valid properties that can be
set.
-O file-system-property=value [-O file-system-property=value] ...
Sets the given file system properties in the
root file system of the pool. See the "Properties" section of
zfs(1M) for a list of valid properties that can be set.
-R root
Equivalent to "-o cachefile=none,altroot=root".
-m mountpoint
Sets the mount point for the root dataset. The
default mount point is "/pool" or
"altroot/pool" if altroot is specified. The
mount point must be an absolute path, "legacy", or
"none". For more information on dataset mount points, see
zfs(1M).
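For example, the following command creates a mirrored pool whose root dataset is mounted at /export/tank rather than the default /tank (names illustrative):
# zpool create -m /export/tank tank mirror c0t0d0 c0t1d0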
zpool destroy [-f] pool
Destroys the given pool, freeing up any
devices for other use. This command tries to unmount any active datasets
before destroying the pool.
-f
Forces any active datasets contained within
the pool to be unmounted.
zpool detach pool device
Detaches device from a mirror. The
operation is refused if there are no other valid replicas of the data.
zpool export [-f] pool ...
Exports the given pools from the system. All
devices are marked as exported, but are still considered in use by other
subsystems. The devices can be moved between systems (even those of different
endianness) and imported as long as a sufficient number of devices are
present.
Before exporting the pool, all datasets within the pool are unmounted. A pool
cannot be exported if it has a shared spare that is currently being used.
For pools to be portable, you must give the zpool command whole disks,
not just slices, so that ZFS can label the disks with portable
EFI labels. Otherwise, disk drivers on platforms of different
endianness will not recognize the disks.
-f
Forcefully unmount all datasets, using the
" unmount -f" command.
This command will forcefully export the pool even if it has a shared spare that
is currently being used. This may lead to potential data corruption.
zpool get "all" | property[,...] pool ...
Retrieves the given list of properties (or all
properties if "all" is used) for the specified storage
pool(s). These properties are displayed with the following fields:
See the "Properties" section for more information on the available
pool properties.
name Name of storage pool property Property name value Property value source Property source, either 'default' or 'local'.
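For example, selected properties of a pool could be retrieved as follows (pool name illustrative):
# zpool get size,capacity,health tank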
zpool history [-il] [pool] ...
Displays the command history of the specified
pools or all pools if no pool is specified.
-i
Displays internally logged ZFS events
in addition to user-initiated events.
-l
Displays log records in long format, which in
addition to standard format includes the user name, the hostname, and the
zone in which the operation was performed.
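For example, the long-format history of a single pool, including internally logged events, could be displayed as follows (pool name illustrative):
# zpool history -il tank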
zpool import [-d dir] [-D]
Lists pools available to import. If the
-d option is not specified, this command searches for devices in
"/dev/dsk". The -d option can be specified multiple times,
and all directories are searched. If the device appears to be part of an
exported pool, this command displays a summary of the pool with the name of
the pool, a numeric identifier, as well as the vdev layout and current
health of the device for each device or file. Destroyed pools, pools that were
previously destroyed with the "zpool destroy" command, are
not listed unless the -D option is specified.
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
-c cachefile
Reads configuration from the given
cachefile that was created with the "cachefile" pool
property. This cachefile is used instead of searching for
devices.
-d dir
Searches for devices or files in dir.
The -d option can be specified multiple times.
-D
Lists destroyed pools only.
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-R root] -a
Imports all pools found in the search
directories. Identical to the previous command, except that all pools with a
sufficient number of devices available are imported. Destroyed pools, pools
that were previously destroyed with the "zpool destroy"
command, will not be imported unless the -D option is specified.
-o mntopts
Comma-separated list of mount options to use
when mounting datasets within the pool. See zfs(1M) for a description
of dataset properties and mount options.
-o property=value
Sets the specified property on the imported
pool. See the "Properties" section for more information on the
available pool properties.
-c cachefile
Reads configuration from the given
cachefile that was created with the "cachefile" pool
property. This cachefile is used instead of searching for
devices.
-d dir
Searches for devices or files in dir.
The -d option can be specified multiple times. This option is
incompatible with the -c option.
-D
Imports destroyed pools only. The -f
option is also required.
-f
Forces import, even if the pool appears to be
potentially active.
-a
Searches for and imports all pools
found.
-R root
Sets the "cachefile"
property to "none" and the "altroot"
property to "root".
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-R root] pool | id [newpool]
Imports a specific pool. A pool can be
identified by its name or the numeric identifier. If newpool is
specified, the pool is imported using the name newpool. Otherwise, it
is imported with the same name as its exported name.
If a device is removed from a system without running "zpool
export" first, the device appears as potentially active. It cannot be
determined if this was a failed export, or whether the device is really in use
from another host. To import a pool in this state, the -f option is
required.
-o mntopts
Comma-separated list of mount options to use
when mounting datasets within the pool. See zfs(1M) for a description
of dataset properties and mount options.
-o property=value
Sets the specified property on the imported
pool. See the "Properties" section for more information on the
available pool properties.
-c cachefile
Reads configuration from the given
cachefile that was created with the "cachefile" pool
property. This cachefile is used instead of searching for
devices.
-d dir
Searches for devices or files in dir.
The -d option can be specified multiple times. This option is
incompatible with the -c option.
-D
Imports destroyed pools only. The -f option
is also required.
-f
Forces import, even if the pool appears to be
potentially active.
-R root
Sets the "cachefile" property
to "none" and the "altroot" property to
"root".
zpool iostat [-T u | d] [-v] [pool] ... [interval[count]]
Displays I/O statistics for the given
pools. When given an interval, the statistics are printed every
interval seconds until Ctrl-C is pressed. If no pools are
specified, statistics for every pool in the system are shown. If count
is specified, the command exits after count reports are printed.
-T u | d
Display a time stamp.
Specify u for a printed representation of the internal representation of
time. See time(2). Specify d for standard date format. See
date(1).
-v
Verbose statistics. Reports usage statistics
for individual vdevs within the pool, in addition to the pool-wide
statistics.
zpool labelclear [-f] device
Removes ZFS label information from the
specified device. The device must not be part of an active pool configuration.
-f
Treat exported or foreign devices as
inactive.
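For example, stale label information could be cleared from an exported disk that is no longer part of any pool (device name illustrative):
# zpool labelclear -f /dev/dsk/c0t3d0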
zpool list [-H] [-o property[,...]] [pool] ...
Lists the given pools along with a health
status and space usage. When given no arguments, all pools in the system are
listed.
-H
Scripted mode. Do not display headers, and
separate fields by a single tab instead of arbitrary space.
-o props
Comma-separated list of properties to
display. See the "Properties" section for a list of valid
properties. The default list is "name, size, used, available, capacity,
health, altroot".
zpool offline [-t] pool device ...
Takes the specified physical device offline.
While the device is offline, no attempt is made to read or write to the
device.
This command is not applicable to spares or cache devices.
-t
Temporary. Upon reboot, the specified physical
device reverts to its previous state.
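For example, a device could be taken offline only until the next reboot as follows (pool and device names illustrative):
# zpool offline -t tank c0t0d0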
zpool online [-e] pool device ...
Brings the specified physical device online.
This command is not applicable to spares or cache devices.
-e
Expand the device to use all available space.
If the device is part of a mirror or raidz then all devices must be
expanded before the new space will become available to the pool.
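For example, a device that has been replaced with a larger LUN could be brought online and expanded to its full size as follows (names illustrative):
# zpool online -e tank c0t0d0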
zpool remove pool device ...
Removes the specified device from the pool.
This command currently only supports removing hot spares, cache, and log
devices. A mirrored log device can be removed by specifying the top-level
mirror for the log. Non-log devices that are part of a mirrored configuration
can be removed using the zpool detach command. Non-redundant and
raidz devices cannot be removed from a pool.
zpool replace [-f] pool old_device [new_device]
Replaces old_device with
new_device. This is equivalent to attaching new_device, waiting
for it to resilver, and then detaching old_device.
The size of new_device must be greater than or equal to the minimum size
of all the devices in a mirror or raidz configuration.
new_device is required if the pool is not redundant. If new_device
is not specified, it defaults to old_device. This form of replacement
is useful after an existing disk has failed and has been physically replaced.
In this case, the new disk may have the same /dev/dsk path as the old
device, even though it is actually a different disk. ZFS recognizes
this.
-f
Forces use of new_device, even if it
appears to be in use. Not all devices can be overridden in this manner.
zpool scrub [-s] pool ...
Begins a scrub. The scrub examines all data in
the specified pools to verify that it checksums correctly. For replicated
(mirror or raidz) devices, ZFS automatically repairs any damage
discovered during the scrub. The "zpool status" command
reports the progress of the scrub and summarizes the results of the scrub upon
completion.
Scrubbing and resilvering are very similar operations. The difference is that
resilvering only examines data that ZFS knows to be out of date (for
example, when attaching a new device to a mirror or replacing an existing
device), whereas scrubbing examines all data to discover silent errors due to
hardware faults or disk failure.
Because scrubbing and resilvering are I/O-intensive operations,
ZFS only allows one at a time. If a scrub is already in progress, the
" zpool scrub" command terminates it and starts a new scrub.
If a resilver is in progress, ZFS does not allow a scrub to be started
until the resilver completes.
-s
Stop scrubbing.
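For example, a scrub could be started on a pool, and an in-progress scrub stopped, as follows (pool name illustrative):
# zpool scrub tank
# zpool scrub -s tank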
zpool set property=value pool
Sets the given property on the specified pool.
See the "Properties" section for more information on what properties
can be set and acceptable values.
zpool status [-xv] [pool] ...
Displays the detailed health status for the
given pools. If no pool is specified, then the status of each pool in
the system is displayed. For more information on pool and device health, see
the "Device Failure and Recovery" section.
If a scrub or resilver is in progress, this command reports the percentage done
and the estimated time to completion. Both of these are only approximate,
because the amount of data in the pool and the other workloads on the system
can change.
-x
Only display status for pools that are
exhibiting errors or are otherwise unavailable.
-v
Displays verbose data error information,
printing out a complete list of all data errors since the last complete pool
scrub.
zpool upgrade
Displays all pools formatted using a different
ZFS on-disk version. Older versions can continue to be used, but some
features may not be available. These pools can be upgraded using
"zpool upgrade -a". Pools that are formatted with a more recent
version are also displayed, although these pools will be inaccessible on the
system.
zpool upgrade -v
Displays ZFS versions supported by the
current software. The current ZFS versions and all previous supported
versions are displayed, along with an explanation of the features provided
with each version.
zpool upgrade [-V version] -a | pool ...
Upgrades the given pool to the latest on-disk
version. Once this is done, the pool will no longer be accessible on systems
running older versions of the software.
-a
Upgrades all pools.
-V version
Upgrade to the specified version. If the
-V flag is not specified, the pool is upgraded to the most recent
version. This option can only be used to increase the version number, and only
up to the most recent version supported by this software.
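For example, a single pool could be upgraded to a specific on-disk version rather than the latest one supported (pool name and version illustrative):
# zpool upgrade -V 3 tank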
EXAMPLES
Example 1 Creating a RAID-Z Storage Pool
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
Example 2 Creating a Mirrored Storage Pool
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
Example 3 Creating a ZFS Storage Pool by Using Slices
# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
Example 4 Creating a ZFS Storage Pool by Using Files
# zpool create tank /path/to/file/a /path/to/file/b
Example 5 Adding a Mirror to a ZFS Storage Pool
# zpool add tank mirror c1t0d0 c1t1d0
Example 6 Listing Available ZFS Storage Pools
# zpool list
NAME    SIZE   USED   AVAIL  CAP   HEALTH   ALTROOT
pool    67.5G  2.92M  67.5G  0%    ONLINE   -
tank    67.5G  2.92M  67.5G  0%    ONLINE   -
zion    -      -      -      0%    FAULTED  -
Example 7 Destroying a ZFS Storage Pool
# zpool destroy -f tank
Example 8 Exporting a ZFS Storage Pool
# zpool export tank
Example 9 Importing a ZFS Storage Pool
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            c1t2d0  ONLINE
            c1t3d0  ONLINE

# zpool import tank
Example 10 Upgrading All ZFS Storage Pools to the Current Version
# zpool upgrade -a
This system is currently running ZFS version 2.
Example 11 Managing Hot Spares
The following command creates a pool with an available hot spare:
# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
If one of the disks fails, the failed device can be replaced:
# zpool replace tank c0t0d0 c0t3d0
The hot spare can later be permanently removed from the pool:
# zpool remove tank c0t2d0
Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
   c4d0 c5d0
Example 13 Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage pool:
# zpool add pool cache c2d0 c3d0
Capacity and reads for the cache devices can then be monitored periodically using the iostat subcommand:
# zpool iostat -v pool 5
Example 14 Removing a Mirrored Log Device
The following command removes the mirrored log device mirror-2 from a pool with the following configuration:
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0

# zpool remove tank mirror-2
EXIT STATUS
The following exit values are returned:
0
Successful completion.
1
An error occurred.
2
Invalid command line options were specified.
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE         ATTRIBUTE VALUE
Availability           SUNWzfsu
Interface Stability    Evolving
SEE ALSO
zfs(1M), attributes(5)

SunOS 5.11 | 21 Sep 2009