ZPOOL(8) Maintenance Commands and Procedures ZPOOL(8)
NAME
zpool - configure ZFS storage pools
SYNOPSIS
zpool -?
zpool add [-fgLnP] [-o property=value] pool vdev...
zpool attach [-f] [-o property=value] pool device new_device
zpool checkpoint [-d, --discard] pool
zpool clear pool [device]
zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]... [-o feature@feature=value]... [-O file-system-property=value]... [-R root] [-t tempname] pool vdev...
zpool destroy [-f] pool
zpool detach pool device
zpool export [-f] pool...
zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
zpool history [-il] [pool]...
zpool import [-D] [-d dir]
zpool import -a [-DflmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root]
zpool import [-Dfmt] [-F [-n]] [--rewind-to-checkpoint] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root] pool|id [newpool]
zpool initialize [-c | -s] pool [device...]
zpool iostat [[-lq] | -rw] [-T u|d] [-ghHLnpPvy] [[pool...]|[pool vdev...]|[vdev...]] [interval [count]]
zpool labelclear [-f] device
zpool list [-HgLpPv] [-o property[,property]...] [-T u|d] [pool]... [interval [count]]
zpool offline [-t] pool device...
zpool online [-e] pool device...
zpool reguid pool
zpool reopen pool
zpool remove [-np] pool device...
zpool remove -s pool
zpool replace [-f] pool device [new_device]
zpool resilver pool...
zpool scrub [-s | -p] pool...
zpool trim [-d] [-r rate] [-c | -s] pool [device...]
zpool set property=value pool
zpool split [-gLlnP] [-o property=value]... [-R root] pool newpool
zpool status [-DigLpPstvx] [-T u|d] [pool]... [interval [count]]
zpool sync [pool]...
zpool upgrade
zpool upgrade -v
zpool upgrade [-V version] -a|pool...
DESCRIPTION
The
zpool command configures ZFS storage pools. A storage pool is a
collection of devices that provides physical storage and data
replication for ZFS datasets. All datasets within a storage pool share
the same space. See
zfs(8) for information on managing datasets.
Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
disk A block device, typically located under
/dev/dsk. ZFS can use
individual slices or partitions, though the recommended mode of
operation is to use whole disks. A disk can be specified by a
full path, or it can be a shorthand name (the relative portion
of the path under
/dev/dsk). A whole disk can be specified by
omitting the slice or partition designation. For example,
c0t0d0 is equivalent to
/dev/dsk/c0t0d0s2. When given a whole
disk, ZFS automatically labels the disk, if necessary.
file A regular file. The use of files as a backing store is
strongly discouraged. It is designed primarily for
experimental purposes, as the fault tolerance of a file is only
as good as the file system of which it is a part. A file must
be specified by a full path.
mirror A mirror of two or more devices. Data is replicated in an
identical fashion across all components of a mirror. A mirror
with N disks of size X can hold X bytes and can withstand (N-1)
devices failing before data integrity is compromised.
raidz, raidz1, raidz2, raidz3
A variation on RAID-5 that allows for better distribution of
parity and eliminates the RAID-5 "write hole" (in which data
and parity become inconsistent after a power loss). Data and
parity are striped across all disks within a raidz group.
A raidz group can have single-, double-, or triple-parity,
meaning that the raidz group can sustain one, two, or three
failures, respectively, without losing any data. The
raidz1 vdev type specifies a single-parity raidz group; the
raidz2 vdev type specifies a double-parity raidz group; and the
raidz3 vdev type specifies a triple-parity raidz group. The
raidz vdev type is an alias for
raidz1.
A raidz group with N disks of size X with P parity disks can
hold approximately (N-P)*X bytes and can withstand P device(s)
failing before data integrity is compromised. The minimum
number of devices in a raidz group is one more than the number
of parity disks. The recommended number is between 3 and 9 to
help increase performance.
spare A special pseudo-vdev which keeps track of available hot spares
for a pool. For more information, see the
Hot Spares section.
log A separate intent log device. If more than one log device is
specified, then writes are load-balanced between devices. Log
devices can be mirrored. However, raidz vdev types are not
supported for the intent log. For more information, see the
Intent Log section.
dedup A device dedicated solely for allocating dedup data. The
redundancy of this device should match the redundancy of the
other normal devices in the pool. If more than one dedup
device is specified, then allocations are load-balanced between
devices.
special A device dedicated solely for allocating various kinds of
internal metadata, and optionally small file data. The
redundancy of this device should match the redundancy of the
other normal devices in the pool. If more than one special
device is specified, then allocations are load-balanced between
devices.
For more information on special allocations, see the
Special Allocation Class section.
cache A device used to cache storage pool data. A cache device
cannot be configured as a mirror or raidz group. For more
information, see the
Cache Devices section.
Virtual devices cannot be nested, so a mirror or raidz virtual device
can only contain files or disks. Mirrors of mirrors (or other
combinations) are not allowed.
A pool can have any number of virtual devices at the top of the
configuration (known as "root vdevs"). Data is dynamically distributed
across all top-level devices to balance data among devices. As new
virtual devices are added, ZFS automatically places data on the newly
available devices.
Virtual devices are specified one at a time on the command line,
separated by whitespace. The keywords
mirror and
raidz are used to
distinguish where a group ends and another begins. For example, the
following creates two root vdevs, each a mirror of two disks:
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and
data corruption. All metadata and data is checksummed, and ZFS
automatically repairs bad data from a good copy when corruption is
detected.
In order to take advantage of these features, a pool must make use of
some form of redundancy, using either mirrored or raidz groups. While
ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged. A single
case of bit corruption can render some or all of your data unavailable.
A pool's health status is described by one of three states: online,
degraded, or faulted. An online pool has all devices operating
normally. A degraded pool is one in which one or more devices have
failed, but the data is still available due to a redundant
configuration. A faulted pool has corrupted metadata, or one or more
faulted devices, and insufficient replicas to continue functioning.
The health of the top-level vdev, such as mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices. A top-level vdev or component device is in one of the
following states:
DEGRADED One or more top-level vdevs is in the degraded state because
one or more component devices are offline. Sufficient
replicas exist to continue functioning.
One or more component devices is in the degraded or faulted
state, but sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
o The number of checksum errors exceeds acceptable levels
and the device is degraded as an indication that
something may be wrong. ZFS continues to use the device
as necessary.
o The number of I/O errors exceeds acceptable levels. The
device could not be marked as faulted because there are
insufficient replicas to continue functioning.
FAULTED One or more top-level vdevs is in the faulted state because
one or more component devices are offline. Insufficient
replicas exist to continue functioning.
One or more component devices is in the faulted state, and
insufficient replicas exist to continue functioning. The
underlying conditions are as follows:
o The device could be opened, but the contents did not
match expected values.
o The number of I/O errors exceeds acceptable levels and
the device is faulted to prevent further use of the
device.
OFFLINE The device was explicitly taken offline by the
zpool offline command.
ONLINE The device is online and functioning.
REMOVED The device was physically removed while the system was
running. Device removal detection is hardware-dependent and
may not be supported on all platforms.
UNAVAIL The device could not be opened. If a pool is imported when a
device was unavailable, then the device will be identified by
a unique identifier instead of its path since the path was
never correct in the first place.
If a device is removed and later re-attached to the system, ZFS
attempts to put the device online automatically. Device attach
detection is hardware-dependent and might not be supported on all
platforms.
Hot Spares
ZFS allows devices to be associated with pools as "hot spares". These
devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare. If there is more
than one spare that could be used as a replacement then they are tried
in order of increasing capacity so that the smallest available spare
that can replace the failed device is used. To create a pool with hot
spares, specify a
spare vdev with any number of devices. For example,
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
Spares can be shared across multiple pools, and can be added with the
zpool add command and removed with the
zpool remove command. Once a
spare replacement is initiated, a new
spare vdev is created within the
configuration that will remain there until the original device is
replaced. At this point, the hot spare becomes available again if
another device fails.
If a pool has a shared spare that is currently being used, the pool
cannot be exported since other pools may use this shared spare, which may
lead to potential data corruption.
Shared spares add some risk. If the pools are imported on different
hosts, and both pools suffer a device failure at the same time, both
could attempt to use the spare at the same time. This may not be
detected, resulting in data corruption.
An in-progress spare replacement can be cancelled by detaching the hot
spare. If the original faulted device is detached, then the hot spare
assumes its place in the configuration, and is removed from the spare
list of all active pools.
Spares cannot replace log devices.
Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions. For instance, databases often require their transactions
to be on stable storage devices when returning from a system call. NFS
and other applications can also use
fsync(3C) to ensure data stability.
By default, the intent log is allocated from blocks within the main
pool. However, it might be possible to get better performance using
separate intent log devices such as NVRAM or a dedicated disk. For
example:
# zpool create pool c0d0 c1d0 log c2d0
Multiple log devices can also be specified, and they can be mirrored.
See the
EXAMPLES section for an example of mirroring multiple log
devices.
Log devices can be added, replaced, attached, detached, and imported
and exported as part of the larger pool. Mirrored devices can be
removed by specifying the top-level mirror vdev.
Cache Devices
Devices can be added to a storage pool as "cache devices". These
devices provide an additional layer of caching between main memory and
disk. For read-heavy workloads, where the working set size is much
larger than what can be cached in main memory, using cache devices
allows much more of this working set to be served from low-latency
media. Using cache devices provides the greatest performance
improvement for random read-workloads of mostly static content.
To create a pool with cache devices, specify a
cache vdev with any
number of devices. For example:
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
Cache devices cannot be mirrored or part of a raidz configuration. If
a read error is encountered on a cache device, that read I/O is
reissued to the original storage pool device, which might be part of a
mirrored or raidz configuration.
The content of the cache devices is considered volatile, as is the case
with other system caches.
Pool checkpoint
Before starting critical procedures that include destructive actions
(e.g. zfs
destroy), an administrator can checkpoint the pool's state
and in the case of a mistake or failure, rewind the entire pool back to
the checkpoint. The checkpoint is automatically discarded upon
rewinding. Otherwise, the checkpoint can be discarded when the
procedure has completed successfully.
A pool checkpoint can be thought of as a pool-wide snapshot and should
be used with care as it contains every part of the pool's state, from
properties to vdev configuration. Thus, while a pool has a checkpoint,
certain operations are not allowed: specifically, vdev
removal/attach/detach, mirror splitting, and changing the pool's guid.
Adding a new vdev is supported but in the case of a rewind it will have
to be added again. Finally, users of this feature should keep in mind
that scrubs in a pool that has a checkpoint do not repair checkpointed
data.
To create a checkpoint for a pool:
# zpool checkpoint pool
To later rewind to its checkpointed state (which also discards the
checkpoint), you need to first export it and then rewind it during
import:
# zpool export pool
# zpool import --rewind-to-checkpoint pool
To discard the checkpoint from a pool without rewinding:
# zpool checkpoint -d pool
Dataset reservations (controlled by the
reservation or
refreservation zfs properties) may be unenforceable while a checkpoint exists, because
the checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
Special Allocation Class
The allocations in the special class are dedicated to specific block
types. By default this includes all metadata, the indirect blocks of
user data, and any dedup data. The class can also be provisioned to
accept a limited percentage of small file data blocks.
A pool must always have at least one general (non-specified) vdev
before other devices can be assigned to the special class. If the
special class becomes full, then allocations intended for it will spill
back into the normal class.
Dedup data can be excluded from the special class by setting the
zfs_ddt_data_is_special zfs kernel variable to false (0).
Inclusion of small file blocks in the special class is opt-in. Each
dataset can control the size of small file blocks allowed in the
special class by setting the
special_small_blocks dataset property. It
defaults to zero, so you must opt in by setting it to a non-zero value.
See
zfs(8) for more info on setting this property.
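For example, assuming illustrative device names and a dataset named pool/data, a pool could be created with a mirrored special vdev and the dataset then opted in to storing small file blocks on it:
# zpool create pool raidz c0t0d0 c0t1d0 c0t2d0 special mirror c0t3d0 c0t4d0
# zfs set special_small_blocks=32K pool/data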
Properties
Each pool has several properties associated with it. Some properties
are read-only statistics while others are configurable and change the
behavior of the pool.
The following are read-only properties:
allocated Amount of storage space used within the pool.
bootsize The size of the system boot partition. This property can only
be set at pool creation time and is read-only once the pool is
created. Setting this property implies using the
-B option.
capacity Percentage of pool space used. This property can also be
referred to by its shortened column name,
cap.
expandsize Amount of uninitialized space within the pool or device that
can be used to increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled
vdev which has not been brought online (e.g., using
zpool online -e). This space occurs when a LUN is dynamically expanded.
fragmentation The amount of fragmentation in the pool.
free The amount of free space available in the pool.
freeing After a file system or snapshot is destroyed, the space it was
using is returned to the pool asynchronously.
freeing is the
amount of space remaining to be reclaimed. Over time
freeing will decrease while
free increases.
health The current health of the pool. Health can be one of ONLINE,
DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.
guid A unique identifier for the pool.
size Total size of the storage pool.
unsupported@feature_guid Information about unsupported features that are enabled on the
pool. See
zpool-features(7) for details.
The space usage properties report actual physical space available to
the storage pool. The physical space can be different from the total
amount of space that any contained datasets can actually use. The
amount of space used in a raidz configuration depends on the
characteristics of the data being written. In addition, ZFS reserves
some space for internal accounting that the
zfs(8) command takes into
account, but the
zpool command does not. For non-full pools of a
reasonable size, these effects should be invisible. For small pools,
or pools that are close to being completely full, these discrepancies
may become more noticeable.
The following property can be set at creation time and import time:
altroot Alternate root directory. If set, this directory is prepended
to any mount points within the pool. This can be used when
examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical
paths are not valid.
altroot is not a persistent property. It
is valid only while the system is up. Setting
altroot defaults
to using
cachefile=
none, though this may be overridden using an
explicit setting.
The following property can be set only at import time:
readonly=on|off If set to on, the pool will be imported in read-only mode.
This property can also be referred to by its shortened column
name,
rdonly.
The following properties can be set at creation time and import time,
and later changed with the
zpool set command:
ashift=ashift Pool sector size exponent, to the power of 2
(internally referred to as ashift). Values from 9 to 16, inclusive, are
valid; also, the value 0 (the default) means to auto-detect
using the kernel's block layer and a ZFS internal exception
list. I/O operations will be aligned to the specified size
boundaries. Additionally, the minimum (disk) write size will
be set to the specified size, so this represents a space vs
performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size
of the underlying disks. The typical case for setting this
property is when performance is important and the underlying
disks use 4KiB sectors but report 512B sectors to the OS (for
compatibility reasons); in that case, set
ashift=12 (which is
1<<12 = 4096). When set, this property is used as the default
hint value in subsequent vdev operations (add, attach and
replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B sectors disk with a newer
4KiB sectors device: this will probably result in bad
performance but at the same time could prevent loss of data.
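For example, to create a pool whose vdevs are aligned to 4 KiB sectors on disks that report 512 B sectors (pool and device names are illustrative):
# zpool create -o ashift=12 pool mirror c0t0d0 c0t1d0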
autoexpand=on|off Controls automatic pool expansion when the underlying
LUN is grown. If set to on, the pool will be resized according to the
size of the expanded device. If the device is part of a mirror
or raidz then all devices within that mirror/raidz group must
be expanded before the new space is made available to the pool.
The default behavior is
off. This property can also be
referred to by its shortened column name,
expand.
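For example, automatic expansion could be enabled on an existing pool as follows (pool name is illustrative):
# zpool set autoexpand=on pool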
autoreplace=on|off Controls automatic device replacement. If set to
off, device
replacement must be initiated by the administrator by using the
zpool replace command. If set to
on, any new device, found in
the same physical location as a device that previously belonged
to the pool, is automatically formatted and replaced. The
default behavior is
off. This property can also be referred to
by its shortened column name,
replace.
bootfs=pool/dataset Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation
and upgrade programs.
cachefile=path|none Controls where the pool configuration is
cached. Discovering all pools on system startup requires a
cached copy of the configuration data that is stored on the
root file system. All pools in this cache are automatically
imported when the system boots. Some environments, such as
install and clustering, need to cache this information in a
different location so that pools are not automatically
imported. Setting this property caches the pool configuration
in a different location that can later be imported with
zpool import -c. Setting it to the special value
none creates a
temporary pool that is never cached, and the special value ""
(empty string) uses the default location.
Multiple pools can share the same cache file. Because the
kernel destroys and recreates this file when pools are added
and removed, care should be taken when attempting to access
this file. When the last pool using a
cachefile is exported or
destroyed, the file is removed.
comment=text A text string consisting of printable ASCII characters that
will be stored such that it is available even if the pool
becomes faulted. An administrator can provide additional
information about a pool using this property.
dedupditto=number Threshold for the number of block ditto copies. If the
reference count for a deduplicated block increases above this
number, a new ditto copy of this block is automatically stored.
The default setting is
0 which causes no ditto copies to be
created for deduplicated blocks. The minimum legal nonzero
setting is
100.
delegation=on|off Controls whether a non-privileged user is granted access
based
on the dataset permissions defined on the dataset. See
zfs(8) for more information on ZFS delegated administration.
failmode=wait|continue|panic Controls the system behavior in the event of
catastrophic pool
failure. This condition is typically a result of a loss of
connectivity to the underlying storage device(s) or a failure
of all devices within the pool. The behavior of such an event
is determined as follows:
wait Blocks all I/O access until the device connectivity
is recovered and the errors are cleared. This is the
default behavior.
continue Returns EIO to any new write I/O requests but allows
reads to any of the remaining healthy devices. Any
write requests that have yet to be committed to disk
would be blocked.
panic Prints out a message to the console and generates a
system crash dump.
autotrim=on|off When set to on, space which has been recently freed, and
is no longer allocated by the pool, will be periodically trimmed.
This allows block device vdevs which support BLKDISCARD, such
as SSDs, or file vdevs on which the underlying file system
supports hole-punching, to reclaim unused blocks. The default
setting for this property is
off.
Automatic TRIM does not immediately reclaim blocks after a
free. Instead, it will optimistically delay allowing smaller
ranges to be aggregated into a few larger ones. These can
then be issued more efficiently to the storage.
Be aware that automatic trimming of recently freed data blocks
can put significant stress on the underlying storage devices.
This will vary depending on how well the specific device
handles these commands. For lower end devices it is often
possible to achieve most of the benefits of automatic trimming
by running an on-demand (manual) TRIM periodically using the
zpool trim command.
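For example, either of the following could be used on an illustrative pool, the first enabling periodic automatic TRIM and the second issuing a one-time manual TRIM:
# zpool set autotrim=on pool
# zpool trim pool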
feature@feature_name=enabled The value of this property is the current state of
feature_name. The only valid value when setting this property
is
enabled which moves
feature_name to the enabled state. See
zpool-features(7) for details on feature states.
listsnapshots=on|off Controls whether information about snapshots
associated with
this pool is output when
zfs list is run without the
-t option.
The default value is
off. This property can also be referred
to by its shortened name,
listsnaps.
multihost=on|off Controls whether a pool activity check should be performed
during
zpool import. When a pool is determined to be active it
cannot be imported, even with the
-f option. This property is
intended to be used in failover configurations where multiple
hosts have access to a pool on shared storage.
Multihost provides protection on import only. It does not
protect against an individual device being used in multiple
pools, regardless of the type of vdev. See the discussion
under
zpool create.
When this property is on, periodic writes to storage occur to
show the pool is in use. See
zfs_multihost_interval in the
zfs-module-parameters(7) man page. In order to enable this
property each host must set a unique hostid. The default value
is
off.
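For example, on a host that already has a unique hostid, the activity check could be enabled as follows (pool name is illustrative):
# zpool set multihost=on pool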
version=version The current on-disk version of the pool. This can be
increased, but never decreased. The preferred method of
updating pools is with the
zpool upgrade command, though this
property can be used when a specific version is needed for
backwards compatibility. Once feature flags are enabled on a
pool this property will no longer have a value.
Subcommands
All subcommands that modify state are logged persistently to the pool
in their original form.
The
zpool command provides subcommands to create and destroy storage
pools, add capacity to storage pools, and provide information about the
storage pools. The following subcommands are supported:
zpool -?
Displays a help message.
zpool add [-fgLnP] [-o property=value] pool vdev...
Adds the specified virtual devices to the given pool. The
vdev specification is described in the
Virtual Devices section. The
behavior of the
-f option, and the device checks performed are
described in the
zpool create subcommand.
-f Forces use of
vdevs, even if they appear in use or
specify a conflicting replication level. Not all
devices can be overridden in this manner.
-g Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for
the zpool detach/offline/remove/replace commands.
-L Display real paths for
vdevs resolving all symbolic
links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to
open it.
-n Displays the configuration that would be used without
actually adding the
vdevs. The actual pool creation
can still fail due to insufficient privileges or device
sharing.
-P Display real paths for
vdevs instead of only the last
component of the path. This can be used in conjunction
with the
-L flag.
-o property=value Sets the given pool properties. See the
Properties section for a list of valid properties that can be set.
The only property supported at the moment is ashift.
zpool attach [-f] [-o property=value] pool device new_device
Attaches
new_device to the existing
device. The existing
device cannot be part of a raidz configuration. If
device is
not currently part of a mirrored configuration,
device automatically transforms into a two-way mirror of
device and
new_device. If
device is part of a two-way mirror, attaching
new_device creates a three-way mirror, and so on. In either
case,
new_device begins to resilver immediately.
-f Forces use of new_device, even if it appears to be in
use. Not all devices can be overridden in this manner.
-o property=value Sets the given pool properties. See the
Properties section for a list of valid properties that can be set.
The only property supported at the moment is ashift.
zpool checkpoint [-d, --discard] pool
Checkpoints the current state of pool, which can be later
restored by
zpool import --rewind-to-checkpoint. Rewinding
will also discard the checkpoint. The existence of a
checkpoint in a pool prohibits the following
zpool commands:
remove,
attach,
detach,
split, and
reguid. In addition, it may
break reservation boundaries if the pool lacks free space. The
zpool status command indicates the existence of a checkpoint or
the progress of discarding a checkpoint from a pool. The
zpool list command reports how much space the checkpoint takes from
the pool.
-d, --discard Discards an existing checkpoint from
pool without
rewinding.
zpool clear pool [device]
Clears device errors in a pool. If no arguments are specified,
all device errors within the pool are cleared. If one or more
devices is specified, only those errors associated with the
specified device or devices are cleared. If multihost is
enabled, and the pool has been suspended, this will not resume
I/O. While the pool was suspended, it may have been imported
on another host, and resuming I/O could result in pool damage.
zpool create [-dfn] [-B] [-m mountpoint] [-o property=value]...
[-o feature@feature=value]... [-O file-system-property=value]...
[-R root] [-t tempname] pool vdev...
Creates a new storage pool containing the virtual devices
specified on the command line. The pool name must begin with a
letter, and can only contain alphanumeric characters as well as
underscore ("
_"), dash ("
-"), and period ("
."). The pool names
mirror,
raidz,
spare and
log are reserved, as are names
beginning with the pattern
c[0-9]. The
vdev specification is
described in the
Virtual Devices section.
The command attempts to verify that each device specified is
accessible and not currently in use by another subsystem.
However this check is not robust enough to detect simultaneous
attempts to use a new device in different pools, even if
multihost is
enabled. The administrator must ensure that
simultaneous invocations of any combination of
zpool replace,
zpool create,
zpool add, or
zpool labelclear, do not refer to
the same device. Using the same device in two pools will
result in pool corruption.
There are some uses, such as being currently mounted, or
specified as the dedicated dump device, that prevent a device
from ever being used by ZFS. Other uses, such as having a
preexisting UFS file system, can be overridden with the
-f option.
The command also checks that the replication strategy for the
pool is consistent. An attempt to combine redundant and non-
redundant storage in a single pool, or to mix disks and files,
results in an error unless
-f is specified. The use of
differently sized devices within a single raidz or mirror group
is also flagged as an error unless
-f is specified.
Unless the
-R option is specified, the default mount point is
/pool. The mount point must not exist or must be empty, or
else the root dataset cannot be mounted. This can be
overridden with the
-m option.
By default all supported features are enabled on the new pool
unless the
-d option is specified.
-B Create whole disk pool with EFI System partition to
support booting system with UEFI firmware. Default
size is 256MB. To create boot partition with custom
size, set the
bootsize property with the
-o option.
See the
Properties section for details.
-d Do not enable any features on the new pool. Individual
features can be enabled by setting their corresponding
properties to
enabled with the
-o option. See
zpool-features(7) for details about feature properties.
-f Forces use of
vdevs, even if they appear in use or
specify a conflicting replication level. Not all
devices can be overridden in this manner.
-m mountpoint Sets the mount point for the root dataset. The default
mount point is
/pool or
altroot/pool if
altroot is
specified. The mount point must be an absolute path,
legacy, or
none. For more information on dataset mount
points, see
zfs(8).
-n Displays the configuration that would be used without
actually creating the pool. The actual pool creation
can still fail due to insufficient privileges or device
sharing.
-o property=value Sets the given pool properties. See the
Properties section for a list of valid properties that can be set.
-o feature@feature=value Sets the given pool feature. See
zpool-features(7) for a list of valid features that can be set.
value can either be disabled or enabled.
-O file-system-property=value Sets the given file system properties
in the root file system of the pool. See the Properties section of
zfs(8) for a list of valid properties that can be set.
-R root Equivalent to -o cachefile=none -o altroot=root
-t tempname Sets the in-core pool name to
tempname while the on-
disk name will be the name specified as the pool name
pool. This will set the default cachefile property to
none. This is intended to handle name space collisions
when creating pools for other systems, such as virtual
machines or physical machines whose pools live on
network block devices.
zpool destroy [-f] pool
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before
destroying the pool.
-f Forces any active datasets contained within the pool to
be unmounted.
zpool detach pool device
Detaches
device from a mirror. The operation is refused if
there are no other valid replicas of the data.
zpool export [-f] pool...
Exports the given pools from the system. All devices are
marked as exported, but are still considered in use by other
subsystems. The devices can be moved between systems (even
those of different endianness) and imported as long as a
sufficient number of devices are present.
Before exporting the pool, all datasets within the pool are
unmounted. A pool cannot be exported if it has a shared spare
that is currently being used.
For pools to be portable, you must give the
zpool command whole
disks, not just slices, so that ZFS can label the disks with
portable EFI labels. Otherwise, disk drivers on platforms of
different endianness will not recognize the disks.
-f Forcefully unmount all datasets, using the
unmount -f command.
This command will forcefully export the pool even if it
has a shared spare that is currently being used. This
may lead to potential data corruption.
zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
Retrieves the given list of properties (or all properties if
all is used) for the specified storage pool(s). These
properties are displayed with the following fields:
name Name of storage pool
property Property name
value Property value
source Property source, either 'default' or 'local'.
See the
Properties section for more information on the
available pool properties.
-H Scripted mode. Do not display headers, and separate
fields by a single tab instead of arbitrary space.
-o field A comma-separated list of columns to display.
name,
property,
value,
source is the default value.
-p Display numbers in parsable (exact) values.
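For example, selected properties could be queried as follows (pool name is illustrative):
# zpool get capacity,health pool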
zpool history [-il] [pool]...
Displays the command history of the specified pool(s) or all
pools if no pool is specified.
-i Displays internally logged ZFS events in addition to
user initiated events.
-l Displays log records in long format, which in addition
to the standard format includes the user name, the
hostname, and the zone in which the operation was
performed.
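For example, the long-format history including internal events could be displayed as follows (pool name is illustrative):
# zpool history -il pool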
zpool import [-D] [-d dir]
Lists pools available to import. If the
-d option is not
specified, this command searches for devices in
/dev/dsk. The
-d option can be specified multiple times, and all directories
are searched. If the device appears to be part of an exported
pool, this command displays a summary of the pool with the name
of the pool, a numeric identifier, as well as the vdev layout
and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
zpool destroy command, are not listed unless the
-D option is
specified.
The numeric identifier is unique, and can be used instead of
the pool name when multiple exported pools of the same name are
available.
-c cachefile Reads configuration from the given
cachefile that was
created with the
cachefile pool property. This
cachefile is used instead of searching for devices.
-d dir Searches for devices or files in
dir. The
-d option
can be specified multiple times.
-D Lists destroyed pools only.
zpool import -a [-DflmN] [-F [-n]] [-c cachefile|-d dir] [-o mntopts]
[-o property=value]... [-R root]
Imports all pools found in the search directories. Identical
to the previous command, except that all pools with a
sufficient number of devices available are imported. Destroyed
pools, pools that were previously destroyed with the
zpool destroy command, will not be imported unless the
-D option is
specified.
-a Searches for and imports all pools found.
-c cachefile Reads configuration from the given
cachefile that was
created with the
cachefile pool property. This
cachefile is used instead of searching for devices.
-d dir Searches for devices or files in
dir. The
-d option
can be specified multiple times. This option is
incompatible with the
-c option.
-D Imports destroyed pools only. The
-f option is also
required.
-f Forces import, even if the pool appears to be
potentially active.
-F Recovery mode for a non-importable pool. Attempt to
return the pool to an importable state by discarding
the last few transactions. Not all damaged pools can
be recovered by using this option. If successful, the
data from the discarded transactions is irretrievably
lost. This option is ignored if the pool is importable
or already imported.
-l Indicates that this command will request encryption
keys for all encrypted datasets it attempts to mount as
it is bringing the pool online. Note that if any
datasets have a
keylocation of
prompt this command will
block waiting for the keys to be entered. Without this
flag encrypted datasets will be left unavailable until
the keys are loaded.
-m Allows a pool to import when there is a missing log
device. Recent transactions can be lost because the
log device will be discarded.
-n Used with the
-F recovery option. Determines whether a
non-importable pool can be made importable again, but
does not actually perform the pool recovery. For more
details about pool recovery mode, see the
-F option,
above.
-N Import the pool without mounting any file systems.
-o mntopts Comma-separated list of mount options to use when
mounting datasets within the pool. See
zfs(8) for a
description of dataset properties and mount options.
-o property=
value Sets the specified property on the imported pool. See
the
Properties section for more information on the
available pool properties.
-R root Sets the
cachefile property to
none and the
altroot property to
root.
zpool import [-Dfmt] [-F [-n]] [--rewind-to-checkpoint]
[-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root]
pool|id [newpool]
Imports a specific pool. A pool can be identified by its name
or the numeric identifier. If
newpool is specified, the pool
is imported using the name
newpool. Otherwise, it is imported
with the same name as its exported name.
If a device is removed from a system without running
zpool export first, the device appears as potentially active. It
cannot be determined if this was a failed export, or whether
the device is really in use from another host. To import a
pool in this state, the
-f option is required.
-c cachefile Reads configuration from the given
cachefile that was
created with the
cachefile pool property. This
cachefile is used instead of searching for devices.
-d dir Searches for devices or files in
dir. The
-d option
can be specified multiple times. This option is
incompatible with the
-c option.
-D Imports destroyed pool. The
-f option is also
required.
-f Forces import, even if the pool appears to be
potentially active.
-F Recovery mode for a non-importable pool. Attempt to
return the pool to an importable state by discarding
the last few transactions. Not all damaged pools can
be recovered by using this option. If successful, the
data from the discarded transactions is irretrievably
lost. This option is ignored if the pool is importable
or already imported.
-l Indicates that the zpool command will request
encryption keys for all encrypted datasets it attempts
to mount as it is bringing the pool online. This is
equivalent to running
zpool mount on each encrypted
dataset immediately after the pool is imported. If any
datasets have a prompt keysource, this command will
block waiting for the key to be entered. Otherwise,
encrypted datasets will be left unavailable until the
keys are loaded.
-m Allows a pool to import when there is a missing log
device. Recent transactions can be lost because the
log device will be discarded.
-n Used with the
-F recovery option. Determines whether a
non-importable pool can be made importable again, but
does not actually perform the pool recovery. For more
details about pool recovery mode, see the
-F option,
above.
-o mntopts Comma-separated list of mount options to use when
mounting datasets within the pool. See
zfs(8) for a
description of dataset properties and mount options.
-o property=
value Sets the specified property on the imported pool. See
the
Properties section for more information on the
available pool properties.
-R root Sets the
cachefile property to
none and the
altroot property to
root.
-t Used with
newpool. Specifies that
newpool is
temporary. Temporary pool names last until export.
Ensures that the original pool name will be used in all
label updates and therefore is retained upon export.
Will also set
cachefile property to
none when not
explicitly specified.
--rewind-to-checkpoint Rewinds pool to the checkpointed state. Once the pool
is imported with this flag there is no way to undo the
rewind. All changes and data that were written after
the checkpoint are lost! The only exception is when
the
readonly mounting option is enabled. In this case,
the checkpointed state of the pool is opened and an
administrator can see what the pool would look like if
they were to fully rewind.
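For example, an exported pool could be imported under a different name as follows (pool names are illustrative):
# zpool import pool newpool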
zpool initialize [-c | -s] pool [device...]
Begins initializing by writing to all unallocated regions on
the specified devices, or all eligible devices in the pool if
no individual devices are specified. Only leaf data or log
devices may be initialized.
-c, --cancel Cancel initializing on the specified devices, or all
eligible devices if none are specified. If one or more
target devices are invalid or are not currently being
initialized, the command will fail and no cancellation
will occur on any device.
-s, --suspend Suspend initializing on the specified devices, or all
eligible devices if none are specified. If one or more
target devices are invalid or are not currently being
initialized, the command will fail and no suspension
will occur on any device. Initializing can then be
resumed by running
zpool initialize with no flags on
the relevant target devices.
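For example, initialization could be started, suspended, and later resumed as follows (pool name is illustrative):
# zpool initialize pool
# zpool initialize -s pool
# zpool initialize pool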
zpool iostat [[-lq] | -rw] [-T u|d] [-ghHLnpPvy]
[[pool...]|[pool vdev...]|[vdev...]] [interval [count]]
Displays I/O statistics for the given pools/vdevs. Physical
I/Os may be observed via
iostat(8). If writes are located
nearby, they may be merged into a single larger operation.
Additional I/O may be generated depending on the level of vdev
redundancy. To filter output, you may pass in a list of pools,
a pool and list of vdevs in that pool, or a list of any vdevs
from any pool. If no items are specified, statistics for every
pool in the system are shown. When given an
interval, the
statistics are printed every
interval seconds until ^C is
pressed. If the -n flag is specified, the headers are displayed
only once, otherwise they are displayed periodically. If
count is specified, the command exits after
count reports are
printed. The first report printed is always the statistics
since boot regardless of whether
interval and
count are passed.
Also note that the units of
K,
M,
G ... that are printed in the
report are in base 1024. To get the raw values, use the
-p flag.
-T u|d Display a time stamp. Specify
u for a printed
representation of the internal representation of time.
See
time(2). Specify
d for standard date format. See
date(1).
-i Display vdev initialization status.
-g Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for
the zpool detach/offline/remove/replace commands.
-H Scripted mode. Do not display headers, and separate
fields by a single tab instead of arbitrary space.
-L Display real paths for vdevs resolving all symbolic
links. This can be used to look up the current block
device name regardless of the
/dev/dsk/ path used to
open it.
-n Print headers only once when passed.
-p Display numbers in parsable (exact) values. Time
values are in nanoseconds.
-P Display full paths for vdevs instead of only the last
component of the path. This can be used in conjunction
with the
-L flag.
-r Print request size histograms for the leaf vdev's IOs.
This includes histograms of individual IOs (ind) and
aggregate IOs (agg). These stats can be useful for
observing how well IO aggregation is working. Note
that TRIM IOs may exceed 16M, but will be counted as
16M.
-v Verbose statistics. Reports usage statistics for
individual vdevs within the pool, in addition to the
pool-wide statistics.
-y Omit statistics since boot. Normally the first line of
output reports the statistics since boot. This option
suppresses that first line of output.
-w Display latency histograms:
total_wait: Total IO time (queuing + disk IO time).
disk_wait: Disk IO time (time reading/writing the
disk).
syncq_wait: Amount of time IO spent in
synchronous priority queues. Does not include disk
time.
asyncq_wait: Amount of time IO spent in
asynchronous priority queues. Does not include disk
time.
scrub: Amount of time IO spent in scrub queue.
Does not include disk time.
-l Include average latency statistics:
total_wait: Average total IO time (queuing + disk IO
time).
disk_wait: Average disk IO time (time
reading/writing the disk).
syncq_wait: Average amount
of time IO spent in synchronous priority queues. Does
not include disk time.
asyncq_wait: Average amount of
time IO spent in asynchronous priority queues. Does
not include disk time.
scrub: Average queuing time in
scrub queue. Does not include disk time.
trim:
Average queuing time in trim queue. Does not include
disk time.
-q Include active queue statistics. Each priority queue
has both pending (
pend) and active (
activ) IOs.
Pending IOs are waiting to be issued to the disk, and
active IOs have been issued to disk and are waiting for
completion. These stats are broken out by priority
queue:
syncq_read/write: Current number of entries in
synchronous priority queues.
asyncq_read/write:
Current number of entries in asynchronous priority
queues.
scrubq_read: Current number of entries in
scrub queue.
trimq_write: Current number of entries in
trim queue.
All queue statistics are instantaneous measurements of
the number of entries in the queues. If you specify an
interval, the measurements will be sampled from the end
of the interval.
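For example, per-vdev statistics with average latencies could be printed every 5 seconds as follows (pool name is illustrative):
# zpool iostat -lv pool 5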
zpool labelclear [-f] device
Removes ZFS label information from the specified
device. The
device must not be part of an active pool configuration.
-f Treat exported or foreign devices as inactive.
zpool list [-HgLpPv] [-o property[,property]...] [-T u|d] [pool]...
[interval [count]]
Lists the given pools along with a health status and space
usage. If no
pools are specified, all pools in the system are
listed. When given an
interval, the information is printed
every
interval seconds until ^C is pressed. If
count is
specified, the command exits after
count reports are printed.
-g Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for
the zpool detach/offline/remove/replace commands.
-H Scripted mode. Do not display headers, and separate
fields by a single tab instead of arbitrary space.
-o property Comma-separated list of properties to display. See the
Properties section for a list of valid properties. The
default list is name, size, allocated, free, checkpoint, expandsize,
fragmentation, capacity, dedupratio, health, altroot.
-L Display real paths for vdevs resolving all symbolic
links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to
open it.
-p Display numbers in parsable (exact) values.
-P Display full paths for vdevs instead of only the last
component of the path. This can be used in conjunction
with the
-L flag.
-T u|d Display a time stamp. Specify
u for a printed
representation of the internal representation of time.
See
time(2). Specify
d for standard date format. See
date(1).
-v Verbose statistics. Reports usage statistics for
individual vdevs within the pool, in addition to the
pool-wide statistics.
zpool offline [-t] pool device...
Takes the specified physical device offline. While the
device is offline, no attempt is made to read or write to the device.
This command is not applicable to spares.
-t Temporary. Upon reboot, the specified physical device
reverts to its previous state.
zpool online [-e] pool device...
Brings the specified physical device online. This command is
not applicable to spares.
-e Expand the device to use all available space. If the
device is part of a mirror or raidz then all devices
must be expanded before the new space will become
available to the pool.
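For example, a disk could be taken offline temporarily and later brought back online and expanded (pool and device names are illustrative):
# zpool offline -t pool c0t2d0
# zpool online -e pool c0t2d0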
zpool reguid pool
Generates a new unique identifier for the pool. You must
ensure that all devices in this pool are online and healthy
before performing this action.
zpool reopen pool
Reopen all the vdevs associated with the pool.
zpool remove [-np] pool device...
Removes the specified device from the pool. This command
currently only supports removing hot spares, cache devices, log devices,
and mirrored top-level vdevs (mirrors of leaf devices), but not raidz.
Removing a top-level vdev reduces the total amount of space in
the storage pool. The specified device will be evacuated by
copying all allocated space from it to the other devices in the
pool. In this case, the
zpool remove command initiates the
removal and returns, while the evacuation continues in the
background. The removal progress can be monitored with
zpool status. This feature must be enabled to be used; see
zpool-features(7).
A mirrored top-level device (log or data) can be removed by
specifying the top-level mirror vdev. Non-log devices
or data devices that are part of a mirrored configuration can
be removed using the
zpool detach command.
-n Do not actually perform the removal ("no-op").
Instead, print the estimated amount of memory that will
be used by the mapping table after the removal
completes. This is nonzero only for top-level vdevs.
-p Used in conjunction with the
-n flag, displays numbers
as parsable (exact) values.
zpool remove -s pool
Stops and cancels an in-progress removal of a top-level vdev.
zpool replace [-f] pool device [new_device]
Replaces
old_device with
new_device. This is equivalent to
attaching
new_device, waiting for it to resilver, and then
detaching
old_device.
The size of
new_device must be greater than or equal to the
minimum size of all the devices in a mirror or raidz
configuration.
new_device is required if the pool is not redundant. If
new_device is not specified, it defaults to
old_device. This
form of replacement is useful after an existing disk has failed
and has been physically replaced. In this case, the new disk
may have the same
/dev/dsk path as the old device, even though
it is actually a different disk. ZFS recognizes this.
-f Forces use of
new_device, even if it appears to be in
use. Not all devices can be overridden in this manner.
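For example, after physically swapping a failed disk in the same slot it could be replaced in place, or a different disk could be substituted (pool and device names are illustrative):
# zpool replace pool c0t3d0
# zpool replace pool c0t3d0 c0t4d0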
zpool resilver pool...
Starts a resilver. If an existing resilver is already running
it will be restarted from the beginning. Any drives that were
scheduled for a deferred resilver will be added to the new one.
This requires the
resilver_defer feature.
zpool scrub [-s | -p] pool...
Begins a scrub or resumes a paused scrub. The scrub examines
all data in the specified pools to verify that it checksums
correctly. For replicated (mirror or raidz) devices, ZFS
automatically repairs any damage discovered during the scrub.
The
zpool status command reports the progress of the scrub and
summarizes the results of the scrub upon completion.
Scrubbing and resilvering are very similar operations. The
difference is that resilvering only examines data that ZFS
knows to be out of date (for example, when attaching a new
device to a mirror or replacing an existing device), whereas
scrubbing examines all data to discover silent errors due to
hardware faults or disk failure.
Because scrubbing and resilvering are I/O-intensive operations,
ZFS only allows one at a time. If a scrub is paused, the
zpool scrub command resumes it. If a resilver is in progress, ZFS does not
allow a scrub to be started until the resilver completes.
Note that, due to changes in pool data on a live system, it is
possible for scrubs to progress slightly beyond 100%
completion. During this period, no completion time estimate
will be provided.
-s Stop scrubbing.
-p Pause scrubbing. Scrub pause state and progress are
periodically synced to disk. If the system is
restarted or pool is exported during a paused scrub,
even after import, scrub will remain paused until it is
resumed. Once resumed the scrub will pick up from the
place where it was last checkpointed to disk. To
resume a paused scrub issue
zpool scrub again.
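For example, a scrub could be started, paused, and later resumed as follows (pool name is illustrative):
# zpool scrub pool
# zpool scrub -p pool
# zpool scrub pool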
zpool set property=value pool
Sets the given property on the specified pool. See the
Properties section for more information on what properties can
be set and acceptable values.
zpool split [-gLlnP] [-o property=value]... [-R root] pool newpool
Splits devices off pool creating newpool. All vdevs in pool must be
mirrors. At the time of the split, newpool will be a replica of pool.
-g Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for
the zpool detach/offline/remove/replace commands.
-L Display real paths for vdevs resolving all symbolic
links. This can be used to look up the current block
device name regardless of the
/dev/disk/ path used to
open it.
-l Indicates that this command will request encryption
keys for all encrypted datasets it attempts to mount as
it is bringing the new pool online. Note that if any
datasets have a
keylocation of
prompt this command will
block waiting for the keys to be entered. Without this
flag encrypted datasets will be left unavailable and
unmounted until the keys are loaded.
-n Do dry run, do not actually perform the split. Print
out the expected configuration of
newpool.
-P Display full paths for vdevs instead of only the last
component of the path. This can be used in conjunction
with the
-L flag.
-o property=
value Sets the specified property for
newpool. See the
Properties section for more information on the
available pool properties.
-R root Set
altroot for
newpool to
root and automatically
import it.
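For example, one half of each mirror could be split into a new pool and imported immediately under an alternate root (pool names and mount path are illustrative):
# zpool split -R /mnt pool pool2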
zpool status [-DigLpPstvx] [-T u|d] [pool]... [interval [count]]
Displays the detailed health status for the given pools. If no
pool is specified, then the status of each pool in the system
is displayed. For more information on pool and device health,
see the
Device Failure and Recovery section.
If a scrub or resilver is in progress, this command reports the
percentage done and the estimated time to completion. Both of
these are only approximate, because the amount of data in the
pool and the other workloads on the system can change.
-D Display a histogram of deduplication statistics,
showing the allocated (physically present on disk) and
referenced (logically referenced in the pool) block
counts and sizes by reference count.
-g Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for
the zpool detach/offline/remove/replace commands.
-L Display real paths for vdevs resolving all symbolic
links. This can be used to look up the current block
device name regardless of the
/dev/disk/ path used to
open it.
-p Display numbers in parsable (exact) values.
-P Display full paths for vdevs instead of only the last
component of the path. This can be used in conjunction
with the
-L flag.
-s Display the number of leaf VDEV slow IOs. This is the
number of IOs that didn't complete in
zio_slow_io_ms milliseconds (default 30 seconds). This does not
necessarily mean the IOs failed to complete, just that they took
an unreasonably long amount of time. This may indicate
a problem with the underlying storage.
-t Display vdev TRIM status.
-T u|d Display a time stamp. Specify
u for a printed
representation of the internal representation of time.
See
time(2). Specify
d for standard date format. See
date(1).
-v Displays verbose data error information, printing out a
complete list of all data errors since the last
complete pool scrub.
-x Only display status for pools that are exhibiting
errors or are otherwise unavailable. Warnings about
pools not using the latest on-disk format will not be
included.
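For example, assuming a hypothetical pool named tank, only unhealthy
pools could be listed, and detailed error information for tank could
be displayed, as follows:
# zpool status -x
# zpool status -v tank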
zpool sync [pool]...
Forces all in-core dirty data to be written to the primary pool
storage and not the ZIL. It will also update administrative
information including quota reporting. Without arguments, zpool sync
will sync all pools on the system. Otherwise, it will only sync the
specified pool.
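For example, assuming a hypothetical pool named tank:
# zpool sync tank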
zpool trim [-d] [-r rate] [-c | -s] pool [device...]
Initiates an immediate on-demand TRIM operation for all of the
free space in a pool. This operation informs the underlying
storage devices of all blocks in the pool which are no longer
allocated and allows thinly provisioned devices to reclaim the
space.
A manual on-demand TRIM operation can be initiated irrespective of
the autotrim pool property setting. See the documentation for the
autotrim property above for the types of vdev devices which can be
trimmed.
-d, --secure Causes a secure TRIM to be initiated. When performing
a secure TRIM, the device guarantees that data stored
on the trimmed blocks has been erased. This requires
support from the device and is not supported by all
SSDs.
-r, --rate rate Controls the rate at which the TRIM operation
progresses. Without this option TRIM is executed as
quickly as possible. The rate, expressed in bytes per
second, is applied on a per-vdev basis and may be set
differently for each leaf vdev.
-c, --cancel Cancel trimming on the specified devices, or all
eligible devices if none are specified. If one or more
target devices are invalid or are not currently being
trimmed, the command will fail and no cancellation will
occur on any device.
-s, --suspend Suspend trimming on the specified devices, or all
eligible devices if none are specified. If one or more target
devices are invalid or are not currently being trimmed, the command
will fail and no suspension will occur on any device. Trimming can
then be resumed by running zpool trim with no flags on the relevant
target devices.
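For example, assuming a hypothetical pool named tank, a manual TRIM
could be started, suspended, resumed, and its progress checked as
follows:
# zpool trim tank
# zpool trim -s tank
# zpool trim tank
# zpool status -t tank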
zpool upgrade Displays pools which do not have all supported features enabled
and pools formatted using a legacy ZFS version number. These
pools can continue to be used, but some features may not be
available. Use
zpool upgrade -a to enable all features on all
pools.
zpool upgrade -v Displays legacy ZFS versions supported by the
current software. See zpool-features(7) for a description of the
feature flags supported by the current software.
zpool upgrade [-V version] -a|pool...
Enables all supported features on the given pool. Once this is
done, the pool will no longer be accessible on systems that do
not support feature flags. See
zpool-features(7) for details
on compatibility with systems that support feature flags, but
do not support all features enabled on the pool.
-a Enables all supported features on all pools.
-V version Upgrade to the specified legacy version. If the
-V flag is specified, no features will be enabled on the
pool. This option can only be used to increase the
version number up to the last supported legacy version
number.
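For example, assuming a hypothetical pool named tank, all supported
features could be enabled on that pool alone with:
# zpool upgrade tank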
EXIT STATUS
The following exit values are returned:
0 Successful completion.
1 An error occurred.
2 Invalid command line options were specified.
EXAMPLES
Example 1 Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root
vdev that consists of six disks.
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
Example 2 Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where
each mirror contains two disks.
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
Example 3 Creating a ZFS Storage Pool by Using Slices
The following command creates an unmirrored pool using two disk
slices.
# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
Example 4 Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files.
While not recommended, a pool based on files can be useful for
experimental purposes.
# zpool create tank /path/to/file/a /path/to/file/b
Example 5 Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
tank,
assuming the pool is already made up of two-way mirrors. The
additional space is immediately available to any datasets
within the pool.
# zpool add tank mirror c1t0d0 c1t1d0
Example 6 Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
zion is faulted due to a missing device.
The results from this command are similar to the following:
# zpool list
NAME    SIZE  ALLOC   FREE  FRAG  EXPANDSZ   CAP  DEDUP  HEALTH   ALTROOT
rpool  19.9G  8.43G  11.4G   33%         -   42%  1.00x  ONLINE   -
tank   61.5G  20.0G  41.5G   48%         -   32%  1.00x  ONLINE   -
zion       -      -      -     -         -     -      -  FAULTED  -
Example 7 Destroying a ZFS Storage Pool
The following command destroys the pool
tank and any datasets
contained within.
# zpool destroy -f tank
Example 8 Exporting a ZFS Storage Pool
The following command exports the devices in pool
tank so that
they can be relocated or later imported.
# zpool export tank
Example 9 Importing a ZFS Storage Pool
The following command displays available pools, and then
imports the pool
tank for use on the system. The results from
this command are similar to the following:
# zpool import
pool: tank
id: 15451357997522795478
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

tank        ONLINE
  mirror    ONLINE
    c1t2d0  ONLINE
    c1t3d0  ONLINE
# zpool import tank
Example 10 Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the
current version of the software.
# zpool upgrade -a
This system is currently running ZFS version 2.
Example 11 Managing Hot Spares
The following command creates a new pool with an available hot
spare:
# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
If one of the disks were to fail, the pool would be reduced to
the degraded state. The failed device can be replaced using
the following command:
# zpool replace tank c0t0d0 c0t3d0
Once the data has been resilvered, the spare is automatically
removed and is made available for use should another device
fail. The hot spare can be permanently removed from the pool
using the following command:
# zpool remove tank c0t2d0
Example 12 Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of
two, two-way mirrors and mirrored log devices:
# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
c4d0 c5d0
Example 13 Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices
to a ZFS storage pool:
# zpool add pool cache c2d0 c3d0
Once added, the cache devices gradually fill with content from
main memory. Depending on the size of your cache devices, it
could take over an hour for them to fill. Capacity and reads
can be monitored using the
iostat option as follows:
# zpool iostat -v pool 5
Example 14 Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device mirror-2 and
the mirrored top-level data device mirror-1.
Given this configuration:
pool: tank
state: ONLINE
scrub: none requested
config:
NAME          STATE     READ WRITE CKSUM
tank          ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    c6t0d0    ONLINE       0     0     0
    c6t1d0    ONLINE       0     0     0
  mirror-1    ONLINE       0     0     0
    c6t2d0    ONLINE       0     0     0
    c6t3d0    ONLINE       0     0     0
logs
  mirror-2    ONLINE       0     0     0
    c4t0d0    ONLINE       0     0     0
    c4t1d0    ONLINE       0     0     0
The command to remove the mirrored log
mirror-2 is:
# zpool remove tank mirror-2
The command to remove the mirrored data
mirror-1 is:
# zpool remove tank mirror-1
Example 15 Displaying expanded space on a device
The following command displays the detailed information for the pool
data. This pool is comprised of a single raidz vdev where one of its
devices increased its capacity by 10GB. In this example, the pool
will not be able to utilize this extra capacity until all the devices
under the raidz vdev have been expanded.
# zpool list -v data
NAME         SIZE  ALLOC   FREE  FRAG  EXPANDSZ   CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G   48%         -   61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G   48%         -
    c1t1d0      -      -      -     -         -
    c1t2d0      -      -      -     -       10G
    c1t3d0      -      -      -     -         -
ENVIRONMENT VARIABLES
ZPOOL_VDEV_NAME_GUID Cause zpool subcommands to output vdev GUIDs by
default. This behavior is identical to the zpool status -g command
line option.
ZPOOL_VDEV_NAME_FOLLOW_LINKS Cause zpool subcommands to follow links
for vdev names by default. This behavior is identical to the zpool
status -L command line option.
ZPOOL_VDEV_NAME_PATH Cause zpool subcommands to output full vdev path
names by default. This behavior is identical to the zpool status -P
command line option.
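For example, assuming the variable merely needs to be set to a
non-empty value and assuming a hypothetical pool named tank,
GUID-based vdev names could be requested for a single invocation as
follows:
# ZPOOL_VDEV_NAME_GUID=1 zpool status tank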
INTERFACE STABILITY
Evolving
SEE ALSO
attributes(7), zpool-features(7), zfs(8)
illumos May 8, 2024 illumos