
NAME

nvme-discover - Send Get Log Page request to Discovery Controller.

SYNOPSIS
nvme discover
                [--transport=<trtype>     | -t <trtype>]
                [--traddr=<traddr>        | -a <traddr>]
                [--trsvcid=<trsvcid>      | -s <trsvcid>]
                [--host-traddr=<traddr>   | -w <traddr>]
                [--hostnqn=<hostnqn>      | -q <hostnqn>]
                [--hostid=<hostid>        | -I <hostid>]
                [--raw=<filename>         | -r <filename>]
                [--keep-alive-tmo=<sec>   | -k <sec>]
                [--reconnect-delay=<#>    | -c <#>]
                [--ctrl-loss-tmo=<#>      | -l <#>]
                [--hdr_digest             | -g]
                [--data_digest            | -G]
                [--nr-io-queues=<#>       | -i <#>]
                [--nr-write-queues=<#>    | -W <#>]
                [--nr-poll-queues=<#>     | -P <#>]
                [--queue-size=<#>         | -Q <#>]
                [--persistent             | -p]
                [--quiet                  | -S]

DESCRIPTION
Send one or more Get Log Page requests to an NVMe-over-Fabrics Discovery Controller.

If no parameters are given, nvme discover attempts to read /etc/nvme/discovery.conf for a list of Discovery commands to run. If /etc/nvme/discovery.conf does not exist, the command exits with an error.

Otherwise, a specific Discovery Controller should be specified using the --transport, --traddr, and, if necessary, the --trsvcid flags. A Discovery request will then be sent to the specified Discovery Controller.
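
As a sketch, such an invocation might look like this (the address below is a placeholder, not a value from this page, and running it requires an actual RDMA fabric and root privileges):

```shell
# Query a hypothetical Discovery Controller reachable over RDMA.
# 10.0.0.10 is an example address; 4420 is the default RDMA port.
nvme discover --transport=rdma --traddr=10.0.0.10 --trsvcid=4420
```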

BACKGROUND
The NVMe-over-Fabrics specification defines the concept of a Discovery Controller that an NVMe Host can query on a fabric network to discover NVMe subsystems, contained in NVMe Targets, to which it can connect. The Discovery Controller returns Discovery Log Pages that give the NVMe Host specific information (such as the network address and the unique subsystem NQN) it can use to issue an NVMe connect command and attach itself to a storage resource contained in that NVMe subsystem on the NVMe Target.

Note that the base NVMe specification defines the NQN (NVMe Qualified Name) format which an NVMe endpoint (device, subsystem, etc.) must follow to guarantee a unique name under the NVMe standard. In particular, the Host NQN uniquely identifies the NVMe Host, and may be used by the Discovery Controller to control what NVMe Target resources are allocated to the NVMe Host for a connection.
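
As a rough sketch, the leading part of that format can be sanity-checked with a simple pattern; the sample NQN below is made up for illustration:

```shell
# An NQN begins with "nqn.", a year-month date code, and a reverse-domain
# name, per the base NVMe specification. The sample value is hypothetical.
hostnqn="nqn.2014-08.org.example:host1"
if echo "$hostnqn" | grep -Eq '^nqn\.[0-9]{4}-[0-9]{2}\.'; then
    echo "well-formed"
fi
```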

A Discovery Controller has its own NQN, defined in the NVMe-over-Fabrics specification; all Discovery Controllers must use this NQN. nvme-cli uses this NQN by default for the discover command.
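
That well-known NQN is fixed by the NVMe-over-Fabrics specification, so there is normally no need to supply it yourself:

```shell
# The well-known NQN every Discovery Controller must use, as defined in the
# NVMe-over-Fabrics specification; nvme discover supplies it automatically.
echo "nqn.2014-08.org.nvmexpress.discovery"
```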

OPTIONS
-t <trtype>, --transport=<trtype>
This field specifies the network fabric being used for an NVMe-over-Fabrics network. Current string values include:

Value   Definition
rdma    The network fabric is an RDMA network (RoCE, iWARP, InfiniBand, basic rdma, etc.)
fc      The network fabric is a Fibre Channel network (WIP).
loop    Connect to an NVMe over Fabrics target on the local host
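
For example, the loop transport needs no network address, which makes it convenient for testing against a target configured on the same machine (a sketch, assuming such a local loop target exists and the command is run as root):

```shell
# Discover a loopback NVMe-oF target on the local host; no --traddr needed.
nvme discover --transport=loop
```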

-a <traddr>, --traddr=<traddr>

This field specifies the network address of the Discovery Controller. For transports using IP addressing (e.g. rdma), this should be an IP-based address (e.g. an IPv4 address).

-s <trsvcid>, --trsvcid=<trsvcid>

This field specifies the transport service id. For transports using IP addressing (e.g. rdma) this field is the port number. By default, the IP port number for the RDMA transport is 4420.

-w <traddr>, --host-traddr=<traddr>

This field specifies the network address used on the host to connect to the Discovery Controller.

-q <hostnqn>, --hostnqn=<hostnqn>

Overrides the default host NQN that identifies the NVMe Host. If this option is not specified, the default is read from /etc/nvme/hostnqn first. If that does not exist, the autogenerated NQN value from the NVMe Host kernel module is used next.
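
nvme-cli also provides a gen-hostnqn subcommand that generates a UUID-based host NQN suitable for that file (a sketch; writing to /etc/nvme requires root):

```shell
# Generate a host NQN and persist it as the default for this host.
nvme gen-hostnqn > /etc/nvme/hostnqn
cat /etc/nvme/hostnqn
```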

-I <hostid>, --hostid=<hostid>

Overrides the default Host Identifier used for discovery; the value must be a properly formatted UUID (Universally Unique Identifier).

-r <filename>, --raw=<filename>

This option dumps the output of the nvme discover command to a raw binary file. By default, nvme discover prints its output to stdout.
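
For instance, to capture the log page bytes for later inspection (the address below is a placeholder and the command needs a reachable Discovery Controller):

```shell
# Save the raw discovery log page instead of printing to stdout.
# 10.0.0.10 is an example address.
nvme discover --transport=rdma --traddr=10.0.0.10 --raw=/tmp/disc.raw
```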

-k <sec>, --keep-alive-tmo=<sec>

Overrides the default keep-alive delay (in seconds). This option is ignored for discovery; it is implemented only for completeness.

-c <#>, --reconnect-delay=<#>

Overrides the default delay (in seconds) before a reconnect is attempted after a connection loss.

-l <#>, --ctrl-loss-tmo=<#>

Overrides the default controller loss timeout period (in seconds).

-g, --hdr_digest

Generates/verifies header digest (TCP).

-G, --data_digest

Generates/verifies data digest (TCP).

-i <#>, --nr-io-queues=<#>

Overrides the default number of I/O queues created by the driver. This option is ignored for discovery; it is implemented only for completeness.

-W <#>, --nr-write-queues=<#>

Adds additional queues that will be used for write I/O.

-P <#>, --nr-poll-queues=<#>

Adds additional queues that will be used for polling latency sensitive I/O.

-Q <#>, --queue-size=<#>

Overrides the default number of elements in the I/O queues created by the driver (the default is defined in drivers/nvme/host/fabrics.h). This option is ignored for discovery; it is implemented only for completeness.

-p, --persistent

Keep the discovery connection persistent rather than tearing it down when the command completes.

-S, --quiet

Suppress "already connected" error messages.

EXAMPLES
•Query the Discovery Controller at an IPv4 address for all resources allocated for NVMe Host name host1-rogue-nqn on the RDMA network. Port 4420 is used by default:

# nvme discover --transport=rdma --traddr= \
--hostnqn=host1-rogue-nqn
•Issue an nvme discover command using a /etc/nvme/discovery.conf file:

# Machine default 'nvme discover' commands.  Query the
# Discovery Controller's two ports (some resources may only
# be accessible on a single port).  Note an official
# nqn (Host) name defined in the NVMe specification is being used
# in this example.
-t rdma -a -s 4420 -q
-t rdma -a   -s 4420 -q
At the prompt type "nvme discover".

SEE ALSO
nvme-connect(1) nvme-connect-all(1)

AUTHORS
This was written by Jay Freyensee.

NVME
Part of the nvme-user suite


04/24/2020 NVMe