Chapter 12. Configuring NVMe over fabrics using NVMe/TCP


In a Non-volatile Memory Express™ (NVMe™) over TCP (NVMe/TCP) setup, only the host mode is fully supported; the controller setup is not supported.

Red Hat does not support the NVMe Target (nvmet) functionality. Consult your storage manufacturer’s documentation for instructions about how to configure your NVMe over Fabrics block storage target device.

Note

In Red Hat Enterprise Linux 10, native NVMe multipathing is enabled by default. Enabling DM multipathing is not supported with NVMe/TCP.

12.1. Configuring an NVMe/TCP host

You can configure a Non-volatile Memory Express™ (NVMe™) over TCP (NVMe/TCP) host by using the NVMe management command-line interface (nvme-cli) tool. For more information, see the nvme(1) man page on your system.

Procedure

  1. Install the nvme-cli tool:

    # dnf install nvme-cli
  2. Check the status of the Ethernet controller:

    # nmcli device show ens6
    GENERAL.DEVICE:                         ens6
    GENERAL.TYPE:                           ethernet
    GENERAL.HWADDR:                         52:57:02:12:02:02
    GENERAL.MTU:                            1500
    GENERAL.STATE:                          30 (disconnected)
    GENERAL.CONNECTION:                     --
    GENERAL.CON-PATH:                       --
    WIRED-PROPERTIES.CARRIER:               on
  3. Configure the host network for a newly installed Ethernet controller with a static IP address:

    # nmcli connection add con-name ens6 ifname ens6 type ethernet ip4 192.168.101.154/24 gw4 192.168.101.1

    Here, replace 192.168.101.154 with the host IP address.

    # nmcli connection mod ens6 ipv4.method manual
    # nmcli connection up ens6

    Because this new network connects the NVMe/TCP host to the NVMe/TCP controller, repeat this step on the controller as well.

Verification

  • Verify that the newly created host network works correctly:

    # nmcli device show ens6
    GENERAL.DEVICE:                         ens6
    GENERAL.TYPE:                           ethernet
    GENERAL.HWADDR:                         52:57:02:12:02:02
    GENERAL.MTU:                            1500
    GENERAL.STATE:                          100 (connected)
    GENERAL.CONNECTION:                     ens6
    GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/5
    WIRED-PROPERTIES.CARRIER:               on
    IP4.ADDRESS[1]:                         192.168.101.154/24
    IP4.GATEWAY:                            192.168.101.1
    IP4.ROUTE[1]:                           dst = 192.168.101.0/24, nh = 0.0.0.0, mt = 101
    IP4.ROUTE[2]:                           dst = 192.168.1.1/32, nh = 0.0.0.0, mt = 101
    IP4.ROUTE[3]:                           dst = 0.0.0.0/0, nh = 192.168.1.1, mt = 101
    IP6.ADDRESS[1]:                         fe80::27ce:dde1:620:996c/64
    IP6.GATEWAY:                            --
    IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 101

12.2. Connecting the NVMe/TCP host to the NVMe/TCP controller

Connect the NVMe™ over TCP (NVMe/TCP) host to the NVMe/TCP controller system to verify that the NVMe/TCP host can now access the namespace. For more information, see the nvme(1) man page on your system.

Note

The NVMe/TCP controller (nvmet-tcp) module is not supported.

Prerequisites

  • You have configured an NVMe/TCP host. For more information, see Configuring an NVMe/TCP host.
  • You have configured an NVMe/TCP controller by using external storage software and the network is configured on the controller. In this procedure, 192.168.101.55 is the IP address of the NVMe/TCP controller.

Procedure

  1. Load the nvme-tcp module if it is not already loaded:

    # modprobe nvme-tcp
  2. Discover the available subsystems on the NVMe controller:

    # nvme discover --transport=tcp --traddr=192.168.101.55 --trsvcid=8009
    
    Discovery Log Number of Records 2, Generation counter 7
    =====Discovery Log Entry 0======
    trtype:  tcp
    adrfam:  ipv4
    subtype: current discovery subsystem
    treq:	not specified, sq flow control disable supported
    portid:  2
    trsvcid: 8009
    subnqn:  nqn.2014-08.org.nvmexpress.discovery
    traddr:  192.168.101.55
    eflags:  not specified
    sectype: none
    =====Discovery Log Entry 1======
    trtype:  tcp
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:	not specified, sq flow control disable supported
    portid:  2
    trsvcid: 8009
    subnqn:  nqn.2014-08.org.nvmexpress:uuid:0c468c4d-a385-47e0-8299-6e95051277db
    traddr:  192.168.101.55
    eflags:  not specified
    sectype: none

    Here, 192.168.101.55 is the NVMe/TCP controller IP address.

  3. Configure the /etc/nvme/discovery.conf file to add the parameters used in the nvme discover command:

    # echo "--transport=tcp --traddr=192.168.101.55 --trsvcid=8009" >> /etc/nvme/discovery.conf
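    After this step, /etc/nvme/discovery.conf contains one line per discovery endpoint, holding the same options that you passed to the nvme discover command. With the controller address used in this procedure, the file contains:

    ```
    --transport=tcp --traddr=192.168.101.55 --trsvcid=8009
    ```

    When you run nvme connect-all without arguments in the next step, it reads each line of this file, performs discovery against that endpoint, and connects to the subsystems it finds.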
  4. Connect the NVMe/TCP host to the controller system:

    # nvme connect-all
  5. Make the NVMe/TCP connection persistent:

    # systemctl enable nvmf-autoconnect.service

Verification

  • Verify that the NVMe/TCP host can access the namespace:

    # nvme list-subsys
    
    nvme-subsys3 - NQN=nqn.2014-08.org.nvmexpress:uuid:0c468c4d-a385-47e0-8299-6e95051277db
    \
     +- nvme3 tcp traddr=192.168.101.55,trsvcid=8009,host_traddr=192.168.101.154 live optimized
    
    # nvme list
    Node          Generic     SN                   Model  Namespace  Usage                 Format       FW Rev
    ------------- ----------- -------------------- ------ ---------- --------------------- ------------ --------
    /dev/nvme3n1  /dev/ng3n1  d93a63d394d043ab4b74 Linux  1          21.47 GB / 21.47 GB   512 B + 0 B  5.18.5-2

12.3. Configuring NVMe host authentication

To establish an authenticated connection with an NVMe over Fabrics (NVMe-oF) controller, you can configure authentication on a Non-volatile Memory Express (NVMe) host. NVMe authentication uses a shared secret, or a pair of secrets, in a challenge-response protocol such as NVMe DH-HMAC-CHAP.
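The challenge-response idea can be illustrated with a short sketch: the controller sends a random challenge, the host answers with a keyed hash of it, and the controller verifies the answer by recomputing it, so the shared secret itself never crosses the wire. This is a simplified, hypothetical illustration using openssl, not the actual DH-HMAC-CHAP wire format:

```shell
#!/bin/sh
# Simplified challenge-response illustration (not the real DH-HMAC-CHAP
# protocol): both sides hold the shared secret; only the challenge and the
# HMAC response are exchanged, never the secret itself.
secret="example-shared-secret"     # hypothetical shared secret

# Controller side: generate a random challenge.
challenge=$(openssl rand -hex 16)

# Host side: respond with HMAC-SHA256(secret, challenge).
response=$(printf '%s' "$challenge" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')

# Controller side: recompute the expected response and compare.
expected=$(printf '%s' "$challenge" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
if [ "$response" = "$expected" ]; then
    echo "authentication succeeded"
else
    echo "authentication failed"
fi
```

Because the response is bound to a fresh random challenge, a captured response cannot be replayed for a later login attempt.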

Note

NVMe authentication is supported only for the NVMe/TCP transport type. This feature is not available for other transports, such as NVMe over Remote Direct Memory Access (NVMe/RDMA) or NVMe over Fibre Channel (NVMe/FC).

Prerequisites

  • The nvme-cli package is installed.
  • You know the Host NVMe Qualified Name (Host NQN) and, if you use bi-directional authentication, the Subsystem NVMe Qualified Name (Subsystem NQN). To see the default Host NQN for your system, run nvme show-hostnqn.

Procedure

  1. Generate an authentication secret:

    1. For the host:

      # HOSTNQN=$(nvme show-hostnqn)
      # hostkey=$(nvme gen-dhchap-key -n ${HOSTNQN})
    2. For the subsystem:

      # ctrlkey=$(nvme gen-dhchap-key -n ${SUBSYSTEM})
  2. Configure the host for authentication:

    # nvme connect -t tcp -n ${SUBSYSTEM} -a ${TRADDR} -s 4420 --dhchap-secret=${hostkey} --dhchap-ctrl-secret=${ctrlkey}

    This provides the authentication secrets to the nvme connect command so that it can authenticate and establish a connection to the target.

    • Optional: To enable automated logins, set up persistent NVMe fabrics configuration. To do so, add the --dhchap-secret and --dhchap-ctrl-secret parameters to /etc/nvme/discovery.conf or /etc/nvme/config.json.
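    For example, an /etc/nvme/discovery.conf entry that carries the authentication secrets could look like the following sketch, where <TRADDR>, <hostkey>, and <ctrlkey> are placeholders for the controller address and the keys generated earlier, not usable values:

    ```
    --transport=tcp --traddr=<TRADDR> --trsvcid=4420 --dhchap-secret=<hostkey> --dhchap-ctrl-secret=<ctrlkey>
    ```

    With such an entry in place, nvme connect-all can re-establish the authenticated connection, for example when run from the nvmf-autoconnect.service unit.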

Verification

  • Verify that the NVMe storage is attached:

    # nvme list

    This displays the list of NVMe devices currently attached to the host. Verify that the expected storage is listed, indicating the connection to the storage server is successful.

12.4. Configuring an NVMe/TCP host with TLS

You can configure a Non-volatile Memory Express™ (NVMe™) over TCP (NVMe/TCP) host while enabling TLS encryption. The NVMe/TLS configuration uses a TLS Pre-Shared Key (PSK).

The NVM Express TCP Transport Specification defines a PSK Interchange Format for exchanging PSK information between systems. You can use nvme-cli or other methods to generate PSKs in this format; for example, to create one on a storage target, see your vendor documentation. nvme-cli then uses these configured PSKs to derive retained PSKs, which are inserted into a kernel keyring for use.
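A PSK in the interchange format is a single printable string: a fixed version tag, a hash identifier, the base64-encoded key material with a CRC, and a trailing colon. The string produced by nvme gen-tls-key has the following shape, shown here with placeholder key material rather than a usable key:

```
NVMeTLSkey-1:01:<base64-encoded key material and CRC>:
```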

Important

NVMe/TCP using TLS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • The nvme_tcp kernel module is installed on your system.
  • The following packages are installed on your system:

    • nvme-cli
    • ktls-utils
  • You have the Subsystem NVMe Qualified Name (Subsystem NQN).
  • You have root permissions on the system.

Procedure

  1. Configure the pre-shared key keyring.

    1. Identify the Host NQN:

      # HOSTNQN=$(nvme show-hostnqn)
    2. Generate a new configured PSK and display it:

      # PSK=$(nvme gen-tls-key)
      # echo $PSK
    3. Insert the retained PSK derived from the configured PSK into the keyring:

      # nvme check-tls-key --insert --hostnqn=${HOSTNQN} --subsysnqn=${SUBSYSTEM} --keydata=${PSK} --identity=1
  2. Configure the tlshd service.

    1. Add the keyring name to the /etc/tlshd.conf configuration file:

      ...
      [authenticate]
      keyring=.nvme
      ...
    2. Restart the tlshd service:

      # systemctl restart tlshd
  3. Enable TLS for NVMe fabrics connections:

    # nvme discover -t tcp --tls -a ${TRADDR} -s 4420
    # nvme connect -t tcp --tls -a ${TRADDR} -s 4420 -n ${SUBSYSTEM}

Verification

  • List the NVMe devices that are currently connected:

    # nvme list
    Node          Generic     SN                   Model                    Namespace  Usage                  Format       FW Rev
    ------------- ----------- -------------------- ------------------------ ---------- ---------------------- ------------ --------
    /dev/nvme4n1  /dev/ng4n1  81JJAJTOpnmUAAAAAAAB NetApp ONTAP Controller  0x1        16.17 GB / 161.06 GB   4 KiB + 0 B  9.16.1