29.2. NVMe over fabrics using FC


NVMe over Fibre Channel (FC-NVMe) is fully supported in initiator mode when used with certain Broadcom Emulex and Marvell QLogic Fibre Channel adapters. As a system administrator, complete the tasks in the following sections to deploy FC-NVMe:
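
Broadcom Emulex adapters use the lpfc driver and Marvell QLogic adapters use the qla2xxx driver. If you are not sure which of the following procedures applies to your system, you can check which driver is bound to your adapters. This is a minimal check and assumes the adapters and their drivers are already installed:

    # lsmod | grep -E 'lpfc|qla2xxx'
    # cat /sys/class/fc_host/host*/symbolic_name

Where available, the symbolic_name attribute also reports the adapter model and driver version.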

29.2.1. Configuring the NVMe initiator for Broadcom adapters

Use this procedure to configure the NVMe initiator on a client with Broadcom adapters by using the NVMe management command-line interface (nvme-cli) tool.
  1. Install the nvme-cli tool:
    # yum install nvme-cli
    This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host. To generate a new hostnqn:
    # nvme gen-hostnqn
  2. Create a /etc/modprobe.d/lpfc.conf file with the following content:
    options lpfc lpfc_enable_fc4_type=3
  3. Rebuild the initramfs image:
    # dracut --force
  4. Reboot the host system to reconfigure the lpfc driver:
    # systemctl reboot
  5. Find the WWNN and WWPN of the local and remote ports and use the output to find the subsystem NQN:
    # cat /sys/class/scsi_host/host*/nvme_info
    
    NVME Initiator Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x10000090fae0b5f5 WWNN x20000090fae0b5f5 DID x010f00 ONLINE
    NVME RPORT       WWPN x204700a098cbcac6 WWNN x204600a098cbcac6 DID x01050e TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 000000000e Cmpl 000000000e Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 00000000000008ea Issue 00000000000008ec OutIO 0000000000000002
        abort 00000000 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00000000 Err 00000000
    
    # nvme discover --transport fc \
        --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 \
        --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5
    
    Discovery Log Number of Records 2, Generation counter 49530
    =====Discovery Log Entry 0======
    trtype:  fc
    adrfam:  fibre-channel
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: none
    subnqn: nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1
    traddr:  nn-0x204600a098cbcac6:pn-0x204700a098cbcac6
    
    Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr.
    Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host-traddr.
  6. Connect to the NVMe target using the nvme-cli:
    # nvme connect --transport fc \
        --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 \
        --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 \
        -n nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1
    Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr.
    Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host-traddr.
    Replace nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 with the subnqn.
  7. Verify that the NVMe devices are currently connected (an additional check is sketched after this procedure):
    # nvme list
    Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
    ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
    /dev/nvme0n1     80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller                  1         107.37  GB / 107.37  GB      4 KiB +  0 B   FFFFFFFF
    # lsblk |grep nvme
      nvme0n1                     259:0    0   100G  0 disk
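
As an optional further check, you can confirm that the lpfc driver was reloaded with NVMe support and inspect the Fibre Channel path of the connected subsystem. The following is a minimal sketch; the exact output depends on your lpfc driver and nvme-cli versions:

    # cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
    3
    # nvme list-subsys

The first command should print 3, the value set in the /etc/modprobe.d/lpfc.conf file. The nvme list-subsys command lists each connected subsystem NQN together with its fc transport address and the state of the path.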
    

29.2.2. Configuring the NVMe initiator for QLogic adapters

Use this procedure to configure the NVMe initiator on a client with QLogic adapters by using the NVMe management command-line interface (nvme-cli) tool.
  1. Install the nvme-cli tool:
    # yum install nvme-cli
    This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host. To generate a new hostnqn:
    # nvme gen-hostnqn
  2. Remove and reload the qla2xxx module:
    # rmmod qla2xxx
    # modprobe qla2xxx
  3. Find the WWNN and WWPN of the local and remote ports:
    # dmesg |grep traddr
    [    6.139862] qla2xxx [0000:04:00.0]-ffff:0: register_localport: host-traddr=nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 on portID:10700
    [    6.241762] qla2xxx [0000:04:00.0]-2102:0: qla_nvme_register_remote: traddr=nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 PortID:01050d
    
    Using this host-traddr and traddr, find the subsystem NQN:
    # nvme discover --transport fc \
        --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 \
        --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62
    
    Discovery Log Number of Records 2, Generation counter 49530
    =====Discovery Log Entry 0======
    trtype:  fc
    adrfam:  fibre-channel
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: none
    subnqn:  nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468
    traddr:  nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6
    
    Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr.
    Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host-traddr.
  4. Connect to the NVMe target using the nvme-cli tool:
    # nvme connect --transport fc \
        --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 \
        --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 \
        -n nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468
    Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr.
    Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host-traddr.
    Replace nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 with the subnqn.
  5. Verify that the NVMe devices are currently connected (a sketch for making the connection persistent follows this procedure):
    # nvme list
    Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
    ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
    /dev/nvme0n1     80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller                  1         107.37  GB / 107.37  GB      4 KiB +  0 B   FFFFFFFF
    # lsblk |grep nvme
    nvme0n1                     259:0    0   100G  0 disk
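
The nvme connect command does not persist across reboots. One way to re-establish the connection automatically, sketched below with the example addresses from this procedure, is to record the discovery parameters in the /etc/nvme/discovery.conf file and run the nvme connect-all command, which reads that file when it is called without arguments:

    # cat /etc/nvme/discovery.conf
    --transport=fc --traddr=nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 --host-traddr=nn-0x20000024ff19bb62:pn-0x21000024ff19bb62
    # nvme connect-all

Replace the traddr and host-traddr values with the ones reported for your fabric. Depending on your nvme-cli version, a systemd service may already run nvme connect-all at boot; otherwise, run it from your own unit or script.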
    
