
Chapter 2. Configuring the rdma service


With the Remote Direct Memory Access (RDMA) protocol, you can transfer data between RDMA-enabled systems over the network directly between their main memory. The RDMA protocol provides low latency and high throughput.

To manage the supported network protocols and communication standards, you need to configure the rdma service. This includes high-speed network protocols such as RoCE and iWARP, and their software implementations Soft-RoCE and Soft-iWARP. When Red Hat Enterprise Linux detects InfiniBand, iWARP, or RoCE devices and their configuration files in the /etc/rdma/modules/* directory, the udev device manager instructs systemd to start the rdma service.

The module configuration in the /etc/rdma/modules/rdma.conf file persists across reboots. After you change the file, restart the rdma-load-modules@rdma.service service to apply the changes.
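
For example, before changing anything you can check whether udev has already started the service on a system with RDMA hardware and review the per-protocol module files that ship with rdma-core. This is a read-only check; the exact list of files depends on the installed rdma-core version:

    # ls /etc/rdma/modules/
    # systemctl status rdma-load-modules@rdma.service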

Procedure

  1. Install the rdma-core package:

    # dnf install rdma-core
  2. Edit the /etc/rdma/modules/rdma.conf file and uncomment the modules that you want to enable:

    # These modules are loaded by the system if any RDMA device is installed
    
    # iSCSI over RDMA client support
    ib_iser
    
    # iSCSI over RDMA target support
    ib_isert
    
    # SCSI RDMA Protocol target driver
    ib_srpt
    
    # User access to RDMA verbs (supports libibverbs)
    ib_uverbs
    
    # User access to RDMA connection management (supports librdmacm)
    rdma_ucm
    
    # RDS over RDMA support
    # rds_rdma
    
    # NFS over RDMA client support
    xprtrdma
    
    # NFS over RDMA server support
    svcrdma
  3. Restart the service to make the changes effective (an optional check of the loaded modules follows this procedure):

    # systemctl restart rdma-load-modules@rdma.service
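
As an optional check after the restart, you can verify that the uncommented modules are now loaded. The following example greps for two of the modules enabled in step 2; adjust the pattern to the modules that you enabled on your system:

    # lsmod | grep -E 'ib_uverbs|rdma_ucm'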

Verification

  1. Install the libibverbs-utils and infiniband-diags packages:

    # dnf install libibverbs-utils infiniband-diags
  2. List the available InfiniBand devices:

    # ibv_devices
    
        device                 node GUID
        ------              ----------------
        mlx4_0              0002c903003178f0
        mlx4_1              f4521403007bcba0
  3. Display the information of the mlx4_1 device:

    # ibv_devinfo -d mlx4_1
    
    hca_id: mlx4_1
         transport:                  InfiniBand (0)
         fw_ver:                     2.30.8000
         node_guid:                  f452:1403:007b:cba0
         sys_image_guid:             f452:1403:007b:cba3
         vendor_id:                  0x02c9
         vendor_part_id:             4099
         hw_ver:                     0x0
         board_id:                   MT_1090120019
         phys_port_cnt:              2
              port:   1
                    state:              PORT_ACTIVE (4)
                    max_mtu:            4096 (5)
                    active_mtu:         2048 (4)
                    sm_lid:             2
                    port_lid:           2
                    port_lmc:           0x01
                    link_layer:         InfiniBand
    
              port:   2
                    state:              PORT_ACTIVE (4)
                    max_mtu:            4096 (5)
                    active_mtu:         4096 (5)
                    sm_lid:             0
                    port_lid:           0
                    port_lmc:           0x00
                    link_layer:         Ethernet
  4. Display the status of the mlx4_1 device:

    # ibstat mlx4_1
    
    CA 'mlx4_1'
         CA type: MT4099
         Number of ports: 2
         Firmware version: 2.30.8000
         Hardware version: 0
         Node GUID: 0xf4521403007bcba0
         System image GUID: 0xf4521403007bcba3
         Port 1:
               State: Active
               Physical state: LinkUp
               Rate: 56
               Base lid: 2
               LMC: 1
               SM lid: 2
               Capability mask: 0x0251486a
               Port GUID: 0xf4521403007bcba1
               Link layer: InfiniBand
         Port 2:
               State: Active
               Physical state: LinkUp
               Rate: 40
               Base lid: 0
               LMC: 0
               SM lid: 0
               Capability mask: 0x04010000
               Port GUID: 0xf65214fffe7bcba2
               Link layer: Ethernet
  5. Ping an InfiniBand address by using the ibping utility, which operates as a server or a client depending on the parameters (a single-host sketch of this check follows the procedure).

    1. Start server mode -S on port number -P with the -C InfiniBand channel adapter (CA) name on the host:

      # ibping -S -C mlx4_1 -P 1
    2. Start client mode, send some packets -c on port number -P by using the -C InfiniBand channel adapter (CA) name with the -L Local Identifier (LID) on the host:

      # ibping -c 50 -C mlx4_0 -P 1 -L 2
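
If only a single host with two RDMA devices is available, a rough sketch of the same check is to run the ibping server in the background and ping it from the second device, assuming both adapters are connected to the same subnet with an active subnet manager. The device names (mlx4_1, mlx4_0) and the LID (2) are taken from the example output above; substitute the values that ibstat reports on your system:

    # ibping -S -C mlx4_1 -P 1 &
    # ibping -c 5 -C mlx4_0 -P 1 -L 2
    # kill %1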