5.6. Configuring the NVIDIA Network Operator

The NVIDIA Network Operator manages NVIDIA networking resources and networking-related components, such as drivers and device plugins, to enable NVIDIA GPUDirect RDMA workloads.

Prerequisites

  • You have installed the NVIDIA Network Operator.

Procedure

  1. Verify that the Network Operator is installed and running by confirming that the controller is running in the nvidia-network-operator namespace:

    $ oc get pods -n nvidia-network-operator

    Example output

    NAME                                                          READY   STATUS             RESTARTS        AGE
    nvidia-network-operator-controller-manager-6f7d6956cd-fw5wg   1/1     Running            0                5m
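
    If the controller pod is not listed, you can also confirm the Operator's install status through OLM. This is a supplementary check and assumes the Operator was installed from OperatorHub into the nvidia-network-operator namespace:

    $ oc get csv -n nvidia-network-operator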

  2. While the Operator is running, create a NicClusterPolicy custom resource file and save it as network-sharedrdma-nic-cluster-policy.yaml, the file name used in the next step. The device you choose depends on your system configuration. In this example, the InfiniBand interface ibs2f0 is hard-coded and is used as the shared NVIDIA GPUDirect RDMA device.

    apiVersion: mellanox.com/v1alpha1
    kind: NicClusterPolicy
    metadata:
      name: nic-cluster-policy
    spec:
      nicFeatureDiscovery:
        image: nic-feature-discovery
        repository: ghcr.io/mellanox
        version: v0.0.1
      docaTelemetryService:
        image: doca_telemetry
        repository: nvcr.io/nvidia/doca
        version: 1.16.5-doca2.6.0-host
      rdmaSharedDevicePlugin:
        config: |
          {
            "configList": [
              {
                "resourceName": "rdma_shared_device_ib",
                "rdmaHcaMax": 63,
                "selectors": {
                  "ifNames": ["ibs2f0"]
                }
              },
              {
                "resourceName": "rdma_shared_device_eth",
                "rdmaHcaMax": 63,
                "selectors": {
                  "ifNames": ["ens8f0np0"]
                }
              }
            ]
          }
        image: k8s-rdma-shared-dev-plugin
        repository: ghcr.io/mellanox
        version: v1.5.1
      secondaryNetwork:
        ipoib:
          image: ipoib-cni
          repository: ghcr.io/mellanox
          version: v1.2.0
      nvIpam:
        enableWebhook: false
        image: nvidia-k8s-ipam
        repository: ghcr.io/mellanox
        version: v0.2.0
      ofedDriver:
        readinessProbe:
          initialDelaySeconds: 10
          periodSeconds: 30
        forcePrecompiled: false
        terminationGracePeriodSeconds: 300
        livenessProbe:
          initialDelaySeconds: 30
          periodSeconds: 30
        upgradePolicy:
          autoUpgrade: true
          drain:
            deleteEmptyDir: true
            enable: true
            force: true
            timeoutSeconds: 300
            podSelector: ''
          maxParallelUpgrades: 1
          safeLoad: false
          waitForCompletion:
            timeoutSeconds: 0
        startupProbe:
          initialDelaySeconds: 10
          periodSeconds: 20
        image: doca-driver
        repository: nvcr.io/nvidia/mellanox
        version: 24.10-0.7.0.0-0
        env:
        - name: UNLOAD_STORAGE_MODULES
          value: "true"
        - name: RESTORE_DRIVER_ON_POD_TERMINATION
          value: "true"
        - name: CREATE_IFNAMES_UDEV
          value: "true"
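
    After the policy is applied and the device plugin pods start, the devices defined under rdmaSharedDevicePlugin are advertised as extended node resources named rdma/rdma_shared_device_ib and rdma/rdma_shared_device_eth, each allowing up to 63 allocations per node (rdmaHcaMax). As a quick sketch of how to confirm this, with <node-name> as a placeholder for one of your worker nodes:

    $ oc describe node <node-name> | grep -i rdma
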
  3. Create the NicClusterPolicy custom resource in the cluster by running the following command:

    $ oc create -f network-sharedrdma-nic-cluster-policy.yaml

    Example output

    nicclusterpolicy.mellanox.com/nic-cluster-policy created

  4. Validate the NicClusterPolicy by running the following command and confirming that the DOCA/MOFED driver and related pods are running:

    $ oc get pods -n nvidia-network-operator

    Example output

    NAME                                                          READY   STATUS    RESTARTS   AGE
    doca-telemetry-service-hwj65                                  1/1     Running   2          160m
    kube-ipoib-cni-ds-fsn8g                                       1/1     Running   2          160m
    mofed-rhcos4.16-9b5ddf4c6-ds-ct2h5                            2/2     Running   4          160m
    nic-feature-discovery-ds-dtksz                                1/1     Running   2          160m
    nv-ipam-controller-854585f594-c5jpp                           1/1     Running   2          160m
    nv-ipam-controller-854585f594-xrnp5                           1/1     Running   2          160m
    nv-ipam-node-xqttl                                            1/1     Running   2          160m
    nvidia-network-operator-controller-manager-5798b564cd-5cq99   1/1     Running   2         5d23h
    rdma-shared-dp-ds-p9vvg                                       1/1     Running   0          85m
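
    The driver pod can take several minutes to build and load the driver on first rollout. To watch the pods until they all report Running, you can use the standard watch flag:

    $ oc get pods -n nvidia-network-operator -w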

  5. Open an rsh session into the mofed container and check the status by running the following commands:

    $ MOFED_POD=$(oc get pods -n nvidia-network-operator -o name | grep mofed)
    $ oc rsh -n nvidia-network-operator -c mofed-container ${MOFED_POD}
    sh-5.1# ofed_info -s

    Example output

    OFED-internal-24.07-0.6.1:

    sh-5.1# ibdev2netdev -v

    Example output

    0000:0d:00.0 mlx5_0 (MT41692 - 900-9D3B4-00EN-EA0) BlueField-3 E-series SuperNIC 400GbE/NDR single port QSFP112, PCIe Gen5.0 x16 FHHL, Crypto Enabled, 16GB DDR5, BMC, Tall Bracket                                                       fw 32.42.1000 port 1 (ACTIVE) ==> ibs2f0 (Up)
    0000:a0:00.0 mlx5_1 (MT41692 - 900-9D3B4-00EN-EA0) BlueField-3 E-series SuperNIC 400GbE/NDR single port QSFP112, PCIe Gen5.0 x16 FHHL, Crypto Enabled, 16GB DDR5, BMC, Tall Bracket                                                       fw 32.42.1000 port 1 (ACTIVE) ==> ens8f0np0 (Up)
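
    The ibdev2netdev output confirms the mapping between the mlx5 devices and the ibs2f0 and ens8f0np0 interfaces referenced in the NicClusterPolicy. Optionally, you can also inspect port state and link layer with the ibstat utility, which ships in the same container, for example:

    sh-5.1# ibstat mlx5_0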

  6. Create an IPoIBNetwork custom resource file and save it as ipoib-network.yaml. The whereabouts IPAM configuration assigns each pod a cluster-unique address from range, skipping the blocks listed in exclude:

    apiVersion: mellanox.com/v1alpha1
    kind: IPoIBNetwork
    metadata:
      name: example-ipoibnetwork
    spec:
      ipam: |
        {
          "type": "whereabouts",
          "range": "192.168.6.225/28",
          "exclude": [
           "192.168.6.229/30",
           "192.168.6.236/32"
          ]
        }
      master: ibs2f0
      networkNamespace: default
  7. Create the IPoIBNetwork resource in the cluster by running the following command:

    $ oc create -f ipoib-network.yaml

    Example output

    ipoibnetwork.mellanox.com/example-ipoibnetwork created
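
    To consume this network from a workload, attach a pod to it with the standard Multus annotation and request one of the shared RDMA devices defined in the NicClusterPolicy. The following is a minimal sketch, not part of the procedure: the image name is a placeholder, and the IPC_LOCK capability is commonly required for RDMA memory registration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: rdma-ib-workload
      namespace: default
      annotations:
        k8s.v1.cni.cncf.io/networks: example-ipoibnetwork
    spec:
      containers:
      - name: app
        image: <rdma-workload-image>
        securityContext:
          capabilities:
            add: ["IPC_LOCK"]
        resources:
          requests:
            rdma/rdma_shared_device_ib: 1
          limits:
            rdma/rdma_shared_device_ib: 1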

  8. Create a MacvlanNetwork custom resource file for the other interface and save it as macvlan-network.yaml:

    apiVersion: mellanox.com/v1alpha1
    kind: MacvlanNetwork
    metadata:
      name: rdmashared-net
    spec:
      networkNamespace: default
      master: ens8f0np0
      mode: bridge
      mtu: 1500
      ipam: '{"type": "whereabouts", "range": "192.168.2.0/24", "gateway": "192.168.2.1"}'
  9. Create the MacvlanNetwork resource in the cluster by running the following command:

    $ oc create -f macvlan-network.yaml

    Example output

    macvlannetwork.mellanox.com/rdmashared-net created
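
    Both the IPoIBNetwork and MacvlanNetwork controllers render NetworkAttachmentDefinition objects in the namespace given by networkNamespace. As a final check, you can list them in the default namespace used in these examples:

    $ oc get network-attachment-definitions -n default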
