11.8. Using virtual functions (VFs) with DPDK and RDMA modes
You can use Single Root I/O Virtualization (SR-IOV) network hardware with the Data Plane Development Kit (DPDK) and with remote direct memory access (RDMA).
The Data Plane Development Kit (DPDK) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Remote Direct Memory Access (RDMA) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
11.8.2. Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- You must have installed the SR-IOV Network Operator.
You can use a virtual function in DPDK mode with an Intel NIC.

Procedure
Create the following SriovNetworkNodePolicy object, and then save the YAML in the intel-dpdk-node-policy.yaml file.
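For example, such a policy might look like the following sketch. The resource name intelnics, the node selector, priority, numVfs, and nicSelector values are illustrative placeholders; adapt them to your hardware. The comment marked 1 corresponds to the callout that follows.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: intel-dpdk-node-policy
  namespace: openshift-sriov-network-operator
spec:
  resourceName: intelnics                # example resource name; pods request it as openshift.io/intelnics
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99                           # example priority
  numVfs: 10                             # example number of virtual functions
  nicSelector:                           # placeholder values; select your own physical function
    vendor: "8086"
    deviceID: "158b"
    pfNames: ["ens785f0"]
  deviceType: vfio-pci                   # 1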
1. Set the driver type for the virtual functions to vfio-pci.
Note

Refer to the "Configuring SR-IOV network devices" section for a detailed explanation of each option in SriovNetworkNodePolicy.

When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes and, in some cases, reboot them. It can take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand.

After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace change to a Running status.

Create the SriovNetworkNodePolicy object by running the following command:

$ oc create -f intel-dpdk-node-policy.yaml
Create the following SriovNetwork object, and then save the YAML in the intel-dpdk-network.yaml file.
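For example, the SriovNetwork object might look like the following sketch. The <target_namespace> placeholder, the vlan value, and the resource name intelnics are illustrative; resourceName must match the name used in the node policy. The comment marked 1 corresponds to the callout that follows.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: intel-dpdk-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: <target_namespace>   # namespace where the network attachment is created
  ipam: "{}"                             # 1
  vlan: 0                                # example VLAN ID
  resourceName: intelnics                # must match the resourceName in the node policy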
1. Specify an empty object "{}" for the ipam CNI plug-in. DPDK works in userspace mode and does not require an IP address.
Note

Refer to the "Configuring SR-IOV additional network" section for a detailed explanation of each option in SriovNetwork.

Create the SriovNetwork object by running the following command:

$ oc create -f intel-dpdk-network.yaml
Create the following Pod spec, and then save the YAML in the intel-dpdk-pod.yaml file.
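For example, the Pod spec might look like the following sketch. The pod and container names, the <target_namespace> and <DPDK_image> placeholders, the openshift.io/intelnics resource, the k8s.v1.cni.cncf.io/networks annotation value, and the CPU, memory, and hugepage quantities are illustrative; the comments marked 1 through 7 correspond to the callouts that follow.

apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  namespace: <target_namespace>                      # 1
  annotations:
    k8s.v1.cni.cncf.io/networks: intel-dpdk-network  # attach the SriovNetwork created earlier
spec:
  containers:
  - name: testpmd
    image: <DPDK_image>                              # 2
    securityContext:
      runAsUser: 0
      capabilities:
        add: ["IPC_LOCK"]                            # 3
    volumeMounts:
    - mountPath: /dev/hugepages                      # 4
      name: hugepage
    resources:
      limits:
        openshift.io/intelnics: "1"                  # 5
        memory: "1Gi"
        cpu: "4"                                     # 6
        hugepages-1Gi: "4Gi"                         # 7
      requests:
        openshift.io/intelnics: "1"
        memory: "1Gi"
        cpu: "4"
        hugepages-1Gi: "4Gi"
    command: ["sleep", "infinity"]
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages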
1. Specify the same target_namespace where the SriovNetwork object intel-dpdk-network is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
2. Specify the DPDK image, which includes your application and the DPDK library used by the application.
3. Specify the IPC_LOCK capability, which is required by the application to allocate hugepage memory inside the container.
4. Mount a hugepage volume to the DPDK pod under /dev/hugepages. The hugepage volume is backed by the emptyDir volume type with the medium Hugepages.
5. Optional: Specify the number of DPDK devices allocated to the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR.
6. Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with the Guaranteed QoS class.
7. Specify the hugepage size, hugepages-1Gi or hugepages-2Mi, and the quantity of hugepages to allocate to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to the nodes. For example, adding the kernel arguments default_hugepagesz=1GB, hugepagesz=1G, and hugepages=16 results in 16*1Gi hugepages being allocated during system boot.
Create the DPDK pod by running the following command:
$ oc create -f intel-dpdk-pod.yaml
You can use a virtual function in DPDK mode with a Mellanox NIC.

Procedure
Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-dpdk-node-policy.yaml file.
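For example, the policy might look like the following sketch. The resource name mlxnics, the node selector, priority, numVfs, and pfNames values are illustrative placeholders; adapt them to your hardware. The comments marked 1 through 3 correspond to the callouts that follow.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: mlx-dpdk-node-policy
  namespace: openshift-sriov-network-operator
spec:
  resourceName: mlxnics                  # example resource name; pods request it as openshift.io/mlxnics
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99                           # example priority
  numVfs: 10                             # example number of virtual functions
  nicSelector:
    vendor: "15b3"                       # Mellanox vendor ID
    deviceID: "1015"                     # 1
    pfNames: ["ens785f0"]                # placeholder physical function name
  deviceType: netdevice                  # 2
  isRdma: true                           # 3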
1. Specify the device hex code of the SR-IOV network device. The only allowed values for Mellanox cards are 1015 and 1017.
2. Set the driver type for the virtual functions to netdevice. A Mellanox SR-IOV VF can work in DPDK mode without using the vfio-pci device type. The VF device appears as a kernel network interface inside a container.
3. Enable RDMA mode. This is required by Mellanox cards to work in DPDK mode.
Note

Refer to the "Configuring SR-IOV network devices" section for a detailed explanation of each option in SriovNetworkNodePolicy.

When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes and, in some cases, reboot them. It can take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand.

After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace change to a Running status.

Create the SriovNetworkNodePolicy object by running the following command:

$ oc create -f mlx-dpdk-node-policy.yaml
Create the following SriovNetwork object, and then save the YAML in the mlx-dpdk-network.yaml file.
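For example, the SriovNetwork object might look like the following sketch. The <target_namespace> placeholder, the vlan value, the resource name mlxnics, and the host-local IPAM configuration are illustrative only; the comment marked 1 corresponds to the callout that follows.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: mlx-dpdk-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: <target_namespace>
  ipam: |-                               # 1
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "routes": [{"dst": "0.0.0.0/0"}],
      "gateway": "10.56.217.1"
    }
  vlan: 0                                # example VLAN ID
  resourceName: mlxnics                  # must match the resourceName in the node policy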
1. Specify a configuration object for the ipam CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.
Note

Refer to the "Configuring SR-IOV additional network" section for a detailed explanation of each option in SriovNetwork.

Create the SriovNetwork object by running the following command:

$ oc create -f mlx-dpdk-network.yaml
Create the following Pod spec, and then save the YAML in the mlx-dpdk-pod.yaml file.
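For example, the Pod spec might look like the following sketch. The pod and container names, the <target_namespace> and <DPDK_image> placeholders, the openshift.io/mlxnics resource, the network annotation value, and the CPU, memory, and hugepage quantities are illustrative; the comments marked 1 through 7 correspond to the callouts that follow.

apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  namespace: <target_namespace>                    # 1
  annotations:
    k8s.v1.cni.cncf.io/networks: mlx-dpdk-network  # attach the SriovNetwork created earlier
spec:
  containers:
  - name: testpmd
    image: <DPDK_image>                            # 2
    securityContext:
      runAsUser: 0
      capabilities:
        add: ["IPC_LOCK", "NET_RAW"]               # 3
    volumeMounts:
    - mountPath: /dev/hugepages                    # 4
      name: hugepage
    resources:
      limits:
        openshift.io/mlxnics: "1"                  # 5
        memory: "1Gi"
        cpu: "4"                                   # 6
        hugepages-1Gi: "4Gi"                       # 7
      requests:
        openshift.io/mlxnics: "1"
        memory: "1Gi"
        cpu: "4"
        hugepages-1Gi: "4Gi"
    command: ["sleep", "infinity"]
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages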
1. Specify the same target_namespace where the SriovNetwork object mlx-dpdk-network is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
2. Specify the DPDK image, which includes your application and the DPDK library used by the application.
3. Specify the IPC_LOCK capability, which is required by the application to allocate hugepage memory inside the container, and NET_RAW, which the application needs to access the network interface.
4. Mount the hugepage volume to the DPDK pod under /dev/hugepages. The hugepage volume is backed by the emptyDir volume type with the medium Hugepages.
5. Optional: Specify the number of DPDK devices allocated to the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR.
6. Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with the Guaranteed QoS class.
7. Specify the hugepage size, hugepages-1Gi or hugepages-2Mi, and the quantity of hugepages to allocate to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to the nodes.
Create the DPDK pod by running the following command:
$ oc create -f mlx-dpdk-pod.yaml
RDMA over Converged Ethernet (RoCE) is the only supported mode when using RDMA on OpenShift Container Platform.
You can use a virtual function in RDMA mode with a Mellanox NIC.

Procedure
Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-rdma-node-policy.yaml file.
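For example, the policy might look like the following sketch. The resource name mlxnics, the node selector, priority, numVfs, and nicSelector values are illustrative placeholders; adapt them to your hardware.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: mlx-rdma-node-policy
  namespace: openshift-sriov-network-operator
spec:
  resourceName: mlxnics                  # example resource name; pods request it as openshift.io/mlxnics
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99                           # example priority
  numVfs: 10                             # example number of virtual functions
  nicSelector:
    vendor: "15b3"                       # Mellanox vendor ID
    deviceID: "1015"                     # example Mellanox device ID
    pfNames: ["ens785f0"]                # placeholder physical function name
  deviceType: netdevice                  # VFs appear as kernel network interfaces
  isRdma: true                           # enable RDMA mode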
Note

Refer to the "Configuring SR-IOV network devices" section for a detailed explanation of each option in SriovNetworkNodePolicy.

When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes and, in some cases, reboot them. It can take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand.

After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace change to a Running status.

Create the SriovNetworkNodePolicy object by running the following command:

$ oc create -f mlx-rdma-node-policy.yaml
Create the following SriovNetwork object, and then save the YAML in the mlx-rdma-network.yaml file.
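For example, the SriovNetwork object might look like the following sketch. The <target_namespace> placeholder, the vlan value, the resource name mlxnics, and the host-local IPAM configuration are illustrative only; the comment marked 1 corresponds to the callout that follows.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: mlx-rdma-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: <target_namespace>
  ipam: |-                               # 1
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "routes": [{"dst": "0.0.0.0/0"}],
      "gateway": "10.56.217.1"
    }
  vlan: 0                                # example VLAN ID
  resourceName: mlxnics                  # must match the resourceName in the node policy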
1. Specify a configuration object for the ipam CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.
Note

Refer to the "Configuring SR-IOV additional network" section for a detailed explanation of each option in SriovNetwork.

Create the SriovNetwork object by running the following command:

$ oc create -f mlx-rdma-network.yaml
Create the following Pod spec, and then save the YAML in the mlx-rdma-pod.yaml file.
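For example, the Pod spec might look like the following sketch. The pod and container names, the <target_namespace> and <RDMA_image> placeholders, the openshift.io/mlxnics resource, the network annotation value, and the CPU, memory, and hugepage quantities are illustrative; the comments marked 1 through 6 correspond to the callouts that follow.

apiVersion: v1
kind: Pod
metadata:
  name: rdma-app
  namespace: <target_namespace>                    # 1
  annotations:
    k8s.v1.cni.cncf.io/networks: mlx-rdma-network  # attach the SriovNetwork created earlier
spec:
  containers:
  - name: rdma-app
    image: <RDMA_image>                            # 2
    securityContext:
      runAsUser: 0
      capabilities:
        add: ["IPC_LOCK"]                          # 3
    volumeMounts:
    - mountPath: /dev/hugepages                    # 4
      name: hugepage
    resources:
      limits:
        openshift.io/mlxnics: "1"
        memory: "1Gi"
        cpu: "4"                                   # 5
        hugepages-1Gi: "4Gi"                       # 6
      requests:
        openshift.io/mlxnics: "1"
        memory: "1Gi"
        cpu: "4"
        hugepages-1Gi: "4Gi"
    command: ["sleep", "infinity"]
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages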
1. Specify the same target_namespace where the SriovNetwork object mlx-rdma-network is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
2. Specify the RDMA image, which includes your application and the RDMA library used by the application.
3. Specify the IPC_LOCK capability, which is required by the application to allocate hugepage memory inside the container.
4. Mount the hugepage volume to the RDMA pod under /dev/hugepages. The hugepage volume is backed by the emptyDir volume type with the medium Hugepages.
5. Specify the number of CPUs. The RDMA pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with the Guaranteed QoS class.
6. Specify the hugepage size, hugepages-1Gi or hugepages-2Mi, and the quantity of hugepages to allocate to the RDMA pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to the nodes.
Create the RDMA pod by running the following command:
$ oc create -f mlx-rdma-pod.yaml