Chapter 8. Using DPDK and RDMA
The containerized Data Plane Development Kit (DPDK) application is supported on OpenShift Container Platform. You can use Single Root I/O Virtualization (SR-IOV) network hardware with the Data Plane Development Kit (DPDK) and with remote direct memory access (RDMA).
Before you perform any tasks in the following documentation, ensure that you have installed the SR-IOV Network Operator.
8.1. Example use of a virtual function in a pod
You can run a remote direct memory access (RDMA) or a Data Plane Development Kit (DPDK) application in a pod with an SR-IOV virtual function (VF) attached.
This example shows a pod using a virtual function (VF) in RDMA mode:
Pod spec that uses RDMA mode
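The following is a minimal sketch of such a pod spec; the network name sriov-rdma-net and the <RDMA_image> placeholder are illustrative and must match the SriovNetwork object and image in your environment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rdma-app
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-rdma-net  # attach the SR-IOV VF in RDMA mode
spec:
  containers:
  - name: testpmd
    image: <RDMA_image>  # image containing your application and the RDMA libraries
    imagePullPolicy: IfNotPresent
    securityContext:
      runAsUser: 0
      capabilities:
        add: ["IPC_LOCK", "SYS_RESOURCE", "NET_RAW"]
    command: ["sleep", "infinity"]
```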
The following example shows a pod with a VF in DPDK mode:
Pod spec that uses DPDK mode
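A minimal sketch along the same lines; the network name sriov-dpdk-net and the <DPDK_image> placeholder are illustrative, and the CPU and hugepage quantities are example values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-dpdk-net  # attach the SR-IOV VF in DPDK mode
spec:
  containers:
  - name: testpmd
    image: <DPDK_image>  # image containing your application and the DPDK libraries
    securityContext:
      runAsUser: 0
      capabilities:
        add: ["IPC_LOCK", "SYS_RESOURCE", "NET_RAW"]
    volumeMounts:
    - mountPath: /mnt/huge  # hugepage mount for the DPDK memory pools
      name: hugepage
    resources:
      limits:
        memory: "1Gi"
        cpu: "2"
        hugepages-1Gi: "4Gi"
      requests:
        memory: "1Gi"
        cpu: "2"
        hugepages-1Gi: "4Gi"
    command: ["sleep", "infinity"]
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```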
8.2. Using a virtual function in DPDK mode with an Intel NIC
Prerequisites
- Install the OpenShift CLI (oc).
- Install the SR-IOV Network Operator.
- Log in as a user with cluster-admin privileges.
Procedure
Create the following SriovNetworkNodePolicy object, and then save the YAML in the intel-dpdk-node-policy.yaml file.
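A sketch of the policy; the resourceName, numVfs, and nicSelector values (Intel vendor code 8086 with an example deviceID and pfNames) are placeholders for your hardware:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: intel-dpdk-node-policy
  namespace: openshift-sriov-network-operator
spec:
  resourceName: intelnics
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  numVfs: 10
  nicSelector:
    vendor: "8086"
    deviceID: "158b"
    pfNames: ["ens785f0"]
  deviceType: vfio-pci  # 1
```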
1. Specify vfio-pci as the driver type for the virtual functions.
Note: See the "Configuring SR-IOV network devices" section for a detailed explanation of each option in SriovNetworkNodePolicy. When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes and, in some cases, reboot them. It might take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace change to a Running status.

Create the SriovNetworkNodePolicy object by running the following command:

$ oc create -f intel-dpdk-node-policy.yaml

Create the following SriovNetwork object, and then save the YAML in the intel-dpdk-network.yaml file.
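A sketch of the network definition; the host-local IPAM block, subnet, and VLAN placeholder are example values:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: intel-dpdk-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: <target_namespace>
  ipam: |-  # 1
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "routes": [{"dst": "0.0.0.0/0"}],
      "gateway": "10.56.217.1"
    }
  vlan: <vlan>
  resourceName: intelnics
```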
1. Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
NoteSee the "Configuring SR-IOV additional network" section for a detailed explanation on each option in
SriovNetwork.An optional library, app-netutil, provides several API methods for gathering network information about a container’s parent pod.
Create the
SriovNetworkobject by running the following command:oc create -f intel-dpdk-network.yaml
$ oc create -f intel-dpdk-network.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create the following
Podspec, and then save the YAML in theintel-dpdk-pod.yamlfile.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
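A sketch of the pod spec, with comments marking the callouts explained below; the image placeholder and resource quantities are example values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  namespace: <target_namespace>  # 1
  annotations:
    k8s.v1.cni.cncf.io/networks: intel-dpdk-network
spec:
  containers:
  - name: testpmd
    image: <DPDK_image>  # 2
    securityContext:
      runAsUser: 0
      capabilities:
        add: ["IPC_LOCK", "SYS_RESOURCE", "NET_RAW"]  # 3
    volumeMounts:
    - mountPath: /mnt/huge  # 4
      name: hugepage
    resources:
      limits:
        openshift.io/intelnics: "1"  # 5
        memory: "1Gi"
        cpu: "4"  # 6
        hugepages-1Gi: "4Gi"  # 7
      requests:
        openshift.io/intelnics: "1"
        memory: "1Gi"
        cpu: "4"
        hugepages-1Gi: "4Gi"
    command: ["sleep", "infinity"]
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```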
1. Specify the same target_namespace where the SriovNetwork object intel-dpdk-network is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
2. Specify the DPDK image that includes your application and the DPDK library used by the application.
3. Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access.
4. Mount a hugepage volume to the DPDK pod under /mnt/huge. The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages.
5. Optional: Specify the number of DPDK devices allocated to the DPDK pod. This resource request and limit, if not explicitly specified, is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR.
6. Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with Guaranteed QoS.
7. Specify the hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes. For example, adding the kernel arguments default_hugepagesz=1GB, hugepagesz=1G, and hugepages=16 results in 16 1Gi hugepages being allocated during system boot.
Create the DPDK pod by running the following command:
$ oc create -f intel-dpdk-pod.yaml
8.3. Using a virtual function in DPDK mode with a Mellanox NIC
You can create a network node policy and a Data Plane Development Kit (DPDK) pod that uses a virtual function in DPDK mode with a Mellanox NIC.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have installed the Single Root I/O Virtualization (SR-IOV) Network Operator.
- You have logged in as a user with cluster-admin privileges.
Procedure
Save the following SriovNetworkNodePolicy YAML configuration to an mlx-dpdk-node-policy.yaml file:
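A sketch of the policy, with comments marking the callouts explained below; the resourceName, numVfs, deviceID, and pfNames are example values (vendor code 15b3 identifies Mellanox):

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: mlx-dpdk-node-policy
  namespace: openshift-sriov-network-operator
spec:
  resourceName: mlxnics
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  numVfs: 10
  nicSelector:
    vendor: "15b3"
    deviceID: "1015"  # 1
    pfNames: ["ens785f0"]
  deviceType: netdevice  # 2
  isRdma: true  # 3
```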
1. Specify the device hex code of the SR-IOV network device.
2. Specify netdevice as the driver type for the virtual functions. A Mellanox SR-IOV Virtual Function (VF) can work in DPDK mode without using the vfio-pci device type. The VF device appears as a kernel network interface inside a container.
3. Enable Remote Direct Memory Access (RDMA) mode. This is required for Mellanox cards to work in DPDK mode.
Note: See Configuring an SR-IOV network device for a detailed explanation of each option in the SriovNetworkNodePolicy object. When applying the configuration specified in an SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes and, in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace change to a Running status.

Create the SriovNetworkNodePolicy object by running the following command:

$ oc create -f mlx-dpdk-node-policy.yaml

Save the following SriovNetwork YAML configuration to an mlx-dpdk-network.yaml file:
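A sketch of the network definition; the host-local IPAM block and subnet are example values:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: mlx-dpdk-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: <target_namespace>
  ipam: |-  # 1
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "routes": [{"dst": "0.0.0.0/0"}],
      "gateway": "10.56.217.1"
    }
  resourceName: mlxnics
```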
1. Specify a configuration object for the IP Address Management (IPAM) Container Network Interface (CNI) plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
Note: See Configuring an SR-IOV network device for a detailed explanation of each option in the SriovNetwork object. The optional app-netutil library provides several API methods for gathering network information about the parent pod of a container.

Create the SriovNetwork object by running the following command:

$ oc create -f mlx-dpdk-network.yaml

Save the following Pod YAML configuration to an mlx-dpdk-pod.yaml file:
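A sketch of the pod spec, with comments marking the callouts explained below; the image placeholder and resource quantities are example values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  namespace: <target_namespace>  # 1
  annotations:
    k8s.v1.cni.cncf.io/networks: mlx-dpdk-network
spec:
  containers:
  - name: testpmd
    image: <DPDK_image>  # 2
    securityContext:
      runAsUser: 0
      capabilities:
        add: ["IPC_LOCK", "SYS_RESOURCE", "NET_RAW"]  # 3
    volumeMounts:
    - mountPath: /mnt/huge  # 4
      name: hugepage
    resources:
      limits:
        openshift.io/mlxnics: "1"  # 5
        memory: "1Gi"
        cpu: "4"  # 6
        hugepages-1Gi: "4Gi"  # 7
      requests:
        openshift.io/mlxnics: "1"
        memory: "1Gi"
        cpu: "4"
        hugepages-1Gi: "4Gi"
    command: ["sleep", "infinity"]
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```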
1. Specify the same target_namespace where the SriovNetwork object mlx-dpdk-network is created. To create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
2. Specify the DPDK image that includes your application and the DPDK library used by the application.
3. Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access.
4. Mount the hugepage volume to the DPDK pod under /mnt/huge. The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages.
5. Optional: Specify the number of DPDK devices allocated for the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR.
6. Specify the number of CPUs. The DPDK pod usually requires that exclusive CPUs be allocated from the kubelet. To do this, set the CPU Manager policy to static and create a pod with Guaranteed Quality of Service (QoS).
7. Specify the hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes.
Create the DPDK pod by running the following command:
$ oc create -f mlx-dpdk-pod.yaml
8.4. Using the TAP CNI to run a rootless DPDK workload with kernel access
DPDK applications can use virtio-user as an exception path to inject certain types of packets, such as log messages, into the kernel for processing. For more information about this feature, see Virtio_user as Exception Path.
In OpenShift Container Platform version 4.14 and later, you can use non-privileged pods to run DPDK applications alongside the tap CNI plugin. To enable this functionality, you need to mount the vhost-net device by setting the needVhostNet parameter to true within the SriovNetworkNodePolicy object.
Figure 8.1. DPDK and TAP example configuration
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have installed the SR-IOV Network Operator.
- You are logged in as a user with cluster-admin privileges.
- Ensure that setsebool container_use_devices=on is set as root on all nodes.
  Note: Use the Machine Config Operator to set this SELinux boolean.
Procedure
Create a file, such as test-namespace.yaml, with content like the following example:
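A sketch of the namespace definition; the pod-security labels shown here are an assumption for allowing the workload to run in this namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
    security.openshift.io/scc.podSecurityLabelSync: "false"
```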
Create the new Namespace object by running the following command:

$ oc apply -f test-namespace.yaml

Create a file, such as sriov-node-network-policy.yaml, with content like the following example:
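A sketch of the policy, with comments marking the callouts explained below; the deviceID, numVfs, and resourceName are example values:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriovnic
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice  # 1
  isRdma: true  # 2
  needVhostNet: true  # 3
  nicSelector:
    vendor: "15b3"  # 4
    deviceID: "101b"  # 5
  numVfs: 10
  priority: 99
  resourceName: sriovnic
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
```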
1. This indicates that the profile is tailored specifically for Mellanox Network Interface Controllers (NICs).
2. Setting isRdma to true is only required for a Mellanox NIC.
3. This mounts the /dev/net/tun and /dev/vhost-net devices into the container so the application can create a tap device and connect the tap device to the DPDK workload.
4. The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC.
5. The device hexadecimal code of the SR-IOV network device.
Create the SriovNetworkNodePolicy object by running the following command:

$ oc create -f sriov-node-network-policy.yaml

Create the following SriovNetwork object, and then save the YAML in the sriov-network-attachment.yaml file:
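A sketch of the network definition, assuming the test-namespace and sriovnic names used earlier in this procedure:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: test-namespace
  resourceName: sriovnic
  spoofChk: "off"
  trust: "on"
```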
Note: See the "Configuring SR-IOV additional network" section for a detailed explanation of each option in SriovNetwork. An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod.

Create the SriovNetwork object by running the following command:

$ oc create -f sriov-network-attachment.yaml

Create a file, such as tap-example.yaml, that defines a network attachment definition, with content like the following example:
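A sketch of the tap network attachment definition; the name tap-one and the selinuxcontext value are assumptions for this example:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: tap-one
  namespace: test-namespace  # 1
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "tap-one",
    "plugins": [
      {
        "type": "tap",
        "multiQueue": true,
        "selinuxcontext": "system_u:system_r:container_t:s0"
      },
      {
        "type": "tuning",
        "capabilities": { "mac": true }
      }
    ]
  }'
```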
1. Specify the same target_namespace where the SriovNetwork object is created.
Create the NetworkAttachmentDefinition object by running the following command:

$ oc apply -f tap-example.yaml

Create a file, such as dpdk-pod-rootless.yaml, with content like the following example:
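A sketch of the rootless pod, with comments marking the callouts explained below; the image placeholder, user and group IDs, node name, and resource quantities are example values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  namespace: test-namespace  # 1
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {"name": "sriov-network", "namespace": "test-namespace"},
      {"name": "tap-one", "interface": "ext0", "namespace": "test-namespace"}]'
spec:
  nodeSelector:
    kubernetes.io/hostname: "worker-0"
  securityContext:
    fsGroup: 1001  # 2
    runAsGroup: 1001  # 3
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: testpmd
    image: <DPDK_image>  # 4
    securityContext:
      capabilities:
        drop: ["ALL"]  # 5
        add:  # 6
        - IPC_LOCK
        - NET_RAW  # 7
        - NET_ADMIN
      runAsUser: 1001  # 8
      privileged: false  # 9
      allowPrivilegeEscalation: true  # 10
      runAsNonRoot: true  # 11
    volumeMounts:
    - mountPath: /mnt/huge  # 12
      name: hugepages
    resources:
      limits:
        openshift.io/sriovnic: "1"  # 13
        memory: "1Gi"
        cpu: "4"  # 14
        hugepages-1Gi: "4Gi"  # 15
      requests:
        openshift.io/sriovnic: "1"
        memory: "1Gi"
        cpu: "4"
        hugepages-1Gi: "4Gi"
    command: ["sleep", "infinity"]
  runtimeClassName: performance-cnf-performanceprofile  # 16
  volumes:
  - name: hugepages
    emptyDir:
      medium: HugePages
```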
1. Specify the same target_namespace in which the SriovNetwork object is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
2. Sets the group ownership of volume-mounted directories and of files created in those volumes.
3. Specify the primary group ID used for running the container.
4. Specify the DPDK image that contains your application and the DPDK library used by the application.
5. Removing all capabilities (ALL) from the container's securityContext means that the container has no special privileges beyond what is necessary for normal operation.
6. Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. These capabilities must also be set in the binary file by using the setcap command.
7. A Mellanox network interface controller (NIC) requires the NET_RAW capability.
8. Specify the user ID used for running the container.
9. This setting indicates that the container or containers within the pod should not be granted privileged access to the host system.
10. This setting allows a container to escalate its privileges beyond the initial non-root privileges it might have been assigned.
11. This setting ensures that the container runs with a non-root user. This helps enforce the principle of least privilege, limiting the potential impact of compromising the container and reducing the attack surface.
12. Mount a hugepage volume to the DPDK pod under /mnt/huge. The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages.
13. Optional: Specify the number of DPDK devices allocated for the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR.
14. Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with Guaranteed QoS.
15. Specify the hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes. For example, adding the kernel arguments default_hugepagesz=1GB, hugepagesz=1G, and hugepages=16 results in 16 1Gi hugepages being allocated during system boot.
16. If your performance profile is not named cnf-performanceprofile, replace that string with the correct performance profile name.
Create the DPDK pod by running the following command:
$ oc create -f dpdk-pod-rootless.yaml
8.5. Overview of achieving a specific DPDK line rate
To achieve a specific Data Plane Development Kit (DPDK) line rate, deploy a Node Tuning Operator and configure Single Root I/O Virtualization (SR-IOV). You must also tune the DPDK settings for the following resources:
- Isolated CPUs
- Hugepages
- The topology scheduler
In previous versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator.
DPDK test environment
The following diagram shows the components of a traffic-testing environment:
- Traffic generator: An application that can generate high-volume packet traffic.
- SR-IOV-supporting NIC: A network interface card compatible with SR-IOV. The card runs a number of virtual functions on a physical interface.
- Physical Function (PF): A PCI Express (PCIe) function of a network adapter that supports the SR-IOV interface.
- Virtual Function (VF): A lightweight PCIe function on a network adapter that supports SR-IOV. The VF is associated with the PCIe PF on the network adapter. The VF represents a virtualized instance of the network adapter.
- Switch: A network switch. Nodes can also be connected back-to-back.
- testpmd: An example application included with DPDK. The testpmd application can be used to test DPDK in a packet-forwarding mode. The testpmd application is also an example of how to build a fully fledged application by using the DPDK Software Development Kit (SDK).
- worker 0 and worker 1: OpenShift Container Platform nodes.
8.6. Using SR-IOV and the Node Tuning Operator to achieve a DPDK line rate
You can use the Node Tuning Operator to configure isolated CPUs, hugepages, and a topology scheduler. You can then use the Node Tuning Operator with Single Root I/O Virtualization (SR-IOV) to achieve a specific Data Plane Development Kit (DPDK) line rate.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have installed the SR-IOV Network Operator.
- You have logged in as a user with cluster-admin privileges.
- You have deployed a standalone Node Tuning Operator.
  Note: In previous versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator.
Procedure
Create a PerformanceProfile object based on the following example:
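A sketch of the profile, with comments marking the callouts explained below; the profile name, CPU ranges, node selector, and hugepage counts are example values for a dual-NUMA machine:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: cnf-performanceprofile
spec:
  globallyDisableIrqLoadBalancing: true
  cpu:
    isolated: 21-51,73-103  # 1
    reserved: 0-20,52-72
  net:
    userLevelNetworking: true  # 2
  hugepages:
    defaultHugepagesSize: 1G  # 3
    pages:
    - count: 32
      size: 1G
  numa:
    topologyPolicy: "single-numa-node"
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
```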
1. If hyperthreading is enabled on the system, allocate the relevant symbolic links to the isolated and reserved CPU groups. If the system contains multiple non-uniform memory access (NUMA) nodes, allocate CPUs from both NUMA nodes to both groups. You can also use the Performance Profile Creator for this task. For more information, see Creating a performance profile.
2. You can also specify a list of devices that will have their queues set to the reserved CPU count. For more information, see Reducing NIC queues using the Node Tuning Operator.
3. Allocate the number and size of hugepages needed. You can specify the NUMA configuration for the hugepages. By default, the system allocates an even number to every NUMA node on the system. If needed, you can request the use of a realtime kernel for the nodes. See Provisioning a worker with real-time capabilities for more information.
Save the YAML file as mlx-dpdk-perfprofile-policy.yaml. Apply the performance profile by running the following command:

$ oc create -f mlx-dpdk-perfprofile-policy.yaml
8.6.1. DPDK library for use with container applications
An optional library, app-netutil, provides several API methods for gathering network information about a pod from within a container running within that pod.
This library can assist with integrating SR-IOV virtual functions (VFs) in Data Plane Development Kit (DPDK) mode into the container. The library provides both a Golang API and a C API.
Currently there are three API methods implemented:
- GetCPUInfo(): This function determines which CPUs are available to the container and returns the list.
- GetHugepages(): This function determines the amount of hugepage memory requested in the Pod spec for each container and returns the values.
- GetInterfaces(): This function determines the set of interfaces in the container and returns the list. The return value includes the interface type and type-specific data for each interface.
The repository for the library includes a sample Dockerfile to build a container image, dpdk-app-centos. The container image can run one of the following DPDK sample applications, depending on an environment variable in the pod specification: l2fwd, l3fwd, or testpmd. The container image provides an example of integrating the app-netutil library into the container image itself. The library can also integrate into an init container. The init container can collect the required data and pass the data to an existing DPDK workload.
8.6.2. Example SR-IOV Network Operator for virtual functions
You can use the Single Root I/O Virtualization (SR-IOV) Network Operator to allocate and configure Virtual Functions (VFs) from SR-IOV-supporting Physical Function NICs on the nodes.
For more information on deploying the Operator, see Installing the SR-IOV Network Operator. For more information on configuring an SR-IOV network device, see Configuring an SR-IOV network device.
There are some differences between running Data Plane Development Kit (DPDK) workloads on Intel VFs and Mellanox VFs. This section provides object configuration examples for both VF types. The following is an example of an sriovNetworkNodePolicy object used to run DPDK applications on Intel NICs:
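A sketch of the Intel policy; the pfNames, numVfs, and resourceName are example values:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: dpdk-nic-1
  namespace: openshift-sriov-network-operator
spec:
  deviceType: vfio-pci  # Intel VFs use the vfio-pci driver for DPDK
  needVhostNet: true
  nicSelector:
    pfNames: ["ens3f0"]
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
  numVfs: 10
  priority: 99
  resourceName: dpdk_nic_1
```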
The following is an example of an sriovNetworkNodePolicy object for Mellanox NICs:
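A sketch of the Mellanox policy; the rootDevices PCI address, numVfs, and resourceName are example values:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: dpdk-nic-1
  namespace: openshift-sriov-network-operator
spec:
  deviceType: netdevice  # Mellanox VFs stay on the kernel driver
  isRdma: true           # RDMA mode is required for Mellanox DPDK
  nicSelector:
    rootDevices:
    - "0000:5e:00.1"
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
  numVfs: 5
  priority: 99
  resourceName: dpdk_nic_1
```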
8.6.3. Example SR-IOV network operator
The following is an example definition of an sriovNetwork object. In this case, Intel and Mellanox configurations are identical:
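A sketch of the network definition, with comments marking the callouts explained below; the host-local IPAM subnet and the dpdk-test namespace are example values:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: dpdk-network-1
  namespace: openshift-sriov-network-operator
spec:
  ipam: '{"type": "host-local", "ranges": [[{"subnet": "10.0.1.0/24"}]]}'  # 1
  networkNamespace: dpdk-test  # 2
  spoofChk: "off"
  trust: "on"
  resourceName: dpdk_nic_1  # 3
```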
1. You can use a different IP Address Management (IPAM) implementation, such as Whereabouts. For more information, see Dynamic IP address assignment configuration with Whereabouts.
2. You must request the networkNamespace where the network attachment definition will be created. You must create the sriovNetwork CR under the openshift-sriov-network-operator namespace.
3. The resourceName value must match that of the resourceName created under the sriovNetworkNodePolicy.
8.6.4. Example DPDK base workload
The following is an example of a Data Plane Development Kit (DPDK) container:
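A sketch of the workload pod, with comments marking the callouts explained below; the two network names, the image placeholder, and the resource quantities are example values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpmd
  namespace: dpdk-test
  labels:
    app: dpdk
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {"name": "dpdk-network-1", "namespace": "dpdk-test"},
      {"name": "dpdk-network-2", "namespace": "dpdk-test"}]'  # 1
    irq-load-balancing.crio.io: "disable"  # 2
    cpu-load-balancing.crio.io: "disable"
    cpu-quota.crio.io: "disable"
spec:
  runtimeClassName: performance-performance  # 3
  containers:
  - name: dpdk
    image: <DPDK_image>
    command: ["/bin/bash", "-c", "sleep INF"]
    resources:  # 4
      limits:
        cpu: "16"
        hugepages-1Gi: 8Gi
        memory: 2Gi
      requests:
        cpu: "16"
        hugepages-1Gi: 8Gi
        memory: 2Gi
    securityContext:
      capabilities:
        add: ["IPC_LOCK", "SYS_RESOURCE", "NET_RAW", "NET_ADMIN"]
      runAsUser: 0
    volumeMounts:
    - mountPath: /mnt/huge
      name: hugepages
  terminationGracePeriodSeconds: 5
  volumes:
  - name: hugepages
    emptyDir:
      medium: HugePages
```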
1. Request the SR-IOV networks you need. Resources for the devices are injected automatically.
2. Disable CPU and IRQ load balancing. See Disabling interrupt processing for individual pods for more information.
3. Set the runtimeClass to performance-performance. Do not set the runtimeClass to HostNetwork or privileged.
4. Request an equal number of resources for requests and limits to start the pod with Guaranteed Quality of Service (QoS).
Do not start the pod with SLEEP and then exec into the pod to start the testpmd or the DPDK workload. This can add additional interrupts as the exec process is not pinned to any CPU.
8.6.5. Example testpmd script
The following is an example script for running testpmd:
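A sketch of such a script; it assumes two SriovNetwork attachments whose VF PCI addresses are exposed through injected PCIDEVICE_* environment variables (the variable names derive from the resource name, here dpdk_nic_1 and dpdk_nic_2), a cgroup v1 cpuset path, and example peer MAC addresses:

```bash
#!/bin/bash
set -ex
# Read the CPUs assigned to this container by the static CPU Manager policy.
# Note: this path applies to cgroup v1; it differs under cgroup v2.
export CPU=$(cat /sys/fs/cgroup/cpuset/cpuset.cpus)
echo "${CPU}"

# The PCIDEVICE_* variables are injected by the SR-IOV network resource
# injector and contain the VF PCI addresses allocated to the pod.
dpdk-testpmd -l "${CPU}" \
  -a "${PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_1}" \
  -a "${PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_2}" \
  -n 4 -- -i --nb-cores=15 --rxd=4096 --txd=4096 --rxq=7 --txq=7 \
  --forward-mode=mac \
  --eth-peer=0,50:00:00:00:00:01 --eth-peer=1,50:00:00:00:00:02
```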
This example uses two different sriovNetwork CRs. The environment variable contains the Virtual Function (VF) PCI address that was allocated for the pod. If you use the same network in the pod definition, you must split the pciAddress. It is important to configure the correct MAC addresses of the traffic generator. This example uses custom MAC addresses.
8.7. Using a virtual function in RDMA mode with a Mellanox NIC
RDMA over Converged Ethernet (RoCE) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
RDMA over Converged Ethernet (RoCE) is the only supported mode when using RDMA on OpenShift Container Platform.
Prerequisites
- Install the OpenShift CLI (oc).
- Install the SR-IOV Network Operator.
- Log in as a user with cluster-admin privileges.
Procedure
Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-rdma-node-policy.yaml file.
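A sketch of the policy; the resourceName, numVfs, deviceID, and pfNames are example values (vendor code 15b3 identifies Mellanox):

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: mlx-rdma-node-policy
  namespace: openshift-sriov-network-operator
spec:
  resourceName: mlxnics
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 99
  numVfs: 10
  nicSelector:
    vendor: "15b3"
    deviceID: "1015"
    pfNames: ["ens785f0"]
  deviceType: netdevice
  isRdma: true  # required for RDMA mode
```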
Note: See the "Configuring SR-IOV network devices" section for a detailed explanation of each option in SriovNetworkNodePolicy. When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes and, in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace change to a Running status.

Create the SriovNetworkNodePolicy object by running the following command:

$ oc create -f mlx-rdma-node-policy.yaml

Create the following SriovNetwork object, and then save the YAML in the mlx-rdma-network.yaml file.
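A sketch of the network definition; the host-local IPAM block and subnet are example values:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: mlx-rdma-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: <target_namespace>
  ipam: |-  # 1
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "routes": [{"dst": "0.0.0.0/0"}],
      "gateway": "10.56.217.1"
    }
  resourceName: mlxnics
```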
1. Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
NoteSee the "Configuring SR-IOV additional network" section for a detailed explanation on each option in
SriovNetwork.An optional library, app-netutil, provides several API methods for gathering network information about a container’s parent pod.
Create the
SriovNetworkNodePolicyobject by running the following command:oc create -f mlx-rdma-network.yaml
$ oc create -f mlx-rdma-network.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create the following
Podspec, and then save the YAML in themlx-rdma-pod.yamlfile.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
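A sketch of the pod spec, with comments marking the callouts explained below; the image placeholder and resource quantities are example values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rdma-app
  namespace: <target_namespace>  # 1
  annotations:
    k8s.v1.cni.cncf.io/networks: mlx-rdma-network
spec:
  containers:
  - name: testpmd
    image: <RDMA_image>  # 2
    securityContext:
      runAsUser: 0
      capabilities:
        add: ["IPC_LOCK", "SYS_RESOURCE", "NET_RAW"]  # 3
    volumeMounts:
    - mountPath: /mnt/huge  # 4
      name: hugepage
    resources:
      limits:
        memory: "1Gi"
        cpu: "4"  # 5
        hugepages-1Gi: "4Gi"  # 6
      requests:
        memory: "1Gi"
        cpu: "4"
        hugepages-1Gi: "4Gi"
    command: ["sleep", "infinity"]
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```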
1. Specify the same target_namespace where the SriovNetwork object mlx-rdma-network is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object.
2. Specify the RDMA image that includes your application and the RDMA library used by the application.
3. Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access.
4. Mount the hugepage volume to the RDMA pod under /mnt/huge. The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages.
5. Specify the number of CPUs. The RDMA pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting the CPU Manager policy to static and creating a pod with Guaranteed QoS.
6. Specify the hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the RDMA pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to nodes.
Create the RDMA pod by running the following command:
$ oc create -f mlx-rdma-pod.yaml
8.8. A test pod template for clusters that use OVS-DPDK on OpenStack
The following testpmd pod demonstrates container creation with huge pages, reserved CPUs, and the SR-IOV port.
An example testpmd pod
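A sketch of such a pod; the namespace, image tag, resource name openshift.io/dpdk1, and runtime class name are example values for your environment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpmd-dpdk
  namespace: mynamespace
  annotations:
    cpu-load-balancing.crio.io: "disable"
    cpu-quota.crio.io: "disable"
spec:
  containers:
  - name: testpmd
    command: ["sleep", "99999"]
    image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9
    securityContext:
      capabilities:
        add: ["IPC_LOCK", "SYS_ADMIN"]
      privileged: true
      runAsUser: 0
    resources:
      requests:
        memory: 1000Mi
        hugepages-1Gi: 1Gi
        cpu: "2"
        openshift.io/dpdk1: 1
      limits:
        hugepages-1Gi: 1Gi
        cpu: "2"
        memory: 1000Mi
        openshift.io/dpdk1: 1
    volumeMounts:
    - mountPath: /mnt/huge
      name: hugepage
      readOnly: false
  runtimeClassName: performance-cnf-performanceprofile
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```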