Chapter 17. Optimizing data plane performance with the Intel vRAN Dedicated Accelerator ACC100
17.1. Understanding the vRAN Dedicated Accelerator ACC100
Hardware accelerator cards from Intel accelerate 4G/LTE and 5G Virtualized Radio Access Networks (vRAN) workloads. This in turn increases the overall compute capacity of a commercial, off-the-shelf platform.
The vRAN Dedicated Accelerator ACC100, based on Intel eASIC technology, is designed to offload and accelerate the computing-intensive process of forward error correction (FEC) for 4G/LTE and 5G technology, freeing up processing power. Intel eASIC devices are structured ASICs, an intermediate technology between FPGAs and standard application-specific integrated circuits (ASICs).
Intel vRAN Dedicated Accelerator ACC100 support on OpenShift Container Platform uses one Operator:
- OpenNESS Operator for Wireless FEC Accelerators
17.2. Installing the OpenNESS SR-IOV Operator for Wireless FEC Accelerators
The role of the OpenNESS Operator for Intel Wireless forward error correction (FEC) Accelerator is to orchestrate and manage the devices exposed by a range of Intel vRAN FEC acceleration hardware within the OpenShift Container Platform cluster.
One of the most compute-intensive 4G/LTE and 5G workloads is RAN layer 1 (L1) FEC. FEC resolves data transmission errors over unreliable or noisy communication channels. FEC technology detects and corrects a limited number of errors in 4G/LTE or 5G data without the need for retransmission.
The FEC device provided by the Intel vRAN Dedicated Accelerator ACC100 supports the vRAN use case.
The OpenNESS SR-IOV Operator for Wireless FEC Accelerators creates virtual functions (VFs) for the FEC device, binds them to the appropriate drivers, and configures the VF queues for functionality in a 4G/LTE or 5G deployment.
As a cluster administrator, you can install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the OpenShift Container Platform CLI or the web console.
17.2.1. Installing the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the CLI
As a cluster administrator, you can install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the CLI.
Prerequisites
- A cluster installed on bare-metal hardware.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create a namespace for the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by completing the following actions:
Define the vran-acceleration-operators namespace by creating a file named sriov-namespace.yaml as shown in the following example:

apiVersion: v1
kind: Namespace
metadata:
  name: vran-acceleration-operators
  labels:
    openshift.io/cluster-monitoring: "true"
Create the namespace by running the following command:
$ oc create -f sriov-namespace.yaml
Install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators in the namespace you created in the previous step by creating the following objects:
Create the following OperatorGroup custom resource (CR) and save the YAML in the sriov-operatorgroup.yaml file:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: vran-operators
  namespace: vran-acceleration-operators
spec:
  targetNamespaces:
  - vran-acceleration-operators
Create the OperatorGroup CR by running the following command:

$ oc create -f sriov-operatorgroup.yaml
Run the following command to get the channel value required for the next step:

$ oc get packagemanifest sriov-fec -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'
Example output
stable
Create the following Subscription CR and save the YAML in the sriov-sub.yaml file:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-fec-subscription
  namespace: vran-acceleration-operators
spec:
  channel: "<channel>" 1
  name: sriov-fec
  source: certified-operators 2
  sourceNamespace: openshift-marketplace

- 1
- Specify the channel value that you obtained in the previous step.
- 2
- The OpenNESS SR-IOV Operator for Wireless FEC Accelerators is provided by the certified-operators catalog source.
Create the Subscription CR by running the following command:

$ oc create -f sriov-sub.yaml
Verification
Verify that the Operator is installed:
$ oc get csv -n vran-acceleration-operators -o custom-columns=Name:.metadata.name,Phase:.status.phase
Example output
Name               Phase
sriov-fec.v1.1.0   Succeeded
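If the CSV does not reach the Succeeded phase, you can inspect the Subscription and install plan for more detail. The following commands are a general OLM troubleshooting sketch and are not specific to this Operator; the output depends on your cluster:

$ oc describe subscription sriov-fec-subscription -n vran-acceleration-operators

$ oc get installplan -n vran-acceleration-operators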
17.2.2. Installing the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the web console
As a cluster administrator, you can install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the web console.
You must create the Namespace and OperatorGroup custom resources (CRs) as described in the previous section.
Procedure
Install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the OpenShift Container Platform web console:
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Choose OpenNESS SR-IOV Operator for Wireless FEC Accelerators from the list of available Operators, and then click Install.
- On the Install Operator page, select All namespaces on the cluster. Then, click Install.
Optional: Verify that the SRIOV-FEC Operator is installed successfully:
- Switch to the Operators → Installed Operators page.
- Ensure that OpenNESS SR-IOV Operator for Wireless FEC Accelerators is listed in the vran-acceleration-operators project with a Status of InstallSucceeded.
Note: During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
If the console does not indicate that the Operator is installed, perform the following troubleshooting steps:
- Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
- Go to the Workloads → Pods page and check the logs for pods in the vran-acceleration-operators project.
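You can perform equivalent checks from the CLI. As a rough sketch of the console troubleshooting steps above, the following commands list the Operator resources and pod logs in the vran-acceleration-operators project; replace <pod-name> with the name of the failing pod reported by oc get pods:

$ oc get subscription,installplan,csv -n vran-acceleration-operators

$ oc get pods -n vran-acceleration-operators

$ oc logs <pod-name> -n vran-acceleration-operators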
17.2.3. Configuring the SR-IOV FEC Operator for the Intel® vRAN Dedicated Accelerator ACC100
Programming the Intel vRAN Dedicated Accelerator ACC100 exposes the Single Root I/O Virtualization (SRIOV) virtual function (VF) devices that are then used to accelerate the forward error correction (FEC) in the vRAN workload. The Intel vRAN Dedicated Accelerator ACC100 accelerates 4G and 5G Virtualized Radio Access Networks (vRAN) workloads. This in turn increases the overall compute capacity of a commercial, off-the-shelf platform. This device is also known as Mount Bryce.
The SR-IOV-FEC Operator handles the management of the FEC devices that are used to accelerate the FEC process in vRAN L1 applications.
Configuring the SR-IOV-FEC Operator involves:
- Creating the virtual functions (VFs) for the FEC device
- Binding the VFs to the appropriate drivers
- Configuring the VF queues for desired functionality in a 4G or 5G deployment
The role of forward error correction (FEC) is to correct transmission errors, where certain bits in a message can be lost or garbled. Messages can be lost or garbled due to noise in the transmission media, interference, or low signal strength. Without FEC, a garbled message would have to be resent, adding to the network load and impacting throughput and latency.
Prerequisites
- Intel vRAN Dedicated Accelerator ACC100 (5G/4G) card.
- Node or nodes installed with the OpenNESS Operator for Wireless FEC Accelerators.
- Enable global SR-IOV and VT-d settings in the BIOS for the node.
- RT kernel configured with Performance Addon Operator.
- Log in as a user with cluster-admin privileges.
Procedure
Change to the vran-acceleration-operators project:

$ oc project vran-acceleration-operators
Verify that the SR-IOV-FEC Operator is installed:
$ oc get csv -o custom-columns=Name:.metadata.name,Phase:.status.phase
Example output
Name               Phase
sriov-fec.v1.1.0   Succeeded
Verify that the sriov-fec pods are running:

$ oc get pods
Example output
NAME                                            READY   STATUS    RESTARTS   AGE
sriov-device-plugin-j5jlv                       1/1     Running   1          15d
sriov-fec-controller-manager-85b6b8f4d4-gd2qg   1/1     Running   1          15d
sriov-fec-daemonset-kqqs6                       1/1     Running   1          15d
- sriov-device-plugin exposes the FEC virtual functions as resources under the node.
- sriov-fec-controller-manager applies the CR to the node and maintains the operand containers.
- sriov-fec-daemonset is responsible for:
  - Discovering the SRIOV NICs on each node.
  - Syncing the status of the custom resource (CR) defined in step 6.
  - Taking the spec of the CR as input and configuring the discovered NICs.
Retrieve all the nodes containing one of the supported vRAN FEC accelerator devices:
$ oc get sriovfecnodeconfig
Example output
NAME    CONFIGURED
node1   Succeeded
Find the physical function (PF) of the SR-IOV FEC accelerator device to configure:
$ oc get sriovfecnodeconfig node1 -o yaml
Example output
status:
  conditions:
  - lastTransitionTime: "2021-03-19T17:19:37Z"
    message: Configured successfully
    observedGeneration: 1
    reason: ConfigurationSucceeded
    status: "True"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 0d5c
      driver: ""
      maxVirtualFunctions: 16
      pciAddress: 0000:af:00.0 1
      vendorID: "8086"
      virtualFunctions: [] 2

- 1
- This is the PCI address of the card to use when you configure the device in the next step.
- 2
- The virtualFunctions list is empty because the card is not yet configured.
Configure the number of virtual functions and queue groups on the FEC device:
Create the following custom resource (CR) and save the YAML in the sriovfec_acc100cr.yaml file:

Note: This example allocates all 8 of the ACC100 queue groups to 5G: 4 queue groups for Uplink and another 4 queue groups for Downlink.

apiVersion: sriovfec.intel.com/v1
kind: SriovFecClusterConfig
metadata:
  name: config 1
spec:
  nodes:
  - nodeName: node1 2
    physicalFunctions:
    - pciAddress: 0000:af:00.0 3
      pfDriver: "pci-pf-stub"
      vfDriver: "vfio-pci"
      vfAmount: 16 4
      bbDevConfig:
        acc100:
          # Programming mode: 0 = VF Programming, 1 = PF Programming
          pfMode: false
          numVfBundles: 16
          maxQueueSize: 1024
          uplink4G:
            numQueueGroups: 0
            numAqsPerGroups: 16
            aqDepthLog2: 4
          downlink4G:
            numQueueGroups: 0
            numAqsPerGroups: 16
            aqDepthLog2: 4
          uplink5G:
            numQueueGroups: 4
            numAqsPerGroups: 16
            aqDepthLog2: 4
          downlink5G:
            numQueueGroups: 4
            numAqsPerGroups: 16
            aqDepthLog2: 4
- 1
- Specify a name for the CR object. The only name that can be specified is config.
- 2
- Specify the node name.
- 3
- Specify the PCI address of the card that the SR-IOV-FEC Operator configures.
- 4
- Specify the number of virtual functions to create. For the Intel vRAN Dedicated Accelerator ACC100, create all 16 VFs.
Note: The card is configured to provide up to 8 queue groups with up to 16 queues per group. The queue groups can be divided among the 4G and 5G Uplink and Downlink allocations. The Intel vRAN Dedicated Accelerator ACC100 can be configured for:
- 4G or 5G only
- 4G and 5G at the same time
Each configured VF has access to all the queues. Each queue group has a distinct priority level. The request for a given queue group is made from the application level, that is, by the vRAN application that leverages the FEC device.
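If you need 4G and 5G at the same time, you can divide the 8 queue groups between the 4G and 5G sections of the same CR. The following bbDevConfig fragment is an illustrative sketch only, splitting the queue groups evenly (2 each for 4G Uplink, 4G Downlink, 5G Uplink, and 5G Downlink); the total across the four sections must not exceed 8. The rest of this procedure continues with the 5G-only configuration in the sriovfec_acc100cr.yaml file.

        acc100:
          pfMode: false
          numVfBundles: 16
          maxQueueSize: 1024
          uplink4G:
            numQueueGroups: 2
            numAqsPerGroups: 16
            aqDepthLog2: 4
          downlink4G:
            numQueueGroups: 2
            numAqsPerGroups: 16
            aqDepthLog2: 4
          uplink5G:
            numQueueGroups: 2
            numAqsPerGroups: 16
            aqDepthLog2: 4
          downlink5G:
            numQueueGroups: 2
            numAqsPerGroups: 16
            aqDepthLog2: 4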
Apply the CR:
$ oc apply -f sriovfec_acc100cr.yaml
After applying the CR, the SR-IOV FEC daemon starts configuring the FEC device.
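While the daemon applies the configuration, it cordons and drains the node, programs the device, and then uncordons the node, so this step can take a few minutes. As a simple way to follow progress, assuming the node1 name from the CR, you can watch the node configuration until the CONFIGURED column reports Succeeded:

$ oc get sriovfecnodeconfig node1 -w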
Verification
Check the status:
$ oc get sriovfecclusterconfig config -o yaml
Example output
status:
  conditions:
  - lastTransitionTime: "2021-03-19T11:46:22Z"
    message: Configured successfully
    observedGeneration: 1
    reason: Succeeded
    status: "True"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 0d5c
      driver: pci-pf-stub
      maxVirtualFunctions: 16
      pciAddress: 0000:af:00.0
      vendorID: "8086"
      virtualFunctions:
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:b0:00.0
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:b0:00.1
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:b0:00.2
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:b0:00.3
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:b0:00.4
Check the logs:
Determine the pod name of the SR-IOV daemon:
$ oc get po -o wide | grep sriov-fec-daemonset | grep node1
Example output
sriov-fec-daemonset-kqqs6 1/1 Running 0 19h
View the logs:
$ oc logs sriov-fec-daemonset-kqqs6
Example output
{"level":"Level(-2)","ts":1616794345.4786215,"logger":"daemon.drainhelper.cordonAndDrain()","msg":"node drained"}
{"level":"Level(-4)","ts":1616794345.4786265,"logger":"daemon.drainhelper.Run()","msg":"worker function - start"}
{"level":"Level(-4)","ts":1616794345.5762916,"logger":"daemon.NodeConfigurator.applyConfig","msg":"current node status","inventory":{"sriovAccelerators":[{"vendorID":"8086","deviceID":"0b32","pciAddress":"0000:20:00.0","driver":"","maxVirtualFunctions":1,"virtualFunctions":[]},{"vendorID":"8086","deviceID":"0d5c","pciAddress":"0000:af:00.0","driver":"","maxVirtualFunctions":16,"virtualFunctions":[]}]}}
{"level":"Level(-4)","ts":1616794345.5763638,"logger":"daemon.NodeConfigurator.applyConfig","msg":"configuring PF","requestedConfig":{"pciAddress":"0000:af:00.0","pfDriver":"pci-pf-stub","vfDriver":"vfio-pci","vfAmount":2,"bbDevConfig":{"acc100":{"pfMode":false,"numVfBundles":16,"maxQueueSize":1024,"uplink4G":{"numQueueGroups":4,"numAqsPerGroups":16,"aqDepthLog2":4},"downlink4G":{"numQueueGroups":4,"numAqsPerGroups":16,"aqDepthLog2":4},"uplink5G":{"numQueueGroups":0,"numAqsPerGroups":16,"aqDepthLog2":4},"downlink5G":{"numQueueGroups":0,"numAqsPerGroups":16,"aqDepthLog2":4}}}}}
{"level":"Level(-4)","ts":1616794345.5774765,"logger":"daemon.NodeConfigurator.loadModule","msg":"executing command","cmd":"/usr/sbin/chroot /host/ modprobe pci-pf-stub"}
{"level":"Level(-4)","ts":1616794345.5842702,"logger":"daemon.NodeConfigurator.loadModule","msg":"commands output","output":""}
{"level":"Level(-4)","ts":1616794345.5843055,"logger":"daemon.NodeConfigurator.loadModule","msg":"executing command","cmd":"/usr/sbin/chroot /host/ modprobe vfio-pci"}
{"level":"Level(-4)","ts":1616794345.6090655,"logger":"daemon.NodeConfigurator.loadModule","msg":"commands output","output":""}
{"level":"Level(-2)","ts":1616794345.6091156,"logger":"daemon.NodeConfigurator","msg":"device's driver_override path","path":"/sys/bus/pci/devices/0000:af:00.0/driver_override"}
{"level":"Level(-2)","ts":1616794345.6091807,"logger":"daemon.NodeConfigurator","msg":"driver bind path","path":"/sys/bus/pci/drivers/pci-pf-stub/bind"}
{"level":"Level(-2)","ts":1616794345.7488534,"logger":"daemon.NodeConfigurator","msg":"device's driver_override path","path":"/sys/bus/pci/devices/0000:b0:00.0/driver_override"}
{"level":"Level(-2)","ts":1616794345.748938,"logger":"daemon.NodeConfigurator","msg":"driver bind path","path":"/sys/bus/pci/drivers/vfio-pci/bind"}
{"level":"Level(-2)","ts":1616794345.7492096,"logger":"daemon.NodeConfigurator","msg":"device's driver_override path","path":"/sys/bus/pci/devices/0000:b0:00.1/driver_override"}
{"level":"Level(-2)","ts":1616794345.7492566,"logger":"daemon.NodeConfigurator","msg":"driver bind path","path":"/sys/bus/pci/drivers/vfio-pci/bind"}
{"level":"Level(-4)","ts":1616794345.74968,"logger":"daemon.NodeConfigurator.applyConfig","msg":"executing command","cmd":"/sriov_workdir/pf_bb_config ACC100 -c /sriov_artifacts/0000:af:00.0.ini -p 0000:af:00.0"}
{"level":"Level(-4)","ts":1616794346.5203931,"logger":"daemon.NodeConfigurator.applyConfig","msg":"commands output","output":"Queue Groups: 0 5GUL, 0 5GDL, 4 4GUL, 4 4GDL\nNumber of 5GUL engines 8\nConfiguration in VF mode\nPF ACC100 configuration complete\nACC100 PF [0000:af:00.0] configuration complete!\n\n"}
{"level":"Level(-4)","ts":1616794346.520459,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"executing command","cmd":"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND"}
{"level":"Level(-4)","ts":1616794346.5458736,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"commands output","output":"0000:af:00.0 @04 = 0142\n"}
{"level":"Level(-4)","ts":1616794346.5459251,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"executing command","cmd":"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND=0146"}
{"level":"Level(-4)","ts":1616794346.5795262,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"commands output","output":"0000:af:00.0 @04 0146\n"}
{"level":"Level(-2)","ts":1616794346.5795407,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"MasterBus set","pci":"0000:af:00.0","output":"0000:af:00.0 @04 0146\n"}
{"level":"Level(-4)","ts":1616794346.6867144,"logger":"daemon.drainhelper.Run()","msg":"worker function - end","performUncordon":true}
{"level":"Level(-4)","ts":1616794346.6867719,"logger":"daemon.drainhelper.Run()","msg":"uncordoning node"}
{"level":"Level(-4)","ts":1616794346.6896322,"logger":"daemon.drainhelper.uncordon()","msg":"starting uncordon attempts"}
{"level":"Level(-2)","ts":1616794346.69735,"logger":"daemon.drainhelper.uncordon()","msg":"node uncordoned"}
{"level":"Level(-4)","ts":1616794346.6973662,"logger":"daemon.drainhelper.Run()","msg":"cancelling the context to finish the leadership"}
{"level":"Level(-4)","ts":1616794346.7029872,"logger":"daemon.drainhelper.Run()","msg":"stopped leading"}
{"level":"Level(-4)","ts":1616794346.7030034,"logger":"daemon.drainhelper","msg":"releasing the lock (bug mitigation)"}
{"level":"Level(-4)","ts":1616794346.8040674,"logger":"daemon.updateInventory","msg":"obtained inventory","inv":{"sriovAccelerators":[{"vendorID":"8086","deviceID":"0b32","pciAddress":"0000:20:00.0","driver":"","maxVirtualFunctions":1,"virtualFunctions":[]},{"vendorID":"8086","deviceID":"0d5c","pciAddress":"0000:af:00.0","driver":"pci-pf-stub","maxVirtualFunctions":16,"virtualFunctions":[{"pciAddress":"0000:b0:00.0","driver":"vfio-pci","deviceID":"0d5d"},{"pciAddress":"0000:b0:00.1","driver":"vfio-pci","deviceID":"0d5d"}]}]}}
{"level":"Level(-4)","ts":1616794346.9058325,"logger":"daemon","msg":"Update ignored, generation unchanged"}
{"level":"Level(-2)","ts":1616794346.9065044,"logger":"daemon.Reconcile","msg":"Reconciled","namespace":"vran-acceleration-operators","name":"pg-itengdvs02r.altera.com"}
Check the FEC configuration of the card:
$ oc get sriovfecnodeconfig node1 -o yaml
Example output
status:
  conditions:
  - lastTransitionTime: "2021-03-19T11:46:22Z"
    message: Configured successfully
    observedGeneration: 1
    reason: Succeeded
    status: "True"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 0d5c 1
      driver: pci-pf-stub
      maxVirtualFunctions: 16
      pciAddress: 0000:af:00.0
      vendorID: "8086"
      virtualFunctions:
      - deviceID: 0d5d 2
        driver: vfio-pci
        pciAddress: 0000:b0:00.0
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:b0:00.1
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:b0:00.2
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:b0:00.3
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:b0:00.4

- 1
- The value 0d5c is the device ID of the physical function (PF) of the FEC device.
- 2
- The value 0d5d is the device ID of the virtual functions (VFs) created from the FEC device.
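Because the sriov-device-plugin exposes the configured VFs as node resources, you can also confirm that the node advertises them before you schedule a workload. This check is a sketch, assuming node1 and the intel.com/intel_fec_acc100 resource name used in the next section; the allocatable count should match the number of configured VFs:

$ oc describe node node1 | grep intel.com/intel_fec_acc100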
17.2.4. Verifying application pod access and ACC100 usage on OpenNESS
OpenNESS is an edge computing software toolkit that you can use to onboard and manage applications and network functions on any type of network.
To verify all OpenNESS features are working together, including SR-IOV binding, the device plugin, Wireless Base Band Device (bbdev) configuration, and SR-IOV (FEC) VF functionality inside a non-root pod, you can build an image and run a simple validation application for the device.
For more information, go to openness.org.
Prerequisites
- Node or nodes installed with the OpenNESS SR-IOV Operator for Wireless FEC Accelerators.
- Real-Time kernel and huge pages configured with the Performance Addon Operator.
Procedure
Create a namespace for the test by completing the following actions:
Define the test-bbdev namespace by creating a file named test-bbdev-namespace.yaml as shown in the following example:

apiVersion: v1
kind: Namespace
metadata:
  name: test-bbdev
  labels:
    openshift.io/run-level: "1"
Create the namespace by running the following command:
$ oc create -f test-bbdev-namespace.yaml
Create the following Pod specification, and then save the YAML in the pod-test.yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: pod-bbdev-sample-app
  namespace: test-bbdev 1
spec:
  containers:
  - securityContext:
      privileged: false
      runAsUser: 0 3
      capabilities:
        add:
        - IPC_LOCK
        - SYS_NICE
    name: bbdev-sample-app
    image: bbdev-sample-app:1.0 2
    command: [ "sudo", "/bin/bash", "-c", "--" ]
    resources:
      requests:
        hugepages-1Gi: 4Gi 4
        memory: 1Gi
        cpu: "4" 5
        intel.com/intel_fec_acc100: '1' 6
      limits:
        memory: 4Gi
        cpu: "4"
        hugepages-1Gi: 4Gi
        intel.com/intel_fec_acc100: '1'
- 1
- Specify the namespace you created in step 1.
- 2
- This defines the test image containing the compiled DPDK.
- 3
- Make the container execute internally as the root user.
- 4
- Specify hugepage size hugepages-1Gi and the quantity of hugepages that will be allocated to the pod. Hugepages and isolated CPUs need to be configured using the Performance Addon Operator.
- 5
- Specify the number of CPUs.
- 6
- Testing of the ACC100 5G FEC configuration is supported by intel.com/intel_fec_acc100.
Create the pod:
$ oc apply -f pod-test.yaml
Check that the pod is created:
$ oc get pods -n test-bbdev
Example output
NAME                   READY   STATUS    RESTARTS   AGE
pod-bbdev-sample-app   1/1     Running   0          80s
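If the pod remains in the Pending state, common causes are missing hugepages or no allocatable intel.com/intel_fec_acc100 resource on the node. As a general troubleshooting step that is not specific to this Operator, describe the pod and review the events at the end of the output:

$ oc describe pod pod-bbdev-sample-app -n test-bbdev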
Use a remote shell to log in to the pod-bbdev-sample-app:

$ oc rsh pod-bbdev-sample-app
Example output
sh-4.4#
Print the VF allocated to the pod:
sh-4.4# printenv | grep INTEL_FEC
Example output
PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100=0.0.0.0:1d.00.0 1
- 1
- This is the PCI address of the virtual function.
Change to the test-bbdev directory:

sh-4.4# cd test/test-bbdev/
Check the CPUs that are assigned to the pod:
sh-4.4# export CPU=$(cat /sys/fs/cgroup/cpuset/cpuset.cpus)
sh-4.4# echo ${CPU}
This prints out the CPUs that are assigned to the pod.

Example output
24,25,64,65
Run the test-bbdev application to test the device:

sh-4.4# ./test-bbdev.py -e="-l ${CPU} -a ${PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100}" -c validation \
  -n 64 -b 32 -l 1 -v ./test_vectors/*
Example output
Executing: ../../build/app/dpdk-test-bbdev -l 24-25,64-65 0000:1d.00.0 -- -n 64 -l 1 -c validation -v ./test_vectors/bbdev_null.data -b 32
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
Option -w, --pci-whitelist is deprecated, use -a, --allow option instead
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: intel_fpga_5ngr_fec_vf (8086:d90) device: 0000:1d.00.0 (socket 1)
EAL: No legacy callbacks, legacy socket not created

===========================================================
Starting Test Suite : BBdev Validation Tests
Test vector file = ldpc_dec_v7813.data
Device 0 queue 16 setup failed
Allocated all queues (id=16) at prio0 on dev0
Device 0 queue 32 setup failed
Allocated all queues (id=32) at prio1 on dev0
Device 0 queue 48 setup failed
Allocated all queues (id=48) at prio2 on dev0
Device 0 queue 64 setup failed
Allocated all queues (id=64) at prio3 on dev0
Device 0 queue 64 setup failed
All queues on dev 0 allocated: 64
+ ------------------------------------------------------- +
== test: validation
dev:0000:b0:00.0, burst size: 1, num ops: 1, op type: RTE_BBDEV_OP_LDPC_DEC
Operation latency:
        avg: 23092 cycles, 10.0838 us
        min: 23092 cycles, 10.0838 us
        max: 23092 cycles, 10.0838 us
TestCase [ 0] : validation_tc passed
 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +
 + Test Suite Summary : BBdev Validation Tests
 + Tests Total :        1
 + Tests Skipped :      0
 + Tests Passed :       1 1
 + Tests Failed :       0
 + Tests Lasted :       177.67 ms
 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +
- 1
- While some tests can be skipped, be sure that the vector tests pass.
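After the validation case passes, you can reuse the same environment variables to run other test-bbdev test cases against the device. The following invocation is a sketch, assuming that the latency test case and the ldpc_dec_v7813.data vector shown in the output above are present in your DPDK build:

sh-4.4# ./test-bbdev.py -e="-l ${CPU} -a ${PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100}" -c latency \
  -n 64 -b 32 -l 1 -v ./test_vectors/ldpc_dec_v7813.data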