Chapter 20. Performing latency tests for platform verification
You can use the Cloud-native Network Functions (CNF) tests image to run latency tests on a CNF-enabled OpenShift Container Platform cluster, where all the components required for running CNF workloads are installed. Run the latency tests to validate node tuning for your workload.
The cnf-tests container image is available at registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18.
20.1. Prerequisites for running latency tests
Your cluster must meet the following requirements before you can run the latency tests:
- You have applied all the required CNF configurations. This includes the PerformanceProfile cluster and other configuration according to the reference design specifications (RDS) or your specific requirements.
- You have logged in to registry.redhat.io with your Customer Portal credentials by using the podman login command.
20.2. Measuring latency
To accurately measure system latency, use the hwlatdetect, cyclictest, and oslat tools provided in the cnf-tests image. Evaluating these metrics helps you identify and resolve performance delays in your environment.
Each tool has a specific use. Use the tools in sequence to achieve reliable test results.
- hwlatdetect

  Measures the baseline that the bare-metal hardware can achieve. Before proceeding with the next latency test, ensure that the latency reported by hwlatdetect meets the required threshold, because you cannot fix hardware latency spikes by operating system tuning.

- cyclictest

  Verifies the real-time kernel scheduler latency after hwlatdetect passes validation. The cyclictest tool schedules a repeated timer and measures the difference between the desired and the actual trigger times. The difference can uncover basic issues with the tuning caused by interrupts or process priorities. The tool must run on a real-time kernel.

- oslat

  Behaves similarly to a CPU-intensive DPDK application and measures all the interruptions and disruptions to the busy loop that simulates CPU-heavy data processing.
The tests introduce the following environment variables:
| Environment variable | Description |
|---|---|
| LATENCY_TEST_DELAY | Specifies the amount of time in seconds after which the test starts running. You can use the variable to allow the CPU manager reconcile loop to update the default CPU pool. The default value is 0. |
| LATENCY_TEST_CPUS | Specifies the number of CPUs that the pod running the latency tests uses. If you do not set the variable, the default configuration includes all isolated CPUs. |
| LATENCY_TEST_RUNTIME | Specifies the amount of time in seconds that the latency test must run. The default value is 300 seconds. Note: To prevent the Ginkgo 2.0 test suite from timing out before the latency tests complete, set the --ginkgo.timeout flag to a value greater than LATENCY_TEST_RUNTIME. |
| HWLATDETECT_MAXIMUM_LATENCY | Specifies the maximum acceptable hardware latency in microseconds for the workload and operating system. If you do not set the value of HWLATDETECT_MAXIMUM_LATENCY, the unified MAXIMUM_LATENCY value applies. |
| CYCLICTEST_MAXIMUM_LATENCY | Specifies the maximum latency in microseconds that all threads expect before waking up during the cyclictest run. |
| OSLAT_MAXIMUM_LATENCY | Specifies the maximum acceptable latency in microseconds for the oslat test results. |
| MAXIMUM_LATENCY | Unified variable that specifies the maximum acceptable latency in microseconds. Applicable for all available latency tools. |
Variables that are specific to a latency tool take precedence over unified variables. For example, if OSLAT_MAXIMUM_LATENCY is set to 30 microseconds and MAXIMUM_LATENCY is set to 10 microseconds, the oslat test runs with a maximum acceptable latency of 30 microseconds.
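The precedence rule can be sketched as a tiny helper; effective_max_latency is an illustrative function, not part of the cnf-tests suite:

```shell
# Illustrative helper (not part of the cnf-tests suite): a tool-specific
# variable, when set, takes precedence over the unified MAXIMUM_LATENCY.
effective_max_latency() {
  specific="$1"; unified="$2"
  if [ -n "$specific" ]; then
    echo "$specific"
  else
    echo "$unified"
  fi
}

effective_max_latency 30 10   # OSLAT_MAXIMUM_LATENCY=30 wins: prints 30
effective_max_latency "" 10   # only MAXIMUM_LATENCY set: prints 10
```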
20.3. Running the latency tests
Run the cluster latency tests to validate node tuning for your Cloud-native Network Functions (CNF) workload.
When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v $(pwd)/:/kubeconfig:Z. This allows podman to do the proper SELinux relabeling.
The procedure runs three individual tests: hwlatdetect, cyclictest, and oslat. For details, see the section for each test.
Procedure
1. Open a shell prompt in the directory containing the kubeconfig file.

   You provide the test image with a kubeconfig file in the current directory and its related $KUBECONFIG environment variable, mounted through a volume. This allows the running container to use the kubeconfig file from inside the container.

   Note: In the following command, your local kubeconfig is mounted to kubeconfig/kubeconfig in the cnf-tests container, which allows access to the cluster.

2. To run the latency tests, run the following command, substituting variable values as appropriate:
   $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
     -e LATENCY_TEST_RUNTIME=600 \
     -e MAXIMUM_LATENCY=20 \
     registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 /usr/bin/test-run.sh \
     --ginkgo.v --ginkgo.timeout="24h"

   The LATENCY_TEST_RUNTIME value is specified in seconds, in this case 600 seconds (10 minutes). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 μs).
If the results exceed the latency threshold, the test fails.
- Optional: Append the --ginkgo.dry-run flag to run the latency tests in dry-run mode. This is useful for checking what commands the tests run.
- Optional: Append the --ginkgo.v flag to run the tests with increased verbosity.
- Optional: Append the --ginkgo.timeout="24h" flag to ensure the Ginkgo 2.0 test suite does not time out before the latency tests complete.

Important: During testing, shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds).
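The pass/fail rule described above can be sketched as follows; latency_test_passes is an illustrative helper, not part of the test suite:

```shell
# Illustrative helper (not part of the test suite): the run succeeds only
# when the maximum observed latency is strictly lower than MAXIMUM_LATENCY.
latency_test_passes() {
  observed_us="$1"; maximum_us="$2"
  [ "$observed_us" -lt "$maximum_us" ]
}

latency_test_passes 18 20 && echo "PASS"   # prints PASS
latency_test_passes 20 20 || echo "FAIL"   # prints FAIL
```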
20.3.1. Running hwlatdetect
To measure hardware latency, run the hwlatdetect tool. This diagnostic utility is available in the rt-kernel package through your Red Hat Enterprise Linux (RHEL) 9.x subscription.
When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v $(pwd)/:/kubeconfig:Z. This allows podman to do the proper SELinux relabeling.
Prerequisites
- You have reviewed the prerequisites for running latency tests.
Procedure
1. To run the hwlatdetect tests, run the following command, substituting variable values as appropriate:

   $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
     -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \
     registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 \
     /usr/bin/test-run.sh --ginkgo.focus="hwlatdetect" --ginkgo.v --ginkgo.timeout="24h"

   The hwlatdetect test runs for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 μs).

   If the results exceed the latency threshold, the test fails.
Important: During testing, shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds).
Example failure output

- Latency threshold: You can configure the latency threshold by using the MAXIMUM_LATENCY or the HWLATDETECT_MAXIMUM_LATENCY environment variables.
- Max Latency: The maximum latency value measured during the test.
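To illustrate how the Max Latency field is compared against the threshold, the following sketch parses an invented output line (not real hwlatdetect output):

```shell
# Invented sample line in the style of a hwlatdetect report; not real output.
SAMPLE_LINE="Max Latency: 24us"
HWLATDETECT_MAXIMUM_LATENCY=20

# Strip everything except digits to get the measured value in microseconds.
measured=$(printf '%s' "$SAMPLE_LINE" | tr -dc '0-9')
if [ "$measured" -gt "$HWLATDETECT_MAXIMUM_LATENCY" ]; then
  echo "FAIL: ${measured}us exceeds ${HWLATDETECT_MAXIMUM_LATENCY}us"
else
  echo "PASS"
fi
# prints: FAIL: 24us exceeds 20us
```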
20.3.2. Example hwlatdetect test results
To track the impact of changes made during testing, capture the raw data from each run along with a combined set of your optimal configuration settings. Retaining these metrics provides a comprehensive history of your test results.
You can capture the following types of results:
- Rough results that are gathered after each run to create a history of impact on any changes made throughout the test.
- The combined set of the rough tests with the best results and configuration settings.
Example of good results
The hwlatdetect tool only provides output if the sample exceeds the specified threshold.
Example of bad results
The output of hwlatdetect shows that multiple samples exceed the threshold. However, the same output can indicate different results based on the following factors:
- The duration of the test
- The number of CPU cores
- The host firmware settings
Before proceeding with the next latency test, ensure that the latency reported by hwlatdetect meets the required threshold. Fixing latencies introduced by hardware might require you to contact the system vendor support.
Not all latency spikes are hardware related. Ensure that you tune the host firmware to meet your workload requirements. For more information, see "Setting firmware parameters for system tuning".
20.3.3. Running cyclictest
To measure real-time kernel scheduler latency on specified CPUs, run the cyclictest tool. Evaluating these metrics helps you identify execution delays and optimize your system for high-performance operations.
When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v $(pwd)/:/kubeconfig:Z. This allows podman to do the proper SELinux relabeling.
Prerequisites
- You have reviewed the prerequisites for running latency tests.
Procedure
1. To perform the cyclictest, run the following command, substituting variable values as appropriate:

   $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
     -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \
     registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 \
     /usr/bin/test-run.sh --ginkgo.focus="cyclictest" --ginkgo.v --ginkgo.timeout="24h"

   The command runs the cyclictest tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (in this example, 20 μs). Latency spikes of 20 μs and above are generally not acceptable for telco RAN workloads.

   If the results exceed the latency threshold, the test fails.
Important: During testing, shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds).
Example failure output
20.3.4. Example cyclictest results
To accurately interpret latency test results, evaluate the metrics against your specific workload requirements. Acceptable performance thresholds differ significantly depending on whether you are running 4G DU or 5G DU workloads.
The following example shows a spike up to 18μs that is acceptable for 4G DU workloads, but not for 5G DU workloads:
Example of good results
Example of bad results
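To make the threshold comparison concrete, this sketch scans invented per-CPU maximum values (not real cyclictest output) for the worst case and checks it against illustrative 4G and 5G budgets:

```shell
# Invented per-CPU maximum latencies in microseconds; not real cyclictest output.
CPU_MAXES="12 15 18 9"

# Find the worst-case maximum across all CPUs.
worst=0
for m in $CPU_MAXES; do
  if [ "$m" -gt "$worst" ]; then worst=$m; fi
done
echo "worst-case: ${worst}us"               # prints worst-case: 18us

# Illustrative budgets only: 20 us for 4G DU, 10 us for 5G DU.
[ "$worst" -lt 20 ] && echo "4G DU: PASS"   # prints 4G DU: PASS
[ "$worst" -lt 10 ] || echo "5G DU: FAIL"   # prints 5G DU: FAIL
```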
20.3.5. Running oslat
To evaluate how your cluster handles CPU-heavy data processing, run the oslat test. This diagnostic tool simulates a CPU-intensive DPDK application to measure system interruptions and performance disruptions.
When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v $(pwd)/:/kubeconfig:Z. This allows podman to do the proper SELinux relabeling.
Prerequisites
- You have reviewed the prerequisites for running latency tests.
Procedure
1. To perform the oslat test, run the following command, substituting variable values as appropriate:

   $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
     -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \
     registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 \
     /usr/bin/test-run.sh --ginkgo.focus="oslat" --ginkgo.v --ginkgo.timeout="24h"

   LATENCY_TEST_CPUS specifies the number of CPUs to test with the oslat command.

   The command runs the oslat tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 μs).

   If the results exceed the latency threshold, the test fails.
Important: During testing, shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds).
Example failure output

1. In this example, the measured latency is outside the maximum allowed value.
20.4. Generating a latency test failure report
To analyze test failures and troubleshoot performance issues, generate a JUnit latency test output and test failure report. Reviewing this diagnostic data helps you pinpoint exactly where your system is experiencing delays.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
Procedure
Create a test failure report with information about the cluster state and resources for troubleshooting by passing the --report parameter with the path to where the report is dumped:

$ podman run -v $(pwd)/:/kubeconfig:Z -v $(pwd)/reportdest:<report_folder_path> \
  -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 \
  /usr/bin/test-run.sh --report <report_folder_path> --ginkgo.v

<report_folder_path>: Specifies the path to the folder where the report is generated.
20.5. Generating a JUnit latency test report
To analyze system performance and track execution delays, generate a JUnit latency test report. Reviewing this diagnostic output helps you identify configuration issues and performance bottlenecks within your cluster.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
Procedure
Create a JUnit-compliant XML report by passing the --ginkgo.junit-report parameter together with the path to where the report is dumped:

Note: You must create the junit folder before running this command.

$ podman run -v $(pwd)/:/kubeconfig:Z -v $(pwd)/junit:/junit \
  -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 \
  /usr/bin/test-run.sh --ginkgo.junit-report junit/<file_name>.xml --ginkgo.v

where:

<file_name>: The name of the XML report file.
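A small guard before the podman run can enforce the note above about creating the junit folder first; this is a sketch, not part of the documented procedure:

```shell
# Guard sketch (not part of the documented procedure): make sure the junit
# folder exists locally before the container tries to write the report.
mkdir -p junit
[ -d junit ] && echo "junit folder ready"   # prints junit folder ready
```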
20.6. Running latency tests on a single-node OpenShift cluster
To validate node tuning and identify performance delays, run latency tests on your single-node OpenShift clusters. Evaluating these metrics ensures your environment is optimized for high-performance workloads.
When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v $(pwd)/:/kubeconfig:Z. This allows podman to do the proper SELinux relabeling.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have applied a cluster performance profile by using the Node Tuning Operator.
Procedure
To run the latency tests on a single-node OpenShift cluster, run the following command:

$ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
  -e LATENCY_TEST_RUNTIME=<time_in_seconds> registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 \
  /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h"

Note: The default runtime for each test is 300 seconds. For valid latency test results, run the tests for at least 12 hours by updating the LATENCY_TEST_RUNTIME variable. To run the buckets latency validation step, you must specify a maximum latency. For details on maximum latency variables, see the table in the "Measuring latency" section.
After running the test suite, all the dangling resources are cleaned up.
20.7. Running latency tests in a disconnected cluster
The CNF tests image can run tests in a disconnected cluster that is not able to reach external registries. This requires two steps:
1. Mirroring the cnf-tests image to the custom disconnected registry.
2. Instructing the tests to consume the images from the custom disconnected registry.
20.7.1. Mirroring the images to a custom registry accessible from the cluster
To make required images accessible from your cluster, mirror them to a custom registry. Performing this synchronization ensures that your deployment has the necessary container files, which is particularly useful in restricted or disconnected network environments.
A mirror executable is shipped in the image to provide the input required by oc to mirror the test image to a local registry.
Procedure
1. Run the following command from an intermediate machine that has access to the cluster and registry.redhat.io:

   $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
     registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 \
     /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f -

   where:

   <disconnected_registry>: Specifies the disconnected mirror registry you have configured, such as my.local.registry:5000/.

2. When you have mirrored the cnf-tests image into the disconnected registry, you must override the original registry used to fetch the images when running the tests, as in the following example:

   $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
     -e IMAGE_REGISTRY="<disconnected_registry>" \
     -e CNF_TESTS_IMAGE="cnf-tests-rhel9:v4.18" \
     -e LATENCY_TEST_RUNTIME=<time_in_seconds> \
     <disconnected_registry>/cnf-tests-rhel9:v4.18 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h"
20.7.2. Configuring the tests to consume images from a custom registry
You can run the latency tests by using a custom test image and image registry with the CNF_TESTS_IMAGE and IMAGE_REGISTRY variables.
Procedure
To configure the latency tests to use a custom test image and image registry, run a command similar to the following example:

$ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
  -e IMAGE_REGISTRY="<custom_image_registry>" \
  -e CNF_TESTS_IMAGE="<custom_cnf-tests_image>" \
  -e LATENCY_TEST_RUNTIME=<time_in_seconds> \
  registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h"

where:

<custom_image_registry>: Specifies the custom image registry, for example, custom.registry:5000/.

<custom_cnf-tests_image>: Specifies the custom cnf-tests image, for example, custom-cnf-tests-image:latest.
20.7.3. Mirroring images to the cluster OpenShift image registry
To make container images locally available for your deployment, mirror them to the built-in OpenShift image registry. This integrated component runs as a standard workload on your OpenShift Container Platform cluster to ensure continuous access to required files.
Procedure
1. Gain external access to the registry by exposing it with a route:

   $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge

2. Fetch the registry endpoint:

   $ REGISTRY=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')

3. Create a namespace for exposing the images:

   $ oc create ns cnftests

4. Make the image stream available to all the namespaces used for tests. This is required to allow the tests namespaces to fetch the images from the cnf-tests image stream:

   $ oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests

   $ oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests

5. Retrieve the docker secret name:

   $ SECRET=$(oc -n cnftests get secret | grep builder-docker | awk '{print $1}')

6. Retrieve the docker auth token:

   $ TOKEN=$(oc -n cnftests get secret $SECRET -o jsonpath="{.data['\.dockercfg']}" | base64 --decode | jq '.["image-registry.openshift-image-registry.svc:5000"].auth')

7. Create a dockerauth.json file, for example:

   $ echo "{\"auths\": { \"$REGISTRY\": { \"auth\": $TOKEN } }}" > dockerauth.json

8. Mirror the image:

   $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
     registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 \
     /usr/bin/mirror -registry $REGISTRY/cnftests | oc image mirror --insecure=true \
     -a=$(pwd)/dockerauth.json -f -

9. Run the tests:

   $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
     -e LATENCY_TEST_RUNTIME=<time_in_seconds> \
     -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests \
     cnf-tests-local:latest /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h"
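As a quick sanity check (not part of the documented procedure), you can read the auth entry back out of dockerauth.json with jq, which the procedure above already uses; the registry host and token here are placeholder values:

```shell
# Placeholder values; in the real procedure REGISTRY and TOKEN come from
# the oc commands shown in the procedure above.
REGISTRY="default-route-openshift-image-registry.apps.example.com"
TOKEN='"c3RhY2s="'

echo "{\"auths\": { \"$REGISTRY\": { \"auth\": $TOKEN } }}" > dockerauth.json

# Read the auth entry back to confirm the file is well-formed JSON.
jq -r --arg r "$REGISTRY" '.auths[$r].auth' dockerauth.json   # prints c3RhY2s=
```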
20.7.4. Mirroring a different set of test images
You can optionally change the default upstream images that are mirrored for the latency tests.
Procedure
1. The mirror command tries to mirror the upstream images by default. This can be overridden by passing the container a file that lists the images to mirror, saved locally, for example, as images.json.

2. Pass the file to the mirror command. With the following command, the local path is mounted in /kubeconfig inside the container, so the file can be passed to the mirror command:

   $ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
     registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 /usr/bin/mirror \
     --registry "my.local.registry:5000/" --images "/kubeconfig/images.json" \
     | oc image mirror -f -
20.8. Troubleshooting errors with the cnf-tests container
To troubleshoot errors when running latency tests, verify that your cluster is accessible from within the cnf-tests container. Ensuring this connectivity resolves common test execution failures.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
Procedure
Verify that the cluster is accessible from inside the cnf-tests container by running the following command:

$ podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
  registry.redhat.io/openshift4/cnf-tests-rhel9:v4.18 \
  oc get nodes

If this command does not work, the cause might be an error related to DNS resolution, MTU size, or firewall access.
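If the command fails, a first local check of name resolution can help narrow down DNS issues; this is an illustrative sketch, with localhost standing in for your cluster API hostname:

```shell
# Illustrative first check for DNS-related failures; "localhost" stands in
# for your cluster API hostname, for example api.<cluster>.<domain>.
API_HOST="localhost"
if getent hosts "$API_HOST" >/dev/null; then
  echo "DNS ok for $API_HOST"
else
  echo "DNS lookup failed for $API_HOST"
fi
```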