Chapter 15. Monitoring
15.1. Monitoring overview
You can monitor the health of your cluster and virtual machines (VMs) with the following tools:
- Monitoring OpenShift Virtualization VM health status: View the overall health of your OpenShift Virtualization environment in the web console by navigating to the Home → Overview page in the OpenShift Container Platform web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
- OpenShift Container Platform cluster checkup framework: Run automated tests on your cluster with the OpenShift Container Platform cluster checkup framework to check the following conditions:
  - Network connectivity and latency between two VMs attached to a secondary network interface
  - VM running a Data Plane Development Kit (DPDK) workload with zero packet loss
  - Cluster storage is optimally configured for OpenShift Virtualization
- Prometheus queries for virtual resources: Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
- VM custom metrics: Configure the node-exporter service to expose internal VM metrics and processes.
- VM health checks: Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
- Runbooks: Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the OpenShift Container Platform web console.
15.2. OpenShift Virtualization cluster checkup framework
A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.
The OpenShift Virtualization cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
As a developer or cluster administrator, you can use predefined checkups to improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. You can review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.
15.2.1. Running predefined latency checkups
You can use a latency checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The predefined latency checkup uses the ping utility.
Before you run a latency checkup, you must first create a bridge interface on the cluster nodes to connect the VM’s secondary interface to any interface on the node. If you do not create a bridge interface, the VMs do not start and the job fails.
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.
You must always:
- Verify that the checkup image is from a trustworthy source before applying it.
- Review the checkup permissions before creating the Role and RoleBinding objects.
15.2.1.1. Running a latency checkup by using the web console
Run a latency checkup to verify network connectivity and measure the latency between two virtual machines attached to a secondary network interface.
Prerequisites
- You must add a NetworkAttachmentDefinition to the namespace.
Procedure
- Navigate to Virtualization → Checkups in the web console.
- Click the Network latency tab.
- Click Install permissions.
- Click Run checkup.
- Enter a name for the checkup in the Name field.
- Select a NetworkAttachmentDefinition from the drop-down menu.
- Optional: Set a duration for the latency sample in the Sample duration (seconds) field.
- Optional: Define a maximum latency time interval by enabling Set maximum desired latency (milliseconds) and defining the time interval.
- Optional: Target specific nodes by enabling Select nodes and specifying the Source node and Target node.
- Click Run.
Verification
- To view the status of the latency checkup, go to the Checkups list on the Latency checkup tab. Click on the name of the checkup for more details.
15.2.1.2. Running a latency checkup by using the CLI
You run a latency checkup using the CLI by performing the following steps:
- Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the latency checkup resources.
Prerequisites
- You installed the OpenShift CLI (oc).
- The cluster has at least two worker nodes.
- You configured a network attachment definition for a namespace.
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the latency checkup. A minimal sketch of such a manifest follows.
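The following sketch assumes the service account name vm-latency-checkup-sa and the role names kubevirt-vm-latency-checker and kiagnose-configmap-access; review the exact permissions that your checkup image requires before applying it:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vm-latency-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-vm-latency-checker
rules:
  # Create and inspect the checkup virtual machine instances
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachineinstances"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["subresources.kubevirt.io"]
    resources: ["virtualmachineinstances/console"]
    verbs: ["get"]
  - apiGroups: ["k8s.cni.cncf.io"]
    resources: ["network-attachment-definitions"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-vm-latency-checker
subjects:
  - kind: ServiceAccount
    name: vm-latency-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubevirt-vm-latency-checker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
  # Read the input config map and write the results back to it
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
  - kind: ServiceAccount
    name: vm-latency-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kiagnose-configmap-access
```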
Apply the ServiceAccount, Role, and RoleBinding manifest:
$ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml
where <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
Create a ConfigMap manifest that contains the input parameters for the checkup. A sketch follows; the numbered comments correspond to the parameter descriptions after it.
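The following sketch assumes the kiagnose-style spec.param.* data keys used by the latency checkup; the network name and node names are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-vm-latency
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
  spec.param.networkAttachmentDefinitionName: "blue-network"   # 1
  spec.param.maxDesiredLatencyMilliseconds: "10"               # 2
  spec.param.sampleDurationSeconds: "5"                        # 3
  spec.param.sourceNode: "worker1"                             # 4
  spec.param.targetNode: "worker2"                             # 5
```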
1. The name of the NetworkAttachmentDefinition object.
2. Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
3. Optional: The duration of the latency check, in seconds.
4. Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty.
5. Optional: When specified, latency is measured from the source node to this node.
Apply the config map manifest in the target namespace:
$ oc apply -n <target_namespace> -f <latency_config_map>.yaml
Create a Job manifest to run the checkup. A sketch of such a job follows.
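The following sketch assumes the service account created earlier, the kiagnose CONFIGMAP_* environment variables, and a placeholder for the checkup container image; substitute the image that matches your OpenShift Virtualization version:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kubevirt-vm-latency-checkup
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: vm-latency-checkup-sa
      restartPolicy: Never
      containers:
        - name: vm-latency-checkup
          image: <latency_checkup_image>   # placeholder for the checkup image
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          env:
            # Point the checkup at its input/output config map
            - name: CONFIGMAP_NAMESPACE
              value: <target_namespace>
            - name: CONFIGMAP_NAME
              value: kubevirt-vm-latency-checkup-config
            - name: POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
```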
Apply the Job manifest:
$ oc apply -n <target_namespace> -f <latency_job>.yaml
Wait for the job to complete:
$ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error.
$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml
In a successful checkup, the output config map reports the maximum measured latency in nanoseconds.
Optional: To view the detailed job log in case of checkup failure, use the following command:
$ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
$ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
Optional: If you do not plan to run another checkup, delete the roles manifest:
$ oc delete -f <latency_sa_roles_rolebinding>.yaml
15.2.2. Running predefined storage checkups
You can use a storage checkup to verify that the cluster storage is optimally configured for OpenShift Virtualization.
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.
You must always:
- Verify that the checkup image is from a trustworthy source before applying it.
- Review the checkup permissions before creating the Role and RoleBinding objects.
15.2.2.1. Retaining resources for troubleshooting storage checkups
The predefined storage checkup includes a skipTeardown configuration option, which controls resource cleanup after a storage checkup runs. By default, the skipTeardown field value is Never, which means that the checkup always performs teardown steps and deletes all resources after the checkup runs.
You can retain resources for further inspection in case a failure occurs by setting the skipTeardown field to onfailure.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Run the following command to edit the storage-checkup-config config map:
$ oc edit configmap storage-checkup-config -n <checkup_namespace>
Configure the skipTeardown field to use the onfailure value. You can do this by modifying the storage-checkup-config config map, stored in the storage_checkup.yaml file. A sketch follows.
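The following sketch assumes that the option is exposed as the spec.param.skipTeardown data key of the config map:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: storage-checkup-config
  namespace: <checkup_namespace>
data:
  spec.timeout: 10m
  spec.param.skipTeardown: "onfailure"   # retain checkup resources only when the checkup fails
```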
Reapply the storage-checkup-config config map by running the following command:
$ oc apply -f storage_checkup.yaml -n <checkup_namespace>
15.2.2.2. Running a storage checkup by using the web console
Run a storage checkup to validate that storage is working correctly for virtual machines.
Procedure
- Navigate to Virtualization → Checkups in the web console.
- Click the Storage tab.
- Click Install permissions.
- Click Run checkup.
- Enter a name for the checkup in the Name field.
- Enter a timeout value for the checkup in the Timeout (minutes) field.
- Click Run.
You can view the status of the storage checkup in the Checkups list on the Storage tab. Click on the name of the checkup for more details.
15.2.2.3. Running a storage checkup by using the CLI
Use a predefined checkup to verify that the OpenShift Container Platform cluster storage is configured optimally to run OpenShift Virtualization workloads.
Prerequisites
- You have installed the OpenShift CLI (oc).
- The cluster administrator has created the required cluster-reader permissions for the storage checkup service account and namespace, where the namespace is the one in which the checkup is to be run, such as in the following example.
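The following sketch assumes a service account named storage-checkup-sa bound to the built-in cluster-reader cluster role:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubevirt-storage-checkup-clustereader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
  - kind: ServiceAccount
    name: storage-checkup-sa
    namespace: <target_namespace>   # the namespace where the checkup is to be run
```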
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest file for the storage checkup. A minimal sketch of such a manifest follows.
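The following sketch assumes the service account name storage-checkup-sa; the rules shown are illustrative, so verify the permissions that your checkup image version actually requires before applying it:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: storage-checkup-role
rules:
  # Read the input config map and write the results back to it
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "update"]
  # Create and clean up the test virtual machines, data volumes, and snapshots
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines", "virtualmachineinstances"]
    verbs: ["get", "list", "create", "delete"]
  - apiGroups: ["cdi.kubevirt.io"]
    resources: ["datavolumes"]
    verbs: ["get", "list", "create", "delete"]
  - apiGroups: ["snapshot.kubevirt.io"]
    resources: ["virtualmachinesnapshots", "virtualmachinerestores"]
    verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: storage-checkup-role
subjects:
  - kind: ServiceAccount
    name: storage-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: storage-checkup-role
```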
Apply the ServiceAccount, Role, and RoleBinding manifest in the target namespace:
$ oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml
Create a ConfigMap and Job manifest file. The config map contains the input parameters for the checkup job. A sketch of such a combined manifest follows.
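The following sketch uses a placeholder for the checkup container image and assumes kiagnose-style CONFIGMAP_* environment variables:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: storage-checkup-config
  namespace: <target_namespace>
  labels:
    kiagnose/checkup-type: kubevirt-storage
data:
  spec.timeout: 10m
---
apiVersion: batch/v1
kind: Job
metadata:
  name: storage-checkup
  namespace: <target_namespace>
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: storage-checkup-sa
      restartPolicy: Never
      containers:
        - name: storage-checkup
          image: <storage_checkup_image>   # placeholder for the checkup image
          env:
            # Point the checkup at its input/output config map
            - name: CONFIGMAP_NAMESPACE
              value: <target_namespace>
            - name: CONFIGMAP_NAME
              value: storage-checkup-config
```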
Apply the ConfigMap and Job manifest file in the target namespace to run the checkup:
$ oc apply -n <target_namespace> -f <storage_configmap_job>.yaml
Wait for the job to complete:
$ oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m
Review the results of the checkup by running the following command:
$ oc get configmap storage-checkup-config -n <target_namespace> -o yaml
The status fields of the output config map report the following information:
- Whether the checkup succeeded (true) or failed (false).
- The reason for failure if the checkup fails.
- The time when the checkup started, in RFC 3339 time format.
- The time when the checkup completed, in RFC 3339 time format.
- The OpenShift Virtualization version.
- Whether a default storage class exists.
- The list of golden images whose data source is not ready.
- The list of golden images whose data import cron is not up to date.
- The OpenShift Container Platform version.
- Whether a PVC of 10Mi has been created and bound by the provisioner.
- The list of storage profiles that use snapshot-based cloning but are missing a VolumeSnapshotClass.
- The list of storage profiles with unknown provisioners.
- The list of storage profiles with smart clone support (CSI/snapshot).
- The list of storage profiles with spec-overridden claimPropertySets.
- The list of virtual machines that use the Ceph RBD storage class when the virtualization storage class exists.
- The list of virtual machines that use an Elastic File Store (EFS) storage class where the GID and UID are not set in the storage class.
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> storage-checkup
$ oc delete configmap -n <target_namespace> storage-checkup-config
Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:
$ oc delete -f <storage_sa_roles_rolebinding>.yaml
15.2.2.4. Troubleshooting a failed storage checkup
If a storage checkup fails, there are steps that you can take to identify the reason for failure.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have downloaded the directory provided by the must-gather tool.
Procedure
Review the status.failureReason field in the storage-checkup-config config map by running the following command and observing the output:
$ oc get configmap storage-checkup-config -n <namespace> -o yaml
Search the directory provided by the must-gather tool for logs, events, or terms related to the error in the data.status.failureReason field value.
15.2.2.5. Storage checkup error codes
The following error codes might appear in the storage-checkup-config config map after a storage checkup fails.
| Error code | Meaning |
|---|---|
| | No default storage class is configured. |
| | One or more persistent volume claims (PVCs) failed to bind. |
| | Multiple default storage classes are configured. |
| | There are … |
| | There are VMs using elastic file system (EFS) storage classes, where the GID and UID are not set in the storage class. |
| | One or more golden images has a … |
| | The … |
| | Some VMs failed to boot within the expected time. |
15.2.3. Running predefined DPDK checkups
You can use a DPDK checkup to verify that a node can run a VM with a Data Plane Development Kit (DPDK) workload with zero packet loss.
15.2.3.1. Running a DPDK checkup by using the CLI
Use a predefined checkup to verify that your OpenShift Container Platform cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application.
You run a DPDK checkup by performing the following steps:
- Create a service account, role, and role bindings for the DPDK checkup.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the DPDK checkup resources.
Prerequisites
- You have installed the OpenShift CLI (oc).
- The cluster is configured to run DPDK applications.
- The project is configured to run DPDK applications.
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the DPDK checkup.
Apply the ServiceAccount, Role, and RoleBinding manifest:
$ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
Create a ConfigMap manifest that contains the input parameters for the checkup. A sketch follows; the numbered comments correspond to the descriptions after it.
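The following sketch assumes kiagnose-style spec.param.* data keys; the traffic-generator key name is an assumption (spec.param.vmUnderTestContainerDiskImage is referenced later in this section) and the image references are placeholders for images hosted on Project Quay:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-dpdk
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: <network_name>                              # 1
  spec.param.trafficGenContainerDiskImage: "quay.io/<org>/<traffic_gen_image>:<tag>"      # 2
  spec.param.vmUnderTestContainerDiskImage: "quay.io/<org>/<vm_under_test_image>:<tag>"   # 3
```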
1. The name of the NetworkAttachmentDefinition object.
2. The container disk image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry.
3. The container disk image for the VM under test. In this example, the image is pulled from the upstream Project Quay Container Registry.
Apply the ConfigMap manifest in the target namespace:
$ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
Create a Job manifest to run the checkup.
Apply the Job manifest:
$ oc apply -n <target_namespace> -f <dpdk_job>.yaml
Wait for the job to complete:
$ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
Review the results of the checkup by running the following command:
$ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml
The status fields of the output config map report the following information:
- Whether the checkup succeeded (true) or failed (false).
- The reason for failure if the checkup fails.
- The time when the checkup started, in RFC 3339 time format.
- The time when the checkup completed, in RFC 3339 time format.
- The number of packets sent from the traffic generator.
- The number of error packets sent from the traffic generator.
- The number of error packets received by the traffic generator.
- The node on which the traffic generator VM was scheduled.
- The node on which the VM under test was scheduled.
- The number of packets received on the VM under test.
- The ingress traffic packets that were dropped by the DPDK application.
- The egress traffic packets that were dropped from the DPDK application.
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> dpdk-checkup
$ oc delete configmap -n <target_namespace> dpdk-checkup-config
Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:
$ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
15.2.3.1.1. DPDK checkup config map parameters
The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup:
| Parameter | Description | Is Mandatory |
|---|---|---|
| | The time, in minutes, before the checkup fails. | True |
| | The name of the NetworkAttachmentDefinition object. | True |
| | The container disk image for the traffic generator. | True |
| | The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| | The number of packets per second, in kilo (k) or million (m). The default value is 8m. | False |
| | The container disk image for the VM under test. | True |
| | The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| | The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. | False |
| | The maximum bandwidth of the SR-IOV NIC. The default value is 10Gbps. | False |
| | When set to … | False |
15.2.3.1.2. Building a container disk image for RHEL virtual machines
You can build a custom Red Hat Enterprise Linux (RHEL) 9 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmContainerDiskImage attribute of the DPDK checkup config map.
To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 9 VM that can be used to build custom RHEL images.
Prerequisites
- The image builder VM must run RHEL 9.4 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory.
- You have installed the image builder tool and its CLI (composer-cli) on the VM. For more information, see "Additional resources".
- You have installed the virt-customize tool:
  # dnf install guestfs-tools
- You have installed the Podman CLI tool (podman).
Procedure
Verify that you can build a RHEL 9.4 image:
# composer-cli distros list
Note: To run the composer-cli commands as non-root, add your user to the weldr or root groups:
# usermod -a -G weldr <user>
$ newgrp weldr
Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time. A sketch of such a blueprint follows.
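The following sketch writes a blueprint to dpdk-vm.toml and uses the blueprint name dpdk_image that the compose command below expects; the package list, kernel arguments, and disabled services shown here are illustrative:

```bash
$ cat <<EOF > dpdk-vm.toml
name = "dpdk_image"
description = "Image to use with the DPDK checkup"
version = "0.0.1"
distro = "rhel-94"

[[packages]]
name = "dpdk"

[[packages]]
name = "dpdk-tools"

[[packages]]
name = "driverctl"

[[packages]]
name = "tuned-profiles-cpu-partitioning"

[customizations.kernel]
append = "default_hugepagesz=1GB hugepagesz=1G hugepages=1"

[customizations.services]
disabled = ["NetworkManager-wait-online", "sshd"]
EOF
```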
Push the blueprint file to the image builder tool by running the following command:
# composer-cli blueprints push dpdk-vm.toml
Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.
# composer-cli compose start dpdk_image qcow2
Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the next step.
# composer-cli compose status
Enter the following command to download the qcow2 image file by specifying its UUID:
# composer-cli compose image <UUID>
Create the customization scripts by running the following commands. An illustrative sketch follows.
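The exact contents of these scripts depend on your DPDK workload; the following is a purely illustrative sketch of a customize-vm script that prepares hugepages and relaxes the QEMU guest agent RPC restrictions:

```bash
$ cat <<'EOF' > customize-vm
#!/bin/bash
# Illustrative: mount hugepages at boot for the DPDK application
mkdir -p /mnt/huge
echo "hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0" >> /etc/fstab

# Illustrative: allow the qemu-guest-agent to run guest-exec commands
sed -i 's/^BLACKLIST_RPC=/#BLACKLIST_RPC=/' /etc/sysconfig/qemu-ga
EOF
$ chmod +x customize-vm
```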
Use the virt-customize tool to customize the image generated by the image builder tool:
$ virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel
To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:
$ cat << EOF > Dockerfile
FROM scratch
COPY --chown=107:107 <UUID>-disk.qcow2 /disk/
EOF
where:
- <UUID>-disk.qcow2 specifies the name of the custom image in qcow2 format.
Build and tag the container by running the following command:
$ podman build . -t dpdk-rhel:latest
Push the container disk image to a registry that is accessible from your cluster by running the following command:
$ podman push dpdk-rhel:latest
Provide a link to the container disk image in the spec.param.vmUnderTestContainerDiskImage attribute in the DPDK checkup config map.
15.3. Prometheus queries for virtual resources
OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including vCPU, network, storage, and guest memory swapping. You can also use metrics to query live migration status.
15.3.1. Prerequisites
- To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. For more information, see Adding kernel arguments to nodes.
- For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
15.3.2. Querying metrics for all projects with the OpenShift Container Platform web console
You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.
As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI.
The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet for all projects. You can also run custom Prometheus Query Language (PromQL) queries.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects.
- You have installed the OpenShift CLI (oc).
Procedure
- In the OpenShift Container Platform web console, click Observe → Metrics.
- To add one or more queries, perform any of the following actions:
| Option | Description |
|---|---|
| Select an existing query. | From the Select query drop-down list, select an existing query. |
| Create a custom query. | Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item. |
| Add multiple queries. | Click Add query. |
| Duplicate an existing query. | Click the options menu next to the query, then choose Duplicate query. |
| Disable a query from being run. | Click the options menu next to the query and choose Disable query. |
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note:
- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
| Option | Description |
|---|---|
| Hide all metrics from a query. | Click the options menu for the query and click Hide all series. |
| Hide a specific metric. | Go to the query table and click the colored square near the metric name. |
| Zoom into the plot and change the time range. | Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu to select the time range. |
| Reset the time range. | Click Reset zoom. |
| Display outputs for all queries at a specific point in time. | Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box. |
| Hide the plot. | Click Hide graph. |
15.3.3. Querying metrics for user-defined projects with the OpenShift Container Platform web console
You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet. These queries are restricted to the selected project. You can also run custom Prometheus Query Language (PromQL) queries for the project.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
- You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.
Procedure
- In the OpenShift Container Platform web console, click Observe → Metrics.
- To add one or more queries, perform any of the following actions:
| Option | Description |
|---|---|
| Select an existing query. | From the Select query drop-down list, select an existing query. |
| Create a custom query. | Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item. |
| Add multiple queries. | Click Add query. |
| Duplicate an existing query. | Click the options menu next to the query, then choose Duplicate query. |
| Disable a query from being run. | Click the options menu next to the query and choose Disable query. |
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note:
- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
| Option | Description |
|---|---|
| Hide all metrics from a query. | Click the options menu for the query and click Hide all series. |
| Hide a specific metric. | Go to the query table and click the colored square near the metric name. |
| Zoom into the plot and change the time range. | Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu to select the time range. |
| Reset the time range. | Click Reset zoom. |
| Display outputs for all queries at a specific point in time. | Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box. |
| Hide the plot. | Click Hide graph. |
15.3.4. Virtualization metrics
The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics.
The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output.
15.3.4.1. vCPU metrics
The following query can identify virtual machines that are waiting for Input/Output (I/O):
kubevirt_vmi_vcpu_wait_seconds_total - Returns the wait time (in seconds) on I/O for vCPUs of a virtual machine. Type: Counter.
A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.
To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler.
Example vCPU wait time query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0
This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.
15.3.4.2. Network metrics
The following queries can identify virtual machines that are saturating the network:
kubevirt_vmi_network_receive_bytes_total - Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total - Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.
Example network traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0
This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.
15.3.4.3. Storage metrics
15.3.4.3.1. Storage-related traffic
The following queries can identify VMs that are writing large amounts of data:
kubevirt_vmi_storage_read_traffic_bytes_total - Returns the total amount (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total - Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
Example storage-related traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0
This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
15.3.4.3.2. Storage snapshot data
kubevirt_vmsnapshot_disks_restored_from_source - Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes - Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.
Examples of storage snapshot data queries
kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"}
This query returns the total number of virtual machine disks restored from the source virtual machine.
kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"}
This query returns the amount of space in bytes restored from the source virtual machine.
15.3.4.3.3. I/O performance
The following queries can determine the I/O performance of storage devices:
kubevirt_vmi_storage_iops_read_total - Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total - Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter.
Example I/O performance query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0
This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.
15.3.4.4. Guest memory swapping metrics
The following queries can identify which swap-enabled guests are performing the most memory swapping:
kubevirt_vmi_memory_swap_in_traffic_bytes - Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes - Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.
Example memory swapping query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0
This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
15.3.4.5. Live migration metrics
The following metrics can be queried to show live migration status:
kubevirt_vmi_migration_data_processed_bytes - The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_vmi_migration_data_remaining_bytes - The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_vmi_migration_memory_transfer_rate_bytes - The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_vmi_migrations_in_pending_phase - The number of pending migrations. Type: Gauge.
kubevirt_vmi_migrations_in_scheduling_phase - The number of scheduling migrations. Type: Gauge.
kubevirt_vmi_migrations_in_running_phase - The number of running migrations. Type: Gauge.
kubevirt_vmi_migration_succeeded - The number of successfully completed migrations. Type: Gauge.
kubevirt_vmi_migration_failed - The number of failed migrations. Type: Gauge.
15.4. Exposing custom metrics for virtual machines
OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.
In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.
15.4.1. Configuring the node exporter service
The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
- Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
- Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.
Procedure
Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml. A sketch of such a service follows; the numbered comments correspond to the descriptions after it.
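The following sketch assumes the dynamation namespace, the servicetype: metrics service label, and the exmet port name that are referenced elsewhere in this section:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: node-exporter-service   # 1
  namespace: dynamation         # 2
  labels:
    servicetype: metrics        # 3
spec:
  ports:
    - name: exmet               # 4
      protocol: TCP
      port: 9100                # 5
      targetPort: 9100          # 6
  type: ClusterIP
  selector:
    monitor: metrics            # 7
```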
1. The node-exporter service that exposes the metrics from the virtual machines.
2. The namespace where the service is created.
3. The label for the service. The ServiceMonitor uses this label to match this service.
4. The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
5. The target port used by node-exporter-service to listen for requests.
6. The TCP port number of the virtual machine that is configured with the monitor label.
7. The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics will be matched.
Create the node-exporter service:
$ oc create -f node-exporter-service.yaml
15.4.2. Configuring a virtual machine with the node exporter service
Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.
Prerequisites
- The pods for the component are running in the openshift-user-workload-monitoring project.
- Grant the monitoring-edit role to users who need to monitor this user-defined project.
Procedure
- Log on to the virtual machine.
Download the node-exporter file onto the virtual machine by using the directory path that applies to the version of the node-exporter file:
$ wget https://github.com/prometheus/node_exporter/releases/download/<version>/node_exporter-<version>.linux-<architecture>.tar.gz
Extract the executable and place it in the /usr/bin directory:
$ sudo tar xvf node_exporter-<version>.linux-<architecture>.tar.gz \
  --directory /usr/bin --strip 1 "*/node_exporter"
Create a node_exporter.service file in this directory path: /etc/systemd/system. This systemd service file runs the node-exporter service when the virtual machine reboots. A sketch of such a unit file follows.
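The following minimal sketch assumes that the node_exporter binary was placed in /usr/bin in the previous step:

```ini
[Unit]
Description=Prometheus node_exporter for VM metrics
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```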
Enable and start the systemd service:
$ sudo systemctl enable node_exporter.service
$ sudo systemctl start node_exporter.service
Verification
Verify that the node-exporter agent is reporting metrics from the virtual machine.
$ curl http://localhost:9100/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5244e-05
go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
15.4.3. Creating a custom monitoring label for virtual machines
To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- You have access to the web console to stop and restart a virtual machine.
Procedure
Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.
spec:
  template:
    metadata:
      labels:
        monitor: metrics
Stop and restart the virtual machine to create a new pod with the label name given to the monitor label.
15.4.3.1. Querying the node-exporter service for metrics
Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
- You have installed the OpenShift CLI (oc).
Procedure
Obtain the HTTP service endpoint by specifying the namespace for the service:
$ oc get service -n <namespace> <node-exporter-service>
To list all available metrics for the node-exporter service, query the metrics resource:
$ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"
15.4.4. Creating a ServiceMonitor resource for the node exporter service
You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
- You have installed the OpenShift CLI (oc).
Procedure
Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds. A sketch of such a resource follows.
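The following sketch assumes the servicetype: metrics service label and the exmet port name from the node-exporter service example:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: node-exporter-metrics-monitor
  namespace: dynamation
spec:
  selector:
    matchLabels:
      servicetype: metrics   # matches the label on the node-exporter service
  endpoints:
    - port: exmet            # the named port that exposes the metrics
      interval: 30s          # scrape every 30 seconds
      scheme: http
```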
Create the ServiceMonitor configuration for the node-exporter service:
$ oc create -f node-exporter-metrics-monitor.yaml
15.4.4.1. Accessing the node exporter service outside the cluster
You can access the node-exporter service outside the cluster and view the exposed metrics.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
- You have installed the OpenShift CLI (oc).
Procedure
Expose the node-exporter service:
$ oc expose service -n <namespace> <node_exporter_service_name>
Obtain the FQDN (Fully Qualified Domain Name) for the route:
$ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host
Example output
NAME                    DNS
node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org
Use the curl command to display metrics for the node-exporter service:
$ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5382e-05
go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
go_gc_duration_seconds{quantile="1"} 0.000189423
15.5. Exposing downward metrics for virtual machines
As an administrator, you can expose a limited set of host and virtual machine (VM) metrics to a guest VM by first enabling a downwardMetrics feature gate and then configuring a downwardMetrics device.
Users can view the metrics results by using the command line or the vm-dump-metrics tool.
On Red Hat Enterprise Linux (RHEL) 9, use the command line to view downward metrics. See Viewing downward metrics by using the command line.
The vm-dump-metrics tool is not supported on the Red Hat Enterprise Linux (RHEL) 9 platform.
15.5.1. Enabling or disabling the downwardMetrics feature gate
You can enable or disable the downwardMetrics feature gate by performing either of the following actions:
- Editing the HyperConverged custom resource (CR) in your default editor
- Using the command line
15.5.1.1. Enabling or disabling the downward metrics feature gate in a YAML file
To expose downward metrics for a host virtual machine, you can enable the downwardMetrics feature gate by editing a YAML file.
Prerequisites
- You must have administrator privileges to enable the feature gate.
- You have installed the OpenShift CLI (oc).
Procedure
Open the HyperConverged custom resource (CR) in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Choose to enable or disable the downwardMetrics feature gate as follows:
- To enable the downwardMetrics feature gate, add and then set spec.featureGates.downwardMetrics to true.
- To disable the downwardMetrics feature gate, set spec.featureGates.downwardMetrics to false.
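The relevant stanza of the HyperConverged CR, with the feature gate enabled, looks like the following sketch:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  featureGates:
    downwardMetrics: true   # set to false to disable the feature gate
```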
15.5.1.2. Enabling or disabling the downward metrics feature gate from the CLI
To expose downward metrics for a host virtual machine, you can enable the downwardMetrics feature gate by using the command line.
Prerequisites
- You must have administrator privileges to enable the feature gate.
- You have installed the OpenShift CLI (oc).
Procedure
Choose to enable or disable the downwardMetrics feature gate as follows:
- Enable the downwardMetrics feature gate by running the command shown in the following example:
  $ oc patch hco kubevirt-hyperconverged -n openshift-cnv \
    --type json -p '[{"op": "replace", "path": "/spec/featureGates/downwardMetrics", "value": true}]'
- Disable the downwardMetrics feature gate by running the command shown in the following example:
  $ oc patch hco kubevirt-hyperconverged -n openshift-cnv \
    --type json -p '[{"op": "replace", "path": "/spec/featureGates/downwardMetrics", "value": false}]'
15.5.2. Configuring a downward metrics device
You enable the capturing of downward metrics for a host VM by creating a configuration file that includes a downwardMetrics device. Adding this device establishes that the metrics are exposed through a virtio-serial port.
Prerequisites
- You must first enable the downwardMetrics feature gate.
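A minimal sketch of a VirtualMachine template with a downwardMetrics device, assuming the KubeVirt spec.template.spec.domain.devices.downwardMetrics field and a hypothetical VM name, might look like the following:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm   # hypothetical VM name
spec:
  template:
    spec:
      domain:
        devices:
          downwardMetrics: {}   # exposes the downward metrics through a virtio-serial port
```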
15.5.3. Viewing downward metrics
You can view downward metrics by using either of the following options:
- The command-line interface (CLI)
- The vm-dump-metrics tool
On Red Hat Enterprise Linux (RHEL) 9, use the command line to view downward metrics. The vm-dump-metrics tool is not supported on the Red Hat Enterprise Linux (RHEL) 9 platform.
15.5.3.1. Viewing downward metrics by using the CLI
You can view downward metrics by entering a command from inside a guest virtual machine (VM).
Procedure
Run the following commands:
$ sudo sh -c 'printf "GET /metrics/XML\n\n" > /dev/virtio-ports/org.github.vhostmd.1'

$ sudo cat /dev/virtio-ports/org.github.vhostmd.1
15.5.3.2. Viewing downward metrics by using the vm-dump-metrics tool
To view downward metrics, install the vm-dump-metrics tool and then use the tool to expose the metrics results.
On Red Hat Enterprise Linux (RHEL) 9, use the command line to view downward metrics. The vm-dump-metrics tool is not supported on the Red Hat Enterprise Linux (RHEL) 9 platform.
Procedure
Install the vm-dump-metrics tool by running the following command:

$ sudo dnf install -y vm-dump-metrics

Retrieve the metrics results by running the following command:

$ sudo vm-dump-metrics

Example output
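The output is XML in the vhostmd metrics format. The following excerpt is illustrative only; the metric names and values shown are hypothetical:

<metrics>
  <metric type="string" context="host">
    <name>HostName</name>
    <value>node01.example.com</value>
  </metric>
  <metric type="string" context="host">
    <name>VirtualizationVendor</name>
    <value>Red Hat</value>
  </metric>
</metrics>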
15.6. Virtual machine health checks
You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource.
15.6.1. About readiness and liveness probes
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests:
- HTTP GET
- The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
- Guest agent ping
- The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
15.6.1.1. Defining an HTTP readiness probe
Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the readiness probe in the VM configuration file.
Sample readiness probe with an HTTP GET test
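The following fragment is a minimal sketch of the probe-related fields, assuming the KubeVirt probe API; surrounding VirtualMachine fields are omitted, and the numbered comments match the callouts that follow:

spec:
  readinessProbe:
    httpGet:                   # 1
      port: 1500               # 2
      path: /healthz           # 3
    initialDelaySeconds: 120   # 4
    periodSeconds: 20          # 5
    timeoutSeconds: 10         # 6
    failureThreshold: 3        # 7
    successThreshold: 3        # 8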
1. The HTTP GET request to perform to connect to the VM.
2. The port of the VM that the probe queries. In the above example, the probe queries port 1500.
3. The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
4. The time, in seconds, after the VM starts before the readiness probe is initiated.
5. The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
6. The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
7. The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
8. The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
15.6.1.2. Defining a TCP readiness probe
Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the TCP readiness probe in the VM configuration file.
Sample readiness probe with a TCP socket test
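The following fragment is a minimal sketch of the probe-related fields, assuming the KubeVirt probe API; surrounding VirtualMachine fields are omitted, and the numbered comments match the callouts that follow:

spec:
  readinessProbe:
    initialDelaySeconds: 120   # 1
    periodSeconds: 20          # 2
    tcpSocket:                 # 3
      port: 1500               # 4
    timeoutSeconds: 10         # 5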
1. The time, in seconds, after the VM starts before the readiness probe is initiated.
2. The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
3. The TCP action to perform.
4. The port of the VM that the probe queries.
5. The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
15.6.1.3. Defining an HTTP liveness probe
Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the HTTP liveness probe in the VM configuration file.
Sample liveness probe with an HTTP GET test
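The following fragment is a minimal sketch of the probe-related fields, assuming the KubeVirt probe API; surrounding VirtualMachine fields, including the cloud-init data that starts the HTTP server, are omitted, and the numbered comments match the callouts that follow:

spec:
  livenessProbe:
    initialDelaySeconds: 120   # 1
    periodSeconds: 20          # 2
    httpGet:                   # 3
      port: 1500               # 4
      path: /healthz           # 5
    timeoutSeconds: 10         # 6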
1. The time, in seconds, after the VM starts before the liveness probe is initiated.
2. The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
3. The HTTP GET request to perform to connect to the VM.
4. The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
5. The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created.
6. The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
15.6.2. Defining a watchdog
You can define a watchdog to monitor the health of the guest operating system by performing the following steps:
- Configure a watchdog device for the virtual machine (VM).
- Install the watchdog agent on the guest.
The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:
- poweroff: The VM powers down immediately. If spec.runStrategy is not set to manual, the VM reboots.
- reset: The VM reboots in place and the guest operating system cannot react.
  Note: The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.
- shutdown: The VM gracefully powers down by stopping all services.
Watchdog is not available for Windows VMs.
15.6.2.1. Configuring a watchdog device for the virtual machine
You configure a watchdog device for the virtual machine (VM).
Prerequisites
- For x86 systems, the VM must use a kernel that works with the i6300esb watchdog device. If you use the s390x architecture, the kernel must be enabled for diag288. Red Hat Enterprise Linux (RHEL) images support i6300esb and diag288.
- You have installed the OpenShift CLI (oc).
Procedure
Create a YAML file with the following contents:
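The fragment below is a minimal sketch, assuming the KubeVirt i6300esb watchdog device API; <vm_name> and <watchdog> are placeholders:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  template:
    spec:
      domain:
        devices:
          watchdog:
            name: <watchdog>
            i6300esb:
              action: poweroff  # action taken when the guest is unresponsive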
The example above configures the watchdog device on a VM with the poweroff action and exposes the device as /dev/watchdog. This device can now be used by the watchdog binary.
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
Verification
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -i

Run one of the following commands to confirm the watchdog is active:
Trigger a kernel panic:
# echo c > /proc/sysrq-trigger

Stop the watchdog service:

# pkill -9 watchdog
15.6.2.2. Installing the watchdog agent on the guest
You install the watchdog agent on the guest and start the watchdog service.
Procedure
- Log in to the virtual machine as root user.
This step is only required when installing on IBM Z® (s390x). Enable watchdog by running the following command:

# modprobe diag288_wdt

Verify that the /dev/watchdog file path is present in the VM by running the following command:

# ls /dev/watchdog

Install the watchdog package and its dependencies:

# yum install watchdog

Uncomment the following line in the /etc/watchdog.conf file and save the changes:

#watchdog-device = /dev/watchdog

Enable the watchdog service to start on boot:

# systemctl enable --now watchdog.service
15.6.3. Defining a guest agent ping probe
Define a guest agent ping probe by setting the spec.readinessProbe.guestAgentPing field of the virtual machine (VM) configuration.
The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- The QEMU guest agent must be installed and enabled on the virtual machine.
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the guest agent ping probe in the VM configuration file. For example:
Sample guest agent ping probe
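The following fragment is a minimal sketch of the probe-related fields, assuming the KubeVirt guestAgentPing probe API; surrounding VirtualMachine fields are omitted, and the numbered comments match the callouts that follow:

spec:
  readinessProbe:
    guestAgentPing: {}         # 1
    initialDelaySeconds: 120   # 2
    periodSeconds: 20          # 3
    timeoutSeconds: 10         # 4
    failureThreshold: 3        # 5
    successThreshold: 3        # 6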
1. The guest agent ping probe to connect to the VM.
2. Optional: The time, in seconds, after the VM starts before the guest agent probe is initiated.
3. Optional: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
4. Optional: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
5. Optional: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
6. Optional: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
15.7. OpenShift Virtualization runbooks
To diagnose and resolve issues that trigger OpenShift Virtualization alerts, follow the procedures in the runbooks for the OpenShift Virtualization Operator. Triggered OpenShift Virtualization alerts can be viewed in the Observe section of the OpenShift Container Platform web console.
Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub.
15.7.1. CDIDataImportCronOutdated
- View the runbook for the CDIDataImportCronOutdated alert.
15.7.2. CDIDataVolumeUnusualRestartCount
- View the runbook for the CDIDataVolumeUnusualRestartCount alert.
15.7.3. CDIDefaultStorageClassDegraded
- View the runbook for the CDIDefaultStorageClassDegraded alert.
15.7.4. CDIMultipleDefaultVirtStorageClasses
- View the runbook for the CDIMultipleDefaultVirtStorageClasses alert.
15.7.5. CDINoDefaultStorageClass
- View the runbook for the CDINoDefaultStorageClass alert.
15.7.6. CDINotReady
- View the runbook for the CDINotReady alert.
15.7.7. CDIOperatorDown
- View the runbook for the CDIOperatorDown alert.
15.7.8. CDIStorageProfilesIncomplete
- View the runbook for the CDIStorageProfilesIncomplete alert.
15.7.9. CnaoDown
- View the runbook for the CnaoDown alert.
15.7.10. CnaoNMstateMigration
- View the runbook for the CnaoNMstateMigration alert.
15.7.11. HAControlPlaneDown
- View the runbook for the HAControlPlaneDown alert.
15.7.12. HCOInstallationIncomplete
- View the runbook for the HCOInstallationIncomplete alert.
15.7.13. HCOMisconfiguredDescheduler
- View the runbook for the HCOMisconfiguredDescheduler alert.
15.7.14. HPPNotReady
- View the runbook for the HPPNotReady alert.
15.7.15. HPPOperatorDown
- View the runbook for the HPPOperatorDown alert.
15.7.16. HPPSharingPoolPathWithOS
- View the runbook for the HPPSharingPoolPathWithOS alert.
15.7.17. HighCPUWorkload
- View the runbook for the HighCPUWorkload alert.
15.7.18. KubemacpoolDown
- View the runbook for the KubemacpoolDown alert.
15.7.19. KubeMacPoolDuplicateMacsFound
- View the runbook for the KubeMacPoolDuplicateMacsFound alert.
15.7.20. KubeVirtComponentExceedsRequestedCPU
- The KubeVirtComponentExceedsRequestedCPU alert is deprecated.
15.7.21. KubeVirtComponentExceedsRequestedMemory
- The KubeVirtComponentExceedsRequestedMemory alert is deprecated.
15.7.22. KubeVirtCRModified
- View the runbook for the KubeVirtCRModified alert.
15.7.23. KubeVirtDeprecatedAPIRequested
- View the runbook for the KubeVirtDeprecatedAPIRequested alert.
15.7.24. KubeVirtNoAvailableNodesToRunVMs
- View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert.
15.7.25. KubevirtVmHighMemoryUsage
- View the runbook for the KubevirtVmHighMemoryUsage alert.
15.7.26. KubeVirtVMIExcessiveMigrations
- View the runbook for the KubeVirtVMIExcessiveMigrations alert.
15.7.27. LowKVMNodesCount
- View the runbook for the LowKVMNodesCount alert.
15.7.28. LowReadyVirtControllersCount
- View the runbook for the LowReadyVirtControllersCount alert.
15.7.29. LowReadyVirtOperatorsCount
- View the runbook for the LowReadyVirtOperatorsCount alert.
15.7.30. LowVirtAPICount
- View the runbook for the LowVirtAPICount alert.
15.7.31. LowVirtControllersCount
- View the runbook for the LowVirtControllersCount alert.
15.7.32. LowVirtOperatorCount
- View the runbook for the LowVirtOperatorCount alert.
15.7.33. NetworkAddonsConfigNotReady
- View the runbook for the NetworkAddonsConfigNotReady alert.
15.7.34. NoLeadingVirtOperator
- View the runbook for the NoLeadingVirtOperator alert.
15.7.35. NoReadyVirtController
- View the runbook for the NoReadyVirtController alert.
15.7.36. NoReadyVirtOperator
- View the runbook for the NoReadyVirtOperator alert.
15.7.37. NodeNetworkInterfaceDown
- View the runbook for the NodeNetworkInterfaceDown alert.
15.7.38. OperatorConditionsUnhealthy
- The OperatorConditionsUnhealthy alert is deprecated.
15.7.39. OrphanedVirtualMachineInstances
- View the runbook for the OrphanedVirtualMachineInstances alert.
15.7.40. OutdatedVirtualMachineInstanceWorkloads
- View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert.
15.7.41. SingleStackIPv6Unsupported
- View the runbook for the SingleStackIPv6Unsupported alert.
15.7.42. SSPCommonTemplatesModificationReverted
- View the runbook for the SSPCommonTemplatesModificationReverted alert.
15.7.43. SSPDown
- View the runbook for the SSPDown alert.
15.7.44. SSPFailingToReconcile
- View the runbook for the SSPFailingToReconcile alert.
15.7.45. SSPHighRateRejectedVms
- View the runbook for the SSPHighRateRejectedVms alert.
15.7.46. SSPOperatorDown
- View the runbook for the SSPOperatorDown alert.
15.7.47. SSPTemplateValidatorDown
- View the runbook for the SSPTemplateValidatorDown alert.
15.7.48. UnsupportedHCOModification
- View the runbook for the UnsupportedHCOModification alert.
15.7.49. VirtAPIDown
- View the runbook for the VirtAPIDown alert.
15.7.50. VirtApiRESTErrorsBurst
- View the runbook for the VirtApiRESTErrorsBurst alert.
15.7.51. VirtApiRESTErrorsHigh
- View the runbook for the VirtApiRESTErrorsHigh alert.
15.7.52. VirtControllerDown
- View the runbook for the VirtControllerDown alert.
15.7.53. VirtControllerRESTErrorsBurst
- View the runbook for the VirtControllerRESTErrorsBurst alert.
15.7.54. VirtControllerRESTErrorsHigh
- View the runbook for the VirtControllerRESTErrorsHigh alert.
15.7.55. VirtHandlerDaemonSetRolloutFailing
- View the runbook for the VirtHandlerDaemonSetRolloutFailing alert.
15.7.56. VirtHandlerRESTErrorsBurst
- View the runbook for the VirtHandlerRESTErrorsBurst alert.
15.7.57. VirtHandlerRESTErrorsHigh
- View the runbook for the VirtHandlerRESTErrorsHigh alert.
15.7.58. VirtOperatorDown
- View the runbook for the VirtOperatorDown alert.
15.7.59. VirtOperatorRESTErrorsBurst
- View the runbook for the VirtOperatorRESTErrorsBurst alert.
15.7.60. VirtOperatorRESTErrorsHigh
- View the runbook for the VirtOperatorRESTErrorsHigh alert.
15.7.61. VirtualMachineCRCErrors
- The VirtualMachineCRCErrors alert is deprecated. The alert is now called VMStorageClassWarning.
15.7.62. VMCannotBeEvicted
- View the runbook for the VMCannotBeEvicted alert.
15.7.63. VMStorageClassWarning
- View the runbook for the VMStorageClassWarning alert.