Chapter 12. Monitoring
12.1. Monitoring overview
You can monitor the health of your cluster and virtual machines (VMs) with the following tools:
- Monitoring OpenShift Virtualization VM health status
  - View the overall health of your OpenShift Virtualization environment in the web console by navigating to the Home → Overview page in the OpenShift Container Platform web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
- OpenShift Container Platform cluster checkup framework
  - Run automated tests on your cluster with the OpenShift Container Platform cluster checkup framework to check the following conditions:
    - Network connectivity and latency between two VMs attached to a secondary network interface
    - A VM running a Data Plane Development Kit (DPDK) workload with zero packet loss
- Prometheus queries for virtual resources
  - Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
- VM custom metrics
  - Configure the node-exporter service to expose internal VM metrics and processes.
- VM health checks
  - Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
- Runbooks
  - Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the OpenShift Container Platform web console.
12.2. OpenShift Virtualization cluster checkup framework
A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.
The OpenShift Virtualization cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
As a developer or cluster administrator, you can use predefined checkups to improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. You can review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.
12.2.1. Running predefined latency checkups
You can use a latency checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The predefined latency checkup uses the ping utility.
Before you run a latency checkup, you must first create a bridge interface on the cluster nodes to connect the VM’s secondary interface to any interface on the node. If you do not create a bridge interface, the VMs do not start and the job fails.
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.
You must always:
- Verify that the checkup image is from a trustworthy source before applying it.
- Review the checkup permissions before creating the Role and RoleBinding objects.
12.2.1.1. Running a latency checkup
You run a latency checkup using the CLI by performing the following steps:
- Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the latency checkup resources.
Prerequisites
- You installed the OpenShift CLI (oc).
- The cluster has at least two worker nodes.
- You configured a network attachment definition for a namespace.
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the latency checkup:

Example 12.1. Example role manifest file
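The manifest body is not preserved in this copy. The following is a hedged sketch of what such a manifest typically contains, modeled on the upstream kiagnose VM latency checkup; the service account and role names and the exact rule lists are assumptions to adapt to your environment:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vm-latency-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-vm-latency-checker
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstances"]
  verbs: ["get", "create", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachineinstances/console"]
  verbs: ["get"]
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["network-attachment-definitions"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-vm-latency-checker
subjects:
- kind: ServiceAccount
  name: vm-latency-checkup-sa
roleRef:
  kind: Role
  name: kubevirt-vm-latency-checker
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
  name: vm-latency-checkup-sa
roleRef:
  kind: Role
  name: kiagnose-configmap-access
  apiGroup: rbac.authorization.k8s.io
```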
Apply the ServiceAccount, Role, and RoleBinding manifest:

$ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml

where <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
Create a ConfigMap manifest that contains the input parameters for the checkup:

Example input config map
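The example body is missing from this copy. A hedged sketch of such an input config map, based on the upstream kiagnose latency checkup (the data keys are assumptions; the numbered comments correspond to the callout notes that follow, and the network and node names are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
  spec.param.networkAttachmentDefinitionName: blue-network   # 1
  spec.param.maxDesiredLatencyMilliseconds: "10"             # 2
  spec.param.sampleDurationSeconds: "5"                      # 3
  spec.param.sourceNode: worker1                             # 4
  spec.param.targetNode: worker2                             # 5
```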
1. The name of the NetworkAttachmentDefinition object.
2. Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
3. Optional: The duration of the latency check, in seconds.
4. Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty.
5. Optional: When specified, latency is measured from the source node to this node.
Apply the config map manifest in the target namespace:

$ oc apply -n <target_namespace> -f <latency_config_map>.yaml

Create a Job manifest to run the checkup:

Example job manifest
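The job manifest body is missing from this copy. A hedged sketch, modeled on the upstream kiagnose latency checkup job (the image reference and environment variable names are assumptions; replace the placeholder namespace):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kubevirt-vm-latency-checkup
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: vm-latency-checkup-sa
      restartPolicy: Never
      containers:
      - name: vm-latency-checkup
        # Image reference is an assumption; use the checkup image for your product version.
        image: quay.io/kiagnose/kubevirt-vm-latency-checkup:main
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        env:
        - name: CONFIGMAP_NAMESPACE
          value: <target_namespace>
        - name: CONFIGMAP_NAME
          value: kubevirt-vm-latency-checkup-config
```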
Apply the Job manifest:

$ oc apply -n <target_namespace> -f <latency_job>.yaml

Wait for the job to complete:

$ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m

Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error.

$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml

Example output config map (success)
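The output example body is missing from this copy. A hedged sketch of a successful result, based on the status keys written by the upstream kiagnose latency checkup (field names are assumptions and all values are illustrative; the numbered comment corresponds to the callout note that follows):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
  spec.param.networkAttachmentDefinitionName: blue-network
  spec.param.maxDesiredLatencyMilliseconds: "10"
  spec.param.sampleDurationSeconds: "5"
  status.succeeded: "true"
  status.failureReason: ""
  status.startTimestamp: "2023-07-27T08:59:18Z"
  status.completionTimestamp: "2023-07-27T09:00:01Z"
  status.result.avgLatencyNanoSec: "177000"
  status.result.maxLatencyNanoSec: "244000"   # 1
  status.result.minLatencyNanoSec: "135000"
  status.result.sourceNode: worker1
  status.result.targetNode: worker2
```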
1. The maximum measured latency in nanoseconds.
Optional: To view the detailed job log in case of checkup failure, use the following command:
$ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>

Delete the job and config map that you previously created by running the following commands:

$ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup

$ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config

Optional: If you do not plan to run another checkup, delete the roles manifest:

$ oc delete -f <latency_sa_roles_rolebinding>.yaml
12.2.2. Running predefined DPDK checkups
You can use a DPDK checkup to verify that a node can run a VM with a Data Plane Development Kit (DPDK) workload with zero packet loss.
12.2.2.1. DPDK checkup
Use a predefined checkup to verify that your OpenShift Container Platform cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application.
You run a DPDK checkup by performing the following steps:
- Create a service account, role, and role bindings for the DPDK checkup.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the DPDK checkup resources.
Prerequisites
- You have installed the OpenShift CLI (oc).
- The cluster is configured to run DPDK applications.
- The project is configured to run DPDK applications.
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the DPDK checkup:

Example 12.2. Example service account, role, and rolebinding manifest file
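The manifest body is not preserved in this copy. A hedged sketch of what it typically contains, modeled on the upstream kiagnose DPDK checkup; the names and rule lists are assumptions to adapt to your environment:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dpdk-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
  name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kiagnose-configmap-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-dpdk-checker
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstances"]
  verbs: ["create", "get", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachineinstances/console"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-dpdk-checker
subjects:
- kind: ServiceAccount
  name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubevirt-dpdk-checker
```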
Apply the ServiceAccount, Role, and RoleBinding manifest:

$ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml

Create a ConfigMap manifest that contains the input parameters for the checkup:

Example input config map
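The example body is missing from this copy. A hedged sketch of such an input config map, based on the upstream kiagnose DPDK checkup (the data keys and image references are assumptions; the numbered comments correspond to the callout notes that follow):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: <network_name>                                        # 1
  spec.param.trafficGenContainerDiskImage: quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:main  # 2
  spec.param.vmUnderTestContainerDiskImage: quay.io/kiagnose/kubevirt-dpdk-checkup-vm:main          # 3
```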
1. The name of the NetworkAttachmentDefinition object.
2. The container disk image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry.
3. The container disk image for the VM under test. In this example, the image is pulled from the upstream Project Quay Container Registry.
Apply the ConfigMap manifest in the target namespace:

$ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml

Create a Job manifest to run the checkup:

Example job manifest
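The job manifest body is missing from this copy. A hedged sketch, modeled on the upstream kiagnose DPDK checkup job (the image reference and environment variable names are assumptions; replace the placeholder namespace):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: dpdk-checkup
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: dpdk-checkup-sa
      restartPolicy: Never
      containers:
      - name: dpdk-checkup
        # Image reference is an assumption; use the checkup image for your product version.
        image: quay.io/kiagnose/kubevirt-dpdk-checkup:main
        imagePullPolicy: Always
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        env:
        - name: CONFIGMAP_NAMESPACE
          value: <target_namespace>
        - name: CONFIGMAP_NAME
          value: dpdk-checkup-config
```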
Apply the Job manifest:

$ oc apply -n <target_namespace> -f <dpdk_job>.yaml

Wait for the job to complete:

$ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m

Review the results of the checkup by running the following command:

$ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml

Example output config map (success)
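The output example body is missing from this copy. A hedged sketch of a successful result, based on the status keys written by the upstream kiagnose DPDK checkup (field names are assumptions and all values are illustrative; the numbered comments correspond to the callout notes that follow):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: <network_name>
  status.succeeded: "true"                               # 1
  status.failureReason: ""                               # 2
  status.startTimestamp: "2023-07-31T13:14:38Z"          # 3
  status.completionTimestamp: "2023-07-31T13:19:41Z"     # 4
  status.result.trafficGenSentPackets: "480000000"       # 5
  status.result.trafficGenOutputErrorPackets: "0"        # 6
  status.result.trafficGenInputErrorPackets: "0"         # 7
  status.result.trafficGenActualNodeName: worker-dpdk1   # 8
  status.result.vmUnderTestActualNodeName: worker-dpdk2  # 9
  status.result.vmUnderTestReceivedPackets: "480000000"  # 10
  status.result.vmUnderTestRxDroppedPackets: "0"         # 11
  status.result.vmUnderTestTxDroppedPackets: "0"         # 12
```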
1. Specifies if the checkup is successful (true) or not (false).
2. The reason for failure if the checkup fails.
3. The time when the checkup started, in RFC 3339 time format.
4. The time when the checkup has completed, in RFC 3339 time format.
5. The number of packets sent from the traffic generator.
6. The number of error packets sent from the traffic generator.
7. The number of error packets received by the traffic generator.
8. The node on which the traffic generator VM was scheduled.
9. The node on which the VM under test was scheduled.
10. The number of packets received on the VM under test.
11. The ingress traffic packets that were dropped by the DPDK application.
12. The egress traffic packets that were dropped from the DPDK application.
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> dpdk-checkup

$ oc delete configmap -n <target_namespace> dpdk-checkup-config

Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:

$ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
12.2.2.1.1. DPDK checkup config map parameters
The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup:
| Parameter | Description | Is Mandatory |
|---|---|---|
| spec.timeout | The time, in minutes, before the checkup fails. | True |
| spec.param.networkAttachmentDefinitionName | The name of the NetworkAttachmentDefinition object. | True |
| spec.param.trafficGenContainerDiskImage | The container disk image for the traffic generator. The default image is hosted in the upstream Project Quay Container Registry. | False |
| spec.param.trafficGenTargetNodeName | The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| spec.param.trafficGenPacketsPerSecond | The number of packets per second, in kilo (k) or million (m). The default value is 8m. | False |
| spec.param.vmUnderTestContainerDiskImage | The container disk image for the VM under test. The default image is hosted in the upstream Project Quay Container Registry. | False |
| spec.param.vmUnderTestTargetNodeName | The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| spec.param.testDuration | The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. | False |
| spec.param.portBandwidthGbps | The maximum bandwidth of the SR-IOV NIC. The default value is 10Gbps. | False |
| spec.param.verbose | When set to true, it increases the verbosity of the checkup log. The default value is false. | False |
12.2.2.1.2. Building a container disk image for RHEL virtual machines
You can build a custom Red Hat Enterprise Linux (RHEL) 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmContainerDiskImage attribute of the DPDK checkup config map.
To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images.
Prerequisites
- The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory.
- You have installed the image builder tool and its CLI (composer-cli) on the VM.
- You have installed the virt-customize tool:

  # dnf install libguestfs-tools

- You have installed the Podman CLI tool (podman).
Procedure
Verify that you can build a RHEL 8.7 image:
# composer-cli distros list

Note: To run the composer-cli commands as non-root, add your user to the weldr or root groups:

# usermod -a -G weldr user
$ newgrp weldr

Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:
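The blueprint body itself is not preserved in this copy. A hedged sketch of such a blueprint, written as a heredoc (the package set, kernel arguments, and disabled services are assumptions modeled on common DPDK tuning; adjust for your workload):

```shell
cat << EOF > dpdk-vm.toml
name = "dpdk_image"
description = "Image to use with the DPDK checkup"
version = "0.0.1"
distro = "rhel-87"

[[packages]]
name = "dpdk"

[[packages]]
name = "dpdk-tools"

[[packages]]
name = "driverctl"

[[packages]]
name = "tuned-profiles-cpu-partitioning"

[customizations.kernel]
append = "default_hugepagesz=1GB hugepagesz=1G hugepages=1"

[customizations.services]
disabled = ["NetworkManager-wait-online", "sshd"]
EOF
```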
Push the blueprint file to the image builder tool by running the following command:
# composer-cli blueprints push dpdk-vm.toml

Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.

# composer-cli compose start dpdk_image qcow2

Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the next step.

# composer-cli compose status

Enter the following command to download the qcow2 image file by specifying its UUID:

# composer-cli compose image <UUID>

Create the customization scripts by running the following commands:
$ cat <<EOF >customize-vm
echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf
tuned-adm profile cpu-partitioning
echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
EOF

Use the virt-customize tool to customize the image generated by the image builder tool:

$ virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabel

To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:
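The step above also requires a first-boot script (passed via --firstboot=first-boot), which is not preserved in this copy. A hedged sketch of the kind of first-boot script used for a DPDK guest; the PCI address is a placeholder that you must replace with the address of your SR-IOV device:

```shell
cat << EOF > first-boot
# Bind the DPDK NIC to the vfio-pci driver (placeholder PCI address)
driverctl set-override 0000:06:00.0 vfio-pci

# Mount 1 GiB hugepages for the DPDK application
mkdir -p /mnt/huge
mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB
EOF
```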
$ cat << EOF > Dockerfile
FROM scratch
COPY <uuid>-disk.qcow2 /disk/
EOF

where:

- <uuid>-disk.qcow2: Specifies the name of the custom image in qcow2 format.
Build and tag the container by running the following command:
$ podman build . -t dpdk-rhel:latest

Push the container disk image to a registry that is accessible from your cluster by running the following command:

$ podman push dpdk-rhel:latest

Provide a link to the container disk image in the spec.param.vmContainerDiskImage attribute in the DPDK checkup config map.
12.3. Prometheus queries for virtual resources
OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including vCPU, network, storage, and guest memory swapping. You can also use metrics to query live migration status.
12.3.1. Prerequisites
- To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. For more information, see Adding kernel arguments to nodes.
- For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
12.3.2. Querying metrics for all projects with the OpenShift Container Platform web console
You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.
As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects.
- You have installed the OpenShift CLI (oc).
Procedure
- From the Administrator perspective in the OpenShift Container Platform web console, select Observe → Metrics.
- To add one or more queries, do any of the following:
| Option | Description |
|---|---|
| Create a custom query. | Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item. |
| Add multiple queries. | Select Add query. |
| Duplicate an existing query. | Select the Options menu next to the query, then choose Duplicate query. |
| Disable a query from being run. | Select the Options menu next to the query and choose Disable query. |
To run queries that you created, select Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note: Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.

Note: By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following:
| Option | Description |
|---|---|
| Hide all metrics from a query. | Click the Options menu for the query and click Hide all series. |
| Hide a specific metric. | Go to the query table and click the colored square near the metric name. |
| Zoom into the plot and change the time range. | Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu in the upper left corner to select the time range. |
| Reset the time range. | Select Reset zoom. |
| Display outputs for all queries at a specific point in time. | Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. |
| Hide the plot. | Select Hide graph. |
12.3.3. Querying metrics for user-defined projects with the OpenShift Container Platform web console
You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project.
Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
- You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.
Procedure
- From the Developer perspective in the OpenShift Container Platform web console, select Observe → Metrics.
- Select the project that you want to view metrics for from the Project: list.
Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL. The metrics from the queries are visualized on the plot.
Note: In the Developer perspective, you can only run one query at a time.
Explore the visualized metrics by doing any of the following:
| Option | Description |
|---|---|
| Zoom into the plot and change the time range. | Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu in the upper left corner to select the time range. |
| Reset the time range. | Select Reset zoom. |
| Display outputs for all queries at a specific point in time. | Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. |
12.3.4. Virtualization metrics
The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics.
The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output.
12.3.4.1. vCPU metrics
The following query can identify virtual machines that are waiting for Input/Output (I/O):
kubevirt_vmi_vcpu_wait_seconds_total - Returns the wait time (in seconds) on I/O for vCPUs of a virtual machine. Type: Counter.
A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.
To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler.
Example vCPU wait time query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0

This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.
12.3.4.2. Network metrics
The following queries can identify virtual machines that are saturating the network:
kubevirt_vmi_network_receive_bytes_total - Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total - Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.
Example network traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0

This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.
12.3.4.3. Storage metrics
12.3.4.3.1. Storage-related traffic
The following queries can identify VMs that are writing large amounts of data:
kubevirt_vmi_storage_read_traffic_bytes_total - Returns the total amount (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total - Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
Example storage-related traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0

This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
12.3.4.3.2. Storage snapshot data
kubevirt_vmsnapshot_disks_restored_from_source_total - Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes - Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.
Examples of storage snapshot data queries
kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name="simple-vm", vm_namespace="default"}

This query returns the total number of virtual machine disks restored from the source virtual machine.

kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"}

This query returns the amount of space in bytes restored from the source virtual machine.
12.3.4.3.3. I/O performance
The following queries can determine the I/O performance of storage devices:
kubevirt_vmi_storage_iops_read_total - Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total - Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter.
Example I/O performance query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0

This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.
12.3.4.4. Guest memory swapping metrics
The following queries can identify which swap-enabled guests are performing the most memory swapping:
kubevirt_vmi_memory_swap_in_traffic_bytes_total - Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes_total - Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.
Example memory swapping query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0

This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
12.3.4.5. Live migration metrics
The following metrics can be queried to show live migration status:
kubevirt_migrate_vmi_data_processed_bytes - The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_migrate_vmi_data_remaining_bytes - The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_migrate_vmi_dirty_memory_rate_bytes - The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_migrate_vmi_pending_count - The number of pending migrations. Type: Gauge.
kubevirt_migrate_vmi_scheduling_count - The number of scheduling migrations. Type: Gauge.
kubevirt_migrate_vmi_running_count - The number of running migrations. Type: Gauge.
kubevirt_migrate_vmi_succeeded - The number of successfully completed migrations. Type: Gauge.
kubevirt_migrate_vmi_failed - The number of failed migrations. Type: Gauge.
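Unlike the preceding sections, no example query survives here. As an illustrative sketch (an assumption, not part of the original document), the processed and remaining byte gauges listed above can be combined to estimate per-VM migration progress:

Example live migration progress query (illustrative)

    kubevirt_migrate_vmi_data_processed_bytes / (kubevirt_migrate_vmi_data_processed_bytes + kubevirt_migrate_vmi_data_remaining_bytes)

The ratio approaches 1 as a migration nears completion; a value that stalls while kubevirt_migrate_vmi_dirty_memory_rate_bytes stays high suggests the guest is dirtying memory faster than it can be transferred.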
12.4. Exposing custom metrics for virtual machines
OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.
In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.
12.4.1. Configuring the node exporter service
The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
- Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
- Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.
Procedure
Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.
- 1
- The node-exporter service that exposes the metrics from the virtual machines.
- 2
- The namespace where the service is created.
- 3
- The label for the service. The ServiceMonitor uses this label to match this service.
- 4
- The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
- 5
- The target port used by node-exporter-service to listen for requests.
- 6
- The TCP port number of the virtual machine that is configured with the monitor label.
- 7
- The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics is matched.
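The Service manifest that the callouts above annotate did not survive extraction. A sketch consistent with them might look like the following; the dynamation namespace is an illustrative assumption taken from the route output later in this chapter, and the exmet port name matches the ServiceMonitor example:

    kind: Service
    apiVersion: v1
    metadata:
      name: node-exporter-service  # 1
      namespace: dynamation        # 2
      labels:
        servicetype: metrics       # 3
    spec:
      ports:
        - name: exmet              # 4
          protocol: TCP
          port: 9100               # 5
          targetPort: 9100         # 6
      type: ClusterIP
      selector:
        monitor: metrics           # 7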
Create the node-exporter service:
$ oc create -f node-exporter-service.yaml
12.4.2. Configuring a virtual machine with the node exporter service
Download the node-exporter file onto the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.
Prerequisites
- The pods for the component are running in the openshift-user-workload-monitoring project.
- Grant the monitoring-edit role to users who need to monitor this user-defined project.
Procedure
- Log on to the virtual machine.
- Download the node-exporter file onto the virtual machine by using the directory path that applies to the version of the node-exporter file.
$ wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
- Extract the executable and place it in the /usr/bin directory.
$ sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \
    --directory /usr/bin --strip 1 "*/node_exporter"
- Create a node_exporter.service file in the /etc/systemd/system directory. This systemd service file runs the node-exporter service when the virtual machine reboots.
- Enable and start the systemd service.
$ sudo systemctl enable node_exporter.service
$ sudo systemctl start node_exporter.service
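The contents of the node_exporter.service unit file referenced above did not survive extraction. A minimal sketch, assuming the binary is at /usr/bin/node_exporter and default node_exporter options:

    [Unit]
    Description=Prometheus node_exporter service

    [Service]
    User=root
    ExecStart=/usr/bin/node_exporter

    [Install]
    WantedBy=multi-user.target

With this unit in place, systemctl enable ensures the service starts on every reboot of the virtual machine.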
Verification
Verify that the node-exporter agent is reporting metrics from the virtual machine.
$ curl http://localhost:9100/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5244e-05
go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
12.4.3. Creating a custom monitoring label for virtual machines
To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in as a user with cluster-admin privileges.
- Access to the web console to stop and restart a virtual machine.
Procedure
Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.
spec:
  template:
    metadata:
      labels:
        monitor: metrics
- Stop and restart the virtual machine to create a new pod with the label name given to the monitor label.
12.4.3.1. Querying the node-exporter service for metrics
Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Obtain the HTTP service endpoint by specifying the namespace for the service:
$ oc get service -n <namespace> <node-exporter-service>
To list all available metrics for the node-exporter service, query the metrics resource.
$ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"
12.4.4. Creating a ServiceMonitor resource for the node exporter service
You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds.
Create the ServiceMonitor configuration for the node-exporter service.
$ oc create -f node-exporter-metrics-monitor.yaml
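The ServiceMonitor manifest referenced above is missing from this extraction. A sketch consistent with the description (a 30-second interval on the exmet port); the namespace and the servicetype selector label are illustrative assumptions carried over from the node-exporter service example:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: node-exporter-metrics-monitor
      namespace: dynamation  # assumption: same example namespace as the service
    spec:
      endpoints:
        - interval: 30s
          port: exmet
          scheme: http
      selector:
        matchLabels:
          servicetype: metrics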
12.4.4.1. Accessing the node exporter service outside the cluster
You can access the node-exporter service outside the cluster and view the exposed metrics.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Expose the node-exporter service.
$ oc expose service -n <namespace> <node_exporter_service_name>
Obtain the FQDN (Fully Qualified Domain Name) for the route.
$ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host
Example output
NAME                    DNS
node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org
Use the curl command to display metrics for the node-exporter service.
$ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5382e-05
go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
go_gc_duration_seconds{quantile="1"} 0.000189423
12.5. Virtual machine health checks
You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource.
12.5.1. About readiness and liveness probes
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests:
- HTTP GET
- The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
- Guest agent ping
- The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
12.5.1.1. Defining an HTTP readiness probe
Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration.
Procedure
Include details of the readiness probe in the VM configuration file.
Sample readiness probe with an HTTP GET test
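The sample manifest did not survive extraction. A sketch of the relevant VirtualMachine spec fragment, consistent with the numbered callouts below; apart from port 1500 and the /healthz path, which the callouts name, the timing values are illustrative assumptions:

    # Fragment of a VirtualMachine spec; callout numbers shown as comments.
    spec:
      template:
        spec:
          readinessProbe:
            httpGet:                  # 1
              port: 1500              # 2
              path: /healthz          # 3
            initialDelaySeconds: 120  # 4
            periodSeconds: 20         # 5
            timeoutSeconds: 10        # 6
            failureThreshold: 3       # 7
            successThreshold: 3       # 8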
- 1
- The HTTP GET request to perform to connect to the VM.
- 2
- The port of the VM that the probe queries. In the above example, the probe queries port 1500.
- 3
- The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
- 4
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 5
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
- 7
- The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
- 8
- The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.5.1.2. Defining a TCP readiness probe
Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration.
Procedure
Include details of the TCP readiness probe in the VM configuration file.
Sample readiness probe with a TCP socket test
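The sample manifest did not survive extraction. A sketch of the relevant VirtualMachine spec fragment, consistent with the numbered callouts below; the specific timing values and port are illustrative assumptions:

    # Fragment of a VirtualMachine spec; callout numbers shown as comments.
    spec:
      template:
        spec:
          readinessProbe:
            initialDelaySeconds: 120  # 1
            periodSeconds: 20         # 2
            tcpSocket:                # 3
              port: 1500              # 4
            timeoutSeconds: 10        # 5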
- 1
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 3
- The TCP action to perform.
- 4
- The port of the VM that the probe queries.
- 5
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.5.1.3. Defining an HTTP liveness probe
Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.
Procedure
Include details of the HTTP liveness probe in the VM configuration file.
Sample liveness probe with an HTTP GET test
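The sample manifest did not survive extraction. A sketch of the relevant VirtualMachine spec fragment, consistent with the numbered callouts below; apart from port 1500 and the /healthz path, which the callouts name, the timing values are illustrative assumptions:

    # Fragment of a VirtualMachine spec; callout numbers shown as comments.
    spec:
      template:
        spec:
          livenessProbe:
            initialDelaySeconds: 120  # 1
            periodSeconds: 20         # 2
            httpGet:                  # 3
              port: 1500              # 4
              path: /healthz          # 5
            timeoutSeconds: 10        # 6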
- 1
- The time, in seconds, after the VM starts before the liveness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 3
- The HTTP GET request to perform to connect to the VM.
- 4
- The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
- 5
- The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created.
- 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.5.2. Defining a watchdog
You can define a watchdog to monitor the health of the guest operating system by performing the following steps:
- Configure a watchdog device for the virtual machine (VM).
- Install the watchdog agent on the guest.
The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:
- poweroff: The VM powers down immediately. If spec.running is set to true, or if spec.runStrategy is not set to manual, then the VM reboots.
- reset: The VM reboots in place and the guest operating system cannot react.
Note: The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.
- shutdown: The VM gracefully powers down by stopping all services.
Watchdog is not available for Windows VMs.
12.5.2.1. Configuring a watchdog device for the virtual machine
You configure a watchdog device for the virtual machine (VM).
Prerequisites
- The VM must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb.
Procedure
Create a YAML file with the following contents:
- 1
- Specify poweroff, reset, or shutdown.
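The manifest referenced above did not survive extraction. A sketch consistent with the callout and the surrounding description; the VM name and the watchdog device name are illustrative assumptions:

    # Fragment of a VirtualMachine manifest configuring an i6300esb watchdog.
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: <vm_name>
    spec:
      template:
        spec:
          domain:
            devices:
              watchdog:
                name: mywatchdog        # assumption: illustrative device name
                i6300esb:
                  action: "poweroff"    # 1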
The example above configures the i6300esb watchdog device on a RHEL 8 VM with the poweroff action and exposes the device as /dev/watchdog. This device can now be used by the watchdog binary.
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
Verification
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -i
Run one of the following commands to confirm the watchdog is active:
- Trigger a kernel panic:
# echo c > /proc/sysrq-trigger
- Stop the watchdog service:
# pkill -9 watchdog
12.5.2.2. Installing the watchdog agent on the guest
You install the watchdog agent on the guest and start the watchdog service.
Procedure
- Log in to the virtual machine as root user.
Install the watchdog package and its dependencies:
# yum install watchdog
Uncomment the following line in the /etc/watchdog.conf file and save the changes:
#watchdog-device = /dev/watchdog
Enable the watchdog service to start on boot:
# systemctl enable --now watchdog.service
12.5.3. Defining a guest agent ping probe
Define a guest agent ping probe by setting the spec.readinessProbe.guestAgentPing field of the virtual machine (VM) configuration.
The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- The QEMU guest agent must be installed and enabled on the virtual machine.
Procedure
Include details of the guest agent ping probe in the VM configuration file. For example:
Sample guest agent ping probe
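The sample manifest did not survive extraction. A sketch of the relevant VirtualMachine spec fragment, consistent with the numbered callouts below; the timing values are illustrative assumptions (periodSeconds kept greater than timeoutSeconds, as the callouts require):

    # Fragment of a VirtualMachine spec; callout numbers shown as comments.
    spec:
      template:
        spec:
          readinessProbe:
            guestAgentPing: {}        # 1
            initialDelaySeconds: 120  # 2
            periodSeconds: 10         # 3
            timeoutSeconds: 5         # 4
            failureThreshold: 3       # 5
            successThreshold: 3       # 6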
- 1
- The guest agent ping probe to connect to the VM.
- 2
- Optional: The time, in seconds, after the VM starts before the guest agent probe is initiated.
- 3
- Optional: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 4
- Optional: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
- 5
- Optional: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
- 6
- Optional: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.6. OpenShift Virtualization runbooks
Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub. To diagnose and resolve issues that trigger OpenShift Virtualization alerts, follow the procedures in the runbooks.
OpenShift Virtualization alerts are displayed on the Virtualization → Overview page of the web console.
12.6.1. CDIDataImportCronOutdated
- View the runbook for the CDIDataImportCronOutdated alert.
12.6.2. CDIDataVolumeUnusualRestartCount
- View the runbook for the CDIDataVolumeUnusualRestartCount alert.
12.6.3. CDIDefaultStorageClassDegraded
- View the runbook for the CDIDefaultStorageClassDegraded alert.
12.6.4. CDIMultipleDefaultVirtStorageClasses
- View the runbook for the CDIMultipleDefaultVirtStorageClasses alert.
12.6.5. CDINoDefaultStorageClass
- View the runbook for the CDINoDefaultStorageClass alert.
12.6.6. CDINotReady
- View the runbook for the CDINotReady alert.
12.6.7. CDIOperatorDown
- View the runbook for the CDIOperatorDown alert.
12.6.8. CDIStorageProfilesIncomplete
- View the runbook for the CDIStorageProfilesIncomplete alert.
12.6.9. CnaoDown
- View the runbook for the CnaoDown alert.
12.6.10. CnaoNMstateMigration
- View the runbook for the CnaoNMstateMigration alert.
12.6.11. HCOInstallationIncomplete
- View the runbook for the HCOInstallationIncomplete alert.
12.6.12. HPPNotReady
- View the runbook for the HPPNotReady alert.
12.6.13. HPPOperatorDown
- View the runbook for the HPPOperatorDown alert.
12.6.14. HPPSharingPoolPathWithOS
- View the runbook for the HPPSharingPoolPathWithOS alert.
12.6.15. KubemacpoolDown
- View the runbook for the KubemacpoolDown alert.
12.6.16. KubeMacPoolDuplicateMacsFound
- View the runbook for the KubeMacPoolDuplicateMacsFound alert.
12.6.17. KubeVirtComponentExceedsRequestedCPU
- The KubeVirtComponentExceedsRequestedCPU alert is deprecated.
12.6.18. KubeVirtComponentExceedsRequestedMemory
- The KubeVirtComponentExceedsRequestedMemory alert is deprecated.
12.6.19. KubeVirtCRModified
- View the runbook for the KubeVirtCRModified alert.
12.6.20. KubeVirtDeprecatedAPIRequested
- View the runbook for the KubeVirtDeprecatedAPIRequested alert.
12.6.21. KubeVirtNoAvailableNodesToRunVMs
- View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert.
12.6.22. KubevirtVmHighMemoryUsage
- View the runbook for the KubevirtVmHighMemoryUsage alert.
12.6.23. KubeVirtVMIExcessiveMigrations
- View the runbook for the KubeVirtVMIExcessiveMigrations alert.
12.6.24. LowKVMNodesCount
- View the runbook for the LowKVMNodesCount alert.
12.6.25. LowReadyVirtControllersCount
- View the runbook for the LowReadyVirtControllersCount alert.
12.6.26. LowReadyVirtOperatorsCount
- View the runbook for the LowReadyVirtOperatorsCount alert.
12.6.27. LowVirtAPICount
- View the runbook for the LowVirtAPICount alert.
12.6.28. LowVirtControllersCount
- View the runbook for the LowVirtControllersCount alert.
12.6.29. LowVirtOperatorCount
- View the runbook for the LowVirtOperatorCount alert.
12.6.30. NetworkAddonsConfigNotReady
- View the runbook for the NetworkAddonsConfigNotReady alert.
12.6.31. NoLeadingVirtOperator
- View the runbook for the NoLeadingVirtOperator alert.
12.6.32. NoReadyVirtController
- View the runbook for the NoReadyVirtController alert.
12.6.33. NoReadyVirtOperator
- View the runbook for the NoReadyVirtOperator alert.
12.6.34. OrphanedVirtualMachineInstances
- View the runbook for the OrphanedVirtualMachineInstances alert.
12.6.35. OutdatedVirtualMachineInstanceWorkloads
- View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert.
12.6.36. SingleStackIPv6Unsupported
- View the runbook for the SingleStackIPv6Unsupported alert.
12.6.37. SSPCommonTemplatesModificationReverted
- View the runbook for the SSPCommonTemplatesModificationReverted alert.
12.6.38. SSPDown
- View the runbook for the SSPDown alert.
12.6.39. SSPFailingToReconcile
- View the runbook for the SSPFailingToReconcile alert.
12.6.40. SSPHighRateRejectedVms
- View the runbook for the SSPHighRateRejectedVms alert.
12.6.41. SSPTemplateValidatorDown
- View the runbook for the SSPTemplateValidatorDown alert.
12.6.42. UnsupportedHCOModification
- View the runbook for the UnsupportedHCOModification alert.
12.6.43. VirtAPIDown
- View the runbook for the VirtAPIDown alert.
12.6.44. VirtApiRESTErrorsBurst
- View the runbook for the VirtApiRESTErrorsBurst alert.
12.6.45. VirtApiRESTErrorsHigh
- View the runbook for the VirtApiRESTErrorsHigh alert.
12.6.46. VirtControllerDown
- View the runbook for the VirtControllerDown alert.
12.6.47. VirtControllerRESTErrorsBurst
- View the runbook for the VirtControllerRESTErrorsBurst alert.
12.6.48. VirtControllerRESTErrorsHigh
- View the runbook for the VirtControllerRESTErrorsHigh alert.
12.6.49. VirtHandlerDaemonSetRolloutFailing
- View the runbook for the VirtHandlerDaemonSetRolloutFailing alert.
12.6.50. VirtHandlerRESTErrorsBurst
- View the runbook for the VirtHandlerRESTErrorsBurst alert.
12.6.51. VirtHandlerRESTErrorsHigh
- View the runbook for the VirtHandlerRESTErrorsHigh alert.
12.6.52. VirtOperatorDown
- View the runbook for the VirtOperatorDown alert.
12.6.53. VirtOperatorRESTErrorsBurst
- View the runbook for the VirtOperatorRESTErrorsBurst alert.
12.6.54. VirtOperatorRESTErrorsHigh
- View the runbook for the VirtOperatorRESTErrorsHigh alert.
12.6.55. VirtualMachineCRCErrors
The runbook for the VirtualMachineCRCErrors alert is deprecated because the alert was renamed to VMStorageClassWarning.
- View the runbook for the VMStorageClassWarning alert.
12.6.56. VMCannotBeEvicted
- View the runbook for the VMCannotBeEvicted alert.
12.6.57. VMStorageClassWarning
- View the runbook for the VMStorageClassWarning alert.