Chapter 13. Monitoring


13.1. Monitoring overview

Monitor the health of your cluster and virtual machines (VMs) to have a unified operational view of your environment. This ensures high availability and optimal resource performance.

You can monitor the health of your cluster and VMs with the following tools:

Monitoring OpenShift Virtualization VM health status
View the overall health of your OpenShift Virtualization environment by navigating to the Home Overview page in the OpenShift Dedicated web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
Prometheus queries for virtual resources
Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
VM custom metrics
Configure the node-exporter service to expose internal VM metrics and processes.
VM health checks
Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
Runbooks
Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the OpenShift Dedicated web console.

13.2. Prometheus queries for virtual resources

Monitor the consumption of cluster infrastructure resources using the metrics provided by OpenShift Virtualization. These metrics are also used to query live migration status.

Note
  • For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.

13.2.1. Querying metrics for all projects as a cluster administrator

Monitor the state of a cluster and any user-defined workloads by using the OpenShift Dedicated metrics query browser. The query browser uses Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot.

As a dedicated-admin or as a user with view permissions for all projects, you can access metrics for all default OpenShift Dedicated and user-defined projects in the Metrics UI.

Note

Only dedicated administrators have access to the third-party UIs provided with OpenShift Dedicated monitoring.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role or with view permissions for all projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. In the OpenShift Dedicated web console, click Observe Metrics.
  2. To add one or more queries, perform any of the following actions:


    Select an existing query.

    From the Select query drop-down list, select an existing query.

    Create a custom query.

    Add your Prometheus Query Language (PromQL) query to the Expression field.

    As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item.

    Add multiple queries.

    Click Add query.

    Duplicate an existing query.

    Click the options menu kebab next to the query, then choose Duplicate query.

    Disable a query from being run.

    Click the options menu kebab next to the query and choose Disable query.

  3. To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.

    Note
    • When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
    • By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
  4. Optional: Save the page URL to use this set of queries again in the future.
  5. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:


    Hide all metrics from a query.

    Click the options menu kebab for the query and click Hide all series.

    Hide a specific metric.

    Go to the query table and click the colored square near the metric name.

    Zoom into the plot and change the time range.

    Perform one of the following actions:

    • Visually select the time range by clicking and dragging on the plot horizontally.
    • Use the menu to select the time range.

    Reset the time range.

    Click Reset zoom.

    Display outputs for all queries at a specific point in time.

    Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box.

    Hide the plot.

    Click Hide graph.
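As an illustrative custom query to enter in the Expression field (a sketch, not part of the procedure; the virt-launcher pod name pattern is an assumption about how VM pods are named), the following PromQL expression shows the per-namespace CPU usage rate of VM pods over five minutes:

```
sum by (namespace) (rate(container_cpu_usage_seconds_total{pod=~"virt-launcher-.*"}[5m]))
```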

13.2.2. Querying metrics for user-defined projects as a developer

Monitor user-defined workloads by using the OpenShift Dedicated metrics query browser. The query browser uses Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot.

As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.

Note

Developers cannot access the third-party UIs provided with OpenShift Dedicated monitoring.

Prerequisites

  • You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
  • You have enabled monitoring for user-defined projects.
  • You have deployed a service in a user-defined project.
  • You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.

Procedure

  1. In the OpenShift Dedicated web console, click Observe Metrics.
  2. To add one or more queries, perform any of the following actions:


    Select an existing query.

    From the Select query drop-down list, select an existing query.

    Create a custom query.

    Add your Prometheus Query Language (PromQL) query to the Expression field.

    As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item.

    Add multiple queries.

    Click Add query.

    Duplicate an existing query.

    Click the options menu kebab next to the query, then choose Duplicate query.

    Disable a query from being run.

    Click the options menu kebab next to the query and choose Disable query.

  3. To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.

    Note
    • When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
    • By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
  4. Optional: Save the page URL to use this set of queries again in the future.
  5. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:


    Hide all metrics from a query.

    Click the options menu kebab for the query and click Hide all series.

    Hide a specific metric.

    Go to the query table and click the colored square near the metric name.

    Zoom into the plot and change the time range.

    Perform one of the following actions:

    • Visually select the time range by clicking and dragging on the plot horizontally.
    • Use the menu to select the time range.

    Reset the time range.

    Click Reset zoom.

    Display outputs for all queries at a specific point in time.

    Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box.

    Hide the plot.

    Click Hide graph.

13.2.3. Virtualization metrics

The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics.

Note

The following examples use topk queries that specify a time period. If virtual machines (VMs) are deleted during that time period, they can still appear in the query output.

13.2.3.1. Network metrics

The following queries can identify virtual machines that are saturating the network:

kubevirt_vmi_network_receive_bytes_total
Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total
Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.

Example network traffic query

The following query returns the top 3 VMs with the most combined network traffic, received and transmitted, at every given moment over a six-minute time period:

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0
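Because both metrics are counters, you can also rank a single traffic direction. For example, this variant of the query above (a sketch built from the same metric) returns the top 3 VMs by received traffic only:

```
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m]))) > 0
```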

13.2.3.2. Storage metrics

You can monitor virtual machine storage traffic and identify high-traffic VMs by using Prometheus queries.

The following queries can identify VMs that are writing large amounts of data:

kubevirt_vmi_storage_read_traffic_bytes_total
Returns the total amount of storage reads (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total
Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.

Example storage-related traffic queries

  • The following query returns the top 3 VMs generating the most storage traffic at every given moment over a six-minute time period:

    topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0
  • The following query returns the top 3 VMs with the highest average read latency at every given moment over a six-minute time period:

    topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_times_seconds_total[6m]) / rate(kubevirt_vmi_storage_iops_read_total[6m]) > 0)) > 0

The following queries can track data restored from storage snapshots:

kubevirt_vmsnapshot_disks_restored_from_source
Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes
Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.

Examples of storage snapshot data queries

  • The following query returns the total number of virtual machine disks restored from the source virtual machine:

    kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"}
  • The following query returns the amount of space in bytes restored from the source virtual machine:

    kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"}

The following queries can determine the I/O performance of storage devices:

kubevirt_vmi_storage_iops_read_total
Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total
Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter.

Example I/O performance query

The following query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period:

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0

13.2.3.3. Guest memory swapping metrics

The following queries can identify which swap-enabled guests are performing the most memory swapping:

kubevirt_vmi_memory_swap_in_traffic_bytes
Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes
Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.

Example memory swapping query

The following query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period:

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0
Note

Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.

13.2.3.4. Monitoring AAQ operator metrics

The following metrics are exposed by the Application Aware Quota (AAQ) controller for monitoring resource quotas:

kube_application_aware_resourcequota
Returns the current quota usage and the CPU and memory limits enforced by the AAQ Operator resources. Type: Gauge.
kube_application_aware_resourcequota_creation_timestamp
Returns the time, in UNIX timestamp format, when the AAQ Operator resource was created. Type: Gauge.
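As an illustrative query (a sketch; the namespace, resource, and type label names are assumptions modeled on the upstream kube_resourcequota metric), you could inspect the current memory request usage tracked by AAQ in a namespace:

```
kube_application_aware_resourcequota{namespace="default", resource="requests.memory", type="used"}
```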

13.2.3.5. VM label metrics

kubevirt_vm_labels

Returns virtual machine labels as Prometheus labels. Type: Gauge.

You can expose and ignore specific labels by editing the kubevirt-vm-labels-config config map. After you apply the config map to your cluster, the configuration is loaded dynamically.

Example config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-labels-config
  namespace: openshift-cnv
data:
  allowlist: "*"
  ignorelist: ""
  • data.allowlist specifies labels to expose.

    • If data.allowlist has a value of "*", all labels are included.
    • If data.allowlist has a value of "", the metric does not return any labels.
    • If data.allowlist contains a list of label keys, only the explicitly named labels are exposed. For example: allowlist: "example.io/name,example.io/version".
  • data.ignorelist specifies labels to ignore. The ignore list overrides the allow list.

    • The data.ignorelist field does not support wildcard patterns. It can be empty or include a list of specific labels to ignore.
    • If data.ignorelist has a value of "", no labels are ignored.
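For example, the following config map (with illustrative label keys) exposes only example.io/name and example.io/version, and then ignores example.io/version, so only example.io/name is reported by the metric:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-labels-config
  namespace: openshift-cnv
data:
  allowlist: "example.io/name,example.io/version"
  ignorelist: "example.io/version"
```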

13.2.3.6. Live migration metrics

The following metrics can be queried to show live migration status.

kubevirt_vmi_migration_data_processed_bytes
The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_vmi_migration_data_remaining_bytes
The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_vmi_migration_memory_transfer_rate_bytes
The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_vmi_migrations_in_pending_phase
The number of pending migrations. Type: Gauge.
kubevirt_vmi_migrations_in_scheduling_phase
The number of scheduling migrations. Type: Gauge.
kubevirt_vmi_migrations_in_running_phase
The number of running migrations. Type: Gauge.
kubevirt_vmi_migration_succeeded
The number of successfully completed migrations. Type: Gauge.
kubevirt_vmi_migration_failed
The number of failed migrations. Type: Gauge.
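As a sketch that combines these metrics, the following PromQL expression estimates the completion percentage of an in-progress live migration for each VM. When no migration is in progress, the series are absent and the expression returns no data:

```
100 * kubevirt_vmi_migration_data_processed_bytes / (kubevirt_vmi_migration_data_processed_bytes + kubevirt_vmi_migration_data_remaining_bytes)
```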

13.3. Exposing custom metrics for virtual machines

Monitor core platform components using the OpenShift Dedicated monitoring stack based on the Prometheus monitoring system. Additionally, enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.

13.3.1. Configuring the node exporter service

The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in to the cluster as a user with cluster-admin privileges.
  • Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
  • Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.

Procedure

  1. Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.

    kind: Service
    apiVersion: v1
    metadata:
      name: node-exporter-service
      namespace: dynamation
      labels:
        servicetype: metrics
    spec:
      ports:
        - name: exmet
          protocol: TCP
          port: 9100
          targetPort: 9100
      type: ClusterIP
      selector:
        monitor: metrics
    • metadata.name defines the node-exporter service that exposes the metrics from the virtual machines.
    • metadata.namespace defines the namespace where the service is created.
    • metadata.labels.servicetype defines the label for the service. The ServiceMonitor uses this label to match this service.
    • spec.ports.name defines the name given to the port that exposes metrics on port 9100 for the ClusterIP service.
    • spec.ports.port defines the port on which the node-exporter service listens for requests.
    • spec.ports.targetPort defines the TCP port number on the virtual machine pod, configured with the monitor label, to which the service forwards traffic.
    • spec.selector.monitor defines the label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics will be matched.
  2. Create the node-exporter service:

    $ oc create -f node-exporter-service.yaml

13.3.2. Configuring a virtual machine with the node exporter service

Download the node-exporter file onto the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.

Prerequisites

  • The pods for the component are running in the openshift-user-workload-monitoring project.
  • Grant the monitoring-edit role to users who need to monitor this user-defined project.

Procedure

  1. Log on to the virtual machine.
  2. Download the node-exporter archive onto the virtual machine, substituting the version and architecture that apply to your environment.

    $ wget https://github.com/prometheus/node_exporter/releases/download/<version>/node_exporter-<version>.linux-<architecture>.tar.gz
  3. Extract the executable and place it in the /usr/bin directory.

    $ sudo tar xvf node_exporter-<version>.linux-<architecture>.tar.gz \
        --directory /usr/bin --strip 1 "*/node_exporter"
  4. Create a node_exporter.service file in this directory path: /etc/systemd/system. This systemd service file runs the node-exporter service when the virtual machine reboots.

    [Unit]
    Description=Prometheus Metrics Exporter
    After=network.target
    StartLimitIntervalSec=0
    
    [Service]
    Type=simple
    Restart=always
    RestartSec=1
    User=root
    ExecStart=/usr/bin/node_exporter
    
    [Install]
    WantedBy=multi-user.target
  5. Enable and start the systemd service.

    $ sudo systemctl enable node_exporter.service
    $ sudo systemctl start node_exporter.service

Verification

  • Verify that the node-exporter agent is reporting metrics from the virtual machine.

    $ curl http://localhost:9100/metrics

    Example output:

    go_gc_duration_seconds{quantile="0"} 1.5244e-05
    go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
    go_gc_duration_seconds{quantile="0.5"} 3.7913e-05

13.3.3. Creating a custom monitoring label for virtual machines

To enable queries to multiple virtual machines from a single service, you can add a custom label in the virtual machine’s YAML file.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.
  • You have access to the web console to stop and restart a virtual machine.

Procedure

  1. Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.

    spec:
      template:
        metadata:
          labels:
            monitor: metrics
  2. Stop and restart the virtual machine to create a new pod with the monitor label.

13.3.4. Querying the node-exporter service for metrics

Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Obtain the HTTP service endpoint by specifying the namespace for the service:

    $ oc get service -n <namespace> <node-exporter-service>
  2. To list all available metrics for the node-exporter service, query the metrics resource.

    $ curl http://<service_ip>:9100/metrics | grep -vE "^#|^$"

    Example output:

    node_arp_entries{device="eth0"} 1
    node_boot_time_seconds 1.643153218e+09
    node_context_switches_total 4.4938158e+07
    node_cooling_device_cur_state{name="0",type="Processor"} 0
    node_cooling_device_max_state{name="0",type="Processor"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
    node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
    node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
    node_cpu_seconds_total{cpu="0",mode="irq"} 233.91
    node_cpu_seconds_total{cpu="0",mode="nice"} 551.47
    node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3
    node_cpu_seconds_total{cpu="0",mode="steal"} 86.12
    node_cpu_seconds_total{cpu="0",mode="system"} 464.15
    node_cpu_seconds_total{cpu="0",mode="user"} 1075.2
    node_disk_discard_time_seconds_total{device="vda"} 0
    node_disk_discard_time_seconds_total{device="vdb"} 0
    node_disk_discarded_sectors_total{device="vda"} 0
    node_disk_discarded_sectors_total{device="vdb"} 0
    node_disk_discards_completed_total{device="vda"} 0
    node_disk_discards_completed_total{device="vdb"} 0
    node_disk_discards_merged_total{device="vda"} 0
    node_disk_discards_merged_total{device="vdb"} 0
    node_disk_info{device="vda",major="252",minor="0"} 1
    node_disk_info{device="vdb",major="252",minor="16"} 1
    node_disk_io_now{device="vda"} 0
    node_disk_io_now{device="vdb"} 0
    node_disk_io_time_seconds_total{device="vda"} 174
    node_disk_io_time_seconds_total{device="vdb"} 0.054
    node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003
    node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039
    node_disk_read_bytes_total{device="vda"} 3.71867136e+08
    node_disk_read_bytes_total{device="vdb"} 366592
    node_disk_read_time_seconds_total{device="vda"} 19.128
    node_disk_read_time_seconds_total{device="vdb"} 0.039
    node_disk_reads_completed_total{device="vda"} 5619
    node_disk_reads_completed_total{device="vdb"} 96
    node_disk_reads_merged_total{device="vda"} 5
    node_disk_reads_merged_total{device="vdb"} 0
    node_disk_write_time_seconds_total{device="vda"} 240.66400000000002
    node_disk_write_time_seconds_total{device="vdb"} 0
    node_disk_writes_completed_total{device="vda"} 71584
    node_disk_writes_completed_total{device="vdb"} 0
    node_disk_writes_merged_total{device="vda"} 19761
    node_disk_writes_merged_total{device="vdb"} 0
    node_disk_written_bytes_total{device="vda"} 2.007924224e+09
    node_disk_written_bytes_total{device="vdb"} 0

13.3.5. Creating a ServiceMonitor resource for the node exporter service

You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the servicetype: metrics label and queries the exmet port every 30 seconds.

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        k8s-app: node-exporter-metrics-monitor
      name: node-exporter-metrics-monitor
      namespace: dynamation
    spec:
      endpoints:
      - interval: 30s
        port: exmet
        scheme: http
      selector:
        matchLabels:
          servicetype: metrics
    • metadata.name defines the name of the ServiceMonitor.
    • metadata.namespace defines the namespace where the ServiceMonitor is created.
    • spec.endpoints.interval defines the interval at which the port will be queried.
    • spec.endpoints.port defines the name of the port that is queried every 30 seconds.
  2. Create the ServiceMonitor configuration for the node-exporter service.

    $ oc create -f node-exporter-metrics-monitor.yaml
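After Prometheus begins scraping the service through the ServiceMonitor, the node-exporter metrics become available in the metrics query browser. As an illustrative query (a sketch; the instance label values depend on your scraped endpoints), the following expression shows the non-idle CPU usage rate per scraped virtual machine:

```
sum by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))
```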

13.3.6. Accessing the node exporter service outside the cluster

You can access the node-exporter service outside the cluster and view the exposed metrics.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Expose the node-exporter service.

    $ oc expose service -n <namespace> <node_exporter_service_name>
  2. Obtain the FQDN (Fully Qualified Domain Name) for the route.

    $ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host

    Example output:

    NAME                    DNS
    node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org
  3. Use the curl command to display metrics for the node-exporter service.

    $ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics

    Example output:

    go_gc_duration_seconds{quantile="0"} 1.5382e-05
    go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
    go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
    go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
    go_gc_duration_seconds{quantile="1"} 0.000189423

13.4. Virtual machine health checks

Define probes and watchdogs in the VirtualMachine resource to configure virtual machine (VM) health checks. Health checks monitor and report the internal state of a VM.

You can configure VM health checks by defining readiness and liveness probes in the VirtualMachine resource.

13.4.1. About readiness and liveness probes

Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.

A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.

A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.

You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests:

HTTP GET
The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
TCP socket
The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
Guest agent ping
The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
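A guest agent ping probe can be sketched in the VM specification as follows. This is a minimal example assuming the guestAgentPing probe type of the KubeVirt API; it requires the QEMU guest agent to be installed and running in the virtual machine:

```
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      readinessProbe:
        guestAgentPing: {}
        initialDelaySeconds: 120
        periodSeconds: 10
# ...
```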

13.4.1.1. Defining an HTTP readiness probe

You can define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration.

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Include details of the readiness probe in the VM configuration file.

    Sample readiness probe with an HTTP GET test:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          readinessProbe:
            httpGet:
              port: 1500
              path: /healthz
              httpHeaders:
              - name: Custom-Header
                value: Awesome
            initialDelaySeconds: 120
            periodSeconds: 20
            timeoutSeconds: 10
            failureThreshold: 3
            successThreshold: 3
    # ...
    • spec.template.spec.readinessProbe.httpGet defines the HTTP GET request to perform to connect to the VM.
    • spec.template.spec.readinessProbe.httpGet.port defines the port of the VM that the probe queries. In the above example, the probe queries port 1500.
    • spec.template.spec.readinessProbe.httpGet.path defines the path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
    • spec.template.spec.readinessProbe.initialDelaySeconds defines the time, in seconds, after the VM starts before the readiness probe is initiated.
    • spec.template.spec.readinessProbe.periodSeconds defines the delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    • spec.template.spec.readinessProbe.timeoutSeconds defines the number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
    • spec.template.spec.readinessProbe.failureThreshold defines the number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
    • spec.template.spec.readinessProbe.successThreshold defines the number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
  2. Create the VM by running the following command:

    $ oc create -f <file_name>.yaml

13.4.1.2. Defining a TCP readiness probe

You can define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration.

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Include details of the TCP readiness probe in the VM configuration file.

    Sample readiness probe with a TCP socket test:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          readinessProbe:
            initialDelaySeconds: 120
            periodSeconds: 20
            tcpSocket:
              port: 1500
            timeoutSeconds: 10
    # ...
    • spec.template.spec.readinessProbe.initialDelaySeconds defines the time, in seconds, after the VM starts before the readiness probe is initiated.
    • spec.template.spec.readinessProbe.periodSeconds defines the delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    • spec.template.spec.readinessProbe.tcpSocket defines the TCP action to perform.
    • spec.template.spec.readinessProbe.tcpSocket.port defines the port of the VM that the probe queries.
    • spec.template.spec.readinessProbe.timeoutSeconds defines the number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
  2. Create the VM by running the following command:

    $ oc create -f <file_name>.yaml
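For the TCP readiness probe above to succeed, a process inside the guest must listen on port 1500. One way to arrange this is through a cloudInitNoCloud volume; the following fragment is a hypothetical sketch, assuming the guest image provides the ncat binary:

```yaml
# Hypothetical cloud-init fragment for the fedora-vm guest.
# Starts a simple TCP listener on port 1500 at boot so the
# tcpSocket readiness probe can connect.
volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        runcmd:
          - [ "sh", "-c", "nohup ncat --keep-open --listen 1500 >/dev/null 2>&1 &" ]
```

Any service that accepts TCP connections on the probed port works equally well; the probe only checks that the connection succeeds.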

13.4.1.3. Defining an HTTP liveness probe

Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.

Prerequisites

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Include details of the HTTP liveness probe in the VM configuration file.

    Sample liveness probe with an HTTP GET test:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          livenessProbe:
            initialDelaySeconds: 120
            periodSeconds: 20
            httpGet:
              port: 1500
              path: /healthz
              httpHeaders:
              - name: Custom-Header
                value: Awesome
            timeoutSeconds: 10
    # ...
    • spec.template.spec.livenessProbe.initialDelaySeconds defines the time, in seconds, after the VM starts before the liveness probe is initiated.
    • spec.template.spec.livenessProbe.periodSeconds defines the delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    • spec.template.spec.livenessProbe.httpGet defines the HTTP GET request to perform to connect to the VM.
    • spec.template.spec.livenessProbe.httpGet.port defines the port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
    • spec.template.spec.livenessProbe.httpGet.path defines the path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created.
    • spec.template.spec.livenessProbe.timeoutSeconds defines the number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
  2. Create the VM by running the following command:

    $ oc create -f <file_name>.yaml
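The HTTP probes in this section assume a server inside the guest answering on port 1500 at /healthz. As a minimal sketch of how that server could be started through cloud-init, the following fragment is hypothetical and assumes python3 is available in the guest image:

```yaml
# Hypothetical cloud-init fragment for the fedora-vm guest.
# Serves a static "ok" file at /healthz on port 1500 using the
# Python standard library HTTP server, so httpGet probes of
# /healthz return a success code while the guest is healthy.
volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        runcmd:
          - [ "sh", "-c", "mkdir -p /srv/health && echo ok > /srv/health/healthz" ]
          - [ "sh", "-c", "cd /srv/health && nohup python3 -m http.server 1500 >/dev/null 2>&1 &" ]
```

In a production guest you would typically expose an endpoint that reflects real application health rather than a static file.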

13.4.2. About watchdogs

A watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:

  • poweroff: The virtual machine (VM) powers down immediately. If spec.runStrategy is not set to manual, the VM reboots.
  • reset: The VM reboots in place and the guest operating system cannot react.

    Note

    The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.

  • shutdown: The VM gracefully powers down by stopping all services.
Note

Watchdog functionality is not available for Windows VMs.

You can create a watchdog device by configuring the device for a VM and installing the watchdog agent on the guest.

Configure the watchdog device for the virtual machine (VM) as follows.

Prerequisites

  • For x86 systems, the VM must use a kernel that works with the i6300esb watchdog device. For the s390x architecture, the kernel must include support for the diag288 device. Red Hat Enterprise Linux (RHEL) images support i6300esb and diag288.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a YAML file with the following contents:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: <vm-label>
      name: <vm-name>
    spec:
      runStrategy: Halted
      template:
        metadata:
          labels:
            kubevirt.io/vm: <vm-label>
        spec:
          domain:
            devices:
              watchdog:
                name: <watchdog>
                <watchdog-device-model>:
                  action: "poweroff"
    # ...
    • spec.template.spec.domain.devices.watchdog.<watchdog-device-model> defines the watchdog device model to use. Specify i6300esb for x86 or diag288 for s390x.
    • spec.template.spec.domain.devices.watchdog.<watchdog-device-model>.action defines the watchdog device action. Specify poweroff, reset, or shutdown. The shutdown action requires that the guest operating system responds to ACPI signals. Using shutdown is not recommended.

      The example above configures the watchdog device on a VM with the poweroff action and exposes the device as /dev/watchdog.

      This device can now be used by the watchdog binary.

  2. Apply the YAML file to your cluster by running the following command:

    $ oc apply -f <file_name>.yaml
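With the placeholders filled in for an x86 guest, the watchdog stanza looks like the following (the device name is an arbitrary example value):

```yaml
# Example watchdog device for an x86 VM: the i6300esb model
# with the poweroff action. On s390x, use diag288 instead.
domain:
  devices:
    watchdog:
      name: mywatchdog
      i6300esb:
        action: "poweroff"
```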
Verification

This procedure is provided for testing watchdog functionality only and must not be run on production machines.

  1. Run the following command to verify that the VM is connected to the watchdog device:

    $ lspci | grep watchdog -i
  2. Run one of the following commands to confirm the watchdog is active:

    • Trigger a kernel panic:

      # echo c > /proc/sysrq-trigger
    • Stop the watchdog service:

      # pkill -9 watchdog

You can install the watchdog agent on the guest and start the watchdog service.

Procedure

  1. Log in to the virtual machine as the root user.
  2. Verify that the /dev/watchdog file path is present in the VM by running the following command:

    # ls /dev/watchdog
  3. Install the watchdog package and its dependencies:

    # yum install watchdog
  4. Uncomment the following line in the /etc/watchdog.conf file and save the changes:

    #watchdog-device = /dev/watchdog
  5. Enable the watchdog service to start on boot:

    # systemctl enable --now watchdog.service

13.5. OpenShift Virtualization runbooks

To diagnose and resolve OpenShift Virtualization alerts, you can use the OpenShift Virtualization Operator runbooks. These guides help ensure you can effectively troubleshoot cluster issues and restore system health.

Note

Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub.

13.5.1. CDIDataImportCronOutdated

13.5.2. CDIDataVolumeUnusualRestartCount

13.5.3. CDIDefaultStorageClassDegraded

13.5.4. CDIMultipleDefaultVirtStorageClasses

13.5.5. CDINoDefaultStorageClass

13.5.6. CDINotReady

13.5.7. CDIOperatorDown

13.5.8. CDIStorageProfilesIncomplete

13.5.9. CnaoDown

13.5.10. CnaoNMstateMigration

13.5.11. DeprecatedMachineType

13.5.12. DuplicateWaspAgentDSDetected

  • The DuplicateWaspAgentDSDetected alert is deprecated.

13.5.13. GuestFilesystemAlmostOutOfSpace

13.5.14. GuestVCPUQueueHighCritical

13.5.15. GuestVCPUQueueHighWarning

13.5.16. HAControlPlaneDown

13.5.18. HCOGoldenImageWithNoSupportedArchitecture

13.5.19. HCOInstallationIncomplete

13.5.20. HCOMisconfiguredDescheduler

13.5.21. HCOMultiArchGoldenImagesDisabled

13.5.22. HCOOperatorConditionsUnhealthy

13.5.23. HighNodeCPUFrequency

13.5.24. HPPNotReady

13.5.25. HPPOperatorDown

13.5.26. HPPSharingPoolPathWithOS

13.5.27. HighCPUWorkload

13.5.28. KubemacpoolDown

13.5.29. KubeMacPoolDuplicateMacsFound

  • The KubeMacPoolDuplicateMacsFound alert is deprecated.

13.5.30. KubeVirtComponentExceedsRequestedCPU

  • The KubeVirtComponentExceedsRequestedCPU alert is deprecated.

13.5.31. KubeVirtComponentExceedsRequestedMemory

  • The KubeVirtComponentExceedsRequestedMemory alert is deprecated.

13.5.32. KubeVirtCRModified

13.5.33. KubeVirtDeprecatedAPIRequested

13.5.34. KubeVirtVMGuestMemoryAvailableLow

13.5.35. KubeVirtVMGuestMemoryPressure

13.5.36. KubeVirtNoAvailableNodesToRunVMs

13.5.37. KubevirtVmHighMemoryUsage

  • The KubevirtVmHighMemoryUsage alert is deprecated.

13.5.38. KubeVirtVMIExcessiveMigrations

13.5.39. LowKVMNodesCount

13.5.40. LowReadyVirtControllersCount

13.5.41. LowReadyVirtOperatorsCount

13.5.42. LowVirtAPICount

13.5.43. LowVirtControllersCount

13.5.44. LowVirtOperatorCount

13.5.45. NetworkAddonsConfigNotReady

13.5.46. NoLeadingVirtOperator

13.5.47. NoReadyVirtController

13.5.48. NoReadyVirtOperator

13.5.49. NodeNetworkInterfaceDown

13.5.50. OperatorConditionsUnhealthy

  • The OperatorConditionsUnhealthy alert is deprecated.

13.5.51. OrphanedVirtualMachineInstances

13.5.52. OutdatedVirtualMachineInstanceWorkloads

13.5.53. PersistentVolumeFillingUp

13.5.54. SingleStackIPv6Unsupported

  • The SingleStackIPv6Unsupported alert is deprecated.

13.5.55. SSPCommonTemplatesModificationReverted

13.5.56. SSPDown

13.5.57. SSPFailingToReconcile

13.5.58. SSPHighRateRejectedVms

13.5.59. SSPOperatorDown

13.5.60. SSPTemplateValidatorDown

13.5.61. UnsupportedHCOModification

13.5.62. VirtAPIDown

13.5.63. VirtApiRESTErrorsBurst

13.5.64. VirtApiRESTErrorsHigh

13.5.65. VirtControllerDown

13.5.66. VirtControllerRESTErrorsBurst

13.5.67. VirtControllerRESTErrorsHigh

  • The VirtControllerRESTErrorsHigh alert is deprecated.

13.5.68. VirtHandlerDaemonSetRolloutFailing

13.5.69. VirtHandlerRESTErrorsBurst

13.5.70. VirtHandlerRESTErrorsHigh

  • The VirtHandlerRESTErrorsHigh alert is deprecated.

13.5.71. VirtLauncherPodsStuckFailed

13.5.72. VirtOperatorDown

13.5.73. VirtOperatorRESTErrorsBurst

13.5.74. VirtOperatorRESTErrorsHigh

  • The VirtOperatorRESTErrorsHigh alert is deprecated.

13.5.75. VirtualMachineCRCErrors

  • The VirtualMachineCRCErrors alert is deprecated.

    The alert is now called VMStorageClassWarning.

13.5.76. VirtualMachineInstanceHasEphemeralHotplugVolume

13.5.77. VirtualMachineStuckInUnhealthyState

13.5.78. VirtualMachineStuckOnNode

13.5.79. VMCannotBeEvicted

13.5.80. VMStorageClassWarning
