Chapter 13. Monitoring
13.1. Monitoring overview
Monitor the health of your cluster and virtual machines (VMs) to have a unified operational view of your environment. This ensures high availability and optimal resource performance.
You can monitor the health of your cluster and VMs with the following tools:
- Monitoring OpenShift Virtualization VM health status: View the overall health of your OpenShift Virtualization environment by navigating to the Home → Overview page in the OpenShift Dedicated web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
- Prometheus queries for virtual resources: Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
- VM custom metrics: Configure the node-exporter service to expose internal VM metrics and processes.
- VM health checks: Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
- Runbooks: Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the OpenShift Dedicated web console.
13.2. Prometheus queries for virtual resources
Monitor the consumption of cluster infrastructure resources using the metrics provided by OpenShift Virtualization. These metrics are also used to query live migration status.
- For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
13.2.1. Querying metrics for all projects with the OpenShift Dedicated web console
Monitor the state of a cluster and any user-defined workloads by using the OpenShift Dedicated metrics query browser. The query browser uses Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot.
As a dedicated-admin or as a user with view permissions for all projects, you can access metrics for all default OpenShift Dedicated and user-defined projects in the Metrics UI.
Only dedicated administrators have access to the third-party UIs provided with OpenShift Dedicated monitoring.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role or with view permissions for all projects.
- You have installed the OpenShift CLI (oc).
Procedure
- In the OpenShift Dedicated web console, click Observe → Metrics.
- To add one or more queries, perform any of the following actions:
  - Select an existing query: From the Select query drop-down list, select an existing query.
  - Create a custom query: Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item.
  - Add multiple queries: Click Add query.
  - Duplicate an existing query: Click the options menu next to the query, then choose Duplicate query.
  - Disable a query from being run: Click the options menu next to the query and choose Disable query.
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note:
- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
- Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
  - Hide all metrics from a query: Click the options menu for the query and click Hide all series.
  - Hide a specific metric: Go to the query table and click the colored square near the metric name.
  - Zoom into the plot and change the time range: Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu to select the time range.
  - Reset the time range: Click Reset zoom.
  - Display outputs for all queries at a specific point in time: Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box.
  - Hide the plot: Click Hide graph.
13.2.2. Querying metrics for user-defined projects with the OpenShift Dedicated web console
Monitor user-defined workloads by using the OpenShift Dedicated metrics query browser. The query browser uses Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
Developers cannot access the third-party UIs provided with OpenShift Dedicated monitoring.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
- You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.
Procedure
- In the OpenShift Dedicated web console, click Observe → Metrics.
- To add one or more queries, perform any of the following actions:
  - Select an existing query: From the Select query drop-down list, select an existing query.
  - Create a custom query: Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item.
  - Add multiple queries: Click Add query.
  - Duplicate an existing query: Click the options menu next to the query, then choose Duplicate query.
  - Disable a query from being run: Click the options menu next to the query and choose Disable query.
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note:
- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
- Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
  - Hide all metrics from a query: Click the options menu for the query and click Hide all series.
  - Hide a specific metric: Go to the query table and click the colored square near the metric name.
  - Zoom into the plot and change the time range: Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu to select the time range.
  - Reset the time range: Click Reset zoom.
  - Display outputs for all queries at a specific point in time: Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box.
  - Hide the plot: Click Hide graph.
13.2.3. Virtualization metrics
The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics.
The following examples use topk queries that specify a time period. If virtual machines (VMs) are deleted during that time period, they can still appear in the query output.
13.2.3.1. Network metrics
The following queries can identify virtual machines that are saturating the network:
- kubevirt_vmi_network_receive_bytes_total: Returns the total amount of traffic received (in bytes) on the virtual machine's network. Type: Counter.
- kubevirt_vmi_network_transmit_bytes_total: Returns the total amount of traffic transmitted (in bytes) on the virtual machine's network. Type: Counter.
Example network traffic query
The following query returns the top 3 VMs sending and receiving the most network traffic at every given moment over a six-minute time period:
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0
13.2.3.2. Storage metrics
You can monitor virtual machine storage traffic and identify high-traffic VMs by using Prometheus queries.
The following queries can identify VMs that are reading and writing large amounts of data:
- kubevirt_vmi_storage_read_traffic_bytes_total: Returns the total amount (in bytes) of the virtual machine's storage read traffic. Type: Counter.
- kubevirt_vmi_storage_write_traffic_bytes_total: Returns the total amount (in bytes) of the virtual machine's storage write traffic. Type: Counter.
Example storage-related traffic queries
The following query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period:
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0

The following query returns the top 3 VMs with the highest average read latency at every given moment over a six-minute time period:
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_times_seconds_total[6m]) / rate(kubevirt_vmi_storage_iops_read_total[6m]) > 0)) > 0
The following queries can track data restored from storage snapshots:
- kubevirt_vmsnapshot_disks_restored_from_source: Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
- kubevirt_vmsnapshot_disks_restored_from_source_bytes: Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.
Examples of storage snapshot data queries
The following query returns the total number of virtual machine disks restored from the source virtual machine:
kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"}

The following query returns the amount of space in bytes restored from the source virtual machine:
kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"}
The following queries can determine the I/O performance of storage devices:
- kubevirt_vmi_storage_iops_read_total: Returns the number of read I/O operations the virtual machine is performing per second. Type: Counter.
- kubevirt_vmi_storage_iops_write_total: Returns the number of write I/O operations the virtual machine is performing per second. Type: Counter.
Example I/O performance query
The following query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period:
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0
13.2.3.3. Guest memory swapping metrics
The following queries can identify which swap-enabled guests are performing the most memory swapping:
- kubevirt_vmi_memory_swap_in_traffic_bytes: Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
- kubevirt_vmi_memory_swap_out_traffic_bytes: Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.
Example memory swapping query
The following query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period:
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0
Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
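One way to increase the allocation is to raise the guest memory in the VM specification. The following is a minimal sketch, assuming the standard KubeVirt memory field; the VM name and the 8Gi value are illustrative, not recommendations:

```yaml
# Illustrative fragment: raise the guest memory allocation of a VM.
# The VM name and the 8Gi value are examples only.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
spec:
  template:
    spec:
      domain:
        memory:
          guest: 8Gi   # increased allocation to relieve memory pressure
```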
13.2.3.4. Monitoring AAQ operator metrics
The following metrics are exposed by the Application Aware Quota (AAQ) controller for monitoring resource quotas:
- kube_application_aware_resourcequota: Returns the current quota usage and the CPU and memory limits enforced by the AAQ Operator resources. Type: Gauge.
- kube_application_aware_resourcequota_creation_timestamp: Returns the time, in UNIX timestamp format, when the AAQ Operator resource was created. Type: Gauge.
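As a simple illustration, the creation timestamp can be subtracted from the current time to show how long each AAQ-managed quota has existed. This sketch uses only the metric listed above:

```promql
# Age, in seconds, of each AAQ-managed resource quota
time() - kube_application_aware_resourcequota_creation_timestamp
```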
13.2.3.5. VM label metrics
kubevirt_vm_labels: Returns virtual machine labels as Prometheus labels. Type: Gauge.

You can expose and ignore specific labels by editing the kubevirt-vm-labels-config config map. After you apply the config map to your cluster, the configuration is loaded dynamically.

Example config map:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-labels-config
  namespace: openshift-cnv
data:
  allowlist: "*"
  ignorelist: ""

- data.allowlist specifies labels to expose:
  - If data.allowlist has a value of "*", all labels are included.
  - If data.allowlist has a value of "", the metric does not return any labels.
  - If data.allowlist contains a list of label keys, only the explicitly named labels are exposed. For example: allowlist: "example.io/name,example.io/version".
- data.ignorelist specifies labels to ignore. The ignore list overrides the allow list:
  - The data.ignorelist field does not support wildcard patterns. It can be empty or include a list of specific labels to ignore.
  - If data.ignorelist has a value of "", no labels are ignored.
13.2.3.6. Live migration metrics
The following metrics can be queried to show live migration status.
- kubevirt_vmi_migration_data_processed_bytes: The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
- kubevirt_vmi_migration_data_remaining_bytes: The amount of guest operating system data that remains to be migrated. Type: Gauge.
- kubevirt_vmi_migration_memory_transfer_rate_bytes: The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
- kubevirt_vmi_migrations_in_pending_phase: The number of pending migrations. Type: Gauge.
- kubevirt_vmi_migrations_in_scheduling_phase: The number of migrations that are being scheduled. Type: Gauge.
- kubevirt_vmi_migrations_in_running_phase: The number of running migrations. Type: Gauge.
- kubevirt_vmi_migration_succeeded: The number of successfully completed migrations. Type: Gauge.
- kubevirt_vmi_migration_failed: The number of failed migrations. Type: Gauge.
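The processed and remaining byte counts can be combined to estimate overall migration progress. A sketch using only the metrics above, assuming both gauges are reported for the same migration series:

```promql
# Fraction of guest data migrated so far, per migration (0 to 1)
kubevirt_vmi_migration_data_processed_bytes
  / (kubevirt_vmi_migration_data_processed_bytes + kubevirt_vmi_migration_data_remaining_bytes)
```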
13.3. Exposing custom metrics for virtual machines
Monitor core platform components using the OpenShift Dedicated monitoring stack, which is based on the Prometheus monitoring system. Additionally, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.
13.3.1. Configuring the node exporter service
The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
- Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
- Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.
Procedure
- Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.

kind: Service
apiVersion: v1
metadata:
  name: node-exporter-service
  namespace: dynamation
  labels:
    servicetype: metrics
spec:
  ports:
    - name: exmet
      protocol: TCP
      port: 9100
      targetPort: 9100
  type: ClusterIP
  selector:
    monitor: metrics

  - metadata.name defines the node-exporter service that exposes the metrics from the virtual machines.
  - metadata.namespace defines the namespace where the service is created.
  - metadata.labels.servicetype defines the label for the service. The ServiceMonitor uses this label to match this service.
  - spec.ports.name defines the name given to the port that exposes metrics on port 9100 for the ClusterIP service.
  - spec.ports.port defines the port on which the service listens for requests.
  - spec.ports.targetPort defines the TCP port on the virtual machine pods, which are configured with the monitor label, to which traffic is forwarded.
  - spec.selector.monitor defines the label used to match the virtual machine's pods. In this example, any virtual machine's pod with the label monitor and a value of metrics is matched.
- Create the node-exporter service:

$ oc create -f node-exporter-service.yaml
13.3.2. Configuring a virtual machine with the node exporter service
Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.
Prerequisites
- The pods for the component are running in the openshift-user-workload-monitoring project.
- Grant the monitoring-edit role to users who need to monitor this user-defined project.
Procedure
- Log on to the virtual machine.
- Download the node-exporter file on to the virtual machine by using the directory path that applies to the version of the node-exporter file:

$ wget https://github.com/prometheus/node_exporter/releases/download/<version>/node_exporter-<version>.linux-<architecture>.tar.gz

- Extract the executable and place it in the /usr/bin directory:

$ sudo tar xvf node_exporter-<version>.linux-<architecture>.tar.gz \
    --directory /usr/bin --strip 1 "*/node_exporter"

- Create a node_exporter.service file in the /etc/systemd/system directory path. This systemd service file runs the node-exporter service when the virtual machine reboots.

[Unit]
Description=Prometheus Metrics Exporter
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/usr/bin/node_exporter

[Install]
WantedBy=multi-user.target

- Enable and start the systemd service:

$ sudo systemctl enable node_exporter.service
$ sudo systemctl start node_exporter.service
Verification
Verify that the node-exporter agent is reporting metrics from the virtual machine.
$ curl http://localhost:9100/metrics

Example output:

go_gc_duration_seconds{quantile="0"} 1.5244e-05
go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
13.3.3. Creating a custom monitoring label for virtual machines
To enable queries to multiple virtual machines from a single service, you can add a custom label in the virtual machine’s YAML file.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- Access the web console to stop and restart a virtual machine.
Procedure
- Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.

spec:
  template:
    metadata:
      labels:
        monitor: metrics

- Stop and restart the virtual machine to create a new pod with the label name given to the monitor label.
13.3.3.1. Querying the node-exporter service for metrics
Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
- You have installed the OpenShift CLI (oc).
Procedure
- Obtain the HTTP service endpoint by specifying the namespace for the service:

$ oc get service -n <namespace> <node-exporter-service>

- To list all available metrics for the node-exporter service, query the metrics resource:

$ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"

Example output:
node_arp_entries{device="eth0"} 1
node_boot_time_seconds 1.643153218e+09
node_context_switches_total 4.4938158e+07
node_cooling_device_cur_state{name="0",type="Processor"} 0
node_cooling_device_max_state{name="0",type="Processor"} 0
node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
node_cpu_seconds_total{cpu="0",mode="irq"} 233.91
node_cpu_seconds_total{cpu="0",mode="nice"} 551.47
node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3
node_cpu_seconds_total{cpu="0",mode="steal"} 86.12
node_cpu_seconds_total{cpu="0",mode="system"} 464.15
node_cpu_seconds_total{cpu="0",mode="user"} 1075.2
node_disk_discard_time_seconds_total{device="vda"} 0
node_disk_discard_time_seconds_total{device="vdb"} 0
node_disk_discarded_sectors_total{device="vda"} 0
node_disk_discarded_sectors_total{device="vdb"} 0
node_disk_discards_completed_total{device="vda"} 0
node_disk_discards_completed_total{device="vdb"} 0
node_disk_discards_merged_total{device="vda"} 0
node_disk_discards_merged_total{device="vdb"} 0
node_disk_info{device="vda",major="252",minor="0"} 1
node_disk_info{device="vdb",major="252",minor="16"} 1
node_disk_io_now{device="vda"} 0
node_disk_io_now{device="vdb"} 0
node_disk_io_time_seconds_total{device="vda"} 174
node_disk_io_time_seconds_total{device="vdb"} 0.054
node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003
node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039
node_disk_read_bytes_total{device="vda"} 3.71867136e+08
node_disk_read_bytes_total{device="vdb"} 366592
node_disk_read_time_seconds_total{device="vda"} 19.128
node_disk_read_time_seconds_total{device="vdb"} 0.039
node_disk_reads_completed_total{device="vda"} 5619
node_disk_reads_completed_total{device="vdb"} 96
node_disk_reads_merged_total{device="vda"} 5
node_disk_reads_merged_total{device="vdb"} 0
node_disk_write_time_seconds_total{device="vda"} 240.66400000000002
node_disk_write_time_seconds_total{device="vdb"} 0
node_disk_writes_completed_total{device="vda"} 71584
node_disk_writes_completed_total{device="vdb"} 0
node_disk_writes_merged_total{device="vda"} 19761
node_disk_writes_merged_total{device="vdb"} 0
node_disk_written_bytes_total{device="vda"} 2.007924224e+09
node_disk_written_bytes_total{device="vdb"} 0
13.3.4. Creating a ServiceMonitor resource for the node exporter service
You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
- You have installed the OpenShift CLI (oc).
Procedure
- Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label servicetype: metrics and queries the exmet port every 30 seconds.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter-metrics-monitor
  name: node-exporter-metrics-monitor
  namespace: dynamation
spec:
  endpoints:
  - interval: 30s
    port: exmet
    scheme: http
  selector:
    matchLabels:
      servicetype: metrics

  - metadata.name defines the name of the ServiceMonitor.
  - metadata.namespace defines the namespace where the ServiceMonitor is created.
  - spec.endpoints.interval defines the interval at which the port is queried.
  - spec.endpoints.port defines the name of the port that is queried every 30 seconds.
- Create the ServiceMonitor configuration for the node-exporter service:

$ oc create -f node-exporter-metrics-monitor.yaml
13.3.4.1. Accessing the node exporter service outside the cluster
You can access the node-exporter service outside the cluster and view the exposed metrics.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
- You have installed the OpenShift CLI (oc).
Procedure
- Expose the node-exporter service:

$ oc expose service -n <namespace> <node_exporter_service_name>

- Obtain the FQDN (Fully Qualified Domain Name) for the route:

$ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host

Example output:

NAME                    DNS
node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org

- Use the curl command to display metrics for the node-exporter service:

$ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics

Example output:

go_gc_duration_seconds{quantile="0"} 1.5382e-05
go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
go_gc_duration_seconds{quantile="1"} 0.000189423
13.4. Virtual machine health checks
Define probes and watchdogs in the VirtualMachine resource to configure virtual machine (VM) health checks. Health checks monitor and report the internal state of a VM.
13.4.1. About readiness and liveness probes
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests:
- HTTP GET
- The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
- Guest agent ping
- The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
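A guest agent ping probe is configured with an empty guestAgentPing object in the probe specification. The following is a minimal sketch, assuming the standard KubeVirt probe field; the VM name and timing values are illustrative:

```yaml
# Illustrative readiness probe using the QEMU guest agent ping.
# The VM name and timing values are examples only.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
spec:
  template:
    spec:
      readinessProbe:
        guestAgentPing: {}      # succeeds when the guest agent responds
        initialDelaySeconds: 120
        periodSeconds: 10
```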
13.4.1.1. Defining an HTTP readiness probe
You can define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the readiness probe in the VM configuration file.
Sample readiness probe with an HTTP GET test:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      readinessProbe:
        httpGet:
          port: 1500
          path: /healthz
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
# ...

- spec.template.spec.readinessProbe.httpGet defines the HTTP GET request to perform to connect to the VM.
- spec.template.spec.readinessProbe.httpGet.port defines the port of the VM that the probe queries. In the above example, the probe queries port 1500.
- spec.template.spec.readinessProbe.httpGet.path defines the path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
- spec.template.spec.readinessProbe.initialDelaySeconds defines the time, in seconds, after the VM starts before the readiness probe is initiated.
- spec.template.spec.readinessProbe.periodSeconds defines the delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- spec.template.spec.readinessProbe.timeoutSeconds defines the number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
- spec.template.spec.readinessProbe.failureThreshold defines the number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
- spec.template.spec.readinessProbe.successThreshold defines the number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
- Create the VM by running the following command:

$ oc create -f <file_name>.yaml
13.4.1.2. Defining a TCP readiness probe
You can define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the TCP readiness probe in the VM configuration file.
Sample readiness probe with a TCP socket test:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      readinessProbe:
        initialDelaySeconds: 120
        periodSeconds: 20
        tcpSocket:
          port: 1500
        timeoutSeconds: 10
# ...

- spec.template.spec.readinessProbe.initialDelaySeconds defines the time, in seconds, after the VM starts before the readiness probe is initiated.
- spec.template.spec.readinessProbe.periodSeconds defines the delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- spec.template.spec.readinessProbe.tcpSocket defines the TCP action to perform.
- spec.template.spec.readinessProbe.tcpSocket.port defines the port of the VM that the probe queries.
- spec.template.spec.readinessProbe.timeoutSeconds defines the number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
-
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
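Under the hood, a tcpSocket probe simply attempts a TCP connection to the configured port and treats a successful connection as a passing probe. The following sketch reproduces that check in Python against a local listener; the tcp_probe function and the local socket setup are illustrative assumptions, not part of any OpenShift API, and the default timeout mirrors the timeoutSeconds value in the sample above.

```python
# Illustrative sketch of what a tcpSocket probe does: attempt a TCP
# connection and treat success as "ready".
import socket

def tcp_probe(host, port, timeout_seconds=10):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return True
    except OSError:
        return False

# Probe a listener we start ourselves so the example is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))       # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
print(tcp_probe("127.0.0.1", port))  # True: the port accepts connections
server.close()
print(tcp_probe("127.0.0.1", port))  # False: nothing is listening anymore
```

Note that the probe only verifies that the port accepts connections; unlike an HTTP probe, it cannot tell whether the service behind the port is actually serving correct responses.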
13.4.1.3. Defining an HTTP liveness probe
Define an HTTP liveness probe by setting the spec.template.spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the HTTP liveness probe in the VM configuration file.
Sample liveness probe with an HTTP GET test:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      livenessProbe:
        initialDelaySeconds: 120
        periodSeconds: 20
        httpGet:
          port: 1500
          path: /healthz
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        timeoutSeconds: 10
# ...
- spec.template.spec.livenessProbe.initialDelaySeconds defines the time, in seconds, after the VM starts before the liveness probe is initiated.
- spec.template.spec.livenessProbe.periodSeconds defines the delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- spec.template.spec.livenessProbe.httpGet defines the HTTP GET request to perform to connect to the VM.
- spec.template.spec.livenessProbe.httpGet.port defines the port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
- spec.template.spec.livenessProbe.httpGet.path defines the path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created.
- spec.template.spec.livenessProbe.timeoutSeconds defines the number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
-
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
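The httpGet probe above expects an HTTP server inside the guest to answer on the configured port and path. The following sketch shows a minimal /healthz endpoint of the kind such a probe can target. It is purely illustrative: the HealthHandler class is a hypothetical example, not something OpenShift provides, and it binds to an ephemeral local port rather than 1500 so the snippet runs anywhere.

```python
# Illustrative sketch: a minimal /healthz endpoint of the kind an HTTP GET
# liveness probe can target.
import http.server
import threading
import urllib.request

class HealthHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Return 200 for /healthz (probe passes), 404 otherwise (probe fails).
        status = 200 if self.path == "/healthz" else 404
        self.send_response(status)
        self.end_headers()
        self.wfile.write(b"ok" if status == 200 else b"not found")

    def log_message(self, *args):
        pass  # silence per-request logging

server = http.server.HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# An HTTP GET probe succeeds on any success response code.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/healthz") as resp:
    print(resp.status)   # 200
server.shutdown()
```

A real guest would typically start such a server via cloud-init or a systemd unit; the point is only that the probe's port and path must match what the server actually listens on.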
13.4.2. About watchdogs
A watchdog device monitors the watchdog agent in the guest and performs one of the following actions if the guest operating system is unresponsive:
- poweroff: The virtual machine (VM) powers down immediately. If spec.runStrategy is not set to manual, the VM reboots.
- reset: The VM reboots in place and the guest operating system cannot react.

  Note: The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.

- shutdown: The VM gracefully powers down by stopping all services.
Watchdog functionality is not available for Windows VMs.
You can create a watchdog device by configuring the device for a VM and installing the watchdog agent on the guest.
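The pet-or-expire behavior of a watchdog can be sketched in software: the agent must repeatedly reset a countdown, and if it stops, the expiry action fires, analogous to the poweroff, reset, or shutdown actions above. The SoftwareWatchdog class below is a purely illustrative assumption; real watchdogs such as i6300esb are emulated hardware devices driven through /dev/watchdog, not Python timers.

```python
# Illustrative sketch of watchdog semantics: pet the timer or the expiry
# action fires.
import threading

class SoftwareWatchdog:
    def __init__(self, timeout_seconds, on_expire):
        self.timeout = timeout_seconds
        self.on_expire = on_expire    # analogous to poweroff/reset/shutdown
        self._timer = None

    def pet(self):
        """Reset the countdown; the agent calls this while the OS is healthy."""
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_expire)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        """Disarm the watchdog entirely."""
        if self._timer:
            self._timer.cancel()

fired = threading.Event()
wd = SoftwareWatchdog(timeout_seconds=0.1, on_expire=fired.set)
wd.pet()               # agent is alive: countdown armed
fired.wait(1.0)        # agent goes silent; the watchdog expires
print(fired.is_set())  # True: the expiry action ran
```

In the real setup that follows, the watchdog service installed on the guest plays the role of pet(): it writes to /dev/watchdog on an interval, and the hypervisor performs the configured action if the writes stop.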
13.4.2.1. Configuring a watchdog device for the virtual machine
You configure a watchdog device for the virtual machine (VM).
Prerequisites
- For x86 systems, the VM must use a kernel that works with the i6300esb watchdog device. If you use the s390x architecture, the kernel must be enabled for diag288. Red Hat Enterprise Linux (RHEL) images support i6300esb and diag288.
- You have installed the OpenShift CLI (oc).
Procedure
Create a YAML file with the following contents:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: <vm-label>
  name: <vm-name>
spec:
  runStrategy: Halted
  template:
    metadata:
      labels:
        kubevirt.io/vm: <vm-label>
    spec:
      domain:
        devices:
          watchdog:
            name: <watchdog>
            <watchdog-device-model>:
              action: "poweroff"
# ...
- spec.template.spec.domain.devices.watchdog.<watchdog-device-model> defines the watchdog device model to use. For x86, specify i6300esb. For s390x, specify diag288.
- spec.template.spec.domain.devices.watchdog.<watchdog-device-model>.action defines the watchdog device action. Specify poweroff, reset, or shutdown. The shutdown action requires that the guest virtual machine is responsive to ACPI signals. Using shutdown is not recommended.

The example above configures the watchdog device on a VM with the poweroff action and exposes the device as /dev/watchdog. This device can now be used by the watchdog binary.
-
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -i

Run one of the following commands to confirm the watchdog is active:
Trigger a kernel panic:
# echo c > /proc/sysrq-trigger

Stop the watchdog service:
# pkill -9 watchdog
13.4.2.2. Installing the watchdog agent on the guest
You can install the watchdog agent on the guest and start the watchdog service.
Procedure
- Log in to the virtual machine as the root user.
Verify that the /dev/watchdog file path is present in the VM by running the following command:

# ls /dev/watchdog

Install the watchdog package and its dependencies:

# yum install watchdog

Uncomment the following line in the /etc/watchdog.conf file and save the changes:

#watchdog-device = /dev/watchdog

Enable the watchdog service to start on boot:

# systemctl enable --now watchdog.service
13.5. OpenShift Virtualization runbooks
To diagnose and resolve OpenShift Virtualization alerts, you can use the OpenShift Virtualization Operator runbooks. These guides help ensure you can effectively troubleshoot cluster issues and restore system health.
Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub.
13.5.1. CDIDataImportCronOutdated
- View the runbook for the CDIDataImportCronOutdated alert.
13.5.2. CDIDataVolumeUnusualRestartCount
- View the runbook for the CDIDataVolumeUnusualRestartCount alert.
13.5.3. CDIDefaultStorageClassDegraded
- View the runbook for the CDIDefaultStorageClassDegraded alert.
13.5.4. CDIMultipleDefaultVirtStorageClasses
- View the runbook for the CDIMultipleDefaultVirtStorageClasses alert.
13.5.5. CDINoDefaultStorageClass
- View the runbook for the CDINoDefaultStorageClass alert.
13.5.6. CDINotReady
- View the runbook for the CDINotReady alert.
13.5.7. CDIOperatorDown
- View the runbook for the CDIOperatorDown alert.
13.5.8. CDIStorageProfilesIncomplete
- View the runbook for the CDIStorageProfilesIncomplete alert.
13.5.9. CnaoDown
- View the runbook for the CnaoDown alert.
13.5.10. CnaoNMstateMigration
- View the runbook for the CnaoNMstateMigration alert.
13.5.11. DeprecatedMachineType
- View the runbook for the DeprecatedMachineType alert.
13.5.12. DuplicateWaspAgentDSDetected
- The DuplicateWaspAgentDSDetected alert is deprecated.
13.5.13. GuestFilesystemAlmostOutOfSpace
- View the runbook for the GuestFilesystemAlmostOutOfSpace alert.
13.5.14. GuestVCPUQueueHighCritical
- View the runbook for the GuestVCPUQueueHighCritical alert.
13.5.15. GuestVCPUQueueHighWarning
- View the runbook for the GuestVCPUQueueHighWarning alert.
13.5.16. HAControlPlaneDown
- View the runbook for the HAControlPlaneDown alert.
13.5.17. HCOGoldenImageWithNoArchitectureAnnotation
- View the runbook for the HCOGoldenImageWithNoArchitectureAnnotation alert.
13.5.18. HCOGoldenImageWithNoSupportedArchitecture
- View the runbook for the HCOGoldenImageWithNoSupportedArchitecture alert.
13.5.19. HCOInstallationIncomplete
- View the runbook for the HCOInstallationIncomplete alert.
13.5.20. HCOMisconfiguredDescheduler
- View the runbook for the HCOMisconfiguredDescheduler alert.
13.5.21. HCOMultiArchGoldenImagesDisabled
- View the runbook for the HCOMultiArchGoldenImagesDisabled alert.
13.5.22. HCOOperatorConditionsUnhealthy
- View the runbook for the HCOOperatorConditionsUnhealthy alert.
13.5.23. HighNodeCPUFrequency
- View the runbook for the HighNodeCPUFrequency alert.
13.5.24. HPPNotReady
- View the runbook for the HPPNotReady alert.
13.5.25. HPPOperatorDown
- View the runbook for the HPPOperatorDown alert.
13.5.26. HPPSharingPoolPathWithOS
- View the runbook for the HPPSharingPoolPathWithOS alert.
13.5.27. HighCPUWorkload
- View the runbook for the HighCPUWorkload alert.
13.5.28. KubemacpoolDown
- View the runbook for the KubemacpoolDown alert.
13.5.29. KubeMacPoolDuplicateMacsFound
- The KubeMacPoolDuplicateMacsFound alert is deprecated.
13.5.30. KubeVirtComponentExceedsRequestedCPU
- The KubeVirtComponentExceedsRequestedCPU alert is deprecated.
13.5.31. KubeVirtComponentExceedsRequestedMemory
- The KubeVirtComponentExceedsRequestedMemory alert is deprecated.
13.5.32. KubeVirtCRModified
- View the runbook for the KubeVirtCRModified alert.
13.5.33. KubeVirtDeprecatedAPIRequested
- View the runbook for the KubeVirtDeprecatedAPIRequested alert.
13.5.34. KubeVirtVMGuestMemoryAvailableLow
- View the runbook for the KubeVirtVMGuestMemoryAvailableLow alert.
13.5.35. KubeVirtVMGuestMemoryPressure
- View the runbook for the KubeVirtVMGuestMemoryPressure alert.
13.5.36. KubeVirtNoAvailableNodesToRunVMs
- View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert.
13.5.37. KubevirtVmHighMemoryUsage
- The KubevirtVmHighMemoryUsage alert is deprecated.
13.5.38. KubeVirtVMIExcessiveMigrations
- View the runbook for the KubeVirtVMIExcessiveMigrations alert.
13.5.39. LowKVMNodesCount
- View the runbook for the LowKVMNodesCount alert.
13.5.40. LowReadyVirtControllersCount
- View the runbook for the LowReadyVirtControllersCount alert.
13.5.41. LowReadyVirtOperatorsCount
- View the runbook for the LowReadyVirtOperatorsCount alert.
13.5.42. LowVirtAPICount
- View the runbook for the LowVirtAPICount alert.
13.5.43. LowVirtControllersCount
- View the runbook for the LowVirtControllersCount alert.
13.5.44. LowVirtOperatorCount
- View the runbook for the LowVirtOperatorCount alert.
13.5.45. NetworkAddonsConfigNotReady
- View the runbook for the NetworkAddonsConfigNotReady alert.
13.5.46. NoLeadingVirtOperator
- View the runbook for the NoLeadingVirtOperator alert.
13.5.47. NoReadyVirtController
- View the runbook for the NoReadyVirtController alert.
13.5.48. NoReadyVirtOperator
- View the runbook for the NoReadyVirtOperator alert.
13.5.49. NodeNetworkInterfaceDown
- View the runbook for the NodeNetworkInterfaceDown alert.
13.5.50. OperatorConditionsUnhealthy
- The OperatorConditionsUnhealthy alert is deprecated.
13.5.51. OrphanedVirtualMachineInstances
- View the runbook for the OrphanedVirtualMachineInstances alert.
13.5.52. OutdatedVirtualMachineInstanceWorkloads
- View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert.
13.5.53. PersistentVolumeFillingUp
- View the runbook for the PersistentVolumeFillingUp alert.
13.5.54. SingleStackIPv6Unsupported
- The SingleStackIPv6Unsupported alert is deprecated.
13.5.55. SSPCommonTemplatesModificationReverted
- View the runbook for the SSPCommonTemplatesModificationReverted alert.
13.5.56. SSPDown
- View the runbook for the SSPDown alert.
13.5.57. SSPFailingToReconcile
- View the runbook for the SSPFailingToReconcile alert.
13.5.58. SSPHighRateRejectedVms
- View the runbook for the SSPHighRateRejectedVms alert.
13.5.59. SSPOperatorDown
- The SSPOperatorDown alert is deprecated.
13.5.60. SSPTemplateValidatorDown
- View the runbook for the SSPTemplateValidatorDown alert.
13.5.61. UnsupportedHCOModification
- View the runbook for the UnsupportedHCOModification alert.
13.5.62. VirtAPIDown
- View the runbook for the VirtAPIDown alert.
13.5.63. VirtApiRESTErrorsBurst
- View the runbook for the VirtApiRESTErrorsBurst alert.
13.5.64. VirtApiRESTErrorsHigh
- The VirtApiRESTErrorsHigh alert is deprecated.
13.5.65. VirtControllerDown
- View the runbook for the VirtControllerDown alert.
13.5.66. VirtControllerRESTErrorsBurst
- View the runbook for the VirtControllerRESTErrorsBurst alert.
13.5.67. VirtControllerRESTErrorsHigh
- The VirtControllerRESTErrorsHigh alert is deprecated.
13.5.68. VirtHandlerDaemonSetRolloutFailing
- View the runbook for the VirtHandlerDaemonSetRolloutFailing alert.
13.5.69. VirtHandlerRESTErrorsBurst
- View the runbook for the VirtHandlerRESTErrorsBurst alert.
13.5.70. VirtHandlerRESTErrorsHigh
- The VirtHandlerRESTErrorsHigh alert is deprecated.
13.5.71. VirtLauncherPodsStuckFailed
- View the runbook for the VirtLauncherPodsStuckFailed alert.
13.5.72. VirtOperatorDown
- View the runbook for the VirtOperatorDown alert.
13.5.73. VirtOperatorRESTErrorsBurst
- View the runbook for the VirtOperatorRESTErrorsBurst alert.
13.5.74. VirtOperatorRESTErrorsHigh
- The VirtOperatorRESTErrorsHigh alert is deprecated.
13.5.75. VirtualMachineCRCErrors
- The VirtualMachineCRCErrors alert is deprecated. The alert is now called VMStorageClassWarning.
13.5.76. VirtualMachineInstanceHasEphemeralHotplugVolume
- View the runbook for the VirtualMachineInstanceHasEphemeralHotplugVolume alert.
13.5.77. VirtualMachineStuckInUnhealthyState
- View the runbook for the VirtualMachineStuckInUnhealthyState alert.
13.5.78. VirtualMachineStuckOnNode
- View the runbook for the VirtualMachineStuckOnNode alert.
13.5.79. VMCannotBeEvicted
- View the runbook for the VMCannotBeEvicted alert.
13.5.80. VMStorageClassWarning
- View the runbook for the VMStorageClassWarning alert.