Chapter 11. Monitoring
11.1. Monitoring overview
You can monitor the health of your cluster and virtual machines (VMs) with the following tools:
- Monitoring OpenShift Virtualization VM health status
- View the overall health of your OpenShift Virtualization environment in the web console by navigating to the Home → Overview page in the Red Hat OpenShift Service on AWS web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
- Prometheus queries for virtual resources
- Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
- VM custom metrics
- Configure the node-exporter service to expose internal VM metrics and processes.
- VM health checks
- Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
- Runbooks
- Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the Red Hat OpenShift Service on AWS web console.
11.2. Prometheus queries for virtual resources
Use the Red Hat OpenShift Service on AWS monitoring dashboard to query virtualization metrics. OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including network, storage, and guest memory swapping. You can also use metrics to query live migration status.
11.2.1. Prerequisites
- For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
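How swapping is enabled depends on the guest operating system and is outside the scope of this chapter. The following is a minimal sketch for a Linux guest that adds a temporary 2 GiB swap file; the size is illustrative, and making the swap persistent across reboots is distribution-specific.

$ sudo fallocate -l 2G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ swapon --show

The swapon --show output confirms that the guest now has active swap space.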
11.2.2. Querying metrics
The Red Hat OpenShift Service on AWS monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.
As a dedicated-admin, you can query one or more namespaces at a time for metrics about user-defined projects.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
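For example, a custom PromQL expression can be scoped to a single project by filtering on the namespace label. The project name my-project below is a placeholder:

sum by (name) (rate(kubevirt_vmi_network_receive_bytes_total{namespace="my-project"}[5m]))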
11.2.2.1. Querying metrics for all projects as a cluster administrator
As a dedicated-admin or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Service on AWS and user-defined projects in the Metrics UI.
Only dedicated administrators have access to the third-party UIs provided with Red Hat OpenShift Service on AWS monitoring.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role or with view permissions for all projects.
- You have installed the OpenShift CLI (oc).
Procedure
- From the Administrator perspective in the Red Hat OpenShift Service on AWS web console, select Observe → Metrics.
- To add one or more queries, do any of the following:
  - Create a custom query: Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item.
  - Add multiple queries: Select Add query.
  - Duplicate an existing query: Select the Options menu next to the query, then choose Duplicate query.
  - Disable a query from being run: Select the Options menu next to the query and choose Disable query.
- To run queries that you created, select Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note: Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
Note: By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query.
- Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL.
- Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following:
  - Hide all metrics from a query: Click the Options menu for the query and click Hide all series.
  - Hide a specific metric: Go to the query table and click the colored square near the metric name.
  - Zoom into the plot and change the time range: Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu in the upper left corner to select the time range.
  - Reset the time range: Select Reset zoom.
  - Display outputs for all queries at a specific point in time: Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box.
  - Hide the plot: Select Hide graph.
11.2.2.2. Querying metrics for user-defined projects as a developer
You can access metrics for a user-defined project as a developer or as a user with view permissions for the project.
In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project.
Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Developers cannot access the third-party UIs provided with Red Hat OpenShift Service on AWS monitoring.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
- You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.
Procedure
- From the Developer perspective in the Red Hat OpenShift Service on AWS web console, select Observe → Metrics.
- Select the project that you want to view metrics for in the Project: list.
- Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL. The metrics from the queries are visualized on the plot.
Note: In the Developer perspective, you can only run one query at a time.
- Explore the visualized metrics by doing any of the following:
  - Zoom into the plot and change the time range: Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu in the upper left corner to select the time range.
  - Reset the time range: Select Reset zoom.
  - Display outputs for all queries at a specific point in time: Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box.
11.2.3. Virtualization metrics
The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics.
The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output.
11.2.3.1. Network metrics
The following queries can identify virtual machines that are saturating the network:
kubevirt_vmi_network_receive_bytes_total
- Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total
- Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.
Example network traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1
- 1
- This query returns the top 3 VMs transmitting and receiving the most network traffic at every given moment over a six-minute time period.
11.2.3.2. Storage metrics
11.2.3.2.1. Storage-related traffic
The following queries can identify VMs that are writing large amounts of data:
kubevirt_vmi_storage_read_traffic_bytes_total
- Returns the total amount of storage reads (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total
- Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
Example storage-related traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1
- 1
- This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
11.2.3.2.2. Storage snapshot data
kubevirt_vmsnapshot_disks_restored_from_source
- Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes
- Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.
Examples of storage snapshot data queries
kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"} 1
- 1
- This query returns the total number of virtual machine disks restored from the source virtual machine.
kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"} 1
- 1
- This query returns the amount of space in bytes restored from the source virtual machine.
11.2.3.2.3. I/O performance
The following queries can determine the I/O performance of storage devices:
kubevirt_vmi_storage_iops_read_total
- Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total
- Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter.
Example I/O performance query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1
- 1
- This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.
11.2.3.3. Guest memory swapping metrics
The following queries can identify which swap-enabled guests are performing the most memory swapping:
kubevirt_vmi_memory_swap_in_traffic_bytes
- Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes
- Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.
Example memory swapping query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0 1
- 1
- This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
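As an illustration only, the guest memory of a VM can be raised by editing the VirtualMachine spec; the VM name and the 8Gi value below are placeholders, and the VM typically must be restarted for the new allocation to take effect.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  template:
    spec:
      domain:
        memory:
          guest: 8Gi # increase the guest-visible memory; the value is illustrative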
11.2.3.4. Live migration metrics
The following metrics can be queried to show live migration status:
kubevirt_vmi_migration_data_processed_bytes
- The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_vmi_migration_data_remaining_bytes
- The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_vmi_migration_memory_transfer_rate_bytes
- The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_vmi_migrations_in_pending_phase
- The number of pending migrations. Type: Gauge.
kubevirt_vmi_migrations_in_scheduling_phase
- The number of scheduling migrations. Type: Gauge.
kubevirt_vmi_migrations_in_running_phase
- The number of running migrations. Type: Gauge.
kubevirt_vmi_migration_succeeded
- The number of successfully completed migrations. Type: Gauge.
kubevirt_vmi_migration_failed
- The number of failed migrations. Type: Gauge.
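As a sketch, the following instant queries show the data remaining for one migration and the number of currently running migrations. The name and namespace label values are placeholders and follow the same labeling convention as the other VMI metrics in this section.

kubevirt_vmi_migration_data_remaining_bytes{name="<vm_name>", namespace="<namespace>"}

sum(kubevirt_vmi_migrations_in_running_phase)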
11.2.4. Additional resources
11.3. Exposing custom metrics for virtual machines
Red Hat OpenShift Service on AWS includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.
In addition to using the Red Hat OpenShift Service on AWS monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.
11.3.1. Configuring the node exporter service
The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.
Prerequisites
- Install the Red Hat OpenShift Service on AWS CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
- Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
- Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.
Procedure
Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.

kind: Service
apiVersion: v1
metadata:
  name: node-exporter-service 1
  namespace: dynamation 2
  labels:
    servicetype: metrics 3
spec:
  ports:
  - name: exmet 4
    protocol: TCP
    port: 9100 5
    targetPort: 9100 6
  type: ClusterIP
  selector:
    monitor: metrics 7
- 1
- The node-exporter service that exposes the metrics from the virtual machines.
- 2
- The namespace where the service is created.
- 3
- The label for the service. The ServiceMonitor uses this label to match this service.
- 4
- The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
- 5
- The target port used by node-exporter-service to listen for requests.
- 6
- The TCP port number of the virtual machine that is configured with the monitor label.
- 7
- The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics will be matched.
Create the node-exporter service:
$ oc create -f node-exporter-service.yaml
11.3.2. Configuring a virtual machine with the node exporter service
Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.
Prerequisites
- The pods for the component are running in the openshift-user-workload-monitoring project.
- Grant the monitoring-edit role to users who need to monitor this user-defined project (a CLI sketch follows this list).
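A minimal sketch of granting that role from the CLI; the user name and namespace values are placeholders:

$ oc policy add-role-to-user monitoring-edit <user_name> -n <namespace>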
Procedure
- Log on to the virtual machine.
- Download the node-exporter file on to the virtual machine by using the directory path that applies to the version of the node-exporter file.

  $ wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
- Extract the executable and place it in the /usr/bin directory.

  $ sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \
      --directory /usr/bin --strip 1 "*/node_exporter"
- Create a node_exporter.service file in the /etc/systemd/system directory. This systemd service file runs the node-exporter service when the virtual machine reboots.

  [Unit]
  Description=Prometheus Metrics Exporter
  After=network.target
  StartLimitIntervalSec=0

  [Service]
  Type=simple
  Restart=always
  RestartSec=1
  User=root
  ExecStart=/usr/bin/node_exporter

  [Install]
  WantedBy=multi-user.target
- Enable and start the systemd service.

  $ sudo systemctl enable node_exporter.service
  $ sudo systemctl start node_exporter.service
Verification
Verify that the node-exporter agent is reporting metrics from the virtual machine.
$ curl http://localhost:9100/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5244e-05 go_gc_duration_seconds{quantile="0.25"} 3.0449e-05 go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
11.3.3. Creating a custom monitoring label for virtual machines
To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.
Prerequisites
- Install the Red Hat OpenShift Service on AWS CLI (oc).
- Log in as a user with cluster-admin privileges.
- Access to the web console for stopping and restarting a virtual machine.
Procedure
- Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.

  spec:
    template:
      metadata:
        labels:
          monitor: metrics
- Stop and restart the virtual machine to create a new pod with the label name given to the monitor label (a CLI sketch follows this procedure).
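A minimal sketch of stopping and restarting the VM from the CLI, assuming the virtctl client is installed; the VM name and namespace are placeholders:

$ virtctl stop <vm_name> -n <namespace>
$ virtctl start <vm_name> -n <namespace>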
11.3.3.1. Querying the node-exporter service for metrics
Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Obtain the HTTP service endpoint by specifying the namespace for the service:
$ oc get service -n <namespace> <node-exporter-service>
To list all available metrics for the node-exporter service, query the metrics resource.

$ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"
Example output
node_arp_entries{device="eth0"} 1
node_boot_time_seconds 1.643153218e+09
node_context_switches_total 4.4938158e+07
node_cooling_device_cur_state{name="0",type="Processor"} 0
node_cooling_device_max_state{name="0",type="Processor"} 0
node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
node_cpu_seconds_total{cpu="0",mode="irq"} 233.91
node_cpu_seconds_total{cpu="0",mode="nice"} 551.47
node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3
node_cpu_seconds_total{cpu="0",mode="steal"} 86.12
node_cpu_seconds_total{cpu="0",mode="system"} 464.15
node_cpu_seconds_total{cpu="0",mode="user"} 1075.2
node_disk_discard_time_seconds_total{device="vda"} 0
node_disk_discard_time_seconds_total{device="vdb"} 0
node_disk_discarded_sectors_total{device="vda"} 0
node_disk_discarded_sectors_total{device="vdb"} 0
node_disk_discards_completed_total{device="vda"} 0
node_disk_discards_completed_total{device="vdb"} 0
node_disk_discards_merged_total{device="vda"} 0
node_disk_discards_merged_total{device="vdb"} 0
node_disk_info{device="vda",major="252",minor="0"} 1
node_disk_info{device="vdb",major="252",minor="16"} 1
node_disk_io_now{device="vda"} 0
node_disk_io_now{device="vdb"} 0
node_disk_io_time_seconds_total{device="vda"} 174
node_disk_io_time_seconds_total{device="vdb"} 0.054
node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003
node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039
node_disk_read_bytes_total{device="vda"} 3.71867136e+08
node_disk_read_bytes_total{device="vdb"} 366592
node_disk_read_time_seconds_total{device="vda"} 19.128
node_disk_read_time_seconds_total{device="vdb"} 0.039
node_disk_reads_completed_total{device="vda"} 5619
node_disk_reads_completed_total{device="vdb"} 96
node_disk_reads_merged_total{device="vda"} 5
node_disk_reads_merged_total{device="vdb"} 0
node_disk_write_time_seconds_total{device="vda"} 240.66400000000002
node_disk_write_time_seconds_total{device="vdb"} 0
node_disk_writes_completed_total{device="vda"} 71584
node_disk_writes_completed_total{device="vdb"} 0
node_disk_writes_merged_total{device="vda"} 19761
node_disk_writes_merged_total{device="vdb"} 0
node_disk_written_bytes_total{device="vda"} 2.007924224e+09
node_disk_written_bytes_total{device="vdb"} 0
11.3.4. Creating a ServiceMonitor resource for the node exporter service
You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter-metrics-monitor
  name: node-exporter-metrics-monitor 1
  namespace: dynamation 2
spec:
  endpoints:
  - interval: 30s 3
    port: exmet 4
    scheme: http
  selector:
    matchLabels:
      servicetype: metrics
Create the ServiceMonitor configuration for the node-exporter service.

$ oc create -f node-exporter-metrics-monitor.yaml
11.3.4.1. Accessing the node exporter service outside the cluster
You can access the node-exporter service outside the cluster and view the exposed metrics.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Expose the node-exporter service.
$ oc expose service -n <namespace> <node_exporter_service_name>
Obtain the FQDN (Fully Qualified Domain Name) for the route.
$ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host
Example output
NAME                    DNS
node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org
Use the curl command to display metrics for the node-exporter service.

$ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5382e-05
go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
go_gc_duration_seconds{quantile="1"} 0.000189423
11.3.5. Additional resources
11.4. Virtual machine health checks
You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource.
11.4.1. About readiness and liveness probes
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests:
- HTTP GET
- The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
- Guest agent ping
- The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
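Unlike the HTTP GET and TCP socket tests, the guest agent ping probe has no dedicated example later in this section. A minimal sketch of a readiness probe that uses it, assuming the QEMU guest agent is installed and running in the guest; the VM name and the timing values are illustrative:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
  namespace: example-namespace
spec:
  template:
    spec:
      readinessProbe:
        guestAgentPing: {} # succeeds only while the QEMU guest agent responds
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3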
11.4.1.1. Defining an HTTP readiness probe
Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration.
Procedure
Include details of the readiness probe in the VM configuration file.
Sample readiness probe with an HTTP GET test
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      readinessProbe:
        httpGet: 1
          port: 1500 2
          path: /healthz 3
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        initialDelaySeconds: 120 4
        periodSeconds: 20 5
        timeoutSeconds: 10 6
        failureThreshold: 3 7
        successThreshold: 3 8
# ...
- 1
- The HTTP GET request to perform to connect to the VM.
- 2
- The port of the VM that the probe queries. In the above example, the probe queries port 1500.
- 3
- The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
- 4
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 5
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
- 7
- The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
- 8
- The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
11.4.1.2. Defining a TCP readiness probe
Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration.
Procedure
Include details of the TCP readiness probe in the VM configuration file.
Sample readiness probe with a TCP socket test
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      readinessProbe:
        initialDelaySeconds: 120 1
        periodSeconds: 20 2
        tcpSocket: 3
          port: 1500 4
        timeoutSeconds: 10 5
# ...
- 1
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than
timeoutSeconds
. - 3
- The TCP action to perform.
- 4
- The port of the VM that the probe queries.
- 5
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
11.4.1.3. Defining an HTTP liveness probe
Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.
Procedure
Include details of the HTTP liveness probe in the VM configuration file.
Sample liveness probe with an HTTP GET test
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      livenessProbe:
        initialDelaySeconds: 120 1
        periodSeconds: 20 2
        httpGet: 3
          port: 1500 4
          path: /healthz 5
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        timeoutSeconds: 10 6
# ...
- 1
- The time, in seconds, after the VM starts before the liveness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 3
- The HTTP GET request to perform to connect to the VM.
- 4
- The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
- 5
- The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created.
- 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
11.4.2. Defining a watchdog
You can define a watchdog to monitor the health of the guest operating system by performing the following steps:
- Configure a watchdog device for the virtual machine (VM).
- Install the watchdog agent on the guest.
The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:
- poweroff: The VM powers down immediately. If spec.running is set to true or spec.runStrategy is not set to manual, then the VM reboots.
- reset: The VM reboots in place and the guest operating system cannot react. Note: The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.
- shutdown: The VM gracefully powers down by stopping all services.
Watchdog is not available for Windows VMs.
11.4.2.1. Configuring a watchdog device for the virtual machine
You configure a watchdog device for the virtual machine (VM).
Prerequisites
- The VM must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb.
Procedure
Create a YAML file with the following contents:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm2-rhel84-watchdog
  name: <vm-name>
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm2-rhel84-watchdog
    spec:
      domain:
        devices:
          watchdog:
            name: <watchdog>
            i6300esb:
              action: "poweroff" 1
# ...
- 1
- Specify poweroff, reset, or shutdown.
The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog. This device can now be used by the watchdog binary.
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -i
Run one of the following commands to confirm the watchdog is active:
Trigger a kernel panic:
# echo c > /proc/sysrq-trigger
Stop the watchdog service:
# pkill -9 watchdog
11.4.2.2. Installing the watchdog agent on the guest
You install the watchdog agent on the guest and start the watchdog service.
Procedure
- Log in to the virtual machine as root user.
- Install the watchdog package and its dependencies:

  # yum install watchdog
- Uncomment the following line in the /etc/watchdog.conf file and save the changes:

  #watchdog-device = /dev/watchdog
- Enable the watchdog service to start on boot:

  # systemctl enable --now watchdog.service
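Optionally (this check is not part of the original procedure), confirm that the service is active and that the watchdog device node is present in the guest:

# systemctl is-active watchdog.service
# ls -l /dev/watchdog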
11.5. OpenShift Virtualization runbooks
To diagnose and resolve issues that trigger OpenShift Virtualization alerts, follow the procedures in the runbooks for the OpenShift Virtualization Operator. Triggered OpenShift Virtualization alerts can be viewed in the main Observe → Alerts page of the web console.
Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub.
11.5.1. CDIDataImportCronOutdated
- View the runbook for the CDIDataImportCronOutdated alert.
11.5.2. CDIDataVolumeUnusualRestartCount
- View the runbook for the CDIDataVolumeUnusualRestartCount alert.
11.5.3. CDIDefaultStorageClassDegraded
- View the runbook for the CDIDefaultStorageClassDegraded alert.
11.5.4. CDIMultipleDefaultVirtStorageClasses
- View the runbook for the CDIMultipleDefaultVirtStorageClasses alert.
11.5.5. CDINoDefaultStorageClass
- View the runbook for the CDINoDefaultStorageClass alert.
11.5.6. CDINotReady
- View the runbook for the CDINotReady alert.
11.5.7. CDIOperatorDown
- View the runbook for the CDIOperatorDown alert.
11.5.8. CDIStorageProfilesIncomplete
- View the runbook for the CDIStorageProfilesIncomplete alert.
11.5.9. CnaoDown
- View the runbook for the CnaoDown alert.
11.5.10. CnaoNMstateMigration
- View the runbook for the CnaoNMstateMigration alert.
11.5.11. HCOInstallationIncomplete
- View the runbook for the HCOInstallationIncomplete alert.
11.5.12. HPPNotReady
- View the runbook for the HPPNotReady alert.
11.5.13. HPPOperatorDown
- View the runbook for the HPPOperatorDown alert.
11.5.14. HPPSharingPoolPathWithOS
- View the runbook for the HPPSharingPoolPathWithOS alert.
11.5.15. KubemacpoolDown
- View the runbook for the KubemacpoolDown alert.
11.5.16. KubeMacPoolDuplicateMacsFound
- View the runbook for the KubeMacPoolDuplicateMacsFound alert.
11.5.17. KubeVirtComponentExceedsRequestedCPU
- The KubeVirtComponentExceedsRequestedCPU alert is deprecated.
11.5.18. KubeVirtComponentExceedsRequestedMemory
- The KubeVirtComponentExceedsRequestedMemory alert is deprecated.
11.5.19. KubeVirtCRModified
- View the runbook for the KubeVirtCRModified alert.
11.5.20. KubeVirtDeprecatedAPIRequested
- View the runbook for the KubeVirtDeprecatedAPIRequested alert.
11.5.21. KubeVirtNoAvailableNodesToRunVMs
- View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert.
11.5.22. KubevirtVmHighMemoryUsage
- View the runbook for the KubevirtVmHighMemoryUsage alert.
11.5.23. KubeVirtVMIExcessiveMigrations
- View the runbook for the KubeVirtVMIExcessiveMigrations alert.
11.5.24. LowKVMNodesCount
- View the runbook for the LowKVMNodesCount alert.
11.5.25. LowReadyVirtControllersCount
- View the runbook for the LowReadyVirtControllersCount alert.
11.5.26. LowReadyVirtOperatorsCount
- View the runbook for the LowReadyVirtOperatorsCount alert.
11.5.27. LowVirtAPICount
- View the runbook for the LowVirtAPICount alert.
11.5.28. LowVirtControllersCount
- View the runbook for the LowVirtControllersCount alert.
11.5.29. LowVirtOperatorCount
- View the runbook for the LowVirtOperatorCount alert.
11.5.30. NetworkAddonsConfigNotReady
- View the runbook for the NetworkAddonsConfigNotReady alert.
11.5.31. NoLeadingVirtOperator
- View the runbook for the NoLeadingVirtOperator alert.
11.5.32. NoReadyVirtController
- View the runbook for the NoReadyVirtController alert.
11.5.33. NoReadyVirtOperator
- View the runbook for the NoReadyVirtOperator alert.
11.5.34. OrphanedVirtualMachineInstances
- View the runbook for the OrphanedVirtualMachineInstances alert.
11.5.35. OutdatedVirtualMachineInstanceWorkloads
- View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert.
11.5.36. SingleStackIPv6Unsupported
- View the runbook for the SingleStackIPv6Unsupported alert.
11.5.37. SSPCommonTemplatesModificationReverted
- View the runbook for the SSPCommonTemplatesModificationReverted alert.
11.5.38. SSPDown
- View the runbook for the SSPDown alert.
11.5.39. SSPFailingToReconcile
- View the runbook for the SSPFailingToReconcile alert.
11.5.40. SSPHighRateRejectedVms
- View the runbook for the SSPHighRateRejectedVms alert.
11.5.41. SSPTemplateValidatorDown
- View the runbook for the SSPTemplateValidatorDown alert.
11.5.42. SSPOperatorDown
- View the runbook for the SSPOperatorDown alert.
11.5.43. UnsupportedHCOModification
- View the runbook for the UnsupportedHCOModification alert.
11.5.44. VirtAPIDown
- View the runbook for the VirtAPIDown alert.
11.5.45. VirtApiRESTErrorsBurst
- View the runbook for the VirtApiRESTErrorsBurst alert.
11.5.46. VirtApiRESTErrorsHigh
- View the runbook for the VirtApiRESTErrorsHigh alert.
11.5.47. VirtControllerDown
- View the runbook for the VirtControllerDown alert.
11.5.48. VirtControllerRESTErrorsBurst
- View the runbook for the VirtControllerRESTErrorsBurst alert.
11.5.49. VirtControllerRESTErrorsHigh
- View the runbook for the VirtControllerRESTErrorsHigh alert.
11.5.50. VirtHandlerDaemonSetRolloutFailing
- View the runbook for the VirtHandlerDaemonSetRolloutFailing alert.
11.5.51. VirtHandlerRESTErrorsBurst
- View the runbook for the VirtHandlerRESTErrorsBurst alert.
11.5.52. VirtHandlerRESTErrorsHigh
- View the runbook for the VirtHandlerRESTErrorsHigh alert.
11.5.53. VirtOperatorDown
- View the runbook for the VirtOperatorDown alert.
11.5.54. VirtOperatorRESTErrorsBurst
- View the runbook for the VirtOperatorRESTErrorsBurst alert.
11.5.55. VirtOperatorRESTErrorsHigh
- View the runbook for the VirtOperatorRESTErrorsHigh alert.
11.5.56. VirtualMachineCRCErrors
The runbook for the VirtualMachineCRCErrors alert is deprecated because the alert was renamed to VMStorageClassWarning.
- View the runbook for the VMStorageClassWarning alert.
11.5.57. VMCannotBeEvicted
- View the runbook for the VMCannotBeEvicted alert.
11.5.58. VMStorageClassWarning
- View the runbook for the VMStorageClassWarning alert.