Chapter 13. Monitoring
13.1. Monitoring overview
You can monitor the health of your cluster and virtual machines (VMs) with the following tools:
- Monitoring OpenShift Virtualization VM health status
- View the overall health of your OpenShift Virtualization environment in the web console by navigating to the Home → Overview page in the Red Hat OpenShift Service on AWS web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
- Prometheus queries for virtual resources
- Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
- VM custom metrics
- Configure the node-exporter service to expose internal VM metrics and processes.
- VM health checks
- Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
- Runbooks
13.2. Prometheus queries for virtual resources
Use the Red Hat OpenShift Service on AWS monitoring dashboard to query virtualization metrics. OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including network, storage, and guest memory swapping. You can also use metrics to query live migration status.
13.2.1. Prerequisites
- For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
13.2.2. Querying metrics for all projects with the Red Hat OpenShift Service on AWS web console
You can use the Red Hat OpenShift Service on AWS metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.
As a dedicated-admin or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Service on AWS and user-defined projects in the Metrics UI.
Only dedicated administrators have access to the third-party UIs provided with Red Hat OpenShift Service on AWS monitoring.
The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, and network packet queries for all projects. You can also run custom Prometheus Query Language (PromQL) queries.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role or with view permissions for all projects.
- You have installed the OpenShift CLI (oc).
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Observe → Metrics.
- To add one or more queries, perform any of the following actions:
- Select an existing query: From the Select query drop-down list, select an existing query.
- Create a custom query: Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item.
- Add multiple queries: Click Add query.
- Duplicate an existing query: Click the options menu next to the query, then choose Duplicate query.
- Disable a query from being run: Click the options menu next to the query and choose Disable query.
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note:
- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
- Hide all metrics from a query: Click the options menu for the query and click Hide all series.
- Hide a specific metric: Go to the query table and click the colored square near the metric name.
- Zoom into the plot and change the time range: Visually select the time range by clicking and dragging on the plot horizontally, or use the menu to select the time range.
- Reset the time range: Click Reset zoom.
- Display outputs for all queries at a specific point in time: Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box.
- Hide the plot: Click Hide graph.
13.2.3. Querying metrics for user-defined projects with the Red Hat OpenShift Service on AWS web console
You can use the Red Hat OpenShift Service on AWS metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, and network packet queries. These queries are restricted to the selected project. You can also run custom Prometheus Query Language (PromQL) queries for the project.
Developers cannot access the third-party UIs provided with Red Hat OpenShift Service on AWS monitoring.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
- You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Observe → Metrics.
- To add one or more queries, perform any of the following actions:
- Select an existing query: From the Select query drop-down list, select an existing query.
- Create a custom query: Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item.
- Add multiple queries: Click Add query.
- Duplicate an existing query: Click the options menu next to the query, then choose Duplicate query.
- Disable a query from being run: Click the options menu next to the query and choose Disable query.
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note:
- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
- Hide all metrics from a query: Click the options menu for the query and click Hide all series.
- Hide a specific metric: Go to the query table and click the colored square near the metric name.
- Zoom into the plot and change the time range: Visually select the time range by clicking and dragging on the plot horizontally, or use the menu to select the time range.
- Reset the time range: Click Reset zoom.
- Display outputs for all queries at a specific point in time: Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box.
- Hide the plot: Click Hide graph.
13.2.4. Virtualization metrics
The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics.
The following examples use topk queries that specify a time period. If virtual machines (VMs) are deleted during that time period, they can still appear in the query output.
13.2.4.1. vCPU metrics
The following query can identify virtual machines that are waiting for Input/Output (I/O):
kubevirt_vmi_vcpu_wait_seconds_total - Returns the wait time (in seconds) on I/O for vCPUs of a virtual machine. Type: Counter.
A value above 0 means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.
To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler.
kubevirt_vmi_vcpu_delay_seconds_total - Returns the cumulative time, in seconds, that a vCPU was enqueued by the host scheduler but could not run immediately. This delay appears to the virtual machine as steal time, which is CPU time lost when the host runs other workloads. Steal time can impact performance and often indicates CPU overcommitment or contention on the host. Type: Counter.
Example vCPU delay query
irate(kubevirt_vmi_vcpu_delay_seconds_total[5m]) > 0.05

This query returns the average per-second delay over a 5-minute period. A high value might indicate CPU overcommitment or contention on the node.
Example vCPU wait time query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0

This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.
13.2.4.2. Network metrics
The following queries can identify virtual machines that are saturating the network:
kubevirt_vmi_network_receive_bytes_total - Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total - Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.
Example network traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0

This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.
13.2.4.3. Storage metrics
13.2.4.3.1. Storage-related traffic
The following queries can identify VMs that are writing large amounts of data:
kubevirt_vmi_storage_read_traffic_bytes_total - Returns the total amount of storage reads (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total - Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
Example storage-related traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0

This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
13.2.4.3.2. Storage snapshot data
kubevirt_vmsnapshot_disks_restored_from_source - Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes - Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.
Examples of storage snapshot data queries
kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"}

This query returns the total number of virtual machine disks restored from the source virtual machine.

kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"}

This query returns the amount of space in bytes restored from the source virtual machine.
13.2.4.3.3. I/O performance
The following queries can determine the I/O performance of storage devices:
kubevirt_vmi_storage_iops_read_total - Returns the number of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total - Returns the number of write I/O operations the virtual machine is performing per second. Type: Counter.
Example I/O performance query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0

This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.
13.2.4.4. Guest memory swapping metrics
The following queries can identify which swap-enabled guests are performing the most memory swapping:
kubevirt_vmi_memory_swap_in_traffic_bytes - Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes - Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.
Example memory swapping query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0

This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
13.2.4.5. Monitoring AAQ operator metrics
The following metrics are exposed by the Application Aware Quota (AAQ) controller for monitoring resource quotas:
kube_application_aware_resourcequota - Returns the current quota usage and the CPU and memory limits enforced by the AAQ Operator resources. Type: Gauge.
kube_application_aware_resourcequota_creation_timestamp - Returns the time, in UNIX timestamp format, when the AAQ Operator resource was created. Type: Gauge.
13.2.4.6. Live migration metrics
The following metrics can be queried to show live migration status:
kubevirt_vmi_migration_data_processed_bytes - The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_vmi_migration_data_remaining_bytes - The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_vmi_migration_memory_transfer_rate_bytes - The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_vmi_migrations_in_pending_phase - The number of pending migrations. Type: Gauge.
kubevirt_vmi_migrations_in_scheduling_phase - The number of scheduling migrations. Type: Gauge.
kubevirt_vmi_migrations_in_running_phase - The number of running migrations. Type: Gauge.
kubevirt_vmi_migration_succeeded - The number of successfully completed migrations. Type: Gauge.
kubevirt_vmi_migration_failed - The number of failed migrations. Type: Gauge.
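As an illustrative sketch composed from the metrics listed above (not an official example from the product), the total number of live migrations that are currently running or pending across the cluster could be queried as follows:

```
sum(kubevirt_vmi_migrations_in_running_phase) + sum(kubevirt_vmi_migrations_in_pending_phase)
```

A sustained non-zero pending count alongside a low running count can suggest that migrations are queued behind cluster capacity or migration policy limits.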
13.3. Exposing custom metrics for virtual machines
Red Hat OpenShift Service on AWS includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.
In addition to using the Red Hat OpenShift Service on AWS monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.
13.3.1. Configuring the node exporter service
The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
- Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
- Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.
Procedure
- Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.
  - 1: The node-exporter service that exposes the metrics from the virtual machines.
  - 2: The namespace where the service is created.
  - 3: The label for the service. The ServiceMonitor uses this label to match this service.
  - 4: The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
  - 5: The target port used by node-exporter-service to listen for requests.
  - 6: The TCP port number of the virtual machine that is configured with the monitor label.
  - 7: The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics is matched.
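The example node-exporter-service.yaml file itself does not appear above. A sketch that matches the numbered callouts might look as follows; the dynamation namespace, the servicetype label key, and the exmet port name are assumptions carried over from later examples in this section:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: node-exporter-service  # <1> the node-exporter service that exposes VM metrics
  namespace: dynamation        # <2> the namespace where the service is created (assumed)
  labels:
    servicetype: metrics       # <3> label that the ServiceMonitor uses to match this service (assumed key)
spec:
  ports:
    - name: exmet              # <4> name of the port that exposes metrics on port 9100
      protocol: TCP
      port: 9100               # <5> port used by node-exporter-service to listen for requests
      targetPort: 9100         # <6> TCP port of the VM configured with the monitor label
  type: ClusterIP
  selector:
    monitor: metrics           # <7> matches any VM pod labeled monitor: metrics
```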
- Create the node-exporter service:

  $ oc create -f node-exporter-service.yaml
13.3.2. Configuring a virtual machine with the node exporter service
Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.
Prerequisites
- The pods for the component are running in the openshift-user-workload-monitoring project.
- Grant the monitoring-edit role to users who need to monitor this user-defined project.
Procedure
- Log on to the virtual machine.
- Download the node-exporter file on to the virtual machine by using the directory path that applies to the version of the node-exporter file:

  $ wget https://github.com/prometheus/node_exporter/releases/download/<version>/node_exporter-<version>.linux-<architecture>.tar.gz

- Extract the executable and place it in the /usr/bin directory:

  $ sudo tar xvf node_exporter-<version>.linux-<architecture>.tar.gz \
      --directory /usr/bin --strip 1 "*/node_exporter"

- Create a node_exporter.service file in the /etc/systemd/system directory. This systemd service file runs the node-exporter service when the virtual machine reboots.
- Enable and start the systemd service:

  $ sudo systemctl enable node_exporter.service
  $ sudo systemctl start node_exporter.service
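The node_exporter.service unit file referenced in the step above is not shown. A minimal sketch that would satisfy that step is shown below; the Description string and the restart settings are illustrative:

```
[Unit]
# Runs the node-exporter agent on boot
Description=Prometheus Metrics Exporter
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
# Path matches the /usr/bin location used in the extraction step
ExecStart=/usr/bin/node_exporter

[Install]
WantedBy=multi-user.target
```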
Verification
- Verify that the node-exporter agent is reporting metrics from the virtual machine:

  $ curl http://localhost:9100/metrics

  Example output

  go_gc_duration_seconds{quantile="0"} 1.5244e-05
  go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
  go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
13.3.3. Creating a custom monitoring label for virtual machines
To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- You have access to the web console to stop and restart a virtual machine.
Procedure
- Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.

  spec:
    template:
      metadata:
        labels:
          monitor: metrics

- Stop and restart the virtual machine to create a new pod with the label name given to the monitor label.
13.3.3.1. Querying the node-exporter service for metrics
Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
- You have installed the OpenShift CLI (oc).
Procedure
- Obtain the HTTP service endpoint by specifying the namespace for the service:

  $ oc get service -n <namespace> <node-exporter-service>

- To list all available metrics for the node-exporter service, query the metrics resource:

  $ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"
13.3.4. Creating a ServiceMonitor resource for the node exporter service
You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
- You have installed the OpenShift CLI (oc).
Procedure
- Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds.
- Create the ServiceMonitor configuration for the node-exporter service:

  $ oc create -f node-exporter-metrics-monitor.yaml
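The ServiceMonitor YAML referenced in the first step does not appear above. A sketch consistent with that description follows; the exmet port and the 30-second interval come from the step text, while the metadata names, the dynamation namespace, and the servicetype: metrics selector are assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: node-exporter-metrics-monitor  # assumed name, matching the file name used in the next step
  namespace: dynamation                # assumed namespace
spec:
  endpoints:
  - interval: 30s   # scrape the endpoint every 30 seconds
    port: exmet     # the named metrics port on the node-exporter service
    scheme: http
  selector:
    matchLabels:
      servicetype: metrics  # assumed label carried by the node-exporter service
```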
13.3.4.1. Accessing the node exporter service outside the cluster
You can access the node-exporter service outside the cluster and view the exposed metrics.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
- You have installed the OpenShift CLI (oc).
Procedure
- Expose the node-exporter service:

  $ oc expose service -n <namespace> <node_exporter_service_name>

- Obtain the FQDN (Fully Qualified Domain Name) for the route:

  $ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host

  Example output

  NAME                    DNS
  node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org

- Use the curl command to display metrics for the node-exporter service:

  $ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics

  Example output

  go_gc_duration_seconds{quantile="0"} 1.5382e-05
  go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
  go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
  go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
  go_gc_duration_seconds{quantile="1"} 0.000189423
13.4. Virtual machine health checks
You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource.
13.4.1. About readiness and liveness probes
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests:
- HTTP GET
- The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
- Guest agent ping
- The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
13.4.1.1. Defining an HTTP readiness probe
Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the readiness probe in the VM configuration file.
Sample readiness probe with an HTTP GET test
- 1: The HTTP GET request to perform to connect to the VM.
- 2: The port of the VM that the probe queries. In the above example, the probe queries port 1500.
- 3: The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
- 4: The time, in seconds, after the VM starts before the readiness probe is initiated.
- 5: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 6: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
- 7: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
- 8: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
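The sample file that the callouts refer to is not shown. A hypothetical sketch of the relevant VirtualMachine fields, assembled from the callouts, might look as follows; the VM name and the delay, period, and threshold values are illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm  # illustrative name
spec:
  template:
    spec:
      readinessProbe:
        httpGet:                    # <1> the HTTP GET request to connect to the VM
          port: 1500                # <2> port that the probe queries
          path: /healthz            # <3> path on the HTTP server
        initialDelaySeconds: 120    # <4> seconds after VM start before probing begins
        periodSeconds: 20           # <5> delay between probes; must exceed timeoutSeconds
        timeoutSeconds: 10          # <6> probe timeout; must be lower than periodSeconds
        failureThreshold: 3         # <7> allowed failures before the pod is marked Unready
        successThreshold: 3         # <8> successes required after a failure
```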
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
13.4.1.2. Defining a TCP readiness probe
Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the TCP readiness probe in the VM configuration file.
Sample readiness probe with a TCP socket test
- 1: The time, in seconds, after the VM starts before the readiness probe is initiated.
- 2: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 3: The TCP action to perform.
- 4: The port of the VM that the probe queries.
- 5: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
13.4.1.3. Defining an HTTP liveness probe
Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.
Prerequisites
- You have installed the OpenShift CLI (oc).
Procedure
Include details of the HTTP liveness probe in the VM configuration file.
Sample liveness probe with an HTTP GET test
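The original listing was lost in extraction. The following sketch shows the relevant probe fields of a KubeVirt VirtualMachine manifest; port and timing values are illustrative, and the callout numbers match the annotations that follow.

```yaml
spec:
  template:
    spec:
      livenessProbe:
        initialDelaySeconds: 120  # 1
        periodSeconds: 20         # 2
        httpGet:                  # 3
          port: 1500              # 4
          path: /healthz          # 5
        timeoutSeconds: 10        # 6
```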
1. The time, in seconds, after the VM starts before the liveness probe is initiated.
2. The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
3. The HTTP GET request to perform to connect to the VM.
4. The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
5. The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created.
6. The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
13.4.2. Defining a watchdog
You can define a watchdog to monitor the health of the guest operating system by performing the following steps:
- Configure a watchdog device for the virtual machine (VM).
- Install the watchdog agent on the guest.
The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:
- poweroff: The VM powers down immediately. If spec.runStrategy is not set to manual, the VM reboots.
- reset: The VM reboots in place and the guest operating system cannot react.

  Note: The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.

- shutdown: The VM gracefully powers down by stopping all services.
Watchdog is not available for Windows VMs.
13.4.2.1. Configuring a watchdog device for the virtual machine
You configure a watchdog device for the virtual machine (VM).
Prerequisites
- For x86 systems, the VM must use a kernel that works with the i6300esb watchdog device. If you use the s390x architecture, the kernel must be enabled for diag288. Red Hat Enterprise Linux (RHEL) images support i6300esb and diag288.
- You have installed the OpenShift CLI (oc).
Procedure
Create a YAML file with the following contents. The example configures the watchdog device on a VM with the poweroff action and exposes the device as /dev/watchdog. This device can now be used by the watchdog binary.
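The listing itself was lost in extraction. The following sketch shows the relevant fields of a KubeVirt VirtualMachine manifest that attaches an i6300esb watchdog with the poweroff action; the VM and device names in angle brackets are placeholders.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  template:
    spec:
      domain:
        devices:
          watchdog:
            name: <watchdog_name>
            i6300esb:             # on s390x, use a diag288 device instead
              action: poweroff    # one of poweroff, reset, shutdown
      # remaining VM spec (volumes, resources, and so on) omitted
```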
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
Verification
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -i

Run one of the following commands to confirm the watchdog is active:

Trigger a kernel panic:

# echo c > /proc/sysrq-trigger

Stop the watchdog service:

# pkill -9 watchdog
13.4.2.2. Installing the watchdog agent on the guest
You install the watchdog agent on the guest and start the watchdog service.
Procedure
- Log in to the virtual machine as the root user.
This step is only required when installing on IBM Z® (s390x). Enable watchdog by running the following command:

# modprobe diag288_wdt

Verify that the /dev/watchdog file path is present in the VM by running the following command:

# ls /dev/watchdog

Install the watchdog package and its dependencies:

# yum install watchdog

Uncomment the following line in the /etc/watchdog.conf file and save the changes:

#watchdog-device = /dev/watchdog

Enable the watchdog service to start on boot:

# systemctl enable --now watchdog.service
13.5. OpenShift Virtualization runbooks
To diagnose and resolve issues that trigger OpenShift Virtualization alerts, follow the procedures in the runbooks for the OpenShift Virtualization Operator. Triggered OpenShift Virtualization alerts can be viewed in the main Observe → Alerts page of the web console.
Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub.
13.5.1. CDIDataImportCronOutdated
- View the runbook for the CDIDataImportCronOutdated alert.
13.5.2. CDIDataVolumeUnusualRestartCount
- View the runbook for the CDIDataVolumeUnusualRestartCount alert.
13.5.3. CDIDefaultStorageClassDegraded
- View the runbook for the CDIDefaultStorageClassDegraded alert.
13.5.4. CDIMultipleDefaultVirtStorageClasses
- View the runbook for the CDIMultipleDefaultVirtStorageClasses alert.
13.5.5. CDINoDefaultStorageClass
- View the runbook for the CDINoDefaultStorageClass alert.
13.5.6. CDINotReady
- View the runbook for the CDINotReady alert.
13.5.7. CDIOperatorDown
- View the runbook for the CDIOperatorDown alert.
13.5.8. CDIStorageProfilesIncomplete
- View the runbook for the CDIStorageProfilesIncomplete alert.
13.5.9. CnaoDown
- View the runbook for the CnaoDown alert.
13.5.10. CnaoNMstateMigration
- View the runbook for the CnaoNMstateMigration alert.
13.5.11. HAControlPlaneDown
- View the runbook for the HAControlPlaneDown alert.
13.5.12. HCOInstallationIncomplete
- View the runbook for the HCOInstallationIncomplete alert.
13.5.13. HCOMisconfiguredDescheduler
- View the runbook for the HCOMisconfiguredDescheduler alert.
13.5.14. HPPNotReady
- View the runbook for the HPPNotReady alert.
13.5.15. HPPOperatorDown
- View the runbook for the HPPOperatorDown alert.
13.5.16. HPPSharingPoolPathWithOS
- View the runbook for the HPPSharingPoolPathWithOS alert.
13.5.17. HighCPUWorkload
- View the runbook for the HighCPUWorkload alert.
13.5.18. KubemacpoolDown
- View the runbook for the KubemacpoolDown alert.
13.5.19. KubeMacPoolDuplicateMacsFound
- View the runbook for the KubeMacPoolDuplicateMacsFound alert.
13.5.20. KubeVirtComponentExceedsRequestedCPU
- The KubeVirtComponentExceedsRequestedCPU alert is deprecated.
13.5.21. KubeVirtComponentExceedsRequestedMemory
- The KubeVirtComponentExceedsRequestedMemory alert is deprecated.
13.5.22. KubeVirtCRModified
- View the runbook for the KubeVirtCRModified alert.
13.5.23. KubeVirtDeprecatedAPIRequested
- View the runbook for the KubeVirtDeprecatedAPIRequested alert.
13.5.24. KubeVirtNoAvailableNodesToRunVMs
- View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert.
13.5.25. KubevirtVmHighMemoryUsage
- View the runbook for the KubevirtVmHighMemoryUsage alert.
13.5.26. KubeVirtVMIExcessiveMigrations
- View the runbook for the KubeVirtVMIExcessiveMigrations alert.
13.5.27. LowKVMNodesCount
- View the runbook for the LowKVMNodesCount alert.
13.5.28. LowReadyVirtControllersCount
- View the runbook for the LowReadyVirtControllersCount alert.
13.5.29. LowReadyVirtOperatorsCount
- View the runbook for the LowReadyVirtOperatorsCount alert.
13.5.30. LowVirtAPICount
- View the runbook for the LowVirtAPICount alert.
13.5.31. LowVirtControllersCount
- View the runbook for the LowVirtControllersCount alert.
13.5.32. LowVirtOperatorCount
- View the runbook for the LowVirtOperatorCount alert.
13.5.33. NetworkAddonsConfigNotReady
- View the runbook for the NetworkAddonsConfigNotReady alert.
13.5.34. NoLeadingVirtOperator
- View the runbook for the NoLeadingVirtOperator alert.
13.5.35. NoReadyVirtController
- View the runbook for the NoReadyVirtController alert.
13.5.36. NoReadyVirtOperator
- View the runbook for the NoReadyVirtOperator alert.
13.5.37. NodeNetworkInterfaceDown
- View the runbook for the NodeNetworkInterfaceDown alert.
13.5.38. OperatorConditionsUnhealthy
- The OperatorConditionsUnhealthy alert is deprecated.
13.5.39. OrphanedVirtualMachineInstances
- View the runbook for the OrphanedVirtualMachineInstances alert.
13.5.40. OutdatedVirtualMachineInstanceWorkloads
- View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert.
13.5.41. SingleStackIPv6Unsupported
- View the runbook for the SingleStackIPv6Unsupported alert.
13.5.42. SSPCommonTemplatesModificationReverted
- View the runbook for the SSPCommonTemplatesModificationReverted alert.
13.5.43. SSPDown
- View the runbook for the SSPDown alert.
13.5.44. SSPFailingToReconcile
- View the runbook for the SSPFailingToReconcile alert.
13.5.45. SSPHighRateRejectedVms
- View the runbook for the SSPHighRateRejectedVms alert.
13.5.46. SSPOperatorDown
- View the runbook for the SSPOperatorDown alert.
13.5.47. SSPTemplateValidatorDown
- View the runbook for the SSPTemplateValidatorDown alert.
13.5.48. UnsupportedHCOModification
- View the runbook for the UnsupportedHCOModification alert.
13.5.49. VirtAPIDown
- View the runbook for the VirtAPIDown alert.
13.5.50. VirtApiRESTErrorsBurst
- View the runbook for the VirtApiRESTErrorsBurst alert.
13.5.51. VirtApiRESTErrorsHigh
- View the runbook for the VirtApiRESTErrorsHigh alert.
13.5.52. VirtControllerDown
- View the runbook for the VirtControllerDown alert.
13.5.53. VirtControllerRESTErrorsBurst
- View the runbook for the VirtControllerRESTErrorsBurst alert.
13.5.54. VirtControllerRESTErrorsHigh
- View the runbook for the VirtControllerRESTErrorsHigh alert.
13.5.55. VirtHandlerDaemonSetRolloutFailing
- View the runbook for the VirtHandlerDaemonSetRolloutFailing alert.
13.5.56. VirtHandlerRESTErrorsBurst
- View the runbook for the VirtHandlerRESTErrorsBurst alert.
13.5.57. VirtHandlerRESTErrorsHigh
- View the runbook for the VirtHandlerRESTErrorsHigh alert.
13.5.58. VirtOperatorDown
- View the runbook for the VirtOperatorDown alert.
13.5.59. VirtOperatorRESTErrorsBurst
- View the runbook for the VirtOperatorRESTErrorsBurst alert.
13.5.60. VirtOperatorRESTErrorsHigh
- View the runbook for the VirtOperatorRESTErrorsHigh alert.
13.5.61. VirtualMachineCRCErrors
- The VirtualMachineCRCErrors alert is deprecated. The alert is now called VMStorageClassWarning.
13.5.62. VMCannotBeEvicted
- View the runbook for the VMCannotBeEvicted alert.
13.5.63. VMStorageClassWarning
- View the runbook for the VMStorageClassWarning alert.