Chapter 12. Monitoring
12.1. Monitoring overview
You can monitor the health of your cluster and virtual machines (VMs) with the following tools:
- Monitoring OpenShift Virtualization VM health status
  - View the overall health of your OpenShift Virtualization environment in the web console by navigating to the Home → Overview page in the OpenShift Container Platform web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
- OpenShift Container Platform cluster checkup framework
  - Run automated tests on your cluster with the OpenShift Container Platform cluster checkup framework to check the following conditions:
    - Network connectivity and latency between two VMs attached to a secondary network interface
    - VM running a Data Plane Development Kit (DPDK) workload with zero packet loss
- Prometheus queries for virtual resources
  - Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
- VM custom metrics
  - Configure the node-exporter service to expose internal VM metrics and processes.
- VM health checks
  - Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
- Runbooks
  - Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the OpenShift Container Platform web console.
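As an illustration of the VM health checks listed above, a probe can be declared directly in the VirtualMachine spec. The following is a minimal sketch of an HTTP readiness probe; the VM name, port, and timing values are assumptions for illustration, not values prescribed by this chapter:

```yaml
# Illustrative sketch only: an HTTP readiness probe on a VirtualMachine.
# The VM name (fedora-vm), port 1500, and timing values are assumptions.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
spec:
  template:
    spec:
      readinessProbe:
        httpGet:                   # probe an HTTP endpoint served inside the guest
          port: 1500
        initialDelaySeconds: 120   # allow time for the guest OS to boot
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
```

Liveness and guest agent ping probes follow the same shape under the same spec.template.spec stanza.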
12.2. OpenShift Virtualization cluster checkup framework
A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.
The OpenShift Virtualization cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
As a developer or cluster administrator, you can use predefined checkups to improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. You can review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.
12.2.1. Running predefined latency checkups
You can use a latency checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The predefined latency checkup uses the ping utility.
Before you run a latency checkup, you must first create a bridge interface on the cluster nodes to connect the VM’s secondary interface to any interface on the node. If you do not create a bridge interface, the VMs do not start and the job fails.
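For reference, a bridge-based network attachment definition for the VMs' secondary interface might look like the following sketch. The bridge name br0 is an assumption about your node configuration, and the NAD name blue-network matches the name used in the example config maps later in this section:

```yaml
# Illustrative sketch: a bridge NetworkAttachmentDefinition for the VMs'
# secondary interface. The bridge br0 must already exist on the cluster nodes.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: blue-network
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "blue-network",
      "type": "cnv-bridge",
      "bridge": "br0"
    }
```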
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and deleting the checkup resources afterward.
You must always:
- Verify that the checkup image is from a trustworthy source before applying it.
- Review the checkup permissions before creating the Role and RoleBinding objects.
12.2.1.1. Running a latency checkup
You run a latency checkup using the CLI by performing the following steps:
- Create a service account, roles, and role bindings to provide cluster access permissions to the latency checkup.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the latency checkup resources.
Prerequisites
- You installed the OpenShift CLI (oc).
- The cluster has at least two worker nodes.
- You configured a network attachment definition for a namespace.
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the latency checkup:

Example 12.1. Example role manifest file

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vm-latency-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-vm-latency-checker
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstances"]
  verbs: ["get", "create", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachineinstances/console"]
  verbs: ["get"]
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["network-attachment-definitions"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-vm-latency-checker
subjects:
- kind: ServiceAccount
  name: vm-latency-checkup-sa
roleRef:
  kind: Role
  name: kubevirt-vm-latency-checker
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
- apiGroups: [ "" ]
  resources: [ "configmaps" ]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
  name: vm-latency-checkup-sa
roleRef:
  kind: Role
  name: kiagnose-configmap-access
  apiGroup: rbac.authorization.k8s.io
```

Apply the ServiceAccount, Role, and RoleBinding manifest:

```terminal
$ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml
```

where:

- <target_namespace>: Specifies the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
Create a ConfigMap manifest that contains the input parameters for the checkup:

Example input config map

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
  spec.param.networkAttachmentDefinitionName: "blue-network"
  spec.param.maxDesiredLatencyMilliseconds: "10"
  spec.param.sampleDurationSeconds: "5"
  spec.param.sourceNode: "worker1"
  spec.param.targetNode: "worker2"
```

where:

- data.spec.param.networkAttachmentDefinitionName: Specifies the name of the NetworkAttachmentDefinition object.
- data.spec.param.maxDesiredLatencyMilliseconds: Optional: Specifies the maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
- data.spec.param.sampleDurationSeconds: Optional: Specifies the duration of the latency check, in seconds.
- data.spec.param.sourceNode: Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty.
- data.spec.param.targetNode: Optional: When specified, latency is measured from the source node to this node.
Apply the config map manifest in the target namespace:

```terminal
$ oc apply -n <target_namespace> -f <latency_config_map>.yaml
```

Create a Job manifest to run the checkup:

Example job manifest

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kubevirt-vm-latency-checkup
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: vm-latency-checkup-sa
      restartPolicy: Never
      containers:
        - name: vm-latency-checkup
          image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.14.0
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
            runAsNonRoot: true
            seccompProfile:
              type: "RuntimeDefault"
          env:
            - name: CONFIGMAP_NAMESPACE
              value: <target_namespace>
            - name: CONFIGMAP_NAME
              value: kubevirt-vm-latency-checkup-config
            - name: POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
```

Apply the Job manifest:

```terminal
$ oc apply -n <target_namespace> -f <latency_job>.yaml
```

Wait for the job to complete:

```terminal
$ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
```

Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error.

```terminal
$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml
```

Example output config map (success)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
  namespace: <target_namespace>
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
  spec.param.networkAttachmentDefinitionName: "blue-network"
  spec.param.maxDesiredLatencyMilliseconds: "10"
  spec.param.sampleDurationSeconds: "5"
  spec.param.sourceNode: "worker1"
  spec.param.targetNode: "worker2"
  status.succeeded: "true"
  status.failureReason: ""
  status.completionTimestamp: "2022-01-01T09:00:00Z"
  status.startTimestamp: "2022-01-01T09:00:07Z"
  status.result.avgLatencyNanoSec: "177000"
  status.result.maxLatencyNanoSec: "244000"
  status.result.measurementDurationSec: "5"
  status.result.minLatencyNanoSec: "135000"
  status.result.sourceNode: "worker1"
  status.result.targetNode: "worker2"
```

where:

- data.status.result.maxLatencyNanoSec: Specifies the maximum measured latency in nanoseconds.
Optional: To view the detailed job log in case of checkup failure, use the following command:
```terminal
$ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
```

Delete the job and config map that you previously created by running the following commands:

```terminal
$ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
$ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
```

Optional: If you do not plan to run another checkup, delete the roles manifest:

```terminal
$ oc delete -f <latency_sa_roles_rolebinding>.yaml
```
12.2.2. Running predefined DPDK checkups
You can use a DPDK checkup to verify that a node can run a VM with a Data Plane Development Kit (DPDK) workload with zero packet loss.
12.2.2.1. DPDK checkup
Use a predefined checkup to verify that your OpenShift Container Platform cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application.
You run a DPDK checkup by performing the following steps:
- Create a service account, role, and role bindings for the DPDK checkup.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the DPDK checkup resources.
Prerequisites
- You have installed the OpenShift CLI (oc).
- The cluster is configured to run DPDK applications.
- The project is configured to run DPDK applications.
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the DPDK checkup:

Example 12.2. Example service account, role, and rolebinding manifest file

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dpdk-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
- apiGroups: [ "" ]
  resources: [ "configmaps" ]
  verbs: [ "get", "update" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
  name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kiagnose-configmap-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-dpdk-checker
rules:
- apiGroups: [ "kubevirt.io" ]
  resources: [ "virtualmachineinstances" ]
  verbs: [ "create", "get", "delete" ]
- apiGroups: [ "subresources.kubevirt.io" ]
  resources: [ "virtualmachineinstances/console" ]
  verbs: [ "get" ]
- apiGroups: [ "" ]
  resources: [ "configmaps" ]
  verbs: [ "create", "delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-dpdk-checker
subjects:
- kind: ServiceAccount
  name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubevirt-dpdk-checker
```

Apply the ServiceAccount, Role, and RoleBinding manifest:

```terminal
$ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
```

Create a ConfigMap manifest that contains the input parameters for the checkup:

Example input config map

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: <network_name> # 1
  spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.2.0" # 2
  spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.2.0" # 3
```

1. The name of the NetworkAttachmentDefinition object.
2. The container disk image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry.
3. The container disk image for the VM under test. In this example, the image is pulled from the upstream Project Quay Container Registry.
Apply the ConfigMap manifest in the target namespace:

```terminal
$ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
```

Create a Job manifest to run the checkup:

Example job manifest

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: dpdk-checkup
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: dpdk-checkup-sa
      restartPolicy: Never
      containers:
        - name: dpdk-checkup
          image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.14.0
          imagePullPolicy: Always
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
            runAsNonRoot: true
            seccompProfile:
              type: "RuntimeDefault"
          env:
            - name: CONFIGMAP_NAMESPACE
              value: <target-namespace>
            - name: CONFIGMAP_NAME
              value: dpdk-checkup-config
            - name: POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
```

Apply the Job manifest:

```terminal
$ oc apply -n <target_namespace> -f <dpdk_job>.yaml
```

Wait for the job to complete:

```terminal
$ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
```

Review the results of the checkup by running the following command:

```terminal
$ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml
```

Example output config map (success)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
data:
  spec.timeout: 10m
  spec.param.NetworkAttachmentDefinitionName: "dpdk-network-1"
  spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.2.0"
  spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.2.0"
  status.succeeded: "true" # 1
  status.failureReason: "" # 2
  status.startTimestamp: "2023-07-31T13:14:38Z" # 3
  status.completionTimestamp: "2023-07-31T13:19:41Z" # 4
  status.result.trafficGenSentPackets: "480000000" # 5
  status.result.trafficGenOutputErrorPackets: "0" # 6
  status.result.trafficGenInputErrorPackets: "0" # 7
  status.result.trafficGenActualNodeName: worker-dpdk1 # 8
  status.result.vmUnderTestActualNodeName: worker-dpdk2 # 9
  status.result.vmUnderTestReceivedPackets: "480000000" # 10
  status.result.vmUnderTestRxDroppedPackets: "0" # 11
  status.result.vmUnderTestTxDroppedPackets: "0" # 12
```

1. Specifies if the checkup is successful (true) or not (false).
2. The reason for failure if the checkup fails.
3. The time when the checkup started, in RFC 3339 time format.
4. The time when the checkup completed, in RFC 3339 time format.
5. The number of packets sent from the traffic generator.
6. The number of error packets sent from the traffic generator.
7. The number of error packets received by the traffic generator.
8. The node on which the traffic generator VM was scheduled.
9. The node on which the VM under test was scheduled.
10. The number of packets received on the VM under test.
11. The ingress traffic packets that were dropped by the DPDK application.
12. The egress traffic packets that were dropped from the DPDK application.
Delete the job and config map that you previously created by running the following commands:
```terminal
$ oc delete job -n <target_namespace> dpdk-checkup
$ oc delete configmap -n <target_namespace> dpdk-checkup-config
```

Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:

```terminal
$ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
```
12.2.2.1.1. DPDK checkup config map parameters
The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest:
| Parameter | Description | Is Mandatory |
|---|---|---|
| spec.timeout | The time, in minutes, before the checkup fails. | True |
| spec.param.networkAttachmentDefinitionName | The name of the NetworkAttachmentDefinition object. | True |
| spec.param.trafficGenContainerDiskImage | The container disk image for the traffic generator. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.2.0. | False |
| spec.param.trafficGenTargetNodeName | The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| spec.param.trafficGenPacketsPerSecond | The number of packets per second, in kilo (k) or million (m). The default value is 8m. | False |
| spec.param.vmUnderTestContainerDiskImage | The container disk image for the VM under test. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.2.0. | False |
| spec.param.vmUnderTestTargetNodeName | The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| spec.param.testDuration | The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. | False |
| spec.param.portBandwidthGbps | The maximum bandwidth of the SR-IOV NIC. The default value is 10Gbps. | False |
| spec.param.verbose | When set to true, it increases the verbosity of the checkup log. The default value is false. | False |
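Putting several of these parameters together, an input config map that pins the traffic generator and the VM under test to specific nodes might look like the following sketch. The node names and network name are assumptions about your environment:

```yaml
# Illustrative sketch: DPDK checkup input using optional parameters.
# Node names (worker-dpdk1/2) and the network name are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: dpdk-network-1
  spec.param.trafficGenTargetNodeName: worker-dpdk1   # schedule the traffic generator here
  spec.param.vmUnderTestTargetNodeName: worker-dpdk2  # schedule the VM under test here
  spec.param.testDuration: "5"                        # run the traffic generator for 5 minutes
  spec.param.verbose: "true"                          # produce a more detailed checkup log
```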
12.2.2.1.2. Building a container disk image for RHEL virtual machines
You can build a custom Red Hat Enterprise Linux (RHEL) 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmContainerDiskImage attribute of the DPDK checkup config map.
To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images.
Prerequisites
- The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory.
- You have installed the image builder tool and its CLI (composer-cli) on the VM.
- You have installed the virt-customize tool:

  ```terminal
  # dnf install libguestfs-tools
  ```

- You have installed the Podman CLI tool (podman).
Procedure
Verify that you can build a RHEL 8.7 image:
```terminal
# composer-cli distros list
```

Note: To run the composer-cli commands as non-root, add your user to the weldr or root groups:

```terminal
# usermod -a -G weldr user
$ newgrp weldr
```

Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:
```terminal
$ cat << EOF > dpdk-vm.toml
name = "dpdk_image"
description = "Image to use with the DPDK checkup"
version = "0.0.1"
distro = "rhel-87"

[[packages]]
name = "dpdk"

[[packages]]
name = "dpdk-tools"

[[packages]]
name = "driverctl"

[[packages]]
name = "tuned-profiles-cpu-partitioning"

[customizations.kernel]
append = "default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7"

[customizations.services]
disabled = ["NetworkManager-wait-online", "sshd"]
EOF
```

Push the blueprint file to the image builder tool by running the following command:
```terminal
# composer-cli blueprints push dpdk-vm.toml
```

Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.

```terminal
# composer-cli compose start dpdk_image qcow2
```

Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the next step.

```terminal
# composer-cli compose status
```

Enter the following command to download the qcow2 image file by specifying its UUID:

```terminal
# composer-cli compose image <UUID>
```

Create the customization scripts by running the following commands:
```terminal
$ cat <<EOF >customize-vm
echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf
tuned-adm profile cpu-partitioning
echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
EOF
```

```terminal
$ cat <<EOF >first-boot
driverctl set-override 0000:06:00.0 vfio-pci
driverctl set-override 0000:07:00.0 vfio-pci

mkdir /mnt/huge
mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB
EOF
```

Use the virt-customize tool to customize the image generated by the image builder tool:

```terminal
$ virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabel
```

To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:
```terminal
$ cat << EOF > Dockerfile
FROM scratch
COPY <uuid>-disk.qcow2 /disk/
EOF
```

where:

- <uuid>-disk.qcow2: Specifies the name of the custom image in qcow2 format.
Build and tag the container by running the following command:
```terminal
$ podman build . -t dpdk-rhel:latest
```

Push the container disk image to a registry that is accessible from your cluster by running the following command:

```terminal
$ podman push dpdk-rhel:latest
```

Provide a link to the container disk image in the spec.param.vmContainerDiskImage attribute in the DPDK checkup config map.
12.3. Prometheus queries for virtual resources
OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including vCPU, network, storage, and guest memory swapping. You can also use metrics to query live migration status.
12.3.1. Prerequisites
- To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. For more information, see Adding kernel arguments to nodes.
- For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
12.3.2. Querying metrics for all projects with the OpenShift Container Platform web console
You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.
As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects.
- You have installed the OpenShift CLI (oc).
Procedure
- From the Administrator perspective in the OpenShift Container Platform web console, select Observe → Metrics.
- To add one or more queries, do any of the following:

  | Option | Description |
  |---|---|
  | Create a custom query. | Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item. |
  | Add multiple queries. | Select Add query. |
  | Duplicate an existing query. | Select the Options menu next to the query, then choose Duplicate query. |
  | Disable a query from being run. | Select the Options menu next to the query and choose Disable query. |
To run queries that you created, select Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note: Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.

Note: By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following:

| Option | Description |
|---|---|
| Hide all metrics from a query. | Click the Options menu for the query and click Hide all series. |
| Hide a specific metric. | Go to the query table and click the colored square near the metric name. |
| Zoom into the plot and change the time range. | Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu in the upper left corner to select the time range. |
| Reset the time range. | Select Reset zoom. |
| Display outputs for all queries at a specific point in time. | Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. |
| Hide the plot. | Select Hide graph. |
12.3.3. Querying metrics for user-defined projects with the OpenShift Container Platform web console
You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project.
Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
- You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.
Procedure
- From the Developer perspective in the OpenShift Container Platform web console, select Observe → Metrics.
- Select the project that you want to view metrics for from the Project: list.
- Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL. The metrics from the queries are visualized on the plot.

  Note: In the Developer perspective, you can only run one query at a time.
Explore the visualized metrics by doing any of the following:

| Option | Description |
|---|---|
| Zoom into the plot and change the time range. | Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu in the upper left corner to select the time range. |
| Reset the time range. | Select Reset zoom. |
| Display outputs for all queries at a specific point in time. | Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. |
12.3.4. Virtualization metrics
The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics.
The following examples use topk queries that specify a time period. Virtual machines that were deleted during that time period can still appear in the query output.
12.3.4.1. vCPU metrics
The following query can identify virtual machines that are waiting for Input/Output (I/O):
- kubevirt_vmi_vcpu_wait_seconds_total: Returns the wait time (in seconds) on I/O for vCPUs of a virtual machine. Type: Counter.
A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.
To query the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object.

Example vCPU wait time query

```promql
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0
```

This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.
12.3.4.2. Network metrics
The following queries can identify virtual machines that are saturating the network:
- kubevirt_vmi_network_receive_bytes_total: Returns the total amount of traffic received (in bytes) on the virtual machine's network. Type: Counter.
- kubevirt_vmi_network_transmit_bytes_total: Returns the total amount of traffic transmitted (in bytes) on the virtual machine's network. Type: Counter.

Example network traffic query

```promql
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0
```

This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.
12.3.4.3. Storage metrics
12.3.4.3.1. Storage-related traffic
The following queries can identify VMs that are writing large amounts of data:
- kubevirt_vmi_storage_read_traffic_bytes_total: Returns the total amount (in bytes) of the virtual machine's storage-related traffic. Type: Counter.
- kubevirt_vmi_storage_write_traffic_bytes_total: Returns the total amount of storage writes (in bytes) of the virtual machine's storage-related traffic. Type: Counter.

Example storage-related traffic query

```promql
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0
```

This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
12.3.4.3.2. Storage snapshot data
- kubevirt_vmsnapshot_disks_restored_from_source_total: Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
- kubevirt_vmsnapshot_disks_restored_from_source_bytes: Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.

Examples of storage snapshot data queries

```promql
kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name="simple-vm", vm_namespace="default"}
```

This query returns the total number of virtual machine disks restored from the source virtual machine.

```promql
kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"}
```

This query returns the amount of space in bytes restored from the source virtual machine.
12.3.4.3.3. I/O performance
The following queries can determine the I/O performance of storage devices:
- kubevirt_vmi_storage_iops_read_total: Returns the number of read I/O operations the virtual machine is performing per second. Type: Counter.
- kubevirt_vmi_storage_iops_write_total: Returns the number of write I/O operations the virtual machine is performing per second. Type: Counter.

Example I/O performance query

```promql
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0
```

This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.
12.3.4.4. Guest memory swapping metrics
The following queries can identify which swap-enabled guests are performing the most memory swapping:
kubevirt_vmi_memory_swap_in_traffic_bytes_total - Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes_total - Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.
Example memory swapping query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0
This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
12.3.4.5. Live migration metrics
The following metrics can be queried to show live migration status:
kubevirt_migrate_vmi_data_processed_bytes - The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_migrate_vmi_data_remaining_bytes - The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_migrate_vmi_dirty_memory_rate_bytes - The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_migrate_vmi_pending_count - The number of pending migrations. Type: Gauge.
kubevirt_migrate_vmi_scheduling_count - The number of scheduling migrations. Type: Gauge.
kubevirt_migrate_vmi_running_count - The number of running migrations. Type: Gauge.
kubevirt_migrate_vmi_succeeded - The number of successfully completed migrations. Type: Gauge.
kubevirt_migrate_vmi_failed - The number of failed migrations. Type: Gauge.
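The processed and remaining byte gauges can be combined into a rough migration progress figure. A hedged sketch of that arithmetic (the metric names come from the list above; the helper and percentages are illustrative, not part of KubeVirt):

```python
def migration_progress_percent(processed_bytes, remaining_bytes):
    """Rough progress percentage computed from the two KubeVirt gauges
    kubevirt_migrate_vmi_data_processed_bytes and
    kubevirt_migrate_vmi_data_remaining_bytes.

    Note: a high dirty-memory rate can cause remaining_bytes to grow
    again, so progress is not guaranteed to be monotonic.
    """
    total = processed_bytes + remaining_bytes
    if total == 0:
        return 0.0
    return 100.0 * processed_bytes / total

# Hypothetical snapshot: 3 GiB migrated, 1 GiB still to go
progress = migration_progress_percent(3 * 2**30, 1 * 2**30)
```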
12.4. Exposing custom metrics for virtual machines
OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.
In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.
12.4.1. Configuring the node exporter service
The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in to the cluster as a user with cluster-admin privileges.
- Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
- Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.
Procedure
Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.

kind: Service
apiVersion: v1
metadata:
  name: node-exporter-service
  namespace: dynamation
  labels:
    servicetype: metrics
spec:
  ports:
    - name: exmet
      protocol: TCP
      port: 9100
      targetPort: 9100
  type: ClusterIP
  selector:
    monitor: metrics

- metadata.name: The node-exporter service that exposes the metrics from the virtual machines.
- metadata.namespace: The namespace where the service is created.
- metadata.labels: The label for the service. The ServiceMonitor uses this label to match this service.
- ports.name: The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
- ports.port: The target port used by node-exporter-service to listen for requests.
- ports.targetPort: The TCP port number of the virtual machine that is configured with the monitor label.
- selector: The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics will be matched.
Create the node-exporter service:
$ oc create -f node-exporter-service.yaml
12.4.2. Configuring a virtual machine with the node exporter service
Download the node-exporter agent to the virtual machine and run it as a systemd service so that its metrics endpoint remains available after the virtual machine reboots.
Prerequisites
- The pods for the user-defined monitoring component are running in the openshift-user-workload-monitoring project.
- Grant the monitoring-edit role to users who need to monitor this user-defined project.
Procedure
- Log in to the virtual machine.
Download the node-exporter file to the virtual machine. Use the directory path that applies to your version of node-exporter.
$ wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
Extract the executable and place it in the /usr/bin directory.
$ sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \
    --directory /usr/bin --strip 1 "*/node_exporter"
Create a node_exporter.service file in the /etc/systemd/system directory path. This systemd service file runs the node-exporter service when the virtual machine reboots.
[Unit]
Description=Prometheus Metrics Exporter
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/usr/bin/node_exporter

[Install]
WantedBy=multi-user.target
Enable and start the systemd service.
$ sudo systemctl enable node_exporter.service
$ sudo systemctl start node_exporter.service
Verification
Verify that the node-exporter agent is reporting metrics from the virtual machine.
$ curl http://localhost:9100/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5244e-05
go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
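Output like the sample above follows the Prometheus text exposition format: one `name{labels} value` line per sample. A minimal parser sketch for spot-checking a metric in such output (simplified: it skips `# HELP`/`# TYPE` comment lines and does not handle escaped label values or timestamps):

```python
import re

# One sample line: metric name, optional {label="value"} block, numeric value.
METRIC_LINE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
    r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$'
)

def parse_metrics(text):
    """Return {(name, raw_label_string): float_value} for each sample line."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # comments and blank lines carry no samples
        m = METRIC_LINE.match(line)
        if m:
            samples[(m.group('name'), m.group('labels') or '')] = float(m.group('value'))
    return samples

sample_output = (
    'go_gc_duration_seconds{quantile="0"} 1.5244e-05\n'
    '# TYPE up gauge\n'
    'up 1'
)
parsed = parse_metrics(sample_output)
```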
12.4.3. Creating a custom monitoring label for virtual machines
To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in as a user with cluster-admin privileges.
- Access to the web console to stop and restart a virtual machine.
Procedure
Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.

spec:
  template:
    metadata:
      labels:
        monitor: metrics

- Stop and restart the virtual machine to create a new pod with the label name given to the monitor label.
12.4.3.1. Querying the node-exporter service for metrics
Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics path.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Obtain the HTTP service endpoint by specifying the namespace for the service:
$ oc get service -n <namespace> <node-exporter-service>
To list all available metrics for the node-exporter service, query the metrics resource:
$ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"
Example output
node_arp_entries{device="eth0"} 1
node_boot_time_seconds 1.643153218e+09
node_context_switches_total 4.4938158e+07
node_cooling_device_cur_state{name="0",type="Processor"} 0
node_cooling_device_max_state{name="0",type="Processor"} 0
node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
node_cpu_seconds_total{cpu="0",mode="irq"} 233.91
node_cpu_seconds_total{cpu="0",mode="nice"} 551.47
node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3
node_cpu_seconds_total{cpu="0",mode="steal"} 86.12
node_cpu_seconds_total{cpu="0",mode="system"} 464.15
node_cpu_seconds_total{cpu="0",mode="user"} 1075.2
node_disk_discard_time_seconds_total{device="vda"} 0
node_disk_discard_time_seconds_total{device="vdb"} 0
node_disk_discarded_sectors_total{device="vda"} 0
node_disk_discarded_sectors_total{device="vdb"} 0
node_disk_discards_completed_total{device="vda"} 0
node_disk_discards_completed_total{device="vdb"} 0
node_disk_discards_merged_total{device="vda"} 0
node_disk_discards_merged_total{device="vdb"} 0
node_disk_info{device="vda",major="252",minor="0"} 1
node_disk_info{device="vdb",major="252",minor="16"} 1
node_disk_io_now{device="vda"} 0
node_disk_io_now{device="vdb"} 0
node_disk_io_time_seconds_total{device="vda"} 174
node_disk_io_time_seconds_total{device="vdb"} 0.054
node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003
node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039
node_disk_read_bytes_total{device="vda"} 3.71867136e+08
node_disk_read_bytes_total{device="vdb"} 366592
node_disk_read_time_seconds_total{device="vda"} 19.128
node_disk_read_time_seconds_total{device="vdb"} 0.039
node_disk_reads_completed_total{device="vda"} 5619
node_disk_reads_completed_total{device="vdb"} 96
node_disk_reads_merged_total{device="vda"} 5
node_disk_reads_merged_total{device="vdb"} 0
node_disk_write_time_seconds_total{device="vda"} 240.66400000000002
node_disk_write_time_seconds_total{device="vdb"} 0
node_disk_writes_completed_total{device="vda"} 71584
node_disk_writes_completed_total{device="vdb"} 0
node_disk_writes_merged_total{device="vda"} 19761
node_disk_writes_merged_total{device="vdb"} 0
node_disk_written_bytes_total{device="vda"} 2.007924224e+09
node_disk_written_bytes_total{device="vdb"} 0
12.4.4. Creating a ServiceMonitor resource for the node exporter service
You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource to configure monitoring of the service.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter-metrics-monitor
  name: node-exporter-metrics-monitor
  namespace: dynamation
spec:
  endpoints:
  - interval: 30s
    port: exmet
    scheme: http
  selector:
    matchLabels:
      servicetype: metrics

Create the ServiceMonitor configuration for the node-exporter service:
$ oc create -f node-exporter-metrics-monitor.yaml
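The `selector.matchLabels` field behaves like any Kubernetes label selector: a service is scraped when its labels are a superset of the `matchLabels` map. A small sketch of that matching rule (the label dictionaries are illustrative):

```python
def matches(match_labels, service_labels):
    """True if every matchLabels key/value pair is present on the service.

    This is the equality-based subset semantics Kubernetes uses for
    matchLabels; extra labels on the service do not affect the match.
    """
    return all(service_labels.get(k) == v for k, v in match_labels.items())

# The ServiceMonitor selector from the example above...
selector = {"servicetype": "metrics"}
# ...matches the node-exporter service created earlier, even with extra labels.
service_labels = {"servicetype": "metrics", "app": "node-exporter-service"}
```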
12.4.4.1. Accessing the node exporter service outside the cluster
You can access the node-exporter service outside the cluster and view the exposed metrics.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Expose the node-exporter service:
$ oc expose service -n <namespace> <node_exporter_service_name>
Obtain the FQDN (fully qualified domain name) for the route:
$ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host
Example output
NAME                    DNS
node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org
Use the curl command to display metrics for the node-exporter service:
$ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5382e-05 go_gc_duration_seconds{quantile="0.25"} 3.1163e-05 go_gc_duration_seconds{quantile="0.5"} 3.8546e-05 go_gc_duration_seconds{quantile="0.75"} 4.9139e-05 go_gc_duration_seconds{quantile="1"} 0.000189423
12.5. Virtual machine health checks
You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine specification.
12.5.1. About readiness and liveness probes
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine specification. The probes can perform the following types of test:
- HTTP GET
- The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
- Guest agent ping
- The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
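Conceptually, the TCP socket test is just a connect attempt, and the HTTP GET test additionally checks that the response code falls in the 200-399 range. A simplified sketch of both checks (this is an illustration of the probe semantics, not KubeVirt's implementation; host, port, and path values are hypothetical):

```python
import socket
from http.client import HTTPConnection

def tcp_probe(host, port, timeout=10):
    """Healthy if a TCP connection can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_get_probe(host, port, path="/healthz", timeout=10):
    """Healthy if the HTTP response code is between 200 and 399."""
    try:
        conn = HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return 200 <= status <= 399
    except OSError:
        return False
```

This is why the TCP test suits applications that only open their listening socket once initialization is complete, while the HTTP test suits applications that report readiness through a status endpoint.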
12.5.1.1. Defining an HTTP readiness probe
Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine configuration.
Procedure
Include details of the readiness probe in the VM configuration file.
Sample readiness probe with an HTTP GET test
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      readinessProbe:
        httpGet:
          port: 1500
          path: /healthz
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
# ...

- httpGet: The HTTP GET request to perform to connect to the VM.
- port: The port of the VM that the probe queries. In the above example, the probe queries port 1500.
- path: The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
- initialDelaySeconds: The time, in seconds, after the VM starts before the readiness probe is initiated.
- periodSeconds: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- timeoutSeconds: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
- failureThreshold: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
- successThreshold: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
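The timing fields above interact: the earliest a VM can be marked Unready is roughly initialDelaySeconds plus failureThreshold consecutive failed probes spaced periodSeconds apart. A hedged back-of-envelope helper (the exact timing also depends on probe scheduling jitter and on how long each probe takes before hitting timeoutSeconds):

```python
def worst_case_unready_seconds(initial_delay, period, failure_threshold):
    """Approximate time from VM start until it can be marked Unready,
    assuming every probe fails and probes run every `period` seconds."""
    return initial_delay + failure_threshold * period

# Values from the sample readiness probe above:
# 120 s initial delay + 3 failures x 20 s period
t = worst_case_unready_seconds(initial_delay=120, period=20, failure_threshold=3)
```

Tuning periodSeconds and failureThreshold therefore trades detection latency against tolerance for transient failures.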
12.5.1.2. Defining a TCP readiness probe
Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine configuration.
Procedure
Include details of the TCP readiness probe in the VM configuration file.
Sample readiness probe with a TCP socket test
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      readinessProbe:
        initialDelaySeconds: 120
        periodSeconds: 20
        tcpSocket:
          port: 1500
        timeoutSeconds: 10
# ...

- initialDelaySeconds: The time, in seconds, after the VM starts before the readiness probe is initiated.
- periodSeconds: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- tcpSocket: The TCP action to perform.
- port: The port of the VM that the probe queries.
- timeoutSeconds: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.5.1.3. Defining an HTTP liveness probe
Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine configuration.
Procedure
Include details of the HTTP liveness probe in the VM configuration file.
Sample liveness probe with an HTTP GET test
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      livenessProbe:
        initialDelaySeconds: 120
        periodSeconds: 20
        httpGet:
          port: 1500
          path: /healthz
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        timeoutSeconds: 10
# ...

- initialDelaySeconds: The time, in seconds, after the VM starts before the liveness probe is initiated.
- periodSeconds: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- httpGet: The HTTP GET request to perform to connect to the VM.
- port: The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
- path: The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created.
- timeoutSeconds: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.5.2. Defining a watchdog
You can define a watchdog to monitor the health of the guest operating system by performing the following steps:
- Configure a watchdog device for the virtual machine (VM).
- Install the watchdog agent on the guest.
The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:
- poweroff: The VM powers down immediately. If spec.running is set to true, or spec.runStrategy is not set to manual, then the VM reboots.
- reset: The VM reboots in place and the guest operating system cannot react.
Note: The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.
- shutdown: The VM gracefully powers down by stopping all services.
Watchdog is not available for Windows VMs.
12.5.2.1. Configuring a watchdog device for the virtual machine
You configure a watchdog device for the virtual machine (VM).
Prerequisites
- The VM must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb.
Procedure
Create a YAML file with the following contents:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm2-rhel84-watchdog
  name: <vm-name>
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm2-rhel84-watchdog
    spec:
      domain:
        devices:
          watchdog:
            name: <watchdog>
            i6300esb:
              action: "poweroff"
# ...

- action: Specify poweroff, reset, or shutdown.

The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog. This device can now be used by the watchdog binary.
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
Verification
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -i
Run one of the following commands to confirm the watchdog is active:
Trigger a kernel panic:
# echo c > /proc/sysrq-trigger
Stop the watchdog service:
# pkill -9 watchdog
12.5.2.2. Installing the watchdog agent on the guest
You install the watchdog agent on the guest and start the watchdog service.
Procedure
- Log in to the virtual machine as root user.
Install the watchdog package and its dependencies:
# yum install watchdog
Uncomment the following line in the /etc/watchdog.conf file and save the changes:
#watchdog-device = /dev/watchdog
Enable the watchdog service to start on boot:
# systemctl enable --now watchdog.service
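The agent's job is to "pet" the watchdog device before its timeout elapses; the device fires the configured action only when petting stops, which is what happens when the guest hangs. A toy software model of that contract (purely illustrative; the real i6300esb device is petted by the watchdog daemon writing to /dev/watchdog, and all timing values here are hypothetical):

```python
class SoftWatchdog:
    """Minimal model of a watchdog timer: expires unless petted in time."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.deadline = None

    def start(self, now):
        self.deadline = now + self.timeout

    def pet(self, now):
        # A healthy agent resets the deadline on every pet.
        self.deadline = now + self.timeout

    def expired(self, now):
        # If pets stop (guest hang), the deadline passes and the
        # configured action (poweroff/reset/shutdown) would fire.
        return self.deadline is not None and now > self.deadline

wd = SoftWatchdog(timeout_seconds=30)
wd.start(now=0)
wd.pet(now=25)                            # agent still alive at t=25
healthy_at_50 = not wd.expired(now=50)    # deadline is 55, still fine
hung_at_60 = wd.expired(now=60)           # no pet since t=25: watchdog fires
```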
12.5.3. Defining a guest agent ping probe
Define a guest agent ping probe by setting the spec.readinessProbe.guestAgentPing field of the virtual machine configuration.
The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- The QEMU guest agent must be installed and enabled on the virtual machine.
Procedure
Include details of the guest agent ping probe in the VM configuration file. For example:
Sample guest agent ping probe
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      readinessProbe:
        guestAgentPing: {}
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
# ...

- guestAgentPing: The guest agent ping probe to connect to the VM.
- initialDelaySeconds: Optional: The time, in seconds, after the VM starts before the guest agent probe is initiated.
- periodSeconds: Optional: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- timeoutSeconds: Optional: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
- failureThreshold: Optional: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
- successThreshold: Optional: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
12.6. OpenShift Virtualization runbooks
Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub. To diagnose and resolve issues that trigger OpenShift Virtualization alerts, follow the procedures in the runbooks.
OpenShift Virtualization alerts are displayed on the Virtualization → Overview page of the web console.
12.6.1. CDIDataImportCronOutdated
- View the runbook for the CDIDataImportCronOutdated alert.
12.6.2. CDIDataVolumeUnusualRestartCount
- View the runbook for the CDIDataVolumeUnusualRestartCount alert.
12.6.3. CDIDefaultStorageClassDegraded
- View the runbook for the CDIDefaultStorageClassDegraded alert.
12.6.4. CDIMultipleDefaultVirtStorageClasses
- View the runbook for the CDIMultipleDefaultVirtStorageClasses alert.
12.6.5. CDINoDefaultStorageClass
- View the runbook for the CDINoDefaultStorageClass alert.
12.6.6. CDINotReady
- View the runbook for the CDINotReady alert.
12.6.7. CDIOperatorDown
- View the runbook for the CDIOperatorDown alert.
12.6.8. CDIStorageProfilesIncomplete
- View the runbook for the CDIStorageProfilesIncomplete alert.
12.6.9. CnaoDown
- View the runbook for the CnaoDown alert.
12.6.10. CnaoNMstateMigration
- View the runbook for the CnaoNMstateMigration alert.
12.6.11. HCOInstallationIncomplete
- View the runbook for the HCOInstallationIncomplete alert.
12.6.12. HPPNotReady
- View the runbook for the HPPNotReady alert.
12.6.13. HPPOperatorDown
- View the runbook for the HPPOperatorDown alert.
12.6.14. HPPSharingPoolPathWithOS
- View the runbook for the HPPSharingPoolPathWithOS alert.
12.6.15. KubemacpoolDown
- View the runbook for the KubemacpoolDown alert.
12.6.16. KubeMacPoolDuplicateMacsFound
- The KubeMacPoolDuplicateMacsFound alert is deprecated.
12.6.17. KubeVirtComponentExceedsRequestedCPU
- The KubeVirtComponentExceedsRequestedCPU alert is deprecated.
12.6.18. KubeVirtComponentExceedsRequestedMemory
- The KubeVirtComponentExceedsRequestedMemory alert is deprecated.
12.6.19. KubeVirtCRModified
- View the runbook for the KubeVirtCRModified alert.
12.6.20. KubeVirtDeprecatedAPIRequested
- View the runbook for the KubeVirtDeprecatedAPIRequested alert.
12.6.21. KubeVirtNoAvailableNodesToRunVMs
- View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert.
12.6.22. KubevirtVmHighMemoryUsage
- The KubevirtVmHighMemoryUsage alert is deprecated.
12.6.23. KubeVirtVMIExcessiveMigrations
- View the runbook for the KubeVirtVMIExcessiveMigrations alert.
12.6.24. LowKVMNodesCount
- View the runbook for the LowKVMNodesCount alert.
12.6.25. LowReadyVirtControllersCount
- View the runbook for the LowReadyVirtControllersCount alert.
12.6.26. LowReadyVirtOperatorsCount
- View the runbook for the LowReadyVirtOperatorsCount alert.
12.6.27. LowVirtAPICount
- View the runbook for the LowVirtAPICount alert.
12.6.28. LowVirtControllersCount
- View the runbook for the LowVirtControllersCount alert.
12.6.29. LowVirtOperatorCount
- View the runbook for the LowVirtOperatorCount alert.
12.6.30. NetworkAddonsConfigNotReady
- View the runbook for the NetworkAddonsConfigNotReady alert.
12.6.31. NoLeadingVirtOperator
- View the runbook for the NoLeadingVirtOperator alert.
12.6.32. NoReadyVirtController
- View the runbook for the NoReadyVirtController alert.
12.6.33. NoReadyVirtOperator
- View the runbook for the NoReadyVirtOperator alert.
12.6.34. OrphanedVirtualMachineInstances
- View the runbook for the OrphanedVirtualMachineInstances alert.
12.6.35. OutdatedVirtualMachineInstanceWorkloads
- View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert.
12.6.36. SingleStackIPv6Unsupported
- The SingleStackIPv6Unsupported alert is deprecated.
12.6.37. SSPCommonTemplatesModificationReverted
- View the runbook for the SSPCommonTemplatesModificationReverted alert.
12.6.38. SSPDown
- View the runbook for the SSPDown alert.
12.6.39. SSPFailingToReconcile
- View the runbook for the SSPFailingToReconcile alert.
12.6.40. SSPHighRateRejectedVms
- View the runbook for the SSPHighRateRejectedVms alert.
12.6.41. SSPTemplateValidatorDown
- View the runbook for the SSPTemplateValidatorDown alert.
12.6.42. UnsupportedHCOModification
- View the runbook for the UnsupportedHCOModification alert.
12.6.43. VirtAPIDown
- View the runbook for the VirtAPIDown alert.
12.6.44. VirtApiRESTErrorsBurst
- View the runbook for the VirtApiRESTErrorsBurst alert.
12.6.45. VirtApiRESTErrorsHigh
- The VirtApiRESTErrorsHigh alert is deprecated.
12.6.46. VirtControllerDown
- View the runbook for the VirtControllerDown alert.
12.6.47. VirtControllerRESTErrorsBurst
- View the runbook for the VirtControllerRESTErrorsBurst alert.
12.6.48. VirtControllerRESTErrorsHigh
- The VirtControllerRESTErrorsHigh alert is deprecated.
12.6.49. VirtHandlerDaemonSetRolloutFailing
- View the runbook for the VirtHandlerDaemonSetRolloutFailing alert.
12.6.50. VirtHandlerRESTErrorsBurst
- View the runbook for the VirtHandlerRESTErrorsBurst alert.
12.6.51. VirtHandlerRESTErrorsHigh
- The VirtHandlerRESTErrorsHigh alert is deprecated.
12.6.52. VirtOperatorDown
- View the runbook for the VirtOperatorDown alert.
12.6.53. VirtOperatorRESTErrorsBurst
- View the runbook for the VirtOperatorRESTErrorsBurst alert.
12.6.54. VirtOperatorRESTErrorsHigh
- The VirtOperatorRESTErrorsHigh alert is deprecated.
12.6.55. VirtualMachineCRCErrors
The runbook for the VirtualMachineCRCErrors alert is deprecated because the alert was renamed to VMStorageClassWarning.
- View the runbook for the VMStorageClassWarning alert.
12.6.56. VMCannotBeEvicted
- View the runbook for the VMCannotBeEvicted alert.
12.6.57. VMStorageClassWarning
- View the runbook for the VMStorageClassWarning alert.