
Chapter 11. Monitoring


11.1. Monitoring overview

You can monitor the health of your cluster and virtual machines (VMs) with the following tools:

Monitoring OpenShift Virtualization VM health status
View the overall health of your OpenShift Virtualization environment by navigating to the Home → Overview page in the Red Hat OpenShift Service on AWS web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions.
Prometheus queries for virtual resources
Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
VM custom metrics
Configure the node-exporter service to expose internal VM metrics and processes.
VM health checks
Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
Runbooks
Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the Red Hat OpenShift Service on AWS web console.

11.2. Prometheus queries for virtual resources

Use the Red Hat OpenShift Service on AWS monitoring dashboard to query virtualization metrics. OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including network, storage, and guest memory swapping. You can also use metrics to query live migration status.

11.2.1. Prerequisites

  • For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.

11.2.2. Querying metrics

The Red Hat OpenShift Service on AWS monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.

As a dedicated-admin, you can query one or more namespaces at a time for metrics about user-defined projects.

As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.

11.2.2.1. Querying metrics for all projects as a cluster administrator

As a dedicated-admin or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Service on AWS and user-defined projects in the Metrics UI.

Note

Only dedicated administrators have access to the third-party UIs provided with Red Hat OpenShift Service on AWS monitoring.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role or with view permissions for all projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. From the Administrator perspective in the Red Hat OpenShift Service on AWS web console, select Observe → Metrics.
  2. To add one or more queries, do any of the following:


    Create a custom query.

    Add your Prometheus Query Language (PromQL) query to the Expression field.

    As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item.

    Add multiple queries.

    Select Add query.

    Duplicate an existing query.

    Select the Options menu (⋮) next to the query, then choose Duplicate query.

    Disable a query from being run.

    Select the Options menu (⋮) next to the query and choose Disable query.

  3. To run queries that you created, select Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.

    Note

    Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.

    Note

    By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query.

  4. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL.
  5. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following:


    Hide all metrics from a query.

    Click the Options menu (⋮) for the query and click Hide all series.

    Hide a specific metric.

    Go to the query table and click the colored square near the metric name.

    Zoom into the plot and change the time range.

    Either:

    • Visually select the time range by clicking and dragging on the plot horizontally.
    • Use the menu in the upper left corner to select the time range.

    Reset the time range.

    Select Reset zoom.

    Display outputs for all queries at a specific point in time.

    Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box.

    Hide the plot.

    Select Hide graph.

11.2.2.2. Querying metrics for user-defined projects as a developer

You can access metrics for a user-defined project as a developer or as a user with view permissions for the project.

In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet, and application metrics for the project.

Note

Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Developers cannot access the third-party UIs provided with Red Hat OpenShift Service on AWS monitoring.

Prerequisites

  • You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
  • You have enabled monitoring for user-defined projects.
  • You have deployed a service in a user-defined project.
  • You have created a ServiceMonitor custom resource (CR) for the service to define how the service is monitored.

Procedure

  1. From the Developer perspective in the Red Hat OpenShift Service on AWS web console, select Observe → Metrics.
  2. Select the project that you want to view metrics for in the Project: list.
  3. Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL. The metrics from the queries are visualized on the plot.

    Note

    In the Developer perspective, you can only run one query at a time.

  4. Explore the visualized metrics by doing any of the following:


    Zoom into the plot and change the time range.

    Either:

    • Visually select the time range by clicking and dragging on the plot horizontally.
    • Use the menu in the upper left corner to select the time range.

    Reset the time range.

    Select Reset zoom.

    Display outputs for all queries at a specific point in time.

    Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box.

11.2.3. Virtualization metrics

The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions.

Note

The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output.

11.2.3.1. Network metrics

The following queries can identify virtual machines that are saturating the network:

kubevirt_vmi_network_receive_bytes_total
Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total
Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.

Example network traffic query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1

1
This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.

11.2.3.2. Storage metrics

11.2.3.2.1. Storage-related traffic

The following queries can identify VMs that are writing large amounts of data:

kubevirt_vmi_storage_read_traffic_bytes_total
Returns the total amount of storage reads (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total
Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.

Example storage-related traffic query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1

1
This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
11.2.3.2.2. Storage snapshot data
kubevirt_vmsnapshot_disks_restored_from_source
Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes
Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.

Examples of storage snapshot data queries

kubevirt_vmsnapshot_disks_restored_from_source{vm_name="simple-vm", vm_namespace="default"} 1

1
This query returns the total number of virtual machine disks restored from the source virtual machine.
kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"} 1
1
This query returns the amount of space in bytes restored from the source virtual machine.
11.2.3.2.3. I/O performance

The following queries can determine the I/O performance of storage devices:

kubevirt_vmi_storage_iops_read_total
Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total
Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter.

Example I/O performance query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1

1
This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.

11.2.3.3. Guest memory swapping metrics

The following queries can identify which swap-enabled guests are performing the most memory swapping:

kubevirt_vmi_memory_swap_in_traffic_bytes
Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes
Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.

Example memory swapping query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0 1

1
This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Note

Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
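
For example, you can raise the memory request in the VirtualMachine specification, as shown in the following sketch. The field layout follows the VirtualMachine examples later in this chapter:

spec:
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 4Gi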

11.2.3.4. Live migration metrics

The following metrics can be queried to show live migration status:

kubevirt_vmi_migration_data_processed_bytes
The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_vmi_migration_data_remaining_bytes
The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_vmi_migration_memory_transfer_rate_bytes
The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_vmi_migrations_in_pending_phase
The number of pending migrations. Type: Gauge.
kubevirt_vmi_migrations_in_scheduling_phase
The number of scheduling migrations. Type: Gauge.
kubevirt_vmi_migrations_in_running_phase
The number of running migrations. Type: Gauge.
kubevirt_vmi_migration_succeeded
The number of successfully completed migrations. Type: Gauge.
kubevirt_vmi_migration_failed
The number of failed migrations. Type: Gauge.
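
For example, the following query approximates per-VM live migration progress as a ratio between 0 and 1. This is a sketch rather than a documented query; it assumes that the processed and remaining gauges carry matching label sets, so that the division matches one-to-one:

Example live migration progress query

kubevirt_vmi_migration_data_processed_bytes / (kubevirt_vmi_migration_data_processed_bytes + kubevirt_vmi_migration_data_remaining_bytes)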


11.3. Exposing custom metrics for virtual machines

Red Hat OpenShift Service on AWS includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.

In addition to using the Red Hat OpenShift Service on AWS monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.

11.3.1. Configuring the node exporter service

The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.

Prerequisites

  • Install the Red Hat OpenShift Service on AWS CLI (oc).
  • Log in to the cluster as a user with cluster-admin privileges.
  • Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project and enable monitoring for user-defined projects by setting enableUserWorkload to true.
  • Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project.
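
For reference, monitoring for user-defined projects is typically enabled with a ConfigMap similar to the following sketch. Managed clusters might restrict edits to this object:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true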

Procedure

  1. Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.

    kind: Service
    apiVersion: v1
    metadata:
      name: node-exporter-service 1
      namespace: dynamation 2
      labels:
        servicetype: metrics 3
    spec:
      ports:
        - name: exmet 4
          protocol: TCP
          port: 9100 5
          targetPort: 9100 6
      type: ClusterIP
      selector:
        monitor: metrics 7
    1
    The node-exporter service that exposes the metrics from the virtual machines.
    2
    The namespace where the service is created.
    3
    The label for the service. The ServiceMonitor uses this label to match this service.
    4
    The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
    5
    The port on which the ClusterIP service exposes the metrics.
    6
    The TCP port on the virtual machine where the node-exporter service listens for requests.
    7
    The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics will be matched.
  2. Create the node-exporter service:

    $ oc create -f node-exporter-service.yaml

11.3.2. Configuring a virtual machine with the node exporter service

Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.

Prerequisites

  • The pods for the component are running in the openshift-user-workload-monitoring project.
  • Grant the monitoring-edit role to users who need to monitor this user-defined project.

Procedure

  1. Log in to the virtual machine.
  2. Download the node-exporter archive onto the virtual machine. Use the URL that corresponds to the version of the node-exporter file that you want to install.

    $ wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
  3. Extract the executable and place it in the /usr/bin directory.

    $ sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \
        --directory /usr/bin --strip 1 "*/node_exporter"
  4. Create a node_exporter.service file in the /etc/systemd/system directory. This systemd service file runs the node-exporter service when the virtual machine boots.

    [Unit]
    Description=Prometheus Metrics Exporter
    After=network.target
    StartLimitIntervalSec=0
    
    [Service]
    Type=simple
    Restart=always
    RestartSec=1
    User=root
    ExecStart=/usr/bin/node_exporter
    
    [Install]
    WantedBy=multi-user.target
  5. Enable and start the systemd service.

    $ sudo systemctl enable node_exporter.service
    $ sudo systemctl start node_exporter.service

Verification

  • Verify that the node-exporter agent is reporting metrics from the virtual machine.

    $ curl http://localhost:9100/metrics

    Example output

    go_gc_duration_seconds{quantile="0"} 1.5244e-05
    go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
    go_gc_duration_seconds{quantile="0.5"} 3.7913e-05

11.3.3. Creating a custom monitoring label for virtual machines

To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.

Prerequisites

  • Install the Red Hat OpenShift Service on AWS CLI (oc).
  • Log in as a user with cluster-admin privileges.
  • Ensure that you have access to the web console to stop and restart a virtual machine.

Procedure

  1. Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.

    spec:
      template:
        metadata:
          labels:
            monitor: metrics
  2. Stop and restart the virtual machine to create a new pod with the monitor label that you specified.
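
For example, if you have the virtctl client installed, you can stop and start the VM from the command line. This is a sketch; replace <vm_name> with the name of your virtual machine:

$ virtctl stop <vm_name>
$ virtctl start <vm_name>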

11.3.3.1. Querying the node-exporter service for metrics

Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.

Procedure

  1. Obtain the HTTP service endpoint by specifying the namespace for the service:

    $ oc get service -n <namespace> <node-exporter-service>
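
    The following example output is a sketch; your service reports its own ClusterIP, which is used in the next step:

    NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    node-exporter-service   ClusterIP   172.30.226.162   <none>        9100/TCP   10m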
  2. To list all available metrics for the node-exporter service, query the metrics resource by using the ClusterIP and port of the service. For example:

    $ curl http://172.30.226.162:9100/metrics | grep -vE "^#|^$"

    Example output

    node_arp_entries{device="eth0"} 1
    node_boot_time_seconds 1.643153218e+09
    node_context_switches_total 4.4938158e+07
    node_cooling_device_cur_state{name="0",type="Processor"} 0
    node_cooling_device_max_state{name="0",type="Processor"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
    node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
    node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
    node_cpu_seconds_total{cpu="0",mode="irq"} 233.91
    node_cpu_seconds_total{cpu="0",mode="nice"} 551.47
    node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3
    node_cpu_seconds_total{cpu="0",mode="steal"} 86.12
    node_cpu_seconds_total{cpu="0",mode="system"} 464.15
    node_cpu_seconds_total{cpu="0",mode="user"} 1075.2
    node_disk_discard_time_seconds_total{device="vda"} 0
    node_disk_discard_time_seconds_total{device="vdb"} 0
    node_disk_discarded_sectors_total{device="vda"} 0
    node_disk_discarded_sectors_total{device="vdb"} 0
    node_disk_discards_completed_total{device="vda"} 0
    node_disk_discards_completed_total{device="vdb"} 0
    node_disk_discards_merged_total{device="vda"} 0
    node_disk_discards_merged_total{device="vdb"} 0
    node_disk_info{device="vda",major="252",minor="0"} 1
    node_disk_info{device="vdb",major="252",minor="16"} 1
    node_disk_io_now{device="vda"} 0
    node_disk_io_now{device="vdb"} 0
    node_disk_io_time_seconds_total{device="vda"} 174
    node_disk_io_time_seconds_total{device="vdb"} 0.054
    node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003
    node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039
    node_disk_read_bytes_total{device="vda"} 3.71867136e+08
    node_disk_read_bytes_total{device="vdb"} 366592
    node_disk_read_time_seconds_total{device="vda"} 19.128
    node_disk_read_time_seconds_total{device="vdb"} 0.039
    node_disk_reads_completed_total{device="vda"} 5619
    node_disk_reads_completed_total{device="vdb"} 96
    node_disk_reads_merged_total{device="vda"} 5
    node_disk_reads_merged_total{device="vdb"} 0
    node_disk_write_time_seconds_total{device="vda"} 240.66400000000002
    node_disk_write_time_seconds_total{device="vdb"} 0
    node_disk_writes_completed_total{device="vda"} 71584
    node_disk_writes_completed_total{device="vdb"} 0
    node_disk_writes_merged_total{device="vda"} 19761
    node_disk_writes_merged_total{device="vdb"} 0
    node_disk_written_bytes_total{device="vda"} 2.007924224e+09
    node_disk_written_bytes_total{device="vdb"} 0

11.3.4. Creating a ServiceMonitor resource for the node exporter service

You can access and view the metrics that the node-exporter service exposes by having Prometheus scrape the /metrics endpoint. Create a ServiceMonitor custom resource (CR) to monitor the node exporter service.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.

Procedure

  1. Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the servicetype: metrics label and queries the exmet port every 30 seconds.

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        k8s-app: node-exporter-metrics-monitor
      name: node-exporter-metrics-monitor 1
      namespace: dynamation 2
    spec:
      endpoints:
      - interval: 30s 3
        port: exmet 4
        scheme: http
      selector:
        matchLabels:
          servicetype: metrics
    1
    The name of the ServiceMonitor.
    2
    The namespace where the ServiceMonitor is created.
    3
    The interval at which the port will be queried.
    4
    The name of the port that is queried every 30 seconds.
  2. Create the ServiceMonitor configuration for the node-exporter service.

    $ oc create -f node-exporter-metrics-monitor.yaml
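
Optionally, confirm that the resource exists. The namespace and name in this sketch follow the example above:

$ oc -n dynamation get servicemonitor node-exporter-metrics-monitor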

11.3.4.1. Accessing the node exporter service outside the cluster

You can access the node-exporter service outside the cluster and view the exposed metrics.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.

Procedure

  1. Expose the node-exporter service.

    $ oc expose service -n <namespace> <node_exporter_service_name>
  2. Obtain the FQDN (Fully Qualified Domain Name) for the route.

    $ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host

    Example output

    NAME                    DNS
    node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org

  3. Use the curl command to display metrics for the node-exporter service.

    $ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics

    Example output

    go_gc_duration_seconds{quantile="0"} 1.5382e-05
    go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
    go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
    go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
    go_gc_duration_seconds{quantile="1"} 0.000189423


11.4. Virtual machine health checks

You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource.

11.4.1. About readiness and liveness probes

Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.

A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.

A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.

You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests:

HTTP GET
The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
TCP socket
The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
Guest agent ping
The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
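
The following sketch shows a guest agent ping readiness probe, using the same VM naming as the examples below. It assumes that the QEMU guest agent is installed and running in the guest:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
  namespace: example-namespace
# ...
spec:
  template:
    spec:
      readinessProbe:
        guestAgentPing: {}
        initialDelaySeconds: 120
        periodSeconds: 20
# ...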

11.4.1.1. Defining an HTTP readiness probe

Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration.

Procedure

  1. Include details of the readiness probe in the VM configuration file.

    Sample readiness probe with an HTTP GET test

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          readinessProbe:
            httpGet: 1
              port: 1500 2
              path: /healthz 3
              httpHeaders:
              - name: Custom-Header
                value: Awesome
            initialDelaySeconds: 120 4
            periodSeconds: 20 5
            timeoutSeconds: 10 6
            failureThreshold: 3 7
            successThreshold: 3 8
    # ...

    1
    The HTTP GET request to perform to connect to the VM.
    2
    The port of the VM that the probe queries. In the above example, the probe queries port 1500.
    3
    The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
    4
    The time, in seconds, after the VM starts before the readiness probe is initiated.
    5
    The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    6
    The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
    7
    The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
    8
    The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
  2. Create the VM by running the following command:

    $ oc create -f <file_name>.yaml
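
Optionally, after the VM starts, check whether the probe reports the virtual machine instance as ready. This is a sketch using the names from the sample above; the output columns vary by version:

$ oc -n example-namespace get vmi fedora-vm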

11.4.1.2. Defining a TCP readiness probe

Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration.

Procedure

  1. Include details of the TCP readiness probe in the VM configuration file.

    Sample readiness probe with a TCP socket test

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          readinessProbe:
            initialDelaySeconds: 120 1
            periodSeconds: 20 2
            tcpSocket: 3
              port: 1500 4
            timeoutSeconds: 10 5
    # ...

    1
    The time, in seconds, after the VM starts before the readiness probe is initiated.
    2
    The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    3
    The TCP action to perform.
    4
    The port of the VM that the probe queries.
    5
    The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
  2. Create the VM by running the following command:

    $ oc create -f <file_name>.yaml

11.4.1.3. Defining an HTTP liveness probe

Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.

Procedure

  1. Include details of the HTTP liveness probe in the VM configuration file.

    Sample liveness probe with an HTTP GET test

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      annotations:
      name: fedora-vm
      namespace: example-namespace
    # ...
    spec:
      template:
        spec:
          livenessProbe:
            initialDelaySeconds: 120 1
            periodSeconds: 20 2
            httpGet: 3
              port: 1500 4
              path: /healthz 5
              httpHeaders:
              - name: Custom-Header
                value: Awesome
            timeoutSeconds: 10 6
    # ...

    1
    The time, in seconds, after the VM starts before the liveness probe is initiated.
    2
    The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    3
    The HTTP GET request to perform to connect to the VM.
    4
    The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
    5
    The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created.
    6
    The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
  2. Create the VM by running the following command:

    $ oc create -f <file_name>.yaml

11.4.2. Defining a watchdog

You can define a watchdog to monitor the health of the guest operating system by performing the following steps:

  1. Configure a watchdog device for the virtual machine (VM).
  2. Install the watchdog agent on the guest.

The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:

  • poweroff: The VM powers down immediately. If spec.running is set to true or spec.runStrategy is not set to manual, then the VM reboots.
  • reset: The VM reboots in place and the guest operating system cannot react.

    Note

    The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.

  • shutdown: The VM gracefully powers down by stopping all services.
Note

Watchdog is not available for Windows VMs.

11.4.2.1. Configuring a watchdog device for the virtual machine

You configure a watchdog device for the virtual machine (VM).

Prerequisites

  • The VM must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb.

Procedure

  1. Create a YAML file with the following contents:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm2-rhel84-watchdog
      name: <vm-name>
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm2-rhel84-watchdog
        spec:
          domain:
            devices:
              watchdog:
                name: <watchdog>
                i6300esb:
                  action: "poweroff" 1
    # ...
    1
    Specify poweroff, reset, or shutdown.

    The example above configures the i6300esb watchdog device on a RHEL 8 VM with the poweroff action and exposes the device as /dev/watchdog.

    This device can now be used by the watchdog binary.

  2. Apply the YAML file to your cluster by running the following command:

    $ oc apply -f <file_name>.yaml
Verification

Important

This procedure is provided for testing watchdog functionality only and must not be run on production machines.

  1. Run the following command to verify that the VM is connected to the watchdog device:

    $ lspci | grep watchdog -i
  2. Run one of the following commands to confirm the watchdog is active:

    • Trigger a kernel panic:

      # echo c > /proc/sysrq-trigger
    • Stop the watchdog service:

      # pkill -9 watchdog

11.4.2.2. Installing the watchdog agent on the guest

You install the watchdog agent on the guest and start the watchdog service.

Procedure

  1. Log in to the virtual machine as the root user.
  2. Install the watchdog package and its dependencies:

    # yum install watchdog
  3. Uncomment the following line in the /etc/watchdog.conf file and save the changes:

    #watchdog-device = /dev/watchdog
  4. Enable the watchdog service to start on boot:

    # systemctl enable --now watchdog.service
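
Optionally, confirm that the service is active. This quick check is a sketch; output varies by distribution:

# systemctl is-active watchdog.service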

11.5. OpenShift Virtualization runbooks

You can use the procedures in these runbooks to diagnose and resolve issues that trigger OpenShift Virtualization alerts.

OpenShift Virtualization alerts are displayed on the Virtualization → Overview tab in the web console.

11.5.1. CDIDataImportCronOutdated

Meaning

This alert fires when DataImportCron cannot poll or import the latest disk image versions.

DataImportCron polls disk images, checking for the latest versions, and imports the images as persistent volume claims (PVCs). This process ensures that PVCs are updated to the latest version so that they can be used as reliable clone sources or golden images for virtual machines (VMs).

For golden images, latest refers to the latest operating system of the distribution. For other disk images, latest refers to the latest hash of the image that is available.

Impact

VMs might be created from outdated disk images.

VMs might fail to start because no source PVC is available for cloning.

Diagnosis
  1. Check the cluster for a default storage class:

    $ oc get sc

    The output displays the storage classes with (default) beside the name of the default storage class. You must set a default storage class, either on the cluster or in the DataImportCron specification, in order for the DataImportCron to poll and import golden images. If no storage class is defined, the DataVolume controller fails to create PVCs and the following event is displayed: DataVolume.storage spec is missing accessMode and no storageClass to choose profile.
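
    Example output with a default class set (a sketch; names and provisioners vary by cluster):

    NAME                PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    gp3-csi (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   28d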

  2. Obtain the DataImportCron namespace and name:

    $ oc get dataimportcron -A -o json | jq -r '.items[] |
      select(.status.conditions[] | select(.type == "UpToDate" and
      .status == "False")) | .metadata.namespace + "/" + .metadata.name'
  3. If a default storage class is not defined on the cluster, check the DataImportCron specification for a default storage class:

    $ oc get dataimportcron <dataimportcron> -o yaml | \
      grep -B 5 storageClassName

    Example output

          url: docker://.../cdi-func-test-tinycore
        storage:
          resources:
            requests:
              storage: 5Gi
        storageClassName: rook-ceph-block

  4. Obtain the name of the DataVolume associated with the DataImportCron object:

    $ oc -n <namespace> get dataimportcron <dataimportcron> -o json | \
      jq .status.lastImportedPVC.name
  5. Check the DataVolume object for error messages:

    $ oc -n <namespace> get dv <datavolume> -o yaml
  6. Set the CDI_NAMESPACE environment variable:

    $ export CDI_NAMESPACE="$(oc get deployment -A | \
      grep cdi-operator | awk '{print $1}')"
  7. Check the cdi-deployment log for error messages:

    $ oc logs -n $CDI_NAMESPACE deployment/cdi-deployment
Mitigation
  1. Set a default storage class, either on the cluster or in the DataImportCron specification, to poll and import golden images. The updated Containerized Data Importer (CDI) will resolve the issue within a few seconds.
  2. If the issue does not resolve itself, delete the data volumes associated with the affected DataImportCron objects. The CDI will recreate the data volumes with the default storage class.
  3. If your cluster is installed in a restricted network environment, disable the enableCommonBootImageImport feature gate in order to opt out of automatic updates:

    $ oc patch hco kubevirt-hyperconverged -n $CDI_NAMESPACE --type json \
      -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.2. CDIDataVolumeUnusualRestartCount

Meaning

This alert fires when a DataVolume object restarts more than three times.

Impact

Data volumes are responsible for importing and creating a virtual machine disk on a persistent volume claim. If a data volume restarts more than three times, these operations are unlikely to succeed. You must diagnose and resolve the issue.

Diagnosis
  1. Find Containerized Data Importer (CDI) pods with more than three restarts:

    $ oc get pods --all-namespaces -l app=containerized-data-importer -o=jsonpath='{range .items[?(@.status.containerStatuses[0].restartCount>3)]}{.metadata.name}{"/"}{.metadata.namespace}{"\n"}{end}'
  2. Obtain the details of the pods:

    $ oc -n <namespace> describe pods <pod>
  3. Check the pod logs for error messages:

    $ oc -n <namespace> logs <pod>
Mitigation

Delete the data volume, resolve the issue, and create a new data volume.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.3. CDIDefaultStorageClassDegraded

Meaning

This alert fires when there is no default storage class that supports smart cloning (CSI or snapshot-based) or the ReadWriteMany access mode.

Impact

If the default storage class does not support smart cloning, the default cloning method is host-assisted cloning, which is much less efficient.

If the default storage class does not support ReadWriteMany, virtual machines (VMs) cannot be live migrated.

Note

A default OpenShift Virtualization storage class has precedence over a default Red Hat OpenShift Service on AWS storage class when creating a VM disk.

Diagnosis
  1. Get the default OpenShift Virtualization storage class by running the following command:

    $ oc get sc -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubevirt\.io/is-default-virt-class=="true")].metadata.name}'
  2. If a default OpenShift Virtualization storage class exists, check that it supports ReadWriteMany by running the following command:

    $ oc get storageprofile <storage_class> -o json | jq '.status.claimPropertySets'| grep ReadWriteMany
  3. If there is no default OpenShift Virtualization storage class, get the default Red Hat OpenShift Service on AWS storage class by running the following command:

    $ oc get sc -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}'
  4. If a default Red Hat OpenShift Service on AWS storage class exists, check that it supports ReadWriteMany by running the following command:

    $ oc get storageprofile <storage_class> -o json | jq '.status.claimPropertySets'| grep ReadWriteMany
Mitigation

Ensure that you have a default storage class, either Red Hat OpenShift Service on AWS or OpenShift Virtualization, and that the default storage class supports smart cloning and ReadWriteMany.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.4. CDIMultipleDefaultVirtStorageClasses

Meaning

This alert fires when more than one storage class has the annotation storageclass.kubevirt.io/is-default-virt-class: "true".

Impact

The storageclass.kubevirt.io/is-default-virt-class: "true" annotation defines a default OpenShift Virtualization storage class.

If more than one default OpenShift Virtualization storage class is defined, a data volume with no storage class specified receives the most recently created default storage class.

Diagnosis

Obtain a list of default OpenShift Virtualization storage classes by running the following command:

$ oc get sc -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubevirt\.io/is-default-virt-class=="true")].metadata.name}'
Mitigation

Ensure that only one default OpenShift Virtualization storage class is defined by removing the annotation from the other storage classes.
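
For example, you can remove the annotation by setting its value to null with the patch style used elsewhere in this chapter. Replace <storage_class_name> in this sketch with the storage class to demote:

$ oc patch storageclass <storage_class_name> -p '{"metadata": {"annotations":{"storageclass.kubevirt.io/is-default-virt-class":null}}}'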

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.5. CDINoDefaultStorageClass

Meaning

This alert fires when no default Red Hat OpenShift Service on AWS or OpenShift Virtualization storage class is defined.

Impact

If no default Red Hat OpenShift Service on AWS or OpenShift Virtualization storage class is defined, a data volume that requests the default storage class (that is, no storage class is specified) remains in a Pending state.

Diagnosis
  1. Check for a default Red Hat OpenShift Service on AWS storage class by running the following command:

    $ oc get sc -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}'
  2. Check for a default OpenShift Virtualization storage class by running the following command:

    $ oc get sc -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubevirt\.io/is-default-virt-class=="true")].metadata.name}'
Mitigation

Create a default storage class for either Red Hat OpenShift Service on AWS or OpenShift Virtualization or for both.

A default OpenShift Virtualization storage class has precedence over a default Red Hat OpenShift Service on AWS storage class for creating a virtual machine disk image.

  • Create a default Red Hat OpenShift Service on AWS storage class by running the following command:

    $ oc patch storageclass <storage-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  • Create a default OpenShift Virtualization storage class by running the following command:

    $ oc patch storageclass <storage-class-name> -p '{"metadata": {"annotations":{"storageclass.kubevirt.io/is-default-virt-class":"true"}}}'

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.6. CDINotReady

Meaning

This alert fires when the Containerized Data Importer (CDI) is in a degraded state:

  • Not progressing
  • Not available to use
Impact

CDI is not usable, so users cannot build virtual machine disks on persistent volume claims (PVCs) using CDI’s data volumes. CDI components are not ready and they stopped progressing towards a ready state.

Diagnosis
  1. Set the CDI_NAMESPACE environment variable:

    $ export CDI_NAMESPACE="$(oc get deployment -A | \
      grep cdi-operator | awk '{print $1}')"
  2. Check the CDI deployment for components that are not ready:

    $ oc -n $CDI_NAMESPACE get deploy -l cdi.kubevirt.io
  3. Check the details of the failing pod:

    $ oc -n $CDI_NAMESPACE describe pods <pod>
  4. Check the logs of the failing pod:

    $ oc -n $CDI_NAMESPACE logs <pod>
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.7. CDIOperatorDown

Meaning

This alert fires when the Containerized Data Importer (CDI) Operator is down. The CDI Operator deploys and manages the CDI infrastructure components, such as data volume and persistent volume claim (PVC) controllers. These controllers help users build virtual machine disks on PVCs.

Impact

The CDI components might fail to deploy or to stay in a required state. The CDI installation might not function correctly.

Diagnosis
  1. Set the CDI_NAMESPACE environment variable:

    $ export CDI_NAMESPACE="$(oc get deployment -A | grep cdi-operator | \
      awk '{print $1}')"
  2. Check whether the cdi-operator pod is currently running:

    $ oc -n $CDI_NAMESPACE get pods -l name=cdi-operator
  3. Obtain the details of the cdi-operator pod:

    $ oc -n $CDI_NAMESPACE describe pods -l name=cdi-operator
  4. Check the log of the cdi-operator pod for errors:

    $ oc -n $CDI_NAMESPACE logs -l name=cdi-operator
Mitigation

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.8. CDIStorageProfilesIncomplete

Meaning

This alert fires when a Containerized Data Importer (CDI) storage profile is incomplete.

If a storage profile is incomplete, the CDI cannot infer persistent volume claim (PVC) fields, such as volumeMode and accessModes, which are required to create a virtual machine (VM) disk.

Impact

The CDI cannot create a VM disk on the PVC.

Diagnosis
  • Identify the incomplete storage profile:

    $ oc get storageprofile <storage_class>
Mitigation
  • Add the missing storage profile information as in the following example:

    $ oc patch storageprofile <storage_class> --type=merge -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.9. CnaoDown

Meaning

This alert fires when the Cluster Network Addons Operator (CNAO) is down. The CNAO deploys additional networking components on top of the cluster.

Impact

If the CNAO is not running, the cluster cannot reconcile changes to virtual machine components. As a result, the changes might fail to take effect.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | \
      grep cluster-network-addons-operator | awk '{print $1}')"
  2. Check the status of the cluster-network-addons-operator pod:

    $ oc -n $NAMESPACE get pods -l name=cluster-network-addons-operator
  3. Check the cluster-network-addons-operator logs for error messages:

    $ oc -n $NAMESPACE logs -l name=cluster-network-addons-operator
  4. Obtain the details of the cluster-network-addons-operator pods:

    $ oc -n $NAMESPACE describe pods -l name=cluster-network-addons-operator
Mitigation

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.10. HCOInstallationIncomplete

Meaning

This alert fires when the HyperConverged Cluster Operator (HCO) runs for more than an hour without a HyperConverged custom resource (CR).

This alert has the following causes:

  • During the installation process, you installed the HCO but you did not create the HyperConverged CR.
  • During the uninstall process, you removed the HyperConverged CR before uninstalling the HCO and the HCO is still running.
Mitigation

The mitigation depends on whether you are installing or uninstalling the HCO:

  • Complete the installation by creating a HyperConverged CR with its default values:

    $ cat <<EOF | oc apply -f -
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: kubevirt-hyperconverged
    spec: {}
    EOF
  • Uninstall the HCO. If the uninstall process continues to run, you must resolve that issue in order to cancel the alert.

11.5.11. HPPNotReady

Meaning

This alert fires when a hostpath provisioner (HPP) installation is in a degraded state.

The HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).

Impact

HPP is not usable. Its components are not ready and they are not progressing towards a ready state.

Diagnosis
  1. Set the HPP_NAMESPACE environment variable:

    $ export HPP_NAMESPACE="$(oc get deployment -A | \
      grep hostpath-provisioner-operator | awk '{print $1}')"
  2. Check for HPP components that are currently not ready:

    $ oc -n $HPP_NAMESPACE get all -l k8s-app=hostpath-provisioner
  3. Obtain the details of the failing pod:

    $ oc -n $HPP_NAMESPACE describe pods <pod>
  4. Check the logs of the failing pod:

    $ oc -n $HPP_NAMESPACE logs <pod>
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.12. HPPOperatorDown

Meaning

This alert fires when the hostpath provisioner (HPP) Operator is down.

The HPP Operator deploys and manages the HPP infrastructure components, such as the daemon set that provisions hostpath volumes.

Impact

The HPP components might fail to deploy or to remain in the required state. As a result, the HPP installation might not work correctly in the cluster.

Diagnosis
  1. Configure the HPP_NAMESPACE environment variable:

    $ HPP_NAMESPACE="$(oc get deployment -A | grep \
      hostpath-provisioner-operator | awk '{print $1}')"
  2. Check whether the hostpath-provisioner-operator pod is currently running:

    $ oc -n $HPP_NAMESPACE get pods -l name=hostpath-provisioner-operator
  3. Obtain the details of the hostpath-provisioner-operator pod:

    $ oc -n $HPP_NAMESPACE describe pods -l name=hostpath-provisioner-operator
  4. Check the log of the hostpath-provisioner-operator pod for errors:

    $ oc -n $HPP_NAMESPACE logs -l name=hostpath-provisioner-operator
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.13. HPPSharingPoolPathWithOS

Meaning

This alert fires when the hostpath provisioner (HPP) shares a file system with other critical components, such as kubelet or the operating system (OS).

HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).

Impact

A shared hostpath pool puts pressure on the node’s disks. The node might have degraded performance and stability.

Diagnosis
  1. Configure the HPP_NAMESPACE environment variable:

    $ export HPP_NAMESPACE="$(oc get deployment -A | \
      grep hostpath-provisioner-operator | awk '{print $1}')"
  2. Obtain the status of the hostpath-provisioner-csi daemon set pods:

    $ oc -n $HPP_NAMESPACE get pods | grep hostpath-provisioner-csi
  3. Check the hostpath-provisioner-csi logs to identify the shared pool and path:

    $ oc -n $HPP_NAMESPACE logs <csi_daemonset> -c hostpath-provisioner

    Example output

    I0208 15:21:03.769731       1 utils.go:221] pool (<legacy, csi-data-dir>/csi),
    shares path with OS which can lead to node disk pressure

Mitigation

Using the data obtained in the Diagnosis section, try to prevent the pool path from being shared with the OS. The specific steps vary based on the node and other circumstances.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.14. KubemacpoolDown

Meaning

KubeMacPool is down. KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts.

Impact

If KubeMacPool is down, VirtualMachine objects cannot be created.

Diagnosis
  1. Set the KMP_NAMESPACE environment variable:

    $ export KMP_NAMESPACE="$(oc get pod -A --no-headers -l \
      control-plane=mac-controller-manager | awk '{print $1}')"
  2. Set the KMP_NAME environment variable:

    $ export KMP_NAME="$(oc get pod -A --no-headers -l \
      control-plane=mac-controller-manager | awk '{print $2}')"
  3. Obtain the KubeMacPool-manager pod details:

    $ oc describe pod -n $KMP_NAMESPACE $KMP_NAME
  4. Check the KubeMacPool-manager logs for error messages:

    $ oc logs -n $KMP_NAMESPACE $KMP_NAME
Mitigation

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.15. KubeMacPoolDuplicateMacsFound

Meaning

This alert fires when KubeMacPool detects duplicate MAC addresses.

KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts. When KubeMacPool starts, it scans the cluster for the MAC addresses of virtual machines (VMs) in managed namespaces.

Impact

Duplicate MAC addresses on the same LAN might cause network issues.

Diagnosis
  1. Obtain the namespace and the name of the kubemacpool-mac-controller pod:

    $ oc get pod -A -l control-plane=mac-controller-manager --no-headers \
      -o custom-columns=":metadata.namespace,:metadata.name"
  2. Obtain the duplicate MAC addresses from the kubemacpool-mac-controller logs:

    $ oc logs -n <namespace> <kubemacpool_mac_controller> | \
      grep "already allocated"

    Example output

    mac address 02:00:ff:ff:ff:ff already allocated to
    vm/kubemacpool-test/testvm, br1,
    conflict with: vm/kubemacpool-test/testvm2, br1

Mitigation
  1. Update the VMs to remove the duplicate MAC addresses.
  2. Restart the kubemacpool-mac-controller pod:

    $ oc delete pod -n <namespace> <kubemacpool_mac_controller>

11.5.16. KubeVirtComponentExceedsRequestedCPU

Meaning

This alert fires when a component’s CPU usage exceeds the requested limit.

Impact

Usage of CPU resources is not optimal and the node might be overloaded.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the component’s CPU request limit:

    $ oc -n $NAMESPACE get deployment <component> -o yaml | grep requests: -A 2
  3. Check the actual CPU usage by using a PromQL query:

    node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate
    {namespace="$NAMESPACE",container="<component>"}

See the Prometheus documentation for more information.

Mitigation

Update the CPU request limit in the HCO custom resource.

11.5.17. KubeVirtComponentExceedsRequestedMemory

Meaning

This alert fires when a component’s memory usage exceeds the requested limit.

Impact

Usage of memory resources is not optimal and the node might be overloaded.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the component’s memory request limit:

    $ oc -n $NAMESPACE get deployment <component> -o yaml | \
      grep requests: -A 2
  3. Check the actual memory usage by using a PromQL query:

    container_memory_usage_bytes{namespace="$NAMESPACE",container="<component>"}

See the Prometheus documentation for more information.

Mitigation

Update the memory request limit in the HCO custom resource.

11.5.18. KubeVirtCRModified

Meaning

This alert fires when an operand of the HyperConverged Cluster Operator (HCO) is changed by someone or something other than HCO.

HCO configures OpenShift Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly. The HyperConverged custom resource is the source of truth for the configuration.

Impact

Changing the operands manually causes the cluster configuration to fluctuate and might lead to instability.

Diagnosis
  • Check the component_name value in the alert details to determine the operand kind (kubevirt) and the operand name (kubevirt-kubevirt-hyperconverged) that are being changed:

    Labels
      alertname=KubevirtHyperconvergedClusterOperatorCRModification
      component_name=kubevirt/kubevirt-kubevirt-hyperconverged
      severity=warning
Mitigation

Do not change the HCO operands directly. Use HyperConverged objects to configure the cluster.

The alert resolves itself after 10 minutes if the operands are not changed manually.

11.5.19. KubeVirtDeprecatedAPIRequested

Meaning

This alert fires when a deprecated KubeVirt API is used.

Impact

Using a deprecated API is not recommended because requests will fail when the API is removed in a future release.

Diagnosis
  • Check the Description and Summary sections of the alert to identify the deprecated API as in the following example:

    Description

    Detected requests to the deprecated virtualmachines.kubevirt.io/v1alpha3 API.

    Summary

    2 requests were detected in the last 10 minutes.

Mitigation

Use fully supported APIs. The alert resolves itself after 10 minutes if the deprecated API is not used.
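
For example, if your manifests still reference the deprecated group version, update the apiVersion field to the fully supported version. The following is a minimal sketch:

    apiVersion: kubevirt.io/v1 # previously kubevirt.io/v1alpha3
    kind: VirtualMachine
    # ...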

11.5.20. KubeVirtNoAvailableNodesToRunVMs

Meaning

This alert fires when the node CPUs in the cluster do not support virtualization or the virtualization extensions are not enabled.

Impact

Virtual machines (VMs) cannot run because the nodes do not support virtualization or because the virtualization features are not enabled in the BIOS.

Diagnosis
  • Check the nodes for hardware virtualization support:

    $ oc get nodes -o json | jq '.items[]|{"name": .metadata.name, "kvm": .status.allocatable["devices.kubevirt.io/kvm"]}'

    Example output

    {
      "name": "shift-vwpsz-master-0",
      "kvm": null
    }
    {
      "name": "shift-vwpsz-master-1",
      "kvm": null
    }
    {
      "name": "shift-vwpsz-master-2",
      "kvm": null
    }
    {
      "name": "shift-vwpsz-worker-8bxkp",
      "kvm": "1k"
    }
    {
      "name": "shift-vwpsz-worker-ctgmc",
      "kvm": "1k"
    }
    {
      "name": "shift-vwpsz-worker-gl5zl",
      "kvm": "1k"
    }

    Nodes with "kvm": null or "kvm": 0 do not support virtualization extensions.

    Nodes with "kvm": "1k" do support virtualization extensions.

Mitigation

Ensure that hardware and CPU virtualization extensions are enabled on all nodes and that the nodes are correctly labeled.
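
For example, you can check whether a node’s CPU advertises the virtualization extensions by running a debug pod on the node; this is a quick check, assuming you are permitted to debug nodes:

    $ oc debug node/<node> -- chroot /host \
      sh -c 'grep -m 1 -oE "vmx|svm" /proc/cpuinfo'

If the vmx (Intel) or svm (AMD) flag is present but KVM resources are still not allocatable, the virtualization feature might be disabled in the BIOS.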

See OpenShift Virtualization reports no nodes are available, cannot start VMs for details.

If you cannot resolve the issue, log in to the Customer Portal and open a support case.

11.5.21. KubevirtVmHighMemoryUsage

Meaning

This alert fires when a container hosting a virtual machine (VM) has less than 20 MB free memory.

Impact

The virtual machine running inside the container is terminated by the runtime if the container’s memory limit is exceeded.

Diagnosis
  1. Obtain the virt-launcher pod details:

    $ oc get pod <virt-launcher> -o yaml
  2. Identify compute container processes with high memory usage in the virt-launcher pod:

    $ oc exec -it <virt-launcher> -c compute -- top
Mitigation
  • Increase the memory limit in the VirtualMachine specification as in the following example:

    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-name
        spec:
          domain:
            resources:
              limits:
                memory: 200Mi
              requests:
                memory: 128Mi
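
    You can apply the change by editing the VirtualMachine object, for example:

    $ oc edit vm <vm_name> -n <namespace>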

11.5.22. KubeVirtVMIExcessiveMigrations

Meaning

This alert fires when a virtual machine instance (VMI) live migrates more than 12 times over a period of 24 hours.

This migration rate is abnormally high, even during an upgrade. This alert might indicate a problem in the cluster infrastructure, such as network disruptions or insufficient resources.

Impact

A virtual machine (VM) that migrates too frequently might experience degraded performance because memory page faults occur during the transition.

Diagnosis
  1. Verify that the worker node has sufficient resources:

    $ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
      jq .items[].status.allocatable

    Example output

    {
      "cpu": "3500m",
      "devices.kubevirt.io/kvm": "1k",
      "devices.kubevirt.io/sev": "0",
      "devices.kubevirt.io/tun": "1k",
      "devices.kubevirt.io/vhost-net": "1k",
      "ephemeral-storage": "38161122446",
      "hugepages-1Gi": "0",
      "hugepages-2Mi": "0",
      "memory": "7000128Ki",
      "pods": "250"
    }

  2. Check the status of the worker node:

    $ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
      jq .items[].status.conditions

    Example output

    {
      "lastHeartbeatTime": "2022-05-26T07:36:01Z",
      "lastTransitionTime": "2022-05-23T08:12:02Z",
      "message": "kubelet has sufficient memory available",
      "reason": "KubeletHasSufficientMemory",
      "status": "False",
      "type": "MemoryPressure"
    },
    {
      "lastHeartbeatTime": "2022-05-26T07:36:01Z",
      "lastTransitionTime": "2022-05-23T08:12:02Z",
      "message": "kubelet has no disk pressure",
      "reason": "KubeletHasNoDiskPressure",
      "status": "False",
      "type": "DiskPressure"
    },
    {
      "lastHeartbeatTime": "2022-05-26T07:36:01Z",
      "lastTransitionTime": "2022-05-23T08:12:02Z",
      "message": "kubelet has sufficient PID available",
      "reason": "KubeletHasSufficientPID",
      "status": "False",
      "type": "PIDPressure"
    },
    {
      "lastHeartbeatTime": "2022-05-26T07:36:01Z",
      "lastTransitionTime": "2022-05-23T08:24:15Z",
      "message": "kubelet is posting ready status",
      "reason": "KubeletReady",
      "status": "True",
      "type": "Ready"
    }

  3. Log in to the worker node and verify that the kubelet service is running:

    $ systemctl status kubelet
  4. Check the kubelet journal log for error messages:

    $ journalctl -r -u kubelet
Mitigation

Ensure that the worker nodes have sufficient resources (CPU, memory, disk) to run VM workloads without interruption.

If the problem persists, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.23. LowKVMNodesCount

Meaning

This alert fires when fewer than two nodes in the cluster have KVM resources.

Impact

The cluster must have at least two nodes with KVM resources for live migration.

Virtual machines cannot be scheduled or run if no nodes have KVM resources.

Diagnosis
  • Identify the nodes with KVM resources:

    $ oc get nodes -o jsonpath='{.items[*].status.allocatable}' | \
      grep devices.kubevirt.io/kvm
Mitigation

Install KVM on the nodes without KVM resources.

11.5.24. LowReadyVirtControllersCount

Meaning

This alert fires when one or more virt-controller pods are running, but none of these pods has been in the Ready state for the past 5 minutes.

A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages their lifecycle. The device is critical for cluster-wide virtualization functionality.

Impact

This alert indicates that a cluster-level failure might occur. Actions related to VM lifecycle management, such as launching a new VMI or shutting down an existing VMI, will fail.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Verify a virt-controller device is available:

    $ oc get deployment -n $NAMESPACE virt-controller \
      -o jsonpath='{.status.readyReplicas}'
  3. Check the status of the virt-controller deployment:

    $ oc -n $NAMESPACE get deploy virt-controller -o yaml
  4. Obtain the details of the virt-controller deployment to check for status conditions, such as crashing pods or failures to pull images:

    $ oc -n $NAMESPACE describe deploy virt-controller
  5. Check if any problems occurred with the nodes. For example, they might be in a NotReady state:

    $ oc get nodes
Mitigation

This alert can have multiple causes, including the following:

  • The cluster has insufficient memory.
  • The nodes are down.
  • The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
  • There are network issues.

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.25. LowReadyVirtOperatorsCount

Meaning

This alert fires when one or more virt-operator pods are running, but none of these pods has been in a Ready state for the last 10 minutes.

The virt-operator is the first Operator to start in a cluster. The virt-operator deployment has a default replica of two virt-operator pods.

Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management
Impact

A cluster-level failure might occur. Critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might become unavailable. Such a state also triggers the NoReadyVirtOperator alert.

The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Obtain the name of the virt-operator deployment:

    $ oc -n $NAMESPACE get deploy virt-operator -o yaml
  3. Obtain the details of the virt-operator deployment:

    $ oc -n $NAMESPACE describe deploy virt-operator
  4. Check for node issues, such as a NotReady state:

    $ oc get nodes
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.26. LowVirtAPICount

Meaning

This alert fires when only one available virt-api pod is detected during a 60-minute period, although at least two nodes are available for scheduling.

Impact

An API call outage might occur during node eviction because the virt-api pod becomes a single point of failure.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the number of available virt-api pods:

    $ oc get deployment -n $NAMESPACE virt-api \
      -o jsonpath='{.status.readyReplicas}'
  3. Check the status of the virt-api deployment for error conditions:

    $ oc -n $NAMESPACE get deploy virt-api -o yaml
  4. Check the nodes for issues such as nodes in a NotReady state:

    $ oc get nodes
Mitigation

Try to identify the root cause and to resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.27. LowVirtControllersCount

Meaning

This alert fires when a low number of virt-controller pods is detected. At least one virt-controller pod must be available in order to ensure high availability. The default number of replicas is 2.

A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages the lifecycle of the pods. The device is critical for cluster-wide virtualization functionality.

Impact

The responsiveness of OpenShift Virtualization might become negatively affected. For example, certain requests might be missed.

In addition, if another virt-controller instance terminates unexpectedly, OpenShift Virtualization might become completely unresponsive.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Verify that running virt-controller pods are available:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-controller
  3. Check the virt-launcher logs for error messages:

    $ oc -n $NAMESPACE logs <virt-launcher>
  4. Obtain the details of the virt-launcher pod to check for status conditions, such as unexpected termination or a NotReady state:

    $ oc -n $NAMESPACE describe pod/<virt-launcher>
Mitigation

This alert can have a variety of causes, including:

  • Not enough memory on the cluster
  • Nodes are down
  • The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
  • Networking issues

Identify the root cause and fix it, if possible.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.28. LowVirtOperatorCount

Meaning

This alert fires when only one virt-operator pod in a Ready state has been running for the last 60 minutes.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management
Impact

The virt-operator cannot provide high availability (HA) for the deployment. HA requires two or more virt-operator pods in a Ready state. The default deployment is two pods.

The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its decreased availability does not significantly affect VM workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the states of the virt-operator pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Review the logs of the affected virt-operator pods:

    $ oc -n $NAMESPACE logs <virt-operator>
  4. Obtain the details of the affected virt-operator pods:

    $ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.29. NetworkAddonsConfigNotReady

Meaning

This alert fires when the NetworkAddonsConfig custom resource (CR) of the Cluster Network Addons Operator (CNAO) is not ready.

CNAO deploys additional networking components on the cluster. This alert indicates that one of the deployed components is not ready.

Impact

Network functionality is affected.

Diagnosis
  1. Check the status conditions of the NetworkAddonsConfig CR to identify the deployment or daemon set that is not ready:

    $ oc get networkaddonsconfig \
      -o custom-columns="":.status.conditions[*].message

    Example output

    DaemonSet "cluster-network-addons/macvtap-cni" update is being processed...

  2. Check the component’s daemon set for errors:

    $ oc -n cluster-network-addons get daemonset <daemonset> -o yaml
  3. Check the logs of the component’s pod:

    $ oc -n cluster-network-addons logs <pod>
  4. Check the details of the component’s pod for error conditions:

    $ oc -n cluster-network-addons describe pod <pod>
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.30. NoLeadingVirtOperator

Meaning

This alert fires when no virt-operator pod with a leader lease has been detected for 10 minutes, although the virt-operator pods are in a Ready state. The alert indicates that no leader pod is available.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

The virt-operator deployment has a default replica of 2 pods, with one pod holding a leader lease.

Impact

This alert indicates a failure at the level of the cluster. As a result, critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A -o \
      custom-columns="":.metadata.namespace)"
  2. Obtain the status of the virt-operator pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Check the virt-operator pod logs to determine the leader status:

    $ oc -n $NAMESPACE logs <virt-operator> | grep lead

    Leader pod example:

    {"component":"virt-operator","level":"info","msg":"Attempting to acquire
    leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:18.635387Z"}
    I1130 12:15:18.635452       1 leaderelection.go:243] attempting to acquire
    leader lease <namespace>/virt-operator...
    I1130 12:15:19.216582       1 leaderelection.go:253] successfully acquired
    lease <namespace>/virt-operator
    {"component":"virt-operator","level":"info","msg":"Started leading",
    "pos":"application.go:385","timestamp":"2021-11-30T12:15:19.216836Z"}

    Non-leader pod example:

    {"component":"virt-operator","level":"info","msg":"Attempting to acquire
    leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:20.533696Z"}
    I1130 12:15:20.533792       1 leaderelection.go:243] attempting to acquire
    leader lease <namespace>/virt-operator...
  4. Obtain the details of the affected virt-operator pods:

    $ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation

Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.31. NoReadyVirtController

Meaning

This alert fires when no available virt-controller devices have been detected for 5 minutes.

The virt-controller devices monitor the custom resource definitions of virtual machine instances (VMIs) and manage the associated pods. The devices create pods for VMIs and manage the lifecycle of the pods.

Therefore, virt-controller devices are critical for all cluster-wide virtualization functionality.

Impact

Any actions related to VM lifecycle management fail. This notably includes launching a new VMI or shutting down an existing VMI.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Verify the number of virt-controller devices:

    $ oc get deployment -n $NAMESPACE virt-controller \
      -o jsonpath='{.status.readyReplicas}'
  3. Check the status of the virt-controller deployment:

    $ oc -n $NAMESPACE get deploy virt-controller -o yaml
  4. Obtain the details of the virt-controller deployment to check for status conditions such as crashing pods or failure to pull images:

    $ oc -n $NAMESPACE describe deploy virt-controller
  5. Obtain the details of the virt-controller pods:

    $ oc get pods -n $NAMESPACE | grep virt-controller
  6. Check the logs of the virt-controller pods for error messages:

    $ oc logs -n $NAMESPACE <virt-controller>
  7. Check the nodes for problems, such as a NotReady state:

    $ oc get nodes
Mitigation

Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.32. NoReadyVirtOperator

Meaning

This alert fires when no virt-operator pod in a Ready state has been detected for 10 minutes.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

The default deployment is two virt-operator pods.

Impact

This alert indicates a cluster-level failure. Critical cluster management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.

The virt-operator is not directly responsible for virtual machines in the cluster. Therefore, its temporary unavailability does not significantly affect workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Obtain the name of the virt-operator deployment:

    $ oc -n $NAMESPACE get deploy virt-operator -o yaml
  3. Generate the description of the virt-operator deployment:

    $ oc -n $NAMESPACE describe deploy virt-operator
  4. Check for node issues, such as a NotReady state:

    $ oc get nodes
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.33. OrphanedVirtualMachineInstances

Meaning

This alert fires when a virtual machine instance (VMI), or virt-launcher pod, runs on a node that does not have a running virt-handler pod. Such a VMI is called orphaned.

Impact

Orphaned VMIs cannot be managed.

Diagnosis
  1. Check the status of the virt-handler pods to view the nodes on which they are running:

    $ oc get pods --all-namespaces -o wide -l kubevirt.io=virt-handler
  2. Check the status of the VMIs to identify VMIs running on nodes that do not have a running virt-handler pod:

    $ oc get vmis --all-namespaces
  3. Check the status of the virt-handler daemon:

    $ oc get daemonset virt-handler --all-namespaces

    Example output

    NAME          DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE ...
    virt-handler  2        2        2      2           2         ...

    The daemon set is considered healthy if the Desired, Ready, and Available columns contain the same value.

  4. If the virt-handler daemon set is not healthy, check the virt-handler daemon set for pod deployment issues:

    $ oc get daemonset virt-handler --all-namespaces -o yaml | jq .status
  5. Check the nodes for issues such as a NotReady status:

    $ oc get nodes
  6. Check the spec.workloads stanza of the KubeVirt custom resource (CR) for a workloads placement policy:

    $ oc get kubevirt --all-namespaces -o yaml
Mitigation

If a workloads placement policy is configured, add the node with the VMI to the policy.
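
A workloads placement policy is defined in the spec.workloads.nodePlacement stanza of the KubeVirt CR, which is managed through the HyperConverged CR. The following is a minimal sketch that uses a node selector; the label is a placeholder:

    spec:
      workloads:
        nodePlacement:
          nodeSelector:
            vm-workloads: "true" # placeholder label

In that case, adding a node to the policy means labeling it:

    $ oc label node <node> vm-workloads=true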

Possible causes for the removal of a virt-handler pod from a node include changes to the node’s taints and tolerations or to a pod’s scheduling rules.

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.34. OutdatedVirtualMachineInstanceWorkloads

Meaning

This alert fires when running virtual machine instances (VMIs) in outdated virt-launcher pods are detected 24 hours after the OpenShift Virtualization control plane has been updated.

Impact

Outdated VMIs might not have access to new OpenShift Virtualization features.

Outdated VMIs will not receive the security fixes associated with the virt-launcher pod update.

Diagnosis
  1. Identify the outdated VMIs:

    $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
  2. Check the KubeVirt custom resource (CR) to determine whether workloadUpdateMethods is configured in the workloadUpdateStrategy stanza:

    $ oc get kubevirt --all-namespaces -o yaml
  3. Check each outdated VMI to determine whether it is live-migratable:

    $ oc get vmi <vmi> -o yaml

    Example output

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    # ...
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: null
          message: cannot migrate VMI which does not use masquerade to connect to the pod network
          reason: InterfaceNotLiveMigratable
          status: "False"
          type: LiveMigratable

Mitigation
Configuring automated workload updates

Update the HyperConverged CR to enable automatic workload updates.
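
The following is a minimal sketch of the relevant stanza, assuming the default kubevirt-hyperconverged CR in the openshift-cnv namespace; the batch values are illustrative:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      workloadUpdateStrategy:
        workloadUpdateMethods:
        - LiveMigrate
        - Evict
        batchEvictionSize: 10         # number of VMIs updated per batch
        batchEvictionInterval: "1m0s" # interval between batches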

Stopping a VM associated with a non-live-migratable VMI
  • If a VMI is not live-migratable and if runStrategy: always is set in the corresponding VirtualMachine object, you can update the VMI by manually stopping the virtual machine (VM):

    $ virtctl stop --namespace <namespace> <vm>

A new VMI spins up immediately in an updated virt-launcher pod to replace the stopped VMI. This is the equivalent of a restart action.

Note

Manually stopping a live-migratable VM is destructive and not recommended because it interrupts the workload.

Migrating a live-migratable VMI

If a VMI is live-migratable, you can update it by creating a VirtualMachineInstanceMigration object that targets a specific running VMI. The VMI is migrated into an updated virt-launcher pod.

  1. Create a VirtualMachineInstanceMigration manifest and save it as migration.yaml:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceMigration
    metadata:
      name: <migration_name>
      namespace: <namespace>
    spec:
      vmiName: <vmi_name>
  2. Create a VirtualMachineInstanceMigration object to trigger the migration:

    $ oc create -f migration.yaml
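
    You can then watch the migration progress until it completes, for example:

    $ oc get vmim -n <namespace> <migration_name> -o yaml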

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.35. SingleStackIPv6Unsupported

Meaning

This alert fires when you install OpenShift Virtualization on a single stack IPv6 cluster.

Impact

You cannot create virtual machines.

Diagnosis
  • Check the cluster network configuration by running the following command:

    $ oc get network.config cluster -o yaml

    The output displays only an IPv6 CIDR for the cluster network.

    Example output

    apiVersion: config.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      clusterNetwork:
      - cidr: fd02::/48
        hostPrefix: 64

Mitigation

Install OpenShift Virtualization on a single stack IPv4 cluster or on a dual stack IPv4/IPv6 cluster.

11.5.36. SSPCommonTemplatesModificationReverted

Meaning

This alert fires when the Scheduling, Scale, and Performance (SSP) Operator reverts changes to common templates as part of its reconciliation procedure.

The SSP Operator deploys and reconciles the common templates and the Template Validator. If a user or script changes a common template, the changes are reverted by the SSP Operator.

Impact

Changes to common templates are overwritten.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Check the ssp-operator logs for templates with reverted changes:

    $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator | \
      grep 'common template' -C 3
Mitigation

Try to identify and resolve the cause of the changes.

Ensure that changes are made only to copies of templates, and not to the templates themselves.
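
For example, to customize a common template, work on a copy instead of the original; the file and template names are placeholders:

    $ oc get template <template_name> -n openshift -o yaml > my-template.yaml

Edit metadata.name in my-template.yaml, remove server-set fields such as resourceVersion and uid, and then create the copy in your own namespace:

    $ oc create -f my-template.yaml -n <namespace>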

11.5.37. SSPDown

Meaning

This alert fires when all the Scheduling, Scale and Performance (SSP) Operator pods are down.

The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.

Impact

Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Check the status of the ssp-operator pods.

    $ oc -n $NAMESPACE get pods -l control-plane=ssp-operator
  3. Obtain the details of the ssp-operator pods:

    $ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
  4. Check the ssp-operator logs for error messages:

    $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.38. SSPFailingToReconcile

Meaning

This alert fires when the reconcile cycle of the Scheduling, Scale and Performance (SSP) Operator fails repeatedly, although the SSP Operator is running.

The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.

Impact

Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.

Diagnosis
  1. Export the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Obtain the details of the ssp-operator pods:

    $ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
  3. Check the ssp-operator logs for errors:

    $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
  4. Obtain the status of the virt-template-validator pods:

    $ oc -n $NAMESPACE get pods -l name=virt-template-validator
  5. Obtain the details of the virt-template-validator pods:

    $ oc -n $NAMESPACE describe pods -l name=virt-template-validator
  6. Check the virt-template-validator logs for errors:

    $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.39. SSPHighRateRejectedVms

Meaning

This alert fires when a user or script attempts to create or modify a large number of virtual machines (VMs) by using an invalid configuration.

Impact

The VMs are not created or modified. As a result, the environment might not behave as expected.

Diagnosis
  1. Export the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Check the virt-template-validator logs for errors that might indicate the cause:

    $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator

    Example output

    {"component":"kubevirt-template-validator","level":"info","msg":"evalution
    summary for ubuntu-3166wmdbbfkroku0:\nminimal-required-memory applied: FAIL,
    value 1073741824 is lower than minimum [2147483648]\n\nsucceeded=false",
    "pos":"admission.go:25","timestamp":"2021-09-28T17:59:10.934470Z"}

Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.40. SSPTemplateValidatorDown

Meaning

This alert fires when all the Template Validator pods are down.

The Template Validator checks virtual machines (VMs) to ensure that they do not violate their templates.

Impact

VMs are not validated against their templates. As a result, VMs might be created with specifications that do not match their respective workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Obtain the status of the virt-template-validator pods:

    $ oc -n $NAMESPACE get pods -l name=virt-template-validator
  3. Obtain the details of the virt-template-validator pods:

    $ oc -n $NAMESPACE describe pods -l name=virt-template-validator
  4. Check the virt-template-validator logs for error messages:

    $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.41. UnsupportedHCOModification

Meaning

This alert fires when a JSON Patch annotation is used to change an operand of the HyperConverged Cluster Operator (HCO).

HCO configures OpenShift Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly.

However, if a change is required and it is not supported by the HCO API, you can force HCO to set a change in an operator by using JSON Patch annotations. These changes are not reverted by HCO during its reconciliation process.

Impact

Incorrect use of JSON Patch annotations might lead to unexpected results or an unstable environment.

Upgrading a system with JSON Patch annotations is dangerous because the structure of the component custom resources might change.

Diagnosis
  • Check the annotation_name in the alert details to identify the JSON Patch annotation:

    Labels
      alertname=KubevirtHyperconvergedClusterOperatorUSModification
      annotation_name=kubevirt.kubevirt.io/jsonpatch
      severity=info
Mitigation

It is best to use the HCO API to change an operand. However, if the change can only be done with a JSON Patch annotation, proceed with caution.

Remove JSON Patch annotations before upgrade to avoid potential issues.
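
For example, to remove the annotation shown in the alert from the HyperConverged CR, assuming the default kubevirt-hyperconverged CR in the openshift-cnv namespace:

    $ oc annotate hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      kubevirt.kubevirt.io/jsonpatch-

The trailing hyphen removes the annotation.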

11.5.42. VirtAPIDown

Meaning

This alert fires when all the API Server pods are down.

Impact

OpenShift Virtualization objects cannot send API calls.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-api pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. Check the status of the virt-api deployment:

    $ oc -n $NAMESPACE get deploy virt-api -o yaml
  4. Check the virt-api deployment details for issues such as crashing pods or image pull failures:

    $ oc -n $NAMESPACE describe deploy virt-api
  5. Check for issues such as nodes in a NotReady state:

    $ oc get nodes
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.43. VirtApiRESTErrorsBurst

Meaning

More than 80% of REST calls have failed in the virt-api pods in the last 5 minutes.

Impact

A very high rate of failed REST calls to virt-api might lead to slow response and execution of API calls, and potentially to API calls being dropped entirely.

However, currently running virtual machine workloads are not likely to be affected.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Obtain the list of virt-api pods on your deployment:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. Check the virt-api logs for error messages:

    $ oc logs -n $NAMESPACE <virt-api>
  4. Obtain the details of the virt-api pods:

    $ oc describe pod -n $NAMESPACE <virt-api>
  5. Check if any problems occurred with the nodes. For example, they might be in a NotReady state:

    $ oc get nodes
  6. Check the status of the virt-api deployment:

    $ oc -n $NAMESPACE get deploy virt-api -o yaml
  7. Obtain the details of the virt-api deployment:

    $ oc -n $NAMESPACE describe deploy virt-api
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.44. VirtApiRESTErrorsHigh

Meaning

More than 5% of REST calls have failed in the virt-api pods in the last 60 minutes.

Impact

A high rate of failed REST calls to virt-api might lead to slow response and execution of API calls.

However, currently running virtual machine workloads are not likely to be affected.

Diagnosis
  1. Set the NAMESPACE environment variable as follows:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-api pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. Check the virt-api logs:

    $ oc logs -n $NAMESPACE <virt-api>
  4. Obtain the details of the virt-api pods:

    $ oc describe pod -n $NAMESPACE <virt-api>
  5. Check if any problems occurred with the nodes. For example, they might be in a NotReady state:

    $ oc get nodes
  6. Check the status of the virt-api deployment:

    $ oc -n $NAMESPACE get deploy virt-api -o yaml
  7. Obtain the details of the virt-api deployment:

    $ oc -n $NAMESPACE describe deploy virt-api
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.45. VirtControllerDown

Meaning

No running virt-controller pod has been detected for 5 minutes.

Impact

Any actions related to virtual machine (VM) lifecycle management fail. This notably includes launching a new virtual machine instance (VMI) or shutting down an existing VMI.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-controller deployment:

    $ oc get deployment -n $NAMESPACE virt-controller -o yaml
  3. Review the logs of the virt-controller pod:

    $ oc logs -n $NAMESPACE <virt-controller>
Mitigation

This alert can have a variety of causes, including the following:

  • Node resource exhaustion
  • Not enough memory on the cluster
  • Nodes are down
  • The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
  • Networking issues

Identify the root cause and fix it, if possible.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.46. VirtControllerRESTErrorsBurst

Meaning

More than 80% of REST calls in virt-controller pods failed in the last 5 minutes.

The virt-controller has likely fully lost the connection to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Status updates are not propagated and actions like migrations cannot take place. However, running workloads are not impacted.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. List the available virt-controller pods:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
  3. Check the virt-controller logs for error messages when connecting to the API server:

    $ oc logs -n $NAMESPACE <virt-controller>
Mitigation
  • If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-controller>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.47. VirtControllerRESTErrorsHigh

Meaning

More than 5% of REST calls failed in virt-controller in the last 60 minutes.

This is most likely because virt-controller has partially lost connection to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Node-related actions, such as starting, migrating, and scheduling virtual machines, are delayed. Running workloads are not affected, but reporting their current status might be delayed.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. List the available virt-controller pods:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
  3. Check the virt-controller logs for error messages when connecting to the API server:

    $ oc logs -n $NAMESPACE <virt-controller>
Mitigation
  • If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-controller>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.48. VirtHandlerDaemonSetRolloutFailing

Meaning

The virt-handler daemon set has failed to deploy on one or more worker nodes after 15 minutes.

Impact

This alert is a warning. It does not indicate that all virt-handler daemon sets have failed to deploy. Therefore, the normal lifecycle of virtual machines is not affected unless the cluster is overloaded.

Diagnosis

Identify worker nodes that do not have a running virt-handler pod:

  1. Export the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-handler pods to identify pods that have not deployed:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
  3. Obtain the name of the worker node of the virt-handler pod:

    $ oc -n $NAMESPACE get pod <virt-handler> -o jsonpath='{.spec.nodeName}'
Mitigation

If the virt-handler pods failed to deploy because of insufficient resources, you can delete other pods on the affected worker node.
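
For example, to identify candidate pods on the affected worker node and review their resource usage before deleting any of them, assuming cluster metrics are available:

    $ oc get pods --all-namespaces --field-selector spec.nodeName=<node>

    $ oc adm top pods --all-namespaces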

11.5.49. VirtHandlerRESTErrorsBurst

Meaning

More than 80% of REST calls failed in virt-handler in the last 5 minutes. This alert usually indicates that the virt-handler pods cannot connect to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Status updates are not propagated and node-related actions, such as migrations, fail. However, running workloads on the affected node are not impacted.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-handler pod:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
  3. Check the virt-handler logs for error messages when connecting to the API server:

    $ oc logs -n $NAMESPACE <virt-handler>
Mitigation
  • If the virt-handler cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-handler>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.50. VirtHandlerRESTErrorsHigh

Meaning

More than 5% of REST calls failed in virt-handler in the last 60 minutes. This alert usually indicates that the virt-handler pods have partially lost connection to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Node-related actions, such as starting and migrating workloads, are delayed on the node that virt-handler is running on. Running workloads are not affected, but reporting their current status might be delayed.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. List the available virt-handler pods to identify the failing virt-handler pod:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
  3. Check the failing virt-handler pod log for API server connectivity errors:

    $ oc logs -n $NAMESPACE <virt-handler>

    Example error message:

    {"component":"virt-handler","level":"error","msg":"Can't patch node my-node","pos":"heartbeat.go:96","reason":"the server has received too many API requests and has asked us to try again later","timestamp":"2023-11-06T11:11:41.099883Z","uid":"132c50c2-8d82-4e49-8857-dc737adcd6cc"}
Mitigation

Delete the pod to force a restart:

$ oc delete -n $NAMESPACE <virt-handler>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.51. VirtOperatorDown

Meaning

This alert fires when no virt-operator pod in the Running state has been detected for 10 minutes.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

The virt-operator deployment has a default replica of 2 pods.

Impact

This alert indicates a failure at the level of the cluster. Critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.

The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator deployment:

    $ oc -n $NAMESPACE get deploy virt-operator -o yaml
  3. Obtain the details of the virt-operator deployment:

    $ oc -n $NAMESPACE describe deploy virt-operator
  4. Check the status of the virt-operator pods:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-operator
  5. Check for node issues, such as a NotReady state:

    $ oc get nodes
Mitigation

Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.52. VirtOperatorRESTErrorsBurst

Meaning

This alert fires when more than 80% of the REST calls in the virt-operator pods failed in the last 5 minutes. This usually indicates that the virt-operator pods cannot connect to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Cluster-level actions, such as upgrading and controller reconciliation, might not be available.

However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Check the virt-operator logs for error messages when connecting to the API server:

    $ oc -n $NAMESPACE logs <virt-operator>
  4. Obtain the details of the virt-operator pod:

    $ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
  • If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-operator>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.53. VirtOperatorRESTErrorsHigh

Meaning

This alert fires when more than 5% of the REST calls in virt-operator pods failed in the last 60 minutes. This usually indicates the virt-operator pods cannot connect to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Cluster-level actions, such as upgrading and controller reconciliation, might be delayed.

However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Check the virt-operator logs for error messages when connecting to the API server:

    $ oc -n $NAMESPACE logs <virt-operator>
  4. Obtain the details of the virt-operator pod:

    $ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
  • If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-operator>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

11.5.54. VMCannotBeEvicted

Meaning

This alert fires when the eviction strategy of a virtual machine (VM) is set to LiveMigrate but the VM is not migratable.

Impact

Non-migratable VMs prevent node eviction. This condition affects operations such as node drain and updates.

Diagnosis
  1. Check the VMI configuration to determine whether the value of evictionStrategy is LiveMigrate:

    $ oc get vmis -o yaml
  2. Check for a False status in the LIVE-MIGRATABLE column to identify VMIs that are not migratable:

    $ oc get vmis -o wide
  3. Obtain the details of the VMI and check spec.conditions to identify the issue:

    $ oc get vmi <vmi> -o yaml

    Example output

    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: null
        message: cannot migrate VMI which does not use masquerade to connect to the pod network
        reason: InterfaceNotLiveMigratable
        status: "False"
        type: LiveMigratable

Mitigation

Set the evictionStrategy of the VMI to None or resolve the issue that prevents the VMI from migrating. The None strategy shuts down the VM during node drains and pod evictions.
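
The following is a minimal sketch of the relevant stanza in the VirtualMachine specification:

    spec:
      template:
        spec:
          evictionStrategy: None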

11.5.55. VMStorageClassWarning

Meaning

This alert fires when the storage class is incorrectly configured. A system-wide, shared dummy page causes CRC errors when data is written and read across different processes or threads.

Impact

A large number of CRC errors might cause the cluster to display severe performance degradation.

Diagnosis
  1. Navigate to Observe Metrics in the web console.
  2. Obtain a list of virtual machines with incorrectly configured storage classes by running the following PromQL query:

    kubevirt_ssp_vm_rbd_volume{rxbounce_enabled="false", volume_mode="Block"} == 1

    The output displays a list of virtual machines that use a storage class without the rxbounce option enabled.

    Example output

    kubevirt_ssp_vm_rbd_volume{name="testvmi-gwgdqp22k7", namespace="test_ns", pv_name="testvmi-gwgdqp22k7", rxbounce_enabled="false", volume_mode="Block"} 1

  3. Obtain the storage class name by running the following command:

    $ oc get pv <pv_name> -o=jsonpath='{.spec.storageClassName}'
Mitigation

Create a default OpenShift Virtualization storage class with the krbd:rxbounce map option. See Optimizing ODF PersistentVolumes for Windows VMs for details.
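
The following is a minimal sketch of such a storage class, assuming an ODF Ceph RBD provisioner; copy the clusterID, pool, and secret parameters from your existing RBD storage class:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <storage_class_name>
    provisioner: openshift-storage.rbd.csi.ceph.com
    parameters:
      mounter: rbd
      mapOptions: "krbd:rxbounce" # avoids the shared dummy page CRC errors
      # clusterID, pool, and secret parameters copied from the existing RBD storage class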

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
