Chapter 13. Logging, events, and monitoring


13.1. Reviewing Virtualization Overview

The Virtualization Overview page provides a comprehensive view of virtualization resources, details, status, and top consumers. By gaining insight into the overall health of OpenShift Virtualization, you can determine whether intervention is required to resolve specific issues identified by examining the data.

Use the Getting Started resources to access quick starts, read the latest blogs on virtualization, and learn how to use operators. Obtain complete information about alerts, events, inventory, and status of virtual machines. Customize the Top Consumer cards to obtain data on high utilization of a specific resource by projects, virtual machines, or nodes. Click View virtualization dashboard for quick access to the Dashboards page.

13.1.1. Prerequisites

To use the vCPU wait metric in the Top Consumers card, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. See the OpenShift Container Platform machine configuration tasks documentation for more information on applying a kernel argument.
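A minimal sketch of a MachineConfig object that applies this kernel argument follows; the object name 99-worker-schedstats and the worker role label are illustrative assumptions, so adjust them to match the machine config pool that runs your virtual machines.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker # illustrative: target the worker pool
  name: 99-worker-schedstats # illustrative name
spec:
  kernelArguments:
    - schedstats=enable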

13.1.2. Resources monitored actively in the Virtualization Overview page

The following table shows actively monitored resources, metrics, and fields in the Virtualization Overview page. This information is useful when you need to obtain relevant data and intervene to resolve a problem.

Monitored resources, fields, and metrics

Description

Details

A brief overview of service and version information for OpenShift Virtualization.

Status

Alerts for virtualization and networking.

Activity

Ongoing events for virtual machines. Messages are related to recent activity in the cluster, such as pod creation or virtual machine migration to another host.

Running VMs by Template

The donut chart displays a unique color for each virtual machine template and shows the number of running virtual machines that use each template.

Inventory

Total number of active virtual machines, templates, nodes, and networks.

Status of VMs

Current status of virtual machines: running, provisioning, starting, migrating, paused, stopping, terminating, and unknown.

Permissions

Tasks for which capabilities are enabled through permissions: Access to public templates, Access to public boot sources, Clone a VM, Attach VM to multiple networks, Upload a base image from local disk, and Share templates.

13.1.3. Resources monitored for top consumption

The Top Consumers cards in the Virtualization Overview page display the projects, virtual machines, or nodes with the highest consumption of a resource. You can select a project, a virtual machine, or a node and view the top five or top ten consumers of a specific resource.

Note

Viewing the maximum resource consumption is limited to the top five or top ten consumers within each Top Consumers card.

The following table shows resources monitored for top consumers.

Resources monitored for top consumption

Description

CPU

Projects, virtual machines, or nodes consuming the most CPU.

Memory

Projects, virtual machines, or nodes consuming the most memory (in bytes). The unit of display (for example, MiB or GiB) is determined by the size of the resource consumption.

Used filesystem

Projects, virtual machines, or nodes with the highest consumption of filesystems (in bytes). The unit of display (for example, MiB or GiB) is determined by the size of the resource consumption.

Memory swap

Projects, virtual machines, or nodes experiencing the most memory pressure when memory is swapped.

vCPU wait

Projects, virtual machines, or nodes experiencing the maximum wait time (in seconds) for the vCPUs.

Storage throughput

Projects, virtual machines, or nodes with the highest data transfer rate to and from the storage media (in mbps).

Storage IOPS

Projects, virtual machines, or nodes with the highest amount of storage IOPS (input/output operations per second) over a time period.

13.1.4. Reviewing top consumers for projects, virtual machines, and nodes

You can view the top consumers of resources for a selected project, virtual machine, or node in the Virtualization Overview page.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. In the Administrator perspective in the OpenShift Virtualization web console, navigate to Virtualization > Overview.
  2. Navigate to the Top Consumers cards.
  3. From the drop-down menu, select Show top 5 or Show top 10.
  4. For a Top Consumer card, select the type of resource from the drop-down menu: CPU, Memory, Used Filesystem, Memory Swap, vCPU Wait, or Storage Throughput.
  5. Select By Project, By VM, or By Node. A list of the top five or top ten consumers of the selected resource is displayed.

13.1.5. Additional resources

13.2. Viewing virtual machine logs

13.2.1. About virtual machine logs

Logs are collected for OpenShift Container Platform builds, deployments, and pods. In OpenShift Virtualization, virtual machine logs can be retrieved from the virtual machine launcher pod in either the web console or the CLI.

The -f option follows the log output in real time, which is useful for monitoring progress and error checking.

If the launcher pod is failing to start, use the --previous option to see the logs of the last attempt.
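For example, the following commands are a minimal sketch of both options, assuming a launcher pod named <virt-launcher-name>:

$ oc logs -f <virt-launcher-name>
$ oc logs --previous <virt-launcher-name>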

Warning

ErrImagePull and ImagePullBackOff errors can be caused by an incorrect deployment configuration or problems with the images that are referenced.

13.2.2. Viewing virtual machine logs in the CLI

Get virtual machine logs from the virtual machine launcher pod.

Procedure

  • Use the following command:

    $ oc logs <virt-launcher-name>
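    If you do not know the launcher pod name, you can list the launcher pods in the virtual machine's namespace; this sketch assumes the pods carry the standard kubevirt.io=virt-launcher label:

    $ oc get pods -n <namespace> -l kubevirt.io=virt-launcher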

13.2.3. Viewing virtual machine logs in the web console

Get virtual machine logs from the associated virtual machine launcher pod.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization > VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the virt-launcher-<name> pod in the Pod section to open the Pod details page.
  5. Click the Logs tab to view the pod logs.

13.3. Viewing events

13.3.1. About virtual machine events

OpenShift Container Platform events are records of important life-cycle information in a namespace and are useful for monitoring and troubleshooting resource scheduling, creation, and deletion issues.

OpenShift Virtualization adds events for virtual machines and virtual machine instances. These can be viewed from either the web console or the CLI.

See also: Viewing system event information in an OpenShift Container Platform cluster.

13.3.2. Viewing the events for a virtual machine in the web console

You can view streaming events for a running virtual machine on the VirtualMachine details page of the web console.

Procedure

  1. Click Virtualization > VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Events tab to view streaming events for the virtual machine.

    • The ▮▮ button pauses the events stream.
    • The ▶ button resumes a paused events stream.

13.3.3. Viewing namespace events in the CLI

Use the OpenShift Container Platform client to get the events for a namespace.

Procedure

  • In the namespace, use the oc get command:

    $ oc get events
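    To narrow the output, you can filter and sort events with standard field selectors; the following is a sketch that lists only Warning events, sorted by timestamp:

    $ oc get events -n <namespace> --field-selector type=Warning --sort-by='.lastTimestamp'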

13.3.4. Viewing resource events in the CLI

Events are included in the resource description, which you can get using the OpenShift Container Platform client.

Procedure

  • In the namespace, use the oc describe command. The following example shows how to get the events for a virtual machine, a virtual machine instance, and the virt-launcher pod for a virtual machine:

    $ oc describe vm <vm>
    $ oc describe vmi <vmi>
    $ oc describe pod virt-launcher-<name>

13.4. Diagnosing data volumes using events and conditions

Use the oc describe command to analyze and help resolve issues with data volumes.

13.4.1. About conditions and events

Diagnose data volume issues by examining the output of the Conditions and Events sections generated by the command:

$ oc describe dv <DataVolume>

There are three Types in the Conditions section that display:

  • Bound
  • Running
  • Ready

The Events section provides the following additional information:

  • Type of event
  • Reason for logging
  • Source of the event
  • Message containing additional diagnostic information.

The output from oc describe does not always contain Events.

An event is generated when either Status, Reason, or Message changes. Both conditions and events react to changes in the state of the data volume.

For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well.

13.4.2. Analyzing data volumes using conditions and events

By inspecting the Conditions and Events sections generated by the describe command, you can determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or not an operation is actively running or completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state.

There are many different combinations of conditions. Each must be evaluated in its unique context.

Examples of various combinations follow.

  • Bound – A successfully bound PVC displays in this example.

    Note that the Type is Bound, so the Status is True. If the PVC is not bound, the Status is False.

    When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True. The Message indicates which PVC owns the data volume.

    Message, in the Events section, provides further details including how long the PVC has been bound (Age) and by what resource (From), in this case datavolume-controller:

    Example output

    Status:
    	Conditions:
    		Last Heart Beat Time:  2020-07-15T03:58:24Z
    		Last Transition Time:  2020-07-15T03:58:24Z
    		Message:               PVC win10-rootdisk Bound
    		Reason:                Bound
    		Status:                True
    		Type:                  Bound
    
    	Events:
    		Type     Reason     Age    From                   Message
    		----     ------     ----   ----                   -------
    		Normal   Bound      24s    datavolume-controller  PVC example-dv Bound

  • Running – In this case, note that Type is Running and Status is False, indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False.

    However, note that Reason is Completed and the Message field indicates Import Complete.

    In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404, listed in the Events section’s first Warning.

    From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume:

    Example output

    Status:
    	 Conditions:
    		 Last Heart Beat Time:  2020-07-15T04:31:39Z
    		 Last Transition Time:  2020-07-15T04:31:39Z
    		 Message:               Import Complete
    		 Reason:                Completed
    		 Status:                False
    		 Type:                  Running
    
    	Events:
    		Type     Reason           Age                From                   Message
    		----     ------           ----               ----                   -------
    		Warning  Error            12s (x2 over 14s)  datavolume-controller  Unable to connect
    		to http data source: expected status code 200, got 404. Status: 404 Not Found

  • Ready – If Type is Ready and Status is True, then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False:

    Example output

    Status:
    	 Conditions:
    		 Last Heart Beat Time: 2020-07-15T04:31:39Z
    		 Last Transition Time:  2020-07-15T04:31:39Z
    		 Status:                True
    		 Type:                  Ready

13.5. Viewing information about virtual machine workloads

You can view high-level information about your virtual machines by using the Virtual Machines dashboard in the OpenShift Container Platform web console.

13.5.1. About the Virtual Machines dashboard

Access virtual machines (VMs) from the OpenShift Container Platform web console by navigating to the Virtualization > VirtualMachines page and clicking a VM to view the VirtualMachine details page.

The Overview tab displays the following cards:

  • Details provides identifying information about the virtual machine, including:

    • Name
    • Namespace
    • Date of creation
    • Node name
    • IP address
  • Inventory lists the virtual machine’s resources, including:

    • Network interface controllers (NICs)
    • Disks
  • Status includes:

    • The current status of the virtual machine
    • A note indicating whether or not the QEMU guest agent is installed on the virtual machine
  • Utilization includes charts that display usage data for:

    • CPU
    • Memory
    • Filesystem
    • Network transfer
Note

Use the drop-down list to choose a duration for the utilization data. The available options are 1 Hour, 6 Hours, and 24 Hours.

  • Events lists messages about virtual machine activity over the past hour. To view additional events, click View all.

13.6. Monitoring virtual machine health

A virtual machine instance (VMI) can become unhealthy due to transient issues such as connectivity loss, deadlocks, or problems with external dependencies. A health check periodically performs diagnostics on a VMI by using any combination of the readiness and liveness probes.

13.6.1. About readiness and liveness probes

Use readiness and liveness probes to detect and handle unhealthy virtual machine instances (VMIs). You can include one or more probes in the specification of the VMI to ensure that traffic does not reach a VMI that is not ready for it and that a new instance is created when a VMI becomes unresponsive.

A readiness probe determines whether a VMI is ready to accept service requests. If the probe fails, the VMI is removed from the list of available endpoints until the VMI is ready.

A liveness probe determines whether a VMI is responsive. If the probe fails, the VMI is deleted and a new instance is created to restore responsiveness.

You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachineInstance object. These fields support the following tests:

HTTP GET
The probe determines the health of the VMI by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
TCP socket
The probe attempts to open a socket to the VMI. The VMI is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.

13.6.2. Defining an HTTP readiness probe

Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine instance (VMI) configuration.

Procedure

  1. Include details of the readiness probe in the VMI configuration file.

    Sample readiness probe with an HTTP GET test

    # ...
    spec:
      readinessProbe:
        httpGet: 1
          port: 1500 2
          path: /healthz 3
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        initialDelaySeconds: 120 4
        periodSeconds: 20 5
        timeoutSeconds: 10 6
        failureThreshold: 3 7
        successThreshold: 3 8
    # ...

    1
    The HTTP GET request to perform to connect to the VMI.
    2
    The port of the VMI that the probe queries. In the above example, the probe queries port 1500.
    3
    The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is removed from the list of available endpoints.
    4
    The time, in seconds, after the VMI starts before the readiness probe is initiated.
    5
    The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    6
    The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
    7
    The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
    8
    The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
  2. Create the VMI by running the following command:

    $ oc create -f <file_name>.yaml

13.6.3. Defining a TCP readiness probe

Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine instance (VMI) configuration.

Procedure

  1. Include details of the TCP readiness probe in the VMI configuration file.

    Sample readiness probe with a TCP socket test

    ...
    spec:
      readinessProbe:
        initialDelaySeconds: 120 1
        periodSeconds: 20 2
        tcpSocket: 3
          port: 1500 4
        timeoutSeconds: 10 5
    ...

    1
    The time, in seconds, after the VMI starts before the readiness probe is initiated.
    2
    The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    3
    The TCP action to perform.
    4
    The port of the VMI that the probe queries.
    5
    The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
  2. Create the VMI by running the following command:

    $ oc create -f <file_name>.yaml

13.6.4. Defining an HTTP liveness probe

Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine instance (VMI) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.

Procedure

  1. Include details of the HTTP liveness probe in the VMI configuration file.

    Sample liveness probe with an HTTP GET test

    # ...
    spec:
      livenessProbe:
        initialDelaySeconds: 120 1
        periodSeconds: 20 2
        httpGet: 3
          port: 1500 4
          path: /healthz 5
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        timeoutSeconds: 10 6
    # ...

    1
    The time, in seconds, after the VMI starts before the liveness probe is initiated.
    2
    The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    3
    The HTTP GET request to perform to connect to the VMI.
    4
    The port of the VMI that the probe queries. In the above example, the probe queries port 1500. The VMI installs and runs a minimal HTTP server on port 1500 via cloud-init.
    5
    The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is deleted and a new instance is created.
    6
    The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
  2. Create the VMI by running the following command:

    $ oc create -f <file_name>.yaml

13.6.5. Template: Virtual machine configuration file for defining health checks

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    special: vm-fedora
  name: vm-fedora
spec:
  template:
    metadata:
      labels:
        special: vm-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      readinessProbe:
        httpGet:
          port: 1500
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
      terminationGracePeriodSeconds: 180
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-registry-disk-demo
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            bootcmd:
              - setenforce 0
              - dnf install -y nmap-ncat
              - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!'
        name: cloudinitdisk

13.6.6. Additional resources

13.7. Using the OpenShift Container Platform dashboard to get cluster information

Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by clicking Home > Dashboards > Overview from the OpenShift Container Platform web console.

The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards.

13.7.1. About the OpenShift Container Platform dashboards page

The OpenShift Container Platform dashboard consists of the following cards:

  • Details provides a brief overview of informational cluster details.

    Statuses include ok, error, warning, in progress, and unknown. Resources can add custom status names.

    • Cluster ID
    • Provider
    • Version
  • Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems and includes information about:

    • Number of nodes
    • Number of pods
    • Persistent storage volume claims
    • Virtual machines (available if OpenShift Virtualization is installed)
    • Bare metal hosts in the cluster, listed according to their state (only available in a metal3 environment).
  • Cluster Health summarizes the current health of the cluster as a whole, including relevant alerts and descriptions. If OpenShift Virtualization is installed, the overall health of OpenShift Virtualization is diagnosed as well. If more than one subsystem is present, click See All to view the status of each subsystem.
  • Cluster Capacity charts help administrators understand when additional resources are required in the cluster. The charts contain an inner ring that displays current consumption, while an outer ring displays thresholds configured for the resource, including information about:

    • CPU time
    • Memory allocation
    • Storage consumed
    • Network resources consumed
  • Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption.
  • Events lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host.
  • Top Consumers helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage).

13.8. Reviewing resource usage by virtual machines

Dashboards in the OpenShift Container Platform web console provide visual representations of cluster metrics to help you quickly understand the state of your cluster. These dashboards are part of the monitoring overview that provides monitoring for core platform components.

The OpenShift Virtualization dashboard provides data on resource consumption for virtual machines and associated pods. The visualization metrics displayed in the OpenShift Virtualization dashboard are based on Prometheus Query Language (PromQL) queries.

A monitoring role is required to monitor user-defined namespaces in the OpenShift Virtualization dashboard.

13.8.1. About reviewing top consumers

In the OpenShift Virtualization dashboard, you can select a specific time period and view the top consumers of resources within that time period. Top consumers are virtual machines or virt-launcher pods that are consuming the highest amount of resources.

The following table shows resources monitored in the dashboard and describes the metrics associated with each resource for top consumers.

Monitored resources

Description

Memory swap traffic

Virtual machines consuming the most memory pressure when swapping memory.

vCPU wait

Virtual machines experiencing the maximum wait time (in seconds) for their vCPUs.

CPU usage by pod

The virt-launcher pods that are using the most CPU.

Network traffic

Virtual machines that are saturating the network by receiving the most amount of network traffic (in bytes).

Storage traffic

Virtual machines with the highest amount (in bytes) of storage-related traffic.

Storage IOPS

Virtual machines with the highest amount of I/O operations per second over a time period.

Memory usage

The virt-launcher pods that are using the most memory (in bytes).

Note

Viewing the maximum resource consumption is limited to the top five consumers.

13.8.2. Reviewing top consumers

In the Administrator perspective, you can view the OpenShift Virtualization dashboard where top consumers of resources are displayed.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. In the Administrator perspective in the OpenShift Virtualization web console, navigate to Observe > Dashboards.
  2. Select the KubeVirt/Infrastructure Resources/Top Consumers dashboard from the Dashboard list.
  3. Select a predefined time period from the drop-down menu for Period. You can review the data for top consumers in the tables.
  4. Optional: Click Inspect to view or edit the Prometheus Query Language (PromQL) query associated with the top consumers for a table.

13.8.3. Additional resources

13.9. OpenShift Container Platform cluster monitoring, logging, and Telemetry

OpenShift Container Platform provides various resources for monitoring at the cluster level.

13.9.1. About OpenShift Container Platform monitoring

OpenShift Container Platform includes a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components. OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts are included by default that immediately notify cluster administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster.

After installing OpenShift Container Platform 4.10, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. You can then query metrics, review dashboards, and manage alerting rules and silences for your own projects in the OpenShift Container Platform web console.

Note

Cluster administrators can grant developers and other users permission to monitor their own projects. Privileges are granted by assigning one of the predefined monitoring roles.
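For example, a cluster administrator can grant a user the monitoring-edit role for a single namespace with a command similar to the following sketch; the user and namespace values are placeholders:

$ oc policy add-role-to-user monitoring-edit <user> -n <namespace>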

13.9.2. About logging subsystem components

The logging subsystem components include a collector deployed to each node in the OpenShift Container Platform cluster that collects all node and container logs and writes them to a log store. You can use a centralized web UI to create rich visualizations and dashboards with the aggregated data.

The major components of the logging subsystem are:

  • collection - This is the component that collects logs from the cluster, formats them, and forwards them to the log store. The current implementation is Fluentd.
  • log store - This is where the logs are stored. The default implementation is Elasticsearch. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage.
  • visualization - This is the UI component you can use to view logs, graphs, charts, and so forth. The current implementation is Kibana.

For more information on OpenShift Logging, see the OpenShift Logging documentation.

13.9.3. About Telemetry

Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document.

This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience.

This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use.

13.9.3.1. Information collected by Telemetry

The following information is collected by Telemetry:

13.9.3.1.1. System information
  • Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability
  • Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update
  • The unique random identifier that is generated during an installation
  • Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services
  • The OpenShift Container Platform framework components installed in a cluster and their condition and status
  • Events for all namespaces listed as "related objects" for a degraded Operator
  • Information about degraded software
  • Information about the validity of certificates
  • The name of the provider platform that OpenShift Container Platform is deployed on and the data center location
13.9.3.1.2. Sizing information
  • Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each
  • The number of running virtual machine instances in a cluster
  • The number of etcd members and the number of objects stored in the etcd cluster
  • Number of application builds by build strategy type
13.9.3.1.3. Usage information
  • Usage information about components, features, and extensions
  • Usage details about Technology Previews and unsupported configurations

Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat’s privacy practices.

13.9.4. CLI troubleshooting and debugging commands

For a list of the oc client troubleshooting and debugging commands, see the OpenShift Container Platform CLI tools documentation.

13.10. Prometheus queries for virtual resources

OpenShift Virtualization provides metrics for monitoring how infrastructure resources are consumed in the cluster. The metrics cover the following resources:

  • vCPU
  • Network
  • Storage
  • Guest memory swapping

Use the OpenShift Container Platform monitoring dashboard to query virtualization metrics.

13.10.1. Prerequisites

  • To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. See the OpenShift Container Platform machine configuration tasks documentation for more information on applying a kernel argument.
  • For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests, as shown in the sketch after this list.
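    As a sketch of the second prerequisite, you can enable a temporary swap file inside a Linux guest with standard commands such as the following; the 1 GiB size and the /swapfile path are arbitrary examples:

    $ sudo fallocate -l 1G /swapfile   # reserve space for the swap file
    $ sudo chmod 600 /swapfile         # restrict permissions as required by swapon
    $ sudo mkswap /swapfile            # format the file as swap space
    $ sudo swapon /swapfile            # enable swapping on the file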

13.10.2. Querying metrics

The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.

As a cluster administrator, you can query metrics for all core OpenShift Container Platform and user-defined projects.

As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.

13.10.2.1. Querying metrics for all projects as a cluster administrator

As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI.

Note

Only cluster administrators have access to the third-party UIs provided with OpenShift Container Platform Monitoring.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. In the Administrator perspective within the OpenShift Container Platform web console, select Observe > Metrics.
  2. Select Insert Metric at Cursor to view a list of predefined queries.
  3. To create a custom query, add your Prometheus Query Language (PromQL) query to the Expression field.
  4. To add multiple queries, select Add Query.
  5. To delete a query, click the kebab menu next to the query, then choose Delete query.
  6. To disable a query from being run, click the kebab menu next to the query and choose Disable query.
  7. Select Run Queries to run the queries that you have created. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.

    Note

    Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.

  8. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL.

Additional resources

13.10.2.2. Querying metrics for user-defined projects as a developer

You can access metrics for a user-defined project as a developer or as a user with view permissions for the project.

In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet, and application metrics for the project.

Note

Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Developers cannot access the third-party UIs provided with OpenShift Container Platform monitoring that are for core platform components. Instead, use the Metrics UI for your user-defined project.

Prerequisites

  • You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
  • You have enabled monitoring for user-defined projects.
  • You have deployed a service in a user-defined project.
  • You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.

Procedure

  1. From the Developer perspective in the OpenShift Container Platform web console, select Observe > Metrics.
  2. Select the project that you want to view metrics for in the Project: list.
  3. Choose a query from the Select Query list, or run a custom PromQL query by selecting Show PromQL.

    Note

    In the Developer perspective, you can only run one query at a time.

Additional resources

13.10.3. Virtualization metrics

The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions.

Note

The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output.

13.10.3.1. vCPU metrics

The following query can identify virtual machines that are waiting for Input/Output (I/O):

kubevirt_vmi_vcpu_wait_seconds
Returns the wait time (in seconds) for a virtual machine’s vCPU.

A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.

Note

To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler.

Example vCPU wait time query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1

1
This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.

13.10.3.2. Network metrics

The following queries can identify virtual machines that are saturating the network:

kubevirt_vmi_network_receive_bytes_total
Returns the total amount of traffic received (in bytes) on the virtual machine’s network.
kubevirt_vmi_network_transmit_bytes_total
Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network.

Example network traffic query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1

1
This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.

13.10.3.3. Storage metrics

13.10.3.3.1. Storage-related traffic

The following queries can identify VMs that are writing large amounts of data:

kubevirt_vmi_storage_read_traffic_bytes_total
Returns the total amount of storage reads (in bytes) of the virtual machine’s storage-related traffic.
kubevirt_vmi_storage_write_traffic_bytes_total
Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic.

Example storage-related traffic query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1

1
This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
13.10.3.3.2. I/O performance

The following queries can determine the I/O performance of storage devices:

kubevirt_vmi_storage_iops_read_total
Returns the amount of read I/O operations the virtual machine is performing per second.
kubevirt_vmi_storage_iops_write_total
Returns the amount of write I/O operations the virtual machine is performing per second.

Example I/O performance query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1

1
This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.

13.10.3.4. Guest memory swapping metrics

The following queries can identify which swap-enabled guests are performing the most memory swapping:

kubevirt_vmi_memory_swap_in_traffic_bytes_total
Returns the total amount (in bytes) of memory the virtual guest is swapping in.
kubevirt_vmi_memory_swap_out_traffic_bytes_total
Returns the total amount (in bytes) of memory the virtual guest is swapping out.

Example memory swapping query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1

1
This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Note

Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.

13.10.4. Additional resources

13.11. Exposing custom metrics for virtual machines

OpenShift Container Platform includes a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.

In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.
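As a minimal sketch, enabling monitoring for user-defined projects amounts to setting the enableUserWorkload flag in the cluster monitoring ConfigMap, as described in the OpenShift Container Platform monitoring documentation:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true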

13.11.1. Configuring the node exporter service

The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.

Prerequisites

  • Install the OpenShift Container Platform CLI (oc).
  • Log in to the cluster as a user with cluster-admin privileges.
  • Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
  • Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.

Procedure

  1. Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.

    kind: Service
    apiVersion: v1
    metadata:
      name: node-exporter-service 1
      namespace: dynamation 2
      labels:
        servicetype: metrics 3
    spec:
      ports:
        - name: exmet 4
          protocol: TCP
          port: 9100 5
          targetPort: 9100 6
      type: ClusterIP
      selector:
        monitor: metrics 7
    1
    The node-exporter service that exposes the metrics from the virtual machines.
    2
    The namespace where the service is created.
    3
    The label for the service. The ServiceMonitor uses this label to match this service.
    4
    The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
    5
    The port of the ClusterIP service on which the node-exporter service listens for requests.
    6
    The TCP port number of the virtual machine that is configured with the monitor label.
    7
    The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics will be matched.
  2. Create the node-exporter service:

    $ oc create -f node-exporter-service.yaml

13.11.2. Configuring a virtual machine with the node exporter service

Download the node-exporter file onto the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.

Prerequisites

  • The pods for the component are running in the openshift-user-workload-monitoring project.
  • Grant the monitoring-edit role to users who need to monitor this user-defined project.

Procedure

  1. Log on to the virtual machine.
  2. Download the node-exporter file onto the virtual machine by using the directory path that applies to the version of the node-exporter file.

    $ wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
  3. Extract the executable and place it in the /usr/bin directory.

    $ sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \
        --directory /usr/bin --strip 1 "*/node_exporter"
  4. Create a node_exporter.service file in this directory path: /etc/systemd/system. This systemd service file runs the node-exporter service when the virtual machine reboots.

    [Unit]
    Description=Prometheus Metrics Exporter
    After=network.target
    StartLimitIntervalSec=0
    
    [Service]
    Type=simple
    Restart=always
    RestartSec=1
    User=root
    ExecStart=/usr/bin/node_exporter
    
    [Install]
    WantedBy=multi-user.target
  5. Enable and start the systemd service.

    $ sudo systemctl enable node_exporter.service
    $ sudo systemctl start node_exporter.service

Verification

  • Verify that the node-exporter agent is reporting metrics from the virtual machine.

    $ curl http://localhost:9100/metrics

    Example output

    go_gc_duration_seconds{quantile="0"} 1.5244e-05
    go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
    go_gc_duration_seconds{quantile="0.5"} 3.7913e-05

13.11.3. Creating a custom monitoring label for virtual machines

To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.

Prerequisites

  • Install the OpenShift Container Platform CLI (oc).
  • Log in as a user with cluster-admin privileges.
  • You have access to the web console to stop and restart a virtual machine.

Procedure

  1. Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.

    spec:
      template:
        metadata:
          labels:
            monitor: metrics
  2. Stop and restart the virtual machine to create a new pod with the label name given to the monitor label, as shown in the sketch below.
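    A sketch of the stop and restart step, assuming the virtctl client is installed and the virtual machine is named vm-fedora (an illustrative name); you can also perform the same actions from the web console:

    $ virtctl stop vm-fedora -n <namespace>
    $ virtctl start vm-fedora -n <namespace>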

13.11.3.1. Querying the node-exporter service for metrics

Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.

Procedure

  1. Obtain the HTTP service endpoint by specifying the namespace for the service:

    $ oc get service -n <namespace> <node-exporter-service>
  2. To list all available metrics for the node-exporter service, query the metrics resource.

    $ curl http://172.30.226.162:9100/metrics | grep -vE "^#|^$"

    Example output

    node_arp_entries{device="eth0"} 1
    node_boot_time_seconds 1.643153218e+09
    node_context_switches_total 4.4938158e+07
    node_cooling_device_cur_state{name="0",type="Processor"} 0
    node_cooling_device_max_state{name="0",type="Processor"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
    node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
    node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
    node_cpu_seconds_total{cpu="0",mode="irq"} 233.91
    node_cpu_seconds_total{cpu="0",mode="nice"} 551.47
    node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3
    node_cpu_seconds_total{cpu="0",mode="steal"} 86.12
    node_cpu_seconds_total{cpu="0",mode="system"} 464.15
    node_cpu_seconds_total{cpu="0",mode="user"} 1075.2
    node_disk_discard_time_seconds_total{device="vda"} 0
    node_disk_discard_time_seconds_total{device="vdb"} 0
    node_disk_discarded_sectors_total{device="vda"} 0
    node_disk_discarded_sectors_total{device="vdb"} 0
    node_disk_discards_completed_total{device="vda"} 0
    node_disk_discards_completed_total{device="vdb"} 0
    node_disk_discards_merged_total{device="vda"} 0
    node_disk_discards_merged_total{device="vdb"} 0
    node_disk_info{device="vda",major="252",minor="0"} 1
    node_disk_info{device="vdb",major="252",minor="16"} 1
    node_disk_io_now{device="vda"} 0
    node_disk_io_now{device="vdb"} 0
    node_disk_io_time_seconds_total{device="vda"} 174
    node_disk_io_time_seconds_total{device="vdb"} 0.054
    node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003
    node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039
    node_disk_read_bytes_total{device="vda"} 3.71867136e+08
    node_disk_read_bytes_total{device="vdb"} 366592
    node_disk_read_time_seconds_total{device="vda"} 19.128
    node_disk_read_time_seconds_total{device="vdb"} 0.039
    node_disk_reads_completed_total{device="vda"} 5619
    node_disk_reads_completed_total{device="vdb"} 96
    node_disk_reads_merged_total{device="vda"} 5
    node_disk_reads_merged_total{device="vdb"} 0
    node_disk_write_time_seconds_total{device="vda"} 240.66400000000002
    node_disk_write_time_seconds_total{device="vdb"} 0
    node_disk_writes_completed_total{device="vda"} 71584
    node_disk_writes_completed_total{device="vdb"} 0
    node_disk_writes_merged_total{device="vda"} 19761
    node_disk_writes_merged_total{device="vdb"} 0
    node_disk_written_bytes_total{device="vda"} 2.007924224e+09
    node_disk_written_bytes_total{device="vdb"} 0

13.11.4. Creating a ServiceMonitor resource for the node exporter service

You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.

Procedure

  1. Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds.

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        k8s-app: node-exporter-metrics-monitor
      name: node-exporter-metrics-monitor 1
      namespace: dynamation 2
    spec:
      endpoints:
      - interval: 30s 3
        port: exmet 4
        scheme: http
      selector:
        matchLabels:
          servicetype: metrics
    1
    The name of the ServiceMonitor.
    2
    The namespace where the ServiceMonitor is created.
    3
    The interval at which the port will be queried.
    4
    The name of the port that is queried every 30 seconds.
  2. Create the ServiceMonitor configuration for the node-exporter service.

    $ oc create -f node-exporter-metrics-monitor.yaml

13.11.4.1. Accessing the node exporter service outside the cluster

You can access the node-exporter service outside the cluster and view the exposed metrics.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.

Procedure

  1. Expose the node-exporter service.

    $ oc expose service -n <namespace> <node_exporter_service_name>
  2. Obtain the FQDN (Fully Qualified Domain Name) for the route.

    $ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host

    Example output

    NAME                    DNS
    node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org

  3. Use the curl command to display metrics for the node-exporter service.

    $ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics

    Example output

    go_gc_duration_seconds{quantile="0"} 1.5382e-05
    go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
    go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
    go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
    go_gc_duration_seconds{quantile="1"} 0.000189423

13.11.5. Additional resources

13.12. OpenShift Virtualization critical alerts

Important

OpenShift Virtualization critical alerts is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenShift Virtualization has alerts that inform you when a problem occurs. Critical alerts require immediate attention.

Each alert has a corresponding description of the problem, a reason for why the alert is occurring, a troubleshooting process to diagnose the source of the problem, and steps for resolving the alert.

13.12.1. Network alerts

Network alerts provide information about problems for the OpenShift Virtualization Network Operator.

13.12.1.1. KubeMacPoolDown alert

Description

The KubeMacPool component allocates MAC addresses and prevents MAC address conflicts.

Reason

If the KubeMacPool-manager pod is down, then the creation of VirtualMachine objects fails.

Troubleshoot

  1. Determine the Kubemacpool-manager pod namespace and name.

    $ export KMP_NAMESPACE="$(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print $1}')"
    $ export KMP_NAME="$(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print $2}')"
  2. Check the Kubemacpool-manager pod description and logs to determine the source of the problem.

    $ oc describe pod -n $KMP_NAMESPACE $KMP_NAME
    $ oc logs -n $KMP_NAMESPACE $KMP_NAME

Resolution

Open a support issue and provide the information gathered in the troubleshooting process.

13.12.2. SSP alerts

SSP alerts provide information about problems for the OpenShift Virtualization SSP Operator.

13.12.2.1. SSPFailingToReconcile alert

Description

The SSP Operator’s pod is up, but the pod’s reconcile cycle consistently fails. This failure includes failure to update the resources for which it is responsible, failure to deploy the template validator, or failure to deploy or update the common templates.

Reason

If the SSP Operator fails to reconcile, then the deployment of dependent components fails, reconciliation of component changes fails, or both. Additionally, the updates to the common templates and template validator reset and fail.

Troubleshoot

  1. Check the ssp-operator pod’s logs for errors:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | awk '{print $1}')"
    $ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
    $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
  2. Verify that the template validator is up. If the template validator is not up, then check the pod’s logs for errors.

    $ export NAMESPACE="$($ oc get deployment -A | grep ssp-operator | awk '{print $1}')"
    $ oc -n $NAMESPACE get pods -l name=virt-template-validator
    $ oc -n $NAMESPACE describe pods -l name=virt-template-validator
    $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator

Resolution

Open a support issue and provide the information gathered in the troubleshooting process.

13.12.2.2. SSPOperatorDown alert

Description

The SSP Operator deploys and reconciles the common templates and the template validator.

Reason

If the SSP Operator is down, then the deployment of dependent components fails, reconciliation of component changes fails, or both. Additionally, the updates to the common template and template validator reset and fail.

Troubleshoot

  1. Check ssp-operator’s pod namespace:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | awk '{print $1}')"
  2. Verify that the ssp-operator’s pod is currently down.

    $ oc -n $NAMESPACE get pods -l control-plane=ssp-operator
  3. Check the ssp-operator’s pod description and logs.

    $ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
    $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator

Resolution

Open a support issue and provide the information gathered in the troubleshooting process.

13.12.2.3. SSPTemplateValidatorDown alert

Description

The template validator validates that virtual machines (VMs) do not violate their assigned templates.

Reason

If every template validator pod is down, then the template validator fails to validate VMs against their assigned templates.

Troubleshoot

  1. Check the namespaces of the ssp-operator pods and the virt-template-validator pods.

    $ export NAMESPACE_SSP="$(oc get deployment -A | grep ssp-operator | awk '{print $1}')"
    $ export NAMESPACE="$(oc get deployment -A | grep virt-template-validator | awk '{print $1}')"
  2. Verify that the virt-template-validator’s pod is currently down.

    $ oc -n $NAMESPACE get pods -l name=virt-template-validator
  3. Check the pod description and logs of the ssp-operator and the virt-template-validator.

    $ oc -n $NAMESPACE_SSP describe pods -l control-plane=ssp-operator
    $ oc -n $NAMESPACE_SSP logs --tail=-1 -l control-plane=ssp-operator
    $ oc -n $NAMESPACE describe pods -l name=virt-template-validator
    $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator

Resolution

Open a support issue and provide the information gathered in the troubleshooting process.

13.12.3. Virt alerts

Virt alerts provide information about problems for the OpenShift Virtualization Virt Operator.

13.12.3.1. NoLeadingVirtOperator alert

Description

In the past 10 minutes, no virt-operator pod holds the leader lease, despite one or more virt-operator pods being in Ready state. The alert suggests no operating virt-operator pod exists.

Reason

The virt-operator is the first Kubernetes Operator active in an OpenShift Container Platform cluster. Its primary responsibilities are:

  • Installation
  • Live-update
  • Live-upgrade of a cluster
  • Monitoring the lifecycle of top-level controllers such as virt-controller, virt-handler, and virt-launcher
  • Managing the reconciliation of top-level controllers

In addition, the virt-operator is responsible for cluster-wide tasks such as certificate rotation and some infrastructure management.

The virt-operator deployment has a default replica of two pods with one leader pod holding a leader lease, indicating an operating virt-operator pod.

This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities such as certificate rotation, upgrade, and reconciliation of controllers may be temporarily unavailable.

Troubleshoot

Determine a virt-operator pod’s leader status from the pod logs. The log messages containing Started leading and acquire leader indicate the leader status of a given virt-operator pod.

Additionally, always check if there are any running virt-operator pods and the pods' statuses with these commands:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
$ oc -n $NAMESPACE logs <pod-name>
$ oc -n $NAMESPACE describe pod <pod-name>

Leader pod example:

$ oc -n $NAMESPACE logs <pod-name> | grep lead

Example output

{"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:18.635387Z"}
I1130 12:15:18.635452       1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator...
I1130 12:15:19.216582       1 leaderelection.go:253] successfully acquired lease <namespace>/virt-operator

{"component":"virt-operator","level":"info","msg":"Started leading","pos":"application.go:385","timestamp":"2021-11-30T12:15:19.216836Z"}

Non-leader pod example:

$ oc -n $NAMESPACE logs <pod-name> | grep lead

Example output

{"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:20.533696Z"}
I1130 12:15:20.533792       1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator...

Resolution

There are several reasons for no virt-operator pod holding the leader lease, despite one or more virt-operator pods being in Ready state. Identify the root cause and take appropriate action.

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.3.2. NoReadyVirtController alert

Description

The virt-controller monitors virtual machine instances (VMIs). The virt-controller also manages the associated pods by creating and managing the lifecycle of the pods associated with the VMI objects.

A VMI object is always associated with a pod during its lifetime. However, the pod instance can change over time because of VMI migration.

This alert occurs when no ready virt-controller pods are detected for five minutes.

Reason

If the virt-controller fails, then VM lifecycle management completely fails. Lifecycle management tasks include launching a new VMI or shutting down an existing VMI.

Troubleshoot

  1. Check the deployment status of the virt-controller for available replicas and conditions.

    $ oc -n $NAMESPACE get deployment virt-controller -o yaml
  2. Check if the virt-controller pods exist and check their statuses.

    $ oc get pods -n $NAMESPACE | grep virt-controller
  3. Check the virt-controller pods' events.

    $ oc -n $NAMESPACE describe pods <virt-controller pod>
  4. Check the virt-controller pods' logs.

    $ oc -n $NAMESPACE logs <virt-controller pod>
  5. Check if there are issues with the nodes, such as if the nodes are in a NotReady state.

    $ oc get nodes

Resolution

There are several reasons for no virt-controller pods being in a Ready state. Identify the root cause and take appropriate action.

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.3.3. NoReadyVirtOperator alert

Description

No virt-operator pod in the Ready state has been detected in the past 10 minutes. The virt-operator deployment has a default replica of two pods.

Reason

The virt-operator is the first Kubernetes Operator active in an OpenShift Container Platform cluster. Its primary responsibilities are:

  • Installation
  • Live-update
  • Live-upgrade of a cluster
  • Monitoring the lifecycle of top-level controllers such as virt-controller, virt-handler, and virt-launcher
  • Managing the reconciliation of top-level controllers

In addition, the virt-operator is responsible for cluster-wide tasks such as certificate rotation and some infrastructure management.

Note

The virt-operator is not directly responsible for virtual machines in the cluster. The virt-operator’s unavailability does not affect customer workloads.

This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities such as certificate rotation, upgrade, and reconciliation of controllers are temporarily unavailable.

Troubleshoot

  1. Check the deployment status of the virt-operator for available replicas and conditions.

    $ oc -n $NAMESPACE get deployment virt-operator -o yaml
  2. Check the virt-operator pods' events.

    $ oc -n $NAMESPACE describe pods <virt-operator pod>
  3. Check the virt-operator pods' logs.

    $ oc -n $NAMESPACE logs <virt-operator pod>
  4. Check whether there are issues with the control plane nodes, such as nodes in a NotReady state.

    $ oc get nodes

Resolution

There are several reasons for no virt-operator pods being in a Ready state. Identify the root cause and take appropriate action.

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.3.4. VirtAPIDown alert

Description

All OpenShift Virtualization API servers (virt-api pods) are down.

Reason

If all OpenShift Virtualization API servers are down, then no API calls for OpenShift Virtualization entities can occur.

Troubleshoot

  1. Modify the environment variable NAMESPACE.

    $ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
  2. Verify if there are any running virt-api pods.

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. View the pods' logs using oc logs and the pods' statuses using oc describe. Example commands follow this list.
  4. Check the status of the virt-api deployment. Use these commands to learn about related events and show if there are any issues with pulling an image, a crashing pod, or other similar problems.

    $ oc -n $NAMESPACE get deployment virt-api -o yaml
    $ oc -n $NAMESPACE describe deployment virt-api
  5. Check if there are issues with the nodes, such as if the nodes are in a NotReady state.

    $ oc get nodes
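
For step 3, the following commands are a minimal sketch; <virt-api-pod> is a placeholder for one of the pod names returned in step 2:

$ oc -n $NAMESPACE logs <virt-api-pod>
$ oc -n $NAMESPACE describe pod <virt-api-pod>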

Resolution

Virt-api pods can be down for several reasons. Identify the root cause and take appropriate action.

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.3.5. VirtApiRESTErrorsBurst alert

Description

More than 80% of the REST calls fail in virt-api in the last five minutes.

Reason

A very high rate of failed REST calls to virt-api causes slow responses, slow execution of API calls, or even complete rejection of API calls.

Troubleshoot

  1. Modify the environment variable NAMESPACE.

    $ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
  2. Check to see how many running virt-api pods exist.

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. View the pods' logs using oc logs and the pods' statuses using oc describe. Example commands follow this list.
  4. Check the status of the virt-api deployment to find out more information. These commands provide the associated events and show if there are any issues with pulling an image or a crashing pod.

    $ oc -n $NAMESPACE get deployment virt-api -o yaml
    $ oc -n $NAMESPACE describe deployment virt-api
  5. Check whether there are issues with the nodes, such as the nodes being overloaded or in a NotReady state.

    $ oc get nodes
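
For step 3, the logs and statuses of the virt-api pods can be inspected with the same label selector used in step 2:

$ oc -n $NAMESPACE logs --tail=-1 -l kubevirt.io=virt-api
$ oc -n $NAMESPACE describe pods -l kubevirt.io=virt-api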

Resolution

There are several reasons for a high rate of failed REST calls. Identify the root cause and take appropriate action.

  • Node resource exhaustion
  • Not enough memory on the cluster
  • Nodes are down
  • The API server is overloaded, for example because the scheduler is not fully available
  • Networking issues

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.3.6. VirtControllerDown alert

Description

No running virt-controller pod has been detected in the past five minutes. The virt-controller deployment has a default replica of two pods.

Reason

If the virt-controller fails, then VM lifecycle management tasks, such as launching a new VMI or shutting down an existing VMI, completely fail.

Troubleshoot

  1. Modify the environment variable NAMESPACE.

    $ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-controller deployment.

    $ oc get deployment -n $NAMESPACE virt-controller -o yaml
  3. Check the virt-controller pods' events.

    $ oc -n $NAMESPACE describe pods <virt-controller pod>
  4. Check the virt-controller pods' logs.

    $ oc -n $NAMESPACE logs <virt-controller pod>
  5. Check the manager pod’s logs to determine why creating the virt-controller pods fails.

    $ oc -n $NAMESPACE logs <virt-controller-pod>

An example of a virt-controller pod name in the logs is virt-controller-7888c64d66-dzc9p. However, there may be several pods that run virt-controller.
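
For example, all pods that run virt-controller can be listed with the following command. The kubevirt.io=virt-controller label is an assumption made here for illustration, consistent with the labels used for virt-api and virt-operator in this section:

$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-controller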

Resolution

There are several known reasons why no running virt-controller pod might be detected. Identify the root cause from the list of possible reasons and take appropriate action.

  • Node resource exhaustion
  • Not enough memory on the cluster
  • Nodes are down
  • The API server is overloaded, for example because the scheduler is not fully available
  • Networking issues

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.3.7. VirtControllerRESTErrorsBurst alert

Description

More than 80% of the REST calls failed in virt-controller in the last five minutes.

Reason

Virt-controller has potentially fully lost connectivity to the API server. This loss does not affect running workloads, but propagation of status updates and actions like migrations cannot occur.

Troubleshoot

There are two common error types associated with virt-controller REST call failure:

  • The API server overloads, causing timeouts. Check the API server metrics and details like response times and overall calls.
  • The virt-controller pod cannot reach the API server. Common causes are:

    • DNS issues on the node
    • Networking connectivity issues

Resolution

Check the virt-controller logs to determine if the virt-controller pod cannot connect to the API server at all. If so, delete the pod to force a restart.
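
A minimal sketch of this check, assuming the NAMESPACE variable is set as in the other alerts in this section and assuming the virt-controller pods carry the kubevirt.io=virt-controller label:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
$ oc -n $NAMESPACE logs --tail=-1 -l kubevirt.io=virt-controller
$ oc -n $NAMESPACE delete pod <virt-controller-pod>

Because virt-controller runs as a deployment, a deleted pod is recreated automatically.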

Additionally, verify whether node resource exhaustion or a lack of memory on the cluster is causing the connection failure.

The issue normally relates to DNS or CNI issues outside of the scope of this alert.

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.3.8. VirtHandlerRESTErrorsBurst alert

Description

More than 80% of the REST calls failed in virt-handler in the last five minutes.

Reason

Virt-handler lost the connection to the API server. Running workloads on the affected node still run, but status updates cannot propagate and actions such as migrations cannot occur.

Troubleshoot

There are two common error types associated with virt-handler REST call failure:

  • The API server overloads, causing timeouts. Check the API server metrics and details like response times and overall calls.
  • The virt-handler pod cannot reach the API server. Common causes are:

    • DNS issues on the node
    • Networking connectivity issues

Resolution

If the virt-handler cannot connect to the API server, delete the pod to force a restart. The issue normally relates to DNS or CNI issues outside of the scope of this alert. Identify the root cause and take appropriate action.
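
A minimal sketch of this procedure, assuming the virt-handler pods carry the kubevirt.io=virt-handler label; <virt-handler-pod> is a placeholder for the pod on the affected node:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-handler -o wide
$ oc -n $NAMESPACE logs <virt-handler-pod>
$ oc -n $NAMESPACE delete pod <virt-handler-pod>

Because virt-handler runs as a daemon set, a deleted pod is recreated automatically on its node.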

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.3.9. VirtOperatorDown alert

Description

This alert occurs when no virt-operator pod is in the Running state in the past 10 minutes. The virt-operator deployment has a default replica of two pods.

Reason

The virt-operator is the first Kubernetes Operator active in an OpenShift Container Platform cluster. Its primary responsibilities are:

  • Installation
  • Live-update
  • Live-upgrade of a cluster
  • Monitoring the lifecycle of top-level controllers such as virt-controller, virt-handler, and virt-launcher
  • Managing the reconciliation of top-level controllers

In addition, the virt-operator is responsible for cluster-wide tasks such as certificate rotation and some infrastructure management.

Note

The virt-operator is not directly responsible for virtual machines in the cluster. The virt-operator’s unavailability does not affect customer workloads.

This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities such as certificate rotation, upgrade, and reconciliation of controllers are temporarily unavailable.

Troubleshoot

  1. Modify the environment variable NAMESPACE.

    $ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator deployment.

    $ oc get deployment -n $NAMESPACE virt-operator -o yaml
  3. Check the virt-operator pods' events.

    $ oc -n $NAMESPACE describe pods <virt-operator pod>
  4. Check the virt-operator pods' logs.

    $ oc -n $NAMESPACE logs <virt-operator pod>
  5. Check the manager pod’s logs to determine why creating the virt-operator pods fails.

    $ oc -n $NAMESPACE logs <virt-operator-pod>

An example of a virt-operator pod name in the logs is virt-operator-7888c64d66-dzc9p. However, there may be several pods that run virt-operator.
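
For example, all pods that run virt-operator can be listed with the same selector used elsewhere in this section:

$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator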

Resolution

There are several known reasons why no running virt-operator pod might be detected. Identify the root cause from the list of possible reasons and take appropriate action.

  • Node resource exhaustion
  • Not enough memory on the cluster
  • Nodes are down
  • The API server is overloaded, for example because the scheduler is not fully available
  • Networking issues

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.3.10. VirtOperatorRESTErrorsBurst alert

Description

More than 80% of the REST calls failed in virt-operator in the last five minutes.

Reason

Virt-operator lost the connection to the API server. Cluster-level actions such as upgrading and controller reconciliation do not function. There is no effect on customer workloads such as VMs and VMIs.

Troubleshoot

There are two common error types associated with virt-operator REST call failure:

  • The API server overloads, causing timeouts. Check the API server metrics and details, such as response times and overall calls.
  • The virt-operator pod cannot reach the API server. Common causes are network connectivity problems and DNS issues on the node. Check the virt-operator logs to verify that the pod can connect to the API server at all.

    $ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
    $ oc -n $NAMESPACE logs <pod-name>
    $ oc -n $NAMESPACE describe pod <pod-name>

Resolution

If the virt-operator cannot connect to the API server, delete the pod to force a restart. The issue normally relates to DNS or CNI issues outside of the scope of this alert. Identify the root cause and take appropriate action.
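
For example, after confirming from its logs that a virt-operator pod cannot reach the API server, the pod can be deleted to force a restart; <pod-name> is a placeholder from the commands above:

$ oc -n $NAMESPACE delete pod <pod-name>

The virt-operator deployment recreates the deleted pod automatically.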

Otherwise, open a support issue and provide the information gathered in the troubleshooting process.

13.12.4. Additional resources

13.13. Collecting data for Red Hat Support

When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools:

must-gather tool
The must-gather tool collects diagnostic information, including resource definitions and service logs.
Prometheus
Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing.
Alertmanager
The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems.

13.13.1. Collecting data about your environment

Collecting data about your environment minimizes the time required to analyze and determine the root cause.

Prerequisites

  • Set the retention time for Prometheus metrics data to a minimum of seven days. An example configuration follows this list.
  • Configure the Alertmanager to capture relevant alerts and to send them to a dedicated mailbox so that they can be viewed and persisted outside the cluster.
  • Record the exact number of affected nodes and virtual machines.
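
The following is a minimal sketch of setting the Prometheus retention time to seven days by creating the cluster-monitoring-config config map in the openshift-monitoring namespace. It assumes that no such config map exists yet; if one does, merge the retention setting into the existing config.yaml instead of applying this example:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 7d
EOF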

Procedure

  1. Collect must-gather data for the cluster by using the default must-gather image. An example command follows this list.
  2. Collect must-gather data for Red Hat OpenShift Data Foundation, if necessary.
  3. Collect must-gather data for OpenShift Virtualization by using the OpenShift Virtualization must-gather image.
  4. Collect Prometheus metrics for the cluster.
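
For step 1, for example, running the must-gather tool without an image argument collects data by using the default must-gather image:

$ oc adm must-gather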

13.13.1.1. Additional resources

13.13.2. Collecting data about virtual machines

Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause.

Prerequisites

  • Windows VMs:

    • Record the Windows patch update details for Red Hat Support.
    • Install the latest version of the VirtIO drivers. The VirtIO drivers include the QEMU guest agent.
    • If Remote Desktop Protocol (RDP) is enabled, try to connect to the VMs with RDP to determine whether there is a problem with the connection software.

Procedure

  1. Collect detailed must-gather data about the malfunctioning VMs.
  2. Collect screenshots of VMs that have crashed before you restart them.
  3. Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network.

13.13.2.1. Additional resources

13.13.3. Using the must-gather tool for OpenShift Virtualization

You can collect data about OpenShift Virtualization resources by running the must-gather command with the OpenShift Virtualization image.

The default data collection includes information about the following resources:

  • OpenShift Virtualization Operator namespaces, including child objects
  • OpenShift Virtualization custom resource definitions
  • Namespaces that contain virtual machines
  • Basic virtual machine definitions

Procedure

  • Run the following command to collect data about OpenShift Virtualization:

    $ oc adm must-gather --image-stream=openshift/must-gather \
      --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10

13.13.3.1. must-gather tool options

You can specify a combination of scripts and environment variables for the following options:

  • Collecting detailed virtual machine (VM) information from a namespace
  • Collecting detailed information about specified VMs
  • Collecting image and image stream information
  • Limiting the maximum number of parallel processes used by the must-gather tool

13.13.3.1.1. Parameters

Environment variables

You can specify environment variables for a compatible script.

NS=<namespace_name>
Collect virtual machine information, including virt-launcher pod details, from the namespace that you specify. The VirtualMachine and VirtualMachineInstance CR data is collected for all namespaces.
VM=<vm_name>
Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the NS environment variable.
PROS=<number_of_processes>
Modify the maximum number of parallel processes that the must-gather tool uses. The default value is 5.

Important

Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended.

Scripts

Each script is only compatible with certain environment variable combinations.

gather_vms_details
Collect VM log files, VM definitions, and namespaces (and their child objects) that belong to OpenShift Virtualization resources. If you use this parameter without specifying a namespace or VM, the must-gather tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the VM variable.
gather
Use the default must-gather script, which collects cluster data from all namespaces and includes only basic VM information. This script is only compatible with the PROS variable.
gather_images
Collect image and image stream custom resource information. This script is only compatible with the PROS variable.

13.13.3.1.2. Usage and examples

Environment variables are optional. You can run a script by itself or with one or more compatible environment variables.

Table 13.1. Compatible parameters
Script | Compatible environment variables

gather_vms_details

  • For a namespace: NS=<namespace_name>
  • For a VM: VM=<vm_name> NS=<namespace_name>
  • PROS=<number_of_processes>

gather

  • PROS=<number_of_processes>

gather_images

  • PROS=<number_of_processes>

To customize the data that must-gather collects, you append a double dash (--) to the command, followed by a space and one or more compatible parameters.

Syntax

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 \
  -- <environment_variable_1> <environment_variable_2> <script_name>

Detailed VM information

The following command collects detailed VM information for the my-vm VM in the mynamespace namespace:

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 \
  -- NS=mynamespace VM=my-vm gather_vms_details

The NS environment variable is mandatory if you use the VM environment variable.

Default data collection limited to three parallel processes

The following command collects default must-gather information by using a maximum of three parallel processes:

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 \
  -- PROS=3 gather

Image and image stream information

The following command collects image and image stream information from the cluster:

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 \
  -- gather_images

13.13.3.2. Additional resources
