Chapter 12. Logging, events, and monitoring
12.1. Viewing virtual machine logs
12.1.1. Understanding virtual machine logs
Logs are collected for OpenShift Container Platform builds, deployments, and pods. In OpenShift Virtualization, virtual machine logs can be retrieved from the virtual machine launcher pod in either the web console or the CLI.
The -f option follows the log output in real time, which is useful for monitoring progress and error checking.
If the launcher pod is failing to start, use the --previous option to see the logs of the last attempt.
ErrImagePull and ImagePullBackOff errors can be caused by an incorrect deployment configuration or problems with the images that are referenced.
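For example, if a launcher pod is stuck in one of these states, you can check its status and recent events (a sketch; replace the placeholder pod name):
$ oc get pods
$ oc describe pod virt-launcher-<name>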
12.1.2. Viewing virtual machine logs in the CLI
Get virtual machine logs from the virtual machine launcher pod.
Procedure
Use the following command:
$ oc logs <virt-launcher-name>
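For example, to follow the log output in real time, or to retrieve the logs of the previous attempt of a failing launcher pod:
$ oc logs -f <virt-launcher-name>
$ oc logs --previous <virt-launcher-name>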
12.1.3. Viewing virtual machine logs in the web console
Get virtual machine logs from the associated virtual machine launcher pod.
Procedure
- In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- In the Details tab, click the virt-launcher-<name> pod in the Pod section.
- Click Logs.
12.2. Viewing events
12.2.1. Understanding virtual machine events
OpenShift Container Platform events are records of important life-cycle information in a namespace and are useful for monitoring and troubleshooting resource scheduling, creation, and deletion issues.
OpenShift Virtualization adds events for virtual machines and virtual machine instances. These can be viewed from either the web console or the CLI.
See also: Viewing system event information in an OpenShift Container Platform cluster.
12.2.2. Viewing the events for a virtual machine in the web console
You can view the stream of events for a running virtual machine from the Virtual Machine Overview panel of the web console.
The ▮▮ button pauses the events stream.
The ▶ button continues a paused events stream.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click Events to view all events for the virtual machine.
12.2.3. Viewing namespace events in the CLI
Use the OpenShift Container Platform client to get the events for a namespace.
Procedure
In the namespace, use the oc get command:
$ oc get events
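You can narrow the output with standard oc get options, for example to sort events by creation time or to show only warnings (a sketch):
$ oc get events --sort-by=.metadata.creationTimestamp
$ oc get events --field-selector type=Warning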
12.2.4. Viewing resource events in the CLI
Events are included in the resource description, which you can get using the OpenShift Container Platform client.
Procedure
In the namespace, use the oc describe command. The following example shows how to get the events for a virtual machine, a virtual machine instance, and the virt-launcher pod for a virtual machine:
$ oc describe vm <vm>
$ oc describe vmi <vmi>
$ oc describe pod virt-launcher-<name>
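Because the Events section appears at the end of a long description, you can also list only the events that reference a specific object, for example (a sketch using a field selector):
$ oc get events --field-selector involvedObject.name=<vmi>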
12.3. Diagnosing data volumes using events and conditions
Use the oc describe command to analyze and help resolve issues with data volumes.
12.3.1. About conditions and events
Diagnose data volume issues by examining the output of the Conditions and Events sections generated by the command:
$ oc describe dv <DataVolume>
There are three Types in the Conditions section that display:
- Bound
- Running
- Ready
The Events section provides the following additional information:
- Type of event
- Reason for logging
- Source of the event
- Message containing additional diagnostic information
The output from oc describe does not always contain Events.
An event is generated when either Status, Reason, or Message changes. Both conditions and events react to changes in the state of the data volume.
For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well.
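If you only need the condition data, you can query it directly instead of reading the full description, for example (a sketch):
$ oc get dv <DataVolume> -o jsonpath='{.status.conditions}'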
12.3.2. Analyzing data volumes using conditions and events
By inspecting the Conditions and Events sections generated by the describe command, you can determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or not an operation is actively running or completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state.
There are many different combinations of conditions. Each must be evaluated in its unique context.
Examples of various combinations follow.
Bound – A successfully bound PVC displays in this example.
Note that the Type is Bound, so the Status is True. If the PVC is not bound, the Status is False.
When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True. The Message indicates which PVC owns the data volume.
Message, in the Events section, provides further details including how long the PVC has been bound (Age) and by what resource (From), in this case datavolume-controller:
Example output
Status:
  Conditions:
    Last Heart Beat Time:  2020-07-15T03:58:24Z
    Last Transition Time:  2020-07-15T03:58:24Z
    Message:               PVC win10-rootdisk Bound
    Reason:                Bound
    Status:                True
    Type:                  Bound
Events:
  Type    Reason  Age   From                   Message
  ----    ------  ----  ----                   -------
  Normal  Bound   24s   datavolume-controller  PVC example-dv Bound
Running – In this case, note that Type is Running and Status is False, indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False.
However, note that Reason is Completed and the Message field indicates Import Complete.
In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404, listed in the Events section's first Warning.
From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume:
Example output
Status:
  Conditions:
    Last Heart Beat Time:  2020-07-15T04:31:39Z
    Last Transition Time:  2020-07-15T04:31:39Z
    Message:               Import Complete
    Reason:                Completed
    Status:                False
    Type:                  Running
Events:
  Type     Reason  Age                From                   Message
  ----     ------  ----               ----                   -------
  Warning  Error   12s (x2 over 14s)  datavolume-controller  Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found
Ready – If Type is Ready and Status is True, then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False:
Example output
Status:
  Conditions:
    Last Heart Beat Time:  2020-07-15T04:31:39Z
    Last Transition Time:  2020-07-15T04:31:39Z
    Status:                True
    Type:                  Ready
12.4. Viewing information about virtual machine workloads
You can view high-level information about your virtual machines by using the Virtual Machines dashboard in the OpenShift Container Platform web console.
12.4.1. About the Virtual Machines dashboard
Access virtual machines from the OpenShift Container Platform web console by navigating to the Workloads → Virtualization page. The page contains two tabs: Virtual Machines and Virtual Machine Templates.
The following cards describe each virtual machine:
Details provides identifying information about the virtual machine, including:
- Name
- Namespace
- Date of creation
- Node name
- IP address
Inventory lists the virtual machine’s resources, including:
- Network interface controllers (NICs)
- Disks
Status includes:
- The current status of the virtual machine
- A note indicating whether or not the QEMU guest agent is installed on the virtual machine
Utilization includes charts that display usage data for:
- CPU
- Memory
- Filesystem
- Network transfer
Use the drop-down list to choose a duration for the utilization data. The available options are 1 Hour, 6 Hours, and 24 Hours.
Events lists messages about virtual machine activity over the past hour. To view additional events, click View all.
12.5. Monitoring virtual machine health
A virtual machine instance (VMI) can become unhealthy due to transient issues such as connectivity loss, deadlocks, or problems with external dependencies. A health check periodically performs diagnostics on a VMI by using any combination of the readiness and liveness probes.
12.5.1. About readiness and liveness probes
Use readiness and liveness probes to detect and handle unhealthy virtual machine instances (VMIs). You can include one or more probes in the specification of the VMI to ensure that traffic does not reach a VMI that is not ready for it and that a new instance is created when a VMI becomes unresponsive.
A readiness probe determines whether a VMI is ready to accept service requests. If the probe fails, the VMI is removed from the list of available endpoints until the VMI is ready.
A liveness probe determines whether a VMI is responsive. If the probe fails, the VMI is deleted and a new instance is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachineInstance object. These fields support the following tests:
- HTTP GET
- The probe determines the health of the VMI by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VMI. The VMI is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
12.5.2. Defining an HTTP readiness probe
Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine instance (VMI) configuration.
Procedure
Include details of the readiness probe in the VMI configuration file.
Sample readiness probe with an HTTP GET test
# ...
spec:
  readinessProbe:
    httpGet: 1
      port: 1500 2
      path: /healthz 3
      httpHeaders:
      - name: Custom-Header
        value: Awesome
    initialDelaySeconds: 120 4
    periodSeconds: 20 5
    timeoutSeconds: 10 6
    failureThreshold: 3 7
    successThreshold: 3 8
# ...
1. The HTTP GET request to perform to connect to the VMI.
2. The port of the VMI that the probe queries. In the above example, the probe queries port 1500.
3. The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is removed from the list of available endpoints.
4. The time, in seconds, after the VMI starts before the readiness probe is initiated.
5. The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
6. The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
7. The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
8. The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VMI by running the following command:
$ oc create -f <file_name>.yaml
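After the VMI starts, you can check whether the readiness probe is passing by reading the Ready condition of the VirtualMachineInstance (a sketch):
$ oc get vmi <vmi_name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'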
12.5.3. Defining a TCP readiness probe
Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine instance (VMI) configuration.
Procedure
Include details of the TCP readiness probe in the VMI configuration file.
Sample readiness probe with a TCP socket test
...
spec:
  readinessProbe:
    initialDelaySeconds: 120 1
    periodSeconds: 20 2
    tcpSocket: 3
      port: 1500 4
    timeoutSeconds: 10 5
...
1. The time, in seconds, after the VMI starts before the readiness probe is initiated.
2. The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
3. The TCP action to perform.
4. The port of the VMI that the probe queries.
5. The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VMI by running the following command:
$ oc create -f <file_name>.yaml
12.5.4. Defining an HTTP liveness probe
Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine instance (VMI) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.
Procedure
Include details of the HTTP liveness probe in the VMI configuration file.
Sample liveness probe with an HTTP GET test
# ...
spec:
  livenessProbe:
    initialDelaySeconds: 120 1
    periodSeconds: 20 2
    httpGet: 3
      port: 1500 4
      path: /healthz 5
      httpHeaders:
      - name: Custom-Header
        value: Awesome
    timeoutSeconds: 10 6
# ...
1. The time, in seconds, after the VMI starts before the liveness probe is initiated.
2. The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
3. The HTTP GET request to perform to connect to the VMI.
4. The port of the VMI that the probe queries. In the above example, the probe queries port 1500. The VMI installs and runs a minimal HTTP server on port 1500 via cloud-init.
5. The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is deleted and a new instance is created.
6. The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VMI by running the following command:
$ oc create -f <file_name>.yaml
12.5.5. Template: Virtual machine instance configuration file for defining health checks
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-fedora
  name: vmi-fedora
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
    resources:
      requests:
        memory: 1024M
  readinessProbe:
    httpGet:
      port: 1500
    initialDelaySeconds: 120
    periodSeconds: 20
    timeoutSeconds: 10
    failureThreshold: 3
    successThreshold: 3
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/fedora-cloud-registry-disk-demo
  - cloudInitNoCloud:
      userData: |-
        #cloud-config
        password: fedora
        chpasswd: { expire: False }
        bootcmd:
          - setenforce 0
          - dnf install -y nmap-ncat
          - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!'
    name: cloudinitdisk
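To try this template, you could save it to a file, create the VMI, and then watch its status, for example (a sketch; the file name is arbitrary):
$ oc create -f vmi-fedora.yaml
$ oc get vmi vmi-fedora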
12.6. Using the OpenShift Container Platform dashboard to get cluster information
Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by clicking Home > Dashboards > Overview from the OpenShift Container Platform web console.
The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards.
12.6.1. About the OpenShift Container Platform dashboards page
The OpenShift Container Platform dashboard consists of the following cards:
Details provides a brief overview of informational cluster details. Statuses include ok, error, warning, in progress, and unknown. Resources can add custom status names. The card includes:
- Cluster ID
- Provider
- Version
Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems, and includes information about:
- Number of nodes
- Number of pods
- Persistent storage volume claims
- Virtual machines (available if OpenShift Virtualization is installed)
- Bare metal hosts in the cluster, listed according to their state (only available in a metal3 environment).
- Cluster Health summarizes the current health of the cluster as a whole, including relevant alerts and descriptions. If OpenShift Virtualization is installed, the overall health of OpenShift Virtualization is diagnosed as well. If more than one subsystem is present, click See All to view the status of each subsystem.
Cluster Capacity charts help administrators understand when additional resources are required in the cluster. The charts contain an inner ring that displays current consumption, while an outer ring displays thresholds configured for the resource, including information about:
- CPU time
- Memory allocation
- Storage consumed
- Network resources consumed
- Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption.
- Events lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host.
- Top Consumers helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage).
12.7. OpenShift Container Platform cluster monitoring, logging, and Telemetry
OpenShift Container Platform provides various resources for monitoring at the cluster level.
12.7.1. About OpenShift Container Platform monitoring
OpenShift Container Platform includes a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components. OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts are included by default that immediately notify cluster administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster.
After installing OpenShift Container Platform 4.6, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. You can then query metrics, review dashboards, and manage alerting rules and silences for your own projects in the OpenShift Container Platform web console.
Cluster administrators can grant developers and other users permission to monitor their own projects. Privileges are granted by assigning one of the predefined monitoring roles.
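For example, a cluster administrator might grant a user one of the predefined roles for a single project (a sketch; the user and namespace are placeholders):
$ oc policy add-role-to-user monitoring-edit <user> -n <namespace>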
12.7.2. About cluster logging components
The cluster logging components include a collector deployed to each node in the OpenShift Container Platform cluster that collects all node and container logs and writes them to a log store. You can use a centralized web UI to create rich visualizations and dashboards with the aggregated data.
The major components of cluster logging are:
- collection - This is the component that collects logs from the cluster, formats them, and forwards them to the log store. The current implementation is Fluentd.
- log store - This is where the logs are stored. The default implementation is Elasticsearch. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage.
- visualization - This is the UI component you can use to view logs, graphs, charts, and so forth. The current implementation is Kibana.
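If cluster logging is installed, you can see these components by listing the pods in the logging namespace (a sketch, assuming the default openshift-logging namespace):
$ oc get pods -n openshift-logging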
For more information on cluster logging, see the OpenShift Container Platform cluster logging documentation.
12.7.3. About Telemetry
Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document.
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience.
This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use.
12.7.3.1. Information collected by Telemetry
The following information is collected by Telemetry:
- The unique random identifier that is generated during an installation
- Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability
- Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update
- The name of the provider platform that OpenShift Container Platform is deployed on and the data center location
- Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each
- The number of running virtual machine instances in a cluster
- The number of etcd members and the number of objects stored in the etcd cluster
- The OpenShift Container Platform framework components installed in a cluster and their condition and status
- Usage information about components, features, and extensions
- Usage details about Technology Previews and unsupported configurations
- Information about degraded software
- Information about nodes that are marked as NotReady
- Events for all namespaces listed as "related objects" for a degraded Operator
- Configuration details that help Red Hat Support to provide beneficial support for customers. This includes node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services.
- Information about the validity of certificates
Telemetry does not collect identifying information such as user names or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat’s privacy practices.
12.7.4. CLI troubleshooting and debugging commands
For a list of the oc client troubleshooting and debugging commands, see the OpenShift Container Platform CLI tools documentation.
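A few commands that frequently appear in troubleshooting workflows (a non-exhaustive sketch):
$ oc describe <resource> <name>     # show status, conditions, and events for a resource
$ oc logs <pod_name>                # retrieve container logs
$ oc adm must-gather                # collect cluster diagnostic data
$ oc debug node/<node_name>         # open a debug shell on a node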
12.8. Collecting data for Red Hat Support
When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools:
- must-gather tool: The must-gather tool collects diagnostic information, including resource definitions and service logs.
- Prometheus: Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing.
- Alertmanager: The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems.
12.8.1. Collecting data about your environment
Collecting data about your environment minimizes the time required to analyze and determine the root cause.
Prerequisites
- Set the retention time for Prometheus metrics data to a minimum of seven days.
- Configure the Alertmanager to capture relevant alerts and to send them to a dedicated mailbox so that they can be viewed and persisted outside the cluster.
- Record the exact number of affected nodes and virtual machines.
Procedure
- Collect must-gather data for the cluster by using the default must-gather image, as shown in the sketch after this list.
- Collect must-gather data for Red Hat OpenShift Container Storage, if necessary.
- Collect must-gather data for OpenShift Virtualization by using the OpenShift Virtualization must-gather image.
- Collect Prometheus metrics for the cluster.
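For reference, collecting the default must-gather data uses the following command; the OpenShift Virtualization image is shown in Section 12.8.3 (a sketch):
$ oc adm must-gather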
12.8.1.1. Additional resources
- Configuring the retention time for Prometheus metrics data
- Configuring the Alertmanager to send alert notifications to external systems
- Collecting must-gather data for OpenShift Container Platform
- Collecting must-gather data for OpenShift Virtualization
- Collecting Prometheus metrics for all projects as a cluster administrator
12.8.2. Collecting data about virtual machines
Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause.
Prerequisites
Windows VMs:
- Record the Windows patch update details for Red Hat Support.
- Install the latest version of the VirtIO drivers. The VirtIO drivers include the QEMU guest agent.
- If Remote Desktop Protocol (RDP) is enabled, try to connect to the VMs with RDP to determine whether there is a problem with the connection software.
Procedure
- Collect detailed must-gather data about the malfunctioning VMs.
- Collect screenshots of VMs that have crashed before you restart them.
- Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network.
12.8.2.1. Additional resources
- Installing VirtIO drivers on Windows VMs
- Downloading and installing VirtIO drivers on Windows VMs without host access
- Connecting to Windows VMs with RDP using the web console or the command line
- Collecting must-gather data about virtual machines
12.8.3. Using the must-gather tool for OpenShift Virtualization
You can collect data about OpenShift Virtualization resources by running the must-gather command with the OpenShift Virtualization image.
The default data collection includes information about the following resources:
- OpenShift Virtualization Operator namespaces, including child objects
- OpenShift Virtualization custom resource definitions
- Namespaces that contain virtual machines
- Basic virtual machine definitions
Procedure
Run the following command to collect data about OpenShift Virtualization:
$ oc adm must-gather \
--image-stream=openshift/must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.5.8
12.8.3.1. must-gather tool options
You can specify a combination of scripts and environment variables for the following options:
- Collecting detailed virtual machine (VM) information from a namespace
- Collecting detailed information about specified VMs
- Collecting image and image stream information
- Limiting the maximum number of parallel processes used by the must-gather tool
12.8.3.1.1. Parameters
Environment variables
You can specify environment variables for a compatible script.
- NS=<namespace_name>: Collect virtual machine information, including virt-launcher pod details, from the namespace that you specify. The VirtualMachine and VirtualMachineInstance CR data is collected for all namespaces.
- VM=<vm_name>: Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the NS environment variable.
- PROS=<number_of_processes>: Modify the maximum number of parallel processes that the must-gather tool uses. The default value is 5.
Important: Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended.
Scripts
Each script is only compatible with certain environment variable combinations.
- gather_vms_details: Collect VM log files, VM definitions, and namespaces (and their child objects) that belong to OpenShift Virtualization resources. If you use this parameter without specifying a namespace or VM, the must-gather tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the VM variable.
- gather: Use the default must-gather script, which collects cluster data from all namespaces and includes only basic VM information. This script is only compatible with the PROS variable.
- gather_images: Collect image and image stream custom resource information. This script is only compatible with the PROS variable.
12.8.3.1.2. Usage and examples
Environment variables are optional. You can run a script by itself or with one or more compatible environment variables.
| Script | Compatible environment variable |
| --- | --- |
| gather_vms_details | NS=<namespace_name>, VM=<vm_name>, PROS=<number_of_processes> |
| gather | PROS=<number_of_processes> |
| gather_images | PROS=<number_of_processes> |
To customize the data that must-gather collects, you append a double dash (--) to the command, followed by a space and one or more compatible parameters.
Syntax
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.5.8 \
-- <environment_variable_1> <environment_variable_2> <script_name>
Detailed VM information
The following command collects detailed VM information for the my-vm VM in the mynamespace namespace:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.5.8 \
-- NS=mynamespace VM=my-vm gather_vms_details 1
1. The NS environment variable is mandatory if you use the VM environment variable.
Default data collection limited to three parallel processes
The following command collects default must-gather information by using a maximum of three parallel processes:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.5.8 \
-- PROS=3 gather
Image and image stream information
The following command collects image and image stream information from the cluster:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.5.8 \
-- gather_images