Chapter 5. Gathering data about your cluster
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
It is recommended to provide:
- Data gathered by using the oc adm must-gather command
- The unique cluster ID
5.1. About the must-gather tool
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:
- Resource definitions
- Service logs
By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
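If you prefer a different output location, you can pass the --dest-dir flag; a minimal sketch, where /tmp/debug-data is an arbitrary writable path:

$ oc adm must-gather --dest-dir=/tmp/debug-data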
Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:
- To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example:

  $ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.0

- To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example:

  $ oc adm must-gather -- /usr/bin/gather_audit_logs

  Note: Audit logs are not collected as part of the default set of information to reduce the size of the files.
When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.
For example:
NAMESPACE NAME READY STATUS RESTARTS AGE
...
openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s
openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s
...
5.1.1. Gathering data about your cluster for Red Hat Support
You can gather debugging information about your cluster by using the oc adm must-gather CLI command.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- The OpenShift Container Platform CLI (oc) installed.
Procedure
Navigate to the directory where you want to store the must-gather data.

Note: If your cluster is in a disconnected environment, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters in disconnected environments, you must import the default must-gather image as an image stream:

$ oc import-image is/must-gather -n openshift

Run the oc adm must-gather command:

$ oc adm must-gather

Important: If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image.

Note: Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state.

If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources.

Note: Contact Red Hat Support for the recommended resources to gather.
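A minimal sketch of that fallback, assuming the network cluster Operator is the component under investigation; substitute the resources that Red Hat Support recommends:

$ oc adm inspect clusteroperator/network --dest-dir=./inspect.local

The command writes the definitions and logs it can collect for the specified resource into the local destination directory.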
Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

$ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/

Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.
- Attach the compressed file to your support case on the Red Hat Customer Portal.
5.1.2. Gathering data about specific features
You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.
| Image | Purpose |
|---|---|
|  | Data collection for OpenShift Virtualization. |
|  | Data collection for OpenShift Serverless. |
|  | Data collection for Red Hat OpenShift Service Mesh. |
|  | Data collection for the Migration Toolkit for Containers. |
|  | Data collection for Red Hat OpenShift Data Foundation. |
|  | Data collection for OpenShift Logging. |
|  | Data collection for Local Storage Operator. |
|  | Data collection for OpenShift sandboxed containers. |
|  | Data collection for the Poison Pill Operator and the Node Health Check Operator. |
|  | Data collection for the Node Maintenance Operator. |
|  | Data collection for Red Hat OpenShift Pipelines. |
To determine the latest version for an OpenShift Container Platform component’s image, see the Red Hat OpenShift Container Platform Life Cycle Policy web page on the Red Hat Customer Portal.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- The OpenShift Container Platform CLI (oc) installed.
Procedure
- Navigate to the directory where you want to store the must-gather data.
- Run the oc adm must-gather command with one or more --image or --image-stream arguments.

  Note:
  - To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument.
  - For information on gathering data about the Custom Metrics Autoscaler, see the Additional resources section that follows.
For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization:
$ oc adm must-gather \
  --image-stream=openshift/must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10

You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command:

$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
  -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

Example 5.1. Example must-gather output for OpenShift Logging
Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:

$ oc adm must-gather \
  --image-stream=openshift/must-gather \
  --image=quay.io/kubevirt/must-gather

Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

$ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/

Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.
- Attach the compressed file to your support case on the Red Hat Customer Portal.
5.2. Additional resources
- Gathering debugging data for the Custom Metrics Autoscaler.
- Red Hat OpenShift Container Platform Life Cycle Policy
5.2.1. Gathering audit logs
You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. You can gather audit logs for:
- etcd server
- Kubernetes API server
- OpenShift OAuth API server
- OpenShift API server
Procedure
Run the oc adm must-gather command with the -- /usr/bin/gather_audit_logs flag:

$ oc adm must-gather -- /usr/bin/gather_audit_logs

Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

$ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248

Replace must-gather.local.472290403699006248 with the actual directory name.
- Attach the compressed file to your support case on the Red Hat Customer Portal.
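Optionally, before you attach the archive, you can confirm that the expected activity was captured. The sketch below assumes the gathered audit logs are gzip-compressed JSON-lines files under an audit_logs/kube-apiserver/ subdirectory of the must-gather output (adjust the path to match your archive) and prints the timestamp, user, and request URI of delete requests:

$ zcat must-gather.local.472290403699006248/*/audit_logs/kube-apiserver/*.gz \
    | jq -r 'select(.verb=="delete") | [.requestReceivedTimestamp, .user.username, .requestURI] | @tsv'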
5.3. Obtaining your cluster ID
When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI (oc).
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- Access to the web console or the OpenShift CLI (oc) installed.
Procedure
To open a support case and have your cluster ID autofilled using the web console:

- From the toolbar, navigate to (?) Help → Open Support Case.
- The Cluster ID value is autofilled.

To manually obtain your cluster ID using the web console:

- Navigate to Home → Dashboards → Overview.
- The value is available in the Cluster ID field of the Details section.

To obtain your cluster ID using the OpenShift CLI (oc), run the following command:

$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
5.4. About sosreport
sosreport is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis.
In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather.
5.5. Generating a sosreport archive for an OpenShift Container Platform cluster node
The recommended way to generate a sosreport for an OpenShift Container Platform 4.10 cluster node is through a debug pod.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have SSH access to your hosts.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat standard or premium Subscription.
- You have a Red Hat Customer Portal account.
- You have an existing Red Hat Support case ID.
Procedure
Obtain a list of cluster nodes:

$ oc get nodes

Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

$ oc debug node/my-cluster-node

To enter into a debug session on the target node that is tainted with the NoExecute effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace:

$ oc new-project dummy

$ oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}'

$ oc debug node/my-cluster-node

Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

# chroot /host

Note: OpenShift Container Platform 4.10 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

Start a toolbox container, which includes the required binaries and plugins to run sosreport:

# toolbox

Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins.
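A minimal sketch of that cleanup, run on the node; the name filter assumes the running container name begins with toolbox-:

# podman ps -a --filter 'name=toolbox-'
# podman rm <container_id>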
Collect a sosreport archive.

Run the sosreport command and enable the crio.all and crio.logs CRI-O container engine sosreport plugins:

# sosreport -k crio.all=on -k crio.logs=on

The -k option enables you to define sosreport plugin parameters outside of the defaults.

- Press Enter when prompted, to continue.
- Provide the Red Hat Support case ID. sosreport adds the ID to the archive's file name.
- The sosreport output provides the archive's location and checksum. The following sample output references support case ID 01234567:

Your sosreport has been generated and saved in:
  /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz
The checksum is: 382ffc167510fd71b4f12a4f40b97a4e

The sosreport archive's file path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host.
Provide the sosreport archive to Red Hat Support for analysis, using one of the following methods.

Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.

From within the toolbox container, run redhat-support-tool to attach the archive directly to an existing Red Hat support case. This example uses support case ID 01234567:

# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz

The toolbox container mounts the host's root directory at /host. Reference the absolute path from the toolbox container's root directory, including /host/, when specifying files to upload through the redhat-support-tool command.

Upload the file to an existing Red Hat support case.

Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:

$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz

The debug container mounts the host's root directory at /host. Reference the absolute path from the debug container's root directory, including /host, when specifying target files for concatenation.
Note: OpenShift Container Platform 4.10 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

- Navigate to an existing support case within https://access.redhat.com/support/cases/.
- Select Attach files and follow the prompts to upload the file.
5.6. Querying bootstrap node journal logs
If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.
Prerequisites
- You have SSH access to your bootstrap node.
- You have the fully qualified domain name of the bootstrap node.
Procedure
Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:

$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

Note: The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:

$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
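To keep a local copy of the container logs for a support case, you can redirect the same loop to a file on your workstation; the file name here is only an example:

$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done' > bootstrap-container-logs.txt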
5.7. Querying cluster node journal logs
You can gather journald unit logs and other logs within /var/log on individual cluster nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
Procedure
Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only:

$ oc adm node-logs --role=master -u kubelet

Replace kubelet as appropriate to query other unit logs.
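For example, assuming your oc client supports the --tail option for node logs, the following sketch queries the CRI-O unit instead and limits the output to the most recent 100 lines per node:

$ oc adm node-logs --role=master -u crio --tail=100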
Collect logs from specific subdirectories under /var/log/ on cluster nodes.

Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

$ oc adm node-logs --role=master --path=openshift-apiserver

Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log

If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log

Note: OpenShift Container Platform 4.10 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
5.8. Network trace methods
Collecting network traces, in the form of packet capture records, can assist Red Hat Support with troubleshooting network issues.
OpenShift Container Platform supports two ways of performing a network trace. Review the following table and choose the method that meets your needs.
| Method | Benefits and capabilities |
|---|---|
| Collecting a host network trace | You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met. You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue. |
| Collecting a network trace from an OpenShift Container Platform node or container | You perform a packet capture on one node or one container. You run the tcpdump command manually, so you can control the duration of the capture. You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually. This method uses the cat command or scp to transfer the packet capture file from the node or container to the client machine. |
5.9. Collecting a host network trace
Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time.
You can use a combination of the oc adm must-gather command and the registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues.
The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine.
The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time.
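As an illustration of that flexibility, the following sketch collects the host routing table from each worker node instead of a packet capture. It assumes the network-tools image provides /bin/bash and the ip utility, and it relies on the --source-dir transfer behavior described in the procedure that follows:

$ oc adm must-gather \
    --dest-dir /tmp/route-dumps \
    --source-dir '/tmp/output/' \
    --image registry.redhat.io/openshift4/network-tools-rhel8:latest \
    --node-selector 'node-role.kubernetes.io/worker' \
    --host-network=true \
    --timeout 30s \
    -- /bin/bash -c 'mkdir -p /tmp/output && ip route > /tmp/output/routes.txt'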
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
Run a packet capture from the host network on some nodes by running the following command:

$ oc adm must-gather \
    --dest-dir /tmp/captures \
    --source-dir '/tmp/tcpdump/' \
    --image registry.redhat.io/openshift4/network-tools-rhel8:latest \
    --node-selector 'node-role.kubernetes.io/worker' \
    --host-network=true \
    --timeout 30s \
    -- tcpdump -i any \
    -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300

- The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory.
- When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod.
- The --image argument specifies a container image that includes the tcpdump command.
- The --node-selector argument and example value specify to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes.
- The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node.
- The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes.
- The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name.

Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets.
Review the packet capture files that oc adm must-gather transferred from the pods to your client machine.
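A quick way to review what was transferred is to list the destination directory recursively; /tmp/captures matches the --dest-dir value used in the preceding example:

$ ls -R /tmp/captures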
5.10. Collecting a network trace from an OpenShift Container Platform node or container
When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat standard or premium Subscription.
- You have a Red Hat Customer Portal account.
- You have an existing Red Hat Support case ID.
- You have SSH access to your hosts.
Procedure
Obtain a list of cluster nodes:

$ oc get nodes

Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

$ oc debug node/my-cluster-node

Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

# chroot /host

Note: OpenShift Container Platform 4.10 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

From within the chroot environment console, obtain the node's interface names:

# ip ad

Start a toolbox container, which includes the required binaries and plugins to run sosreport:

# toolbox

Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container.
Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name:

$ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap

The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host.
If a tcpdump capture is required for a specific container on the node, follow these steps.

Determine the target container ID. The chroot /host command precedes the crictl command in this step because the toolbox container mounts the host's root directory at /host:

# chroot /host crictl ps

Determine the container's process ID. In this example, the container ID is a7fe32346b120:

# chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print $2}'

Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container's process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container's process ID, the tcpdump command is run in the container's namespace from the host:

# nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap

The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host.
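Optionally, before transferring the file, you can confirm from the toolbox container that packets were recorded; the file name below is a placeholder for the timestamped capture created in the previous step:

# tcpdump -nn -r /host/var/tmp/<capture_file>.pcap | head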
Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods.

Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.

From within the toolbox container, run redhat-support-tool to attach the file directly to an existing Red Hat Support case. This example uses support case ID 01234567:

# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap

The toolbox container mounts the host's root directory at /host. Reference the absolute path from the toolbox container's root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
Upload the file to an existing Red Hat support case.
Concatenate the tcpdump capture file by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:

$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap

The debug container mounts the host's root directory at /host. Reference the absolute path from the debug container's root directory, including /host, when specifying target files for concatenation.
Note: OpenShift Container Platform 4.10 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

- Navigate to an existing support case within https://access.redhat.com/support/cases/.
- Select Attach files and follow the prompts to upload the file.
5.11. Providing diagnostic data to Red Hat Support
When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal, or from an OpenShift Container Platform cluster directly by using the redhat-support-tool command.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have SSH access to your hosts.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat standard or premium Subscription.
- You have a Red Hat Customer Portal account.
- You have an existing Red Hat Support case ID.
Procedure
Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal.
Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz:

$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz

The debug container mounts the host's root directory at /host. Reference the absolute path from the debug container's root directory, including /host, when specifying target files for concatenation.
Note: OpenShift Container Platform 4.10 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

- Navigate to an existing support case within https://access.redhat.com/support/cases/.
- Select Attach files and follow the prompts to upload the file.
Upload diagnostic data to an existing Red Hat support case directly from an OpenShift Container Platform cluster.
Obtain a list of cluster nodes:

$ oc get nodes

Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

$ oc debug node/my-cluster-node

Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

# chroot /host

Note: OpenShift Container Platform 4.10 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

Start a toolbox container, which includes the required binaries to run redhat-support-tool:

# toolbox

Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues.
Run redhat-support-tool to attach a file from the debug pod directly to an existing Red Hat Support case. This example uses support case ID 01234567 and example file path /host/var/tmp/my-diagnostic-data.tar.gz:

# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz

The toolbox container mounts the host's root directory at /host. Reference the absolute path from the toolbox container's root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
5.12. About toolbox
toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport and redhat-support-tool.
The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image.
Installing packages to a toolbox container
By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.
Prerequisites
- You have accessed a node with the oc debug node/<node_name> command.
Procedure
Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

# chroot /host

Start the toolbox container:

# toolbox

Install the additional package, such as wget:

# dnf install -y <package_name>
Starting an alternative image with toolbox
By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run.
Prerequisites
- You have accessed a node with the oc debug node/<node_name> command.
Procedure
Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

# chroot /host

Create a .toolboxrc file in the home directory for the root user ID:

# vi ~/.toolboxrc

REGISTRY=quay.io
IMAGE=fedora/fedora:33-x86_64
TOOLBOX_NAME=toolbox-fedora-33

Start a toolbox container with the alternative image:
# toolbox

Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins.