Chapter 5. Gathering data about your cluster
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
It is recommended to provide:

- Data gathered using the oc adm must-gather command
- The unique cluster ID
5.1. About the must-gather tool
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:

- Resource definitions
- Service logs

By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:
- To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section.

  For example:

  $ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0

- To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.

  For example:

  $ oc adm must-gather -- /usr/bin/gather_audit_logs

  Note: Audit logs are not collected as part of the default set of information to reduce the size of the files.
When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in your current working directory.
For example:
NAMESPACE NAME READY STATUS RESTARTS AGE
...
openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s
openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s
...
5.1.1. Gathering data about your cluster for Red Hat Support
You can gather debugging information about your cluster by using the oc adm must-gather CLI command.
Prerequisites

- Access to the cluster as a user with the cluster-admin role.
- The OpenShift Container Platform CLI (oc) installed.
Procedure
- Navigate to the directory where you want to store the must-gather data.

  Note: If your cluster is using a restricted network, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters on restricted networks, you must import the default must-gather image as an image stream:

  $ oc import-image is/must-gather -n openshift

- Run the oc adm must-gather command:

  $ oc adm must-gather

  Note: Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state.

- If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources.

  Note: Contact Red Hat Support for the recommended resources to gather.

- Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/

  Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.

- Attach the compressed file to your support case on the Red Hat Customer Portal.
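The gather-and-compress steps above can be sketched as a single script. This is a minimal sketch, not part of the documented procedure: the case ID, destination directory, and archive naming scheme are illustrative, and the --dest-dir flag is used only so that the tar step does not need to guess the randomly named output directory.

```shell
#!/bin/sh
# Sketch: run must-gather into a known directory, then package it with a
# case-ID-stamped archive name. CASE_ID and the naming scheme are
# illustrative assumptions, not part of the documented procedure.
CASE_ID=01234567
DEST_DIR=must-gather.local
ARCHIVE="must-gather-case-${CASE_ID}-$(date +%Y%m%d).tar.gz"

oc adm must-gather --dest-dir="${DEST_DIR}"
tar cvaf "${ARCHIVE}" "${DEST_DIR}/"
echo "Attach ${ARCHIVE} to support case ${CASE_ID}"
```

Stamping the archive name with the case ID makes it easier to match uploads to cases when you handle several clusters.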
5.1.2. Gathering data about specific features
You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.
| Image | Purpose |
|---|---|
| | Data collection for OpenShift Virtualization. |
| | Data collection for OpenShift Serverless. |
| | Data collection for Red Hat OpenShift Service Mesh. |
| | Data collection for the Migration Toolkit for Containers. |
| | Data collection for Red Hat OpenShift Container Storage. |
| | Data collection for OpenShift Logging. |
| | Data collection for Local Storage Operator. |
To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument.
Prerequisites

- Access to the cluster as a user with the cluster-admin role.
- The OpenShift Container Platform CLI (oc) installed.
Procedure
- Navigate to the directory where you want to store the must-gather data.

- Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization:

  $ oc adm must-gather \
    --image-stream=openshift/must-gather \
    --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.8.7

- You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command:

  $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
    -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

  Example 5.1. Example must-gather output for OpenShift Logging

  ├── cluster-logging
  │   ├── clo
  │   │   ├── cluster-logging-operator-74dd5994f-6ttgt
  │   │   ├── clusterlogforwarder_cr
  │   │   ├── cr
  │   │   ├── csv
  │   │   ├── deployment
  │   │   └── logforwarding_cr
  │   ├── collector
  │   │   ├── fluentd-2tr64
  │   ├── eo
  │   │   ├── csv
  │   │   ├── deployment
  │   │   └── elasticsearch-operator-7dc7d97b9d-jb4r4
  │   ├── es
  │   │   ├── cluster-elasticsearch
  │   │   │   ├── aliases
  │   │   │   ├── health
  │   │   │   ├── indices
  │   │   │   ├── latest_documents.json
  │   │   │   ├── nodes
  │   │   │   ├── nodes_stats.json
  │   │   │   └── thread_pool
  │   │   ├── cr
  │   │   ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
  │   │   └── logs
  │   │       ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
  │   ├── install
  │   │   ├── co_logs
  │   │   ├── install_plan
  │   │   ├── olmo_logs
  │   │   └── subscription
  │   └── kibana
  │       ├── cr
  │       ├── kibana-9d69668d4-2rkvz
  ├── cluster-scoped-resources
  │   └── core
  │       ├── nodes
  │       │   ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml
  │       └── persistentvolumes
  │           ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml
  ├── event-filter.html
  ├── gather-debug.log
  └── namespaces
      ├── openshift-logging
      │   ├── apps
      │   │   ├── daemonsets.yaml
      │   │   ├── deployments.yaml
      │   │   ├── replicasets.yaml
      │   │   └── statefulsets.yaml
      │   ├── batch
      │   │   ├── cronjobs.yaml
      │   │   └── jobs.yaml
      │   ├── core
      │   │   ├── configmaps.yaml
      │   │   ├── endpoints.yaml
      │   │   ├── events
      │   │   │   ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml
      │   │   │   ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml
      │   │   │   ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml
      │   │   │   ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml
      │   │   │   ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml
      │   │   │   ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml
      │   │   ├── events.yaml
      │   │   ├── persistentvolumeclaims.yaml
      │   │   ├── pods.yaml
      │   │   ├── replicationcontrollers.yaml
      │   │   ├── secrets.yaml
      │   │   └── services.yaml
      │   ├── openshift-logging.yaml
      │   ├── pods
      │   │   ├── cluster-logging-operator-74dd5994f-6ttgt
      │   │   │   ├── cluster-logging-operator
      │   │   │   │   └── cluster-logging-operator
      │   │   │   │       └── logs
      │   │   │   │           ├── current.log
      │   │   │   │           ├── previous.insecure.log
      │   │   │   │           └── previous.log
      │   │   │   └── cluster-logging-operator-74dd5994f-6ttgt.yaml
      │   │   ├── cluster-logging-operator-registry-6df49d7d4-mxxff
      │   │   │   ├── cluster-logging-operator-registry
      │   │   │   │   └── cluster-logging-operator-registry
      │   │   │   │       └── logs
      │   │   │   │           ├── current.log
      │   │   │   │           ├── previous.insecure.log
      │   │   │   │           └── previous.log
      │   │   │   ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml
      │   │   │   └── mutate-csv-and-generate-sqlite-db
      │   │   │       └── mutate-csv-and-generate-sqlite-db
      │   │   │           └── logs
      │   │   │               ├── current.log
      │   │   │               ├── previous.insecure.log
      │   │   │               └── previous.log
      │   │   ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
      │   │   ├── elasticsearch-im-app-1596030300-bpgcx
      │   │   │   ├── elasticsearch-im-app-1596030300-bpgcx.yaml
      │   │   │   └── indexmanagement
      │   │   │       └── indexmanagement
      │   │   │           └── logs
      │   │   │               ├── current.log
      │   │   │               ├── previous.insecure.log
      │   │   │               └── previous.log
      │   │   ├── fluentd-2tr64
      │   │   │   ├── fluentd
      │   │   │   │   └── fluentd
      │   │   │   │       └── logs
      │   │   │   │           ├── current.log
      │   │   │   │           ├── previous.insecure.log
      │   │   │   │           └── previous.log
      │   │   │   ├── fluentd-2tr64.yaml
      │   │   │   └── fluentd-init
      │   │   │       └── fluentd-init
      │   │   │           └── logs
      │   │   │               ├── current.log
      │   │   │               ├── previous.insecure.log
      │   │   │               └── previous.log
      │   │   ├── kibana-9d69668d4-2rkvz
      │   │   │   ├── kibana
      │   │   │   │   └── kibana
      │   │   │   │       └── logs
      │   │   │   │           ├── current.log
      │   │   │   │           ├── previous.insecure.log
      │   │   │   │           └── previous.log
      │   │   │   ├── kibana-9d69668d4-2rkvz.yaml
      │   │   │   └── kibana-proxy
      │   │   │       └── kibana-proxy
      │   │   │           └── logs
      │   │   │               ├── current.log
      │   │   │               ├── previous.insecure.log
      │   │   │               └── previous.log
      │   └── route.openshift.io
      │       └── routes.yaml
      └── openshift-operators-redhat
          ├── ...

- Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:

  $ oc adm must-gather \
    --image-stream=openshift/must-gather \
    --image=quay.io/kubevirt/must-gather

- Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/

  Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.

- Attach the compressed file to your support case on the Red Hat Customer Portal.
5.1.3. Gathering audit logs
You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. You can gather audit logs for:
- etcd server
- Kubernetes API server
- OpenShift OAuth API server
- OpenShift API server
Procedure
- Run the oc adm must-gather command with the -- /usr/bin/gather_audit_logs flag:

  $ oc adm must-gather -- /usr/bin/gather_audit_logs

- Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar cvaf must-gather.tar.gz must-gather.local.4722904036990062481

  Replace must-gather.local.4722904036990062481 with the actual directory name.

- Attach the compressed file to your support case on the Red Hat Customer Portal.
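Once gathered, the audit records are JSON lines, so standard text tools can do a first pass before you upload anything. The sketch below lists the distinct usernames seen in the gathered Kubernetes API server records; the directory layout (an audit_logs/kube-apiserver subdirectory under the must-gather output) is an assumption about the gather script's output, so adjust the path to what you actually find.

```shell
#!/bin/sh
# Sketch: summarize which users appear in gathered kube-apiserver audit
# records. The audit_logs/kube-apiserver path is an assumption about the
# gather script's output layout; adjust as needed.
DIR=$(ls -d must-gather.local.*/*/audit_logs/kube-apiserver 2>/dev/null | head -n 1)
grep -ho '"username":"[^"]*"' "${DIR}"/*.log | sort | uniq -c | sort -rn
```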
5.2. Obtaining your cluster ID
When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI (oc).
Prerequisites

- Access to the cluster as a user with the cluster-admin role.
- Access to the web console or the OpenShift CLI (oc) installed.
Procedure
To open a support case and have your cluster ID autofilled using the web console:

- From the toolbar, navigate to (?) Help → Open Support Case.
- The Cluster ID value is autofilled.

To manually obtain your cluster ID using the web console:

- Navigate to Home → Dashboards → Overview.
- The value is available in the Cluster ID field of the Details section.

To obtain your cluster ID using the OpenShift CLI (oc), run the following command:

$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
5.3. About sosreport
sosreport is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis.

In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather.
5.4. Generating a sosreport archive for an OpenShift Container Platform cluster node
The recommended way to generate a sosreport for an OpenShift Container Platform 4.8 cluster node is through a debug pod.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- You have SSH access to your hosts.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat standard or premium Subscription.
- You have a Red Hat Customer Portal account.
- You have an existing Red Hat Support case ID.
Procedure
- Obtain a list of cluster nodes:

  $ oc get nodes

- Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

  $ oc debug node/my-cluster-node

- To enter into a debug session on the target node that is tainted with the NoExecute effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace:

  $ oc new-project dummy
  $ oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}'
  $ oc debug node/my-cluster-node

- Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

  # chroot /host

  Note: OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

- Start a toolbox container, which includes the required binaries and plugins to run sosreport:

  # toolbox

  Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins.

- Collect a sosreport archive.

  - Run the sosreport command and enable the crio.all and crio.logs CRI-O container engine plugins:

    # sosreport -k crio.all=on -k crio.logs=on

    The -k option enables you to define sosreport plugin parameters outside of the defaults.

  - Press Enter when prompted, to continue.

  - Provide the Red Hat Support case ID. sosreport adds the ID to the archive's file name.

  - The sosreport output provides the archive's location and checksum. The following sample output references support case ID 01234567:

    Your sosreport has been generated and saved in:
      /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz
    The checksum is: 382ffc167510fd71b4f12a4f40b97a4e

    The sosreport archive's file path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host.
- Provide the sosreport archive to Red Hat Support for analysis, using one of the following methods.

  - Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster. From within the toolbox container, run redhat-support-tool to attach the archive directly to an existing Red Hat support case. This example uses support case ID 01234567:

    # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz

    The toolbox container mounts the host's root directory at /host. Reference the absolute path from the toolbox container's root directory, including /host/, when specifying files to upload through the redhat-support-tool command.

  - Upload the file to an existing Red Hat support case.

    - Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:

      $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz

      The debug container mounts the host's root directory at /host. Reference the absolute path from the debug container's root directory, including /host, when specifying target files for concatenation.

      Note: OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

    - Navigate to an existing support case within https://access.redhat.com/support/cases/.
    - Select Attach files and follow the prompts to upload the file.
5.5. Querying bootstrap node journal logs
If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.
Prerequisites
- You have SSH access to your bootstrap node.
- You have the fully qualified domain name of the bootstrap node.
Procedure
- Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:

  $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

  Note: The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

- Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name:

  $ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
5.6. Querying cluster node journal logs
You can gather journald unit logs and other logs within /var/log on individual cluster nodes.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- Your API service is still functional.
- You have installed the OpenShift CLI (oc).
- You have SSH access to your hosts.
Procedure
- Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes (also known as the master nodes) only:

  $ oc adm node-logs --role=master -u kubelet

  Replace kubelet as appropriate to query other unit logs.

- Collect logs from specific subdirectories under /var/log/ on cluster nodes.

  - Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

    $ oc adm node-logs --role=master --path=openshift-apiserver

  - Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

    $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log

  - If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log

    Note: OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.
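When Support asks for the same unit log from every node in a role, the per-node form of oc adm node-logs can be looped over the node list. This is a minimal sketch assuming the standard control plane node label; the node-logs/ output directory name is illustrative.

```shell
#!/bin/sh
# Sketch: save kubelet unit logs from each control plane node into a
# per-node file. The node-logs/ directory name is illustrative.
mkdir -p node-logs
for node in $(oc get nodes -l node-role.kubernetes.io/master= \
    -o jsonpath='{.items[*].metadata.name}'); do
  oc adm node-logs "${node}" -u kubelet > "node-logs/${node}-kubelet.log"
done
```

Per-node files are easier to attach to a case and to diff against each other than one interleaved stream.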
5.7. Collecting a network trace from an OpenShift Container Platform node or container
When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat standard or premium Subscription.
- You have a Red Hat Customer Portal account.
- You have an existing Red Hat Support case ID.
- You have SSH access to your hosts.
Procedure
- Obtain a list of cluster nodes:

  $ oc get nodes

- Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

  $ oc debug node/my-cluster-node

- Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

  # chroot /host

  Note: OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

- From within the chroot environment console, obtain the node's interface names:

  # ip ad

- Start a toolbox container, which includes the required binaries and plugins to run sosreport:

  # toolbox

  Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container.

- Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name:

  $ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap

  The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host.

- If a tcpdump capture is required for a specific container on the node, follow these steps.

  - Determine the target container ID. The chroot /host command precedes the crictl command in this step because the toolbox container mounts the host's root directory at /host:

    # chroot /host crictl ps

  - Determine the container's process ID. In this example, the container ID is a7fe32346b120:

    # chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print $2}'

  - Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container's process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container's process ID, the tcpdump command is run in the container's namespace from the host:

    # nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap

    The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host.
- Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods.

  - Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster. From within the toolbox container, run redhat-support-tool to attach the file directly to an existing Red Hat Support case. This example uses support case ID 01234567:

    # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap

    The toolbox container mounts the host's root directory at /host. Reference the absolute path from the toolbox container's root directory, including /host/, when specifying files to upload through the redhat-support-tool command.

  - Upload the file to an existing Red Hat support case.

    - Concatenate the capture file by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:

      $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap

      The debug container mounts the host's root directory at /host. Reference the absolute path from the debug container's root directory, including /host, when specifying target files for concatenation.

      Note: OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

    - Navigate to an existing support case within https://access.redhat.com/support/cases/.
    - Select Attach files and follow the prompts to upload the file.
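If Support only needs traffic for one service, the node capture above can be narrowed with a tcpdump filter expression so the file stays small. This sketch assumes the same ens5 interface and output location as the examples above; filtering on TCP port 6443 (the Kubernetes API server) is an illustrative choice, not something the procedure prescribes.

```shell
# Sketch: capture only Kubernetes API server traffic (TCP port 6443) on
# interface ens5; run from within the toolbox container as above. The
# port filter and file name are illustrative assumptions.
tcpdump -nn -s 0 -i ens5 port 6443 \
  -w "/host/var/tmp/my-cluster-node-api_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap"
```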
5.8. Providing diagnostic data to Red Hat Support
When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal, or from an OpenShift Container Platform cluster directly by using the redhat-support-tool command.
Prerequisites

- You have access to the cluster as a user with the cluster-admin role.
- You have SSH access to your hosts.
- You have installed the OpenShift CLI (oc).
- You have a Red Hat standard or premium Subscription.
- You have a Red Hat Customer Portal account.
- You have an existing Red Hat Support case ID.
Procedure
- Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal.

  - Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz:

    $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz

    The debug container mounts the host's root directory at /host. Reference the absolute path from the debug container's root directory, including /host, when specifying target files for concatenation.

    Note: OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

  - Navigate to an existing support case within https://access.redhat.com/support/cases/.
  - Select Attach files and follow the prompts to upload the file.

- Upload diagnostic data to an existing Red Hat support case directly from an OpenShift Container Platform cluster.

  - Obtain a list of cluster nodes:

    $ oc get nodes

  - Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

    $ oc debug node/my-cluster-node

  - Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

    # chroot /host

    Note: OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

  - Start a toolbox container, which includes the required binaries to run redhat-support-tool:

    # toolbox

    Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues.

  - Run redhat-support-tool to attach a file from the debug pod directly to an existing Red Hat Support case. This example uses support case ID 01234567 and example file path /host/var/tmp/my-diagnostic-data.tar.gz:

    # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz

    The toolbox container mounts the host's root directory at /host. Reference the absolute path from the toolbox container's root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
5.9. About toolbox
toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that you need to run commands such as sosreport and redhat-support-tool.

The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image.
Installing packages to a toolbox container
By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.
Prerequisites

- You have accessed a node with the oc debug node/<node_name> command.
Procedure
- Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

  # chroot /host

- Start the toolbox container:

  # toolbox

- Install the additional package, such as wget:

  # dnf install -y <package_name>
Starting an alternative image with toolbox
By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run.
Prerequisites

- You have accessed a node with the oc debug node/<node_name> command.
Procedure
- Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

  # chroot /host

- Create a .toolboxrc file in the home directory for the root user ID:

  # vi ~/.toolboxrc
  REGISTRY=quay.io
  IMAGE=fedora/fedora:33-x86_64
  TOOLBOX_NAME=toolbox-fedora-33

  REGISTRY optionally specifies an alternative image registry, IMAGE an alternative image to start, and TOOLBOX_NAME an alternative name for the toolbox container.

- Start a toolbox container with the alternative image:

  # toolbox

  Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins.