Chapter 5. Gathering data about your cluster
You can use the following tools to get debugging information about your OpenShift Dedicated cluster.
5.1. About the must-gather tool
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:
- Resource definitions
- Service logs
By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:
- To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section.

  For example:

  $ oc adm must-gather \
      --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0
- To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.

  For example:

  $ oc adm must-gather -- /usr/bin/gather_audit_logs
Note: Audit logs are not collected as part of the default set of information to reduce the size of the files.
When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory.
For example:
NAMESPACE                     NAME                READY   STATUS    RESTARTS   AGE
...
openshift-must-gather-5drcj   must-gather-bklx4   2/2     Running   0          72s
openshift-must-gather-5drcj   must-gather-s8sdh   2/2     Running   0          72s
...
Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option.
For example:
$ oc adm must-gather --run-namespace <namespace> \
    --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0
5.1.1. Gathering data about your cluster for Red Hat Support
You can gather debugging information about your cluster by using the oc adm must-gather CLI command.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.

  Note: In OpenShift Dedicated deployments, customers who are not using the Customer Cloud Subscription (CCS) model cannot use the oc adm must-gather command because it requires cluster-admin privileges.

- The OpenShift CLI (oc) is installed.
Procedure
- Navigate to the directory where you want to store the must-gather data.

- Run the oc adm must-gather command:

  $ oc adm must-gather

  Note: Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state.

  If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources.

  Note: Contact Red Hat Support for the recommended resources to gather.
- Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1

  - 1 - Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.
- Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.
5.1.2. Must-gather flags
The flags listed in the following table are available to use with the oc adm must-gather command.
Flag | Example command | Description
---|---|---
--all-images | oc adm must-gather --all-images=false | Collect must-gather data using the default must-gather image for all Operators on the cluster that are annotated with the operators.openshift.io/must-gather-image annotation.
--dest-dir | oc adm must-gather --dest-dir='<directory_name>' | Set a specific directory on the local machine where the gathered data is written.
--host-network | oc adm must-gather --host-network=false | Run must-gather pods as hostNetwork. Only relevant if a specific command and image needs to capture host-level data.
--image | oc adm must-gather --image=[<plugin_image>] | Specify a must-gather plugin image to run. If not specified, the default must-gather image is used.
--image-stream | oc adm must-gather --image-stream=[<image_stream>] | Specify an <image_stream> using a namespace or name:tag value containing a must-gather plugin image to run.
--node-name | oc adm must-gather --node-name='<node>' | Set a specific node to use. If not specified, by default a random master is used.
--node-selector | oc adm must-gather --node-selector='<node_selector_name>' | Set a specific node selector to use. Only relevant when specifying a command and image which needs to capture data on a set of cluster nodes simultaneously.
--run-namespace | oc adm must-gather --run-namespace='<namespace>' | An existing privileged namespace where must-gather pods should run. If not specified, a temporary namespace is generated.
--since | oc adm must-gather --since=<time> | Only return logs newer than the specified duration. Defaults to all logs. Plugins are encouraged but not required to support this. Only one of --since or --since-time may be used.
--since-time | oc adm must-gather --since-time='<date_and_time>' | Only return logs after a specific date and time, expressed in RFC3339 format. Defaults to all logs. Plugins are encouraged but not required to support this. Only one of --since or --since-time may be used.
--source-dir | oc adm must-gather --source-dir='/<directory>' | Set the specific directory on the pod where you copy the gathered data from.
--timeout | oc adm must-gather --timeout='<time>' | The length of time to gather data before timing out, expressed as seconds, minutes, or hours, for example, 3s, 5m, or 2h. The time specified must be higher than zero. Defaults to 10 minutes if not specified.
--volume-percentage | oc adm must-gather --volume-percentage=<percent> | Specify the maximum percentage of the pod's allocated volume that can be used for must-gather. If this limit is exceeded, must-gather stops gathering, but still copies gathered data. Defaults to 30% if not specified.
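For example, a possible invocation that combines several of these flags writes the gathered data to a local directory named gather-output (an illustrative name), collects only logs from the last two hours, and times out after 15 minutes:

$ oc adm must-gather \
    --dest-dir ./gather-output \
    --since 2h \
    --timeout 15m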
5.1.3. Gathering data about specific features
You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.
Image | Purpose |
---|---|
| Data collection for OpenShift Virtualization. |
| Data collection for OpenShift Serverless. |
| Data collection for Red Hat OpenShift Service Mesh. |
| Data collection for the Migration Toolkit for Containers. |
| Data collection for logging. |
| Data collection for the Network Observability Operator. |
| Data collection for OpenShift Shared Resource CSI Driver. |
| Data collection for Red Hat OpenShift GitOps. |
| Data collection for the Secrets Store CSI Driver Operator. |
To determine the latest version for an OpenShift Dedicated component’s image, see the OpenShift Operator Life Cycles web page on the Red Hat Customer Portal.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- The OpenShift CLI (oc) is installed.
Procedure
- Navigate to the directory where you want to store the must-gather data.

- Run the oc adm must-gather command with one or more --image or --image-stream arguments.

  Note: To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument.

  For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization:
  $ oc adm must-gather \
      --image-stream=openshift/must-gather \ 1
      --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0 2

  - 1 - The default must-gather image.
  - 2 - The must-gather image for OpenShift Virtualization.
  You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Cluster Logging Operator in your cluster. For OpenShift Logging, run the following command:

  $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
      -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')
Example 5.1. Example
must-gather
output for OpenShift Logging├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── curator │ │ └── curator-1596028500-zkz4s │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── curator-1596021300-wn2ks.162634ebf0055a94.yaml │ │ │ ├── curator.162638330681bee2.yaml │ │ │ ├── elasticsearch-delete-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-delete-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-delete-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-rollover-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-rollover-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-rollover-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── curator-1596028500-zkz4s │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-delete-app-1596030300-bpgcx │ │ │ ├── elasticsearch-delete-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz 
│ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├── ...
- Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:

  $ oc adm must-gather \
      --image-stream=openshift/must-gather \ 1
      --image=quay.io/kubevirt/must-gather 2

  - 1 - The default must-gather image.
  - 2 - The must-gather image for KubeVirt.
- Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1

  - 1 - Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.

- Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.
5.2. Additional resources
5.2.1. Gathering network logs
You can gather network logs on all nodes in a cluster.
Procedure
- Run the oc adm must-gather command with -- gather_network_logs:

  $ oc adm must-gather -- gather_network_logs

  Note: By default, the must-gather tool collects the OVN nbdb and sbdb databases from all of the nodes in the cluster. Add the -- gather_network_logs option to include additional logs that contain OVN-Kubernetes transactions for the OVN nbdb database.

- Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1
  - 1 - Replace must-gather.local.472290403699006248 with the actual directory name.

- Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.
5.2.2. Changing the must-gather storage limit
When you use the oc adm must-gather command to collect data, the default maximum storage for the information is 30% of the storage capacity of the container. After the 30% limit is reached, the container is killed and the gathering process stops. Information that has already been gathered is downloaded to your local storage. To run the must-gather command again, you need either a container with more storage capacity or you must adjust the maximum volume percentage.
If the container reaches the storage limit, an error message similar to the following example is generated.
Example output
Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting...
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- The OpenShift CLI (oc) is installed.
Procedure
- Run the oc adm must-gather command with the --volume-percentage flag. The new value cannot exceed 100.

  $ oc adm must-gather --volume-percentage <storage_percentage>
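  For example, to let the gathered data use up to 70% of the pod's allocated volume, a possible invocation is:

  $ oc adm must-gather --volume-percentage=70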
5.3. Obtaining your cluster ID
When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Dedicated web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI (oc).
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have access to the web console or have installed the OpenShift CLI (oc).
Procedure
To manually obtain your cluster ID using OpenShift Cluster Manager:
- Navigate to Cluster List.
- Click on the name of the cluster you need to open a support case for.
- Find the value in the Cluster ID field of the Details section of the Overview tab.
To open a support case and have your cluster ID autofilled using the web console:
- From the toolbar, navigate to (?) Help and select Share Feedback from the list.
- Click Open a support case from the Tell us about your experience window.
To manually obtain your cluster ID using the web console:
- Navigate to Home → Overview.
- The value is available in the Cluster ID field of the Details section.
To obtain your cluster ID using the OpenShift CLI (oc), run the following command:

$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
5.4. Querying cluster node journal logs
You can gather journald unit logs and other logs within /var/log on individual cluster nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.

  Note: In OpenShift Dedicated deployments, customers who are not using the Customer Cloud Subscription (CCS) model cannot use the oc adm node-logs command because it requires cluster-admin privileges.

- You have installed the OpenShift CLI (oc).
Procedure
- Query kubelet journald unit logs from OpenShift Dedicated cluster nodes. The following example queries control plane nodes only:

  $ oc adm node-logs --role=master -u kubelet 1

  - 1 - Replace kubelet as appropriate to query other unit logs.
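  For example, to query the CRI-O container runtime unit logs from the same nodes, a possible invocation is:

  $ oc adm node-logs --role=master -u crio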
- Collect logs from specific subdirectories under /var/log/ on cluster nodes.

  - Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

    $ oc adm node-logs --role=master --path=openshift-apiserver
  - Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

    $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
5.5. Network trace methods
Collecting network traces, in the form of packet capture records, can assist Red Hat Support with troubleshooting network issues.
OpenShift Dedicated supports two ways of performing a network trace. Review the following table and choose the method that meets your needs.
Method | Benefits and capabilities
---|---
Collecting a host network trace | You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met. You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue.
Collecting a network trace from an OpenShift Dedicated node or container | You perform a packet capture on one node or one container. You run the tcpdump command from a debug pod. You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually. This method uses the cat command to copy the packet capture data from the node to the client machine.
5.5.1. Collecting a host network trace
Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time.
You can use a combination of the oc adm must-gather command and the registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues.
The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine.
The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time.
Prerequisites
- You are logged in to OpenShift Dedicated as a user with the cluster-admin role.

  Note: In OpenShift Dedicated deployments, customers who are not using the Customer Cloud Subscription (CCS) model cannot use the oc adm must-gather command because it requires cluster-admin privileges.

- You have installed the OpenShift CLI (oc).
Procedure
- Run a packet capture from the host network on some nodes by running the following command:

  $ oc adm must-gather \
      --dest-dir /tmp/captures \ 1
      --source-dir '/tmp/tcpdump/' \ 2
      --image registry.redhat.io/openshift4/network-tools-rhel8:latest \ 3
      --node-selector 'node-role.kubernetes.io/worker' \ 4
      --host-network=true \ 5
      --timeout 30s \ 6
      -- \
      tcpdump -i any \ 7
      -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300

  - 1 - The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory.
  - 2 - When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod.
  - 3 - The --image argument specifies a container image that includes the tcpdump command.
  - 4 - The --node-selector argument and example value specify to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes.
  - 5 - The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node.
  - 6 - The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes.
  - 7 - The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name.

- Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets.
- Review the packet capture files that oc adm must-gather transferred from the pods to your client machine:

  tmp/captures
  ├── event-filter.html
  ├── ip-10-0-192-217-ec2-internal 1
  │   └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca...
  │       └── 2022-01-13T19:31:31.pcap
  ├── ip-10-0-201-178-ec2-internal 2
  │   └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca...
  │       └── 2022-01-13T19:31:30.pcap
  ├── ip-...
  └── timestamp
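- Optionally, before you attach the capture files to a support case, you can sanity-check one of them locally, assuming the tcpdump utility (or a graphical tool such as Wireshark) is installed on your client machine:

  $ tcpdump -nn -r <path_to_capture_file>.pcap | head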
5.5.2. Collecting a network trace from an OpenShift Dedicated node or container
When investigating potential network-related OpenShift Dedicated issues, Red Hat Support might request a network packet trace from a specific OpenShift Dedicated cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Dedicated is through a debug pod.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.

  Note: In OpenShift Dedicated deployments, customers who are not using the Customer Cloud Subscription (CCS) model cannot use the oc debug command because it requires cluster-admin privileges.

- You have installed the OpenShift CLI (oc).
- You have an existing Red Hat Support case ID.
Procedure
- Obtain a list of cluster nodes:

  $ oc get nodes
- Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

  $ oc debug node/my-cluster-node
- Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

  # chroot /host
- From within the chroot environment console, obtain the node’s interface names:

  # ip ad
- Start a toolbox container, which includes the required binaries and plugins to run sosreport:

  # toolbox

  Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container.

- Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name:

  $ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1
  - 1 - The tcpdump capture file’s path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host.
- If a tcpdump capture is required for a specific container on the node, follow these steps.

  - Determine the target container ID. The chroot /host command precedes the crictl command in this step because the toolbox container mounts the host’s root directory at /host:

    # chroot /host crictl ps
  - Determine the container’s process ID. In this example, the container ID is a7fe32346b120:

    # chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print $2}'
  - Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container’s process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container’s process ID, the tcpdump command is run in the container’s namespace from the host:

    # nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1
    - 1 - The tcpdump capture file’s path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host.
- Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods.

  - Upload the file to an existing Red Hat support case directly from an OpenShift Dedicated cluster.
    - From within the toolbox container, run redhat-support-tool to attach the file directly to an existing Red Hat Support case. This example uses support case ID 01234567:

      # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap 1
      - 1 - The toolbox container mounts the host’s root directory at /host. Reference the absolute path from the toolbox container’s root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
  - Upload the file to an existing Red Hat support case.

    - Concatenate the tcpdump capture file by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:

      $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1
      - 1 - The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.

    - Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal.
    - Select Attach files and follow the prompts to upload the file.
5.5.3. Providing diagnostic data to Red Hat Support
When investigating OpenShift Dedicated issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal, or from an OpenShift Dedicated cluster directly by using the redhat-support-tool command.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.

  Note: In OpenShift Dedicated deployments, customers who are not using the Customer Cloud Subscription (CCS) model cannot use the oc debug command because it requires cluster-admin privileges.

- You have installed the OpenShift CLI (oc).
- You have an existing Red Hat Support case ID.
Procedure
- Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal.

  - Concatenate a diagnostic file contained on an OpenShift Dedicated node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz:

    $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1
    - 1 - The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.

  - Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal.
  - Select Attach files and follow the prompts to upload the file.
- Upload diagnostic data to an existing Red Hat support case directly from an OpenShift Dedicated cluster.

  - Obtain a list of cluster nodes:

    $ oc get nodes
  - Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

    $ oc debug node/my-cluster-node
  - Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host
  - Start a toolbox container, which includes the required binaries to run redhat-support-tool:

    # toolbox
    Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. To avoid issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container.

  - Run redhat-support-tool to attach a file from the debug pod directly to an existing Red Hat Support case. This example uses support case ID 01234567 and the example file path /host/var/tmp/my-diagnostic-data.tar.gz:

    # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz 1
    - 1 - The toolbox container mounts the host’s root directory at /host. Reference the absolute path from the toolbox container’s root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
5.5.4. About toolbox
toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport and redhat-support-tool.
The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image.
Installing packages to a toolbox container
By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.
Prerequisites
- You have accessed a node with the oc debug node/<node_name> command.
Procedure
- Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

  # chroot /host
- Start the toolbox container:

  # toolbox
- Install the additional package, such as wget:

  # dnf install -y <package_name>
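  For example, to install the wget package mentioned above, the invocation looks like this:

  # dnf install -y wget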
Starting an alternative image with toolbox
By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run.
Prerequisites
- You have accessed a node with the oc debug node/<node_name> command.
Procedure
- Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

  # chroot /host
- Create a .toolboxrc file in the home directory for the root user ID:

  # vi ~/.toolboxrc

  REGISTRY=quay.io 1
  IMAGE=fedora/fedora:33-x86_64 2
  TOOLBOX_NAME=toolbox-fedora-33 3
- Start a toolbox container with the alternative image:

  # toolbox

  Note: If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. To avoid issues with sosreport plugins, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container.