
Chapter 5. Gathering data about your cluster


When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.

It is recommended to provide:

  • Data gathered by using the oc adm must-gather command
  • The unique cluster ID

5.1. About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:

  • Resource definitions
  • Service logs

By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.

Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:

  • To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section.

    For example:

    $ oc adm must-gather  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.11.0
  • To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.

    For example:

    $ oc adm must-gather -- /usr/bin/gather_audit_logs
    Note

    Audit logs are not collected as part of the default set of information to reduce the size of the files.

When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.

For example:

NAMESPACE                      NAME                 READY   STATUS      RESTARTS      AGE
...
openshift-must-gather-5drcj    must-gather-bklx4    2/2     Running     0             72s
openshift-must-gather-5drcj    must-gather-s8sdh    2/2     Running     0             72s
...
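
While the gather is in progress, you can list these pods yourself; a minimal sketch (the grep filter is just one convenient way to narrow the output):

$ oc get pods --all-namespaces | grep must-gather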

5.1.1. Gathering data about your cluster for Red Hat Support

You can gather debugging information about your cluster by using the oc adm must-gather CLI command.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • The OpenShift Container Platform CLI (oc) installed.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.

    Note

    If your cluster is in a disconnected environment, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters in disconnected environments, you must import the default must-gather image as an image stream.

    $ oc import-image is/must-gather -n openshift
  2. Run the oc adm must-gather command:

    $ oc adm must-gather
    Important

    If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image.
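
    For example, a sketch of pointing --image at a mirrored must-gather image; the registry host, repository path, and tag are placeholders you must adapt to your mirror registry:

    $ oc adm must-gather --image=<local_registry>/<repository>/must-gather:<tag>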

    Note

    Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state.

    1. If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources.

      Note

      Contact Red Hat Support for the recommended resources to gather.
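
      For example, a minimal sketch of gathering data for a single cluster Operator with oc adm inspect; the Operator name and destination directory here are illustrative:

      $ oc adm inspect clusteroperator/kube-apiserver --dest-dir=./inspect.local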

  3. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ [1]

    [1] Replace must-gather.local.5421342344627712289/ with the actual directory name.
  4. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

5.1.2. Gathering data about specific features

You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.

Table 5.1. Supported must-gather images

  • registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.11.8 - Data collection for OpenShift Virtualization.
  • registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8 - Data collection for OpenShift Serverless.
  • registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:v<installed_version_service_mesh> - Data collection for Red Hat OpenShift Service Mesh.
  • registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v<installed_version_migration_toolkit> - Data collection for the Migration Toolkit for Containers.
  • registry.redhat.io/odf4/ocs-must-gather-rhel8:v<installed_version_ODF> - Data collection for Red Hat OpenShift Data Foundation.
  • registry.redhat.io/openshift-logging/cluster-logging-rhel8-operator - Data collection for OpenShift Logging.
  • registry.redhat.io/openshift4/ose-csi-driver-shared-resource-mustgather-rhel8 - Data collection for OpenShift Shared Resource CSI Driver.
  • registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel8:v<installed_version_LSO> - Data collection for Local Storage Operator.
  • registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:v<installed_version_sandboxed_containers> - Data collection for OpenShift sandboxed containers.
  • registry.redhat.io/workload-availability/self-node-remediation-must-gather-rhel8:v<installed_version_self_node> - Data collection for the Self Node Remediation Operator and the Node Health Check Operator.
  • registry.redhat.io/openshift4/ptp-must-gather-rhel8:v<installed-version-ptp> - Data collection for the PTP Operator.
  • registry.redhat.io/workload-availability/node-maintenance-must-gather-rhel8:v<installed_version_node_maintenance> - Data collection for the Node Maintenance Operator.
  • quay.io/openshift-pipeline/must-gather - Data collection for Red Hat OpenShift Pipelines.

Note

To determine the latest version for an OpenShift Container Platform component’s image, see the Red Hat OpenShift Container Platform Life Cycle Policy web page on the Red Hat Customer Portal.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • The OpenShift Container Platform CLI (oc) installed.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.
  2. Run the oc adm must-gather command with one or more --image or --image-stream arguments.

    Note
    • To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument.
    • For information on gathering data about the Custom Metrics Autoscaler, see the Additional resources section that follows.

    For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization:

    $ oc adm must-gather \
     --image-stream=openshift/must-gather \  [1]
     --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.11.8  [2]

    [1] The default OpenShift Container Platform must-gather image
    [2] The must-gather image for OpenShift Virtualization

    You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command:

    $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
     -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

    Example 5.1. Example must-gather output for OpenShift Logging

    ├── cluster-logging
    │  ├── clo
    │  │  ├── cluster-logging-operator-74dd5994f-6ttgt
    │  │  ├── clusterlogforwarder_cr
    │  │  ├── cr
    │  │  ├── csv
    │  │  ├── deployment
    │  │  └── logforwarding_cr
    │  ├── collector
    │  │  ├── fluentd-2tr64
    │  ├── eo
    │  │  ├── csv
    │  │  ├── deployment
    │  │  └── elasticsearch-operator-7dc7d97b9d-jb4r4
    │  ├── es
    │  │  ├── cluster-elasticsearch
    │  │  │  ├── aliases
    │  │  │  ├── health
    │  │  │  ├── indices
    │  │  │  ├── latest_documents.json
    │  │  │  ├── nodes
    │  │  │  ├── nodes_stats.json
    │  │  │  └── thread_pool
    │  │  ├── cr
    │  │  ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
    │  │  └── logs
    │  │     ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
    │  ├── install
    │  │  ├── co_logs
    │  │  ├── install_plan
    │  │  ├── olmo_logs
    │  │  └── subscription
    │  └── kibana
    │     ├── cr
    │     ├── kibana-9d69668d4-2rkvz
    ├── cluster-scoped-resources
    │  └── core
    │     ├── nodes
    │     │  ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml
    │     └── persistentvolumes
    │        ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml
    ├── event-filter.html
    ├── gather-debug.log
    └── namespaces
       ├── openshift-logging
       │  ├── apps
       │  │  ├── daemonsets.yaml
       │  │  ├── deployments.yaml
       │  │  ├── replicasets.yaml
       │  │  └── statefulsets.yaml
       │  ├── batch
       │  │  ├── cronjobs.yaml
       │  │  └── jobs.yaml
       │  ├── core
       │  │  ├── configmaps.yaml
       │  │  ├── endpoints.yaml
       │  │  ├── events
       │  │  │  ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml
       │  │  │  ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml
       │  │  │  ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml
       │  │  │  ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml
       │  │  │  ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml
       │  │  │  ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml
       │  │  ├── events.yaml
       │  │  ├── persistentvolumeclaims.yaml
       │  │  ├── pods.yaml
       │  │  ├── replicationcontrollers.yaml
       │  │  ├── secrets.yaml
       │  │  └── services.yaml
       │  ├── openshift-logging.yaml
       │  ├── pods
       │  │  ├── cluster-logging-operator-74dd5994f-6ttgt
       │  │  │  ├── cluster-logging-operator
       │  │  │  │  └── cluster-logging-operator
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  └── cluster-logging-operator-74dd5994f-6ttgt.yaml
       │  │  ├── cluster-logging-operator-registry-6df49d7d4-mxxff
       │  │  │  ├── cluster-logging-operator-registry
       │  │  │  │  └── cluster-logging-operator-registry
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml
       │  │  │  └── mutate-csv-and-generate-sqlite-db
       │  │  │     └── mutate-csv-and-generate-sqlite-db
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  │  ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
       │  │  ├── elasticsearch-im-app-1596030300-bpgcx
       │  │  │  ├── elasticsearch-im-app-1596030300-bpgcx.yaml
       │  │  │  └── indexmanagement
       │  │  │     └── indexmanagement
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  │  ├── fluentd-2tr64
       │  │  │  ├── fluentd
       │  │  │  │  └── fluentd
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  ├── fluentd-2tr64.yaml
       │  │  │  └── fluentd-init
       │  │  │     └── fluentd-init
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  │  ├── kibana-9d69668d4-2rkvz
       │  │  │  ├── kibana
       │  │  │  │  └── kibana
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  ├── kibana-9d69668d4-2rkvz.yaml
       │  │  │  └── kibana-proxy
       │  │  │     └── kibana-proxy
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  └── route.openshift.io
       │     └── routes.yaml
       └── openshift-operators-redhat
          ├── ...
  3. Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:

    $ oc adm must-gather \
     --image-stream=openshift/must-gather \  [1]
     --image=quay.io/kubevirt/must-gather  [2]

    [1] The default OpenShift Container Platform must-gather image
    [2] The must-gather image for KubeVirt
  4. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ [1]

    [1] Replace must-gather.local.5421342344627712289/ with the actual directory name.
  5. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

5.2. Additional resources

5.2.1. Gathering audit logs

You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities performed by individual users, administrators, or other components that have affected the system. You can gather audit logs for:

  • etcd server
  • Kubernetes API server
  • OpenShift OAuth API server
  • OpenShift API server

Procedure

  1. Run the oc adm must-gather command with -- /usr/bin/gather_audit_logs:

    $ oc adm must-gather -- /usr/bin/gather_audit_logs
  2. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 [1]

    [1] Replace must-gather.local.472290403699006248 with the actual directory name.
  3. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

5.2.2. Gathering network logs

You can gather network logs on all nodes in a cluster.

Procedure

  1. Run the oc adm must-gather command with -- gather_network_logs:

    $ oc adm must-gather -- gather_network_logs
  2. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 [1]

    [1] Replace must-gather.local.472290403699006248 with the actual directory name.
  3. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.

5.3. Obtaining your cluster ID

When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI (oc).

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • Access to the web console or the OpenShift CLI (oc) installed.

Procedure

  • To open a support case and have your cluster ID autofilled using the web console:

    1. From the toolbar, navigate to (?) Help → Open Support Case.
    2. The Cluster ID value is autofilled.
  • To manually obtain your cluster ID using the web console:

    1. Navigate to Home → Overview.
    2. The value is available in the Cluster ID field of the Details section.
  • To obtain your cluster ID using the OpenShift CLI (oc), run the following command:

    $ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'

5.4. About sosreport

sosreport is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis.

In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather.

The recommended way to generate a sosreport for an OpenShift Container Platform 4.11 cluster node is through a debug pod.

5.5. Generating a sosreport archive for an OpenShift Container Platform cluster node

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have SSH access to your hosts.
  • You have installed the OpenShift CLI (oc).
  • You have a Red Hat standard or premium Subscription.
  • You have a Red Hat Customer Portal account.
  • You have an existing Red Hat Support case ID.

Procedure

  1. Obtain a list of cluster nodes:

    $ oc get nodes
  2. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

    $ oc debug node/my-cluster-node

    To enter into a debug session on the target node that is tainted with the NoExecute effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace:

    $ oc new-project dummy
    $ oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}'
    $ oc debug node/my-cluster-node
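
    When you are done, you can remove the temporary namespace; a minimal cleanup sketch, assuming the dummy project name from the example above:

    $ oc delete project dummy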
  3. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host
    Note

    OpenShift Container Platform 4.11 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

  4. Start a toolbox container, which includes the required binaries and plugins to run sosreport:

    # toolbox
    Note

    If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins.

  5. Collect a sosreport archive.

    1. Run the sosreport command and enable the crio.all and crio.logs CRI-O container engine sosreport plugins:

      # sosreport -k crio.all=on -k crio.logs=on [1]

      [1] -k enables you to define sosreport plugin parameters outside of the defaults.
    2. Press Enter when prompted, to continue.
    3. Provide the Red Hat Support case ID. sosreport adds the ID to the archive’s file name.
    4. The sosreport output provides the archive’s location and checksum. The following sample output references support case ID 01234567:

      Your sosreport has been generated and saved in:
        /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz [1]

      The checksum is: 382ffc167510fd71b4f12a4f40b97a4e

      [1] The sosreport archive’s file path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host.
  6. Provide the sosreport archive to Red Hat Support for analysis, using one of the following methods.

    • Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.

      1. From within the toolbox container, run redhat-support-tool to attach the archive directly to an existing Red Hat support case. This example uses support case ID 01234567:

        # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz [1]

        [1] The toolbox container mounts the host’s root directory at /host. Reference the absolute path from the toolbox container’s root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
    • Upload the file to an existing Red Hat support case.

      1. Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:

        $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz [1]

        [1] The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.

        Note

        OpenShift Container Platform 4.11 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

      2. Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal.
      3. Select Attach files and follow the prompts to upload the file.

5.6. Querying bootstrap node journal logs

If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.

Prerequisites

  • You have SSH access to your bootstrap node.
  • You have the fully qualified domain name of the bootstrap node.

Procedure

  1. Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service
    Note

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  2. Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'

5.7. Querying cluster node journal logs

You can gather journald unit logs and other logs within /var/log on individual cluster nodes.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • Your API service is still functional.
  • You have installed the OpenShift CLI (oc).
  • You have SSH access to your hosts.

Procedure

  1. Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only:

    $ oc adm node-logs --role=master -u kubelet [1]

    [1] Replace kubelet as appropriate to query other unit logs.
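
    For example, a sketch querying the CRI-O unit logs on the same nodes (assuming the standard crio systemd unit name on RHCOS):

    $ oc adm node-logs --role=master -u crio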
  2. Collect logs from specific subdirectories under /var/log/ on cluster nodes.

    1. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver
    2. Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
    3. If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
      Note

      OpenShift Container Platform 4.11 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

5.8. Network trace methods

Collecting network traces, in the form of packet capture records, can assist Red Hat Support with troubleshooting network issues.

OpenShift Container Platform supports two ways of performing a network trace. Review the following table and choose the method that meets your needs.

Table 5.2. Supported methods of collecting a network trace

  • Method: Collecting a host network trace

    Benefits and capabilities: You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met.

    You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue.

  • Method: Collecting a network trace from an OpenShift Container Platform node or container

    Benefits and capabilities: You perform a packet capture on one node or one container. You run the tcpdump command interactively, so you can control the duration of the packet capture.

    You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually.

    This method uses the cat command and shell redirection to copy the packet capture data from the node or container to the client machine.

5.9. Collecting a host network trace

Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time.

You can use a combination of the oc adm must-gather command and the registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues.

The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine.

Tip

The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Run a packet capture from the host network on some nodes by running the following command:

    $ oc adm must-gather \
        --dest-dir /tmp/captures \  [1]
        --source-dir '/tmp/tcpdump/' \  [2]
        --image registry.redhat.io/openshift4/network-tools-rhel8:latest \  [3]
        --node-selector 'node-role.kubernetes.io/worker' \  [4]
        --host-network=true \  [5]
        --timeout 30s \  [6]
        -- \
        tcpdump -i any \  [7]
        -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300

    [1] The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory.
    [2] When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod.
    [3] The --image argument specifies a container image that includes the tcpdump command.
    [4] The --node-selector argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node, as shown in the sketch after this list. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes.
    [5] The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node.
    [6] The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes.
    [7] The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name.
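
    For comparison, a minimal sketch of the same capture restricted to a single node with --node-name; the node name is a placeholder:

    $ oc adm must-gather \
        --dest-dir /tmp/captures \
        --source-dir '/tmp/tcpdump/' \
        --image registry.redhat.io/openshift4/network-tools-rhel8:latest \
        --node-name <node_name> \
        --host-network=true \
        --timeout 30s \
        -- \
        tcpdump -i any -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300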

  2. Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets.
  3. Review the packet capture files that oc adm must-gather transferred from the pods to your client machine:

    tmp/captures
    ├── event-filter.html
    ├── ip-10-0-192-217-ec2-internal  [1]
    │   └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca...
    │       └── 2022-01-13T19:31:31.pcap
    ├── ip-10-0-201-178-ec2-internal  [1]
    │   └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca...
    │       └── 2022-01-13T19:31:30.pcap
    ├── ip-...
    └── timestamp

    [1] The packet captures are stored in directories that identify the hostname, container, and file name. If you did not specify the --node-selector argument, then the directory level for the hostname is not present.

5.10. Collecting a network trace from an OpenShift Container Platform node or container

When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the OpenShift CLI (oc).
  • You have a Red Hat standard or premium Subscription.
  • You have a Red Hat Customer Portal account.
  • You have an existing Red Hat Support case ID.
  • You have SSH access to your hosts.

Procedure

  1. Obtain a list of cluster nodes:

    $ oc get nodes
  2. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

    $ oc debug node/my-cluster-node
  3. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host
    Note

    OpenShift Container Platform 4.11 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

  4. From within the chroot environment console, obtain the node’s interface names:

    # ip ad
  5. Start a toolbox container, which includes the required binaries and plugins to run sosreport:

    # toolbox
    Note

    If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container.

  6. Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name:

    $ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap [1]

    [1] The tcpdump capture file’s path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host.
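
    After the capture completes, you can spot-check that the file is readable by replaying a few packets with tcpdump; a minimal sketch, assuming the example file name above (the timestamp portion is a placeholder):

    $ tcpdump -nn -r /host/var/tmp/my-cluster-node_<timestamp>.pcap -c 10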
  7. If a tcpdump capture is required for a specific container on the node, follow these steps.

    1. Determine the target container ID. The chroot host command precedes the crictl command in this step because the toolbox container mounts the host’s root directory at /host:

      # chroot /host crictl ps
    2. Determine the container’s process ID. In this example, the container ID is a7fe32346b120:

      # chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print $2}'
    3. Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container’s process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container’s process ID, the tcpdump command is run in the container’s namespace from the host:

      # nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap [1]

      [1] The tcpdump capture file’s path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host.
  8. Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods.

    • Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.

      1. From within the toolbox container, run redhat-support-tool to attach the file directly to an existing Red Hat Support case. This example uses support case ID 01234567:

        # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap [1]

        [1] The toolbox container mounts the host’s root directory at /host. Reference the absolute path from the toolbox container’s root directory, including /host/, when specifying files to upload through the redhat-support-tool command.
    • Upload the file to an existing Red Hat support case.

      1. Concatenate the tcpdump capture file by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the previous oc debug session:

        $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap [1]

        [1] The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.

        Note

        OpenShift Container Platform 4.11 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

      2. Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal.
      3. Select Attach files and follow the prompts to upload the file.

5.11. Providing diagnostic data to Red Hat Support

When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal, or from an OpenShift Container Platform cluster directly by using the redhat-support-tool command.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have SSH access to your hosts.
  • You have installed the OpenShift CLI (oc).
  • You have a Red Hat standard or premium Subscription.
  • You have a Red Hat Customer Portal account.
  • You have an existing Red Hat Support case ID.

Procedure

  • Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal.

    1. Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz:

      $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz [1]

      [1] The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.
      Note

      OpenShift Container Platform 4.11 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.

    2. Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal.
    3. Select Attach files and follow the prompts to upload the file.
  • Upload diagnostic data to an existing Red Hat support case directly from an OpenShift Container Platform cluster.

    1. Obtain a list of cluster nodes:

      $ oc get nodes
    2. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

      $ oc debug node/my-cluster-node
    3. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

      # chroot /host
      Note

      OpenShift Container Platform 4.11 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.

    4. Start a toolbox container, which includes the required binaries to run redhat-support-tool:

      # toolbox
      Note

      If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues.

    5. Run redhat-support-tool to attach a file from the debug pod directly to an existing Red Hat Support case. This example uses support case ID 01234567 and the example file path /host/var/tmp/my-diagnostic-data.tar.gz:

      # redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz [1]

      [1] The toolbox container mounts the host’s root directory at /host. Reference the absolute path from the toolbox container’s root directory, including /host/, when specifying files to upload through the redhat-support-tool command.

5.12. About toolbox

toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport and redhat-support-tool.

The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image.

Installing packages to a toolbox container

By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.

Prerequisites

  • You have accessed a node with the oc debug node/<node_name> command.

Procedure

  1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host
  2. Start the toolbox container:

    # toolbox
  3. Install the additional package, such as wget:

    # dnf install -y <package_name>
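
    For example, a minimal sketch that installs wget specifically and then verifies it is available:

    # dnf install -y wget
    # wget --version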

Starting an alternative image with toolbox

By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run.

Prerequisites

  • You have accessed a node with the oc debug node/<node_name> command.

Procedure

  1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host
  2. Create a .toolboxrc file in the home directory for the root user ID:

    # vi ~/.toolboxrc
    REGISTRY=quay.io                [1]
    IMAGE=fedora/fedora:33-x86_64   [2]
    TOOLBOX_NAME=toolbox-fedora-33  [3]

    [1] Optional: Specify an alternative container registry.
    [2] Specify an alternative image to start.
    [3] Optional: Specify an alternative name for the toolbox container.
  3. Start a toolbox container with the alternative image:

    # toolbox
    Note

    If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins.
