
Chapter 24. Retrieving diagnostic and troubleshooting data


The report.sh diagnostics tool is a script provided by Red Hat to gather essential data for troubleshooting Streams for Apache Kafka deployments on OpenShift. It collects relevant logs, configuration files, and other diagnostic data to assist in identifying and resolving issues. When you run the script, you can specify additional parameters to retrieve specific data.

Prerequisites

  • Bash 4 or newer to run the script.
  • The OpenShift oc command-line tool is installed and configured to connect to the running cluster.

Logging in with the oc command-line tool establishes the authentication the script needs to interact with your cluster and retrieve the required diagnostic data.
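If you are not already logged in, you can authenticate before running the script. A minimal sketch, where the token, API server URL, and namespace are placeholders for your own cluster values:

```shell
# Log in to the OpenShift cluster; <token> and <api_server_url> are placeholders.
oc login --token=<token> --server=<api_server_url>

# Confirm the connection and the current user.
oc whoami

# Optionally confirm the target namespace is reachable.
oc get pods -n <cluster_namespace>
```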

Procedure

  1. Download and extract the tool.

    The diagnostics tool is available from the Streams for Apache Kafka software downloads page.

  2. From the directory where you extracted the tool, open a terminal and run the reporting tool:

    ./report.sh --namespace=<cluster_namespace> --cluster=<cluster_name> --out-dir=<local_output_directory>

    Replace <cluster_namespace> with the actual OpenShift namespace of your Streams for Apache Kafka deployment, <cluster_name> with the name of your Kafka cluster, and <local_output_directory> with the path to the local directory where you want to save the generated report. If you don’t specify a directory, a temporary directory is created.

    Include other optional reporting options, as necessary:

    --bridge=<string>
    Specify the name of the Kafka Bridge cluster to get data on its pods and logs.
    --connect=<string>
    Specify the name of the Kafka Connect cluster to get data on its pods and logs.
    --mm2=<string>
    Specify the name of the MirrorMaker 2 cluster to get data on its pods and logs.
    --secrets=(off|hidden|all)

    Specify the secret verbosity level. The default is hidden. The available options are as follows:

    • all: Secret keys and data values are reported.
    • hidden: Only secret keys are reported. Data values, such as passwords, are removed.
    • off: Secrets are not reported at all.

    Example request with data collection options

    ./report.sh --namespace=my-amq-streams-namespace --cluster=my-kafka-cluster --bridge=my-bridge-component --secrets=all --out-dir="$HOME/reports"

    Note

    If required, make the script executable with the chmod command. For example, chmod +x report.sh.

After the script has finished executing, the output directory contains files and directories of logs, configurations, and other diagnostic data collected for each component of your Streams for Apache Kafka deployment.
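As a first pass over the collected data, you can list the output and scan the gathered logs for error entries. A minimal sketch, assuming the report was written to $HOME/reports (the actual layout depends on the components present in your deployment):

```shell
# List everything the diagnostics tool collected.
ls -R "$HOME/reports"

# Scan the collected files for ERROR-level entries as a troubleshooting starting point.
grep -ri "ERROR" "$HOME/reports" | head -n 20
```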

Data collected by the reporting diagnostics tool

Data on the following components is returned if present:

Cluster Operator

  • Deployment YAML and logs
  • All related pods and their logs
  • YAML files for resources related to the Cluster Operator (ClusterRoles, ClusterRoleBindings)

Drain Cleaner (if present)

  • Deployment YAML and logs
  • Pod logs

Custom Resources

  • YAML files for Custom Resource Definitions (CRDs)
  • YAML files for all related custom resources (CRs)

Events

  • Events related to the specified namespace

Configurations

  • Kafka pod logs and configuration file (strimzi.properties)
  • ZooKeeper pod logs and configuration file (zookeeper.properties)
  • Entity Operator (Topic Operator, User Operator) pod logs
  • Cruise Control pod logs
  • Kafka Exporter pod logs
  • Bridge pod logs if specified in the options
  • Connect pod logs if specified in the options
  • MirrorMaker 2 pod logs if specified in the options

Secrets (if requested in the options)

  • YAML files for all secrets related to the specified Kafka cluster