Using the Streams for Apache Kafka Console


Red Hat Streams for Apache Kafka 2.9

The Streams for Apache Kafka Console supports your deployment of Streams for Apache Kafka.

Abstract

Connect the console to a Kafka cluster that is managed by Streams for Apache Kafka and use it to monitor and manage the cluster.

Preface

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation.

To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.

Prerequisite

  • You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance.
    If you do not have an account, you will be prompted to create one.

Procedure

  1. Click the following link: Create issue.
  2. In the Summary text box, enter a brief description of the issue.
  3. In the Description text box, provide the following information:

    • The URL of the page where you found the issue.
    • A detailed description of the issue.
      You can leave any other fields at their default values.
  4. Add a reporter name.
  5. Click Create to submit the Jira issue to the documentation team.

Thank you for taking the time to provide feedback.

Chapter 1. Streams for Apache Kafka Console overview

The Streams for Apache Kafka Console provides a user interface to facilitate the administration of Kafka clusters, delivering real-time insights for monitoring, managing, and optimizing each cluster.

Connect a Kafka cluster managed by Streams for Apache Kafka to monitor it and optimize its performance from the console. The console’s homepage displays connected Kafka clusters, allowing you to access detailed information on components such as brokers, topics, partitions, and consumer groups.

From the console, you can view the status of a Kafka cluster before navigating to view information on the cluster’s brokers and topics, or the consumer groups connected to the Kafka cluster.

Chapter 2. Deploying the console

Deploy the console using the dedicated operator. After installing the operator, you can create instances of the console.

For each console instance, the operator needs a Prometheus instance to collect and display Kafka cluster metrics. You can configure the console to use an existing Prometheus source, like OpenShift’s built-in user workload monitoring. If no source is set, the operator creates a private Prometheus instance when the console is deployed. However, this default setup is not recommended for production and should only be used for development or evaluation purposes.

2.1. Deployment prerequisites

To deploy the console, you need the following:

  • An OpenShift 4.14 to 4.18 cluster.
  • The oc command-line tool is installed and configured to connect to the OpenShift cluster.
  • Access to the OpenShift cluster using an account with cluster-admin permissions, such as system:admin.
  • A Kafka cluster managed by Streams for Apache Kafka, running on the OpenShift cluster.

Example files are provided for installing a Kafka cluster managed by Streams for Apache Kafka, along with a Kafka user representing the console. These files offer the fastest way to set up and try the console, but you can also use your own Streams for Apache Kafka deployment.

2.1.1. Using your own Kafka cluster

If you use your own Streams for Apache Kafka deployment, verify the configuration by comparing it with the example deployment files provided with the console.

For each Kafka cluster, the Kafka resource used to install the cluster must be configured with the following:

  • Sufficient authorization for the console to connect
  • Metrics properties for the console to be able to display certain data

    The metrics configuration must match the properties specified in the example Kafka (console-kafka) and ConfigMap (console-kafka-metrics) resources.
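The relevant section of the Kafka resource looks like the following sketch, based on the example files; the ConfigMap key name shown here is illustrative and must match the key used in your console-kafka-metrics resource:

```yaml
# Excerpt from the example Kafka (console-kafka) resource.
# Kafka metrics are exposed through the Prometheus JMX Exporter,
# with the exporter rules read from the example ConfigMap.
spec:
  kafka:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: console-kafka-metrics
          key: kafka-metrics-config.yml   # illustrative key name
```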

2.1.2. Deploying a new Kafka cluster

If you already have Streams for Apache Kafka installed but want to create a new Kafka cluster for use with the console, example deployment resources are available to help you get started.

These resources create the following:

  • A Kafka cluster in KRaft mode with SCRAM-SHA-512 authentication.
  • A Strimzi KafkaNodePool resource to manage the cluster nodes.
  • A KafkaUser resource to enable authenticated and authorized console connections to the Kafka cluster.

The KafkaUser custom resource in the 040-KafkaUser-console-kafka-user1.yaml file includes the necessary ACL types to provide authorized access for the console to the Kafka cluster.

The minimum required ACL rules are configured as follows:

  • Describe, DescribeConfigs permissions for the cluster resource
  • Read, Describe, DescribeConfigs permissions for all topic resources
  • Read, Describe permissions for all group resources
Note

To ensure the console has the necessary access to function, a minimum level of authorization must be configured for the principal used in each Kafka cluster connection. The specific permissions may vary based on the authorization framework in use, such as ACLs, Keycloak authorization, OPA, or a custom solution.

When configuring the KafkaUser authentication and authorization, ensure they match the corresponding Kafka configuration:

  • KafkaUser.spec.authentication should match Kafka.spec.kafka.listeners[*].authentication.
  • KafkaUser.spec.authorization should match Kafka.spec.kafka.authorization.
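A minimal sketch of such a KafkaUser, assuming a SCRAM-SHA-512 listener and simple (ACL-based) authorization as in the example deployment, with the minimum ACL rules listed above:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-kafka-user1
  labels:
    strimzi.io/cluster: console-kafka   # the Kafka cluster this user belongs to
spec:
  authentication:
    type: scram-sha-512   # must match Kafka.spec.kafka.listeners[*].authentication
  authorization:
    type: simple          # must match Kafka.spec.kafka.authorization
    acls:
      # Describe, DescribeConfigs permissions for the cluster resource
      - resource:
          type: cluster
        operations: [Describe, DescribeConfigs]
      # Read, Describe, DescribeConfigs permissions for all topic resources
      - resource:
          type: topic
          name: "*"
          patternType: literal
        operations: [Read, Describe, DescribeConfigs]
      # Read, Describe permissions for all group resources
      - resource:
          type: group
          name: "*"
          patternType: literal
        operations: [Read, Describe]
```

Compare this sketch with the 040-KafkaUser-console-kafka-user1.yaml example file rather than using it verbatim.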

Prerequisites

  • An OpenShift 4.14 to 4.18 cluster.
  • Access to the OpenShift Container Platform web console using an account with cluster-admin permissions, such as system:admin.
  • The oc command-line tool is installed and configured to connect to the OpenShift cluster.

Procedure

  1. Download and extract the console installation artifacts.

    The artifacts are included with installation and example files available from the Streams for Apache Kafka software downloads page.

    The artifacts provide the deployment YAML files to install the Kafka cluster. Use the sample installation files located in examples/console/resources/kafka.

  2. Set environment variables to update the installation files:

    export NAMESPACE=kafka 1
    export LISTENER_TYPE=route 2
    export CLUSTER_DOMAIN=<domain_name> 3
    1
    The namespace in which you want to deploy the Kafka operator.
    2
    The listener type used to expose Kafka to the console.
    3
    The cluster domain name for your OpenShift cluster.

    In this example, the namespace variable is defined as kafka and the listener type is route.

  3. Install the Kafka cluster.

    Run the following command to apply the YAML files and deploy the Kafka cluster to the defined namespace:

    cat examples/console/resources/kafka/*.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -

    This command reads the YAML files, replaces the namespace environment variables, and applies the resulting configuration to the specified OpenShift namespace.

  4. Check the status of the deployment:

    oc get pods -n kafka

    Output shows the operators and cluster readiness

    NAME                              READY   STATUS   RESTARTS
    strimzi-cluster-operator          1/1     Running  0
    console-kafka-console-nodepool-0  1/1     Running  0
    console-kafka-console-nodepool-1  1/1     Running  0
    console-kafka-console-nodepool-2  1/1     Running  0

    • console-kafka is the name of the cluster.
    • console-nodepool is the name of the node pool.

      A node ID identifies the nodes created.

      With the default deployment, you install three nodes.

      READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

2.2. Installing the console operator

Install the console operator using one of the following methods:

  • From the OperatorHub in the OpenShift web console
  • Using the OpenShift CLI
  • By applying a Console Custom Resource Definition (CRD)

The recommended approach is to install the operator using either the OpenShift web console or the OpenShift CLI (oc), both of which are supported by the Operator Lifecycle Manager (OLM). If using the OLM is not suitable for your environment, you can install the operator by applying the CRD directly.

2.2.1. Installing the operator from the OperatorHub

This procedure describes how to install and subscribe to the Streams for Apache Kafka Console operator using the OperatorHub in the OpenShift Container Platform web console.

The procedure describes how to create a project and install the operator to that project. A project is a representation of a namespace. For manageability, it is a good practice to use namespaces to separate functions.

Warning

Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing the operator from the default alpha channel is generally safe. However, we do not recommend enabling automatic updates on the alpha channel, because an automatic upgrade skips any steps that must be completed before upgrading. Use automatic upgrades only on version-specific channels.

Prerequisites

Procedure

  1. Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation.

    We use a project named streams-kafka-console in this example.

  2. Navigate to the Operators > OperatorHub page.
  3. Scroll or type a keyword into the Filter by keyword box to find the Streams for Apache Kafka Console operator.

    The operator is located in the Streaming & Messaging category.

  4. Click Streams for Apache Kafka Console to display the operator information.
  5. Read the information about the operator and click Install.
  6. On the Install Operator page, choose from the following installation and update options:

    • Update Channel: Choose the update channel for the operator.

      • The (default) alpha channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable.
      • An amq-streams-X.x channel contains the minor and micro release updates for a major release, where X is the major release version number.
      • An amq-streams-X.Y.x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number.
    • Installation Mode: Install the operator to all namespaces in the OpenShift cluster.

      A single instance of the operator will watch and manage consoles created throughout the OpenShift cluster.

    • Update approval: By default, the Streams for Apache Kafka Console operator is automatically upgraded to the latest console version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information on operators, see the OpenShift documentation.
  7. Click Install to install the operator to your selected namespace.
  8. After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator has installed to the selected namespace.

    The status will show as Succeeded.

  9. Use the console operator to deploy the console and connect to a Kafka cluster.

2.2.2. Installing the operator using the OpenShift CLI

This procedure describes how to install the Streams for Apache Kafka Console operator using the OpenShift CLI (oc).

Prerequisites

Procedure

  1. Download and extract the console installation artifacts.

    The artifacts are included with installation and example files available from the Streams for Apache Kafka software downloads page.

    The artifacts provide the deployment YAML files to install the console.

  2. Set an environment variable to define the namespace where you want to install the operator:

    export NAMESPACE=operator-namespace

    In this example, the namespace variable is defined as operator-namespace.

  3. Install the console operator with the OLM.

    Use the sample installation files located in install/console-operator/olm. These files install the operator with cluster-wide scope, allowing it to manage console resources across all namespaces. Run the following command to apply the YAML files and deploy the operator to the defined namespace:

    cat install/console-operator/olm/*.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -

    This command reads the YAML files, replaces the namespace environment variables, and applies the resulting configuration to the specified OpenShift namespace.

  4. Check the status of the deployment:

    oc get pods -n operator-namespace

    Output shows the deployment name and readiness

    NAME              READY  STATUS   RESTARTS
    console-operator  1/1    Running  1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

  5. Use the console operator to deploy the console and connect to a Kafka cluster.

2.2.3. Deploying the console operator using a CRD

This procedure describes how to install the Streams for Apache Kafka Console operator using a Custom Resource Definition (CRD).

Prerequisites

Procedure

  1. Download and extract the console installation artifacts.

    The artifacts are included with installation and example files available from the Streams for Apache Kafka software downloads page.

    The artifacts include a Custom Resource Definition (CRD) file (console-operator.yaml) to install the operator without the OLM.

  2. Set an environment variable to define the namespace where you want to install the operator:

    export NAMESPACE=operator-namespace

    In this example, the namespace variable is defined as operator-namespace.

  3. Install the console operator with the CRD.

    Use the sample installation files located in install/console-operator/non-olm. These resources install the operator with cluster-wide scope, allowing it to manage console resources across all namespaces. Run the following command to apply the YAML file:

    cat install/console-operator/non-olm/console-operator.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -

    This command reads the YAML file, replaces the namespace environment variables, and applies the resulting configuration to the specified OpenShift namespace.

  4. Check the status of the deployment:

    oc get pods -n operator-namespace

    Output shows the deployment name and readiness

    NAME              READY  STATUS   RESTARTS
    console-operator  1/1    Running  1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

  5. Use the console operator to deploy the console and connect to a Kafka cluster.

2.3. Deploying and connecting the console to a Kafka cluster

Use the console operator to deploy the Streams for Apache Kafka Console to the same OpenShift cluster as a Kafka cluster managed by Streams for Apache Kafka. Use the console to connect to the Kafka cluster.

Prerequisites

Procedure

  1. Create a Console custom resource in the desired namespace.

    If you deployed the example Kafka cluster provided with the installation artifacts, you can use the configuration specified in the examples/console/resources/console/010-Console-example.yaml configuration file unchanged.

    Otherwise, configure the resource to connect to your Kafka cluster.

    Example console configuration

    apiVersion: console.streamshub.github.com/v1alpha1
    kind: Console
    metadata:
      name: my-console
    spec:
      hostname: my-console.<cluster_domain> 1
      kafkaClusters:
        - name: console-kafka 2
          namespace: kafka 3
          listener: secure 4
          properties:
            values: [] 5
            valuesFrom: [] 6
          credentials:
            kafkaUser:
              name: console-kafka-user1 7

    1
    Hostname to access the console by HTTP.
    2
    Name of the Kafka resource representing the cluster.
    3
    Namespace of the Kafka cluster.
    4
    Listener to expose the Kafka cluster for console connection.
    5
    (Optional) Add connection properties if needed.
    6
    (Optional) References to config maps or secrets, if needed.
    7
    (Optional) Kafka user created for authenticated access to the Kafka cluster.
  2. Apply the Console configuration to install the console.

    In this example, the console is deployed to the console-namespace namespace:

    kubectl apply -f examples/console/resources/console/010-Console-example.yaml -n console-namespace
  3. Check the status of the deployment:

    oc get pods -n console-namespace

    Output shows the deployment name and readiness

    NAME           READY  STATUS   RESTARTS
    console-kafka  1/1    Running  0

  4. Access the console.

    When the console is running, use the hostname specified in the Console resource (spec.hostname) to access the user interface.

2.3.1. Using an OIDC provider to secure access to Kafka clusters

Enable secure console connections to Kafka clusters using an OIDC provider. Configure the console deployment to connect to any Identity Provider (IdP) that supports OpenID Connect (OIDC), such as Keycloak or Dex, and define the subjects and roles for user authorization. Security profiles can be configured for all Kafka cluster connections on a global level, and you can also add roles and rules for specific Kafka clusters.

An example configuration is provided in the following file: examples/console/resources/console/console-security-oidc.yaml. The configuration introduces the following additional properties for console deployment:

security
Properties that define the connection details for the console to connect with the OIDC provider.
subjects
Specifies the subjects (users or groups) and their roles in terms of JWT claims or explicit subject names, determining access permissions.
roles
Defines the roles and associated access rules for users, specifying which resources (like Kafka clusters) they can interact with and what operations they are permitted to perform.

Example security configuration for all clusters

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  security:
    oidc:
      authServerUrl: <OIDC_discovery_URL> 1
      clientId: <client_id> 2
      clientSecret: 3
        valueFrom:
          secretKeyRef:
            name: my-oidc-secret
            key: client-secret
    subjects:
      - claim: groups 4
        include: 5
          - <team_name_1>
          - <team_name_2>
        roleNames: 6
          - developers
      - claim: groups
        include:
          - <team_name_3>
        roleNames:
          - administrators
      - include: 7
          - <user_1>
          - <user_2>
        roleNames:
          - administrators
    roles:
      - name: developers
        rules:
          - resources: 8
              - kafkas
            resourceNames: 9
              - <dev_cluster_a>
              - <dev_cluster_b>
            privileges: 10
              - '*'
      - name: administrators
        rules:
          - resources:
              - kafkas
            privileges:
              - '*'
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1

1
URL for OIDC provider discovery.
2
Client ID for OIDC authentication to identify the client.
3
Client secret used for authentication, referenced from a Secret.
4
JWT claim types or names to identify the users or groups.
5
Users or groups included under the specified claim.
6
Roles assigned to the specified users or groups.
7
Specific users included by name when no claim is specified.
8
Resources that the assigned role can access.
9
Specific resource names accessible by the assigned role.
10
Privileges granted to the assigned role for the specified resources.

If you want to specify roles and rules for individual Kafka clusters, add the details under kafkaClusters[].security.roles[]. In the following example, the console-kafka cluster allows developers to list and view selected Kafka resources. Administrators can also update certain resources.

Example security configuration for an individual cluster

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      credentials:
        kafkaUser:
          name: console-kafka-user1
      security:
        roles:
          - name: developers
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                privileges:
                  - get
                  - list
          - name: administrators
            rules:
              - resources:
                  - topics
                  - topics/records
                  - consumerGroups
                  - rebalances
                  - nodes/configs
                privileges:
                  - get
                  - list
              - resources:
                  - consumerGroups
                  - rebalances
                privileges:
                  - update

2.3.2. Enabling a metrics provider

Configure the console deployment to enable a metrics provider. You can use one of the following sources to scrape metrics from Kafka clusters with Prometheus:

  • OpenShift’s built-in user workload monitoring
    Use OpenShift’s workload monitoring, incorporating the Prometheus operator, to monitor console services and workloads without the need for an additional monitoring solution.
  • A standalone Prometheus instance
    Provide the details and credentials to connect with your own Prometheus instance.
  • An embedded Prometheus instance (default)
    Deploy a private Prometheus instance for use only by the console instance. The instance is configured to retrieve metrics from all Streams for Apache Kafka instances in the same OpenShift cluster. Using embedded metrics is intended for evaluation or development environments and should not be used in production scenarios.

Example configuration for OpenShift monitoring and a standalone Prometheus instance is provided in the following files:

  • examples/console/resources/console/console-openshift-metrics.yaml
  • examples/console/resources/console/console-standalone-prometheus.yaml

The configuration introduces the metricsSources properties for enabling monitoring. Use the type property to define the source:

  • openshift-monitoring
  • standalone (Prometheus)
  • embedded (Prometheus)

Assign the metrics source to a Kafka cluster using the kafkaClusters.metricsSource property. The openshift-monitoring and embedded types require no further configuration beyond the type itself.

Example metrics configuration for OpenShift monitoring

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-ocp-prometheus
      type: openshift-monitoring
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-ocp-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...

Example metrics configuration for standalone Prometheus monitoring

apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
  name: my-console
spec:
  hostname: my-console.<cluster_domain>
  # ...
  metricsSources:
    - name: my-custom-prometheus
      type: standalone
      url: <prometheus_instance_address> 1
      authentication: 2
        username: my-user
        password: my-password
      trustStore: 3
        type: JKS
        content:
          valueFrom:
            configMapKeyRef:
              name: my-prometheus-configmap
              key: ca.jks
        password: 4
          value: changeit
  kafkaClusters:
    - name: console-kafka
      namespace: kafka
      listener: secure
      metricsSource: my-custom-prometheus
      credentials:
        kafkaUser:
          name: console-kafka-user1
  # ...

1
URL of the standalone Prometheus instance for metrics collection.
2
Optional authentication credentials for accessing the Prometheus instance. Either username and password or token can be used.
3
Optional truststore configuration for SSL, with JKS content provided through a ConfigMap or Secret (secretKeyRef).
4
Optional password for the truststore, either directly provided or referenced. Not recommended for production.

Chapter 3. Navigating the Streams for Apache Kafka Console

When you open the Streams for Apache Kafka Console, the homepage presents a list of connected Kafka clusters. By clicking on a Kafka cluster name on the homepage or from the side menu, you can find information on the following components:

Kafka clusters
A group of Kafka brokers and management components.
Brokers
A broker contains topics and orchestrates the storage and passing of messages.
Topics
A topic provides a destination for the storage of data. Kafka splits each topic into one or more partitions.
Partitions
A subset of a topic used for data sharding and replication. The number of partitions is defined in the topic configuration.
Consumer groups
Kafka groups consumers with the same group ID and distributes messages across group members. Consumers within a group receive data from one or more partitions.

For example, you can view the status of a Kafka cluster before navigating to view information on the cluster’s brokers and topics, or the consumer groups connected to the Kafka cluster.

Note

If the side menu is not visible, click the hamburger menu (three horizontal lines) in the console header.

Chapter 4. HOME: Checking connected clusters

The homepage offers a snapshot of connected Kafka clusters, providing information on the Kafka version and associated project for each cluster. To find more information, log in to a cluster.

4.1. Logging in to a Kafka cluster

The console supports authenticated user login to a Kafka cluster using SCRAM-SHA-512 and OAuth 2.0 authentication mechanisms. For secure login, authentication must be configured in Streams for Apache Kafka.

Note

If authentication is not set up for a Kafka cluster or the credentials have been provided using the Kafka sasl.jaas.config property (which defines SASL authentication settings) in the console configuration, you can log in anonymously to the cluster without authentication.

Prerequisites

  • You must have access to an OpenShift Container Platform cluster.
  • The console must be deployed and set up to connect to a Kafka cluster.
  • For secure login, you must have appropriate authentication settings for the Kafka cluster and user.

    SCRAM-SHA-512 settings
  • Listener authentication set to scram-sha-512 in Kafka.spec.kafka.listeners[*].authentication.
  • Username and password configured in KafkaUser.spec.authentication.

    OAuth 2.0 settings
  • An OAuth 2.0 authorization server with client definitions for the Kafka cluster and users.
  • Listener authentication set to oauth in Kafka.spec.kafka.listeners[*].authentication.

For more information on configuring authentication, see the Streams for Apache Kafka documentation.
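For reference, the SCRAM-SHA-512 settings above correspond to a listener configured like the following sketch in the Kafka resource; the listener name, port, and type are illustrative and should match your own deployment:

```yaml
# Excerpt from a Kafka resource: a listener with SCRAM-SHA-512
# authentication that the console and its users can log in through.
spec:
  kafka:
    listeners:
      - name: secure
        port: 9093
        type: route          # listener type used to expose Kafka
        tls: true
        authentication:
          type: scram-sha-512
```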

Procedure

  1. From the homepage, click Login to cluster for a selected Kafka cluster.
  2. Enter login credentials depending on the authentication method used.

    • For SCRAM-SHA-512, enter the username and password associated with the KafkaUser.
    • For OAuth 2.0, provide a client ID and client secret that are valid for the OAuth provider configured for the Kafka listener.
  3. To end your session, click your username and then Logout or navigate back to the homepage.

Chapter 5. Cluster overview page

The Cluster overview page shows the status of a Kafka cluster. Here, you can assess the readiness of Kafka brokers, identify any cluster errors or warnings, and gain insights into the cluster’s health. At a glance, the page provides information on the number of topics and partitions within the cluster, along with their replication status. Explore cluster metrics through charts displaying used disk space, CPU utilization, and memory usage. Additionally, topic metrics offer a comprehensive view of total incoming and outgoing byte rates for all topics in the Kafka cluster.

5.1. Pausing reconciliation of clusters

Pause cluster reconciliations from the Cluster overview page by following these steps. While paused, any changes to the cluster configuration using the Kafka custom resource are ignored until reconciliation is resumed.

Procedure

  1. From the Streams for Apache Kafka Console, log in to the Kafka cluster that you want to connect to, then click Cluster overview and Pause reconciliation.
  2. Confirm the pause, after which the Cluster overview page shows a change of status warning that reconciliation is paused.
  3. Click Resume reconciliation to restart reconciliation.
Note

If the status change is not displayed after pausing reconciliation, try refreshing the page.

5.2. Accessing cluster connection details for client access

When connecting a client to a Kafka cluster, retrieve the necessary connection details from the Cluster overview page by following these steps.

Procedure

  1. From the Streams for Apache Kafka Console, log in to the Kafka cluster that you want to connect to, then click Cluster overview and Cluster connection details.
  2. Copy and add bootstrap address and connection properties to your Kafka client configuration to establish a connection with the Kafka cluster.
Note

Ensure that the authentication type used by the client matches the authentication type configured for the Kafka cluster.
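For example, a client connecting to a listener with SCRAM-SHA-512 authentication might use properties like the following sketch; the bootstrap address, username, and password are placeholders to be replaced with the values copied from the connection details:

```properties
# Illustrative Kafka client configuration for a SCRAM-SHA-512 listener.
bootstrap.servers=<bootstrap_address>
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<username>" \
  password="<password>";
```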

Chapter 6. Topics page

The Topics page shows all the topics created for a Kafka cluster. Use this page to check information on topics.

The Topics page shows the overall replication status for partitions in the topic, as well as counts for the partitions in the topic and the number of associated consumer groups. The overall storage used by the topic is also shown.

Warning

Internal topics must not be modified. You can choose to hide internal topics from the list of topics returned on the Topics page.

By clicking on a topic name, you can view additional topic information, which is presented on a series of tabs:

Messages
Messages shows the message log for a topic.
Partitions
Partitions shows the replication status of each partition in a topic.
Consumer groups
Consumer groups lists the names and status of the consumer groups and group members connected to a topic.
Configuration
Configuration shows the configuration of a topic.

If a topic is shown as Managed, it means that it is managed using the Streams for Apache Kafka Topic Operator and was not created directly in the Kafka cluster.

Use the information provided on the tabs to check and modify the configuration of your topics.

6.1. Checking topic messages

Track the flow of messages for a specific topic from the Messages tab. The Messages tab presents a chronological list of messages for a topic.

Procedure

  1. From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Topics.
  2. Click the name of the topic you want to check.
  3. Check the information on the Messages tab.

    For each message, you can see its timestamp (in UTC), offset, key, and value.

    By clicking on a message, you can see the full message details.

    Click the Manage columns icon (represented as two columns) to choose the information to display.

  4. Click the search dropdown and select the advanced search options to refine your search.

    Choose to display the latest messages or messages from a specified time or offset. You can display messages for all partitions or a specified partition.

    When you are done, you can click the CSV icon (represented as a CSV file) to download the information on the returned messages.

Refining your search

In this example, search terms are combined with message, retrieval, and partition options:

  • messages=timestamp:2024-03-01T00:00:00Z retrieve=50 partition=1 Error on page load where=value

The filter searches for the text "Error on page load" in partition 1 as a message value, starting from March 1, 2024, and retrieves up to 50 messages.

Search terms

Enter search terms as text (has the words) to find specific matches and define where in a message to look for the term. You can search anywhere in the message or narrow the search to the key, header, or value.

For example:

  • messages=latest retrieve=100 642-26-1594 where=key

This example searches the latest 100 messages for the message key 642-26-1594.

Message options

Set the starting point for returning messages.

  • Latest to start from the latest message.

    • messages=latest
  • Timestamp to start from an exact time and date in ISO 8601 format.

    • messages=timestamp:2024-03-14T00:00:00Z
  • Offset to start from an offset in a partition. In some cases, you may want to specify an offset without a partition. However, the most common scenario is to search by offset within a specific partition.

    • messages=offset:5600253 partition=0
  • Unix Timestamp to start from a time and date in Unix format.

    • messages=epoch:1
Retrieval options

Set a retrieval option.

  • Number of messages to return a specified number of messages.

    • messages=latest retrieve=50
  • Continuously to return the latest messages in real time. Click the pause button (represented by two vertical lines) to pause the refresh, and unpause to continue.

    • retrieve=continuously
Partition options
Choose to run a search against all partitions or a specific partition.
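The console search options broadly correspond to the standard Kafka CLI consumer. As a sketch only, the equivalent of a search such as `messages=offset:5600253 partition=0 retrieve=50` might look like the following, where the listener address `localhost:9092` and the topic name `my-topic` are assumptions:

```shell
# Read 50 messages from partition 0 of "my-topic", starting at offset 5600253,
# printing the timestamp, key, and offset alongside each value.
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic my-topic \
  --partition 0 \
  --offset 5600253 \
  --max-messages 50 \
  --property print.timestamp=true \
  --property print.key=true \
  --property print.offset=true
```

Unlike the console, the CLI consumer does not filter on message content; piping the output through a tool such as `grep` approximates a "has the words" search.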

6.2. Checking topic partitions

Check the partitions for a specific topic from the Partitions tab. The Partitions tab presents a list of partitions belonging to a topic.

Procedure

  1. From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Topics.
  2. Click the name of the topic you want to check from the Topics page.
  3. Check the information on the Partitions tab.

For each partition, you can see its replication status, as well as information on designated partition leaders, replica brokers, and the amount of data stored by the partition.

You can view partitions by replication status:

In-sync
All partitions in the topic are fully replicated. A partition is fully replicated when its replicas (followers) are in sync with the designated partition leader. Replicas are in sync if they have fetched records up to the log end offset of the leader partition within the allowable lag time, as determined by replica.lag.time.max.ms.
Under-replicated
A partition is under-replicated if some of its replicas (followers) are not in-sync. An under-replicated status signals potential issues in data replication.
Offline
Some or all partitions in the topic are currently unavailable. This may be due to issues such as broker failures or network problems, which need investigating and addressing.

You can also check information on the broker designated as partition leader and the brokers that contain the replicas:

Leader
The leader handles all produce requests. Followers on other brokers replicate the leader’s data. A follower is considered in-sync if it catches up with the leader’s latest committed message.
Preferred leader
When creating a new topic, Kafka’s leader election algorithm assigns a leader from the list of replicas for each partition, aiming for a balanced spread of leadership assignments. A "Yes" value indicates that the current leader is the preferred leader, suggesting a balanced leadership distribution. A "No" value may indicate an imbalance in leadership assignments that requires further investigation, as poorly balanced leadership can contribute to uneven load across brokers. A well-balanced Kafka cluster distributes leadership roles evenly across brokers.
Replicas
Followers that replicate the leader’s data. Replicas provide fault tolerance and data availability.
Note

Discrepancies in the distribution of data across brokers may indicate balancing issues in the Kafka cluster. If certain brokers are consistently handling larger amounts of data, it may indicate that partitions are not evenly distributed across the brokers. This could lead to uneven resource utilization and potentially impact the performance of those brokers.
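Replication status can also be cross-checked outside the console with the standard Kafka CLI tools. A minimal sketch, assuming the tools are on the path, a listener at `localhost:9092`, and a topic named `my-topic`:

```shell
# Full partition listing: leader, replicas, and in-sync replicas (ISR) per partition.
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic my-topic

# List only partitions whose ISR set is smaller than the replica set.
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --under-replicated-partitions

# List partitions that currently have no available leader.
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --unavailable-partitions
```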

6.3. Checking topic consumer groups

Check the consumer groups for a specific topic from the Consumer groups tab. The Consumer groups tab presents a list of consumer groups associated with a topic.

Procedure

  1. From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Topics.
  2. Click the name of the topic you want to check from the Topics page.
  3. Check the information on the Consumer groups tab.
  4. To check consumer group members, click the consumer group name.

For each consumer group, you can see its status, the overall consumer lag across all partitions, and the number of members. For more information on checking consumer groups, see Chapter 8, Consumer Groups page.

For each group member, you see the unique (consumer) client ID assigned to the consumer within the consumer group, overall consumer lag, and the number of assigned partitions. For more information on checking consumer group members, see Section 8.1, “Checking consumer group members”.

Note

Monitoring consumer group behavior is essential for ensuring optimal distribution of messages between consumers.
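The same group-level view is available from the Kafka CLI. A sketch, where the group name `my-group` and the listener address are assumptions:

```shell
# Per-partition committed offset, log end offset, and lag for the group.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-group

# List group members with their client IDs and partition assignments.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-group --members --verbose
```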

6.4. Checking topic configuration

Check the configuration of a specific topic from the Configuration tab. The Configuration tab presents a list of configuration values for the topic.

Procedure

  1. From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Topics.
  2. Click the name of the topic you want to check from the Topics page.
  3. Check the information on the Configuration tab.

You can filter for the properties you wish to check, including selecting by data source:

  • DEFAULT_CONFIG properties have a predefined default value. This value is used when there are no user-defined values for those properties.
  • STATIC_BROKER_CONFIG properties have predefined values that apply to the entire broker and, by extension, to all topics managed by that broker. This value is used when there are no user-defined values for those properties.
  • DYNAMIC_TOPIC_CONFIG property values have been configured for a specific topic and override the default configuration values.
Tip

The Streams for Apache Kafka Topic Operator simplifies the process of creating and managing Kafka topics using KafkaTopic resources.
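The same properties, together with their data sources, can be inspected from the CLI. A sketch, with the topic name `my-topic` and listener address as assumptions:

```shell
# Describe every configuration property for the topic; with --all, each
# property is reported with its source, such as DYNAMIC_TOPIC_CONFIG,
# STATIC_BROKER_CONFIG, or DEFAULT_CONFIG.
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --describe --all
```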

Chapter 7. Brokers page

The Brokers page shows all the brokers created for a Kafka cluster. For each broker, you can see its status, as well as the distribution of partitions across the brokers, including the number of partition leaders and followers.

The broker status is shown as one of the following:

Not Running
The broker has not yet been started or has been explicitly stopped.
Starting
The broker is initializing and connecting to the cluster, including discovering and joining the metadata quorum.
Recovery
The broker has joined the cluster but is in recovery mode, replicating necessary data and metadata before becoming fully operational. It is not serving clients.
Running
The broker is fully operational, registered with the controller, and actively serving client requests.
Pending Controlled Shutdown
The broker has initiated a controlled shutdown process and will shut down gracefully once complete.
Shutting Down
The broker is in the process of shutting down. Client connections are being closed, and internal resources are being released.
Unknown
The broker’s state is unknown, possibly due to an unexpected error or failure.

If the broker has a rack ID, this is the ID of the rack or datacenter in which the broker resides.

Click on the right arrow (>) next to a broker name to see more information about the broker, including its hostname and disk usage.

Click on the Rebalance tab to show any rebalances taking place on the cluster.

Note

Consider rebalancing if the distribution is uneven to ensure efficient resource utilization.

7.1. Managing rebalances

When you configure KafkaRebalance resources to generate optimization proposals on a cluster, you can check their status from the Rebalance tab. The Rebalance tab presents a chronological list of KafkaRebalance resources from which you can manage the optimization proposals.

Note

Cruise Control must be enabled to run alongside the Kafka cluster in order to use the Rebalance tab. For more information on setting up and using Cruise Control to generate proposals, see the Streams for Apache Kafka documentation.
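Cruise Control is enabled by adding a cruiseControl section to the Kafka custom resource managed by the operator. A minimal sketch, assuming a cluster resource named `my-cluster` in the current namespace (an empty section enables the default deployment):

```shell
# Add an empty cruiseControl section to the Kafka custom resource,
# prompting the operator to deploy Cruise Control alongside the cluster.
kubectl patch kafka my-cluster --type merge -p '{"spec":{"cruiseControl":{}}}'
```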

Procedure

  1. From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Brokers.
  2. Check the information on the Rebalance tab.

    For each rebalance, you can see its status and a timestamp in UTC.

    Table 7.1. Rebalance status descriptions

    New
    Resource has not been observed by the operator before.
    PendingProposal
    Optimization proposal has not yet been generated.
    ProposalReady
    Optimization proposal is ready for approval.
    Rebalancing
    Rebalance in progress.
    Stopped
    Rebalance stopped.
    NotReady
    Error occurred with the rebalance.
    Ready
    Rebalance complete.
    ReconciliationPaused
    Rebalance is paused.

    Note

    The status of the KafkaRebalance resource changes to ReconciliationPaused when the strimzi.io/pause-reconciliation annotation is set to true in its configuration.

  3. Click on the right arrow (>) next to a rebalance name to see more information about the rebalance, including its mode and whether auto-approval is enabled. If the rebalance involved brokers being removed or added, they are also listed.

Optimization proposals can be generated in one of three modes:

  • full is the default mode and runs a full rebalance.
  • add-brokers is the mode used after adding brokers when scaling up a Kafka cluster.
  • remove-brokers is the mode used before removing brokers when scaling down a Kafka cluster.

If auto-approval is enabled for a proposal, a successfully generated proposal goes straight into a cluster rebalance.

Viewing optimization proposals

Click on the name of a KafkaRebalance resource to see a generated optimization proposal. An optimization proposal is a summary of proposed changes that would produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers.

For more information on the properties shown on the proposal and what they mean, see the Streams for Apache Kafka documentation.

Managing rebalances

Select the options icon (three vertical dots) and click on an option to manage a rebalance.

  • Click Approve to approve a proposal.
    The rebalance outlined in the proposal is performed on the Kafka cluster.
  • Click Refresh to generate a fresh optimization proposal.
    If there has been a gap between generating a proposal and approving it, refresh the proposal so that the current state of the cluster is taken into account with a rebalance.
  • Click Stop to stop a rebalance.
    Rebalances can take a long time and may impact the performance of your cluster. Stopping a rebalance can help avoid performance issues and allow you to revert changes if needed.
Note

The options available depend on the status of the KafkaRebalance resource. For example, it’s not possible to approve an optimization proposal that is not ready.
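The same actions can be performed outside the console by annotating the KafkaRebalance resource directly. A sketch, where the resource name `my-rebalance` is an assumption:

```shell
# Approve a ready optimization proposal and start the rebalance.
kubectl annotate kafkarebalance my-rebalance strimzi.io/rebalance=approve

# Regenerate the optimization proposal against the current cluster state.
kubectl annotate kafkarebalance my-rebalance strimzi.io/rebalance=refresh

# Stop a rebalance that is in progress.
kubectl annotate kafkarebalance my-rebalance strimzi.io/rebalance=stop
```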

Chapter 8. Consumer Groups page

The Consumer Groups page shows all the consumer groups associated with a Kafka cluster. For each consumer group, you can see its status, the overall consumer lag across all partitions, and the number of members. Click on associated topics to show the topic information available from the Topics page tabs.

Consumer group status can be one of the following:

  • Stable indicates normal functioning.
  • Rebalancing indicates ongoing adjustments to the consumer group’s members.
  • Empty suggests no active members. If in the empty state, consider adding members to the group.

Check group members by clicking on a consumer group name. Select the options icon (three vertical dots) against a consumer group to reset consumer offsets.

8.1. Checking consumer group members

Check the members of a specific consumer group from the Consumer Groups page.

Procedure

  1. From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Consumer Groups.
  2. Click the name of the consumer group you want to check from the Consumer Groups page.
  3. Click on the right arrow (>) next to a member ID to see the topic partitions a member is associated with, as well as any possible consumer lag.

For each group member, you see the unique (consumer) client ID assigned to the consumer within the consumer group, overall consumer lag, and the number of assigned partitions.

Any consumer lag for a specific topic partition reflects the gap between the last message a consumer has picked up (committed offset position) and the latest message written by the producer (end offset position).

8.2. Resetting consumer offsets

Reset the consumer offsets of a specific consumer group from the Consumer Groups page.

You might want to do this when reprocessing old data, skipping unwanted messages, or recovering from downtime.

Prerequisites

All active members of the consumer group must be shut down before resetting the consumer offsets.

Procedure

  1. From the Streams for Apache Kafka Console, log in to the Kafka cluster, then click Consumer Groups.
  2. Click the options icon (three vertical dots) for the consumer group and click the reset consumer offset option to display the Reset consumer offset page.
  3. Choose to apply the offset reset to all consumer topics associated with the consumer group or select a specific topic.

    If you selected a topic, choose to apply the offset reset to all partitions or select a specific partition.

  4. Choose the position to reset the offset:

    • Custom offset
      If you selected custom offset, enter the custom offset value.
    • Latest offset
    • Earliest offset
    • Specific date and time
      If you selected date and time, choose the appropriate format and enter the date in that format.
  5. Click Reset to perform the offset reset.

Performing a dry run

Before executing the offset reset, you can use the dry run option to see which offsets would be reset without applying the change.

  1. From the Reset consumer offset page, click the down arrow next to Dry run.
  2. Choose the option to run and show the results in the console.
    Or you can copy the dry run command and run it independently against the consumer group.

The results in the console show the new offsets for each topic partition included in the reset operation.

A download option is available for the results.
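The dry run and reset can also be performed with the Kafka CLI. A sketch, where the group name, topic name, and listener address are assumptions; --dry-run prints the offsets that would be set without applying them, while --execute applies the reset:

```shell
# Preview the offsets that resetting to a date and time would produce.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-datetime 2024-03-01T00:00:00.000 --dry-run

# Apply the reset (all active members of the group must be shut down first).
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-datetime 2024-03-01T00:00:00.000 --execute
```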

Appendix A. Using your subscription

Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.

Accessing Your Account

  1. Go to access.redhat.com.
  2. If you do not already have an account, create one.
  3. Log in to your account.

Activating a Subscription

  1. Go to access.redhat.com.
  2. Navigate to My Subscriptions.
  3. Navigate to Activate a subscription and enter your 16-digit activation number.

Downloading Zip and Tar Files

To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.

  1. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
  2. Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
  3. Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
  4. Click the Download link for your component.

Installing packages with DNF

To install a package and all the package dependencies, use:

dnf install <package_name>

To install a previously-downloaded package from a local directory, use:

dnf install <path_to_download_package>

Revised on 2025-03-05 17:09:55 UTC

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.