
Getting Started with Streams for Apache Kafka on OpenShift


Red Hat Streams for Apache Kafka 3.1

Get started using Streams for Apache Kafka 3.1 on OpenShift Container Platform

Abstract

Try Streams for Apache Kafka by creating a Kafka cluster on OpenShift. Connect to the Kafka cluster, then send and receive messages from a Kafka topic.

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation.

To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.

Prerequisite

  • You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.

Procedure

  1. Click Create issue.
  2. In the Summary text box, enter a brief description of the issue.
  3. In the Description text box, provide the following information:

    • The URL of the page where you found the issue.
    • A detailed description of the issue.
      You can leave any other fields at their default values.
  4. Add a reporter name.
  5. Click Create to submit the Jira issue to the documentation team.

Thank you for taking the time to provide feedback.

Chapter 1. Getting started overview

Use Red Hat Streams for Apache Kafka to create and set up Kafka clusters, then connect your applications and services to those clusters.

This guide describes how to install and start using Streams for Apache Kafka on OpenShift Container Platform. You can install the Streams for Apache Kafka operator directly from the OperatorHub in the OpenShift web console. The Streams for Apache Kafka operator understands how to install and manage Kafka components. Installing from the OperatorHub provides a standard configuration of Streams for Apache Kafka that allows you to take advantage of automatic updates.

When the Streams for Apache Kafka operator is installed, it provides the resources to install instances of Kafka components. After installing a Kafka cluster, you can start producing and consuming messages.

Note

If you require more flexibility with your deployment, you can use the installation artifacts provided with Streams for Apache Kafka. For more information on using the installation artifacts, see Deploying and Managing Streams for Apache Kafka on OpenShift.

1.1. Prerequisites

The following prerequisites are required for getting started with Streams for Apache Kafka.

  • You have a Red Hat account.
  • JDK 11 or later is installed.
  • An OpenShift cluster is available. OpenShift versions 4.16 to 4.20 are tested; versions 4.12 and 4.14 are also supported.
  • The OpenShift oc command-line tool is installed and configured to connect to the running cluster.

The steps to get started are based on using the OperatorHub in the OpenShift web console, but you’ll also use the OpenShift oc CLI tool to perform certain operations. You’ll need to connect to your OpenShift cluster using the oc tool.

  • You can install the oc CLI tool from the web console by clicking the '?' help menu, then Command Line Tools.
  • You can copy the required oc login details from the web console by clicking your profile name, then Copy login command.

Chapter 2. Installing the Streams for Apache Kafka operator from the OperatorHub

You can install and subscribe to the Streams for Apache Kafka operator using the OperatorHub in the OpenShift Container Platform web console.

This procedure describes how to create a project and install the Streams for Apache Kafka operator to that project. A project is a representation of a namespace. For manageability, it is a good practice to use namespaces to separate functions.

Warning

Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing Streams for Apache Kafka from the default stable channel is generally safe. However, we do not recommend enabling automatic updates on the stable channel, because an automatic upgrade skips any steps that are necessary before the upgrade. Use automatic upgrades only on version-specific channels.

Prerequisites

  • Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions.

Procedure

  1. Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation.

    We use a project named streams-kafka in this example.

  2. Navigate to the Operators > OperatorHub page.
  3. Scroll or type a keyword into the Filter by keyword box to find the Streams for Apache Kafka operator.

    The operator is located in the Streaming & Messaging category.

  4. Click Streams for Apache Kafka to display the operator information.
  5. Read the information about the operator and click Install.
  6. On the Install Operator page, choose from the following installation and update options:

    • Update Channel: Choose the update channel for the operator.

      • The (default) stable channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable.
      • An amq-streams-X.x channel contains the minor and micro release updates for a major release, where X is the major release version number.
      • An amq-streams-X.Y.x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number.
    • Installation Mode: Choose the project you created to install the operator on a specific namespace.

      You can install the Streams for Apache Kafka operator to all namespaces in the cluster (the default option) or a specific namespace. We recommend that you dedicate a specific namespace to the Kafka cluster and other Streams for Apache Kafka components.

    • Update approval: By default, the Streams for Apache Kafka operator is automatically upgraded to the latest Streams for Apache Kafka version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information on operators, see the OpenShift documentation.
  7. Click Install to install the operator to your selected namespace.

    The Streams for Apache Kafka operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace.

  8. After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator has installed to the selected namespace.

    The status will show as Succeeded.

    You can now use the Streams for Apache Kafka operator to deploy Kafka components, starting with a Kafka cluster and node pools.
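The choices you make in the console correspond to an OLM Subscription resource. As a sketch only (the amq-streams package name and the redhat-operators catalog source are assumptions based on standard Red Hat operator catalogs; verify them in your cluster), an equivalent subscription with manual update approval might look like this:

```yaml
# Hypothetical OLM Subscription equivalent to the console choices above.
# Package and catalog names are assumptions; verify them in your cluster.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: streams-kafka
spec:
  channel: stable                 # update channel chosen in step 6
  name: amq-streams               # operator package name
  source: redhat-operators        # catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual     # or Automatic
```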

Note

If you navigate to Workloads > Deployments, you can see the deployment details for the Cluster Operator and Entity Operator. The name of the Cluster Operator includes a version number: amq-streams-cluster-operator-<version>. The name is different when deploying the Cluster Operator using the Streams for Apache Kafka installation artifacts. In this case, the name is strimzi-cluster-operator.

Chapter 3. Deploying Kafka components using the Streams for Apache Kafka operator

When installed on OpenShift, the Streams for Apache Kafka operator makes Kafka components available for installation from the user interface.

The following Kafka components are available for installation:

  • Kafka
  • Kafka Node Pool
  • Kafka Connect
  • Kafka MirrorMaker 2
  • Kafka Topic
  • Kafka User
  • Kafka Bridge
  • Kafka Connector
  • Kafka Rebalance

You select the component and create an instance. As a minimum, you create a Kafka instance and node pool. This procedure describes how to create a Kafka instance with separate node pools for brokers and controllers. You can configure the default installation specification before you perform the installation.

The process is the same for creating instances of other Kafka components.

Prerequisites

  • The Streams for Apache Kafka operator is installed from the OperatorHub (see Chapter 2).

Procedure

  1. Navigate in the web console to the Operators > Installed Operators page and click Streams for Apache Kafka to display the operator details.

    From Provided APIs, you can create instances of Kafka components.

  2. Click Create instance under Kafka to create a Kafka instance.

    By default, you’ll create a Kafka cluster called my-cluster:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
      annotations:
        strimzi.io/node-pools: enabled
        strimzi.io/kraft: enabled
    spec:
      kafka:
        config:
          offsets.topic.replication.factor: 3
          transaction.state.log.replication.factor: 3
          transaction.state.log.min.isr: 2
          default.replication.factor: 3
          min.insync.replicas: 2
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
          - name: tls
            port: 9093
            type: internal
            tls: true
        version: 4.1.0
        metadataVersion: 4.1
      entityOperator:
        topicOperator: {}
        userOperator: {}
  3. Click Create to start the installation of Kafka.

    The Kafka resource remains in a pending state until at least one node pool is created.

  4. Click Create instance under KafkaNodePool to create a node pool instance.

    Switch to YAML view and paste a minimal broker pool configuration with ephemeral storage:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaNodePool
    metadata:
      name: broker
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      replicas: 3
      roles:
        - broker
      storage:
        type: jbod
        volumes:
          - id: 0
            type: ephemeral
  5. Click Create instance under KafkaNodePool to create a second node pool instance.

    Switch to YAML view and paste a minimal controller pool configuration with ephemeral storage:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaNodePool
    metadata:
      name: controller
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      replicas: 3
      roles:
        - controller
      storage:
        type: jbod
        volumes:
          - id: 0
            type: ephemeral
            kraftMetadata: shared
  6. Select the Kafka page to show the installed Kafka clusters. Wait until the status of the Kafka cluster changes to Ready.
Note

These examples use ephemeral storage for evaluation only. For production deployments, configure persistent volumes.
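As a sketch of the production alternative mentioned in the note, the broker pool's storage section can use persistent-claim volumes instead of ephemeral ones. The size and deleteClaim values here are illustrative assumptions; choose what suits your environment:

```yaml
# Illustrative persistent storage for the broker node pool.
# Size and deleteClaim are example values, not recommendations.
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
```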

Chapter 4. Creating an OpenShift route to access a Kafka cluster

Create an OpenShift route to access a Kafka cluster outside of OpenShift.

This procedure describes how to expose a Kafka cluster to clients outside the OpenShift environment. After the Kafka cluster is exposed, external clients can produce and consume messages from the Kafka cluster.

To create an OpenShift route, a route listener is added to the configuration of a Kafka cluster installed on OpenShift.

Warning

An OpenShift Route address includes the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example, my-cluster-kafka-listener1-bootstrap-streams-kafka (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>). Be careful that the whole length of the address does not exceed a maximum limit of 63 characters.
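You can check the length before creating the listener. A minimal shell sketch, using the example names from this guide (my-cluster, listener1, streams-kafka):

```shell
# Compose the route name <cluster_name>-kafka-<listener_name>-bootstrap-<namespace>
# and verify that it stays within the 63-character limit.
cluster=my-cluster
listener=listener1
namespace=streams-kafka
name="${cluster}-kafka-${listener}-bootstrap-${namespace}"
echo "$name (${#name} characters)"
if [ "${#name}" -le 63 ]; then
  echo "OK: within the 63-character limit"
else
  echo "Too long: shorten the cluster, listener, or namespace name"
fi
```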

Prerequisites

  • A Kafka cluster is deployed and running on OpenShift (see Chapter 3).

Procedure

  1. Navigate in the web console to the Operators > Installed Operators page and select Streams for Apache Kafka to display the operator details.
  2. Select the Kafka page to show the installed Kafka clusters.
  3. Click the name of the Kafka cluster you are configuring to view its details.

    A Kafka cluster named my-cluster is used in this example.

  4. Switch to YAML view for the Kafka cluster my-cluster.
  5. Add route listener configuration to create an OpenShift route named listener1.

    The listener configuration must be set to the route type. You add the listener configuration under listeners in the Kafka configuration.

    External route listener configuration

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
      annotations:
        strimzi.io/node-pools: enabled
        strimzi.io/kraft: enabled
      namespace: streams-kafka
    spec:
      kafka:
        # ...
        listeners:
          # ...
          - name: listener1
            port: 9094
            type: route
            tls: true
    # ...

    The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example.

  6. Save the updated configuration.
  7. Select the Resources page for the Kafka cluster my-cluster to locate the connection information you will need for your client.

    From the Resources page, you’ll find details for the route listener and the public cluster certificate you need to connect to the Kafka cluster.

  8. Click the name of the my-cluster-kafka-listener1-bootstrap route created for the Kafka cluster to show the route details.
  9. Make a note of the hostname.

    The hostname is specified with port 443 in a Kafka client as the bootstrap address for connecting to the Kafka cluster.

    You can also locate the bootstrap address by navigating to Networking > Routes and selecting the streams-kafka project to display the routes created in the namespace.

    Or you can use the oc tool to extract the bootstrap details.

    Extracting bootstrap information

    oc get routes my-cluster-kafka-listener1-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'

  10. Navigate back to the Resources page and click the name of the my-cluster-cluster-ca-cert to show the secret details for accessing the Kafka cluster.

    The ca.crt certificate file contains the public certificate of the Kafka cluster.

    You will need the certificate to access the Kafka broker.

  11. Make a local copy of the ca.crt public certificate file.

    You can copy the details of the certificate or use the OpenShift oc tool to extract them.

    Extracting the public certificate

    oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt

  12. Create a local truststore for the public cluster certificate using keytool.

    Creating a local truststore

    keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt

    When prompted, create a password for accessing the truststore.

    The truststore is specified in a Kafka client for authenticating access to the Kafka cluster.

    You are now ready to start sending and receiving messages.
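Instead of passing each security setting as a separate command-line property, the same connection details can be collected in a client properties file. A sketch, with placeholder values that you substitute from your own route and truststore:

```properties
# client.properties — example connection settings.
# <route_hostname> and <truststore_password> are placeholders.
bootstrap.servers=<route_hostname>:443
security.protocol=SSL
ssl.truststore.location=client.truststore.jks
ssl.truststore.password=<truststore_password>
```

The Kafka console clients can load such a file with kafka-console-producer.sh --producer.config client.properties or kafka-console-consumer.sh --consumer.config client.properties.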

Chapter 5. Sending and receiving messages from a topic

Send messages to and receive messages from a Kafka cluster installed on OpenShift.

This procedure describes how to use Kafka clients to produce and consume messages. You can deploy clients to OpenShift or connect local Kafka clients to the OpenShift cluster. You can use either or both options to test your Kafka cluster installation. For the local clients, you access the Kafka cluster using an OpenShift route connection.

You will use the oc command-line tool to deploy and run the Kafka clients.

Prerequisites

  • The Streams for Apache Kafka operator is installed, and a Kafka cluster is deployed and running.

For a local producer and consumer:

  • The route bootstrap address and the truststore for the public cluster certificate, created in Chapter 4.

Sending and receiving messages from Kafka clients deployed to the OpenShift cluster

Deploy producer and consumer clients to the OpenShift cluster. You can then use the clients to send and receive messages from the Kafka cluster in the same namespace. The deployment uses the Streams for Apache Kafka container image for running Kafka.
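The steps below rely on the topic being created automatically when the producer first writes to it (assuming automatic topic creation is enabled, the Kafka default). Alternatively, you can create the topic declaratively with a KafkaTopic resource, which the Topic Operator reconciles. A sketch with illustrative partition and replica counts:

```yaml
# Illustrative declarative topic definition managed by the Topic Operator.
# Partition and replica counts are example values.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster  # binds the topic to the Kafka cluster
spec:
  partitions: 3
  replicas: 3
```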

  1. Use the oc command-line interface to deploy a Kafka producer.

    This example deploys a Kafka producer that connects to the Kafka cluster my-cluster.

    A topic named my-topic is created.

    Deploying a Kafka producer to OpenShift

    oc run kafka-producer -ti \
    --image=registry.redhat.io/amq-streams/kafka-41-rhel9:3.1.0 \
    --rm=true \
    --restart=Never \
    -- bin/kafka-console-producer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 \
    --topic my-topic

    Note

    If the connection fails, check that the Kafka cluster is running and the correct cluster name is specified as the bootstrap-server.

  2. From the command prompt, enter a number of messages.
  3. Navigate in the OpenShift web console to the Home > Projects page and select the streams-kafka project you created.
  4. From the list of pods, click kafka-producer to view the producer pod details.
  5. Select the Logs page to check that the messages you entered are present.
  6. Use the oc command-line interface to deploy a Kafka consumer.

    Deploying a Kafka consumer to OpenShift

    oc run kafka-consumer -ti \
    --image=registry.redhat.io/amq-streams/kafka-41-rhel9:3.1.0 \
    --rm=true \
    --restart=Never \
    -- bin/kafka-console-consumer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 \
    --topic my-topic \
    --from-beginning

    The consumer consumes the messages produced to my-topic.

  7. From the command prompt, confirm that you see the incoming messages in the consumer console.
  8. Navigate in the OpenShift web console to the Home > Projects page and select the streams-kafka project you created.
  9. From the list of pods, click kafka-consumer to view the consumer pod details.
  10. Select the Logs page to check that the messages you consumed are present.

Sending and receiving messages from Kafka clients running locally

Use a command-line interface to run a Kafka producer and consumer on a local machine.

  1. Download and extract the Streams for Apache Kafka <version> binaries from the Streams for Apache Kafka software downloads page.

    Unzip the amq-streams-<version>-bin.zip file to any destination.

  2. Open a command-line interface, and start the Kafka console producer with the topic my-topic and the authentication properties for TLS.

    Add the properties that are required for accessing the Kafka broker with an OpenShift route.

    • Use the hostname and port 443 for the OpenShift route you are using.
    • Use the password and reference to the truststore you created for the broker certificate.

      Starting a local Kafka producer

      kafka-console-producer.sh \
      --bootstrap-server my-cluster-kafka-listener1-bootstrap-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \
      --producer-property security.protocol=SSL \
      --producer-property ssl.truststore.password=password \
      --producer-property ssl.truststore.location=client.truststore.jks \
      --topic my-topic

  3. Type your message into the command-line interface where the producer is running.
  4. Press enter to send the message.
  5. Open a new command-line interface tab or window, and start the Kafka console consumer to receive the messages.

    Use the same connection details as the producer.

    Starting a local Kafka consumer

    kafka-console-consumer.sh \
    --bootstrap-server my-cluster-kafka-listener1-bootstrap-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \
    --consumer-property security.protocol=SSL \
    --consumer-property ssl.truststore.password=password \
    --consumer-property ssl.truststore.location=client.truststore.jks \
    --topic my-topic --from-beginning

  6. Confirm that you see the incoming messages in the consumer console.
  7. Press Ctrl+C to exit the Kafka console producer and consumer.

Chapter 6. Deploying the Streams for Apache Kafka Console

After you have deployed a Kafka cluster that’s managed by Streams for Apache Kafka, you can deploy and connect the Streams for Apache Kafka Console to the cluster. The console facilitates the administration of Kafka clusters, providing real-time insights for monitoring, managing, and optimizing each cluster from its user interface.

For more information on connecting to and using the Streams for Apache Kafka Console, see the console guide in the Streams for Apache Kafka documentation.

Appendix A. Using your subscription

Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.

A.1. Accessing Your Account

  1. Go to access.redhat.com.
  2. If you do not already have an account, create one.
  3. Log in to your account.

A.2. Activating a Subscription

  1. Go to access.redhat.com.
  2. Navigate to My Subscriptions.
  3. Navigate to Activate a subscription and enter your 16-digit activation number.

A.3. Downloading Zip and Tar Files

To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.

  1. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
  2. Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
  3. Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
  4. Click the Download link for your component.

A.4. Installing packages with DNF

To install a package and all the package dependencies, use:

dnf install <package_name>

To install a previously-downloaded package from a local directory, use:

dnf install <path_to_download_package>

Revised on 2026-02-10 16:21:59 UTC

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the OpenStack Foundation, used under license.
All other trademarks are the property of their respective owners.