Getting Started with Streams for Apache Kafka on OpenShift


Red Hat Streams for Apache Kafka 2.9

Get started using Streams for Apache Kafka 2.9 on OpenShift Container Platform

Abstract

Try Streams for Apache Kafka by creating a Kafka cluster on OpenShift. Connect to the Kafka cluster, then send and receive messages from a Kafka topic.

Preface

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation.

To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.

Prerequisite

  • You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance.
    If you do not have an account, you will be prompted to create one.

Procedure

  1. Click the following link: Create issue.
  2. In the Summary text box, enter a brief description of the issue.
  3. In the Description text box, provide the following information:

    • The URL of the page where you found the issue.
    • A detailed description of the issue.
      You can leave all other fields at their default values.
  4. Add a reporter name.
  5. Click Create to submit the Jira issue to the documentation team.

Thank you for taking the time to provide feedback.

Chapter 1. Getting started overview

Use Red Hat Streams for Apache Kafka to create and set up Kafka clusters, then connect your applications and services to those clusters.

This guide describes how to install and start using Streams for Apache Kafka on OpenShift Container Platform. You can install the Streams for Apache Kafka operator directly from the OperatorHub in the OpenShift web console. The Streams for Apache Kafka operator understands how to install and manage Kafka components. Installing from the OperatorHub provides a standard configuration of Streams for Apache Kafka that allows you to take advantage of automatic updates.

When the Streams for Apache Kafka operator is installed, it provides the resources to install instances of Kafka components. After installing a Kafka cluster, you can start producing and consuming messages.

Note

If you require more flexibility with your deployment, you can use the installation artifacts provided with Streams for Apache Kafka. For more information on using the installation artifacts, see Deploying and Managing Streams for Apache Kafka on OpenShift.

1.1. Prerequisites

The following prerequisites are required for getting started with Streams for Apache Kafka.

  • You have a Red Hat account.
  • JDK 11 or later is installed.
  • An OpenShift 4.14 or later cluster is available.
  • The OpenShift oc command-line tool is installed and configured to connect to the running cluster.

The steps to get started are based on using the OperatorHub in the OpenShift web console, but you’ll also use the OpenShift oc CLI tool to perform certain operations. You’ll need to connect to your OpenShift cluster using the oc tool.

  • You can install the oc CLI tool from the web console by clicking the '?' help menu, then Command Line Tools.
  • You can copy the required oc login details from the web console by clicking your profile name, then Copy login command.
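
For example, a copied login command has the following form, where the token and server URL are placeholders for your cluster's values:

Logging in from the command line

oc login --token=<token> --server=https://<api_server>:6443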

1.2. Additional resources

  • Deploying and Managing Streams for Apache Kafka on OpenShift

Chapter 2. Installing the Streams for Apache Kafka operator from the OperatorHub

You can install and subscribe to the Streams for Apache Kafka operator using the OperatorHub in the OpenShift Container Platform web console.

This procedure describes how to create a project and install the Streams for Apache Kafka operator to that project. A project is a representation of a namespace. For manageability, it is a good practice to use namespaces to separate functions.

Warning

Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing Streams for Apache Kafka from the default stable channel is generally safe. However, we do not recommend enabling automatic updates on the stable channel, because an automatic upgrade skips any steps that must be completed manually before upgrading. Use automatic upgrades only on version-specific channels.

Prerequisites

  • Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions.

Procedure

  1. Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation.

    We use a project named streams-kafka in this example.

  2. Navigate to the Operators > OperatorHub page.
  3. Scroll or type a keyword into the Filter by keyword box to find the Streams for Apache Kafka operator.

    The operator is located in the Streaming & Messaging category.

  4. Click Streams for Apache Kafka to display the operator information.
  5. Read the information about the operator and click Install.
  6. On the Install Operator page, choose from the following installation and update options:

    • Update Channel: Choose the update channel for the operator.

      • The (default) stable channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable.
      • An amq-streams-X.x channel contains the minor and micro release updates for a major release, where X is the major release version number.
      • An amq-streams-X.Y.x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number.
    • Installation Mode: Choose the project you created to install the operator in a specific namespace.

      You can install the Streams for Apache Kafka operator to all namespaces in the cluster (the default option) or a specific namespace. We recommend that you dedicate a specific namespace to the Kafka cluster and other Streams for Apache Kafka components.

    • Update approval: By default, the Streams for Apache Kafka operator is automatically upgraded to the latest Streams for Apache Kafka version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information on operators, see the OpenShift documentation.
  7. Click Install to install the operator to your selected namespace.

    The Streams for Apache Kafka operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace.

  8. After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator is installed in the selected namespace.

    The status will show as Succeeded.

    You can now use the Streams for Apache Kafka operator to deploy Kafka components, starting with a Kafka cluster.
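
You can also check the installation from the command line using the oc tool. This is a minimal sketch, assuming the operator was installed to the streams-kafka namespace used in this example:

Checking the operator status with the oc tool

# List the ClusterServiceVersions in the namespace; the PHASE column
# shows Succeeded when the operator is ready for use.
oc get csv -n streams-kafka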

Note

If you navigate to Workloads > Deployments, you can see the deployment details for the Cluster Operator and Entity Operator. The name of the Cluster Operator includes a version number: amq-streams-cluster-operator-<version>. The name is different when deploying the Cluster Operator using the Streams for Apache Kafka installation artifacts. In this case, the name is strimzi-cluster-operator.

Chapter 3. Deploying Kafka components using the Streams for Apache Kafka operator

When installed on OpenShift, the Streams for Apache Kafka operator makes Kafka components available for installation from the user interface.

The following Kafka components are available for installation:

  • Kafka
  • Kafka Connect
  • Kafka MirrorMaker
  • Kafka MirrorMaker 2
  • Kafka Topic
  • Kafka User
  • Kafka Bridge
  • Kafka Connector
  • Kafka Rebalance

You select the component and create an instance. At a minimum, you create a Kafka instance. This procedure describes how to create a Kafka instance using the default settings. You can configure the default installation specification before you perform the installation.

The process is the same for creating instances of other Kafka components.

Prerequisites

  • The Streams for Apache Kafka operator is installed from the OperatorHub, as described in Chapter 2.

Procedure

  1. Navigate in the web console to the Operators > Installed Operators page and click Streams for Apache Kafka to display the operator details.

    From Provided APIs, you can create instances of Kafka components.

  2. Click Create instance under Kafka to create a Kafka instance.

    By default, you’ll create a Kafka cluster called my-cluster with three Kafka broker nodes and three ZooKeeper nodes. The cluster uses ephemeral storage.

    Note

    If you prefer to create a Kafka cluster in KRaft mode, you can use one of the example KRaft-based deployment files provided with Streams for Apache Kafka. To use deployment files to install Streams for Apache Kafka, download and extract the files from the Streams for Apache Kafka software downloads page. Paste the configuration into the YAML view before starting the installation of the Kafka instance. A minimal sketch of a KRaft configuration is shown after this procedure.

  3. Click Create to start the installation of Kafka.

    Wait until the status changes to Ready.
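
If you choose the KRaft option described in the note above, the following is a minimal sketch of a KRaft-based configuration that uses node pools. It is modeled on the example deployment files; verify the exact resource definitions against the files you download before using them:

Example KRaft-based Kafka configuration (sketch)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - controller
    - broker
  storage:
    type: ephemeral
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 3
  entityOperator:
    topicOperator: {}
    userOperator: {}

Whichever configuration you use, you can also wait for readiness from the command line instead of watching the web console. This sketch assumes the streams-kafka namespace used in this example:

Waiting for the Kafka cluster to become ready

oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n streams-kafka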

Chapter 4. Creating an OpenShift route to access a Kafka cluster

Create an OpenShift route to access a Kafka cluster outside of OpenShift.

This procedure describes how to expose a Kafka cluster to clients outside the OpenShift environment. After the Kafka cluster is exposed, external clients can produce and consume messages from the Kafka cluster.

To create an OpenShift route, a route listener is added to the configuration of a Kafka cluster installed on OpenShift.

Warning

An OpenShift route address includes the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example, my-cluster-kafka-listener1-bootstrap-streams-kafka (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>). Ensure that the full length of the address does not exceed the 63-character limit.
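
For example, you can check the length of the generated bootstrap address name with a quick shell one-liner. The name below is the example used in this guide:

Checking the address length

# Count the characters in the generated bootstrap route name.
name=my-cluster-kafka-listener1-bootstrap-streams-kafka
echo ${#name}   # prints 50, which is within the 63-character limit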

Prerequisites

  • A Kafka cluster is deployed and running, as described in Chapter 3.

Procedure

  1. Navigate in the web console to the Operators > Installed Operators page and select Streams for Apache Kafka to display the operator details.
  2. Select the Kafka page to show the installed Kafka clusters.
  3. Click the name of the Kafka cluster you are configuring to view its details.

    We use a Kafka cluster named my-cluster in this example.

  4. Select the YAML page for the Kafka cluster my-cluster.
  5. Add route listener configuration to create an OpenShift route named listener1.

    The listener configuration must be set to the route type. You add the listener configuration under listeners in the Kafka configuration.

    External route listener configuration

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
      namespace: streams-kafka
    spec:
      kafka:
        # ...
        listeners:
          # ...
          - name: listener1
            port: 9094
            type: route
            tls: true
    # ...

    The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example.

  6. Save the updated configuration.
  7. Select the Resources page for the Kafka cluster my-cluster to locate the connection information you will need for your client.

    From the Resources page, you’ll find details for the route listener and the public cluster certificate you need to connect to the Kafka cluster.

  8. Click the name of the my-cluster-kafka-listener1-bootstrap route created for the Kafka cluster to show the route details.
  9. Make a note of the hostname.

    The hostname is specified with port 443 in a Kafka client as the bootstrap address for connecting to the Kafka cluster.

    You can also locate the bootstrap address by navigating to Networking > Routes and selecting the streams-kafka project to display the routes created in the namespace.

    Or you can use the oc tool to extract the bootstrap details.

    Extracting bootstrap information

    oc get routes my-cluster-kafka-listener1-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'

  10. Navigate back to the Resources page and click the name of the my-cluster-cluster-ca-cert secret to show the secret details for accessing the Kafka cluster.

    The ca.crt certificate file contains the public certificate of the Kafka cluster.

    You will need the certificate to access the Kafka broker.

  11. Make a local copy of the ca.crt public certificate file.

    You can copy the details of the certificate or use the OpenShift oc tool to extract them.

    Extracting the public certificate

    oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt

  12. Create a local truststore for the public cluster certificate using keytool.

    Creating a local truststore

    keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt

    When prompted, create a password for accessing the truststore.

    The truststore is specified in a Kafka client configuration so that the client trusts the broker's TLS certificate when connecting to the Kafka cluster.

    You are now ready to start sending and receiving messages.
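
Before configuring a Kafka client, you can optionally verify the TLS connection to the route. This is a sketch using openssl, where <route_host> is a placeholder for the hostname you noted earlier:

Verifying the TLS connection to the route (optional)

# Open a TLS connection to the route and validate the broker certificate
# against the extracted cluster CA; look for "Verify return code: 0 (ok)".
openssl s_client -connect <route_host>:443 -servername <route_host> -CAfile ca.crt < /dev/null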

Chapter 5. Sending and receiving messages from a topic

Send messages to and receive messages from a Kafka cluster installed on OpenShift.

This procedure describes how to use Kafka clients to produce and consume messages. You can deploy clients to OpenShift or connect local Kafka clients to the OpenShift cluster. You can use either or both options to test your Kafka cluster installation. For the local clients, you access the Kafka cluster using an OpenShift route connection.

You will use the oc command-line tool to deploy and run the Kafka clients.

Prerequisites

  • The Kafka cluster my-cluster is running in the streams-kafka namespace, as described in the previous chapters.

For a local producer and consumer:

  • A route is created for external access to the Kafka cluster, and you have noted the bootstrap hostname, as described in Chapter 4.
  • You have created the client.truststore.jks truststore for the public cluster certificate, as described in Chapter 4.

Sending and receiving messages from Kafka clients deployed to the OpenShift cluster

Deploy producer and consumer clients to the OpenShift cluster. You can then use the clients to send and receive messages from the Kafka cluster in the same namespace. The deployment uses the Streams for Apache Kafka container image for running Kafka.

  1. Use the oc command-line interface to deploy a Kafka producer.

    This example deploys a Kafka producer that connects to the Kafka cluster my-cluster. A topic named my-topic is created.

    Deploying a Kafka producer to OpenShift

    oc run kafka-producer -ti \
    --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 \
    --rm=true \
    --restart=Never \
    -- bin/kafka-console-producer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 \
    --topic my-topic

    Note

    If the connection fails, check that the Kafka cluster is running and that the correct cluster name is specified in the --bootstrap-server option.

  2. From the command prompt, enter a number of messages.
  3. Navigate in the OpenShift web console to the Home > Projects page and select the streams-kafka project you created.
  4. From the list of pods, click kafka-producer to view the producer pod details.
  5. Select the Logs page to check that the messages you entered are present.
  6. Use the oc command-line interface to deploy a Kafka consumer.

    Deploying a Kafka consumer to OpenShift

    oc run kafka-consumer -ti \
    --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 \
    --rm=true \
    --restart=Never \
    -- bin/kafka-console-consumer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 \
    --topic my-topic \
    --from-beginning

    The consumer consumes the messages produced to my-topic.

  7. From the command prompt, confirm that you see the incoming messages in the consumer console.
  8. Navigate in the OpenShift web console to the Home > Projects page and select the streams-kafka project you created.
  9. From the list of pods, click kafka-consumer to view the consumer pod details.
  10. Select the Logs page to check that the messages you consumed are present.
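
Optionally, you can confirm that the topic exists by running the kafka-topics.sh tool with the same oc run pattern used for the producer and consumer. This is a sketch; the pod name kafka-topics is arbitrary:

Listing topics from a client pod (optional)

oc run kafka-topics -ti \
--image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 \
--rm=true \
--restart=Never \
-- bin/kafka-topics.sh \
--bootstrap-server my-cluster-kafka-bootstrap:9092 \
--list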

Sending and receiving messages from Kafka clients running locally

Use a command-line interface to run a Kafka producer and consumer on a local machine.

  1. Download and extract the Streams for Apache Kafka <version> binaries from the Streams for Apache Kafka software downloads page.

    Unzip the amq-streams-<version>-bin.zip file to any destination.

  2. Open a command-line interface, and start the Kafka console producer with the topic my-topic and the authentication properties for TLS.

    Add the properties that are required for accessing the Kafka broker with an OpenShift route.

    • Use the hostname and port 443 for the OpenShift route you are using.
    • Use the password and reference to the truststore you created for the broker certificate.

      Starting a local Kafka producer

      kafka-console-producer.sh \
      --bootstrap-server my-cluster-kafka-listener1-bootstrap-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \
      --producer-property security.protocol=SSL \
      --producer-property ssl.truststore.password=password \
      --producer-property ssl.truststore.location=client.truststore.jks \
      --topic my-topic

  3. Type your message into the command-line interface where the producer is running.
  4. Press Enter to send the message.
  5. Open a new command-line interface tab or window, and start the Kafka console consumer to receive the messages.

    Use the same connection details as the producer.

    Starting a local Kafka consumer

    kafka-console-consumer.sh \
    --bootstrap-server my-cluster-kafka-listener1-bootstrap-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \
    --consumer-property security.protocol=SSL \
    --consumer-property ssl.truststore.password=password \
    --consumer-property ssl.truststore.location=client.truststore.jks \
    --topic my-topic --from-beginning

  6. Confirm that you see the incoming messages in the consumer console.
  7. Press Ctrl+C to exit the Kafka console producer and consumer.
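
If you run the local clients repeatedly, you can keep the shared TLS settings in a properties file instead of passing them on every command. This is a sketch; the file name client-ssl.properties is arbitrary, and <route_host> is a placeholder for the hostname you noted earlier:

Reusing client TLS settings from a properties file (sketch)

# Write the shared TLS client settings to a file.
cat > client-ssl.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=client.truststore.jks
ssl.truststore.password=password
EOF

# Pass the file to the console producer; the console consumer accepts
# the same file through its --consumer.config option.
kafka-console-producer.sh \
--bootstrap-server <route_host>:443 \
--producer.config client-ssl.properties \
--topic my-topic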

Chapter 6. Deploying the Streams for Apache Kafka Console

After you have deployed a Kafka cluster that’s managed by Streams for Apache Kafka, you can deploy and connect the Streams for Apache Kafka Console to the cluster. The console facilitates the administration of Kafka clusters, providing real-time insights for monitoring, managing, and optimizing each cluster from its user interface.

For more information on connecting to and using the Streams for Apache Kafka Console, see the console guide in the Streams for Apache Kafka documentation.

Appendix A. Using your subscription

Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.

Accessing Your Account

  1. Go to access.redhat.com.
  2. If you do not already have an account, create one.
  3. Log in to your account.

Activating a Subscription

  1. Go to access.redhat.com.
  2. Navigate to My Subscriptions.
  3. Navigate to Activate a subscription and enter your 16-digit activation number.

Downloading Zip and Tar Files

To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.

  1. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
  2. Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
  3. Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
  4. Click the Download link for your component.

Installing packages with DNF

To install a package and all the package dependencies, use:

dnf install <package_name>

To install a previously-downloaded package from a local directory, use:

dnf install <path_to_download_package>

Revised on 2025-03-14 17:23:00 UTC

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.