Getting Started with Streams for Apache Kafka on OpenShift
Get started using Streams for Apache Kafka 2.9 on OpenShift Container Platform
Abstract
This guide describes how to install Streams for Apache Kafka on OpenShift Container Platform, deploy a Kafka cluster, and start sending and receiving messages from client applications.
Preface
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation.
To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.
Prerequisite
- You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.
Procedure
- Click the following link: Create issue.
- In the Summary text box, enter a brief description of the issue.
- In the Description text box, provide the following information:
  - The URL of the page where you found the issue.
  - A detailed description of the issue.
  You can leave the information in any other fields at their default values.
- Add a reporter name.
- Click Create to submit the Jira issue to the documentation team.
Thank you for taking the time to provide feedback.
Chapter 1. Getting started overview
Use Red Hat Streams for Apache Kafka to create and set up Kafka clusters, then connect your applications and services to those clusters.
This guide describes how to install and start using Streams for Apache Kafka on OpenShift Container Platform. You can install the Streams for Apache Kafka operator directly from the OperatorHub in the OpenShift web console. The Streams for Apache Kafka operator understands how to install and manage Kafka components. Installing from the OperatorHub provides a standard configuration of Streams for Apache Kafka that allows you to take advantage of automatic updates.
When the Streams for Apache Kafka operator is installed, it provides the resources to install instances of Kafka components. After installing a Kafka cluster, you can start producing and consuming messages.
If you require more flexibility with your deployment, you can use the installation artifacts provided with Streams for Apache Kafka. For more information on using the installation artifacts, see Deploying and Managing Streams for Apache Kafka on OpenShift.
1.1. Prerequisites
The following prerequisites are required for getting started with Streams for Apache Kafka.
- You have a Red Hat account.
- JDK 11 or later is installed.
- An OpenShift 4.14 or later cluster is available.
- The OpenShift oc command-line tool is installed and configured to connect to the running cluster.
The steps to get started are based on using the OperatorHub in the OpenShift web console, but you’ll also use the OpenShift oc CLI tool to perform certain operations. You’ll need to connect to your OpenShift cluster using the oc tool.
- You can install the oc CLI tool from the web console by clicking the '?' help menu, then Command Line Tools.
- You can copy the required oc login details from the web console by clicking your profile name, then Copy login command.
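For example, after copying the login command from the web console, you can log in and confirm the connection from a terminal. This is a sketch only; the token and server URL are placeholders for the values copied from your cluster.
Logging in with the oc tool
# Log in using the token and API URL copied from the web console
oc login --token=<token> --server=https://<api_server_url>:6443
# Confirm your identity and the cluster connection
oc whoami
oc cluster-info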
Chapter 2. Installing the Streams for Apache Kafka operator from the OperatorHub
You can install and subscribe to the Streams for Apache Kafka operator using the OperatorHub in the OpenShift Container Platform web console.
This procedure describes how to create a project and install the Streams for Apache Kafka operator to that project. A project is a representation of a namespace. For manageability, it is a good practice to use namespaces to separate functions.
Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing Streams for Apache Kafka from the default stable channel is generally safe. However, we do not recommend enabling automatic updates on the stable channel, because an automatic upgrade skips any steps that must be completed before the upgrade. Use automatic upgrades only on version-specific channels.
Prerequisites
- Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions.
Procedure
- Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation. We use a project named streams-kafka in this example.
- Navigate to the Operators > OperatorHub page.
- Scroll or type a keyword into the Filter by keyword box to find the Streams for Apache Kafka operator.
The operator is located in the Streaming & Messaging category.
- Click Streams for Apache Kafka to display the operator information.
- Read the information about the operator and click Install.
- On the Install Operator page, choose from the following installation and update options:
- Update Channel: Choose the update channel for the operator.
- The (default) stable channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable.
- An amq-streams-X.x channel contains the minor and micro release updates for a major release, where X is the major release version number.
- An amq-streams-X.Y.x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number.
- Installation Mode: Choose the project you created to install the operator on a specific namespace.
You can install the Streams for Apache Kafka operator to all namespaces in the cluster (the default option) or a specific namespace. We recommend that you dedicate a specific namespace to the Kafka cluster and other Streams for Apache Kafka components.
- Update approval: By default, the Streams for Apache Kafka operator is automatically upgraded to the latest Streams for Apache Kafka version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information on operators, see the OpenShift documentation.
- Click Install to install the operator to your selected namespace.
The Streams for Apache Kafka operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace.
- After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator has installed to the selected namespace.
The status will show as Succeeded.
You can now use the Streams for Apache Kafka operator to deploy Kafka components, starting with a Kafka cluster.
If you navigate to Workloads > Deployments, you can see the deployment details for the Cluster Operator and Entity Operator. The name of the Cluster Operator includes a version number: amq-streams-cluster-operator-<version>. The name is different when deploying the Cluster Operator using the Streams for Apache Kafka installation artifacts. In this case, the name is strimzi-cluster-operator.
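You can also verify the installation from the command line. A quick sketch, assuming the operator was installed to the streams-kafka project used in this example:
Checking the operator installation with the oc tool
# The ClusterServiceVersion phase should report Succeeded
oc get csv -n streams-kafka
# List the operator deployments created in the namespace
oc get deployments -n streams-kafka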
Chapter 3. Deploying Kafka components using the Streams for Apache Kafka operator
When installed on OpenShift, the Streams for Apache Kafka operator makes Kafka components available for installation from the user interface.
The following Kafka components are available for installation:
- Kafka
- Kafka Connect
- Kafka MirrorMaker
- Kafka MirrorMaker 2
- Kafka Topic
- Kafka User
- Kafka Bridge
- Kafka Connector
- Kafka Rebalance
You select the component and create an instance. As a minimum, you create a Kafka instance. This procedure describes how to create a Kafka instance using the default settings. You can configure the default installation specification before you perform the installation.
The process is the same for creating instances of other Kafka components.
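For reference, creating an instance from the Provided APIs is equivalent to applying a custom resource to the namespace. As an illustration (not one of the packaged examples), a minimal KafkaTopic resource for a cluster named my-cluster might look like the following sketch; the topic name, partition count, and replica count are placeholder values.
Example KafkaTopic resource
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: streams-kafka
  labels:
    strimzi.io/cluster: my-cluster # binds the topic to the Kafka cluster
spec:
  partitions: 3
  replicas: 3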
Prerequisites
- The Streams for Apache Kafka operator is installed on the OpenShift cluster.
Procedure
- Navigate in the web console to the Operators > Installed Operators page and click Streams for Apache Kafka to display the operator details.
From Provided APIs, you can create instances of Kafka components.
- Click Create instance under Kafka to create a Kafka instance.
By default, you’ll create a Kafka cluster called my-cluster with three Kafka broker nodes and three ZooKeeper nodes. The cluster uses ephemeral storage.
Note: If you prefer to create a Kafka cluster in KRaft mode, you can use one of the example KRaft-based deployment files provided with Streams for Apache Kafka. To use deployment files to install Streams for Apache Kafka, download and extract the files from the Streams for Apache Kafka software downloads page. Paste the configuration in the YAML view before starting the installation of the Kafka instance.
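As an illustration of what such a configuration looks like, the following sketch is modeled on the upstream Strimzi KRaft examples rather than copied from the packaged deployment files; names, replica counts, and listener settings are example values. A KRaft-based deployment pairs the Kafka resource with a KafkaNodePool resource, and the annotations on the Kafka resource enable node pools and KRaft mode.
Example KRaft-based Kafka configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - controller # nodes act as both KRaft controllers and brokers
    - broker
  storage:
    type: ephemeral
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
  entityOperator:
    topicOperator: {}
    userOperator: {}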
- Click Create to start the installation of Kafka.
- Wait until the status changes to Ready.
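You can also watch the status from the command line. A quick sketch, assuming the cluster is named my-cluster in the streams-kafka namespace:
Checking the Kafka cluster status with the oc tool
# Block until the Kafka resource reports the Ready condition
oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n streams-kafka
# List the pods created for the cluster
oc get pods -n streams-kafka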
Chapter 4. Creating an OpenShift route to access a Kafka cluster
Create an OpenShift route to access a Kafka cluster outside of OpenShift.
This procedure describes how to expose a Kafka cluster to clients outside the OpenShift environment. After the Kafka cluster is exposed, external clients can produce and consume messages from the Kafka cluster.
To create an OpenShift route, a route listener is added to the configuration of a Kafka cluster installed on OpenShift.
An OpenShift route address includes the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example, my-cluster-kafka-listener1-bootstrap-streams-kafka (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>). Be careful that the whole length of the address does not exceed the maximum limit of 63 characters.
Prerequisites
- You have created a Kafka cluster on OpenShift.
- You need the OpenJDK keytool to manage certificates.
- (Optional) You can perform some of the steps using the OpenShift oc CLI tool.
Procedure
- Navigate in the web console to the Operators > Installed Operators page and select Streams for Apache Kafka to display the operator details.
- Select the Kafka page to show the installed Kafka clusters.
- Click the name of the Kafka cluster you are configuring to view its details. We use a Kafka cluster named my-cluster in this example.
- Select the YAML page for the Kafka cluster my-cluster.
- Add route listener configuration to create an OpenShift route named listener1.
The listener configuration must be set to the route type. You add the listener configuration under listeners in the Kafka configuration.
External route listener configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: streams-kafka
spec:
  kafka:
    # ...
    listeners:
      # ...
      - name: listener1
        port: 9094
        type: route
        tls: true
    # ...
The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example.
- Save the updated configuration.
- Select the Resources page for the Kafka cluster my-cluster to locate the connection information you will need for your client.
From the Resources page, you’ll find details for the route listener and the public cluster certificate you need to connect to the Kafka cluster.
- Click the name of the my-cluster-kafka-listener1-bootstrap route created for the Kafka cluster to show the route details.
- Make a note of the hostname.
The hostname is specified with port 443 in a Kafka client as the bootstrap address for connecting to the Kafka cluster.
You can also locate the bootstrap address by navigating to Networking > Routes and selecting the streams-kafka project to display the routes created in the namespace.
Or you can use the oc tool to extract the bootstrap details.
Extracting bootstrap information
oc get routes my-cluster-kafka-listener1-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'
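Optionally, you can confirm that the route's TLS endpoint is reachable before configuring a client. This sketch uses openssl, with <route_hostname> standing in for the address returned above.
Probing the route endpoint
# Open a TLS connection to the route; the broker certificate chain is printed on success
openssl s_client -connect <route_hostname>:443 -servername <route_hostname> < /dev/null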
- Navigate back to the Resources page and click the name of the my-cluster-cluster-ca-cert secret to show the secret details for accessing the Kafka cluster.
The ca.crt certificate file contains the public certificate of the Kafka cluster. You will need the certificate to access the Kafka broker.
- Make a local copy of the ca.crt public certificate file.
You can copy the details of the certificate or use the OpenShift oc tool to extract them.
Extracting the public certificate
oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt
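Optionally, inspect the extracted file to confirm that it contains the cluster CA certificate and to check its validity period.
Inspecting the public certificate
# Print the subject and validity dates of the CA certificate
openssl x509 -in ca.crt -noout -subject -dates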
- Create a local truststore for the public cluster certificate using keytool.
Creating a local truststore
keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt
When prompted, create a password for accessing the truststore.
The truststore is specified in a Kafka client for authenticating access to the Kafka cluster.
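Instead of passing each TLS setting on the command line, as the client examples in the next chapter do, you can collect the settings in a properties file and pass it to the Kafka console clients with the --producer.config or --consumer.config option. The following is a sketch only; the file name is arbitrary and the password is a placeholder for the one you chose above.
Example client TLS properties file
# client-ssl.properties
security.protocol=SSL
ssl.truststore.location=client.truststore.jks
ssl.truststore.password=<truststore_password>
You would then start a producer with, for example, kafka-console-producer.sh --producer.config client-ssl.properties along with the bootstrap address and topic.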
You are now ready to start sending and receiving messages.
Chapter 5. Sending and receiving messages from a topic
Send messages to and receive messages from a Kafka cluster installed on OpenShift.
This procedure describes how to use Kafka clients to produce and consume messages. You can deploy clients to OpenShift or connect local Kafka clients to the OpenShift cluster. You can use either or both options to test your Kafka cluster installation. For the local clients, you access the Kafka cluster using an OpenShift route connection.
You will use the oc command-line tool to deploy and run the Kafka clients.
Prerequisites
- You have created a Kafka cluster on OpenShift.
For a local producer and consumer:
- You have created a route for external access to the Kafka cluster running in OpenShift.
- You can access the latest Kafka client binaries from the Streams for Apache Kafka software downloads page.
Sending and receiving messages from Kafka clients deployed to the OpenShift cluster
Deploy producer and consumer clients to the OpenShift cluster. You can then use the clients to send and receive messages from the Kafka cluster in the same namespace. The deployment uses the Streams for Apache Kafka container image for running Kafka.
- Use the oc command-line interface to deploy a Kafka producer.
This example deploys a Kafka producer that connects to the Kafka cluster my-cluster. A topic named my-topic is created.
Deploying a Kafka producer to OpenShift
oc run kafka-producer -ti \
  --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 \
  --rm=true \
  --restart=Never \
  -- bin/kafka-console-producer.sh \
  --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --topic my-topic
Note: If the connection fails, check that the Kafka cluster is running and the correct cluster name is specified as the bootstrap-server.
- From the command prompt, enter a number of messages.
- Navigate in the OpenShift web console to the Home > Projects page and select the streams-kafka project you created.
- From the list of pods, click kafka-producer to view the producer pod details.
- Select the Logs page to check that the messages you entered are present.
- Use the oc command-line interface to deploy a Kafka consumer.
Deploying a Kafka consumer to OpenShift
oc run kafka-consumer -ti \
  --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 \
  --rm=true \
  --restart=Never \
  -- bin/kafka-console-consumer.sh \
  --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --topic my-topic \
  --from-beginning
The consumer consumes messages produced to my-topic.
- From the command prompt, confirm that you see the incoming messages in the consumer console.
- Navigate in the OpenShift web console to the Home > Projects page and select the streams-kafka project you created.
- From the list of pods, click kafka-consumer to view the consumer pod details.
- Select the Logs page to check the messages you consumed are present.
Sending and receiving messages from Kafka clients running locally
Use a command-line interface to run a Kafka producer and consumer on a local machine.
- Download and extract the Streams for Apache Kafka <version> binaries from the Streams for Apache Kafka software downloads page.
Unzip the amq-streams-<version>-bin.zip file to any destination.
- Open a command-line interface, and start the Kafka console producer with the topic my-topic and the authentication properties for TLS.
Add the properties that are required for accessing the Kafka broker with an OpenShift route.
- Use the hostname and port 443 for the OpenShift route you are using.
- Use the password and reference to the truststore you created for the broker certificate.
Starting a local Kafka producer
kafka-console-producer.sh \
  --bootstrap-server my-cluster-kafka-listener1-bootstrap-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \
  --producer-property security.protocol=SSL \
  --producer-property ssl.truststore.password=password \
  --producer-property ssl.truststore.location=client.truststore.jks \
  --topic my-topic
- Type your message into the command-line interface where the producer is running.
- Press Enter to send the message.
- Open a new command-line interface tab or window, and start the Kafka console consumer to receive the messages.
Use the same connection details as the producer.
Starting a local Kafka consumer
kafka-console-consumer.sh \
  --bootstrap-server my-cluster-kafka-listener1-bootstrap-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \
  --consumer-property security.protocol=SSL \
  --consumer-property ssl.truststore.password=password \
  --consumer-property ssl.truststore.location=client.truststore.jks \
  --topic my-topic --from-beginning
- Confirm that you see the incoming messages in the consumer console.
- Press Ctrl+C to exit the Kafka console producer and consumer.
Chapter 6. Deploying the Streams for Apache Kafka Console
After you have deployed a Kafka cluster that’s managed by Streams for Apache Kafka, you can deploy and connect the Streams for Apache Kafka Console to the cluster. The console facilitates the administration of Kafka clusters, providing real-time insights for monitoring, managing, and optimizing each cluster from its user interface.
For more information on connecting to and using the Streams for Apache Kafka Console, see the console guide in the Streams for Apache Kafka documentation.
Appendix A. Using your subscription
Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.
Accessing Your Account
- Go to access.redhat.com.
- If you do not already have an account, create one.
- Log in to your account.
Activating a Subscription
- Go to access.redhat.com.
- Navigate to My Subscriptions.
- Navigate to Activate a subscription and enter your 16-digit activation number.
Downloading Zip and Tar Files
To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.
- Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
- Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
- Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
- Click the Download link for your component.
Installing packages with DNF
To install a package and all the package dependencies, use:
dnf install <package_name>
To install a previously-downloaded package from a local directory, use:
dnf install <path_to_download_package>
Revised on 2025-03-14 17:23:00 UTC