Chapter 3. What is deployed with Streams for Apache Kafka


Streams for Apache Kafka enables the deployment of Apache Kafka components to an OpenShift cluster. The Kafka components are typically run as clusters for high availability.

A standard Kafka deployment using Streams for Apache Kafka might include the following components:

  • Kafka cluster of broker nodes as the core component
  • Kafka Connect cluster for external data connections
  • Kafka MirrorMaker cluster to mirror data to another Kafka cluster
  • Kafka Exporter to extract additional Kafka metrics data for monitoring
  • Kafka Bridge to enable HTTP-based communication with Kafka
  • Cruise Control to rebalance topic partitions across brokers

Not all of these components are required, though you need Kafka as a minimum for a Streams for Apache Kafka-managed Kafka cluster. Depending on your use case, you can deploy the additional components as needed. These components can also be used with Kafka clusters that are not managed by Streams for Apache Kafka.
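As an illustration of how these components fit together, the Kafka cluster itself is defined in a single Kafka custom resource, and Kafka Exporter and Cruise Control can be enabled from within that same resource, while Kafka Connect, MirrorMaker, and the Kafka Bridge use their own custom resources. The following is a minimal sketch only, assuming the kafka.strimzi.io/v1beta2 API, an example cluster name of my-kafka-cluster, and ephemeral storage; check the deployment guide for the full set of properties supported by your version.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-kafka-cluster
spec:
  kafka:
    replicas: 3                 # broker nodes run as a cluster for availability
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral           # for illustration only; use persistent storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:               # deploys the Topic Operator and User Operator with the cluster
    topicOperator: {}
    userOperator: {}
  kafkaExporter: {}             # optional: extracts additional metrics data for monitoring
  cruiseControl: {}             # optional: rebalances topic partitions across brokers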

3.1. Order of deployment

The required order of deployment to an OpenShift cluster is as follows:

  1. Deploy the Cluster Operator to manage your Kafka cluster
  2. Deploy the Kafka cluster with the ZooKeeper cluster, and include the Topic Operator and User Operator in the deployment
  3. Optionally deploy:

    • The Topic Operator and User Operator standalone if you did not deploy them with the Kafka cluster
    • Kafka Connect
    • Kafka MirrorMaker
    • Kafka Bridge
    • Components for the monitoring of metrics

The Cluster Operator creates OpenShift resources for the components, such as Deployment, Service, and Pod resources. The names of these resources include the name specified for the component when it is deployed. For example, a Kafka cluster named my-kafka-cluster has a service named my-kafka-cluster-kafka.
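For example, an optional component such as the Kafka Bridge is deployed through its own custom resource and connects to the cluster through the bootstrap service created by the Cluster Operator, which follows the same naming pattern. The following is a minimal sketch only, assuming a Kafka cluster named my-kafka-cluster with a plain listener on port 9092 in the same namespace; the example name my-bridge and the port values are placeholders.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  # bootstrap service created by the Cluster Operator for the cluster named my-kafka-cluster
  bootstrapServers: my-kafka-cluster-kafka-bootstrap:9092
  http:
    port: 8080                  # port used for HTTP-based communication with Kafka

Apply this resource only after the Kafka cluster is ready, in line with the deployment order described above.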

3.2. Deploying the Streams for Apache Kafka Proxy

Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware proxy designed to enhance Kafka-based systems. Through its filter mechanism, it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself.

For more information on connecting to and using the Streams for Apache Kafka Proxy, see the proxy guide in the Streams for Apache Kafka documentation.

Important

This feature is a technology preview and is not intended for a production environment. For more information, see the release notes.

3.3. Deploying the Streams for Apache Kafka Console

After you have deployed a Kafka cluster that’s managed by Streams for Apache Kafka, you can deploy and connect the Streams for Apache Kafka Console to the cluster. The console facilitates the administration of Kafka clusters, providing real-time insights for monitoring, managing, and optimizing each cluster from its user interface.

For more information on connecting to and using the Streams for Apache Kafka Console, see the console guide in the Streams for Apache Kafka documentation.

Important

This feature is a technology preview and is not intended for a production environment. For more information, see the release notes.
