Chapter 3. What is deployed with Streams for Apache Kafka

The Streams for Apache Kafka distribution provides Apache Kafka components for deployment to OpenShift. The Kafka components generally run as clusters for availability.

A typical deployment incorporating Kafka components might include:

  • Kafka cluster of broker nodes
  • ZooKeeper cluster of replicated ZooKeeper instances
  • Kafka Connect cluster for external data connections
  • Kafka MirrorMaker cluster to mirror the Kafka cluster in a secondary cluster
  • Kafka Exporter to extract additional Kafka metrics data for monitoring
  • Kafka Bridge to make HTTP-based requests to the Kafka cluster
  • Cruise Control to rebalance topic partitions across broker nodes

Not all of these components are mandatory; at a minimum, you need Kafka and ZooKeeper. Some components, such as MirrorMaker or Kafka Connect, can be deployed without Kafka.
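
As a rough sketch of that minimum, the following Kafka custom resource describes a Kafka cluster of three broker nodes and a ZooKeeper cluster of three replicated instances. It assumes the kafka.strimzi.io/v1beta2 API served by the Cluster Operator; the resource name, replica counts, and ephemeral storage are placeholder choices rather than recommended settings.

  apiVersion: kafka.strimzi.io/v1beta2
  kind: Kafka
  metadata:
    name: my-kafka-cluster            # placeholder cluster name
  spec:
    kafka:
      replicas: 3                     # Kafka broker nodes
      listeners:
        - name: plain
          port: 9092
          type: internal
          tls: false                  # plain internal listener for in-cluster clients
      storage:
        type: ephemeral               # placeholder; persistent storage is typical in production
    zookeeper:
      replicas: 3                     # replicated ZooKeeper instances
      storage:
        type: ephemeral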

3.1. Order of deployment

The required order of deployment to an OpenShift cluster is as follows:

  1. Deploy the Cluster Operator to manage your Kafka cluster
  2. Deploy the Kafka cluster with the ZooKeeper cluster, and include the Topic Operator and User Operator in the deployment (see the example resources after this list)
  3. Optionally deploy:

    • The Topic Operator and User Operator standalone if you did not deploy them with the Kafka cluster
    • Kafka Connect
    • Kafka MirrorMaker
    • Kafka Bridge
    • Components for the monitoring of metrics
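
The following sketch illustrates steps 2 and 3. The first snippet adds the Topic Operator and User Operator to the Kafka resource shown earlier; the second shows an optional KafkaConnect resource. Both assume the kafka.strimzi.io/v1beta2 API; the names, bootstrap address, and internal topic names are placeholder assumptions.

  # Added to the spec of the Kafka resource from the earlier example:
  spec:
    # ...
    entityOperator:
      topicOperator: {}               # manages KafkaTopic resources
      userOperator: {}                # manages KafkaUser resources

  # Optional Kafka Connect cluster, deployed as its own resource:
  apiVersion: kafka.strimzi.io/v1beta2
  kind: KafkaConnect
  metadata:
    name: my-connect-cluster          # placeholder name
  spec:
    replicas: 1
    bootstrapServers: my-kafka-cluster-kafka-bootstrap:9092   # assumed bootstrap service address
    config:
      group.id: connect-cluster
      offset.storage.topic: connect-offsets    # placeholder internal topic names
      config.storage.topic: connect-configs
      status.storage.topic: connect-status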

The Cluster Operator creates OpenShift resources for the components, such as Deployment, Service, and Pod resources. The names of these OpenShift resources incorporate the name specified for a component when it is deployed. For example, a Kafka cluster named my-kafka-cluster has a service named my-kafka-cluster-kafka.
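
Purely to illustrate the naming convention, the following simplified sketch shows what such a generated service might look like; the label and port are assumptions based on typical operator-managed resources, not an exact reproduction of what the Cluster Operator creates.

  apiVersion: v1
  kind: Service
  metadata:
    name: my-kafka-cluster-kafka            # cluster name plus a component suffix
    labels:
      strimzi.io/cluster: my-kafka-cluster  # assumed label linking the resource to its cluster
  spec:
    ports:
      - name: tcp-clients
        port: 9092                          # plain client port from the listener configuration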

3.2. (Preview) Deploying the Streams for Apache Kafka Proxy

Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware proxy designed to enhance Kafka-based systems. Through its filter mechanism, it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself.

For more information on connecting to and using the Streams for Apache Kafka Proxy, see the proxy guide in the Streams for Apache Kafka documentation.

Note

The Streams for Apache Kafka Proxy is currently available as a technology preview.

3.3. (Preview) Deploying the Streams for Apache Kafka Console

After you have deployed a Kafka cluster that is managed by Streams for Apache Kafka, you can deploy the Streams for Apache Kafka Console and connect it to your cluster. The Streams for Apache Kafka Console facilitates the administration of Kafka clusters, providing real-time insights for monitoring, managing, and optimizing each cluster from its user interface.

For more information on connecting to and using the Streams for Apache Kafka Console, see the console guide in the Streams for Apache Kafka documentation.

Note

The Streams for Apache Kafka Console is currently available as a technology preview.
