Chapter 2. Streams for Apache Kafka deployment of Kafka


Streams for Apache Kafka enables the deployment of Apache Kafka components to an OpenShift cluster, typically running as clusters for high availability.

A standard Kafka deployment using Streams for Apache Kafka might include the following components:

  • Kafka cluster of broker nodes as the core component
  • Kafka Connect cluster for external data connections
  • Kafka MirrorMaker cluster to mirror data to another Kafka cluster
  • Kafka Exporter to extract additional Kafka metrics data for monitoring
  • Kafka Bridge to enable HTTP-based communication with Kafka
  • Cruise Control to rebalance topic partitions across brokers

Not all of these components are required, though you need Kafka as a minimum for a Streams for Apache Kafka-managed Kafka cluster. Depending on your use case, you can deploy the additional components as needed. These components can also be used with Kafka clusters that are not managed by Streams for Apache Kafka.
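
As a minimal sketch, the core Kafka cluster is declared through a Kafka custom resource, which the operator reconciles into a running deployment. The cluster name, Kafka version, and listener settings below are placeholder assumptions, and the exact fields available depend on your Streams for Apache Kafka version:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster                  # placeholder cluster name
  annotations:
    strimzi.io/kraft: enabled       # run in KRaft mode (no ZooKeeper)
    strimzi.io/node-pools: enabled  # node roles are defined in KafkaNodePool resources
spec:
  kafka:
    version: 3.7.0                  # assumed version; match your release
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
  entityOperator:
    topicOperator: {}               # manages KafkaTopic resources
    userOperator: {}                # manages KafkaUser resources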

2.1. Kafka component architecture

A KRaft-based Kafka cluster consists of broker nodes responsible for message delivery and controller nodes that manage cluster metadata and coordinate the cluster. These roles can be configured using node pools in Streams for Apache Kafka.
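
For example, the broker and controller roles might be split across two node pools linked to the cluster by the strimzi.io/cluster label, as in the following sketch; the pool names, replica counts, and storage sizes are assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: my-cluster  # ties the pool to the Kafka resource
spec:
  replicas: 3
  roles:
    - controller                    # these nodes manage metadata and coordination
  storage:
    type: persistent-claim
    size: 20Gi
    deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker                        # these nodes handle message delivery
  storage:
    type: persistent-claim
    size: 100Gi
    deleteClaim: false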

Other Kafka components interact with the Kafka cluster for specific tasks.

Kafka component interaction: data flows between several Kafka components and the Kafka cluster.

Kafka Connect

Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using connector plugins. It provides a framework for integrating Kafka with an external data source or target, such as a database, for import or export of data. Connectors provide the configuration needed to create each connection.

  • A source connector pushes external data into Kafka.
  • A sink connector extracts data out of Kafka.

External data is translated and transformed into the appropriate format.
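
As an illustration, an individual connector can be declared with a KafkaConnector resource. The sketch below uses the FileStreamSourceConnector class that ships with Kafka as a simple source example; the Kafka Connect cluster name (declared in the next example), file path, and topic are assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster  # the KafkaConnect cluster that runs this connector
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector  # example source connector class
  tasksMax: 1
  config:
    file: /opt/kafka/LICENSE  # assumed file to read from
    topic: my-topic           # Kafka topic the file lines are pushed into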

Kafka Connect can be configured to build custom container images with the required connectors.
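
For instance, a KafkaConnect resource with a build section lets the operator assemble a container image containing the listed connector plugins; the output image reference and plugin artifact URL below are placeholder assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"  # manage connectors with KafkaConnector resources
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092  # service exposed by the Kafka cluster
  build:
    output:
      type: docker
      image: image-registry.example.com/my-org/my-connect:latest  # where the built image is pushed
    plugins:
      - name: my-connector-plugin
        artifacts:
          - type: tgz
            url: https://example.com/plugins/my-connector.tgz  # hypothetical plugin archive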

Kafka MirrorMaker
Kafka MirrorMaker replicates data between two Kafka clusters, either in the same data center or across different locations.
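
In current releases this is typically handled by MirrorMaker 2, configured through a KafkaMirrorMaker2 resource. The sketch below mirrors all topics and consumer groups from a source cluster to a target cluster; the aliases and bootstrap addresses are assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  connectCluster: "target"  # alias of the cluster the underlying Connect workers use
  clusters:
    - alias: "source"
      bootstrapServers: source-cluster-kafka-bootstrap:9092
    - alias: "target"
      bootstrapServers: target-cluster-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      topicsPattern: ".*"   # mirror all topics
      groupsPattern: ".*"   # mirror all consumer groups
      sourceConnector:
        config:
          replication.factor: 3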
Kafka Bridge
Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster.
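
A minimal sketch of a KafkaBridge resource, assuming the bridge name and port shown; HTTP clients then produce and consume through the bridge's REST endpoints:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092  # the Kafka cluster to bridge to
  http:
    port: 8080  # port for HTTP client requests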
Kafka Exporter
Kafka Exporter extracts data for analysis as Prometheus metrics, primarily data relating to offsets, consumer groups, consumer lag, and topics. Consumer lag is the delay between the last message written to a partition and the message currently being picked up from that partition by a consumer.
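
Kafka Exporter is enabled inside the Kafka resource rather than deployed separately. A sketch, assuming the regex values shown (".*" collects metrics for all topics and consumer groups):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...broker, listener, and node pool configuration as shown earlier...
  kafkaExporter:
    topicRegex: ".*"  # export metrics for all topics
    groupRegex: ".*"  # export metrics for all consumer groups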
Apache ZooKeeper (optional)
Apache ZooKeeper provides a cluster coordination service, storing and tracking the status of brokers and consumers. ZooKeeper is also used for controller election. If ZooKeeper is used, the ZooKeeper cluster must be ready before running Kafka. However, since the introduction of KRaft, ZooKeeper is no longer required as Kafka nodes handle cluster coordination and control natively.