Chapter 4. Using Kafka in KRaft mode


KRaft (Kafka Raft metadata) mode replaces Kafka’s dependency on ZooKeeper for cluster management. KRaft mode simplifies the deployment and management of Kafka clusters by bringing metadata management and cluster coordination into Kafka itself.

Kafka in KRaft mode is designed to offer enhanced reliability, scalability, and throughput. Metadata operations become more efficient because metadata management is integrated directly into Kafka. Removing the need to maintain a separate ZooKeeper cluster also reduces operational and security overhead.

Through Kafka configuration, nodes are assigned the role of broker, controller, or both:

  • Controller nodes operate in the control plane to manage cluster metadata and the state of the cluster using a Raft-based consensus protocol.
  • Broker nodes operate in the data plane to manage the streaming of messages, receiving and storing data in topic partitions.
  • Dual-role nodes fulfill the responsibilities of controllers and brokers.
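The role of a node is set through its Kafka configuration. A minimal sketch using the process.roles property, assuming a standard server.properties file per node; the node IDs, hostnames, and ports are example values:

```properties
# Dedicated controller node
process.roles=controller
node.id=1
listeners=CONTROLLER://controller1:9093
controller.listener.names=CONTROLLER

# Dedicated broker node (a separate configuration file)
process.roles=broker
node.id=4
listeners=PLAINTEXT://broker1:9092

# Dual-role node (a separate configuration file)
process.roles=broker,controller
node.id=7
```

The process.roles property determines whether a node operates in the control plane, the data plane, or both.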

You can use a dynamic or static quorum of controllers. A dynamic quorum is recommended because it supports scaling the controller quorum while the cluster is running.
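As a sketch, the two quorum types are configured through different properties; the node IDs, hostnames, and ports below are example values:

```properties
# Static quorum: a fixed set of voters, each specified as id@host:port
controller.quorum.voters=1@controller1:9093,2@controller2:9093,3@controller3:9093

# Dynamic quorum: bootstrap endpoints only; the voter set is managed at runtime
controller.quorum.bootstrap.servers=controller1:9093,controller2:9093,controller3:9093
```

A node uses one property or the other, not both. With a dynamic quorum, controllers can be added to or removed from the voter set without reconfiguring and restarting every node.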

Controllers use a metadata log, stored as a single-partition topic (__cluster_metadata) on every node, which records the state of the cluster. When requests are made to change the cluster configuration, an active (lead) controller manages updates to the metadata log, and follower controllers replicate these updates. The metadata log stores information on brokers, replicas, topics, and partitions, including the state of in-sync replicas and partition leadership. Kafka uses this metadata to coordinate changes and manage the cluster effectively.
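To see what the metadata log contains, you can decode a segment of the __cluster_metadata topic with Kafka’s kafka-dump-log.sh tool; the file path below is an example:

```shell
bin/kafka-dump-log.sh --cluster-metadata-decoder \
  --files /var/lib/kafka/metadata/__cluster_metadata-0/00000000000000000000.log
```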

Broker nodes act as observers, storing the metadata log passively to stay up to date with the cluster’s state. Each node fetches updates to the log independently. If you are using JBOD storage, you can specify the directory that stores the metadata log.
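For example, when topic data is spread across JBOD volumes, the metadata log location can be pinned with the metadata.log.dir property; the paths below are examples:

```properties
# Topic data spread across multiple JBOD volumes
log.dirs=/var/lib/kafka/data-0,/var/lib/kafka/data-1

# Store the __cluster_metadata log on its own volume
metadata.log.dir=/var/lib/kafka/metadata
```

If metadata.log.dir is not set, the metadata log is placed in the first directory listed in log.dirs.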

Note

The KRaft metadata version used in the Kafka cluster must be supported by the Kafka version in use.
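One way to check the metadata version and other finalized features on a running cluster is Kafka’s kafka-features.sh tool; the bootstrap address is an example:

```shell
bin/kafka-features.sh --bootstrap-server localhost:9092 describe
```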

In the following example, a Kafka cluster comprises a quorum of controller nodes and a set of broker nodes for fault tolerance and high availability.

Figure 4.1. Example cluster with separate broker and controller nodes


In a typical production environment, use dedicated broker and controller nodes. However, you might want to use nodes in a dual-role configuration for development or testing.

You can use a combination of nodes that combine roles with nodes that perform a single role. In the following example, three nodes perform a dual role and two nodes act only as brokers.

Figure 4.2. Example cluster with dual-role nodes and dedicated broker nodes
