Chapter 4. Migrating to KRaft mode
If you are using ZooKeeper for metadata management of your Kafka cluster, you can migrate to using Kafka in KRaft mode. KRaft mode replaces ZooKeeper for distributed coordination, offering enhanced reliability, scalability, and throughput.
To migrate your cluster, do as follows:
- Install a quorum of controller nodes to replace ZooKeeper for cluster management.
- Enable KRaft migration in the controller configuration by setting the zookeeper.metadata.migration.enable property to true.
- Start the controllers and enable KRaft migration on the current cluster brokers using the same configuration property.
- Perform a rolling restart of the brokers to apply the configuration changes.
- When migration is complete, switch the brokers to KRaft mode and disable migration on the controllers.
Once KRaft mode has been finalized, rollback to ZooKeeper is not possible. Carefully consider this before proceeding with the migration.
Before starting the migration, verify that your environment can support Kafka in KRaft mode:
- Migration is only supported on dedicated controller nodes, not on nodes with dual roles as brokers and controllers.
- Throughout the migration process, ZooKeeper and KRaft controller nodes operate in parallel, requiring sufficient compute resources in your cluster.
Prerequisites
- You are logged in to Red Hat Enterprise Linux as the kafka user.
- Streams for Apache Kafka is installed on each host, and the configuration files are available.
- You are using Streams for Apache Kafka 2.7 or newer with Kafka 3.7.0 or newer. If you are using an earlier version of Streams for Apache Kafka, upgrade before migrating to KRaft mode.
- Logging is enabled to check the migration process.
Set DEBUG level in log4j.properties for the root logger on the controllers and brokers in the cluster. For detailed migration-specific logs, set TRACE for the migration logger:
Controller logging configuration
log4j.rootLogger=DEBUG
log4j.logger.org.apache.kafka.metadata.migration=TRACE
Procedure
Retrieve the cluster ID of your Kafka cluster.
Use the zookeeper-shell tool:
/opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /cluster/id
The command returns the cluster ID.
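The ID is stored as JSON in the /cluster/id znode, so the final line of the output looks similar to the following, where the ID value shown is purely illustrative:
{"version":"1","id":"WZEKwK-bS62PxiWC4MbDzA"}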
Install a KRaft controller quorum to the cluster.
Configure a controller node on each host using the controller.properties file.
At a minimum, each controller requires the following configuration:
- A unique node ID
- The migration enabled flag set to true
- ZooKeeper connection details
- Listener name used by the controller quorum
- A quorum of controller voters
- Listener name for inter-broker communication
Example controller configuration
process.roles=controller
node.id=1
zookeeper.metadata.migration.enable=true
zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181
listeners=CONTROLLER://0.0.0.0:9090
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
controller.quorum.voters=1@localhost:9090
inter.broker.listener.name=PLAINTEXT
The format for the controller quorum is <node_id>@<hostname>:<port> in a comma-separated list. The inter-broker listener name is required for the KRaft controller to initiate the migration.
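For example, a three-node controller quorum might be specified as follows, using hypothetical hostnames:
controller.quorum.voters=1@controller1.my-domain.com:9090,2@controller2.my-domain.com:9090,3@controller3.my-domain.com:9090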
Set up log directories for each controller node:
/opt/kafka/bin/kafka-storage.sh format -t <uuid> -c /opt/kafka/config/kraft/controller.properties
Returns:
Formatting /tmp/kraft-controller-logs
Replace <uuid> with the cluster ID you retrieved. Use the same cluster ID for each controller node in your cluster.
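For illustration, if the cluster ID retrieved earlier were WZEKwK-bS62PxiWC4MbDzA (a hypothetical value), the command would be:
/opt/kafka/bin/kafka-storage.sh format -t WZEKwK-bS62PxiWC4MbDzA -c /opt/kafka/config/kraft/controller.properties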
By default, the log directory (log.dirs) specified in the controller.properties configuration file is set to /tmp/kraft-controller-logs. The /tmp directory is typically cleared on each system reboot, making it suitable for development environments only. Set multiple log directories using a comma-separated list, if needed.
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/controller.properties
Check that Kafka is running:
jcmd | grep kafka
Returns:
process ID kafka.Kafka /opt/kafka/config/kraft/controller.properties
Check the logs of each controller to ensure that they have successfully joined the KRaft cluster:
tail -f /opt/kafka/logs/controller.log
Enable migration on each broker.
If running, stop the Kafka broker on the host, then confirm that it has stopped:
/opt/kafka/bin/kafka-server-stop.sh
jcmd | grep kafka
If using a multi-node cluster, refer to Section 3.6, “Performing a graceful rolling restart of Kafka brokers”.
Enable migration using the server.properties file.
At a minimum, each broker requires the following additional configuration:
- Inter-broker protocol version set to 3.7
- The migration enabled flag
- Controller configuration that matches the controller nodes
- A quorum of controller voters
Example broker configuration
broker.id=0
inter.broker.protocol.version=3.7
zookeeper.metadata.migration.enable=true
zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181
listeners=CONTROLLER://0.0.0.0:9090
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
controller.quorum.voters=1@localhost:9090
The ZooKeeper connection details should already be present.
Restart the updated broker:
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties
The migration starts automatically and can take some time depending on the number of topics and partitions in the cluster.
Check that Kafka is running:
jcmd | grep kafka
Returns:
process ID kafka.Kafka /opt/kafka/config/kraft/server.properties
Check the log on the active controller to confirm that the migration is complete. To identify the active controller, use the zookeeper-shell tool:
/opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /controller
Look for an INFO log entry that says the following:
Completed migration of metadata from ZooKeeper to KRaft.
Switch each broker to KRaft mode.
- Stop the broker, as before.
Update the broker configuration in the server.properties file:
- Replace the broker.id with a node.id using the same ID
- Add a broker KRaft role for the broker
- Remove the inter-broker protocol version (inter.broker.protocol.version)
- Remove the migration enabled flag (zookeeper.metadata.migration.enable)
- Remove ZooKeeper configuration
- Remove the listener for controller and broker communication (control.plane.listener.name)
Example broker configuration for KRaft
node.id=0
process.roles=broker
listeners=CONTROLLER://0.0.0.0:9090
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
controller.quorum.voters=1@localhost:9090
If you are using ACLs in your broker configuration, update the authorizer using the authorizer.class.name property to the KRaft-based standard authorizer.
ZooKeeper-based brokers use authorizer.class.name=kafka.security.authorizer.AclAuthorizer.
When migrating to KRaft-based brokers, specify authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer.
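As a sketch, the change in server.properties amounts to replacing one property value:
# Before (ZooKeeper-based)
# authorizer.class.name=kafka.security.authorizer.AclAuthorizer
# After (KRaft-based)
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer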
- Restart the broker, as before.
Switch each controller out of migration mode.
- Stop the controller in the same way as the broker, as described previously.
Update the controller configuration in the controller.properties file:
- Remove the ZooKeeper connection details
- Remove the zookeeper.metadata.migration.enable property
- Remove inter.broker.listener.name
Example controller configuration following migration
process.roles=controller
node.id=1
listeners=CONTROLLER://0.0.0.0:9090
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
controller.quorum.voters=1@localhost:9090
- Restart the controller in the same way as the broker, as described previously.
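For example, using the same start command as when the controllers were first installed:
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/controller.properties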