Chapter 4. Migrating to KRaft mode
If you are using ZooKeeper for metadata management of your Kafka cluster, you can migrate to using Kafka in KRaft mode. KRaft mode replaces ZooKeeper for distributed coordination, offering enhanced reliability, scalability, and throughput.
Migrating from ZooKeeper to KRaft currently requires using a static controller quorum. Streams for Apache Kafka 2.9 (LTS) is expected to introduce support for both static and dynamic controller quorums, at which point KRaft support will also be promoted to GA.
To migrate your cluster, do as follows:
- Install a quorum of controller nodes to replace ZooKeeper for cluster management.
- Enable KRaft migration in the controller configuration by setting the `zookeeper.metadata.migration.enable` property to `true`.
- Start the controllers and enable KRaft migration on the current cluster brokers using the same configuration property.
- Perform a rolling restart of the brokers to apply the configuration changes.
- When migration is complete, switch the brokers to KRaft mode and disable migration on the controllers.
Once KRaft mode has been finalized, rollback to ZooKeeper is not possible. Carefully consider this before proceeding with the migration.
Before starting the migration, verify that your environment can support Kafka in KRaft mode:
- Migration is only supported on dedicated controller nodes, not on nodes with dual roles as brokers and controllers.
- Throughout the migration process, ZooKeeper and KRaft controller nodes operate in parallel, requiring sufficient compute resources in your cluster.
Prerequisites
- You are logged in to Red Hat Enterprise Linux as the `kafka` user.
- Streams for Apache Kafka is installed on each host, and the configuration files are available.
- You are using Streams for Apache Kafka 2.7 or newer with Kafka 3.7.0 or newer. If you are using an earlier version of Streams for Apache Kafka, upgrade before migrating to KRaft mode.
- Logging is enabled to check the migration process.

  Set `DEBUG` level in `log4j.properties` for the root logger on the controllers and brokers in the cluster. For detailed migration-specific logs, set `TRACE` for the migration logger.

  Controller logging configuration:

  log4j.rootLogger=DEBUG
  log4j.logger.org.apache.kafka.metadata.migration=TRACE
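The logging prerequisite can be applied with a small script. This is a sketch only; the `log4j.properties` path in the usage example is an assumption and should be adjusted to your installation layout.

```shell
# Sketch: append the migration logging settings to log4j.properties if they
# are not already present, so repeated runs do not duplicate entries.
enable_migration_logging() {
  local props="$1"
  grep -q '^log4j.rootLogger=DEBUG' "$props" || \
    echo 'log4j.rootLogger=DEBUG' >> "$props"
  grep -q '^log4j\.logger\.org\.apache\.kafka\.metadata\.migration=TRACE' "$props" || \
    echo 'log4j.logger.org.apache.kafka.metadata.migration=TRACE' >> "$props"
}

# Hypothetical usage on each controller and broker host:
# enable_migration_logging /opt/kafka/config/log4j.properties
```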
Procedure
Retrieve the cluster ID of your Kafka cluster.

Use the `zookeeper-shell` tool:

/opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /cluster/id

The command returns the cluster ID.
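If you are scripting the migration, the cluster ID can be captured from the tool's output. A sketch, assuming the `/cluster/id` znode holds JSON of the form `{"version":"1","id":"<uuid>"}`; verify the exact shape against your own output.

```shell
# Sketch: extract the "id" field from the JSON stored in the /cluster/id
# znode. The JSON shape is an assumption; check it against your tool output.
extract_cluster_id() {
  grep -o '"id":"[^"]*"' | cut -d'"' -f4
}

# Hypothetical usage (requires a running ZooKeeper ensemble):
# CLUSTER_ID=$(/opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /cluster/id | extract_cluster_id)
```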
Install a KRaft controller quorum to the cluster.
Configure a controller node on each host using the `controller.properties` file.

At a minimum, each controller requires the following configuration:
- A unique node ID
- The migration enabled flag set to `true`
- ZooKeeper connection details
- Listener name used by the controller quorum
- A quorum of controller voters
- Listener name for inter-broker communication
Example controller configuration
process.roles=controller
node.id=1
zookeeper.metadata.migration.enable=true
zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181
listeners=CONTROLLER://0.0.0.0:9090
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
controller.quorum.voters=1@localhost:9090
inter.broker.listener.name=PLAINTEXT

The format for the controller quorum is `<node_id>@<hostname>:<port>` in a comma-separated list. The inter-broker listener name is required for the KRaft controller to initiate the migration.
Set up log directories for each controller node:
/opt/kafka/bin/kafka-storage.sh format -t <uuid> -c /opt/kafka/config/kraft/controller.properties

Returns:

Formatting /tmp/kraft-controller-logs

Replace <uuid> with the cluster ID you retrieved. Use the same cluster ID for each controller node in your cluster.
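Because every controller must be formatted with the same cluster ID, the command can be built once and repeated across hosts. A sketch; the hostnames and the use of SSH are assumptions about your environment.

```shell
# Sketch: build the kafka-storage.sh format command so the same cluster ID is
# reused on every controller host. Paths match the examples in this chapter.
format_cmd() {
  echo "/opt/kafka/bin/kafka-storage.sh format -t $1 -c /opt/kafka/config/kraft/controller.properties"
}

# Hypothetical rollout over three controller hosts:
# for host in controller1 controller2 controller3; do
#   ssh "$host" "$(format_cmd "$CLUSTER_ID")"
# done
```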
By default, the log directory (`log.dirs`) specified in the `controller.properties` configuration file is set to `/tmp/kraft-controller-logs`. The `/tmp` directory is typically cleared on each system reboot, making it suitable for development environments only. Set multiple log directories using a comma-separated list, if needed.

Start each controller:
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/controller.properties

Check that Kafka is running:

jcmd | grep kafka

Returns:

process ID kafka.Kafka /opt/kafka/config/kraft/controller.properties

Check the logs of each controller to ensure that they have successfully joined the KRaft cluster:

tail -f /opt/kafka/logs/controller.log
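The `jcmd` check can be scripted when verifying several nodes. A sketch that parses `jcmd` output for a process started with a given configuration file:

```shell
# Sketch: succeed if jcmd output (read from stdin) lists a Kafka process
# started with the given configuration file.
is_kafka_running() {
  grep -q "kafka\.Kafka $1"
}

# Hypothetical usage:
# jcmd | is_kafka_running /opt/kafka/config/kraft/controller.properties \
#   && echo "controller is up"
```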
Enable migration on each broker.
If running, stop the Kafka broker running on the host:

/opt/kafka/bin/kafka-server-stop.sh
jcmd | grep kafka

If using a multi-node cluster, refer to Section 3.6, “Performing a graceful rolling restart of Kafka brokers”.
Enable migration using the `server.properties` file.

At a minimum, each broker requires the following additional configuration:
- Inter-broker protocol version set to version 3.8
- The migration enabled flag
- Controller configuration that matches the controller nodes
- A quorum of controller voters
Example broker configuration
broker.id=0
inter.broker.protocol.version=3.8
zookeeper.metadata.migration.enable=true
zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181
listeners=CONTROLLER://0.0.0.0:9090
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
controller.quorum.voters=1@localhost:9090

The ZooKeeper connection details should already be present.
Restart the updated broker:
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties

The migration starts automatically and can take some time depending on the number of topics and partitions in the cluster.
Check that Kafka is running:
jcmd | grep kafka

Returns:

process ID kafka.Kafka /opt/kafka/config/kraft/server.properties
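While waiting for the migration to finish, you can poll the active controller's log for the completion entry quoted in the next step. A sketch; the log path is an assumption based on the default layout used in this chapter.

```shell
# Sketch: succeed once the given controller log contains the migration
# completion message. The log path in the usage example is an assumption.
migration_complete() {
  grep -q 'Completed migration of metadata from ZooKeeper to KRaft' "$1"
}

# Hypothetical polling loop on the active controller host:
# until migration_complete /opt/kafka/logs/controller.log; do sleep 10; done
```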
Check the log on the active controller to confirm that the migration is complete.

Use the `zookeeper-shell` tool to identify the active controller:

/opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /controller

Look for an `INFO` log entry that says the following:

Completed migration of metadata from ZooKeeper to KRaft.

Switch each broker to KRaft mode.
- Stop the broker, as before.
Update the broker configuration in the `server.properties` file:

- Replace the `broker.id` with a `node.id` using the same ID
- Add a `broker` KRaft role for the broker
- Remove the inter-broker protocol version (`inter.broker.protocol.version`)
- Remove the migration enabled flag (`zookeeper.metadata.migration.enable`)
- Remove ZooKeeper configuration
- Remove the listener for controller and broker communication (`control.plane.listener.name`)
Example broker configuration for KRaft
node.id=0
process.roles=broker
listeners=CONTROLLER://0.0.0.0:9090
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
controller.quorum.voters=1@localhost:9090

If you are using ACLs in your broker configuration, update the authorizer using the `authorizer.class.name` property to the KRaft-based standard authorizer.

ZooKeeper-based brokers use `authorizer.class.name=kafka.security.authorizer.AclAuthorizer`.

When migrating to KRaft-based brokers, specify `authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer`.

- Restart the broker, as before.
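The authorizer change is a one-line edit to each broker's properties file. A sketch using GNU `sed -i` (an assumption; BSD sed requires `-i ''`):

```shell
# Sketch: swap the ZooKeeper-based ACL authorizer for the KRaft
# StandardAuthorizer in a server.properties file, editing it in place.
update_authorizer() {
  sed -i 's|^authorizer.class.name=kafka.security.authorizer.AclAuthorizer$|authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer|' "$1"
}

# Hypothetical usage:
# update_authorizer /opt/kafka/config/kraft/server.properties
```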
Switch each controller out of migration mode.
- Stop the controller in the same way as the broker, as described previously.
Update the controller configuration in the `controller.properties` file:

- Remove the ZooKeeper connection details
- Remove the `zookeeper.metadata.migration.enable` property
- Remove `inter.broker.listener.name`
Example controller configuration following migration
process.roles=controller
node.id=1
listeners=CONTROLLER://0.0.0.0:9090
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT
controller.quorum.voters=1@localhost:9090

- Restart the controller in the same way as the broker, as described previously.
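Once all nodes are restarted, you can verify that the KRaft quorum is healthy. A sketch, assuming Kafka's `kafka-metadata-quorum.sh` tool (shipped with Kafka 3.3 and later) and that its `describe --status` output includes a `LeaderId` field; check the exact output format for your Kafka version.

```shell
# Sketch: succeed if the quorum status output (read from stdin) reports an
# elected leader. The output format is an assumption.
has_quorum_leader() {
  grep -q 'LeaderId:'
}

# Hypothetical usage:
# /opt/kafka/bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 \
#   describe --status | has_quorum_leader && echo "KRaft quorum is active"
```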