Chapter 4. Migrating to KRaft mode

If you are using ZooKeeper for metadata management of your Kafka cluster, you can migrate to using Kafka in KRaft mode. KRaft mode replaces ZooKeeper for distributed coordination, offering enhanced reliability, scalability, and throughput.

During the migration, you install a quorum of controller nodes that replaces ZooKeeper for management of your cluster. You enable KRaft migration in the controller configuration by setting the zookeeper.metadata.migration.enable property to true. When the controllers are started, you enable KRaft migration on the current cluster brokers using the same configuration property. After the migration is complete, you switch the brokers to KRaft mode and take the controllers out of migration mode.

Before starting the migration, verify that your environment can support Kafka in KRaft mode, as KRaft does not support JBOD storage with multiple disks.

Prerequisites

  • You are logged in to Red Hat Enterprise Linux as the kafka user.
  • Streams for Apache Kafka is installed on each host, and the configuration files are available.
  • You must be using Streams for Apache Kafka 2.7 or newer with Kafka 3.7.0 or newer. If you are using an earlier version of Streams for Apache Kafka, upgrade before migrating to KRaft mode.
  • Logging is enabled so that you can monitor the migration process.

    It is useful to set the root logger to DEBUG in log4j.properties on the controllers and brokers in the cluster. For the controller logger specific to migration, set the level to TRACE:

    Controller logging configuration

    log4j.rootLogger=DEBUG
    log4j.logger.org.apache.kafka.metadata.migration=TRACE

Procedure

  1. Retrieve the cluster ID of your Kafka cluster.

    You can use the zookeeper-shell tool to do this:

    /opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /cluster/id

    The command returns the cluster ID.
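    The ID is returned as part of the /cluster/id znode data, which typically looks like the following (hypothetical ID; the output also includes ZooKeeper connection messages):

    {"version":"1","id":"WZEKwK-bS62pT3hOoc4Sdg"}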

  2. Install a KRaft controller quorum to the cluster.

    1. Configure a controller node on each host using the controller.properties file.

      At a minimum, each controller requires the following configuration:

      • A unique node ID
      • The migration enabled flag set to true
      • ZooKeeper connection details
      • Controller listeners
      • A quorum of controller voters

        Example controller configuration

        process.roles=controller
        node.id=1
        
        zookeeper.metadata.migration.enable=true
        zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181
        
        listeners=CONTROLLER://0.0.0.0:9090
        controller.listener.names=CONTROLLER
        listener.security.protocol.map=CONTROLLER:PLAINTEXT
        controller.quorum.voters=1@localhost:9090

        The format for the controller quorum is <node_id>@<hostname>:<port> in a comma-separated list.
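        For example, a three-node quorum spread across three controller hosts (hypothetical hostnames) could be configured as follows:

        controller.quorum.voters=1@controller1.my-domain.com:9090,2@controller2.my-domain.com:9090,3@controller3.my-domain.com:9090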

    2. Set up log directories for each controller node:

      /opt/kafka/bin/kafka-storage.sh format -t <uuid> -c /opt/kafka/config/kraft/controller.properties

      Returns:

      Formatting /tmp/kraft-controller-logs

      Replace <uuid> with the cluster ID you retrieved in step 1. Use the same cluster ID for each controller node in your cluster.

      The -c option applies the controller configuration from the properties file you configured for the controller.

      By default, the log directory (log.dirs) specified in the controller.properties configuration file is set to /tmp/kraft-controller-logs. The /tmp directory is typically cleared on each system reboot, making it suitable for development environments only.

      You can add a comma-separated list to set up multiple log directories.
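      For example, to use two directories on persistent storage (hypothetical paths):

      log.dirs=/var/lib/kafka/kraft-controller-logs-0,/var/lib/kafka/kraft-controller-logs-1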

    3. Start each controller.

      /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/controller.properties
    4. Check that Kafka is running:

      jcmd | grep kafka

      Returns:

      process ID kafka.Kafka /opt/kafka/config/kraft/controller.properties

      Check the logs of each controller to ensure that they have successfully joined the KRaft cluster:

      tail -f /opt/kafka/logs/controller.log
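      You can also query the controller quorum directly to confirm that all voters have joined. This is a minimal sketch; it relies on the --bootstrap-controller option introduced in Kafka 3.7 and assumes the controller listener on localhost:9090 from the example configuration:

      /opt/kafka/bin/kafka-metadata-quorum.sh --bootstrap-controller localhost:9090 describe --status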
  3. Enable migration on each broker.

    1. Stop the Kafka broker if it is running on the host.

      /opt/kafka/bin/kafka-server-stop.sh
      jcmd | grep kafka

      If you are running Kafka on a multi-node cluster, see Section 3.6, “Performing a graceful rolling restart of Kafka brokers”.

    2. Enable migration using the server.properties file.

      At a minimum, each broker requires the following additional configuration:

      • Inter-broker protocol version set to version 3.5.
      • The migration enabled flag
      • Controller listeners
      • A quorum of controller voters

      Example broker configuration

      broker.id=0
      inter.broker.protocol.version=3.5
      
      zookeeper.metadata.migration.enable=true
      zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181
      
      listeners=CONTROLLER://0.0.0.0:9090
      controller.listener.names=CONTROLLER
      listener.security.protocol.map=CONTROLLER:PLAINTEXT
      controller.quorum.voters=1@localhost:9090

      The ZooKeeper connection details should already be present. The controller configuration for the brokers is the same as for the controllers.

    3. Restart the updated broker:

      /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties

      The migration starts automatically and can take some time depending on the number of topics and partitions in the cluster.
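      To follow progress while the migration runs, you can watch the active controller's log. This assumes the TRACE migration logging from the prerequisites and the default log location used elsewhere in this chapter:

      tail -f /opt/kafka/logs/controller.log | grep -i migration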

    4. Check that Kafka is running:

      jcmd | grep kafka

      Returns:

      process ID kafka.Kafka /opt/kafka/config/kraft/server.properties
  4. Check the log of the active controller to confirm that the migration is complete.

    To identify the active controller, retrieve the /controller znode from ZooKeeper:

    /opt/kafka/bin/zookeeper-shell.sh localhost:2181 get /controller

    In the log of the active controller, look for an INFO entry that says the following: Completed migration of metadata from ZooKeeper to KRaft.
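    A quick way to find this entry is to search the controller log, assuming the log location used earlier in this procedure:

    grep "Completed migration of metadata from ZooKeeper to KRaft" /opt/kafka/logs/controller.log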

  5. Switch each broker to run in KRaft mode.

    1. Stop the broker, as before.
    2. Update the broker configuration in the server.properties file:

      • Replace broker.id with node.id, using the same ID
      • Add the KRaft broker role for the broker (process.roles=broker)
      • Remove the inter-broker protocol version (inter.broker.protocol.version)
      • Remove the migration enabled flag (zookeeper.metadata.migration.enable)
      • Remove the ZooKeeper connection details (zookeeper.connect)

      Example broker configuration for KRaft

      node.id=0
      process.roles=broker
      
      listeners=CONTROLLER://0.0.0.0:9090
      controller.listener.names=CONTROLLER
      listener.security.protocol.map=CONTROLLER:PLAINTEXT
      controller.quorum.voters=1@localhost:9090

    3. If you are using ACLs in your broker configuration, update the authorizer.class.name property to the KRaft-based standard authorizer.

      ZooKeeper-based brokers use authorizer.class.name=kafka.security.authorizer.AclAuthorizer.

      When migrating to KRaft-based brokers, specify authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer.
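      For reference, the change to server.properties looks like this before and after the switch:

      # ZooKeeper-based broker (before migration)
      authorizer.class.name=kafka.security.authorizer.AclAuthorizer

      # KRaft-based broker (after migration)
      authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer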

    4. Restart the broker, as before.
  6. Switch each controller out of migration mode.

    1. Stop the controller, as before.
    2. Remove the zookeeper.metadata.migration.enable property from the controller.properties file.
    3. Restart the controller, as before.

      Example controller configuration following migration

      process.roles=controller
      node.id=1
      
      listeners=CONTROLLER://0.0.0.0:9090
      controller.listener.names=CONTROLLER
      listener.security.protocol.map=CONTROLLER:PLAINTEXT
      controller.quorum.voters=1@localhost:9090
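      To verify the cluster after the migration, you can describe the KRaft metadata quorum from any broker. This is a minimal sketch, assuming a broker client listener on localhost:9092:

      /opt/kafka/bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status

      The output shows the current quorum leader, the controller voters, and the brokers as observers.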
