
Chapter 5. Running Kafka in KRaft mode (development preview)


When you run AMQ Streams in KRaft (Kafka Raft metadata) mode, Kafka clusters are managed by an internal quorum of controllers instead of ZooKeeper.

Apache Kafka is in the process of phasing out the need for ZooKeeper. KRaft mode is now available to try. You can deploy a Kafka cluster in KRaft mode without ZooKeeper.

Important

KRaft mode is experimental, intended only for development and testing, and must not be enabled for a production environment.

Currently, the KRaft mode in AMQ Streams has the following major limitations:

  • Migrating from Kafka clusters with ZooKeeper to KRaft clusters is not recommended for production.
  • Downgrading from KRaft mode to using ZooKeeper is not supported.
  • JBOD storage with multiple disks is not supported.
  • Many configuration options are still in development.

5.1. Using AMQ Streams with Kafka in KRaft mode

If you use Kafka in KRaft mode, you do not need to use ZooKeeper for cluster coordination or storing metadata. Kafka coordinates the cluster itself using brokers that act as controllers. Kafka also stores the metadata used to track the status of brokers and partitions.

To identify a cluster, you create a cluster ID. The ID is used when formatting the log directories of the brokers you add to the cluster.

In the configuration of each broker node, specify the following:

  • A node ID
  • Broker roles
  • A list of brokers (or voters) that act as controllers

A broker performs the role of broker, controller, or both.

Broker role
A broker, sometimes referred to as a node or server, orchestrates the storage and passing of messages.
Controller role
A controller coordinates the cluster and manages the tracking metadata.

You can use combined broker and controller nodes, though you might want to separate these functions. Brokers performing combined roles can be more efficient in simpler deployments.

You specify a list of controllers, configured as voters, using the node ID and connection details (hostname and port) for each controller.
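As a sketch, the voters list for a hypothetical three-node controller quorum might look as follows in the broker configuration; the node IDs, hostnames, and port are examples and must match the node.id configured on each controller:

```properties
# Hypothetical three-node controller quorum; hostnames and ports are examples
controller.quorum.voters=1@kafka-a.example.com:9093,2@kafka-b.example.com:9093,3@kafka-c.example.com:9093
```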

5.2. Running a Kafka cluster in KRaft mode

Configure and run Kafka in KRaft mode. You can run KRaft mode with a single-node or multi-node Kafka cluster. For stability and availability, run a minimum of three broker and controller nodes.

You set roles for brokers so that they can also be controllers. You apply broker configuration, including the setting of roles, using a configuration properties file. Broker configuration differs according to role. KRaft provides three example broker configuration properties files.

  • /opt/kafka/config/kraft/broker.properties has example configuration for a broker role
  • /opt/kafka/config/kraft/controller.properties has example configuration for a controller role
  • /opt/kafka/config/kraft/server.properties has example configuration for a dual role

You can base your broker configuration on these example properties files. In this procedure, the example server.properties configuration is used.

Prerequisites

  • AMQ Streams is installed on all hosts which will be used as Kafka brokers.

Procedure

  1. Generate an ID for the Kafka cluster using the kafka-storage tool:

    /opt/kafka/bin/kafka-storage.sh random-uuid

    The command returns an ID. A cluster ID is required in KRaft mode.
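To reuse the generated ID in the formatting step later in this procedure, you might capture it in a shell variable. This is a sketch: the path assumes the default AMQ Streams installation, and the command only runs when the tool is present.

```shell
# Capture the cluster ID for reuse in the storage formatting step.
# Falls back to a placeholder when kafka-storage.sh is not installed
# at the assumed path /opt/kafka/bin.
if [ -x /opt/kafka/bin/kafka-storage.sh ]; then
    CLUSTER_ID=$(/opt/kafka/bin/kafka-storage.sh random-uuid)
else
    CLUSTER_ID="<uuid>"   # placeholder when Kafka is not installed
fi
echo "$CLUSTER_ID"
```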

  2. Create a configuration properties file for each broker in the cluster.

    You can base the file on the examples provided with Kafka.

    1. Specify the role using process.roles as broker, controller, or both.

      For example, specify process.roles=broker,controller for a dual role.

    2. Specify a unique node.id for each node in the cluster, starting from 0.

      For example, node.id=0 for the first node.

    3. Specify a list of controller.quorum.voters in the format <node_id>@<hostname>:<port>.

      For example, controller.quorum.voters=1@localhost:9093.
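    The three settings above can be scripted. The sketch below writes a minimal properties fragment per node for a hypothetical three-node cluster of dual-role brokers; the hostnames, port, and output directory are assumptions, not part of the AMQ Streams distribution.

```shell
# Generate a minimal KRaft properties fragment for each node (sketch).
# Node IDs 0-2, dual roles, and the example.com hostnames are assumptions.
VOTERS="0@kafka-0.example.com:9093,1@kafka-1.example.com:9093,2@kafka-2.example.com:9093"
mkdir -p kraft-config
for id in 0 1 2; do
    # Each node gets the same voters list but a unique node.id
    cat > "kraft-config/node-${id}.properties" <<EOF
process.roles=broker,controller
node.id=${id}
controller.quorum.voters=${VOTERS}
EOF
done
echo "Wrote $(ls kraft-config | wc -l) files"
```

    In practice you would merge these fragments into a full configuration based on the example server.properties file.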

  3. Set up log directories for each node in your Kafka cluster:

    /opt/kafka/bin/kafka-storage.sh format -t <uuid> -c /opt/kafka/config/kraft/server.properties

    Returns:

    Formatting /tmp/kraft-combined-logs

    Replace <uuid> with the cluster ID you generated. Use the same ID for each node in your cluster.

    Apply the broker configuration by passing the properties file you created for the broker to the -c option.

    The default log directory location specified in the server.properties configuration file is /tmp/kraft-combined-logs. You can add a comma-separated list to set up multiple log directories.
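    For example, the log directory setting in the properties file might look as follows; the paths here are hypothetical, and the commented line shows the comma-separated form for multiple directories:

```properties
# Hypothetical log directory; the default is /tmp/kraft-combined-logs
log.dirs=/var/lib/kafka/kraft-combined-logs
# Multiple directories (comma-separated):
# log.dirs=/var/lib/kafka/logs-0,/var/lib/kafka/logs-1
```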

  4. Start each Kafka broker.

    /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server.properties
  5. Check that Kafka is running:

    jcmd | grep kafka

    Returns:

    <process_id> kafka.Kafka /opt/kafka/config/kraft/server.properties

You can now create topics, and send and receive messages from the brokers.

For brokers passing messages, you can use topic replication across the brokers in a cluster for data durability. Configure topics to have a replication factor of at least three and a minimum number of in-sync replicas set to 1 less than the replication factor. For more information, see Section 7.7, “Creating a topic”.
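As a sketch, a topic with a replication factor of 3 and a minimum of 2 in-sync replicas could be created as follows; the topic name and bootstrap address are hypothetical, and the command only runs when Kafka is installed at the assumed path:

```shell
# Derive min.insync.replicas as one less than the replication factor
RF=3
MIN_ISR=$((RF - 1))

# Hypothetical topic creation; runs only when kafka-topics.sh exists
# at the assumed path. Topic name and bootstrap address are examples.
if [ -x /opt/kafka/bin/kafka-topics.sh ]; then
    /opt/kafka/bin/kafka-topics.sh --create \
        --bootstrap-server localhost:9092 \
        --topic my-topic \
        --partitions 3 \
        --replication-factor "$RF" \
        --config min.insync.replicas="$MIN_ISR"
fi
echo "$MIN_ISR"
```

With this setting, a producer using acks=all can tolerate the loss of one replica without losing the ability to write.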
