Chapter 18. Upgrading Streams for Apache Kafka and Kafka

Upgrade your Kafka cluster with no downtime. Streams for Apache Kafka 2.7 supports and uses Apache Kafka version 3.7.0. Kafka 3.6.0 is supported only for the purpose of upgrading to Streams for Apache Kafka 2.7. You upgrade to the latest supported version of Kafka when you install the latest version of Streams for Apache Kafka.

18.1. Upgrade prerequisites

Before you begin the upgrade process, make sure you are familiar with any upgrade changes described in the Streams for Apache Kafka 2.7 on Red Hat Enterprise Linux Release Notes.

Note

Refer to the documentation supporting a specific version of Streams for Apache Kafka for information on how to upgrade to that version.

18.2. Updating Kafka versions

Upgrading Kafka when using ZooKeeper for cluster management requires updates to the Kafka version (Kafka.spec.kafka.version) and its inter-broker protocol version (inter.broker.protocol.version) in the configuration of the Kafka resource. Each version of Kafka has a compatible version of the inter-broker protocol, which the brokers use to communicate with each other. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the following table. The inter-broker protocol version is set cluster-wide in the Kafka resource. To change it, edit the inter.broker.protocol.version property in Kafka.spec.kafka.config.

The following table shows the differences between Kafka versions:

Table 18.1. Kafka version differences
Streams for Apache Kafka version | Kafka version | Inter-broker protocol version | Log message format version | ZooKeeper version
2.7                              | 3.7.0         | 3.7                           | 3.7                        | 3.8.3
2.6                              | 3.6.0         | 3.6                           | 3.6                        | 3.8.3

  • Kafka 3.7.0 is supported for production use.
  • Kafka 3.6.0 is supported only for the purpose of upgrading to Streams for Apache Kafka 2.7.
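On Red Hat Enterprise Linux, the inter-broker protocol and log message format versions are set per broker in /opt/kafka/config/server.properties. A minimal sketch for checking what a broker is currently configured with (show_versions is an illustrative helper, assuming a plain key=value properties file):

```shell
# show_versions FILE: print the protocol/format versions a broker is configured with.
# Illustrative helper; the last uncommented assignment wins, matching how Java
# properties files are parsed.
show_versions() {
  local props=$1 key value
  for key in inter.broker.protocol.version log.message.format.version; do
    value=$(grep "^${key}=" "$props" | tail -n1 | cut -d= -f2)
    echo "${key}=${value:-<not set>}"
  done
}

# e.g. show_versions /opt/kafka/config/server.properties
```

If a key is not set, the broker derives a default from the Kafka version in use.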

Log message format version

When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the message format they were encoded with.

The properties used to set a specific message format version are as follows:

  • message.format.version property for topics
  • log.message.format.version property for Kafka brokers

From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don’t need to be set. The values reflect the Kafka version used.

When upgrading to Kafka 3.0.0 or higher, you can remove these settings when you update the inter.broker.protocol.version. Otherwise, you can set the message format version based on the Kafka version you are upgrading to.

The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration.
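That fallback can be sketched as a small helper: a topic-level message.format.version wins, otherwise the broker's log.message.format.version applies. The effective_format_version function below is an illustrative helper reading plain key=value files, not a Kafka tool; in practice you would read topic overrides with kafka-configs.sh.

```shell
# effective_format_version BROKER_PROPS TOPIC_PROPS:
# print the message format version a topic effectively uses.
# Illustrative helper: both arguments are plain key=value files.
effective_format_version() {
  local broker_props=$1 topic_props=$2 topic_v broker_v
  topic_v=$(grep '^message\.format\.version=' "$topic_props" 2>/dev/null | tail -n1 | cut -d= -f2)
  broker_v=$(grep '^log\.message\.format\.version=' "$broker_props" 2>/dev/null | tail -n1 | cut -d= -f2)
  # the topic override takes precedence over the broker default
  echo "${topic_v:-$broker_v}"
}
```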

Rolling updates from Kafka version changes

The Cluster Operator initiates rolling updates to Kafka brokers when the Kafka version is updated. Further rolling updates depend on the configuration for inter.broker.protocol.version and log.message.format.version.

If Kafka.spec.kafka.config contains…​ | The Cluster Operator initiates…​
Both inter.broker.protocol.version and log.message.format.version | A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by log.message.format.version. Changing each triggers a further rolling update.
Either inter.broker.protocol.version or log.message.format.version | Two rolling updates.
Neither inter.broker.protocol.version nor log.message.format.version | Two rolling updates.

Important

From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set. The log.message.format.version property for brokers and the message.format.version property for topics are deprecated and will be removed in a future release of Kafka.

As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper.

  • A single rolling update occurs even if the ZooKeeper version is unchanged.
  • Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version.

18.3. Strategies for upgrading clients

Upgrading Kafka clients ensures that they benefit from the features, fixes, and improvements that are introduced in new versions of Kafka. Upgraded clients maintain compatibility with other upgraded Kafka components. The performance and stability of the clients might also be improved.

Consider the best approach for upgrading Kafka clients and brokers to ensure a smooth transition. The chosen upgrade strategy depends on whether you are upgrading brokers or clients first. Since Kafka 3.0, you can upgrade brokers and clients independently and in any order. The decision to upgrade clients or brokers first depends on several factors, such as the number of applications that need to be upgraded and how much downtime is tolerable.

If you upgrade clients before brokers, some new features may not work, as they are not yet supported by the brokers. However, brokers can handle producers and consumers running with different versions and supporting different log message format versions.

18.4. Upgrading Kafka brokers and ZooKeeper

Upgrade Kafka brokers and ZooKeeper on a host machine to use the latest version of Streams for Apache Kafka. You update the installation files, then configure and restart all Kafka brokers to use a new inter-broker protocol version. After performing these steps, data is transmitted between the Kafka brokers using the new inter-broker protocol version.

Note

From Kafka 3.0.0, message format version values are assumed to match the inter.broker.protocol.version and don’t need to be set. The values reflect the Kafka version used.

Prerequisites

Procedure

For each Kafka broker in your Streams for Apache Kafka cluster, one at a time:

  1. Download the Streams for Apache Kafka archive from the Streams for Apache Kafka software downloads page.

    Note

    If prompted, log in to your Red Hat account.

  2. On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-bin.zip file.

    mkdir /tmp/kafka
    unzip amq-streams-<version>-bin.zip -d /tmp/kafka
  3. If running, stop ZooKeeper and the Kafka broker running on the host.

    /opt/kafka/bin/zookeeper-server-stop.sh
    /opt/kafka/bin/kafka-server-stop.sh
    jcmd | grep zookeeper
    jcmd | grep kafka

    If you are running Kafka on a multi-node cluster, see Section 4.3, “Performing a graceful rolling restart of Kafka brokers”.
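    The jcmd checks above only take a single snapshot; to block until the JVMs have actually exited, they can be wrapped in a small polling loop (wait_for_exit is an illustrative helper; the retry and delay defaults are arbitrary):

    ```shell
    # wait_for_exit PATTERN CMD...: poll CMD until PATTERN no longer appears in
    # its output, or give up after WAIT_TRIES attempts. Illustrative helper.
    wait_for_exit() {
      local pattern=$1; shift
      local tries=${WAIT_TRIES:-30} delay=${WAIT_DELAY:-1}
      while [ "$tries" -gt 0 ]; do
        "$@" | grep -q "$pattern" || return 0   # pattern gone: the process has exited
        tries=$((tries - 1))
        sleep "$delay"
      done
      return 1   # still running after the allotted attempts
    }

    # e.g. wait_for_exit kafka jcmd && wait_for_exit zookeeper jcmd
    ```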

  4. Delete the libs and bin directories from your existing installation:

    rm -rf /opt/kafka/libs /opt/kafka/bin
  5. Copy the libs and bin directories from the temporary directory:

    cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/
    cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/
  6. If required, update the configuration files in the config directory to reflect any changes in the new versions.
  7. Delete the temporary directory.

    rm -r /tmp/kafka
  8. Edit the /opt/kafka/config/server.properties properties file.

    Set the inter.broker.protocol.version and log.message.format.version properties to the current version.

    For example, the current version is 3.6 if upgrading from Kafka version 3.6.0 to 3.7.0:

    inter.broker.protocol.version=3.6
    log.message.format.version=3.6

    Use the correct version for the Kafka version you are upgrading from (3.5, 3.6, and so on). Leaving the inter.broker.protocol.version unchanged at the current setting ensures that the brokers can continue to communicate with each other throughout the upgrade.

    If the properties are not configured, add them with the current version.

    If you are upgrading from Kafka 3.0.0 or later, you only need to set the inter.broker.protocol.version.
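    The edit in this step can be scripted so each property is replaced if already present and appended if not (set_prop is an illustrative helper; 3.6 is the example current version from above):

    ```shell
    # set_prop FILE KEY VALUE: replace an existing KEY= assignment, or append one.
    # Illustrative helper for editing server.properties in place.
    set_prop() {
      local file=$1 key=$2 value=$3 esc
      esc=$(printf '%s' "$key" | sed 's/\./\\./g')   # treat dots in the key literally
      if grep -q "^${esc}=" "$file"; then
        sed -i "s/^${esc}=.*/${key}=${value}/" "$file"
      else
        echo "${key}=${value}" >> "$file"
      fi
    }

    # On the broker host, before restarting (3.6 is the version you are upgrading from):
    #   set_prop /opt/kafka/config/server.properties inter.broker.protocol.version 3.6
    #   set_prop /opt/kafka/config/server.properties log.message.format.version 3.6
    ```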

  9. Restart the updated ZooKeeper and Kafka broker:

    /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties
    /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties

    The Kafka broker and ZooKeeper start using the binaries for the latest Kafka version.

    For information on restarting brokers in a multi-node cluster, see Section 4.3, “Performing a graceful rolling restart of Kafka brokers”.

  10. Verify that the restarted Kafka broker has caught up with the partition replicas it is following.

    Use the kafka-topics.sh tool to ensure that all replicas contained in the broker are back in sync. For instructions, see Listing and describing topics.

    In the next steps, update your Kafka brokers to use the new inter-broker protocol version.

    Update each broker, one at a time.

    Warning

    Downgrading Streams for Apache Kafka is not possible after completing the following steps.

  11. Set the inter.broker.protocol.version property to 3.7 in the /opt/kafka/config/server.properties file:

    inter.broker.protocol.version=3.7
  12. On the command line, stop the Kafka broker that you modified:

    /opt/kafka/bin/kafka-server-stop.sh
  13. Check that Kafka is not running:

    jcmd | grep kafka
  14. Restart the Kafka broker that you modified:

    /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
  15. Check that Kafka is running:

    jcmd | grep kafka
  16. If you are upgrading from a version earlier than Kafka 3.0.0, set the log.message.format.version property to 3.7 in the /opt/kafka/config/server.properties file:

    log.message.format.version=3.7
  17. On the command line, stop the Kafka broker that you modified:

    /opt/kafka/bin/kafka-server-stop.sh
  18. Check that Kafka is not running:

    jcmd | grep kafka
  19. Restart the Kafka broker that you modified:

    /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
  20. Check that Kafka is running:

    jcmd | grep kafka
  21. Verify that the restarted Kafka broker has caught up with the partition replicas it is following.

    Use the kafka-topics.sh tool to ensure that all replicas contained in the broker are back in sync. For instructions, see Listing and describing topics.
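    The in-sync check can be automated by filtering kafka-topics.sh --describe output for partitions whose ISR is smaller than the replica list (a sketch that assumes the usual "Replicas: ... Isr: ..." column layout, which may differ between Kafka versions):

    ```shell
    # under_replicated: read `kafka-topics.sh --describe` output on stdin and print
    # only the partitions whose ISR list is shorter than the replica list.
    # Assumes the usual "Replicas: a,b,c ... Isr: a,b" columns.
    under_replicated() {
      awk '
        /Partition:/ {
          replicas = isr = ""
          for (i = 1; i <= NF; i++) {
            if ($i == "Replicas:") replicas = $(i + 1)
            if ($i == "Isr:")      isr      = $(i + 1)
          }
          if (replicas != "" && split(replicas, r, ",") > split(isr, s, ",")) print
        }'
    }

    # e.g. /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe | under_replicated
    ```

    Note that kafka-topics.sh --describe also accepts an --under-replicated-partitions option; the helper above is only useful when you want to post-process full --describe output yourself.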

  22. If it was used in the upgrade, remove the legacy log.message.format.version configuration from the server.properties file.

Upgrading client applications

Ensure all Kafka client applications are updated to use the new version of the client binaries as part of the upgrade process and verify their compatibility with the Kafka upgrade. If needed, coordinate with the team responsible for managing the client applications.

Tip

To check that a client is using the latest message format, use the kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec metric. The metric shows 0 if the latest message format is being used.

18.5. Upgrading Kafka components

Upgrade Kafka components on a host machine to use the latest version of Streams for Apache Kafka. You can use the Streams for Apache Kafka installation files to upgrade the following components:

  • Kafka Connect
  • MirrorMaker
  • Kafka Bridge (separate ZIP file)

Prerequisites

  • You are logged in to Red Hat Enterprise Linux as the kafka user.
  • You have downloaded the installation files.
  • You have upgraded Kafka.

    If a Kafka component is running on the same host as Kafka, you’ll also need to stop and start Kafka when upgrading.

Procedure

For each host running an instance of the Kafka component:

  1. Download the Streams for Apache Kafka or Kafka Bridge installation files from the Streams for Apache Kafka software downloads page.

    Note

    If prompted, log in to your Red Hat account.

  2. On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-bin.zip file.

    mkdir /tmp/kafka
    unzip amq-streams-<version>-bin.zip -d /tmp/kafka

    For Kafka Bridge, extract the amq-streams-<version>-bridge-bin.zip file.

  3. If running, stop the Kafka component running on the host.
  4. Delete the libs and bin directories from your existing installation:

    rm -rf /opt/kafka/libs /opt/kafka/bin
  5. Copy the libs and bin directories from the temporary directory:

    cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/
    cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/
  6. If required, update the configuration files in the config directory to reflect any changes in the new versions.
  7. Delete the temporary directory.

    rm -r /tmp/kafka
  8. Start the Kafka component using the appropriate script and properties files.

    Starting Kafka Connect in standalone mode

    /opt/kafka/bin/connect-standalone.sh \
    /opt/kafka/config/connect-standalone.properties <connector1>.properties \
    [<connector2>.properties ...]

    Starting Kafka Connect in distributed mode

    /opt/kafka/bin/connect-distributed.sh \
    /opt/kafka/config/connect-distributed.properties

    Starting MirrorMaker 2 in dedicated mode

    /opt/kafka/bin/connect-mirror-maker.sh \
    /opt/kafka/config/connect-mirror-maker.properties

    Starting Kafka Bridge

    su - kafka
    ./bin/kafka_bridge_run.sh \
    --config-file=<path>/application.properties

  9. Verify that the Kafka component is running, and producing or consuming data as expected.

    Verifying Kafka Connect in standalone mode is running

    jcmd | grep ConnectStandalone

    Verifying Kafka Connect in distributed mode is running

    jcmd | grep ConnectDistributed

    Verifying MirrorMaker 2 in dedicated mode is running

    jcmd | grep MirrorMaker

    Verifying Kafka Bridge is running by checking the log

    HTTP-Kafka Bridge started and listening on port 8080
    HTTP-Kafka Bridge bootstrap servers localhost:9092


© 2024 Red Hat, Inc.