
Chapter 20. Upgrading Streams for Apache Kafka and Kafka


Upgrade your Kafka cluster with no downtime. Streams for Apache Kafka 2.9 supports and uses Apache Kafka version 3.9.0. Kafka 3.8.0 is supported only for the purpose of upgrading to Streams for Apache Kafka 2.9. You upgrade to the latest supported version of Kafka when you install the latest version of Streams for Apache Kafka.

20.1. Upgrade prerequisites

Before you begin the upgrade process, make sure you are familiar with any upgrade changes described in the Streams for Apache Kafka 2.9 on Red Hat Enterprise Linux Release Notes.

Note

Refer to the documentation supporting a specific version of Streams for Apache Kafka for information on how to upgrade to that version.

20.2. Streams for Apache Kafka upgrade paths

Two upgrade paths are available for Streams for Apache Kafka.

Incremental upgrade
An incremental upgrade involves upgrading Streams for Apache Kafka from the previous minor version to version 2.9.
Multi-version upgrade
A multi-version upgrade involves upgrading an older version of Streams for Apache Kafka to version 2.9 within a single upgrade, skipping one or more intermediate versions. For example, you might upgrade from one LTS version to the next LTS version, skipping the intermediate releases.

The upgrade process is the same for both paths. In each case, you must make sure that the inter.broker.protocol.version is switched to the newer version.
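Before starting either path, it can help to confirm which inter-broker protocol version a broker is currently configured with. A minimal sketch, assuming Kafka is installed in the /opt/kafka/ directory used throughout this chapter:

```shell
# Show the inter-broker protocol version currently configured for this broker.
# Assumes Kafka is installed in /opt/kafka/; adjust the path for your setup.
grep '^inter.broker.protocol.version' /opt/kafka/config/server.properties
```

If the command prints nothing, the property is not set and the broker defaults to the protocol version of its Kafka binaries.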

20.3. Updating Kafka versions

Upgrading Kafka when using ZooKeeper for cluster management requires updates to the Kafka version (Kafka.spec.kafka.version) and its inter-broker protocol version (inter.broker.protocol.version) in the configuration of the Kafka resource. Each version of Kafka has a compatible version of the inter-broker protocol, which is used for communication between brokers. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the following table. The inter-broker protocol version is set cluster-wide in the Kafka resource. To change it, edit the inter.broker.protocol.version property in Kafka.spec.kafka.config.

The following table shows the differences between Kafka versions:

Table 20.1. Kafka version differences

| Streams for Apache Kafka version | Kafka version | Inter-broker protocol version | Log message format version | ZooKeeper version |
| 2.9                              | 3.9.0         | 3.9                           | 3.9                        | 3.8.4             |
| 2.8                              | 3.8.0         | 3.8                           | 3.8                        | 3.8.4             |

  • Kafka 3.9.0 is supported for production use.
  • Kafka 3.8.0 is supported only for the purpose of upgrading to Streams for Apache Kafka 2.9.

Log message format version

When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the message format they were encoded with.

The properties used to set a specific message format version are as follows:

  • message.format.version property for topics
  • log.message.format.version property for Kafka brokers

From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don’t need to be set. The values reflect the Kafka version used.

When upgrading to Kafka 3.0.0 or higher, you can remove these settings when you update the inter.broker.protocol.version. Otherwise, you can set the message format version based on the Kafka version you are upgrading to.

The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration.
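To illustrate the topic-level override, the following sketch uses the kafka-configs.sh tool; the bootstrap address (localhost:9092) and topic name (my-topic) are assumptions for this example:

```shell
# Set a topic-level message format version override (hypothetical topic name).
./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config message.format.version=3.8

# Confirm the override is in place.
./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --describe
```

Remember that from Kafka 3.0.0 this topic property is deprecated and is normally left unset.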

Rolling updates from Kafka version changes

The Cluster Operator initiates rolling updates to Kafka brokers when the Kafka version is updated. Further rolling updates depend on the configuration for inter.broker.protocol.version and log.message.format.version.

| If Kafka.spec.kafka.config contains…                                         | The Cluster Operator initiates…                                                                                                                                                                    |
| Both the inter.broker.protocol.version and the log.message.format.version    | A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by the log.message.format.version. Changing each triggers a further rolling update. |
| Either the inter.broker.protocol.version or the log.message.format.version   | Two rolling updates.                                                                                                                                                                               |
| Neither the inter.broker.protocol.version nor the log.message.format.version | Two rolling updates.                                                                                                                                                                               |

Important

From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set. The log.message.format.version property for brokers and the message.format.version property for topics are deprecated and will be removed in a future release of Kafka.

As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper.

  • A single rolling update occurs even if the ZooKeeper version is unchanged.
  • Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version.
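To confirm which ZooKeeper version is actually running after the update, one option is the srvr four-letter command. This is a sketch, not a required step; it assumes ZooKeeper listens on localhost:2181, that nc is available, and that srvr is allowlisted via the 4lw.commands.whitelist setting:

```shell
# Query the running ZooKeeper server for its version.
# Assumes localhost:2181 and that 'srvr' is allowlisted in 4lw.commands.whitelist.
echo srvr | nc localhost 2181 | grep -i 'version'
```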

20.4. Strategies for upgrading clients

Upgrading Kafka clients ensures that they benefit from the features, fixes, and improvements that are introduced in new versions of Kafka. Upgraded clients maintain compatibility with other upgraded Kafka components. The performance and stability of the clients might also be improved.

Consider the best approach for upgrading Kafka clients and brokers to ensure a smooth transition. The chosen upgrade strategy depends on whether you are upgrading brokers or clients first. Since Kafka 3.0, you can upgrade brokers and clients independently and in any order. The decision to upgrade clients or brokers first depends on several factors, such as the number of applications that need to be upgraded and how much downtime is tolerable.

If you upgrade clients before brokers, some new features might not work because the brokers do not yet support them. However, brokers can handle producers and consumers running different versions and supporting different log message format versions.

20.5. Upgrading Kafka brokers and ZooKeeper

Upgrade Kafka brokers and ZooKeeper on a host machine to use the latest version of Streams for Apache Kafka. You update the installation files, then configure and restart all Kafka brokers to use a new inter-broker protocol version. After performing these steps, data is transmitted between the Kafka brokers using the new inter-broker protocol version. For this setup, Kafka is installed in the /opt/kafka/ directory.

Note

From Kafka 3.0.0, message format version values are assumed to match the inter.broker.protocol.version and don’t need to be set. The values reflect the Kafka version used.

Prerequisites

Procedure

This procedure is divided into two phases:

  • First, you update the Kafka binaries and configure the inter.broker.protocol.version and log.message.format.version to the current Kafka version for each broker.
  • Then, you perform a rolling update of these properties across the cluster to the new Kafka version.

Phase 1: Update Kafka binaries and set protocol versions

For each Kafka broker in your cluster, perform the following steps one at a time. Complete all of these steps for a single broker before moving to the next broker.

  1. Download the Streams for Apache Kafka archive from the Streams for Apache Kafka software downloads page.

    Note

    If prompted, log in to your Red Hat account.

  2. On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-kafka-bin.zip file.

    mkdir /tmp/kafka
    unzip amq-streams-<version>-kafka-bin.zip -d /tmp/kafka
  3. If running, stop ZooKeeper and the Kafka broker running on the host.

    ./bin/zookeeper-server-stop.sh
    ./bin/kafka-server-stop.sh
    jcmd | grep zookeeper
    jcmd | grep kafka

    If you are running Kafka on a multi-node cluster, see Section 4.3, “Performing a graceful rolling restart of Kafka brokers”.

  4. Delete the libs and bin directories from your existing installation:

    rm -rf /opt/kafka/libs /opt/kafka/bin
  5. Copy the libs and bin directories from the temporary directory:

    cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/
    cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/
  6. If required, update the configuration files in the config directory to reflect any changes in the new versions.
  7. Delete the temporary directory.

    rm -r /tmp/kafka
  8. Edit the ./config/server.properties properties file.

    Set the inter.broker.protocol.version and log.message.format.version properties to the current version.

    For example, the current version is 3.8 if upgrading from Kafka version 3.8.0 to 3.9.0:

    inter.broker.protocol.version=3.8
    log.message.format.version=3.8

    Use the correct version for the Kafka version you are upgrading from (3.7, 3.8, and so on). Leaving the inter.broker.protocol.version unchanged at the current setting ensures that the brokers can continue to communicate with each other throughout the upgrade.

    If the properties are not configured, add them with the current version.

    If you are upgrading from Kafka 3.0.0 or later, you only need to set the inter.broker.protocol.version.

  9. Restart the updated ZooKeeper and Kafka broker:

    ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties
    ./bin/kafka-server-start.sh -daemon ./config/server.properties

    The Kafka broker and ZooKeeper start using the binaries for the latest Kafka version.

    For information on restarting brokers in a multi-node cluster, see Section 4.3, “Performing a graceful rolling restart of Kafka brokers”.

  10. Verify that the restarted Kafka broker has caught up with the partition replicas it is following.

    Use the kafka-topics.sh tool to ensure that all replicas contained in the broker are back in sync. For instructions, see Listing and describing topics.
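One way to sketch this check, assuming a broker reachable at localhost:9092: the --under-replicated-partitions option prints only partitions whose followers are not in sync, so empty output means the broker has caught up.

```shell
# List partitions with out-of-sync replicas; empty output means all in sync.
# Assumes a broker reachable at localhost:9092.
UNDER_REPLICATED=$(./bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --under-replicated-partitions)
if [ -z "$UNDER_REPLICATED" ]; then
  echo "All replicas are in sync"
else
  echo "Replicas still catching up:"
  echo "$UNDER_REPLICATED"
fi
```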

Phase 2: Update protocol versions to new Kafka version

In the second phase, you update your Kafka brokers to use the new inter-broker protocol version and, if applicable, the new message format version.

Warning

Downgrading Streams for Apache Kafka is not possible after completing the following steps.

  1. Update inter.broker.protocol.version across all brokers.

    For each Kafka broker in your cluster, perform the following steps one at a time. Complete all of these steps for a single broker before moving to the next broker.

    1. Set the inter.broker.protocol.version property to 3.9 in the ./config/server.properties file:

      inter.broker.protocol.version=3.9
    2. On the command line, stop the Kafka broker that you modified:

      ./bin/kafka-server-stop.sh
    3. Check that Kafka is not running:

      jcmd | grep kafka
    4. Restart the Kafka broker that you modified:

      ./bin/kafka-server-start.sh -daemon ./config/server.properties
    5. Check that Kafka is running:

      jcmd | grep kafka
  2. Update log.message.format.version across all brokers (if applicable).

    If you are upgrading from a version earlier than Kafka 3.0.0, for each Kafka broker in your cluster, perform the following steps one at a time. Complete all of these steps for a single broker before moving to the next broker.

    1. Set the log.message.format.version property to 3.9 in the ./config/server.properties file:

      log.message.format.version=3.9
    2. On the command line, stop the Kafka broker that you modified:

      ./bin/kafka-server-stop.sh
    3. Check that Kafka is not running:

      jcmd | grep kafka
    4. Restart the Kafka broker that you modified:

      ./bin/kafka-server-start.sh -daemon ./config/server.properties
    5. Check that Kafka is running:

      jcmd | grep kafka
  3. Verify that the restarted Kafka broker has caught up with the partition replicas it is following.

    Use the kafka-topics.sh tool to ensure that all replicas contained in the broker are back in sync. For instructions, see Listing and describing topics.

  4. If it was used in the upgrade, remove the legacy log.message.format.version configuration from the server.properties file.
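The removal in the last step can be scripted. A minimal sketch, assuming the broker configuration lives at /opt/kafka/config/server.properties:

```shell
# Remove the deprecated log.message.format.version line, keeping a .bak backup.
# Assumes the configuration file is /opt/kafka/config/server.properties.
sed -i.bak '/^log\.message\.format\.version/d' /opt/kafka/config/server.properties
```

The broker reads the file only at startup, so the change is picked up on the next restart.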

Example of a 3-node cluster upgrade sequence

Consider a 3-node Kafka cluster. The upgrade sequence (from Kafka version 3.8 to 3.9) for the inter.broker.protocol.version (IBPV) and log.message.format.version (LMFV) properties would be as follows:

Phase 1: Update Kafka binaries and set protocol versions

Update each node sequentially.

  • Node 1: Update binaries to Streams for Apache Kafka 2.9 (Kafka 3.9.0), set IBPV=3.8, set LMFV=3.8, restart ZooKeeper/Kafka.
  • Node 2: Update binaries to Streams for Apache Kafka 2.9 (Kafka 3.9.0), set IBPV=3.8, set LMFV=3.8, restart ZooKeeper/Kafka.
  • Node 3: Update binaries to Streams for Apache Kafka 2.9 (Kafka 3.9.0), set IBPV=3.8, set LMFV=3.8, restart ZooKeeper/Kafka.

Phase 2 (step 1): Update inter.broker.protocol.version across all brokers

  • Node 1: Set IBPV=3.9, restart Kafka.
  • Node 2: Set IBPV=3.9, restart Kafka.
  • Node 3: Set IBPV=3.9, restart Kafka.

Phase 2 (step 2): Update log.message.format.version across all brokers (if applicable)

  • Node 1: Set LMFV=3.9, restart Kafka.
  • Node 2: Set LMFV=3.9, restart Kafka.
  • Node 3: Set LMFV=3.9, restart Kafka.
Note

The steps for updating log.message.format.version (LMFV) are only required when upgrading from a Kafka version earlier than 3.0.0. If you are upgrading from Kafka 3.0.0 or later, LMFV is assumed to match the inter.broker.protocol.version.

Upgrading client applications

Ensure all Kafka client applications are updated to use the new version of the client binaries as part of the upgrade process and verify their compatibility with the Kafka upgrade. If needed, coordinate with the team responsible for managing the client applications.

Tip

To check that a client is using the latest message format, use the kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec metric. The metric shows 0 if the latest message format is being used.

20.6. Upgrading Kafka components

Upgrade Kafka components on a host machine to use the latest version of Streams for Apache Kafka. You can use the Streams for Apache Kafka installation files to upgrade the following components:

  • Kafka Connect
  • MirrorMaker
  • Kafka Bridge (separate ZIP file)

For this setup, Kafka is installed in the /opt/kafka/ directory.

Prerequisites

  • You are logged in to Red Hat Enterprise Linux as the Kafka user.
  • You have downloaded the installation files.
  • You have upgraded Kafka.

    If a Kafka component is running on the same host as Kafka, you’ll also need to stop and start Kafka when upgrading.

Procedure

For each host running an instance of the Kafka component:

  1. Download the Streams for Apache Kafka or Kafka Bridge installation files from the Streams for Apache Kafka software downloads page.

    Note

    If prompted, log in to your Red Hat account.

  2. On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-kafka-bin.zip file.

    mkdir /tmp/kafka
    unzip amq-streams-<version>-kafka-bin.zip -d /tmp/kafka

    For Kafka Bridge, extract the amq-streams-<version>-bridge-bin.zip file.

  3. If running, stop the Kafka component running on the host.
  4. Delete the libs and bin directories from your existing installation:

    rm -rf ./libs ./bin
  5. Copy the libs and bin directories from the temporary directory:

    cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/
    cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/
  6. If required, update the configuration files in the config directory to reflect any changes in the new versions.
  7. Delete the temporary directory.

    rm -r /tmp/kafka
  8. Start the Kafka component using the appropriate script and properties files.

    Starting Kafka Connect in standalone mode

    ./bin/connect-standalone.sh \
    ./config/connect-standalone.properties <connector1>.properties
    [<connector2>.properties ...]

    Starting Kafka Connect in distributed mode

    ./bin/connect-distributed.sh \
    ./config/connect-distributed.properties

    Starting MirrorMaker 2 in dedicated mode

    ./bin/connect-mirror-maker.sh \
    ./config/connect-mirror-maker.properties

    Starting Kafka Bridge

    ./bin/kafka_bridge_run.sh \
    --config-file=<path>/application.properties

  9. Verify that the Kafka component is running, and producing or consuming data as expected.

    Verifying Kafka Connect in standalone mode is running

    jcmd | grep ConnectStandalone

    Verifying Kafka Connect in distributed mode is running

    jcmd | grep ConnectDistributed

    Verifying MirrorMaker 2 in dedicated mode is running

    jcmd | grep MirrorMaker

    Verifying Kafka Bridge is running by checking the log

    HTTP-Kafka Bridge started and listening on port 8080
    HTTP-Kafka Bridge bootstrap servers localhost:9092
