Chapter 18. Upgrading Streams for Apache Kafka and Kafka
Upgrade your Kafka cluster with no downtime. Streams for Apache Kafka 2.7 supports and uses Apache Kafka version 3.7.0. Kafka 3.6.0 is supported only for the purpose of upgrading to Streams for Apache Kafka 2.7. You upgrade to the latest supported version of Kafka when you install the latest version of Streams for Apache Kafka.
18.1. Upgrade prerequisites
Before you begin the upgrade process, make sure you are familiar with any upgrade changes described in the Streams for Apache Kafka 2.7 on Red Hat Enterprise Linux Release Notes.
18.2. Strategies for upgrading clients
Upgrading Kafka clients ensures that they benefit from the features, fixes, and improvements that are introduced in new versions of Kafka. Upgraded clients maintain compatibility with other upgraded Kafka components. The performance and stability of the clients might also be improved.
Consider the best approach for upgrading Kafka clients and brokers to ensure a smooth transition. The chosen upgrade strategy depends on whether you are upgrading brokers or clients first. Since Kafka 3.0, you can upgrade brokers and clients independently and in any order. The decision to upgrade clients or brokers first depends on several factors, such as the number of applications that need to be upgraded and how much downtime is tolerable.
If you upgrade clients before brokers, some new features may not work as they are not yet supported by brokers. However, brokers can handle producers and consumers running with different versions and supporting different log message versions.
18.3. Upgrading Kafka clusters
Upgrade a KRaft-based Kafka cluster to a newer supported Kafka version and KRaft metadata version. You update the installation files, then configure and restart all Kafka nodes. After performing these steps, data is transmitted between the Kafka brokers according to the new metadata version.
When downgrading a KRaft-based Kafka cluster to a lower version, such as moving from 3.7.0 to 3.6.0, ensure that the metadata version used by the Kafka cluster is supported by the Kafka version you want to downgrade to. The metadata version for the Kafka version you are downgrading from must not be higher than the version you are downgrading to.
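Before downgrading, you can compare the cluster's current metadata version against the target Kafka version. The check below is a minimal shell sketch: the 3.7 and 3.6 values are illustrative placeholders, and in practice you would read the current metadata version from the cluster (for example, with the kafka-features.sh tool) rather than hard-coding it.

```shell
#!/bin/sh
# Illustrative values: replace with the metadata version reported by your
# cluster and the Kafka version you plan to downgrade to.
current_metadata="3.7"
target_kafka="3.6"

# sort -V orders version strings numerically; the downgrade is only safe
# if the current metadata version is not higher than the target version.
highest=$(printf '%s\n' "$current_metadata" "$target_kafka" | sort -V | tail -n 1)

if [ "$highest" = "$target_kafka" ] || [ "$current_metadata" = "$target_kafka" ]; then
  echo "metadata version OK for downgrade"
else
  echo "lower the metadata version before downgrading"
fi
```

With the values shown, the script reports that the metadata version must be lowered first, because metadata version 3.7 is higher than the 3.6 downgrade target.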
Prerequisites
- You are logged in to Red Hat Enterprise Linux as the kafka user.
- Streams for Apache Kafka is installed on each host, and the configuration files are available.
- You have downloaded the installation files.
Procedure
For each Kafka node in your Streams for Apache Kafka cluster, perform the following steps one node at a time, starting with controller nodes and then brokers:
Download the Streams for Apache Kafka archive from the Streams for Apache Kafka software downloads page.
NoteIf prompted, log in to your Red Hat account.
On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-bin.zip file.

mkdir /tmp/kafka
unzip amq-streams-<version>-bin.zip -d /tmp/kafka
If running, stop the Kafka node running on the host:

/opt/kafka/bin/kafka-server-stop.sh

Confirm that the process has stopped:

jcmd | grep kafka
If you are running Kafka on a multi-node cluster, see Section 3.6, “Performing a graceful rolling restart of Kafka brokers”.
Delete the libs and bin directories from your existing installation:

rm -rf /opt/kafka/libs /opt/kafka/bin
Copy the libs and bin directories from the temporary directory:

cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/
cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/
If required, update the configuration files in the config directory to reflect any changes in the new Kafka version.

Delete the temporary directory:

rm -r /tmp/kafka
Restart the updated Kafka node:
Restarting nodes with combined roles
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties
Restarting controller nodes
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/controller.properties
Restarting nodes with broker roles
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/broker.properties
The Kafka broker starts using the binaries for the latest Kafka version.
For information on restarting brokers in a multi-node cluster, see Section 3.6, “Performing a graceful rolling restart of Kafka brokers”.
Check that Kafka is running:
jcmd | grep kafka
Update the Kafka metadata version:

/opt/kafka/bin/kafka-features.sh --bootstrap-server <broker_host>:<port> upgrade --metadata 3.7

Use the correct metadata version for the Kafka version you are upgrading to.
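To confirm the change, the kafka-features.sh tool also provides a describe subcommand that reports the finalized feature levels, including metadata.version. A sketch, assuming the same <broker_host>:<port> placeholders as above:

```shell
# List the finalized feature levels, including metadata.version, so you can
# confirm the upgrade took effect. Replace the placeholders with a real
# bootstrap address before running.
/opt/kafka/bin/kafka-features.sh --bootstrap-server <broker_host>:<port> describe
```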
Verify that a restarted Kafka broker has caught up with the partition replicas it is following. Use the kafka-topics.sh tool to ensure that all replicas contained in the broker are back in sync. For instructions, see Listing and describing topics.
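One quick way to perform this check (a suggested workflow, not a mandated step) is to list under-replicated partitions with kafka-topics.sh; empty output means all replicas are back in sync:

```shell
# Lists partitions whose in-sync replica set is smaller than the full
# replica set. No output means every replica has caught up. Replace the
# placeholders with a real bootstrap address before running.
/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> \
  --describe --under-replicated-partitions
```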
Upgrading client applications
Ensure all Kafka client applications are updated to use the new version of the client binaries as part of the upgrade process and verify their compatibility with the Kafka upgrade. If needed, coordinate with the team responsible for managing the client applications.
To check that a client is using the latest message format, use the kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec metric. The metric shows 0 if the latest message format is being used.
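As a sketch of how this metric might be read from the command line, assuming remote JMX is enabled on the broker (for example by setting JMX_PORT=9999 before starting it) and that your Kafka distribution still ships the JmxTool class:

```shell
# Query the produce-side message conversion rate over JMX.
# The JMX URL, port, and tool class name are assumptions; adjust them
# for your environment and Kafka version.
/opt/kafka/bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name 'kafka.server:type=BrokerTopicMetrics,name=ProduceMessageConversionsPerSec'
```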
18.4. Upgrading Kafka components
Upgrade Kafka components on a host machine to use the latest version of Streams for Apache Kafka. You can use the Streams for Apache Kafka installation files to upgrade the following components:
- Kafka Connect
- MirrorMaker
- Kafka Bridge (separate ZIP file)
Prerequisites
- You are logged in to Red Hat Enterprise Linux as the kafka user.
- You have downloaded the installation files.
- You have upgraded Kafka.
  If a Kafka component is running on the same host as Kafka, you'll also need to stop and start Kafka when upgrading.
Procedure
For each host running an instance of the Kafka component:
Download the Streams for Apache Kafka or Kafka Bridge installation files from the Streams for Apache Kafka software downloads page.
NoteIf prompted, log in to your Red Hat account.
On the command line, create a temporary directory and extract the contents of the amq-streams-<version>-bin.zip file.

mkdir /tmp/kafka
unzip amq-streams-<version>-bin.zip -d /tmp/kafka

For Kafka Bridge, extract the amq-streams-<version>-bridge-bin.zip file.

If running, stop the Kafka component running on the host.
Delete the libs and bin directories from your existing installation:

rm -rf /opt/kafka/libs /opt/kafka/bin
Copy the libs and bin directories from the temporary directory:

cp -r /tmp/kafka/kafka_<version>/libs /opt/kafka/
cp -r /tmp/kafka/kafka_<version>/bin /opt/kafka/
If required, update the configuration files in the config directory to reflect any changes in the new versions.

Delete the temporary directory:

rm -r /tmp/kafka
Start the Kafka component using the appropriate script and properties files.
Starting Kafka Connect in standalone mode
/opt/kafka/bin/connect-standalone.sh \
  /opt/kafka/config/connect-standalone.properties <connector1>.properties [<connector2>.properties ...]
Starting Kafka Connect in distributed mode
/opt/kafka/bin/connect-distributed.sh \
  /opt/kafka/config/connect-distributed.properties
Starting MirrorMaker 2 in dedicated mode
/opt/kafka/bin/connect-mirror-maker.sh \
  /opt/kafka/config/connect-mirror-maker.properties
Starting Kafka Bridge
su - kafka
./bin/kafka_bridge_run.sh --config-file=<path>/application.properties
Verify that the Kafka component is running, and producing or consuming data as expected.
Verifying Kafka Connect in standalone mode is running
jcmd | grep ConnectStandalone
Verifying Kafka Connect in distributed mode is running
jcmd | grep ConnectDistributed
Verifying MirrorMaker 2 in dedicated mode is running
jcmd | grep mirrorMaker
Verifying Kafka Bridge is running by checking the log
HTTP-Kafka Bridge started and listening on port 8080
HTTP-Kafka Bridge bootstrap servers localhost:9092