Chapter 10. Upgrading AMQ Streams

AMQ Streams can be upgraded to version 2.3 to take advantage of new features and enhancements, performance improvements, and security options.

As part of the upgrade, you upgrade Kafka to the latest supported version. Each Kafka release introduces new features, improvements, and bug fixes to your AMQ Streams deployment.

AMQ Streams can be downgraded to the previous version if you encounter issues with the newer version.

Released versions of AMQ Streams are available from the AMQ Streams software downloads page.

Upgrade downtime and availability

If topics are configured for high availability, upgrading AMQ Streams should not cause any downtime for consumers and producers that publish and read data from those topics. Highly available topics have a replication factor of at least 3 and partitions distributed evenly among the brokers.

Upgrading AMQ Streams triggers rolling updates, where all brokers are restarted in turn, at different stages of the process. During rolling updates, not all brokers are online, so overall cluster availability is temporarily reduced. A reduction in cluster availability increases the chance that a broker failure will result in lost messages.

10.1. AMQ Streams upgrade paths

Two upgrade paths are possible.

Incremental upgrade
Upgrading AMQ Streams from the previous minor version to version 2.3.

Multi-version upgrade
Upgrading AMQ Streams from an old version to version 2.3 within a single upgrade, skipping one or more intermediate versions. For example, you can upgrade from AMQ Streams 1.8 directly to AMQ Streams 2.3.

10.1.1. Supported Kafka versions

Decide which Kafka version to upgrade to before starting the AMQ Streams upgrade process. You can review supported Kafka versions in the AMQ Streams Supported Configurations.

  • Kafka 3.3.1 is supported for production use.
  • Kafka 3.2.3 is supported only for the purpose of upgrading to AMQ Streams 2.3.

You can only use a Kafka version supported by the version of AMQ Streams you are using. You can upgrade to a higher Kafka version as long as it is supported by your version of AMQ Streams. In some cases, you can also downgrade to a previous supported Kafka version.

10.1.2. Upgrading from an AMQ Streams version earlier than 1.7

If you are upgrading to the latest version of AMQ Streams from a version prior to version 1.7, do the following:

  1. Upgrade AMQ Streams to version 1.7 following the standard sequence.
  2. Convert AMQ Streams custom resources to v1beta2 using the Red Hat AMQ Streams API Conversion Tool provided with AMQ Streams 1.8.
  3. Do one of the following:

    • Upgrade to AMQ Streams 1.8 (where the ControlPlaneListener feature gate is disabled by default).
    • Upgrade to AMQ Streams 2.0 or 2.2 (where the ControlPlaneListener feature gate is enabled by default) with the ControlPlaneListener feature gate disabled.
  4. Enable the ControlPlaneListener feature gate.
  5. Upgrade to AMQ Streams 2.3 following the standard sequence.

AMQ Streams custom resources started using the v1beta2 API version in release 1.7. CRDs and custom resources must be converted before upgrading to AMQ Streams 1.8 or newer. For information on using the API conversion tool, see the AMQ Streams 1.7 upgrade documentation.

Note

As an alternative to first upgrading to version 1.7, you can install the custom resources from version 1.7 and then convert the resources.

The ControlPlaneListener feature is now permanently enabled in AMQ Streams. You must upgrade to a version of AMQ Streams where it is disabled, then enable it using the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.

Disabling the ControlPlaneListener feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: -ControlPlaneListener

Enabling the ControlPlaneListener feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: +ControlPlaneListener
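
Both snippets go in the env list of the Cluster Operator container in its Deployment. The following is a minimal sketch of the placement, assuming the standard install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file and the default container name:

Placement of the feature gate setting in the Cluster Operator Deployment

spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            # Use -ControlPlaneListener to disable or +ControlPlaneListener to enable the feature gate
            - name: STRIMZI_FEATURE_GATES
              value: -ControlPlaneListener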

10.2. Required upgrade sequence

To upgrade brokers and clients without downtime, you must complete the AMQ Streams upgrade procedures in the following order:

  1. Make sure your OpenShift cluster version is supported.

    AMQ Streams 2.3 is supported by OpenShift 4.8 to 4.12.

    You can upgrade OpenShift with minimal downtime.

  2. Upgrade the Cluster Operator.
  3. Upgrade all Kafka brokers and client applications to the latest supported Kafka version.
  4. Optional: Upgrade consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances.

10.3. Upgrading OpenShift with minimal downtime

If you are upgrading OpenShift, refer to the OpenShift upgrade documentation to check the upgrade path and the steps to upgrade your nodes correctly. Before upgrading OpenShift, check the supported versions for your version of AMQ Streams.

When performing your upgrade, you’ll want to keep your Kafka clusters available.

You can employ one of the following strategies:

  1. Configuring pod disruption budgets
  2. Rolling pods by one of these methods:

    1. Using the AMQ Streams Drain Cleaner
    2. Manually by applying an annotation to your pod

Whichever method you use to roll your pods, you must configure the pod disruption budget first.

For Kafka to stay operational, topics must also be replicated for high availability. This requires topic configuration that specifies a replication factor of at least 3 and the minimum number of in-sync replicas set to one less than the replication factor.

Kafka topic replicated for high availability

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 3
  config:
    # ...
    min.insync.replicas: 2
    # ...

In a highly available environment, the Cluster Operator maintains a minimum number of in-sync replicas for topics during the upgrade process so that there is no downtime.

10.3.1. Rolling pods using the AMQ Streams Drain Cleaner

You can use the AMQ Streams Drain Cleaner tool to evict pods from nodes during an upgrade. The AMQ Streams Drain Cleaner annotates pods with a rolling update pod annotation. This informs the Cluster Operator to perform a rolling update of an evicted pod.

A pod disruption budget allows only a specified number of pods to be unavailable at a given time. During planned maintenance of Kafka broker pods, a pod disruption budget ensures Kafka continues to run in a highly available environment.

You specify a pod disruption budget using a template customization for a Kafka component. By default, pod disruption budgets allow only a single pod to be unavailable at a given time.

To use the Drain Cleaner, set maxUnavailable to 0 (zero). Reducing the maximum pod disruption budget to zero prevents voluntary disruptions, so pods must be evicted manually.

Specifying a pod disruption budget

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    template:
      podDisruptionBudget:
        maxUnavailable: 0
# ...
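
With the Drain Cleaner deployed and the pod disruption budget set to zero, you trigger the eviction by draining the node that hosts the pods you want to roll. The following is a minimal sketch of a typical OpenShift drain invocation; the options shown are assumptions and should be adjusted for your environment:

Draining a node so that the Drain Cleaner can roll its Kafka pods

oc adm drain <node_name> --delete-emptydir-data --ignore-daemonsets --timeout=6000s --force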

10.3.2. Rolling pods manually while keeping topics available

During an upgrade, you can trigger a manual rolling update of pods through the Cluster Operator. A rolling update restarts the pods of a resource with new pods. As with using the AMQ Streams Drain Cleaner, you must set the maxUnavailable value to zero for the pod disruption budget.

You need to watch the pods that need to be drained. You then add a pod annotation to make the update.

Here, the annotation updates a Kafka broker.

Performing a manual rolling update on a Kafka broker pod

oc annotate pod <cluster_name>-kafka-<index> strimzi.io/manual-rolling-update=true

Replace <cluster_name> with the name of the cluster. Kafka broker pods are named <cluster_name>-kafka-<index>, where <index> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-kafka-0.
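
To see when the annotated pod is restarted, you can watch the pods in the cluster. This sketch assumes the strimzi.io/cluster label that AMQ Streams applies to the pods it manages:

Watching the pods during a manual rolling update

oc get pods -w -l strimzi.io/cluster=<cluster_name>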

10.4. Upgrading the Cluster Operator

Upgrade the Cluster Operator using the same method you used to deploy it initially.

Using installation files
If you deployed the Cluster Operator using the installation YAML files, perform your upgrade by modifying the Operator installation files, as described in Upgrading the Cluster Operator using installation files.

Using the OperatorHub
If you deployed AMQ Streams from the OperatorHub, use the Operator Lifecycle Manager (OLM) to change the update channel for the AMQ Streams operators to a new AMQ Streams version.

Updating the channel starts one of the following types of upgrade, depending on your chosen upgrade strategy:

  • An automatic upgrade
  • A manual upgrade that requires approval before installation begins

Note

If you subscribe to the stable channel, you can get automatic updates without changing channels. However, automatic updates are not recommended because you might miss pre-installation upgrade steps. Use automatic upgrades only on version-specific channels.

For more information on using the OperatorHub to upgrade Operators, see Upgrading installed Operators (OpenShift documentation).

10.4.1. Upgrading the Cluster Operator returns Kafka version error

If you upgrade the Cluster Operator and get an unsupported Kafka version error, your Kafka cluster deployment has an older Kafka version that is not supported by the new operator version. This error applies to all installation methods.

If this error occurs, upgrade Kafka to a supported Kafka version. Change the spec.kafka.version in the Kafka resource to the supported version.

You can use oc to check for error messages like this in the status of the Kafka resource.

Checking the Kafka status for errors

oc get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.conditions}'

Replace <kafka_cluster_name> with the name of your Kafka cluster and <namespace> with the OpenShift namespace where the pod is running.
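
To change the version, edit the Kafka resource, or patch it directly. The following is a minimal sketch that assumes you are moving to Kafka 3.3.1; substitute the version supported by your operator:

Changing the Kafka version in the Kafka resource

oc patch kafka <kafka_cluster_name> -n <namespace> --type=merge -p '{"spec":{"kafka":{"version":"3.3.1"}}}'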

10.4.2. Upgrading from AMQ Streams 1.7 or earlier using the OperatorHub

Action required if upgrading from AMQ Streams 1.7 or earlier using the OperatorHub

Before you upgrade the Red Hat Integration - AMQ Streams Operator to version 2.3, you need to make the following changes:

  • Convert custom resources and CRDs to v1beta2
  • Upgrade to a version of AMQ Streams where the ControlPlaneListener feature gate is disabled

These requirements are described in Section 10.1.2, “Upgrading from an AMQ Streams version earlier than 1.7”.

If you are upgrading from AMQ Streams 1.7 or earlier, do the following:

  1. Upgrade to AMQ Streams 1.7.
  2. Download the Red Hat AMQ Streams API Conversion Tool provided with AMQ Streams 1.8.
  3. Convert custom resources and CRDs to v1beta2.

    For more information, see the AMQ Streams 1.7 upgrade documentation.

  4. In the OperatorHub, delete version 1.7 of the Red Hat Integration - AMQ Streams Operator.
  5. If it also exists, delete version 2.3 of the Red Hat Integration - AMQ Streams Operator.

    If it does not exist, go to the next step.

    If the Approval Strategy for the AMQ Streams Operator was set to Automatic, version 2.3 of the operator might already exist in your cluster. If you did not convert custom resources and CRDs to the v1beta2 API version before the upgrade, the operator-managed custom resources and CRDs will be using the old API version. As a result, the 2.3 Operator is stuck in Pending status. In this situation, you need to delete version 2.3 of the Red Hat Integration - AMQ Streams Operator as well as version 1.7.

    If you delete both operators, reconciliations are paused until the new operator version is installed. Follow the next steps immediately so that any changes to custom resources are not delayed.

  6. In the OperatorHub, do one of the following:

    • Upgrade to version 1.8 of the Red Hat Integration - AMQ Streams Operator (where the ControlPlaneListener feature gate is disabled by default).
    • Upgrade to version 2.0 or 2.2 of the Red Hat Integration - AMQ Streams Operator (where the ControlPlaneListener feature gate is enabled by default) with the ControlPlaneListener feature gate disabled.
  7. Upgrade to version 2.3 of the Red Hat Integration - AMQ Streams Operator immediately.

    The installed 2.3 operator begins to watch the cluster and performs rolling updates. You might notice a temporary decrease in cluster performance during this process.

10.4.3. Upgrading the Cluster Operator using installation files

This procedure describes how to upgrade a Cluster Operator deployment to use AMQ Streams 2.3.

Follow this procedure if you deployed the Cluster Operator using the installation YAML files.

The availability of Kafka clusters managed by the Cluster Operator is not affected by the upgrade operation.

Note

Refer to the documentation supporting a specific version of AMQ Streams for information on how to upgrade to that version.

Prerequisites

Procedure

  1. Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). Any changes will be overwritten by the new version of the Cluster Operator.
  2. Update your custom resources to reflect the supported configuration options available for AMQ Streams version 2.3.
  3. Update the Cluster Operator.

    1. Modify the installation files for the new Cluster Operator version according to the namespace the Cluster Operator is running in.

      On Linux, use:

      sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

      On MacOS, use:

      sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
    2. If you modified one or more environment variables in your existing Cluster Operator Deployment, edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables.
  4. When you have an updated configuration, deploy it along with the rest of the installation resources:

    oc replace -f install/cluster-operator

    Wait for the rolling updates to complete.

  5. If the new Operator version no longer supports the Kafka version you are upgrading from, the Cluster Operator returns an error message to say the version is not supported. Otherwise, no error message is returned.

    • If the error message is returned, upgrade to a Kafka version that is supported by the new Cluster Operator version:

      1. Edit the Kafka custom resource.
      2. Change the spec.kafka.version property to a supported Kafka version.
    • If the error message is not returned, go to the next step. You will upgrade the Kafka version later.
  6. Get the image for the Kafka pod to ensure the upgrade was successful:

    oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'

    The image tag shows the new Operator version. For example:

    registry.redhat.io/amq7/amq-streams-kafka-33-rhel8:2.3.0

Your Cluster Operator is now upgraded to version 2.3, but the version of Kafka running in the cluster it manages is unchanged.

Following the Cluster Operator upgrade, you must perform a Kafka upgrade.

10.5. Upgrading Kafka

After you have upgraded your Cluster Operator to 2.3, the next step is to upgrade all Kafka brokers to the latest supported version of Kafka.

Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka brokers.

The Cluster Operator initiates rolling updates based on the Kafka cluster configuration:

  • If Kafka.spec.kafka.config contains both the inter.broker.protocol.version and the log.message.format.version, the Cluster Operator initiates a single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by log.message.format.version. Changing each will trigger a further rolling update.
  • If Kafka.spec.kafka.config contains either the inter.broker.protocol.version or the log.message.format.version, the Cluster Operator initiates two rolling updates.
  • If Kafka.spec.kafka.config contains no configuration for the inter.broker.protocol.version or the log.message.format.version, the Cluster Operator initiates two rolling updates.

Important

From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set. The log.message.format.version property for brokers and the message.format.version property for topics are deprecated and will be removed in a future release of Kafka.

As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper.

  • A single rolling update occurs even if the ZooKeeper version is unchanged.
  • Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version.

10.5.1. Kafka versions

Kafka’s log message format version and inter-broker protocol version specify, respectively, the log format version appended to messages and the version of the Kafka protocol used in a cluster. To ensure the correct versions are used, the upgrade process involves making configuration changes to existing Kafka brokers and code changes to client applications (consumers and producers).

The following table shows the differences between Kafka versions:

Table 10.1. Kafka version differences

Kafka version    Inter-broker protocol version    Log message format version    ZooKeeper version
3.2.0            3.2                              3.2                           3.6.3
3.2.1            3.2                              3.2                           3.6.3
3.2.3            3.2                              3.2                           3.6.3
3.3.1            3.3                              3.3                           3.6.3

Inter-broker protocol version

In Kafka, the network protocol used for inter-broker communication is called the inter-broker protocol. Each version of Kafka has a compatible version of the inter-broker protocol. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the preceding table.

The inter-broker protocol version is set cluster wide in the Kafka resource. To change it, you edit the inter.broker.protocol.version property in Kafka.spec.kafka.config.

Log message format version

When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the message format they were encoded with.

The properties used to set a specific message format version are as follows:

  • message.format.version property for topics
  • log.message.format.version property for Kafka brokers

From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don’t need to be set. The values reflect the Kafka version used.

When upgrading to Kafka 3.0.0 or higher, you can remove these settings when you update the inter.broker.protocol.version. Otherwise, set the message format version based on the Kafka version you are upgrading to.

The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration.
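
For example, the following sketch, modeled on the KafkaTopic example earlier in this chapter, pins the message format version for a single topic instead of inheriting the broker default. Only the config entry is significant here, and it applies only to Kafka versions where the topic-level property is still honored (it is ignored from Kafka 3.0.0):

Setting the message format version on a topic

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 3
  config:
    message.format.version: "3.2"
    # ...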

10.5.2. Strategies for upgrading clients

The right approach to upgrading your client applications (including Kafka Connect connectors) depends on your particular circumstances.

Consuming applications need to receive messages in a message format that they understand. You can ensure that this is the case in one of two ways:

  • By upgrading all the consumers for a topic before upgrading any of the producers.
  • By having the brokers down-convert messages to an older format.

Using broker down-conversion puts extra load on the brokers, so it is not ideal to rely on down-conversion for all topics for a prolonged period of time. For brokers to perform optimally, they should not be down-converting messages at all.

Broker down-conversion is configured in two ways:

  • The topic-level message.format.version configures it for a single topic.
  • The broker-level log.message.format.version is the default for topics that do not have the topic-level message.format.version configured.

Messages published to a topic in a new-version format will be visible to consumers, because brokers perform down-conversion when they receive messages from producers, not when they are sent to consumers.

Common strategies you can use to upgrade your clients are described as follows. Other strategies for upgrading client applications are also possible.

Important

The steps outlined in each strategy change slightly when upgrading to Kafka 3.0.0 or later. From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don’t need to be set.

Broker-level consumers first strategy

  1. Upgrade all the consuming applications.
  2. Change the broker-level log.message.format.version to the new version.
  3. Upgrade all the producing applications.

This strategy is straightforward, and avoids any broker down-conversion. However, it assumes that all consumers in your organization can be upgraded in a coordinated way, and it does not work for applications that are both consumers and producers. There is also a risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log so that you cannot revert to the previous consumer version.
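
Step 2 of this strategy is a single change to the Kafka resource. The following is a minimal sketch, assuming the new message format version is 3.3 and that your Kafka version still requires the property to be set explicitly (it is ignored from Kafka 3.0.0 onwards when inter.broker.protocol.version is 3.0 or higher):

Changing the broker-level message format version

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  kafka:
    config:
      log.message.format.version: "3.3"
      # ...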

Topic-level consumers first strategy

For each topic:

  1. Upgrade all the consuming applications.
  2. Change the topic-level message.format.version to the new version.
  3. Upgrade all the producing applications.

This strategy avoids any broker down-conversion, and means you can proceed on a topic-by-topic basis. It does not work for applications that are both consumers and producers of the same topic. Again, it has the risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log.

Topic-level consumers first strategy with down conversion

For each topic:

  1. Change the topic-level message.format.version to the old version (or rely on the topic defaulting to the broker-level log.message.format.version).
  2. Upgrade all the consuming and producing applications.
  3. Verify that the upgraded applications function correctly.
  4. Change the topic-level message.format.version to the new version.

This strategy requires broker down-conversion, but the load on the brokers is minimized because it is only required for a single topic (or small group of topics) at a time. It also works for applications that are both consumers and producers of the same topic. This approach ensures that the upgraded producers and consumers are working correctly before you commit to using the new message format version.

The main drawback of this approach is that it can be complicated to manage in a cluster with many topics and applications.

Note

It is also possible to apply multiple strategies. For example, for the first few applications and topics the "per-topic consumers first, with down conversion" strategy can be used. When this has proved successful, another, more efficient strategy can be used instead.

10.5.3. Kafka version and image mappings

When upgrading Kafka, consider your settings for the STRIMZI_KAFKA_IMAGES environment variable and the Kafka.spec.kafka.version property.

  • Each Kafka resource can be configured with a Kafka.spec.kafka.version.
  • The Cluster Operator’s STRIMZI_KAFKA_IMAGES environment variable provides a mapping between the Kafka version and the image to be used when that version is requested in a given Kafka resource.

    • If Kafka.spec.kafka.image is not configured, the default image for the given version is used.
    • If Kafka.spec.kafka.image is configured, the default image is overridden.
Warning

The Cluster Operator cannot validate that an image actually contains a Kafka broker of the expected version. Take care to ensure that the given image corresponds to the given Kafka version.
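
As an illustration of the mapping format, the environment variable pairs each supported Kafka version with a container image, one mapping per line. The image names below are assumptions for illustration (only the 3.3 image tag appears later in this chapter); use the values shipped with your AMQ Streams 2.3 installation files:

Example STRIMZI_KAFKA_IMAGES mapping in the Cluster Operator Deployment

env:
  - name: STRIMZI_KAFKA_IMAGES
    value: |
      3.2.3=registry.redhat.io/amq7/amq-streams-kafka-32-rhel8:2.3.0
      3.3.1=registry.redhat.io/amq7/amq-streams-kafka-33-rhel8:2.3.0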

10.5.4. Upgrading Kafka brokers and client applications

Upgrade an AMQ Streams Kafka cluster to the latest supported Kafka version and inter-broker protocol version.

You should also choose a strategy for upgrading clients. Kafka clients are upgraded in step 6 of this procedure.

Prerequisites

  • The Cluster Operator is up and running.
  • Before you upgrade the AMQ Streams Kafka cluster, check that the Kafka.spec.kafka.config properties of the Kafka resource do not contain configuration options that are not supported in the new Kafka version.

Procedure

  1. Update the Kafka cluster configuration:

    oc edit kafka <my_cluster>
  2. If configured, check that the inter.broker.protocol.version and log.message.format.version properties are set to the current version.

    For example, the current version is 3.2 if upgrading from Kafka version 3.2.3 to 3.3.1:

    kind: Kafka
    spec:
      # ...
      kafka:
        version: 3.2.3
        config:
          log.message.format.version: "3.2"
          inter.broker.protocol.version: "3.2"
          # ...

    If log.message.format.version and inter.broker.protocol.version are not configured, AMQ Streams automatically updates these versions to the current defaults after the update to the Kafka version in the next step.

    Note

    The value of log.message.format.version and inter.broker.protocol.version must be strings to prevent them from being interpreted as floating point numbers.

  3. Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the log.message.format.version and inter.broker.protocol.version at the defaults for the current Kafka version.

    Note

    Changing the kafka.version ensures that all brokers in the cluster will be upgraded to start using the new broker binaries. During this process, some brokers are using the old binaries while others have already upgraded to the new ones. Leaving the inter.broker.protocol.version unchanged at the current setting ensures that the brokers can continue to communicate with each other throughout the upgrade.

    For example, if upgrading from Kafka 3.2.3 to 3.3.1:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    spec:
      # ...
      kafka:
        version: 3.3.1                          # Kafka version is changed to the new version.
        config:
          log.message.format.version: "3.2"     # Message format version is unchanged.
          inter.broker.protocol.version: "3.2"  # Inter-broker protocol version is unchanged.
          # ...
    Warning

    You cannot downgrade Kafka if the inter.broker.protocol.version for the new Kafka version changes. The inter-broker protocol version determines the schemas used for persistent metadata stored by the broker, including messages written to __consumer_offsets. The downgraded cluster will not understand the messages.

  4. If the image for the Kafka cluster is defined in the Kafka custom resource, in Kafka.spec.kafka.image, update the image to point to a container image with the new Kafka version.

    See Kafka version and image mappings

  5. Save and exit the editor, then wait for rolling updates to complete.

    Check the progress of the rolling updates by watching the pod state transitions, and confirm that each broker pod is using the image for the new Kafka version:

    oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'

    The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka.

  6. Depending on your chosen strategy for upgrading clients, upgrade all client applications to use the new version of the client binaries.

    If required, set the version property for Kafka Connect and MirrorMaker to the new version of Kafka:

    1. For Kafka Connect, update KafkaConnect.spec.version.
    2. For MirrorMaker, update KafkaMirrorMaker.spec.version.
    3. For MirrorMaker 2.0, update KafkaMirrorMaker2.spec.version.
  7. If configured, update the Kafka resource to use the new inter.broker.protocol.version. Otherwise, go to step 9.

    For example, if upgrading to Kafka 3.3.1:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    spec:
      # ...
      kafka:
        version: 3.3.1
        config:
          log.message.format.version: "3.2"
          inter.broker.protocol.version: "3.3"
          # ...
  8. Wait for the Cluster Operator to update the cluster.
  9. If configured, update the Kafka resource to use the new log.message.format.version. Otherwise, go to step 10.

    For example, if upgrading to Kafka 3.3.1:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    spec:
      # ...
      kafka:
        version: 3.3.1
        config:
          log.message.format.version: "3.3"
          inter.broker.protocol.version: "3.3"
          # ...
    Important

    From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set.

  10. Wait for the Cluster Operator to update the cluster.

    • The Kafka cluster and clients are now using the new Kafka version.
    • The brokers are configured to send messages using the inter-broker protocol version and message format version of the new version of Kafka.

Following the Kafka upgrade, if required, you can upgrade consumers to use the incremental cooperative rebalance protocol.

10.6. Upgrading consumers to cooperative rebalancing

You can upgrade Kafka consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances instead of the default eager rebalance protocol. The new protocol was added in Kafka 2.4.0.

Consumers keep their partition assignments in a cooperative rebalance and only revoke them at the end of the process, if needed to achieve a balanced cluster. This reduces the unavailability of the consumer group or Kafka Streams application.

Note

Upgrading to the incremental cooperative rebalance protocol is optional. The eager rebalance protocol is still supported.

Prerequisites

Procedure

To upgrade a Kafka consumer to use the incremental cooperative rebalance protocol:

  1. Replace the Kafka clients .jar file with the new version.
  2. In the consumer configuration, append cooperative-sticky to the partition.assignment.strategy. For example, if the range strategy is set, change the configuration to range, cooperative-sticky (see the example after this procedure).
  3. Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart.
  4. Reconfigure each consumer in the group by removing the earlier partition.assignment.strategy from the consumer configuration, leaving only the cooperative-sticky strategy.
  5. Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart.
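
The following sketch illustrates steps 2 and 4 as consumer properties, using the fully qualified assignor class names from the Kafka clients library; adapt this to however your application sets its consumer configuration:

Consumer partition assignment strategy during the transition

# First rolling restart: both assignors are listed
partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor,org.apache.kafka.clients.consumer.CooperativeStickyAssignor

# Second rolling restart: only the cooperative-sticky assignor remains
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor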

To upgrade a Kafka Streams application to use the incremental cooperative rebalance protocol:

  1. Replace the Kafka Streams .jar file with the new version.
  2. In the Kafka Streams configuration, set the upgrade.from configuration parameter to the Kafka version you are upgrading from (for example, 2.3), as shown in the example after this procedure.
  3. Restart each of the stream processors (nodes) in turn.
  4. Remove the upgrade.from configuration parameter from the Kafka Streams configuration.
  5. Restart each consumer in the group in turn.
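
A sketch of the upgrade.from setting, shown here as a Kafka Streams properties entry; set it in whatever form your application builds its streams configuration, and remove it again in step 4:

Kafka Streams configuration during the first rolling restart

upgrade.from=2.3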