Chapter 25. Upgrading Streams for Apache Kafka

Upgrade your Streams for Apache Kafka installation to version 2.7 and benefit from new features, performance improvements, and enhanced security options. During the upgrade, Kafka is also updated to the latest supported version, introducing additional features and bug fixes to your Streams for Apache Kafka deployment.

Use the same method to upgrade the Cluster Operator as the initial method of deployment. For example, if you used the Streams for Apache Kafka installation files, modify those files to perform the upgrade. After you have upgraded your Cluster Operator to 2.7, the next step is to upgrade all Kafka nodes to the latest supported version of Kafka. Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka nodes.

If you encounter any issues with the new version, Streams for Apache Kafka can be downgraded to the previous version.

Released Streams for Apache Kafka versions are available from the Streams for Apache Kafka software downloads page.

Upgrade without downtime

For topics configured with high availability (replication factor of at least 3 and evenly distributed partitions), the upgrade process should not cause any downtime for consumers and producers.

The upgrade triggers rolling updates, where brokers are restarted one by one at different stages of the process. During this time, overall cluster availability is temporarily reduced, which may increase the risk of message loss in the event of a broker failure.

25.1. Required upgrade sequence

To upgrade brokers and clients without downtime, you must complete the Streams for Apache Kafka upgrade procedures in the following order:

  1. Make sure your OpenShift cluster version is supported.

    Streams for Apache Kafka 2.7 requires OpenShift 4.12 to 4.15.

    You can upgrade OpenShift with minimal downtime.
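
    For example, you can check the current version of your cluster using the oc CLI:

    oc version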

  2. Upgrade the Cluster Operator.
  3. Upgrade Kafka depending on the cluster configuration:

    1. If using Kafka in KRaft mode, update the Kafka version and spec.kafka.metadataVersion to upgrade all Kafka brokers and client applications.
    2. If using ZooKeeper-based Kafka, update the Kafka version and inter.broker.protocol.version to upgrade all Kafka brokers and client applications.
Note

From Streams for Apache Kafka 2.7, upgrades and downgrades between KRaft-based clusters are supported.

25.2. Streams for Apache Kafka upgrade paths

Two upgrade paths are available for Streams for Apache Kafka.

Incremental upgrade
An incremental upgrade involves upgrading Streams for Apache Kafka from the previous minor version to version 2.7.
Multi-version upgrade
A multi-version upgrade involves upgrading an older version of Streams for Apache Kafka to version 2.7 within a single upgrade, skipping one or more intermediate versions. For example, upgrading directly from Streams for Apache Kafka 2.3.0 to Streams for Apache Kafka 2.7 is possible.

25.2.1. Support for Kafka versions when upgrading

When upgrading Streams for Apache Kafka, it is important to ensure compatibility with the Kafka version being used.

Multi-version upgrades are possible even if the supported Kafka versions differ between the old and new versions. However, if you attempt to upgrade to a new Streams for Apache Kafka version that does not support the current Kafka version, an error indicating that the Kafka version is not supported is generated. In this case, you must upgrade the Kafka version as part of the Streams for Apache Kafka upgrade by changing the spec.kafka.version in the Kafka custom resource to the supported version for the new Streams for Apache Kafka version.
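
For example, a minimal sketch of setting the version in the Kafka custom resource (the my-cluster name is illustrative):

Updating spec.kafka.version to a supported Kafka version

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.7.0
    # ...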

25.2.2. Upgrading from a Streams for Apache Kafka version earlier than 1.7

If you are upgrading to the latest version of Streams for Apache Kafka from a version prior to version 1.7, do the following:

  1. Upgrade Streams for Apache Kafka to version 1.7 following the standard sequence.
  2. Convert Streams for Apache Kafka custom resources to v1beta2 using the API conversion tool provided with Streams for Apache Kafka.
  3. Do one of the following:

    • Upgrade Streams for Apache Kafka to a version between 1.8 and 2.0 (where the ControlPlaneListener feature gate is disabled by default).
    • Upgrade Streams for Apache Kafka to a version between 2.1 and 2.2 (where the ControlPlaneListener feature gate is enabled by default) with the ControlPlaneListener feature gate disabled.
  4. Enable the ControlPlaneListener feature gate.
  5. Upgrade to Streams for Apache Kafka 2.7 following the standard sequence.

Streams for Apache Kafka custom resources started using the v1beta2 API version in release 1.7. CRDs and custom resources must be converted before upgrading to Streams for Apache Kafka 1.8 or newer. For information on using the API conversion tool, see the Streams for Apache Kafka 1.7 upgrade documentation.

Note

As an alternative to first upgrading to version 1.7, you can install the custom resources from version 1.7 and then convert the resources.

The ControlPlaneListener feature gate is now permanently enabled in Streams for Apache Kafka. You must upgrade to a version of Streams for Apache Kafka where it can be disabled, then enable it using the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.

Disabling the ControlPlaneListener feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: -ControlPlaneListener

Enabling the ControlPlaneListener feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: +ControlPlaneListener

25.2.3. Kafka version and image mappings

When upgrading Kafka, consider your settings for the STRIMZI_KAFKA_IMAGES environment variable and the Kafka.spec.kafka.version property.

  • Each Kafka resource can be configured with a Kafka.spec.kafka.version, which defaults to the latest supported Kafka version (3.7.0) if not specified.
  • The Cluster Operator’s STRIMZI_KAFKA_IMAGES environment variable provides a mapping (<kafka_version>=<image>) between a Kafka version and the image to be used when a specific Kafka version is requested in a given Kafka resource. For example, 3.7.0=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0. A configuration sketch follows this list.

    • If Kafka.spec.kafka.image is not configured, the default image for the given version is used.
    • If Kafka.spec.kafka.image is configured, the default image is overridden.
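
A minimal sketch of how this mapping might appear in the Cluster Operator Deployment; the 3.6.0 image name is an assumption for illustration:

env:
  - name: STRIMZI_KAFKA_IMAGES
    value: |
      3.6.0=registry.redhat.io/amq-streams/kafka-36-rhel9:2.7.0
      3.7.0=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0
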
Warning

The Cluster Operator cannot validate that an image actually contains a Kafka broker of the expected version. Take care to ensure that the given image corresponds to the given Kafka version.

25.3. Strategies for upgrading clients

Upgrading Kafka clients ensures that they benefit from the features, fixes, and improvements that are introduced in new versions of Kafka. Upgraded clients maintain compatibility with other upgraded Kafka components. The performance and stability of the clients might also be improved.

Consider the best approach for upgrading Kafka clients and brokers to ensure a smooth transition. The chosen upgrade strategy depends on whether you are upgrading brokers or clients first. Since Kafka 3.0, you can upgrade brokers and clients independently and in any order. The decision to upgrade clients or brokers first depends on several factors, such as the number of applications that need to be upgraded and how much downtime is tolerable.

If you upgrade clients before brokers, some new features may not work as they are not yet supported by brokers. However, brokers can handle producers and consumers running with different versions and supporting different log message versions.

25.4. Upgrading OpenShift with minimal downtime

If you are upgrading OpenShift, refer to the OpenShift upgrade documentation to check the upgrade path and the steps to upgrade your nodes correctly. Before upgrading OpenShift, check the supported versions for your version of Streams for Apache Kafka.

When performing your upgrade, ensure the availability of your Kafka clusters by following these steps:

  1. Configure pod disruption budgets
  2. Roll pods using one of these methods:

    1. Use the Streams for Apache Kafka Drain Cleaner (recommended)
    2. Apply an annotation to your pods to roll them manually

For Kafka to stay operational, topics must also be replicated for high availability. This requires topic configuration that specifies a replication factor of at least 3 and a minimum number of in-sync replicas set to 1 less than the replication factor.

Kafka topic replicated for high availability

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 3
  config:
    # ...
    min.insync.replicas: 2
    # ...

In a highly available environment, the Cluster Operator maintains a minimum number of in-sync replicas for topics during the upgrade process so that there is no downtime.

25.4.1. Rolling pods using Drain Cleaner

When you use the Streams for Apache Kafka Drain Cleaner to evict pods during an OpenShift upgrade, it annotates the pods to be evicted with a manual rolling update annotation. The annotation informs the Cluster Operator to perform a rolling update of those pods, moving them off the OpenShift node that is being upgraded.

For more information, see Chapter 23, Evicting pods with the Streams for Apache Kafka Drain Cleaner.

25.4.2. Rolling pods manually (alternative to Drain Cleaner)

As an alternative to using the Drain Cleaner to roll pods, you can trigger a manual rolling update of pods through the Cluster Operator. You annotate Pod resources, and the rolling update restarts the annotated pods with new pods. To replicate the operation of the Drain Cleaner by keeping topics available, you must also set the maxUnavailable value to zero for the pod disruption budget. Reducing the pod disruption budget to zero prevents OpenShift from evicting pods automatically.

Specifying a pod disruption budget

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    template:
      podDisruptionBudget:
        maxUnavailable: 0
# ...

Watch the pods that need to be drained, then add a pod annotation to trigger the rolling update.
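
For example, to watch the pods in the namespace where the cluster is deployed (the myproject namespace is illustrative):

oc get pods -n myproject -w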

Here, the annotation updates a Kafka pod named my-cluster-pool-a-1.

Performing a manual rolling update on a Kafka pod

oc annotate pod my-cluster-pool-a-1 strimzi.io/manual-rolling-update="true"

25.5. Upgrading the Cluster Operator

Use the same method to upgrade the Cluster Operator as the initial method of deployment.

25.5.1. Upgrading the Cluster Operator using installation files

This procedure describes how to upgrade a Cluster Operator deployment to use Streams for Apache Kafka 2.7.

Follow this procedure if you deployed the Cluster Operator using the installation YAML files.

The availability of Kafka clusters managed by the Cluster Operator is not affected by the upgrade operation.

Note

Refer to the documentation supporting a specific version of Streams for Apache Kafka for information on how to upgrade to that version.

Procedure

  1. Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). Any changes will be overwritten by the new version of the Cluster Operator.
  2. Update your custom resources to reflect the supported configuration options available for Streams for Apache Kafka version 2.7.
  3. Update the Cluster Operator.

    1. Modify the installation files for the new Cluster Operator version according to the namespace the Cluster Operator is running in.

      On Linux, use:

      sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

      On macOS, use:

      sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
    2. If you modified one or more environment variables in your existing Cluster Operator Deployment, edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables.
  4. When you have an updated configuration, deploy it along with the rest of the installation resources:

    oc replace -f install/cluster-operator

    Wait for the rolling updates to complete.
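
    To watch the Cluster Operator pod roll, you can filter on its label. This sketch assumes the default name=strimzi-cluster-operator label used by the installation files:

    oc get pods -l name=strimzi-cluster-operator -w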

  5. If the new Operator version no longer supports the Kafka version you are upgrading from, the Cluster Operator returns an error message to say the version is not supported. Otherwise, no error message is returned.

    • If the error message is returned, upgrade to a Kafka version that is supported by the new Cluster Operator version:

      1. Edit the Kafka custom resource.
      2. Change the spec.kafka.version property to a supported Kafka version.
    • If the error message is not returned, go to the next step. You will upgrade the Kafka version later.
  6. Get the image for the Kafka pod to ensure the upgrade was successful:

    oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'

    The image tag shows the new Streams for Apache Kafka version followed by the Kafka version:

    registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0

    You can also check the upgrade has completed successfully from the status of the Kafka resource.

The Cluster Operator is upgraded to version 2.7, but the version of Kafka running in the cluster it manages is unchanged.

25.5.2. Upgrading the Cluster Operator using the OperatorHub

If you deployed Streams for Apache Kafka from OperatorHub, use the Operator Lifecycle Manager (OLM) to change the update channel for the Streams for Apache Kafka operators to a new Streams for Apache Kafka version.

Updating the channel starts one of the following types of upgrade, depending on your chosen upgrade strategy:

  • An automatic upgrade is initiated
  • A manual upgrade that requires approval before installation begins
Note

If you subscribe to the stable channel, you can get automatic updates without changing channels. However, enabling automatic updates is not recommended because you might miss required pre-installation upgrade steps. Use automatic upgrades only on version-specific channels.

For more information on using OperatorHub to upgrade Operators, see the Upgrading installed Operators (OpenShift documentation).

25.5.3. Migrating to unidirectional topic management

When deploying the Topic Operator to manage topics, the Cluster Operator enables unidirectional topic management by default. If you are switching from a version of Streams for Apache Kafka that used bidirectional topic management, there are some cleanup tasks to perform after upgrading the Cluster Operator. For more information, see Section 10.9, “Switching between Topic Operator modes”.

25.5.4. Upgrading the Cluster Operator returns Kafka version error

If you upgrade the Cluster Operator to a version that does not support the current version of Kafka you are using, you get an unsupported Kafka version error. This error applies to all installation methods and means that you must upgrade Kafka to a supported Kafka version. Change the spec.kafka.version in the Kafka resource to the supported version.

You can use oc to check for error messages like this in the status of the Kafka resource.

Checking the Kafka status for errors

oc get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.conditions}'

Replace <kafka_cluster_name> with the name of your Kafka cluster and <namespace> with the OpenShift namespace where the pod is running.

25.5.5. Upgrading from Streams for Apache Kafka 1.7 or earlier using the OperatorHub

Before you upgrade the Streams for Apache Kafka Operator to version 2.7, you need to make the following changes:

  • Convert custom resources and CRDs to v1beta2
  • Upgrade to a version of Streams for Apache Kafka where the ControlPlaneListener feature gate is disabled

These requirements are described in Section 25.2.2, “Upgrading from a Streams for Apache Kafka version earlier than 1.7”.

If you are upgrading from Streams for Apache Kafka 1.7 or earlier, do the following:

  1. Upgrade to Streams for Apache Kafka 1.7.
  2. Download the Red Hat Streams for Apache Kafka API Conversion Tool provided with Streams for Apache Kafka 1.8 from the Streams for Apache Kafka software downloads page.
  3. Convert custom resources and CRDs to v1beta2.

    For more information, see the Streams for Apache Kafka 1.7 upgrade documentation.

  4. In the OperatorHub, delete version 1.7 of the Streams for Apache Kafka Operator.
  5. If it also exists, delete version 2.7 of the Streams for Apache Kafka Operator.

    If it does not exist, go to the next step.

    If the Approval Strategy for the Streams for Apache Kafka Operator was set to Automatic, version 2.7 of the operator might already exist in your cluster. If you did not convert custom resources and CRDs to the v1beta2 API version before the upgrade, the operator-managed custom resources and CRDs will still be using the old API version. As a result, the 2.7 Operator is stuck in Pending status. In this situation, you need to delete version 2.7 of the Streams for Apache Kafka Operator as well as version 1.7.

    If you delete both operators, reconciliations are paused until the new operator version is installed. Follow the next steps immediately so that any changes to custom resources are not delayed.

  6. In the OperatorHub, do one of the following:

    • Upgrade to version 1.8 of the Streams for Apache Kafka Operator (where the ControlPlaneListener feature gate is disabled by default).
    • Upgrade to version 2.1 or 2.2 of the Streams for Apache Kafka Operator (where the ControlPlaneListener feature gate is enabled by default) with the ControlPlaneListener feature gate disabled.
  7. Upgrade to version 2.7 of the Streams for Apache Kafka Operator immediately.

    The installed 2.7 operator begins to watch the cluster and performs rolling updates. You might notice a temporary decrease in cluster performance during this process.

25.6. Upgrading KRaft-based Kafka clusters and client applications

Upgrade a KRaft-based Streams for Apache Kafka cluster to a newer supported Kafka version and KRaft metadata version.

You should also choose a strategy for upgrading clients. Kafka clients are upgraded in step 6 of this procedure.

Note

Refer to the Apache Kafka documentation for the latest on support for KRaft-based upgrades.

Prerequisites

  • The Cluster Operator is up and running.
  • Before you upgrade the Streams for Apache Kafka cluster, check that the properties of the Kafka resource do not contain configuration options that are not supported in the new Kafka version.

Procedure

  1. Update the Kafka cluster configuration:

    oc edit kafka <kafka_configuration_file>
  2. If configured, check that the current spec.kafka.metadataVersion is set to a version supported by the version of Kafka you are upgrading to.

    For example, the current version is 3.6-IV2 if upgrading from Kafka version 3.6.0 to 3.7.0:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        replicas: 3
        metadataVersion: 3.6-IV2
        version: 3.6.0
        # ...

    If metadataVersion is not configured, Streams for Apache Kafka automatically updates it to the current default after the update to the Kafka version in the next step.

    Note

    The value of metadataVersion must be a string to prevent it from being interpreted as a floating point number.

  3. Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the metadataVersion at the default for the current Kafka version.

    Note

    Changing the kafka.version ensures that all brokers in the cluster are upgraded to start using the new broker binaries. During this process, some brokers are using the old binaries while others have already upgraded to the new ones. Leaving the metadataVersion unchanged at the current setting ensures that the Kafka brokers and controllers can continue to communicate with each other throughout the upgrade.

    For example, if upgrading from Kafka 3.6.0 to 3.7.0:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        replicas: 3
        metadataVersion: 3.6-IV2  # metadata version is unchanged
        version: 3.7.0            # Kafka version is changed to the new version
        # ...
  4. If the image for the Kafka cluster is defined in Kafka.spec.kafka.image of the Kafka custom resource, update the image to point to a container image with the new Kafka version.

    See Kafka version and image mappings

  5. Save and exit the editor, then wait for the rolling updates to upgrade the Kafka nodes to complete.

    Check the progress of the rolling updates by verifying the container image that each pod is using:

    oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'

    The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka.

  6. Depending on your chosen strategy for upgrading clients, upgrade all client applications to use the new version of the client binaries.

    If required, set the version property for Kafka Connect and MirrorMaker to the new version of Kafka (a configuration sketch follows this list):

    1. For Kafka Connect, update KafkaConnect.spec.version.
    2. For MirrorMaker, update KafkaMirrorMaker.spec.version.
    3. For MirrorMaker 2, update KafkaMirrorMaker2.spec.version.

      Note

      If you are using custom images that are built manually, you must rebuild those images to ensure that they are up-to-date with the latest Streams for Apache Kafka base image. For example, if you created a container image from the base Kafka Connect image, update the Dockerfile to point to the latest base image and build configuration.
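
      A minimal sketch of setting the version for Kafka Connect (the my-connect-cluster name is illustrative; the MirrorMaker resources follow the same pattern):

      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaConnect
      metadata:
        name: my-connect-cluster
      spec:
        version: 3.7.0
        # ...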

  7. Verify that the upgraded client applications work correctly with the new Kafka brokers.
  8. If configured, update the Kafka resource to use the new metadataVersion. Otherwise, go to step 9.

    For example, if upgrading to Kafka 3.7.0:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        replicas: 3
        metadataVersion: 3.7-IV2
        version: 3.7.0
        # ...
    Warning

    Exercise caution when changing the metadataVersion, as downgrading may not be possible. You cannot downgrade Kafka if the metadataVersion for the new Kafka version is higher than the metadata version supported by the Kafka version you wish to downgrade to. However, understand the potential implications for support and compatibility when maintaining an older metadata version.

  9. Wait for the Cluster Operator to update the cluster.

    You can check the upgrade has completed successfully from the status of the Kafka resource.

25.7. Upgrading Kafka when using ZooKeeper

If you are using a ZooKeeper-based Kafka cluster, an upgrade requires an update to the Kafka version and the inter-broker protocol version.

If you want to switch a Kafka cluster from using ZooKeeper for metadata management to operating in KRaft mode, the steps must be performed separately from the upgrade. For information on migrating to a KRaft-based cluster, see Chapter 8, Migrating to KRaft mode.

25.7.1. Updating Kafka versions

Upgrading Kafka when using ZooKeeper for cluster management requires updates to the Kafka version (Kafka.spec.kafka.version) and its inter-broker protocol version (inter.broker.protocol.version) in the configuration of the Kafka resource. Each version of Kafka has a compatible version of the inter-broker protocol, which is used for inter-broker communication. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the following table. The inter-broker protocol version is set cluster wide in the Kafka resource. To change it, you edit the inter.broker.protocol.version property in Kafka.spec.kafka.config.

The following table shows the differences between Kafka versions:

Table 25.1. Kafka version differences
| Streams for Apache Kafka version | Kafka version | Inter-broker protocol version | Log message format version | ZooKeeper version |
| --- | --- | --- | --- | --- |
| 2.7 | 3.7.0 | 3.7 | 3.7 | 3.8.3 |
| 2.6 | 3.6.0 | 3.6 | 3.6 | 3.8.3 |

  • Kafka 3.7.0 is supported for production use.
  • Kafka 3.6.0 is supported only for the purpose of upgrading to Streams for Apache Kafka 2.7.

Log message format version

When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the message format they were encoded with.

The properties used to set a specific message format version are as follows:

  • message.format.version property for topics
  • log.message.format.version property for Kafka brokers

From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don’t need to be set. The values reflect the Kafka version used.

When upgrading to Kafka 3.0.0 or higher, you can remove these settings when you update the inter.broker.protocol.version. Otherwise, you can set the message format version based on the Kafka version you are upgrading to.

The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration.
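
For example, a minimal sketch of overriding the message format version in the configuration of a KafkaTopic resource (the topic name and version value are illustrative):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  config:
    message.format.version: "3.6"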

Rolling updates from Kafka version changes

The Cluster Operator initiates rolling updates to Kafka brokers when the Kafka version is updated. Further rolling updates depend on the configuration for inter.broker.protocol.version and log.message.format.version.

| If Kafka.spec.kafka.config contains… | The Cluster Operator initiates… |
| --- | --- |
| Both the inter.broker.protocol.version and the log.message.format.version | A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by the log.message.format.version. Changing each triggers a further rolling update. |
| Either the inter.broker.protocol.version or the log.message.format.version | Two rolling updates. |
| No configuration for the inter.broker.protocol.version or the log.message.format.version | Two rolling updates. |

Important

From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set. The log.message.format.version property for brokers and the message.format.version property for topics are deprecated and will be removed in a future release of Kafka.

As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper.

  • A single rolling update occurs even if the ZooKeeper version is unchanged.
  • Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version.

25.7.2. Upgrading clients with older message formats

Before Kafka 3.0, you could configure a specific message format for brokers using the log.message.format.version property (or the message.format.version property at the topic level). This allowed brokers to accommodate older Kafka clients that were using an outdated message format. Though Kafka inherently supports older clients without explicitly setting this property, brokers would then need to convert the messages from the older clients, which came with a significant performance cost.

Apache Kafka Java clients have supported the latest message format version since version 0.11. If all of your clients are using the latest message version, you can remove the log.message.format.version or message.format.version overrides when upgrading your brokers.

However, if you still have clients that are using an older message format version, we recommend upgrading your clients first. Start with the consumers, then upgrade the producers before removing the log.message.format.version or message.format.version overrides when upgrading your brokers. This will ensure that all of your clients can support the latest message format version and that the upgrade process goes smoothly.

You can track Kafka client names and versions using this metric:

  • kafka.server:type=socket-server-metrics,clientSoftwareName=<name>,clientSoftwareVersion=<version>,listener=<listener>,networkProcessor=<processor>
Tip

The following Kafka broker metrics help monitor the performance of message down-conversion:

  • kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request={Produce|Fetch} provides metrics on the time taken to perform message conversion.
  • kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec,topic=([-.\w]+) provides metrics on the number of messages converted over a period of time.

25.7.3. Upgrading ZooKeeper-based Kafka clusters and client applications

Upgrade a ZooKeeper-based Streams for Apache Kafka cluster to a newer supported Kafka version and inter-broker protocol version.

You should also choose a strategy for upgrading clients. Kafka clients are upgraded in step 6 of this procedure.

Prerequisites

  • The Cluster Operator is up and running.
  • Before you upgrade the Streams for Apache Kafka cluster, check that the properties of the Kafka resource do not contain configuration options that are not supported in the new Kafka version.

Procedure

  1. Update the Kafka cluster configuration:

    oc edit kafka <kafka_configuration_file>
  2. If configured, check that the inter.broker.protocol.version and log.message.format.version properties are set to the current version.

    For example, the current version is 3.6 if upgrading from Kafka version 3.6.0 to 3.7.0:

    kind: Kafka
    spec:
      # ...
      kafka:
        version: 3.6.0
        config:
          log.message.format.version: "3.6"
          inter.broker.protocol.version: "3.6"
          # ...

    If log.message.format.version and inter.broker.protocol.version are not configured, Streams for Apache Kafka automatically updates these versions to the current defaults after the update to the Kafka version in the next step.

    Note

    The value of log.message.format.version and inter.broker.protocol.version must be strings to prevent them from being interpreted as floating point numbers.

  3. Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the log.message.format.version and inter.broker.protocol.version at the defaults for the current Kafka version.

    Note

    Changing the kafka.version ensures that all brokers in the cluster are upgraded to start using the new broker binaries. During this process, some brokers are using the old binaries while others have already upgraded to the new ones. Leaving the inter.broker.protocol.version unchanged at the current setting ensures that the brokers can continue to communicate with each other throughout the upgrade.

    For example, if upgrading from Kafka 3.6.0 to 3.7.0:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    spec:
      # ...
      kafka:
        version: 3.7.0                          # Kafka version is changed to the new version
        config:
          log.message.format.version: "3.6"     # message format version is unchanged
          inter.broker.protocol.version: "3.6"  # inter-broker protocol version is unchanged
          # ...
    Warning

    You cannot downgrade Kafka if the inter.broker.protocol.version for the new Kafka version changes. The inter-broker protocol version determines the schemas used for persistent metadata stored by the broker, including messages written to __consumer_offsets. The downgraded cluster will not understand the messages.

  4. If the image for the Kafka cluster is defined in Kafka.spec.kafka.image of the Kafka custom resource, update the image to point to a container image with the new Kafka version.

    See Kafka version and image mappings

  5. Save and exit the editor, then wait for rolling updates to complete.

    Check the progress of the rolling updates by verifying the container image that each pod is using:

    oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'

    The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka.

  6. Depending on your chosen strategy for upgrading clients, upgrade all client applications to use the new version of the client binaries.

    If required, set the version property for Kafka Connect and MirrorMaker as the new version of Kafka:

    1. For Kafka Connect, update KafkaConnect.spec.version.
    2. For MirrorMaker, update KafkaMirrorMaker.spec.version.
    3. For MirrorMaker 2, update KafkaMirrorMaker2.spec.version.

      Note

      If you are using custom images that are built manually, you must rebuild those images to ensure that they are up-to-date with the latest Streams for Apache Kafka base image. For example, if you created a container image from the base Kafka Connect image, update the Dockerfile to point to the latest base image and build configuration.

  7. Verify that the upgraded client applications work correctly with the new Kafka brokers.
  8. If configured, update the Kafka resource to use the new inter.broker.protocol.version. Otherwise, go to step 9.

    For example, if upgrading to Kafka 3.7.0:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    spec:
      # ...
      kafka:
        version: 3.7.0
        config:
          log.message.format.version: "3.6"
          inter.broker.protocol.version: "3.7"
          # ...
  9. Wait for the Cluster Operator to update the cluster.
  10. If configured, update the Kafka resource to use the new log.message.format.version. Otherwise, go to step 11.

    For example, if upgrading to Kafka 3.7.0:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    spec:
      # ...
      kafka:
        version: 3.7.0
        config:
          log.message.format.version: "3.7"
          inter.broker.protocol.version: "3.7"
          # ...
    Important

    From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set.

  11. Wait for the Cluster Operator to update the cluster.

    You can check the upgrade has completed successfully from the status of the Kafka resource.

25.8. Checking the status of an upgrade

When performing an upgrade (or downgrade), you can check it completed successfully in the status of the Kafka custom resource. The status provides information on the Streams for Apache Kafka and Kafka versions being used.

To ensure that you have the correct versions after completing an upgrade, verify the kafkaVersion and operatorLastSuccessfulVersion values in the Kafka status.

  • operatorLastSuccessfulVersion is the version of the Streams for Apache Kafka operator that last performed a successful reconciliation.
  • kafkaVersion is the version of Kafka being used by the Kafka cluster.
  • kafkaMetadataVersion is the metadata version used by KRaft-based Kafka clusters.

You can use these values to check an upgrade of Streams for Apache Kafka or Kafka has completed.

Checking an upgrade from the Kafka status

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
status:
  # ...
  kafkaVersion: 3.7.0
  operatorLastSuccessfulVersion: 2.7
  kafkaMetadataVersion: 3.7
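
You can also retrieve these values directly with a jsonpath query (the my-cluster name is illustrative):

oc get kafka my-cluster -o jsonpath='{.status.kafkaVersion}{"\n"}{.status.operatorLastSuccessfulVersion}{"\n"}'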

25.9. Switching to FIPS mode when upgrading Streams for Apache Kafka

Upgrade Streams for Apache Kafka to run in FIPS mode on FIPS-enabled OpenShift clusters. Until Streams for Apache Kafka 2.3, running on FIPS-enabled OpenShift clusters was possible only by disabling FIPS mode using the FIPS_MODE environment variable. From release 2.3, Streams for Apache Kafka supports FIPS mode. If you run Streams for Apache Kafka on a FIPS-enabled OpenShift cluster with the FIPS_MODE set to disabled, you can enable it by following this procedure.

Prerequisites

  • FIPS-enabled OpenShift cluster
  • An existing Cluster Operator deployment with the FIPS_MODE environment variable set to disabled

Procedure

  1. Upgrade the Cluster Operator to version 2.3 or newer but keep the FIPS_MODE environment variable set to disabled.
  2. If you initially deployed a Streams for Apache Kafka version older than 2.3, it might use old encryption and digest algorithms in its PKCS #12 stores, which are not supported with FIPS enabled. To recreate the certificates with updated algorithms, renew the cluster CA and clients CA certificates.

  3. If you use SCRAM-SHA-512 authentication, check the password length of your users. If a password is less than 32 characters long, generate a new password in one of the following ways:

    1. Delete the user secret so that the User Operator generates a new one with a new password of sufficient length (a command sketch follows this list).
    2. If you provided your password using the .spec.authentication.password properties of the KafkaUser custom resource, update the password in the OpenShift secret referenced in the same password configuration. Don’t forget to update your clients to use the new passwords.
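
    For example, to delete the user secret so that the User Operator regenerates it (the my-user name and myproject namespace are illustrative; the User Operator names the secret after the KafkaUser resource):

    oc delete secret my-user -n myproject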
  4. Ensure that the CA certificates are using the correct algorithms and the SCRAM-SHA-512 passwords are of sufficient length. You can then enable the FIPS mode.
  5. Remove the FIPS_MODE environment variable from the Cluster Operator deployment. This restarts the Cluster Operator and rolls all the operands to enable the FIPS mode. After the restart is complete, all Kafka clusters now run with FIPS mode enabled.
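
One way to remove the environment variable, assuming the Cluster Operator Deployment uses the default name strimzi-cluster-operator from the installation files:

oc set env deployment/strimzi-cluster-operator FIPS_MODE-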