Chapter 31. Upgrading Streams for Apache Kafka
Download the latest Streams for Apache Kafka deployment files and upgrade your Streams for Apache Kafka installation to version 3.1 to benefit from new features, performance improvements, and enhanced security options. During the upgrade, Kafka is also updated to the latest supported version, introducing additional features and bug fixes to your Streams for Apache Kafka deployment.
Use the same method to upgrade the Cluster Operator as the initial method of deployment. For example, if you used the Streams for Apache Kafka installation files, modify those files to perform the upgrade. After you have upgraded your Cluster Operator to 3.1, the next step is to upgrade all Kafka nodes to the latest supported version of Kafka. Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka nodes.
If you encounter any issues with the new version, Streams for Apache Kafka can be downgraded to the previous version.
Released Streams for Apache Kafka versions can be found on the Streams for Apache Kafka software downloads page.
Upgrade without downtime
For topics configured with high availability (replication factor of at least 3 and evenly distributed partitions), the upgrade process should not cause any downtime for consumers and producers.
The upgrade triggers rolling updates, where brokers are restarted one by one at different stages of the process. During this time, overall cluster availability is temporarily reduced, which may increase the risk of message loss in the event of a broker failure.
31.1. Required upgrade sequence
To upgrade brokers and clients without downtime, you must complete the Streams for Apache Kafka upgrade procedures in the following order:
Make sure your OpenShift cluster version is supported.
Streams for Apache Kafka 3.1 requires OpenShift 4.16 to 4.20.
Make sure you use Streams for Apache Kafka 2.7 or newer and that all your Apache Kafka clusters are KRaft-based.
ZooKeeper-based Apache Kafka clusters are no longer supported and must be migrated to KRaft before upgrading the Cluster Operator or Apache Kafka.
- Upgrade the Cluster Operator.
- Update the Kafka version and metadataVersion.
From Streams for Apache Kafka 2.7, upgrades and downgrades between KRaft-based clusters are supported.
31.2. Streams for Apache Kafka upgrade paths
Two upgrade paths are available for Streams for Apache Kafka.
- Incremental upgrade
- An incremental upgrade moves between consecutive minor versions (such as 3.0 to 3.1), following a supported upgrade path.
- Multi-version upgrade
- A multi-version upgrade skips one or more minor versions and is supported only between consecutive Long-Term Support (LTS) versions. From the OperatorHub, you can perform a multi-version upgrade by selecting the version-specific channel for the target LTS version, and manually updating the Kafka version. While temporary errors due to Kafka version changes may occur, they are expected and can be resolved by updating the Kafka version during the upgrade.
Before upgrading in production, test your specific scenario in a controlled environment to identify potential issues.
Multi-version upgrade is not supported in the LTS channel (amq-streams-lts). No further LTS versions are being added to the LTS channel, which is deprecated and will be removed in a later release. For upgrades between LTS versions, use the version-specific channels.
31.2.1. Support for Kafka versions when upgrading
When upgrading Streams for Apache Kafka, it is important to ensure compatibility with the Kafka version being used.
Multi-version upgrades are possible even if the supported Kafka versions differ between the old and new versions. However, if you attempt to upgrade to a new Streams for Apache Kafka version that does not support the current Kafka version, an error indicating that the Kafka version is not supported is generated. In this case, you must upgrade the Kafka version as part of the Streams for Apache Kafka upgrade by changing the spec.kafka.version in the Kafka custom resource to the supported version for the new Streams for Apache Kafka version.
31.2.2. Upgrading from a Streams for Apache Kafka version earlier than 2.7
Streams for Apache Kafka 3.1 supports upgrades only for KRaft-based Apache Kafka clusters managed by Streams for Apache Kafka 2.7 and newer. When upgrading from older Streams for Apache Kafka versions, first upgrade to a Streams for Apache Kafka version from 2.7 to 2.9 and migrate to KRaft before upgrading to Streams for Apache Kafka 3.1.
31.2.3. Kafka version and image mappings
When upgrading Kafka, consider your settings for the STRIMZI_KAFKA_IMAGES environment variable and the Kafka.spec.kafka.version property.
- Each Kafka resource can be configured with a Kafka.spec.kafka.version, which defaults to the latest supported Kafka version (4.1.0) if not specified.
- The Cluster Operator’s STRIMZI_KAFKA_IMAGES environment variable provides a mapping (<kafka_version>=<image>) between a Kafka version and the image to be used when a specific Kafka version is requested in a given Kafka resource. For example, 4.1.0=registry.redhat.io/amq-streams/kafka-41-rhel9:3.1.0.
- If Kafka.spec.kafka.image is not configured, the default image for the given version is used.
- If Kafka.spec.kafka.image is configured, the default image is overridden.
The Cluster Operator cannot validate that an image actually contains a Kafka broker of the expected version. Take care to ensure that the given image corresponds to the given Kafka version.
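For illustration, the mapping might appear in the Cluster Operator Deployment as follows. This is a sketch, not a complete environment variable listing; a real deployment maps every supported Kafka version, and only the 4.1.0 entry here is taken from the example above:

```yaml
# Fragment of the Cluster Operator Deployment (sketch).
# STRIMZI_KAFKA_IMAGES takes newline-separated <kafka_version>=<image> pairs.
env:
  - name: STRIMZI_KAFKA_IMAGES
    value: |
      4.1.0=registry.redhat.io/amq-streams/kafka-41-rhel9:3.1.0
```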
31.3. Strategies for upgrading clients
Upgrading Kafka clients ensures that they benefit from the features, fixes, and improvements that are introduced in new versions of Kafka. Upgraded clients maintain compatibility with other upgraded Kafka components. The performance and stability of the clients might also be improved.
Consider the best approach for upgrading Kafka clients and brokers to ensure a smooth transition. The chosen upgrade strategy depends on whether you are upgrading brokers or clients first. Since Kafka 3.0, you can upgrade brokers and clients independently and in any order. The decision to upgrade clients or brokers first depends on several factors, such as the number of applications that need to be upgraded and how much downtime is tolerable.
If you upgrade clients before brokers, some new features may not work as they are not yet supported by brokers. However, brokers can handle producers and consumers running with different versions and supporting different log message versions.
31.4. Upgrading OpenShift with minimal downtime
If you are upgrading OpenShift, refer to the OpenShift upgrade documentation to check the upgrade path and the steps to upgrade your nodes correctly. Before upgrading OpenShift, check the supported versions for your version of Streams for Apache Kafka.
When performing your upgrade, ensure the availability of your Kafka clusters by following these steps:
- Configure pod disruption budgets
Roll pods using one of these methods:
- Use the Streams for Apache Kafka Drain Cleaner (recommended)
- Apply an annotation to your pods to roll them manually
For Kafka to stay operational, topics must also be replicated for high availability. This requires a topic configuration that specifies a replication factor of at least 3 and a minimum number of in-sync replicas set to one less than the replication factor.
Kafka topic replicated for high availability
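A minimal sketch of such a topic configuration, assuming a three-broker cluster; the topic and cluster names (my-topic, my-cluster) are placeholders:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster  # the Kafka cluster this topic belongs to
spec:
  partitions: 3
  replicas: 3                # replication factor of at least 3
  config:
    min.insync.replicas: 2   # one less than the replication factor
```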
In a highly available environment, the Cluster Operator maintains a minimum number of in-sync replicas for topics during the upgrade process so that there is no downtime.
31.4.1. Rolling pods using Drain Cleaner
When you use the Streams for Apache Kafka Drain Cleaner to evict pods during an OpenShift upgrade, it annotates the pods with a manual rolling update annotation. The annotation informs the Cluster Operator to perform a rolling update of the pod that is to be evicted, moving it away from the OpenShift node that is being upgraded.
For more information, see Chapter 27, Evicting pods with the Streams for Apache Kafka Drain Cleaner.
31.4.2. Rolling pods manually (alternative to Drain Cleaner)
As an alternative to using the Drain Cleaner to roll pods, you can trigger a manual rolling update of pods through the Cluster Operator by annotating Pod resources; the rolling update replaces the annotated pods with new pods. To replicate the operation of the Drain Cleaner by keeping topics available, you must also set the maxUnavailable value to zero for the pod disruption budget. Reducing the pod disruption budget to zero prevents OpenShift from evicting pods automatically.
Specifying a pod disruption budget
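A sketch of how the pod disruption budget might be specified in the Kafka resource; the cluster name is a placeholder:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      podDisruptionBudget:
        maxUnavailable: 0  # prevents OpenShift from evicting pods automatically
  # ...
```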
The PodDisruptionBudget resource created for Kafka clusters covers all associated node pool pods.
Watch the node pool pods that need to be drained and add a pod annotation to make the update. Here, the annotation updates a Kafka pod named my-cluster-pool-a-1.
Performing a manual rolling update on a Kafka pod
oc annotate pod my-cluster-pool-a-1 strimzi.io/manual-rolling-update="true"
31.5. Upgrading the Cluster Operator
Use the same method to upgrade the Cluster Operator as the initial method of deployment.
31.5.1. Upgrading the Cluster Operator using installation files
This procedure describes how to upgrade a Cluster Operator deployment to use Streams for Apache Kafka 3.1.
Follow this procedure if you deployed the Cluster Operator using the installation YAML files in the install/cluster-operator/ directory. The steps include the necessary configuration changes when the Cluster Operator watches multiple or all namespaces.
The availability of Kafka clusters managed by the Cluster Operator is not affected by the upgrade operation.
Refer to the documentation supporting a specific version of Streams for Apache Kafka for information on how to upgrade to that version.
Prerequisites
- An existing Cluster Operator deployment is available.
- You have downloaded the release artifacts for Streams for Apache Kafka 3.1.
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Currently, no feature gates require enabling or disabling before upgrading. If a feature gate introduces such a requirement, the details will be provided here.
Procedure
Take note of any configuration changes made during the previous Cluster Operator installation.
Any changes will be overwritten by the new version of the Cluster Operator.
- Update your custom resources to reflect the supported configuration options available for Streams for Apache Kafka version 3.1.
Modify the installation files for the new Cluster Operator version to reflect the namespace in which the Cluster Operator is running.
On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On macOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
If you modified environment variables in the Deployment configuration, edit the 060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables.
- If the Cluster Operator is watching multiple namespaces, add the list of namespaces to the STRIMZI_NAMESPACE environment variable.
- If the Cluster Operator is watching all namespaces, specify value: "*" for the STRIMZI_NAMESPACE environment variable.
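For example, the STRIMZI_NAMESPACE environment variable in 060-Deployment-strimzi-cluster-operator.yaml might be set as follows to watch all namespaces (a sketch; the commented alternative shows the comma-separated form for specific namespaces):

```yaml
# Fragment of the Cluster Operator Deployment container spec
env:
  - name: STRIMZI_NAMESPACE
    value: "*"
# Or, to watch specific namespaces instead:
#   value: my-namespace-1,my-namespace-2
```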
If the Cluster Operator is watching more than one namespace, update the role bindings.
If watching multiple namespaces, replace the namespace in the RoleBinding installation files with the actual namespace name and create the role bindings for each namespace:

Creating role bindings for a namespace
oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>
For example, if the Cluster Operator is watching three namespaces, create three sets of role bindings by substituting <watched_namespace> with the name of each namespace.

If watching all namespaces, recreate the cluster role bindings that grant cluster-wide access (if needed):
Granting cluster-wide access using role bindings
oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
oc create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
When you have an updated configuration, deploy it along with the rest of the installation resources:
oc replace -f install/cluster-operator
Wait for the rolling updates to complete.
If the new operator version no longer supports the Kafka version you are upgrading from, an error message is returned.
To resolve this, upgrade to a supported Kafka version:
- Edit the Kafka custom resource.
- Change the spec.kafka.version property to a supported Kafka version.
If no error message is returned, you can proceed to the next step and upgrade the Kafka version later.
Get the image for the Kafka pod to ensure the upgrade was successful:
oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'

The image tag shows the new Streams for Apache Kafka version followed by the Kafka version:
registry.redhat.io/amq-streams/strimzi-kafka-41-rhel9:3.1.0
You can also check that the upgrade has completed successfully from the status of the Kafka resource.
The Cluster Operator is upgraded to version 3.1, but the version of Kafka running in the cluster it manages is unchanged.
31.5.2. Upgrading the Cluster Operator using the OperatorHub
If you deployed Streams for Apache Kafka from OperatorHub, use the Operator Lifecycle Manager (OLM) to change the update channel for the Streams for Apache Kafka operators to a new Streams for Apache Kafka version.
Updating the channel starts one of the following types of upgrade, depending on your chosen upgrade strategy:
- An automatic upgrade, which is initiated without approval
- A manual upgrade, which requires approval before installation begins
If you subscribe to the stable channel, you can get automatic updates without changing channels. However, enabling automatic updates is not recommended because you might miss pre-installation upgrade steps. Use automatic upgrades only on version-specific channels.
For more information on using OperatorHub to upgrade Operators, see Updating installed Operators in the OpenShift Operators guide.
31.5.3. Upgrading the Cluster Operator returns Kafka version error
If you upgrade the Cluster Operator to a version that does not support the current version of Kafka you are using, you get an unsupported Kafka version error. This error applies to all installation methods and means that you must upgrade Kafka to a supported Kafka version. Change the spec.kafka.version in the Kafka resource to the supported version.
You can use oc to check for error messages like this in the status of the Kafka resource.
Checking the Kafka status for errors
oc get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.conditions}'
Replace <kafka_cluster_name> with the name of your Kafka cluster and <namespace> with the OpenShift namespace where the pod is running.
31.6. Upgrading Kafka clusters
Upgrade a Kafka cluster to a newer supported Kafka version and KRaft metadata version.
Refer to the Apache Kafka documentation for the latest on support for Kafka upgrades.
Prerequisites
- The Cluster Operator is up and running.
- Before you upgrade the Kafka cluster, check that the properties of the Kafka resource do not contain configuration options that are not supported in the new Kafka version.
Procedure
Update the Kafka cluster configuration:
oc edit kafka <kafka_configuration_file>
If configured, check that the current spec.kafka.metadataVersion is set to a version supported by the version of Kafka you are upgrading to. For example, the current version is 4.0-IV0 if upgrading from Kafka version 4.0.0 to 4.1.0:
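A sketch of the relevant Kafka resource fields at this point; the cluster name is a placeholder:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 4.0.0
    metadataVersion: "4.0-IV0"
    # ...
```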
If metadataVersion is not configured, Streams for Apache Kafka automatically updates it to the current default after the update to the Kafka version in the next step.

Note: The value of metadataVersion must be a string to prevent it from being interpreted as a floating-point number.

Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the metadataVersion at the default for the current Kafka version.

Note: Changing the kafka.version ensures that all brokers in the cluster are upgraded to use the new broker binaries. During the rolling upgrade, some brokers will still run the old binaries while others have already switched to the new ones. Keeping metadataVersion unchanged at its current value ensures that all brokers and controllers remain compatible and can continue to communicate throughout the upgrade.

Note: For upgrades that involve only a micro release of Kafka (for example, from X.x.0 to X.x.1), do not modify the Kafka.spec.kafka.version. Keep the existing version setting. The Cluster Operator automatically applies the updated Kafka binaries for the micro release, even though the kafka.version field continues to show the previous minor version.

For example, if upgrading from Kafka 4.0.0 to 4.1.0:
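A sketch of the relevant Kafka resource fields at this stage, with the version updated and the metadata version held at its current value; the cluster name is a placeholder:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 4.1.0              # new Kafka version
    metadataVersion: "4.0-IV0"  # kept at the current value during the rolling update
    # ...
```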
If the image for the Kafka cluster is defined in Kafka.spec.kafka.image of the Kafka custom resource, update the image to point to a container image with the new Kafka version.

Save and exit the editor, then wait for the rolling updates to upgrade the Kafka nodes to complete.
Check the progress of the rolling updates by watching the pod state transitions:
oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'

The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka.
If required, set the version property for Kafka Connect and MirrorMaker to the new version of Kafka:

- For Kafka Connect, update KafkaConnect.spec.version.
- For MirrorMaker 2, update KafkaMirrorMaker2.spec.version.

Note: If you are using custom images that are built manually, you must rebuild those images to ensure that they are up-to-date with the latest Streams for Apache Kafka base image. For example, if you created a container image from the base Kafka Connect image, update the Dockerfile to point to the latest base image and build configuration.
If configured, update the Kafka resource to use the new metadataVersion. Otherwise, go to step 9. For example, if upgrading to Kafka 4.1.0:
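A sketch of the relevant Kafka resource fields after updating the metadata version. The cluster name is a placeholder, and the exact default metadataVersion for Kafka 4.1 (shown here as 4.1-IV1) is an assumption; check the value reported for your release:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 4.1.0
    metadataVersion: "4.1-IV1"  # assumed default for Kafka 4.1; verify for your release
    # ...
```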
Warning: Exercise caution when changing the metadataVersion, as downgrading may not be possible. You cannot downgrade Kafka if the metadataVersion for the new Kafka version is higher than the metadataVersion supported by the Kafka version you wish to downgrade to. If you keep an older metadataVersion, understand the potential implications on support and compatibility.

Wait for the Cluster Operator to update the cluster.
Check that the upgrade has completed successfully from the status of the Kafka resource.
Upgrading client applications
Ensure all Kafka client applications are updated to use the new version of the client binaries as part of the upgrade process and verify their compatibility with the Kafka upgrade. If needed, coordinate with the team responsible for managing the client applications.
To check that a client is using the latest message format, use the kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec metric. The metric shows 0 if the latest message format is being used.
31.7. Checking the status of an upgrade
When performing an upgrade (or downgrade), you can check it completed successfully in the status of the Kafka custom resource. The status provides information on the Streams for Apache Kafka and Kafka versions being used.
To ensure that you have the correct versions after completing an upgrade, verify the kafkaVersion and operatorLastSuccessfulVersion values in the Kafka status.
- operatorLastSuccessfulVersion is the version of the Streams for Apache Kafka operator that last performed a successful reconciliation.
- kafkaVersion is the version of Kafka being used by the Kafka cluster.
- kafkaMetadataVersion is the metadata version used by the Kafka cluster.
You can use these values to check that an upgrade of Streams for Apache Kafka or Kafka has completed.
Checking an upgrade from the Kafka status
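A sketch of what the status section might contain after a successful upgrade. The kafkaMetadataVersion value shown is an assumption for Kafka 4.1; verify the value for your release:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
# ...
status:
  # ...
  kafkaVersion: 4.1.0
  kafkaMetadataVersion: 4.1-IV1        # assumed value; verify for your release
  operatorLastSuccessfulVersion: 3.1.0
```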