Chapter 8. Upgrading AMQ Streams
Action required: Convert custom resources and CRDs to the v1beta2 API version before you upgrade the AMQ Streams Operator to version 1.8.0.
From AMQ Streams version 1.8 onwards, the Red Hat Integration - AMQ Streams Operator supports v1beta2 custom resources only. Before you upgrade the AMQ Streams Operator to version 1.8.0 in the OperatorHub UI, you must use the API conversion tool to upgrade custom resources to v1beta2. The API conversion tool requires Java 11.
The recommended approach for OperatorHub upgrades is as follows:
1. Upgrade to AMQ Streams 1.7, if you have not already done so.
2. Download the Red Hat AMQ Streams 1.8.0 API Conversion Tool from the AMQ Streams download site.
3. Use the API conversion tool to convert custom resources and CRDs to v1beta2. To fully complete the conversion, you must run the following two commands in order:
   - convert-resource, to convert AMQ Streams custom resources into a format applicable to v1beta2. See Converting custom resources directly using the API conversion tool.
   - crd-upgrade, to update spec.versions in the CRDs to declare v1beta2 as the storage API version. See Upgrading CRDs to v1beta2 using the API conversion tool.
4. In the OperatorHub UI, delete version 1.7.0 of the Red Hat Integration - AMQ Streams Operator.
5. If version 1.8.0 of the Red Hat Integration - AMQ Streams Operator already exists in your cluster, delete it as well. If it does not, go to step 6.
   If the Approval Strategy for the AMQ Streams Operator was set to Automatic, version 1.8.0 of the Operator might already exist in your cluster. If you did not convert custom resources and CRDs to the v1beta2 API version before version 1.8.0 was installed, the Operator-managed custom resources and CRDs still use the old API version. As a result, the 1.8.0 Operator is stuck in the "Pending" status. In this situation, delete version 1.8.0 of the Red Hat Integration - AMQ Streams Operator as well as version 1.7.0.
   Note: If you delete both Operators, reconciliations are paused until the new Operator version is installed. Follow the next step immediately so that any changes to custom resources are not delayed.
6. In the OperatorHub UI, install version 1.8.0 of the Red Hat Integration - AMQ Streams Operator immediately.
The installed 1.8.0 Operator begins to watch the cluster and performs rolling updates. You might notice a temporary decrease in cluster performance during this process.
Upgrades overview
AMQ Streams can be upgraded to version 1.8 to take advantage of new features and enhancements, performance improvements, and security options.
As part of the upgrade, you upgrade Kafka to the latest supported version. Each Kafka release introduces new features, improvements, and bug fixes to your AMQ Streams deployment.
AMQ Streams can be downgraded to the previous version if you encounter issues with the newer version.
Released versions of AMQ Streams are listed in the Product Downloads section of the Red Hat Customer Portal.
Upgrade paths
Two upgrade paths are possible:
- Incremental: upgrading AMQ Streams from the previous minor version to version 1.8.
- Multi-version: upgrading AMQ Streams from an old version to version 1.8 within a single upgrade (skipping one or more intermediate versions). For example, upgrading from AMQ Streams 1.5 directly to AMQ Streams 1.8.
Upgrading from a version earlier than 1.7
The v1beta2 API version for all custom resources was introduced with AMQ Streams 1.7. For AMQ Streams 1.8, the v1alpha1 and v1beta1 API versions were removed from all AMQ Streams custom resources apart from KafkaTopic and KafkaUser.
If you are upgrading from an AMQ Streams version prior to version 1.7:
1. Upgrade AMQ Streams to 1.7.
2. Convert the custom resources to v1beta2.
3. Upgrade AMQ Streams to 1.8 or newer.
As an alternative, you can install the custom resources from version 1.7, convert the resources, and then upgrade to 1.8 or newer.
Kafka version support
The Kafka versions table lists the supported Kafka versions for AMQ Streams 1.8. In the table:
- The latest Kafka version is supported for production use.
- The previous Kafka version is supported only for the purpose of upgrading to AMQ Streams 1.8.
Decide which Kafka version to upgrade to before beginning the AMQ Streams upgrade process.
You can upgrade to a higher Kafka version as long as it is supported by your version of AMQ Streams. In some cases, you can also downgrade to a previous supported Kafka version.
Downtime and availability
If topics are configured for high availability, upgrading AMQ Streams should not cause any downtime for consumers and producers that publish and read data from those topics. Highly available topics have a replication factor of at least 3 and partitions distributed evenly among the brokers.
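For reference, a minimal sketch of a highly available topic definition; the topic name, cluster label, partition count, and min.insync.replicas value are illustrative assumptions, not values from this guide:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic                     # illustrative topic name
  labels:
    strimzi.io/cluster: my-cluster   # illustrative cluster name
spec:
  partitions: 12
  replicas: 3                        # replication factor of at least 3 for high availability
  config:
    min.insync.replicas: 2           # assumed value; tolerates one broker being restarted
```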
Upgrading AMQ Streams triggers rolling updates, where all brokers are restarted in turn, at different stages of the process. During rolling updates, not all brokers are online, so overall cluster availability is temporarily reduced. A reduction in cluster availability increases the chance that a broker failure will result in lost messages.
8.1. Required upgrade sequence
To upgrade brokers and clients without downtime, you must complete the AMQ Streams upgrade procedures in the following order:
1. Update existing custom resources to support the v1beta2 API version.
   Do this after upgrading to AMQ Streams 1.7, but before upgrading to AMQ Streams 1.8 or newer. For a multi-version upgrade from a version prior to version 1.7:
   - Skip this step and continue with the following steps to upgrade to version 1.7.
   - Return to this step and perform all the steps in this upgrade sequence to upgrade to 1.8 or newer.
2. Update your Cluster Operator to a new AMQ Streams version.
   The approach you take depends on how you deployed the Cluster Operator.
   - If you deployed the Cluster Operator using the installation YAML files, perform your upgrade by modifying the Operator installation files, as described in Upgrading the Cluster Operator.
   - If you deployed the Cluster Operator from the OperatorHub, use the Operator Lifecycle Manager (OLM) to change the update channel for the AMQ Streams Operators to a new AMQ Streams version.
     Depending on your chosen upgrade strategy, after updating the channel, either:
     - An automatic upgrade is initiated
     - A manual upgrade will require approval before the installation begins
     For more information on using the OperatorHub to upgrade Operators, see Upgrading installed Operators in the OpenShift documentation.
3. Upgrade all Kafka brokers and client applications to the latest supported Kafka version.
4. Optional: incremental cooperative rebalance upgrade
   Consider upgrading consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances.
8.2. AMQ Streams custom resource upgrades
Before upgrading AMQ Streams to 1.8, you must ensure that your custom resources are using API version v1beta2. You can do this any time after upgrading to AMQ Streams 1.7, but the upgrades must be completed before upgrading to AMQ Streams 1.8 or newer.
Upgrade of the custom resources to v1beta2 must be performed before upgrading the Cluster Operator, so the Cluster Operator can understand the resources.
Upgrade of the custom resources to v1beta2 prepares AMQ Streams for a move to Kubernetes CRD v1, which is required for Kubernetes 1.22.
CLI upgrades to custom resources
AMQ Streams provides an API conversion tool with its release artifacts.
You can download its ZIP or TAR.GZ file from the AMQ Streams download site. To use the tool, extract it and use the scripts in the bin directory.
From its CLI, you can then use the tool to convert the format of your custom resources to v1beta2 in one of two ways:
- Section 8.2.2, “Converting custom resources configuration files using the API conversion tool”
- Section 8.2.3, “Converting custom resources directly using the API conversion tool”
After the conversion of your custom resources, you must set v1beta2 as the storage API version in your CRDs:
- Section 8.2.4, “Upgrading CRDs to v1beta2 using the API conversion tool”
Manual upgrades to custom resources
Instead of using the API conversion tool to update custom resources to v1beta2, you can manually update each custom resource to use v1beta2:
Update the Kafka custom resource, including the configurations for the other components:
- Section 8.2.5, “Upgrading Kafka resources to support v1beta2”
- Section 8.2.7, “Upgrading ZooKeeper to support v1beta2”
- Section 8.2.8, “Upgrading the Topic Operator to support v1beta2”
- Section 8.2.9, “Upgrading the Entity Operator to support v1beta2”
- Section 8.2.10, “Upgrading Cruise Control to support v1beta2” (if Cruise Control is deployed)
- Section 8.2.11, “Upgrading the API version of Kafka resources to v1beta2”
Update the other custom resources that apply to your deployment:
- Section 8.2.12, “Upgrading Kafka Connect resources to v1beta2”
- Section 8.2.13, “Upgrading Kafka Connect S2I resources to v1beta2”
- Section 8.2.14, “Upgrading Kafka MirrorMaker resources to v1beta2”
- Section 8.2.15, “Upgrading Kafka MirrorMaker 2.0 resources to v1beta2”
- Section 8.2.16, “Upgrading Kafka Bridge resources to v1beta2”
- Section 8.2.17, “Upgrading Kafka User resources to v1beta2”
- Section 8.2.18, “Upgrading Kafka Topic resources to v1beta2”
- Section 8.2.19, “Upgrading Kafka Connector resources to v1beta2”
- Section 8.2.20, “Upgrading Kafka Rebalance resources to v1beta2”
The manual procedures show the changes that are made to each custom resource. After these changes, you must use the API conversion tool to upgrade your CRDs.
8.2.1. API versioning
Custom resources are edited and controlled using APIs added to OpenShift by CRDs. Put another way, CRDs extend the Kubernetes API to allow the creation of custom resources. CRDs are themselves resources within OpenShift. They are installed in an OpenShift cluster to define the versions of API for the custom resource. Each version of the custom resource API can define its own schema for that version. OpenShift clients, including the AMQ Streams Operators, access the custom resources served by the Kubernetes API server using a URL path (API path), which includes the API version.
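For illustration, the API path for a Kafka custom resource includes the API group and version; the namespace and resource names here are hypothetical:

```
/apis/kafka.strimzi.io/v1beta2/namespaces/my-kafka/kafkas/my-cluster
```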
The introduction of v1beta2 updates the schemas of the custom resources. The v1alpha1 and v1beta1 versions have been removed.
The v1alpha1 API version is no longer used for the following AMQ Streams custom resources:
- Kafka
- KafkaConnect
- KafkaConnectS2I
- KafkaConnector
- KafkaMirrorMaker
- KafkaMirrorMaker2
- KafkaTopic
- KafkaUser
- KafkaBridge
- KafkaRebalance
The v1beta1 API version is no longer used for the following AMQ Streams custom resources:
- Kafka
- KafkaConnect
- KafkaConnectS2I
- KafkaMirrorMaker
- KafkaTopic
- KafkaUser
8.2.2. Converting custom resources configuration files using the API conversion tool
This procedure describes how to use the API conversion tool to convert YAML files describing the configuration for AMQ Streams custom resources into a format applicable to v1beta2. To do so, you use the convert-file (cf) command.
The convert-file command can convert YAML files containing multiple documents. For a multi-document YAML file, all the AMQ Streams custom resources it contains are converted. Any non-AMQ Streams OpenShift resources are replicated unmodified in the converted output file.
After you have converted the YAML file, you must apply the configuration to update the custom resource in the cluster. Alternatively, if the GitOps synchronization mechanism is being used for updates on your cluster, you can use it to apply the changes. The conversion is only complete when the custom resource is updated in the OpenShift cluster.
Alternatively, you can use the convert-resource procedure to convert custom resources directly.
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- The API conversion tool, which is provided with the release artifacts.
- The tool requires Java 11.
Use the CLI help for more information on the API conversion tool, and the flags available for the convert-file command:
bin/api-conversion.sh help
bin/api-conversion.sh help convert-file
Use bin/api-conversion.cmd for this procedure if you are using Windows.
| Flag | Description |
|---|---|
| --file | Specifies the YAML file for the AMQ Streams custom resource being converted |
| --output | Creates an output YAML file for the converted custom resource |
| --in-place | Updates the original source file with the converted YAML |
Procedure
1. Run the API conversion tool with the convert-file command and appropriate flags.
   Example 1 converts a YAML file and displays the output, though the file does not change:
bin/api-conversion.sh convert-file --file input.yaml
Example 2 converts a YAML file and writes the changes into the original source file:
bin/api-conversion.sh convert-file --file input.yaml --in-place
Example 3 converts a YAML file and writes the changes into a new output file:
bin/api-conversion.sh convert-file --file input.yaml --output output.yaml
2. Update the custom resources using the converted configuration file.
oc apply -f CONVERTED-CONFIG-FILE
3. Verify that the custom resources have been converted.
oc get KIND CUSTOM-RESOURCE-NAME -o yaml
8.2.3. Converting custom resources directly using the API conversion tool
This procedure describes how to use the API conversion tool to convert AMQ Streams custom resources directly in the OpenShift cluster into a format applicable to v1beta2. To do so, you use the convert-resource (cr) command. The command uses Kubernetes APIs to make the conversions.
You can specify one or more types of AMQ Streams custom resources, based on the kind property, or you can convert all types. You can also target a specific namespace or all namespaces for conversion. When targeting a namespace, you can convert all custom resources in that namespace, or convert a single custom resource by specifying its name and kind.
Alternatively, you can use the convert-file procedure to convert and apply the YAML files describing the custom resources.
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- The API conversion tool, which is provided with the release artifacts.
- The tool requires Java 11 (OpenJDK).
The steps require a user admin account with RBAC permission to:
- Get the AMQ Streams custom resources being converted using the --name option
- List the AMQ Streams custom resources being converted without using the --name option
- Replace the AMQ Streams custom resources being converted
Use the CLI help for more information on the API conversion tool, and the flags available for the convert-resource command:
bin/api-conversion.sh help
bin/api-conversion.sh help convert-resource
Use bin/api-conversion.cmd for this procedure if you are using Windows.
| Flag | Description |
|---|---|
| --kind | Specifies the kinds of custom resources to be converted, or converts all resources if not specified |
| --all-namespaces | Converts custom resources in all namespaces |
| --namespace | Specifies an OpenShift namespace or OpenShift project, or uses the current namespace if not specified |
| --name | If --namespace and a single --kind are specified, specifies the name of a single custom resource to be converted |
Procedure
1. Run the API conversion tool with the convert-resource command and appropriate flags.
   Example 1 converts all AMQ Streams resources in the current namespace:
bin/api-conversion.sh convert-resource
Example 2 converts all AMQ Streams resources in all namespaces:
bin/api-conversion.sh convert-resource --all-namespaces
Example 3 converts all AMQ Streams resources in the my-kafka namespace:
bin/api-conversion.sh convert-resource --namespace my-kafka
Example 4 converts only Kafka resources in all namespaces:
bin/api-conversion.sh convert-resource --all-namespaces --kind Kafka
Example 5 converts Kafka and Kafka Connect resources in all namespaces:
bin/api-conversion.sh convert-resource --all-namespaces --kind Kafka --kind KafkaConnect
Example 6 converts a Kafka custom resource named my-cluster in the my-kafka namespace:
bin/api-conversion.sh convert-resource --kind Kafka --namespace my-kafka --name my-cluster
2. Verify that the custom resources have been converted.
oc get KIND CUSTOM-RESOURCE-NAME -o yaml
8.2.4. Upgrading CRDs to v1beta2 using the API conversion tool
This procedure describes how to use the API conversion tool to convert the CRDs that define the schemas used to instantiate and manage AMQ Streams-specific resources in a format applicable to v1beta2. To do so, you use the crd-upgrade command.
Perform this procedure after converting all AMQ Streams custom resources in the whole OpenShift cluster to v1beta2. If you upgrade your CRDs first, and then convert your custom resources, you will need to run this command again.
The command updates spec.versions in the CRDs to declare v1beta2 as the storage API version. The command also updates custom resources so they are stored under v1beta2. New custom resource instances are created from the specification of the storage API version, so only one API version is ever marked as the storage version.
When you have upgraded the CRDs to use v1beta2 as the storage version, you should only use v1beta2 properties in your custom resources.
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- The API conversion tool, which is provided with the release artifacts.
- The tool requires Java 11 (OpenJDK).
- Custom resources have been converted to v1beta2.
- The steps require a user admin account with RBAC permission to:
  - List the AMQ Streams custom resources in all namespaces
  - Replace the AMQ Streams custom resources being converted
  - Update CRDs
  - Replace the status of the CRDs
Use the CLI help for more information on the API conversion tool:
bin/api-conversion.sh help
Use bin/api-conversion.cmd for this procedure if you are using Windows.
Procedure
1. If you have not done so, convert your custom resources to use v1beta2.
   You can use the API conversion tool to do this in one of two ways:
   - Converting custom resources configuration files using the API conversion tool
   - Converting custom resources directly using the API conversion tool
2. Run the API conversion tool with the crd-upgrade command:
   bin/api-conversion.sh crd-upgrade
3. Verify that the CRDs have been upgraded so that v1beta2 is the storage version.
   For example, for the Kafka topic CRD:
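To make the check concrete, here is a sketch of the relevant CRD excerpt; the exact list of versions and served flags may differ, and the key point is that only v1beta2 has storage: true:

```yaml
# Excerpt from: oc get crd kafkatopics.kafka.strimzi.io -o yaml
spec:
  group: kafka.strimzi.io
  versions:
    - name: v1beta2
      served: true
      storage: true    # v1beta2 is now the storage version
    - name: v1beta1
      served: true
      storage: false
    - name: v1alpha1
      served: true
      storage: false
```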
8.2.5. Upgrading Kafka resources to support v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
Procedure
Perform the following steps for each Kafka custom resource in your deployment.
1. Update the Kafka custom resource in an editor.
   oc edit kafka KAFKA-CLUSTER
2. If you have not already done so, update .spec.kafka.listeners to the new generic listener format, as described in Section 8.2.6, “Updating listeners to the generic listener configuration”.
   Warning: The old listener format is not supported in API version v1beta2.
3. If present, move affinity from .spec.kafka.affinity to .spec.kafka.template.pod.affinity.
4. If present, move tolerations from .spec.kafka.tolerations to .spec.kafka.template.pod.tolerations.
5. If present, remove .spec.kafka.template.tlsSidecarContainer.
6. If present, remove .spec.kafka.tlsSidecarContainer.
7. If either of the following policy configurations exist:
   - .spec.kafka.template.externalBootstrapService.externalTrafficPolicy
   - .spec.kafka.template.perPodService.externalTrafficPolicy
   then:
   - Move the configuration to .spec.kafka.listeners[].configuration.externalTrafficPolicy, for both type: loadbalancer and type: nodeport listeners.
   - Remove .spec.kafka.template.externalBootstrapService.externalTrafficPolicy or .spec.kafka.template.perPodService.externalTrafficPolicy.
8. If either of the following loadbalancer listener configurations exist:
   - .spec.kafka.template.externalBootstrapService.loadBalancerSourceRanges
   - .spec.kafka.template.perPodService.loadBalancerSourceRanges
   then:
   - Move the configuration to .spec.kafka.listeners[].configuration.loadBalancerSourceRanges, for type: loadbalancer listeners.
   - Remove .spec.kafka.template.externalBootstrapService.loadBalancerSourceRanges or .spec.kafka.template.perPodService.loadBalancerSourceRanges.
9. If type: external logging is configured in .spec.kafka.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored (see the logging sketch after this procedure).
10. If the .spec.kafka.metrics field is used to enable metrics:
    - Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.kafka.metrics field.
    - Add a .spec.kafka.metricsConfig property that points to the ConfigMap and key (see the metrics sketch after this procedure).
    - Delete the old .spec.kafka.metrics field.
11. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
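The converted logging and metrics configuration has the following general shape. This is an illustrative sketch only: the ConfigMap names and keys (my-config-map, log4j.properties, kafka-metrics, kafka-metrics-config.yaml) and the single exporter rule are assumptions, not values from this guide.

```yaml
# External logging referenced through valueFrom.configMapKeyRef (name and key are assumed)
logging:
  type: external
  valueFrom:
    configMapKeyRef:
      name: my-config-map
      key: log4j.properties
---
# ConfigMap holding the JMX Prometheus exporter rules previously kept in .spec.kafka.metrics
kind: ConfigMap
apiVersion: v1
metadata:
  name: kafka-metrics
data:
  kafka-metrics-config.yaml: |
    # rules copied unchanged from the old .spec.kafka.metrics field
    lowercaseOutputName: true
---
# metricsConfig property pointing to the ConfigMap and key
metricsConfig:
  type: jmxPrometheusExporter
  valueFrom:
    configMapKeyRef:
      name: kafka-metrics
      key: kafka-metrics-config.yaml
```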
What to do next
For each Kafka custom resource, upgrade the configurations for ZooKeeper, Topic Operator, Entity Operator, and Cruise Control (if deployed) to support version v1beta2. This is described in the following procedures.
When all Kafka configurations are updated to support v1beta2, you can upgrade the Kafka custom resource to v1beta2.
8.2.6. Updating listeners to the generic listener configuration
AMQ Streams provides a GenericKafkaListener schema for the configuration of Kafka listeners in a Kafka resource.
GenericKafkaListener replaces the KafkaListeners schema, which has been removed from AMQ Streams.
With the GenericKafkaListener schema, you can configure as many listeners as required, as long as their names and ports are unique. The listeners configuration is defined as an array, but the deprecated format is also supported.
For clients inside the OpenShift cluster, you can create plain (without encryption) or tls internal listeners.
For clients outside the OpenShift cluster, you create external listeners and specify a connection mechanism, which can be nodeport, loadbalancer, ingress or route.
The KafkaListeners schema used sub-properties for plain, tls and external listeners, with fixed ports for each. At any stage in the upgrade process, you must convert listeners configured using the KafkaListeners schema into the format of the GenericKafkaListener schema.
For example, if you are currently using the following configuration in your Kafka configuration:
Old listener configuration
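A sketch of what the deprecated KafkaListeners format typically looks like; the external listener type shown is an assumption, and yours might be nodeport, ingress, or route:

```yaml
listeners:
  plain: {}
  tls: {}
  external:
    type: loadbalancer
    tls: true
```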
Convert the listeners into the new format using:
New listener configuration
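A sketch of the equivalent GenericKafkaListener configuration, using the standard listener names and ports that preserve backwards compatibility (plain on 9092, tls on 9093, external on 9094):

```yaml
listeners:
  - name: plain
    port: 9092
    type: internal
    tls: false
  - name: tls
    port: 9093
    type: internal
    tls: true
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
```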
Make sure to use the exact names and port numbers shown.
For any additional configuration or overrides properties used with the old format, you need to update them to the new format.
Changes introduced to the listener configuration:
- overrides is merged with the configuration section
- dnsAnnotations has been renamed annotations
- preferredAddressType has been renamed preferredNodePortAddressType
- address has been renamed alternativeNames
- loadBalancerSourceRanges and externalTrafficPolicy move to the listener configuration from the now deprecated template
For example, this configuration:
Old additional listener configuration
Changes to:
New additional listener configuration
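As a sketch of how the renames listed above apply (the hostname annotation, alternative name, and listener type are illustrative assumptions), an old external listener with overrides:

```yaml
listeners:
  external:
    type: loadbalancer
    overrides:
      bootstrap:
        address: bootstrap.example.com          # illustrative value
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: bootstrap.example.com
```

would become, in the new format:

```yaml
listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      bootstrap:
        alternativeNames:
          - bootstrap.example.com               # address renamed to alternativeNames
        annotations:
          external-dns.alpha.kubernetes.io/hostname: bootstrap.example.com
```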
The name and port numbers shown in the new listener configuration must be used for backwards compatibility. Using any other values will cause renaming of the Kafka listeners and OpenShift services.
For more information on the configuration options available for each type of listener, see the GenericKafkaListener schema reference.
8.2.7. Upgrading ZooKeeper to support v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
Procedure
Perform the following steps for each Kafka custom resource in your deployment.
1. Update the Kafka custom resource in an editor.
   oc edit kafka KAFKA-CLUSTER
2. If present, move affinity from .spec.zookeeper.affinity to .spec.zookeeper.template.pod.affinity.
3. If present, move tolerations from .spec.zookeeper.tolerations to .spec.zookeeper.template.pod.tolerations.
4. If present, remove .spec.zookeeper.template.tlsSidecarContainer.
5. If present, remove .spec.zookeeper.tlsSidecarContainer.
6. If type: external logging is configured in .spec.zookeeper.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored.
7. If the .spec.zookeeper.metrics field is used to enable metrics:
   - Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.zookeeper.metrics field.
   - Add a .spec.zookeeper.metricsConfig property that points to the ConfigMap and key.
   - Delete the old .spec.zookeeper.metrics field.
8. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.8. Upgrading the Topic Operator to support v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
Procedure
Perform the following steps for each Kafka custom resource in your deployment.
1. Update the Kafka custom resource in an editor.
   oc edit kafka KAFKA-CLUSTER
2. If Kafka.spec.topicOperator is used:
   - Move affinity from .spec.topicOperator.affinity to .spec.entityOperator.template.pod.affinity.
   - Move tolerations from .spec.topicOperator.tolerations to .spec.entityOperator.template.pod.tolerations.
   - Move .spec.topicOperator.tlsSidecar to .spec.entityOperator.tlsSidecar.
   - After moving affinity, tolerations, and tlsSidecar, move the remaining configuration in .spec.topicOperator to .spec.entityOperator.topicOperator.
3. If type: external logging is configured in .spec.topicOperator.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored.
   Note: You can also complete this step as part of the Entity Operator upgrade.
4. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.9. Upgrading the Entity Operator to support v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- Kafka.spec.entityOperator is configured, as described in Section 8.2.8, “Upgrading the Topic Operator to support v1beta2”.
Procedure
Perform the following steps for each Kafka custom resource in your deployment.
1. Update the Kafka custom resource in an editor.
   oc edit kafka KAFKA-CLUSTER
2. Move affinity from .spec.entityOperator.affinity to .spec.entityOperator.template.pod.affinity.
3. Move tolerations from .spec.entityOperator.tolerations to .spec.entityOperator.template.pod.tolerations.
4. If type: external logging is configured in .spec.entityOperator.userOperator.logging or .spec.entityOperator.topicOperator.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored.
5. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.10. Upgrading Cruise Control to support v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- Cruise Control is configured and deployed. See Deploying Cruise Control in the Using AMQ Streams on OpenShift guide.
Procedure
Perform the following steps for each Kafka.spec.cruiseControl configuration in your Kafka cluster.
1. Update the Kafka custom resource in an editor.
   oc edit kafka KAFKA-CLUSTER
2. If type: external logging is configured in .spec.cruiseControl.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored.
3. If the .spec.cruiseControl.metrics field is used to enable metrics:
   - Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.cruiseControl.metrics field.
   - Add a .spec.cruiseControl.metricsConfig property that points to the ConfigMap and key.
   - Delete the old .spec.cruiseControl.metrics field.
4. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.11. Upgrading the API version of Kafka resources to v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- You have updated the following configurations within the Kafka custom resource:
  - ZooKeeper
  - Topic Operator
  - Entity Operator
  - Cruise Control (if Cruise Control is deployed)
Procedure
Perform the following steps for each Kafka custom resource in your deployment.
1. Update the Kafka custom resource in an editor.
   oc edit kafka KAFKA-CLUSTER
2. Update the apiVersion of the Kafka custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1beta1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
3. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.12. Upgrading Kafka Connect resources to v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
Procedure
Perform the following steps for each KafkaConnect custom resource in your deployment.
1. Update the KafkaConnect custom resource in an editor.
   oc edit kafkaconnect KAFKA-CONNECT-CLUSTER
2. If present, move:
   KafkaConnect.spec.affinity
   KafkaConnect.spec.tolerations
   to:
   KafkaConnect.spec.template.pod.affinity
   KafkaConnect.spec.template.pod.tolerations
3. If type: external logging is configured in .spec.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored.
4. If the .spec.metrics field is used to enable metrics:
   - Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.metrics field.
   - Add a .spec.metricsConfig property that points to the ConfigMap and key.
   - Delete the old .spec.metrics field.
5. Update the apiVersion of the KafkaConnect custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1beta1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
6. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.13. Upgrading Kafka Connect S2I resources to v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
Procedure
Perform the following steps for each KafkaConnectS2I custom resource in your deployment.
1. Update the KafkaConnectS2I custom resource in an editor.
   oc edit kafkaconnects2i S2I-CLUSTER
2. If present, move:
   KafkaConnectS2I.spec.affinity
   KafkaConnectS2I.spec.tolerations
   to:
   KafkaConnectS2I.spec.template.pod.affinity
   KafkaConnectS2I.spec.template.pod.tolerations
3. If type: external logging is configured in .spec.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored.
4. If the .spec.metrics field is used to enable metrics:
   - Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.metrics field.
   - Add a .spec.metricsConfig property that points to the ConfigMap and key.
   - Delete the old .spec.metrics field.
5. Update the apiVersion of the KafkaConnectS2I custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1beta1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
6. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.14. Upgrading Kafka MirrorMaker resources to v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- MirrorMaker is configured and deployed. See Section 5.3.1, “Deploying Kafka MirrorMaker to your OpenShift cluster”.
Procedure
Perform the following steps for each KafkaMirrorMaker custom resource in your deployment.
1. Update the KafkaMirrorMaker custom resource in an editor.
   oc edit kafkamirrormaker MIRROR-MAKER
2. If present, move:
   KafkaMirrorMaker.spec.affinity
   KafkaMirrorMaker.spec.tolerations
   to:
   KafkaMirrorMaker.spec.template.pod.affinity
   KafkaMirrorMaker.spec.template.pod.tolerations
3. If type: external logging is configured in .spec.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored.
4. If the .spec.metrics field is used to enable metrics:
   - Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.metrics field.
   - Add a .spec.metricsConfig property that points to the ConfigMap and key.
   - Delete the old .spec.metrics field.
5. Update the apiVersion of the KafkaMirrorMaker custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1beta1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
6. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.15. Upgrading Kafka MirrorMaker 2.0 resources to v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- MirrorMaker 2.0 is configured and deployed. See Section 5.3.1, “Deploying Kafka MirrorMaker to your OpenShift cluster”.
Procedure
Perform the following steps for each KafkaMirrorMaker2 custom resource in your deployment.
1. Update the KafkaMirrorMaker2 custom resource in an editor.
   oc edit kafkamirrormaker2 MIRROR-MAKER-2
2. If present, move affinity from .spec.affinity to .spec.template.pod.affinity.
3. If present, move tolerations from .spec.tolerations to .spec.template.pod.tolerations.
4. If type: external logging is configured in .spec.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored.
5. If the .spec.metrics field is used to enable metrics:
   - Create a new ConfigMap that stores the YAML configuration for the JMX Prometheus exporter under a key. The YAML must match what is currently in the .spec.metrics field.
   - Add a .spec.metricsConfig property that points to the ConfigMap and key.
   - Delete the old .spec.metrics field.
6. Update the apiVersion of the KafkaMirrorMaker2 custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1alpha1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
7. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.16. Upgrading Kafka Bridge resources to v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- The Kafka Bridge is configured and deployed. See Section 5.4.1, “Deploying Kafka Bridge to your OpenShift cluster”.
Procedure
Perform the following steps for each KafkaBridge resource in your deployment.
1. Update the KafkaBridge custom resource in an editor.
   oc edit kafkabridge KAFKA-BRIDGE
2. If type: external logging is configured in KafkaBridge.spec.logging, replace the name of the ConfigMap containing the logging configuration:
   logging:
     type: external
     name: my-config-map
   with the valueFrom.configMapKeyRef field, and specify both the ConfigMap name and the key under which the logging configuration is stored.
3. Update the apiVersion of the KafkaBridge custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1alpha1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
4. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.17. Upgrading Kafka User resources to v1beta2
Prerequisites
- A User Operator supporting the v1beta2 API version is up and running.
Procedure
Perform the following steps for each KafkaUser custom resource in your deployment.
1. Update the KafkaUser custom resource in an editor.
   oc edit kafkauser KAFKA-USER
2. Update the apiVersion of the KafkaUser custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1beta1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
3. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.18. Upgrading Kafka Topic resources to v1beta2
Prerequisites
- A Topic Operator supporting the v1beta2 API version is up and running.
Procedure
Perform the following steps for each KafkaTopic custom resource in your deployment.
1. Update the KafkaTopic custom resource in an editor.
   oc edit kafkatopic KAFKA-TOPIC
2. Update the apiVersion of the KafkaTopic custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1beta1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
3. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.19. Upgrading Kafka Connector resources to v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- KafkaConnector custom resources are deployed to manage connector instances. See Section 5.2.4, “Creating and managing connectors”.
Procedure
Perform the following steps for each KafkaConnector custom resource in your deployment.
1. Update the KafkaConnector custom resource in an editor.
   oc edit kafkaconnector KAFKA-CONNECTOR
2. Update the apiVersion of the KafkaConnector custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1alpha1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
3. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.2.20. Upgrading Kafka Rebalance resources to v1beta2
Prerequisites
- A Cluster Operator supporting the v1beta2 API version is up and running.
- Cruise Control is configured and deployed. See Deploying Cruise Control in the Using AMQ Streams on OpenShift guide.
Procedure
Perform the following steps for each KafkaRebalance custom resource in your deployment.
1. Update the KafkaRebalance custom resource in an editor.
   oc edit kafkarebalance KAFKA-REBALANCE
2. Update the apiVersion of the KafkaRebalance custom resource to v1beta2.
   Replace:
   apiVersion: kafka.strimzi.io/v1alpha1
   with:
   apiVersion: kafka.strimzi.io/v1beta2
3. Save the file, exit the editor and wait for the updated custom resource to be reconciled.
8.3. Upgrading the Cluster Operator
This procedure describes how to upgrade a Cluster Operator deployment to use AMQ Streams 1.8.
Follow this procedure if you deployed the Cluster Operator using the installation YAML files rather than OperatorHub.
The availability of Kafka clusters managed by the Cluster Operator is not affected by the upgrade operation.
Refer to the documentation supporting a specific version of AMQ Streams for information on how to upgrade to that version.
Prerequisites
- An existing Cluster Operator deployment is available.
- You have downloaded the release artifacts for AMQ Streams 1.8.
Procedure
- Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). Any changes will be overwritten by the new version of the Cluster Operator.
- Update your custom resources to reflect the supported configuration options available for AMQ Streams version 1.8.
Update the Cluster Operator.
Modify the installation files for the new Cluster Operator version according to the namespace the Cluster Operator is running in.
On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
- If you modified one or more environment variables in your existing Cluster Operator Deployment, edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables.
When you have an updated configuration, deploy it along with the rest of the installation resources:
oc replace -f install/cluster-operator
Wait for the rolling updates to complete.
If the new Operator version no longer supports the Kafka version you are upgrading from, the Cluster Operator returns an error message stating that the version is not supported. Otherwise, no error message is returned.
For example:
"Version 2.4.0 is not supported. Supported versions are: 2.6.0, 2.6.1, 2.7.0."
"Version 2.4.0 is not supported. Supported versions are: 2.6.0, 2.6.1, 2.7.0."Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the error message is returned, upgrade to a Kafka version that is supported by the new Cluster Operator version:
- Edit the Kafka custom resource.
- Change the spec.kafka.version property to a supported Kafka version.
- If the error message is not returned, go to the next step. You will upgrade the Kafka version later.
Get the image for the Kafka pod to ensure the upgrade was successful:
oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'
The image tag shows the new Operator version. For example:
registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:{ContainerVersion}
Your Cluster Operator was upgraded to version 1.8 but the version of Kafka running in the cluster it manages is unchanged.
Following the Cluster Operator upgrade, you must perform a Kafka upgrade.
8.4. Upgrading Kafka
After you have upgraded your Cluster Operator to 1.8, the next step is to upgrade all Kafka brokers to the latest supported version of Kafka.
Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka brokers.
The Cluster Operator initiates rolling updates based on the Kafka cluster configuration.
| If Kafka.spec.kafka.config contains… | The Cluster Operator initiates… |
|---|---|
| Both the inter.broker.protocol.version and the log.message.format.version | A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by the log.message.format.version. Changing each will trigger a further rolling update. |
| Either the inter.broker.protocol.version or the log.message.format.version | Two rolling updates. |
| No configuration for the inter.broker.protocol.version or the log.message.format.version | Two rolling updates. |
As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper.
- A single rolling update occurs even if the ZooKeeper version is unchanged.
- Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version.
8.4.1. Kafka versions
Kafka’s log message format version and inter-broker protocol version specify, respectively, the log format version appended to messages and the version of the Kafka protocol used in a cluster. To ensure the correct versions are used, the upgrade process involves making configuration changes to existing Kafka brokers and code changes to client applications (consumers and producers).
The following table shows the differences between Kafka versions:
| Kafka version | Interbroker protocol version | Log message format version | ZooKeeper version |
|---|---|---|---|
| 2.7.0 | 2.7 | 2.7 | 3.5.8 |
| 2.7.1 | 2.7 | 2.7 | 3.5.9 |
| 2.8.0 | 2.8 | 2.8 | 3.5.9 |
Inter-broker protocol version
In Kafka, the network protocol used for inter-broker communication is called the inter-broker protocol. Each version of Kafka has a compatible version of the inter-broker protocol. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the preceding table.
The inter-broker protocol version is set cluster wide in the Kafka resource. To change it, you edit the inter.broker.protocol.version property in Kafka.spec.kafka.config.
Log message format version
When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the format they were encoded with. You can configure a Kafka broker to convert messages from newer format versions to a given older format version before the broker appends the message to the log.
In Kafka, there are two different methods for setting the message format version:
- The message.format.version property is set on topics.
- The log.message.format.version property is set on Kafka brokers.
The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration.
The upgrade tasks in this section assume that the message format version is defined by the log.message.format.version.
8.4.2. Strategies for upgrading clients
The right approach to upgrading your client applications (including Kafka Connect connectors) depends on your particular circumstances.
Consuming applications need to receive messages in a message format that they understand. You can ensure that this is the case in one of two ways:
- By upgrading all the consumers for a topic before upgrading any of the producers.
- By having the brokers down-convert messages to an older format.
Using broker down-conversion puts extra load on the brokers, so it is not ideal to rely on down-conversion for all topics for a prolonged period of time. For brokers to perform optimally they should not be down converting messages at all.
Broker down-conversion is configured in two ways:
- The topic-level message.format.version configures it for a single topic (see the sketch following this list).
- The broker-level log.message.format.version is the default for topics that do not have the topic-level message.format.version configured.
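For example, topic-level down-conversion can be configured through the KafkaTopic resource; this is a sketch in which the topic and cluster names are placeholders and "2.7" stands in for your old message format version:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic                     # placeholder topic name
  labels:
    strimzi.io/cluster: my-cluster   # placeholder cluster name
spec:
  partitions: 3
  replicas: 3
  config:
    message.format.version: "2.7"    # keep the old format until the topic's consumers are upgraded
```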
Messages published to a topic in a new-version format will be visible to consumers, because brokers perform down-conversion when they receive messages from producers, not when they are sent to consumers.
There are a number of strategies you can use to upgrade your clients:
- Consumers first
  1. Upgrade all the consuming applications.
  2. Change the broker-level log.message.format.version to the new version.
  3. Upgrade all the producing applications.
This strategy is straightforward, and avoids any broker down-conversion. However, it assumes that all consumers in your organization can be upgraded in a coordinated way, and it does not work for applications that are both consumers and producers. There is also a risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log so that you cannot revert to the previous consumer version.
- Per-topic consumers first
  For each topic:
  1. Upgrade all the consuming applications.
  2. Change the topic-level message.format.version to the new version.
  3. Upgrade all the producing applications.
This strategy avoids any broker down-conversion, and means you can proceed on a topic-by-topic basis. It does not work for applications that are both consumers and producers of the same topic. Again, it has the risk that, if there is a problem with the upgraded clients, new-format messages might get added to the message log.
- Per-topic consumers first, with down conversion
For each topic:
- Change the topic-level message.format.version to the old version (or rely on the topic defaulting to the broker-level log.message.format.version).
- Upgrade all the consuming and producing applications.
- Verify that the upgraded applications function correctly.
- Change the topic-level message.format.version to the new version (see the KafkaTopic sketch after this list).

This strategy requires broker down-conversion, but the load on the brokers is minimized because it is only required for a single topic (or small group of topics) at a time. It also works for applications that are both consumers and producers of the same topic. This approach ensures that the upgraded producers and consumers are working correctly before you commit to using the new message format version.
The main drawback of this approach is that it can be complicated to manage in a cluster with many topics and applications.
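For illustration, a minimal sketch of a topic-level override in a KafkaTopic resource, assuming a topic named my-topic managed for the cluster my-cluster; the topic name and the version values are examples only:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic                 # example topic name
  labels:
    strimzi.io/cluster: my-cluster
spec:
  config:
    # Keep the old format (for example "2.7") while clients are upgraded,
    # then change to the new format (for example "2.8") once the upgraded clients are verified
    message.format.version: "2.7"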
Other strategies for upgrading client applications are also possible.
It is also possible to apply multiple strategies. For example, for the first few applications and topics the "per-topic consumers first, with down conversion" strategy can be used. When this has proved successful, another, more efficient strategy can be used instead.
8.4.3. Kafka version and image mappings
When upgrading Kafka, consider your settings for the STRIMZI_KAFKA_IMAGES environment variable and the Kafka.spec.kafka.version property.
- Each Kafka resource can be configured with a Kafka.spec.kafka.version.
- The Cluster Operator’s STRIMZI_KAFKA_IMAGES environment variable provides a mapping between the Kafka version and the image to be used when that version is requested in a given Kafka resource.
  - If Kafka.spec.kafka.image is not configured, the default image for the given version is used.
  - If Kafka.spec.kafka.image is configured, the default image is overridden.
The Cluster Operator cannot validate that an image actually contains a Kafka broker of the expected version. Take care to ensure that the given image corresponds to the given Kafka version.
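As an illustration, a minimal sketch of the relevant fields in a Kafka resource; the cluster name is an example and the image reference is a placeholder, not an image name taken from this guide:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster   # example cluster name
spec:
  kafka:
    version: 2.8.0
    # Optional: overrides the default image that STRIMZI_KAFKA_IMAGES maps to version 2.8.0.
    # Make sure the image you specify really contains a 2.8.0 broker.
    # image: <registry>/<kafka-2.8.0-image>:<tag>   # placeholder image reference
    # ...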
8.4.4. Upgrading Kafka brokers and client applications
This procedure describes how to upgrade an AMQ Streams Kafka cluster to the latest supported Kafka version.
Compared to your current Kafka version, the new version might support a higher log message format version or inter-broker protocol version, or both. Follow the steps to upgrade these versions, if required. For more information, see Section 8.4.1, “Kafka versions”.
You should also choose a strategy for upgrading clients. Kafka clients are upgraded in step 6 of this procedure.
Prerequisites
For the Kafka resource to be upgraded, check that:
- The Cluster Operator, which supports both versions of Kafka, is up and running.
- The Kafka.spec.kafka.config does not contain options that are not supported in the new Kafka version.
Procedure
1. Update the Kafka cluster configuration:

   oc edit kafka my-cluster

2. If configured, ensure that Kafka.spec.kafka.config has the log.message.format.version and inter.broker.protocol.version set to the defaults for the current Kafka version. For example, if upgrading from Kafka version 2.7.0 to 2.8.0, set both properties to 2.7 (see the configuration sketch at the end of this procedure).

   If log.message.format.version and inter.broker.protocol.version are not configured, AMQ Streams automatically updates these versions to the current defaults after the update to the Kafka version in the next step.

   Note: The values of log.message.format.version and inter.broker.protocol.version must be strings to prevent them from being interpreted as floating point numbers.

3. Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the log.message.format.version and inter.broker.protocol.version at the defaults for the current Kafka version. For example, if upgrading from Kafka 2.7.0 to 2.8.0, set the version to 2.8.0 (see the configuration sketch at the end of this procedure).

   Note: Changing the kafka.version ensures that all brokers in the cluster will be upgraded to start using the new broker binaries. During this process, some brokers are using the old binaries while others have already upgraded to the new ones. Leaving the inter.broker.protocol.version unchanged ensures that the brokers can continue to communicate with each other throughout the upgrade.

   Warning: You cannot downgrade Kafka if the inter.broker.protocol.version for the new Kafka version changes. The inter-broker protocol version determines the schemas used for persistent metadata stored by the broker, including messages written to __consumer_offsets. The downgraded cluster will not understand the messages.

4. If the image for the Kafka cluster is defined in the Kafka custom resource, in Kafka.spec.kafka.image, update the image to point to a container image with the new Kafka version.

5. Save and exit the editor, then wait for rolling updates to complete.

   Check the progress of the rolling updates by watching the pod state transitions:

   oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'

   The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka.
6. Depending on your chosen strategy for upgrading clients, upgrade all client applications to use the new version of the client binaries.
7. If required, set the version property for Kafka Connect and MirrorMaker to the new version of Kafka:
   - For Kafka Connect, update KafkaConnect.spec.version.
   - For MirrorMaker, update KafkaMirrorMaker.spec.version.
   - For MirrorMaker 2.0, update KafkaMirrorMaker2.spec.version.
8. If configured, update the Kafka resource to use the new inter.broker.protocol.version. Otherwise, go to step 9. For example, if upgrading to Kafka 2.8.0, set inter.broker.protocol.version to 2.8 (see the configuration sketch at the end of this procedure).

   Wait for the Cluster Operator to update the cluster.

9. If configured, update the Kafka resource to use the new log.message.format.version. Otherwise, go to step 10. For example, if upgrading to Kafka 2.8.0, set log.message.format.version to 2.8 (see the configuration sketch at the end of this procedure).

10. Wait for the Cluster Operator to update the cluster.
- The Kafka cluster and clients are now using the new Kafka version.
- The brokers are configured to send messages using the inter-broker protocol version and message format version of the new version of Kafka.
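For reference, the examples mentioned in steps 2, 3, 8, and 9 are snippets of Kafka.spec.kafka. A rough sketch of how they might look for an upgrade from 2.7.0 to 2.8.0, with other settings omitted and the values shown only as examples:

# Step 2 - defaults for the current version, quoted so they are treated as strings
kafka:
  version: 2.7.0
  config:
    log.message.format.version: "2.7"
    inter.broker.protocol.version: "2.7"

# Step 3 - new Kafka version; protocol and format left at the previous defaults
kafka:
  version: 2.8.0
  config:
    log.message.format.version: "2.7"
    inter.broker.protocol.version: "2.7"

# Step 8 - after the clients are upgraded, move the inter-broker protocol forward
kafka:
  version: 2.8.0
  config:
    log.message.format.version: "2.7"
    inter.broker.protocol.version: "2.8"

# Step 9 - finally, move the message format forward
kafka:
  version: 2.8.0
  config:
    log.message.format.version: "2.8"
    inter.broker.protocol.version: "2.8"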
Following the Kafka upgrade, if required, you can upgrade consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol, as described in Section 8.5.
8.5. Upgrading consumers to cooperative rebalancing
You can upgrade Kafka consumers and Kafka Streams applications to use the incremental cooperative rebalance protocol for partition rebalances instead of the default eager rebalance protocol. The new protocol was added in Kafka 2.4.0.
Consumers keep their partition assignments in a cooperative rebalance and only revoke them at the end of the process, if needed to achieve a balanced cluster. This reduces the unavailability of the consumer group or Kafka Streams application.
Upgrading to the incremental cooperative rebalance protocol is optional. The eager rebalance protocol is still supported.
Prerequisites
- You have upgraded Kafka brokers and client applications to Kafka 2.8.0.
Procedure
To upgrade a Kafka consumer to use the incremental cooperative rebalance protocol:
1. Replace the Kafka clients .jar file with the new version.
2. In the consumer configuration, append cooperative-sticky to the partition.assignment.strategy. For example, if the range strategy is set, change the configuration to range, cooperative-sticky (see the sketch after these steps).
3. Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart.
4. Reconfigure each consumer in the group by removing the earlier partition.assignment.strategy from the consumer configuration, leaving only the cooperative-sticky strategy.
5. Restart each consumer in the group in turn, waiting for the consumer to rejoin the group after each restart.
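As a minimal sketch of how the consumer configuration changes across these steps, shown in property-file style with the fully qualified assignor class names that the range and cooperative-sticky shorthand refers to (your configuration mechanism may differ):

# Step 2 - first rolling restart: add the cooperative assignor alongside the existing one
partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor,org.apache.kafka.clients.consumer.CooperativeStickyAssignor

# Step 4 - second rolling restart: leave only the cooperative assignor
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor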
To upgrade a Kafka Streams application to use the incremental cooperative rebalance protocol:
1. Replace the Kafka Streams .jar file with the new version.
2. In the Kafka Streams configuration, set the upgrade.from configuration parameter to the Kafka version you are upgrading from (for example, 2.3), as shown in the sketch after these steps.
3. Restart each of the stream processors (nodes) in turn.
4. Remove the upgrade.from configuration parameter from the Kafka Streams configuration.
5. Restart each consumer in the group in turn.
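As an illustration only, a property-file style sketch of the Kafka Streams configuration during the two rolling restarts; the 2.3 value is the example version from the step above:

# Steps 2-3 - first rolling restart: tell Streams which version the application is upgrading from
upgrade.from=2.3

# Steps 4-5 - second rolling restart: remove the upgrade.from line entirely
# so the new rebalance protocol is used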
Additional resources
- Notable changes in 2.4.0 in the Apache Kafka documentation.