Chapter 23. Upgrading AMQ Streams
Upgrade your AMQ Streams installation to version 2.6 and benefit from new features, performance improvements, and enhanced security options. During the upgrade, Kafka is also updated to the latest supported version, introducing additional features and bug fixes to your AMQ Streams deployment.
If you encounter any issues with the new version, AMQ Streams can be downgraded to the previous version.
Released AMQ Streams versions can be found at the AMQ Streams software downloads page.
Upgrade without downtime
For topics configured with high availability (replication factor of at least 3 and evenly distributed partitions), the upgrade process should not cause any downtime for consumers and producers.
The upgrade triggers rolling updates, where brokers are restarted one by one at different stages of the process. During this time, overall cluster availability is temporarily reduced, which may increase the risk of message loss in the event of a broker failure.
23.1. AMQ Streams upgrade paths
Two upgrade paths are available for AMQ Streams.
- Incremental upgrade: An incremental upgrade involves upgrading AMQ Streams from the previous minor version to version 2.6.
- Multi-version upgrade: A multi-version upgrade involves upgrading an older version of AMQ Streams to version 2.6 within a single upgrade, skipping one or more intermediate versions. For example, upgrading directly from AMQ Streams 2.3 to AMQ Streams 2.6 is possible.
23.1.1. Support for Kafka versions when upgrading
When upgrading AMQ Streams, it is important to ensure compatibility with the Kafka version being used.
Multi-version upgrades are possible even if the supported Kafka versions differ between the old and new versions. However, if you attempt to upgrade to a new AMQ Streams version that does not support the current Kafka version, an error indicating that the Kafka version is not supported is generated. In this case, you must upgrade the Kafka version as part of the AMQ Streams upgrade by changing the spec.kafka.version in the Kafka custom resource to the supported version for the new AMQ Streams version.
23.1.2. Upgrading from an AMQ Streams version earlier than 1.7
If you are upgrading to the latest version of AMQ Streams from a version prior to version 1.7, do the following:
- Upgrade AMQ Streams to version 1.7 following the standard sequence.
- Convert AMQ Streams custom resources to v1beta2 using the API conversion tool provided with AMQ Streams.
- Do one of the following:
  - Upgrade to AMQ Streams 1.8 (where the ControlPlaneListener feature gate is disabled by default).
  - Upgrade to AMQ Streams 2.0 or 2.2 (where the ControlPlaneListener feature gate is enabled by default) with the ControlPlaneListener feature gate disabled.
- Enable the ControlPlaneListener feature gate.
- Upgrade to AMQ Streams 2.6 following the standard sequence.
AMQ Streams custom resources started using the v1beta2 API version in release 1.7. CRDs and custom resources must be converted before upgrading to AMQ Streams 1.8 or newer. For information on using the API conversion tool, see the AMQ Streams 1.7 upgrade documentation.
As an alternative to first upgrading to version 1.7, you can install the custom resources from version 1.7 and then convert the resources.
The ControlPlaneListener feature is now permanently enabled in AMQ Streams. You must upgrade to a version of AMQ Streams where it is disabled, then enable it using the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.
Disabling the ControlPlaneListener feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: -ControlPlaneListener
Enabling the ControlPlaneListener feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: +ControlPlaneListener
23.2. Required upgrade sequence
To upgrade brokers and clients without downtime, you must complete the AMQ Streams upgrade procedures in the following order:
- Make sure your OpenShift cluster version is supported. AMQ Streams 2.6 is supported by OpenShift 4.11 to 4.14.
- Upgrade the Cluster Operator.
- Upgrade all Kafka brokers and client applications to the latest supported Kafka version.
23.3. Upgrading OpenShift with minimal downtime
If you are upgrading OpenShift, refer to the OpenShift upgrade documentation to check the upgrade path and the steps to upgrade your nodes correctly. Before upgrading OpenShift, check the supported versions for your version of AMQ Streams.
When performing your upgrade, you’ll want to keep your Kafka clusters available.
You can employ one of the following strategies:
- Configuring pod disruption budgets
- Rolling pods by one of these methods:
  - Using the AMQ Streams Drain Cleaner
  - Manually by applying an annotation to your pod
When using either of the methods to roll the pods, you must set a pod disruption budget of zero using the maxUnavailable property.

StrimziPodSet custom resources manage Kafka and ZooKeeper pods using a custom controller that cannot use the maxUnavailable value directly. Instead, the maxUnavailable value is converted to a minAvailable value. If there are three broker pods and the maxUnavailable property is set to 0 (zero), the minAvailable setting is 3, requiring all three broker pods to be available and allowing zero pods to be unavailable.
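The maxUnavailable-to-minAvailable conversion described above amounts to simple arithmetic. The following is an illustrative sketch with example values, not AMQ Streams operator code:

```shell
# Illustrative only: how a maxUnavailable budget translates into the
# minAvailable value used for a StrimziPodSet-managed cluster
replicas=3
max_unavailable=0
min_available=$((replicas - max_unavailable))
echo "minAvailable: ${min_available}"
```

With maxUnavailable set to 0 and three brokers, minAvailable works out to 3, so no broker pod may be voluntarily evicted.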
For Kafka to stay operational, topics must also be replicated for high availability. This requires topic configuration that specifies a replication factor of at least 3 and a minimum number of in-sync replicas set to one less than the replication factor.
Kafka topic replicated for high availability
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 3
  config:
    # ...
    min.insync.replicas: 2
    # ...
In a highly available environment, the Cluster Operator maintains a minimum number of in-sync replicas for topics during the upgrade process so that there is no downtime.
23.3.1. Rolling pods using the AMQ Streams Drain Cleaner
You can use the AMQ Streams Drain Cleaner to evict nodes during an upgrade. The AMQ Streams Drain Cleaner annotates pods with a rolling update pod annotation. This informs the Cluster Operator to perform a rolling update of an evicted pod.
A pod disruption budget allows only a specified number of pods to be unavailable at a given time. During planned maintenance of Kafka broker pods, a pod disruption budget ensures Kafka continues to run in a highly available environment.
You specify a pod disruption budget using a template customization for a Kafka component. By default, pod disruption budgets allow only a single pod to be unavailable at a given time.
In order to use the Drain Cleaner to roll pods, you set maxUnavailable to 0 (zero). Reducing the pod disruption budget to zero prevents voluntary disruptions, so pods must be evicted manually.
Specifying a pod disruption budget
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    template:
      podDisruptionBudget:
        maxUnavailable: 0
  # ...
23.3.2. Rolling pods manually while keeping topics available
During an upgrade, you can trigger a manual rolling update of pods through the Cluster Operator. A rolling update restarts the pods of a resource with new pods. As with using the AMQ Streams Drain Cleaner, you need to set the maxUnavailable value to zero for the pod disruption budget.
Watch for the pods that need to be drained, then add a pod annotation to trigger the update. Here, the annotation updates a Kafka broker pod.
Performing a manual rolling update on a Kafka broker pod
oc annotate pod <cluster_name>-kafka-<index> strimzi.io/manual-rolling-update=true
Replace <cluster_name> with the name of the cluster. Kafka broker pods are named <cluster_name>-kafka-<index>, where <index> starts at zero and ends at the total number of replicas minus one. For example, my-cluster-kafka-0.
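The naming convention can be illustrated by generating the annotation command for every broker in a cluster. The cluster name and replica count below are example values, not read from a real cluster:

```shell
# Example values; substitute your own cluster name and replica count
cluster=my-cluster
replicas=3
# Print one annotation command per broker pod, indexed 0..replicas-1
for i in $(seq 0 $((replicas - 1))); do
  echo "oc annotate pod ${cluster}-kafka-${i} strimzi.io/manual-rolling-update=true"
done
```

Each echoed line is a command you could run to mark that broker pod for a manual rolling update.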
23.4. Upgrading the Cluster Operator
Upgrade the Cluster Operator using the same method you used to deploy it initially.
- Using installation files: If you deployed the Cluster Operator using the installation YAML files, perform your upgrade by modifying the Operator installation files, as described in Upgrading the Cluster Operator using installation files.
- Using the OperatorHub: If you deployed AMQ Streams from the OperatorHub, use the Operator Lifecycle Manager (OLM) to change the update channel for the AMQ Streams operators to a new AMQ Streams version.
Updating the channel starts one of the following types of upgrade, depending on your chosen upgrade strategy:
- An automatic upgrade
- A manual upgrade that requires approval before installation begins
Note: If you subscribe to the stable channel, you can get automatic updates without changing channels. However, enabling automatic updates is not recommended because of the potential for missing any pre-installation upgrade steps. Use automatic upgrades only on version-specific channels.
For more information on using the OperatorHub to upgrade Operators, see Upgrading installed Operators (OpenShift documentation).
23.4.1. Upgrading the Cluster Operator returns Kafka version error
If you upgrade the Cluster Operator to a version that does not support the current version of Kafka you are using, you get an unsupported Kafka version error. This error applies to all installation methods and means that you must upgrade Kafka to a supported Kafka version. Change the spec.kafka.version in the Kafka resource to the supported version.
You can use oc to check for error messages like this in the status of the Kafka resource.
Checking the Kafka status for errors
oc get kafka <kafka_cluster_name> -n <namespace> -o jsonpath='{.status.conditions}'
Replace <kafka_cluster_name> with the name of your Kafka cluster and <namespace> with the OpenShift namespace where the pod is running.
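A quick way to spot the unsupported-version condition is to filter the conditions output. The JSON payload below is a hypothetical example of what a failing condition might contain, not verbatim operator output:

```shell
# Hypothetical example of a failing condition payload; in practice this
# value comes from the oc get kafka ... jsonpath command shown above
conditions='[{"type":"NotReady","status":"True","message":"Unsupported Kafka.spec.kafka.version"}]'
if printf '%s' "$conditions" | grep -q 'Unsupported Kafka.spec.kafka.version'; then
  echo "Kafka version not supported - update spec.kafka.version"
fi
```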
23.4.2. Upgrading from AMQ Streams 1.7 or earlier using the OperatorHub
Before you upgrade the AMQ Streams Operator to version 2.6, you need to make the following changes:
- Convert custom resources and CRDs to v1beta2
- Upgrade to a version of AMQ Streams where the ControlPlaneListener feature gate is disabled
These requirements are described in Section 23.1.2, “Upgrading from an AMQ Streams version earlier than 1.7”.
If you are upgrading from AMQ Streams 1.7 or earlier, do the following:
- Upgrade to AMQ Streams 1.7.
- Download the Red Hat AMQ Streams API Conversion Tool provided with AMQ Streams 1.8 from the AMQ Streams software downloads page.
- Convert custom resources and CRDs to v1beta2. For more information, see the AMQ Streams 1.7 upgrade documentation.
- In the OperatorHub, delete version 1.7 of the AMQ Streams Operator.
- If it also exists, delete version 2.6 of the AMQ Streams Operator. If it does not exist, go to the next step.
  If the Approval Strategy for the AMQ Streams Operator was set to Automatic, version 2.6 of the operator might already exist in your cluster. If you did not convert custom resources and CRDs to the v1beta2 API version before the upgrade, the operator-managed custom resources and CRDs use the old API version. As a result, the 2.6 Operator is stuck in Pending status. In this situation, you need to delete version 2.6 of the AMQ Streams Operator as well as version 1.7.
  If you delete both operators, reconciliations are paused until the new operator version is installed. Follow the next steps immediately so that any changes to custom resources are not delayed.
- In the OperatorHub, do one of the following:
  - Upgrade to version 1.8 of the AMQ Streams Operator (where the ControlPlaneListener feature gate is disabled by default).
  - Upgrade to version 2.0 or 2.2 of the AMQ Streams Operator (where the ControlPlaneListener feature gate is enabled by default) with the ControlPlaneListener feature gate disabled.
- Enable the ControlPlaneListener feature gate.
- Upgrade to version 2.6 of the AMQ Streams Operator immediately.
  The installed 2.6 operator begins to watch the cluster and performs rolling updates. You might notice a temporary decrease in cluster performance during this process.
23.4.3. Upgrading the Cluster Operator using installation files
This procedure describes how to upgrade a Cluster Operator deployment to use AMQ Streams 2.6.
Follow this procedure if you deployed the Cluster Operator using the installation YAML files.
The availability of Kafka clusters managed by the Cluster Operator is not affected by the upgrade operation.
Refer to the documentation supporting a specific version of AMQ Streams for information on how to upgrade to that version.
Prerequisites
- An existing Cluster Operator deployment is available.
- You have downloaded the release artifacts for AMQ Streams 2.6.
Procedure
- Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). Any changes will be overwritten by the new version of the Cluster Operator.
- Update your custom resources to reflect the supported configuration options available for AMQ Streams version 2.6.
Update the Cluster Operator.
Modify the installation files for the new Cluster Operator version according to the namespace the Cluster Operator is running in.
On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
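You can see the effect of the substitution on a sample file before touching the real installation files. The file path and RoleBinding contents below are illustrative, and the Linux form of the command is used:

```shell
# Illustrative dry run of the namespace substitution on a throwaway fragment
printf 'subjects:\n- kind: ServiceAccount\n  name: strimzi-cluster-operator\n  namespace: myproject\n' > /tmp/sample-rolebinding.yaml
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' /tmp/sample-rolebinding.yaml
grep 'namespace:' /tmp/sample-rolebinding.yaml
```

The grep output shows the namespace line rewritten to my-cluster-operator-namespace, which is what the command does across every RoleBinding file in install/cluster-operator.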
- If you modified one or more environment variables in your existing Cluster Operator Deployment, edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables.
When you have an updated configuration, deploy it along with the rest of the installation resources:
oc replace -f install/cluster-operator
Wait for the rolling updates to complete.
If the new Operator version no longer supports the Kafka version you are upgrading from, the Cluster Operator returns an error message to say the version is not supported. Otherwise, no error message is returned.
If the error message is returned, upgrade to a Kafka version that is supported by the new Cluster Operator version:
- Edit the Kafka custom resource.
- Change the spec.kafka.version property to a supported Kafka version.

If the error message is not returned, go to the next step. You will upgrade the Kafka version later.
Get the image for the Kafka pod to ensure the upgrade was successful:
oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'
The image tag shows the new AMQ Streams version followed by the Kafka version:
registry.redhat.io/amq-streams/strimzi-kafka-36-rhel8:2.6.0
You can also check the upgrade has completed successfully from the status of the Kafka resource.
The Cluster Operator is upgraded to version 2.6, but the version of Kafka running in the cluster it manages is unchanged. Following a Cluster Operator upgrade, you must perform a Kafka upgrade.
23.5. Upgrading Kafka
After you have upgraded your Cluster Operator to 2.6, the next step is to upgrade all Kafka brokers to the latest supported version of Kafka.
Kafka upgrades are performed by the Cluster Operator through rolling updates of the Kafka brokers.
The Cluster Operator initiates rolling updates based on the Kafka cluster configuration.
| If Kafka.spec.kafka.config contains… | The Cluster Operator initiates… |
|---|---|
| Both the inter.broker.protocol.version and the log.message.format.version | A single rolling update. After the update, the inter.broker.protocol.version must be updated manually, followed by the log.message.format.version. Changing each triggers a further rolling update. |
| Either the inter.broker.protocol.version or the log.message.format.version | Two rolling updates. |
| No configuration for the inter.broker.protocol.version or the log.message.format.version | Two rolling updates. |
From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. The log.message.format.version property for brokers and the message.format.version property for topics are deprecated and will be removed in a future release of Kafka.
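The rule above can be expressed as a quick check. The version value here is an example, and the comparison only looks at the major version, which is all the rule depends on:

```shell
# Sketch: log.message.format.version is ignored once
# inter.broker.protocol.version is 3.0 or higher (example value below)
ibp="3.6"
major=${ibp%%.*}
if [ "$major" -ge 3 ]; then
  echo "log.message.format.version is ignored and can be omitted"
else
  echo "log.message.format.version still applies"
fi
```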
As part of the Kafka upgrade, the Cluster Operator initiates rolling updates for ZooKeeper.
- A single rolling update occurs even if the ZooKeeper version is unchanged.
- Additional rolling updates occur if the new version of Kafka requires a new ZooKeeper version.
23.5.1. Kafka versions
Kafka’s log message format version and inter-broker protocol version specify, respectively, the log format version appended to messages and the version of the Kafka protocol used in a cluster. To ensure the correct versions are used, the upgrade process involves making configuration changes to existing Kafka brokers and code changes to client applications (consumers and producers).
The following table shows the differences between Kafka versions:
| AMQ Streams version | Kafka version | Inter-broker protocol version | Log message format version | ZooKeeper version |
|---|---|---|---|---|
| 2.6 | 3.6.0 | 3.6 | 3.6 | 3.8.3 |
| 2.5 | 3.5.0 | 3.5 | 3.5 | 3.6.4 |
- Kafka 3.6.0 is supported for production use.
- Kafka 3.5.0 is supported only for the purpose of upgrading to AMQ Streams 2.6.
Inter-broker protocol version
In Kafka, the network protocol used for inter-broker communication is called the inter-broker protocol. Each version of Kafka has a compatible version of the inter-broker protocol. The minor version of the protocol typically increases to match the minor version of Kafka, as shown in the preceding table.
The inter-broker protocol version is set cluster wide in the Kafka resource. To change it, you edit the inter.broker.protocol.version property in Kafka.spec.kafka.config.
Log message format version
When a producer sends a message to a Kafka broker, the message is encoded using a specific format. The format can change between Kafka releases, so messages specify which version of the message format they were encoded with.
The properties used to set a specific message format version are as follows:
- message.format.version property for topics
- log.message.format.version property for Kafka brokers
From Kafka 3.0.0, the message format version values are assumed to match the inter.broker.protocol.version and don't need to be set. The values reflect the Kafka version used.

When upgrading to Kafka 3.0.0 or higher, you can remove these settings when you update the inter.broker.protocol.version. Otherwise, set the message format version based on the Kafka version you are upgrading to.
The default value of message.format.version for a topic is defined by the log.message.format.version that is set on the Kafka broker. You can manually set the message.format.version of a topic by modifying its topic configuration.
23.5.2. Strategies for upgrading clients
Upgrading Kafka clients ensures that they benefit from the features, fixes, and improvements that are introduced in new versions of Kafka. Upgraded clients maintain compatibility with other upgraded Kafka components. The performance and stability of the clients might also be improved.
Consider the best approach for upgrading Kafka clients and brokers to ensure a smooth transition. The chosen upgrade strategy depends on whether you are upgrading brokers or clients first. Since Kafka 3.0, you can upgrade brokers and clients independently and in any order. The decision to upgrade clients or brokers first depends on several factors, such as the number of applications that need to be upgraded and how much downtime is tolerable.
If you upgrade clients before brokers, some new features may not work, as they are not yet supported by the brokers. However, brokers can handle producers and consumers running with different versions and supporting different log message format versions.
Upgrading clients when using Kafka versions older than Kafka 3.0
Before Kafka 3.0, you would configure a specific message format for brokers using the log.message.format.version property (or the message.format.version property at the topic level). This allowed brokers to support older Kafka clients that were using an outdated message format. Otherwise, the brokers would need to convert the messages from the older clients, which came with a significant performance cost.

Apache Kafka Java clients have supported the latest message format version since version 0.11. If all of your clients are using the latest message version, you can remove the log.message.format.version or message.format.version overrides when upgrading your brokers.

However, if you still have clients that are using an older message format version, we recommend upgrading your clients first. Start with the consumers, then upgrade the producers before removing the log.message.format.version or message.format.version overrides when upgrading your brokers. This will ensure that all of your clients can support the latest message format version and that the upgrade process goes smoothly.
You can track Kafka client names and versions using this metric:
- kafka.server:type=socket-server-metrics,clientSoftwareName=<name>,clientSoftwareVersion=<version>,listener=<listener>,networkProcessor=<processor>
The following Kafka broker metrics help monitor the performance of message down-conversion:
- kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request={Produce|Fetch} provides metrics on the time taken to perform message conversion.
- kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec,topic=([-.\w]+) provides metrics on the number of messages converted over a period of time.
23.5.3. Kafka version and image mappings
When upgrading Kafka, consider your settings for the STRIMZI_KAFKA_IMAGES environment variable and the Kafka.spec.kafka.version property.
- Each Kafka resource can be configured with a Kafka.spec.kafka.version.
- The Cluster Operator's STRIMZI_KAFKA_IMAGES environment variable provides a mapping between the Kafka version and the image to be used when that version is requested in a given Kafka resource.
  - If Kafka.spec.kafka.image is not configured, the default image for the given version is used.
  - If Kafka.spec.kafka.image is configured, the default image is overridden.
The Cluster Operator cannot validate that an image actually contains a Kafka broker of the expected version. Take care to ensure that the given image corresponds to the given Kafka version.
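The version-to-image lookup can be sketched as follows. The mapping values, registry, and image names below are hypothetical placeholders, not the actual contents of STRIMZI_KAFKA_IMAGES in any release, and the one-pair-per-line format is assumed for illustration:

```shell
# Hypothetical version-to-image mapping (format assumed: version=image per line)
images='3.5.0=registry.example.com/strimzi-kafka-35:2.6.0
3.6.0=registry.example.com/strimzi-kafka-36:2.6.0'
requested='3.6.0'
# Look up the image registered for the requested Kafka version
image=$(printf '%s\n' "$images" | sed -n "s/^${requested}=//p")
echo "$image"
```

If Kafka.spec.kafka.image were also set in the Kafka resource, it would override the mapped default, as described above.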
23.5.4. Upgrading Kafka brokers and client applications
Upgrade an AMQ Streams Kafka cluster to the latest supported Kafka version and inter-broker protocol version.
You should also choose a strategy for upgrading clients. Kafka clients are upgraded in step 6 of this procedure.
Prerequisites
- The Cluster Operator is up and running.
- Before you upgrade the AMQ Streams Kafka cluster, check that the Kafka.spec.kafka.config properties of the Kafka resource do not contain configuration options that are not supported in the new Kafka version.
Procedure
Update the Kafka cluster configuration:
oc edit kafka <my_cluster>
If configured, check that the inter.broker.protocol.version and log.message.format.version properties are set to the current version. For example, the current version is 3.5 if upgrading from Kafka version 3.5.0 to 3.6.0:

kind: Kafka
spec:
  # ...
  kafka:
    version: 3.5.0
    config:
      log.message.format.version: "3.5"
      inter.broker.protocol.version: "3.5"
      # ...
If log.message.format.version and inter.broker.protocol.version are not configured, AMQ Streams automatically updates these versions to the current defaults after the update to the Kafka version in the next step.

Note: The values of log.message.format.version and inter.broker.protocol.version must be strings to prevent them from being interpreted as floating point numbers.

Change the Kafka.spec.kafka.version to specify the new Kafka version; leave the log.message.format.version and inter.broker.protocol.version at the defaults for the current Kafka version.

Note: Changing the kafka.version ensures that all brokers in the cluster are upgraded to start using the new broker binaries. During this process, some brokers are using the old binaries while others have already upgraded to the new ones. Leaving the inter.broker.protocol.version unchanged at the current setting ensures that the brokers can continue to communicate with each other throughout the upgrade.

For example, if upgrading from Kafka 3.5.0 to 3.6.0:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  kafka:
    version: 3.6.0
    config:
      log.message.format.version: "3.5"
      inter.broker.protocol.version: "3.5"
      # ...
Warning: You cannot downgrade Kafka if the inter.broker.protocol.version for the new Kafka version changes. The inter-broker protocol version determines the schemas used for persistent metadata stored by the broker, including messages written to __consumer_offsets. The downgraded cluster will not understand the messages.

If the image for the Kafka cluster is defined in Kafka.spec.kafka.image of the Kafka custom resource, update the image to point to a container image with the new Kafka version.

Save and exit the editor, then wait for rolling updates to complete.
Check the progress of the rolling updates by verifying the broker container image that each pod is running:
oc get pods my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'
The rolling updates ensure that each pod is using the broker binaries for the new version of Kafka.
Depending on your chosen strategy for upgrading clients, upgrade all client applications to use the new version of the client binaries.
If required, set the version property for Kafka Connect and MirrorMaker to the new version of Kafka:
- For Kafka Connect, update KafkaConnect.spec.version.
- For MirrorMaker, update KafkaMirrorMaker.spec.version.
- For MirrorMaker 2, update KafkaMirrorMaker2.spec.version.

Note: If you are using custom images that are built manually, you must rebuild those images to ensure that they are up-to-date with the latest AMQ Streams base image. For example, if you created a Docker image from the base Kafka Connect image, update the Dockerfile to point to the latest base image and build configuration.
- Verify that the upgraded client applications work correctly with the new Kafka brokers.
If configured, update the Kafka resource to use the new inter.broker.protocol.version. Otherwise, skip this step.

For example, if upgrading to Kafka 3.6.0:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  kafka:
    version: 3.6.0
    config:
      log.message.format.version: "3.5"
      inter.broker.protocol.version: "3.6"
      # ...
- Wait for the Cluster Operator to update the cluster.
If configured, update the Kafka resource to use the new log.message.format.version. Otherwise, skip this step.

For example, if upgrading to Kafka 3.6.0:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  kafka:
    version: 3.6.0
    config:
      log.message.format.version: "3.6"
      inter.broker.protocol.version: "3.6"
      # ...

Important: From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set.

Wait for the Cluster Operator to update the cluster.
You can check the upgrade has completed successfully from the status of the Kafka resource.
23.6. Checking the status of an upgrade
When performing an upgrade, you can check that it completed successfully in the status of the Kafka custom resource. The status provides information on the AMQ Streams and Kafka versions being used.
To ensure that you have the correct versions after completing an upgrade, verify the kafkaVersion and operatorLastSuccessfulVersion values in the Kafka status.
- operatorLastSuccessfulVersion is the version of the AMQ Streams operator that last performed a successful reconciliation.
- kafkaVersion is the version of Kafka being used by the Kafka cluster.
You can use these values to check that an upgrade of AMQ Streams or Kafka has completed.
Checking an upgrade from the Kafka status
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  # ...
spec:
  # ...
status:
  # ...
  kafkaVersion: 3.6.0
  operatorLastSuccessfulVersion: 2.6
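If you save the status output locally, the two values can be pulled out with ordinary text tools. The status snippet below is an example of what the fields might look like; in practice you would obtain them directly with oc and a jsonpath expression:

```shell
# Example status fields; in practice, obtain the live values with oc, e.g.
#   oc get kafka <cluster> -o jsonpath='{.status.kafkaVersion}'
status='kafkaVersion: 3.6.0
operatorLastSuccessfulVersion: "2.6"'
kafka_version=$(printf '%s\n' "$status" | sed -n 's/^kafkaVersion: //p')
echo "Kafka version: ${kafka_version}"
```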
23.7. Switching to FIPS mode when upgrading AMQ Streams
Upgrade AMQ Streams to run in FIPS mode on FIPS-enabled OpenShift clusters. Until AMQ Streams 2.3, running on FIPS-enabled OpenShift clusters was possible only by disabling FIPS mode using the FIPS_MODE
environment variable. From release 2.3, AMQ Streams supports FIPS mode. If you run AMQ Streams on a FIPS-enabled OpenShift cluster with the FIPS_MODE
set to disabled
, you can enable it by following this procedure.
Prerequisites
- FIPS-enabled OpenShift cluster
- An existing Cluster Operator deployment with the FIPS_MODE environment variable set to disabled
Procedure
- Upgrade the Cluster Operator to version 2.3 or newer but keep the FIPS_MODE environment variable set to disabled.
- If you initially deployed an AMQ Streams version older than 2.3, it might use old encryption and digest algorithms in its PKCS #12 stores, which are not supported with FIPS enabled. To recreate the certificates with updated algorithms, renew the cluster and clients CA certificates.
  - To renew the CAs generated by the Cluster Operator, add the force-renew annotation to the CA secrets to trigger a renewal.
  - To renew your own CAs, add the new certificate to the CA secret and update the ca-cert-generation annotation with a higher incremental value to capture the update.
- If you use SCRAM-SHA-512 authentication, check the password length of your users. If passwords are less than 32 characters long, generate a new password in one of the following ways:
  - Delete the user secret so that the User Operator generates a new one with a new password of sufficient length.
  - If you provided your password using the .spec.authentication.password properties of the KafkaUser custom resource, update the password in the OpenShift secret referenced in the same password configuration. Don't forget to update your clients to use the new passwords.
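The 32-character minimum can be verified with a quick length check before enabling FIPS mode. The password value below is a deliberately short placeholder; in practice you would read the real value from the user's secret:

```shell
# Placeholder password; in practice, decode it from the KafkaUser secret
password='short-example-password'
if [ ${#password} -lt 32 ]; then
  echo "password is ${#password} characters - too short for FIPS, regenerate it"
fi
```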
- Ensure that the CA certificates are using the correct algorithms and the SCRAM-SHA-512 passwords are of sufficient length. You can then enable the FIPS mode.
- Remove the FIPS_MODE environment variable from the Cluster Operator deployment. This restarts the Cluster Operator and rolls all the operands to enable FIPS mode. After the restart is complete, all Kafka clusters run with FIPS mode enabled.