Release Notes for Streams for Apache Kafka 2.9 on RHEL
Highlights of what's new and what's changed with this release of Streams for Apache Kafka on Red Hat Enterprise Linux
Abstract
Chapter 1. Notification of name change to Streams for Apache Kafka
AMQ Streams is being renamed Streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat’s product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name.
Chapter 2. Streams for Apache Kafka 2.9 Long Term Support
Streams for Apache Kafka 2.9 is a Long Term Support (LTS) offering for Streams for Apache Kafka.
For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.
Chapter 3. Kafka 4 impact and adoption schedule
Streams for Apache Kafka 3.0 is scheduled for release in 2025. The introduction of Apache Kafka 4 in the release brings significant changes to how Kafka clusters are deployed, configured, and operated.
For more information on how these changes affect the Streams for Apache Kafka 3.0 release, refer to the article Streams for Apache Kafka 3.0: Kafka 4 Impact and Adoption.
Chapter 4. Features
Streams for Apache Kafka 2.9 introduces the features described in this section.
Streams for Apache Kafka 2.9 on RHEL is based on Apache Kafka 3.9.x.
To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.
4.1. Streams for Apache Kafka 2.9.x (Long Term Support)
Streams for Apache Kafka 2.9.x is the Long Term Support (LTS) offering for Streams for Apache Kafka.
The latest patch release is Streams for Apache Kafka 2.9.3. The Streams for Apache Kafka binaries are updated to version 2.9.3.
For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.
4.2. Kafka 3.9.x support
Streams for Apache Kafka supports and uses Apache Kafka 3.9.x. Updates for Apache Kafka 3.9.1 were introduced in the 2.9.1 patch release, and the 2.9.3 patch release continues to use this version. Only Kafka distributions built by Red Hat are supported.
For upgrade instructions, see the Streams for Apache Kafka and Kafka upgrade procedures in the following guides:
Refer to the Kafka 3.9.0 and Kafka 3.9.1 Release Notes for additional information.
Kafka 3.8.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.9. We recommend that you perform a rolling update to use the new binaries.
4.3. Streams for Apache Kafka
4.3.1. KRaft support moves to GA
KRaft (Kafka Raft metadata) mode moves to GA (General Availability). KRaft mode replaces Kafka’s dependency on ZooKeeper for cluster management, simplifying deployment and management of Kafka clusters by bringing metadata management and coordination of clusters into Kafka.
Last release to support ZooKeeper
Kafka 3.9.x provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol. Kafka 3.9 is the final version to support ZooKeeper. Consequently, Streams for Apache Kafka 2.9 is the last version compatible with Kafka clusters using ZooKeeper.
If you are using ZooKeeper for metadata management in your Kafka cluster, you can migrate to using Kafka in KRaft mode using a static controller quorum. Once KRaft mode is enabled, you cannot switch back to ZooKeeper.
To prepare for Streams for Apache Kafka 3.0, migrate to Kafka in KRaft mode.
KRaft mode limitations
For Kafka 3.8 and earlier, the controller quorums (which replace ZooKeeper) were of a fixed size (static). Dynamic controller quorums were introduced in Kafka 3.9.
Migration between Kafka’s static and dynamic controller quorums is not currently supported, though this feature is expected in a future Kafka release.
Streams for Apache Kafka 2.9 on RHEL supports static and dynamic controller quorums, with dynamic quorums recommended for new deployments.
4.3.2. Capability to move data between JBOD disks using Cruise Control
If you are using JBOD storage and have Cruise Control installed with Streams for Apache Kafka, you can now reassign partitions between the JBOD disks used for storage on the same broker. This capability also allows you to remove JBOD disks without data loss.
Make requests to the remove_disks endpoint of the Cruise Control REST API to demote a disk in the cluster and reassign its partitions to other disk volumes.
For more information, see Using Cruise Control to reassign partitions on JBOD disks.
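As a sketch of how such a request might look, the following command asks Cruise Control to move the partitions off one JBOD disk of a broker. The host, port, broker ID, and log directory path are placeholder values, and the query parameter name is an assumption; verify both against the Cruise Control REST API documentation for your version.

```shell
# Ask Cruise Control to reassign all partition replicas from one
# JBOD log directory of broker 0 to the broker's remaining disks.
# Host, port, broker ID, and directory path are placeholders.
curl -X POST \
  "http://my-cruise-control-host:9090/kafkacruisecontrol/remove_disks?brokerid_and_logdirs=0-/var/lib/kafka/data-1"
```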
Chapter 5. Enhancements
Streams for Apache Kafka 2.9 adds a number of enhancements.
5.1. Kafka 3.9.1 enhancements
The Streams for Apache Kafka 2.9.x release supports Kafka 3.9.x. Updates and enhancements from Kafka 3.9.1 were introduced in the 2.9.1 patch release and remain in use with 2.9.3.
For an overview of the enhancements introduced with Kafka 3.9.x, refer to the Kafka 3.9.0 and Kafka 3.9.1 Release Notes.
5.2. Streams for Apache Kafka
5.2.1. Configuration mechanism for quotas management
The Strimzi Quotas plugin moves to GA (General Availability). Use the plugin properties to set throughput and storage limits on brokers in your Kafka cluster configuration.
If you have previously used the Strimzi Quotas plugin in releases prior to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the latest properties to avoid reconciliation issues when upgrading.
For more information, see Setting limits on brokers using the Kafka Static Quota plugin.
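For illustration, broker configuration for the plugin might look like the following sketch. The throughput values are examples, and the property names should be verified against the plugin documentation for your release, since the GA version of the plugin changed some property names.

```properties
# Enable the Strimzi quotas plugin on the broker
# (property names and values are examples; verify against the
# plugin documentation for your release)
client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback
# Per-broker produce and fetch throughput limits in bytes per second
client.quota.callback.static.produce=1000000
client.quota.callback.static.fetch=1000000
```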
Chapter 6. Technology Previews
Technology Preview features included with Streams for Apache Kafka 2.9.
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.
There are no technology previews for Streams for Apache Kafka 2.9 on RHEL.
Chapter 7. Deprecated features
The features deprecated in this release, and that were supported in previous releases of Streams for Apache Kafka, are outlined below.
7.1. Streams for Apache Kafka
7.1.1. Java 11 deprecated in Streams for Apache Kafka 2.7
Support for Java 11 is deprecated from Kafka 3.7.0 and Streams for Apache Kafka 2.7. Java 11 will be unsupported for all Streams for Apache Kafka components, including clients, in release 3.0.
Streams for Apache Kafka supports Java 17. Use Java 17 when developing new applications. Plan to migrate any applications that currently use Java 11 to 17.
If you want to continue using Java 11 for the time being, Streams for Apache Kafka 2.5 provides Long Term Support (LTS). For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.
Support for Java 8 was removed in Streams for Apache Kafka 2.4.0. If you are currently using Java 8, plan to migrate to Java 17 in the same way.
7.1.2. Environment variable configuration provider
You can use configuration providers to load configuration data from external sources for all Kafka components, including producers and consumers.
Previously, you could enable the io.strimzi.kafka.EnvVarConfigProvider environment variable configuration provider. However, this provider is now deprecated and will be removed in Streams for Apache Kafka 3.0, so it is recommended to update your implementation to use Kafka’s own environment variable configuration provider (org.apache.kafka.common.config.provider.EnvVarConfigProvider) to provide configuration properties as environment variables.
Example configuration to enable the environment variable configuration provider
config.providers=env
config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
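With the provider enabled, configuration values can then reference environment variables using the ${env:...} placeholder syntax. A minimal sketch follows; the property and environment variable names here are illustrative, not part of the original example.

```properties
# Resolve the truststore password from the TRUSTSTORE_PASSWORD
# environment variable at startup (names are illustrative)
ssl.truststore.password=${env:TRUSTSTORE_PASSWORD}
```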
7.1.3. Kafka MirrorMaker 2 identity replication policy
Identity replication policy is a feature used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the source cluster’s name, the topic retains its original name. This setting is particularly useful for active/passive backups and data migration scenarios.
To implement an identity replication policy, you must specify a replication policy class (replication.policy.class) in the MirrorMaker 2 configuration. Previously, you could specify the io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy class included with the Streams for Apache Kafka mirror-maker-2-extensions component. However, this component is now deprecated and will be removed in Streams for Apache Kafka 3.0. Therefore, it is recommended to update your implementation to use Kafka’s own replication policy class (org.apache.kafka.connect.mirror.IdentityReplicationPolicy).
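For example, the relevant MirrorMaker 2 configuration line might look like the following minimal sketch; the surrounding MirrorMaker 2 configuration is omitted.

```properties
# Use Kafka's identity replication policy so replicated topics
# keep their original names instead of being prefixed with the
# source cluster's name
replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```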
7.1.4. Kafka MirrorMaker 1
Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 was deprecated in Kafka 3.0 and will be removed in Streams for Apache Kafka 3.0 and Kafka 4.0.0, leaving MirrorMaker 2 as the only available version. MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters. To avoid disruptions, transition to MirrorMaker 2 before support ends.
If you’re using MirrorMaker 1, you can replicate its functionality in MirrorMaker 2 by using the IdentityReplicationPolicy class. By default, MirrorMaker 2 renames topics replicated to a target cluster, but IdentityReplicationPolicy preserves the original topic names, enabling the same active/passive unidirectional replication as MirrorMaker 1.
7.2. Kafka Bridge
7.2.1. OpenAPI v2 (Swagger)
Support for OpenAPI v2 is now deprecated and will be removed in Streams for Apache Kafka 3.0. OpenAPI v3 is now supported. Plan to move to using OpenAPI v3.
During the transition to OpenAPI v3, the /openapi endpoint continues to return the OpenAPI v2 specification, which is also available from an additional /openapi/v2 endpoint. A new /openapi/v3 endpoint returns the OpenAPI v3 specification.
7.2.2. Kafka Bridge span attributes
The following Kafka Bridge span attributes are deprecated with replacements shown where applicable:
- http.method replaced by http.request.method
- http.url replaced by url.scheme, url.path, and url.query
- messaging.destination replaced by messaging.destination.name
- http.status_code replaced by http.response.status_code
- messaging.destination.kind=topic without replacement
Kafka Bridge uses OpenTelemetry for distributed tracing. The changes are in line with changes to the OpenTelemetry semantic conventions. The attributes will be removed in a future release of the Kafka Bridge.
Chapter 8. Fixed issues
The issues fixed in Streams for Apache Kafka 2.9 on RHEL.
8.1. Fixed issues for Streams for Apache Kafka 2.9.3
Streams for Apache Kafka 2.9.3 (Long Term Support) is the latest patch release. This release continues to use Kafka 3.9.1, the version introduced with 2.9.1.
For details of the issues fixed in Kafka 3.9.1, refer to the Kafka 3.9.1 Release Notes.
For details of the issues resolved in Streams for Apache Kafka 2.9.3, see Streams for Apache Kafka 2.9.x Resolved Issues.
8.2. Fixed issues for Streams for Apache Kafka 2.9.2
Streams for Apache Kafka 2.9.2 (Long Term Support) was the previous patch release. It retained Kafka 3.9.1, introduced with 2.9.1.
For details of the issues resolved in Streams for Apache Kafka 2.9.2, see Streams for Apache Kafka 2.9.x Resolved Issues.
8.3. Fixed issues for Streams for Apache Kafka 2.9.1
Streams for Apache Kafka 2.9.1 (Long Term Support) introduced Kafka 3.9.1 as the underlying Kafka version, alongside other resolved issues.
For details of the issues resolved in Streams for Apache Kafka 2.9.1, see Streams for Apache Kafka 2.9.x Resolved Issues.
8.4. Fixed issues for Streams for Apache Kafka 2.9.0
For details of the issues fixed in Kafka 3.9.0, refer to the Kafka 3.9.0 Release Notes.
| Issue Number | Description |
|---|---|
| | Make it possible to use Cruise Control to move all data between two JBOD disks |
| | [KAFKA] Improve MirrorMaker logging in case of authorization errors |
| | [BRIDGE] path label in metrics can contain very different values and that makes it hard to work with the metrics |
8.5. Security updates
Check the latest information about Streams for Apache Kafka security updates in the Red Hat Product Advisories portal.
8.6. Errata
Check the latest security and product enhancement advisories for Streams for Apache Kafka.
Chapter 9. Known issues
This section lists the known issues for Streams for Apache Kafka 2.9 on RHEL.
9.1. Kafka: Intra-broker log directory reassignment can cause a log directory to go offline
When using multiple log directories per broker (JBOD) and performing intra-broker log directory reassignment (moving replicas between log directories on the same broker), Apache Kafka can incorrectly mark a log directory as failed if a transient filesystem or I/O error occurs during the operation.
This issue is caused by a race condition between background log flush operations and file deletion during replica movement. Under these conditions, Kafka may encounter a NoSuchFileException or a related I/O error and treat it as a fatal storage failure. As a result, the broker takes the entire log directory offline to protect data integrity, and any partitions stored on that directory become unavailable. The log directory can remain marked as failed even after the reassignment completes.
This behavior affects intra-broker log directory reassignment only. Inter-broker partition reassignment is not affected.
Workaround
Restart the affected Kafka broker. On restart, the broker re-scans the log directories and marks the disk as healthy if no underlying filesystem issue is present.
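How you restart the broker depends on how Kafka is run in your deployment. Assuming, for example, that the broker runs as a systemd service named kafka (an assumption; the unit name varies by installation), the restart might look like:

```shell
# Restart the affected broker and confirm it comes back healthy.
# The service name "kafka" is an assumption; adjust to your deployment.
sudo systemctl restart kafka
sudo systemctl status kafka
```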
This is a known issue in Apache Kafka. Work to address this issue is being tracked in the Apache Kafka issue tracker (KAFKA-19571). A fix will be included in a future release of Red Hat Streams for Apache Kafka once it is available in the underlying Apache Kafka distribution.
9.2. JMX authentication when running in FIPS mode
When running Streams for Apache Kafka in FIPS mode with JMX authentication enabled, clients may fail authentication. To work around this issue, do not enable JMX authentication while running in FIPS mode. We are investigating the issue and working to resolve it in a future release.
Chapter 10. Supported Configurations
Supported configurations for the Streams for Apache Kafka 2.9 release.
10.1. Supported platforms
The following platforms are tested for Streams for Apache Kafka 2.9 running with Kafka on the version of Red Hat Enterprise Linux (RHEL) stated.
| Operating System | Architecture | JVM |
|---|---|---|
| RHEL 8, 9, and 10 | x86, amd64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) | Java 11 (deprecated) and Java 17 |
Platforms are tested with OpenJDK 11 and 17, though Java 11 is deprecated in Streams for Apache Kafka 2.7 and will be removed in version 3.0. The IBM JDK is supported but not regularly tested against during each release. Oracle JDK 11 is not supported.
FIPS compliance
Streams for Apache Kafka is designed for FIPS compliance.
To check which versions of RHEL are approved by the National Institute of Standards and Technology (NIST), see the Cryptographic Module Validation Program on the NIST website.
10.2. Supported clients
Only client libraries built by Red Hat are supported for Streams for Apache Kafka. Currently, Streams for Apache Kafka only provides a Java client library, which is tested and supported on kafka-clients-3.8.0.redhat-00007 and newer.
Clients are tested with OpenJDK 11 and 17.
10.3. Supported Apache Kafka ecosystem
In Streams for Apache Kafka, only the following components released directly from the Apache Software Foundation are supported:
- Apache Kafka Broker
- Apache Kafka Connect
- Apache MirrorMaker
- Apache MirrorMaker 2
- Apache Kafka Java Producer, Consumer, Management clients, and Kafka Streams
- Apache ZooKeeper
Apache ZooKeeper is supported solely as an implementation detail of Apache Kafka and should not be modified for other purposes.
10.4. Additional supported features
- Kafka Bridge
- Cruise Control
- Distributed Tracing
See also, Chapter 12, Supported integration with Red Hat products.
10.5. Subscription limits and core usage
Cores used by Red Hat components and product operators do not count against subscription limits. Similarly, cores or vCPUs allocated to ZooKeeper nodes are excluded from subscription compliance calculations and do not count toward a subscription.
10.6. Storage requirements
Streams for Apache Kafka has been tested with block storage and is compatible with the XFS and ext4 file systems, which are commonly used with Kafka. File-based storage options, such as NFS, are not tested or supported for primary broker storage and may cause instability or degraded performance.
Chapter 11. Component details
The following table shows the component versions for each Streams for Apache Kafka release.
Components like the operators, console, and proxy only apply to using Streams for Apache Kafka on OpenShift.
| Streams for Apache Kafka | Apache Kafka | Strimzi Operators | Kafka Bridge | Oauth | Cruise Control | Console | Proxy |
|---|---|---|---|---|---|---|---|
| 2.9.3 | 3.9.1 | 0.45.1 | 0.31.2 | 0.15.1 | 2.5.142 | 0.6.9 | 0.9.0 |
| 2.9.2 | 3.9.1 | 0.45.1 | 0.31.2 | 0.15.1 | 2.5.142 | 0.6.7 | 0.9.0 |
| 2.9.1 | 3.9.1 | 0.45.0 | 0.31.1 | 0.15.0 | 2.5.142 | 0.6.6 | 0.9.0 |
| 2.9.0 | 3.9.0 | 0.45.0 | 0.31.1 | 0.15.0 | 2.5.141 | 0.6.3 | 0.9.0 |
| 2.8.0 | 3.8.0 | 0.43.0 | 0.30.0 | 0.15.0 | 2.5.138 | 0.1 | 0.8.0 |
| 2.7.0 | 3.7.0 | 0.40.0 | 0.28.0 | 0.15.0 | 2.5.137 | 0.1 | 0.5.1 |
| 2.6.0 | 3.6.0 | 0.38.0 | 0.27.0 | 0.14.0 | 2.5.128 | - | - |
| 2.5.2 | 3.5.0 (+3.5.2) | 0.36.0 | 0.26.0 | 0.13.0 | 2.5.123 | - | - |
| 2.5.1 | 3.5.0 | 0.36.0 | 0.26.0 | 0.13.0 | 2.5.123 | - | - |
| 2.5.0 | 3.5.0 | 0.36.0 | 0.26.0 | 0.13.0 | 2.5.123 | - | - |
| 2.4.0 | 3.4.0 | 0.34.0 | 0.25.0 | 0.12.0 | 2.5.112 | - | - |
| 2.3.0 | 3.3.1 | 0.32.0 | 0.22.3 | 0.11.0 | 2.5.103 | - | - |
| 2.2.2 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.1 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.0 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.89 | - | - |
| 2.1.0 | 3.1.0 | 0.28.0 | 0.21.4 | 0.10.0 | 2.5.82 | - | - |
| 2.0.1 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 2.0.0 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 1.8.4 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.8.0 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.7.0 | 2.7.0 | 0.22.1 | 0.19.0 | 0.7.1 | 2.5.37 | - | - |
| 1.6.7 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.6 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.5 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.4 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.0 | 2.6.0 | 0.20.0 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.5.0 | 2.5.0 | 0.18.0 | 0.16.0 | 0.5.0 | - | - | - |
| 1.4.1 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.4.0 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.3.0 | 2.3.0 | 0.14.0 | 0.14.0 | 0.1.0 | - | - | - |
| 1.2.0 | 2.2.1 | 0.12.1 | 0.12.2 | - | - | - | - |
| 1.1.1 | 2.1.1 | 0.11.4 | - | - | - | - | - |
| 1.1.0 | 2.1.1 | 0.11.1 | - | - | - | - | - |
| 1.0 | 2.0.0 | 0.8.1 | - | - | - | - | - |
Chapter 12. Supported integration with Red Hat products
Streams for Apache Kafka 2.9 supports integration with the following Red Hat products:
- Red Hat build of Keycloak
- Provides OAuth 2.0 authentication and OAuth 2.0 authorization.
For information on the functionality these products can introduce to your Streams for Apache Kafka deployment, refer to the product documentation.
12.1. Red Hat build of Keycloak (formerly Red Hat Single Sign-On)
Streams for Apache Kafka supports OAuth 2.0 token-based authorization through Red Hat build of Keycloak Authorization Services, providing centralized management of security policies and permissions.
Red Hat build of Keycloak replaces Red Hat Single Sign-On, which is now in maintenance support. We are working on updating our documentation, resources, and media to reflect this transition. In the interim, content that describes using Single Sign-On in the Streams for Apache Kafka documentation also applies to using the Red Hat build of Keycloak.
Revised on 2026-01-23 10:37:33 UTC