Release Notes for Streams for Apache Kafka 2.9 on RHEL


Red Hat Streams for Apache Kafka 2.9

Highlights of what's new and what's changed with this release of Streams for Apache Kafka on Red Hat Enterprise Linux

Abstract

The release notes summarize the new features, enhancements, and fixes introduced in the Streams for Apache Kafka 2.9 release.

AMQ Streams is being renamed to Streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat’s product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name.

Streams for Apache Kafka 2.9 is a Long Term Support (LTS) offering for Streams for Apache Kafka.

For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.

Chapter 3. Kafka 4 impact and adoption schedule

Streams for Apache Kafka 3.0 is scheduled for release in 2025. The introduction of Apache Kafka 4 in that release brings significant changes to how Kafka clusters are deployed, configured, and operated.

For more information on how these changes affect the Streams for Apache Kafka 3.0 release, refer to the article Streams for Apache Kafka 3.0: Kafka 4 Impact and Adoption.

Chapter 4. Features

Streams for Apache Kafka 2.9 introduces the features described in this section.

Streams for Apache Kafka 2.9 on RHEL is based on Apache Kafka 3.9.x.

Note

To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.

4.1. Long Term Support (LTS)

Streams for Apache Kafka 2.9.x is the Long Term Support (LTS) offering for Streams for Apache Kafka.

The latest patch release is Streams for Apache Kafka 2.9.3, and the Streams for Apache Kafka binaries are updated to version 2.9.3.

For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.

4.2. Kafka 3.9.x support

Streams for Apache Kafka supports and uses Apache Kafka 3.9.x. Updates for Apache Kafka 3.9.1 were introduced in the 2.9.1 patch release, and the 2.9.3 patch release continues to use this version. Only Kafka distributions built by Red Hat are supported.

For upgrade instructions, see the Streams for Apache Kafka and Kafka upgrade procedures in the product documentation.

Refer to the Kafka 3.9.0 and Kafka 3.9.1 Release Notes for additional information.

Kafka 3.8.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.9. We recommend that you perform a rolling update to use the new binaries.

4.3. Streams for Apache Kafka

4.3.1. KRaft support moves to GA

KRaft (Kafka Raft metadata) mode moves to GA (General Availability). KRaft mode replaces Kafka’s dependency on ZooKeeper for cluster management, simplifying deployment and management of Kafka clusters by bringing metadata management and coordination of clusters into Kafka.

Last release to support ZooKeeper

Kafka 3.9.x provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol. Kafka 3.9 is the final version to support ZooKeeper. Consequently, Streams for Apache Kafka 2.9 is the last version compatible with Kafka clusters using ZooKeeper.

If you are using ZooKeeper for metadata management in your Kafka cluster, you can migrate to KRaft mode using a static controller quorum. Once KRaft mode is enabled, you cannot switch back to ZooKeeper.

To prepare for Streams for Apache Kafka 3.0, migrate to Kafka in KRaft mode.

KRaft mode limitations

For Kafka 3.8 and earlier, the controller quorums (which replace ZooKeeper) were of fixed size (static). Dynamic controller quorums were introduced in Kafka 3.9.

Migration between Kafka’s static and dynamic controller quorums is not currently supported, though this feature is expected in a future Kafka release.

Streams for Apache Kafka 2.9 on RHEL supports static and dynamic controller quorums, with dynamic quorums recommended for new deployments.

4.3.2. Reassigning partitions on JBOD disks using Cruise Control

If you are using JBOD storage and have Cruise Control installed with Streams for Apache Kafka, you can now reassign partitions between the JBOD disks used for storage on the same broker. This capability also allows you to remove JBOD disks without data loss.

Make requests to the remove_disks endpoint of the Cruise Control REST API to demote a disk in the cluster and reassign its partitions to other disk volumes.
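As a sketch of how such a request might be composed (the Cruise Control address, broker ID, and log directory path below are hypothetical; the remove_disks endpoint accepts broker-logdir pairs in its brokerid_and_logdirs query parameter):

```shell
# Hypothetical values; adjust to your Cruise Control deployment.
CC_HOST="localhost:9090"
BROKER_ID=0
LOG_DIR="/var/lib/kafka/disk1"

# The remove_disks endpoint takes broker-logdir pairs in the
# brokerid_and_logdirs query parameter.
URL="http://${CC_HOST}/kafkacruisecontrol/remove_disks?brokerid_and_logdirs=${BROKER_ID}-${LOG_DIR}"

# Issue the request with, for example: curl -X POST "${URL}"
echo "${URL}"
```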

For more information, see Using Cruise Control to reassign partitions on JBOD disks.

Chapter 5. Enhancements

Streams for Apache Kafka 2.9 adds a number of enhancements.

5.1. Kafka 3.9.1 enhancements

The Streams for Apache Kafka 2.9.x release supports Kafka 3.9.x. Updates and enhancements from Kafka 3.9.1 were introduced in the 2.9.1 patch release and remain in use with 2.9.3.

For an overview of the enhancements introduced with Kafka 3.9.x, refer to the Kafka 3.9.0 and Kafka 3.9.1 Release Notes.

5.2. Streams for Apache Kafka

The Strimzi Quotas plugin moves to GA (General Availability). Use the plugin properties to set throughput and storage limits on brokers in your Kafka cluster configuration.

Warning

If you have previously used the Strimzi Quotas plugin in releases prior to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the latest properties to avoid reconciliation issues when upgrading.

For more information, see Setting limits on brokers using the Kafka Static Quota plugin.
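As an illustration, broker configuration using the plugin might look like the following (the limit values are arbitrary examples; check the plugin documentation for the full set of supported properties):

```properties
# Register the Strimzi Quotas plugin as the quota callback (values are illustrative).
client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback
# Throughput limits, in bytes per second
client.quota.callback.static.produce=1000000
client.quota.callback.static.fetch=1000000
# Storage limit: minimum available bytes per volume before producers are throttled
client.quota.callback.static.storage.per.volume.limit.min.available.bytes=5000000000
```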

Chapter 6. Technology Previews

Technology Preview features included with Streams for Apache Kafka 2.9.

Important

Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.

There are no technology previews for Streams for Apache Kafka 2.9 on RHEL.

Chapter 7. Deprecated features

The features deprecated in this release, which were supported in previous releases of Streams for Apache Kafka, are outlined below.

7.1. Streams for Apache Kafka

7.1.1. Support for Java 11

Support for Java 11 is deprecated from Kafka 3.7.0 and Streams for Apache Kafka 2.7. Java 11 will be unsupported for all Streams for Apache Kafka components, including clients, in release 3.0.

Streams for Apache Kafka supports Java 17. Use Java 17 when developing new applications. Plan to migrate any applications that currently use Java 11 to Java 17.

If you want to continue using Java 11 for the time being, Streams for Apache Kafka 2.5 provides Long Term Support (LTS). For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.

Note

Support for Java 8 was removed in Streams for Apache Kafka 2.4.0. If you are currently using Java 8, plan to migrate to Java 17 in the same way.

7.1.2. Environment variable configuration provider

You can use configuration providers to load configuration data from external sources for all Kafka components, including producers and consumers.

Previously, you could enable the io.strimzi.kafka.EnvVarConfigProvider environment variable configuration provider. However, this provider is now deprecated and will be removed in Streams for Apache Kafka 3.0. Therefore, it is recommended to update your implementation to use Kafka’s own environment variable configuration provider (org.apache.kafka.common.config.provider.EnvVarConfigProvider) to provide configuration properties as environment variables.

Example configuration to enable the environment variable configuration provider

config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
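Once the provider is enabled, properties can reference environment variables through the provider placeholder. For example (the property and variable names here are illustrative):

```properties
config.providers=env
config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
# Resolved from the MY_BOOTSTRAP_SERVERS environment variable when the client starts
bootstrap.servers=${env:MY_BOOTSTRAP_SERVERS}
```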

7.1.3. Identity replication policy

Identity replication policy is a feature used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the source cluster’s name, the topic retains its original name. This setting is particularly useful for active/passive backups and data migration scenarios.

To implement an identity replication policy, you must specify a replication policy class (replication.policy.class) in the MirrorMaker 2 configuration. Previously, you could specify the io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy class included with the Streams for Apache Kafka mirror-maker-2-extensions component. However, this component is now deprecated and will be removed in Streams for Apache Kafka 3.0. Therefore, it is recommended to update your implementation to use Kafka’s own replication policy class (org.apache.kafka.connect.mirror.IdentityReplicationPolicy).
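A minimal configuration fragment using Kafka’s class might look like this (surrounding MirrorMaker 2 connector configuration omitted):

```properties
# Preserve original topic names instead of prefixing them with the source cluster alias
replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```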

See Using Streams for Apache Kafka with MirrorMaker 2.

7.1.4. Kafka MirrorMaker 1

Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 was deprecated in Kafka 3.0 and will be removed in Streams for Apache Kafka 3.0 and Kafka 4.0.0, leaving MirrorMaker 2 as the only available version. MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters. To avoid disruptions, transition to MirrorMaker 2 before support ends.

If you’re using MirrorMaker 1, you can replicate its functionality in MirrorMaker 2 by using the IdentityReplicationPolicy class. By default, MirrorMaker 2 renames topics replicated to a target cluster, but IdentityReplicationPolicy preserves the original topic names, enabling the same active/passive unidirectional replication as MirrorMaker 1.

See Using Streams for Apache Kafka with MirrorMaker 2.

7.2. Kafka Bridge

7.2.1. OpenAPI v2 (Swagger)

Support for OpenAPI v2 is now deprecated and will be removed in Streams for Apache Kafka 3.0. OpenAPI v3 is now supported. Plan to move to using OpenAPI v3.

During the transition, the /openapi endpoint continues to return the OpenAPI v2 specification, which is also available from an additional /openapi/v2 endpoint. A new /openapi/v3 endpoint returns the OpenAPI v3 specification.
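As a sketch, the specification endpoints can be queried as follows (the Kafka Bridge address is hypothetical):

```shell
# Hypothetical Kafka Bridge address; adjust host and port.
BRIDGE="http://localhost:8080"

# During the transition: /openapi and /openapi/v2 serve the v2 specification,
# while /openapi/v3 serves the v3 specification.
V2_URL="${BRIDGE}/openapi/v2"
V3_URL="${BRIDGE}/openapi/v3"

# Fetch a specification with, for example: curl -s "${V3_URL}"
echo "${V3_URL}"
```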

7.2.2. Kafka Bridge span attributes

The following Kafka Bridge span attributes are deprecated with replacements shown where applicable:

  • http.method replaced by http.request.method
  • http.url replaced by url.scheme, url.path, and url.query
  • messaging.destination replaced by messaging.destination.name
  • http.status_code replaced by http.response.status_code
  • messaging.destination.kind=topic without replacement

Kafka Bridge uses OpenTelemetry for distributed tracing. The changes are in line with changes to the OpenTelemetry semantic conventions. The attributes will be removed in a future release of the Kafka Bridge.

Chapter 8. Fixed issues

The issues fixed in Streams for Apache Kafka 2.9 on RHEL.

Streams for Apache Kafka 2.9.3 (Long Term Support) is the latest patch release. This release continues to use Kafka 3.9.1, the version introduced with 2.9.1.

For details of the issues fixed in Kafka 3.9.1, refer to the Kafka 3.9.1 Release Notes.

For details of the issues resolved in Streams for Apache Kafka 2.9.3, see Streams for Apache Kafka 2.9.x Resolved Issues.

Streams for Apache Kafka 2.9.2 (Long Term Support) was the previous patch release. It retained Kafka 3.9.1, introduced with 2.9.1.

For details of the issues resolved in Streams for Apache Kafka 2.9.2, see Streams for Apache Kafka 2.9.x Resolved Issues.

Streams for Apache Kafka 2.9.1 (Long Term Support) introduced Kafka 3.9.1 as the underlying Kafka version, alongside other resolved issues.

For details of the issues resolved in Streams for Apache Kafka 2.9.1, see Streams for Apache Kafka 2.9.x Resolved Issues.

For details of the issues fixed in Kafka 3.9.0, refer to the Kafka 3.9.0 Release Notes.

Table 8.1. Streams for Apache Kafka fixed issues

Issue Number  | Description
ENTMQST-4324  | Make it possible to use Cruise Control to move all data between two JBOD disks
ENTMQST-5318  | [KAFKA] Improve MirrorMaker logging in case of authorization errors
ENTMQST-6234  | [BRIDGE] path label in metrics can contain very different values and that makes it hard to work with the metrics

8.5. Security updates

Check the latest information about Streams for Apache Kafka security updates in the Red Hat Product Advisories portal.

8.6. Errata

Check the latest security and product enhancement advisories for Streams for Apache Kafka.

Chapter 9. Known issues

This section lists the known issues for Streams for Apache Kafka 2.9 on RHEL.

9.1. Log directory incorrectly marked as failed during intra-broker JBOD reassignment

When using multiple log directories per broker (JBOD) and performing intra-broker log directory reassignment (moving replicas between log directories on the same broker), Apache Kafka can incorrectly mark a log directory as failed if a transient filesystem or I/O error occurs during the operation.

This issue is caused by a race condition between background log flush operations and file deletion during replica movement. Under these conditions, Kafka may encounter a NoSuchFileException or a related I/O error and treat it as a fatal storage failure. As a result, the broker takes the entire log directory offline to protect data integrity, and any partitions stored on that directory become unavailable. The log directory can remain marked as failed even after the reassignment completes.

This behavior affects intra-broker log directory reassignment only. Inter-broker partition reassignment is not affected.

Workaround

Restart the affected Kafka broker. On restart, the broker re-scans the log directories and marks the disk as healthy if no underlying filesystem issue is present.

This is a known issue in Apache Kafka. Work to address this issue is being tracked in the Apache Kafka issue tracker (KAFKA-19571). A fix will be included in a future release of Red Hat Streams for Apache Kafka once it is available in the underlying Apache Kafka distribution.

9.2. JMX authentication when running in FIPS mode

When running Streams for Apache Kafka in FIPS mode with JMX authentication enabled, clients may fail authentication. To work around this issue, do not enable JMX authentication while running in FIPS mode. We are investigating the issue and working to resolve it in a future release.

Chapter 10. Supported Configurations

Supported configurations for the Streams for Apache Kafka 2.9 release.

10.1. Supported platforms

The following platforms are tested for Streams for Apache Kafka 2.9 running with Kafka on the version of Red Hat Enterprise Linux (RHEL) stated.

Operating System  | Architecture                                                                            | JVM
RHEL 8, 9, and 10 | x86, amd64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM)  | Java 11 (deprecated) and Java 17

Platforms are tested with OpenJDK 11 and 17, though Java 11 is deprecated from Streams for Apache Kafka 2.7 and will be removed in version 3.0. The IBM JDK is supported but not regularly tested against during each release. Oracle JDK 11 is not supported.

FIPS compliance

Streams for Apache Kafka is designed for FIPS compliance.

To check which versions of RHEL are approved by the National Institute of Standards and Technology (NIST), see the Cryptographic Module Validation Program on the NIST website.

10.2. Supported clients

Only client libraries built by Red Hat are supported for Streams for Apache Kafka. Currently, Streams for Apache Kafka only provides a Java client library, which is tested and supported on kafka-clients-3.8.0.redhat-00007 and newer.

Clients are tested with OpenJDK 11 and 17.

10.3. Supported Apache Kafka ecosystem

In Streams for Apache Kafka, only the following components released directly from the Apache Software Foundation are supported:

  • Apache Kafka Broker
  • Apache Kafka Connect
  • Apache MirrorMaker
  • Apache MirrorMaker 2
  • Apache Kafka Java Producer, Consumer, Management clients, and Kafka Streams
  • Apache ZooKeeper
Note

Apache ZooKeeper is supported solely as an implementation detail of Apache Kafka and should not be modified for other purposes.

10.4. Additional supported features

  • Kafka Bridge
  • Cruise Control
  • Distributed Tracing

See also Chapter 12, Supported integration with Red Hat products.

10.5. Subscription limits and core usage

Cores used by Red Hat components and product operators do not count against subscription limits. Additionally, cores or vCPUs allocated to ZooKeeper nodes are excluded from subscription compliance calculations.

10.6. Storage requirements

Streams for Apache Kafka has been tested with block storage and is compatible with the XFS and ext4 file systems, which are commonly used with Kafka. File-based storage options, such as NFS, are not tested or supported for primary broker storage and may cause instability or degraded performance.

Chapter 11. Component details

The following table shows the component versions for each Streams for Apache Kafka release.

Note

Components like the operators, console, and proxy only apply to using Streams for Apache Kafka on OpenShift.

Streams for Apache Kafka | Apache Kafka   | Strimzi Operators | Kafka Bridge | Oauth  | Cruise Control | Console | Proxy
2.9.3                    | 3.9.1          | 0.45.1            | 0.31.2       | 0.15.1 | 2.5.142        | 0.6.9   | 0.9.0
2.9.2                    | 3.9.1          | 0.45.1            | 0.31.2       | 0.15.1 | 2.5.142        | 0.6.7   | 0.9.0
2.9.1                    | 3.9.1          | 0.45.0            | 0.31.1       | 0.15.0 | 2.5.142        | 0.6.6   | 0.9.0
2.9.0                    | 3.9.0          | 0.45.0            | 0.31.1       | 0.15.0 | 2.5.141        | 0.6.3   | 0.9.0
2.8.0                    | 3.8.0          | 0.43.0            | 0.30.0       | 0.15.0 | 2.5.138        | 0.1     | 0.8.0
2.7.0                    | 3.7.0          | 0.40.0            | 0.28.0       | 0.15.0 | 2.5.137        | 0.1     | 0.5.1
2.6.0                    | 3.6.0          | 0.38.0            | 0.27.0       | 0.14.0 | 2.5.128        | -       | -
2.5.2                    | 3.5.0 (+3.5.2) | 0.36.0            | 0.26.0       | 0.13.0 | 2.5.123        | -       | -
2.5.1                    | 3.5.0          | 0.36.0            | 0.26.0       | 0.13.0 | 2.5.123        | -       | -
2.5.0                    | 3.5.0          | 0.36.0            | 0.26.0       | 0.13.0 | 2.5.123        | -       | -
2.4.0                    | 3.4.0          | 0.34.0            | 0.25.0       | 0.12.0 | 2.5.112        | -       | -
2.3.0                    | 3.3.1          | 0.32.0            | 0.22.3       | 0.11.0 | 2.5.103        | -       | -
2.2.2                    | 3.2.3          | 0.29.0            | 0.21.5       | 0.10.0 | 2.5.103        | -       | -
2.2.1                    | 3.2.3          | 0.29.0            | 0.21.5       | 0.10.0 | 2.5.103        | -       | -
2.2.0                    | 3.2.3          | 0.29.0            | 0.21.5       | 0.10.0 | 2.5.89         | -       | -
2.1.0                    | 3.1.0          | 0.28.0            | 0.21.4       | 0.10.0 | 2.5.82         | -       | -
2.0.1                    | 3.0.0          | 0.26.0            | 0.20.3       | 0.9.0  | 2.5.73         | -       | -
2.0.0                    | 3.0.0          | 0.26.0            | 0.20.3       | 0.9.0  | 2.5.73         | -       | -
1.8.4                    | 2.8.0          | 0.24.0            | 0.20.1       | 0.8.1  | 2.5.59         | -       | -
1.8.0                    | 2.8.0          | 0.24.0            | 0.20.1       | 0.8.1  | 2.5.59         | -       | -
1.7.0                    | 2.7.0          | 0.22.1            | 0.19.0       | 0.7.1  | 2.5.37         | -       | -
1.6.7                    | 2.6.3          | 0.20.1            | 0.19.0       | 0.6.1  | 2.5.11         | -       | -
1.6.6                    | 2.6.3          | 0.20.1            | 0.19.0       | 0.6.1  | 2.5.11         | -       | -
1.6.5                    | 2.6.2          | 0.20.1            | 0.19.0       | 0.6.1  | 2.5.11         | -       | -
1.6.4                    | 2.6.2          | 0.20.1            | 0.19.0       | 0.6.1  | 2.5.11         | -       | -
1.6.0                    | 2.6.0          | 0.20.0            | 0.19.0       | 0.6.1  | 2.5.11         | -       | -
1.5.0                    | 2.5.0          | 0.18.0            | 0.16.0       | 0.5.0  | -              | -       | -
1.4.1                    | 2.4.0          | 0.17.0            | 0.15.2       | 0.3.0  | -              | -       | -
1.4.0                    | 2.4.0          | 0.17.0            | 0.15.2       | 0.3.0  | -              | -       | -
1.3.0                    | 2.3.0          | 0.14.0            | 0.14.0       | 0.1.0  | -              | -       | -
1.2.0                    | 2.2.1          | 0.12.1            | 0.12.2       | -      | -              | -       | -
1.1.1                    | 2.1.1          | 0.11.4            | -            | -      | -              | -       | -
1.1.0                    | 2.1.1          | 0.11.1            | -            | -      | -              | -       | -
1.0                      | 2.0.0          | 0.8.1             | -            | -      | -              | -       | -

Chapter 12. Supported integration with Red Hat products

Streams for Apache Kafka 2.9 supports integration with the following Red Hat products:

Red Hat build of Keycloak
Provides OAuth 2.0 authentication and OAuth 2.0 authorization.

For information on the functionality these products can introduce to your Streams for Apache Kafka deployment, refer to the product documentation.

Streams for Apache Kafka supports OAuth 2.0 token-based authorization through Red Hat build of Keycloak Authorization Services, providing centralized management of security policies and permissions.

Note

Red Hat build of Keycloak replaces Red Hat Single Sign-On, which is now in maintenance support. We are working on updating our documentation, resources, and media to reflect this transition. In the interim, content that describes using Single Sign-On in the Streams for Apache Kafka documentation also applies to using the Red Hat build of Keycloak.

Revised on 2026-01-23 10:37:33 UTC

Legal Notice

Copyright © Red Hat.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.