Release Notes for Streams for Apache Kafka 2.7 on RHEL
Abstract
Highlights of what's new and what's changed with this release of Streams for Apache Kafka on Red Hat Enterprise Linux.
Chapter 1. Notification of name change to Streams for Apache Kafka
AMQ Streams is being renamed Streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat’s product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name.
Chapter 2. Features
Streams for Apache Kafka 2.7 introduces the features described in this section.
Streams for Apache Kafka 2.7 on RHEL is based on Apache Kafka 3.7.0.
To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.
2.1. Kafka 3.7.0 support
Streams for Apache Kafka now supports and uses Apache Kafka version 3.7.0. Only Kafka distributions built by Red Hat are supported.
For upgrade instructions, see the instructions for Streams for Apache Kafka and Kafka upgrades in the following guides:
Refer to the Kafka 3.7.0 Release Notes for additional information.
Kafka 3.6.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.7. We recommend that you perform a rolling update to use the new binaries.
Kafka 3.7.0 provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol.
2.2. KRaft: Support for migrating from ZooKeeper-based to KRaft-based Kafka clusters
KRaft mode in Streams for Apache Kafka is a technology preview with some limitations, but this release introduces a number of new features that support KRaft. To support using KRaft, a new guide is available: Using Streams for Apache Kafka on RHEL in KRaft mode.
If you are using ZooKeeper for metadata management in your Kafka cluster, you can now migrate to using Kafka in KRaft mode.
During the migration, you do the following:
- Install a quorum of controller nodes, which replaces ZooKeeper for management of your cluster.
- Enable KRaft migration in the controller configuration by setting the zookeeper.metadata.migration.enable flag to true.
- Enable KRaft migration in the brokers by setting the zookeeper.metadata.migration.enable flag to true.
- Switch the brokers to using KRaft by adding a broker KRaft role and node ID.
- Switch the controllers out of migration mode by removing the zookeeper.metadata.migration.enable property.
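As an illustration, the migration flags described above map to controller and broker properties along the following lines. This is a minimal sketch only; the node IDs, quorum voters, host names, and ports are hypothetical placeholders, and a real migration requires additional listener and cluster ID configuration:

```properties
# controller.properties -- new KRaft controller node during migration (illustrative values)
process.roles=controller
node.id=3000
controller.quorum.voters=3000@localhost:9093
# The controller must still connect to ZooKeeper while migrating
zookeeper.connect=localhost:2181
zookeeper.metadata.migration.enable=true

# server.properties -- existing ZooKeeper-based broker during migration (illustrative values)
broker.id=0
controller.quorum.voters=3000@localhost:9093
zookeeper.connect=localhost:2181
zookeeper.metadata.migration.enable=true
```

Once the migration completes, the brokers are reconfigured with a broker KRaft role and node ID, and the zookeeper.* properties are removed.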
2.3. KRaft: Kafka upgrades for KRaft-based clusters
KRaft to KRaft upgrades are now supported. You update the installation files, then configure and restart all Kafka nodes. You then upgrade the KRaft-based Kafka cluster to a newer supported KRaft metadata version.
Updating the KRaft metadata version
./bin/kafka-features.sh --bootstrap-server <broker_host>:<port> upgrade --metadata 3.7
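After the upgrade, you can confirm the finalized metadata version with the same tool. The broker host and port are placeholders for your own cluster:

```shell
# Show the finalized metadata.version for the cluster
./bin/kafka-features.sh --bootstrap-server <broker_host>:<port> describe
```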
2.4. RHEL 7 no longer supported
RHEL 7 is no longer supported. This decision was made due to known incompatibility issues between Kafka 3.7 and outdated system libraries on RHEL 7. For details, see Chapter 7, Known issues.
Chapter 3. Enhancements
Streams for Apache Kafka 2.7 adds a number of enhancements.
3.1. Kafka 3.7.0 enhancements
For an overview of the enhancements introduced with Kafka 3.7.0, refer to the Kafka 3.7.0 Release Notes.
3.2. Kafka Bridge text format
When performing producer operations, POST requests must provide Content-Type headers specifying the embedded data format of the messages produced. Previously, JSON and binary were the supported formats for record keys and values. It’s now possible to also use the text format.
| Embedded data format | Content-Type header |
|---|---|
| JSON | application/vnd.kafka.json.v2+json |
| Binary | application/vnd.kafka.binary.v2+json |
| Text | application/vnd.kafka.text.v2+json |
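For example, a producer request using the text format might look like the following. The bridge host, port, and topic name are placeholders, and the example assumes a text content type of application/vnd.kafka.text.v2+json:

```shell
# Produce a plain-text record through the Kafka Bridge (illustrative endpoint)
curl -X POST http://localhost:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.text.v2+json' \
  -d '{"records":[{"value":"hello world"}]}'
```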
Chapter 4. Technology Previews
Technology Preview features included with Streams for Apache Kafka 2.7.
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.
4.1. KRaft mode
KRaft mode is available as a technology preview.
Currently, the KRaft mode in Streams for Apache Kafka has the following limitations:
- Downgrading from KRaft mode to using ZooKeeper is not supported.
- JBOD storage with multiple disks is not supported.
- Unregistering Kafka nodes that have been removed from the Kafka cluster is not supported.
4.2. Kafka Static Quota plugin configuration
Use the technology preview of the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers.
Example Kafka Static Quota plugin configuration
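A minimal sketch of the plugin configuration in the Kafka broker properties file. The limit values are illustrative only, and the property names assume the Strimzi Static Quota callback class:

```properties
# Register the Static Quota plugin as the broker quota callback
client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback
# Byte-rate thresholds for producers and consumers (bytes/second)
client.quota.callback.static.produce=1000000
client.quota.callback.static.fetch=1000000
# Storage quotas: soft and hard limits (bytes)
client.quota.callback.static.storage.soft=400000000000
client.quota.callback.static.storage.hard=500000000000
# Interval between storage usage checks (seconds)
client.quota.callback.static.storage.check-interval=5
```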
See Setting limits on brokers using the Kafka Static Quota plugin.
Chapter 5. Deprecated features
The features deprecated in this release, and that were supported in previous releases of Streams for Apache Kafka, are outlined below.
5.1. Java 11 deprecated in Streams for Apache Kafka 2.7.0
Support for Java 11 is deprecated in Kafka 3.7.0 and Streams for Apache Kafka 2.7.0. Java 11 will be unsupported for all Streams for Apache Kafka components, including clients, in the future.
Streams for Apache Kafka supports Java 17. Use Java 17 when developing new applications. Plan to migrate any applications that currently use Java 11 to 17.
Support for Java 8 was removed in Streams for Apache Kafka 2.4.0. If you are currently using Java 8, plan to migrate to Java 17 in the same way.
5.2. Kafka MirrorMaker 2 identity replication policy
Identity replication policy is a feature used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the source cluster’s name, the topic retains its original name. This setting is particularly useful for active/passive backups and data migration scenarios.
To implement an identity replication policy, you must specify a replication policy class (replication.policy.class) in the MirrorMaker 2 configuration. Previously, you could specify the io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy class included with the Streams for Apache Kafka mirror-maker-2-extensions component. However, this component is now deprecated and will be removed in the future. Therefore, it is recommended to update your implementation to use Kafka’s own replication policy class (org.apache.kafka.connect.mirror.IdentityReplicationPolicy).
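For example, in the MirrorMaker 2 configuration, the recommended Kafka class replaces the deprecated Strimzi class:

```properties
# Deprecated: io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy
# Recommended replacement, so replicated topics keep their original names:
replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```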
5.3. Kafka MirrorMaker 1
Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 was deprecated in Kafka 3.0.0 and will be removed in Kafka 4.0.0. MirrorMaker 2 will be the only version available. MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters.
As a result, MirrorMaker 1 has also been deprecated in Streams for Apache Kafka. If you are using MirrorMaker 1 (referred to as just MirrorMaker in the Streams for Apache Kafka documentation), use MirrorMaker 2 with the IdentityReplicationPolicy class. MirrorMaker 2 renames topics replicated to a target cluster; the IdentityReplicationPolicy configuration overrides this automatic renaming. Use it to produce the same active/passive unidirectional replication as MirrorMaker 1.
5.4. Kafka Bridge span attributes
The following Kafka Bridge span attributes are deprecated with replacements shown where applicable:
- http.method replaced by http.request.method
- http.url replaced by url.scheme, url.path, and url.query
- messaging.destination replaced by messaging.destination.name
- http.status_code replaced by http.response.status_code
- messaging.destination.kind=topic without replacement
Kafka Bridge uses OpenTelemetry for distributed tracing. These changes are in line with changes to the OpenTelemetry semantic conventions. The attributes will be removed in a future release of the Kafka Bridge.
Chapter 6. Fixed issues
The issues fixed in Streams for Apache Kafka 2.7 on RHEL.
For details of the issues fixed in Kafka 3.7.0, refer to the Kafka 3.7.0 Release Notes.
| Issue Number | Description |
|---|---|
| | OAuth issue fix |
| | Producing with different embedded formats across multiple HTTP requests isn’t honoured |
| | Add support for Kafka and Strimzi upgrades when KRaft is enabled |
| | ZooKeeper to KRaft migration |
| Issue Number | Description |
|---|---|
| CVE-2023-43642 | A flaw was found in SnappyInputStream in snappy-java |
| CVE-2023-52428 | Nimbus JOSE+JWT before 9.37.2 |
| CVE-2022-4899 | A vulnerability was found in zstd v1.4.10 |
| CVE-2021-24032 | A flaw was found in zstd |
| CVE-2024-23944 | Apache ZooKeeper: Information disclosure in persistent watcher handling |
| CVE-2021-3520 | A flaw in lz4 |
| CVE-2024-29025 | netty-codec-http: Allocation of Resources Without Limits or Throttling |
| CVE-2024-1023 | vert.x: io.vertx/vertx-core: memory leak due to the use of Netty FastThreadLocal data structures in Vertx |
| CVE-2024-1300 | vertx-core: io.vertx:vertx-core: memory leak when a TCP server is configured with TLS and SNI support |
Chapter 7. Known issues
This section lists the known issues for Streams for Apache Kafka 2.7 on RHEL.
7.1. Incompatibility with RHEL 7
There are known incompatibility issues when using RHEL 7 with Kafka 3.7. As a result, RHEL 7 is no longer supported. The issues arise due to an outdated GCC (GNU Compiler Collection) version on RHEL 7, which is incompatible with the version of the RocksDB JNI library (org.rocksdb:rocksdbjni:7.9.2) required by Kafka 3.7.
RocksDB JNI version 7.9.2 requires newer versions of GCC and the associated libstdc++ library than are available on RHEL 7. Kafka Streams, which depends on RocksDB, and Snappy compression will not function correctly on RHEL 7 due to these outdated libraries.
Recommendation
- Upgrade your clients and brokers running on RHEL 7 to RHEL 8 to ensure compatibility with Kafka 3.7 and the latest Streams for Apache Kafka features.
- If you wish to continue using RHEL 7, consider using Streams for Apache Kafka 2.5 LTS or 2.6.
7.2. JMX authentication when running in FIPS mode
When running Streams for Apache Kafka in FIPS mode with JMX authentication enabled, clients may fail authentication. To work around this issue, do not enable JMX authentication while running in FIPS mode. We are investigating the issue and working to resolve it in a future release.
Chapter 8. Supported Configurations
Supported configurations for the Streams for Apache Kafka 2.7 release.
8.1. Supported platforms
The following platforms are tested for Streams for Apache Kafka 2.7 running with Kafka on the version of Red Hat Enterprise Linux (RHEL) stated.
| Operating System | Architecture | JVM |
|---|---|---|
| RHEL 8 and 9 | x86, amd64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) | Java 11 (deprecated) and Java 17 |
Platforms are tested with OpenJDK 11 and 17, though Java 11 is deprecated in Streams for Apache Kafka 2.7.0. The IBM JDK is supported but not regularly tested against during each release. Oracle JDK 11 is not supported.
FIPS compliance
Streams for Apache Kafka 2.7.0 is designed for FIPS.
To check which versions of RHEL are approved by the National Institute of Standards and Technology (NIST), see the Cryptographic Module Validation Program on the NIST website.
8.2. Supported clients
Only client libraries built by Red Hat are supported for Streams for Apache Kafka. Currently, Streams for Apache Kafka only provides a Java client library.
Clients are tested with OpenJDK 11 and 17.
8.3. Supported Apache Kafka ecosystem
In Streams for Apache Kafka, only the following components released directly from the Apache Software Foundation are supported:
- Apache Kafka Broker
- Apache Kafka Connect
- Apache MirrorMaker
- Apache MirrorMaker 2
- Apache Kafka Java Producer, Consumer, Management clients, and Kafka Streams
- Apache ZooKeeper
Apache ZooKeeper is supported solely as an implementation detail of Apache Kafka and should not be modified for other purposes. Additionally, the cores or vCPU allocated to ZooKeeper nodes are not included in subscription compliance calculations. In other words, ZooKeeper nodes do not count towards a customer’s subscription.
8.4. Additional supported features
- Kafka Bridge
- Cruise Control
- Distributed Tracing
See also, Chapter 10, Supported integration with Red Hat products.
8.5. Storage requirements
Streams for Apache Kafka has been tested with block storage and is compatible with the XFS and ext4 file systems, both of which are commonly used with Kafka. File storage options, such as NFS, are not compatible.
Chapter 9. Component details
The following table shows the component versions for each Streams for Apache Kafka release.
Components like the operators, console, and proxy only apply to using Streams for Apache Kafka on OpenShift.
| Streams for Apache Kafka | Apache Kafka | Strimzi Operators | Kafka Bridge | OAuth | Cruise Control | Console | Proxy |
|---|---|---|---|---|---|---|---|
| 2.7.0 | 3.7.0 | 0.40.0 | 0.28 | 0.15.0 | 2.5.128 | 0.1 | 0.5.1 |
| 2.6.0 | 3.6.0 | 0.38.0 | 0.27 | 0.14.0 | 2.5.128 | - | - |
| 2.5.1 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.5.0 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.4.0 | 3.4.0 | 0.34.0 | 0.25.0 | 0.12.0 | 2.5.112 | - | - |
| 2.3.0 | 3.3.1 | 0.32.0 | 0.22.3 | 0.11.0 | 2.5.103 | - | - |
| 2.2.2 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.1 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.0 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.89 | - | - |
| 2.1.0 | 3.1.0 | 0.28.0 | 0.21.4 | 0.10.0 | 2.5.82 | - | - |
| 2.0.1 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 2.0.0 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 1.8.4 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.8.0 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.7.0 | 2.7.0 | 0.22.1 | 0.19.0 | 0.7.1 | 2.5.37 | - | - |
| 1.6.7 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.6 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.5 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.4 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.0 | 2.6.0 | 0.20.0 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.5.0 | 2.5.0 | 0.18.0 | 0.16.0 | 0.5.0 | - | - | - |
| 1.4.1 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.4.0 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.3.0 | 2.3.0 | 0.14.0 | 0.14.0 | 0.1.0 | - | - | - |
| 1.2.0 | 2.2.1 | 0.12.1 | 0.12.2 | - | - | - | - |
| 1.1.1 | 2.1.1 | 0.11.4 | - | - | - | - | - |
| 1.1.0 | 2.1.1 | 0.11.1 | - | - | - | - | - |
| 1.0 | 2.0.0 | 0.8.1 | - | - | - | - | - |
Strimzi 0.26.0 contains a Log4j vulnerability. The version included in the product has been updated to depend on versions that do not contain the vulnerability.
Chapter 10. Supported integration with Red Hat products
Streams for Apache Kafka 2.7 supports integration with the following Red Hat products:
- Red Hat build of Keycloak
- Provides OAuth 2.0 authentication and OAuth 2.0 authorization.
For information on the functionality these products can introduce to your Streams for Apache Kafka deployment, refer to the product documentation.
10.1. Red Hat build of Keycloak (formerly Red Hat Single Sign-On)
Streams for Apache Kafka supports OAuth 2.0 token-based authorization through Red Hat build of Keycloak Authorization Services, providing centralized management of security policies and permissions.
Red Hat build of Keycloak replaces Red Hat Single Sign-On, which is now in maintenance support. We are working on updating our documentation, resources, and media to reflect this transition. In the interim, content that describes using Single Sign-On in the Streams for Apache Kafka documentation also applies to using the Red Hat build of Keycloak.
Revised on 2024-07-08 11:23:30 UTC