Release Notes for Streams for Apache Kafka 2.9 on OpenShift
Highlights of what's new and what's changed with this release of Streams for Apache Kafka on OpenShift Container Platform
Abstract
AMQ Streams is being renamed to Streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat's product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name.
Streams for Apache Kafka 2.9 is a Long Term Support (LTS) offering for Streams for Apache Kafka.
For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.
Chapter 3. Upgrading from a Streams for Apache Kafka version before 1.7
The v1beta2 API version for all custom resources was introduced with Streams for Apache Kafka 1.7. For Streams for Apache Kafka 1.8, v1alpha1 and v1beta1 API versions were removed from all Streams for Apache Kafka custom resources apart from KafkaTopic and KafkaUser.
Upgrade of the custom resources to v1beta2 prepares Streams for Apache Kafka for a move to Kubernetes CRD v1, which is required for Kubernetes 1.22.
If you are upgrading from a Streams for Apache Kafka version prior to version 1.7:
1. Upgrade to Streams for Apache Kafka 1.7.
2. Convert the custom resources to v1beta2.
3. Upgrade to Streams for Apache Kafka 1.8.
You must upgrade your custom resources to use API version v1beta2 before upgrading to Streams for Apache Kafka version 2.9.
3.1. Upgrading custom resources to v1beta2
To support the upgrade of custom resources to v1beta2, Streams for Apache Kafka provides an API conversion tool, which you can download from the Streams for Apache Kafka 1.8 software downloads page.
You perform the custom resource upgrade in two steps.
Step one: Convert the format of custom resources
Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:
- Converting the YAML files that describe the configuration for Streams for Apache Kafka custom resources
- Converting Streams for Apache Kafka custom resources directly in the cluster
Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.
Step two: Upgrade CRDs to v1beta2
Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.
For more information, see Upgrading from a Streams for Apache Kafka version earlier than 1.7.
Chapter 4. Kafka 4 impact and adoption schedule
Streams for Apache Kafka 3.0 is scheduled for release in 2025. The introduction of Apache Kafka 4 in that release brings significant changes to how Kafka clusters are deployed, configured, and operated.
For more information on how these changes affect the Streams for Apache Kafka 3.0 release, refer to the article Streams for Apache Kafka 3.0: Kafka 4 Impact and Adoption.
Chapter 5. Features
Streams for Apache Kafka 2.9 introduces the features described in this section.
Streams for Apache Kafka 2.9 on OpenShift is based on Apache Kafka 3.9.x and Strimzi 0.45.x.
To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.
5.1. Streams for Apache Kafka 2.9.x (Long Term Support)
Streams for Apache Kafka 2.9.x is the Long Term Support (LTS) offering for Streams for Apache Kafka.
The latest patch release is Streams for Apache Kafka 2.9.3. The latest Streams for Apache Kafka product images have changed to version 2.9.3.
For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.
5.2. OpenShift Container Platform support
Streams for Apache Kafka 2.9 is supported on OpenShift Container Platform 4.14 and later.
For more information, see Chapter 12, Supported Configurations.
5.3. Kafka 3.9.x support
Streams for Apache Kafka supports and uses Apache Kafka 3.9.x. Updates for Apache Kafka 3.9.1 were introduced in the 2.9.1 patch release, and the 2.9.3 patch release continues to use this version. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to Streams for Apache Kafka version 2.9 before you can upgrade brokers and client applications to Kafka 3.9.x. For upgrade instructions, see Upgrading Streams for Apache Kafka.
Refer to the Kafka 3.9.0 and Kafka 3.9.1 Release Notes for additional information.
Kafka 3.8.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.9.
Last release to support ZooKeeper
Kafka 3.9.x provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol. Kafka 3.9 is the final version to support ZooKeeper. Consequently, Streams for Apache Kafka 2.9.x is the last version compatible with Kafka clusters using ZooKeeper.
To deploy Kafka clusters in KRaft (Kafka Raft metadata) mode without ZooKeeper, the Kafka custom resource must include the annotation strimzi.io/kraft="enabled", and you must use KafkaNodePool resources to manage the configuration of groups of nodes.
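As a minimal sketch, the following Kafka resource enables KRaft mode and node pools through annotations, paired with a single dual-role node pool. The resource names, listener, and storage settings are illustrative placeholders; adapt them to your deployment:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: enabled       # run without ZooKeeper, using the Raft protocol
    strimzi.io/node-pools: enabled  # manage node configuration through KafkaNodePool resources
spec:
  kafka:
    version: 3.9.1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: my-cluster  # binds the pool to the Kafka cluster above
spec:
  replicas: 3
  roles:
    - controller  # KRaft controller role
    - broker
  storage:
    type: persistent-claim
    size: 100Gi
```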
Migrating to Kafka in KRaft mode
To prepare for Streams for Apache Kafka 3.0, migrate to Kafka in KRaft mode.
It is possible to roll back the procedure for KRaft migration, but failure to follow the rollback steps correctly can lead to cluster failure due to metadata inconsistencies. For more information, see Performing a rollback on migration.
KRaft mode limitations
Kafka 3.9 introduces dynamic controller quorums, which allow you to scale controllers in KRaft mode without recreating the cluster. However, migration from static to dynamic controller quorums is not currently supported. This limitation means that existing Kafka clusters using static controller quorums cannot be migrated to use dynamic quorums.
To maintain compatibility with existing KRaft-based deployments, Streams for Apache Kafka on OpenShift continues to use static controller quorums only, even for new installations. As a result, dynamic controller quorums are not yet supported, regardless of whether you are creating a new cluster or migrating or managing an existing one.
Support for dynamic quorums is expected in a future Kafka release.
5.4. Streams for Apache Kafka
5.4.1. Support for automatic rebalancing
You can scale a Kafka cluster by adjusting the number of brokers using the spec.replicas property in the Kafka or KafkaNodePool custom resource used in deployment.
Enable auto-rebalancing to automatically redistribute topic partitions when scaling a cluster up or down. Auto-rebalancing requires a Cruise Control deployment, a rebalancing template for the operation, and auto-rebalance configuration in the Kafka resource that references the template. When enabled, auto-rebalancing rebalances clusters that have been scaled up or down without further intervention, as shown in the sketch after the following list.
- After scaling up, auto-rebalancing redistributes some existing partitions to the newly added brokers.
- Before scaling down, if the brokers to be removed host partitions, the operator triggers auto-rebalancing to move the partitions, freeing the brokers for removal.
For more information, see Triggering auto-rebalances when scaling clusters.
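As an illustrative sketch of how the pieces fit together, the following configuration pairs a KafkaRebalance resource marked as a rebalancing template with autoRebalance entries under spec.cruiseControl in the Kafka resource. The resource names are placeholders, and the goals list is optional:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance-template
  labels:
    strimzi.io/cluster: my-cluster
  annotations:
    strimzi.io/rebalance-template: "true"  # marks this resource as a template, not a one-off rebalance
spec:
  goals:
    - DiskCapacityGoal
    - ReplicaCapacityGoal
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  cruiseControl:
    autoRebalance:
      - mode: add-brokers      # redistribute partitions onto newly added brokers
        template:
          name: my-rebalance-template
      - mode: remove-brokers   # move partitions off brokers before removal
        template:
          name: my-rebalance-template
  kafka: {}  # remaining cluster configuration omitted
```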
5.4.2. Reassigning partitions on JBOD disks
If you are using JBOD storage and have Cruise Control installed with Streams for Apache Kafka, you can now reassign partitions between the JBOD disks used for storage on the same broker. This capability also allows you to remove JBOD disks without data loss.
You configure a KafkaRebalance resource in remove-disks mode and specify a list of broker IDs with corresponding volume IDs for partition reassignment. Cruise Control generates an optimization proposal based on the configuration and reassigns the partitions when approved manually or automatically.
For more information, see Using Cruise Control to reassign partitions on JBOD disks.
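A minimal sketch of the resource described above; the broker and volume IDs are placeholders for your own JBOD layout:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-remove-disks-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: remove-disks
  moveReplicasOffVolumes:
    - brokerId: 0
      volumeIds: [1]       # move partitions off volume 1 on broker 0
    - brokerId: 1
      volumeIds: [1, 2]    # move partitions off volumes 1 and 2 on broker 1
```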
5.4.3. Mechanism to manage connector offsets
A new mechanism allows connector offsets to be managed through KafkaConnector and KafkaMirrorMaker2 resources. It is now possible to list, alter, and reset offsets.
For more information, see Configuring Kafka Connect connectors.
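As a hedged sketch of the mechanism, the following KafkaConnector resource requests a listing of offsets into a ConfigMap, assuming the strimzi.io/connector-offsets annotation together with the listOffsets and alterOffsets properties; the connector and ConfigMap names are placeholders:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
  annotations:
    strimzi.io/connector-offsets: list  # list | alter | reset
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 1
  config:
    file: /opt/kafka/LICENSE
    topic: my-topic
  listOffsets:
    toConfigMap:
      name: my-connector-offsets   # offsets are written to this ConfigMap
  alterOffsets:
    fromConfigMap:
      name: my-connector-offsets   # offsets are read from this ConfigMap when altering
```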
5.4.4. Templates for host and advertisedHost properties
Hostnames and advertised hostnames for individual brokers can be specified using the host and advertisedHost properties. This release introduces support for using variables, such as {nodeId} or {nodePodName}, in the following templates:
- advertisedHostTemplate
- hostTemplate
By using templates, you no longer need to configure each broker individually. Streams for Apache Kafka automatically replaces the template variables with the corresponding values for each broker.
For more information, see Overriding advertised addresses for brokers and Specifying listener types.
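As an illustration, the following listener configuration applies the same template to both properties, assuming an ingress-type listener; the domain names are placeholders:

```yaml
spec:
  kafka:
    listeners:
      - name: external
        port: 9094
        type: ingress
        tls: true
        configuration:
          bootstrap:
            host: bootstrap.mydomain.com
          hostTemplate: broker-{nodeId}.mydomain.com            # replaces per-broker host entries
          advertisedHostTemplate: broker-{nodeId}.mydomain.com  # replaces per-broker advertisedHost overrides
```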
5.4.5. Loading configuration values from environment variables
Environment variables for any container deployed by Streams for Apache Kafka may now be based on values specified in a Secret or ConfigMap. This replaces the requirement to use the ExternalConfiguration schema for Kafka Connect and MirrorMaker 2 containers, which is now deprecated.
Values are referenced in the container configuration using the valueFrom.secretKeyRef or valueFrom.configMapKeyRef properties.
For more information, see Loading configuration values from environment variables.
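A short sketch for a Kafka Connect deployment; the variable, Secret, and ConfigMap names are placeholders:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  template:
    connectContainer:
      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:          # value read from a Secret
              name: aws-creds
              key: access-key
        - name: CONNECT_LOG_LEVEL
          valueFrom:
            configMapKeyRef:       # value read from a ConfigMap
              name: my-connect-settings
              key: log-level
```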
5.4.6. Disabling pod disruption budget generation
Streams for Apache Kafka generates pod disruption budget resources for Kafka nodes and for Kafka Connect, MirrorMaker 2, and Kafka Bridge workers.
If you want to use custom pod disruption budget resources, you can now set the STRIMZI_POD_DISRUPTION_BUDGET_GENERATION environment variable to false in the Cluster Operator configuration.
For more information, see Disabling pod disruption budget generation.
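A minimal excerpt of the Cluster Operator Deployment with the environment variable set:

```yaml
# Deployment of the Cluster Operator (excerpt)
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: STRIMZI_POD_DISRUPTION_BUDGET_GENERATION
              value: "false"   # skip generating PodDisruptionBudget resources
```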
5.4.7. Support for CSI volumes in templates
To support CSI volumes, a new property named csi has been added to the AdditionalVolume schema. This property maps to the Kubernetes API CSIVolumeSource structure, allowing CSI volumes to be defined in container template fields.
For more information, see AdditionalVolume schema reference and Additional volumes.
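A sketch of a csi volume defined in the pod template and mounted into the Kafka container. The Secrets Store CSI driver is used here purely as an example; the driver and attribute values are placeholders, and additional volumes are mounted under /mnt:

```yaml
spec:
  kafka:
    template:
      pod:
        volumes:
          - name: secrets-store
            csi:                                 # maps to the Kubernetes CSIVolumeSource structure
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: my-provider
      kafkaContainer:
        volumeMounts:
          - name: secrets-store
            mountPath: /mnt/secrets-store
```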
5.5. Kafka Bridge
5.5.1. Create topics
Use the new admin/topics endpoint of the Kafka Bridge API to create topics. You can specify the topic name, partition count, and replication factor in the request body.
For more information, see Using the Streams for Apache Kafka Bridge.
5.6. Proxy
Streams for Apache Kafka Proxy is currently a technology preview.
5.6.1. mTLS client authentication
When configuring proxies, you can now use trust properties to configure virtual clusters to use TLS client authentication.
For more information, see Securing connections from clients.
5.7. Console
5.7.1. Console moves to GA
The console (user interface) for Streams for Apache Kafka moves to GA. It is designed to seamlessly integrate with your Streams for Apache Kafka deployment, providing a centralized hub for monitoring and managing Kafka clusters. Deploy the console and connect it to Kafka clusters managed by Streams for Apache Kafka.
Gain insights into each connected cluster through dedicated console pages covering brokers, topics, and consumer groups. View essential information, such as the status of a Kafka cluster, before looking into specific details about brokers, topics, or connected consumer groups.
For more information, see the Streams for Apache Kafka Console guide.
5.7.2. Reset consumer offsets
You can now reset consumer offsets of a specific consumer group from the Consumer Groups page.
For more information, see Resetting consumer offsets.
5.7.3. Manage rebalances
When you configure KafkaRebalance resources to generate optimization proposals on a cluster, you can manage the proposals and any resulting rebalances from the Brokers page.
For more information, see Managing rebalances.
5.7.4. Pause reconciliations
Pause and resume cluster reconciliations from the Cluster overview page. While paused, any changes to the cluster configuration using the Kafka custom resource are ignored until reconciliation is resumed.
For more information, see Pausing reconciliation of clusters.
5.7.5. Support for authorization configuration
The console now supports configuration of authorization rules in the console deployment configuration. Enable secure console connections to Kafka clusters using an OpenID Connect (OIDC) provider, such as Red Hat build of Keycloak. The configuration can be set up for all clusters or at the cluster level.
For more information, see Deploying the console.
Chapter 6. Enhancements
Streams for Apache Kafka 2.9 adds a number of enhancements.
6.1. Kafka 3.9.1 enhancements
Streams for Apache Kafka 2.9.x supports Kafka 3.9.x. Updates and enhancements from Kafka 3.9.1 were introduced in the 2.9.1 patch release and remain in use with 2.9.3.
For an overview of the enhancements introduced with Kafka 3.9.x, refer to the Kafka 3.9.0 and Kafka 3.9.1 Release Notes.
6.2. Streams for Apache Kafka
6.2.1. Configuration mechanism for quotas management
The Strimzi Quotas plugin moves to GA (General Availability). Use the plugin properties to set throughput and storage limits on brokers in your Kafka cluster configuration.
If you have previously used the Strimzi Quotas plugin in releases prior to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the latest .spec.kafka.quotas properties to avoid reconciliation issues when upgrading.
For more information, see Setting limits on brokers using the Kafka Static Quota plugin.
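A sketch of the plugin configuration using the .spec.kafka.quotas properties; the limit values and excluded principal are placeholders:

```yaml
spec:
  kafka:
    quotas:
      type: strimzi
      producerByteRate: 1000000              # per-client produce limit in bytes/second
      consumerByteRate: 1000000              # per-client fetch limit in bytes/second
      minAvailableBytesPerVolume: 500000000  # block message production when free disk space drops below this
      excludedPrincipals:
        - my-special-user                    # principals exempt from the quotas
```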
6.2.2. Change to unmanaged topic reconciliation
When finalizers are enabled (default), the Topic Operator no longer restores them on unmanaged KafkaTopic resources if they are removed. This behavior aligns with paused topics, where finalizers are also not restored.
6.2.3. ContinueReconciliationOnManualRollingUpdateFailure feature gate moves to beta
The technology preview of the ContinueReconciliationOnManualRollingUpdateFailure feature gate moves to beta stage and is enabled by default. If required, ContinueReconciliationOnManualRollingUpdateFailure can be disabled in the feature gates configuration in the Cluster Operator.
6.2.4. Rolling pods once for CA renewal
Pods are now rolled only when the cluster CA key is replaced, not when the clients CA key is replaced, because the clients CA is used solely for trust. Consequently, the restart event reason ClientCaCertKeyReplaced has been removed, and either CaCertRenewed or CaCertHasOldGeneration is now used as the event reason.
Rolling updates for new CA certificate generations now resume from where they left off after an interruption, instead of restarting the process and rolling all pods again.
Chapter 7. Technology Previews
Technology Preview features included with Streams for Apache Kafka 2.9.
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.
7.1. Streams for Apache Kafka Proxy
Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware proxy designed to enhance Kafka-based systems. Through its filter mechanism, it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself.
As part of the technology preview, you can try the Record Encryption filter and Record Validation filter. The Record Encryption filter uses industry-standard cryptographic techniques to apply encryption to Kafka messages, ensuring the confidentiality of data stored in the Kafka cluster. The Record Validation filter validates records sent by a producer. Only records that pass the validation are sent to the broker.
For more information, see the Streams for Apache Kafka Proxy guide.
Chapter 8. Developer Previews
Developer preview features included with Streams for Apache Kafka 2.9.
As a Kafka cluster administrator, you can toggle a subset of features on and off using feature gates in the Cluster Operator deployment configuration. The feature gates available as developer previews are at an alpha level of maturity and disabled by default.
Developer Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Developer Preview features in production environments. Developer Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Developer Preview Support Scope.
8.1. Tiered storage for Kafka brokers
Streams for Apache Kafka now supports tiered storage for Kafka brokers as a developer preview, allowing you to introduce custom remote storage solutions as well as local storage. Due to its current limitations, it is not recommended for production environments.
Remote storage configuration is specified using kafka.tieredStorage properties in the Kafka resource. You specify a custom remote storage manager to manage the tiered storage.
Example custom tiered storage configuration
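The following sketch reflects the configuration described by the callouts below; the remote storage manager class, class path, and bucket setting are placeholders for your own plugin:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    tieredStorage:
      type: custom
      remoteStorageManager:
        className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager
        classPath: /opt/kafka/plugins/tiered-storage-s3/*
        config:
          storage.bucket.name: my-bucket  # (1) prefixed automatically with rsm.config.
    config:
      rlmm.config.remote.log.metadata.topic.replication.factor: 1  # (2) RLMM configuration
```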
1. Configures the custom remote storage manager with the necessary settings. The keys are automatically prefixed with rsm.config. and appended to the Kafka broker configuration.
2. Streams for Apache Kafka uses the TopicBasedRemoteLogMetadataManager for Remote Log Metadata Management (RLMM). Add RLMM configuration using an rlmm.config. prefix.
If you want to use custom tiered storage, you must first add the tiered storage plugin to the Streams for Apache Kafka image by building a custom container image.
Chapter 9. Deprecated features
Deprecated features that were supported in previous releases of Streams for Apache Kafka.
9.1. Streams for Apache Kafka
9.1.1. Schema property deprecations
Several schema properties are deprecated in this release; where a replacement property is available, use it instead of the deprecated property.
See the Streams for Apache Kafka Custom Resource API Reference.
9.1.2. Java 11 deprecation
Support for Java 11 is deprecated from Kafka 3.7.0 and Streams for Apache Kafka 2.7. Java 11 will be unsupported for all Streams for Apache Kafka components, including clients, in release 3.0.
Streams for Apache Kafka supports Java 17. Use Java 17 when developing new applications. Plan to migrate any applications that currently use Java 11 to Java 17.
If you want to continue using Java 11 for the time being, Streams for Apache Kafka 2.5 provides Long Term Support (LTS). For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.
Support for Java 8 was removed in Streams for Apache Kafka 2.4.0. If you are currently using Java 8, plan to migrate to Java 17 in the same way.
9.1.3. Storage overrides
The storage overrides (*.storage.overrides) for configuring per-broker storage are deprecated and will be removed in Streams for Apache Kafka 3.0. If you are using storage overrides, migrate to KafkaNodePool resources and use multiple node pools, each with a different storage class.
For more information, see PersistentClaimStorage schema reference.
9.1.4. Environment variable configuration provider
You can use configuration providers to load configuration data from external sources for all Kafka components, including producers and consumers.
Previously, you could enable the io.strimzi.kafka.EnvVarConfigProvider environment variable configuration provider using the config.providers properties in the spec configuration of a component. However, this provider is now deprecated and will be removed in Streams for Apache Kafka 3.0. Therefore, it is recommended to update your implementation to use Kafka’s own environment variable configuration provider (org.apache.kafka.common.config.provider.EnvVarConfigProvider) to provide configuration properties as environment variables.
Example configuration to enable the environment variable configuration provider
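A sketch for a Kafka Connect deployment; the same properties apply in the spec.config of other components:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  config:
    # enable Kafka's own environment variable configuration provider
    config.providers: env
    config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider
```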
9.1.5. Kafka MirrorMaker 2 identity replication policy
Identity replication policy is a feature used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the source cluster’s name, the topic retains its original name. This setting is particularly useful for active/passive backups and data migration scenarios.
To implement an identity replication policy, you must specify a replication policy class (replication.policy.class) in the MirrorMaker 2 configuration. Previously, you could specify the io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy class included with the Streams for Apache Kafka mirror-maker-2-extensions component. However, this component is now deprecated and will be removed in Streams for Apache Kafka 3.0. Therefore, it is recommended to update your implementation to use Kafka’s own replication policy class (org.apache.kafka.connect.mirror.IdentityReplicationPolicy).
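A sketch of the updated configuration in a KafkaMirrorMaker2 resource; the cluster aliases are placeholders, and the same property can also be set on the heartbeat and checkpoint connectors:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  mirrors:
    - sourceCluster: my-cluster-source
      targetCluster: my-cluster-target
      sourceConnector:
        config:
          # Kafka's own class; replaces the deprecated io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```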
For more information, see Configuring Kafka MirrorMaker 2.
9.1.6. Kafka MirrorMaker 1
Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 was deprecated in Kafka 3.0 and will be removed in Kafka 4.0.0. As a result, Streams for Apache Kafka 3.0 will also remove support for MirrorMaker 1, including the KafkaMirrorMaker custom resource. MirrorMaker 2 will be the only version available. MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters. To avoid disruptions, transition to MirrorMaker 2 before support ends.
If you're using MirrorMaker 1, you can replicate its functionality in MirrorMaker 2 by using the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy class. By default, MirrorMaker 2 renames topics replicated to a target cluster, but IdentityReplicationPolicy preserves the original topic names, enabling the same active/passive unidirectional replication as MirrorMaker 1.
For more information, see Configuring Kafka MirrorMaker 2.
9.2. Kafka Bridge
9.2.1. OpenAPI v2 (Swagger)
Support for OpenAPI v2 is now deprecated and will be removed in Streams for Apache Kafka 3.0. OpenAPI v3 is now supported. Plan to move to using OpenAPI v3.
During the transition, the /openapi endpoint continues to return the OpenAPI v2 specification, which is also available through an additional /openapi/v2 endpoint. A new /openapi/v3 endpoint returns the OpenAPI v3 specification.
9.2.2. Kafka Bridge span attributes
The following Kafka Bridge span attributes are deprecated with replacements shown where applicable:
- http.method replaced by http.request.method
- http.url replaced by url.scheme, url.path, and url.query
- messaging.destination replaced by messaging.destination.name
- http.status_code replaced by http.response.status_code
- messaging.destination.kind=topic without replacement
Kafka Bridge uses OpenTelemetry for distributed tracing. The changes are in line with changes to the OpenTelemetry semantic conventions. The attributes will be removed in a future release of the Kafka Bridge.
Chapter 10. Fixed issues
The issues fixed in Streams for Apache Kafka 2.9 on OpenShift.
10.1. Fixed issues for Streams for Apache Kafka 2.9.3
Streams for Apache Kafka 2.9.3 (Long Term Support) is the latest patch release. This release continues to use Kafka 3.9.1, the version introduced with 2.9.1.
For details of the issues fixed in Kafka 3.9.1, refer to the Kafka 3.9.1 Release Notes.
For details of the issues resolved in Streams for Apache Kafka 2.9.3, see Streams for Apache Kafka 2.9.x Resolved Issues.
10.2. Fixed issues for Streams for Apache Kafka 2.9.2
Streams for Apache Kafka 2.9.2 (Long Term Support) was the previous patch release. It retained Kafka 3.9.1, introduced with 2.9.1.
For details of the issues resolved in Streams for Apache Kafka 2.9.2, see Streams for Apache Kafka 2.9.x Resolved Issues.
10.3. Fixed issues for Streams for Apache Kafka 2.9.1
Streams for Apache Kafka 2.9.1 (Long Term Support) introduced Kafka 3.9.1 as the underlying Kafka version, alongside other resolved issues.
For details of the issues resolved in Streams for Apache Kafka 2.9.1, see Streams for Apache Kafka 2.9.x Resolved Issues.
10.4. Fixed issues for Streams for Apache Kafka 2.9.0
For details of the issues fixed in Kafka 3.9.0, refer to the Kafka 3.9.0 Release Notes.
| Issue Number | Description |
|---|---|
|  | Make it possible to use Cruise Control to move all data between two JBOD disks |
|  | [KAFKA] Improve MirrorMaker logging in case of authorization errors |
|  | [BRIDGE] path label in metrics can contain very different values and that makes it hard to work with the metrics |
|  | Do not generate empty required arrays in OneOf definition |
|  | The namespace.mapper configuration option of MongoDB Sink connector is reported as forbidden |
|  | Fix port handling in the Kafka Agent |
|  | Improve handling of custom Cruise Control topic configurations |
|  | Improve handling of invalid topic configurations |
|  | The KafkaTopic.status.topicId is never updated |
|  | Use init container for Kafka nodes only when needed |
|  | Improve documentation, logging, and automation of certificate renewal activities on OpenShift |
|  | Remove-brokers rebalancing seems to get stuck by race condition |
|  | CA cert annotations aren't updated during CaReconciler rolling update |
|  | Findings in DAST scan results for 2.8.0 |
|  | Support for mounting CSI volumes |
10.5. Security updates
Check the latest information about Streams for Apache Kafka security updates in the Red Hat Product Advisories portal.
10.6. Errata
Check the latest security and product enhancement advisories for Streams for Apache Kafka.
Chapter 11. Known issues
This section lists the known issues for Streams for Apache Kafka 2.9 on OpenShift.
11.1. Multi-version upgrades between LTS versions using OLM
Currently, multi-version upgrades between Long Term Support (LTS) versions are not supported through the Operator Lifecycle Manager (OLM) when using the OperatorHub LTS channel.
For example, you cannot directly upgrade from version 2.2 LTS to version 2.9 LTS. Instead, you must perform incremental upgrades, stepping through each intermediate minor version to reach version 2.9.
11.2. Cruise Control CPU utilization estimation
Cruise Control for Streams for Apache Kafka has a known issue that relates to the calculation of CPU utilization estimation. CPU utilization is calculated as a percentage of the defined capacity of a broker pod. The issue occurs when running Kafka brokers across nodes with varying numbers of CPU cores. For example, node1 might have 2 CPU cores and node2 might have 4 CPU cores. In this situation, Cruise Control can underestimate or overestimate the CPU load of brokers. The issue can prevent cluster rebalances when a pod is under heavy load.
There are two workarounds for this issue.
Workaround one: Equal CPU requests and limits
You can set CPU requests equal to CPU limits in Kafka.spec.kafka.resources. That way, all CPU resources are reserved upfront and are always available. This configuration allows Cruise Control to properly evaluate the CPU utilization when preparing the rebalance proposals based on CPU goals.
Workaround two: Exclude CPU goals
You can exclude CPU goals from the hard and default goals specified in the Cruise Control configuration.
Example Cruise Control configuration without CPU goals
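A sketch that keeps capacity and distribution goals while omitting the CPU goals (CpuCapacityGoal and CpuUsageDistributionGoal); trim or extend the goal lists to match your own policy:

```yaml
spec:
  cruiseControl:
    config:
      # CPU goals excluded from both lists
      hard.goals: >
        com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal
      default.goals: >
        com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal
```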
For more information, see Insufficient CPU capacity.
11.3. JMX authentication when running in FIPS mode
When running Streams for Apache Kafka in FIPS mode with JMX authentication enabled, clients may fail authentication. To work around this issue, do not enable JMX authentication while running in FIPS mode. We are investigating the issue and working to resolve it in a future release.
Chapter 12. Supported Configurations
Supported configurations for the Streams for Apache Kafka 2.9 release.
12.1. Supported platforms
The following platforms are tested for Streams for Apache Kafka 2.9 running with Kafka on the version of OpenShift stated.
| Platform | Version | Architecture |
|---|---|---|
| Red Hat OpenShift Container Platform | 4.14 and later | x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) |
| Red Hat OpenShift Container Platform disconnected environment | Latest | x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) |
| Red Hat OpenShift Dedicated | Latest | x86_64 |
| Microsoft Azure Red Hat OpenShift (ARO) | Latest | x86_64 |
| Red Hat OpenShift Service on AWS (ROSA) | Latest | x86_64 |
| Red Hat build of MicroShift | Latest | x86_64 |
Unsupported features
- Red Hat MicroShift does not support Kafka Connect’s build configuration for building container images with connectors.
- IBM Z and IBM® LinuxONE s390x architecture does not support Streams for Apache Kafka OPA integration.
FIPS compliance
Streams for Apache Kafka is designed for FIPS. Streams for Apache Kafka container images are based on RHEL 9.2, which contains cryptographic modules submitted to NIST for approval.
To check which versions of RHEL are approved by the National Institute of Standards and Technology (NIST), see the Cryptographic Module Validation Program on the NIST website.
Red Hat OpenShift Container Platform is designed for FIPS. When running on RHEL or RHEL CoreOS booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries submitted to NIST for FIPS validation only on the x86_64, ppc64le (IBM Power), s390x (IBM Z), and aarch64 (64-bit ARM) architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards.
12.2. Supported clients
Only client libraries built by Red Hat are supported for Streams for Apache Kafka. Currently, Streams for Apache Kafka only provides a Java client library, which is tested and supported on kafka-clients-3.8.0.redhat-00007 and newer. Clients are supported for use with Streams for Apache Kafka 2.9 on the following operating systems and architectures:
| Operating System | Architecture | JVM |
|---|---|---|
| RHEL and UBI 8, 9, and 10 | x86, amd64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) | Java 11 (deprecated) and Java 17 |
Clients are tested with OpenJDK 11 and 17, though Java 11 is deprecated from Streams for Apache Kafka 2.7 and will be removed in version 3.0. The IBM JDK is supported but not regularly tested against during each release. Oracle JDK 11 is not supported.
Support for Red Hat Universal Base Image (UBI) versions corresponds to the same RHEL version.
12.3. Supported Apache Kafka ecosystem
In Streams for Apache Kafka, only the following components released directly from the Apache Software Foundation are supported:
- Apache Kafka Broker
- Apache Kafka Connect
- Apache MirrorMaker
- Apache MirrorMaker 2
- Apache Kafka Java Producer, Consumer, Management clients, and Kafka Streams
- Apache ZooKeeper
Apache ZooKeeper is supported solely as an implementation detail of Apache Kafka and should not be modified for other purposes.
12.4. Additional supported features
- Kafka Bridge
- Drain Cleaner
- Cruise Control
- Distributed Tracing
- Streams for Apache Kafka Console
- Streams for Apache Kafka Proxy (technology preview)
Streams for Apache Kafka Proxy is not production-ready. For the technology preview, it has been tested on x86 and amd64 only.
See also, Chapter 14, Supported integration with Red Hat products.
12.5. Console supported browsers
Streams for Apache Kafka Console is supported on the most recent stable releases of Firefox, Edge, Chrome, and WebKit-based browsers.
12.6. Subscription limits and core usage
Cores used by Red Hat components and product operators do not count against subscription limits. Additionally, cores or vCPUs allocated to ZooKeeper nodes are excluded from subscription compliance calculations and do not count towards a subscription.
12.7. Storage requirements
Streams for Apache Kafka has been tested with block storage and is compatible with the XFS and ext4 file systems, which are commonly used with Kafka. File-based storage options, such as NFS, are not tested or supported for primary broker storage and may cause instability or degraded performance.
Chapter 13. Component details
The following table shows the component versions for each Streams for Apache Kafka release.
Components like the operators, console, and proxy only apply to using Streams for Apache Kafka on OpenShift.
| Streams for Apache Kafka | Apache Kafka | Strimzi Operators | Kafka Bridge | OAuth | Cruise Control | Console | Proxy |
|---|---|---|---|---|---|---|---|
| 2.9.3 | 3.9.1 | 0.45.1 | 0.31.2 | 0.15.1 | 2.5.142 | 0.6.9 | 0.9.0 |
| 2.9.2 | 3.9.1 | 0.45.1 | 0.31.2 | 0.15.1 | 2.5.142 | 0.6.7 | 0.9.0 |
| 2.9.1 | 3.9.1 | 0.45.0 | 0.31.1 | 0.15.0 | 2.5.142 | 0.6.6 | 0.9.0 |
| 2.9.0 | 3.9.0 | 0.45.0 | 0.31.1 | 0.15.0 | 2.5.141 | 0.6.3 | 0.9.0 |
| 2.8.0 | 3.8.0 | 0.43.0 | 0.30.0 | 0.15.0 | 2.5.138 | 0.1 | 0.8.0 |
| 2.7.0 | 3.7.0 | 0.40.0 | 0.28.0 | 0.15.0 | 2.5.137 | 0.1 | 0.5.1 |
| 2.6.0 | 3.6.0 | 0.38.0 | 0.27.0 | 0.14.0 | 2.5.128 | - | - |
| 2.5.2 | 3.5.0 (+3.5.2) | 0.36.0 | 0.26.0 | 0.13.0 | 2.5.123 | - | - |
| 2.5.1 | 3.5.0 | 0.36.0 | 0.26.0 | 0.13.0 | 2.5.123 | - | - |
| 2.5.0 | 3.5.0 | 0.36.0 | 0.26.0 | 0.13.0 | 2.5.123 | - | - |
| 2.4.0 | 3.4.0 | 0.34.0 | 0.25.0 | 0.12.0 | 2.5.112 | - | - |
| 2.3.0 | 3.3.1 | 0.32.0 | 0.22.3 | 0.11.0 | 2.5.103 | - | - |
| 2.2.2 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.1 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.0 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.89 | - | - |
| 2.1.0 | 3.1.0 | 0.28.0 | 0.21.4 | 0.10.0 | 2.5.82 | - | - |
| 2.0.1 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 2.0.0 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 1.8.4 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.8.0 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.7.0 | 2.7.0 | 0.22.1 | 0.19.0 | 0.7.1 | 2.5.37 | - | - |
| 1.6.7 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.6 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.5 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.4 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.0 | 2.6.0 | 0.20.0 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.5.0 | 2.5.0 | 0.18.0 | 0.16.0 | 0.5.0 | - | - | - |
| 1.4.1 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.4.0 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.3.0 | 2.3.0 | 0.14.0 | 0.14.0 | 0.1.0 | - | - | - |
| 1.2.0 | 2.2.1 | 0.12.1 | 0.12.2 | - | - | - | - |
| 1.1.1 | 2.1.1 | 0.11.4 | - | - | - | - | - |
| 1.1.0 | 2.1.1 | 0.11.1 | - | - | - | - | - |
| 1.0 | 2.0.0 | 0.8.1 | - | - | - | - | - |
Chapter 14. Supported integration with Red Hat products
Streams for Apache Kafka 2.9 supports integration with the following Red Hat products:
- Red Hat build of Keycloak
- Provides OAuth 2.0 authentication and OAuth 2.0 authorization.
- Red Hat 3scale API Management
- Secures the Kafka Bridge and provides additional API management features.
- Red Hat build of Debezium
- Monitors databases and creates event streams.
- Red Hat build of Apicurio Registry
- Provides a centralized store of service schemas for data streaming.
- Red Hat build of Apache Camel K
- Provides a lightweight integration framework.
For information on the functionality these products can introduce to your Streams for Apache Kafka deployment, refer to the product documentation.
14.1. Red Hat build of Keycloak
Streams for Apache Kafka supports OAuth 2.0 token-based authorization through Red Hat build of Keycloak Authorization Services, providing centralized management of security policies and permissions.
Red Hat build of Keycloak replaces Red Hat Single Sign-On, which is now in maintenance support. We are working on updating our documentation, resources, and media to reflect this transition. In the interim, content that describes using Single Sign-On in the Streams for Apache Kafka documentation also applies to using the Red Hat build of Keycloak.
14.2. Red Hat 3scale API Management
If you deployed the Kafka Bridge on OpenShift Container Platform, you can use it with 3scale. 3scale API Management can secure the Kafka Bridge with TLS, and provide authentication and authorization. Integration with 3scale also means that additional features like metrics, rate limiting and billing are available.
For information on deploying 3scale, see Using 3scale API Management with the Streams for Apache Kafka Bridge.
14.3. Red Hat build of Debezium for change data capture
The Red Hat build of Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate the Red Hat build of Debezium with Streams for Apache Kafka. Following a deployment of Streams for Apache Kafka, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to Streams for Apache Kafka on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
For more information on deploying Debezium with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Debezium.
14.4. Red Hat build of Apicurio Registry
You can use the Red Hat build of Apicurio Registry as a centralized store of service schemas for data streaming. Red Hat build of Apicurio Registry provides schema registry support for schema technologies such as:
- Avro
- Protobuf
- JSON schema
Apicurio Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.
Using Apicurio Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.
For example, the schemas to serialize and deserialize messages can be stored in the registry, which are then referenced from the applications that use them to ensure that the messages that they send and receive are compatible with those schemas.
Kafka client applications can push or pull their schemas from Apicurio Registry at runtime.
For more information on using the Red Hat build of Apicurio Registry with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Apicurio Registry.
14.5. Red Hat build of Apache Camel K
The Red Hat build of Apache Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K supports serverless integration, which allows for development and deployment of integration tasks without the need to manage the underlying infrastructure. You can use Camel K to build and integrate event-driven applications with your Streams for Apache Kafka environment. For scenarios requiring real-time data synchronization between different systems or databases, Camel K can be used to capture and transform change events and send them to Streams for Apache Kafka for distribution to other systems.
For more information on using Camel K with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Apache Camel K.
Revised on 2025-10-23 15:00:06 UTC