Release Notes for Streams for Apache Kafka 2.8 on OpenShift
Highlights of what's new and what's changed with this release of Streams for Apache Kafka on OpenShift Container Platform
Chapter 1. Notification of name change to Streams for Apache Kafka
AMQ Streams is being renamed Streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat’s product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name.
Chapter 2. Upgrading from a Streams version before 1.7
The v1beta2 API version for all custom resources was introduced with Streams for Apache Kafka 1.7. For Streams for Apache Kafka 1.8, v1alpha1 and v1beta1 API versions were removed from all Streams for Apache Kafka custom resources apart from KafkaTopic and KafkaUser.
Upgrade of the custom resources to v1beta2 prepares Streams for Apache Kafka for a move to Kubernetes CRD v1, which is required for Kubernetes 1.22.
If you are upgrading from a Streams for Apache Kafka version prior to version 1.7:
- Upgrade to Streams for Apache Kafka 1.7
- Convert the custom resources to v1beta2
- Upgrade to Streams for Apache Kafka 1.8
You must upgrade your custom resources to use API version v1beta2 before upgrading to Streams for Apache Kafka version 2.8.
2.1. Upgrading custom resources to v1beta2
To support the upgrade of custom resources to v1beta2, Streams for Apache Kafka provides an API conversion tool, which you can download from the Streams for Apache Kafka 1.8 software downloads page.
You perform the custom resource upgrades in two steps.
Step one: Convert the format of custom resources
Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:
- Converting the YAML files that describe the configuration for Streams for Apache Kafka custom resources
- Converting Streams for Apache Kafka custom resources directly in the cluster
Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.
Step two: Upgrade CRDs to v1beta2
Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.
For more information, see Upgrading from a Streams for Apache Kafka version earlier than 1.7.
Chapter 3. Features
Streams for Apache Kafka 2.8 introduces the features described in this section.
Streams for Apache Kafka 2.8 on OpenShift is based on Apache Kafka 3.8.0 and Strimzi 0.43.x.
To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.
3.1. Streams for Apache Kafka
3.1.1. OpenShift Container Platform support
Streams for Apache Kafka 2.8 is supported on OpenShift Container Platform 4.12 and 4.14 to 4.17.
For more information, see Chapter 10, Supported Configurations.
3.1.2. Kafka 3.8.0 support
Streams for Apache Kafka now supports and uses Apache Kafka version 3.8.0. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to Streams for Apache Kafka version 2.8 before you can upgrade brokers and client applications to Kafka 3.8.0. For upgrade instructions, see Upgrading Streams for Apache Kafka.
Refer to the Kafka 3.8.0 Release Notes for additional information.
Kafka 3.7.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.8.
Kafka 3.8.0 provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol.
3.1.3. KRaft support moves to GA
The UseKRaft feature gate moves to GA (General Availability) and is now permanently enabled. To deploy Kafka clusters in KRaft (Kafka Raft metadata) mode without ZooKeeper, the Kafka custom resource must include the annotation strimzi.io/kraft="enabled", and you must use KafkaNodePool resources to manage the configuration of groups of nodes.
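For illustration, a minimal KRaft-based deployment might combine a Kafka resource carrying the annotations with a dual-role node pool. This is a sketch only; the cluster name, node pool name, listener, replica count, and storage sizes are illustrative and should be adapted to your environment.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: "enabled"        # run in KRaft mode (no ZooKeeper)
    strimzi.io/node-pools: "enabled"   # node configuration comes from KafkaNodePool resources
spec:
  kafka:
    version: 3.8.0
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - controller
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
```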
For more information, see Deploying a Kafka cluster in KRaft mode and Configuring Kafka in KRaft mode.
If you are using ZooKeeper for metadata management in your Kafka cluster, you can migrate to using Kafka in KRaft mode. Once KRaft mode is enabled, you cannot switch back to ZooKeeper. For more information, see Migrating to KRaft mode.
KRaft limitations
The following Kafka features are currently not supported in KRaft:
- Scaling of KRaft controller-only nodes up or down
3.1.4. KafkaNodePools feature gate permanently enabled
The KafkaNodePools feature gate moves to GA (General Availability) and is now permanently enabled. The feature gate enables the configuration of different pools of Apache Kafka nodes through the KafkaNodePool custom resource. To use the KafkaNodePool resources, you still need to use the strimzi.io/node-pools: enabled annotation on the Kafka custom resources.
For more information, see Configuring node pools.
3.1.5. UnidirectionalTopicOperator feature gate permanently enabled
The UnidirectionalTopicOperator feature gate moves to GA (General Availability) and is now permanently enabled. The feature gate introduces KRaft-compatible unidirectional topic management, enabling the creation of Kafka topics using the KafkaTopic resource. These topics are then managed by the Topic Operator.
The bidirectional Topic Operator has been removed in this release and is no longer available. If you are upgrading from a version of Streams for Apache Kafka that uses the bidirectional Topic Operator, some cleanup tasks are required. For more information, see Upgrading from a Streams for Apache Kafka version using the Bidirectional Topic Operator.
For more information, see Using the Topic Operator.
3.1.6. New configuration mechanism for quotas management
The Strimzi Quotas plugin is currently a technology preview.
A new configuration mechanism supports quotas management. Configure a Kafka resource to enable the Strimzi Quotas plugin (strimzi) or Kafka’s built-in quotas management plugin (kafka).
- The strimzi plugin provides storage utilization quotas and dynamic distribution of throughput limits.
- The kafka plugin applies throughput limits on a per-user, per-broker basis and includes additional CPU and operation rate limits.
The Strimzi Quotas plugin is now configured using .spec.kafka.quotas properties. Any configuration of the plugin inside .spec.kafka.config, as used in previous releases, is ignored and should be removed.
If you have previously configured the Strimzi Quotas plugin and are upgrading to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the new .spec.kafka.quotas properties to avoid reconciliation issues.
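A minimal sketch of the new configuration follows; the limit values and excluded principal are illustrative.

```yaml
spec:
  kafka:
    quotas:
      type: strimzi
      producerByteRate: 1000000                   # per-client produce throughput limit (bytes/second)
      consumerByteRate: 1000000                   # per-client fetch throughput limit (bytes/second)
      minAvailableBytesPerVolume: 100000000000    # storage limit: throttle producers when free space drops below this
      excludedPrincipals:
        - my-internal-user                        # principals not subject to the quotas
```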
For more information, see Setting throughput and storage limits on brokers.
3.1.7. API users for Cruise Control
With the necessary permissions, you can now create REST API users to safely access a secured Cruise Control REST API directly. Standard Cruise Control USER and VIEWER roles are supported.
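A minimal sketch of the configuration, assuming an illustrative secret that holds the API user definitions in the Jetty hashLoginService format:

```yaml
spec:
  cruiseControl:
    apiUsers:
      type: hashLoginService
      valueFrom:
        secretKeyRef:
          name: my-cc-api-users-secret   # illustrative secret with entries such as: username: password,USER
          key: cruise-control-auth.txt   # illustrative key within the secret
```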
For more information, see API users.
3.1.8. Topic replication factor modification
It’s now possible to change the replication factor of topics by updating the replicas property value in a KafkaTopic resource managed by the Topic Operator. The Topic Operator uses Cruise Control to make the necessary changes, so Cruise Control must be deployed with Streams for Apache Kafka.
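For illustration, with Cruise Control deployed, you change the replication factor by updating replicas in the topic’s resource; the names and values shown here are illustrative.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 6
  replicas: 3   # changed from 2; the Topic Operator delegates the change to Cruise Control
```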
For more information, see Using Cruise Control to modify topic replication factor.
3.2. Proxy
Streams for Apache Kafka Proxy is currently a technology preview.
3.2.1. Record Validation filter
The Record Validation filter validates records sent by a producer. Only records that pass the validation are sent to the broker. This filter can be used to prevent poison messages—such as those containing corrupted data or invalid formats—from entering the Kafka system, which may otherwise lead to consumer failure.
For more information, see Record Validation filter.
3.2.2. AWS KMS integration for the Record Encryption filter
The Record Encryption filter now supports integration with AWS Key Management Service (AWS KMS) as a Key Management Service (KMS). You can use AWS KMS to create encryption keys and give them aliases through which the filter references them.
For more information, see Record Encryption filter.
3.3. Console
Streams for Apache Kafka Console is currently a technology preview.
3.3.1. Console operator
The new operator simplifies and streamlines the process of deploying the console using the Operator Lifecycle Manager (OLM).
For more information, see Deploying and connecting console to a Kafka cluster.
3.3.2. Cluster authentication
This release introduces per-cluster authentication support. You can now log in with credentials when accessing each Kafka cluster through the console.
For more information, see Logging into a Kafka cluster.
Chapter 4. Enhancements
Streams for Apache Kafka 2.8 adds a number of enhancements.
4.1. Kafka 3.8.0 enhancements
For an overview of the enhancements introduced with Kafka 3.8.0, refer to the Kafka 3.8.0 Release Notes.
4.2. Streams for Apache Kafka
4.2.1. KRaft: Support for JBOD storage
JBOD storage is now supported in KRaft mode. Use the kraftMetadata property in the storage configuration of a KafkaNodePool resource to specify the JBOD volume that stores the KRaft metadata log. By default, the log is stored on the volume with the lowest ID.
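A minimal sketch of a node pool storage configuration, with illustrative volume sizes:

```yaml
spec:
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        kraftMetadata: shared   # this volume also stores the KRaft metadata log
        deleteClaim: false
      - id: 1
        type: persistent-claim
        size: 1Ti
        deleteClaim: false
```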
For more information, see Configuring the storage volume used to store the KRaft metadata log
4.2.2. KRaft: Unregistering KRaft nodes after scale down
When nodes are removed from a cluster, they are now deregistered so they are no longer tracked. The .status.nodeIds property in the Kafka custom resource stores a full list of node IDs, which is used to determine which nodes were removed and deregister them.
This is a temporary fix that will be removed when Kafka KIP-1073 is implemented for unregistering nodes.
4.2.3. OAuth 2.0: New JWT validation and client authentication properties
Additional OAuth configuration options have been added for OAuth 2.0 authentication on the listener and the client.
- On the listener, serverBearerTokenLocation and userNamePrefix have been added.
- On the client, accessTokenLocation, clientAssertion, clientAssertionLocation, clientAssertionType, and oauth.sasl.extension have been added.
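For illustration, a listener configuration using the new listener-side properties might look like the following sketch; the issuer URI, JWKS endpoint, prefix value, and token file path are placeholders.

```yaml
listeners:
  - name: external
    port: 9093
    type: internal
    tls: true
    authentication:
      type: oauth
      validIssuerUri: https://<auth-server-address>/realms/my-realm
      jwksEndpointUri: https://<auth-server-address>/realms/my-realm/protocol/openid-connect/certs
      userNamePrefix: oauth-                                          # prefix applied to user names extracted from tokens
      serverBearerTokenLocation: /var/run/secrets/kafka/bearer-token  # path to a file containing a bearer token
```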
For more information, see Configuring OAuth 2.0 authentication on listeners and Setting up OAuth 2.0 on Kafka components.
4.2.4. Add additional volumes to components
Streams for Apache Kafka now supports specifying additional volumes and volume mounts for Kafka components, the User Operator, and the Topic Operator. You can configure volumes in the pod template (template.pod) and define volume mounts in the container template (for example, template.kafkaContainer) within the component’s resource. All additional mounted paths are located inside /mnt to ensure compatibility with future Kafka and Streams for Apache Kafka updates.
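A minimal sketch for a Kafka cluster, assuming an illustrative Secret-backed volume:

```yaml
spec:
  kafka:
    template:
      pod:
        volumes:
          - name: extra-config             # illustrative volume backed by a Secret
            secret:
              secretName: my-extra-secret
      kafkaContainer:
        volumeMounts:
          - name: extra-config
            mountPath: /mnt/extra-config   # additional mount paths must be located under /mnt
```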
For more information, see Additional volumes.
4.2.5. Support for trusted certificate filename patterns
Using the new pattern property, it’s now possible to specify trusted certificates in resource configuration by filename pattern rather than by exact certificate name. For example, you can specify pattern: "*.crt" instead of specific certificate names when configuring trusted certificates. This means that the related custom resource does not need to be updated if the certificate file name changes.
You can add this configuration to the Kafka Connect, Kafka MirrorMaker, and Kafka Bridge components for TLS connections to the Kafka cluster. You can also use the pattern property in the configuration for oauth, keycloak, and opa authentication and authorization types that integrate with authorization servers.
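For example, in the TLS configuration of a Kafka Connect resource (the secret name is illustrative):

```yaml
spec:
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        pattern: "*.crt"   # trust every certificate file in the secret that matches the pattern
```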
For more information, see CertSecretSource schema reference.
4.2.6. Published addresses on listeners
You can now configure external Kafka listeners with the publishNotReadyAddresses property to consider service endpoints as ready even if the pods are not.
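For example, a listener configuration might look like this sketch; the listener name, port, and type are illustrative.

```yaml
listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      publishNotReadyAddresses: true   # expose service endpoints before the pods report ready
```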
For more information, see GenericKafkaListenerConfiguration schema properties.
4.2.7. Support for external IP addresses on node port listeners
Support for specifying external IP addresses is now available when configuring node ports. Use the externalIPs property to associate external IP addresses with Kafka bootstrap and node port services. These addresses are used by clients external to the Kubernetes cluster to access the Kafka brokers.
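A minimal sketch, using illustrative example IP addresses:

```yaml
listeners:
  - name: external
    port: 9094
    type: nodeport
    tls: false
    configuration:
      bootstrap:
        externalIPs:
          - 203.0.113.10        # illustrative external IP for the bootstrap service
      brokers:
        - broker: 0
          externalIPs:
            - 203.0.113.11      # illustrative external IP for broker 0
```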
For more information, see GenericKafkaListenerConfigurationBootstrap schema properties and GenericKafkaListenerConfigurationBroker schema properties.
4.2.8. Custom SASL configuration for standalone Topic Operator
If you need to configure custom SASL authentication, you can now define the necessary authentication properties using the STRIMZI_SASL_CUSTOM_CONFIG_JSON environment variable for the standalone Topic Operator. For example, this configuration may be used for accessing a Kafka cluster in a cloud provider with a custom login module.
For more information, see Deploying the standalone Topic Operator.
4.2.9. Expanded operator support for feature gates
Supported feature gates are now applicable to all Streams for Apache Kafka operators. While a particular feature gate might be used by one operator and ignored by the others, it can still be configured in all operators. When the User Operator and Topic Operator are deployed within the context of the Kafka custom resource, the Cluster Operator automatically propagates the feature gates configuration to them. When the User Operator and Topic Operator are deployed standalone, without a Cluster Operator available to configure the feature gates, they must be configured directly within their deployments.
4.2.10. MirrorMaker 2 target cluster check
A warning is now triggered if the connectCluster configuration for a KafkaMirrorMaker2 resource does not specify the target Kafka cluster.
4.2.11. Alerts for failed connectors and connector tasks
New alerts for failing connectors and tasks have been added to the metrics examples (prometheus-rules.yaml).
4.2.12. Metrics for certificate expiration
Metrics are now available for monitoring certificate expiration. The example Grafana dashboard for operators (strimzi-operators.json) shows when certificates expire for each cluster.
4.3. Kafka Bridge
4.3.1. Support for OpenAPI v3
Kafka Bridge now supports OpenAPI v3. Support for OpenAPI v2 is now deprecated.
4.3.2. New support for message timestamps
Producers can now specify a timestamp explicitly in ProducerRecord objects. A timestamp in the ConsumerRecord can also be read in a request response.
- Set the timestamp on a message sent using the send API.
- Get the timestamp on receiving a message using the poll API.
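For illustration, a producer request to the Bridge send endpoint (POST /topics/&lt;topic-name&gt;) might include a timestamp in milliseconds since the epoch; the key, value, and timestamp shown here are placeholders.

```json
{
  "records": [
    {
      "key": "order-123",
      "value": "created",
      "timestamp": 1718000000000
    }
  ]
}
```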
For more information, see ProducerRecord and ConsumerRecord.
4.3.3. JSON arrays for record keys and values
The json embedded data format for Kafka messages now supports JSON arrays for record keys and values in the OpenAPI definition.
Chapter 5. Technology Previews
Technology Preview features included with Streams for Apache Kafka 2.8.
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.
5.1. Continue reconciliation when manual rolling update fails
When enabled, the ContinueReconciliationOnManualRollingUpdateFailure feature gate allows the Cluster Operator to continue a reconciliation if the manual rolling update of the operands fails. Continuing the reconciliation after a manual rolling update failure allows the operator to recover from various situations that might prevent the update from succeeding.
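Feature gates are enabled through the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator deployment; a minimal sketch:

```yaml
env:
  - name: STRIMZI_FEATURE_GATES
    value: +ContinueReconciliationOnManualRollingUpdateFailure   # "+" enables the feature gate
```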
For more information, see ContinueReconciliationOnManualRollingUpdateFailure feature gate.
5.2. Streams for Apache Kafka Console
A console (user interface) for Streams for Apache Kafka is now available as a technology preview. The Streams for Apache Kafka Console is designed to seamlessly integrate with your Streams for Apache Kafka deployment, providing a centralized hub for monitoring and managing Kafka clusters. Deploy the console and connect it to your Kafka clusters managed by Streams for Apache Kafka.
Gain insights into each connected cluster through dedicated pages covering brokers, topics, and consumer groups. View essential information, such as the status of a Kafka cluster, before looking into specific information about brokers, topics, or connected consumer groups.
For more information, see the Streams for Apache Kafka Console guide.
5.3. Streams for Apache Kafka Proxy
Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware proxy designed to enhance Kafka-based systems. Through its filter mechanism it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself.
As part of the technology preview, you can try the Record Encryption filter and Record Validation filter. The Record Encryption filter uses industry-standard cryptographic techniques to apply encryption to Kafka messages, ensuring the confidentiality of data stored in the Kafka Cluster. The Record Validation filter validates records sent by a producer. Only records that pass the validation are sent to the broker.
For more information, see the Streams for Apache Kafka Proxy guide.
5.4. Strimzi Quotas plugin configuration
Use the technology preview of the Strimzi Quotas plugin to set throughput and storage limits on brokers in your Kafka cluster.
If you have previously configured the Strimzi Quotas plugin and are upgrading to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the new .spec.kafka.quotas properties to avoid reconciliation issues.
See Setting limits on brokers using the Kafka Static Quota plugin.
Chapter 6. Developer Previews
Developer preview features included with Streams for Apache Kafka 2.8.
As a Kafka cluster administrator, you can toggle a subset of features on and off using feature gates in the Cluster Operator deployment configuration. The feature gates available as developer previews are at an alpha level of maturity and disabled by default.
Developer Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Developer Preview features in production environments. Developer Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Developer Preview Support Scope.
6.1. Tiered storage for Kafka brokers
Streams for Apache Kafka now supports tiered storage for Kafka brokers as a developer preview, allowing you to introduce custom remote storage solutions as well as local storage. Due to its current limitations, it is not recommended for production environments.
Remote storage configuration is specified using kafka.tieredStorage properties in the Kafka resource. You specify a custom remote storage manager to manage the tiered storage.
Example custom tiered storage configuration
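A minimal sketch follows; the remote storage manager class, plugin path, and bucket setting are illustrative and depend on the tiered storage plugin you add to a custom image.

```yaml
spec:
  kafka:
    tieredStorage:
      type: custom
      remoteStorageManager:
        className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager   # illustrative custom RSM class
        classPath: /opt/kafka/plugins/tiered-storage-s3/*                       # illustrative plugin path in a custom image
        config:
          storage.bucket.name: my-bucket   # keys here are prefixed with rsm.config. (callout 1)
    config:
      rlmm.config.remote.log.metadata.topic.replication.factor: 1   # RLMM settings use the rlmm.config. prefix (callout 2)
```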
1. Configure the custom remote storage manager with the necessary settings. The keys are automatically prefixed with rsm.config. and appended to the Kafka broker configuration.
2. Streams for Apache Kafka uses the TopicBasedRemoteLogMetadataManager for Remote Log Metadata Management (RLMM). Add RLMM configuration using an rlmm.config. prefix.
If you want to use custom tiered storage, you must first add the tiered storage plugin to the Streams for Apache Kafka image by building a custom container image.
Chapter 7. Deprecated features
Deprecated features that were supported in previous releases of Streams for Apache Kafka.
7.1. Streams for Apache Kafka
7.1.1. Schema property deprecations
A number of schema properties are deprecated, with replacement properties available where applicable.
See the Streams for Apache Kafka Custom Resource API Reference.
7.1.2. Java 11 deprecated in Streams for Apache Kafka 2.7.0
Support for Java 11 is deprecated from Kafka 3.7.0 and Streams for Apache Kafka 2.7.0. Java 11 will be unsupported for all Streams for Apache Kafka components, including clients, in release 3.0.0.
Streams for Apache Kafka supports Java 17. Use Java 17 when developing new applications. Plan to migrate any applications that currently use Java 11 to Java 17.
If you want to continue using Java 11 for the time being, Streams for Apache Kafka 2.5 provides Long Term Support (LTS). For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.
Support for Java 8 was removed in Streams for Apache Kafka 2.4.0. If you are currently using Java 8, plan to migrate to Java 17 in the same way.
7.1.3. Storage overrides
The storage overrides (*.storage.overrides) for configuring per-broker storage are deprecated and will be removed in the future. If you are using storage overrides, migrate to KafkaNodePool resources and use multiple node pools, each with a different storage class.
For more information, see PersistentClaimStorage schema reference.
7.1.4. Environment variable configuration provider
You can use configuration providers to load configuration data from external sources for all Kafka components, including producers and consumers.
Previously, you could enable the io.strimzi.kafka.EnvVarConfigProvider environment variable configuration provider using the config.providers properties in the spec configuration of a component. However, this provider is now deprecated and will be removed in the future. Therefore, it is recommended to update your implementation to use Kafka’s own environment variable configuration provider (org.apache.kafka.common.config.provider.EnvVarConfigProvider) to provide configuration properties as environment variables.
Example configuration to enable the environment variable configuration provider
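A minimal sketch, assuming a Kafka Connect resource; configuration values can then typically be referenced in the form ${env:MY_ENV_VAR}.

```yaml
spec:
  config:
    # Enable Kafka's own environment variable configuration provider
    config.providers: env
    config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider
```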
7.1.5. Kafka MirrorMaker 2 identity replication policy
Identity replication policy is a feature used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the source cluster’s name, the topic retains its original name. This setting is particularly useful for active/passive backups and data migration scenarios.
To implement an identity replication policy, you must specify a replication policy class (replication.policy.class) in the MirrorMaker 2 configuration. Previously, you could specify the io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy class included with the Streams for Apache Kafka mirror-maker-2-extensions component. However, this component is now deprecated and will be removed in the future. Therefore, it is recommended to update your implementation to use Kafka’s own replication policy class (org.apache.kafka.connect.mirror.IdentityReplicationPolicy).
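A minimal sketch of the relevant part of a KafkaMirrorMaker2 resource, using illustrative cluster aliases; the same property is typically also set on the checkpoint connector.

```yaml
spec:
  mirrors:
    - sourceCluster: my-source-cluster
      targetCluster: my-target-cluster
      sourceConnector:
        config:
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy   # keep original topic names
```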
For more information, see Configuring Kafka MirrorMaker 2.
7.1.6. Kafka MirrorMaker 1
Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 was deprecated in Kafka 3.0.0 and will be removed in Kafka 4.0.0, leaving MirrorMaker 2 as the only version available. MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters.
As a result, MirrorMaker 1 (referred to as MirrorMaker in the documentation) has been deprecated in Streams for Apache Kafka, including the KafkaMirrorMaker custom resource, and support will be removed when Kafka 4.0.0 is adopted. To avoid disruptions, please transition to MirrorMaker 2 before support ends.
If you’re using MirrorMaker 1, you can replicate its functionality in MirrorMaker 2 by using the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy class. By default, MirrorMaker 2 renames topics replicated to a target cluster, but IdentityReplicationPolicy preserves the original topic names, enabling the same active/passive unidirectional replication as MirrorMaker 1.
For more information, see Configuring Kafka MirrorMaker 2.
7.2. Kafka Bridge
7.2.1. OpenAPI v2 (Swagger)
Support for OpenAPI v2 is now deprecated and will be removed in the future. OpenAPI v3 is now supported. Plan to move to using OpenAPI v3.
During the transition to OpenAPI v3, the OpenAPI v2 specification remains available through an additional /openapi/v2 endpoint. A new /openapi/v3 endpoint returns the OpenAPI v3 specification.
7.2.2. Kafka Bridge span attributes
The following Kafka Bridge span attributes are deprecated with replacements shown where applicable:
- http.method replaced by http.request.method
- http.url replaced by url.scheme, url.path, and url.query
- messaging.destination replaced by messaging.destination.name
- http.status_code replaced by http.response.status_code
- messaging.destination.kind=topic without replacement
Kafka Bridge uses OpenTelemetry for distributed tracing. The changes are in line with changes to the OpenTelemetry semantic conventions. The attributes will be removed in a future release of the Kafka Bridge.
Chapter 8. Fixed issues
The issues fixed in Streams for Apache Kafka 2.8 on OpenShift.
For details of the issues fixed in Kafka 3.8.0, refer to the Kafka 3.8.0 Release Notes.
| Issue number | Description |
|---|---|
| | Wrong keystore password error in re-built image |
| | Topic Operator replication factor changes seem to conflict with Cruise Control rebalancing |
| | Additional Volumes in Pod |
| | The correct pod might not be restarted during PVC resizing |
| | Unnecessary CA replacement run with custom CA |
| | Add support for Kafka 3.8 |
| | Continuously generating secrets in the Kafka instance namespace on OCP 4.16 |
| | Logging update does not effect for controllers until rolled manually |
| | Promote the UseKRaft feature gate to GA |
| | Duplicate volume IDs in JBOD storage cause Pod creation errors |
| | Logging configuration is never updated for Connect when connector operator is disabled |
| | MM2 connector auto-restarting does not seem to work |
| | Wrong parsing of SSL principal in Strimzi Quotas plugin |
| | Promote KafkaNodePools feature gate to GA |
| | RF Change |
| | JBOD support in KRaft mode |
| | Should manual rolling update failure fail the whole reconciliation? |
| | Allow declarative configuration of the default user quotas |
| | Remove Bidirectional TO and ZooKeeper use from TO |
| | Improvements to Quotas support |
| | Notifications and alerting when the user operator managed certificates are close to expiry |
| Issue number | Description |
|---|---|
| | Console operator deployment name too general |
| Issue number | Description |
|---|---|
| | Record Encryption does not use new key material resulting from a rotation to encrypt newly produced records |
| Issue Number | Description |
|---|---|
| | CVE-2024-7254 protobuf: StackOverflow vulnerability in Protocol Buffers |
| | CVE-2024-47554 Apache Commons IO: Possible denial of service attack on untrusted input to XmlStreamReader |
| | CVE-2024-9823 org.eclipse.jetty/jetty-servlets: Jetty DOS vulnerability on DosFilter [amq-st-2] |
| | CVE-2024-8184 org.eclipse.jetty/jetty-server: Jetty ThreadLimitHandler.getRemote() vulnerable to remote DoS attacks [amq-st-2] |
| | CVE-2024-8285 io.kroxylicious-kroxylicious-parent: Missing upstream Kafka TLS hostname verification [amq-st-2] |
Security updates
Check the latest information about Streams for Apache Kafka security updates in the Red Hat Product Advisories portal.
Errata
Check the latest security and product enhancement advisories for Streams for Apache Kafka.
Chapter 9. Known issues
This section lists the known issues for Streams for Apache Kafka 2.8 on OpenShift.
9.1. Cruise Control CPU utilization estimation
Cruise Control for Streams for Apache Kafka has a known issue that relates to the estimation of CPU utilization. CPU utilization is calculated as a percentage of the defined capacity of a broker pod. The issue occurs when running Kafka brokers across nodes with varying numbers of CPU cores. For example, node1 might have 2 CPU cores and node2 might have 4 CPU cores. In this situation, Cruise Control can underestimate or overestimate the CPU load of brokers. The issue can prevent cluster rebalances when a pod is under heavy load.
There are two workarounds for this issue.
Workaround one: Equal CPU requests and limits
You can set CPU requests equal to CPU limits in Kafka.spec.kafka.resources. That way, all CPU resources are reserved upfront and are always available. This configuration allows Cruise Control to properly evaluate the CPU utilization when preparing the rebalance proposals based on CPU goals.
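For example, with illustrative values:

```yaml
spec:
  kafka:
    resources:
      requests:
        cpu: "4"
        memory: 16Gi
      limits:
        cpu: "4"       # CPU limit equal to the CPU request
        memory: 16Gi
```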
Workaround two: Exclude CPU goals
You can exclude CPU goals from the hard and default goals specified in the Cruise Control configuration.
Example Cruise Control configuration without CPU goals
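A minimal sketch follows, using an illustrative subset of goals; mirror your existing goal lists and simply omit the CPU goals (CpuCapacityGoal and CpuUsageDistributionGoal).

```yaml
spec:
  cruiseControl:
    config:
      hard.goals: >
        com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal
      default.goals: >
        com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal
      # CpuCapacityGoal and CpuUsageDistributionGoal are intentionally not listed
```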
For more information, see Insufficient CPU capacity.
9.2. JMX authentication when running in FIPS mode
When running Streams for Apache Kafka in FIPS mode with JMX authentication enabled, clients may fail authentication. To work around this issue, do not enable JMX authentication while running in FIPS mode. We are investigating the issue and working to resolve it in a future release.
Chapter 10. Supported Configurations
Supported configurations for the Streams for Apache Kafka 2.8 release.
10.1. Supported platforms
The following platforms are tested for Streams for Apache Kafka 2.8 running with Kafka on the version of OpenShift stated.
| Platform | Version | Architecture |
|---|---|---|
| Red Hat OpenShift Container Platform | 4.12 and 4.14 to 4.17 | x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) |
| Red Hat OpenShift Container Platform disconnected environment | Latest | x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) |
| Red Hat OpenShift Dedicated | Latest | x86_64 |
| Microsoft Azure Red Hat OpenShift (ARO) | Latest | x86_64 |
| Red Hat OpenShift Service on AWS (ROSA) | Latest | x86_64 |
| Red Hat MicroShift | Latest | x86_64 |
| Red Hat OpenShift Local | 2.13-2.19 (OCP 4.12), 2.29-2.33 (OCP 4.14), 2.34-2.38 (OCP 4.15), 2.39 and newer (OCP 4.16) | x86_64 |
OpenShift Local is a limited version of Red Hat OpenShift Container Platform (OCP). Use only for development and evaluation on the understanding that some features may be unavailable.
Unsupported features
- Red Hat MicroShift does not support Kafka Connect’s build configuration for building container images with connectors.
- IBM Z and IBM® LinuxONE s390x architecture does not support Streams for Apache Kafka OPA integration.
FIPS compliance
Streams for Apache Kafka is designed for FIPS. Streams for Apache Kafka container images are based on RHEL 9.2, which has been submitted to NIST for approval.
To check which versions of RHEL are approved by the National Institute of Standards and Technology (NIST), see the Cryptographic Module Validation Program on the NIST website.
Red Hat OpenShift Container Platform is designed for FIPS. When running on RHEL or RHEL CoreOS booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries submitted to NIST for FIPS validation only on the x86_64, ppc64le (IBM Power), s390x (IBM Z), and aarch64 (64-bit ARM) architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards.
OpenShift Container Platform 4.12 is the last version to support FIPS 140-2. Given the uncertainty surrounding the validation timeline for future OpenShift versions by NIST, Streams for Apache Kafka will be supported on OpenShift 4.12 until further notice.
10.2. Supported clients
Only client libraries built by Red Hat are supported for Streams for Apache Kafka. Currently, Streams for Apache Kafka only provides a Java client library, which is tested and supported on kafka-clients-3.7.0.redhat-00007 and newer. Clients are supported for use with Streams for Apache Kafka 2.8 on the following operating systems and architectures:
| Operating System | Architecture | JVM |
|---|---|---|
| RHEL and UBI 8 and 9 | x86, amd64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) | Java 11 (deprecated) and Java 17 |
Clients are tested with OpenJDK 11 and 17, though Java 11 is deprecated in Streams for Apache Kafka 2.7.0. The IBM JDK is supported but not regularly tested against during each release. Oracle JDK 11 is not supported.
Supported Red Hat Universal Base Image (UBI) versions correspond to the same RHEL versions.
10.3. Supported Apache Kafka ecosystem
In Streams for Apache Kafka, only the following components released directly from the Apache Software Foundation are supported:
- Apache Kafka Broker
- Apache Kafka Connect
- Apache MirrorMaker
- Apache MirrorMaker 2
- Apache Kafka Java Producer, Consumer, Management clients, and Kafka Streams
- Apache ZooKeeper
Apache ZooKeeper is supported solely as an implementation detail of Apache Kafka and should not be modified for other purposes.
10.4. Additional supported features
- Kafka Bridge
- Drain Cleaner
- Cruise Control
- Distributed Tracing
- Streams for Apache Kafka Console (technology preview)
- Streams for Apache Kafka Proxy (technology preview)
Streams for Apache Kafka Console and Streams for Apache Kafka Proxy are not production-ready. For the technology previews, they have been tested on x86 and amd64 only.
See also, Chapter 12, Supported integration with Red Hat products.
10.5. Console supported browsers
Streams for Apache Kafka Console is supported on the most recent stable releases of Firefox, Edge, Chrome, and WebKit-based browsers.
10.6. Subscription limits and core usage
Cores used by Red Hat components and product operators do not count against subscription limits. Additionally, cores or vCPUs allocated to ZooKeeper nodes are excluded from subscription compliance calculations and do not count towards a subscription.
10.7. Storage requirements
Streams for Apache Kafka has been tested with block storage and is compatible with the XFS and ext4 file systems, both of which are commonly used with Kafka. File storage options, such as NFS, are not compatible.
Chapter 11. Component details
The following table shows the component versions for each Streams for Apache Kafka release.
Components like the operators, console, and proxy only apply to using Streams for Apache Kafka on OpenShift.
| Streams for Apache Kafka | Apache Kafka | Strimzi Operators | Kafka Bridge | Oauth | Cruise Control | Console | Proxy |
|---|---|---|---|---|---|---|---|
| 2.8.0 | 3.8.0 | 0.43.0 | 0.30 | 0.15.0 | 2.5.138 | 0.1 | 0.8.0 |
| 2.7.0 | 3.7.0 | 0.40.0 | 0.28 | 0.15.0 | 2.5.137 | 0.1 | 0.5.1 |
| 2.6.0 | 3.6.0 | 0.38.0 | 0.27 | 0.14.0 | 2.5.128 | - | - |
| 2.5.2 | 3.5.0 (+3.5.2) | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.5.1 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.5.0 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.4.0 | 3.4.0 | 0.34.0 | 0.25.0 | 0.12.0 | 2.5.112 | - | - |
| 2.3.0 | 3.3.1 | 0.32.0 | 0.22.3 | 0.11.0 | 2.5.103 | - | - |
| 2.2.2 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.1 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.0 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.89 | - | - |
| 2.1.0 | 3.1.0 | 0.28.0 | 0.21.4 | 0.10.0 | 2.5.82 | - | - |
| 2.0.1 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 2.0.0 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 1.8.4 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.8.0 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.7.0 | 2.7.0 | 0.22.1 | 0.19.0 | 0.7.1 | 2.5.37 | - | - |
| 1.6.7 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.6 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.5 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.4 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.0 | 2.6.0 | 0.20.0 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.5.0 | 2.5.0 | 0.18.0 | 0.16.0 | 0.5.0 | - | - | - |
| 1.4.1 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.4.0 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.3.0 | 2.3.0 | 0.14.0 | 0.14.0 | 0.1.0 | - | - | - |
| 1.2.0 | 2.2.1 | 0.12.1 | 0.12.2 | - | - | - | - |
| 1.1.1 | 2.1.1 | 0.11.4 | - | - | - | - | - |
| 1.1.0 | 2.1.1 | 0.11.1 | - | - | - | - | - |
| 1.0 | 2.0.0 | 0.8.1 | - | - | - | - | - |
Chapter 12. Supported integration with Red Hat products
Streams for Apache Kafka 2.8 supports integration with the following Red Hat products:
- Red Hat build of Keycloak
- Provides OAuth 2.0 authentication and OAuth 2.0 authorization.
- Red Hat 3scale API Management
- Secures the Kafka Bridge and provides additional API management features.
- Red Hat build of Debezium
- Monitors databases and creates event streams.
- Red Hat build of Apicurio Registry
- Provides a centralized store of service schemas for data streaming.
- Red Hat build of Apache Camel K
- Provides a lightweight integration framework.
For information on the functionality these products can introduce to your Streams for Apache Kafka deployment, refer to the product documentation.
12.1. Red Hat build of Keycloak (formerly Red Hat Single Sign-On)
Streams for Apache Kafka supports OAuth 2.0 token-based authorization through Red Hat build of Keycloak Authorization Services, providing centralized management of security policies and permissions.
Red Hat build of Keycloak replaces Red Hat Single Sign-On, which is now in maintenance support. We are working on updating our documentation, resources, and media to reflect this transition. In the interim, content that describes using Single Sign-On in the Streams for Apache Kafka documentation also applies to using the Red Hat build of Keycloak.
12.2. Red Hat 3scale API Management
If you deployed the Kafka Bridge on OpenShift Container Platform, you can use it with 3scale. 3scale API Management can secure the Kafka Bridge with TLS, and provide authentication and authorization. Integration with 3scale also means that additional features like metrics, rate limiting and billing are available.
For information on deploying 3scale, see Using 3scale API Management with the Streams for Apache Kafka Bridge.
12.3. Red Hat build of Debezium for change data capture
The Red Hat build of Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate the Red Hat build of Debezium with Streams for Apache Kafka. Following a deployment of Streams for Apache Kafka, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to Streams for Apache Kafka on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
For more information on deploying Debezium with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Debezium.
12.4. Red Hat build of Apicurio Registry for schema validation
You can use the Red Hat build of Apicurio Registry as a centralized store of service schemas for data streaming. Red Hat build of Apicurio Registry provides schema registry support for schema technologies such as:
- Avro
- Protobuf
- JSON schema
Apicurio Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.
Using Apicurio Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.
For example, the schemas to serialize and deserialize messages can be stored in the registry, which are then referenced from the applications that use them to ensure that the messages that they send and receive are compatible with those schemas.
Kafka client applications can push or pull their schemas from Apicurio Registry at runtime.
For more information on using the Red Hat build of Apicurio Registry with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Apicurio Registry.
12.5. Red Hat build of Apache Camel K
The Red Hat build of Apache Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K supports serverless integration, which allows for development and deployment of integration tasks without the need to manage the underlying infrastructure. You can use Camel K to build and integrate event-driven applications with your Streams for Apache Kafka environment. For scenarios requiring real-time data synchronization between different systems or databases, Camel K can be used to capture and transform change events and send them to Streams for Apache Kafka for distribution to other systems.
For more information on using Camel K with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Apache Camel K.
Revised on 2024-11-19 16:01:16 UTC