Release Notes for Streams for Apache Kafka 2.8 on OpenShift


Red Hat Streams for Apache Kafka 2.8

Highlights of what's new and what's changed with this release of Streams for Apache Kafka on OpenShift Container Platform

Abstract

The release notes summarize the new features, enhancements, and fixes introduced in the Streams for Apache Kafka 2.8 release.

AMQ Streams is being renamed Streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat’s product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name.

Chapter 2. Custom resource upgrades

The v1beta2 API version for all custom resources was introduced with Streams for Apache Kafka 1.7. For Streams for Apache Kafka 1.8, v1alpha1 and v1beta1 API versions were removed from all Streams for Apache Kafka custom resources apart from KafkaTopic and KafkaUser.

Upgrade of the custom resources to v1beta2 prepares Streams for Apache Kafka for a move to Kubernetes CRD v1, which is required for Kubernetes 1.22.

If you are upgrading from a Streams for Apache Kafka version prior to version 1.7:

  1. Upgrade to Streams for Apache Kafka 1.7
  2. Convert the custom resources to v1beta2
  3. Upgrade to Streams for Apache Kafka 1.8
Important

You must upgrade your custom resources to use API version v1beta2 before upgrading to Streams for Apache Kafka version 2.8.

2.1. Upgrading custom resources to v1beta2

To support the upgrade of custom resources to v1beta2, Streams for Apache Kafka provides an API conversion tool, which you can download from the Streams for Apache Kafka 1.8 software downloads page.

You perform the custom resource upgrade in two steps.

Step one: Convert the format of custom resources

Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:

  • Converting the YAML files that describe the configuration for Streams for Apache Kafka custom resources
  • Converting Streams for Apache Kafka custom resources directly in the cluster

Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.

Step two: Upgrade CRDs to v1beta2

Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.

For more information, see Upgrading from a Streams for Apache Kafka version earlier than 1.7.

Chapter 3. Features

Streams for Apache Kafka 2.8 introduces the features described in this section.

Streams for Apache Kafka 2.8 on OpenShift is based on Apache Kafka 3.8.0 and Strimzi 0.43.x.

Note

To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.

3.1. Streams for Apache Kafka

3.1.1. OpenShift Container Platform support

Streams for Apache Kafka 2.8 is supported on OpenShift Container Platform 4.12 and 4.14 to 4.17.

For more information, see Chapter 10, Supported Configurations.

3.1.2. Kafka 3.8.0 support

Streams for Apache Kafka now supports and uses Apache Kafka version 3.8.0. Only Kafka distributions built by Red Hat are supported.

You must upgrade the Cluster Operator to Streams for Apache Kafka version 2.8 before you can upgrade brokers and client applications to Kafka 3.8.0. For upgrade instructions, see Upgrading Streams for Apache Kafka.

Refer to the Kafka 3.8.0 Release Notes for additional information.

Kafka 3.7.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.8.

Note

Kafka 3.8.0 provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol.

3.1.3. KRaft support moves to GA

The UseKRaft feature gate moves to GA (General Availability) and is now permanently enabled. To deploy Kafka clusters in KRaft (Kafka Raft metadata) mode without ZooKeeper, the Kafka custom resource must include the annotation strimzi.io/kraft="enabled", and you must use KafkaNodePool resources to manage the configuration of groups of nodes.
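
For example, a Kafka resource configured for KRaft mode with node pools might include the following annotations. The cluster name is illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: enabled      # deploy the cluster in KRaft mode, without ZooKeeper
    strimzi.io/node-pools: enabled # manage node configuration through KafkaNodePool resources
spec:
  kafka:
    # ...
```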

For more information, see Deploying a Kafka cluster in KRaft mode and Configuring Kafka in KRaft mode.

Note

If you are using ZooKeeper for metadata management in your Kafka cluster, you can migrate to using Kafka in KRaft mode. Once KRaft mode is enabled, you cannot switch back to ZooKeeper. For more information, see Migrating to KRaft mode.

KRaft limitations

The following Kafka features are currently not supported in KRaft:

  • Scaling of KRaft controller-only nodes up or down

3.1.4. Node pools move to GA

The KafkaNodePools feature gate moves to GA (General Availability) and is now permanently enabled. The feature gate enables the configuration of different pools of Apache Kafka nodes through the KafkaNodePool custom resource. To use KafkaNodePool resources, you still need to apply the strimzi.io/node-pools="enabled" annotation to the Kafka custom resource.
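
A minimal KafkaNodePool resource might look like the following sketch. The pool name, replica count, and storage settings are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster # links the pool to its Kafka cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
```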

For more information, see Configuring node pools.

3.1.5. Unidirectional Topic Operator moves to GA

The UnidirectionalTopicOperator feature gate moves to GA (General Availability) and is now permanently enabled. The feature gate introduces KRaft-compatible unidirectional topic management, enabling the creation of Kafka topics using the KafkaTopic resource. These topics are then managed by the Topic Operator.

Important

The bidirectional Topic Operator has been removed in this release and is no longer available. If you are upgrading from a version of Streams for Apache Kafka that uses the bidirectional Topic Operator, some cleanup tasks are required. For more information, see Upgrading from a Streams for Apache Kafka version using the Bidirectional Topic Operator.

For more information, see Using the Topic Operator.

3.1.6. New quotas management configuration

Note

The Strimzi Quotas plugin is currently a technology preview.

A new configuration mechanism supports quotas management. Configure a Kafka resource to enable the Strimzi Quotas plugin (strimzi) or Kafka’s built-in quotas management plugin (kafka).

  • The strimzi plugin provides storage utilization quotas and dynamic distribution of throughput limits.
  • The kafka plugin applies throughput limits on a per-user, per-broker basis and includes additional CPU and operation rate limits.

The Strimzi Quotas plugin is now configured using .spec.kafka.quotas properties. Any configuration of the plugin inside .spec.kafka.config, as used in previous releases, is ignored and should be removed.
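
As a sketch under the Strimzi Quotas plugin schema, the new configuration might look as follows. All values, and the excluded principal name, are illustrative:

```yaml
spec:
  kafka:
    quotas:
      type: strimzi # or "kafka" for the built-in quotas plugin
      producerByteRate: 1000000       # total producer throughput limit (bytes/s), distributed dynamically
      consumerByteRate: 1000000       # total consumer throughput limit (bytes/s)
      minAvailableRatioPerVolume: 0.1 # storage quota: throttle producers when less than 10% of a volume is free
      excludedPrincipals:
        - my-admin-user
```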

Warning

If you have previously configured the Strimzi Quotas plugin and are upgrading to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the new .spec.kafka.quotas properties to avoid reconciliation issues.

For more information, see Setting throughput and storage limits on brokers.

3.1.7. API users for Cruise Control

With the necessary permissions, you can now create REST API users to safely access a secured Cruise Control REST API directly. Standard Cruise Control USER and VIEWER roles are supported.
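
As a sketch, API users are configured through a secret referenced in the Cruise Control configuration; the secret name here is illustrative, and each secret entry takes a form such as "username: password, USER":

```yaml
spec:
  cruiseControl:
    apiUsers:
      type: hashLoginService # Jetty HashLoginService file format
      valueFrom:
        secretKeyRef:
          name: my-cc-api-users-secret
          key: key
```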

For more information, see API users.

3.1.8. Topic replication factor modification

It’s now possible to change the replication factor of topics by updating the replicas property value in a KafkaTopic resource managed by the Topic Operator. The Topic Operator uses Cruise Control to make the necessary changes, so Cruise Control must be deployed with Streams for Apache Kafka.
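
For example, updating the replicas value in a KafkaTopic resource triggers the change. The topic name and values are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3 # changing this value prompts the Topic Operator to apply it through Cruise Control
```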

For more information, see Using Cruise Control to modify topic replication factor.

3.2. Proxy

Note

Streams for Apache Kafka Proxy is currently a technology preview.

3.2.1. Record Validation filter

The Record Validation filter validates records sent by a producer. Only records that pass the validation are sent to the broker. This filter can be used to prevent poison messages—such as those containing corrupted data or invalid formats—from entering the Kafka system, which may otherwise lead to consumer failure.

For more information, see Record Validation filter.

The Record Encryption filter now supports integration with AWS Key Management Service (AWS KMS) as a Key Management Service (KMS). You can use AWS KMS to create encryption keys and give them aliases through which the filter references them.

For more information, see Record Encryption filter.

3.3. Console

Note

Streams for Apache Kafka Console is currently a technology preview.

3.3.1. Console operator

The new operator simplifies and streamlines the process of deploying the console using the Operator Lifecycle Manager (OLM).

For more information, see Deploying and connecting console to a Kafka cluster.

3.3.2. Cluster authentication

This release introduces per-cluster authentication support. You can now provide login credentials when accessing each Kafka cluster through the console.

For more information, see Logging into a Kafka cluster.

Chapter 4. Enhancements

Streams for Apache Kafka 2.8 adds a number of enhancements.

4.1. Kafka 3.8.0 enhancements

For an overview of the enhancements introduced with Kafka 3.8.0, refer to the Kafka 3.8.0 Release Notes.

4.2. Streams for Apache Kafka

4.2.1. KRaft: Support for JBOD storage

JBOD storage is now supported in KRaft mode. Use the kraftMetadata property in the storage configuration of a KafkaNodePool resource to specify the JBOD volume that stores the KRaft metadata log. By default, the log is stored on the volume with the lowest ID.
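
A JBOD configuration in a KafkaNodePool resource might mark the metadata volume as follows. Volume sizes are illustrative:

```yaml
spec:
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        kraftMetadata: shared # store the KRaft metadata log on this volume, alongside data
      - id: 1
        type: persistent-claim
        size: 1000Gi
```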

For more information, see Configuring the storage volume used to store the KRaft metadata log

4.2.2. Deregistration of removed nodes

When nodes are removed from a cluster, they are now deregistered so they are no longer tracked. The .status.nodeIds property in the Kafka custom resource stores a full list of node IDs, which is used to determine which nodes were removed and deregister them.

Important

This is a temporary fix that will be removed when Kafka KIP-1073 is implemented for unregistering nodes.

4.2.3. Additional OAuth 2.0 configuration options

Additional OAuth configuration options have been added for OAuth 2.0 authentication on the listener and the client.

  • On the listener, serverBearerTokenLocation and userNamePrefix have been added.
  • On the client, accessTokenLocation, clientAssertion, clientAssertionLocation, clientAssertionType, and oauth.sasl.extension have been added.
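
As a sketch, the new listener options might be used as follows. The endpoint URIs, token path, and prefix are illustrative, and which options you need depends on your authorization server setup:

```yaml
listeners:
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: oauth
      validIssuerUri: https://oauth-server/realm
      jwksEndpointUri: https://oauth-server/realm/certs
      serverBearerTokenLocation: /var/run/secrets/kafka/service-token # read a bearer token from a file
      userNamePrefix: oauth- # prefix applied to extracted user names
```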

For more information, see Configuring OAuth 2.0 authentication on listeners and Setting up OAuth 2.0 on Kafka components.

4.2.4. Add additional volumes to components

Streams for Apache Kafka now supports specifying additional volumes and volume mounts for Kafka components, the User Operator, and the Topic Operator. You can configure volumes in the pod template (template.pod) and define volume mounts in the container template (template.kafkaContainer) within the component’s resource. All additional mounted paths are located inside /mnt to ensure compatibility with future Kafka and Streams for Apache Kafka updates.
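
For example, mounting a ConfigMap into the Kafka container might look like this sketch. The volume and ConfigMap names are illustrative:

```yaml
template:
  pod:
    volumes:
      - name: extra-config
        configMap:
          name: my-extra-config
  kafkaContainer:
    volumeMounts:
      - name: extra-config
        mountPath: /mnt/extra-config # additional mount paths must be located under /mnt
```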

For more information, see Additional volumes.

4.2.5. Pattern matching for trusted certificates

It’s now possible to specify certificates in resource configuration by pattern instead of by certificate name, using the new pattern property. For example, you can specify pattern: "*.crt" rather than specific certificate names when configuring trusted certificates. This means that the related custom resource does not need to be updated if a certificate file name changes.

You can add this configuration to the Kafka Connect, Kafka MirrorMaker, and Kafka Bridge components for TLS connections to the Kafka cluster. You can also use the pattern property in the configuration for oauth, keycloak, and opa authentication and authorization types that integrate with authorization servers.
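
For example, trusted certificates for a TLS connection might be configured as follows. The secret name is illustrative:

```yaml
tls:
  trustedCertificates:
    - secretName: my-cluster-cluster-ca-cert
      pattern: "*.crt" # trust every certificate file in the secret that matches the pattern
```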

For more information, see CertSecretSource schema reference.

4.2.6. Published addresses on listeners

You can now configure external Kafka listeners with the publishNotReadyAddresses property to consider service endpoints as ready even if the pods are not.
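
A listener sketch using the property; the listener name and type are illustrative:

```yaml
listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      publishNotReadyAddresses: true # advertise service endpoints before the pods are ready
```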

For more information, see GenericKafkaListenerConfiguration schema properties.

4.2.7. External IP addresses for node port listeners

Support for specifying external IP addresses is now available when configuring node ports. Use the externalIPs property to associate external IP addresses with Kafka bootstrap and node port services. These addresses are used by clients external to the Kubernetes cluster to access the Kafka brokers.
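
A node port listener with external IPs might be sketched as follows. The addresses are illustrative:

```yaml
listeners:
  - name: external
    port: 9094
    type: nodeport
    tls: true
    configuration:
      bootstrap:
        externalIPs:
          - 203.0.113.10 # address clients use for the bootstrap service
      brokers:
        - broker: 0
          externalIPs:
            - 203.0.113.11 # address clients use to reach broker 0
```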

For more information, see GenericKafkaListenerConfigurationBootstrap schema properties and GenericKafkaListenerConfigurationBroker schema properties.

4.2.8. Custom SASL configuration for the standalone Topic Operator

If you need to configure custom SASL authentication, you can now define the necessary authentication properties using the STRIMZI_SASL_CUSTOM_CONFIG_JSON environment variable for the standalone Topic Operator. For example, this configuration may be used for accessing a Kafka cluster in a cloud provider with a custom login module.

For more information, see Deploying the standalone Topic Operator.

4.2.9. Expanded operator support for feature gates

Supported feature gates are now applicable to all Streams for Apache Kafka operators. While a particular feature gate might be used by one operator and ignored by the others, it can still be configured in all operators. When the User Operator and Topic Operator are deployed within the context of the Kafka custom resource, the Cluster Operator automatically propagates the feature gates configuration to them. When the User Operator and Topic Operator are deployed standalone, without a Cluster Operator available to configure the feature gates, they must be configured directly within their deployments.

4.2.10. MirrorMaker 2 target cluster check

A warning is now triggered if the connectCluster configuration for a KafkaMirrorMaker2 resource does not specify the target Kafka cluster.

4.2.11. New alerts for connectors and tasks

New alerts for failing connectors and tasks have been added to the metrics examples (prometheus-rules.yaml).

4.2.12. Metrics for certificate expiration

Metrics are now available for monitoring certificate expiration. The example Grafana dashboard for operators (strimzi-operators.json) presents the time certificates expire per cluster.

4.3. Kafka Bridge

4.3.1. Support for OpenAPI v3

Kafka Bridge now supports OpenAPI v3. Support for OpenAPI v2 is now deprecated.

4.3.2. New support for message timestamps

Producers can now specify a timestamp explicitly in ProducerRecord objects. A timestamp in the ConsumerRecord can also be read in a request response.

  • Set the timestamp on a message sent using the send API.
  • Get the timestamp on receiving a message using the poll API.

For more information, see ProducerRecord and ConsumerRecord.

4.3.3. JSON arrays for record keys and values

The json embedded data format for Kafka messages now supports JSON arrays for record keys and values in the OpenAPI definition.

Chapter 5. Technology Previews

Technology Preview features included with Streams for Apache Kafka 2.8.

Important

Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. These features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.

5.1. ContinueReconciliationOnManualRollingUpdateFailure feature gate

When enabled, the ContinueReconciliationOnManualRollingUpdateFailure feature gate allows the Cluster Operator to continue a reconciliation if the manual rolling update of the operands fails. Continuing the reconciliation after a manual rolling update failure allows the operator to recover from various situations that might prevent the update from succeeding.

For more information, see ContinueReconciliationOnManualRollingUpdateFailure feature gate.

5.2. Streams for Apache Kafka Console

A console (user interface) for Streams for Apache Kafka is now available as a technology preview. The Streams for Apache Kafka Console is designed to seamlessly integrate with your Streams for Apache Kafka deployment, providing a centralized hub for monitoring and managing Kafka clusters. Deploy the console and connect it to your Kafka clusters managed by Streams for Apache Kafka.

Gain insights into each connected cluster through dedicated pages covering brokers, topics, and consumer groups. View essential information, such as the status of a Kafka cluster, before looking into specific information about brokers, topics, or connected consumer groups.

For more information, see the Streams for Apache Kafka Console guide.

5.3. Streams for Apache Kafka Proxy

Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware proxy designed to enhance Kafka-based systems. Through its filter mechanism it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself.

As part of the technology preview, you can try the Record Encryption filter and Record Validation filter. The Record Encryption filter uses industry-standard cryptographic techniques to apply encryption to Kafka messages, ensuring the confidentiality of data stored in the Kafka Cluster. The Record Validation filter validates records sent by a producer. Only records that pass the validation are sent to the broker.

For more information, see the Streams for Apache Kafka Proxy guide.

5.4. Strimzi Quotas plugin configuration

Use the technology preview of the Strimzi Quotas plugin to set throughput and storage limits on brokers in your Kafka cluster.

Warning

If you have previously configured the Strimzi Quotas plugin and are upgrading to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the new .spec.kafka.quotas properties to avoid reconciliation issues.

See Setting limits on brokers using the Kafka Static Quota plugin.

Chapter 6. Developer Previews

Developer preview features included with Streams for Apache Kafka 2.8.

As a Kafka cluster administrator, you can toggle a subset of features on and off using feature gates in the Cluster Operator deployment configuration. The feature gates available as developer previews are at an alpha level of maturity and disabled by default.

Important

Developer Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Developer Preview features in production environments. These features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Developer Preview Support Scope.

6.1. Tiered storage for Kafka brokers

Streams for Apache Kafka now supports tiered storage for Kafka brokers as a developer preview, allowing you to introduce custom remote storage solutions as well as local storage. Due to its current limitations, it is not recommended for production environments.

Remote storage configuration is specified using kafka.tieredStorage properties in the Kafka resource. You specify a custom remote storage manager to manage the tiered storage.

Example custom tiered storage configuration

kafka:
  tieredStorage:
    type: custom
    remoteStorageManager:
      className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager
      classPath: /opt/kafka/plugins/tiered-storage-s3/*
      config:
        # remote storage manager configuration (1)
        storage.bucket.name: my-bucket
  config:
    # ...
    rlmm.config.remote.log.metadata.topic.replication.factor: 1 # (2)

(1) Configure the custom remote storage manager with the necessary settings. The keys are automatically prefixed with rsm.config. and appended to the Kafka broker configuration.
(2) Streams for Apache Kafka uses the TopicBasedRemoteLogMetadataManager for Remote Log Metadata Management (RLMM). Add RLMM configuration using the rlmm.config. prefix.
Note

If you want to use custom tiered storage, you must first add the tiered storage plugin to the Streams for Apache Kafka image by building a custom container image.

See Tiered storage (early access).

Chapter 7. Deprecated features

Deprecated features that were supported in previous releases of Streams for Apache Kafka.

7.1. Streams for Apache Kafka

7.1.1. Schema property deprecations

Schema                            Deprecated property               Replacement property

AclRule                           operation                         operations
CruiseControlSpec                 tlsSidecar                        -
CruiseControlTemplate             tlsSidecarContainer               -
CruiseControlSpec.BrokerCapacity  disk                              -
CruiseControlSpec.BrokerCapacity  cpuUtilization                    -
EntityOperatorSpec                tlsSidecar                        -
EntityTopicOperatorSpec           reconciliationIntervalSeconds     reconciliationIntervalMs
EntityTopicOperatorSpec           zookeeperSessionTimeoutSeconds    -
EntityTopicOperatorSpec           topicMetadataMaxAttempts          -
EntityUserOperator                zookeeperSessionTimeoutSeconds    -
ExternalConfiguration             volumes                           template.pod.volumes and template.kafkaContainer.volumeMounts
JaegerTracing                     type                              -
KafkaConnectorSpec                pause                             state
KafkaConnectTemplate              deployment                        Replaced by StrimziPodSet resource
KafkaClusterTemplate              statefulset                       Replaced by StrimziPodSet resource
KafkaExporterTemplate             service                           -
KafkaMirrorMaker                  all properties                    -
KafkaMirrorMaker2ConnectorSpec    pause                             state
KafkaMirrorMaker2MirrorSpec       topicsBlacklistPattern            topicsExcludePattern
KafkaMirrorMaker2MirrorSpec       groupsBlacklistPattern            groupsExcludePattern
ListenerStatus                    type                              name
PersistentClaimStorage            overrides                         -
ZookeeperClusterTemplate          statefulset                       Replaced by StrimziPodSet resource

See the Streams for Apache Kafka Custom Resource API Reference.

7.1.2. Java 11 deprecation

Support for Java 11 is deprecated from Kafka 3.7.0 and Streams for Apache Kafka 2.7.0. Java 11 will be unsupported for all Streams for Apache Kafka components, including clients, in release 3.0.0.

Streams for Apache Kafka supports Java 17. Use Java 17 when developing new applications. Plan to migrate any applications that currently use Java 11 to 17.

If you want to continue using Java 11 for the time being, Streams for Apache Kafka 2.5 provides Long Term Support (LTS). For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.

Note

Support for Java 8 was removed in Streams for Apache Kafka 2.4.0. If you are currently using Java 8, plan to migrate to Java 17 in the same way.

7.1.3. Storage overrides

The storage overrides (*.storage.overrides) for configuring per-broker storage are deprecated and will be removed in the future. If you are using storage overrides, migrate to KafkaNodePool resources and use multiple node pools, each with a different storage class.

For more information, see PersistentClaimStorage schema reference.

7.1.4. Environment variable configuration provider

You can use configuration providers to load configuration data from external sources for all Kafka components, including producers and consumers.

Previously, you could enable the io.strimzi.kafka.EnvVarConfigProvider environment variable configuration provider using the config.providers properties in the spec configuration of a component. However, this provider is now deprecated and will be removed in the future. Therefore, it is recommended to update your implementation to use Kafka’s own environment variable configuration provider (org.apache.kafka.common.config.provider.EnvVarConfigProvider) to provide configuration properties as environment variables.

Example configuration to enable the environment variable configuration provider

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: env
    config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider
  # ...

7.1.5. Identity replication policy

Identity replication policy is a feature used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the source cluster’s name, the topic retains its original name. This setting is particularly useful for active/passive backups and data migration scenarios.

To implement an identity replication policy, you must specify a replication policy class (replication.policy.class) in the MirrorMaker 2 configuration. Previously, you could specify the io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy class included with the Streams for Apache Kafka mirror-maker-2-extensions component. However, this component is now deprecated and will be removed in the future. Therefore, it is recommended to update your implementation to use Kafka’s own replication policy class (org.apache.kafka.connect.mirror.IdentityReplicationPolicy).
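
For example, in a KafkaMirrorMaker2 resource, the Kafka class can be set in the connector configuration. The cluster aliases are illustrative:

```yaml
spec:
  mirrors:
    - sourceCluster: cluster-a
      targetCluster: cluster-b
      sourceConnector:
        config:
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
      checkpointConnector:
        config:
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```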

For more information, see Configuring Kafka MirrorMaker 2.

7.1.6. Kafka MirrorMaker 1

Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 was deprecated in Kafka 3.0.0 and will be removed in Kafka 4.0.0. MirrorMaker 2 will be the only version available. MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters.

As a result, MirrorMaker 1 (referred to as MirrorMaker in the documentation) has been deprecated in Streams for Apache Kafka, including the KafkaMirrorMaker custom resource, and support will be removed when Kafka 4.0.0 is adopted. To avoid disruptions, please transition to MirrorMaker 2 before support ends.

If you’re using MirrorMaker 1, you can replicate its functionality in MirrorMaker 2 by using the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy class. By default, MirrorMaker 2 renames topics replicated to a target cluster, but IdentityReplicationPolicy preserves the original topic names, enabling the same active/passive unidirectional replication as MirrorMaker 1.

For more information, see Configuring Kafka MirrorMaker 2.

7.2. Kafka Bridge

7.2.1. OpenAPI v2 (Swagger)

Support for OpenAPI v2 is now deprecated and will be removed in the future. OpenAPI v3 is now supported. Plan to move to using OpenAPI v3.

During the transition, the /openapi endpoint continues to return the OpenAPI v2 specification, which is also available through an additional /openapi/v2 endpoint. A new /openapi/v3 endpoint returns the OpenAPI v3 specification.

7.2.2. Kafka Bridge span attributes

The following Kafka Bridge span attributes are deprecated with replacements shown where applicable:

  • http.method replaced by http.request.method
  • http.url replaced by url.scheme, url.path, and url.query
  • messaging.destination replaced by messaging.destination.name
  • http.status_code replaced by http.response.status_code
  • messaging.destination.kind=topic without replacement

Kafka Bridge uses OpenTelemetry for distributed tracing. The changes are in line with changes to OpenTelemetry semantic conventions. The attributes will be removed in a future release of the Kafka Bridge.

Chapter 8. Fixed issues

The issues fixed in Streams for Apache Kafka 2.8 on OpenShift.

For details of the issues fixed in Kafka 3.8.0, refer to the Kafka 3.8.0 Release Notes.

Table 8.1. Streams for Apache Kafka fixed issues

Issue number    Description

ENTMQST-6403    Wrong keystore password error in re-built image
ENTMQST-6341    Topic Operator replication factor changes seem to conflict with Cruise Control rebalancing
ENTMQST-6257    Additional Volumes in Pod
ENTMQST-6225    The correct pod might not be restarted during PVC resizing
ENTMQST-6205    Unnecessary CA replacement run with custom CA
ENTMQST-6183    Add support for Kafka 3.8
ENTMQST-6129    Continuously generating secrets in the Kafka instance namespace on OCP 4.16
ENTMQST-6032    Logging update does not effect for controllers until rolled manually
ENTMQST-5915    Promote the UseKRaft feature gate to GA
ENTMQST-5865    Duplicate volume IDs in JBOD storage cause Pod creation errors
ENTMQST-5863    Logging configuration is never updated for Connect when connector operator is disabled
ENTMQST-5850    MM2 connector auto-restarting does not seem to work
ENTMQST-5843    Wrong parsing of SSL principal in Strimzi Quotas plugin
ENTMQST-5789    Promote KafkaNodePools feature gate to GA
ENTMQST-5740    RF Change
ENTMQST-5674    JBOD support in KRaft mode
ENTMQST-5669    Should manual rolling update failure fail the whole reconciliation?
ENTMQST-5199    Allow declarative configuration of the default user quotas
ENTMQST-4019    Remove Bidirectional TO and ZooKeeper use from TO
ENTMQST-3288    Improvements to Quotas support
ENTMQST-2632    Notifications and alerting when the user operator managed certificates are close to expiry

Table 8.2. Streams for Apache Kafka Console fixed issues

Issue number    Description

ASUI-91         Console operator deployment name too general

Table 8.3. Streams for Apache Kafka Proxy fixed issues

Issue number    Description

ENTMQSTPR-43    Record Encryption does not use new key material resulting from a rotation to encrypt newly produced records

Table 8.4. Fixed common vulnerabilities and exposures (CVEs)

Issue number    Description

ENTMQST-6422    CVE-2024-7254 protobuf: StackOverflow vulnerability in Protocol Buffers
ENTMQST-6421    CVE-2024-47554 Apache Commons IO: Possible denial of service attack on untrusted input to XmlStreamReader
ENTMQST-6396    CVE-2024-9823 org.eclipse.jetty/jetty-servlets: Jetty DOS vulnerability on DosFilter [amq-st-2]
ENTMQST-6395    CVE-2024-8184 org.eclipse.jetty/jetty-server: Jetty ThreadLimitHandler.getRemote() vulnerable to remote DoS attacks [amq-st-2]
ENTMQST-6288    CVE-2024-8285 io.kroxylicious-kroxylicious-parent: Missing upstream Kafka TLS hostname verification [amq-st-2]

Security updates

Check the latest information about Streams for Apache Kafka security updates in the Red Hat Product Advisories portal.

Errata

Check the latest security and product enhancement advisories for Streams for Apache Kafka.

Chapter 9. Known issues

This section lists the known issues for Streams for Apache Kafka 2.8 on OpenShift.

9.1. Cruise Control CPU utilization estimation

Cruise Control for Streams for Apache Kafka has a known issue that relates to the calculation of CPU utilization estimation. CPU utilization is calculated as a percentage of the defined capacity of a broker pod. The issue occurs when running Kafka brokers across nodes with varying numbers of CPU cores. For example, node1 might have 2 CPU cores and node2 might have 4 CPU cores. In this situation, Cruise Control can underestimate or overestimate the CPU load of brokers. The issue can prevent cluster rebalances when a pod is under heavy load.

There are two workarounds for this issue.

Workaround one: Equal CPU requests and limits

You can set CPU requests equal to CPU limits in Kafka.spec.kafka.resources. That way, all CPU resources are reserved upfront and are always available. This configuration allows Cruise Control to properly evaluate the CPU utilization when preparing the rebalance proposals based on CPU goals.

Workaround two: Exclude CPU goals

You can exclude CPU goals from the hard and default goals specified in the Cruise Control configuration.

Example Cruise Control configuration without CPU goals

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}
  cruiseControl:
    brokerCapacity:
      inboundNetwork: 10000KB/s
      outboundNetwork: 10000KB/s
    config:
      hard.goals: >
        com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal
      default.goals: >
        com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.PotentialNwOutGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskUsageDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundUsageDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundUsageDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.TopicReplicaDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderReplicaDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderBytesInDistributionGoal

For more information, see Insufficient CPU capacity.

9.2. JMX authentication when running in FIPS mode

When running Streams for Apache Kafka in FIPS mode with JMX authentication enabled, clients may fail authentication. To work around this issue, do not enable JMX authentication while running in FIPS mode. We are investigating the issue and working to resolve it in a future release.
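For example, in the `Kafka` custom resource, JMX can be left enabled without authentication by specifying `jmxOptions` with no `authentication` property (a sketch only):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # Omitting the authentication property under jmxOptions
    # avoids the password-protected JMX authentication that
    # fails when running in FIPS mode.
    jmxOptions: {}
    # ...
```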

Chapter 10. Supported Configurations

Supported configurations for the Streams for Apache Kafka 2.8 release.

10.1. Supported platforms

The following platforms are tested for Streams for Apache Kafka 2.8 running with Kafka on the version of OpenShift stated.

| Platform | Version | Architecture |
| --- | --- | --- |
| Red Hat OpenShift Container Platform | 4.12 and 4.14 to 4.17 | x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) |
| Red Hat OpenShift Container Platform disconnected environment | Latest | x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) |
| Red Hat OpenShift Dedicated | Latest | x86_64 |
| Microsoft Azure Red Hat OpenShift (ARO) | Latest | x86_64 |
| Red Hat OpenShift Service on AWS (ROSA), including ROSA with hosted control planes (HCP) | Latest | x86_64 |
| Red Hat MicroShift | Latest | x86_64 |
| Red Hat OpenShift Local | 2.13-2.19 (OCP 4.12), 2.29-2.33 (OCP 4.14), 2.34-2.38 (OCP 4.15), 2.39 and newer (OCP 4.16) | x86_64 |

OpenShift Local is a limited version of Red Hat OpenShift Container Platform (OCP). Use only for development and evaluation on the understanding that some features may be unavailable.

Unsupported features

  • Red Hat MicroShift does not support Kafka Connect’s build configuration for building container images with connectors.
  • IBM Z and IBM® LinuxONE s390x architecture does not support Streams for Apache Kafka OPA integration.

FIPS compliance

Streams for Apache Kafka is designed for FIPS. Streams for Apache Kafka container images are based on RHEL 9.2, which has been submitted to NIST for approval.

To check which versions of RHEL are approved by the National Institute of Standards and Technology (NIST), see the Cryptographic Module Validation Program on the NIST website.

Red Hat OpenShift Container Platform is designed for FIPS. When running on RHEL or RHEL CoreOS booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries submitted to NIST for FIPS validation only on the x86_64, ppc64le (IBM Power), s390x (IBM Z), and aarch64 (64-bit ARM) architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards.

OpenShift Container Platform 4.12 is the last version to support FIPS 140-2. Given the uncertainty surrounding the validation timeline for future OpenShift versions by NIST, Streams for Apache Kafka will be supported on OpenShift 4.12 until further notice.

10.2. Supported clients

Only client libraries built by Red Hat are supported for Streams for Apache Kafka. Currently, Streams for Apache Kafka only provides a Java client library, which is tested and supported on kafka-clients-3.7.0.redhat-00007 and newer. Clients are supported for use with Streams for Apache Kafka 2.8 on the following operating systems and architectures:

| Operating System | Architecture | JVM |
| --- | --- | --- |
| RHEL and UBI 8 and 9 | x86, amd64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) | Java 11 (deprecated) and Java 17 |

Clients are tested with OpenJDK 11 and 17, though Java 11 has been deprecated since Streams for Apache Kafka 2.7.0. The IBM JDK is supported but not regularly tested during each release. Oracle JDK 11 is not supported.

Support for Red Hat Universal Base Image (UBI) versions corresponds to the same RHEL version.

10.3. Supported Apache Kafka ecosystem

In Streams for Apache Kafka, only the following components released directly from the Apache Software Foundation are supported:

  • Apache Kafka Broker
  • Apache Kafka Connect
  • Apache MirrorMaker
  • Apache MirrorMaker 2
  • Apache Kafka Java Producer, Consumer, Management clients, and Kafka Streams
  • Apache ZooKeeper
Note

Apache ZooKeeper is supported solely as an implementation detail of Apache Kafka and should not be modified for other purposes.

10.4. Additional supported features

  • Kafka Bridge
  • Drain Cleaner
  • Cruise Control
  • Distributed Tracing
  • Streams for Apache Kafka Console (technology preview)
  • Streams for Apache Kafka Proxy (technology preview)
Note

Streams for Apache Kafka Console and Streams for Apache Kafka Proxy are not production-ready. As technology previews, they have been tested on x86 and amd64 architectures only.

See also Chapter 12, Supported integration with Red Hat products.

10.5. Console supported browsers

Streams for Apache Kafka Console is supported on the most recent stable releases of Firefox, Edge, Chrome, and WebKit-based browsers.

10.6. Subscription limits and core usage

Cores used by Red Hat components and product operators do not count against subscription limits. Additionally, cores or vCPUs allocated to ZooKeeper nodes are excluded from subscription compliance calculations and do not count towards a subscription.

10.7. Storage requirements

Streams for Apache Kafka has been tested with block storage and is compatible with the XFS and ext4 file systems, both of which are commonly used with Kafka. File storage options, such as NFS, are not compatible.

Chapter 11. Component details

The following table shows the component versions for each Streams for Apache Kafka release.

Note

Components like the operators, console, and proxy only apply to using Streams for Apache Kafka on OpenShift.

| Streams for Apache Kafka | Apache Kafka | Strimzi Operators | Kafka Bridge | OAuth | Cruise Control | Console | Proxy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2.8.0 | 3.8.0 | 0.43.0 | 0.30 | 0.15.0 | 2.5.138 | 0.1 | 0.8.0 |
| 2.7.0 | 3.7.0 | 0.40.0 | 0.28 | 0.15.0 | 2.5.137 | 0.1 | 0.5.1 |
| 2.6.0 | 3.6.0 | 0.38.0 | 0.27 | 0.14.0 | 2.5.128 | - | - |
| 2.5.2 | 3.5.0 (+3.5.2) | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.5.1 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.5.0 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.4.0 | 3.4.0 | 0.34.0 | 0.25.0 | 0.12.0 | 2.5.112 | - | - |
| 2.3.0 | 3.3.1 | 0.32.0 | 0.22.3 | 0.11.0 | 2.5.103 | - | - |
| 2.2.2 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.1 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.0 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.89 | - | - |
| 2.1.0 | 3.1.0 | 0.28.0 | 0.21.4 | 0.10.0 | 2.5.82 | - | - |
| 2.0.1 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 2.0.0 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 1.8.4 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.8.0 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.7.0 | 2.7.0 | 0.22.1 | 0.19.0 | 0.7.1 | 2.5.37 | - | - |
| 1.6.7 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.6 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.5 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.4 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.0 | 2.6.0 | 0.20.0 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.5.0 | 2.5.0 | 0.18.0 | 0.16.0 | 0.5.0 | - | - | - |
| 1.4.1 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.4.0 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.3.0 | 2.3.0 | 0.14.0 | 0.14.0 | 0.1.0 | - | - | - |
| 1.2.0 | 2.2.1 | 0.12.1 | 0.12.2 | - | - | - | - |
| 1.1.1 | 2.1.1 | 0.11.4 | - | - | - | - | - |
| 1.1.0 | 2.1.1 | 0.11.1 | - | - | - | - | - |
| 1.0 | 2.0.0 | 0.8.1 | - | - | - | - | - |

Chapter 12. Supported integration with Red Hat products

Streams for Apache Kafka 2.8 supports integration with the following Red Hat products:

Red Hat build of Keycloak
Provides OAuth 2.0 authentication and OAuth 2.0 authorization.
Red Hat 3scale API Management
Secures the Kafka Bridge and provides additional API management features.
Red Hat build of Debezium
Monitors databases and creates event streams.
Red Hat build of Apicurio Registry
Provides a centralized store of service schemas for data streaming.
Red Hat build of Apache Camel K
Provides a lightweight integration framework.

For information on the functionality these products can introduce to your Streams for Apache Kafka deployment, refer to the product documentation.

12.1. Red Hat build of Keycloak

Streams for Apache Kafka supports OAuth 2.0 token-based authorization through Red Hat build of Keycloak Authorization Services, providing centralized management of security policies and permissions.

Note

Red Hat build of Keycloak replaces Red Hat Single Sign-On, which is now in maintenance support. We are working on updating our documentation, resources, and media to reflect this transition. In the interim, content that describes using Single Sign-On in the Streams for Apache Kafka documentation also applies to using the Red Hat build of Keycloak.

12.2. Red Hat 3scale API Management

If you deployed the Kafka Bridge on OpenShift Container Platform, you can use it with 3scale. 3scale API Management can secure the Kafka Bridge with TLS, and provide authentication and authorization. Integration with 3scale also means that additional features, such as metrics, rate limiting, and billing, are available.

For information on deploying 3scale, see Using 3scale API Management with the Streams for Apache Kafka Bridge.

12.3. Red Hat build of Debezium

The Red Hat build of Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate the Red Hat build of Debezium with Streams for Apache Kafka. Following a deployment of Streams for Apache Kafka, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to Streams for Apache Kafka on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.
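For illustration, a Debezium connector is typically applied as a KafkaConnector resource on a Kafka Connect cluster that includes the connector plugin. The cluster name, connector class, and database settings below are assumptions for a hypothetical PostgreSQL source:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector
  labels:
    # Must match the name of your KafkaConnect cluster
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    # Hypothetical connection settings for a PostgreSQL database
    database.hostname: postgres
    database.port: 5432
    database.user: debezium
    database.dbname: inventory
    topic.prefix: inventory
```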

For more information on deploying Debezium with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Debezium.

12.4. Red Hat build of Apicurio Registry

You can use the Red Hat build of Apicurio Registry as a centralized store of service schemas for data streaming. Red Hat build of Apicurio Registry provides schema registry support for schema technologies such as:

  • Avro
  • Protobuf
  • JSON schema

Apicurio Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.

Using Apicurio Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.

For example, the schemas to serialize and deserialize messages can be stored in the registry, which are then referenced from the applications that use them to ensure that the messages that they send and receive are compatible with those schemas.

Kafka client applications can push or pull their schemas from Apicurio Registry at runtime.

For more information on using the Red Hat build of Apicurio Registry with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Apicurio Registry.

12.5. Red Hat build of Apache Camel K

The Red Hat build of Apache Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K supports serverless integration, which allows you to develop and deploy integration tasks without managing the underlying infrastructure. You can use Camel K to build and integrate event-driven applications with your Streams for Apache Kafka environment. For scenarios that require real-time data synchronization between different systems or databases, Camel K can capture and transform change events and send them to Streams for Apache Kafka for distribution to other systems.

For more information on using Camel K with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Apache Camel K.

Revised on 2024-11-19 16:01:16 UTC

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.