Release Notes for Streams for Apache Kafka 2.9 on OpenShift


Red Hat Streams for Apache Kafka 2.9

Highlights of what's new and what's changed with this release of Streams for Apache Kafka on OpenShift Container Platform

Abstract

The release notes summarize the new features, enhancements, and fixes introduced in the Streams for Apache Kafka 2.9 release.

Chapter 1. Notification of name change to Streams for Apache Kafka

AMQ Streams is being renamed to Streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat’s product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name.

Chapter 2. Streams for Apache Kafka 2.9 Long Term Support

Streams for Apache Kafka 2.9 is a Long Term Support (LTS) offering for Streams for Apache Kafka.

For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.

Chapter 3. Upgrading from a Streams version before 1.7

The v1beta2 API version for all custom resources was introduced with Streams for Apache Kafka 1.7. For Streams for Apache Kafka 1.8, v1alpha1 and v1beta1 API versions were removed from all Streams for Apache Kafka custom resources apart from KafkaTopic and KafkaUser.

Upgrade of the custom resources to v1beta2 prepares Streams for Apache Kafka for a move to Kubernetes CRD v1, which is required for Kubernetes 1.22.

If you are upgrading from a Streams for Apache Kafka version prior to version 1.7:

  1. Upgrade to Streams for Apache Kafka 1.7
  2. Convert the custom resources to v1beta2
  3. Upgrade to Streams for Apache Kafka 1.8
Important

You must upgrade your custom resources to use API version v1beta2 before upgrading to Streams for Apache Kafka version 2.9.

3.1. Upgrading custom resources to v1beta2

To support the upgrade of custom resources to v1beta2, Streams for Apache Kafka provides an API conversion tool, which you can download from the Streams for Apache Kafka 1.8 software downloads page.

You perform the custom resource upgrades in two steps.

Step one: Convert the format of custom resources

Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:

  • Converting the YAML files that describe the configuration for Streams for Apache Kafka custom resources
  • Converting Streams for Apache Kafka custom resources directly in the cluster

Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.

Step two: Upgrade CRDs to v1beta2

Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.

For more information, see Upgrading from a Streams for Apache Kafka version earlier than 1.7.

Chapter 4. Kafka 4 impact and adoption schedule

Streams for Apache Kafka 3.0 is scheduled for release in 2025. The introduction of Apache Kafka 4 in the release brings significant changes to how Kafka clusters are deployed, configured, and operated.

For more information on how these changes affect the Streams for Apache Kafka 3.0 release, refer to the article Streams for Apache Kafka 3.0: Kafka 4 Impact and Adoption.

Chapter 5. Features

Streams for Apache Kafka 2.9 introduces the features described in this section.

Streams for Apache Kafka 2.9 on OpenShift is based on Apache Kafka 3.9.0 and Strimzi 0.45.x.

Note

To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.

5.1. OpenShift Container Platform support

Streams for Apache Kafka 2.9 is supported on OpenShift Container Platform 4.14 and later.

For more information, see Chapter 12, Supported Configurations.

5.2. Kafka 3.9.0 support

Streams for Apache Kafka now supports and uses Apache Kafka version 3.9.0. Only Kafka distributions built by Red Hat are supported.

You must upgrade the Cluster Operator to Streams for Apache Kafka version 2.9 before you can upgrade brokers and client applications to Kafka 3.9.0. For upgrade instructions, see Upgrading Streams for Apache Kafka.

Refer to the Kafka 3.9.0 Release Notes for additional information.

Kafka 3.8.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.9.

Last release to support ZooKeeper

Kafka 3.9.0 provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol. Kafka 3.9 is the final version to support ZooKeeper. Consequently, Streams for Apache Kafka 2.9 is the last version compatible with Kafka clusters using ZooKeeper.

To deploy Kafka clusters in KRaft (Kafka Raft metadata) mode without ZooKeeper, the Kafka custom resource must include the annotation strimzi.io/kraft="enabled", and you must use KafkaNodePool resources to manage the configuration of groups of nodes.

To prepare for Streams for Apache Kafka 3.0, migrate to Kafka in KRaft mode.
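A minimal sketch of a KRaft-based deployment follows, combining the strimzi.io/kraft annotation described above with a KafkaNodePool resource. The cluster name (my-cluster), pool name (dual-role), and storage settings are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: my-cluster  # links the pool to the Kafka cluster
spec:
  replicas: 3
  roles:
    - controller  # KRaft controller role replaces ZooKeeper
    - broker
  storage:
    type: persistent-claim
    size: 100Gi
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: enabled       # deploy in KRaft mode, without ZooKeeper
    strimzi.io/node-pools: enabled  # manage nodes through KafkaNodePool resources
spec:
  kafka:
    version: 3.9.0
    # listeners and other configuration ...
```

With node pools enabled, replica counts and storage are managed per pool rather than in the Kafka resource itself.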

KRaft mode limitations

For Kafka 3.8 and earlier, the controller quorums (which replace ZooKeeper) were of fixed size (static). Dynamic controller quorums were introduced in Kafka 3.9.

Migration from a static to a dynamic controller quorum is not currently supported, though this feature is expected in a future Kafka release. This limitation means that users of static controller quorums cannot scale their controllers dynamically.

Existing KRaft-based clusters using static controller quorums must continue using them. To ensure compatibility with existing KRaft-based clusters, Streams for Apache Kafka on OpenShift continues to use static controller quorums as well.

5.3. Streams for Apache Kafka

5.3.1. Support for automatic rebalancing

You can scale a Kafka cluster by adjusting the number of brokers using the spec.replicas property in the Kafka or KafkaNodePool custom resource used in deployment.

Enable auto-rebalancing to automatically redistribute topic partitions when scaling a cluster up or down. Auto-rebalancing requires a Cruise Control deployment, a rebalancing template for the operation, and autoRebalance configuration in the Kafka resource that references the template. When enabled, auto-rebalancing rebalances clusters that have been scaled up or down without further intervention.

  • After scaling up, auto-rebalancing redistributes some existing partitions to the newly added brokers.
  • Before scaling down, if the brokers to be removed host partitions, the operator triggers auto-rebalancing to move the partitions, freeing the brokers for removal.
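As a sketch of the pieces involved (resource names are illustrative, and the exact property names are assumptions based on the template-referencing mechanism described above), a KafkaRebalance resource annotated as a template is referenced per scaling mode from the Kafka resource:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance-template
  labels:
    strimzi.io/cluster: my-cluster
  annotations:
    strimzi.io/rebalance-template: "true"  # marks this resource as a reusable template
spec: {}  # goals and options for the auto-rebalance go here
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  cruiseControl:
    autoRebalance:
      - mode: add-brokers      # rebalance after scale-up
        template:
          name: my-rebalance-template
      - mode: remove-brokers   # rebalance before scale-down
        template:
          name: my-rebalance-template
```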

For more information, see Triggering auto-rebalances when scaling clusters.

5.3.2. Capability to move data between JBOD disks using Cruise Control

If you are using JBOD storage and have Cruise Control installed with Streams for Apache Kafka, you can now reassign partitions between the JBOD disks used for storage on the same broker. This capability also allows you to remove JBOD disks without data loss.

You configure a KafkaRebalance resource in remove-disks mode and specify a list of broker IDs with corresponding volume IDs for partition reassignment. Cruise Control generates an optimization proposal based on the configuration and reassigns the partitions when approved manually or automatically.
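A sketch of a remove-disks rebalance, assuming the moveReplicasOffVolumes property for listing broker and volume IDs (broker and volume IDs shown are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: remove-disks
  moveReplicasOffVolumes:
    - brokerId: 0
      volumeIds: [1, 2]  # partitions on these JBOD volumes are moved to other volumes on broker 0
```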

For more information, see Using Cruise Control to reassign partitions on JBOD disks.

5.3.3. Mechanism to manage connector offsets

A new mechanism allows connector offsets to be managed through KafkaConnect and KafkaMirrorMaker2 resources. It’s now possible to list, alter, and reset offsets.
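As a sketch of the mechanism (the annotation value and the ConfigMap name my-connector-offsets are illustrative), an annotation on the KafkaConnector resource triggers the operation, with ConfigMap references for reading and writing offsets:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
  annotations:
    strimzi.io/connector-offsets: list  # or: alter, reset
spec:
  # ...
  listOffsets:
    toConfigMap:
      name: my-connector-offsets   # current offsets are written here
  alterOffsets:
    fromConfigMap:
      name: my-connector-offsets   # offsets to apply are read from here
```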

For more information, see Configuring Kafka Connect connectors.

5.3.4. Templates for host and advertisedHost properties

Hostnames and advertised hostnames for individual brokers can be specified using the host and advertisedHost properties. This release introduces support for using variables, such as {nodeId} or {nodePodName}, in the following templates:

  • advertisedHostTemplate
  • hostTemplate

By using templates, you no longer need to configure each broker individually. Streams for Apache Kafka automatically replaces the template variables with the corresponding values for each broker.
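A sketch of the templates in a listener configuration (the listener type, port, and mydomain.com hostnames are illustrative):

```yaml
listeners:
  - name: external
    port: 9094
    type: ingress
    tls: true
    configuration:
      hostTemplate: broker-{nodeId}.mydomain.com            # {nodeId} resolves per broker
      advertisedHostTemplate: broker-{nodeId}.mydomain.com
      bootstrap:
        host: bootstrap.mydomain.com
```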

For more information, see Overriding advertised addresses for brokers and Specifying listener types.

5.3.5. Environment variable configuration from config maps and secrets

Environment variables for any container deployed by Streams for Apache Kafka may now be based on values specified in a Secret or ConfigMap. This replaces the requirement to use the ExternalConfiguration schema for Kafka Connect and MirrorMaker 2 containers, which is now deprecated.

Values are referenced in the container configuration using the valueFrom.secretKeyRef or valueFrom.configMapKeyRef properties.
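For example, in a Kafka Connect deployment, the deprecated ExternalConfiguration approach can be replaced with container template environment variables (the variable, Secret, and ConfigMap names are illustrative):

```yaml
spec:
  template:
    connectContainer:
      env:
        - name: MY_API_KEY
          valueFrom:
            secretKeyRef:         # value taken from a Secret
              name: my-secret
              key: api-key
        - name: MY_SETTING
          valueFrom:
            configMapKeyRef:      # value taken from a ConfigMap
              name: my-config-map
              key: setting
```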

For more information, see Loading configuration values from environment variables.

5.3.6. Disabling pod disruption budget generation

Strimzi generates pod disruption budget resources for Kafka, Kafka Connect worker, MirrorMaker2 worker, and Kafka Bridge worker nodes.

If you want to use custom pod disruption budget resources, you can now set the STRIMZI_POD_DISRUPTION_BUDGET_GENERATION environment variable to false in the Cluster Operator configuration.
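A sketch of the environment variable in the Cluster Operator Deployment (the container name reflects the default Strimzi deployment and may differ in your installation):

```yaml
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: STRIMZI_POD_DISRUPTION_BUDGET_GENERATION
              value: "false"  # operator no longer generates PodDisruptionBudget resources
```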

For more information, see Disabling pod disruption budget generation.

5.3.7. Support for CSI volumes in templates

To support CSI volumes, a new property named csi has been added to the AdditionalVolume schema. This property maps to the Kubernetes API CSIVolumeSource structure, allowing CSI volumes to be defined in container template fields.
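A sketch of a CSI volume in the pod template, assuming the Secrets Store CSI driver as an example (the driver name, secretProviderClass, and mount path are illustrative):

```yaml
template:
  pod:
    volumes:
      - name: secrets-store
        csi:                                  # maps to the Kubernetes CSIVolumeSource
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: my-provider
  kafkaContainer:
    volumeMounts:
      - name: secrets-store
        mountPath: /mnt/secrets-store
```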

For more information, see AdditionalVolume schema reference and Additional volumes.

5.4. Kafka Bridge

5.4.1. Create topics

Use the new admin/topics endpoint of the Kafka Bridge API to create topics. You can specify the topic name, partition count, and replication factor in the request body.

For more information, see Securing connections from clients.

5.5. Proxy

Note

Streams for Apache Kafka Proxy is currently a technology preview.

5.5.1. mTLS client authentication

When configuring proxies, you can now use trust properties to configure virtual clusters to use TLS client authentication.

For more information, see Securing connections from clients.

5.6. Console

5.6.1. Console moves to GA

The console (user interface) for Streams for Apache Kafka moves to GA. It is designed to seamlessly integrate with your Streams for Apache Kafka deployment, providing a centralized hub for monitoring and managing Kafka clusters. Deploy the console and connect it to Kafka clusters managed by Streams for Apache Kafka.

Gain insights into each connected cluster through dedicated console pages covering brokers, topics, and consumer groups. View essential information, such as the status of a Kafka cluster, before looking into specific details about brokers, topics, or connected consumer groups.

For more information, see the Streams for Apache Kafka Console guide.

5.6.2. Reset consumer offsets

You can now reset consumer offsets of a specific consumer group from the Consumer Groups page.

For more information, see Resetting consumer offsets.

5.6.3. Manage rebalances

When you configure KafkaRebalance resources to generate optimization proposals on a cluster, you can manage the proposals and any resulting rebalances from the Brokers page.

For more information, see Managing rebalances.

5.6.4. Pause reconciliations

Pause and resume cluster reconciliations from the Cluster overview page. While paused, any changes to the cluster configuration using the Kafka custom resource are ignored until reconciliation is resumed.

For more information, see Pausing reconciliation of clusters.

5.6.5. Support for authorization configuration

The console now supports configuration of authorization rules in the console deployment configuration. Enable secure console connections to Kafka clusters using an OpenID Connect (OIDC) provider, such as Red Hat build of Keycloak. The configuration can be set up for all clusters or at the cluster level.

For more information, see Deploying the console.

Chapter 6. Enhancements

Streams for Apache Kafka 2.9 adds a number of enhancements.

6.1. Kafka 3.9.0 enhancements

For an overview of the enhancements introduced with Kafka 3.9.0, refer to the Kafka 3.9.0 Release Notes.

6.2. Streams for Apache Kafka

6.2.1. Configuration mechanism for quotas management

The Strimzi Quotas plugin moves to GA (General Availability). Use the plugin properties to set throughput and storage limits on brokers in your Kafka cluster configuration.

Warning

If you have previously used the Strimzi Quotas plugin in releases prior to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the latest .spec.kafka.quotas properties to avoid reconciliation issues when upgrading.
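A sketch of the plugin configuration using the .spec.kafka.quotas properties (the limit values are illustrative, and the exact property names are assumptions based on the Strimzi Quotas plugin schema):

```yaml
spec:
  kafka:
    quotas:
      type: strimzi
      producerByteRate: 1000000        # produce throughput limit in bytes/second per client
      consumerByteRate: 1000000        # fetch throughput limit in bytes/second per client
      minAvailableRatioPerVolume: 0.1  # block producers when free disk drops below 10%
```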

For more information, see Setting limits on brokers using the Kafka Static Quota plugin.

6.2.2. Change to unmanaged topic reconciliation

When finalizers are enabled (default), the Topic Operator no longer restores them on unmanaged KafkaTopic resources if removed. This behavior aligns with paused topics, where finalizers are also not restored.

6.2.3. ContinueReconciliationOnManualRollingUpdateFailure feature gate

The technology preview of the ContinueReconciliationOnManualRollingUpdateFailure feature gate moves to beta stage and is enabled by default. If required, ContinueReconciliationOnManualRollingUpdateFailure can be disabled in the feature gates configuration in the Cluster Operator.
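Feature gates are toggled through the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration, with a "-" prefix disabling a gate. A minimal sketch:

```yaml
env:
  - name: STRIMZI_FEATURE_GATES
    value: "-ContinueReconciliationOnManualRollingUpdateFailure"  # disable the gate
```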

6.2.4. Rolling pods once for CA renewal

Pods are now rolled only when the cluster CA key is replaced, not when the clients CA key is replaced, which is used solely for trust. Consequently, the restart event reason ClientCaCertKeyReplaced has been removed, and either CaCertRenewed or CaCertHasOldGeneration is now used as the event reason.

6.2.5. Rolling updates for CA certificates resume after interruption

Rolling updates for new CA certificate generations now resume from where they left off after an interruption, instead of restarting the process and rolling all pods again.

Chapter 7. Technology Previews

Technology Preview features included with Streams for Apache Kafka 2.9.

Important

Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.

7.1. Streams for Apache Kafka Proxy

Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware proxy designed to enhance Kafka-based systems. Through its filter mechanism it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself.

As part of the technology preview, you can try the Record Encryption filter and Record Validation filter. The Record Encryption filter uses industry-standard cryptographic techniques to apply encryption to Kafka messages, ensuring the confidentiality of data stored in the Kafka Cluster. The Record Validation filter validates records sent by a producer. Only records that pass the validation are sent to the broker.

For more information, see the Streams for Apache Kafka Proxy guide.

Chapter 8. Developer Previews

Developer preview features included with Streams for Apache Kafka 2.9.

As a Kafka cluster administrator, you can toggle a subset of features on and off using feature gates in the Cluster Operator deployment configuration. The feature gates available as developer previews are at an alpha level of maturity and disabled by default.

Important

Developer Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Developer Preview features in production environments. Developer Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Developer Preview Support Scope.

8.1. Tiered storage for Kafka brokers

Streams for Apache Kafka now supports tiered storage for Kafka brokers as a developer preview, allowing you to introduce custom remote storage solutions as well as local storage. Due to its current limitations, it is not recommended for production environments.

Remote storage configuration is specified using kafka.tieredStorage properties in the Kafka resource. You specify a custom remote storage manager to manage the tiered storage.

Example custom tiered storage configuration

kafka:
  tieredStorage:
    type: custom
    remoteStorageManager:
      className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager
      classPath: /opt/kafka/plugins/tiered-storage-s3/*
      config:
        # remote storage manager configuration (1)
        storage.bucket.name: my-bucket
  config:
    # ...
    rlmm.config.remote.log.metadata.topic.replication.factor: 1 # (2)

(1) Configure the custom remote storage manager with the necessary settings. The keys are automatically prefixed with rsm.config and appended to the Kafka broker configuration.
(2) Streams for Apache Kafka uses the TopicBasedRemoteLogMetadataManager for Remote Log Metadata Management (RLMM). Add RLMM configuration using an rlmm.config. prefix.
Note

If you want to use custom tiered storage, you must first add the tiered storage plugin to the Streams for Apache Kafka image by building a custom container image.

See Tiered storage (early access).

Chapter 9. Deprecated features

Deprecated features that were supported in previous releases of Streams for Apache Kafka.

9.1. Streams for Apache Kafka

9.1.1. Schema property deprecations

Schema | Deprecated property | Replacement property
AclRule | operation | operations
CruiseControlSpec | tlsSidecar | -
CruiseControlTemplate | tlsSidecarContainer | -
CruiseControlSpec.BrokerCapacity | disk | -
CruiseControlSpec.BrokerCapacity | cpuUtilization | -
EntityOperatorSpec | tlsSidecar | -
EntityTopicOperatorSpec | reconciliationIntervalSeconds | reconciliationIntervalMs
EntityTopicOperatorSpec | zookeeperSessionTimeoutSeconds | -
EntityTopicOperatorSpec | topicMetadataMaxAttempts | -
EntityUserOperator | zookeeperSessionTimeoutSeconds | -
ExternalConfiguration | env | Replaced by template.connectContainer.env
ExternalConfiguration | volumes | Replaced by template.pod.volumes and template.connectContainer.volumeMounts
JaegerTracing | type | -
KafkaConnectorSpec | pause | state
KafkaConnectTemplate | deployment | Replaced by StrimziPodSet resource
KafkaClusterTemplate | statefulset | Replaced by StrimziPodSet resource
KafkaExporterTemplate | service | -
KafkaMirrorMaker | all properties | -
KafkaMirrorMaker2ConnectorSpec | pause | state
KafkaMirrorMaker2MirrorSpec | topicsBlacklistPattern | topicsExcludePattern
KafkaMirrorMaker2MirrorSpec | groupsBlacklistPattern | groupsExcludePattern
ListenerStatus | type | name
PersistentClaimStorage | overrides | -
ZookeeperClusterTemplate | statefulset | Replaced by StrimziPodSet resource

See the Streams for Apache Kafka Custom Resource API Reference.

9.1.2. Java 11 deprecated in Streams for Apache Kafka 2.7

Support for Java 11 is deprecated from Kafka 3.7.0 and Streams for Apache Kafka 2.7. Java 11 will be unsupported for all Streams for Apache Kafka components, including clients, in release 3.0.

Streams for Apache Kafka supports Java 17. Use Java 17 when developing new applications. Plan to migrate any applications that currently use Java 11 to 17.

If you want to continue using Java 11 for the time being, Streams for Apache Kafka 2.5 provides Long Term Support (LTS). For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.

Note

Support for Java 8 was removed in Streams for Apache Kafka 2.4.0. If you are currently using Java 8, plan to migrate to Java 17 in the same way.

9.1.3. Storage overrides

The storage overrides (*.storage.overrides) for configuring per-broker storage are deprecated and will be removed in Streams for Apache Kafka 3.0. If you are using storage overrides, migrate to KafkaNodePool resources and use multiple node pools, each with a different storage class.
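A sketch of the migration target, using two node pools with different storage classes (the pool names, replica counts, and storage class names are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim
    size: 100Gi
    class: fast-storage       # storage class for this pool
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-b
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim
    size: 100Gi
    class: standard-storage   # different storage class, replacing per-broker overrides
```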

For more information, see PersistentClaimStorage schema reference.

9.1.4. Environment variable configuration provider

You can use configuration providers to load configuration data from external sources for all Kafka components, including producers and consumers.

Previously, you could enable the io.strimzi.kafka.EnvVarConfigProvider environment variable configuration provider using the config.providers properties in the spec configuration of a component. However, this provider is now deprecated and will be removed in Streams for Apache Kafka 3.0. Therefore, it is recommended to update your implementation to use Kafka’s own environment variable configuration provider (org.apache.kafka.common.config.provider.EnvVarConfigProvider) to provide configuration properties as environment variables.

Example configuration to enable the environment variable configuration provider

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: env
    config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider
  # ...

9.1.5. Kafka MirrorMaker 2 identity replication policy

Identity replication policy is a feature used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the source cluster’s name, the topic retains its original name. This setting is particularly useful for active/passive backups and data migration scenarios.

To implement an identity replication policy, you must specify a replication policy class (replication.policy.class) in the MirrorMaker 2 configuration. Previously, you could specify the io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy class included with the Streams for Apache Kafka mirror-maker-2-extensions component. However, this component is now deprecated and will be removed in Streams for Apache Kafka 3.0. Therefore, it is recommended to update your implementation to use Kafka’s own replication policy class (org.apache.kafka.connect.mirror.IdentityReplicationPolicy).
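A sketch of the updated policy class in a KafkaMirrorMaker2 resource (the cluster aliases are illustrative; the class name is Kafka's own, as noted above):

```yaml
spec:
  mirrors:
    - sourceCluster: cluster-a
      targetCluster: cluster-b
      sourceConnector:
        config:
          # Kafka's own identity policy, replacing the deprecated
          # io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```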

For more information, see Configuring Kafka MirrorMaker 2.

9.1.6. Kafka MirrorMaker 1

Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 was deprecated in Kafka 3.0 and will be removed in Kafka 4.0.0 and in Streams for Apache Kafka 3.0, including the KafkaMirrorMaker custom resource. MirrorMaker 2 will be the only version available. MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters. To avoid disruptions, transition to MirrorMaker 2 before support ends.

If you’re using MirrorMaker 1, you can replicate its functionality in MirrorMaker 2 by using the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy class. By default, MirrorMaker 2 renames topics replicated to a target cluster, but IdentityReplicationPolicy preserves the original topic names, enabling the same active/passive unidirectional replication as MirrorMaker 1.

For more information, see Configuring Kafka MirrorMaker 2.

9.2. Kafka Bridge

9.2.1. OpenAPI v2 (Swagger)

Support for OpenAPI v2 is now deprecated and will be removed in Streams for Apache Kafka 3.0. OpenAPI v3 is now supported. Plan to move to using OpenAPI v3.

During the transition, the /openapi endpoint continues to return the OpenAPI v2 specification, which is also available from an additional /openapi/v2 endpoint. A new /openapi/v3 endpoint returns the OpenAPI v3 specification.

9.2.2. Kafka Bridge span attributes

The following Kafka Bridge span attributes are deprecated with replacements shown where applicable:

  • http.method replaced by http.request.method
  • http.url replaced by url.scheme, url.path, and url.query
  • messaging.destination replaced by messaging.destination.name
  • http.status_code replaced by http.response.status_code
  • messaging.destination.kind=topic without replacement

Kafka Bridge uses OpenTelemetry for distributed tracing. The changes are in line with changes to the OpenTelemetry semantic conventions. The attributes will be removed in a future release of the Kafka Bridge.

Chapter 10. Fixed issues

The issues fixed in Streams for Apache Kafka 2.9 on OpenShift.

For details of the issues fixed in Kafka 3.9.0, refer to the Kafka 3.9.0 Release Notes.

Table 10.1. Streams for Apache Kafka fixed issues
Issue Number | Description
ENTMQST-4324 | Make it possible to use Cruise Control to move all data between two JBOD disks
ENTMQST-5318 | [KAFKA] Improve MirrorMaker logging in case of authorization errors
ENTMQST-6234 | [BRIDGE] path label in metrics can contain very different values and that makes it hard to work with the metrics
ENTMQST-6277 | Do not generate empty required arrays in OneOf definition
ENTMQST-6278 | The namespace.mapper configuration option of Mongodb Sink connector is reported as forbidden
ENTMQST-6282 | Fix port handling in the Kafka Agent
ENTMQST-6283 | Improve handling of custom Cruise Control topic configurations
ENTMQST-6312 | Improve handling of invalid topic configurations
ENTMQST-6313 | The KafkaTopic.status.topicId is never updated
ENTMQST-6331 | Use init container for Kafka nodes only when needed
ENTMQST-6340 | Improve documentation, logging, and automation of certificate renewal activities on OpenShift
ENTMQST-6342 | Remove-brokers rebalancing seems to get stuck by race condition
ENTMQST-6391 | CA cert annotations aren’t updated during CaReconciler rolling update
ENTMQST-6410 | Findings in DAST scans results for 2.8.0
ENTMQST-6444 | Support for mounting CSI volumes

Table 10.2. Streams for Apache Kafka Console fixed issues
Issue Number | Description
ASUI-96 | Pause and Resume Kafka Reconciliation
ASUI-77 | Filtering topics by ID not working
ASUI-92 | Kafka Rebalance Management

10.1. Security updates

Check the latest information about Streams for Apache Kafka security updates in the Red Hat Product Advisories portal.

10.2. Errata

Check the latest security and product enhancement advisories for Streams for Apache Kafka.

Chapter 11. Known issues

This section lists the known issues for Streams for Apache Kafka 2.9 on OpenShift.

11.1. Multi-Version upgrades from the OperatorHub LTS channel

Currently, multi-version upgrades between Long Term Support (LTS) versions are not supported through the Operator Lifecycle Manager (OLM) when using the OperatorHub LTS channel.

For example, you cannot directly upgrade from version 2.2 LTS to version 2.9 LTS. Instead, you must perform incremental upgrades, stepping through each intermediate minor version to reach version 2.9.

11.2. Cruise Control CPU utilization estimation

Cruise Control for Streams for Apache Kafka has a known issue that relates to the calculation of CPU utilization estimation. CPU utilization is calculated as a percentage of the defined capacity of a broker pod. The issue occurs when running Kafka brokers across nodes with varying numbers of CPU cores. For example, node1 might have 2 CPU cores and node2 might have 4 CPU cores. In this situation, Cruise Control can underestimate or overestimate the CPU load of brokers. The issue can prevent cluster rebalances when the pod is under heavy load.

There are two workarounds for this issue.

Workaround one: Equal CPU requests and limits

You can set CPU requests equal to CPU limits in Kafka.spec.kafka.resources. That way, all CPU resources are reserved upfront and are always available. This configuration allows Cruise Control to properly evaluate the CPU utilization when preparing the rebalance proposals based on CPU goals.
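A sketch of the first workaround (the CPU and memory values are illustrative):

```yaml
spec:
  kafka:
    resources:
      requests:
        cpu: "2"       # requests equal to limits, so capacity is fixed
        memory: 8Gi
      limits:
        cpu: "2"
        memory: 8Gi
```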

Workaround two: Exclude CPU goals

You can exclude CPU goals from the hard and default goals specified in the Cruise Control configuration.

Example Cruise Control configuration without CPU goals

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}
  cruiseControl:
    brokerCapacity:
      inboundNetwork: 10000KB/s
      outboundNetwork: 10000KB/s
    config:
      hard.goals: >
        com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal
      default.goals: >
        com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.PotentialNwOutGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskUsageDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundUsageDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundUsageDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.TopicReplicaDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderReplicaDistributionGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderBytesInDistributionGoal

For more information, see Insufficient CPU capacity.

11.3. JMX authentication when running in FIPS mode

When running Streams for Apache Kafka in FIPS mode with JMX authentication enabled, clients may fail authentication. To work around this issue, do not enable JMX authentication while running in FIPS mode. We are investigating the issue and working to resolve it in a future release.

Chapter 12. Supported Configurations

Supported configurations for the Streams for Apache Kafka 2.9 release.

12.1. Supported platforms

The following platforms are tested for Streams for Apache Kafka 2.9 running with Kafka on the version of OpenShift stated.

Platform | Version | Architecture
Red Hat OpenShift Container Platform | 4.14 and later | x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM)
Red Hat OpenShift Container Platform disconnected environment | Latest | x86_64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM)
Red Hat OpenShift Dedicated | Latest | x86_64
Microsoft Azure Red Hat OpenShift (ARO) | Latest | x86_64
Red Hat OpenShift Service on AWS (ROSA), including ROSA with hosted control planes (HCP) | Latest | x86_64
Red Hat build of MicroShift | Latest | x86_64

Unsupported features

  • Red Hat build of MicroShift does not support Kafka Connect’s build configuration for building container images with connectors.
  • The s390x architecture (IBM Z and IBM® LinuxONE) does not support Streams for Apache Kafka OPA integration.

FIPS compliance

Streams for Apache Kafka is designed for FIPS. Streams for Apache Kafka container images are based on RHEL 9.2, which contains cryptographic modules submitted to NIST for approval.

To check which versions of RHEL are approved by the National Institute of Standards and Technology (NIST), see the Cryptographic Module Validation Program on the NIST website.

Red Hat OpenShift Container Platform is designed for FIPS. When running on RHEL or RHEL CoreOS booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries submitted to NIST for FIPS validation only on the x86_64, ppc64le (IBM Power), s390x (IBM Z), and aarch64 (64-bit ARM) architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards.

12.2. Supported clients

Only client libraries built by Red Hat are supported for Streams for Apache Kafka. Currently, Streams for Apache Kafka only provides a Java client library, which is tested and supported on kafka-clients-3.8.0.redhat-00007 and newer. Clients are supported for use with Streams for Apache Kafka 2.9 on the following operating systems and architectures:

| Operating System | Architecture | JVM |
| --- | --- | --- |
| RHEL and UBI 8 and 9 | x86, amd64, ppc64le (IBM Power), s390x (IBM Z and IBM® LinuxONE), aarch64 (64-bit ARM) | Java 11 (deprecated) and Java 17 |

Clients are tested with OpenJDK 11 and 17, though Java 11 was deprecated in Streams for Apache Kafka 2.7 and will be removed in version 3.0. The IBM JDK is supported but not regularly tested during each release. Oracle JDK 11 is not supported.

Support for Red Hat Universal Base Image (UBI) versions corresponds to the same RHEL version.

12.3. Supported Apache Kafka ecosystem

In Streams for Apache Kafka, only the following components released directly from the Apache Software Foundation are supported:

  • Apache Kafka Broker
  • Apache Kafka Connect
  • Apache MirrorMaker
  • Apache MirrorMaker 2
  • Apache Kafka Java Producer, Consumer, Management clients, and Kafka Streams
  • Apache ZooKeeper
Note

Apache ZooKeeper is supported solely as an implementation detail of Apache Kafka and should not be modified for other purposes.

12.4. Additional supported features

  • Kafka Bridge
  • Drain Cleaner
  • Cruise Control
  • Distributed Tracing
  • Streams for Apache Kafka Console
  • Streams for Apache Kafka Proxy (technology preview)
Note

Streams for Apache Kafka Proxy is not production-ready. For the technology preview, it has been tested on x86 and amd64 only.

See also, Chapter 14, Supported integration with Red Hat products.

12.5. Console supported browsers

Streams for Apache Kafka Console is supported on the most recent stable releases of Firefox, Edge, Chrome, and WebKit-based browsers.

12.6. Subscription limits and core usage

Cores used by Red Hat components and product operators do not count against subscription limits. Similarly, cores or vCPUs allocated to ZooKeeper nodes are excluded from subscription compliance calculations.

12.7. Storage requirements

Streams for Apache Kafka has been tested with block storage and is compatible with the XFS and ext4 file systems, both of which are commonly used with Kafka. File storage options, such as NFS, are not compatible.

Additional resources

For information on the supported configurations for the latest LTS release, see the Streams for Apache Kafka LTS Support Policy.

Chapter 13. Component details

The following table shows the component versions for each Streams for Apache Kafka release.

Note

Components such as the operators, console, and proxy apply only when using Streams for Apache Kafka on OpenShift.

| Streams for Apache Kafka | Apache Kafka | Strimzi Operators | Kafka Bridge | OAuth | Cruise Control | Console | Proxy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2.9.0 | 3.9.0 | 0.45.0 | 0.31 | 0.15.0 | 2.5.141 | 0.6 | 0.9.0 |
| 2.8.0 | 3.8.0 | 0.43.0 | 0.30 | 0.15.0 | 2.5.138 | 0.1 | 0.8.0 |
| 2.7.0 | 3.7.0 | 0.40.0 | 0.28 | 0.15.0 | 2.5.137 | 0.1 | 0.5.1 |
| 2.6.0 | 3.6.0 | 0.38.0 | 0.27 | 0.14.0 | 2.5.128 | - | - |
| 2.5.2 | 3.5.0 (+3.5.2) | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.5.1 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.5.0 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | - |
| 2.4.0 | 3.4.0 | 0.34.0 | 0.25.0 | 0.12.0 | 2.5.112 | - | - |
| 2.3.0 | 3.3.1 | 0.32.0 | 0.22.3 | 0.11.0 | 2.5.103 | - | - |
| 2.2.2 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.1 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | - |
| 2.2.0 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.89 | - | - |
| 2.1.0 | 3.1.0 | 0.28.0 | 0.21.4 | 0.10.0 | 2.5.82 | - | - |
| 2.0.1 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 2.0.0 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | - |
| 1.8.4 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.8.0 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | - |
| 1.7.0 | 2.7.0 | 0.22.1 | 0.19.0 | 0.7.1 | 2.5.37 | - | - |
| 1.6.7 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.6 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.5 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.4 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.6.0 | 2.6.0 | 0.20.0 | 0.19.0 | 0.6.1 | 2.5.11 | - | - |
| 1.5.0 | 2.5.0 | 0.18.0 | 0.16.0 | 0.5.0 | - | - | - |
| 1.4.1 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.4.0 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | - |
| 1.3.0 | 2.3.0 | 0.14.0 | 0.14.0 | 0.1.0 | - | - | - |
| 1.2.0 | 2.2.1 | 0.12.1 | 0.12.2 | - | - | - | - |
| 1.1.1 | 2.1.1 | 0.11.4 | - | - | - | - | - |
| 1.1.0 | 2.1.1 | 0.11.1 | - | - | - | - | - |
| 1.0 | 2.0.0 | 0.8.1 | - | - | - | - | - |

Chapter 14. Supported integration with Red Hat products

Streams for Apache Kafka 2.9 supports integration with the following Red Hat products:

Red Hat build of Keycloak
Provides OAuth 2.0 authentication and OAuth 2.0 authorization.
Red Hat 3scale API Management
Secures the Kafka Bridge and provides additional API management features.
Red Hat build of Debezium
Monitors databases and creates event streams.
Red Hat build of Apicurio Registry
Provides a centralized store of service schemas for data streaming.
Red Hat build of Apache Camel K
Provides a lightweight integration framework.

For information on the functionality these products can introduce to your Streams for Apache Kafka deployment, refer to the product documentation.

14.1. Red Hat build of Keycloak (formerly Red Hat Single Sign-On)

Streams for Apache Kafka supports OAuth 2.0 token-based authorization through Red Hat build of Keycloak Authorization Services, providing centralized management of security policies and permissions.

Note

Red Hat build of Keycloak replaces Red Hat Single Sign-On, which is now in maintenance support. We are working on updating our documentation, resources, and media to reflect this transition. In the interim, content that describes using Single Sign-On in the Streams for Apache Kafka documentation also applies to using the Red Hat build of Keycloak.
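Keycloak token-based authorization is enabled in the `Kafka` custom resource. The following is a minimal sketch, assuming an OAuth listener is already configured; the cluster name, hostname, and realm are placeholders:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster               # hypothetical cluster name
spec:
  kafka:
    # ...
    authorization:
      type: keycloak
      # Placeholder values -- substitute your Keycloak host and realm
      tokenEndpointUri: https://keycloak.example.com/realms/my-realm/protocol/openid-connect/token
      clientId: kafka
      delegateToKafkaAcls: true  # fall back to Kafka ACLs when no Keycloak policy grants access
```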

14.2. Red Hat 3scale API Management

If you deployed the Kafka Bridge on OpenShift Container Platform, you can use it with 3scale. 3scale API Management can secure the Kafka Bridge with TLS and provide authentication and authorization. Integration with 3scale also makes additional features, such as metrics, rate limiting, and billing, available.

For information on deploying 3scale, see Using 3scale API Management with the Streams for Apache Kafka Bridge.

14.3. Red Hat build of Debezium for change data capture

The Red Hat build of Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate the Red Hat build of Debezium with Streams for Apache Kafka. Following a deployment of Streams for Apache Kafka, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to Streams for Apache Kafka on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred.

For more information on deploying Debezium with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Debezium.
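Deploying a Debezium connector through Kafka Connect typically means creating a `KafkaConnector` custom resource. The following is a sketch only, using the Debezium PostgreSQL connector with hypothetical names and connection details; refer to the Debezium documentation for the full set of connector properties:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector          # hypothetical connector name
  labels:
    strimzi.io/cluster: my-connect   # hypothetical Kafka Connect cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    # Hypothetical database connection details; manage credentials
    # through a Secret rather than in plain text
    database.hostname: postgres
    database.port: 5432
    database.user: debezium
    database.dbname: inventory
    topic.prefix: inventory          # prefix for the change event topics
```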

14.4. Red Hat build of Apicurio Registry for schema validation

You can use the Red Hat build of Apicurio Registry as a centralized store of service schemas for data streaming. Red Hat build of Apicurio Registry provides schema registry support for schema technologies such as:

  • Avro
  • Protobuf
  • JSON schema

Apicurio Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints.

Using Apicurio Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code.

For example, the schemas to serialize and deserialize messages can be stored in the registry, which are then referenced from the applications that use them to ensure that the messages that they send and receive are compatible with those schemas.

Kafka client applications can push or pull their schemas from Apicurio Registry at runtime.

For more information on using the Red Hat build of Apicurio Registry with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Apicurio Registry.
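For example, a Kafka Connect deployment can be pointed at the registry through converter configuration. The following is a sketch only; the converter class and property names follow Apicurio Registry serdes conventions and should be verified against the Apicurio Registry documentation for your version:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect                   # hypothetical cluster name
spec:
  # ...
  config:
    # Serialize record values as Avro, resolving schemas in Apicurio Registry
    value.converter: io.apicurio.registry.utils.converter.AvroConverter
    value.converter.apicurio.registry.url: http://my-registry:8080/apis/registry/v2
    value.converter.apicurio.registry.auto-register: "true"  # register new schemas automatically
```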

14.5. Red Hat build of Apache Camel K

The Red Hat build of Apache Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K supports serverless integration, which allows for development and deployment of integration tasks without the need to manage the underlying infrastructure. You can use Camel K to build and integrate event-driven applications with your Streams for Apache Kafka environment. For scenarios requiring real-time data synchronization between different systems or databases, Camel K can be used to capture and transform change events and send them to Streams for Apache Kafka for distribution to other systems.

For more information on using Camel K with Streams for Apache Kafka, refer to the product documentation for the Red Hat build of Apache Camel K.

Revised on 2025-03-14 17:22:41 UTC

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.