Chapter 2. Features


Streams for Apache Kafka 2.7 introduces the features described in this section.

Streams for Apache Kafka 2.7 on OpenShift is based on Apache Kafka 3.7.0 and Strimzi 0.40.x.

Note

To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.

2.1. OpenShift Container Platform support

Streams for Apache Kafka 2.7 is supported on OpenShift Container Platform 4.12 to 4.16.

For more information, see Chapter 10, Supported Configurations.

2.2. Kafka 3.7.0 support

Streams for Apache Kafka now supports and uses Apache Kafka version 3.7.0. Only Kafka distributions built by Red Hat are supported.

You must upgrade the Cluster Operator to Streams for Apache Kafka version 2.7 before you can upgrade brokers and client applications to Kafka 3.7.0. For upgrade instructions, see Upgrading Streams for Apache Kafka.

Refer to the Kafka 3.7.0 Release Notes for additional information.

Kafka 3.6.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.7.

Note

Kafka 3.7.0 provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol.

2.3. Supporting the v1beta2 API version

The v1beta2 API version for all custom resources was introduced with Streams for Apache Kafka 1.7. For Streams for Apache Kafka 1.8, v1alpha1 and v1beta1 API versions were removed from all Streams for Apache Kafka custom resources apart from KafkaTopic and KafkaUser.

Upgrade of the custom resources to v1beta2 prepares Streams for Apache Kafka for a move to Kubernetes CRD v1, which is required for Kubernetes 1.22.

If you are upgrading from a Streams for Apache Kafka version prior to version 1.7:

  1. Upgrade to Streams for Apache Kafka 1.7
  2. Convert the custom resources to v1beta2
  3. Upgrade to Streams for Apache Kafka 1.8
Important

You must upgrade your custom resources to use API version v1beta2 before upgrading to Streams for Apache Kafka version 2.7.

2.3.1. Upgrading custom resources to v1beta2

To support the upgrade of custom resources to v1beta2, Streams for Apache Kafka provides an API conversion tool, which you can download from the Streams for Apache Kafka 1.8 software downloads page.

You perform the custom resource upgrades in two steps.

Step one: Convert the format of custom resources

Using the API conversion tool, you can convert your custom resources to a format applicable to v1beta2 in one of two ways:

  • Converting the YAML files that describe the configuration for Streams for Apache Kafka custom resources
  • Converting Streams for Apache Kafka custom resources directly in the cluster

Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.
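
At minimum, a manual conversion updates the apiVersion of the resource, as in the following sketch for a Kafka resource (the cluster name is a placeholder; depending on the resource and the properties it uses, additional property changes may also be required, as described in the documentation):

Updating the API version of a custom resource

apiVersion: kafka.strimzi.io/v1beta2  # previously kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...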

Step two: Upgrade CRDs to v1beta2

Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.

For more information, see Upgrading from a Streams for Apache Kafka version earlier than 1.7.

2.4. StableConnectIdentities feature gate moves to GA

The StableConnectIdentities feature gate moves to GA (General Availability) and is now permanently enabled.

The feature uses StrimziPodSet resources to manage Kafka Connect and Kafka MirrorMaker 2 pods instead of using Deployment resources. This helps to minimize the number of rebalances of connector tasks.

Important

With the StableConnectIdentities feature gate permanently enabled, direct downgrades from Streams for Apache Kafka 2.7 and newer to Streams for Apache Kafka 2.3 or earlier are not possible. You must first downgrade through one of the Streams for Apache Kafka versions in-between, disable the StableConnectIdentities feature gate, and then downgrade to Streams for Apache Kafka 2.3 or earlier.

See StableConnectIdentities feature gate.

2.5. KafkaNodePools feature gate moves to beta

The KafkaNodePools feature gate moves to a beta level of maturity and is now enabled by default. The feature gate enables the configuration of different pools of Apache Kafka nodes through the KafkaNodePool custom resource.

A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. The KafkaNodePool custom resource represents the configuration for nodes only in the node pool. Each pool has its own unique configuration, which includes mandatory settings such as the number of replicas, storage configuration, and a list of assigned roles. As you can assign roles to the nodes in a node pool, you can try the feature with a Kafka cluster that uses ZooKeeper for cluster management or KRaft mode. You can assign a controller role, broker role, or both roles. When used with a ZooKeeper-based Apache Kafka cluster, the role must be set to broker.
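
For example, a broker-only node pool might be configured as follows (a minimal sketch; the pool and cluster names are placeholders, and the strimzi.io/cluster label links the pool to its Kafka cluster):

Example KafkaNodePool configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster  # name of the target Kafka resource
spec:
  replicas: 3        # mandatory: number of nodes in the pool
  roles:
    - broker         # must be broker when used with a ZooKeeper-based cluster
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false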

To disable the KafkaNodePools feature gate, specify -KafkaNodePools in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.

Disabling the KafkaNodePools feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: -KafkaNodePools

See Configuring node pools.

2.6. UnidirectionalTopicOperator feature gate moves to beta

The UnidirectionalTopicOperator feature gate moves to a beta level of maturity and is now enabled by default. The feature gate introduces a unidirectional topic management mode. In unidirectional mode, you create Kafka topics using the KafkaTopic resource, and the Topic Operator then manages them.
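
For example, a topic is created declaratively with a KafkaTopic resource (a minimal sketch; the topic and cluster names, and the retention setting, are placeholders):

Example KafkaTopic configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster  # Kafka cluster that owns the topic
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000  # 7 days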

To disable the UnidirectionalTopicOperator feature gate, specify -UnidirectionalTopicOperator in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.

Disabling the UnidirectionalTopicOperator feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: -UnidirectionalTopicOperator

Note

The bidirectional Topic Operator is not supported in KRaft mode and is deprecated.

See Using the Topic Operator.

2.7. KRaft: UseKRaft feature gate moves to beta

KRaft mode in Streams for Apache Kafka is a technology preview, with some limitations, but this release introduces a number of new features that support KRaft.

The UseKRaft feature gate moves to a beta level of maturity and is now enabled by default. With the UseKRaft feature gate enabled, Kafka clusters are deployed in KRaft (Kafka Raft metadata) mode without ZooKeeper. To use Kafka in KRaft mode, the Kafka custom resource must also have the annotation strimzi.io/kraft="enabled".

To use KRaft mode, you must also use KafkaNodePool resources to manage the configuration of groups of nodes.
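
For example, a Kafka resource deployed in KRaft mode carries the annotation on its metadata (a minimal sketch; my-cluster is a placeholder, and the strimzi.io/node-pools annotation opts the cluster into node pool management):

Example Kafka configuration for KRaft mode

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: enabled       # run this cluster in KRaft mode, without ZooKeeper
    strimzi.io/node-pools: enabled  # node configuration is managed through KafkaNodePool resources
spec:
  kafka:
    version: 3.7.0
    # ...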

To disable the UseKRaft feature gate, specify -UseKRaft,-KafkaNodePools as values for the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.

Disabling the UseKRaft feature gate

env:
  - name: STRIMZI_FEATURE_GATES
    value: -UseKRaft,-KafkaNodePools

See UseKRaft feature gate and Feature gate releases.

2.8. KRaft: Migrating from ZooKeeper to KRaft mode

If you are using ZooKeeper for metadata management in your Kafka cluster, you can now migrate to using Kafka in KRaft mode.

During the migration, you do the following:

  1. Install a quorum of controller nodes as a node pool, which replaces ZooKeeper for management of your cluster.
  2. Enable KRaft migration in the cluster configuration by applying the strimzi.io/kraft="migration" annotation, as shown in the example after this list.
  3. Switch the brokers to using KRaft and move the controllers out of migration mode by applying the strimzi.io/kraft="enabled" annotation.
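
For example, the migration is started by annotating the Kafka resource (a minimal sketch; my-cluster is a placeholder):

Example annotation to start KRaft migration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: migration  # later changed to "enabled" to complete the migration
spec:
  # ...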

See Migrating to KRaft mode.

2.9. KRaft: Support for KRaft role transitions

Streams for Apache Kafka supports transitioning nodes to different KRaft roles. Through node pool configuration, it is now possible to perform the following transitions:

  • Combining KRaft roles: Transition from separate node pools with broker-only and controller-only roles to using a single dual-role node pool.
  • Splitting KRaft roles: Transition from a node pool with combined controller and broker roles to two node pools with separate roles.

If partitions are still assigned to the brokers when you remove the broker role from a node pool configuration, the change is prevented.
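
For example, combining roles means changing the roles list of a node pool so that its nodes act as both controllers and brokers. The following sketch shows only the relevant part of a KafkaNodePool spec:

Example dual-role configuration in a KafkaNodePool

spec:
  replicas: 3
  roles:
    - controller
    - broker
  # ...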

See Transitioning to dual-role nodes and Transitioning to separate broker and controller roles.

Note

Currently, scaling operations are only possible for broker-only node pools containing nodes that run as dedicated brokers.

2.10. KRaft: Upgrades for KRaft-based Kafka clusters

KRaft to KRaft upgrades are now supported. You can upgrade a KRaft-based Kafka cluster to a newer supported Kafka version and KRaft metadata version.

You specify the Kafka version, as before, and a KRaft metadata version using the new metadataVersion property in the Kafka resource:

KRaft metadata version configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    metadataVersion: 3.7-IV2
    version: 3.7.0
    # ...

If metadataVersion is not configured, Streams for Apache Kafka automatically updates it to the current default after the update to the Kafka version. Rolling updates ensure that each pod is using the broker binaries for the new version of Kafka.

See Upgrading KRaft-based Kafka clusters and client applications.

2.11. Tiered storage for Kafka brokers

Tiered storage is an early access Kafka feature, which is also available in Streams for Apache Kafka as a developer preview.
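
Tiered storage is configured through custom remote storage settings in the Kafka resource. The following is a minimal sketch, assuming a custom RemoteStorageManager plugin is available in the broker image; the class name, class path, and configuration key shown are placeholders for your chosen plugin:

Example tiered storage configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    tieredStorage:
      type: custom
      remoteStorageManager:
        className: com.example.kafka.tiered.CustomRemoteStorageManager  # placeholder plugin class
        classPath: /opt/kafka/plugins/tiered-storage/*                  # placeholder plugin location
        config:
          storage.bucket.name: my-bucket  # placeholder plugin-specific setting
    # ...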

2.12. RHEL 7 no longer supported

RHEL 7 is no longer supported because of known incompatibility issues.
