Chapter 2. Features
Streams for Apache Kafka 2.7 introduces the features described in this section.
Streams for Apache Kafka 2.7 on OpenShift is based on Apache Kafka 3.7.0 and Strimzi 0.40.x.
To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.
2.1. OpenShift Container Platform support
Streams for Apache Kafka 2.7 is supported on OpenShift Container Platform 4.12 to 4.16.
For more information, see Chapter 10, Supported Configurations.
2.2. Kafka 3.7.0 support
Streams for Apache Kafka now supports and uses Apache Kafka version 3.7.0. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to Streams for Apache Kafka version 2.7 before you can upgrade brokers and client applications to Kafka 3.7.0. For upgrade instructions, see Upgrading Streams for Apache Kafka.
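For example, after the Cluster Operator upgrade, you change the Kafka version in the Kafka custom resource to roll the brokers to the new version. The following is a minimal sketch; the cluster name my-cluster is an example to adapt:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.7.0
    # ...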
Refer to the Kafka 3.7.0 Release Notes for additional information.
Kafka 3.6.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.7.
Kafka 3.7.0 provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol.
2.3. Supporting the v1beta2 API version
The v1beta2 API version for all custom resources was introduced with Streams for Apache Kafka 1.7. For Streams for Apache Kafka 1.8, v1alpha1 and v1beta1 API versions were removed from all Streams for Apache Kafka custom resources apart from KafkaTopic and KafkaUser.
Upgrade of the custom resources to v1beta2 prepares Streams for Apache Kafka for a move to Kubernetes CRD v1, which is required for Kubernetes 1.22.
If you are upgrading from a Streams for Apache Kafka version prior to version 1.7:
- Upgrade to Streams for Apache Kafka 1.7
- Convert the custom resources to v1beta2
- Upgrade to Streams for Apache Kafka 1.8
You must upgrade your custom resources to use API version v1beta2 before upgrading to Streams for Apache Kafka version 2.7.
2.3.1. Upgrading custom resources to v1beta2
To support the upgrade of custom resources to v1beta2, Streams for Apache Kafka provides an API conversion tool, which you can download from the Streams for Apache Kafka 1.8 software downloads page.
You perform the custom resource upgrade in two steps.
Step one: Convert the format of custom resources
Using the API conversion tool, you can convert the format of your custom resources into a format applicable to v1beta2 in one of two ways:
- Converting the YAML files that describe the configuration for Streams for Apache Kafka custom resources
- Converting Streams for Apache Kafka custom resources directly in the cluster
Alternatively, you can manually convert each custom resource into a format applicable to v1beta2. Instructions for manually converting custom resources are included in the documentation.
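At a minimum, the conversion updates the apiVersion of each custom resource to kafka.strimzi.io/v1beta2; it also updates any deprecated properties whose structure changed between API versions. A sketch of the apiVersion change for a Kafka resource, assuming a cluster named my-cluster:
Before conversion:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
After conversion:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...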
Step two: Upgrade CRDs to v1beta2
Next, using the API conversion tool with the crd-upgrade command, you must set v1beta2 as the storage API version in your CRDs. You cannot perform this step manually.
For more information, see Upgrading from a Streams for Apache Kafka version earlier than 1.7.
2.4. StableConnectIdentities feature gate permanently enabled
The StableConnectIdentities feature gate moves to GA (General Availability) and is now permanently enabled.
The feature uses StrimziPodSet resources to manage Kafka Connect and Kafka MirrorMaker 2 pods instead of using Deployment resources. This helps to minimize the number of rebalances of connector tasks.
With the StableConnectIdentities feature gate permanently enabled, direct downgrades from Streams for Apache Kafka 2.7 and newer to Streams for Apache Kafka 2.3 or earlier are not possible. You must first downgrade through one of the Streams for Apache Kafka versions in-between, disable the StableConnectIdentities feature gate, and then downgrade to Streams for Apache Kafka 2.3 or earlier.
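In the intermediate version, you disable the gate through the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration, in the same way as the other feature gates in this chapter. A sketch, applicable only to versions where the gate can still be disabled:
env:
  - name: STRIMZI_FEATURE_GATES
    value: -StableConnectIdentities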
2.5. KafkaNodePools feature gate now enabled by default
The KafkaNodePools feature gate moves to a beta level of maturity and is now enabled by default. The feature gate enables the configuration of different pools of Apache Kafka nodes through the KafkaNodePool custom resource.
A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. The KafkaNodePool custom resource represents the configuration for nodes only in the node pool. Each pool has its own unique configuration, which includes mandatory settings such as the number of replicas, storage configuration, and a list of assigned roles. Because you can assign roles to the nodes in a node pool, you can try the feature with a Kafka cluster that uses ZooKeeper for cluster management or with one that uses KRaft mode. You can assign a controller role, broker role, or both roles. When used with a ZooKeeper-based Apache Kafka cluster, the role must be set to broker.
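A minimal KafkaNodePool sketch for a broker-only pool follows; the cluster name my-cluster, the pool name pool-a, and the storage values are examples to adapt:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false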
To disable the KafkaNodePools feature gate, specify -KafkaNodePools in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.
Disabling the KafkaNodePools feature gate
env:
  - name: STRIMZI_FEATURE_GATES
    value: -KafkaNodePools
2.6. UnidirectionalTopicOperator feature gate now enabled by default
The UnidirectionalTopicOperator feature gate moves to a beta level of maturity and is now enabled by default. The feature gate introduces a unidirectional topic management mode. In unidirectional mode, you create Kafka topics using the KafkaTopic resource, which are then managed by the Topic Operator.
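For example, a KafkaTopic resource managed by the Topic Operator might look like the following sketch; the topic name, cluster label, and settings are examples to adapt:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000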
To disable the UnidirectionalTopicOperator feature gate, specify -UnidirectionalTopicOperator in the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.
Disabling the UnidirectionalTopicOperator feature gate
env:
  - name: STRIMZI_FEATURE_GATES
    value: -UnidirectionalTopicOperator
The bidirectional Topic Operator is not supported in KRaft mode and is deprecated.
2.7. KRaft: UseKRaft feature gate now enabled by default
KRaft mode in Streams for Apache Kafka is a technology preview, with some limitations, but this release introduces a number of new features that support KRaft.
The UseKRaft feature gate moves to a beta level of maturity and is now enabled by default. With the UseKRaft feature gate enabled, Kafka clusters are deployed in KRaft (Kafka Raft metadata) mode without ZooKeeper. To use Kafka in KRaft mode, the Kafka custom resource must also have the annotation strimzi.io/kraft="enabled".
To use KRaft mode, you must also use KafkaNodePool resources to manage the configuration of groups of nodes.
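A sketch of the annotations on the Kafka resource follows; the cluster name is an example, and the strimzi.io/node-pools annotation is shown on the assumption that node pools are enabled for the cluster:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: enabled
    strimzi.io/node-pools: enabled
spec:
  # ...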
To disable the UseKRaft feature gate, specify -UseKRaft,-KafkaNodePools as values for the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration.
Disabling the UseKRaft feature gate
env:
  - name: STRIMZI_FEATURE_GATES
    value: -UseKRaft,-KafkaNodePools
2.8. KRaft: Support for migrating from ZooKeeper-based to KRaft-based Kafka clusters
If you are using ZooKeeper for metadata management in your Kafka cluster, you can now migrate to using Kafka in KRaft mode.
During the migration, you do the following:
- Install a quorum of controller nodes as a node pool, which replaces ZooKeeper for management of your cluster.
- Enable KRaft migration in the cluster configuration by applying the strimzi.io/kraft="migration" annotation (see the sketch after this list).
- Switch the brokers to using KRaft and the controllers out of migration mode using the strimzi.io/kraft="enabled" annotation.
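A sketch of the migration annotation on the Kafka resource, assuming a cluster named my-cluster:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: migration
spec:
  # ...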
2.9. KRaft: Support for KRaft role transitions
Streams for Apache Kafka supports node transitions to different KRaft roles. Through node pool configuration, it’s now possible to perform the following transitions:
- Combining KRaft roles
- Transition from separate node pools with broker-only and controller-only roles to using a dual-role node pool.
- Splitting KRaft roles
- Transition from using a node pool with combined controller and broker roles to using two node pools with separate roles.
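For example, a dual-role node pool combines both roles in its configuration. The following sketch uses example names and storage values:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - controller
    - broker
  storage:
    type: persistent-claim
    size: 100Gi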
If partitions are still assigned to the nodes when you remove the broker role from a node pool configuration, the change is prevented.
See Transitioning to dual-role nodes and Transitioning to separate broker and controller roles.
Currently, scaling operations are only possible for broker-only node pools containing nodes that run as dedicated brokers.
2.10. KRaft: Kafka upgrades for KRaft-based clusters
KRaft to KRaft upgrades are now supported. Upgrade a KRaft-based Kafka cluster to a newer supported Kafka version and KRaft metadata version.
You specify the Kafka version, as before, and a KRaft metadata version using the new metadataVersion property in the Kafka resource:
KRaft metadata version configuration
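A minimal sketch; the cluster name and the exact metadata version string are examples to adapt:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.7.0
    metadataVersion: 3.7-IV4
    # ...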
If metadataVersion is not configured, Streams for Apache Kafka automatically updates it to the current default after the update to the Kafka version. Rolling updates ensure that each pod is using the broker binaries for the new version of Kafka.
See Upgrading KRaft-based Kafka clusters and client applications.
2.11. Tiered storage for Kafka brokers
Tiered storage is an early access Kafka feature, which is also available in Streams for Apache Kafka as a developer preview.
2.12. RHEL 7 no longer supported
RHEL 7 is no longer supported due to known incompatibility issues.