Chapter 5. Features
Streams for Apache Kafka 2.9 introduces the features described in this section.
Streams for Apache Kafka 2.9 on OpenShift is based on Apache Kafka 3.9.x and Strimzi 0.45.x.
To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.
5.1. Streams for Apache Kafka 2.9.x (Long Term Support)
Streams for Apache Kafka 2.9.x is the Long Term Support (LTS) offering for Streams.
The latest patch release is Streams for Apache Kafka 2.9.3, and the product images have been updated to this version.
For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy.
5.2. OpenShift Container Platform support
Streams for Apache Kafka 2.9 is supported on OpenShift Container Platform 4.14 and later.
For more information, see Chapter 12, Supported Configurations.
5.3. Kafka 3.9.x support
Streams for Apache Kafka supports and uses Apache Kafka 3.9.x. Updates for Apache Kafka 3.9.1 were introduced in the 2.9.1 patch release, and the 2.9.3 patch release continues to use this version. Only Kafka distributions built by Red Hat are supported.
You must upgrade the Cluster Operator to Streams for Apache Kafka version 2.9 before you can upgrade brokers and client applications to Kafka 3.9.x. For upgrade instructions, see Upgrading Streams for Apache Kafka.
Refer to the Kafka 3.9.0 and Kafka 3.9.1 Release Notes for additional information.
Kafka 3.8.x is supported only for the purpose of upgrading to Streams for Apache Kafka 2.9.
Last release to support ZooKeeper
Kafka 3.9.x provides access to KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol. Kafka 3.9 is the final version to support ZooKeeper. Consequently, Streams for Apache Kafka 2.9.x is the last version compatible with Kafka clusters using ZooKeeper.
To deploy Kafka clusters in KRaft (Kafka Raft metadata) mode without ZooKeeper, the Kafka custom resource must include the annotation strimzi.io/kraft="enabled", and you must use KafkaNodePool resources to manage the configuration of groups of nodes.
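As a minimal sketch (the cluster name, node pool layout, and storage sizes are illustrative, and the strimzi.io/node-pools annotation is assumed to be required alongside the KRaft annotation in this release), a KRaft-based deployment might look like:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  labels:
    strimzi.io/cluster: my-cluster   # ties the pool to the Kafka resource below
spec:
  replicas: 3
  roles:
    - controller                     # KRaft controller role
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: "enabled"      # run without ZooKeeper
    strimzi.io/node-pools: "enabled" # manage nodes through KafkaNodePool resources
spec:
  kafka:
    version: 3.9.0
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
```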
Migrating to Kafka in KRaft mode
To prepare for Streams for Apache Kafka 3.0, migrate to Kafka in KRaft mode.
It is possible to roll back the procedure for KRaft migration, but failure to follow the rollback steps correctly can lead to cluster failure due to metadata inconsistencies. For more information, see Performing a rollback on migration.
KRaft mode limitations
Kafka 3.9 introduces dynamic controller quorums, which allow you to scale controllers in KRaft mode without recreating the cluster. However, migration from static to dynamic controller quorums is not currently supported. This limitation means that existing Kafka clusters using static controller quorums cannot be migrated to use dynamic quorums.
To maintain compatibility with existing KRaft-based deployments, Streams for Apache Kafka on OpenShift continues to use static controller quorums only, even for new installations. As a result, dynamic controller quorums are not yet supported, regardless of whether you are creating a new cluster or migrating or managing an existing one.
Support for dynamic quorums is expected in a future Kafka release.
5.4. Streams for Apache Kafka
5.4.1. Support for automatic rebalancing
You can scale a Kafka cluster by adjusting the number of brokers using the spec.replicas property in the Kafka or KafkaNodePool custom resource used in deployment.
Enable auto-rebalancing to automatically redistribute topic partitions when scaling a cluster up or down. Auto-rebalancing requires a Cruise Control deployment, a rebalancing template for the operation, and autoRebalance configuration in the Kafka resource that references the template. When enabled, clusters that have been scaled up or down are rebalanced without further intervention.
- After scaling up, auto-rebalancing redistributes some existing partitions to the newly added brokers.
- Before scaling down, if the brokers to be removed host partitions, the operator triggers auto-rebalancing to move the partitions, freeing the brokers for removal.
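The pieces above can be sketched as follows; the resource names are illustrative, and the template is assumed to be a KafkaRebalance resource marked with the strimzi.io/rebalance-template annotation:

```yaml
# Rebalancing template: not run directly, only referenced by autoRebalance
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance-template
  labels:
    strimzi.io/cluster: my-cluster
  annotations:
    strimzi.io/rebalance-template: "true"
spec: {}                      # default goals; custom goals could go here
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ... kafka configuration ...
  cruiseControl: {}           # Cruise Control deployment is required
  autoRebalance:
    - mode: add-brokers       # rebalance after scale-up
      template:
        name: my-rebalance-template
    - mode: remove-brokers    # rebalance before scale-down
      template:
        name: my-rebalance-template
```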
For more information, see Triggering auto-rebalances when scaling clusters.
5.4.2. Capability to move data between JBOD disks using Cruise Control
If you are using JBOD storage and have Cruise Control installed with Streams for Apache Kafka, you can now reassign partitions between the JBOD disks used for storage on the same broker. This capability also allows you to remove JBOD disks without data loss.
You configure a KafkaRebalance resource in remove-disks mode and specify a list of broker IDs with corresponding volume IDs for partition reassignment. Cruise Control generates an optimization proposal based on the configuration and reassigns the partitions when approved manually or automatically.
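A sketch of such a KafkaRebalance resource, assuming the broker and volume IDs shown and the moveReplicasOffVolumes property for listing them:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: remove-disks
  moveReplicasOffVolumes:
    - brokerId: 0
      volumeIds: [1]      # move partitions off volume 1 on broker 0
    - brokerId: 1
      volumeIds: [1, 2]   # move partitions off volumes 1 and 2 on broker 1
```

Once the optimization proposal is approved, the listed volumes no longer host partitions and the disks can be removed without data loss.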
For more information, see Using Cruise Control to reassign partitions on JBOD disks.
5.4.3. Mechanism to manage connector offsets
A new mechanism allows connector offsets to be managed through KafkaConnector and KafkaMirrorMaker2 resources. You can now list, alter, and reset offsets.
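For example, to list offsets, the connector resource is annotated and points at a config map to receive the result (the resource and config map names are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
  annotations:
    strimzi.io/connector-offsets: list   # other values: alter, reset
spec:
  # ... connector class and configuration ...
  listOffsets:
    toConfigMap:
      name: my-connector-offsets         # operator writes the offsets here
```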
For more information, see Configuring Kafka Connect connectors.
5.4.4. Templates for host and advertisedHost properties
Hostnames and advertised hostnames for individual brokers can be specified using the host and advertisedHost properties. This release introduces support for using variables, such as {nodeId} or {nodePodName}, in the following templates:
- advertisedHostTemplate
- hostTemplate
By using templates, you no longer need to configure each broker individually. Streams for Apache Kafka automatically replaces the template variables with the corresponding values for each broker.
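For instance, in a listener configuration the templates might look like this (the ingress domain is illustrative):

```yaml
listeners:
  - name: external
    port: 9094
    type: ingress
    tls: true
    configuration:
      bootstrap:
        host: bootstrap.myingress.com
      # {nodeId} is replaced per broker, so no per-broker overrides are needed
      hostTemplate: broker-{nodeId}.myingress.com
      advertisedHostTemplate: broker-{nodeId}.myingress.com
```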
For more information, see Overriding advertised addresses for brokers and Specifying listener types.
5.4.5. Environment variable configuration from config maps and secrets
Environment variables for any container deployed by Streams for Apache Kafka may now be based on values specified in a Secret or ConfigMap. This replaces the requirement to use the ExternalConfiguration schema for Kafka Connect and MirrorMaker 2 containers, which is now deprecated.
Values are referenced in the container configuration using the valueFrom.secretKeyRef or valueFrom.configMapKeyRef properties.
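A sketch for a Kafka Connect container template, assuming a Secret named my-secret and a ConfigMap named my-config-map exist:

```yaml
template:
  connectContainer:
    env:
      - name: MY_SECRET_VALUE
        valueFrom:
          secretKeyRef:        # value read from a Secret
            name: my-secret
            key: my-key
      - name: MY_CONFIG_VALUE
        valueFrom:
          configMapKeyRef:     # value read from a ConfigMap
            name: my-config-map
            key: my-key
```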
For more information, see Loading configuration values from environment variables.
5.4.6. Disabling pod disruption budget generation
Strimzi generates pod disruption budget resources for Kafka, Kafka Connect worker, MirrorMaker2 worker, and Kafka Bridge worker nodes.
If you want to use custom pod disruption budget resources, you can now set the STRIMZI_POD_DISRUPTION_BUDGET_GENERATION environment variable to false in the Cluster Operator configuration.
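In the Cluster Operator Deployment, the setting is an ordinary container environment variable:

```yaml
# Cluster Operator Deployment (container env excerpt)
env:
  - name: STRIMZI_POD_DISRUPTION_BUDGET_GENERATION
    value: "false"   # operator no longer generates PodDisruptionBudget resources
```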
For more information, see Disabling pod disruption budget generation.
5.4.7. Support for CSI volumes in templates
To support CSI volumes, a new property named csi has been added to the AdditionalVolume schema. The property maps to the Kubernetes CSIVolumeSource structure, allowing CSI volumes to be defined in container template fields.
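As an illustration using the Secrets Store CSI driver (the driver name and provider class are example values, not a requirement of the feature):

```yaml
template:
  pod:
    volumes:
      - name: secrets-store
        csi:                                   # new csi property in AdditionalVolume
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: my-provider-class
```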
For more information, see AdditionalVolume schema reference and Additional volumes.
5.5. Kafka Bridge
5.5.1. Create topics
Use the new admin/topics endpoint of the Kafka Bridge API to create topics. You can specify the topic name, partition count, and replication factor in the request body.
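A request to the endpoint might look like the following sketch; the JSON field names are assumptions based on the Bridge API conventions and should be checked against the API reference:

```
POST /admin/topics HTTP/1.1
Content-Type: application/json

{
  "topic_name": "my-topic",
  "partitions_count": 3,
  "replication_factor": 1
}
```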
For more information, see the Kafka Bridge API reference.
5.6. Proxy
Streams for Apache Kafka Proxy is currently a technology preview.
5.6.1. mTLS client authentication
When configuring proxies, you can now use trust properties to configure virtual clusters to use TLS client authentication.
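A hypothetical sketch of a virtual cluster's TLS configuration; the file paths, store format, and exact property layout are assumptions and should be checked against the proxy configuration reference:

```yaml
virtualClusters:
  - name: my-cluster-proxy
    # ... target cluster and network configuration ...
    tls:
      key:
        storeFile: /opt/proxy/server/keystore.p12
        storePassword:
          passwordFile: /opt/proxy/server/keystore-password
      trust:
        storeFile: /opt/proxy/trust/ca.p12        # CA used to verify client certificates
        storePassword:
          passwordFile: /opt/proxy/trust/ca-password
        trustOptions:
          clientAuth: REQUIRED                    # enforce mTLS for connecting clients
```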
For more information, see Securing connections from clients.
5.7. Console
5.7.1. Console moves to GA
The console (user interface) for Streams for Apache Kafka moves to GA. It is designed to seamlessly integrate with your Streams for Apache Kafka deployment, providing a centralized hub for monitoring and managing Kafka clusters. Deploy the console and connect it to Kafka clusters managed by Streams for Apache Kafka.
Gain insights into each connected cluster through dedicated console pages covering brokers, topics, and consumer groups. View essential information, such as the status of a Kafka cluster, before looking into specific details about brokers, topics, or connected consumer groups.
For more information, see the Streams for Apache Kafka Console guide.
5.7.2. Reset consumer offsets
You can now reset consumer offsets of a specific consumer group from the Consumer Groups page.
For more information, see Resetting consumer offsets.
5.7.3. Manage rebalances
When you configure KafkaRebalance resources to generate optimization proposals on a cluster, you can manage the proposals and any resulting rebalances from the Brokers page.
For more information, see Managing rebalances.
5.7.4. Pause reconciliations
Pause and resume cluster reconciliations from the Cluster overview page. While paused, any changes to the cluster configuration using the Kafka custom resource are ignored until reconciliation is resumed.
For more information, see Pausing reconciliation of clusters.
5.7.5. Support for authorization configuration
The console now supports configuration of authorization rules in the console deployment configuration. Enable secure console connections to Kafka clusters using an OpenID Connect (OIDC) provider, such as Red Hat build of Keycloak. The configuration can be set up for all clusters or at the cluster level.
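A rough sketch of what such console deployment configuration might contain; the property names, OIDC URLs, and role layout here are assumptions for illustration and should be verified against the console configuration reference:

```yaml
security:
  oidc:
    authServerUrl: https://keycloak.example.com/realms/my-realm
    clientId: console-client
    clientSecret: ${CONSOLE_CLIENT_SECRET}   # injected from a Secret
  subjects:
    - claim: groups                          # map a token claim to console roles
      include:
        - kafka-admins
      roleNames:
        - administrators
  roles:
    - name: administrators
      rules:
        - resources:
            - kafka                          # rule scoped to Kafka cluster resources
          privileges:
            - '*'
```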
For more information, see Deploying the console.