Chapter 3. New features and enhancements


Features and enhancements introduced in Streams for Apache Kafka 3.1.

Streams for Apache Kafka 3.1 on OpenShift is based on Apache Kafka 4.1 and Strimzi 0.48.x.

Note

To view all the enhancements and bugs that are resolved in this release, see the Streams for Apache Kafka Jira project.

3.1. OpenShift 4.16–4.20

Streams for Apache Kafka 3.1 is tested on OpenShift Container Platform 4.16–4.20.

For more information, see Chapter 9, Supported Configurations.

3.2. Apache Kafka 4.1 support

Streams for Apache Kafka supports and uses Apache Kafka version 4.1. For an overview of the features and enhancements introduced with Kafka 4.1, refer to the Kafka 4.1 Release Notes.

Only Kafka distributions built by Red Hat are supported.

You must upgrade the Cluster Operator to Streams for Apache Kafka version 3.1 before you can upgrade Kafka nodes and client applications to Kafka 4.1. For upgrade instructions, see Upgrading Streams for Apache Kafka.

Kafka 4.0.x is supported only for the purpose of upgrading to Streams for Apache Kafka 3.1.

Note

Kafka 4.1 operates only in KRaft mode, where Kafka runs without ZooKeeper by utilizing the Raft protocol. Kafka 3.9 was the final version to support ZooKeeper. Consequently, Streams for Apache Kafka 2.9.x (LTS) is the last version compatible with Kafka clusters using ZooKeeper.

3.2.1. Requires KRaft clusters

To upgrade to Streams for Apache Kafka 3.0 or later, you must first migrate your Kafka clusters to KRaft mode.

3.2.2. KRaft limitations

Dynamic controller quorums are not currently supported in Streams for Apache Kafka on OpenShift, as the related Kafka issue was only recently resolved. For more information, see KAFKA-16538.

To maintain compatibility with existing KRaft-based deployments, Streams for Apache Kafka on OpenShift uses only static controller quorums. This applies to new and existing clusters, regardless of whether you’re deploying, migrating, or maintaining them.

Support for dynamic controller quorums is expected in a future Streams for Apache Kafka release.

3.3. Streams for Apache Kafka

A summary of new features and enhancements for Streams for Apache Kafka.

3.3.1. Metrics Reporter (technology preview)

As a technology preview, the Metrics Reporter exposes Kafka metrics directly over HTTP in a Prometheus-compatible format.
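As a sketch, enabling the reporter in a Kafka resource might look like the following; the strimziMetricsReporter type name and the allowList metric patterns are assumptions to verify against the metrics configuration reference:

```yaml
# Kafka resource fragment: enable the Metrics Reporter (technology preview)
spec:
  kafka:
    metricsConfig:
      type: strimziMetricsReporter
      values:
        allowList:
          - "kafka_log.*"       # metric name patterns are illustrative
          - "kafka_network.*"
```

The matching metrics are then exposed over HTTP in a Prometheus-compatible format.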

3.3.2. Cruise Control progress tracking

You can now monitor partition rebalance progress using status information in the KafkaRebalance resource. Progress details are stored in a ConfigMap, accessible with oc get configmaps, and include:

  • estimatedTimeToCompletionInMinutes — Estimated minutes until rebalance completes.
  • completedByteMovementPercentage — Percentage of data moved (0–100, rounded down).
  • executorState.json — Summary JSON from /kafkacruisecontrol/state?substates=executor, showing executor status, partition movement, concurrency limits, and total data to move.
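As a sketch, the progress ConfigMap (assumed here to share the name of the KafkaRebalance resource) might contain entries such as the following; the name and values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-rebalance                        # hypothetical KafkaRebalance name
data:
  estimatedTimeToCompletionInMinutes: "5"   # values are illustrative
  completedByteMovementPercentage: "80"
  executorState.json: '{"state": "INTER_BROKER_REPLICA_MOVEMENT_TASK_IN_PROGRESS"}'
```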

3.3.3. Feature gate for Server Side Apply (SSA)

The ServerSideApplyPhase1 feature gate (disabled by default) adds Server Side Apply (SSA) support for ConfigMap, Ingress, PVC, Service, and ServiceAccount resources.
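Feature gates are enabled through the STRIMZI_FEATURE_GATES environment variable on the Cluster Operator. A minimal sketch of enabling this gate in the operator Deployment:

```yaml
# Cluster Operator Deployment fragment: enable the feature gate
env:
  - name: STRIMZI_FEATURE_GATES
    value: +ServerSideApplyPhase1
```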

3.3.4. Mounting connector plugins as image volumes

You can now configure Kafka Connect so that connectors are automatically mounted as Kubernetes Image Volumes. By defining the connector plugins using the .spec.plugins property of the KafkaConnect custom resource, Streams for Apache Kafka automatically mounts them into the Kafka Connect deployment.
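A minimal sketch of declaring a plugin in the KafkaConnect resource; the plugin name and image reference are illustrative, and the exact shape of the plugins entries should be checked against the KafkaConnect schema reference:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  plugins:
    - name: echo-sink                               # illustrative plugin name
      image:
        reference: quay.io/example/echo-sink:1.0.0  # illustrative image reference
```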

3.3.5. Node pool and KRaft annotations no longer required

The strimzi.io/node-pools and strimzi.io/kraft annotations are no longer required and are ignored if set.

3.3.6. Additional configurable Kafka properties

The following properties are now configurable for Kafka resources:

  • broker.session.timeout.ms
  • broker.heartbeat.interval.ms
  • controller.socket.timeout.ms
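For example, in the Kafka resource; the values shown are illustrative:

```yaml
# Kafka resource fragment
spec:
  kafka:
    config:
      broker.session.timeout.ms: 9000
      broker.heartbeat.interval.ms: 2000
      controller.socket.timeout.ms: 30000
```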

3.3.7. Ignore users through configuration

The STRIMZI_IGNORED_USERS_PATTERN environment variable accepts a regular expression that specifies users for which any existing ACLs, quotas, and SCRAM-SHA credentials are ignored. This option is useful when you want to configure the User Operator to ignore users that are managed through another mechanism.
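A sketch of setting the variable through the Entity Operator template in the Kafka resource; the regular expression is illustrative:

```yaml
# Kafka resource fragment
spec:
  entityOperator:
    userOperator: {}
    template:
      userOperatorContainer:
        env:
          - name: STRIMZI_IGNORED_USERS_PATTERN
            value: "service-account-.*"   # users matching this regex are ignored
```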

3.3.8. Support for custom client authentication

The KafkaClientAuthenticationCustom schema supports custom client authentication. Custom client authentication allows you to use any type of Kafka-supported authentication mechanism.
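A minimal sketch of a custom client authentication configuration; the mechanism and config keys are illustrative and depend on the authentication mechanism you use:

```yaml
# client authentication fragment (for example, in a KafkaConnect resource)
authentication:
  type: custom
  sasl: true
  config:
    sasl.mechanism: SCRAM-SHA-512   # illustrative; any Kafka-supported mechanism
    sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="my-user" password="my-password";
```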

3.3.9. Monitoring of custom resources

The state of custom resources can now be monitored using kube-state-metrics (KSM). Example configuration is provided in examples/metrics/kube-state-metrics.

3.3.10. Dynamic updates to cluster-wide and broker-specific properties

A distinction is now made between cluster-wide and broker-specific properties when configuration changes are applied dynamically. This applies to properties configured in the Kafka and KafkaNodePool resources.

3.3.11. Pod disruption budget support extended

Support for configuring a pod disruption budget (PDB) has been extended to EntityOperator, CruiseControl and KafkaExporter resources through the PodDisruptionBudgetTemplate schema.
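For example, a sketch using the maxUnavailable property of the pod disruption budget template in the Kafka resource:

```yaml
# Kafka resource fragment
spec:
  cruiseControl:
    template:
      podDisruptionBudget:
        maxUnavailable: 0   # prevent voluntary eviction of the Cruise Control pod
```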

3.3.12. Status retrieval using the Kafka Admin API

The Kafka Admin API is now used to get the list of registered brokers. The registeredNodeIds property of the KafkaStatus schema is no longer required and is deprecated.

3.3.13. Connector restart parameters

You can refine connector restart behavior with the includeTasks and onlyFailed parameters, which both default to false.
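These parameters correspond to the Kafka Connect REST API restart endpoint. A sketch, assuming a connector named my-connector:

```
POST /connectors/my-connector/restart?includeTasks=true&onlyFailed=true
```

With both parameters set to true, only the failed connector instance and its failed tasks are restarted.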

3.3.14. OAuth parameters

Additional OAuth configuration options have been added for OAuth 2 authentication: clientGrantType on the listener and grantType on the client.

The grant type is used when requesting a token from the authorization server.
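A sketch of a listener authentication configuration using the new option; the grant type value is illustrative:

```yaml
# listener fragment in the Kafka resource
authentication:
  type: oauth
  clientGrantType: client_credentials   # illustrative grant type value
```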

3.3.15. JsonTemplateLayout support for logging

JsonTemplateLayout is now supported when configuring log appenders in Log4j2.
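For example, a Log4j2 properties fragment that switches the console appender to JsonTemplateLayout; the event template shown is one of the templates bundled with Log4j2:

```properties
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = JsonTemplateLayout
appender.console.layout.eventTemplateUri = classpath:EcsLayout.json
```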

3.3.16. PEM certificates loaded directly from secrets

Kafka Connect now uses KubernetesSecretConfigProvider to load PEM-format truststore and keystore certificates directly from Kubernetes secrets.

3.4. Kafka Bridge

A summary of new features and enhancements for Kafka Bridge.

3.4.1. Schema validation error reporting

Kafka Bridge now includes a validation_errors field in the error response JSON when a schema validation error occurs. This field is part of the Error OpenAPI component and is omitted for other error types.
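A sketch of such an error response; the error code, message, and validation message text are illustrative:

```json
{
  "error_code": 422,
  "message": "Schema validation failed",
  "validation_errors": [
    "Value does not match the expected schema for field 'value'"
  ]
}
```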

3.5. Proxy

A summary of new features and enhancements for Streams for Apache Kafka Proxy.

3.5.1. Authorization filter

A new Authorization filter is available. The filter applies Kafka-style topic authorization to client requests, starting with metadata.

3.5.2. SASL Inspection filter

A new SASL Inspection filter is available. The filter extracts the authenticated principal from a successful SASL exchange and makes it available to other filters in the chain.

Supported mechanisms:

  • PLAIN
  • SCRAM-SHA-256
  • SCRAM-SHA-512
  • OAUTHBEARER

3.5.3. Azure Key Vault KMS for record encryption

A new Azure Key Vault Key Management Service (KMS) implementation is available for record encryption. This allows users to store and manage encryption keys in Azure Key Vault.

3.5.4. Retrieve topic names in filters

Filters can now map topic IDs to topic names using the new topicNames lookup, improving compatibility with topic-ID–based traffic.

3.5.5. Configurable subject building for mTLS

Virtual clusters can now define a subjectBuilder to control how client identities are derived from mTLS certificates.

3.5.6. Record encryption with Azure Key Vault

Record encryption now supports Azure Key Vault for key storage and retrieval.

3.5.7. Session ID allocation for tracking traffic

The proxy now assigns a session ID that links downstream and upstream connections, improving traceability in logs.

3.5.8. Automatic bootstrap address detection

The operator can now automatically detect the plain listener bootstrap address from a Strimzi Kafka resource.

3.5.9. Metrics for active connections

New metrics report the number of active downstream/upstream connections handled by the proxy.

3.5.10. Duration metrics for back pressure

New metrics report how long the proxy applies back pressure on client connections.

3.5.11. Strict configuration validation

The proxy now rejects configurations that contain unexpected or misspelled fields.

3.5.12. Load balancer status for virtual clusters

The operator now reports more detailed load balancer status information for virtual Kafka clusters.

3.6. Console

A summary of new features and enhancements for Streams for Apache Kafka Console.

3.6.1. Kafka Connect in the console (technology preview)

As a technology preview, the console adds configuration and display options for Kafka Connect and its connectors. Connect clusters can be associated with one or more Kafka clusters, allowing users to view connected clusters, connectors, and their configurations.

3.6.2. Automatic KafkaUser configuration for mutual TLS

The console now supports automatic configuration of KafkaUser custom resources when connecting to Kafka clusters secured with mutual TLS authentication.

3.6.3. Configurable OpenID Connect scopes

OpenID Connect (OIDC) scopes are now configurable. The groups scope is no longer a fixed requirement for the identity provider.

3.6.4. Regular expressions in security rules

Security rule configurations now support the use of regular expressions in resourceNames definitions.

3.6.5. Enhanced platform and version display

The console now displays the actual platform and version (for example, OpenShift 4.18.5) on the home page. An About modal provides application version details to assist with support.

3.6.6. Automatic session termination on sign-out

When a user signs out, the console automatically terminates the session with the identity provider if an end_session_endpoint is available through OIDC discovery.

3.6.7. Authorization-based UI controls

Console UI controls are now automatically enabled or disabled based on user authorization configurations.

3.6.8. Light, dark, and auto color themes

The console supports light, dark, and automatic color themes. With the auto option, the console inherits the host operating system’s color preference.

3.6.9. Topic metrics from Metrics Reporter

The console now supports topic metrics emitted by the Streams for Apache Kafka Metrics Reporter.

3.6.10. Consistent date and time formatting

All dates in the console are displayed in ISO format using the user’s local time zone, except in the topic message browser, where UTC times are explicitly shown.

3.6.11. Desired Kafka version displayed during paused reconciliation

When reconciliation is paused and the Kafka version in .status is unavailable, the console now displays the desired Kafka version.

3.6.12. Automatic sign-out on token expiry

If a user’s OAuth2 or OIDC token expires and cannot be refreshed, the console now automatically signs the user out and redirects them to the identity provider.
